% Source: https://arxiv.org/abs/1605.05226
\title{On two-generator subgroups in $SL_2(\mathbb{Z})$, $SL_2(\mathbb{Q})$, and $SL_2(\mathbb{R})$}
\begin{abstract}
We consider what some authors call ``parabolic Möbius subgroups" of matrices over $\mathbb{Z}$, $\mathbb{Q}$, and $\mathbb{R}$ and focus on the membership problem in these subgroups and the complexity of the relevant algorithms.
\end{abstract}
\section{Introduction: two theorems of Sanov}
Denote $A(k) = \left(
\begin{array}{cc} 1 & k \\ 0 & 1 \end{array} \right) , \hskip .2cm B(k) = \left(
\begin{array}{cc} 1 & 0 \\ k & 1 \end{array} \right).$ In an old
paper \cite{Sanov}, I. N. Sanov proved two simple yet remarkable
theorems:
\begin{theorem}\label{th1}
The subgroup of $SL_2(\mathbb{Z})$ generated by $A(2)$ and $B(2)$ is free.
\end{theorem}
\begin{theorem}\label{th2}
The subgroup of $SL_2(\mathbb{Z})$ generated by $A(2)$ and $B(2)$ consists
of {\it all} matrices of the form $\left(
\begin{array}{cc} 1+4n_1 & 2n_2 \\ 2n_3 & 1+4n_4\end{array}
\right)$ with determinant 1, where all $n_i$ are arbitrary integers.
\end{theorem}
These two theorems together yield yet another proof of the fact that
the group $SL_2(\mathbb{Z})$ is virtually free. This is because the group of
all invertible matrices of the form $\left(
\begin{array}{cc} 1+4n_1 & 2n_2 \\ 2n_3 & 1+4n_4\end{array}
\right)$ obviously has finite index in $SL_2(\mathbb{Z})$. Thus, we have:
\begin{corollary}\label{corollary1}
The group $SL_2(\mathbb{Z})$ is virtually free.
\end{corollary}
There is another interesting corollary of Theorem \ref{th2}:
\begin{corollary}\label{corollary2}
The membership problem in the subgroup of $SL_2(\mathbb{Z})$ generated by
$A(2)$ and $B(2)$ is solvable in constant time.
\end{corollary}
We note that this is, to the best of our knowledge, the only example
of a natural (and nontrivial) algorithmic problem in group theory
solvable in constant time. In fact, even problems solvable in
sublinear time are very rare, see \cite{Sublinear}, and for those
that are, one can typically get either a ``yes" or a ``no" answer in
sublinear time, but not both. The complexity of an input in our case is
the ``size" of a given matrix, i.e., the sum of the absolute values
of its entries. In light of Theorem \ref{th2}, deciding whether or
not a given matrix from $SL_2(\mathbb{Z})$ belongs to the subgroup generated
by $A(2)$ and $B(2)$ boils down to looking at residues modulo 2 or 4
of the entries. The latter is decided by looking just at the last
one or two digits of each entry (assuming that the entries are given in the binary or, say, decimal form). We emphasize though that solving
this membership problem in constant time is only possible if an input
matrix is known to belong to $SL_2(\mathbb{Z})$; otherwise one would have
to check that the determinant of a given matrix is equal to 1, which
cannot be done in constant time, although it can still be done in
sublinear time with respect to the complexity $|M|$ of an input
matrix $M$, as defined in Section \ref{results}; see
Corollary \ref{complexity}.
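The residue check behind Corollary \ref{corollary2} is easy to make concrete. The following Python sketch (ours, added for illustration only; the function name is not from the paper) tests the congruence conditions of Theorem \ref{th2}, assuming the input matrix $\left(\begin{smallmatrix} a & b \\ c & d \end{smallmatrix}\right)$ is already known to lie in $SL_2(\mathbb{Z})$:

```python
def in_sanov_subgroup(a, b, c, d):
    """Membership test for the subgroup of SL_2(Z) generated by A(2), B(2).

    By Sanov's theorem, assuming a*d - b*c == 1, the matrix [[a, b], [c, d]]
    lies in the subgroup iff a and d are congruent to 1 mod 4 and b and c
    are even.  Python's % returns nonnegative residues, so negative
    entries are handled correctly.
    """
    return a % 4 == 1 and d % 4 == 1 and b % 2 == 0 and c % 2 == 0

assert in_sanov_subgroup(1, 2, 0, 1)       # A(2)
assert in_sanov_subgroup(1, 0, 2, 1)       # B(2)
assert not in_sanov_subgroup(1, 1, 0, 1)   # A(1) is not in the subgroup
```

Only the last one or two digits of each entry are inspected, which is exactly what makes the test constant-time.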
\section{Our results}
\label{results}
In this paper, we show that what would be a natural generalization
of Sanov's Theorem \ref{th2} to $A(k)$ and $B(k)$, $k \in \mathbb{Z}_+$, is
not valid for $k \ge 3$ and moreover, the subgroup generated by
$A(k)$ and $B(k)$ has infinite index in $SL_2(\mathbb{Z})$ if $k \ge 3$.
\begin{theorem}\label{infinite}
The subgroup of $SL_2(\mathbb{Z})$ generated by $A(k)$ and $B(k)$, $k \in
\mathbb{Z}, ~k \ge 3$, has infinite index in the group of all matrices of
the form $\left(
\begin{array}{cc} 1+k^2m_1 & km_2 \\ km_3 & 1+k^2m_4\end{array}
\right)$ with determinant 1.
\end{theorem}
The group of all matrices of the above form, on the other hand,
obviously has finite index in $SL_2(\mathbb{Z})$.
Our main technical result, proved in Section \ref{Peak}, is the
following
\begin{theorem} \label{peak}
Let $M = \left(
\begin{array}{cc} m_{11} & m_{12} \\ m_{21} & m_{22} \end{array}
\right)$ be a matrix from $SL_2(\mathbb{R})$. Call ``elementary operations"
on $M$ the following 8 operations: multiplication of $M$ by either
$A(k)^{\pm 1}$ or by $B(k)^{\pm 1}$, on the right or on the left.
\medskip
\noindent {\bf (a)} If $k \in \mathbb{Z}$ and $M$ belongs to the subgroup of
$SL_2(\mathbb{Z})$ generated by $A(k)$ and $B(k)$, then it has the form
$\left(
\begin{array}{cc} 1+k^2n_1 & kn_2 \\ kn_3 & 1+k^2n_4\end{array}
\right)$ for some integers $n_i$.
If $k \in \mathbb{R}$ and $M$ belongs to the subgroup of $SL_2(\mathbb{R})$
generated by $A(k)$ and $B(k)$, then it has the form $\left(
\begin{array}{cc} 1+\sum_i k^in_i & \sum_j k^jn_j \\ \sum_r k^rn_r & 1+\sum_s k^sn_s\end{array}
\right)$ where all $n_i$ are integers and all exponents on $k$ are
positive integers.
\medskip
\noindent {\bf (b)} Let $k \in \mathbb{R}, ~k \ge 2$. If $M \in SL_2(\mathbb{R})$
and there is a sequence of elementary operations that reduces
$\sum_{i,j}|m_{ij}|$, then there is a single elementary operation
that reduces $\sum_{i,j}|m_{ij}|$.
\noindent {\bf (c)} Let $k \in \mathbb{Z}, ~k \ge 2$. If $M \in SL_2(\mathbb{Z})$
and no single elementary operation reduces $\sum_{i,j}|m_{ij}|$, then
either $M$ is the identity matrix or $M$ does not belong to the
subgroup generated by $A(k)$ and $B(k)$.
\end{theorem}
We also point out a result, similar to Theorem \ref{peak}, about the
{\it monoid} generated by $A(k)$ and $B(k)$ for $k>0$. Unlike
Theorem \ref{peak} itself, this result is trivial.
\begin{proposition}
Let $M = \left(
\begin{array}{cc} m_{11} & m_{12} \\ m_{21} & m_{22} \end{array}
\right)$ be a matrix from $SL_2(\mathbb{R})$. Call ``elementary operations"
on $M$ the following 4 operations: multiplication of $M$ by either
$A(k)^{-1}$ or by $B(k)^{-1}$, on the right or on the left.
\medskip
\noindent {\bf (a)} If $k \in \mathbb{Z}, ~k > 0,$ and $M$ belongs to the
{\it monoid} generated by $A(k)$ and $B(k)$, then it has the form
$\left(
\begin{array}{cc} 1+k^2n_1 & kn_2 \\ kn_3 & 1+k^2n_4\end{array}
\right)$ for some nonnegative integers $n_i$.
If $k \in \mathbb{R}, ~k > 0,$ and $M$ belongs to the {\it monoid}
generated by $A(k)$ and $B(k)$, then it has the form $\left(
\begin{array}{cc} 1+\sum_i k^in_i & \sum_j k^jn_j \\ \sum_r k^rn_r & 1+\sum_s k^sn_s\end{array}
\right)$ where all $n_i$ are nonnegative integers and all exponents
on $k$ are positive integers.
\medskip
\noindent {\bf (b)} Let $k \in \mathbb{Z}, ~k \ge 2$. If $M$ is a matrix
from $SL_2(\mathbb{Z})$ with nonnegative entries and no elementary operation
reduces $\sum_{i,j} m_{ij}$, then either $M$ is the identity matrix
or $M$ does not belong to the {\it monoid} generated by $A(k)$ and
$B(k)$.
\end{proposition}
Thus, for example, the matrix $\left(
\begin{array}{cc} 5 & 4 \\ 6 & 5 \end{array}
\right)$ does not belong to the {\it monoid} generated by $A(2)$
and $B(2)$, although it does belong to the {\it group} generated by
$A(2)$ and $B(2)$ by Sanov's Theorem \ref{th2}.
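The Proposition translates into a simple search, sketched below in Python (our illustration, not an algorithm stated in the paper): peel off generators by the four inverse multiplications, following only moves that keep all entries nonnegative and strictly reduce the entry sum. If $M$ is in the monoid, removing its last generator is always such a move, so the depth-first search is complete; reaching the identity certifies membership.

```python
def in_monoid(M, k):
    """Decide membership of M = (a, b, c, d), a matrix in SL_2(Z) with
    nonnegative entries, in the *monoid* generated by A(k) and B(k),
    for an integer k >= 2.

    Depth-first search over the four inverse multiplications; each step
    must keep the entries nonnegative and reduce their sum, so the
    recursion depth is bounded by the entry sum.
    """
    a, b, c, d = M
    if (a, b, c, d) == (1, 0, 0, 1):
        return True
    s = a + b + c + d
    candidates = [
        (a - k*c, b - k*d, c, d),  # left by A(k)^{-1}:  row1 -= k*row2
        (a, b, c - k*a, d - k*b),  # left by B(k)^{-1}:  row2 -= k*row1
        (a, b - k*a, c, d - k*c),  # right by A(k)^{-1}: col2 -= k*col1
        (a - k*b, b, c - k*d, d),  # right by B(k)^{-1}: col1 -= k*col2
    ]
    return any(min(t) >= 0 and sum(t) < s and in_monoid(t, k)
               for t in candidates)
```

For matrices with very large entries an iterative version would avoid deep recursion; this sketch favors clarity. On the example above, $\left(\begin{smallmatrix} 5 & 4 \\ 6 & 5 \end{smallmatrix}\right)$ is rejected immediately: all four inverse multiplications produce a negative entry.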
Theorem \ref{peak} yields a simple algorithm for the membership
problem in the subgroup generated by $A(k)$ and $B(k)$ in case $k
\in \mathbb{Z}, ~k \ge 2.$ We note in passing that in general, the subgroup
membership problem for $SL_2(\mathbb{Q})$ is open, while in $SL_2(\mathbb{Z})$ it is
solvable since $SL_2(\mathbb{Z})$ is virtually free. The general solution,
based on the automatic structure of $SL_2(\mathbb{Z})$ (see \cite{Epstein}),
is not so transparent and has quadratic time complexity (with respect to
the word length of an input). For our
special subgroups we have:
\begin{corollary}\label{complexity} Let $k \in \mathbb{Z}, ~k \ge 2$, and
let the complexity $|M|$ of a matrix $M = \left(
\begin{array}{cc} m_{11} & m_{12} \\ m_{21} & m_{22} \end{array}
\right)$ be the sum of all $|m_{ij}|$. There is an algorithm that
decides whether or not a given matrix $M \in SL_2(\mathbb{Z})$ is in the
subgroup of $SL_2(\mathbb{Z})$ generated by $A(k)$ and $B(k)$ (and if it
is, finds a presentation of $M$ as a group word in $A(k)$ and
$B(k)$) in time $O(n \cdot \log n)$, where $n=|M|$.
\end{corollary}
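Parts (b) and (c) of Theorem \ref{peak} justify the following greedy decision procedure, sketched here in Python (our illustration; the word-recovery bookkeeping and the big-integer arithmetic analysis behind the $O(n \cdot \log n)$ bound are omitted):

```python
def in_subgroup(M, k):
    """Decide whether M = (a, b, c, d), a matrix in SL_2(Z), lies in the
    subgroup generated by A(k) and B(k), for an integer k >= 2.

    Greedy peak reduction: if any sequence of elementary operations
    reduces sum |m_ij|, then a single one does (part (b)); a matrix
    other than the identity at which no operation reduces the sum is
    not in the subgroup (part (c)).
    """
    a, b, c, d = M
    while (a, b, c, d) != (1, 0, 0, 1):
        s = abs(a) + abs(b) + abs(c) + abs(d)
        candidates = [
            (a + e*k*c, b + e*k*d, c, d) for e in (1, -1)  # left A(k)^{±1}
        ] + [
            (a, b, c + e*k*a, d + e*k*b) for e in (1, -1)  # left B(k)^{±1}
        ] + [
            (a, b + e*k*a, c, d + e*k*c) for e in (1, -1)  # right A(k)^{±1}
        ] + [
            (a + e*k*b, b, c + e*k*d, d) for e in (1, -1)  # right B(k)^{±1}
        ]
        best = min(candidates, key=lambda t: sum(map(abs, t)))
        if sum(map(abs, best)) >= s:
            return False   # stuck above the identity: not in the subgroup
        a, b, c, d = best
    return True
```

Since each elementary operation multiplies the current matrix by a generator, any sum-reducing move keeps a subgroup element inside the subgroup, so applying the best reducing move at every step is safe.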
\noindent {\bf Remark.} The relation between $|M|= \sum |m_{ij}|$
and the word length of $M$ (with respect to the standard generators
$A(1)$ and $B(1)$, say) is not at all obvious and is an interesting
problem in its own right.
\medskip
A statement similar to Corollary \ref{complexity} also holds for the
{\it monoid} generated by $A(k)$ and $B(k)$, for any $k \in \mathbb{Z}, ~k
\ge 2$.
The $O(n \cdot \log n)$ bound is the worst-case complexity of the
algorithm referred to in Corollary \ref{complexity}. It would be
interesting to find out what the {\it generic-case complexity} (in
the sense of \cite{KMSS}) of this algorithm is. Proposition 1 in
\cite{BSV} tacitly suggests that this complexity might be, in fact,
sublinear in $n=|M|$, which would be a really interesting result, so
we ask:
\begin{problem}\label{generic} Is the generic-case complexity of the algorithm
claimed in Corollary \ref{complexity} sublinear in $|M|$?
\end{problem}
We note that, unlike the algorithms with low generic-case complexity
considered in \cite{KMSS}, this algorithm has a good chance to have
low generic-case complexity giving both ``yes" and ``no" answers,
see our Section \ref{Corollary} for more details.
Finally, we note that if $M$ is in the subgroup generated by $A(k)$
and $B(k)$, $k \ge 2$, then the presentation of $M$ as a group word
in $A(k)$ and $B(k)$ is unique since the group generated by $A(k)$
and $B(k)$ is known to be free for any $k \in \mathbb{R}, ~k \ge 2$, see
e.g. \cite{JP}. On the other hand, the group generated by $A(1)$ and
$B(1)$ (i.e., the whole group $SL_2(\mathbb{Z})$) is not free. This implies,
in particular, that for any integer $n \ge 1$, the group generated
by $A(\frac{1}{n})$ and $B(\frac{1}{n})$ is not free because it
contains both matrices $A(1)$ and $B(1)$. Many examples of rational
$k, ~ 0 < k < 2$, for which the subgroup of $SL_2(\mathbb{Q})$ generated by
$A(k)$ and $B(k)$ is not free were found over the years, starting
with \cite{L-U}; see a recent paper \cite{Gutan} for more
references. (We can single out the paper \cite{Beardon} where the
question of non-freeness for this subgroup was reduced to
solvability of particular Diophantine equations.) In particular, it
is known that for any $k, ~ 0 < k < 2$, of the form $\frac{m}{mn+1}$
or $\frac{m+n}{mn}, ~m, n \in \mathbb{Z}_+$, the group generated by $A(k)$
and $B(k)$ is not free. This includes $k=\frac{2}{3}, \frac{3}{2},
\frac{3}{7}$, etc. Also, if the group is not free for some $k$, then
it is not free for any $\frac{k}{n}, ~n \in \mathbb{Z}_+$.
The following problem, however, seems to be still open:
\begin{problem} (Yu. Merzlyakov \cite{Problems}, \cite{Kourovka})
For which rational $k, ~ 0 < k < 2$, is the group generated by
$A(k)$ and $B(k)$ free? More generally, for which algebraic $k, ~ 0
< k < 2$, is this group free?
\end{problem}
To the best of our knowledge, there are no known examples of a
rational $k, ~ 0 < k < 2$, such that the group generated by $A(k)$
and $B(k)$ is free. On the other hand, since any matrix from this
group has the form $ \left(
\begin{array}{cc} p_{11}(k) & p_{12}(k)\\
p_{21}(k) & p_{22}(k) \end{array}
\right)$ for some polynomials $p_{ij}(k)$ with integer coefficients,
this group is obviously free if $k$ is transcendental. For the same
reason, if $r$ and $s$ are algebraic numbers that are Galois
conjugate over $\mathbb{Q}$, then the group generated by $A(r)$ and $B(r)$
is free if and only if the group generated by $A(s)$ and $B(s)$ is.
For example, if $r=2-\sqrt 2$, then $A(r)$ and $B(r)$ generate a
free group because this $r$ is Galois conjugate to $s=2+\sqrt 2 >
2$. More generally, $A(r)$ and $B(r)$ generate a free group for
$r=m- n\sqrt 2$, and therefore also for $r=k \cdot (m- n\sqrt 2)$,
with arbitrary positive $k, m, n \in \mathbb{Z}$. This implies, in
particular, that the set of algebraic $r$ for which the group is
free is dense in $\mathbb{R}$ because $(m- n\sqrt 2)$ can be arbitrarily
close to 0. All these $r$ are irrational though.
\section{Peak reduction}
\label{Peak}
Here we prove Theorem \ref{peak} from Section \ref{results}. The
method we use is called {\it peak reduction} and goes back to
Whitehead \cite{Wh}, see also \cite{L-S}. The idea is as follows.
Given an algorithmic problem that has to be solved, one first
somehow defines {\it complexity} of possible inputs. Another
ingredient is a collection of {\it elementary operations} that can
be applied to inputs. Thus, we now have an action of the semigroup
of elementary operations on the set of inputs. Usually, of
particular interest are elements of minimum complexity in any given
orbit under this action. The main problem typically is to find these
elements of minimum complexity. This is where the peak reduction
method can be helpful. A crucial observation is: if there is a
sequence of elementary operations (applied to a given input) such
that at some point in this sequence the complexity goes up (or
remains unchanged) before eventually going down, then there must be
a pair of {\it subsequent} elementary operations in this sequence (a
``peak") such that one of them increases the complexity (or leaves
it unchanged), and then the other one decreases it. Then one tries
to prove that such a peak can always be reduced, i.e., if there is
such a pair, then there is also a single elementary operation that
reduces complexity. This will then imply that there is a ``greedy"
sequence of elementary operations, i.e., one that reduces complexity
{\it at every step}. This will yield an algorithm for finding an
element of minimum complexity in a given orbit.
In our situation, inputs are matrices from $SL_2(\mathbb{R})$. For the
purposes of the proof of Theorem \ref{peak}, we define complexity of
a matrix $M = \left( \begin{array}{cc} m_{11} & m_{12} \\ m_{21} &
m_{22} \end{array} \right)$ to be the maximum of all $|m_{ij}|$.
Between two matrices with the same $\max |m_{ij}|$, the one with the
larger $\sum_{i,j}|m_{ij}|$ has higher complexity. We will see,
however, that in case of $2 \times 2$ matrices with determinant 1,
the ``greedy" sequence of elementary operations would be the same as
if we defined the complexity to be just $\sum_{i,j}|m_{ij}|$.
Elementary operations in our situation are multiplications of a
matrix by either $A(k)^{\pm 1}$ or by $B(k)^{\pm 1}$, on the right
or on the left. They correspond to elementary row or column
operations; specifically, to operations of the form $(row_1 \pm k
\cdot row_2)$, $(row_2 \pm k \cdot row_1)$, $(column_1 \pm k \cdot
column_2)$, and $(column_2 \pm k \cdot column_1)$.
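This correspondence is easy to verify directly; the following self-contained Python check (ours, with an arbitrary test matrix and an arbitrary choice of $k$) confirms that left multiplication acts on rows and right multiplication on columns:

```python
def matmul(X, Y):
    """Multiply two 2x2 matrices given as ((a, b), (c, d))."""
    (a, b), (c, d) = X
    (e, f), (g, h) = Y
    return ((a*e + b*g, a*f + b*h), (c*e + d*g, c*f + d*h))

k = 3                     # arbitrary sample value
A = ((1, k), (0, 1))      # A(k)
B = ((1, 0), (k, 1))      # B(k)
M = ((2, 5), (7, 11))     # arbitrary test matrix

(m11, m12), (m21, m22) = M
assert matmul(A, M) == ((m11 + k*m21, m12 + k*m22), (m21, m22))  # row1 + k*row2
assert matmul(B, M) == ((m11, m12), (m21 + k*m11, m22 + k*m12))  # row2 + k*row1
assert matmul(M, A) == ((m11, m12 + k*m11), (m21, m22 + k*m21))  # col2 + k*col1
assert matmul(M, B) == ((m11 + k*m12, m12), (m21 + k*m22, m22))  # col1 + k*col2
```

Multiplying by $A(k)^{-1}$ or $B(k)^{-1}$ instead changes the sign of $k$ in the corresponding row or column operation.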
We now get to
\noindent {\bf Proof of Theorem \ref{peak}.} Part (a) is established
by an obvious induction on the length of a group word representing a
given element of the subgroup generated by $A(k)$ and $B(k)$. Part
(c) follows from part (b). We omit the details and proceed to part
(b).
We are going to consider various pairs of subsequent elementary
operations of the following kind: the first operation increases the
maximum of $|m_{ij}|$ (or leaves it unchanged), and then the second
one reduces it. We are assuming that the second elementary operation
is not the inverse of the first one.
In each case like that, we show that either the maximum of
$|m_{ij}|$ in the given matrix could have been reduced by just a
single elementary operation (and then $\sum |m_{ij}|$ is reduced, too, for the determinant to remain unchanged), or $\sum |m_{ij}|$ could have been
reduced by a single elementary operation leaving the maximum of
$|m_{ij}|$ unchanged. Because of a ``symmetry", it is sufficient to
consider the following cases.
First of all, we note that since the determinant of $M$ is equal to
1, there can be 0, 2, or 4 negative entries in $M$. If there are 2
negative entries, they can occur either in the same row, or in the
same column, or on the same diagonal. Because of the symmetry, we
only consider the case where two negative entries are in the first
column and the case where they are on the main diagonal. Also, cases
with 0 and 4 negative entries are symmetric, so we only consider the
case where there are no negative entries.
It is also convenient for us to single out the case where $M$ has
two zero entries, so we start with
\medskip
\noindent {\bf Case 0.} There are two zero entries in $M$. Since the
determinant of $M$ is 1, the two nonzero entries should be on a
diagonal and their product should be $\pm 1$. If they are not on
the main diagonal, then $M^4 = I$, in which case $M$ cannot belong
to the subgroup generated by $A(k)$ and $B(k)$, $k \ge 2$, since
this subgroup is free.
Now suppose that the two nonzero entries are on the main diagonal,
so $M = \left(
\begin{array}{cc} x & 0\\
0 & \frac{1}{x} \end{array}
\right)$ for some $x \in \mathbb{R}, ~x \ne 1$. Without loss of generality,
assume $x > 0$. We are going to show, by way of contradiction, that
such a matrix is not in the subgroup generated by $A(k)$ and $B(k)$.
We have: $MA(k)M^{-1} = \left(
\begin{array}{cc} 1 & x^2k\\
0 & 1 \end{array}
\right),$ and for some $r \in \mathbb{Z}_+$ we have $A(k)^{-r}MA(k)M^{-1} =
\left(
\begin{array}{cc} 1 & yk\\
0 & 1 \end{array}
\right),$ where $0 < yk \le k$. If $yk = k$, then we have a
relation $A(k)^{-r}MA(k)M^{-1} = A(k)$, so again we have a
contradiction with the fact that the subgroup generated by $A(k)$
and $B(k)$ is free. Now let $0 < yk < k$, so $0 < y < 1$, and let $C
= \left(\begin{array}{cc} 1 & yk\\
0 & 1 \end{array}\right)$. We claim that $C$ does not belong to the
subgroup generated by $A(k)$ and $B(k)$. If it did, then so would
the matrix $T(m, n) = A(k)^{-m}C^n = \left(\begin{array}{cc} 1 & (ny-m)k\\
0 & 1 \end{array}\right)$ for any $m, n \in \mathbb{Z}_+$. Since $(ny-m)k$
can be arbitrarily close to 0, the matrix $T(m, n)$ can be arbitrarily close to
the identity matrix, which contradicts the well-known fact (see e.g.
\cite{Jorgensen}) that the group generated by $A(k)$ and $B(k)$ is
{\it discrete} for any $k \ge 2$.
\medskip
In what follows, we assume that all matrices under consideration
have at most one zero entry. Even though we use strict inequalities
for all entries of a matrix, the reader should keep in mind that one
of the inequalities may not be strict; this does not affect the
argument.
\smallskip
\noindent {\bf Case 1.} There are 2 negative entries, both in the
first column. Thus, $m_{11}<0, m_{21}<0, m_{12}>0, m_{22}>0$.
\medskip
\noindent {\bf Case 1a.} Two subsequent elementary operations
reducing some entry after increasing it are both $(column_1 + k
\cdot column_2)$. If, after one operation $(column_1 + k \cdot
column_2)$, the element $m_{11}$, say, becomes positive, then
$m_{21}$ should become positive, too, for the determinant to remain
unchanged. Then, after applying $(column_1 + k \cdot column_2)$ one
more time, new $|m_{11}|$ will become greater than it was, contrary
to the assumption.
If, after one operation $(column_1 + k \cdot column_2)$, $m_{11}$
remains negative, then this operation reduces $|m_{11}|$, and this
same operation should also reduce $|m_{21}|$ for the determinant
to remain unchanged. Indeed, the determinant is $m_{11}m_{22} -
m_{12}m_{21} = 1$. If $|m_{11}|$ decreases while $m_{11}$ remains
negative, then the value of $m_{11}m_{22}$ increases (but remains
negative). Therefore, the value of $m_{12}m_{21}$ should increase,
too, for the difference to remain unchanged. Since $m_{21}$ should
remain negative, this implies that $|m_{21}|$ should decrease, hence
$\sum |m_{ij}|$ decreases after one operation $(column_1 + k \cdot
column_2)$.
The same kind of argument works in the case where both operations
are $(row_1 - k \cdot row_2)$.
If both operations are $(row_1 + k \cdot row_2)$, or $(column_1 - k
\cdot column_2)$, then no $|m_{ij}|$ can possibly decrease since
$m_{11}<0, m_{21}<0, m_{12}>0, m_{22}>0$.
\smallskip
\noindent {\bf Case 1b.} Two subsequent elementary operations are:
$(row_1 + k \cdot row_2)$, followed by $(column_1 + k \cdot
column_2)$. The result of applying these two operations to the
matrix $M$ is: $ \left(
\begin{array}{cc} m_{11} + k \cdot m_{21} +k \cdot m_{12} + k^2 m_{22} & m_{12}+k \cdot m_{22}\\
m_{21}+k \cdot m_{22} & m_{22} \end{array}
\right)$. This case is nontrivial only if the first operation
increases the absolute value of the element in the top left corner
(or leaves it unchanged), and then the second operation reduces it.
But then the second operation should also reduce the absolute value
of $m_{21}$ for the determinant to remain unchanged. This means a
single operation $(column_1 + k \cdot column_2)$ would reduce
$|m_{21}|$ to begin with, and this same operation should also
reduce $|m_{11}|$ for the determinant to remain unchanged. Thus, a
single elementary operation would reduce the complexity of $M$.
The same argument takes care of any of the following pairs of
subsequent elementary operations: $(row_1 \pm k \cdot row_2)$,
followed by $(column_1 \pm k \cdot column_2)$, as well as $(row_1
\pm k \cdot row_2)$, followed by $(column_2 \pm k \cdot column_1)$.
\smallskip
\noindent {\bf Case 1c.} Two subsequent elementary operations are:
$(row_1 + k \cdot row_2)$, followed by $(row_2 + k \cdot row_1)$. The result of
applying these two operations to the matrix $M$ is: $ \left(
\begin{array}{cc} m_{11}+k \cdot m_{21} & m_{12}+k \cdot m_{22}\\
(k^2+1) m_{21}+k \cdot m_{11} & (k^2+1)m_{22}+k \cdot m_{12}\end{array}
\right)$. In this case, obviously $|(k^2+1)m_{21}+ k \cdot m_{11}|
> |m_{21}|$ and $|(k^2+1)m_{22}+ k \cdot m_{12}| > |m_{22}|$, so we do not
have a decrease, i.e., this case is moot.
\smallskip
\noindent {\bf Case 1d.} Two subsequent elementary operations are:
$(row_1 + k \cdot row_2)$, followed by $(row_2 - k \cdot row_1)$. The result of
applying these two operations to the matrix $M$ is: $ \left(
\begin{array}{cc} m_{11}+k \cdot m_{21} & m_{12}+k \cdot m_{22}\\
-(k^2-1)m_{21}- k \cdot m_{11} & -(k^2-1)m_{22}- k \cdot m_{12}\end{array}
\right)$. If $k > \sqrt{2}$, then $|-(k^2-1)m_{21}- k \cdot m_{11}|
> |m_{21}|$ and $|-(k^2-1)m_{22}- k \cdot m_{12}| > |m_{22}|$, so we
do not have a decrease, i.e., this case is moot, too.
\smallskip
\noindent {\bf Case 1e.} Two subsequent elementary operations are:
$(row_1 - k \cdot row_2)$, followed by $(row_2 + k \cdot row_1)$. The result of
applying these two operations to the matrix $M$ is: $ \left(
\begin{array}{cc} m_{11}- k \cdot m_{21} & m_{12}- k \cdot m_{22}\\
k \cdot m_{11}-(k^2-1) \cdot m_{21} & k \cdot m_{12}-(k^2-1) \cdot m_{22} \end{array}
\right)$. If the first operation increases $|M|$ (or leaves it
unchanged) and then the second one reduces it, then the second
operation should reduce $|m_{21}|$ or $|m_{22}|$. Assume, without
loss of generality, that $|m_{21}| \ge |m_{22}|$.
We may assume that $|m_{11}-k \cdot m_{21}| \ge |m_{11}| = -m_{11}$
and $|m_{12}-k \cdot m_{22}| \ge |m_{12}| = m_{12}$ because
otherwise, the complexity of $M$ could be reduced by a single
operation $(row_1 - k \cdot row_2)$.
Now we look at the inequality $|k \cdot m_{11}-(k^2-1) \cdot
m_{21}| < |m_{21}|=-m_{21}$. Re-write it as follows: $|k
\cdot(m_{11}-k \cdot m_{21}) + m_{21}| < -m_{21}$. We may assume
that $m_{11}-k \cdot m_{21}>0$ because otherwise, a single
operation $(row_1 - k \cdot row_2)$ would reduce the complexity of
$M$. We also know that $|m_{11}-k \cdot m_{21}| \ge -m_{11}$ (see
the previous paragraph). Thus, $m_{11}-k \cdot m_{21} \ge -m_{11}$.
This inequality, together with $|k \cdot(m_{11}-k \cdot m_{21}) +
m_{21}| < -m_{21}$, yields $|-k \cdot m_{11} + m_{21}| < -m_{21}$.
This means a single operation $(row_2 - k \cdot row_1)$ would
reduce $|m_{21}|$.
\smallskip
\noindent {\bf Case 1f.} Two subsequent elementary operations are:
$(row_1 - k \cdot row_2)$, followed by $(row_2 - k \cdot row_1)$.
The result of applying these two operations to the matrix $M$ is:
$\left(
\begin{array}{cc} m_{11}- k \cdot m_{21} & m_{12}- k \cdot m_{22}\\
(k^2+1) \cdot m_{21} - k \cdot m_{11} & (k^2+1) \cdot m_{22} - k \cdot m_{12} \end{array}
\right)$. If the first operation increases $|M|$ (or leaves it
unchanged) and then the second one reduces it, then the second
operation should reduce $|m_{21}|$ or $|m_{22}|$.
We may assume that $m_{12} - k \cdot m_{22} < 0$ because otherwise,
a single operation $(row_1 - k \cdot row_2)$ would reduce $|m_{12}|$
and therefore also $|m_{11}|$ for the determinant to remain
unchanged in this case. Now look at the element in the bottom right
corner: $(k^2+1) \cdot m_{22} - k \cdot m_{12} = k \cdot (k \cdot
m_{22} -m_{12}) + m_{22}$. Since $k \cdot m_{22} -m_{12} >0$, we
have $|(k^2+1) \cdot m_{22} - k \cdot m_{12}|
> |m_{22}|$, a contradiction.
\smallskip
\noindent {\bf Case 1g.} Two subsequent elementary operations are:
$(column_1 + k \cdot column_2)$, followed by $(column_2 + k \cdot
column_1)$. The result of applying these two operations to the
matrix $M$ is: $\left(
\begin{array}{cc} m_{11}+ k \cdot m_{12} & (k^2+1) \cdot m_{12}+ k \cdot m_{11}\\
m_{21} + k \cdot m_{22} & (k^2+1) \cdot m_{22} + k \cdot m_{21} \end{array}
\right)$. If the first operation increases $|M|$ (or leaves it
unchanged) and then the second one reduces it, then the second
operation should reduce $|m_{12}|$ or $|m_{22}|$. Let us assume
here that $|m_{22}| \ge |m_{12}|$.
Now look at the element in the bottom right corner: $(k^2+1) \cdot
m_{22} + k \cdot m_{21} = k \cdot (m_{21} + k \cdot m_{22}) +
m_{22}$. We may assume that $m_{21} + k \cdot m_{22} > 0$ because
otherwise, a single operation $(column_1 + k \cdot column_2)$ would
reduce $|m_{21}|$. In that case, however, we have $|(k^2+1) \cdot
m_{22} + k \cdot m_{21}| = k \cdot (m_{21} + k \cdot m_{22}) +
m_{22} > m_{22} = |m_{22}|$, a contradiction.
\smallskip
\noindent {\bf Case 1h.} Two subsequent elementary operations are:
$(column_1 + k \cdot column_2)$, followed by $(column_2 - k \cdot
column_1)$. The result of applying these two operations to the
matrix $M$ is: $\left(
\begin{array}{cc} m_{11}+ k \cdot m_{12} & (1-k^2) \cdot m_{12}- k \cdot m_{11}\\
m_{21} + k \cdot m_{22} & (1-k^2) \cdot m_{22} - k \cdot m_{21} \end{array}
\right)$. If the first operation increases $|M|$ (or leaves it
unchanged) and then the second one reduces it, then the second
operation should reduce $|m_{12}|$ or $|m_{22}|$. Assume here that
$|m_{22}| \ge |m_{12}|$. Then we should have $|(1-k^2) \cdot m_{22}
- k \cdot m_{21}| < |m_{22}| = m_{22}$.
We may assume that $m_{21} + k \cdot m_{22} > 0$ because otherwise,
a single operation $(column_1 + k \cdot column_2)$ would reduce
$|m_{21}|$ while keeping the element in this position negative. Then
this same operation should reduce $|m_{11}|$, too, for the
determinant to remain unchanged. Also, since the first operation was
supposed to increase $|M|$ (or leave it unchanged), we should have,
in particular, $|m_{21} + k \cdot m_{22}| = m_{21} + k \cdot m_{22}
\ge |m_{21}| = -m_{21}$. This, together with the inequality in the
previous paragraph, gives $|(1-k^2) \cdot m_{22} - k \cdot m_{21}| =
|m_{22} - k \cdot (m_{21} + k \cdot m_{22})| \ge |k \cdot m_{22} +
m_{21}|$. Therefore, we should have $|k \cdot m_{22} + m_{21}| <
|m_{22}| = m_{22}$. Since $k \ge 2$, this implies $|m_{21}| >
|m_{22}|$, contradicting the assumption of $m_{22}$ having the
maximum absolute value in the matrix $M$.
\smallskip
\noindent {\bf Case 1i.} Two subsequent elementary operations are:
$(column_1 - k \cdot column_2)$, followed by $(column_2 + k \cdot
column_1)$. The result of applying these two operations to the
matrix $M$ is: $\left(
\begin{array}{cc} m_{11}- k \cdot m_{12} & (1-k^2) \cdot m_{12}+ k \cdot m_{11}\\
m_{21} - k \cdot m_{22} & (1-k^2) \cdot m_{22} + k \cdot m_{21} \end{array}
\right)$. Since $k \ge 2$, we have $|(1-k^2) \cdot m_{22} + k \cdot
m_{21}| > |m_{22}|$, so this case is moot.
\smallskip
\noindent {\bf Case 1j.} Two subsequent elementary operations are:
$(column_1 - k \cdot column_2)$, followed by $(column_2 - k \cdot
column_1)$. The result of applying these two operations to the
matrix $M$ is: $\left(
\begin{array}{cc} m_{11}- k \cdot m_{12} & (1+k^2) \cdot m_{12}- k \cdot m_{11}\\
m_{21} - k \cdot m_{22} & (1+k^2) \cdot m_{22} - k \cdot m_{21} \end{array}
\right)$. Since $|(1+k^2) \cdot m_{22} - k \cdot m_{21}| = |m_{22} -
k \cdot (m_{21} - k \cdot m_{22})| > |m_{22}|$, this case is moot,
too.
\medskip
\noindent {\bf Case 2.} Two negative entries are on a diagonal.
Without loss of generality, we assume here that $m_{11}>0, m_{22}>0,
m_{12}<0, m_{21}<0$. Because of the ``symmetry" between row and
column operations in this case, we can reduce the number of
subcases (compared to Case 1 above) and only consider the following.
\medskip
\noindent {\bf Case 2a.} Two subsequent elementary operations are:
$(row_1 + k \cdot row_2)$, followed by $(column_1 + k \cdot
column_2)$. The result of applying these two operations to the
matrix $M$ is: $ \left(
\begin{array}{cc} m_{11} + k \cdot m_{21} +k \cdot m_{12} + k^2 m_{22} & m_{12}+k \cdot m_{22}\\
m_{21}+k \cdot m_{22} & m_{22} \end{array}
\right)$. This case is nontrivial only if the first operation
increases the absolute value of the element in the top left corner
(or leaves it unchanged), and then the second operation reduces it.
First let us look at the element $m_{21}+k \cdot m_{22}$. If
$m_{21}+k \cdot m_{22} <0$, then a single operation $(column_1 + k
\cdot column_2)$ would reduce $|m_{21}|$ while keeping that element
negative. In that case, this operation would reduce $m_{11}$, too,
while keeping it positive because otherwise, the determinant would
change. Thus, a single operation $(column_1 + k \cdot column_2)$
would reduce the complexity of $M$ in that case. The same argument
shows that $m_{12}+k \cdot m_{22} \ge 0$ because otherwise, a single
operation $(row_1 + k \cdot row_2)$ would reduce the complexity of
$M$.
If $m_{21}+k \cdot m_{22} \ge 0$ and $m_{12}+k \cdot m_{22} \ge
0$, then $m_{11} + k \cdot m_{21} +k \cdot m_{12} + k^2 m_{22} \ge
0$ for the determinant to be equal to 1. But then the second
operation should also reduce the absolute value of $m_{21}$ for the
determinant to remain unchanged. This means a single operation
$(column_1 + k \cdot column_2)$ would reduce $|m_{21}|$ to begin
with, and this same operation should also reduce $|m_{11}|$ for the
determinant to remain unchanged, so this single operation would
reduce the complexity of $M$.
The same argument takes care of any of the following pairs of
subsequent elementary operations: $(row_1 \pm k \cdot row_2)$,
followed by $(column_1 \pm k \cdot column_2)$, as well as $(row_1
\pm k \cdot row_2)$, followed by $(column_2 \pm k \cdot column_1)$.
\smallskip
\noindent {\bf Case 2b.} Two subsequent elementary operations are:
$(row_1 + k \cdot row_2)$, followed by $(row_2 + k \cdot row_1)$.
The result of applying these two operations to the matrix $M$ is:
$\left(
\begin{array}{cc} m_{11}+k \cdot m_{21} & m_{12}+k \cdot m_{22}\\
(k^2+1) m_{21}+k \cdot m_{11} & (k^2+1)m_{22}+k \cdot m_{12}\end{array}
\right)$. If the first operation increases $|M|$ (or leaves it
unchanged) and then the second one reduces it, then the second
operation should reduce $|m_{21}|$ or $|m_{22}|$. Assume, without
loss of generality, that $|m_{21}| \ge |m_{22}|$.
Thus, we have $|(k^2+1)m_{21}+ k \cdot m_{11}| < |m_{21}| =
-m_{21}$. At the same time, we may assume that $|m_{11}+k \cdot
m_{21}| \ge |m_{11}| = m_{11}$ because otherwise, a single
operation $(row_1 + k \cdot row_2)$ would reduce the complexity of
$M$.
If $m_{11}+k \cdot m_{21} >0$, then a single operation $(row_1
+ k \cdot row_2)$ would reduce $|m_{11}|$, and therefore also
$|m_{12}|$. Thus, we may assume that $m_{11}+k \cdot m_{21} <0$.
Now let us look at the inequality $|(k^2+1)m_{21}+ k \cdot m_{11}|
< -m_{21}$. We know that $m_{11}+k \cdot m_{21} <0$. Therefore, $k
\cdot m_{11}+k^2 \cdot m_{21} <0$. Now $|(k^2+1)m_{21}+ k \cdot
m_{11}| = |k \cdot m_{11}+k^2 \cdot m_{21} + m_{21}| > -m_{21}$. This
contradiction completes Case 2b.
\smallskip
\noindent {\bf Case 2c.} Two subsequent elementary operations are:
$(row_1 + k \cdot row_2)$, followed by $(row_2 - k \cdot row_1)$. The result of
applying these two operations to the matrix $M$ is: $ \left(
\begin{array}{cc} m_{11}+k \cdot m_{21} & m_{12}+k \cdot m_{22}\\
-(k^2-1)m_{21}- k \cdot m_{11} & -(k^2-1)m_{22}- k \cdot m_{12}\end{array}
\right)$. The analysis here is similar to the previous Case 2b.
First we note that we may assume $m_{11}+k \cdot m_{21} <0$, so $-k
\cdot m_{11}-k^2 \cdot m_{21} >0$. On the other hand, we may assume
that $|m_{11}+k \cdot m_{21}| = -m_{11}-k \cdot m_{21} \ge m_{11}$
because otherwise, a single operation $(row_1 + k \cdot row_2)$
would reduce $|m_{11}|$. Therefore, $-k \cdot m_{11} - k^2 \cdot
m_{21} \ge k \cdot m_{11}$.
This, together with the inequality $|-(k^2-1)m_{21}- k \cdot m_{11}|
= |-k \cdot m_{11}-k^2 \cdot m_{21} + m_{21}| < |m_{21}| = -m_{21}$,
implies $|k \cdot m_{11} + m_{21}| < |m_{21}|$, in which case a
single operation $(row_2 + k \cdot row_1)$ would reduce $|m_{21}|$.
\smallskip
\noindent {\bf Case 2d.} Two subsequent elementary operations are:
$(row_1 - k \cdot row_2)$, followed by $(row_2 + k \cdot row_1)$. The result of
applying these two operations to the matrix $M$ is: $ \left(
\begin{array}{cc} m_{11}-k \cdot m_{21} & m_{12}-k \cdot m_{22}\\
-(k^2-1)m_{21}+ k \cdot m_{11} & -(k^2-1)m_{22}+ k \cdot m_{12}\end{array}
\right)$. If $k^2>2$, here we obviously have $|-(k^2-1)m_{22}+ k
\cdot m_{12}| > |m_{22}|$, and $|-(k^2-1)m_{21}+ k \cdot m_{11}| >
|m_{21}|$. Thus, this case is moot.
\smallskip
\noindent {\bf Case 2e.} Two subsequent elementary operations are:
$(row_1 - k \cdot row_2)$, followed by $(row_2 - k \cdot row_1)$.
The result of applying these two operations to the matrix $M$ is:
$\left(
\begin{array}{cc} m_{11}-k \cdot m_{21} & m_{12}-k \cdot m_{22}\\
(k^2+1)m_{21}- k \cdot m_{11} & (k^2+1)m_{22}- k \cdot m_{12}\end{array}
\right)$. Here again we obviously have $|(k^2+1)m_{22}- k \cdot
m_{12}| > |m_{22}|$ and $|(k^2+1)m_{21}- k \cdot m_{11}| >
|m_{21}|$, so this case is moot, too.
\medskip
\noindent {\bf Case 3.} There are no negative entries. Because of
the obvious symmetry, it is sufficient to consider the following
cases.
\medskip
\noindent {\bf Case 3a.} Two subsequent elementary operations are
both $(column_1 + k\cdot column_2)$, $(column_2 + k\cdot column_1)$,
$(row_1 + k\cdot row_2)$ or $(row_2 + k\cdot row_1)$. Since all the
entries are positive, the second operation cannot decrease the
complexity in this case.
\smallskip
\noindent {\bf Case 3b.} Two subsequent elementary operations,
$(column_1 + k \cdot column_2)$ followed by $(column_2 - k \cdot
column_1)$, give the matrix\par $\left(
\begin{array}{cc} m_{11}+ k \cdot m_{12} & (1-k^2) \cdot m_{12}- k \cdot m_{11}\\
m_{21} + k \cdot m_{22} & (1-k^2) \cdot m_{22} - k \cdot m_{21} \end{array}
\right)$. If the first operation increases $|M|$ (or leaves it
unchanged) and then the second operation reduces it, then the second
operation should reduce, say, $|m_{12}|=m_{12}$ (assuming that
$m_{12}\ge m_{22}$).
After the first operation we note that $|m_{11}+k\cdot
m_{12}|=m_{11}+k\cdot m_{12}\geq |m_{12}|=m_{12}$. After the second
operation, assuming that complexity was reduced, we have
$|(1-k^2)m_{12}-k\cdot m_{11}|\leq |m_{12}|=m_{12}$, which can be
rewritten as $|m_{12}-k(m_{11}+k\cdot m_{12})|\leq m_{12}$. Since
all the entries are positive, we have $m_{11}+k\cdot m_{12}\geq
m_{12}$, and hence in order for the second operation to reduce
$|m_{12}|$, the following inequality should hold: $-m_{12}\leq
(1-k^2)m_{12}-k\cdot m_{11}\leq 0$. Subtracting $m_{12}$,
multiplying each part by $-1$ and factoring out $k$ we get
$2m_{12}\geq k(m_{11}+k\cdot m_{12})\geq m_{12}$. Divide by $k$:
$\frac{2}{k}\cdot m_{12}\geq m_{11}+k\cdot m_{12}\geq
\frac{1}{k}\cdot m_{12}$. However, $k\geq 2$ implies that
$m_{11}+k\cdot m_{12}\leq m_{12}$, which brings us to a
contradiction.
\smallskip
\noindent {\bf Case 3c.} Two subsequent elementary operations are
$(column_1 - k \cdot column_2)$, followed by $(column_2 + k \cdot
column_1)$. The resulting matrix is \par $\left(
\begin{array}{cc} m_{11}- k \cdot m_{12} & (1-k^2) \cdot m_{12}+ k \cdot m_{11}\\
m_{21} - k \cdot m_{22} & (1-k^2) \cdot m_{22} + k \cdot m_{21} \end{array}
\right)$. If the first operation increases $|M|$ (or leaves it
unchanged) and then the second one reduces it, then the second
operation should reduce, say, $|m_{12}|=m_{12}$ (assuming that
$m_{12}\ge m_{22}$).
Also, after the first operation we can observe that
$m_{11}\leq|m_{11}-k\cdot m_{12}|$ and $m_{21}\leq |m_{21}-k\cdot
m_{22}|$ because otherwise, a single operation $(column_1 - k\cdot
column_2)$ would reduce $|m_{11}|$ or $|m_{21}|$. This implies that
$m_{11}-k\cdot m_{12}\leq -m_{11}$ and $m_{21}-k\cdot m_{22}\leq
-m_{21}$.
Consider the inequality $|(1-k^2) m_{12}+k\cdot m_{11}|\leq m_{12}$.
Rewrite it in the following way: $|m_{12}-k\cdot (k\cdot
m_{12}-m_{11})|\leq m_{12}$. Combining this with the previous
inequalities we get $|m_{12}-k\cdot m_{11}|\leq |m_{12}-k\cdot
(k\cdot m_{12}-m_{11})|\leq m_{12}$. Therefore, a single operation
$(column_2 - k\cdot column_1)$ reduces $|m_{12}|$.
\smallskip
\noindent {\bf Case 3d.} Two subsequent elementary operations are
$(column_1 - k \cdot column_2)$ followed by $(column_2 - k \cdot
column_1)$. The resulting matrix is \par $\left(
\begin{array}{cc} m_{11}- k \cdot m_{12} & (1-k^2) \cdot m_{12}- k \cdot m_{11}\\
m_{21} - k \cdot m_{22} & (1-k^2) \cdot m_{22} - k \cdot m_{21} \end{array}
\right)$. If the first operation increases $|M|$ (or leaves it
unchanged) and then the second operation reduces it, then the second
one should reduce, say, $|m_{12}|=m_{12}$ (assuming that $m_{12}\ge
m_{22}$).
We may assume that $m_{11}-k\cdot m_{12} \le 0$ because if
$m_{11}-k\cdot m_{12} >0$, then for the determinant to be unchanged
after the first operation, we would have also $m_{21}-k\cdot m_{22}
>0$, but then a single operation
$(column_1 - k\cdot column_2)$ would reduce the complexity of $M$.
Thus, $|m_{11}-k\cdot m_{12}| = k\cdot m_{12} - m_{11}$. If the
first operation $(column_1 - k \cdot column_2)$ did not reduce the
complexity of $M$, then $m_{11}\leq k\cdot m_{12} - m_{11}$, which
implies that $m_{11}-k\cdot m_{12} \leq -m_{11}$.
Now consider the inequality $|(1-k^2) m_{12}-k\cdot m_{11}| \leq
m_{12}$. After rewriting it we get $|m_{12}-k\cdot (k\cdot
m_{12}-m_{11})|\leq m_{12}$. Combining it with the inequality in the
previous paragraph, we get $|m_{12}-k\cdot m_{11}|\leq
|m_{12}-k\cdot (k\cdot m_{12}-m_{11})|\leq m_{12}$. Therefore, a
single elementary operation $(column_2 - k\cdot column_1)$ reduces
$|m_{12}|$.
\section{Proof of Theorem \ref{infinite}}
Let $k, m \in \mathbb{Z}, ~k \ge 3, ~m \ge 1$. Denote $M(k, m)= \left(
\begin{array}{cc} 1-k^2m & k^2m \\ -k^2m & 1+k^2m\end{array}
\right)$. It is straightforward to check that:
\medskip
\noindent {\bf (1)} $M(k, m)$ has determinant 1;
\noindent {\bf (2)} $M(k, m)= M(k, 1)^m$;
\noindent {\bf (3)} No elementary $k$-operation reduces the absolute
value of any of the entries of $M(k, m)$.
\medskip
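For small values of $k$ and $m$, these three claims can also be confirmed by a
short script (a sketch in Python; all function names are ours):

```python
from itertools import product

def M(k, m):
    # the matrix M(k, m) from the text
    return ((1 - k*k*m, k*k*m), (-k*k*m, 1 + k*k*m))

def det(A):
    return A[0][0]*A[1][1] - A[0][1]*A[1][0]

def mat_mul(A, B):
    return tuple(tuple(sum(A[i][t]*B[t][j] for t in range(2))
                       for j in range(2)) for i in range(2))

def elementary_images(A, k):
    # the eight elementary k-operations: add +-k times one row (column)
    # to the other row (column)
    (a, b), (c, d) = A
    for s in (k, -k):
        yield ((a + s*c, b + s*d), (c, d))   # row1 + s*row2
        yield ((a, b), (c + s*a, d + s*b))   # row2 + s*row1
        yield ((a + s*b, b), (c + s*d, d))   # col1 + s*col2
        yield ((a, b + s*a), (c, d + s*c))   # col2 + s*col1

for k, m in product(range(3, 8), range(1, 6)):
    X = M(k, m)
    assert det(X) == 1                       # claim (1)
    P = ((1, 0), (0, 1))
    for _ in range(m):
        P = mat_mul(P, M(k, 1))
    assert P == X                            # claim (2): M(k,1)^m = M(k,m)
    for Y in elementary_images(X, k):        # claim (3): no entry shrinks
        assert all(abs(Y[i][j]) >= abs(X[i][j])
                   for i in range(2) for j in range(2))
```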
Since the cyclic group generated by $M(k, 1)$ is infinite, the
result follows from Theorem \ref{peak}.
\section{Proof of Corollary \ref{complexity}}
\label{Corollary}
We assume in this section that $k \ge 3$ because for $k=2$, the
membership problem in the subgroup of $SL_2(\mathbb{Z})$ generated by $A(k)$
and $B(k)$ is solvable in constant time, see Corollary
\ref{corollary2} in the Introduction.
First of all we check that a given matrix $M$ from the group
$SL_2(\mathbb{Z})$ has the form $\left(
\begin{array}{cc} 1+k^2n_1 & kn_2 \\ kn_3 & 1+k^2n_4\end{array}
\right)$ for some integers $n_i$. Then we check that $M$ has at
most one zero entry. If there are more, then $M$ does not belong to
the subgroup in question unless $M$ is the identity matrix. We also
check that $\max |m_{ij}| > 1$. If $\max |m_{ij}| = 1$, then $M$
does not belong to the subgroup in question unless $M$ is the
identity matrix. Indeed, the only nontrivial cases here are $M=A(\pm
1)$ and $M=B(\pm 1)$. Then $M^k=A(k)^{\pm 1}$ or $M^k=B(k)^{\pm 1}$.
This would give a nontrivial relation in the group generated by
$A(k)$ and $B(k)$ contradicting the fact that this group is free.
Now let $\max |m_{ij}| > 1$. If no elementary operation either
reduces $\max |m_{ij}|$ or reduces $\sum_{i,j}|m_{ij}|$ without
increasing $\max |m_{ij}|$, then $M$ does not belong to the subgroup
generated by $A(k)$ and $B(k)$. If there is an elementary operation
that reduces $\max |m_{ij}|$, then we apply it. For example,
suppose the elementary operation $(row_1 -k \cdot row_2)$ reduces
$|m_{11}|$. The result of this operation is the matrix $\left(
\begin{array}{cc} m_{11}-km_{21} & m_{12}-km_{22} \\ m_{21} & m_{22} \end{array}
\right)$. If $|m_{11}|$ here has decreased, then $|m_{12}|$ could not
increase because otherwise, the determinant of the new matrix would
not be equal to 1. Thus, the complexity of the matrix $M$ has been
reduced, and the new matrix belongs to our subgroup if and only if
the matrix $M$ does. Since there are only finitely many numbers of
the form $kn, n \in \mathbb{Z},$ with bounded absolute value, this process
should terminate either with a non-identity matrix whose complexity
cannot be reduced or with the identity matrix. In the latter case,
the given matrix was in the subgroup generated by $A(k)$ and $B(k)$;
in the former case, it was not.
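The procedure just described can be sketched in code as follows (a Python
sketch; the function names and the greedy choice of which reducing operation to
apply are ours, following the description above):

```python
def size(A):
    # the complexity |A|: sum of the absolute values of the entries
    return sum(abs(x) for row in A for x in row)

def maxent(A):
    return max(abs(x) for row in A for x in row)

def elementary_images(A, k):
    # the eight elementary k-operations
    (a, b), (c, d) = A
    for s in (k, -k):
        yield ((a + s*c, b + s*d), (c, d))   # row1 + s*row2
        yield ((a, b), (c + s*a, d + s*b))   # row2 + s*row1
        yield ((a + s*b, b), (c + s*d, d))   # col1 + s*col2
        yield ((a, b + s*a), (c, d + s*c))   # col2 + s*col1

def is_member(A, k):
    """Decide whether A lies in the subgroup generated by A(k) and B(k)."""
    (a, b), (c, d) = A
    if a*d - b*c != 1:
        return False
    # necessary shape: diagonal entries 1 mod k^2, off-diagonal 0 mod k
    if (a - 1) % (k*k) or (d - 1) % (k*k) or b % k or c % k:
        return False
    I = ((1, 0), (0, 1))
    while A != I:
        for C in elementary_images(A, k):
            # apply an operation that reduces max|m_ij|, or reduces the
            # complexity without increasing max|m_ij|; each accepted step
            # lowers (max|m_ij|, |A|) lexicographically, so the loop stops
            if maxent(C) < maxent(A) or \
               (maxent(C) <= maxent(A) and size(C) < size(A)):
                A = C
                break
        else:
            return False   # no reducing operation exists: not in the subgroup
    return True
```

For instance, `is_member(((10, 3), (3, 1)), 3)` returns `True` (the input is
$A(3)B(3)$), while `is_member(((-8, 9), (-9, 10)), 3)`, whose input is the
matrix $M(3,1)$ from the next section, returns `False`.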
To estimate the time complexity of this algorithm, we note that each
step of it (i.e., applying a single elementary operation) takes time
$O(\log m)$, where $m$ is the complexity of the matrix this
elementary operation is applied to. This is because if $k$ is an
integer, multiplication by $k$ amounts to $k-1$ additions, and each
addition of integers not exceeding $m$ takes time $O(\log m)$. Since
the complexity of a matrix is reduced at least by 1 at each step of
the algorithm, the total complexity is $O(\sum_{j=1}^n \log j) = O(n
\cdot \log n)$. This completes the proof. $\Box$
\smallskip
As for generic-case complexity of this algorithm (cf. Problem
\ref{generic} in our Section \ref{results}), we note that, speaking
very informally, a ``random" product of $A(k)^{\pm 1}$ and
$B(k)^{\pm 1}$ is ``close" to a product where $A(k)^{\pm 1}$ and
$B(k)^{\pm 1}$ alternate, in which case the complexity of the
product matrix grows exponentially in the number of factors (see
e.g. \cite[Proposition 1]{BSV}), so the number of summands in the
sum that appears in the proof of Corollary \ref{complexity} will be
logarithmic in $n$, and therefore generic-case complexity of the
algorithm should be $O(\log ^2 n)$ in case the answer is ``yes"
(i.e., an input matrix belongs to the subgroup generated by $A(k)$
and $B(k)$). Of course, a ``random" matrix from $SL_2(\mathbb{Z})$ will {\it
not} belong to the subgroup generated by $A(k)$ and $B(k)$ with
overwhelming probability. This is because if $k \ge 3$, this
subgroup has infinite index in $SL_2(\mathbb{Z})$. It is, however, not clear
how fast (generically) our algorithm will detect that; specifically,
whether it will happen in sublinear time or not.
Note that, unlike the algorithms with low generic-case complexity
considered in \cite{KMSS}, this algorithm has a good chance to have
low generic-case complexity giving both ``yes" and ``no" answers.
Finally, we note that generic-case complexity depends on how one
defines the {\it asymptotic density} of a subset of inputs in the
set of all possible inputs. This, in turn, depends on how one
defines the complexity of an input. In \cite{KMSS}, complexity of an
element of a group $G$ was defined as the minimum word length of
this element with respect to a fixed generating set of $G$. In our
situation, where inputs are matrices over $\mathbb{Z}$, it is probably more
natural to define complexity $|M|$ of a matrix $M$ as the sum of the absolute values of
the entries of $M$, like we did in this paper. Yet another natural way is to use {\it Kolmogorov complexity}, i.e., speaking informally, the minimum possible size of a description of $M$. Since Kolmogorov complexity of an integer $n$ is equivalent to $\log n$, we see that for a matrix $M \in SL_2(\mathbb{Z})$, Kolmogorov complexity is equivalent
to $\log |M|$, for $|M| = \sum |m_{ij}|$, as defined in this paper. This is not the case though if $M \in SL_2(\mathbb{Q})$ since for Kolmogorov complexity of a rational number, complexity of both the numerator and denominator matters.
\bigskip
\noindent {\it Acknowledgement.} We are grateful to Norbert A'Campo,
Ilya Kapovich, and Linda Keen for helpful comments.
\baselineskip 11 pt
% Second paper in this file: ``Ap\'ery Limits: Experiments and Proofs''
% https://arxiv.org/abs/2011.03400
% Abstract: An important component of Ap\'ery's proof that $\zeta (3)$ is
% irrational involves representing $\zeta (3)$ as the limit of the quotient of
% two rational solutions to a three-term recurrence. We present various
% approaches to such Ap\'ery limits and highlight connections to continued
% fractions as well as the famous theorems of Poincar\'e and Perron on
% difference equations. In the spirit of Jon Borwein, we advertise an
% experimental-mathematics approach by first exploring in detail a simple but
% instructive motivating example. We conclude with various open problems.
\section{Introduction}
A fundamental ingredient of Ap\'ery's groundbreaking proof \cite{apery} of
the irrationality of $\zeta (3)$ is the binomial sum
\begin{equation}
A (n) = \sum_{k = 0}^n \binom{n}{k}^2 \binom{n + k}{k}^2 \label{eq:apery3}
\end{equation}
and the fact that it satisfies the three-term recurrence
\begin{equation}
(n + 1)^3 u_{n + 1} = (2 n + 1) (17 n^2 + 17 n + 5) u_n - n^3 u_{n - 1}
\label{eq:apery3:rec}
\end{equation}
with initial conditions $A (0) = 1$, $A (1) = 5$ --- or, equivalently but more
naturally, $A (- 1) = 0$, $A (0) = 1$. Now let $B (n)$ be the solution to
{\eqref{eq:apery3:rec}} with $B (0) = 0$ and $B (1) = 1$. Ap\'ery showed
that
\begin{equation}
\lim_{n \rightarrow \infty} \frac{B (n)}{A (n)} = \frac{\zeta (3)}{6}
\label{eq:apery3:lim}
\end{equation}
and that the rational approximations resulting from the left-hand side
converge too rapidly to $\zeta (3)$ for $\zeta (3)$ itself to be rational. For
details, we recommend the engaging account \cite{alf} of Ap\'ery's proof.
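The limit {\eqref{eq:apery3:lim}} is easy to witness numerically; the following
sketch (exact rational arithmetic in Python; the value of $\zeta(3)$ is
hardcoded for comparison) iterates the recurrence {\eqref{eq:apery3:rec}} for
both solutions:

```python
from fractions import Fraction

ZETA3 = 1.2020569031595942854   # zeta(3), truncated

def step(n, u, u_prev):
    # (n+1)^3 u_{n+1} = (2n+1)(17n^2 + 17n + 5) u_n - n^3 u_{n-1}
    return ((2*n + 1)*(17*n*n + 17*n + 5)*u - n**3*u_prev) / (n + 1)**3

N = 12
A, A_prev = Fraction(1), Fraction(0)    # A(0) = 1, A(-1) = 0
for n in range(0, N):
    A, A_prev = step(n, A, A_prev), A
B, B_prev = Fraction(1), Fraction(0)    # B(1) = 1, B(0) = 0
for n in range(1, N):
    B, B_prev = step(n, B, B_prev), B

# A and B now hold A(N) and B(N); already for N = 12 the quotient agrees
# with zeta(3)/6 = 0.200342817... to well beyond double precision
assert abs(B/A - ZETA3/6) < 1e-12
```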
In the sequel, we will not pursue questions of irrationality further. Instead,
our focus will be on limits, like {\eqref{eq:apery3:lim}}, of quotients of
solutions to linear recurrences. Such limits are often called {\emph{Ap\'ery
limits}} \cite{avz-apery-limits}, \cite{yang-apery-limits}.
Jon Borwein was a tireless advocate and champion of experimental mathematics
and applied it with fantastic success. Jon was also a pioneer of teaching
experimental mathematics, whether through numerous books, such as
\cite{bb-exp1}, or in the classroom (the second author is grateful for the
opportunity to benefit from both). Before collecting known results on
Ap\'ery limits and general principles, we therefore find it fitting to
explore in detail, in Section~\ref{sec:delannoy}, a simple but instructive
example using an experimental approach. We demonstrate how to discover the
desired Ap\'ery limit; and we show, even more importantly, how the
exploratory process naturally leads us to discover additional structure that
is helpful in understanding this and other such limits. We hope that the
detailed discussion in Section~\ref{sec:delannoy} may be particularly useful
to those seeking to integrate experimental mathematics into their own
teaching.
After suggesting further examples in Section~\ref{sec:search}, we explain the
observations made in Section~\ref{sec:delannoy} by giving in
Section~\ref{sec:diffeq} some background on difference equations, introducing
the Casoratian and the theorems of Poincar\'e and Perron. In
Section~\ref{sec:cf}, we connect with continued fractions and observe that,
accordingly translated, many of the simpler examples are instances of
classical results in the theory of continued fractions. We then outline in
Section~\ref{sec:pf} several methods used in the literature to establish
Ap\'ery limits. To illustrate the limitations of these approaches, we
conclude with several open problems in Sections~\ref{sec:franel:d} and
\ref{sec:open}.
Creative telescoping --- including, for instance, Zeilberger's algorithm and
the Wilf--Zeilberger (WZ) method --- refers to a powerful set of tools that,
among other applications, allow us to algorithmically derive the recurrence
equations, like {\eqref{eq:apery3:rec}}, that are satisfied by a given sum,
like {\eqref{eq:apery3}}. In fact, as described in \cite{alf}, Zagier's
proof of Ap\'ery's claim that the sums {\eqref{eq:apery3}} and
{\eqref{eq:apery3:2}} both satisfy the recurrence {\eqref{eq:apery3:rec}} may
be viewed as giving birth to the modern method of creative telescoping. For an
excellent introduction we refer to \cite{aeqb}. In the sequel, all claims
that certain sums satisfy a recurrence can be established using creative
telescoping.
\section{A motivating example}\label{sec:delannoy}
At the end of van der Poorten's account \cite{alf} of Ap\'ery's proof, the
reader is tasked with the exercise to consider the sequence
\begin{equation}
A (n) = \sum_{k = 0}^n \binom{n}{k} \binom{n + k}{k} \label{eq:delannoy}
\end{equation}
and to apply to it Ap\'ery's ideas to conclude the irrationality of $\ln
(2)$. In this section, we will explore this exercise with an experimental
mindset but without using the general tools and connections described later in
the paper. In particular, we hope that the outline below could be handed out
in an undergraduate experimental-math class and that the students could (with
some help, depending on their familiarity with computer algebra systems)
reproduce the steps, feel intrigued by the observations along the way, and
then apply by themselves a similar approach to explore variations or
extensions of this exercise. Readers familiar with the topic may want to skip
ahead.
The numbers {\eqref{eq:delannoy}} are known as the central Delannoy numbers
and count lattice paths from $(0, 0)$ to $(n, n)$ using the steps $(0, 1)$,
$(1, 0)$ and $(1, 1)$. They satisfy the recurrence
\begin{equation}
(n + 1) u_{n + 1} = 3 (2 n + 1) u_n - n u_{n - 1} \label{eq:delannoy:rec}
\end{equation}
with initial conditions $A (- 1) = 0$, $A (0) = 1$. Now let $B (n)$ be the
sequence satisfying the same recurrence with initial conditions $B (0) = 0$,
$B (1) = 1$. Numerically, we observe that the quotients $Q (n) = B (n) / A
(n)$,
\begin{equation*}
(Q (n))_{n \geq 0} = \left(0, \frac{1}{3}, \frac{9}{26},
\frac{131}{378}, \frac{445}{1284}, \frac{34997}{100980},
\frac{62307}{179780}, \frac{2359979}{6809460}, \ldots \right),
\end{equation*}
appear to converge rather quickly to a limit
\begin{equation*}
L := \lim_{n \rightarrow \infty} Q (n) = 0.34657359 \ldots
\end{equation*}
When we try to estimate the speed of convergence by computing the difference
$Q (n) - Q (n - 1)$ of consecutive terms, we find
\begin{equation*}
(Q (n) - Q (n - 1))_{n \geq 1} = \left(\frac{1}{3}, \frac{1}{78},
\frac{1}{2457}, \frac{1}{80892}, \frac{1}{2701215}, \frac{1}{90770922},
\ldots \right) .
\end{equation*}
This suggests the perhaps unexpected fact that these are all reciprocals of
integers. Something interesting must be going on here! However, a cursory
look-up of the denominators in the {\emph{On-Line Encyclopedia of
Integer Sequences}} (OEIS) \cite{oeis} does not result in a match. (Were we
to investigate the factorizations of these integers, we might at this point
discover the case $x = 1$ of {\eqref{eq:delannoy:x:Qdiff}}. But we hold off on
exploring that observation and continue to focus on the speed of convergence.)
By, say, plotting the logarithm of $Q (n) - Q (n - 1)$ versus $n$, we are led
to realize that the number of digits to which $Q (n - 1)$ and $Q (n)$ agree
appears to increase (almost perfectly) linearly. This means that $Q (n)$
converges to $L$ exponentially.
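The observations so far are easy to reproduce with a few lines of exact
rational arithmetic (a sketch; the function names are ours, and the decimal
value of $L$ is taken from the computation above):

```python
import math
from fractions import Fraction

def solution(u0, u1, N):
    # iterate (n+1) u_{n+1} = 3(2n+1) u_n - n u_{n-1}
    u = [Fraction(u0), Fraction(u1)]
    for n in range(1, N):
        u.append((3*(2*n + 1)*u[n] - n*u[n - 1]) / (n + 1))
    return u

N = 30
A = solution(1, 3, N)   # central Delannoy numbers: A(1) = 3 from A(-1)=0, A(0)=1
B = solution(0, 1, N)   # the secondary solution (not integral in general)
Q = [b/a for a, b in zip(A, B)]

assert Q[1:4] == [Fraction(1, 3), Fraction(9, 26), Fraction(131, 378)]
# the consecutive differences are all unit fractions ...
assert all((Q[n] - Q[n - 1]).numerator == 1 for n in range(1, N))
# ... and Q(n) approaches L = 0.34657359... exponentially fast
assert abs(Q[N - 1] - 0.3465735902799726547) < 1e-12
```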
\begin{exercise*}
For a computational challenge, quantify the exponential convergence by
conjecturing an exact value for the limit of $(Q (n + 1) - Q (n)) / (Q (n) -
Q (n - 1))$ as $n \rightarrow \infty$. Then connect that value to the
recurrence {\eqref{eq:delannoy:rec}}.
\end{exercise*}
At this point, we are confident that, say,
\begin{equation}
Q (50) = 0.34657359027997265470861606072908828403775006718 \ldots
\label{eq:delannoy:Q50}
\end{equation}
agrees with the limit $L$ to more than $75$ digits. The ability to recognize
constants from numerical data is a powerful asset in an experimental
mathematician's toolbox. Several approaches to constant recognition are
lucidly described in \cite[Section~6.3]{bb-exp1}. The crucial ingredients
are integer relation algorithms such as PSLQ or those based on lattice
reduction algorithms like LLL. Readers new to constant recognition may find
the {\emph{Inverse Symbolic Calculator}} of particular value. This web
service, created by Jon Borwein, Peter Borwein and Simon Plouffe, automates
the constant-recognition process: it asks for a numerical approximation as
input and determines, if successful, a suggested exact value. For instance,
given {\eqref{eq:delannoy:Q50}}, it suggests that
\begin{equation*}
L = \frac{1}{2} \ln (2),
\end{equation*}
which one can then easily confirm further to any desired precision. Of course,
while this provides overwhelming evidence, it does not constitute a proof.
Given the success of our exploration, a natural next step would be to repeat
this inquiry for the sequence of polynomials
\begin{equation}
A_x (n) = \sum_{k = 0}^n \binom{n}{k} \binom{n + k}{k} x^k,
\label{eq:delannoy:x}
\end{equation}
which satisfies the recurrence {\eqref{eq:delannoy:rec}} with the term $3 (2 n
+ 1)$ replaced by $(2 x + 1) (2 n + 1)$. An important principle to keep in
mind here is that introducing an additional parameter, like the $x$ in
{\eqref{eq:delannoy:x}}, can make the underlying structure more apparent; and
this may be crucial both for guessing patterns and for proving our assertions.
Now define the secondary solution $B_x (n)$ satisfying the recurrence with
$B_x (0) = 0$, $B_x (1) = 1$. Then, if we compute the difference of quotients
$Q_x (n) = B_x (n) / A_x (n)$ as before, we find that
\begin{equation*}
(Q_x (n) - Q_x (n - 1))_{n \geq 1} = \left(\frac{1}{1 + 2 x},
\frac{1}{2 (1 + 2 x) (1 + 6 x + 6 x^2)}, \ldots \right) .
\end{equation*}
Extending our earlier observation, these now appear to be the reciprocals of
polynomials with integer coefficients. Moreover, in factored form, we are
immediately led to conjecture that
\begin{equation}
Q_x (n) - Q_x (n - 1) = \frac{1}{n A_x (n) A_x (n - 1)} .
\label{eq:delannoy:x:Qdiff}
\end{equation}
Note that, since $Q_x (0) = 0$, this implies
\begin{equation}
Q_x (N) = \sum_{n = 1}^N (Q_x (n) - Q_x (n - 1)) = \sum_{n = 1}^N \frac{1}{n
A_x (n) A_x (n - 1)} \label{eq:delannoy:x:Qsum}
\end{equation}
and hence provides another way to compute the limit $L_x = \lim_{n \rightarrow
\infty} Q_x (n)$ as
\begin{equation*}
L_x = \sum_{n = 1}^{\infty} \frac{1}{n A_x (n) A_x (n - 1)},
\end{equation*}
which avoids reference to the secondary solution $B_x (n)$.
Can we experimentally identify the limit $L_x$? One approach could be to
select special values for $x$ and then proceed as we did for $x = 1$. For
instance, we might numerically compute and then identify the following values:
\begin{equation*}
\renewcommand{\arraystretch}{1.3}
\begin{array}{|l|l|l|l|}
\hline
& x = 1 & x = 2 & x = 3\\
\hline
L_x & \tfrac{1}{2} \ln (2) & \tfrac{1}{2} \ln \left(\tfrac{3}{2} \right)
& \tfrac{1}{2} \ln \left(\tfrac{4}{3} \right)\\
\hline
\end{array}
\end{equation*}
We are lucky and the emerging pattern is transparent, suggesting that
\begin{equation}
L_x = \frac{1}{2} \ln \left(1 + \frac{1}{x} \right) .
\label{eq:delannoy:x:L}
\end{equation}
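At sample values of $x$, this guess can be tested to high precision against
the series derived above from the conjectured identity
{\eqref{eq:delannoy:x:Qdiff}}; a sketch in Python (names are ours):

```python
import math
from math import comb
from fractions import Fraction

def A(n, x):
    # A_x(n) = sum_k C(n,k) C(n+k,k) x^k, evaluated exactly
    return sum(comb(n, k)*comb(n + k, k)*Fraction(x)**k for k in range(n + 1))

def L(x, N=40):
    # partial sum of the (conjectural) series L_x = sum 1/(n A_x(n) A_x(n-1))
    return float(sum(Fraction(1) / (n*A(n, x)*A(n - 1, x))
                     for n in range(1, N + 1)))

for x in (1, 2, 3, 5):
    assert abs(L(x) - 0.5*math.log(1 + 1/x)) < 1e-12
```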
A possibly more robust approach to identifying $L_x$ empirically is to fix
some values of $n$ and then expand the $Q_x (n)$, which are rational functions
in $x$, into power series. If the initial terms of these power series appear
to converge as $n \rightarrow \infty$ to identifiable values, then it is
reasonable to expect that these values are the initial terms of the power
series for the limit $L_x$. However, expanding around $x = 0$, we quickly
realize that the power series
\begin{equation*}
Q_x (n) = \sum_{k = 0}^{\infty} q_k^{(n)} x^k
\end{equation*}
do not stabilize as $n \rightarrow \infty$, but that the coefficients increase
in size: for instance, we find empirically that
\begin{equation*}
q_1^{(n)} = - n (n + 1), \quad q_2^{(n)} = \frac{1}{8} n (n + 1) (5 n^2 + 5
n + 6),
\end{equation*}
and it appears that, for $k \geq 1$, $q_k^{(n)}$ is a polynomial in $n$
of degree $2 k$. Expanding the $Q_x (n)$ instead around some nonzero value of
$x$ --- say, $x = 1$ --- is more promising. Writing
\begin{equation*}
Q_x (n) = \sum_{k = 0}^{\infty} r_k^{(n)} (x - 1)^k,
\end{equation*}
we observe empirically that
\begin{equation*}
\left( \lim_{n \rightarrow \infty} r_k^{(n)} \right)_{k \geq 1} = \left(-
\frac{1}{4}, \frac{3}{16}, - \frac{7}{48}, \frac{15}{128}, \ldots \right) .
\end{equation*}
Once we realize that the denominators are multiples of $k$, it is not
difficult to conjecture that
\begin{equation}
\lim_{n \rightarrow \infty} r_k^{(n)} = (- 1)^k \frac{2^k - 1}{k \cdot 2^{k
+ 1}} \label{eq:delannoy:x:Q:c1}
\end{equation}
for $k \geq 1$. From our initial exploration, we already know that
$\lim_{n \rightarrow \infty} r_0^{(n)} = \frac{1}{2} \ln (2)$ but we could
also have (re)guessed this value as the formal limit of the right-hand side of
{\eqref{eq:delannoy:x:Q:c1}} as $k \rightarrow 0$ (forgetting that $k$ is
really an integer). Anyway, {\eqref{eq:delannoy:x:Q:c1}} suggests that
\begin{equation*}
L_x = L_1 + \sum_{k = 1}^{\infty} (- 1)^k \frac{2^k - 1}{k \cdot 2^{k + 1}}
\, (x - 1)^k = \frac{1}{2} \ln (2) + \frac{1}{2} \ln \left(\frac{x + 1}{2
x} \right),
\end{equation*}
leading again to {\eqref{eq:delannoy:x:L}}. Finally, our life is easiest if we
look at the power series of $Q_x (n)$ expanded around $x = \infty$. In that
case, we find that the power series of $Q_x (n)$ and $Q_x (n + 1)$ actually
agree through order $x^{- 2 n}$. In hindsight --- and to expand our
experimental reach, it is always a good idea to reflect on the new data in
front of us --- this is a consequence of {\eqref{eq:delannoy:x:Qdiff}} and the
fact that $A_x (n)$ has degree $n$ in $x$ (so that $A_x (n) A_x (n - 1)$ has
degree $2 n - 1$). Therefore, from just the case $n = 3$ we are confident that
\begin{equation*}
L_x = Q_x (3) + O (x^{- 7}) = \frac{1}{2 x} - \frac{1}{4 x^2} + \frac{1}{6
x^3} - \frac{1}{8 x^4} + \frac{1}{10 x^5} - \frac{1}{12 x^6} + O (x^{- 7})
.
\end{equation*}
At this point the pattern is evident, and we arrive, once more, at the
conjectured formula {\eqref{eq:delannoy:x:L}} for $L_x$.
\section{Searching for Ap\'ery limits}\label{sec:search}
Inspired by the approach laid out in the previous section, one can search for
other Ap\'ery limits as follows:
\begin{enumerate}
\item Pick a binomial sum $A (n)$ and, using creative telescoping, compute a
recurrence satisfied by $A (n)$.
\item Compute the initial terms of a secondary solution $B (n)$ to the
recurrence.
\item Try to identify $\lim_{n \rightarrow \infty} B (n) / A (n)$ (either
numerically or as a power series in an additional parameter).
\end{enumerate}
It is important to realize, as will be indicated in Section~\ref{sec:cf}, that
if the binomial sum $A (n)$ satisfies a three-term recurrence, then the
Ap\'ery limit can be expressed as a continued fraction and compared to the
(rather staggering) body of known results \cite{wall-contfrac},
\cite{jt-contfrac}, \cite{handbook-contfrac}, \cite{bvsz-cf}.
Of course, the final step is to prove and/or generalize those discovered
results that are sufficiently appealing. One benefit of an experimental
approach is that we can discover results, connections and generalizations, as
well as discard less-fruitful avenues, before (or while!) working out a
complete proof. Ideally, the processes of discovery and proof inform each
other at every stage. For instance, experimentally finding a generalization
may well lead to a simplified proof, while understanding a small piece of a
puzzle can help set the direction of follow-up experiments. Jon Borwein's
extensive legacy is filled with such delightful examples.
Of course, one could just start with a recurrence; however, selecting a
binomial sum increases the odds that the recurrence has desirable properties
(it is a difficult open problem to ``invert creative telescoping'' in the
sense of producing a binomial sum satisfying a given recurrence). Some simple
suggestions for binomial sums, as well as the corresponding Ap\'ery limits,
are as follows (in each case, we choose the secondary solution with initial
conditions $B (0) = 0$, $B (1) = 1$).
\begin{equation*}
\setlength{\extrarowheight}{7pt}
\renewcommand{\arraystretch}{1.3}
\begin{array}{|>{\displaystyle}l|>{\displaystyle}l|}
\hline
\sum_{k = 0}^n \binom{n}{2 k} x^k & \frac{1}{\sqrt{x}} \quad
\text{(around $x = 1$)}\\
\hline
\sum_{k = 0}^n \binom{n - k}{k} x^k & \frac{2}{1 + \sqrt{1 + 4 x}} \quad
\text{(around $x = 0$)}\\
\hline
\sum_{k = 0}^n \binom{n}{k} \binom{n - k}{k} x^k & \frac{\arctan \left(\sqrt{4 x - 1} \right)}{\sqrt{4 x - 1}} \quad \text{(around $x =
\tfrac{1}{4}$)}\\
\hline
\end{array}
\end{equation*}
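Each row of the table can be tested numerically. For the first row, for
instance, the binomial theorem gives $A_x(n) = \tfrac{1}{2}\big((1 +
\sqrt{x})^n + (1 - \sqrt{x})^n\big)$, so $A_x(n)$ satisfies $u_{n+1} = 2 u_n -
(1 - x) u_{n-1}$ (this recurrence is our own derivation, not part of the
table); the following sketch checks the recurrence against the binomial sum
and the limit at the sample value $x = 4$, chosen so that all arithmetic stays
in the integers and the expected limit is $1/\sqrt{4} = 1/2$:

```python
from fractions import Fraction
from math import comb

x = 4   # sample value with rational square root; expected limit 1/2

def A_direct(n):
    # first row of the table: A_x(n) = sum_k C(n, 2k) x^k
    return sum(comb(n, 2*k)*x**k for k in range(n//2 + 1))

# A_x(n) satisfies u_{n+1} = 2 u_n - (1 - x) u_{n-1} (characteristic roots
# 1 +- sqrt(x)); B is the secondary solution with B(0) = 0, B(1) = 1
a0, a1, b0, b1 = 1, 1, 0, 1
for n in range(1, 40):
    a0, a1 = a1, 2*a1 - (1 - x)*a0
    b0, b1 = b1, 2*b1 - (1 - x)*b0
    assert a1 == A_direct(n + 1)    # the recurrence matches the binomial sum
q = Fraction(b1, a1)
assert abs(q - Fraction(1, 2)) < Fraction(1, 10**15)   # B(n)/A(n) -> 1/2
```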
\begin{example}
\label{eg:arctan}Setting $x = 1 / 2$ in the last instance leads to the limit
being $\arctan (1) = \pi / 4$ and therefore to a way of computing $\pi$ as
\begin{equation*}
\pi = \lim_{n \rightarrow \infty} \frac{4 B (n)}{A (n)},
\end{equation*}
where $A (n)$ and $B (n)$ both solve the recurrence $(n + 1) u_{n + 1} = (2
n + 1) u_n + n u_{n - 1}$ with $A (- 1) = 0$, $A (0) = 1$ and $B (0) = 0$,
$B (1) = 1$. In an experimental-math class, this could be used to segue into
the fascinating world of computing $\pi$, a topic to which Jon Borwein,
sometimes admiringly referred to as Dr.~Pi, has contributed so much --- one
example being the groundbreaking work in \cite{borwein-piagm} with his
brother Peter. Let us note that this is not a new result. Indeed, with the
substitution $z = \sqrt{4 x - 1}$, it follows from the discussion in
Section~\ref{sec:cf} that the Ap\'ery limit in question is equivalent to
the well-known continued fraction
\begin{equation*}
\arctan (z) = \frac{z}{1 +} \, \frac{1^2 z^2}{3 +} \, \frac{2^2 z^2}{5 +}
\, \cdots \, \frac{n^2 z^2}{(2 n + 1) +} \, \cdots
\end{equation*}
\cite[p.~343, eq.~(90.3)]{wall-contfrac}. The reader finds, for instance,
in \cite[Theorem~2]{bcp-cf-tails} that this continued fraction, as well as
corresponding ones for the tails of $\arctan (z)$, is a special case of
Gauss' famous continued fraction for quotients of hypergeometric functions
${}_2 F_1$. We hope that some readers and students, in particular, enjoy the
fact that they are able to rediscover such results themselves.
\end{example}
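This limit, too, is easy to witness numerically; a sketch with exact rational
arithmetic (the function name is ours):

```python
import math
from fractions import Fraction

def pi_approx(N):
    # iterate (n+1) u_{n+1} = (2n+1) u_n + n u_{n-1} for both solutions
    A, A_prev = Fraction(1), Fraction(0)    # A(0) = 1, A(-1) = 0
    for n in range(0, N):
        A, A_prev = ((2*n + 1)*A + n*A_prev) / (n + 1), A
    B, B_prev = Fraction(1), Fraction(0)    # B(1) = 1, B(0) = 0
    for n in range(1, N):
        B, B_prev = ((2*n + 1)*B + n*B_prev) / (n + 1), B
    return 4*B/A                            # 4 B(N) / A(N)

assert abs(pi_approx(40) - math.pi) < 1e-12
```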
\begin{example}
For more challenging explorations, the reader is invited to consider the
binomial sums
\begin{equation*}
A_x (n) = \sum_{k = 0}^n \binom{n}{k} \binom{n + k}{k}^2 x^k, \quad
\sum_{k = 0}^n \binom{n}{k} \binom{n + k}{k}^3 x^k,
\end{equation*}
and to compare the findings with those by Zudilin
\cite{zudilin-appr-polylog} who obtains simultaneous approximations to the
logarithm, dilogarithm and trilogarithm.
\end{example}
\begin{example}
\label{eg:cy}Increasing the level of difficulty further, one may consider,
for instance, the binomial sum
\begin{equation*}
A (n) = \sum_{k = 0}^n \binom{n}{k}^2 \binom{3 k}{n},
\end{equation*}
which is an appealing instance, randomly selected from many others, for
which Almkvist, van Straten and Zudilin \cite[Section~4,
\#219]{avz-apery-limits} have numerically identified an Ap\'ery limit (in
this case, depending on the initial conditions of the secondary solution,
the Ap\'ery limit can be empirically expressed as a rational multiple of
$\pi^2$ or of the $L$-function evaluation $L_{- 3} (2)$, or, in general, a
linear combination of those). To our knowledge, proving the conjectured
Ap\'ery limits for most cases in \cite[Section~4]{avz-apery-limits},
including the one above, remains open. While the techniques discussed in
Section~\ref{sec:pf} can likely be used to prove some individual limits, it
would be of particular interest to establish all these Ap\'ery limits in a
uniform fashion.
\end{example}
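Explorations of this kind can begin by simply generating terms (the helper below is ours) and then searching for a recurrence, for instance with a computer-algebra guessing routine:

```python
from math import comb

def A(n):
    """The binomial sum A(n) = sum_k binom(n,k)^2 * binom(3k, n)."""
    return sum(comb(n, k)**2 * comb(3*k, n) for k in range(n + 1))

terms = [A(n) for n in range(8)]   # starting data for guessing a recurrence
```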
Choosing an appropriate binomial sum as a starting point, the present approach
could be used to construct challenges for students in an experimental-math
class, with varying levels of difficulty (or that students could explore
themselves with minimal guidance). As illustrated by Example~\ref{eg:arctan},
simple cases can be connected with existing classical results, and
opportunities abound to connect with other topics such as hypergeometric
functions, computer algebra, orthogonal polynomials, or Pad\'e
approximation, which we could not properly discuss here. However, much about
Ap\'ery limits is not well understood and we believe that more serious
investigations, possibly along the lines outlined here, can help improve our
understanding. To highlight this point, we present in
Sections~\ref{sec:franel:d} and \ref{sec:open} several specific open problems
and challenges.
\section{Difference equations}\label{sec:diffeq}
In our initial motivating example, we started with a solution $A (n)$ to the
three-term recurrence {\eqref{eq:delannoy:rec}} and considered a second,
linearly independent solution $B (n)$ of that same recurrence. We then
discovered in {\eqref{eq:delannoy:x:Qsum}} that
\begin{equation*}
B (n) = A (n) \sum_{k = 1}^n \frac{1}{k A (k) A (k - 1)} .
\end{equation*}
That the secondary solution is expressible in terms of the primary solution is
a consequence of a general principle in the theory of difference equations,
which we outline in this section. For a gentle introduction to difference
equations, we refer to \cite{kelley-peterson-diff}.
Consider the general homogeneous linear difference equation
\begin{equation}
u (n + d) + p_{d - 1} (n) u (n + d - 1) + \cdots + p_1 (n) u (n + 1) + p_0
(n) u (n) = 0 \label{eq:de}
\end{equation}
of order $d$, where we normalize the leading coefficient to $1$. If $u_1 (n),
\ldots, u_d (n)$ are solutions to {\eqref{eq:de}}, then their
{\emph{Casoratian}} $w (n)$ is defined as
\begin{equation*}
w (n) = \det \begin{bmatrix}
u_1 (n) & u_2 (n) & \cdots & u_d (n)\\
u_1 (n + 1) & u_2 (n + 1) & \cdots & u_d (n + 1)\\
\vdots & \vdots & \ddots & \vdots\\
u_1 (n + d - 1) & u_2 (n + d - 1) & \cdots & u_d (n + d - 1)
\end{bmatrix} .
\end{equation*}
This is the discrete analog of the Wronskian that is discussed in most
introductory courses on differential equations. By applying the difference
equation {\eqref{eq:de}} to the last row in $w (n + 1)$ and then subtracting
off multiples of earlier rows, one finds that the Casoratian satisfies
\cite[Lemma~3.1]{kelley-peterson-diff}
\begin{equation*}
w (n + 1) = (- 1)^d p_0 (n) w (n)
\end{equation*}
and hence
\begin{equation}
w (n) = (- 1)^{d n} p_0 (0) p_0 (1) \cdots p_0 (n - 1) w (0) .
\label{eq:casoratian:rec}
\end{equation}
In the case of second order difference equations ($d = 2$), we have
\begin{equation*}
\frac{u_2 (n + 1)}{u_1 (n + 1)} - \frac{u_2 (n)}{u_1 (n)} = \frac{u_1 (n)
u_2 (n + 1) - u_1 (n + 1) u_2 (n)}{u_1 (n) u_1 (n + 1)} = \frac{w (n)}{u_1
(n) u_1 (n + 1)},
\end{equation*}
which implies that we can construct a second solution from a given solution as
follows.
\begin{lemma}
Let $d = 2$ and suppose that $u_1 (n)$ solves {\eqref{eq:de}} and that $u_1
(n) \neq 0$ for all $n \geq 0$. Then a second solution of
{\eqref{eq:de}} is given by
\begin{equation}
u_2 (n) = u_1 (n) \sum_{k = 0}^{n - 1} \frac{w (k)}{u_1 (k) u_1 (k + 1)},
\label{eq:u2:casoratian}
\end{equation}
where $w (k) = p_0 (0) p_0 (1) \cdots p_0 (k - 1)$.
\end{lemma}
Here we have normalized the solution $u_2$ by choosing $w (0) = 1$: this
entails $u_2 (0) = 0$ and $u_2 (1) = 1 / u_1 (0)$. Note also that if $p_0 (0)
\neq 0$, then $w (1) \neq 0$, which implies that the solution $u_2$ is
linearly independent from $u_1$.
\begin{example}
For the Delannoy difference equation {\eqref{eq:delannoy:rec}} and the
solutions $A (n)$, $B (n)$ with initial conditions as before, we have $d =
2$ and $p_0 (n) = (n + 1) / (n + 2)$, hence $w (n) = 1 / (n + 1)$. In
particular, equation {\eqref{eq:u2:casoratian}} is equivalent to
{\eqref{eq:delannoy:x:Qsum}}.
\end{example}
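As a concrete check of the lemma, one can rebuild the secondary solution from the primary one and confirm that it satisfies the same recurrence. The sketch below (function names are ours) uses the specialization $x = 1$, for which the recurrence reads $(n+2) u(n+2) = 3 (2n+3) u(n+1) - (n+1) u(n)$, matching $p_0(n) = (n+1)/(n+2)$ above, and the Ap\'ery limit is $\tfrac{1}{2} \ln 2$:

```python
from fractions import Fraction
from math import log

def delannoy_solutions(N, x=Fraction(1)):
    """Primary solution A(n) of (n+2) u(n+2) = (2x+1)(2n+3) u(n+1) - (n+1) u(n)
    (so that p_0(n) = (n+1)/(n+2)), together with the secondary solution from
    the lemma: B(n) = A(n) * sum_{k<n} w(k) / (A(k) A(k+1)) with w(k) = 1/(k+1)."""
    A = [Fraction(1), 2*x + 1]
    for n in range(N - 1):
        A.append(((2*x + 1)*(2*n + 3)*A[n + 1] - (n + 1)*A[n]) / (n + 2))
    B, s = [], Fraction(0)
    for n in range(N + 1):
        B.append(A[n] * s)
        if n < N:
            s += Fraction(1, n + 1) / (A[n] * A[n + 1])
    return A, B

A, B = delannoy_solutions(30)   # x = 1: A = 1, 3, 13, 63, ... (central Delannoy)
```

Note that $B(1) = 1/A(0) = 1$, matching the normalization discussed after the lemma, and that the recurrence holds for $B$ exactly, not merely numerically.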
Now suppose that $p_k (n) \rightarrow c_k$ as $n \rightarrow \infty$, for each
$k \in \{ 0, 1, \ldots, d - 1 \}$. Then the characteristic polynomial of the
recurrence {\eqref{eq:de}} is, by definition,
\begin{equation*}
\lambda^d + c_{d - 1} \lambda^{d - 1} + \cdots + c_1 \lambda + c_0 =
\prod_{k = 1}^d (\lambda - \lambda_k)
\end{equation*}
with characteristic roots $\lambda_1, \ldots, \lambda_d$. Poincar\'e's famous
theorem \cite[Theorem~5.1]{kelley-peterson-diff} states, under a modest
additional hypothesis, that each nontrivial solution to {\eqref{eq:de}} has
asymptotic growth according to one of the characteristic roots.
\begin{theorem}[Poincar\'e]
\label{thm:poincare}Suppose further that the characteristic roots have
distinct moduli. If $u (n)$ solves the recurrence {\eqref{eq:de}}, then
either $u (n) = 0$ for all sufficiently large $n$, or
\begin{equation}
\lim_{n \rightarrow \infty} \frac{u (n + 1)}{u (n)} = \lambda_k
\label{eq:poincare}
\end{equation}
for some $k \in \{ 1, \ldots, d \}$.
\end{theorem}
If, in addition, $p_0 (n) \neq 0$ for all $n \geq 0$ (so that, by
{\eqref{eq:casoratian:rec}}, the Casoratian $w (n)$ is either zero for all $n$
or nonzero for all $n$), then Perron's theorem guarantees that, for each $k$,
there exists a solution such that {\eqref{eq:poincare}} holds.
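Both theorems can be observed numerically on the central Delannoy recurrence $(n+2) u(n+2) = 3(2n+3) u(n+1) - (n+1) u(n)$, the case $x = 1$ of our motivating example (we restate the recurrence here as an assumption). Its characteristic roots are $3 \pm 2\sqrt{2}$: ratios of the dominant solution approach the large root, while the linear form $r(n) = \tfrac{1}{2} \ln(2)\, A(n) - B(n)$ is, up to a constant, the Perron solution for the small root. A sketch follows; high precision is needed because $r(n)$ decays geometrically, and the ratios converge only at rate $1/n$ because of polynomial factors in the asymptotics:

```python
from fractions import Fraction
from decimal import Decimal, getcontext
from math import sqrt

getcontext().prec = 400  # r(n) below shrinks like 0.17^n, so we need many digits

# Central Delannoy recurrence (n+2) u(n+2) = 3(2n+3) u(n+1) - (n+1) u(n);
# its characteristic polynomial is t^2 - 6t + 1, with roots 3 +/- 2*sqrt(2).
N = 200
A = [Fraction(1), Fraction(3)]   # central Delannoy numbers 1, 3, 13, 63, ...
B = [Fraction(0), Fraction(1)]   # secondary solution, B(0) = 0, B(1) = 1
for n in range(N - 1):
    A.append((3*(2*n + 3)*A[n + 1] - (n + 1)*A[n]) / (n + 2))
    B.append((3*(2*n + 3)*B[n + 1] - (n + 1)*B[n]) / (n + 2))

# Poincare: ratios of the dominant solution approach the large root.
big = float(A[N - 1] / A[N - 2])

# Perron: r(n) = (ln 2 / 2) A(n) - B(n) is attached to the small root.
L = Decimal(2).ln() / 2
dec = lambda q: Decimal(q.numerator) / Decimal(q.denominator)
r = [L * dec(A[n]) - dec(B[n]) for n in range(N)]
small = float(r[N - 1] / r[N - 2])
```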
\section{Continued Fractions}\label{sec:cf}
In this section, we briefly connect with (irregular) continued fractions
\begin{equation*}
C = b_0 + \frac{a_1}{b_1 +} \, \frac{a_2}{b_2 +} \, \frac{a_3}{b_3 +}
\ldots := b_0 + \frac{a_1}{b_1 + \frac{a_2}{b_2 + \frac{a_3}{b_3 +
\ldots}}},
\end{equation*}
as introduced, for instance, in \cite{jt-contfrac},
\cite[Entry~12.1]{berndtII} or \cite[Chapter~9]{bvsz-cf}. The $n$-th
convergent of $C$ is
\begin{equation*}
C_n = b_0 + \frac{a_1}{b_1 +} \, \frac{a_2}{b_2 +} \, \ldots \,
\frac{a_n}{b_n} .
\end{equation*}
It is well known, and readily proved by induction, that $C_n = B (n) / A (n)$
where both $A (n)$ and $B (n)$ solve the second-order recurrence
\begin{equation}
u_n = b_n u_{n - 1} + a_n u_{n - 2} \label{eq:cf:rec}
\end{equation}
with $A (- 1) = 0$, $A (0) = 1$ and $B (- 1) = 1$, $B (0) = b_0$. (Note that,
if $b_0 = 0$, then the initial conditions for $B (n)$ are equivalent to $B (0)
= 0$, $B (1) = a_1$.)
Conversely, see \cite[Theorem~9.4]{bvsz-cf}, two such solutions to a
second-order difference equation with non-vanishing Casoratian correspond to a
unique continued fraction. In particular, Ap\'ery limits $\lim_{n
\rightarrow \infty} B (n) / A (n)$ arising from second-order difference
equations can be equivalently expressed as continued fractions.
\begin{example}
The Ap\'ery limit {\eqref{eq:delannoy:x:L}} is equivalent to the continued
fraction
\begin{equation*}
\frac{1}{2} \ln \left(1 + \frac{1}{x} \right) = \frac{1}{(2 x + 1) -}
\, \frac{1^2}{3 (2 x + 1) -} \, \frac{2^2}{5 (2 x +
1) -} \cdots \frac{n^2}{(2 n + 1) (2 x + 1) -} \cdots
\end{equation*}
\cite[p.~343, eq.~(90.4)]{wall-contfrac}. To see this, it suffices to note
that, if $A_x (n)$ and $B_x (n)$ are as in Section~\ref{sec:delannoy}, then
$n!A_x (n)$ and $n!B_x (n)$ solve the adjusted recurrence
\begin{equation*}
u_{n + 1} = (2 x + 1) (2 n + 1) u_n - n^2 u_{n - 1}
\end{equation*}
of the form {\eqref{eq:cf:rec}}.
\end{example}
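This equivalence is easy to test by evaluating the convergents via the recurrence {\eqref{eq:cf:rec}}. A sketch (names are ours); at $x = 1$ the convergents rapidly approach $\tfrac{1}{2} \ln 2$:

```python
from fractions import Fraction
from math import log

def log_cf_convergents(x, N):
    """First N convergents B(n)/A(n) of the continued fraction above, via the
    three-term recurrence u_n = b_n u_{n-1} + a_n u_{n-2} with b_0 = 0,
    a_1 = 1, and b_n = (2n-1)(2x+1), a_n = -(n-1)^2 for n >= 2."""
    a = lambda n: Fraction(1) if n == 1 else Fraction(-(n - 1)**2)
    b = lambda n: (2*n - 1) * (2*x + 1)
    A2, A1 = Fraction(0), Fraction(1)   # A(-1), A(0)
    B2, B1 = Fraction(1), Fraction(0)   # B(-1), B(0) = b_0 = 0
    convergents = []
    for n in range(1, N + 1):
        A2, A1 = A1, b(n) * A1 + a(n) * A2
        B2, B1 = B1, b(n) * B1 + a(n) * B2
        convergents.append(B1 / A1)
    return convergents

conv = log_cf_convergents(Fraction(1), 40)   # should approach (1/2) ln 2
```

The denominators produced here are the sequences $n! A_x(n)$ from the example above.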
The interested reader can find a detailed discussion of the continued
fractions corresponding to Ap\'ery's limits for $\zeta (2)$ and $\zeta (3)$
in \cite[Section~9.5]{bvsz-cf}, which then proceeds to proving the
respective irrationality results.
\section{Proving Ap\'ery limits}\label{sec:pf}
In the sequel, we briefly indicate several approaches towards proving
Ap\'ery limits. In case of the Ap\'ery numbers {\eqref{eq:apery3}},
Ap\'ery established the limit {\eqref{eq:apery3:lim}} by finding the
explicit expression
\begin{equation}
B (n) = \frac{1}{6} \sum_{k = 0}^n \binom{n}{k}^2 \binom{n + k}{k}^2 \left(\sum_{j = 1}^n \frac{1}{j^3} + \sum_{m = 1}^k \frac{(- 1)^{m - 1}}{2 m^3
\binom{n}{m} \binom{n + m}{m}} \right) \label{eq:apery3:2}
\end{equation}
for the secondary solution $B (n)$. Observe how, indeed, the presence of the
truncated sum for $\zeta (3)$ in {\eqref{eq:apery3:2}} makes the limit
{\eqref{eq:apery3:lim}} transparent. While, nowadays, it is routine
\cite{schneider-apery} to verify that the sum {\eqref{eq:apery3:2}}
satisfies the recurrence {\eqref{eq:apery3:rec}}, it is much less clear how to
discover sums like {\eqref{eq:apery3:2}} that are suitable for proving a
desired Ap\'ery limit.
Shortly after Ap\'ery's proof, and inspired by it, Beukers
\cite{beukers-irr-leg} gave shorter and more elegant proofs of the
irrationality of $\zeta (2)$ and $\zeta (3)$ by considering double and triple
integrals that result in small linear forms in the zeta value and $1$. For
instance, for $\zeta (3)$, Beukers establishes that
\begin{align}
& \int_0^1 \int_0^1 \int_0^1 \frac{x^n (1 - x)^n y^n (1 - y)^n z^n (1
- z)^n}{(1 - (1 - x y) z)^{n + 1}} \mathrm{d} x \mathrm{d} y \mathrm{d} z \nonumber\\ ={} & 2 \left( A (n) \zeta
(3) - 6 B (n) \right), \label{eq:linearform:beukers:zeta3}
\end{align}
where $A (n)$ and $B (n)$ are precisely the Ap\'ery numbers
{\eqref{eq:apery3}} and the corresponding secondary solution
{\eqref{eq:apery3:2}}. By bounding the integrand, it is straightforward to
show that the triple integral approaches $0$ as $n \rightarrow \infty$. From
this we directly obtain the Ap\'ery limit {\eqref{eq:apery3:lim}}, namely,
$\lim_{n \rightarrow \infty} B (n) / A (n) = \zeta (3) / 6$.
\begin{example}
In the same spirit, the Ap\'ery limit {\eqref{eq:delannoy:x:L}} can be
deduced from
\begin{equation*}
\int_0^1 \frac{t^n (1 - t)^n}{(x + t)^{n + 1}} \mathrm{d} t = A_n (x) \ln
\left(1 + \frac{1}{x} \right) - 2 B_n (x),
\end{equation*}
which holds for $x > 0$ with $A_n (x)$ and $B_n (x)$ as in
Section~\ref{sec:delannoy}. We note that this integral is a variation of the
integral that Alladi and Robinson \cite{ar-irr} have used to prove
explicit irrationality measures for numbers of the form $\ln (1 + \lambda)$
for certain algebraic $\lambda$.
\end{example}
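This identity can be tested numerically with elementary quadrature. In the sketch below (helper names are ours), the sequences $A_x(n)$ and $B_x(n)$ are generated from the recurrence $(m+1) u(m+1) = (2x+1)(2m+1) u(m) - m\, u(m-1)$, which is the adjusted recurrence from Section~\ref{sec:cf} with the factorials divided back out:

```python
from math import log

def delannoy_pair(n, x):
    """A_x(n) and the secondary solution B_x(n) via the recurrence
    (m+1) u(m+1) = (2x+1)(2m+1) u(m) - m u(m-1), with the initial
    values A(0) = 1, A(1) = 2x+1 and B(0) = 0, B(1) = 1."""
    A, B = [1.0, 2*x + 1], [0.0, 1.0]
    for m in range(1, n):
        A.append(((2*x + 1)*(2*m + 1)*A[m] - m*A[m - 1]) / (m + 1))
        B.append(((2*x + 1)*(2*m + 1)*B[m] - m*B[m - 1]) / (m + 1))
    return A[n], B[n]

def lhs_integral(n, x, steps=20000):
    """Composite Simpson approximation of int_0^1 t^n (1-t)^n / (x+t)^(n+1) dt."""
    f = lambda t: t**n * (1 - t)**n / (x + t)**(n + 1)
    h = 1.0 / steps
    odd = sum(f((2*i + 1) * h) for i in range(steps // 2))
    even = sum(f(2*i * h) for i in range(1, steps // 2))
    return (f(0.0) + f(1.0) + 4*odd + 2*even) * h / 3

# At x = 1, n = 3: A_1(3) = 63, B_1(3) = 131/6, and both sides are small.
lhs = lhs_integral(3, 1.0)
A3, B3 = delannoy_pair(3, 1.0)
rhs = A3 * log(2) - 2 * B3
```

For $n = 1$ and $x = 1$ the identity can even be checked in closed form: the integral evaluates to $3 \ln 2 - 2$, which is exactly $A_1(1) \ln 2 - 2 B_1(1)$.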
As a powerful variation of this approach, the same kind of linear forms can be
constructed through hypergeometric sums obtained from rational functions. For
instance, Zudilin \cite{zudilin-arithmetic-odd} studies a general
construction, a special case of which is the relation, originally due to
Gutnik,
\begin{equation*}
- \sum_{t = 1}^{\infty} R_n' (t) = 2 \left( A (n) \zeta (3) - 6 B (n)
\right), \quad \text{where } R_n (t) = \left(\frac{(t - 1) \cdots (t - n)}{t (t +
1) \cdots (t + n)} \right)^2,
\end{equation*}
which once more equals {\eqref{eq:linearform:beukers:zeta3}}. We refer to
\cite{zudilin-arithmetic-odd}, \cite{zudilin-appr-polylog} and
\cite[Section~2.3]{avz-apery-limits} for further details and references. A
detailed discussion of the case of $\zeta (2)$ is included in
\cite[Sections~9.5 and 9.6]{bvsz-cf}.
Beukers \cite{beukers-irr} further realized that, in Ap\'ery's cases, the
differential equations associated to the recurrences have a description in
terms of modular forms. Zagier \cite{zagier4} has used such modular
parametrizations to prove Ap\'ery limits in several cases, including for the
Franel numbers, the case $d = 3$ in {\eqref{eq:franel:d}}. The limits occurring
in his cases are rational multiples of
\begin{equation*}
\zeta (2), \quad L_{- 3} (2) = \sum_{n = 1}^{\infty} \frac{\left(\frac{-
3}{n} \right)}{n^2}, \quad L_{- 4} (2) = \sum_{n = 1}^{\infty} \frac{\left(\frac{- 4}{n} \right)}{n^2} = \sum_{n = 0}^{\infty} \frac{(- 1)^n}{(2 n +
1)^2},
\end{equation*}
where $\left(\frac{a}{n} \right)$ denotes the Kronecker symbol and $L_{- 4} (2)$ is
Catalan's constant (whose irrationality remains famously unproven). A general
method for obtaining Ap\'ery limits in cases of modular origin has been
described by Yang \cite{yang-apery-limits}, who proves various Ap\'ery
limits in terms of special values of $L$-functions.
\section{Sums of powers of binomials}\label{sec:franel:d}
Let us consider the family
\begin{equation}
A^{(d)} (n) = \sum_{k = 0}^n \binom{n}{k}^d \label{eq:franel:d}
\end{equation}
of sums of powers of binomial coefficients. It is easy to see that $A^{(1)}
(n) = 2^n$ and $A^{(2)} (n) = \binom{2 n}{n}$. The numbers $A^{(3)} (n)$ are
known as {\emph{Franel numbers}} \cite[A000172]{oeis}. Almost a century
before the availability of computer-algebra approaches like creative
telescoping, Franel \cite{franel94} obtained explicit recurrences for
$A^{(3)} (n)$ as well as, in a second paper, $A^{(4)} (n)$, and he conjectured
that $A^{(d)} (n)$ satisfies a linear recurrence of order $\lfloor (d + 1) / 2
\rfloor$ with polynomial coefficients. This conjecture was proved by Stoll in
\cite{stoll-rec-bounds}, to which we refer for details and references. It
remains an open problem to show that, in general, no recurrence of lower order
exists.
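These first cases are easy to verify experimentally (the helper below is ours):

```python
from math import comb

def A(d, n):
    """Sum of the d-th powers of the binomial coefficients binom(n, k)."""
    return sum(comb(n, k)**d for k in range(n + 1))

powers_of_two = [A(1, n) for n in range(10)]    # should equal 2^n
central_binom = [A(2, n) for n in range(10)]    # should equal binom(2n, n)
franel = [A(3, n) for n in range(6)]            # 1, 2, 10, 56, 346, 2252
```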
Van der Poorten \cite[p.~202]{alf} reports that the following Ap\'ery
limits in the cases $d = 3$ and $d = 4$ (in which the binomial sums
satisfy second-order recurrences like Ap\'ery's sequences) have been
numerically observed by Tom Cusick:
\begin{equation}
\lim_{n \rightarrow \infty} \frac{B^{(3)} (n)}{A^{(3)} (n)} =
\frac{\pi^2}{24}, \quad \lim_{n \rightarrow \infty} \frac{B^{(4)}
(n)}{A^{(4)} (n)} = \frac{\pi^2}{30} . \label{eq:franel:L34}
\end{equation}
In each case, $B^{(d)} (n)$ is the (unique) secondary solution with initial
conditions $B^{(d)} (0) = 0$, $B^{(d)} (1) = 1$. The case $d = 3$ was proved
by Zagier \cite{zagier4} using modular forms. Since the case $d = 4$ is
similarly connected to modular forms \cite{cooper-level10}, we expect that
it can be established using the methods in \cite{yang-apery-limits},
\cite{zagier4} as well. Based on numerical evidence, following the approach
in Section~\ref{sec:search}, we make the following general conjecture
extending {\eqref{eq:franel:L34}}:
\begin{conjecture}
\label{conj:franel:2}For $d \geq 3$, the minimal-order recurrence
satisfied by $A^{(d)} (n)$ has a unique solution $B^{(d)} (n)$ with $B^{(d)}
(0) = 0$ and $B^{(d)} (1) = 1$ that also satisfies
\begin{equation}
\lim_{n \rightarrow \infty} \frac{B^{(d)} (n)}{A^{(d)} (n)} = \frac{\zeta
(2)}{d + 1} . \label{eq:conj:franel:2}
\end{equation}
\end{conjecture}
Note that for $d \geq 5$, the recurrence is of order $\geq 3$, and
so the solution $B^{(d)} (n)$ is not uniquely determined by the two initial
conditions $B^{(d)} (0) = 0$ and $B^{(d)} (1) = 1$.
Conjecture~\ref{conj:franel:2} asserts that precisely one of these solutions
satisfies {\eqref{eq:conj:franel:2}}.
Subsequent to making this conjecture, we realized that the case $d = 5$ was
already conjectured in \cite[Section~4.1]{avz-apery-limits} as sequence
\#22. We are not aware of previous conjectures for the cases $d \geq 6$.
We have numerically confirmed each of the cases $d \leq 10$ to more than
$100$ digits.
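For $d = 3$ and $d = 4$ this confirmation is easy to reproduce. The sketch below restates the two second-order recurrences as assumptions (Franel's recurrence $(n+1)^2 u(n+1) = (7n^2 + 7n + 2) u(n) + 8n^2 u(n-1)$ for $d = 3$, and the analogous recurrence for $d = 4$, both obtainable by creative telescoping) and checks the limits {\eqref{eq:franel:L34}} in exact arithmetic:

```python
from fractions import Fraction
from math import pi

def cusick_ratio(d, N):
    """B(N)/A(N) for the second-order recurrence satisfied by the sums of
    d-th powers of binomial coefficients, d = 3 or 4, with the secondary
    solution normalized by B(0) = 0, B(1) = 1."""
    if d == 3:
        # (n+1)^2 u(n+1) = (7n^2 + 7n + 2) u(n) + 8 n^2 u(n-1)
        step = lambda n, u1, u0: ((7*n*n + 7*n + 2)*u1 + 8*n*n*u0) / ((n + 1)**2)
    elif d == 4:
        # (n+1)^3 u(n+1) = 2(2n+1)(3n^2+3n+1) u(n) + 4n(4n-1)(4n+1) u(n-1)
        step = lambda n, u1, u0: (2*(2*n + 1)*(3*n*n + 3*n + 1)*u1
                                  + 4*n*(4*n - 1)*(4*n + 1)*u0) / ((n + 1)**3)
    else:
        raise ValueError("only d = 3 and d = 4 are implemented")
    A = [Fraction(1), Fraction(2)]   # A(0) = 1, A(1) = 2 for both d = 3, 4
    B = [Fraction(0), Fraction(1)]
    for n in range(1, N):
        A.append(step(n, A[n], A[n - 1]))
        B.append(step(n, B[n], B[n - 1]))
    return B[N] / A[N]

r3, r4 = cusick_ratio(3, 60), cusick_ratio(4, 60)
```

Since the characteristic roots are $8, -1$ for $d = 3$ and $16, -4$ for $d = 4$, the ratios converge geometrically, and $60$ terms comfortably exceed double precision.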
\begin{example}
\label{eg:franel:5}For $d = 5$, the sum $A^{(5)} (n)$ satisfies a recurrence
of order $3$, spelled out in \cite{perlstadt-franel}, of the form
\begin{equation}
(n + 3)^4 p (n + 1) u (n + 3) + \ldots + 32 (n + 1)^4 p (n + 2) u (n) = 0
\label{eq:franel:5:rec}
\end{equation}
where $p (n) = 55 n^2 + 33 n + 6$. The solution $B^{(5)} (n)$ from
Conjecture~\ref{conj:franel:2} is characterized by $B^{(5)} (0) = 0$ and
$B^{(5)} (1) = 1$ and insisting that the recurrence
{\eqref{eq:franel:5:rec}} also holds for $n = - 1$ (note that this does not
require a value for $B^{(5)} (- 1)$ because of the term $(n + 1)^4$).
Similarly, for $d = 6, 7, 8, 9$ the sequence $B^{(d)} (n)$ in
Conjecture~\ref{conj:franel:2} can be characterized by enforcing the
recurrence for small negative $n$ and by setting $B^{(d)} (n) = 0$ for $n <
0$. By contrast, for $d = 10$, there is a two-dimensional space of sequences
$u (n)$ solving the corresponding minimal-order recurrence for all integers $n$ with the
constraint that $u (n) = 0$ for $n \leq 0$. Among these, $B^{(10)} (n)$
is characterized by $B^{(10)} (1) = 1$ and $B^{(10)} (2) = 381 / 4$.
\end{example}
Now return to the case $d = 5$ and let $C^{(5)} (n)$ be the third solution to
the same recurrence with $C^{(5)} (0) = 0$, $C^{(5)} (1) = 1$, $C^{(5)} (2) =
\frac{48}{7}$. Numerical evidence suggests that we have the Ap\'ery limits
\begin{equation*}
\lim_{n \rightarrow \infty} \frac{B^{(5)} (n)}{A^{(5)} (n)} = \frac{1}{6}
\zeta (2), \quad \lim_{n \rightarrow \infty} \frac{C^{(5)} (n)}{A^{(5)}
(n)} = \frac{3 \pi^4}{1120} = \frac{27}{112} \zeta (4) .
\end{equation*}
Extending this idea to $d = 5, 6, \ldots, 10$, we numerically find Ap\'ery
limits $C^{(d)} (n) / A^{(d)} (n) \rightarrow \lambda \zeta (4)$ with the
following rational values for $\lambda$:
\begin{equation*}
\frac{27}{112}, \frac{4}{21}, \frac{37}{240}, \frac{7}{55}, \frac{47}{440},
\frac{1}{11} .
\end{equation*}
These values suggest that $\lambda$ can be expressed as a simple rational
function of $d$:
\begin{conjecture}
For $d \geq 5$, the minimal-order recurrence satisfied by $A^{(d)} (n)$
has a unique solution $C^{(d)} (n)$ with $C^{(d)} (0) = 0$ and $C^{(d)} (1)
= 1$ that also satisfies
\begin{equation*}
\lim_{n \rightarrow \infty} \frac{C^{(d)} (n)}{A^{(d)} (n)} = \frac{3 (5
d + 2)}{(d + 1) (d + 2) (d + 3)} \zeta (4) .
\end{equation*}
\end{conjecture}
More generally, we expect that, for $d \geq 2 m + 1$, there exist such
Ap\'ery limits for rational multiples of $\zeta (2), \zeta (4), \ldots,
\zeta (2 m)$. It is part of the challenge presented here to explicitly
describe all of these limits. As communicated to us by Zudilin, one could
approach the challenge, uniformly in $d$, by considering the rational
functions
\begin{equation*}
R_n^{(d)} (t) = \left(\frac{(- 1)^t n!}{t (t + 1) \cdots (t + n)}
\right)^d
\end{equation*}
in the spirit of \cite{zudilin-arithmetic-odd},
\cite{zudilin-appr-polylog} and \cite[Section~2.3]{avz-apery-limits}, as
indicated in Section~\ref{sec:pf}.
\section{Further challenges and open problems}\label{sec:open}
Although many things are known about Ap\'ery limits, much deserves to be
better understood. The explicit conjectures we offer in the previous section
can be supplemented with similar ones for other families of binomial sums. In
addition, many conjectural Ap\'ery limits that were discovered numerically
are listed in \cite[Section~4]{avz-apery-limits} for sequences that arise
from fourth- and fifth-order differential equations of Calabi--Yau type. As
mentioned in Example~\ref{eg:cy}, it would be of particular interest to
establish all these Ap\'ery limits in a uniform fashion.
It is natural to wonder whether a better understanding of Ap\'ery limits can
lead to new irrationality results. Despite considerable efforts and progress
(we refer the reader to \cite{zudilin-arithmetic-odd} and
\cite{brown-apery} as well as the references therein), it remains a
wide-open challenge to prove the irrationality of, say, $\zeta (5)$ or
Catalan's constant. As a recent promising construction in this direction, we
mention Brown's cellular integrals \cite{brown-apery} which are linear forms
in (multiple) zeta values that are constructed to have certain vanishing
properties that make them amenable to irrationality proofs. In particular,
Brown's general construction includes Ap\'ery's results as (unique) special
cases occurring as initial instances.
In another direction, it would be of interest to systematically study
$q$-analogs and, in particular, to generalize from difference equations to
$q$-difference equations. For instance, Amdeberhan and Zeilberger
\cite{az-qapery} offer an Ap\'ery-style proof of the irrationality of the
$q$-analog of $\ln (2)$ based on a $q$-version of the Delannoy numbers
{\eqref{eq:delannoy}}.
Perron's theorem, which we have mentioned after Poincar\'e's
Theorem~\ref{thm:poincare}, guarantees that, for each characteristic root
$\lambda$ of an appropriate difference equation, there exists a solution $u
(n)$ such that $u (n + 1) / u (n)$ approaches $\lambda$. We note that, for
instance, Ap\'ery's linear form {\eqref{eq:linearform:beukers:zeta3}} is
precisely the unique (up to a constant multiple) solution corresponding to the
$\lambda$ of smallest modulus. General tools to explicitly construct such
Perron solutions from the difference equation would be of great utility.
\begin{acknowledgment}
We are grateful to Alan Sokal for improving the
exposition by kindly sharing lots of careful suggestions and comments. We also
thank Wadim Zudilin for helpful comments, including his suggestion at the end
of Section~\ref{sec:franel:d}, and references.
\end{acknowledgment}
| {
"timestamp": "2020-11-09T02:16:14",
"yymm": "2011",
"arxiv_id": "2011.03400",
"language": "en",
"url": "https://arxiv.org/abs/2011.03400",
"abstract": "An important component of Apéry's proof that $\\zeta (3)$ is irrational involves representing $\\zeta (3)$ as the limit of the quotient of two rational solutions to a three-term recurrence. We present various approaches to such Apéry limits and highlight connections to continued fractions as well as the famous theorems of Poincaré and Perron on difference equations. In the spirit of Jon Borwein, we advertise an experimental-mathematics approach by first exploring in detail a simple but instructive motivating example. We conclude with various open problems.",
"subjects": "Number Theory (math.NT)",
"title": "Apéry Limits: Experiments and Proofs"
} |
https://arxiv.org/abs/2104.04086 | Halperin's conjecture in formal dimensions up to 20 | A 1976 conjecture of Halperin on positively elliptic spaces was recently confirmed in formal dimensions up to 16. In this article, we shorten the proof and extend the result up to formal dimension 20. We work with Meier's algebraic characterization of the conjecture, so the proof is elementary in that it involves only polynomial algebras, ideals, and derivations. | \section*{Introduction}
We consider Artinian complete intersection algebras
\[H^* = \Q[x_1,\ldots,x_k] / (u_1,\ldots,u_k)\]
with a grading concentrated in even degrees. Examples include the rational cohomology of positively elliptic topological spaces, so for simplicity we refer to these algebras as {\it positively elliptic algebras} (see Section \ref{sec:Preliminaries} for definitions).
Positively elliptic spaces play an important role in rational homotopy theory. In fact, they are the subject of a 1976 conjecture of Halperin that is listed as the first of seventeen open problems in \cite[Chapter 39]{FelixHalperinThomas01}. In 1982 Meier \cite{Meier82} proved that this conjecture can be reformulated algebraically as follows (see Section \ref{sec:Preliminaries}):
\begin{nonumberconjecture}[Halperin Conjecture] If $H^*$ is a positively elliptic algebra, then $H^*$ does not admit a non-trivial derivation of negative degree.
\end{nonumberconjecture}
Evidence for this conjecture includes proofs under geometric assumptions such as when $H^*$ is the rational cohomology algebra of a K\"ahler manifold or homogeneous space (see \cite{Blanchard56,Meier83,ShigaTezuka87}). It has also been verified under algebraic assumptions such as when $H^*$ is reduced (see \cite{PapadimaPaunescu96}), has at most three generators (see \cite{Lupton90,Chen99}), has relations of large degree (see \cite{ChenYauZuo19}), or has formal dimension at most $16$ (see \cite{AmannKennard20}). In this article, we expand on the latter result by shortening the proof and extending it as follows:
\begin{nonumbertheorem}
Halperin's conjecture holds in formal dimensions at most $20$.
\end{nonumbertheorem}
The proof follows the algebraic setup of \cite{PapadimaPaunescu96, Chen99,ChenYauZuo19} and the inductive strategy in \cite{AmannKennard20} (see Sections \ref{sec:Preliminaries} and \ref{sec:degreetypes}). In addition, we prove two new lemmas, the {\it Large Relations Lemma} and the {\it Top-to-Bottom Lemma} (see Sections \ref{sec:LargeRelations} and \ref{sec:Top-to-Bottom}). These serve to efficiently rule out all cases except for two in formal dimension $20$. The proof is completed in Section \ref{sec:Proof}.
\bigskip\noindent{\bf Acknowledgements:} The first author was supported by NSF Grant DMS-2005280, and the second was supported by a grant from the Syracuse Office of Undergraduate Research and Creative Engagement (SOURCE) at Syracuse University. Both authors would like to thank Manuel Amann and Claudia Miller for helpful discussions while preparing this paper.
\bigskip\section{Preliminaries}\label{sec:Preliminaries}\medskip
Let $A = \Q[x_1,\ldots,x_k]$ denote the polynomial ring on $k$ variables. Assume moreover that each $x_i$ has a positive, even degree assigned to it denoted by $|x_i|$. This induces a {\it graded algebra} structure on $A = \bigoplus_{n\geq 0} A^n$ where the subspace $A^n$ is spanned by monomials $x_1^{a_1}\cdots x_k^{a_k}$ satisfying $a_1|x_1| + \ldots + a_k |x_k| = n$.
Next let $I = (u_1,\ldots,u_k)$ denote the ideal generated by homogeneous polynomials $u_i \in A^{|u_i|}$, where $|u_i|$ denotes the degree of $u_i$. Recall that the $u_i$ form a {\it regular sequence} if $u_1 \in A$ is non-zero and if the image of $u_i$ in $A/(u_1,\ldots,u_{i-1})$ is not a zero divisor for all $2 \leq i \leq k$.
\begin{definition}\label{def:Q}
The quotient $H^* = \Q[x_1,\ldots,x_k]/(u_1,\ldots,u_k)$ of a graded polynomial ring on generators with even degrees by an ideal generated by a regular sequence $u_1,\ldots,u_k$ of homogeneous polynomials is called a {\it positively elliptic algebra}.
\end{definition}
\begin{remark}
Algebras presented as in Definition \ref{def:Q} are also called graded (or weighted or quasi-homogeneous) Artinian complete intersection algebras with grading concentrated in even degrees. The condition that all generators have even degree is not required to state the Halperin conjecture. However, we require it here to maintain the connection to rational homotopy theory, and this is the motivation for the definition (see \cite{Lupton98, AmannKapovitch12,AmannKollross-pre}). For example, the formal dimension defined below and appearing in our theorem equals the dimension of the underlying rationally elliptic manifold $M$ when $H^*$ is the rational cohomology of $M$.
\end{remark}
Singly generated algebras of the form $H^* = \Q[x_1]/(x_1^\alpha)$ are always positively elliptic. Doubly generated algebras can be positively elliptic or not, as can be seen from the examples $\Q[x_1,x_2]/(x_1^2 - x_2^2, x_1x_2)$ or $\Q[x_1,x_2]/(x_1^2, x_1x_2)$. Note in the latter case that the image of $x_1x_2$ in $\Q[x_1,x_2]/(x_1^2)$ is a zero divisor, so the ideal is not generated by a regular sequence.
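Finite-dimensionality of such quotients can be tested degree by degree with elementary linear algebra: $\ensuremath{\operatorname{dim}} H^{2m}$ is the number of monomials of degree $2m$ minus the rank of the span of the products $(\text{monomial}) \cdot u_i$ in that degree. A small sketch for the two bivariate examples above (all names are ours):

```python
from fractions import Fraction

def graded_dims(relations, max_m):
    """dim H^{2m}, m = 0..max_m, for H = Q[x1,x2]/(u1,u2) with |x1| = |x2| = 2
    and relations u_i of degree 4, each given as a dict {(a,b): coeff} with
    a + b = 2.  In each graded piece we row-reduce the span of all products
    (monomial of degree 2(m-2)) * u_i."""
    dims = []
    for m in range(max_m + 1):
        monos = [(a, m - a) for a in range(m + 1)]     # monomials of degree 2m
        index = {mono: i for i, mono in enumerate(monos)}
        rows = []
        for u in relations:
            for a in range(m - 1):                      # multipliers of degree 2(m-2)
                b = (m - 2) - a
                vec = [Fraction(0)] * len(monos)
                for (p, q), c in u.items():
                    vec[index[(a + p, b + q)]] += c
                rows.append(vec)
        rank, col = 0, 0                                # Gaussian elimination
        while rank < len(rows) and col < len(monos):
            piv = next((i for i in range(rank, len(rows)) if rows[i][col] != 0), None)
            if piv is None:
                col += 1
                continue
            rows[rank], rows[piv] = rows[piv], rows[rank]
            for i in range(len(rows)):
                if i != rank and rows[i][col] != 0:
                    f = rows[i][col] / rows[rank][col]
                    rows[i] = [x - f * y for x, y in zip(rows[i], rows[rank])]
            rank, col = rank + 1, col + 1
        dims.append(len(monos) - rank)
    return dims

# u1 = x1^2 - x2^2, u2 = x1*x2 (a regular sequence) versus u1 = x1^2, u2 = x1*x2.
regular = graded_dims([{(2, 0): 1, (0, 2): -1}, {(1, 1): 1}], 4)
not_regular = graded_dims([{(2, 0): 1}, {(1, 1): 1}], 4)
```

The first example yields the dimensions $1, 2, 1, 0, 0$ (total dimension $4$, exhibiting the Poincar\'e duality discussed below), while in the second the powers $x_2^m$ survive in every degree, so the quotient is infinite-dimensional.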
To better understand positively elliptic algebras, we review a few well-known facts. First, since the $u_i$ are homogeneous, the grading on $A$ descends to $H^* = \bigoplus_{n\geq 0} H^n$. Note that $H^n = 0$ for odd $n$ since the degrees $|x_i|$ are even.
Second, the quotient $H^* = \Q[x_1,\ldots,x_k]/(u_1,\ldots,u_k)$ by an arbitrary sequence of homogeneous polynomials $u_i$ of positive degree has {\it finite dimension} $\ensuremath{\operatorname{dim}} H^* = \sum \ensuremath{\operatorname{dim}} H^n$ (or Krull dimension zero) if and only if the $u_i$ form a regular sequence. Therefore another way to define a positively elliptic algebra is by requiring that $\ensuremath{\operatorname{dim}} H^* < \infty$.
Third, $H^*$ satisfies Poincar\'e duality (see \cite[Section~8]{Halperin77}). This means that there exists $n \geq 0$ such that $H^i = 0$ for $i > n$ and $H^n \cong \Q$ and that the product map $H^i \times H^{n-i} \to H^n \cong \Q$ is a non-degenerate bilinear pairing for all $0 \leq i \leq n$. The integer $n$ is called the {\it formal dimension} (or socle degree) and is denoted by $\fd H^*$ (cf. \cite{CattalaniMilivojevic20}).
Positively elliptic algebras arise as the rational cohomology algebras of rationally elliptic topological spaces $F$ with positive Euler characteristic. In the literature, such spaces are said to be $F_0$ or positively elliptic, and Halperin conjectured in 1976 that they satisfy the following: for every fibration with fiber $F$, the associated Serre spectral sequence degenerates (see \cite[Chapter 39]{FelixHalperinThomas01}).
In 1982, Meier \cite{Meier82} proved that Halperin's conjecture can be reformulated entirely algebraically in terms of negative degree derivations as stated in the introduction. Indeed, any non-trivial differential $d_{r+1}$ in the spectral sequence induces a derivation $\delta$ on $H^*(F)$ of (negative) degree $-r$. Recall that a derivation on $H^*$ is a linear map $\delta:H^* \to H^*$ that increases degree by some (possibly non-positive) integer $|\delta| \in \Z$ and behaves on products of homogeneous elements as follows:
\[\delta(xy) = \delta(x) y + (-1)^{|\delta||x|} x \delta(y).\]
\begin{example}\label{exa:2gen}
The graded algebra $H^* = \Q[x_1,x_2]/(x_1^2 - \lambda x_2^2, x_1x_2)$ with $|x_1| = |x_2| = 2$ and $\lambda \in \Q\setminus\{0\}$ is a positively elliptic algebra and admits a non-trivial derivation $\delta$ of degree 2. Indeed, if we define $\delta(x_1) = x_1^2$ and $\delta(x_2) = 0$ and extend the definition by linearity and the Leibniz rule, we obtain a well defined derivation on $\Q[x_1,x_2]$. In addition, $\delta(x_1^2 - \lambda x_2^2)$ and $\delta(x_1x_2)$ are in the ideal $(x_1^2 - \lambda x_2^2, x_1x_2)$, so $\delta$ descends to a non-trivial derivation on $H^*$.
\end{example}
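The verification that $\delta$ maps the ideal into itself can be made completely explicit: $\delta(x_1^2 - \lambda x_2^2) = 2x_1^3 = 2x_1 u_1 + 2\lambda x_2 u_2$ and $\delta(x_1 x_2) = x_1^2 x_2 = x_1 u_2$, where $u_1 = x_1^2 - \lambda x_2^2$ and $u_2 = x_1 x_2$. The sketch below merely spot-checks these two cofactor identities at random integer points:

```python
import random

# delta(x1) = x1^2, delta(x2) = 0 extends to Q[x1, x2] by linearity and the
# Leibniz rule; since |delta| = 2 is even, no signs appear.  The example rests
# on the cofactor identities (found by hand)
#     delta(u1) = 2 x1^3   = 2 x1 * u1 + 2 lam * x2 * u2,
#     delta(u2) = x1^2 x2  = x1 * u2,
# which exhibit delta(u1) and delta(u2) as elements of the ideal (u1, u2).
def identities_hold(trials=200, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        x1, x2, lam = (rng.randint(-50, 50) for _ in range(3))
        u1, u2 = x1**2 - lam * x2**2, x1 * x2
        if 2 * x1**3 != 2 * x1 * u1 + 2 * lam * x2 * u2:
            return False
        if x1**2 * x2 != x1 * u2:
            return False
    return True

ok = identities_hold()
```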
This example also demonstrates the way in which we work with derivations on $H^*$. They correspond exactly to derivations on $\Q[x_1,\ldots,x_k]$ that map the ideal $(u_1,\ldots,u_k)$ into itself. We note again here that we use the same notation for the generators $x_i$ in $\Q[x_1,\ldots,x_k]$ and in the quotient $H^*$.
We now restate Meier's reformulation of Halperin's conjecture from the introduction here for easy reference:
\begin{nonumberconjecture}[Halperin Conjecture] Positively elliptic algebras do not admit non-trivial derivations of negative degree.
\end{nonumberconjecture}
We close this preliminary section with two basic results (see \cite[Lemmas 11.1 and 11.3]{AmannKennard20}).
\begin{lemma}[Land in Zero Lemma]\label{lem:land}
Let $\delta$ be a derivation of negative degree on $H^*$. If $x\in H^{i}$ for some $i>0$ such that $\delta(x)\in H^0$, then $\delta(x)=0$.
\end{lemma}
\begin{lemma}[$k-1$ Lemma]\label{lem:kmo1}
If $\delta$ is a derivation of negative degree on $H^*$ such that $\delta(x_i)=0$ for $k-1$ of the $k$ generators $x_i$, then $\delta = 0$.
\end{lemma}
These lemmas imply Thomas' result that the Halperin Conjecture holds when $H^*$ is generated by at most two elements (see \cite{Thomas81}).
\bigskip\section{Degree types and splittings}\label{sec:degreetypes}\medskip
Given a positively elliptic algebra $H^* = \Q[x_1,\ldots,x_k]/(u_1,\ldots,u_k)$, the {\it degree type} of $H^*$ is the sequence of even, positive integers denoted by
\[(|x_1|,\ldots,|x_k|;|u_1|,\ldots,|u_k|).\]
As Example \ref{exa:2gen} shows, the degree type $(2, 2; 4, 4)$ can be realized in infinitely many ways, even up to isomorphism. This is a general feature. Nevertheless, it is helpful to sort positively elliptic algebras according to their degree types. In this section, we summarize previous work on degree types as they relate to Halperin's conjecture. In addition, we define pure models, formal dimension, and splittings. The first basic result is the following (see \cite[Section 32]{FelixHalperinThomas01}):
\begin{theorem}[Pure model]\label{thm:PureModel} Given a non-zero positively elliptic algebra $H^*$, there exist variables $x_i$, positive and even degrees $|x_i|$, and homogeneous polynomials
\[u_i \in \Q^{\geq 2}[x_1,\ldots,x_k] = \mathrm{span}\{x_1^{a_1}\cdots x_k^{a_k} ~|~ a_1 + \ldots + a_k \geq 2\}\]
such that $H^* \cong \Q[x_1,\ldots,x_k]/(u_1,\ldots,u_k)$. Moreover these choices can be made to satisfy all of the following:
\begin{enumerate}
\item $|x_1| \leq \ldots \leq |x_k|$.
\item $|u_1| \leq \ldots \leq |u_k|$.
\item $|u_i| \geq 2|x_i|$ for all $1 \leq i \leq k$.
\end{enumerate}
In addition, the formal dimension satisfies
\[\fd H^* = \sum_{i=1}^k \of{|u_i| - |x_i|}.\]
\end{theorem}
Such a presentation of $H^*$ is called a {\it pure model}, and we assume from now on that our presentations of positively elliptic algebras are pure models.
\begin{proof}
By definition, there is some presentation of $H^* = \Q[x_1,\ldots,x_k]/(u_1,\ldots,u_k)$ with $k \geq 1$. We may assume that $k$ is minimal.
Clearly the $u_j$ have no constant terms, since otherwise the ideal $I = (u_1,\ldots,u_k)$ would be the entire polynomial algebra. Moreover, if some $u_j$ has a linear term equal to a non-zero multiple of $x_i$, then the endomorphism of $\Q[x_1,\ldots,x_k]$ that replaces $x_i$ by $u_j$ is an automorphism. Taking the quotient by $I$ gives rise to a presentation of $H^*$ on $k-1$ generators. This contradicts the minimality of $k$, so we have that each $u_j \in \Q^{\geq 2}[x_1,\ldots,x_k]$.
Next, we may relabel the generators and relations so that $|x_1| \leq \ldots \leq |x_k|$ and $|u_1| \leq \ldots \leq |u_k|$. The final condition that $|u_i| \geq 2|x_i|$ for all $i$ follows by the result of Friedlander and Halperin below (Theorem \ref{thm:SAC}). Indeed, this result implies that some relation (and hence $u_k$) has degree at least twice $|x_k|$, that at least two relations (and hence $u_{k-1}$ and $u_k$) have degree at least twice $|x_{k-1}|$, and so on.
The last claim holds by the well known fact that $H^*$ satisfies Poincar\'e duality and that one choice of fundamental class is given by the Jacobian $\det\of{{\partial u_i}/{\partial x_j}}$ (see, for example, the remarks following Theorem B in \cite{ShigaTezuka87}).
\end{proof}
\begin{example}
The positively elliptic algebra $H^* = \Q[x_1,x_2]/(x_1^2 - x_2, x_2^3)$ with $|x_1| = 2$ and $|x_2| = 4$ can be more efficiently presented as $H^* \cong \Q[x]/(x^6)$. Indeed, an isomorphism is given by mapping $x_1 \mapsto x$ and $x_2 \mapsto x^2$.
\end{example}
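As a consistency check (our remark), the formal-dimension formula of Theorem \ref{thm:PureModel} applied to the pure model $\Q[x]/(x^6)$ with $|x| = 2$, whose single relation has degree $12$, gives

```latex
\[\fd H^* = |u| - |x| = 12 - 2 = 10,\]
```

which is indeed the degree of the top class $x^5$.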
A consequence of Theorem \ref{thm:PureModel} is that any given formal dimension only allows for finitely many degree types. Indeed,
\[\fd H^* = \sum_{i=1}^k \of{|u_i| - |x_i|} \geq \sum_{i=1}^k |x_i| \geq 2k,\]
so $k \leq \frac 1 2 \fd H^*$, the possible degrees $|x_i|$ are similarly bounded, and therefore the possibilities for the $|u_i|$ are finite.
A further restriction on the degree types is the following result due to Friedlander and Halperin (see \cite[Corollary 1.10]{FriedlanderHalperin79} or \cite[Proposition 32.9]{FelixHalperinThomas01}):
\begin{theorem}[Characterization of degree types]\label{thm:SAC} A sequence \[(A_1,\ldots,A_k; B_1,\ldots,B_k)\] of positive, even integers arises as the degree type of some positively elliptic algebra if and only if the following holds: For all $1 \leq l \leq k$ and $1 \leq i_1 < \ldots < i_l \leq k$, there exist $1 \leq j_1 < \ldots < j_l \leq k$ such that $B_{j_1},\ldots,B_{j_l}$ can be expressed as linear combinations of the form $\lambda_1 A_{i_1} + \ldots + \lambda_l A_{i_l}$ with non-negative integer coefficients satisfying $\lambda_1 + \ldots + \lambda_l \geq 2$.
\end{theorem}
To illustrate, the degree type $(A_1,A_2; B_1, B_2) = (2,4;4,10)$ does not satisfy this condition, since neither $B_j$ is of the form $\lambda A_2 = 4\lambda$ with an integer $\lambda \geq 2$. Similarly, the degree type $(2,2,4,4; 4,6,8,10)$ does not satisfy the condition and therefore does not arise as the degree type of a positively elliptic algebra.
\begin{definition}\label{def:SAC}
A sequence $(A_1,\ldots,A_k; B_1, \ldots, B_k)$ as in Theorem \ref{thm:SAC} satisfies the condition $SAC(A_{i_1}, \ldots, A_{i_l})$ if there exist $B_{j_1},\ldots,B_{j_l}$ as in the theorem.
\end{definition}
In \cite{FriedlanderHalperin79}, the condition that $SAC(A_{i_1},\ldots,A_{i_l})$ holds for all possible subsequences of indices $1 \leq i_1 < \ldots < i_l \leq k$ is called the Strong Algebraic Condition (SAC). The examples after the theorem fail the SAC(4) and the SAC(4,4), respectively, and therefore both fail the SAC.
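Since only finitely many coefficient vectors can occur, the condition of Theorem \ref{thm:SAC} can be tested by brute force. The following Python sketch is our own illustration (the helper names are ours); it confirms the two failures noted above:

```python
# Brute-force test of the degree-type condition (our own illustration).
from itertools import combinations

def representable(b, degrees):
    """Can b be written as a non-negative integer combination of the
    given degrees with coefficient sum at least 2?"""
    def rec(rest, i, coeff_sum):
        if rest == 0:
            return coeff_sum >= 2
        if i == len(degrees):
            return False
        d = degrees[i]
        return any(rec(rest - m * d, i + 1, coeff_sum + m)
                   for m in range(rest // d + 1))
    return rec(b, 0, 0)

def sac_holds(A, B):
    """Check the condition of the characterization theorem for (A; B)."""
    k = len(A)
    for l in range(1, k + 1):
        for sub in combinations(A, l):
            # We need l distinct B_j, each representable from the chosen
            # A's; representability depends only on the value B_j, so it
            # suffices to count how many B_j are representable.
            if sum(representable(b, sub) for b in B) < l:
                return False
    return True

assert sac_holds([2, 2], [4, 4])                   # degree type (2,2; 4,4)
assert not sac_holds([2, 4], [4, 10])              # fails SAC(4)
assert not sac_holds([2, 2, 4, 4], [4, 6, 8, 10])  # fails SAC(4,4)
```

The degree type $(2,2;4,4)$ of Example \ref{exa:2gen} passes, as it must.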
We close this section with a discussion of positively elliptic algebras $H^*$ that decompose in a certain sense.
\begin{definition}[Split positively elliptic algebras]
A positively elliptic algebra $H^*$ {\it splits} if there exist non-zero positively elliptic algebras $K^*$ and $Q^*$ such that the sequence
\[0 \to K^* \to H^* \to Q^* \to 0\]
is exact.
\end{definition}
An example of a splitting is if $H^*$ has a presentation as a pure model
\[H^* \cong \Q[x_1,\ldots,x_k]/(u_1,\ldots, u_k)\]
such that, for some $0 < l < k$, the polynomials $u_1,\ldots,u_l$ only depend on $x_1,\ldots,x_l$. Indeed, in this case, all of the following are easy to verify:
\begin{enumerate}
\item The subalgebra $K^* = \Q[x_1,\ldots,x_l]/(u_1,\ldots,u_l)$ is a positively elliptic algebra.
\item The quotient algebra $Q^* = H^*/K^* \cong \Q[\bar x_{l+1},\ldots,\bar x_k]/(\bar u_{l+1}, \ldots, \bar u_k)$ is a positively elliptic algebra, where the bars denote images under the projection map.
\item The formal dimensions of $K^*$ and $Q^*$ are positive and sum to $\fd H^*$.
\end{enumerate}
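A minimal example of this situation (our illustration): the positively elliptic algebra

```latex
\[H^* = \Q[x_1,x_2]/(x_1^2,\,x_2^3), \qquad |x_1| = |x_2| = 2,\]
```

splits with $K^* = \Q[x_1]/(x_1^2)$ and $Q^* \cong \Q[\bar x_2]/(\bar x_2^3)$, since $u_1 = x_1^2$ depends only on $x_1$. The formal dimensions $2$ and $4$ of $K^*$ and $Q^*$ sum to $\fd H^* = (4-2) + (6-2) = 6$.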
In the proof of the Halperin conjecture up to dimension $20$, we will proceed by induction over the formal dimension. In particular, the following is an important tool for dealing with the split case (see \cite[Theorem 1]{Markl90}):
\begin{theorem}[Markl's theorem]
Let $H^*$ be a positively elliptic algebra with a non-zero derivation of negative degree. If $H^*$ splits as
\[0 \to K^* \to H^* \to Q^* \to 0,\]
then $K^*$ or $Q^*$ also has a non-zero derivation of negative degree.
\end{theorem}
In our situation, where the splitting of
\[H^* \cong \Q[x_1,\ldots,x_k]/(u_1,\ldots,u_k)\]
is adapted to this choice of pure model in the sense that the polynomials $u_1,\ldots,u_l$ only depend on $x_1,\ldots,x_l$ for some $0 < l < k$, Markl's proof takes on a somewhat simpler form, so we include it here:
\begin{proof}
Assume that $H^* = \Q[x_1,\ldots,x_k]/(u_1,\ldots,u_k)$ is a pure model for a positively elliptic algebra $H^*$ with the property that \[u_1,\ldots,u_l \in \Q[x_1,\ldots,x_l]\] for some $0 < l < k$. Let
\[K^* = \Q[x_1,\ldots,x_l]/(u_1,\ldots,u_l).\]
Let $\delta$ be a derivation of negative degree on $H^*$. We assume that both $K^*$ and $H^*/K^*$ do not admit non-zero derivations of negative degree, and we use this to prove that $\delta = 0$.
First, since the degrees of the $x_i$ are increasing, the derivation $\delta$ restricts to a derivation on $K^*$. By the assumption on $K^*$, we have
\[\delta(x_1) = \ldots = \delta(x_l) = 0.\]
Next, fix any vector space basis $\{\xi_\alpha\}$ for $K^*$ consisting of monomials $\xi_\alpha$ in the variables $x_1,\ldots,x_l$. For each $y \in H^*$, there exist polynomials $\delta_\alpha(y)$ in the variables $x_{l+1},\ldots,x_k$ such that
\[\delta(y) = \sum_\alpha \xi_\alpha \delta_\alpha(y).\]
We claim that each of the maps
\begin{eqnarray*}
\bar\delta_\alpha:H^*/K^* &\to& H^*/K^*\\
\bar y &\mapsto& \overline{\delta_\alpha(y)}
\end{eqnarray*}
is a well defined derivation of negative degree on $H^*/K^*$ and hence is equal to zero by assumption. If this claim is true, it is straightforward to see that
\[\delta(x_j) = 0 \mathrm{~for~all~} l < j \leq k\]
since otherwise some $\delta_\alpha(x_j)$ is a non-zero polynomial only depending on $x_{l+1},\ldots,x_k$, which would imply that
\[\bar\delta_\alpha(\bar x_j) = \overline{\delta_\alpha(x_j)} \neq 0\]
in $H^*/K^*$, a contradiction.
To prove the claim, we first consider the issue of being well defined. It suffices to show that $\delta_\alpha$ maps the ideal $(x_1,\ldots,x_l)$ into itself. Fix an element $z$ of this ideal, and express it as
\[z = \sum_{i=1}^l x_i z_i.\]
Since $\delta$ is a derivation on $H^*$ that vanishes on $x_1,\ldots,x_l$, this equation implies the following upon applying $\delta$:
\[\sum_{\alpha} \xi_\alpha \delta_\alpha(z)
= \sum_{i=1}^l x_i \sum_\alpha \xi_\alpha \delta_\alpha(z_i).\]
For any $\alpha$, define $\delta_{\alpha,i}$ to be zero if $x_i$ does not divide the monomial $\xi_\alpha$, and otherwise to be $\delta_\beta$, where $\beta$ is the index such that $x_i \xi_\beta = \xi_\alpha$. Hence
\[\delta_\alpha(z) = \sum_{i=1}^{l} \delta_{\alpha,i}(z_i).\]
We are ready to prove by induction that $\bar\delta_\alpha= 0$ for any $\alpha$. For the base case, let $\alpha_0$ denote the index corresponding to the constant monomial $\xi_{\alpha_0} = 1$. The calculation above shows that $\delta_{\alpha_0}$ maps the ideal $(x_1,\ldots,x_l)$ to zero, so $\bar \delta_{\alpha_0}$ is a well defined function on $H^*/K^*$. Similar, but simpler, calculations using the fact that $\delta$ is linear and satisfies the Leibniz rule imply that these properties also hold for $\bar\delta_{\alpha_0}$. Since the degree of $\bar\delta_{\alpha_0}$ equals that of $\delta$, it must vanish by our assumption on $H^*/K^*$.
For the inductive step, fix any other index $\alpha$ and assume that $\bar \delta_\beta = 0$ for all $\beta < \alpha$. The computation of $\delta_\alpha(z)$ above, together with the induction hypothesis, implies that $\delta_\alpha$ descends to a well defined function on $H^*/K^*$. Again it follows that $\bar\delta_\alpha$ is a derivation on $H^*/K^*$ with degree $|\delta| - |\xi_\alpha| < |\delta| < 0$. Hence the assumption on $H^*/K^*$ implies that $\bar\delta_\alpha = 0$. This completes the proof of the claim and hence of Markl's theorem.
\end{proof}
For the inductive proof of our main theorem, Markl's theorem implies there is nothing to do in the case where $H^*$ splits. Therefore it is useful to have conditions that imply the existence of splittings. The following is an example of this (see \cite{AmannKennard20}):
\begin{lemma}[Degree Inequality]\label{lem:degreeInequality}
Let $H^* = \Q[x_1,\ldots,x_k]/(u_1,\ldots,u_k)$ be a positively elliptic algebra that does not split. The following hold:
\begin{enumerate}
\item If $i < k$, then $|u_i| \geq |x_1| + |x_{i+1}|$.
\item If $\delta(x_2) = \lambda x_1^\alpha \neq 0$ for some $\lambda \in \Q$, where $\delta$ is a derivation on $H^*$ with negative degree, then $|u_1| \geq |x_1| + |x_3|$.
\end{enumerate}
\end{lemma}
\begin{proof}
The first claim is a restatement of \cite[Lemma 11.4]{AmannKennard20}. It follows easily: if $|u_i| < |x_1| + |x_{i+1}|$ for some $i$, then for degree reasons each $u_j \in \Q^{\geq 2}[x_1,\ldots,x_k]$ with $1 \leq j \leq i$ is a polynomial in $x_1,\ldots,x_i$. Hence $x_1,\ldots,x_i$ generate a non-trivial subalgebra, a contradiction.
The second claim is implicit in the proof of \cite[Lemma 11.5]{AmannKennard20}. Suppose that $\delta(x_2) = \lambda x_1^\alpha \neq 0$ for some $\lambda \in \Q$ and $\alpha \geq 1$. Suppose for the purpose of contradiction that $|u_1| < |x_1| + |x_3|$. As in the previous paragraph, we conclude that $u_1$ is a polynomial in $x_1$ and $x_2$. Write $u_1 = \sum_{i=0}^r p_i(x_1) x_2^i$. Since $\delta(u_1)$ is in the ideal $(u_1,\ldots,u_k)$ and at the same time has degree less than any of the $u_i$, we have $\delta(u_1) = 0$. On the other hand, $\delta(u_1) = \sum_{i=1}^r \lambda i p_i(x_1) x_1^\alpha x_2^{i-1}$, so $p_i(x_1) = 0$ for all $i \geq 1$. Hence $u_1 = p_0(x_1)$, $x_1$ generates a non-trivial subalgebra of $H^*$, and we have a contradiction.
\end{proof}
\bigskip\section{The Large Relations Lemma}\label{sec:LargeRelations}\medskip
In this section, we prove the Large Relations Lemma and use it to prove Corollary \ref{cor:8}. This is the main step in proving the Halperin Conjecture when the degrees of the two largest generators satisfy $|x_{k-1}| + |x_k| \leq 8$.
\begin{lemma}[Large Relations Lemma]\label{lem:LargeRelations}
Let $H^* \cong \Q[x_1,\ldots,x_k]/(u_1,\ldots,u_k)$ be a positively elliptic algebra that does not split. Assume that $H^*$ admits a derivation $\delta$ of negative degree such that the map $\delta:H^4 \to H^2$ has rank $m \geq 1$.
Let $g_i$ denote the number of generators with degree $i$, and let $r_j$ denote the number of relations with degree $j$. The following hold:
\begin{enumerate}
\item If $g_6 + g_{10} + g_{14} + \ldots = 0$, then
\[r_{12} + r_{16} + \ldots \geq \of{k-g_2-g_4} + \max(1, m-r_4).\]
\item If $g_6 + g_{10} + g_{14} + \ldots \geq 1$ and $\delta^2(H^6) = 0$, then
\[r_{10} + r_{12} + \ldots \geq \of{k-g_2-g_4} + \max(1, m-r_4).\]
\end{enumerate}
In particular, $|u_k| \geq 12$ in the first case and $|u_{k-1}| \geq 10$ in the second.
\end{lemma}
\begin{proof}
We prove the claims simultaneously. By the Land in Zero Lemma, we may assume that $\delta(x_i) = 0$ for $1 \leq i \leq g_2$. In addition, we may change basis so that
\[\delta(x_{g_2+i}) = \left\{\begin{array}{rcl} x_{i} &\mathrm{if}& 1 \leq i \leq m\\
0 &\mathrm{if}& m < i \leq g_4\end{array}\right.\]
Finally, if $x_h$ is a generator in degree six, then the condition $\delta^2(H^6) = 0$ implies that $\delta(x_h)$ has no $x_{g_2+i}$ term with $1 \leq i \leq m$.
Let $\{u_j ~|~ j \in J\}$ denote the relations of degree $8$. Write each of these as
\[u_j = p_j(x_{g_2+1},\ldots,x_{g_2+m}) + r_j\]
where $p_j$ is a quadratic polynomial and where $r_j$ is in the ideal
\[I_0 = (x_1,\ldots,x_{g_2}) + (x_{g_2 + m+1}, \ldots, x_{g_2+g_4}).\]
Fix $J' \subseteq J$ such that $\{p_j ~|~ j \in J'\}$ is a basis for the span of $\{p_j ~|~ j \in J\}$.
Suppose for a moment that $|J'| \geq m$. Since $I_0 \cap H^4 \subseteq \ker(\delta)$, $H^6 \subseteq \ker(\delta^2)$, and $|r_j| = 8$, it follows that $\delta^2(r_j) = 0$ for all $j \in J'$. Therefore
\[\delta^2(u_j) = 2 p_j(x_1,\ldots,x_m)\]
for $j \in J'$. Now $\delta^2(u_j)$ has degree four and lies in the ideal generated by $u_1,\ldots,u_k$. Hence $p_j(x_1,\ldots,x_m)$ lies in the $r_4$-dimensional span of $\{u_i ~|~ |u_i| = 4\}$. Since the polynomials $p_j$ with $j \in J'$ are linearly independent, we may perform a change of basis on the degree-four relations $u_i$ such that $u_1,\ldots,u_m$ are equal to the polynomials $p_j(x_1,\ldots,x_m)$ for $j \in J'$. But this implies that $x_1,\ldots,x_m$ generate a subalgebra $K^*$ that induces a splitting of $H^*$, a contradiction to the assumptions in the lemma.
We may assume now that $|J'| \leq m - 1$. By the argument in the previous paragraph, $|J'| \leq \min(m-1, r_4)$ by choice of $J'$. We can perform a change of basis on the $u_j$ for $j \in J$ so that $p_j = 0$ for $j \in J \setminus J'$.
To finish the proof of Claim 1, consider the ideal
\[I = I_0 + \of{\{u_j ~|~ j \in J'\}} + \of{\{u_j ~|~ |u_j| \in\{12,16,\ldots\}\}}.\]
If a relation $u_i$ has degree less than eight, or degree not divisible by four, then it lies in $I_0$ for degree reasons, since there are no generators in degrees $6$, $10$, etc. If $|u_i| = 8$, then it lies in $I_0 + \of{\{u_j ~|~ j \in J'\}}$ by choice of $J'$. Finally, it is clear that $u_i \in I$ for all other relations $u_i$. Hence $H^*$ projects onto $\Q[x_1,\ldots,x_k]/I$. Since $H^*$ is finite-dimensional, $I$ must have at least $k$ generators. Therefore
\[(g_2 + g_4 - m) + \min(m-1,r_4) + (r_{12} + r_{16} + \ldots) \geq k,\]
which implies the desired bound in Claim 1.
To finish the proof of Claim 2, we use a similar argument with $I$ replaced by
\[I = I_0 + \of{\{u_j ~|~ j \in J'\}} + \of{\{u_j ~|~ |u_j| \in\{10,12,\ldots\}\}}.\]
It is clear that relations of degree four or degree eight or larger lie in $I$. Relations of degree six are also in $I_0$ and hence in $I$ because they are polynomials in $\Q^{\geq 2}[x_1,\ldots,x_k]$ (see Theorem \ref{thm:PureModel}). Hence again all relations are in $I$, and the claim follows as before.
\end{proof}
Next, we apply Lemma \ref{lem:LargeRelations} to quickly prove our main theorem when $|x_{k-1}| + |x_k| \leq 8$ in all but three exceptional cases.
\begin{proposition}\label{pro:8} Let $H^*$ be a positively elliptic algebra that does not split. If $H^*$ admits a non-zero derivation of negative degree and $|x_{k-1}| + |x_k| \leq 8$, then either $\fd H^* > 20$ or the degree type is one of the following:
\[(2,2,4,4; 4,6,8,12), (2,2,4,4; 4,8,8,12), \mathrm{~or~} (2,2,2,4,4; 4,4,6,8,12).\]
\end{proposition}
\begin{proof}
By the Land in Zero and $k-1$ Lemmas, we may assume that $|x_{k-1}| = |x_k| = 4$ and that $\delta:H^4 \to H^2$ has rank $m \geq 2$. Hence Lemma \ref{lem:LargeRelations} implies that $|u_k| \geq 12$. This forces the formal dimension to be large, so we estimate it now.
Let $g_4 \geq m$ denote the number of generators of degree four. Using the formula for the formal dimension in Theorem \ref{thm:PureModel} and the Degree Inequality (Lemma \ref{lem:degreeInequality}), we have
\begin{eqnarray*}
\fd H^*
&\geq& \sum_{i=1}^{k-g_4} \of{|x_1| + |x_{i+1}| - |x_i|} + \sum_{i=k-g_4+1}^{k-1} {|x_i|} ~+~ \of{12 - |x_k|}\\
&=& 2k + 2g_4 + 6.
\end{eqnarray*}
If $g_4 \geq 3$, then $k \geq m + g_4 \geq 5$ and hence $\fd H^* > 20$. This is what we wish to show, so we may assume that $g_4 = m = 2$. The degree type is of the form
\[(2,\ldots, 2, 4, 4; B_1, \ldots, B_k)\]
with $B_i \geq 2|x_i| = 4$ for $1 \leq i \leq k-3$, $B_{k-2} \geq |x_1| + |x_{k-1}| = 6$, $B_{k-1} \geq 2|x_{k-1}| = 8$, and $B_k \geq 12$. Going back to the estimate on $\fd H^*$, we see that $k \in \{4,5\}$.
If $k = 4$, then $\fd H^* = \sum_{i=1}^4 B_i - 12 \geq 18$. Since we may assume that $\fd H^* \leq 20$, it follows either that we have equality in all four of the lower bounds on the $B_i$ or that we have equality in three of the four bounds and we are off by two in the fourth. This gives rise to five possibilities for the degree type. Two of these are ruled out by the SAC(4,4) condition, one is ruled out by the bound $r_{12} + r_{16} + \ldots \geq m - r_4$ from Lemma \ref{lem:LargeRelations}, and the remaining two appear in the conclusion of the proposition.
If instead $k = 5$, then we estimate as above: $\fd H^* = \sum_{i=1}^5 B_i - 14 \geq 20$. Hence equality holds in all five of the lower bounds on the $B_i$, and we find that the degree type is the last one shown in statement of the proposition.
\end{proof}
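The five-candidate bookkeeping in the $k = 4$ case above can be replayed mechanically. The following Python sketch is our own; the bounds and filters are hard-coded from the proof of Proposition \ref{pro:8}, with generator degrees $(2,2,4,4)$, $m = 2$, and $g_2 = g_4 = 2$:

```python
# Mechanical replay of the k = 4 case in Proposition pro:8 (our sketch;
# bounds and filters are transcribed from the proof, not derived here).
from itertools import product

LOWER = (4, 6, 8, 12)      # lower bounds on B_1 <= B_2 <= B_3 <= B_4
M, K, G2, G4 = 2, 4, 2, 2  # rank m, generator count k, g_2, g_4

# Nondecreasing even sequences meeting the bounds with 18 <= fd <= 20,
# where fd = (B_1 + ... + B_4) - (2 + 2 + 4 + 4).
candidates = [B for B in product(*(range(lo, 33, 2) for lo in LOWER))
              if list(B) == sorted(B) and 18 <= sum(B) - 12 <= 20]
assert len(candidates) == 5  # the five possibilities in the proof

def sac44(B):
    # SAC(4,4): at least two B_j of the form 4*l1 + 4*l2 with l1 + l2 >= 2,
    # i.e. at least two B_j that are multiples of 4 and at least 8.
    return sum(b >= 8 and b % 4 == 0 for b in B) >= 2

def large_relations_bound(B):
    # r_12 + r_16 + ... >= (k - g_2 - g_4) + max(1, m - r_4)
    r4 = sum(b == 4 for b in B)
    r12up = sum(b >= 12 and b % 4 == 0 for b in B)
    return r12up >= (K - G2 - G4) + max(1, M - r4)

survivors = [B for B in candidates if sac44(B) and large_relations_bound(B)]
assert survivors == [(4, 6, 8, 12), (4, 8, 8, 12)]
```

The two surviving relation-degree sequences are exactly those appearing in the conclusion of the proposition.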
\begin{proposition}\label{pro:8part2}
Let $H^*$ be a positively elliptic algebra that does not split. If the degree type is
\[(2,2,4,4; 4,6,8,12), (2,2,4,4; 4,8,8,12), \mathrm{~or~} (2,2,2,4,4; 4,4,6,8,12),\]
then there does not exist a non-zero derivation with negative degree.
\end{proposition}
\begin{proof}
We keep the notation from Lemma \ref{lem:LargeRelations}, with a slight modification. We may assume
\[\delta(x_{k-1}) = x_1 \mathrm{~and~} \delta(x_k) = x_2\]
and that $\delta(x_i) = 0$ for $1 \leq i \leq k-2$. In addition, after possibly swapping the two degree eight relations in the second case, we may assume that
\[u_{k-1} = p_{k-1}(x_{k-1}, x_k) + r_{k-1}\]
with $p_{k-1} \neq 0$ and $r_{k-1} \in (x_1,\ldots,x_{k-2})$. Indeed, if $p_{k-1} = 0$ (and $p_{k-2} = 0$ in the second case), then $H^*$ admits a quotient map onto $\Q[x_1,\ldots,x_k]/(x_1,\ldots,x_{k-2},u_k)$, a contradiction to finite-dimensionality.
Applying $\delta^2$ as in the proof of Lemma \ref{lem:LargeRelations}, we find that
\[p_{k-1}(x_1,x_2) = u_1\]
after possibly changing basis in the degree four relations. In addition, in the case where $u_{k-2}$ also has degree eight, we find that $p_{k-2}$ is a multiple of $u_1$, where $u_{k-2} = p_{k-2}(x_{k-1}, x_k) + r_{k-2}$ and $r_{k-2} \in (x_1,\ldots,x_{k-2})$. In this case, we can replace $u_{k-2}$ by $u_{k-2} - \mu u_{k-1}$ for some $\mu \in \Q$ so that $p_{k-2} = 0$. In any case, we have shown that
\[u_1,\ldots,u_{k-2} \in (x_1,\ldots,x_{k-2}).\]
We now extend the argument from Lemma \ref{lem:LargeRelations} by considering the degree $12$ relation $u_k$. Write
\[u_k = p_k(x_{k-1}, x_k) + r_k\]
for some cubic polynomial $p_k$ and some $r_k \in (x_1,\ldots,x_{k-2})$. For degree reasons, we have that $\delta^3(r_k) = 0$ and hence that
\[6 p_k(x_1,x_2) = \delta^3(u_k) \in (u_1,\ldots,u_k).\]
Note that $p_k(x_1,x_2)$ has degree six and can be expressed as
\[p_k(x_1,x_2) = \sum_{i=1}^k h_i u_i\]
where $h_i \in \Q[x_1,\ldots,x_k]$ is a linear polynomial in the first $k-2$ variables if $|u_i| = 4$, where $h_i \in \Q$ if $|u_i| = 6$, and where $h_i = 0$ if $|u_i| \geq 8$.
We further claim that $h_i=0$ when $|u_i|=6$. Indeed, otherwise we can replace $u_i$ by the expression $\sum_j h_j u_j$ so that $u_i = p_k(x_1,x_2)$. For the degree types under consideration, this implies that $x_1,x_2,\ldots, x_{k-2}$ generate a subalgebra $K^*$ that induces a splitting of $H^*$, a contradiction. We may therefore assume that $p_k(x_1,x_2) = h_1 u_1$ in the first two cases and that $p_k(x_1,x_2) = h_1 u_1 + h_2 u_2$ in the third.
To derive a contradiction in the first two cases (where $k = 4$), recall that $u_1 = p_{3}(x_1,x_2)$ and hence that $p_4(x_3,x_4)$ is in the ideal
\[I = (x_1,x_2, p_3(x_3,x_4)).\]
For degree reasons, it follows that $I$ contains all four of the $u_i$ and hence that there exists a projection of $H^*$ onto $\Q[x_1,\ldots,x_4]/I$. Since the latter space has infinite dimension, this is a contradiction.
To derive a contradiction in the last case (where $k = 5$), we consider the expression
\[p_5(x_1,x_2) = h_1 u_1 + h_2 u_2.\]
Write $h_i = l_i(x_1,x_2) + k_i x_3$ for some linear polynomials $l_i$ and some $k_i\in\Q$, and write $u_2 = u_{2,0}(x_1,x_2) + x_3 u_{2,1}(x_1,x_2,x_3)$. We break the proof into cases.
\begin{itemize}
\item Suppose $u_{2,1} = 0$. This implies that $u_2$ is a polynomial in $x_1$ and $x_2$. Since $u_1 = p_4(x_1,x_2)$ as well, we see that $x_1$ and $x_2$ generate a subalgebra that induces a splitting of $H^*$, a contradiction.
\item Suppose $h_2 = 0$. This implies that $u_1 = p_{4}$ divides $p_5$. Hence the ideal
\[I = (x_1,x_2,x_3,p_{4}(x_4,x_5))\]
contains all of the $u_j$, a contradiction to finite-dimensionality of $H^*$.
\item Suppose instead that $u_{2,1} \neq 0$ and that $h_2 \neq 0$. Comparing coefficients of $x_3^2$ and $x_3^3$ in the above equation, we see that $h_2 = l_2 \neq 0$. Similarly, comparing coefficients of $x_3$, we find that $k_1 \neq 0$.
Now $l_2$ divides $p_5 - h_1 p_4$, which can be written as
\[\of{p_5 - l_1 p_{4}} - x_3 \of{k_1 p_{4}}.\]
It follows that $l_2$ divides both $p_5 - l_1 p_{4}$ and $k_1 p_{4}$ and hence $p_{4}$ and $p_5$ as well. Hence the ideal
\[I = (x_1,x_2,x_3,l_2(x_4,x_5))\]
contains all five of the relations $u_j$, and we once again have a contradiction to the finite-dimensionality of $H^*$.
\end{itemize}
We have derived a contradiction in all cases, so the proof is complete.
\end{proof}
The two propositions in this section imply the following.
\begin{corollary}\label{cor:8}
Let $H^* \cong \Q[x_1,\ldots,x_k]/(u_1,\ldots,u_k)$ be a pure model for a positively elliptic algebra that does not split and has the property that
\[|x_{k-1}| + |x_k| \leq 8.\]
If $H^*$ admits a non-zero derivation with negative degree, then $\fd H^* > 20$.
\end{corollary}
\bigskip\section{The Top-to-Bottom Lemma}\label{sec:Top-to-Bottom}\medskip
In this section, we prove the Top-to-Bottom Lemma and use it to prove Corollary \ref{cor:10}. This is the main step in the proof of the Halperin Conjecture when the degrees of the two largest generators satisfy $|x_{k-1}| + |x_k| = 10$.
\begin{lemma}[Top-to-Bottom Lemma]\label{lem:Top-to-Bottom}
Let $H^* \cong \Q[x_1,\ldots,x_k]/(u_1,\ldots,u_k)$ be a positively elliptic algebra that does not split and that satisfies $|u_k| < 3|x_k|$. If there exists a derivation $\delta$ on $H^*$ and $l \geq 1$ such that the map
\[\delta^l \colon H^{|x_k|} \to H^{|x_1|}\]
exists and is non-zero, then in fact this map has rank at least two.
\end{lemma}
This lemma is reminiscent of the $k - 1$ Lemma, which states that a derivation of negative degree is non-zero only if it is non-zero on at least two of the generators.
\begin{proof}
Without loss of generality, we may assume that $|\delta|$ divides $|x_k| - |x_1|$, and we may fix $l \geq 1$ such that $\delta^l$ maps $H^{|x_k|}$ into $H^{|x_1|}$. We may also assume that this map has rank exactly one and change basis, if necessary, so that $\delta^l(x_k) = x_1$ and that $\delta^l(x_i) = 0$ for $i < k$.
Consider the ideal in $H^*$ generated by $x_1,\ldots,x_{k-1}$. Since $H^*$ is finite-dimensional, there exists some relation $u_i$ not in this ideal. Since $|u_i| < 3|x_k|$, we must have
\[u_i = \lambda x_k^2 + x_k f + g\]
for some non-zero $\lambda \in \Q$ and some $f, g \in \Q[x_1,\ldots,x_{k-1}]$. By scaling $u_i$, we may assume $\lambda = 1$, and then completing the square and replacing $x_k$ by $x_k + \frac 1 2 f$, we may assume that $f = 0$.
We apply $\delta^{2l}$ to this equation. On the left-hand side, we see that $\delta^{2l}(u_i)$ is in the ideal $(u_1,\ldots,u_k)$ and has the minimal possible degree $2|x_1|$. In particular, $\delta^{2l}(u_i)$ is a rational linear combination of the relations of minimal degree. Hence either it is zero or it is $u_1$ after possibly replacing $u_1$ by this linear combination.
On the right-hand side, note that
\[\delta^{2l}(x_k^2) = \binom{2l}{l} \of{\delta^l x_k}^2= \binom{2l}{l} x_1^2.\]
If it is the case that $\delta^{2l}(g) = 0$, then both sides of the equation are non-zero and we have $u_1 \in \Q[x_1]$, a contradiction to the assumption that $H^*$ does not split. Hence we may assume that $\delta^{2l}(g) \neq 0$.
Now $g$ is a polynomial in $x_1,\ldots,x_{k-1}$, so there exists a monomial $x_{i_1}\cdots x_{i_p}$ appearing in $g$ such that \[\delta^{2l}(x_{i_1}\cdots x_{i_p}) \neq 0.\] Furthermore, by the Leibniz rule, there exists $j_1 + \ldots + j_p = 2l$ such that \[\delta^{j_1}(x_{i_1})\cdots\delta^{j_p}(x_{i_p}) \neq 0.\] Each term in this product is non-zero and hence has degree at least $|x_1|$. Summing, we have
\[p|x_1| \leq \of{|x_{i_1}| + j_1|\delta|} + \ldots + \of{|x_{i_p}| + j_p|\delta|} = 2|x_k| + 2l |\delta| = 2|x_1|.\]
Hence $p \leq 2$. At the same time, $x_k$ has maximal degree among the generators, so $p = 2$ and equality holds in the estimate above. It follows that some $\delta^l(x_i) \neq 0$ with $x_i \neq x_k$, and this implies a contradiction to our choice of basis at the beginning of the proof.
\end{proof}
Using the Top-to-Bottom Lemma, we can nearly prove the theorem under the condition $|x_{k-1}| + |x_k| = 10$. The exceptional case given in Proposition \ref{pro:10part1} is proved in Proposition \ref{pro:10part2} using another argument.
\begin{proposition}\label{pro:10part1} Let $H^*$ be a positively elliptic algebra that does not split. If there exists a non-zero derivation of negative degree and $|x_{k-1}| + |x_k| = 10$, then either $\fd H^* > 20$ or the degree type is equal to
\[(2,2,2,4,6; 4,4,6,10,12).\]
\end{proposition}
\begin{proof}
Since $|x_{k-1}|$ and $|x_k|$ are positive, even numbers summing to $10$, and since $|x_{k-1}| \neq 2$ by the Land in Zero and $k-1$ Lemmas, we may assume that $|x_{k-1}| = 4$ and $|x_k| = 6$. In addition, we may assume that
\[\delta(x_{k-1}) = x_1\]
up to a change in basis. Note also that $k \geq 3$.
First suppose that $|u_k| > 12$. By the condition $SAC(6)$, some relation has degree $6\lambda$ with an integer $\lambda \geq 2$, that is, degree at least $12$. In particular, $|u_{k-1}| \geq 12$ or $|u_k| \geq 18$, and hence
\[\sum_{i=k-1}^k \of{|u_i| - |x_i|} = |u_{k-1}| + |u_k| - 10 \geq 16.\]
Note also that
\[|u_{k-2}| - |x_{k-2}| \geq \max\of{|x_{k-2}|, |x_1| + |x_{k-1}| - |x_{k-2}|}.\]
Since the maximum is at least the average, and since the left-hand side is even, this is at least four. Substituting these estimates into the formula for the formal dimension and applying the Degree Inequality, we have
\[\fd H^* \geq \sum_{i=k-2}^{k}\of{|u_i| - |x_i|} \geq 20.\]
Since we may assume that $\fd H^* \leq 20$, we have equality everywhere. In particular, $k = 3$ and $|u_1| - |x_1| = 4$. But the $k-1$ Lemma implies that $\delta(x_2) = \lambda x_1 \neq 0$, so Part 2 of the Degree Inequality implies that $|u_1| \geq |x_1| + |x_3|$. This is a contradiction, and we may assume that $|u_k| = 12$.
The Top-to-Bottom Lemma implies that $\delta^2(x_k) = 0$. After replacing $x_k$ by something of the form $x_k - l(x_1,\ldots,x_{k-2})x_{k-1}$, we may assume that
\[\delta(x_k) = p(x_2,\ldots,x_{k-2}) \neq 0.\]
In particular, $k \neq 3$, since otherwise this expression implies that $\delta(x_3) = 0$, a contradiction to the $k-1$ Lemma. Assume then that $k \geq 4$.
The condition $\delta^2(x_k) = 0$ also means that we can apply the second part of Lemma \ref{lem:LargeRelations}. Hence $|u_{k-1}| \geq 10$.
Suppose for a moment that $k \geq 5$. Since $|u_{k-1}| - |x_{k-1}| \geq 6$ and $|u_k| - |x_k| = 6$, we can estimate the formal dimension as above to obtain
\[\fd H^* \geq (k-3)|x_1| + 4 + 6 + 6 \geq 20.\]
Hence we may assume that equality holds in these estimates. It follows that the degree type is of the form
\[(2, A_2, A_3, 4, 6; 2+A_2, 2+A_3, 6, 10, 12).\]
But now the bounds $|u_i| \geq 2|x_i|$ for all $i$ imply that $A_3 = 2$ and $A_2 = 2$, so this is the exceptional case given in the conclusion of the proposition.
We may assume therefore that $k = 4$. In particular,
\[\delta(x_3) = x_1 \mathrm{~and~} \delta(x_4) = p(x_2),\]
where $p$ is linear if $|x_2| = 4$ and quadratic if $|x_2| = 2$.
Since $H^*$ is finite-dimensional, not all of the $u_i$ lie in the ideal $I=(x_1,x_2, x_4)$, since otherwise $H^*$ projects onto the infinite-dimensional algebra $\Q[x_1,\ldots,x_4]/I$. Hence there exists a relation (up to scaling) of the form
\[u_i = x_{3}^2 + r \mathrm{~or~} u_i = x_{3}^3 + r.\]
For degree reasons, the structure of $\delta$ implies that $r \in \ker(\delta^2)$ in the first case or $r \in \ker(\delta^3)$ in the second. Applying $\delta^2$ or $\delta^3$, we see that
\[2 x_1^2 = \delta^2(u_i) \mathrm{~or~} 6 x_1^3 = \delta^3(u_i).\]
Since $\delta$ preserves the ideal $(u_1, \ldots,u_4)$, the right-hand side of each expression lies in this ideal. In the first case, we may perform a change of basis on the degree four $u_i$ to obtain $u_1 = 2x_1^2$. This gives rise to a splitting by the subalgebra generated by $x_1$, a contradiction to the assumptions of the proposition.
Similarly, the second case gives rise to a contradiction if it is possible to change basis so that some $u_j = 6x_1^3$. Therefore we may assume that
\[6x_1^3 = \sum l_j u_j\]
where the $l_j$ are linear polynomials in the degree two generators and the $u_j$ are degree four relations. Now if $u_1$ is the only degree four relation, then $u_1$ is a multiple of $x_1^2$, which is again a contradiction. Hence $|u_1| = |u_2| = 4$, so we have that
\[|u_2| < 6 = |x_1| + |x_3|.\]
By the Degree Inequality, it follows that $x_1$ and $x_2$ generate a subalgebra of $H^*$ that induces a splitting. This is a contradiction, so the proof is complete.
\end{proof}
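The displayed identities $2x_1^2 = \delta^2(u_i)$ and $6x_1^3 = \delta^3(u_i)$ reduce, via the Leibniz rule, to $\delta^2(x_3^2) = 2x_1^2$ and $\delta^3(x_3^3) = 6x_1^3$. As a sanity check (ours, using SymPy, and no part of the argument), a derivation with $\delta(x_1) = 0$ and $\delta(x_3) = x_1$ acts on $\Q[x_1,x_3]$ as $x_1\,\partial/\partial x_3$:

```python
import sympy as sp

x1, x3 = sp.symbols('x1 x3')

# A derivation with delta(x1) = 0 and delta(x3) = x1, extended by the
# Leibniz rule, acts on Q[x1, x3] as x1 * d/dx3.
def delta(p):
    return sp.expand(x1 * sp.diff(p, x3))

assert delta(delta(x3**2)) == 2 * x1**2         # delta^2(x_3^2) = 2 x_1^2
assert delta(delta(delta(x3**3))) == 6 * x1**3  # delta^3(x_3^3) = 6 x_1^3
```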
We now deal with the exceptional case in the following proposition, whose proof requires a different strategy.
\begin{proposition}\label{pro:10part2}
Let $H^* \cong \Q[x_1,\ldots,x_k]/(u_1,\ldots,u_k)$ be a positively elliptic algebra that does not split. If the degree type is
\[(2,2,2,4,6; 4,4,6,10,12),\]
then $H^*$ does not admit a non-zero derivation of negative degree.
\end{proposition}
\begin{proof}
As in the proof of the previous proposition, we may assume that
\[\delta(x_4) = x_1 \mathrm{~and~} \delta(x_5) = p(x_2,x_3).\]
Consider the ideal $I = (x_1,x_2,x_3, u_5)$. For degree reasons, $u_j \in I$ for all $j \neq 4$. Since $H^*$ is finite-dimensional, it follows that $u_4 \not\in I$. After scaling $u_4$, if necessary, we have
\[u_4 = x_4 x_5 + r_4\]
with $r_4 \in I$.
Note that $r_4$ has degree ten. For degree reasons, it is a polynomial in $\Q^{\geq 3}[x_1,\ldots,x_5]$. Note that $\delta$ preserves this subspace.
Since, in addition, $x_4 \delta(x_5) = x_4 p(x_2,x_3)$ is in this subspace, we have that
\[\delta(u_4) \in x_1 x_5 + \Q^{\geq 3}[x_1,\ldots,x_5].\]
On the other hand, $\delta(u_4)$ is a degree eight element of the ideal $(u_1,\ldots,u_5)$. For degree reasons, this implies that
\[\delta(u_4) = \sum_{i=1}^3 h_i u_i\]
with $h_i \in \Q^{\geq 1}[x_1,\ldots,x_5]$. But each $u_j$ is an element of $\Q^{\geq 2}[x_1,\ldots,x_5]$, so $\delta(u_j)$ is as well. Hence this equation shows that $\delta(u_4) \in \Q^{\geq 3}[x_1,\ldots,x_5]$, a contradiction.
\end{proof}
The propositions above imply the following:
\begin{corollary}\label{cor:10}
Let $H^* \cong \Q[x_1,\ldots,x_k]/(u_1,\ldots,u_k)$ be a pure model for a positively elliptic algebra that does not split and that satisfies
\[|x_{k-1}| + |x_k| = 10.\]
If $H^*$ admits a non-zero derivation with negative degree, then $\fd H^* > 20$.
\end{corollary}
\bigskip\section{Proof of the main theorem}\label{sec:Proof}\medskip
In this section, we finish the proof of the Halperin Conjecture for formal dimensions at most $20$. We are given a positively elliptic algebra
\[H^* \cong \Q[x_1,\ldots,x_k]/(u_1,\ldots,u_k)\]
as in Theorem \ref{thm:PureModel}, and we assume the existence of a non-zero derivation $\delta$ on $H^*$ of negative degree. We seek a contradiction.
If the formal dimension $\fd H^* = 2$, then Theorem \ref{thm:PureModel} implies that $k = 1$. Hence the Land in Zero Lemma implies that $\delta = 0$, a contradiction. We may therefore inductively assume that $2 < \fd H^* \leq 20$ and that the Halperin Conjecture holds for formal dimensions less than $\fd H^*$.
In particular, we may assume by Markl's theorem that $H^*$ does not split. Hence the Degree Inequality applies to the degrees of the relations $u_i$. Corollaries \ref{cor:8} and \ref{cor:10} also apply, and together they imply that
\[|x_{k-1}| + |x_k| \geq 12.\]
Putting these facts together, we can finish the proof in all but two exceptional cases.
\begin{proposition}\label{pro:12orLarger-part1} Let $H^* = \Q[x_1,\ldots,x_k]/(u_1,\ldots,u_k)$ be a positively elliptic algebra with no non-trivial subalgebra and $\fd H^* \leq 20$. If there exists a non-zero derivation of negative degree and $|x_{k-1}| + |x_k| \geq 12$, then the degree type is
\[(2,4,6,6; 6,8,12,12) \mathrm{~or~} (2,2,6,6; 4,8,12,12).\]
\end{proposition}
\begin{proof}
First suppose that $k = 3$. By the Land in Zero Lemma, $\delta x_1 = 0$. By the $k-1$ Lemma, $\delta x_2$ and $\delta x_3$ are non-zero and, moreover, linearly independent since otherwise we could change basis to ensure $\delta x_2 = 0$. In particular, it follows for degree reasons that $|x_1| < |x_2| < |x_3|$. On one extreme, these degrees could be $2$, $4$, and $6$, but this contradicts the assumption that $|x_{k-1}| + |x_k| \geq 12$. We may assume therefore that $|x_3| \geq 8$. We put this into the formula for the formal dimension in Theorem \ref{thm:PureModel} and we estimate the summands using the Degree Inequality (Lemma \ref{lem:degreeInequality}):
\[\fd H^* = \sum_{i=1}^3 \of{|u_i| - |x_i|} \geq |x_3| + \max(|x_1| + |x_3| - |x_2|, |x_2|) + |x_3|.\]
Since the maximum is at least the average, this implies $\fd H^* > 20$, a contradiction.
Next suppose that $k \geq 4$ and $|x_k| \geq 8$. Using the Degree Inequality to estimate $|u_i|$ for $i \leq k-1$ and the estimate $|u_k| \geq 2|x_k|$, we obtain
\[\fd H^* \geq \sum_{i=1}^{k-1} \of{|x_1| + |x_{i+1}| - |x_i|} + |x_k| = (k-2)|x_1| + 2|x_k| \geq 20.\]
Hence equality holds everywhere, and we have $k = 4$ and $|x_k| = 8$. Now we repeat this estimate, except we use the bound $|u_i| \geq 2|x_i|$ for the $i = k-1$ term, to get
\[20 = \fd H^* \geq (k-3)|x_1| + 2|x_{k-1}| + |x_k| \geq 2 + 2|x_{k-1}| + 8.\]
Hence $|x_{k-1}| \leq 4$. By the $k-1$ Lemma, we may assume that $|x_{k-1}| = 4$ and that equality holds in the above estimates. In particular, $|u_1| \leq |u_2| = 6$ and $|u_3| = 10$, so we have a contradiction to the $SAC(4,8)$ condition.
Finally, suppose that $k \geq 4$ and $|x_k| \leq 6$. By the assumption in the proposition, we have $|x_{k-1}| = |x_k| = 6$. Estimating as in the previous case, we see that
\[\fd H^* \geq (k-3)|x_1| + 2|x_{k-1}| + |x_k| \geq 2 + 3(6) = 20.\]
Hence equality holds, and the degree type is of the form
\[(2,A_2,6,6; 2+A_2, 8, 12, 12)\]
where $A_2 \in \{2,4\}$. These two possibilities correspond to the two degree types in the conclusion of the proposition, so the proof is complete.
\end{proof}
To finish the proof, we only need to consider the two exceptional degree types in Proposition \ref{pro:12orLarger-part1}. Note that, for the first time, the possibility that $\delta$ has degree $-4$ is non-trivial. Indeed, in all previous cases it is immediate that if $\delta$ has degree $-4$, $-6$, \ldots, then $\delta$ vanishes on at least $k-1$ generators for degree reasons, and hence $\delta = 0$ by the $k-1$ Lemma.
The first case is simpler and uses ideas similar to previous proofs.
\begin{proposition}\label{pro:12orLarger-part2} If $H^* = \Q[x_1,\ldots,x_k]/(u_1,\ldots,u_k)$ is a positively elliptic algebra with no non-trivial subalgebra and degree type
\[(2,4,6,6; 6,8,12,12),\]
then $H^*$ does not admit a non-zero derivation with negative degree.
\end{proposition}
\begin{proof}
Suppose first that $\delta(x_2) = x_1$, after possibly rescaling. Applying the Top-to-Bottom Lemma, we see that $\delta(x_i) = \lambda_i x_1^2$ for some $\lambda_i \in \Q$ for $i \in \{3,4\}$. Replacing $x_i$ by $x_i - \lambda_i x_1 x_2$, we find that $x_1,x_3,x_4 \in \ker(\delta)$ in contradiction to the $k-1$ Lemma. Hence we may assume that
\[\delta(x_1) = 0 \mathrm{~and~} \delta(x_2) = 0.\]
Furthermore, we may assume that $\delta(x_3)$ and $\delta(x_4)$ are linearly independent elements in degree four. In particular, $\delta$ cannot have degree $-4$ (or smaller), so $\delta$ has degree $-2$. After choosing a suitable basis, we may assume that
\[\delta(x_3) = x_1^2 \mathrm{~and~} \delta(x_4) = x_2.\]
Write
\[u_j = p_j(x_3,x_4) + r_j\]
for $j \in \{3,4\}$, where $r_j \in (x_1,x_2)$. Note that $\delta^2(r_j) = 0$ for degree reasons, so
\[2 p_j(x_1^2, x_2) = \delta^2(u_j) \in (u_1,\ldots,u_4).\]
This is an equation in degree eight, so we have
\[2 p_j(x_1^2, x_2) = a x_1 u_1 + b u_2\]
for some $a,b \in \Q$. Note that $b = 0$, since otherwise $u_1$ and $u_2$ are polynomials in $x_1$ and $x_2$, which contradicts the assumption that $H^*$ does not have a non-trivial subalgebra.
Since $b = 0$, we find that $x_1$ divides $p_j(x_1^2, x_2)$ for $j \in \{3,4\}$. This implies that $x_1^2$ divides $p_j(x_1^2, x_2)$, and hence both $p_3(x_3,x_4)$ and $p_4(x_3,x_4)$ are divisible by $x_3$. It follows that
\[u_1,\ldots,u_4 \in (x_1, x_2, x_3),\]
which is a contradiction to the finite-dimensionality of $H^*$.
\end{proof}
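The reduction at the start of this proof, absorbing $\delta(x_i) = \lambda_i x_1^2$ by replacing $x_i$ with $x_i - \lambda_i x_1 x_2$, can also be checked symbolically; in this sketch (ours, using SymPy), $\delta$ is a derivation with $\delta(x_1) = 0$, $\delta(x_2) = x_1$, and $\delta(x_3) = \lambda x_1^2$:

```python
import sympy as sp

x1, x2, x3, lam = sp.symbols('x1 x2 x3 lambda')

# delta(x1) = 0, delta(x2) = x1, delta(x3) = lam * x1**2; by the Leibniz
# rule, delta acts as x1 * d/dx2 + lam * x1**2 * d/dx3.
def delta(p):
    return sp.expand(x1 * sp.diff(p, x2) + lam * x1**2 * sp.diff(p, x3))

# replacing x3 by x3 - lam * x1 * x2 moves the generator into ker(delta)
assert delta(x3 - lam * x1 * x2) == 0
assert delta(x1) == 0 and delta(x2) == x1
```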
Finally, we prove the last remaining case. We wish to highlight that the proof in this case differs from all of the previous arguments. Specifically, we do not choose our basis in order to simplify the action of $\delta$, as this does not appear to help us. Rather we choose our basis in order to simplify the form of the relations.
\begin{proposition}\label{pro:12orLarger-part3} If $H^* = \Q[x_1,\ldots,x_k]/(u_1,\ldots,u_k)$ is a positively elliptic algebra with no non-trivial subalgebra and degree type
\[(2,2,6,6; 4,8,12,12),\]
then $H^*$ does not admit a non-zero derivation with negative degree.
\end{proposition}
\begin{proof}
Suppose $\delta$ is a non-zero derivation of negative degree, and note that $\delta$ has degree $-2$ or $-4$ by the Land in Zero Lemma. For $j \in \{3,4\}$, write
\[u_j = p_j(x_3, x_4) + r_j\]
where $r_j \in (x_1, x_2)$. Since $r_j$ has degree $12$ and hence at most one $x_3$ or $x_4$ in each of its monomials, $r_j \in \ker(\delta^2)$.
Note that $p_3$ and $p_4$ are coprime polynomials. Indeed, if $g(x_3,x_4)$ were a non-constant common factor, then all relations $u_j$ are in the ideal $I = (x_1, x_2, g(x_3,x_4))$ and $H^*$ projects onto the infinite-dimensional space $\Q[x_1,\ldots,x_4]/I$, a contradiction.
Since $p_3(x_3,x_4)$ and $p_4(x_3,x_4)$ are coprime, quadratic polynomials, we can choose bases of $\mathrm{span}\{x_3, x_4\}$ and $\mathrm{span}\{u_3,u_4\}$ such that one of the following cases occurs:
\[(p_3,p_4) = (x_3^2, x_4^2) \mathrm{~or~} (p_3,p_4) = (x_3^2 - \lambda x_4^2, x_3x_4) \mathrm{~with~}\lambda\neq 0.\]
Indeed, up to relabeling and scaling, we may assume that $p_3$ contains an $x_3^2$ term. Completing the square and replacing $x_3$ by something of the form $x_3 + \mu x_4$, we find that $p_3 = x_3^2 - \lambda x_4^2$ for some $\lambda \in \Q$. Subtracting a multiple of $u_3$ from $u_4$ corresponds to subtracting the same multiple of $p_3$ from $p_4$. We can do this so that $p_4 = \mu x_3 x_4 + \nu x_4^2$ for some $\mu, \nu \in\Q$. If $\mu = 0$, the claim follows by rescaling $u_4$ and subtracting a multiple of $u_4$ from $u_3$. If $\mu \neq 0$, we may replace $x_3$ by $\mu x_3 + \nu x_4$. This results in $p_4 = x_3 x_4$. Subtracting now a multiple of $u_4$ from $u_3$ and scaling $u_3$ once more, we find that we are in the second case of the claim. Note here that $\lambda \neq 0$ because $p_3$ and $p_4$ are coprime.
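The coprimality dichotomy behind these normal forms is easy to confirm with a computer algebra system; a small check (ours, using SymPy) for representative values of $\lambda$:

```python
import sympy as sp

x3, x4 = sp.symbols('x3 x4')

# first normal form: (x3^2, x4^2) is a coprime pair
assert sp.gcd(x3**2, x4**2) == 1
# second normal form: (x3^2 - lam*x4^2, x3*x4) is coprime iff lam != 0
assert sp.gcd(x3**2 - 2 * x4**2, x3 * x4) == 1  # lam = 2: coprime
assert sp.gcd(x3**2, x3 * x4) == x3             # lam = 0: common factor x3
```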
Returning to the expressions for $u_j$, we apply $\delta^2$ to get
\[2 p_j(\delta x_3, \delta x_4) = \delta^2(u_j) \in (u_1,\ldots,u_4).\]
Suppose first that $\delta$ has degree $-4$, so that $\delta(x_j) \in \mathrm{span}\{x_1,x_2\}$ for $j \in \{3,4\}$. Without loss of generality, we may assume $\delta x_3 = x_1$ and $\delta x_4 = x_2$. Since $p_3$ and $p_4$ are coprime polynomials, so are
\[\delta u_3 = 2 p_3(x_1, x_2) \hspace{.2in} \mathrm{and} \hspace{.2in} \delta u_4 = 2 p_4(x_1, x_2).\]
But $\delta u_3, \delta u_4 \in \mathrm{span}\{u_1\}$, so we have a contradiction.
Suppose instead that $\delta$ has degree $-2$. Since the expressions for $p_j(\delta x_3, \delta x_4)$ are in degree eight, we have equations of the form
\[2 p_j(\delta x_3, \delta x_4) = l_j(x_1,x_2) u_1 + k_j u_2\]
for $j \in \{3,4\}$, where the $l_j$ are linear polynomials and the $k_j \in \Q$.
If some $k_j \neq 0$, we may replace $u_2$ by $l_j(x_1, x_2) u_1 + k_j u_2$ and conclude that $u_1$ and $u_2$ are polynomials in $x_1$ and $x_2$. This implies the existence of a non-trivial subalgebra, a contradiction.
We may assume that $k_3 = k_4 = 0$, so that $u_1$ divides $p_3(\delta x_3, \delta x_4)$ and $p_4(\delta x_3, \delta x_4)$. Using the simple formulas for $p_3$ and $p_4$, we see that one of the following happens:
\begin{enumerate}
\item $u_1$ divides both $(\delta x_3)^2$ and $(\delta x_4)^2$.
\item $u_1$ divides both $(\delta x_3)^2 - \lambda (\delta x_4)^2$ and $(\delta x_3)(\delta x_4)$ for some $\lambda \in \Q \setminus\{0\}$.
\end{enumerate}
In either case, if $u_1$ is irreducible, it follows that $u_1$ divides both $\delta x_3$ and $\delta x_4$. Since all of these elements have degree four, we find that $\delta x_3$ and $\delta x_4$ are linearly dependent. After changing basis once more, we find a contradiction to the $k-1$ Lemma.
Next if $u_1 = l_1 l_2$ is a product of coprime irreducibles, then each irreducible factor divides both $\delta x_3$ and $\delta x_4$ by a similar argument. Moreover, since $l_1$ and $l_2$ are coprime, it follows that $u_1$ divides both of these elements, and we again have a contradiction.
Finally, if neither of these cases occurs, then $u_1 = \lambda l^2$ for some $\lambda \in \Q$ and some linear polynomial $l = l(x_1,x_2)$. But now we can replace $x_1$ or $x_2$ by $l(x_1,x_2)$ and derive the existence of a non-trivial subalgebra of $H^*$, so we again have a contradiction.
\end{proof}
| {
"timestamp": "2021-04-12T02:04:21",
"yymm": "2104",
"arxiv_id": "2104.04086",
"language": "en",
"url": "https://arxiv.org/abs/2104.04086",
"abstract": "A 1976 conjecture of Halperin on positively elliptic spaces was recently confirmed in formal dimensions up to 16. In this article, we shorten the proof and extend the result up to formal dimension 20. We work with Meier's algebraic characterization of the conjecture, so the proof is elementary in that it involves only polynomial algebras, ideals, and derivations.",
"subjects": "Algebraic Topology (math.AT); Commutative Algebra (math.AC)",
"title": "Halperin's conjecture in formal dimensions up to 20",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9869795083542037,
"lm_q2_score": 0.8175744695262775,
"lm_q1q2_score": 0.8069292479759943
} |
\makeatletter
\def\section{\@startsection {section}{1}{\z@}{-3.5ex plus -1ex minus
-.2ex}{2.3ex plus .2ex}{\normalsize\bf}}
\makeatother
\makeatletter
\def\subsection{\@startsection {subsection}{1}{\z@}{-3.5ex plus -1ex minus
-.2ex}{2.3ex plus .2ex}{\normalsize\bf}}
\makeatother
\begin{document}
\begin{center}
{\large\bf The Gift Exchange Problem } \\
\vspace*{+.2in}
David Applegate and N. J. A. Sloane${}^{(a)}$, \\
AT\&T Shannon Labs, \\
180 Park Ave., Florham Park, \\
NJ 07932-0971, USA.
\vspace*{+.2in}
${}^{(a)}$ Corresponding author.
\vspace*{+.2in}
Email: david@research.att.com, njas@research.att.com. \\
\vspace*{+.2in}
July 1, 2009
\vspace*{+.2in}
{\bf Abstract}
\end{center}
The aim of this paper is to solve the ``gift exchange'' problem:
you are one of $n$ players, and there are
$n$ wrapped gifts on display; when your turn comes,
you can either choose any of the remaining wrapped gifts,
or you can ``steal'' a gift from someone who has already unwrapped it, subject
to the restriction that no gift can be stolen more than a total
of $\sigma$ times. The problem is to determine the number of ways
that the game can be played out, for given values of $\sigma$ and $n$.
Several recurrences and explicit formulas are given for these
numbers, although some open questions remain.
\vspace{0.8\baselineskip}
Keywords: gift swapping, Bessel polynomials, restricted Stirling numbers,
hypergeometric functions, Wilf-Zeilberger summation
\vspace{0.8\baselineskip}
AMS 2000 Classification: Primary 05A, 11B37
\section{The problem}\label{Sec1}
The following game is sometimes played at parties.
A number $\sigma$ (typically $1$ or $2$) is fixed in advance.
Each of the $n$ guests brings a wrapped gift, the gifts are
placed on a table (this is the ``pool'' of gifts),
and slips of paper containing the numbers $1$ to $n$
are distributed randomly among the guests.
The host calls out the numbers $1$ through $n$ in order.
When the number you have been given is called, you can either choose one
of the wrapped (and so unknown) gifts remaining
in the pool, or you can take (or ``steal'') a gift that some earlier
person has unwrapped, subject to the restriction that no gift can be
``stolen'' more than a total of $\sigma$ times.
If you choose a gift from the pool, you unwrap it and show it
to everyone.
If a person's gift is stolen from them, they immediately get
another turn, and can either take a gift from the pool, or can
steal someone else's gift, subject always to the limit
of $\sigma$ thefts per gift.
The game ends when someone takes the last ($n$th) gift.
The problem is to determine the number of possible ways
the game can be played out,
for given values of $\sigma$ and $n$.
For example, if $\sigma=1$ and $n=3$, with guests $A, B, C$ and
gifts numbered $1$, $2$, $3$, there are 42 different scenarios,
as follows. We write $XN$ to indicate that
guest $X$ took gift $N$ -- it is always clear from the context
whether the gift was stolen or taken from the pool.
Also, provided we multiply the final answer by 6, we
can assume that the gifts are taken from the pool in
the order $1, 2, 3$.
There are then seven possibilities:
\begin{align}\label{Eq1}
{} & A1, B2, C3 \nonumber \\
{} & A1, B2, C1, A3 \nonumber \\
{} & A1, B2, C1, A2, B3 \nonumber \\
{} & A1, B2, C2, B3 \nonumber \\
{} & A1, B2, C2, B1, A3 \nonumber \\
{} & A1, B1, A2, C3 \nonumber \\
{} & A1, B1, A2, C2, A3 \nonumber \\
\end{align}
and so the final answer is $6 \cdot 7 = 42$.
If we continue to ignore the factor of $n!$ due to
the order in which the gifts are selected from the pool,
the number of scenarios for the case $\sigma=1$ and $n=1,2,3,4,5$ are
$1,2,7,37,266$, respectively.
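These counts are easy to confirm by exhaustive search over game states. The following sketch (the function name and state encoding are ours) fixes the order in which gifts leave the pool:

```python
def count_scenarios(n, sigma):
    """Count play-outs with n+1 gifts, at most sigma steals per gift,
    with the pool order fixed (gifts leave the pool as 1, 2, ..., n+1).

    State: taken[g] = number of times gift g+1 has been chosen so far
    (one initial take plus at most sigma steals)."""
    def rec(taken):
        m = len(taken)                 # gifts 1..m are unwrapped
        if m == n:                     # next pool gift is n+1: taking it ends the game
            total = 1
        else:
            total = rec(taken + (1,))  # take the next gift from the pool
        for g in range(m):             # or steal an unwrapped gift
            if taken[g] < sigma + 1:
                total += rec(taken[:g] + (taken[g] + 1,) + taken[g + 1:])
        return total
    return rec(())

# matches 1, 2, 7, 37, 266 and the worked example 6 * 7 = 42
assert [count_scenarios(n, 1) for n in range(5)] == [1, 2, 7, 37, 266]
assert 6 * count_scenarios(2, 1) == 42
```

Memoizing on the state tuple would speed this up, but is unnecessary for the small parameters shown.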
We noticed that these five terms matched the beginning of entry
A001515 in \cite{OEIS}, although indexed differently.
The $n$th term of A001515 is defined as $y_n(1)$,
where $y_n(x)$ is a Bessel polynomial
(\cite{Gross78}, \cite{KF49}, \cite{RCI}),
and for $n=0,1,2,3,4$ the values are $1,2,7,37,266$, respectively.
Although there was no mention of gift-swapping in that entry,
one of the comments there provided
enough of a hint to lead us to a complete solution
of the general problem.
\subsection*{Comments on the rules}
(i) If $\sigma = 1$ then once a gift has been stolen it
can never be stolen again.
\noindent
(ii) If $\sigma = 2$, and someone steals your gift,
then if you wish you may immediately steal it back (provided
you got it honestly!), and then
it cannot be stolen again. Retrieving a gift in this way,
although permitted by a strict interpretation of the rules,
may be prohibited at real parties.
\noindent
(iii) A variation of the game allows the last player
to take {\em any} gift that has been unwrapped,
regardless of how many times it has already been stolen,
as an alternative to taking the last gift from the pool.
This case only requires minor modifications of
the analysis, and we will not consider it here.
\noindent
(iv) We also ignore the complications caused by the fact that
you brought (and wrapped) one of the gifts yourself, and so are
presumably unlikely to choose it when your number is called.
\section{Connection with partitions of labeled sets}\label{Sec2}
Let ${\mathnormal H}_{\sigma}(n)$ be the number of scenarios with $n$ gifts and a limit
of $\sigma$ steals, for $\sigma \ge 0, n \ge 1$.
Then ${\mathnormal H}_{\sigma}(n)$ is a multiple of $n!$, and we write
${\mathnormal H}_{\sigma}(n) = n! {\mathnormal G}_{\sigma}(n-1)$, where in ${\mathnormal G}_{\sigma}(n-1)$ we
assume that the gifts are taken from the pool in the order $1,2,\ldots,n$.
We write $n-1$ rather than $n$ as the argument of ${\mathnormal G}_{\sigma}$ because the
$n$th gift plays a special (and less important) role. This also
simplifies the statement of Theorem \ref{th1}.
In other words, ${\mathnormal G}_{\sigma}(n)$ is the number of scenarios when there
are $n+1$ gifts, with a limit of $\sigma$ steals per gift, and
the gifts are taken from the pool in the order $1,2,\ldots,n+1$.
As mentioned above, the sequence of values of ${\mathnormal G}_1(n)$ appeared to
coincide with entry A001515 in \cite{OEIS}.
One of the interpretations of that sequence (contributed by
Robert A. Proctor on April 18, 2005) involved partitions of a
labeled set into blocks, and this was enough of a hint to lead us to
our first theorem.
We recall that the Stirling number of the second kind,
$S_2(i,j)$, is the number of partitions of the labeled set $\{1,\ldots,i\}$ into $j$ blocks
(\cite{Comtet}, \cite{GKP}),
while for $h \ge 1$ the $h$-restricted Stirling number of the second kind,
$S_2^{(h)}(i,j)$, is the number of partitions of $\{1,\ldots,i\}$ into $j$ blocks
of size at most $h$ (\cite{ChSm1}-\cite{ChSm3}).
\begin{theorem}\label{th1}
For $\sigma \ge 0$ and $n \ge 0$,
\beql{Eq2}
{\mathnormal G}_{\sigma}(n) = \sum_{k=n}^{(\sigma+1)n} S_2^{(\sigma+1)}(k,n)\,.
\end{equation}
\end{theorem}
\noindent{\bf Proof.}
Equation \eqn{Eq2} is an assertion about ${\mathnormal G}_{\sigma}(n)$, so we are
now discussing scenarios where there are $n+1$ gifts.
For $\sigma = 0$, ${\mathnormal H}_0(n+1) = (n+1)!$, so ${\mathnormal G}_0(n) = 1$,
in agreement with $S_2^{(1)}(n,n) = 1$.
We may assume therefore that $\sigma \ge 1$.
Let an ``action'' refer to a player choosing a gift $\gamma$, either by
taking it from the pool or by stealing it from another player. Since
we are now assuming that the gifts are taken from the pool in order,
$\gamma$ determines both the player and whether the action was to take
a gift from the pool or to steal it from another player. So the
scenario is fully specified
simply by the sequence of $\gamma$ values,
recording which gift is chosen at each action.
For example, the scenarios in \eqn{Eq1} are represented
by the sequences
$123$, $1213$, $12123$, $1223$, $12213$, $1123$, $11223$.
Since the game ends as soon as the $(n+1)$st gift is selected, the number
of actions is at least $n+1$ and at most $(\sigma+1)n+1$.
The sequence of $\gamma$ values is therefore a sequence of integers
from $\{1,\ldots,n+1\}$ which begins with $1$, ends with $n+1$,
where each number $i \in \{1,\ldots,n\}$ appears
at least once and at most $\sigma+1$ times
and $n+1$ appears just once,
and in which the first $i$ can appear only after $i-1$ has appeared.
Conversely, any sequence with these
properties determines a unique scenario.
Let $k$ denote the length of the sequence with the last entry
(the unique $n+1$) deleted.
We map this shortened sequence to a partition of $[1,\ldots,k]$
into $n$ blocks: the first block records the positions of the $1$'s,
the second block records the positions of the $2$'s, $\ldots$,
and the $n$th block records the positions of the $n$'s.
Continuing the example,
for the seven sequences above,
the values of $k$ and the corresponding partitions are as
shown in Table 1.
\begin{table}[htb]
\label{Tab1}
\caption{Values of $k$ and partitions corresponding to
the scenarios in \eqn{Eq1}. }
$$
\begin{array}{cc}
k & \mbox{partition} \\
2 & 1, 2 \\
3 & 13, 2 \\
4 & 13, 24 \\
3 & 1, 23 \\
4 & 14, 23 \\
3 & 12, 3 \\
4 & 12, 34 \\
\end{array}
$$
\end{table}
The number of such partitions is precisely $S_2^{(\sigma+1)}(k,n)$.
Since the mapping from sequences to partitions is completely reversible,
the desired result follows.~~~${\vrule height .9ex width .8ex depth -.1ex }$
\vspace*{+.2in}
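The bijection used in the proof is straightforward to implement; this sketch (ours) recovers the partitions of Table 1 from the seven example sequences:

```python
def to_partition(seq):
    """The map from the proof: block i records the (1-based) positions of
    gift i in the play-out sequence, after deleting the final entry n+1."""
    n = max(seq)
    return [tuple(pos + 1 for pos, g in enumerate(seq) if g == i)
            for i in range(1, n + 1)]

# the seven example sequences with the final gift 3 deleted
seqs = [(1, 2), (1, 2, 1), (1, 2, 1, 2), (1, 2, 2),
        (1, 2, 2, 1), (1, 1, 2), (1, 1, 2, 2)]
assert to_partition((1, 2, 1)) == [(1, 3), (2,)]         # Table 1: k = 3, "13, 2"
assert to_partition((1, 1, 2, 2)) == [(1, 2), (3, 4)]    # Table 1: k = 4, "12, 34"
assert len({tuple(to_partition(s)) for s in seqs}) == 7  # the map is injective here
```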
\noindent{\bf Remark.} The sums $B(i) := \sum_{j} S_2(i,j)$
are the classical Bell numbers. The sums $\sum_{j} S_2^{(h)}(i,j)$
also have a long history \cite{MMW}, \cite{MW55}.
However, the sums
$\sum_{i} S_2^{(h)}(i,j)$
mentioned in \eqn{Eq2} do not seem to
have been studied before.
Note that the limits in \eqn{Eq2}
are the natural limits on the summation index $k$, and
could be omitted.
To simplify the notation, and to put the most important variable first, let
\beql{EqE}
E_{\sigma}(n,k) := S_2^{(\sigma+1)}(k,n)\,,
\end{equation}
for $\sigma \ge 0$, $n \ge 0$, $k \ge 0$.
In words, $E_{\sigma}(n,k)$ is the number of partitions of $\{1, \ldots, k\}$ into exactly
$n$ blocks of sizes in the range $[1, \ldots, \sigma+1]$.
For $n \ge 0$, $E_{\sigma}(n,k)$ is nonzero only
for $n \le k \le (\sigma+1)n$.
To avoid having to worry about negative arguments,
we define $E_{\sigma}(n,k)$ to be zero if either $n$ or $k$ is negative.
Then
\beql{Eq4}
{\mathnormal G}_{\sigma}(n) = \sum_{k=n}^{(\sigma+1)n} E_{\sigma}(n,k)\,.
\end{equation}
Stirling numbers of the second kind satisfy many different recurrences
and generating functions (\cite[Chap.~V]{Comtet}),
and to a lesser extent this is also true for $E_{\sigma}(n,k)$.
We begin with three general properties.
\begin{theorem}\label{th2}
(i) Suppose $\sigma \ge 1$. Then $E_{\sigma}(n,k) = 0$ for $k<n$ or $k>(\sigma+1)n$,
and otherwise, for $n \le k \le (\sigma + 1)n$,
\beql{EqAAB}
E_{\sigma}(n,k)
= \sum_{i=0}^{\sigma} \binom{k-1}{i} E_{\sigma}(n-1,k-1-i) \,.
\end{equation}
(ii) For $\sigma \ge 0$, $n \ge 0$, $k \ge 0$,
\beql{EqAAA}
E_{\sigma}(n,k)
= \sum_{(a_1,\ldots,a_{\sigma+1})}
\frac{k!}{a_1! a_2! \ldots a_{\sigma+1}! \, 1!^{a_1} 2!^{a_2} \cdots (\sigma+1)!^{a_{\sigma+1}} } \,,
\end{equation}
where the sum is over all $(\sigma+1)$-tuples of
nonnegative integers $(a_1,\ldots,a_{\sigma+1})$ satisfying
\begin{eqnarray}\label{EqAAD}
a_1 + a_2 + a_3 + \cdots + a_{\sigma+1} & = & n \,, \nonumber \\
a_1 + 2a_2 + 3a_3 + \cdots + (\sigma+1)a_{\sigma+1} & = & k \,.
\end{eqnarray}
(iii)
The numbers $E_{\sigma}(n,k)$ have the exponential generating function
\beql{EqAAC}
\sum_{n=0}^{\infty} \sum_{k=n}^{(\sigma+1)n} E_{\sigma}(n,k)x^n \frac{y^k}{k!}
=
\exp \left[ x\left(y+\frac{y^2}{2!}+\cdots + \frac{y^{\sigma+1}}{(\sigma+1)!}\right) \right] \,.
\end{equation}
\end{theorem}
\noindent{\bf Proof.}
(i) This is an analog of the ``vertical'' recurrence
for the Stirling numbers
(\cite[Eq.~$\lbrack$3c$\rbrack$,~p.~209]{Comtet}).
The idea of the proof is to take a partition of $[1,\ldots,k]$,
remove the block containing $k$, and renumber the
remaining parts.
(ii) Here $a_i$ is the number of blocks of size $i$ in the partition.
This follows by standard counting
arguments (cf. \cite[Th.~B,~p.~205]{Comtet}).
(iii) This is an analog of the ``vertical'' generating function
for the Stirling numbers (\cite[Eq.~$\lbrack$2b$\rbrack$,~p.~206]{Comtet}),
and follows directly from (i).~~~${\vrule height .9ex width .8ex depth -.1ex }$
\vspace*{+.2in}
The recurrence in Theorem \ref{th2}(i) makes it easy to
compute as many values of $E_{\sigma}(n,k)$ as one wishes.
Tables 3 through 7 show the initial values
of $E_{1}(n,k)$ through $E_{5}(n,k)$, and
Table 8 gives the
initial values of ${\mathnormal G}_{\sigma}(n)$ for $\sigma =0$ through $8$.
\section{The case $\sigma = 1$}\label{Sec3}
In the case when a gift can be stolen at most once, from Theorem \ref{th2}
we have the recurrence
\beql{EqE1a}
E_1(n,k) = E_1(n-1,k-1) + (k-1)E_1(n-1,k-2) \,,
\end{equation}
for $n \le k \le 2n$, with $E_1(n,k)=0$ for $k<n$ and $k>2n$;
the explicit formula
\beql{EqE1b}
E_1(n,k) = \frac{k!}{(2n-k)!~(k-n)!~2^{k-n}} \,,
\end{equation}
for $n \le k \le 2n$; and the generating function
\beql{EqE1c}
\sum_{n=0}^{\infty} \sum_{k=n}^{2n}~E_{1}(n,k)~x^n \frac{y^k}{k!}
=
e^{x(y+y^2/2)} \,.
\end{equation}
It follows from \eqn{Eq4} that
\begin{eqnarray}\label{EqG1a}
{\mathnormal G}_1(n) & = & \sum_{k=n}^{2n} \frac{k!}{(2n-k)!~(k-n)!~2^{k-n}} \nonumber \\
& = & \sum_{i=0}^{n} \frac{(n+i)!}{(n-i)!~i!~2^i} \,.
\end{eqnarray}
Equation \eqn{EqG1a} shows that the sequence ${\mathnormal G}_1(n)$ is indeed given by
entry A001515 in \cite{OEIS}.
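As a quick check (ours), the sum \eqn{EqG1a} reproduces these values and satisfies the recurrence \eqn{EqG1d} below:

```python
from math import factorial

def G1(n):
    """The closed sum for G_1(n); each summand (n+i)!/((n-i)! i! 2^i)
    equals binomial(n+i, 2i) * (2i-1)!!, so integer division is exact."""
    return sum(factorial(n + i) // (factorial(n - i) * factorial(i) * 2 ** i)
               for i in range(n + 1))

assert [G1(n) for n in range(5)] == [1, 2, 7, 37, 266]
# the recurrence G_1(n) = (2n-1) G_1(n-1) + G_1(n-2)
assert all(G1(n) == (2 * n - 1) * G1(n - 1) + G1(n - 2) for n in range(2, 15))
```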
That entry gives (mostly without proof) several other properties of these numbers,
taken from various sources, notably Grosswald \cite{Gross78}.
We collect some of these properties in the next theorem.
Property (iii) is especially interesting, since the following sections
will be concerned with attempts to generalize it to larger values of $\sigma$.
We recall from \cite{Gross78} that the Bessel polynomial $y_n(z)$ is given
by
\beql{EqBess1}
y_n(z) := \sum_{i=0}^{n} \frac{(n+i)!z^i}{(n-i)!~i!~2^i} \,.
\end{equation}
Also ${}_2F_{0}$ and (later) ${}_2F_{1}$ denote hypergeometric functions.
\begin{theorem}\label{th3}
(i)
\beql{EqG1b}
{\mathnormal G}_1(n) = y_n(1)\,.
\end{equation}
(ii)
\beql{EqG1c}
{\mathnormal G}_1(n) = {}_2F_{0}\left[ \begin{array}{c}
n+1,-n \\
-
\end{array}
;
\begin{array}{c}
-\frac{1}{2}
\end{array}
\right] \,.
\end{equation}
(iii)
\beql{EqG1d}
{\mathnormal G}_1(n) = (2n-1){\mathnormal G}_1(n-1) + {\mathnormal G}_1(n-2) \,,
\end{equation}
for $n \ge 2$, with ${\mathnormal G}_1(0)=1, {\mathnormal G}_1(1)=2$.
\noindent
(iv)
\beql{EqG1e}
\sum_{n=0}^{\infty} {\mathnormal G}_1(n)\frac{x^n}{n!} ~=~ \frac{ e^{1-\sqrt{1-2x}}}{\sqrt{1-2x}} \,.
\end{equation}
(v)
\beql{EqG1f}
{\mathnormal G}_1(n) ~ \sim ~ \frac{e(2n)!}{n! 2^n} \mbox{~as~} n \rightarrow \infty \,.
\end{equation}
\end{theorem}
\noindent{\bf Proof.}
(i) and (ii) are immediate consequences of \eqn{EqG1a}.
\noindent
(iii) We give three proofs of \eqn{EqG1d}.
(First proof.) Equation \eqn{EqG1d} follows from one of the recurrences for Bessel
polynomials (\cite[Eq.~(7),~p.~18]{Gross78}, \cite{KF49}).
(Second proof.) Alternatively, it is easy to verify from \eqn{EqE1b} that
\beql{EqE1d}
E_1(n,k) = (2n-1)E_1(n-1,k-2) + E_1(n-2,k-2)\,.
\end{equation}
Our conventions about negative arguments make it
unnecessary to put any restrictions on the range over which \eqn{EqE1d}
holds. By summing \eqn{EqE1d} on $k$ we obtain \eqn{EqG1d}.
(Third proof.) The third proof is combinatorial. We will show the equivalent
statement that for $n \ge 3$,
\beql{EqG1g}
{\mathnormal G}_1(n) = {\mathnormal G}_1(n-2) + {\mathnormal G}_1(n-1) + 2(n-1){\mathnormal G}_1(n-1)\,.
\end{equation}
We can build a partition counted in ${\mathnormal G}_1(n)$ in three ways.
(A) Take a partition $P$ into $n-2$ parts
and adjoin two parts of size $1$, $\{x\}$ and $\{y\}$, say, where
$x$, $y$ are elements not in $P$.
This gives ${\mathnormal G}_1(n-2)$ partitions.
(B) Take a partition $P$ into $n-1$ parts
and adjoin a part $\{x,y\}$ of size $2$.
This gives ${\mathnormal G}_1(n-1)$ partitions.
(C) Let $P$ be a partition into $n-1$ parts
and let $S$ be one of the parts.
If $S = \{u\}$ is a singleton, then
$$
P \setminus S \cup \{u,x\} \cup \{y\} \mbox{~and~}
P \setminus S \cup \{u,y\} \cup \{x\}
$$
are two partitions into $n$ parts.
If $S = \{u,v\}$ is a pair, then
$$
P \setminus S \cup \{u,x\} \cup \{v,y\} \mbox{~and~}
P \setminus S \cup \{u,y\} \cup \{v,x\}
$$
are two partitions into $n$ parts.
So in either case the pair $P$, $S$ gives rise to two
partitions into $n$ parts.
There are $n-1$ choices for $S$, so in all we obtain $2(n-1){\mathnormal G}_1(n-1)$
partitions.
The argument is clearly reversible, and so \eqn{EqG1g} and hence \eqn{EqG1d}
follow.
\noindent
(iv) Let
\begin{eqnarray}\label{EqG1h}
{\mathcal G} _1(x) & := & \sum_{n=0}^{\infty} {\mathnormal G}_1(n) \frac{x^n}{n!} \nonumber \\
& = & 1 +2x + 7\frac{x^2}{2!} + 37\frac{x^3}{3!} + 266 \frac{x^4}{4!} + \cdots \,. \nonumber
\end{eqnarray}
By multiplying \eqn{EqG1d} by $x^n/n!$ and summing on $n$ from $2$
to $\infty$ we obtain the differential equation
\beql{EqG1i}
{\mathcal G}_1''(x) = 3 {\mathcal G}_1'(x) + 2x{\mathcal G}_1''(x) + {\mathcal G}_1(x)\,.
\end{equation}
Then the right-hand side of \eqn{EqG1e} is the unique solution of \eqn{EqG1i}
which satisfies ${\mathcal G}_1(0) = 1$, ${\mathcal G}_1'(0) = 2$.
\noindent
(v) This follows from \eqn{EqG1a}, since the terms $i=n-1$
and $i=n$ dominate the sum (see also \cite[Eq.~(1),~p.~124]{Gross78}).~~~${\vrule height .9ex width .8ex depth -.1ex }$
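As a numerical illustration (ours, not from the paper), the sum \eqn{EqG1a} reproduces the initial values quoted in \eqn{EqG1h} and satisfies the three-term recurrence \eqn{EqG1d}:

```python
from math import factorial

# G_1(n) as the Bessel polynomial value y_n(1), i.e. the sum (EqG1a).
def G1(n):
    return sum(factorial(n + i) // (factorial(n - i) * factorial(i) * 2 ** i)
               for i in range(n + 1))

# Initial values 1, 2, 7, 37, 266 as in the EGF expansion (EqG1h).
assert [G1(n) for n in range(5)] == [1, 2, 7, 37, 266]

# The recurrence (EqG1d): G_1(n) = (2n-1) G_1(n-1) + G_1(n-2).
for n in range(2, 20):
    assert G1(n) == (2 * n - 1) * G1(n - 1) + G1(n - 2)
```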
\section{The case $\sigma = 2$}\label{Sec4}
In the case when a gift can be stolen at most once, the problem, as we saw in the
previous section, turned out to be related to the values of Bessel polynomials,
and the principal sequence, ${\mathnormal G}_1(n)$, had been studied before.
For $\sigma \ge 2$, we appear to be in new territory---for one thing,
the sequences ${\mathnormal G}_2(n), {\mathnormal G}_3(n), \ldots$ were not among the 140,000 existing
sequences in \cite{OEIS}.
These sequences can be computed using Theorem \ref{th2}.
From \eqn{Eq4}, \eqn{EqAAA} we have:
\beql{EqGsa}
{\mathnormal G}_{\sigma}(n) ~=~ \sum_{k=n}^{(\sigma+1)n} \sum_{(a_1,\ldots,a_{\sigma+1})}
\frac{k!}{a_1! a_2! \ldots a_{\sigma+1}! \, 1!^{a_1} 2!^{a_2} \cdots (\sigma+1)!^{a_{\sigma+1}} } \,,
\end{equation}
where the inner sum is over all $(\sigma+1)$-tuples of
nonnegative integers $(a_1,\ldots,a_{\sigma+1})$ satisfying \eqn{EqAAD}.
This may be rewritten as a sum of multinomial coefficients:
\beql{EqGsb}
{\mathnormal G}_{\sigma}(n) ~=~
\frac{1}{n!}~
\sum_{i_1=1}^{\sigma+1}
\sum_{i_2=1}^{\sigma+1}
\cdots
\sum_{i_n=1}^{\sigma+1}
\genfrac{(}{)}{0pt}{0}{i_1+i_2+\cdots+i_{n}}{i_1,~i_2,~\cdots,~i_{n}} \,,
\end{equation}
where $i_r$ is the size of the $r$th part.
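The multinomial sum \eqn{EqGsb} is easy to evaluate by brute force for small parameters; the following sketch (our addition) reproduces the values of ${\mathnormal G}_1$ and ${\mathnormal G}_2$ quoted elsewhere in the paper:

```python
from itertools import product
from math import factorial

# G_sigma(n) via (EqGsb): sum of multinomial coefficients over all n-tuples
# of block sizes i_r in {1, ..., sigma+1}, divided by n!.
def G(sigma, n):
    total = 0
    for sizes in product(range(1, sigma + 2), repeat=n):
        m = factorial(sum(sizes))
        for s in sizes:
            m //= factorial(s)   # each running division is exact (multinomial)
        total += m
    return total // factorial(n)

assert [G(1, n) for n in range(5)] == [1, 2, 7, 37, 266]
assert [G(2, n) for n in range(4)] == [1, 3, 31, 842]
```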
We naturally tried to find analogs of the various parts of Theorem \ref{th3}
that would hold for $\sigma \ge 2$.
Let us begin with the simplest result, the asymptotic behavior.
This is directly analogous to Theorem \ref{th3}(v).
\begin{theorem}\label{th4}
For fixed $\sigma \ge 1$,
\beql{EqGsf}
{\mathnormal G}_{\sigma}(n) ~ \sim ~ \frac{e((\sigma+1)n)!}{n! {(\sigma+1)!}^n} \mbox{~as~} n \rightarrow \infty \,.
\end{equation}
\end{theorem}
\noindent{\bf Sketch of proof.}
The two terms corresponding to
$ \{ k=(\sigma+1)n, a_{\sigma+1}=n$, other $a_i=0 \} $ and
$ \{ k=(\sigma+1)n-1, a_{\sigma+1}=n-1, a_{\sigma}=1$, other $a_i=0 \} $
dominate the right-hand side of \eqn{EqGsa},
and are both equal to
$((\sigma+1)n)!/(n! {(\sigma+1)!}^n)$.
Dividing the sum by this quantity gives a converging sum,
in which a subset of terms approach $1+1+1/2!+1/3!+\cdots = e$,
while the others vanish as $n \rightarrow \infty$.~~~${\vrule height .9ex width .8ex depth -.1ex }$
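For $\sigma=1$ the convergence in \eqn{EqGsf} can be observed directly. This sketch (ours) generates exact values of ${\mathnormal G}_1(n)$ from the recurrence \eqn{EqG1d} and watches the normalized ratio approach $e$ from below:

```python
from math import exp, factorial

# Exact G_1(n) from the recurrence G_1(n) = (2n-1) G_1(n-1) + G_1(n-2).
g = [1, 2]
for n in range(2, 201):
    g.append((2 * n - 1) * g[n - 1] + g[n - 2])

# Theorem th3(v): G_1(n) * n! * 2^n / (2n)! -> e as n -> infinity.
def ratio(n):
    return g[n] * factorial(n) * 2 ** n / factorial(2 * n)

assert ratio(50) < ratio(200) < exp(1)   # increases monotonically toward e
assert abs(ratio(200) - exp(1)) < 0.01
```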
Concerning Theorem \ref{th3}(i), we do not know if there is a generalization
of Bessel polynomials whose value gives \eqn{EqGsa} for $\sigma \ge 2$.
As for Theorem \ref{th3}(ii), there is a relationship with hypergeometric
functions in the case $\sigma = 2$.
From \eqn{EqAAA} we have
\begin{eqnarray}\label{EqE2a}
E_2(n,k) & = &
\sum_{c=\max\{0,k-2n\}}^{\lfloor(k-n)/2\rfloor}
\frac{k!}
{(2n-k+c)! (k-n-2c)! c! \, 2^{k-n-c} 3^c} \nonumber \\
& = &
\sum_{c=\max\{0,\eta-n\}}^{\lfloor \eta/2\rfloor}
\frac{k!}
{(n-\eta+c)! (\eta-2c)! c! \, 2^{\eta-c} 3^c} \,,
\end{eqnarray}
where $\eta=k-n$ (this is the ``excess'' of $k$ over $n$).
\begin{theorem}\label{th5}
(i) Let $\eta=k-n$.
\noindent
If $\eta \le n$ then
\beql{EqE2b}
E_2(n,k) =
\frac{(n+\eta)!}{\eta! (n-\eta)! 2^{\eta} } ~
{}_2F_{1}\left[\begin{array}{c}
-\eta/2,-\eta/2+1/2 \\
n-\eta+1
\end{array}
;
\begin{array}{c}
\frac{8}{3}
\end{array}
\right] \,.
\end{equation}
\noindent
If $\eta \ge n$ then
\beql{EqE2c}
E_2(n,k) =
\frac{(\eta+n)!}{(2n-\eta)! (\eta-n)! 2^{n} 3^{\eta-n} } ~
{}_2F_{1}\left[ \begin{array}{c}
-n+\eta/2,-n+\eta/2+1/2 \\
\eta-n+1
\end{array}
;
\begin{array}{c}
\frac{8}{3}
\end{array}
\right] \,.
\end{equation}
\noindent
(ii)
\begin{eqnarray}\label{EqG2c}
{\mathnormal G}_2(n) & = &
\sum_{ \eta = 0 }^{n-1} ~
\frac{(n+\eta)!}{\eta! (n-\eta)! 2^{\eta} } ~
{}_2F_{1}\left[ \begin{array}{c}
-\eta/2,-\eta/2+1/2 \\
n-\eta+1
\end{array}
;
\begin{array}{c}
\frac{8}{3}
\end{array}
\right] \nonumber \\
& + & \sum_{ \eta = n }^{2n} ~
\frac{(n+\eta)!}{(2n-\eta)! (\eta-n)! 2^{n} 3^{\eta-n} } ~
{}_2F_{1}\left[ \begin{array}{c}
-n+\eta/2,-n+\eta/2+1/2 \\
\eta-n+1
\end{array}
;
\begin{array}{c}
\frac{8}{3}
\end{array}
\right] \,.
\end{eqnarray}
\end{theorem}
\noindent{\bf Proof.}
(i) follows from \eqn{EqE2a} using the standard rules for converting sums of
products of factorials to hypergeometric functions (cf. \cite{And74}),
and (ii) follows from \eqn{Eq4}.~~~${\vrule height .9ex width .8ex depth -.1ex }$
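Theorem \ref{th5}(i) can be checked with exact rational arithmetic. The sketch below (our addition) implements the Gauss series, which terminates here because one numerator parameter is always a nonpositive integer, and compares \eqn{EqE2b} and \eqn{EqE2c} with the defining sum \eqn{EqE2a}:

```python
from fractions import Fraction as F
from math import factorial

# E_2(n,k) from the sum (EqE2a).
def E2(n, k):
    eta = k - n
    return sum(F(factorial(k),
                 factorial(n - eta + c) * factorial(eta - 2 * c) * factorial(c)
                 * 2 ** (eta - c) * 3 ** c)
               for c in range(max(0, eta - n), eta // 2 + 1))

# Terminating Gauss series 2F1(a, b; c; z); a or b is a nonpositive integer.
def hyp2f1(a, b, c, z):
    s, t, m = F(0), F(1), 0
    while t != 0:
        s += t
        t = t * (a + m) * (b + m) * z / ((c + m) * (m + 1))
        m += 1
    return s

half = F(1, 2)
for n in range(8):
    for k in range(n, 3 * n + 1):
        eta = k - n
        if eta <= n:   # EqE2b
            pref = F(factorial(n + eta),
                     factorial(eta) * factorial(n - eta) * 2 ** eta)
            val = pref * hyp2f1(F(-eta, 2), F(-eta, 2) + half,
                                F(n - eta + 1), F(8, 3))
        else:          # EqE2c
            pref = F(factorial(eta + n),
                     factorial(2 * n - eta) * factorial(eta - n)
                     * 2 ** n * 3 ** (eta - n))
            val = pref * hyp2f1(F(eta, 2) - n, F(eta, 2) - n + half,
                                F(eta - n + 1), F(8, 3))
        assert val == E2(n, k)
```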
We can now state the main theorem of this section, which gives
analogs of \eqn{EqE1d} and \eqn{EqG1d}.
\begin{theorem}\label{th6}
(i)
\begin{align}\label{EqE2d}
E_2(n,k) & = (9 n^2 - 9 n + 2) E_2(n-1,k-3)/2
- 5 E_2(n-1,k-1)/2 \nonumber \\
& +\, (9 n^2 - 36 n + 35) E_2(n-2,k-4)/2
+ 6(n-1) E_2(n-2,k-3)
- 3 E_2(n-2,k-2)/2 \nonumber \\
& +\, 3(2 n-5) E_2(n-3,k-4)
+ 5 E_2(n-3,k-3)/2
+ 5 E_2(n-4,k-4)/2 \, .
\end{align}
\noindent
(ii)
\begin{align}\label{EqG2d}
{\mathnormal G}_2(n) & = (9 n^2 - 9 n - 3) {\mathnormal G}_2(n-1)/2 \nonumber \\
& +\, (9 n^2 - 24 n + 20) {\mathnormal G}_2(n-2)/2 \nonumber \\
& +\, (6 n - 25/2) {\mathnormal G}_2(n - 3) + 5 {\mathnormal G}_2(n - 4)/2 \, ,
\end{align}
for $n \ge 4$, with ${\mathnormal G}_2(0)=1$, ${\mathnormal G}_2(1) = 3$,
${\mathnormal G}_2(2) = 31$, ${\mathnormal G}_2(3) = 842$.
\end{theorem}
\noindent{\bf Proof.}
(ii) Eq. \eqn{EqG2d} follows by summing \eqn{EqE2d}
on $k$, just as \eqn{EqG1d} followed from \eqn{EqE1d}.
(i) We give two proofs of \eqn{EqE2d}.
The first proof uses \eqn{EqE2b}, \eqn{EqE2c} and
Gauss's contiguity relations for hypergeometric functions
(\cite[\S2.1.2]{Erd}, \cite[\S14.7]{WW}).
There are nine $E_2(i,j)$ terms in \eqn{EqE2d},
and each of them is given by either \eqn{EqE2b} or \eqn{EqE2c},
depending on the relationship between $i$ and $j$.
This means that six separate cases must be considered,
according to whether $k \ge 2n+1$, $k=2n, 2n-1, 2n-2, 2n-3$ or
$k \le 2n-4$.
We give the details just for the first case, the other cases
being very similar.
Assuming then that $k \ge 2n+1$, \eqn{EqE2b} applies to all
nine $E_2(i,j)$ terms in \eqn{EqE2d}.
Writing $\eta=k-n$ as before, and replacing the final argument
$\frac{8}{3}$ in the hypergeometric functions by a new
variable $z$, we must show that the expression
\begin{align}\label{EqE2e}
\frac{(\eta+n)!}{(\eta-n)! (2n-\eta)! 2^{n} 3^{\eta-n} } ~ &
{}_2F_{1}\left[ \begin{array}{c}
\eta/2-n,\eta/2-n+1/2 \\
\eta-n+1
\end{array}
;
\begin{array}{c}
z
\end{array}
\right] \nonumber \\
- ~ \frac{9 n^2 - 9 n + 2}{2} ~
\frac{(\eta+n-3)!}{(\eta-n-1)! (2n-\eta)! 2^{n-1} 3^{\eta-n-1} } ~ &
{}_2F_{1}\left[\begin{array}{c}
\eta/2-n,\eta/2-n+1/2 \\
\eta-n
\end{array}
;
\begin{array}{c}
z
\end{array}
\right] \nonumber \\
+ ~ \frac{5}{2} ~
\frac{(\eta+n-1)!}{(\eta-n+1)! (2n-\eta-2)! 2^{n-1} 3^{\eta-n+1} } ~ &
{}_2F_{1}\left[\begin{array}{c}
\eta/2-n+1,\eta/2-n+3/2 \\
\eta-n+2
\end{array}
;
\begin{array}{c}
z
\end{array}
\right] \nonumber \\
- ~ \frac{9 n^2 - 36 n + 35}{2} ~
\frac{(\eta+n-4)!}{(\eta-n)! (2n-\eta-2)! 2^{n-2} 3^{\eta-n} } ~ &
{}_2F_{1}\left[\begin{array}{c}
\eta/2-n+1,\eta/2-n+3/2 \\
\eta-n+1
\end{array}
;
\begin{array}{c}
z
\end{array}
\right] \nonumber \\
- ~ 6(n-1) ~
\frac{(\eta+n-3)!}{(\eta-n+1)! (2n-\eta-3)! 2^{n-2} 3^{\eta-n+1} } ~ &
{}_2F_{1}\left[\begin{array}{c}
\eta/2-n+3/2,\eta/2-n+2 \\
\eta-n+2
\end{array}
;
\begin{array}{c}
z
\end{array}
\right] \nonumber \\
+ ~ \frac{3}{2}
\frac{(\eta+n-2)!}{(\eta-n+2)! (2n-\eta-4)! 2^{n-2} 3^{\eta-n+2} } ~ &
{}_2F_{1}\left[\begin{array}{c}
\eta/2-n+2,\eta/2-n+5/2 \\
\eta-n+3
\end{array}
;
\begin{array}{c}
z
\end{array}
\right] \nonumber \\
- ~ 3(2n-5)
\frac{(\eta+n-4)!}{(\eta-n+2)! (2n-\eta-5)! 2^{n-3} 3^{\eta-n+2} } ~ &
{}_2F_{1}\left[\begin{array}{c}
\eta/2-n+5/2,\eta/2-n+3 \\
\eta-n+3
\end{array}
;
\begin{array}{c}
z
\end{array}
\right] \nonumber \\
- ~ \frac{5}{2}
\frac{(\eta+n-3)!}{(\eta-n+3)! (2n-\eta-6)! 2^{n-3} 3^{\eta-n+3} } ~ &
{}_2F_{1}\left[\begin{array}{c}
\eta/2-n+3,\eta/2-n+7/2 \\
\eta-n+4
\end{array}
;
\begin{array}{c}
z
\end{array}
\right] \nonumber \\
- ~ \frac{5}{2}
\frac{(\eta+n-4)!}{(\eta-n+4)! (2n-\eta-8)! 2^{n-4} 3^{\eta-n+4} } ~ &
{}_2F_{1}\left[\begin{array}{c}
\eta/2-n+4,\eta/2-n+9/2 \\
\eta-n+5
\end{array}
;
\begin{array}{c}
z
\end{array}
\right]
\end{align}
vanishes when $z = \frac{8}{3}$.
Using Gauss's contiguity relations, the nine hypergeometric
functions in \eqn{EqE2e} can all be expressed as linear combinations
of just two of them.
The computer algebra program Maple 11 simplifies\footnote{We don't actually
know how Maple obtains \eqn{EqE2f}, but the result is consistent
with the use of Gauss's relations.}
the above expression to
\begin{align}\label{EqE2f}
\frac{(\eta+n-4)! (3z-8)}
{324 (\eta-n+1)! (2n-\eta-2)! 2^n 3^{\eta-n} z^3 (z-1)^3}
& \left(
\phi_1 ~ {}_2F_{1}\left[ \begin{array}{c}
\eta/2-n+1,\eta/2-n+3/2 \\
\eta-n+2
\end{array}
;
\begin{array}{c}
z
\end{array}
\right] \right. \nonumber \\
~+~ & \left. \phi_2 ~ {}_2F_{1}\left[ \begin{array}{c}
\eta/2-n,\eta/2-n+1/2 \\
\eta-n+1
\end{array}
;
\begin{array}{c}
z
\end{array}
\right] \right) \,,
\end{align}
where $\phi_1$ and $\phi_2$ are polynomials in $z$ of degrees $6$ and $5$
respectively, with coefficients which are polynomials in $n$ and $\eta$.
Since the exact values of $\phi_1$ and $\phi_2$ are not important for the
argument, we relegate them to Tables
9 and 10
in the Appendix.
The above expression clearly vanishes
for $z = \frac{8}{3}$, which proves the desired result.
Second proof. Let
\beql{EqD2a}
D_2(n,k,c) := \frac{k!} {(2n-k+c)! (k-n-2c)! c! \, 2^{k-n-c} 3^c}
\end{equation}
denote the first summand in \eqn{EqE2a}.
We look for a recurrence of the form
\beql{EqD2b}
\sum_{r=0}^{4}
\sum_{s=0}^{4}
\sum_{t=0}^{4}
C(r,s,t) D_2(n+r,k+s,c+t) ~=~ 0 \, ,
\end{equation}
where the coefficients $C(r,s,t)$ depend on $n$ but not
on $k$ or $c$, with the property that when summed on $c$ it
collapses to the appropriately shifted version of \eqn{EqE2d},
which is:
\begin{align}\label{EqE2dd}
& E_2(n+4,k+4) - (9 n^2 + 63 n + 110) E_2(n+3,k+1)/2
+ 5 E_2(n+3,k+3)/2 \nonumber \\
& -~ (9 n^2 + 36 n + 35) E_2(n+2,k)/2
- 6(n+3) E_2(n+2,k+1)
+ 3 E_2(n+2,k+2)/2 \nonumber \\
& -~ 3(2 n+3) E_2(n+1,k)
- 5 E_2(n+1,k+1)/2
- 5 E_2(n,k)/2 ~=~ 0 \, .
\end{align}
For this we used the method of Sister Mary Celine Fasenmyer,
exactly as described in \S4.1 of \cite{AeqB}.
A Maple 11 program found that there is a solution to \eqn{EqD2b}
in which the coefficients $C(r,s,t)$ involve five free parameters,
and there is a two-parameter solution which collapses to \eqn{EqE2dd}
when summed on $c$.
The simplest solution (obtained from Maple's solution
by setting both free parameters to zero) is the following.
All the $C(r,s,t)$ are zero except for the following 19 terms:
\begin{align}
C( 0, 0, 1) &= -8, & C( 2, 1, 1) &= -9, \nonumber \\
C( 0, 0, 2) &= 7, & C( 2, 1, 2) &= 3, \nonumber \\
C( 0, 0, 3) &= -3/2, & C( 2, 2, 1) &= 6, \nonumber \\
C( 1, 0, 1) &= -18, & C( 2, 2, 2) &= -6, \nonumber \\
C( 1, 0, 2) &= 15, & C( 2, 2, 3) &= 3/2, \nonumber \\
C( 1, 0, 3) &= -3, & C( 3, 1, 0) &= -9, \nonumber \\
C( 1, 1, 1) &= -4, & C( 3, 3, 1) &= 5, \nonumber \\
C( 1, 1, 2) &= 3/2, & C( 3, 3, 2) &= -5/2, \nonumber \\
C( 2, 0, 0) &= -9, & C( 4, 4, 1) &= 1, \nonumber \\
C( 2, 0, 1) &= 9. & \nonumber
\end{align}
It is easy to verify that this collapses to \eqn{EqE2dd} when
summed on $c$.~~~${\vrule height .9ex width .8ex depth -.1ex }$
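Theorem \ref{th6} can also be confirmed numerically. The sketch below (our addition) computes $E_2(n,k)$ from \eqn{EqE2a}, checks \eqn{EqE2d} for $4 \le n \le 8$ and all relevant $k$, and then checks \eqn{EqG2d} for the row sums:

```python
from fractions import Fraction as F
from math import factorial

# E_2(n,k) from the sum (EqE2a); the empty sum gives 0 outside n <= k <= 3n.
def E2(n, k):
    if n < 0:
        return F(0)
    eta = k - n
    return sum(F(factorial(k),
                 factorial(n - eta + c) * factorial(eta - 2 * c) * factorial(c)
                 * 2 ** (eta - c) * 3 ** c)
               for c in range(max(0, eta - n), eta // 2 + 1))

# The recurrence (EqE2d), checked here for 4 <= n <= 8.
for n in range(4, 9):
    for k in range(3 * n + 5):
        rhs = (F(9 * n * n - 9 * n + 2, 2) * E2(n - 1, k - 3)
               - F(5, 2) * E2(n - 1, k - 1)
               + F(9 * n * n - 36 * n + 35, 2) * E2(n - 2, k - 4)
               + 6 * (n - 1) * E2(n - 2, k - 3)
               - F(3, 2) * E2(n - 2, k - 2)
               + 3 * (2 * n - 5) * E2(n - 3, k - 4)
               + F(5, 2) * E2(n - 3, k - 3)
               + F(5, 2) * E2(n - 4, k - 4))
        assert E2(n, k) == rhs

# The recurrence (EqG2d) for the row sums G_2(n) = 1, 3, 31, 842, ...
G2 = [sum(E2(n, k) for k in range(n, 3 * n + 1)) for n in range(9)]
assert G2[:4] == [1, 3, 31, 842]
for n in range(4, 9):
    assert G2[n] == (F(9 * n * n - 9 * n - 3, 2) * G2[n - 1]
                     + F(9 * n * n - 24 * n + 20, 2) * G2[n - 2]
                     + (6 * n - F(25, 2)) * G2[n - 3]
                     + F(5, 2) * G2[n - 4])
```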
\vspace*{+.2in}
Is there a combinatorial proof for \eqn{EqG2d}? We do not know.
We discovered \eqn{EqG2d} by experiment, using Theorem
\ref{th4} to suggest the leading term. (Note that
if $r(n)$ denotes the right-hand side of \eqn{EqGsf},
then $r(n)/r(n-1) = (9 n^2 - 9 n + 2)/2$.)
We also discovered a second recurrence, which
is independent of \eqn{EqG2d}:
\begin{align}\label{EqG2e}
(n-2) {\mathnormal G}_2(n) & = n (9 n^2-27 n+17) {\mathnormal G}_2(n-1)/2 \nonumber \\
& + (6 n^2-15 n+13/2) {\mathnormal G}_2(n-2) \nonumber \\
& + (5 n-5) {\mathnormal G}_2(n-3)/2 \, ,
\end{align}
for $n \ge 3$, with ${\mathnormal G}_2(0)=1$, ${\mathnormal G}_2(1) = 3$,
${\mathnormal G}_2(2) = 31$.
In view of \eqn{EqG2c}, this is equivalent to a complicated
identity involving hypergeometric functions.
We did not find a proof, but Doron Zeilberger
has kindly informed us that he was able to derive \eqn{EqG2e}
by applying the method of ``creative telescoping''
(\cite[Chap.~6]{AeqB}, \cite{Zeil90b}, \cite{Zeil91})
to \eqn{EqD2a}
and using a modified version of his Maple program ``MultiZeilberger''.
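The recurrence \eqn{EqG2e} is also easy to confirm numerically; the following sketch (our addition) clears denominators and checks it against brute-force values of ${\mathnormal G}_2(n)$:

```python
from itertools import product
from math import factorial

# Brute-force G_2(n): sum of multinomial coefficients over n-tuples of
# block sizes in {1, 2, 3}, divided by n! (cf. EqGsb).
def G2(n):
    total = 0
    for sizes in product((1, 2, 3), repeat=n):
        m = factorial(sum(sizes))
        for s in sizes:
            m //= factorial(s)
        total += m
    return total // factorial(n)

g = [G2(n) for n in range(9)]
# EqG2e with denominators cleared: 2(n-2) G_2(n) =
#   n(9n^2-27n+17) G_2(n-1) + (12n^2-30n+13) G_2(n-2) + 5(n-1) G_2(n-3).
for n in range(3, 9):
    assert 2 * (n - 2) * g[n] == (n * (9 * n * n - 27 * n + 17) * g[n - 1]
                                  + (12 * n * n - 30 * n + 13) * g[n - 2]
                                  + 5 * (n - 1) * g[n - 3])
```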
\section{The case $\sigma \ge 3$}\label{Sec5}
For $\sigma \ge 3$ we have not found any connections between
${\mathnormal G}_{\sigma}(n)$ and generalized Bessel polynomials or hypergeometric functions,
and we do not have proofs for the recurrences that we have discovered.
However, we do know that recurrences for ${\mathnormal G}_{\sigma}(n)$ and $E_{\sigma}(n,k)$ always exist.
This follows from Wilf and Zeilberger's Fundamental Theorem
for Multivariate Sums (\cite[Theorem~4.5.1]{AeqB}, \cite{WZ92a}).
\begin{theorem}\label{th7}
\noindent
(i) For $\sigma \ge 1$,
there is a number $\delta \ge 0$ such that
$E_{\sigma}(n,k)$ satisfies a recurrence of the form
\beql{EqEsWZ}
\sum_{i=0}^{\delta} \sum_{j=0}^{\delta} C_{i,j}^{(E)}(n) E_{\sigma}(n-i,k-j) =0
\mbox{~for~all~} n \,,
\end{equation}
where the coefficients $C_{i,j}^{(E)}(n)$ are polynomials
in $n$ with coefficients depending on $i$ and $j$.
\noindent
(ii) For $\sigma \ge 1$,
there is a number $\delta \ge 0$ such that
${\mathnormal G}_{\sigma}(n)$ satisfies a recurrence of the form
\beql{EqGsWZ}
\sum_{i=0}^{\delta} \sum_{j=0}^{\delta} C_{i}^{(G)}(n) {\mathnormal G}_{\sigma}(n-i)=0
\mbox{~for~all~} n \,,
\end{equation}
where the coefficients $C_{i}^{(G)}(n)$ are polynomials
in $n$ with coefficients depending on $i$.
\end{theorem}
\noindent{\bf Proof.}
(ii) As usual, Eq. \eqn{EqGsWZ} follows by summing \eqn{EqEsWZ} on $k$.
(i) We will use the case $\sigma = 3$ to illustrate the proof,
the general case being similar. We know from \eqn{EqAAA} that
\beql{EqAAA3}
E_{3}(n,k)
= \sum_{a,b,c,d}
\frac{k!}{a! b! c! d! \, 2^b 6^c 24^d } \,,
\end{equation}
where the sum is over all $4$-tuples of
nonnegative integers $(a,b,c,d)$ satisfying
\begin{eqnarray}
a + b + c + d & = & n \,, \nonumber \\
a + 2b + 3c + 4d & = & k \,. \nonumber
\end{eqnarray}
In other words,
\beql{EqAAA4}
\frac{E_{3}(n,k)}{2^n}
= \sum_{c,d}
\frac{k!}{(2n-k+c+2d)! (k-n-2c-3d)! c! d! \, 2^{k-c} 3^{c+d} } \,,
\end{equation}
where now the sum is over all values of $c$ and $d$ for
which the summand is defined.
This summand is a ``holonomic proper-hypergeometric term'',
in the sense of \cite{WZ92a}, and it follows from
the Fundamental Theorem in that paper that
$E_{3}(n,k)/2^n$ and hence $E_{3}(n,k)$
satisfies a recurrence of the desired form.
Similarly, in the general case, we write the summand in
$E_{\sigma}(n,k)$ as a function of $n, k, a_3, \ldots, a_{\sigma+1}$,
again obtaining a holonomic proper-hypergeometric term.~~~${\vrule height .9ex width .8ex depth -.1ex }$
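The rewriting of \eqn{EqAAA3} as \eqn{EqAAA4} can be spot-checked numerically (our addition, not part of the proof):

```python
from fractions import Fraction as F
from math import factorial

# E_3(n,k) from the four-index sum (EqAAA3), with a+b+c+d = n, a+2b+3c+4d = k.
def E3(n, k):
    total = 0
    for a in range(n + 1):
        for b in range(n + 1 - a):
            for c in range(n + 1 - a - b):
                d = n - a - b - c
                if a + 2 * b + 3 * c + 4 * d == k:
                    total += F(factorial(k),
                               factorial(a) * factorial(b) * factorial(c)
                               * factorial(d) * 2 ** b * 6 ** c * 24 ** d)
    return total

# The two-index form (EqAAA4), multiplied back by 2^n.
def E3_cd(n, k):
    total = 0
    for c in range(n + 1):
        for d in range(n + 1 - c):
            A = 2 * n - k + c + 2 * d      # = a
            B = k - n - 2 * c - 3 * d      # = b
            if A >= 0 and B >= 0:
                total += F(factorial(k),
                           factorial(A) * factorial(B) * factorial(c)
                           * factorial(d) * 2 ** (k - c) * 3 ** (c + d))
    return 2 ** n * total

for n in range(6):
    for k in range(n, 4 * n + 1):
        assert E3(n, k) == E3_cd(n, k)
```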
We conjecture, but do not have a proof,
that a stronger result holds, namely that
recurrences always exist in which the leading terms $C_{0,0}^{(E)}$
and $C_0^{(G)}$ are both $1$, as in \eqn{EqG1d}, \eqn{EqE1d},
\eqn{EqE2d}, \eqn{EqG2d}, \eqn{EqG3e}, \eqn{EqG4e} and
Table 11.
(The recurrence guaranteed by Theorem \ref{th7} may well look more like
\eqn{EqG2e}, with a nontrivial coefficient on the leading term.)
For $\sigma = 3, 4$ and $5$, we have found recurrences for
$E_{\sigma}(n,k)$ and ${\mathnormal G}_{\sigma}(n)$ with leading coefficient $1$,
although we do not have proofs that they are correct.
The following are our conjectured recurrences for ${\mathnormal G}_3(n)$
and ${\mathnormal G}_4(n)$:
\begin{eqnarray}\label{EqG3e}
{\mathnormal G}_3(n) & = & (32 n^3/3 - 16 n^2 + 10 n/3 - 49/6) {\mathnormal G}_3(n-1) \nonumber \\
& + & (48 n^3 - 236 n^2 + 1157 n/3 - 650/3) {\mathnormal G}_3(n-2) \nonumber \\
& + & (80 n^3 - 382 n^2 + 641 n - 511) {\mathnormal G}_3(n-3)/3 \nonumber \\
& + & (64 n^3/3 - 218 n^2 + 2696 n/3 - 7915/6) {\mathnormal G}_3(n-4) \nonumber \\
& + & (56 n^2 - 490 n + 6853/6) {\mathnormal G}_3(n-5) \nonumber \\
& + & (56 n - 1703/6) {\mathnormal G}_3(n-6) \nonumber \\
& + & 58 {\mathnormal G}_3(n-7)/3 \,,
\end{eqnarray}
\begin{eqnarray}\label{EqG4e}
{\mathnormal G}_4(n) & = & (625\,{n}^{4}-1250\,{n}^{3}+625\,{n}^{2}-300\,n-543) {\mathnormal G}_4(n-1)/24 \nonumber \\
& + & (27500\,{n}^{4}-184000\,{n}^{3}+447500\,{n}^{2}-473075\,n+180003) {\mathnormal G}_4(n-2) /72 \nonumber \\
& + & (336875\,{n}^{4}-2546500\,{n}^{3}+7679675\,{n}^{2}-12016800\,n+8048577) {\mathnormal G}_4(n-3)/864 \nonumber \\
& + & (4833125\,{n}^{4}-77581625\,{n}^{3}+476892700\,{n}^{2}-1304291160\,n+1325759504) {\mathnormal G}_4(n-4)/2592 \nonumber \\
& + & (1700625\,{n}^{4}+28316750\,{n}^{3}-605973450\,{n}^{2}+3123850885\,n-5033477363) {\mathnormal G}_4(n-5)/7776 \nonumber \\
& + & (2670000\,{n}^{4}-64380500\,{n}^{3}+704577200\,{n}^{2}-3610058445\,n+6818722190) {\mathnormal G}_4(n-6)/7776 \nonumber \\
& + & (2002500\,{n}^{4}-51976000\,{n}^{3}+517392050\,{n}^{2}-2252744530\,n+3561765885) {\mathnormal G}_4(n-7)/7776 \nonumber \\
& + & (9078000\,{n}^{3}-209915400\,{n}^{2}+1640828980\,n-4301927039) {\mathnormal G}_4(n-8)/7776 \nonumber \\
& + & (5393400\,{n}^{2}-91413680\,n+390747263) {\mathnormal G}_4(n-9)/2592 \nonumber \\
& + & (1593990\,n-14522219) {\mathnormal G}_4(n-10)/972 \nonumber \\
& + & 310343 {\mathnormal G}_4(n-11)/648 \,.
\end{eqnarray}
The recurrence for ${\mathnormal G}_5(n)$ is similar but more complicated,
and we do not state it here. The recurrence
for $E_{3}(n,k)$ is given in the Appendix (see Table 11).
We also omit the recurrences for $E_{4}(n,k)$ and $E_{5}(n,k)$,
which are even more complicated.
Inspection of these recurrences for
$\sigma \le 5$ has led us to
some conjectures about their general structure.
First, if $\delta$ denotes the ``depth'' of the recurrence,
as in \eqn{EqEsWZ}, \eqn{EqGsWZ}, then the initial values of $\delta$
for both ${\mathnormal G}_{\sigma}(n)$ and $E_{\sigma}(n,k)$
appear to be as shown in Table 2; that is,
it appears that both of these recurrences have depth
$\delta = \binom{\sigma+1}{2}+1$ (sequence A000124 of \cite{OEIS}).
\begin{table}[htb]
\label{Tab2}
\caption{Depth $\delta$ of recurrences for ${\mathnormal G}_{\sigma}(n)$
and $E_{\sigma}(n,k)$.}
$$
\begin{array}{c|rrrrrr}
\sigma & 0 & 1 & 2 & 3 & 4 & 5 \\
\hline
\delta & 1 & 2 & 4 & 7 & 11 & 16
\end{array}
$$
\end{table}
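The values in Table 2 agree with the formula $\binom{\sigma+1}{2}+1$, a one-line check (ours):

```python
from math import comb

# Table 2: depth delta of the recurrences for sigma = 0, 1, ..., 5.
depths = [1, 2, 4, 7, 11, 16]
assert depths == [comb(s + 1, 2) + 1 for s in range(6)]
```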
Second, we make the following conjectures\footnote{There are similar
conjectures about the putative recurrence for ${\mathnormal G}_{\sigma}(n)$.} about
the coefficients in the putative recurrence for $E_{\sigma}(n,k)$.
We write this recurrence as
\beql{EqEsWZb}
\sum_{i=0}^{\delta} \sum_{j=0}^{\delta} C_{i,j}^{(E)}(n) E_{\sigma}(n-i,k-j) = 0\,,
\end{equation}
where $\delta = \binom{\sigma+1}{2}+1$ and $C_{0,0}^{(E)}(n) = 1$.
Then we believe that
$C_{i,j}^{(E)}(n) = 0$ if $j > \binom{\sigma+1}{2}+1$, or $j<i$, or
$(i < \sigma$ and $j > \binom{\sigma+1}{2}+1 - ((\sigma+1-i)^2 - \sigma -i-1)/2)$.
Furthermore, the degree of $C_{i,j}^{(E)}(n)$ as a polynomial in $n$
is $\le \min \{\sigma, j-i\}$.
\section{Open questions}\label{Sec6}
We collect here some of the questions that we have mentioned.
(i) The case $\sigma=1$ corresponds to values of Bessel polynomials; is
there a notion of generalized Bessel polynomial that could be applied
for larger values of $\sigma$?
(ii) The case $\sigma=2$ can be described using hypergeometric functions;
is there a notion of generalized hypergeometric function that could be applied
for larger values of $\sigma$?
(iii) Is there a combinatorial proof of \eqn{EqG2d}?
(iv) Is the conjecture following Theorem \ref{th7} concerning
the existence of recurrences with leading coefficient $1$ true?
(v) Find proofs that the recurrences \eqn{EqG3e} and \eqn{EqG4e}
are correct.
(vi) Establish the conjectures about the general form of the recurrences for
${\mathnormal G}_{\sigma}(n)$ and $E_{\sigma}(n,k)$
that are mentioned at the end of \S\ref{Sec5}
(this includes question (iv) as a special case).
\section{Acknowledgment}
We thank Doron Zeilberger for finding a proof
of the recurrence \eqn{EqG2e}.
% Two generalizations of the Butterfly Theorem (https://arxiv.org/abs/2012.08365)
\section{Introduction}
We recall the Butterfly Theorem, stated for a chord of a circle; see \cite{1,2,3,4,4a,4b}. This is an interesting and important theorem of plane Euclidean geometry, and it has many known proofs; see \cite{1,2,3,4,4b}. The first author of this article previously contributed a new proof of this theorem in \cite{5a}.
\begin{theorem}[Butterfly Theorem]Let $M$ be the midpoint of a chord $AB$ of a circle $\omega$, through $M$ two other chords $CD$ and $EF$ of $\omega$ are drawn. If $C$ and $F$ are on opposite sides of $AB$, and $CF$, $DE$ intersect $AB$ at $G$ and $H$ respectively, then $M$ is also the midpoint of $GH$.
\end{theorem}
\begin{figure}[htbp]
\begin{center}\scalebox{0.7}{\includegraphics{Figure7729a.pdf}}\end{center}
\label{fig00}
\caption{Butterfly Theorem for chord of circle}
\end{figure}
If $O$ is the center of $\omega$, the condition that $M$ is the midpoint of $AB$ can be replaced by the condition that $AB$ is perpendicular to $OM$. Based on this property, we obtain the following version of the Butterfly Theorem.
\begin{theorem}[Butterfly Theorem for cyclic quadrilateral]\label{thm0}Let $ABCD$ be a quadrilateral inscribed in circle $\omega$. Let $O$ be the center of $\omega$. Diagonals $AC$ and $BD$ meet at $P$. Perpendicular line from $P$ to $OP$ meets the side line $AB$ and $CD$ at $Q$ and $R$, respectively. Then, $P$ is the midpoint of $QR$.
\end{theorem}
\begin{figure}[htbp]
\begin{center}\scalebox{0.7}{\includegraphics{Figure7729.pdf}}\end{center}\label{fig0}
\caption{Butterfly Theorem for cyclic quadrilateral}
\end{figure}
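Theorem \ref{thm0} is easy to test numerically; the following Python sketch (our addition, not part of the paper) checks the midpoint property for a few cyclic quadrilaterals inscribed in the unit circle centered at the origin:

```python
import math

def intersect(p1, d1, p2, d2):
    # Intersection of the lines p1 + t*d1 and p2 + s*d2.
    det = d1[0] * (-d2[1]) + d2[0] * d1[1]
    t = ((p2[0] - p1[0]) * (-d2[1]) + d2[0] * (p2[1] - p1[1])) / det
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

for angles in [(0.3, 1.1, 2.9, 4.4), (0.1, 2.0, 3.5, 5.0), (0.7, 1.9, 3.9, 5.9)]:
    A, B, C, D = [(math.cos(t), math.sin(t)) for t in angles]
    # P = intersection of the diagonals AC and BD; O is the origin.
    P = intersect(A, (C[0] - A[0], C[1] - A[1]), B, (D[0] - B[0], D[1] - B[1]))
    perp = (-P[1], P[0])             # direction perpendicular to OP
    Q = intersect(P, perp, A, (B[0] - A[0], B[1] - A[1]))
    R = intersect(P, perp, C, (D[0] - C[0], D[1] - C[1]))
    # P should be the midpoint of QR.
    assert math.hypot(Q[0] + R[0] - 2 * P[0], Q[1] + R[1] - 2 * P[1]) < 1e-8
```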
The Butterfly Theorem has many extensions and generalizations; see \cite{4b,4c,6,7,8,9}. In this paper, we generalize Theorem \ref{thm0} by replacing the cyclic quadrilateral with an arbitrary quadrilateral; in this way we obtain two generalizations of Theorem \ref{thm0}. When the quadrilateral is cyclic, we recover Theorem \ref{thm0}. For the names and properties of the quadrangle, quadrilateral, and quadrigon, we refer to \cite{4d,4e}.
\begin{theorem}[The first generalization of Butterfly Theorem]\label{thm1}Let $ABCD$ be an arbitrary quadrilateral. Diagonals $AC$ and $BD$ meet at $P$. Let $O_a$, $O_b$, $O_c$, and $O_d$ be the circumcenters of the triangles $BCD$, $CDA$, $DAB$, and $ABC$, respectively. Let $M$ and $N$ be the midpoints of segments $O_aO_c$ and $O_bO_d$, respectively. The perpendicular line from $P$ to $MN$ meets lines $AB$ and $CD$ at $Q$ and $R$, respectively. Then, $P$ is the midpoint of $QR$.
\end{theorem}
\begin{figure}[htbp]
\begin{center}\scalebox{0.7}{\includegraphics{Figure6916a.pdf}}\end{center}
\label{fig1}
\caption{The first generalization of Butterfly Theorem}
\end{figure}
\begin{theorem}[The second generalization of Butterfly Theorem]\label{thm2}Let $ABCD$ be an arbitrary quadrilateral. Diagonals $AC$ and $BD$ meet at $P$. Perpendicular bisectors of the two segments $AC$ and $BD$ meet at $X$. Perpendicular bisectors of the two segments $AB$ and $CD$ meet at $Y$. Perpendicular bisectors of the two segments $AD$ and $BC$ meet at $Z$. Construct a parallelogram $XYWZ$. The perpendicular line to $PW$ through $P$ meets the sides $AD$ and $BC$ at $Q$ and $R$, respectively. Then, $P$ is the midpoint of $QR$.
\end{theorem}
\begin{figure}[htbp]
\begin{center}\scalebox{0.7}{\includegraphics{Figure7422a.pdf}}\end{center}
\label{fig2}
\caption{The second generalization of Butterfly Theorem}
\end{figure}
We give two proofs for each of Theorem \ref{thm1} and Theorem \ref{thm2}. One of them is an elegant proof using projective geometry. For the remaining proofs, we use analytic geometry in the Cartesian coordinate system; once an appropriate coordinate system is chosen, both theorems can be proved by concise computations. The selection of a suitable coordinate system is itself an interesting illustration of the power of the Cartesian method in proving complex geometric theorems. The idea of Ren\'e Descartes in coordinate geometry is truly remarkable; see \cite{0,0a}.
\section{Proofs of Theorems}
For the first proof of Theorem \ref{thm1}, we shall use the following lemmas.
\begin{lemma}[AMM Problem 12147 \cite{4}]\label{lem1}$ABCD$ is an arbitrary quadrilateral. Perpendicular bisectors of $AB$, $CD$ meet at $X$ and the perpendicular bisectors of $BC$, $DA$ meet at $Y$. Then $XY$ is perpendicular to the Newton line of $ABCD$.
\end{lemma}
\begin{lemma}\label{lem2}Let $ABCD$ and $PQRS$ be two quadrilaterals in the plane such that $PQ\perp AB$, $QR \perp BC$, $RS \perp CD$, $SP \perp DA$, $PR \perp BD$, and $SQ\perp AC$. Let $J$ be the intersection point of lines $PQ$ and $RS$. Let $K$ be the intersection point of lines $QR$ and $SP$. Then, $JK$ is perpendicular to the Newton line of $ABCD$.
\end{lemma}
\begin{proof}[The first proof of Theorem \ref{thm1}]Let $F \equiv BC \cap DA$ and $G \equiv CD \cap AB$. Since $AB \perp O_cO_d$, $BC \perp O_dO_a$, $CD \perp O_aO_b$, $DA \perp O_bO_c$, $BD \perp O_aO_c$, and $AC\perp O_bO_d$, applying Lemma \ref{lem2} to the quadrilateral $O_aO_bO_cO_d$, we get that $GF$ is perpendicular to its Newton line $MN$, so that $QR \parallel FG$. Since $F(P,A;B,G)=F(P,R;Q,G)=-1$ and $QR \parallel FG$, it follows that $P$ is the midpoint of $QR$ (by the properties of the harmonic pencil in \cite{3}). This completes the proof.
\end{proof}
We shall use the Cartesian coordinate system for the second proof of Theorem \ref{thm1}.
\begin{figure}[htbp]
\begin{center}\scalebox{0.7}{\includegraphics{Figure7727.pdf}}\end{center}
\label{fig6}
\caption{The second proof of Theorem \ref{thm1} using Cartesian coordinate system}
\end{figure}
\begin{proof}[The second proof of Theorem \ref{thm1}]Choosing a Cartesian coordinate system such that $P(0,0)$, $A(a,0)$, $B(b,kb)$, $C(c,0)$, and $D(d,kd)$, it is easily seen that $P$ is the intersection of the two diagonals $AC$ and $BD$. We now compute the coordinates of some points.
Circumcenter $O_a$ of triangle $BCD$ is intersection of perpendicular bisector of $BC$, $BD$
\begin{equation}\label{eq1}
O_a = \left(\frac{b d k^{2} + b d + c^{2}}{2 c}, \frac{b c k^{2} + b c - b d k^{2} - b d - c^{2} + c d k^{2} + c d}{2 c k} \right).
\end{equation}
Circumcenter $O_b$ of triangle $CDA$ is intersection of perpendicular bisector of $CD$, $CA$
\begin{equation}\label{eq2}
O_b = \left(\frac{a+c}{2}, \frac{a c - a d - c d + d^{2} k^{2} + d^{2}}{2 d k} \right).
\end{equation}
Circumcenter $O_c$ of triangle $DAB$ is intersection of perpendicular bisector of $DA$, $DB$
\begin{equation}\label{eq3}
O_c = \left(\frac{a^{2} + b d k^{2} + b d}{2 a}, \frac{-a^{2} + a b k^{2} + a b + a d k^{2} + a d - b d k^{2} - b d}{2 a k} \right).
\end{equation}
Circumcenter $O_d$ of triangle $ABC$ is intersection of perpendicular bisector of $AB$, $AC$
\begin{equation}\label{eq4}
O_d = \left(\frac{a+c}{2}, \frac{-a b + a c + b^{2} k^{2} + b^{2} - b c}{2 b k} \right).
\end{equation}
From \eqref{eq1}, \eqref{eq2}, \eqref{eq3}, and \eqref{eq4}, midpoints $M$ and $N$ of $O_aO_c$ and $O_bO_d$, respectively, have the coordinates
\begin{equation}\label{eq5}M = \left(\frac{a^{2} c + a b d k^{2} + a b d + a c^{2} + b c d k^{2} + b c d}{4 a c}, \frac{\splitfrac{-a^{2} c + 2 a b c k^{2} + 2 a b c - a b d k^{2} - a b d}{ - a c^{2} + 2 a c d k^{2} + 2 a c d - b c d k^{2} - b c d}}{4 a c k} \right),
\end{equation}
\begin{equation}\label{eq6}N = \left(\frac{a + c}{2}, \frac{a b c - 2 a b d + a c d + b^{2} d k^{2} + b^{2} d - 2 b c d + b d^{2} k^{2} + b d^{2}}{4 b d k} \right).
\end{equation}
From \eqref{eq5} and \eqref{eq6}, the perpendicular from $P(0,0)$ to line $MN$ has equation
\begin{equation}\label{eq7}y = \frac{-a b d k - b c d k}{a b c - a b d + a c d - b c d} x.
\end{equation}
Lines $AB$ and $CD$ have the equations
\begin{equation}\label{eq8}
(AB):\ y = a b \frac{k}{a - b} - b k \frac{x}{a - b}.
\end{equation}
\begin{equation}\label{eq9}
(CD):\ y = c d \frac{k}{c - d} - d k \frac{x}{c - d}.
\end{equation}
From \eqref{eq7}, \eqref{eq8}, and \eqref{eq9}, the intersections $Q$ and $R$ of the perpendicular from $P$ to line $MN$ with the lines $AB$ and $CD$, respectively, have coordinates
\begin{equation}\label{eq10}
Q = \left(\frac{-a b c + a b d - a c d + b c d}{a d - b c}, \frac{a b d k + b c d k}{a d - b c} \right),\end{equation}
\begin{equation}\label{eq11}
R = \left(\frac{a b c - a b d + a c d - b c d}{a d - b c}, \frac{-a b d k - b c d k}{a d - b c} \right).\end{equation}
Finally, from \eqref{eq10} and \eqref{eq11}, we easily check that $P$ is the midpoint of $QR$. This completes the second proof.
\end{proof}
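The coordinate computations above are easy to replay in a computer algebra system. The following SymPy sketch (an illustrative check, not part of the original proof) intersects the perpendicular \eqref{eq7} with line $AB$ \eqref{eq8}, compares the result with the claimed $Q$ of \eqref{eq10}, and confirms that $P=(0,0)$ is the midpoint of $QR$, since \eqref{eq11} is the negative of \eqref{eq10}.

```python
import sympy as sp

a, b, c, d, k, x = sp.symbols('a b c d k x', nonzero=True)

# Perpendicular from P=(0,0) to MN, equation (eq7):
perp = (-a*b*d*k - b*c*d*k)/(a*b*c - a*b*d + a*c*d - b*c*d)*x
# Line AB through A(a,0) and B(b,kb), equation (eq8):
lineAB = a*b*k/(a - b) - b*k*x/(a - b)

# Intersect the two lines and compare with the claimed Q of (eq10).
xQ = sp.solve(sp.Eq(perp, lineAB), x)[0]
Q = (sp.simplify(xQ), sp.simplify(perp.subs(x, xQ)))
Q_claim = ((-a*b*c + a*b*d - a*c*d + b*c*d)/(a*d - b*c),
           (a*b*d*k + b*c*d*k)/(a*d - b*c))
R_claim = (-Q_claim[0], -Q_claim[1])  # (eq11) is the negative of (eq10)

assert all(sp.simplify(u - v) == 0 for u, v in zip(Q, Q_claim))
# The midpoint of Q and R is the origin, i.e. P:
assert all(sp.simplify((u + v)/2) == 0 for u, v in zip(Q_claim, R_claim))
```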
Using the Cartesian coordinate system again, we give a proof of Theorem \ref{thm2}.
\begin{figure}[htbp]
\begin{center}\scalebox{0.7}{\includegraphics{Figure7728.pdf}}\end{center}
\label{fig7}
\caption{Proof of Theorem \ref{thm2} using Cartesian coordinate system}
\end{figure}
\begin{proof}[The first proof of Theorem \ref{thm2}]Choose a Cartesian coordinate system such that $P(0,0)$, $A(a,0)$, $B(b,kb)$, $C(c,0)$, and $D(d,kd)$; then $P$ is the intersection of the two diagonals $AC$ and $BD$. We now compute the coordinates of some points.
Perpendicular bisectors of $AC$ and $BD$ meet at $X$ which has coordinates
\begin{equation}\label{eq12}X = \left(\frac{a+c}{2}, \frac{-a + bk^{2} + b - c + dk^{2} + d}{2k} \right).
\end{equation}
Perpendicular bisectors of $AB$ and $CD$ meet at $Y$ which has coordinates
\begin{equation}\label{eq13}Y = \left(\frac{a^{2}d - b^{2}dk^{2} - b^{2}d - bc^{2} + bd^{2}k^{2} + bd^{2}}{2ad - 2bc}, \frac{\splitfrac{a^{2}c - a^{2}d - ac^{2} + ad^{2}k^{2} + ad^{2} - b^{2}ck^{2}}{ - b^{2}c + b^{2}dk^{2} + b^{2}d + bc^{2} - bd^{2}k^{2} - bd^{2}}}{2adk - 2bck} \right).
\end{equation}
Perpendicular bisectors of $AD$ and $BC$ meet at $Z$ which has coordinates
\begin{equation}\label{eq14}Z = \left(\frac{a^{2}b + b^{2}dk^{2} + b^{2}d - bd^{2}k^{2} - bd^{2} - c^{2}d}{2ab - 2cd}, \frac{\splitfrac{-a^{2}b + a^{2}c + ab^{2}k^{2} + ab^{2} - ac^{2}- b^{2}dk^{2}}{ - b^{2}d + bd^{2}k^{2} + bd^{2} + c^{2}d - cd^{2}k^{2} - cd^{2}}}{2abk - 2cdk} \right).
\end{equation}
From \eqref{eq12}, \eqref{eq13}, and \eqref{eq14}, the vertex $W$ of parallelogram $XYWZ$ has coordinates
\begin{equation}\label{eq15}W = \left(f(a,b,c,d,k),g(a,b,c,d,k)\right)
\end{equation}
where
\begin{equation}\label{eq16}f(a,b,c,d,k)=\frac{\splitfrac{a^{3}bd - a^{2}bcd - ab^{3}dk^{2} - ab^{3}d + 2ab^{2}d^{2}k^{2} + 2ab^{2}d^{2} - abc^{2}d - abd^{3}k^{2}}{ - abd^{3} - b^{3}cdk^{2} - b^{3}cd + 2b^{2}cd^{2}k^{2} + 2b^{2}cd^{2} + bc^{3}d - bcd^{3}k^{2} - bcd^{3}}}{2a^{2}bd - 2ab^{2}c - 2acd^{2} + 2bc^{2}d},
\end{equation}
and
\begin{equation}\label{eq17}
g(a,b,c,d,k)=\frac{\begin{aligned}
&a^{3}bc - a^{3}bd + a^{3}cd - 2a^{2}bc^{2} + a^{2}bcd
- 2a^{2}c^{2}d - ab^{3}ck^{2}- ab^{3}c\\
&+ ab^{3}dk^{2} + ab^{3}d + ab^{2}cdk^{2} + ab^{2}cd - 2ab^{2}d^{2}k^{2}- 2ab^{2}d^{2} + abc^{3}\\
&+ abc^{2}d + abcd^{2}k^{2} + abcd^{2}+ abd^{3}k^{2}+ abd^{3} + ac^{3}d - acd^{3}k^{2} - acd^{3}\\
&+ b^{3}cdk^{2} + b^{3}cd- 2b^{2}cd^{2}k^{2} - 2b^{2}cd^{2} - bc^{3}d + bcd^{3}k^{2} + bcd^{3}\end{aligned}}{2a^{2}bdk - 2ab^{2}ck - 2acd^{2}k + 2bc^{2}dk}.
\end{equation}
From \eqref{eq15}, \eqref{eq16}, and \eqref{eq17}, the line through $P$ perpendicular to $PW$ has the equation
\begin{equation}\label{eq18}
y = \frac{-abdk - bcdk}{abc - abd + acd - bcd}x.
\end{equation}
Thus the intersection points of this line with the lines $AD$ and $BC$ have coordinates
\begin{equation}\label{eq19}
Q = \left(\frac{-abc + abd - acd + bcd}{ab - cd}, \frac{abdk + bcdk}{ab - cd} \right)
\end{equation}
and
\begin{equation}\label{eq20}
R = \left(\frac{abc - abd + acd - bcd}{ab - cd}, \frac{-abdk - bcdk}{ab - cd} \right).
\end{equation}
From \eqref{eq19} and \eqref{eq20}, we easily check that $P$ is the midpoint of $QR$. This completes the proof.
\end{proof}
The second proof of Theorem \ref{thm2} is based on the idea of the first proof of Theorem \ref{thm1}. We need the following lemma.
\begin{lemma}\label{lem3}Let $ABCD$ be an arbitrary quadrilateral. Diagonals $AC$ and $BD$ meet at $P$. Let $O_a$, $O_b$, $O_c$, and $O_d$ be the circumcenters of the triangles $BCD$, $CDA$, $DAB$, and $ABC$, respectively. Let $M$ and $N$ be the midpoints of diagonals $AC$ and $BD$, respectively. Then, the circles with diameters $O_aO_c$, $O_bO_d$, and the circumcircle of triangle $PMN$ are coaxial.
\end{lemma}
\begin{proof}As in the second proof of Theorem \ref{thm1}, choose a Cartesian coordinate system such that $P(0,0)$, $A(a,0)$, $B(b,kb)$, $C(c,0)$, and $D(d,kd)$; then $P(0,0)$ is the intersection of the two diagonals $AC$ and $BD$, and the midpoints are $M\left(\frac{a+c}{2},0\right)$ and $N\left(\frac{b+d}{2},k\frac{b+d}{2}\right)$. We now compute the coordinates of some points.
The circumcenter $O_a$ of triangle $BCD$ is the intersection of the perpendicular bisectors of $BC$ and $BD$:
\begin{equation}\label{eq21}
O_a = \left(\frac{b d k^{2} + b d + c^{2}}{2 c}, \frac{b c k^{2} + b c - b d k^{2} - b d - c^{2} + c d k^{2} + c d}{2 c k} \right).
\end{equation}
The circumcenter $O_b$ of triangle $CDA$ is the intersection of the perpendicular bisectors of $CD$ and $CA$:
\begin{equation}\label{eq22}
O_b = \left(\frac{a+c}{2}, \frac{a c - a d - c d + d^{2} k^{2} + d^{2}}{2 d k} \right).
\end{equation}
The circumcenter $O_c$ of triangle $DAB$ is the intersection of the perpendicular bisectors of $DA$ and $DB$:
\begin{equation}\label{eq23}
O_c = \left(\frac{a^{2} + b d k^{2} + b d}{2 a}, \frac{-a^{2} + a b k^{2} + a b + a d k^{2} + a d - b d k^{2} - b d}{2 a k} \right).
\end{equation}
The circumcenter $O_d$ of triangle $ABC$ is the intersection of the perpendicular bisectors of $AB$ and $AC$:
\begin{equation}\label{eq24}
O_d = \left(\frac{a+c}{2}, \frac{-a b + a c + b^{2} k^{2} + b^{2} - b c}{2 b k} \right).
\end{equation}
Recall that the power of a point $W$ with respect to the circle with diameter $XY$ is the dot product
$$\mathcal{P}_{W/(XY)}=\overrightarrow{WX}\cdot \overrightarrow{WY}.$$
Using \eqref{eq21}, \eqref{eq22}, \eqref{eq23}, and \eqref{eq24}, we obtain the ratios of powers
\begin{equation}\label{eq25}
\frac{\mathcal{P}_{P/(O_aO_c)}}{\mathcal{P}_{P/(O_bO_d)}}=\frac{\overrightarrow{PO_a}\cdot\overrightarrow{PO_c}}{\overrightarrow{PO_b}\cdot\overrightarrow{PO_d}}=\frac{(k^2+1)bd}{ac},
\end{equation}
\begin{equation}\label{eq26}
\frac{\mathcal{P}_{M/(O_aO_c)}}{\mathcal{P}_{M/(O_bO_d)}}=\frac{\overrightarrow{MO_a}\cdot\overrightarrow{MO_c}}{\overrightarrow{MO_b}\cdot\overrightarrow{MO_d}}=\frac{(k^2+1)bd}{ac},
\end{equation}
\begin{equation}\label{eq27}
\frac{\mathcal{P}_{N/(O_aO_c)}}{\mathcal{P}_{N/(O_bO_d)}}=\frac{\overrightarrow{NO_a}\cdot\overrightarrow{NO_c}}{\overrightarrow{NO_b}\cdot\overrightarrow{NO_d}}=\frac{(k^2+1)bd}{ac}.
\end{equation}
Therefore,
\begin{equation}\label{eq28}\frac{\overrightarrow{PO_a}\cdot\overrightarrow{PO_c}}{\overrightarrow{PO_b}\cdot\overrightarrow{PO_d}}=\frac{\overrightarrow{MO_a}\cdot\overrightarrow{MO_c}}{\overrightarrow{MO_b}\cdot\overrightarrow{MO_d}}=\frac{\overrightarrow{NO_a}\cdot\overrightarrow{NO_c}}{\overrightarrow{NO_b}\cdot\overrightarrow{NO_d}}=\frac{(k^2+1)bd}{ac}=\frac{\overrightarrow{PB}\cdot\overrightarrow{PD}}{\overrightarrow{PA}\cdot\overrightarrow{PC}}.
\end{equation}
By the coaxiality criterion for ratios of powers, the circles with diameters $O_aO_c$ and $O_bO_d$ and the circumcircle of triangle $PMN$ are coaxial. This completes the proof.
\end{proof}
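The ratio \eqref{eq25} can also be confirmed symbolically. The following SymPy sketch (an illustrative check, not part of the paper) evaluates the powers of $P=(0,0)$ with respect to the circles with diameters $O_aO_c$ and $O_bO_d$, using the circumcenters \eqref{eq21}--\eqref{eq24}.

```python
import sympy as sp

a, b, c, d, k = sp.symbols('a b c d k', nonzero=True)

def dot(u, v):
    return u[0]*v[0] + u[1]*v[1]

# Circumcenters (eq21)-(eq24), with P at the origin:
Oa = ((b*d*k**2 + b*d + c**2)/(2*c),
      (b*c*k**2 + b*c - b*d*k**2 - b*d - c**2 + c*d*k**2 + c*d)/(2*c*k))
Ob = ((a + c)/2, (a*c - a*d - c*d + d**2*k**2 + d**2)/(2*d*k))
Oc = ((a**2 + b*d*k**2 + b*d)/(2*a),
      (-a**2 + a*b*k**2 + a*b + a*d*k**2 + a*d - b*d*k**2 - b*d)/(2*a*k))
Od = ((a + c)/2, (-a*b + a*c + b**2*k**2 + b**2 - b*c)/(2*b*k))

# Power of P=(0,0) w.r.t. the circle with diameter XY is PX . PY.
ratio = sp.simplify(dot(Oa, Oc)/dot(Ob, Od))
assert sp.simplify(ratio - (k**2 + 1)*b*d/(a*c)) == 0
```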
\begin{proof}[Second proof of Theorem \ref{thm2}]Let $U \equiv AD \cap BC$ and $V \equiv AB \cap CD$. To prove that $P$ is the midpoint of $QR$, it suffices to show that $QR \parallel UV$ (using the harmonic pencil as in the first proof of Theorem \ref{thm1}), or in other words that $PW \perp UV$. Under the homothety with center $X$ and ratio $\frac{1}{2}$, this is equivalent to proving that the line joining the midpoints $P'$ and $K'$ of $XP$ and $XK$ is perpendicular to $UV$. From Lemma \ref{lem3}, we deduce that $K'$ lies on the Newton line of $O_aO_bO_cO_d$, and by Lemma \ref{lem2} this Newton line is perpendicular to $UV$, which completes the proof.
\end{proof}
\begin{acknowledgements}We thank Chris van Tienhoven and his associates in \cite{4d,4e} for providing us with valuable knowledge about quadrangles, quadrilaterals, and quadrigons and how to categorize them.
We thank the open-source mathematics software GeoGebra Geometry \cite{10a} and Sage Notebook \cite{10} for drawing the figures and transforming the algebraic expressions.
\end{acknowledgements}
%% arXiv:1207.1258 -- https://arxiv.org/abs/1207.1258
%% Title: Matrices that commute with their derivative. On a letter from Schur to Wielandt
%% Abstract: We examine when a matrix whose elements are differentiable functions in one variable commutes with its derivative. This problem was discussed in a letter from Issai Schur to Helmut Wielandt written in 1934, which we found in Wielandt's Nachlass. We present this letter and its translation into English. The topic was rediscovered later and partial results were proved. However, there are many subtle observations in Schur's letter which were not obtained in later years. Using an algebraic setting, we put these into perspective and extend them in several directions. We present in detail the relationship between several conditions mentioned in Schur's letter and we focus in particular on the characterization of matrices called Type 1 by Schur. We also present several examples that demonstrate Schur's observations.

\section{Introduction}
What are the conditions that force a matrix of differentiable functions to commute with its elementwise derivative?
This problem, discussed in a letter from I. Schur to H. Wielandt \cite{Sch34}, has been
discussed in a large number of papers \cite{Ama54,Asc50,Asc52,BogC59,Die74,Eps63,Eru46,Gof81,Hel55,KotE82,
Kuz76,Mar65,Mar67,Par72,Pet79,Ros65,Sch52,Ter55}. However, these authors were unaware of Schur's letter and did not find some of its principal results. A summary and a historical discussion of the problem and several extensions thereof are presented by Evard in \cite{Eva85,Eva95}, where the study of the topic is dated back to the 1940s and 1950s, but
Schur's letter shows that it already appeared in Schur's lectures in the 1930s, if not earlier.
\if {
We do not know which set of functions Schur had in mind,
whether Schur meant this to be the set of analytic functions
or meromorphic functions in one variable. He
may even have had the rational functions in one variable over the
complex numbers in mind. And we do not know which arguments Schur used to reach conclusions concerning matrices of small size at the end of his letter. Though our arguments remain close to those of Schur, we will take an algebraic approach and discuss the results in Schur's letter in differential fields.
This is also the approach that was taken in \cite{AdkEG93} and in unpublished notes of Guralnick~\cite{Gur05}, where results related to ours using differential fields were discussed.
This approach has also been taken in
} \fi
The content of the paper is as follows.
In Section~\ref{letter} we present a facsimile of Schur's letter to
Wielandt and its English translation. In Section~\ref{sec:discussion} we discuss Schur's letter and we motivate our use of differential fields.
In Section~\ref{sec:prelim} we introduce our notation and reprove Frobenius' result on Wronskians. In Section~\ref{sec:ct1} we discuss the results that characterize
the matrices of Type~1 in Schur's letter and in our main Section~\ref{sec:triandia} we discuss
the role played by diagonalizability and triangularizability of the matrix in the commutativity of the matrix and its derivative.
We also present several illustrative examples in Section~\ref{sec:exs}
and we state an open problem in Section~\ref{conclusion}.
\section{A letter from Schur to Wielandt}\label{letter}
Our paper deals with the following letter from Issai Schur to his PhD
student Helmut Wielandt. See the facsimile below.
\begin{figure}
\scalebox{.8}{\includegraphics{schur_brief_voll}}
\vspace{0.5cm}
\label{FigOne}
\end{figure}
\begin{figure}
\scalebox{.8}{\includegraphics{schur_brief_voll2}}
\vspace{0.5cm}
\label{FigTwo}
\end{figure}
\begin{figure}
\scalebox{.8}{\includegraphics{schur_brief_voll3}}
\vspace{0.5cm}
\label{FigThree}
\end{figure}
\begin{figure}
\scalebox{.8}{\includegraphics{schur_brief_voll4}}
\vspace{0.5cm}
\label{FigFour}
\end{figure}
Translated into English, the letter reads as follows:
{\it
Lieber Herr Doktor!\hfill Berlin, 21.7.34
You are perfectly right. Already for $3\leq n<6$
not every solution of the
equation $M M'=M'M$ has the form
\eq{1} M_1 =\sum_{\lambda} f_\lambda C_\lambda, \en
where the $C_\lambda$ are pairwise commuting constant matrices. One
must also consider the type
\eq{2} M_2=(f_\alpha g_\beta), \quad (\alpha,\beta=1,\ldots n), \en
where $f_1,\ldots f_n$, $ g_1,\ldots,g_n$ are arbitrary functions
that satisfy the conditions
$$ \sum_\alpha f_\alpha g_\alpha= \sum_\alpha f'_\alpha g_\alpha=0$$
and therefore also
$$ \sum_\alpha f_\alpha g'_\alpha=0.$$
In this case we obtain
$$ M^2 =M M'=M'M=0.$$
In addition we have the type
\eq{3} M_3=\phi E+M_2, \en
with $M_2$ of type (\ref{2}). \footnote{Note that $E$ here denotes the identity matrix.}
From my old notes, which I did not present correctly in my lectures,
it can be deduced that for $n<6$ every solution of $M M'=M'M$
can be completely decomposed by means of constant similarity
transformations into matrices of type (\ref{1}) and (\ref{3}).
Only from $n=6$ on there are also other cases.
This seems to be correct.
But I have not checked my rather laborious
computations (for $n=4$ and $n=5$).
I concluded in the following simple manner that one can restrict oneself to
the case where $M$ has only one characteristic root (namely $0$): If $M$
has two different characteristic roots, then one can determine
a rational function $N$ of $M$ for which $N^2=N$ but
not $N=\phi E$. Also $N$ commutes with $N'$. It follows from $N^2=N$
that $2NN'=N'$, thus $2N^2N'= 2NN'=NN'$. This yields
$2 NN'=N'=0$, i.e., $N$ is constant.
Now one can apply a constant similarity transformation to $M$ so that
instead of $N$ one achieves a matrix of the form
$$\mat{cc} E & 0 \\ 0 & 0 \rix.$$
This shows that $M$ can be decomposed completely
by means of a constant
similarity transformation.
One is led to type (\ref{2}) by studying the case $M^2=0$, $\rank M=1$.
Already for $n=4$ also the cases $M^2=0$, $\rank M=2$, $M^3=0$ need to be
considered.
Type (\ref{1}) is completely characterized by the property
that $M,M',M'', \ldots $ are pairwise
commuting. This is not only necessary but also sufficient.
For, if among the $n^2$ coefficients $f_{\alpha \beta}$ of $M$ exactly
$r$ are linearly independent over the domain of constants,
then one can write
$$ M=f_1 C_1+\cdots +f_rC_r,$$
($C_s$ a constant matrix), where $f_1, \ldots, f_r$ satisfy no
equation $\sum_{\alpha} \mbox{\rm const}\ f_\alpha=0$.
Then
$$ M^{(\nu)}= f_1^{(\nu)} C_1+ \cdots +f_r^{(\nu)} C_r,
\quad (\nu=1,\ldots,r-1).$$
Since the Wronskian determinant
$$\left |\mat{ccc} f_1 &\ldots& f_r \\
f_1' &\ldots& f_r'\\
& \vdots & \rix\right |
$$
cannot vanish identically, one obtains equations of the form
$$ C_s=\sum_{\sigma=0}^{r-1} \phi_{s \sigma} M^{(\sigma)}.$$
If $M,M',M'', \ldots, M^{(r-1)}$ are pairwise
commuting, then the same is true also for
$C_1,\ldots C_r$ and thus $M$ is of type (\ref{1}). This implies
furthermore that $M$ belongs to type (\ref{1}) if
$M^n$ is the highest\footnote{We think that Schur means \emph{lowest} here.} power of $M$ that equals $0$.
In the case $n=3$ one therefore only needs to consider type (\ref{2}).
\hfill With best regards
\hfill Yours, Schur
}
\bigskip
\section{Discussion of Schur's letter}\label{sec:discussion}
This letter was found in Helmut Wielandt's mathematical Nachlass when it was collected by Heinrich Wefelscheid and Hans Schneider not long after Wielandt's death in 2001. We may therefore safely assume that Schur's recent student Wielandt is the ``Herr Doktor'' to whom the letter is addressed. Schur's letter begins with a reference to a previous remark of Wielandt's which corrected an incorrect assertion by Schur. We can only guess at this sequence of events, but perhaps a clue is provided by Schur's reference to his notes which he did not present correctly in his lectures. Could Wielandt have been in the audience and did he subsequently point out the error? And what was this error? Very probably it was that every matrix of functions that commutes with its derivative is given by (1) (matrices called Type 1), for Schur now denies this and displays another type of matrix commuting with its derivative (called Type 2). He recalls that in his notes he claimed that for matrices of size 5 or less every such matrix is of Type 1, 2 or 3, where Type 3 is obtained from Type 2 by adding a scalar function times the identity. This is not correct, because there is also the direct sum of a size 2 matrix of Type 1 and a size 3 matrix of Type 2; we prove this below.
We do not know why Schur was interested in the topic of matrices of functions that commute with their derivative, but
it is probably a safe guess that this question came up in the context of solving differential equations, at least this is
the motivation in many of the subsequent papers on this topic.
As one of the main results of his letter, Schur shows that an idempotent that commutes with its derivative is a constant matrix and, without further explanation, concludes that one can restrict oneself to matrices with a single eigenvalue. The latter observation raises several questions. First, Schur does not say which functions he has in mind. Second, his argument follows from a standard decomposition of a matrix by a similarity into a direct sum of matrices {\em provided that} the eigenvalues of the matrix are functions of the type considered. But this is not true in general, for example the eigenvalues of a matrix of rational functions are algebraic functions.
We wonder whether Schur was aware of this difficulty and we shall return to it at the end of this section.
Then Schur shows that a matrix of size $n$ is of Type 1 if and only if it and its first $n-1$ derivatives are pairwise commutative. His proof is based on a result of Frobenius \cite{Fro1874} that a set of functions is linearly independent over the constants if and only if their Wronskian determinant is nonzero. Frobenius, like Schur, does not explain what functions he has in mind. In fact, Peano \cite{Pea1889} shows that there exist real differentiable functions that are linearly independent over the reals whose Wronskian is $0$. This is followed by B\^ocher \cite{Boc00}, who shows that Frobenius' result holds for analytic functions and investigates necessary and sufficient conditions in \cite{Boc01}. A good discussion of this topic can be found in \cite{BosD10}.
We conclude this section by explaining how our exposition has been influenced by some of the observations above. As we do not know what functions Schur and Frobenius had in mind, we follow \cite{AdkEG93} and some unpublished notes of Guralnick \cite{Gur05} and set Schur's results and ours in terms of differential fields (which include the field of rational functions and the quotient field of analytic functions over the real or complex numbers). Since we do not know how Schur concludes that it is enough to consider matrices with a single eigenvalue, we derive our results from a standard matrix decomposition (our Lemma \ref{decomp} below) which does not assume that all eigenvalues lie in the differential field under consideration.
\section{Notation and preliminaries}\label{sec:prelim}
A {\em differential field} $\mathbb F$ is an (algebraic) field together
with an additional operation (the derivative), denoted by ${}'$ that satisfies $(a+b)' = a'
+ b'$ and $(ab)' = ab' + a'b$ for $a,b \in \mathbb F$. An element $a
\in \mathbb F$ is called a {\em constant} if $a' = 0$. It is easily
shown that the set of constants forms a subfield $\mathbb K$ of $\mathbb F$
with $1 \in \mathbb K$. Examples are provided by the rational functions over the real or complex numbers
and the meromorphic functions over the complex numbers.
In what follows we consider a (differential) field $\mathbb F$
and matrices $M=[m_{i,j}]\in \mathbb F^{n,n}$.
The main condition that we want to analyze is when $M\in \mathbb F^{n,n}$
commutes with its derivative,
\begin{equation}\label{c1}
MM'=M'M.
\end{equation}
As $M\in \mathbb F^{n,n}$, it has a minimal and a characteristic polynomial, and $M$ is called {\em nonderogatory} if the characteristic polynomial is equal to the minimal polynomial, otherwise it is called {\em derogatory}. See \cite{HorJ85}.
In Schur's letter the following
three types of matrices are considered.
%
\begin{definition}
Let $M\in \mathbb F^{n,n}$. Then $M$ is said to be of
\begin{itemize}
\item {\em Type 1\/} if
\[
M =\sum_{j=1}^{k} f_j C_j,
\]
where $f_j\in \mathbb F$, and $C_j\in \mathbb K^{n,n}$, for $j=1,\ldots,k$,
and the $C_j$ are pairwise commuting;
\item {\em Type 2\/} if
\[
M=f g^T,\
\]
with $f,g\in \mathbb F^{n}$, satisfying $f^Tg=f^Tg'=0$;
\item {\em Type 3\/} if
\[
M=hI+\widetilde M,
\]
with $h\in \mathbb F$ and $\widetilde M$ is of Type~2.
\end{itemize}
\end{definition}
Schur's letter also mentions the condition that all derivatives of $M$ commute, i.e.,
\begin{equation}\label{c6}
M^{(i)}M^{(j)}=M^{(j)}M^{(i)}\ \mbox{\rm for all nonnegative integers}\ i,j.
\end{equation}
To characterize the relationship between all these properties, we first recall
several results from Schur's letter and from classical algebra.
\begin{lemma} \label{insight} Let $\mathbb F$ be a differential field with field of constants
$\mathbb K$. Let $N$ be an idempotent matrix in
$\mathbb {F}^{n,n}$ that commutes with $N'$. Then $N \in \mathbb K^{n,n}$.
\end{lemma}
\proof (See Schur's letter.) It follows from $N^2 = N$ that $2NN' =
N'$. Multiplying by $N$ gives $2NN' = 2N^2N' = NN'$, so $NN' = 0$, and therefore $N' = 2NN' = 0$.
\eproof
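As a concrete illustration (our example, not Schur's), consider the family of idempotents $N(t)=\left(\begin{smallmatrix}1 & f(t)\\ 0 & 0\end{smallmatrix}\right)$. The following SymPy sketch confirms that $N$ commutes with $N'$ exactly when $f'=0$, i.e. when $N$ is constant, as Lemma \ref{insight} predicts.

```python
import sympy as sp

t = sp.symbols('t')
f = sp.Function('f')(t)

# A family of idempotents: N(t)^2 = N(t) for every choice of f.
N = sp.Matrix([[1, f], [0, 0]])
assert sp.simplify(N*N - N) == sp.zeros(2, 2)

Np = N.diff(t)
commutator = sp.simplify(N*Np - Np*N)
# The commutator equals [[0, f'], [0, 0]]: N commutes with N'
# if and only if f' = 0, i.e. N is a constant matrix.
assert commutator == sp.Matrix([[0, f.diff(t)], [0, 0]])
```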
Another important tool in our analysis will be the following result which in its original form
is due to Frobenius~\cite{Fro1874}, see Section~\ref{sec:discussion}.
We phrase and prove the result in the context of differential fields.
\begin{theorem}\label{frob}
Consider a differential field $\mathbb F$ with field of constants $\mathbb K$. Then
$y_1,\ldots,y_r\in \mathbb F$ are linearly dependent over $\mathbb K$ if and only if the columns of the \emph{Wronski matrix}
\[
Y=\left[ \begin{array}{cccc} y_1 & y_2 & \dots & y_r \\
y_1' & y_2' &\dots & y_r'\\
\vdots & \vdots & \ddots & \vdots\\
y_1^{(r-1)} & y_2^{(r-1)} &\dots & y_r^{(r-1)}
\end{array} \right ],
\]
are linearly dependent over $\mathbb F$.
\end{theorem}
\proof
We proceed by induction over $r$. The case $r=1$ is trivial.
Consider the Wronski matrix $Y$ and the lower triangular matrix
\[
Z=\left[ \begin{array}{cccc} z & 0 & \dots & 0 \\
c_{2,1}z' & z &\dots & 0\\
\vdots & \vdots & \ddots & \vdots\\
c_{r,1}z^{(r-1)} & c_{r,2}z^{(r-2)} &\dots & z
\end{array} \right ],
\]
with $c_{i,j}$ appropriate binomial coefficients such that
\[
ZY=\left[ \begin{array}{cccc} z y_1& z y_2& \dots & z y_r \\
(z y_1)' & (z y_2)' &\dots & (z y_r)'\\
\vdots & \vdots & \ddots & \vdots\\
(z y_1)^{(r-1)} & (z y_2)^{(r-1)} &\dots & (z y_r)^{(r-1)}
\end{array} \right ].
\]
Since $\mathbb F$ is a differential field, we can choose $z=y_1^{-1}$ and obtain
that
\[
ZY=\left[ \begin{array}{cccc} 1& y_1^{-1} y_2& \dots & y_1^{-1} y_r \\
0 & (y_1^{-1} y_2)' &\dots & ( y_1^{-1} y_r)'\\
\vdots & \vdots & \ddots & \vdots\\
0 & (y_1^{-1} y_2)^{(r-1)} &\dots & ( y_1^{-1} y_r)^{(r-1)}
\end{array} \right ].
\]
It follows that the columns of $Y$ are linearly dependent over $\mathbb F$ if and only if the columns
of
\[
\left[ \begin{array}{ccc}
(y_1^{-1} y_2)' &\dots & ( y_1^{-1} y_r)'\\
\vdots & \ddots & \vdots\\
(y_1^{-1} y_2)^{(r-1)} &\dots & ( y_1^{-1} y_r)^{(r-1)}
\end{array} \right ]
\]
are linearly dependent over $\mathbb F$, which, by induction, holds
if and only if $(y_1^{-1} y_2)' ,\dots , ( y_1^{-1} y_r)'$
are linearly dependent over $\mathbb K$, i.e., there exist coefficients
$b_2,\ldots,b_r\in \mathbb K$, not all $0$,
such that
\[
b_2 (y_1^{-1} y_2)'+\cdots +b_r( y_1^{-1} y_r)'=0.
\]
Integrating this identity, we obtain
\[
b_2 (y_1^{-1} y_2)+\cdots +b_r( y_1^{-1} y_r)=-b_1
\]
for some integration constant $b_1\in \mathbb K$, or equivalently
\[
b_1 y_1+\cdots + b_r y_r=0.
\]
\eproof
Theorem~\ref{frob} implies in particular that the columns of the Wronski matrix $Y$
are linearly independent over $\mathbb F$ if and only if they are linearly independent over $\mathbb K$.
\begin{remark}{\rm
Theorem~\ref{frob} is discussed from a formal algebraic point of view, which however includes the cases of complex analytic functions and rational functions over a field, since these are contained in differential fields. Necessary and sufficient conditions for Theorem~\ref{frob} to hold for other functions were proved in \cite{Boc01} and discussed in many places, see, e.g., \cite{BosD10,Mei61} and \cite[Ch. XVIII]{Mui33}.
}
\end{remark}
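For rational or analytic functions the criterion of Theorem~\ref{frob} is easy to experiment with. The following SymPy sketch (illustrative only) computes two Wronskians, one for an independent family and one for a dependent one.

```python
import sympy as sp

t = sp.symbols('t')

# 1, t, t^2 are linearly independent over the constants,
# and indeed their Wronskian is the nonzero constant 2.
W = sp.wronskian([1, t, t**2], t)
assert sp.simplify(W - 2) == 0

# t^2 and 5*t^2 are linearly dependent over the constants,
# and their Wronskian vanishes identically.
assert sp.simplify(sp.wronskian([t**2, 5*t**2], t)) == 0
```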
\section{Characterization of matrices of Type~1} \label{sec:ct1}
In this section we discuss relationships among the various properties introduced in Schur's letter and in the previous section. This will give, in particular, a characterization of matrices of Type~1.
In his letter, Schur proves the following result.
\begin{theorem}\label{th:allcommute}
Let $\mathbb F$ be a differential field. Then $M\in {\mathbb F}^{n,n}$
is of Type~1 if and only if it satisfies condition (\ref{c6}), i.e., $M^{(i)}M^{(j)}=M^{(j)}M^{(i)}$ for all nonnegative integers $i,j$.
\end{theorem}
\proof (See Schur's letter.)
If $M$ is of Type~1, then $M =\sum_{j=1}^{k} f_j C_j$ and the $C_j\in \mathbb K^{n,n}$ are pairwise commuting, which immediately implies (\ref{c6}). For the converse,
Schur makes use of Theorem~\ref{frob}, since if among the $n^2$ coefficients
$m_{i,j}$ exactly $r$ are linearly independent over $\mathbb K$, then
\[
M=f_1C_1+\cdots+f_rC_r,
\]
with coefficients $C_i\in \mathbb K^{n,n}$, where
$f_1,\ldots,f_r$ are linearly independent over $\mathbb K$.
Then
\[
M^{(i)}=f_1^{(i)}C_1+\cdots +f_r^{(i)}C_r,\qquad i=1,\ldots,r-1.
\]
By Theorem~\ref{frob}, the columns of the associated Wronski matrix
are linearly independent, and hence each of the $C_i$ can be expressed
as
\[
C_i=\sum_{j=0}^{r-1} g_{i,j} M^{(j)}.
\]
Thus, if condition~(\ref{c6}) holds, then the $C_i$, $i=1,\ldots,r$,
are pairwise commuting and thus $M$ is of Type~1.
\eproof
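Condition \eqref{c6} is straightforward to test for a given matrix. As a small illustration (our construction: $C_1,C_2$ are chosen as polynomials in a single nilpotent matrix, so they commute), the following SymPy sketch checks the first few derivative commutators of a Type~1 matrix.

```python
import sympy as sp

t = sp.symbols('t')

# Two commuting constant matrices (both polynomials in the same J):
J = sp.Matrix([[0, 1], [0, 0]])
C1 = sp.eye(2) + J
C2 = 2*sp.eye(2) - 3*J
assert C1*C2 == C2*C1

M = sp.exp(t)*C1 + sp.sin(t)*C2   # a Type-1 matrix

# All derivatives of M commute pairwise (condition (c6)), checked
# here for derivative orders up to 2:
for i in range(3):
    for j in range(3):
        Mi, Mj = M.diff(t, i), M.diff(t, j)
        assert sp.simplify(Mi*Mj - Mj*Mi) == sp.zeros(2, 2)
```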
Using this result we immediately obtain the following theorem.
\begin{theorem}\label{th:nondero}
Let $\mathbb F$ be a differential field with field of constants
$\mathbb K$. If $M\in \mathbb F^{n,n}$ is nonderogatory and $MM' = M'M$,
then $M$ is of Type~1.
\end{theorem}
\proof
If $M$ is nonderogatory then all matrices that commute with $M$ have the form $p(M)$, where $p$ is a polynomial with coefficients in $\mathbb F$, see \cite{DraDG51,HorJ85}. Thus $MM' = M'M$ implies that $M'$ is a polynomial in $M$.
But then every derivative $M^{(j)}$ is a polynomial in $M$ as well and
thus (\ref{c6}) holds which by Theorem~\ref{th:allcommute} implies that $M$ is of Type~1.
\eproof
The following example from \cite{Asc52,Eva85} of a Type~2 matrix shows that one cannot easily drop the
condition that the matrix is nonderogatory.
\begin{example}\label{eva3x3}
{\rm Let
\[ f =\mat{c} 1\\ t \\ t^2\rix,\qquad g =\mat{c} t^2\\ -2 t\\ 1
\rix,
\]
then $f^T g=0$ and $f^T g'=0$, hence
\begin{equation}\label{evaM}
M_a := g f^T= \mat{ccc} t^2 & t^3& t^4\\
-2 t& -2 t^2 & -2 t^3\\
1& t & t^2\rix,
\end{equation}
is of Type~2. Since $M_a$ is nilpotent with $M_a^2=0$ but $M_a\neq 0$ and the rank is $1$,
it is derogatory. One has
\[
M_a'=
\mat{ccc} 2t & 3t^2& 4t^3\\
-2 & -4t& -6t^2\\
0 & 1 & 2 t
\rix,\qquad
M_a''= \mat{ccc} 2 & 6t & 12 t^2 \\
0 & -4 & -12 t \\
0 & 0 & 2\rix,
\]
and thus $ M_a M_a'=M_a' M_a=0$. By the product rule
it immediately follows that $M_a M_a''=M_a''M_a$, but
\[
M_a' M_a''= \mat{ccc}
4t& 0& -4t^3\\
-4 & 4t &12t^2\\
0 & -4 & -8t
\rix \neq M_a''M_a'=\mat{ccc}
-8t &-6t^2 & -4t^3\\
8 & 4t & 0\\
0 & 2 & 4t\rix.
\]
Therefore, it follows from Theorem~\ref{th:allcommute} that $M_a$ is not of Type~1.
For any dimension $n\geq 3$, one can construct an example of Type~2 by choosing $f \in {\mathbb F}^n$, setting $F = [f,f']$ and then choosing $g$ in the nullspace of $F^T$.
Then $fg^T$ is of Type~2.
}
\end{example}
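The computations in this example can be replayed symbolically. The following SymPy sketch (an illustrative check) verifies that $M_a$ is of Type~2, satisfies $M_aM_a'=M_a'M_a=M_a^2=0$, and fails condition \eqref{c6}.

```python
import sympy as sp

t = sp.symbols('t')
f = sp.Matrix([1, t, t**2])
g = sp.Matrix([t**2, -2*t, 1])

# Type-2 conditions: f^T g = 0 and f^T g' = 0.
assert (f.T*g)[0] == 0
assert (f.T*g.diff(t))[0] == 0

Ma = g*f.T                        # the matrix (evaM)
Ma1, Ma2 = Ma.diff(t), Ma.diff(t, 2)

assert Ma*Ma == sp.zeros(3, 3)                            # M^2 = 0
assert sp.simplify(Ma*Ma1 - Ma1*Ma) == sp.zeros(3, 3)     # M M' = M' M
assert sp.simplify(Ma1*Ma2 - Ma2*Ma1) != sp.zeros(3, 3)   # M', M'' do not commute
```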
Actually every nilpotent matrix function $M$ of rank one
satisfying $MM' = M'M$ is of the form $M=fg^T$ and hence of Type~2.
Indeed, a rank-one matrix can be written as $M=fg^T$, and $M^2=0$ then gives $g^Tf=0$,
whence by differentiation $g^Tf'+(g^T)'f=0$. Moreover,
$MM' = fg^T(f(g^T)' + f'g^T) = (g^Tf')fg^T$ and $M'M = (f(g^T)' + f'g^T)fg^T = ((g^T)'f)fg^T$,
so $MM' = M'M$ implies $g^Tf' = f^Tg'$ and hence $g^Tf' = f^Tg' = 0$.
\section{Triangularizability and Diagonalizability}\label{sec:triandia}
In his letter Schur claims that it is sufficient to consider the case
that $M\in \mathbb F^{n,n}$ is triangular with only one eigenvalue. This follows from his argument
in case the matrix has its eigenvalues in $\mathbb F$, which could be guaranteed by assuming that this
matrix is $\mathbb F$-diagonalizable or even $\mathbb F$-triangularizable. Clearly a sufficient condition
for this to hold is that $\mathbb F$ is algebraically closed, because then for every matrix in $\mathbb F^{n,n}$ the characteristic polynomial splits into linear factors.
\if{
To do this, an essential property discussed in Schur's letter is whether there exists a similarity
transformation to triangular or diagonal form with nonsingular matrices in $\mathbb F^{n,n}$ or
$\mathbb K^{n,n}$.
}\fi
\begin{definition}\label {triangularizable}
Let $\mathbb F$ be a differential field and let $\mathbb H$ be a subfield of $\mathbb F$.
Then $M\in \mathbb F^{n,n}$ is called
{\em $\mathbb H$-triangularizable (diagonalizable)} if there exists a nonsingular $T\in \mathbb H^{n,n}$ such that $T^{-1} M T$ is upper triangular (diagonal).
\end{definition}
Using Lemma~\ref{insight}, we can obtain the following result for
matrices $M\in \mathbb F^{n,n}$ that commute with their derivative $M'$, which is most likely well known
but we could not find a reference.
\begin{lemma}\label{decomp}
Let $\mathbb F$ be a differential field with field of constants $\mathbb K$, and suppose that $M \in {\mathbb F}^{n,n}$ satisfies $MM' = M'M$. Then there
exists an invertible matrix $T\in \mathbb K^{n,n}$ such that
\eq{dirsum}
T^{-1} M T= \diag (M_1,\ldots,M_k),
\en
where the minimal polynomial of each $M_i$ is a power of a
polynomial that is irreducible over $\mathbb F$.
\end{lemma}
\proof
Let the minimal polynomial of $M$ be $\mu(\lambda) = \mu_1(\lambda)\cdots \mu_k(\lambda)$,
where the $\mu_i(\lambda)$ are powers of pairwise distinct polynomials that are
irreducible over $\mathbb F$. Set
\[
p_i(\lambda) = \mu(\lambda)/\mu_i(\lambda),\qquad i = 1,\ldots,k.
\]
Since the polynomials $p_i(\lambda)$ have no common factor, there exist polynomials
$q_i(\lambda)$, $i = 1,\ldots,k$, such that the polynomials $\epsilon_i(\lambda) = p_i(\lambda)q_i(\lambda)$, $i = 1,\ldots,k$, satisfy
\begin{equation}\label{epscomp}
\epsilon_1(\lambda) + \cdots + \epsilon_k(\lambda) = 1.
\end{equation}
Setting $E_i = \epsilon_i(M)$, $i = 1, \ldots, k$ and using the fact that
$\mu(M) =0$ yields that
\begin{eqnarray}\label{comp}
E_1 + \cdots + E_k &=& I, \\
\label{ortho}
E_iE_j &=& 0,\qquad
\; \, i,j = 1,\ldots,k,\quad i\neq j,\\
\label{idem}
E_i^2 &=& E_i,\qquad i = 1,\ldots,k.
\end{eqnarray}
The last identity follows directly from (\ref{comp}) and (\ref{ortho}).
Since the $E_i$ are polynomials in $M$ and $MM' = M'M$, it follows that each $E_i$ commutes with $E'_i$, $i = 1,\ldots,k$. Hence, by Lemma~\ref{insight}, $E_i \in \mathbb K^{n,n}$, $i = 1,\ldots,k$. By (\ref{comp}), (\ref{ortho}), and (\ref{idem}), $\mathbb K^n$ is a direct sum
of the ranges of the $E_i$ and we obtain that, for some nonsingular $T\in \mathbb K^{n,n}$,
\[
\widetilde{E_i}:= T^{-1}E_iT = \diag(0, I_i,0),\qquad i = 1,\ldots,k,
\]
where the $I_i$ are identity matrices whose size equals the dimension of the
range of $E_i$. This is a consequence of the fact that each $E_i$ is diagonalizable with eigenvalues $0$ and $1$.
Since each $E_i$ commutes with $M$,
we obtain that
\begin{eqnarray*}
\widetilde{M_i}&:=& T^{-1}E_iMT \\
&=& T^{-1}E_iME_iT\\
&=& \diag(0,I_i,0) T^{-1}MT \diag(0,I_i,0)\\
&=&\diag(0,M_i,0) ,\qquad i = 1, \ldots, k.
\end{eqnarray*}
Now observe that
\[
\widetilde{E_i}\mu_i(\widetilde{M}_i) \widetilde{E_i}= T^{-1} \epsilon_i(M)\mu_i(M) \epsilon_i(M)T = 0,
\]
since
$\epsilon_i(\lambda)\mu_i(\lambda )= \mu(\lambda)q_i(\lambda)$. Hence $\mu_i(M_i)=0$ as well.
We assert that $\mu_i(\lambda)$ is the minimal polynomial of $M_i$, for if
$r(M_i) = 0$ for a proper factor $r(\lambda)$ of $\mu_i(\lambda)$, then $r(M)\prod_{j \neq i} \mu_j(M) = 0$,
contrary to the assumption that $\mu(\lambda)$ is the minimal polynomial of $M$.
\eproof
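The construction in the proof can be made concrete. The following Python/SymPy sketch carries it out for an arbitrarily chosen constant matrix with minimal polynomial $\mu(\lambda)=(\lambda-1)(\lambda-2)^2$; the matrix, the helper `eval_poly`, and all variable names are our own illustrative choices, not taken from the text.

```python
import sympy as sp

lam = sp.symbols('lam')

# Illustrative constant matrix with minimal polynomial
# mu(lam) = mu_1(lam) * mu_2(lam), where mu_1 = lam - 1, mu_2 = (lam - 2)**2.
M = sp.Matrix([[1, 0, 0],
               [0, 2, 1],
               [0, 0, 2]])

mu1, mu2 = lam - 1, (lam - 2)**2
p1, p2 = mu2, mu1                       # p_i = mu / mu_i

# Bezout relation q1*p1 + q2*p2 = 1 (the p_i are coprime)
q1, q2, g = sp.gcdex(p1, p2, lam)
eps1, eps2 = sp.expand(q1 * p1), sp.expand(q2 * p2)

def eval_poly(expr, A):
    """Evaluate a polynomial in lam at the square matrix A (Horner scheme)."""
    R = sp.zeros(A.rows, A.cols)
    for c in sp.Poly(expr, lam).all_coeffs():
        R = R * A + c * sp.eye(A.rows)
    return R

# The idempotents E_i = eps_i(M) of the proof
E1, E2 = eval_poly(eps1, M), eval_poly(eps2, M)
```

The assertions of the proof, namely (\ref{comp})--(\ref{idem}) and commutation with $M$, can then be verified directly on `E1` and `E2`.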
Lemma~\ref{decomp} has the following corollary, which has been proved in a different way
in \cite{AdkEG93} and \cite{Gur05}.
\begin{corollary}\label{th:diagonal}
Let $\mathbb F$ be a differential field with field of constants
$\mathbb K$. If $M\in \mathbb F^{n,n}$ satisfies $MM' = M'M$ and is $\mathbb F$-diagonalizable, then $M$ is $\mathbb K$-diagonalizable.
\end{corollary}
\proof
In this case, the minimal polynomial of $M$ is a product of distinct linear factors and hence, the minimal polynomial of each $M_i$ occurring in the proof of Lemma~\ref{decomp} is linear. Therefore, each $M_i$ is a scalar matrix.
\eproof
We also have the following corollary.
\begin{corollary}\label{cor:diag_type1}
Let $\mathbb F$ be a differential field with field of constants
$\mathbb K$. If $M\in \mathbb F^{n,n}$ satisfies $MM' = M'M$ and is $\mathbb F$-diagonalizable, then $M$ is of Type~1.
\end{corollary}
\proof
By Corollary~\ref{th:diagonal}, $M=T^{-1} \diag (m_1,\ldots,m_n)T$ with $m_i\in \mathbb F$
and nonsingular $T\in \mathbb K^{n,n}$. Hence
\[
M=\sum_{i=1}^n m_{i}T^{-1} E_{i,i} T
\]
where $E_{i,i}$ is a matrix that has a $1$ in position $(i,i)$ and zeros everywhere else. Since all the matrices $E_{i,i}$ commute,
$M$ is of Type~1.
\eproof
\begin{remark}\label{refrem} {\rm Any $M\in \mathbb F^{n,n}$ that is of rank one, satisfies $MM' = M'M$, and is not nilpotent is of Type~1, since in this case $M$ is $\mathbb F$-diagonalizable. This follows from Corollary~\ref{cor:diag_type1}, since
the minimal polynomial has the form $(\lambda-c)\lambda$ for some nonzero $c\in \mathbb F$. In particular, a rank-one matrix $M\in \mathbb F^{n,n}$ that is of Type~2 but not of Type~1 must be nilpotent.}
\end{remark}
For matrices that are just triangularizable the situation is more subtle. We have the following theorem.
\begin{theorem} \label{th:triangularize} Let $\mathbb F$ be a differential field with an algebraically closed field of constants $\mathbb K$. If $M \in \mathbb F^{n,n}$ is Type~1, then $M$ is $\mathbb K$-triangularizable.
\end{theorem}
\proof
Any finite set of pairwise
commutative matrices with elements in an algebraically closed field may be simultaneously triangularized, see e.g.,
\cite[Theorem 1.1.5]{RadR00}. Under this assumption on $\mathbb K$, if $M$ is Type~1, then it follows that the matrices $C_i \in \mathbb K^{n,n}$ in the representation of $M$ are simultaneously triangularizable by a matrix
$T \in \mathbb K^{n,n}$. Hence $T$ also triangularizes $M$.
\eproof
Theorem~\ref{th:triangularize} implies that Type~1 matrices
have $n$ eigenvalues in $\mathbb F$ if $\mathbb K$ is algebraically closed, and it immediately
leads to the following corollary of Theorem~\ref{th:nondero}.
\begin{corollary}\label{cor:triangularize}
Let $\mathbb F$ be a differential field with field of constants
$\mathbb K$. If $M\in \mathbb F^{n,n}$ is nonderogatory, satisfies $MM' = M'M$ and
if $\mathbb K$ is algebraically closed, then $M$ is $\mathbb K$-triangularizable.
\end{corollary}
\proof
By Theorem~\ref{th:nondero} it follows that $M$ is Type~1 and thus the assertion follows
from Theorem~\ref{th:triangularize}.
\eproof
\section{Matrices of small size and examples}\label{sec:exs}
Example~\ref{eva3x3} again shows that it is difficult to drop
some of the assumptions, since this matrix is derogatory, not of Type~1, and not $\mathbb K$-triangularizable.
One might be tempted to conjecture that any $M\in \mathbb F^{n,n}$ that is $\mathbb K$-triangularizable and satisfies (\ref{c1}) is of Type~1 but this is so only for small dimensions and
is no longer true for large enough $n$, as we will demonstrate below.
Consider small dimensions first.
\begin{proposition}\label{2x2}
Consider a differential field $\mathbb F$ of functions with field of constants
$\mathbb K$.
Let $M=[m_{i,j}]\in {\mathbb F}^{2,2}$ be upper triangular and satisfy $M\, M'=M'\, M$. Then
$M$ is of Type~\ref{1}.
\end{proposition}
\proof
Since $M\, M'=M'\, M$ we obtain
\[
m_{1,2}(m_{1,1}'-m_{2,2}')-m_{1,2}'(m_{1,1}-m_{2,2})=0,
\]
%
which implies that $m_{1,2}=0$ or $m_{1,1}-m_{2,2}=0$ or
both are nonzero and ${ d \over dt}({m_{1,1}-m_{2,2}\over m_{1,2}})=0$, i.e.,
$c m_{1,2}+ (m_{1,1}-m_{2,2})=0$ for some nonzero constant $c$.
If $m_{1,1}=m_{2,2}$ or $m_{1,2}=0$, then $M$, being triangular, is obviously of Type~1.
Otherwise
\[
M=m_{1,1} I + m_{1,2}\mat{cc} 0 & 1 \\ 0 & c \rix,
\]
and hence $M$ is again of Type~1 as claimed.
\eproof
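The Type~1 form obtained at the end of the proof can be checked symbolically. In the Python/SymPy sketch below we pick the arbitrary nonzero functions $m_{1,1}=e^t$, $m_{1,2}=\sin t$ and the constant $c=3$ (illustrative assumptions only) and verify that $MM'-M'M$ vanishes identically.

```python
import sympy as sp

t = sp.symbols('t')
c = 3                                    # an arbitrary nonzero constant
m11, m12 = sp.exp(t), sp.sin(t)          # arbitrary nonzero functions

# Type 1 form from the proof: M = m11*I + m12*C with a constant matrix C
C = sp.Matrix([[0, 1], [0, c]])
M = m11 * sp.eye(2) + m12 * C
Mp = M.diff(t)

# The commutator M*M' - M'*M should vanish identically
commutator = (M * Mp - Mp * M).expand()
```

Since the commutator cancels after mere expansion, the computation reflects that $M$ is a polynomial in a single constant matrix with function coefficients, i.e., Type~1.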
Proposition~\ref{2x2} implies that $2\times2$ $\mathbb K$-triangularizable matrices
satisfying (\ref{c1}) are of Type~1.
\begin{proposition}\label{2x2type}
Consider a differential field $\mathbb F$ with an algebraically closed field of constants $\mathbb K$.
Let $M=[m_{i,j}]\in {\mathbb F}^{2,2}$ satisfy $M\, M'=M'\, M$.
Then $M$ is of Type~1.
\end{proposition}
\proof If $M$ is $\mathbb F$-diagonalizable, then the result follows by Corollary~\ref{cor:diag_type1}. If $M$ is not $\mathbb F$-diagonalizable, then it is nonderogatory and the result follows by Corollary~\ref{cor:triangularize}.
\eproof
\begin{example}{\rm
In the $2\times2$ case, any Type~2 or Type~3 matrix is also of Type~1 but not every
Type~1 matrix is Type~3.
Let $M=\phi I_2 +f g^T$ with
\[\phi\in \mathbb F,\quad f=\mat{c} f_1 \\ f_2\rix ,\quad g=\mat{c} g_1 \\ g_2\rix\in {\mathbb F}^2\]
be of Type~3, i.e.,
$f^Tg={f'}^Tg= f^T{g'}=0$.
If $f_2=0$, then $M$ is upper triangular and hence by Proposition~\ref{2x2}, $M$ is of Type~\ref{1}.
If $f_2\neq 0$, then with
\[
T=\mat{cc} 1 & -f_1/f_2 \\ 0 & 1 \rix,
\]
we have
\[
T MT^{-1}= \phi I_2 +\mat{cc} 0 & 0 \\ f_2 g_1 & 0 \rix=
\phi I_2 +f_2g_1\mat{cc} 0 & 0 \\ 1 & 0 \rix,
\]
since $ f_1g_1+f_2 g_2=0$, and hence $M$ is of Type~1.
However, if we consider
\[
M=\phi I_2+ f\mat{cc} 0 & c \\ 0 & d\rix
\]
with $\phi,f$ nonzero functions and $c,d$ nonzero constants,
then $M$ is Type~1 but not Type~3.
}
\end{example}
\begin{proposition}\label{3x3type}
Consider a differential field $\mathbb F$ of functions with field of constants
$\mathbb K$.
Let $M=[m_{i,j}]\in {\mathbb F}^{3,3}$ be $\mathbb K$-triangularizable and satisfy $M\, M'=M'\, M$. Then $M$ is of Type~\ref{1}.
\end{proposition}
\proof
Since $M$ is $\mathbb K$-triangularizable, we may assume that it is upper triangular already and consider different cases for the diagonal elements. If $M$ has three distinct diagonal elements, then it is $\mathbb K$-diagonalizable and the result follows by
Corollary~\ref{cor:diag_type1}. If $M$ has exactly two distinct diagonal elements,
then it can be transformed to a direct sum of a $2\times 2$ and $1\times1$ matrix and hence the result follows by Proposition~\ref{2x2}. If all diagonal elements are equal, then, letting $E_{i,j}$ be the matrix that is zero except for the position $(i,j)$, where it is $1$, we have
$M=m_{1,1} I+ m_{1,3} E_{1,3}+ \widetilde M$, where $\widetilde M=m_{1,2} E_{1,2}+ m_{2,3} E_{2,3}$ also satisfies (\ref{c1}). Then it follows that $m_{1,2} m'_{2,3}= m'_{1,2} m_{2,3}$. If either
$m_{1,2}=0$ or $m_{2,3}=0$, then we immediately have again Type~1, since $\widetilde M$ is a direct sum of a $2\times 2$ and a $1\times 1$ problem.
If both are nonzero, then $\widetilde M$ is nonderogatory and the result follows
by Theorem~\ref{th:nondero}. In fact, in this case $m_{1,2}=c m_{2,3}$ for some $c\in \mathbb K$ and therefore
\[
M=m_{1,1} I+ m_{1,3} E_{1,3}+ m_{2,3}\mat{ccc} 0 & c & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \rix,
\]
which is clearly of Type~\ref{1}.
\eproof
In the $4\times 4$ case, if the matrix is $\mathbb K$-triangularizable, then we either have at least two different eigenvalues, in which case we have reduced the problem again to the case of dimensions smaller than $4$, or there is only one eigenvalue,
and thus without loss of generality $M$ is nilpotent. If $M$ is nonderogatory then we again have Type
$1$. If $M$ is derogatory then it is the direct sum of blocks of smaller dimension. If these dimensions are smaller than $3$, then we are again in the Type~1
case. So it remains to study the case of a block of size $3$ and a block of size $1$.
Since $M$ is nilpotent, the block of size $3$ is either Type~1 or Type~2. In both cases the complete matrix is also Type~1 or Type~2, respectively.
The following example shows that $\mathbb K$-triangularizability is not enough
to imply that the matrix is Type~1.
\begin{example}{\rm
Consider the $9\times 9$ block matrix
\[
\hat M=\mat{ccc} 0 & M_a & 0 \\ 0 & 0 & M_a \\ 0 & 0 & 0 \rix,
\]
where $M_a$ is the Type~2 matrix from Example~\ref{eva3x3}.
Then $\hat M$ is nilpotent and upper triangular but not of Type~1, 2, or 3; the latter two
exclusions follow from its $\mathbb F$-rank being $2$.
}
\end{example}
Already in the $5\times 5$
case, we can find examples that are none of the (proper) types.
\begin{example}{\rm
Consider $M=T^{-1} \diag(M_1,M_2) T$ with $T\in \mathbb K^{n,n}$,
$M_1\in \mathbb F^{3,3}$ of Type~2 (e.g., take $M_1=M_a$ as in Example~\ref{eva3x3})
and $M_2=\mat{cc} 0&1\\ 0& 0\rix$. Then clearly $M$ is
not of Type~1 and it is not of Type~2, since
it has an $\mathbb F$-rank larger than $1$. By definition it is not of Type~3 either.
Clearly examples of any size can be constructed by building direct sums
of smaller blocks.
}
\end{example}
Schur's letter states that for $n\geq 6$ there are other types. The following example demonstrates this.
\begin{example}{\rm Let $M_a$ be the Type~2 matrix in Example~\ref{eva3x3}
and form the block matrix
\[
A =\mat{cc} M_a & I \\ 0 & M_a\rix .
\]
Direct computation shows $AA' = A'A$ but $A'A'' \neq A''A$. Furthermore, $A^3 = 0$ and $A$ has $\mathbb F$-rank $3$. Thus $A$ is neither Type~1, Type~2, nor Type~3 (the last case need not be considered, since $A$ is nilpotent). We also note that $\rank(A'') = 6$. We now assume that $\mathbb K$ is algebraically closed and show that $A$ is not $\mathbb K$-similar to a direct sum of Type~1 or Type~2 matrices.
To obtain a contradiction we assume that (after a $\mathbb K$-similarity)
$A = \diag(A_1,A_2)$ where $A_1$ is the direct sum of
Type~1 matrices (and hence Type~1) and $A_2$ is the direct sum of Type~2 matrices that are not Type~1. Since $A$ is not Type~1, $A_2$ cannot be the empty matrix. Since the minimum size of a Type~2 matrix that is not Type~1 is $3$ and its rank is $1$, $A$ (which has rank $3$) cannot be a direct sum of Type~2 matrices that are not Type~1. Hence the size of $A_1$ must be at least $1$ and, since $A_1$ is nilpotent, it follows that $\rank(A_1) < \size(A_1)$. Since
$A_1$ is $\mathbb K$-similar to a strictly triangular matrix, it follows that
$\rank(A_1'') < \size(A_1)$. Hence $\rank(A'') = \rank(A_1'') + \rank(A_2'') < 6$, a contradiction.
}
\end{example}
\begin{example}{\rm
If the matrix $M=\sum_{i=0}^r C_i t^i\in \mathbb F^{n,n}$ is a polynomial with coefficients $C_i\in \mathbb K^{n,n}$, then from (\ref{c1}) we obtain a specific set of conditions on
sums of commutators that have to be satisfied. For this we just compare coefficients
of powers of $t$ and obtain a set of quadratic equations in the $C_i$, which has a
clear pattern. For example, in the case $r=2$, we obtain the three conditions $C_0C_1-C_1C_0=0$, $C_0C_2-C_2C_0=0$ and $C_1C_2-C_2C_1=0$, which shows that $M$ is of Type~1.
For $r=3$ we obtain the first nontrivial condition $3(C_0C_3-C_3C_0)+(C_1C_2-C_2C_1)=0$.
We have implemented
a Matlab routine for Newton's method to solve the set of quadratic matrix equations in the case $r=3$ and ran it for many different random starting coefficients
$C_i$ of different dimensions $n$. Whenever Newton's method converged (which it did in most of the
cases) it converged to a matrix of Type~1. Even in the neighborhood of a Type~2 matrix it converged to a Type~1 matrix. This suggests that the matrices of Type~1 are generic in the set of matrices satisfying~(\ref{c1}). A copy of the Matlab routine is available from the authors upon request.
}
\end{example}
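The coefficient comparison just described can be automated. The following Python/SymPy sketch (our own illustration, independent of the Matlab routine mentioned above) builds a generic polynomial matrix with $r=3$ and $n=2$, where the restriction to $2\times 2$ is only to keep the symbolic computation small, and confirms that the coefficient of $t^2$ in $MM'-M'M$ is exactly $3(C_0C_3-C_3C_0)+(C_1C_2-C_2C_1)$.

```python
import sympy as sp

t = sp.symbols('t')
n, r = 2, 3

# Generic constant coefficient matrices C_0, ..., C_3 with symbolic entries
C = [sp.Matrix(n, n, lambda i, j, k=k: sp.Symbol(f'c{k}_{i}{j}'))
     for k in range(r + 1)]

M = sum((C[k] * t**k for k in range(1, r + 1)), C[0])
Mp = M.diff(t)
R = (M * Mp - Mp * M).expand()

def comm(A, B):
    return A * B - B * A

# Coefficient of t^2 in MM' - M'M: the first nontrivial condition for r = 3
coeff2 = R.applyfunc(lambda e: e.coeff(t, 2))
target = (3 * comm(C[0], C[3]) + comm(C[1], C[2])).expand()
```

Comparing the remaining powers of $t$ in `R` in the same way reproduces the full set of quadratic conditions on the $C_i$.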
\section{Conclusion}\label{conclusion}
We have presented a letter of Schur's that contains a major contribution
to the question of when a matrix whose elements are functions in one variable
commutes with its derivative. Schur's letter precedes many partial results
on this question, which is still partially open. We have put Schur's result in perspective with later results and extended it in an algebraic context to matrices over a differential field. In particular, we have presented several results that characterize
Schur's matrices of Type~1. We have given examples of matrices that commute with their derivative and are of none of the Types~1,~2, or~3. We have shown that matrices of Type~1 may be triangularized over the constant field (which implies that their eigenvalues lie in the differential field), but we are left with an open problem already mentioned in Section \ref{sec:discussion}.
\begin{openproblem} {\rm Let $M$ be a matrix with entries in a differential field $\mathbb F$ whose field of
constants is algebraically closed, and suppose that $MM' = M'M$. Must the eigenvalues of $M$ be elements of the field $\mathbb F$?}
\end{openproblem}
For example, if $M$ is a polynomial matrix over the complex numbers, must the eigenvalues be rational functions?
We have found no counterexample.
\section*{Acknowledgements}
We thank Carl de Boor for helpful comments on a previous draft of the paper and Olivier Lader for his careful reading of the paper and for his suggestions. We also thank an anonymous referee for pointing out the observation in Remark~\ref{refrem}.
\bibliographystyle{plain}
% Source metadata for the preceding paper: arXiv:1207.1258,
% "Matrices that commute with their derivative. On a letter from Schur to Wielandt",
% https://arxiv.org/abs/1207.1258
% Source metadata for the following paper: arXiv:2012.10015,
% "A Gallery of Gaussian Periods", https://arxiv.org/abs/2012.10015
% Abstract: Gaussian periods are certain sums of roots of unity whose study
% dates back to Gauss's seminal work in algebra and number theory. Recently,
% large scale plots of Gaussian periods have been revealed to exhibit striking
% visual patterns, some of which have been explored in the second named
% author's prior work. In 2020, the first named author produced a new app,
% "Gaussian periods", which allows anyone to create these plots much more
% efficiently and at a larger scale than before. In this paper, we introduce
% Gaussian periods, present illustrations created with the new app, and
% summarize how mathematics controls some visual features, including colorings
% left unexplained in earlier work.
\section*{Introduction}
Gaussian periods, certain sums of roots of unity introduced by Gauss, have played a key role in several mathematical developments. For example, Gauss employed them in his work on constructibility of regular polygons by unmarked straightedge and compass, as well as in number theory. In the past few years, realizations of their role in the supercharacter theory of Diaconis and Isaacs have led in new directions \cite{SESUP}.
The fast computations afforded by modern technology provide new insights into Gaussian periods by enabling us to study them at a scale that was until recently unfathomable. In particular, large-scale plots of Gaussian periods display striking visual properties that were previously unknown (see, e.g., Figure \ref{Figure:BeautifulGallery}).
The goals of this paper lie in the intersection of the mathematical and artistic aspects of Gaussian periods:
\begin{itemize}[leftmargin=*]
\item{Introduce Gaussian periods, together with new illustrations representing some of their visual features.}
\item{Introduce a new app, created in 2020, that quickly produces illustrations of Gaussian periods and is appropriate as a tool for art, exploratory mathematical research, illustration at scale, and pedagogy.}
\item{Summarize how mathematics controls some features, including colorings unexplained in earlier work.}\label{item:coloringsadhoc}
\end{itemize}
\begin{figure}[h!tbp]
\centering
\begin{minipage}[b]{0.475\textwidth}
\includegraphics[width=\textwidth]{GP-N3x7x251x281-omega54184Layers}
\subcaption{$n=1481151 $, $\omega = 54184$, $c=21$}
\end{minipage}
\hfill
\begin{minipage}[b]{0.475\textwidth}
\includegraphics[width=\textwidth]{GP-N3x5x7x11x13x17-omega254r5c7x13.png}
\subcaption{$n=255255$, $\omega = 254$, $c=7$}
\label{Figure:Rotate:b}
\end{minipage}
\caption{Examples of $\G(n, \omega)$ for different choices of input.}
\label{Figure:BeautifulGallery}
\end{figure}
Before proceeding, we clarify what we mean by the term {\em Gaussian period}. Given a positive integer $n$, an integer $\omega$ coprime to $n$, and an integer $k$, we set
\begin{align*}
\eta_{n, \omega, k}:=\sum_{j=0}^{d-1}e^{\frac{2\pi i\omega^j k}{n}},
\end{align*}
where $d$ is the multiplicative order of $\omega$ mod $n$; that is, $d$ is the smallest positive integer such that $\omega^d\equiv 1\pmod{n}$. When $k$ is relatively prime to $n$, we say that $\eta_{n, \omega, k}$ is a {\em Gaussian period of modulus $n$ and generator $\omega$}.
In this paper, by a slight abuse of notation that coincides with the conventions employed in \cite{GHM,GNGP, Lutz}, we still call $\eta_{n, \omega, k}$ a {\em Gaussian period of modulus $n$ and generator $\omega$} even when $k$ is not coprime to $n$. As we explain later in this paper, $\eta_{n, \omega, k}$ is a positive integer multiple of the Gaussian period $\eta_{\frac{n}{\gcd(n, k)}, \omega, \frac{k}{\gcd(n, k)}}$, which allows us to preserve certain key structures of interest.
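The definition translates directly into code. The short Python sketch below (the function names are our own, not those of the app) computes the multiplicative order $d$ and the period $\eta_{n,\omega,k}$ numerically.

```python
import cmath
from math import gcd

def mult_order(omega, n):
    """Smallest d >= 1 with omega**d congruent to 1 mod n; needs gcd(omega, n) == 1."""
    if gcd(omega, n) != 1:
        raise ValueError("omega must be coprime to n")
    if n == 1:
        return 1
    d, x = 1, omega % n
    while x != 1:
        x = (x * omega) % n
        d += 1
    return d

def gaussian_period(n, omega, k):
    """eta_{n, omega, k} = sum_{j=0}^{d-1} exp(2*pi*i * omega**j * k / n)."""
    d = mult_order(omega, n)
    return sum(cmath.exp(2j * cmath.pi * ((pow(omega, j, n) * k) % n) / n)
               for j in range(d))
```

For instance, it confirms the classical value $\eta_{7,2,1}=\zeta_7+\zeta_7^2+\zeta_7^4=(-1+i\sqrt{7})/2$.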
\section*{Creating and viewing plots of Gaussian periods}
Each of the images in this paper is a plot in the complex plane of
\begin{align*}
\mathsf{G}(n, \omega) := \left\{\eta_{n, \omega, k} : k = 1, 2, \ldots, n\right\}\subset\mathbb{C}.
\end{align*}
Mathematical principles guarantee that $\mathsf{G}(n, \omega)$ exhibits certain basic symmetries. For large values of $n$ (which we refer to as {\em large scale}), however, plots of $\mathsf{G}(n, \omega)$ also exhibit striking patterns and intricacies whose aesthetic properties can be appreciated even by those without mathematical training.
That large-scale plots of Gaussian periods exhibit such variety and intricate patterns was a surprise. This was discovered by B.\ Lutz,
an undergraduate, in the course of his senior thesis. In his explorations, he also introduced a coloring scheme, which is employed in the plots in this paper and discussed in the next section.
A new app, \texttt{Gaussian Periods} (written in Swift for Apple computers and freely available \cite{app}) was produced in 2020 by the first named author, with assistance from R.\ Lipshitz. This app:
\begin{itemize}[leftmargin=*]
\item{plots Gaussian periods faster than previous code (including earlier \texttt{Mathematica} code), taking seconds to produce images that used to take hours;}
\item{allows larger scale plots than previously possible (e.g., Figure \ref{Figure:Fills:c}, which contains over 9 million points), which can be useful for exploring or illustrating asymptotic behavior;}
\item{does not require programming experience or mathematical expertise;}
\item{allows one to quickly modify values of $n$ and $\omega$, as well as a coloring parameter $c$; and}
\item{includes an option to save layers suitable for further steps, e.g., manipulation in \texttt{Adobe Photoshop} to customize color choices (as was done to produce the images in this paper).}
\end{itemize}
As a result, the app is suitable for projects ranging from art to exploratory mathematical investigations to illustration. We also used it to produce all the images in this paper. To improve the image quality in this paper, we layered the different color components (plotting all the points corresponding to a color at the same time), which is assisted by the layers option in the app.
While notions of ``beauty'' and ``aesthetic appeal'' are subjective, symmetry has often appeared in discussions of beauty in both art and math, dating back to the ancient Babylonians. In a historical context, given the algebraic origins of Gaussian periods, it is fitting to note that symmetry originally became a prominent concept in mathematics not through geometry, but through algebra \cite[Preface]{stewart}. Gaussian periods, together with the various symmetries they exhibit, can also be considered of independent artistic merit and be appreciated in their own right (even by those with no mathematical training). More broadly, patterns in plots of certain other families of algebraic numbers have also been recognized for their beauty \cite{baez}.
For those who wish to delve further into philosophical considerations of the artistic merits of Gaussian periods, a brief survey of visual aesthetics in similar mathematical contexts can be found in \cite[pp.~121-4]{Brown}.
\section*{Using color to reveal structures}
Whether focusing on art or math, a coloring scheme can be used to highlight some structures in $\mathsf{G}(n, \omega)$ for a given $n$ and $\omega$. Following \cite[\S 3]{GHM},
fix a positive integer $c \mid n$ and, for $j=1,2, \ldots, c$, assign the same color to all points in the set
$\{\eta_{n, \omega, k} : k\equiv j\pmod{c}\}$.
Two points $\eta_{n, \omega, k}$ and $\eta_{n, \omega, \ell}$ might have the same color even if $k\not\equiv \ell\pmod{c}$, since $\eta_{n, \omega, k} = \eta_{n, \omega, k\omega^j}$ for all integers $j$.
Since this is the coloring scheme employed in each of the images here (and is implemented in \texttt{Gaussian Periods}), we begin with small-scale examples to help readers grasp the subtleties of this coloring scheme.
\begin{Example}\label{example:small:a}
Suppose $n = 27$, $\omega = 2$, and $c = 9$. Since $\omega=2$ generates $(\Z/27\Z)^\times$, the action of $\langle\omega\rangle$ on $\Z/27\Z$ has the orbits $\mathcal{O}_0:=\left\{0\right\}$, $\mathcal{O}_1:=(\Z/27\Z)^\times$, and the two orbits $\{3, 6, 12, 15, 21, 24\}$ and $\{9, 18\}$ of classes $[a]$ with $27>\gcd(a, 27) >1$. The corresponding periods take only three distinct values, since $\eta_{27, 2, 3}=\eta_{27, 2, 1}=0$, so $\mathsf{G}(27, 2)$ consists of the three points $\eta_{27, 2, 1}$, $\eta_{27, 2, 0}=18$, and $\eta_{27, 2, 18}=-9$. By our rules for coloring, since $c=9$ and $9\equiv 18\equiv 0\pmod{9}$, we must assign the same color to $\eta_{27, 2, 0}$, $\eta_{27, 2, 9}$, and $\eta_{27, 2, 18}$. Since elements in $(\Z/27\Z)^\times$ are not congruent modulo $9$ to elements divisible by $3$, there are no restrictions on the color of $\eta_{27, 2, 1}$. As seen in Figure \ref{Figure:Small:a}, for this input, our recipe produces a plot with $3$ points and just $2$ colors, even though $c= 9$.
\end{Example}
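The counts in this example can be confirmed numerically. The following self-contained Python sketch (helper names are ours) recomputes all $\eta_{27,2,k}$, collects the distinct plotted points, and records the common color class of $k=0,9,18$ modulo $c=9$.

```python
import cmath

n, omega, c = 27, 2, 9

def mult_order(omega, n):
    d, x = 1, omega % n
    while x != 1:
        x = (x * omega) % n
        d += 1
    return d

d = mult_order(omega, n)          # = 18, since 2 generates (Z/27Z)^x

def eta(k):
    return sum(cmath.exp(2j * cmath.pi * ((pow(omega, j, n) * k) % n) / n)
               for j in range(d))

# Distinct points of G(27, 2), up to rounding of floating-point error
vals = [eta(k) for k in range(n)]
points = {complex(round(z.real, 8), round(z.imag, 8)) for z in vals}

# k and l share a forced color iff k = l (mod c); here 0, 9, 18 all lie in class 0
same_class = {k % c for k in (0, 9, 18)}
```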
\begin{figure}[h!tbp]
\centering
\begin{minipage}[b]{0.3\textwidth}
\boxed{\includegraphics[width=\textwidth]{GPN27omega2c9r88}}
\subcaption{$n=27$, $\omega = 2$, $c=9$}
\label{Figure:Small:a}
\end{minipage}
\hfill
\begin{minipage}[b]{0.3\textwidth}
\boxed{\includegraphics[width=\textwidth]{GPN12omega5c3r88}}
\subcaption{$n=12$, $\omega = 5$, $c=3$}
\label{Figure:Small:b1}
\end{minipage}
\hfill
\begin{minipage}[b]{0.3\textwidth}
\boxed{\includegraphics[width=\textwidth]{GPN12omega5c4r88}}
\subcaption{$n=12$, $\omega = 5$, $c=4$}
\label{Figure:Small:b2}
\end{minipage}
\caption{Small scale illustrations for Examples \ref{example:small:a} and \ref{example:small:b}.}
\label{Figure:Small}
\end{figure}
This approach to coloring was chosen not as a way to illustrate a particular mathematical principle but rather as a recipe Lutz discovered that produced visually appealing pictures. In fact, in the final paragraph of \cite{GHM}, he and his coauthors note that this approach to coloring is \emph{ad hoc} and that ``a general theory is necessary to formalize our intuition.''
\section*{Algebraic structures}
Through the lens of Galois theory (which was never considered in prior work on illustrating $\mathsf{G}(n,\omega)$), the above coloring scheme can be linked to mathematical structure. The main theorem of Galois theory gives a bijection between the subfields of $\Q\left(\zeta_n\right)$, with $\zeta_n$ a primitive $n$th root of unity, and the subgroups of the Galois group $G:=\mathrm{Gal}\left(\Q\left(\zeta_n\right)/\Q\right)=\operatorname{Aut}\left(\Q\left(\zeta_n\right)\right)$, which is identified with $\left(\Z/n\Z\right)^\times$ via
\begin{align*}
\left(\Z/n\Z\right)^\times&\overset{\sim}{\rightarrow} G\\
[m]&\mapsto \psi_m\\
\psi_m\Big(\sum_j a_j \zeta_n^j\Big) &:= \sum_j a_j\left(\zeta_n^m\right)^j, \hspace{0.5in} a_j\in\Q.
\end{align*}
Each subgroup $H\subseteq G$ corresponds to the fixed field $\Q\left(\zeta_n\right)^H:=\left\{z\in \Q\left(\zeta_n\right) : \psi(z) = z \mbox{ for all } \psi\in H\right\}$. Then $H$ is the Galois group of $\Q\left(\zeta_n\right)$ over $\mathbb{Q}\left(\zeta_n\right)^H$, i.e., the group of field automorphisms of $\Q\left(\zeta_n\right)$ fixing $\Q\left(\zeta_n\right)^H$, and $G/H$ is the Galois group of $\Q\left(\zeta_n\right)^H$ over $\Q$, i.e., the group of field automorphisms of $\Q\left(\zeta_n\right)^H$.
To help identify elements of the subfields of $\Q\left(\zeta_n\right)$, students in Galois theory courses are sometimes assigned to determine Gaussian periods (e.g., see \cite[\S 14.5]{DF}). Indeed, if $H=\langle\omega\rangle$, then
\begin{equation*}
\eta_{n, \omega, k}=\sum_{\psi\in H}{\left(\psi\left(\zeta_n\right)^k\right) }= \sum_{\psi\in H}\psi\big(\zeta_n^k\big)\in \Q\left(\zeta_n\right)^H.
\end{equation*}
Moreover, since $\zeta_n^k$ is a primitive $\frac{n}{\gcd(n,k)}$th root of unity, we have
$\eta_{n, \omega, k} = \frac{\mathrm{ord}_n(\omega)}{\mathrm{ord}_{n/\gcd(n,k)}(\omega)}\,\eta_{\frac{n}{\gcd(n, k)}, \omega, \frac{k}{\gcd(n,k)}}$, where $\mathrm{ord}_m(\omega)$ denotes the multiplicative order of $\omega \pmod{m}$. So
the plot of $\mathsf{G}(n, \omega)$ contains a (rescaled) plot of $\mathsf{G}(\frac{n}{\gcd(n, k)}, \omega)$.
If $c$ is the coloring number, then the plot of $\mathsf{G}(n, \omega)$ contains a single-colored plot of $\mathsf{G}(\frac{n}{c}, \omega)$, rescaled by a factor of $\frac{\mathrm{ord}_n(\omega)}{\mathrm{ord}_{n/c}(\omega)}$. For clarification, we illustrate this in a simple example.
\begin{Example}\label{example:small:b}
Let $n= 12$ and $\omega = 5$. Note that $5$ has multiplicative order $2$ mod $12$ and $1$ mod $4$. So $\eta_{12, 5, 3k} = 2\eta_{4, 5, k} = 2 e^{\frac{2\pi ik}{4}}$ for all $k$. So $\mathsf{G}(4, 5)=\{1, -1, i, -i\}$, and $\mathsf{G}(12, 5)\supseteq 2\cdot \mathsf{G}(4, 5) = \{2, -2, 2i, -2i\}$. If we choose $c = 3$, then the points in $2\cdot \mathsf{G}(4, 5)$ all must be the same color as each other (the red, outer diamond in Figure \ref{Figure:Small:b1}), while the other points need not be colored that color (the blue, inner diamond). On the other hand, selecting $c= 4$ forces the pair of points in $\mathsf{G}(3, 5)\subset \mathsf{G}(12, 5)$ to be the same color (shown in Figure \ref{Figure:Small:b2} in red). As illustrated in Figure \ref{Figure:Small:b2}, the pair gets rotated by $2\pi/4$ (equivalently, by $2\pi\omega/4$), with a new color allowed at each rotation.
\end{Example}
Regarding the current coloring scheme, if two elements of $(\Z/n\Z)^\times\cong\mathrm{Gal}(\Q(\zeta_n)/\Q)$ are congruent modulo $c$, then the corresponding elements of the Galois group restrict to the same element of $\mathrm{Gal}(\Q(\zeta_c)/\Q)$. So, for example, given an element $\psi\in \mathrm{Gal}(\Q(\zeta_c)/\Q)$ and an integer $a$, all points $\Psi(\eta_{n, \omega, a})$, as $\Psi$ ranges over the extensions of $\psi$ to $\Q(\zeta_n)$, are colored the same. There are also other natural coloring schemes. For example, an option (called ``period squared'') in \texttt{Gaussian Periods} is to color $\eta_{n, \omega, k}$ and $\eta_{n, \omega, -k}$ the same, thus creating symmetry across the real axis. This corresponds to coloring the points in the Galois orbit of the complex conjugation automorphism the same. More generally, one might color all points in some other given Galois orbits the same. Furthermore, our reformulation in terms of elements of fixed fields should naturally generalize beyond Gaussian periods to the illustration of symmetries in other settings beyond the scope of this short paper (such as finite nonabelian extensions of $\Q$).
\section*{Asymptotic behavior}
Symmetry is the most obvious feature in typical Gaussian-period plots.
We say that $\G(n, \omega)$ has \emph{$k$-fold dihedral symmetry}
if $\G(n, \omega)$ is invariant under the action of the dihedral group of order $2k$.
That is, $\G(n, \omega)$ is invariant under complex conjugation and rotation by $2\pi/k$ about the origin.
It turns out that $\G(n, \omega)$ has at least $\gcd(\omega-1,n)$-fold dihedral symmetry \cite[Prop.~3.1]{GNGP}.
This symmetry refers to the uncolored graph; the colors highlight additional features beyond the initial symmetry.
This is illustrated in Figure \ref{Figure:Rotate}.
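The proposition can be spot-checked at small scale. In the Python sketch below we take the illustrative parameters $n=63$ and $\omega=4$, so that $\gcd(\omega-1,n)=3$, compute all periods, and verify invariance of $\G(63,4)$ under complex conjugation and rotation by $2\pi/3$.

```python
import cmath
from math import gcd

def mult_order(omega, n):
    d, x = 1, omega % n
    while x != 1:
        x = (x * omega) % n
        d += 1
    return d

def periods(n, omega):
    d = mult_order(omega, n)
    return [sum(cmath.exp(2j * cmath.pi * ((pow(omega, j, n) * k) % n) / n)
                for j in range(d))
            for k in range(n)]

n, omega = 63, 4
m = gcd(omega - 1, n)                  # expected m-fold dihedral symmetry
pts = periods(n, omega)
rot = cmath.exp(2j * cmath.pi / m)

def in_set(z, tol=1e-8):
    return any(abs(z - w) < tol for w in pts)

rotation_ok = all(in_set(z * rot) for z in pts)
conjugation_ok = all(in_set(z.conjugate()) for z in pts)
```

Here conjugation invariance already follows from $\overline{\eta_{n,\omega,k}} = \eta_{n,\omega,-k}$; the rotation invariance is the content of the cited proposition.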
\begin{figure}[h!tbp]
\centering
\begin{minipage}[b]{0.32\textwidth}
\includegraphics[width=\textwidth]{GP-N2x3x5x17x3x19-omega169x169x169Layers}
\subcaption{$n=29070$, $\omega = 1189$, $\gcd(1188, 29070)=18$, $c=3$}
\label{Figure:Rotate:a}
\end{minipage}
~
\begin{minipage}[b]{0.32\textwidth}
\includegraphics[width=\textwidth]{GP-N70091-omega21792c7r4FIGH}
\subcaption{$n=70091$, $\omega = 21792$, $\gcd(21791,70091)=7$, $c=7$}
\label{Figure:Rotate:c}
\end{minipage}
~
\begin{minipage}[b]{0.32\textwidth}
\includegraphics[width=\textwidth]{GP-N3x5x7x11x13x17-omega254Layers}
\subcaption{$n=255255$, $\omega = 254$, $\gcd(253, 255255)=11$, $c=7$}
\end{minipage}
\caption{Dihedral symmetry of $\G(n, \omega)$.}
\label{Figure:Rotate}
\end{figure}
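The rotational and reflective parts of this dihedral symmetry can be verified numerically on a small example. The sketch below uses the hypothetical parameters $n=15$, $\omega=4$ (our choice), for which $\gcd(\omega-1, n)=3$, and checks that the set of periods is closed under rotation by $2\pi/3$ and under complex conjugation:

```python
from cmath import exp, pi
from math import gcd

def gaussian_period(n, omega, k):
    d, x = 1, omega % n
    while x != 1:                      # multiplicative order of omega mod n
        x = (x * omega) % n
        d += 1
    return sum(exp(2j * pi * (pow(omega, j, n) * k % n) / n) for j in range(d))

def near(points, z, tol=1e-9):
    """True if z is within tol of some point in the list."""
    return any(abs(z - w) < tol for w in points)

n, omega = 15, 4
m = gcd(omega - 1, n)                  # = 3, so expect 3-fold dihedral symmetry
pts = [gaussian_period(n, omega, k) for k in range(n)]
rot = exp(2j * pi / m)
assert all(near(pts, rot * z) for z in pts)            # closed under rotation
assert all(near(pts, z.conjugate()) for z in pts)      # closed under conjugation
```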
Gaussian period plots often demonstrate great structural coherence if the parameters
$n$ and $\omega$ vary in the appropriate manner. The following ``filling out'' of various shapes
was discovered in \cite[Thm.~6.3]{GNGP}. See the thorough exposition in \cite[Thm.~1]{GHM}.
Let $q=p^a$ be a positive power of an odd prime and let $\omega = \omega(q)$ be such that
$d= |\langle \omega \rangle|$ divides $p-1$.
Then $\mathsf{G}(q,\omega)$ is contained in the image of the Laurent polynomial function $g:\mathbb{T}^{\phi(d)}\to\mathbb{C}$ defined by
\begin{equation*}
g(z_1,z_2,\ldots,z_{\phi(d)}) = \sum_{k=0}^{d-1} \prod_{j=0}^{\phi(d)-1} z_{j+1}^{b_{k,j}},
\end{equation*}
where the integers $b_{k,j}$ are determined by
$t^k\equiv \sum_{j=0}^{\phi(d)-1} b_{k,j} t^j \pmod{\Phi_d(t)}.$
Here $\mathbb{T}$ denotes the unit circle in $\mathbb{C}$, $\phi$ is the Euler totient function, and $\Phi_d$ denotes the $d$th cyclotomic polynomial.
For a fixed $d$, as $q$ becomes large, $\mathsf{G}(q,\omega)$ ``fills out'' the image of $g$; see Figure \ref{Figure:Fills}.
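The exponents $b_{k,j}$ come from ordinary polynomial reduction modulo $\Phi_d(t)$. The sketch below carries this out for $d=3$, where $\Phi_3(t)=t^2+t+1$, recovering the three monomials of $g$; the helper name is ours, for illustration only.

```python
def reduce_tk(k, phi):
    """Coefficients of t^k modulo the monic polynomial with coefficient
    list phi (phi[i] is the coefficient of t^i; leading coefficient 1)."""
    deg = len(phi) - 1
    coeffs = [1] + [0] * (deg - 1)            # start from t^0
    for _ in range(k):
        lead = coeffs[-1]
        coeffs = [0] + coeffs[:-1]            # multiply by t ...
        coeffs = [c - lead * p for c, p in zip(coeffs, phi[:deg])]  # ... reduce t^deg
    return coeffs

phi3 = [1, 1, 1]                              # Phi_3(t) = 1 + t + t^2
b = [reduce_tk(k, phi3) for k in range(3)]    # rows are (b_{k,0}, b_{k,1})
# The k-th monomial of g is z1**b[k][0] * z2**b[k][1], giving
# g(z1, z2) = z1 + z2 + 1/(z1*z2):
assert b == [[1, 0], [0, 1], [-1, -1]]
```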
\begin{figure}[h!tbp]
\centering
\begin{minipage}[b]{0.32\textwidth}
\includegraphics[width=\textwidth]{GP-N3019-omega239r6c1}
\subcaption{$n=3019$, $\omega = 239$}
\end{minipage}
~
\begin{minipage}[b]{0.32\textwidth}
\includegraphics[width=\textwidth]{GP-N13063-omega1347r6c1}
\subcaption{$n=13063$, $\omega = 1347$}
\end{minipage}
~
\begin{minipage}[b]{0.32\textwidth}
\includegraphics[width=\textwidth]{GP-N9114361-omega3082638r1c1}
\subcaption{$n=9114361$, $\omega = 3082638$}
\label{Figure:Fills:c}
\end{minipage}
\caption{Sometimes $\mathsf{G}(n,\omega)$ appears to ``fill out'' the image of a Laurent
polynomial $g:\mathbb{T}^{\phi(d)}\to\mathbb{C}$. Here $g(z_1, z_2) = z_1 + z_2 + 1/(z_1 z_2)$.}
\label{Figure:Fills}
\end{figure}
With ellipses, hypocycloids, and so forth as primitive graphical elements, one can use the Chinese remainder
theorem to produce new images of startling complexity (as explained in \cite{Lutz}). We content ourselves here
with a few aesthetically pleasing images produced in such a manner; see Figure \ref{Figure:Finale}.
We encourage the reader to enjoy more examples by experimenting with our app \texttt{Gaussian Periods} (freely available at \url{http://www.elleneischen.com/gaussianperiods.html}).
\begin{figure}[h!tbp]
\centering
\begin{minipage}[b]{0.32\textwidth}
\includegraphics[width=\textwidth]{GP-N11x43x79-omega608Layers}
\subcaption{$n=37367$, $\omega = 608$, $c=11$}
\label{Figure:Finale:a}
\end{minipage}
~
\begin{minipage}[b]{0.32\textwidth}
\includegraphics[width=\textwidth]{GP-N3x37x67x5x5-omega766Layers}
\subcaption{$n=185925$, $\omega = 766$, $c=25$}
\label{Figure:Finale:b}
\end{minipage}
~
\begin{minipage}[b]{0.32\textwidth}
\includegraphics[width=\textwidth]{GP-N82677-omega8147Layeredpng}
\subcaption{$n=82677$, $\omega = 8147$, $c=21$}
\label{Figure:Finale:c}
\end{minipage}
\caption{Primitive graphical elements
(here, a ``filled'' deltoid) can form more elaborate plots.}
\label{Figure:Finale}
\end{figure}
\section*{Acknowledgments}
Ellen Eischen was partially funded by NSF grant DMS-1751281. Stephan Ramon Garcia was partially funded by NSF grant DMS-1800123. This material is based partly upon work supported by the NSF grant DMS-1439786 and the Alfred P.~Sloan Foundation award G-2019-11406 while the authors were in residence at the Institute for Computational and Experimental Research in Mathematics (ICERM) in Providence, RI, during the Illustrating Mathematics program (Fall 2019). We thank R.\ Lipshitz for help with the code.
https://arxiv.org/abs/1205.4851 | Green's Theorem for Generalized Fractional Derivatives | We study three types of generalized partial fractional operators. An extension of Green's theorem, by considering partial fractional derivatives with more general kernels, is proved. New results are obtained, even in the particular case when the generalized operators are reduced to the standard partial fractional derivatives and fractional integrals in the sense of Riemann-Liouville or Caputo. | \section{Introduction}
In 1828, the English mathematician George Green (1793--1841),
who worked as a baker and a miller until his forties,
published an essay where he introduced a formula connecting
the line integral around a simple closed curve with a double integral.
Over the years, this result proved to be useful in many fields
of mathematics, physics and engineering \cite{Colton,Emch,Russenschuck,Sherman}.
Generalizations of Green's theorem have chosen different directions,
and are known as the Kelvin--Stokes and the Gauss--Ostrogradsky theorems.
In this paper, in contrast with previous works,
we want to state a Green's theorem for generalized
partial fractional derivatives. Notions of generalized fractional
derivatives were introduced in \cite{OmPrakashAgrawal,book:Kiryakova},
and then developed in \cite{FVC_Gen,MyID:227}.
A fractional version of Green's theorem
has already been proved for Riemann--Liouville integrals
and Caputo derivatives \cite{tarasov}, and for fractional operators
in the sense of Jumarie \cite{MyID:182}. However, generalized fractional
operators have never been considered. Our result may be useful
in the theory of fractional calculus (see, e.g.,
\cite{book:Kilbas,book:Klimek,book:Podlubny,book:Samko}),
in particular for the two-dimensional fractional calculus of variations,
where the derivation of Euler--Lagrange equations uses,
as a key step in the proof,
Green's theorem \cite{MyID:182,Cresson,book:frac,tatiana}.
The paper is organized as follows. In Section~\ref{sec:prelim} a brief review
of ordinary and partial generalized fractional calculus is given. Our results
are then formulated and proved in Section~\ref{sec:MR}: we show the two-dimensional
integration by parts formula for generalized Riemann--Liouville partial
fractional integrals (Theorem~\ref{theorem:GRI})
and Green's theorem for generalized
partial fractional derivatives (Theorem~\ref{thm:ggt}).
\section{Basic Notions}
\label{sec:prelim}
In this section we give definitions of generalized ordinary
and partial fractional operators. By the choice of a certain
kernel, these operators can be reduced to the standard fractional
integrals and derivatives. For more on the subject, we refer the
reader to \cite{OmPrakashAgrawal,MR2974327,book:Kiryakova,FVC_Gen,MyID:227}.
\subsection{Generalized fractional operators}
\begin{definition}[Generalized fractional integral]
\label{def:GI}
The operator $K_P^\alpha$ is given by
\begin{equation*}
\left(K_P^{\alpha}f\right)(t)
:=p\int\limits_{a}^{t}k_{\alpha}(t,\tau)f(\tau)d\tau
+q\int\limits_{t}^{b}k_{\alpha}(\tau,t)f(\tau)d\tau,
\end{equation*}
where $P=\langle a,t,b,p,q\rangle$ is the \emph{parameter set}
($p$-set for brevity), $t\in[a,b]$, $p,q$ are real numbers,
and $k_{\alpha}(t,\tau)$ is a kernel which may depend on $\alpha$.
The operator $K_P^\alpha$ is referred to as the \emph{operator $K$}
($K$-op for simplicity) of order $\alpha$ and $p$-set $P$.
\end{definition}
\begin{theorem}[Theorem 2.3 of \cite{FVC_Gen}]
\label{theorem:L1}
Let $k_\alpha$ be a difference kernel, i.e.,
$k_\alpha(t,\tau)=k_\alpha(t-\tau)$ and
$k_\alpha\in L_1\left([a,b]\right)$. Then,
$K_P^{\alpha}:L_1\left([a,b]\right)\rightarrow
L_1\left([a,b]\right)$ is a well-defined,
bounded and linear operator.
\end{theorem}
The $K$-op reduces to the classical left or right Riemann--Liouville
fractional integral (see, \textrm{e.g.}, \cite{book:Kilbas,book:Podlubny})
for a suitably chosen kernel $k_{\alpha}(t,\tau)$ and $p$-set $P$. Indeed,
let $k_{\alpha}(t-\tau)=\frac{1}{\Gamma(\alpha)}(t-\tau)^{\alpha-1}$.
If $P=\langle a,t,b,1,0\rangle$, then
\begin{equation*}
\left(K_{P}^{\alpha}f\right)(t)=\frac{1}{\Gamma(\alpha)}
\int\limits_a^t(t-\tau)^{\alpha-1}f(\tau)d\tau
=: \left({_{a}}\textsl{I}^{\alpha}_{t} f\right)(t)
\end{equation*}
is the left Riemann--Liouville fractional integral
of order $\alpha$; if $P=\langle a,t,b,0,1\rangle$, then
\begin{equation*}
\left(K_{P}^{\alpha}f\right)(t)=\frac{1}{\Gamma(\alpha)}
\int\limits_t^b(\tau-t)^{\alpha-1}f(\tau)d\tau
=: \left({_{t}}\textsl{I}^{\alpha}_{b} f\right)(t)
\end{equation*}
is the right Riemann--Liouville fractional integral
of order $\alpha$.
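These reductions can be sanity-checked numerically. The sketch below (our illustration, not part of the paper) approximates the left Riemann--Liouville integral after the substitution $u=(t-\tau)^\alpha$, which removes the endpoint singularity of the kernel, and compares against the classical closed form $\left({_{0}}\textsl{I}^{\alpha}_{t}\, \tau^{\nu}\right)(t)=\frac{\Gamma(\nu+1)}{\Gamma(\nu+\alpha+1)}t^{\nu+\alpha}$:

```python
from math import gamma

def left_rl_integral(f, a, t, alpha, steps=4000):
    """Approximate (aI^alpha_t f)(t), 0 < alpha < 1: the K-op with kernel
    (t-tau)^(alpha-1)/Gamma(alpha) and p-set <a,t,b,1,0>.  The substitution
    u = (t-tau)^alpha turns the integral into
    (1/(alpha*Gamma(alpha))) * int_0^{(t-a)^alpha} f(t - u^(1/alpha)) du,
    whose integrand is no longer singular, so the midpoint rule suffices."""
    U = (t - a) ** alpha
    h = U / steps
    total = sum(f(t - ((i + 0.5) * h) ** (1.0 / alpha)) for i in range(steps))
    return total * h / (alpha * gamma(alpha))

# Classical closed form with nu = 1: (0I^alpha_t)[tau] = t^(1+alpha)/Gamma(2+alpha)
alpha, t = 0.5, 1.0
approx = left_rl_integral(lambda x: x, 0.0, t, alpha)
exact = gamma(2.0) / gamma(2.0 + alpha) * t ** (1.0 + alpha)
assert abs(approx - exact) < 1e-4
```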
\begin{definition}[Generalized Riemann--Liouville derivative]
\label{def:GRL}
Let $P$ be a given parameter set. The operator $A_P^\alpha$,
$0 < \alpha < 1$, is defined for functions $f$ such that
$K_P^{1-\alpha} f\in AC\left([a,b]\right)$ by
$A_P^\alpha := \frac{d}{dt} \circ K_P^{1-\alpha}$,
where $\frac{d}{dt}$ denotes the standard derivative.
We refer to $A_P^\alpha$ as \emph{operator $A$} ($A$-op)
of order $\alpha$ and $p$-set $P$.
\end{definition}
\begin{definition}[Generalized Caputo derivative]
\label{def:GC}
Let $P$ be a given parameter set. The operator $B_P^\alpha$,
$\alpha \in (0,1)$, is defined for functions $f$
such that $f\in AC\left([a,b]\right)$ by
$B_P^\alpha :=K_P^{1-\alpha} \circ \frac{d}{dt}$
and is referred to as the \emph{operator $B$} ($B$-op)
of order $\alpha$ and $p$-set $P$.
\end{definition}
Let $k_{1-\alpha}(t-\tau)=\frac{1}{\Gamma(1-\alpha)}(t-\tau)^{-\alpha}$,
$\alpha \in (0,1)$. If $P=\langle a,t,b,1,0\rangle$, then
\begin{equation*}
\left(A_{P}^\alpha f\right)(t) = \frac{1}{\Gamma(1-\alpha)}
\frac{d}{dt} \int\limits_a^t(t-\tau)^{-\alpha}f(\tau)d\tau
=: \left({_{a}}\textsl{D}^{\alpha}_{t} f\right)(t)
\end{equation*}
is the standard left Riemann--Liouville fractional derivative
of order $\alpha$ while
\begin{equation*}
\left(B_{P}^\alpha f\right)(t)
=\frac{1}{\Gamma(1-\alpha)}
\int\limits_a^t(t-\tau)^{-\alpha} f'(\tau)d\tau
=: \left({^{C}_{a}}\textsl{D}^{\alpha}_{t} f\right)(t)
\end{equation*}
is the standard left Caputo fractional derivative of order $\alpha$;
if $P=\langle a,t,b,0,1\rangle$, then
\begin{equation*}
- \left(A_{P}^\alpha f\right)(t)
= - \frac{1}{\Gamma(1-\alpha)} \frac{d}{dt}
\int\limits_t^b(\tau-t)^{-\alpha}f(\tau)d\tau
=: \left({_{t}}\textsl{D}^{\alpha}_{b} f\right)(t)
\end{equation*}
is the standard right Riemann--Liouville
fractional derivative of order $\alpha$ while
\begin{equation*}
- \left(B_{P}^\alpha f\right)(t)
= - \frac{1}{\Gamma(1-\alpha)}
\int\limits_t^b(\tau-t)^{-\alpha} f'(\tau)d\tau
=: \left({^{C}_{t}}\textsl{D}^{\alpha}_{b} f\right)(t)
\end{equation*}
is the standard right Caputo fractional derivative of order $\alpha$.
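The Caputo reduction can be illustrated the same way: applying the $K$-op of order $1-\alpha$ to $f'$ should reproduce the known derivative $\left({^{C}_{0}}\textsl{D}^{\alpha}_{t}\, t^{2}\right) = \frac{\Gamma(3)}{\Gamma(3-\alpha)}t^{2-\alpha}$. A numerical sketch with arbitrary test values (our choices):

```python
from math import gamma

def k_op_left(f, a, t, order, steps=4000):
    """Left Riemann-Liouville integral of the given order (the K-op with
    p-set <a,t,b,1,0>); the substitution u = (t-tau)^order removes the
    kernel's endpoint singularity so the midpoint rule applies."""
    U = (t - a) ** order
    h = U / steps
    total = sum(f(t - ((i + 0.5) * h) ** (1.0 / order)) for i in range(steps))
    return total * h / (order * gamma(order))

def caputo_left(f_prime, a, t, alpha, steps=4000):
    """B-op with p-set <a,t,b,1,0>: the K-op of order 1-alpha applied to f'."""
    return k_op_left(f_prime, a, t, 1.0 - alpha, steps)

# For f(t) = t^2, the left Caputo derivative is Gamma(3)/Gamma(3-alpha) t^(2-alpha).
alpha, t = 0.4, 2.0
approx = caputo_left(lambda x: 2.0 * x, 0.0, t, alpha)
exact = gamma(3.0) / gamma(3.0 - alpha) * t ** (2.0 - alpha)
assert abs(approx - exact) < 1e-3
```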
\subsection{Generalized partial fractional operators}
Let $\alpha$ be a real number from the interval $(0,1)$,
$\Delta_n=[a_1,b_1]\times\dots\times [a_n,b_n]$, $n\in\mathbb{N}$,
be a subset of $\mathbb{R}^n$, $\textbf{t}=(t_1,\dots, t_n)$
be a point in $\Delta_n$ and $\mathbf{p}=(p_1,\dots,p_n)$,
$\mathbf{q}=(q_1,\dots,q_n)\in\mathbb{R}^n$. Generalized partial fractional
integrals and derivatives are a natural generalization
of the corresponding one-dimensional generalized fractional
integrals and derivatives.
\begin{definition}[Generalized partial fractional integral]
\label{def:GPI}
Let function $f=f(t_1,\dots,t_n)$ be continuous
on the set $\Delta_n$. The generalized partial
Riemann--Liouville fractional integral of order $\alpha$
with respect to the $i$th variable $t_i$ is given by
\begin{multline*}
\left(K_{P_{t_i}}^{\alpha}f\right)(\textbf{t})
:=p_i\int\limits_{a_i}^{t_i}k_{\alpha}(t_i,\tau)
f(t_1,\dots,t_{i-1},\tau,t_{i+1},\dots,t_n)d\tau \\
+q_i\int\limits_{t_i}^{b_i}k_{\alpha}(\tau,t_i)
f(t_1,\dots,t_{i-1},\tau,t_{i+1},\dots,t_n)d\tau,
\end{multline*}
where $P_{t_i}=\langle a_i,t_i,b_i,p_i,q_i \rangle$.
We refer to $K_{P_{t_i}}^{\alpha}$ as the
\emph{partial operator $K$} (partial $K$-op)
of order $\alpha$ and $p$-set $P_{t_i}$.
\end{definition}
\begin{definition}[Generalized partial Riemann--Liouville derivatives]
\label{def:GPRL}
Let $P_{t_i}=\langle a_i,t_i,b_i,p_i,q_i \rangle$
and $K_{P_{t_i}}^{1-\alpha}f\in C^1(\Delta_n)$.
The generalized partial Riemann--Liouville fractional
derivative of order $\alpha$ with respect
to the $i$th variable $t_i$ is given by
\begin{equation*}
\begin{split}
\bigl(A_{P_{t_i}}^{\alpha}f\bigr)(\textbf{t})
&:=\left(\frac{\partial}{\partial t_i}\circ K_{P_{t_i}}^{1-\alpha}f\right)(\textbf{t})\\
&=\frac{\partial}{\partial t_i}\left(p_i\int\limits_{a_i}^{t_i}
k_{1-\alpha}(t_i,\tau)f(t_1,\dots,t_{i-1},\tau,t_{i+1},\dots,t_n)d\tau \right.\\
&\qquad \qquad \left.+q_i\int\limits_{t_i}^{b_i}k_{1-\alpha}(\tau,t_i)
f(t_1,\dots,t_{i-1},\tau,t_{i+1},\dots,t_n)d\tau\right).
\end{split}
\end{equation*}
The operator $A_{P_{t_i}}^{\alpha}$ is referred to as the
\emph{partial operator $A$} (partial $A$-op)
of order $\alpha$ and $p$-set $P_{t_i}$.
\end{definition}
\begin{definition}[Generalized partial Caputo derivative]
\label{def:GPC}
Let $P_{t_i}=\langle a_i,t_i,b_i,p_i,q_i \rangle$
and $f\in C^1(\Delta_n)$. The generalized partial Caputo
fractional derivative of order $\alpha$ with respect
to the $i$th variable $t_i$ is given by
\begin{equation*}
\begin{split}
\bigl(B_{P_{t_i}}^{\alpha}f\bigr)(\textbf{t})
&:=\left(K_{P_{t_i}}^{1-\alpha} \circ \frac{\partial}{\partial t_i} f\right)(\textbf{t})\\
&=p_i\int\limits_{a_i}^{t_i}k_{1-\alpha}(t_i,\tau)\frac{\partial}{\partial \tau}
f(t_1,\dots,t_{i-1},\tau,t_{i+1},\dots,t_n)d\tau\\
& \qquad \qquad +q_i\int\limits_{t_i}^{b_i}k_{1-\alpha}(\tau,t_i)\frac{\partial}{\partial \tau}
f(t_1,\dots,t_{i-1},\tau,t_{i+1},\dots,t_n)d\tau
\end{split}
\end{equation*}
and is referred to as the \emph{partial operator $B$} (partial $B$-op)
of order $\alpha$ and $p$-set $P_{t_i}$.
\end{definition}
Similarly as in the one-dimensional case \cite{OmPrakashAgrawal,FVC_Gen,MyID:227},
the generalized partial operators $K$, $A$ and $B$ here introduced
give the standard partial fractional integrals
and derivatives for particular kernels and $p$-sets.
The left- and right-sided Riemann--Liouville partial fractional integrals
with respect to the $i$th variable $t_i$ are obtained by choosing the kernel
$$
k_{\alpha}(t_i,\tau)=\frac{1}{\Gamma(\alpha)}(t_i-\tau)^{\alpha-1}
$$
and $p$-sets $L_{t_i}=\langle a_i,t_i,b_i,1,0\rangle$
and $R_{t_i}=\langle a_i,t_i,b_i,0,1\rangle$, respectively:
\begin{equation*}
\begin{split}
\left({_{a_i}}\textsl{I}^{\alpha}_{t_i} f\right)(\textbf{t})
&= \left(K_{L_{t_i}}^{\alpha}f\right)(\textbf{t})\\
&=\frac{1}{\Gamma(\alpha)}\int\limits_{a_i}^{t_i}(t_i-\tau)^{\alpha-1}
f(t_1,\dots,t_{i-1},\tau,t_{i+1},\dots,t_n)d\tau,
\end{split}
\end{equation*}
\begin{equation*}
\begin{split}
\left({_{t_i}}\textsl{I}^{\alpha}_{b_i} f\right)(\textbf{t})
&= \left(K_{R_{t_i}}^{\alpha}f\right)(\textbf{t})\\
&=\frac{1}{\Gamma(\alpha)}\int\limits_{t_i}^{b_i}(\tau-t_i)^{\alpha-1}
f(t_1,\dots,t_{i-1},\tau,t_{i+1},\dots,t_n)d\tau.
\end{split}
\end{equation*}
The standard left- and right-sided partial Riemann--Liouville and Caputo
fractional derivatives with respect to the $i$th variable $t_i$
are obtained with the choice of kernel $k_{1-\alpha}(t_i,\tau)
=\frac{1}{\Gamma(1-\alpha)}(t_i-\tau)^{-\alpha}$:
if $P_{t_i}=\langle a_i,t_i,b_i,1,0\rangle$, then
\begin{equation*}
\begin{split}
\left({_{a_i}}\textsl{D}^{\alpha}_{t_i} f\right)(\textbf{t})
&= \left(A_{P_{t_i}}^{\alpha}f\right)(\textbf{t})\\
&=\frac{1}{\Gamma(1-\alpha)}\frac{\partial}{\partial t_i}
\int\limits_{a_i}^{t_i}(t_i-\tau)^{-\alpha}
f(t_1,\dots,t_{i-1},\tau,t_{i+1},\dots,t_n)d\tau
\end{split}
\end{equation*}
and
\begin{equation*}
\begin{split}
\left({^{C}_{a_i}}\textsl{D}^{\alpha}_{t_i} f\right)(\textbf{t})
&= \left(B_{P_{t_i}}^{\alpha}f\right)(\textbf{t})\\
&=\frac{1}{\Gamma(1-\alpha)}\int\limits_{a_i}^{t_i}
(t_i-\tau)^{-\alpha}\frac{\partial}{\partial \tau}
f(t_1,\dots,t_{i-1},\tau,t_{i+1},\dots,t_n)d\tau;
\end{split}
\end{equation*}
if $P_{t_i}=\langle a_i,t_i,b_i,0,1\rangle$, then
\begin{equation*}
\begin{split}
\left({_{t_i}}\textsl{D}^{\alpha}_{b_i} f\right)(\textbf{t})
&= -\left(A_{P_{t_i}}^{\alpha}f\right)(\textbf{t})\\
&=-\frac{1}{\Gamma(1-\alpha)}\frac{\partial}{\partial t_i}\int\limits_{t_i}^{b_i}
(\tau-t_i)^{-\alpha}f(t_1,\dots,t_{i-1},\tau,t_{i+1},\dots,t_n)d\tau
\end{split}
\end{equation*}
and
\begin{equation*}
\begin{split}
\left({^{C}_{t_i}}\textsl{D}^{\alpha}_{b_i} f\right)(\textbf{t})
&= -\left(B_{P_{t_i}}^{\alpha}f\right)(\textbf{t})\\
&=-\frac{1}{\Gamma(1-\alpha)}\int\limits_{t_i}^{b_i}(\tau-t_i)^{-\alpha}
\frac{\partial}{\partial \tau}f(t_1,\dots,t_{i-1},\tau,t_{i+1},\dots,t_n)d\tau.
\end{split}
\end{equation*}
\begin{remark}
In Definitions~\ref{def:GPI}, \ref{def:GPRL} and \ref{def:GPC},
all the variables, except $t_i$, are kept fixed. That choice
of fixed values determines a function
$f_{t_1,\dots,t_{i-1},t_{i+1},\dots,t_n}:[a_i,b_i]\rightarrow \mathbb{R}$
of one variable $t_i$:
\begin{equation*}
f_{t_1,\dots,t_{i-1},t_{i+1},\dots,t_n}(t_i)
=f(t_1,\dots,t_{i-1},t_i,t_{i+1},\dots,t_n).
\end{equation*}
By Definitions~\ref{def:GI}, \ref{def:GRL}, \ref{def:GC}
and \ref{def:GPI}, \ref{def:GPRL}, \ref{def:GPC}, we have
\begin{gather*}
\left(K_{P_{t_i}}^{\alpha}f_{t_1,\dots,t_{i-1},t_{i+1},\dots,t_n}\right)(t_i)
=\left(K_{P_{t_i}}^{\alpha}f\right)(t_1,\dots,t_{i-1},t_i,t_{i+1},\dots,t_n),\\
\left(A_{P_{t_i}}^{\alpha}f_{t_1,\dots,t_{i-1},t_{i+1},\dots,t_n}\right)(t_i)
=\left(A_{P_{t_i}}^{\alpha}f\right)(t_1,\dots,t_{i-1},t_i,t_{i+1},\dots,t_n),\\
\left(B_{P_{t_i}}^{\alpha}f_{t_1,\dots,t_{i-1},t_{i+1},\dots,t_n}\right)(t_i)
=\left(B_{P_{t_i}}^{\alpha}f\right)(t_1,\dots,t_{i-1},t_i,t_{i+1},\dots,t_n).
\end{gather*}
Therefore, as in the classical integer order case, computation of partial
generalized fractional operators is reduced to the computation
of one-variable generalized fractional operators.
\end{remark}
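This reduction to one-variable operators is easy to see in computation: freezing $t_2$ and applying the one-dimensional $K$-op to the resulting slice gives the partial operator. A sketch with the Riemann--Liouville kernel and the test function $f(t_1,t_2)=t_1 t_2$ (our choice, for illustration):

```python
from math import gamma

def k_op_left(f, a, t, order, steps=2000):
    """One-dimensional left K-op (Riemann-Liouville kernel, p-set <a,t,b,1,0>),
    computed after the singularity-removing substitution u = (t-tau)^order."""
    U = (t - a) ** order
    h = U / steps
    total = sum(f(t - ((i + 0.5) * h) ** (1.0 / order)) for i in range(steps))
    return total * h / (order * gamma(order))

def partial_k_op_t1(f, a1, t1, t2, order):
    """Partial K-op in t1: freeze t2 and apply the one-dimensional operator."""
    return k_op_left(lambda tau: f(tau, t2), a1, t1, order)

alpha, t1, t2 = 0.5, 1.0, 3.0
f = lambda x, y: x * y
partial = partial_k_op_t1(f, 0.0, t1, t2, alpha)
# The slice f(., t2) is t2 * id, so the partial operator factors through the 1D one:
assert abs(partial - t2 * k_op_left(lambda x: x, 0.0, t1, alpha)) < 1e-6
# and matches the closed form t2 * Gamma(2)/Gamma(2+alpha) * t1^(1+alpha):
assert abs(partial - t2 * gamma(2.0) / gamma(2.0 + alpha) * t1 ** (1.0 + alpha)) < 1e-3
```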
\section{Green's Theorem for Generalized Fractional Derivatives}
\label{sec:MR}
\begin{definition}[Dual $p$-set]
Let $P_{t_i}=\langle a_i,t_i,b_i,p_i,q_i\rangle$, $i\in\mathbb{N}$.
We denote by $P_{t_i}^{*}$ the $p$-set
$P_{t_i}^{*} = \langle a_i,t_i,b_i,q_i,p_i\rangle$
and call it the dual of $P_{t_i}$.
\end{definition}
\begin{theorem}[Generalized 2D Integration by Parts]
\label{theorem:GRI}
Let $\alpha\in (0,1)$, $P_{t_i}=\langle a_i,t_i,b_i,p_i,q_i \rangle$
be a parameter set, and $k_\alpha$ be a difference kernel, i.e.,
$k_\alpha(t_i,\tau)=k_\alpha(t_i-\tau)$ such that
$k_\alpha\in L_1([0,b_i-a_i])$, $i=1,2$.
If $f,g,\eta_1,\eta_2\in C\left(\Delta_2\right)$, then the
generalized partial fractional integrals satisfy the following identity:
\begin{multline*}
\int\limits_{a_1}^{b_1}\int\limits_{a_2}^{b_2}
\left[g(\textbf{t})\left(K_{P_{t_1}}^{\alpha}\eta_1\right)(\textbf{t})
+f(\textbf{t})\left(K_{P_{t_2}}^{\alpha}\eta_2\right)(\textbf{t})\right]dt_2 dt_1\\
=\int\limits_{a_1}^{b_1}\int\limits_{a_2}^{b_2}
\eta_1(\textbf{t})\left[\left(K_{P_{t_1}^*}^{\alpha}g\right)(\textbf{t})\right]
+\eta_2(\textbf{t})\left[\left(K_{P_{t_2}^*}^{\alpha}f\right)(\textbf{t})\right]dt_2 dt_1,
\end{multline*}
where $P_{t_i}^*$ is the dual of $P_{t_i}$, $i=1,2$.
\end{theorem}
\begin{proof}
Define
\[
F_1(\textbf{t},\tau):=
\left\{
\begin{array}{ll}
\left|p_1 k_\alpha(t_1-\tau)\right|
\cdot\left|g(\textbf{t})\right|\cdot \left|\eta_1(\tau,t_2)\right|
& \mbox{if $\tau \leq t_1$}\\
\left|q_1 k_\alpha(\tau-t_1)\right|
\cdot \left|g(\textbf{t})\right|\cdot\left|\eta_1(\tau,t_2)\right|
& \mbox{if $\tau > t_1$}
\end{array}\right.
\]
for all $(\textbf{t},\tau)\in [a_1,b_1]\times [a_2,b_2]\times [a_1,b_1]$ and
\[
F_2(\textbf{t},\tau):=
\left\{
\begin{array}{ll}
\left|p_2 k_\alpha(t_2-\tau)\right|
\cdot\left|f(\textbf{t})\right|\cdot \left|\eta_2(t_1,\tau)\right|
& \mbox{if $\tau \leq t_2$}\\
\left|q_2 k_\alpha(\tau-t_2)\right|
\cdot \left|f(\textbf{t})\right|\cdot\left|\eta_2(t_1,\tau)\right|
& \mbox{if $\tau > t_2$}
\end{array}\right.
\]
for all $(\textbf{t},\tau)\in [a_1,b_1]\times [a_2,b_2]\times [a_2,b_2]$.
Since $f,g$ and $\eta_i$, $i=1,2$, are continuous functions on $\Delta_2$,
they are bounded on $\Delta_2$. Hence, there exist real numbers
$C_1,C_2,C_3,C_4>0$ such that
$$
\left|f(\textbf{t})\right|\leq C_1, \quad
\left|g(\textbf{t})\right| \leq C_2, \quad
\left|\eta_1(\textbf{t})\right| \leq C_3,
\quad \left|\eta_2(\textbf{t})\right| \leq C_4,
$$
for all $\textbf{t}\in \Delta_2$. Therefore,
\begin{equation*}
\begin{split}
\int_{a_1}^{b_1}&\int_{a_2}^{b_2}\int_{a_1}^{b_1}
F_1(\textbf{t},\tau)dt_1 dt_2 d\tau
+ \int_{a_1}^{b_1} \int_{a_2}^{b_2} \int_{a_2}^{b_2}
F_2(\textbf{t},\tau)dt_2 d\tau dt_1\\
&=\int_{a_1}^{b_1}\left(\int_{a_2}^{b_2}\left(
\int_{\tau}^{b_1} \left|p_1 k_\alpha(t_1-\tau)\right|
\cdot\left|g(\textbf{t})\right|\cdot \left|\eta_1(\tau,t_2)\right| dt_1\right.\right. \\
&\quad + \left.\left.\int_{a_1}^{\tau} \left|q_1 k_\alpha(\tau-t_1)\right|
\cdot \left|g(\textbf{t})\right|\cdot\left|\eta_1(\tau,t_2)\right| dt_1 \right)dt_2 \right) d\tau \\
&\quad +\int_{a_1}^{b_1}\left(\int_{a_2}^{b_2}\left(\int_{\tau}^{b_2} \left|p_2 k_\alpha(t_2-\tau)\right|
\cdot\left|f(\textbf{t})\right|\cdot \left|\eta_2(t_1,\tau)\right| dt_2 \right.\right.\\
&\quad +\left.\left.\int_{a_2}^{\tau} \left|q_2 k_\alpha(\tau-t_2)\right|
\cdot \left|f(\textbf{t})\right|\cdot\left|\eta_2(t_1,\tau)\right| dt_2 \right)d\tau \right)dt_1\\
&\leq C_2 C_3 \left[\int_{a_1}^{b_1}\left(\int_{a_2}^{b_2}\left(\int_{\tau}^{b_1}
\left|p_1 k_\alpha(t_1-\tau)\right|dt_1\right.\right.\right. \\
&\quad +\left.\left.\left.\int_{a_1}^{\tau}
\left|q_1 k_\alpha(\tau-t_1)\right| dt_1\right)dt_2\right)d\tau\right]\\
&\quad + C_1 C_4 \left[\int_{a_1}^{b_1}\left(\int_{a_2}^{b_2}
\left(\int_{\tau}^{b_2}\left|p_2 k_\alpha(t_2-\tau)\right| dt_2\right.\right.\right.\\
&\quad + \left.\left.\left. \int_{a_2}^{\tau}\left|q_2 k_\alpha(\tau-t_2)\right|dt_2
\right)d\tau \right)dt_1\right]\\
&\leq C_2 C_3 \left[\int_{a_1}^{b_1}\left(\int_{a_2}^{b_2}\left(\int_{0}^{b_1-a_1}
\left|p_1 k_\alpha(u_1)\right|du_1 \right.\right.\right.\\
&\quad + \left.\left.\left.\int_{0}^{b_1-a_1}
\left|q_1 k_\alpha(u_1)\right| du_1\right)dt_2\right)d\tau\right]\\
&\quad + C_1 C_4 \left[\int_{a_1}^{b_1}\left(\int_{a_2}^{b_2}\left(
\int_{0}^{b_2-a_2}\left|p_2 k_\alpha(u_2)\right| du_2\right.\right.\right.\\
&\quad + \left.\left.\left.\int_{0}^{b_2-a_2}
\left|q_2 k_\alpha(u_2)\right|du_2 \right)d\tau \right)dt_1\right]\\
&= C_2 C_3 \left(\left|p_1\right|+\left|q_1\right|\right)\left\|k_{\alpha}\right\|(b_2-a_2)(b_1-a_1)\\
&\quad + C_1 C_4 \left(\left|p_2\right|+\left|q_2\right|\right)
\left\|k_{\alpha}\right\|(b_2-a_2)(b_1-a_1)\\
&< \infty.
\end{split}
\end{equation*}
Hence, we can use Fubini's theorem to change
the order of integration in the iterated integrals:
\begin{equation*}
\begin{split}
\int_{a_1}^{b_1}&\int_{a_2}^{b_2}\left[g(\textbf{t})\left(K_{P_{t_1}}^{\alpha}\eta_1\right)(\textbf{t})
+f(\textbf{t})\left(K_{P_{t_2}}^{\alpha}\eta_2\right)(\textbf{t})\right]dt_2 dt_1
\end{split}
\end{equation*}
\begin{equation*}
\begin{split}
&= \int_{a_1}^{b_1}\int_{a_2}^{b_2}\left[g(\textbf{t})\left(p_1 \int_{a_1}^{t_1}
k_\alpha (t_1-\tau)\eta_1(\tau,t_2)d\tau\right.\right.\\
&\quad + \left. q_1\int_{t_1}^{b_1} k_\alpha (\tau-t_1)\eta_1(\tau,t_2)d\tau\right)\\
&\quad +f(\textbf{t})\left(p_2 \int_{a_2}^{t_2} k_\alpha (t_2-\tau)\eta_2(t_1,\tau)d\tau\right.\\
&\quad + \left.\left. q_2\int_{t_2}^{b_2} k_\alpha (\tau-t_2)\eta_2(t_1,\tau)d\tau\right)\right]dt_2 dt_1\\
&=\int_{a_2}^{b_2}\left(\int_{a_1}^{b_1}\eta_1(\tau,t_2)\left(p_1\int_{\tau}^{b_1}
k_\alpha (t_1-\tau)g(\textbf{t})dt_1\right.\right.\\
&\quad + \left.\left. q_1\int_{a_1}^{\tau}k_\alpha (\tau-t_1)g(\textbf{t})dt_1\right)d\tau \right)dt_2\\
& \quad +\int_{a_1}^{b_1}\left(\int_{a_2}^{b_2}\eta_2(t_1,\tau)\left(p_2 \int_{\tau}^{b_2}
k_\alpha(t_2-\tau)f(\textbf{t})dt_2\right.\right.\\
&\quad + \left.\left. q_2 \int_{a_2}^{\tau}k_\alpha(\tau-t_2)f(\textbf{t})dt_2\right)d\tau\right)dt_1\\
&=\int_{a_1}^{b_1}\int_{a_2}^{b_2}\eta_1(\tau,t_2)\left(K_{P_{t_1}^{*}}^{\alpha}g\right)(\tau,t_2) dt_2 d\tau \\
&\quad + \int_{a_1}^{b_1}\int_{a_2}^{b_2}\eta_2(t_1,\tau)\left(K_{P_{t_2}^{*}}^{\alpha}f\right)(t_1,\tau) d\tau dt_1.
\end{split}
\end{equation*}
\end{proof}
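Theorem~\ref{theorem:GRI} can also be checked numerically for a bounded kernel. The sketch below (our illustration, not part of the paper) discretizes the case $k_\alpha \equiv 1$ on $[0,1]^2$ with the midpoint rule; weighting the diagonal quadrature cell by $(p+q)/2$ makes the discrete duality exact, so the two sides agree to rounding error. The polynomial test functions and $p$-set values are arbitrary choices:

```python
# Discretized check of the 2D integration-by-parts identity for the
# bounded kernel k_alpha = 1 on [0,1]^2.  Test data are arbitrary.
N = 40
h = 1.0 / N
xs = [(i + 0.5) * h for i in range(N)]

def weights(p, q):
    # w[i][j] multiplies eta(tau_j) in the quadrature for (K eta)(t_i);
    # the diagonal weight (p+q)/2 makes the discrete duality exact.
    return [[p if j < i else q if j > i else 0.5 * (p + q)
             for j in range(N)] for i in range(N)]

g    = lambda t1, t2: t1 + t2 * t2
f    = lambda t1, t2: t1 * t2
eta1 = lambda t1, t2: t1 * t1 + t2
eta2 = lambda t1, t2: 1.0 + t1 * t2

p1, q1, p2, q2 = 2.0, -1.0, 0.5, 3.0
W1, W2   = weights(p1, q1), weights(p2, q2)
W1d, W2d = weights(q1, p1), weights(q2, p2)   # dual p-sets swap p and q

lhs = h ** 3 * sum(
    g(xs[i], xs[m]) * sum(W1[i][j] * eta1(xs[j], xs[m]) for j in range(N))
    + f(xs[i], xs[m]) * sum(W2[m][j] * eta2(xs[i], xs[j]) for j in range(N))
    for i in range(N) for m in range(N))

rhs = h ** 3 * sum(
    eta1(xs[i], xs[m]) * sum(W1d[i][j] * g(xs[j], xs[m]) for j in range(N))
    + eta2(xs[i], xs[m]) * sum(W2d[m][j] * f(xs[i], xs[j]) for j in range(N))
    for i in range(N) for m in range(N))

assert abs(lhs - rhs) < 1e-9
```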
We are now in a position to state and prove the main result of the paper:
Green's theorem for generalized fractional derivatives.
\begin{theorem}[Generalized Green's Theorem]
\label{thm:ggt}
Let $0<\alpha<1$ and $f,g,\eta\in C^1\left(\Delta_2\right)$.
Let $k_\alpha$ be a difference kernel, i.e.,
$k_\alpha(t_i,\tau)=k_\alpha(t_i-\tau)$ such that $k_\alpha\in L_1([0,b_i-a_i])$,
$i=1,2$, and $K_{P_{t_1}^*}^{1-\alpha}g,K_{P_{t_2}^*}^{1-\alpha}f\in C^1\left(\Delta_2\right)$.
Then, the following formula holds:
\begin{multline*}
\int_{a_1}^{b_1}\int_{a_2}^{b_2}\left[g(\textbf{t})\left(B_{P_{t_1}}^{\alpha}\eta\right)(\textbf{t})
+f(\textbf{t})\left(B_{P_{t_2}}^{\alpha}\eta\right)(\textbf{t})\right]dt_2 dt_1\\
=-\int_{a_1}^{b_1}\int_{a_2}^{b_2}\eta(\textbf{t})\left[\left(A_{P_{t_1}^*}^{\alpha}g\right)(\textbf{t})
+\left(A_{P_{t_2}^*}^{\alpha}f\right)(\textbf{t})\right]dt_2 dt_1\\
+\oint_{\partial\Delta_2}\eta(\textbf{t})\left[\left(K_{P_{t_1}^*}^{1-\alpha}g\right)(\textbf{t})dt_2
-\left(K_{P_{t_2}^*}^{1-\alpha}f\right)(\textbf{t})dt_1\right].
\end{multline*}
\end{theorem}
\begin{proof}
By the definition of generalized partial Caputo fractional derivative,
Theorem~\ref{theorem:GRI}, and the standard Green's theorem, one has
\begin{equation*}
\begin{split}
\int_{a_1}^{b_1}&\int_{a_2}^{b_2}\left[g(\textbf{t})\left(B_{P_{t_1}}^{\alpha}\eta\right)(\textbf{t})
+f(\textbf{t})\left(B_{P_{t_2}}^{\alpha}\eta\right)(\textbf{t})\right]dt_2 dt_1\\
&=\int_{a_1}^{b_1}\int_{a_2}^{b_2}\left[g(\textbf{t})\left(
K_{P_{t_1}}^{1-\alpha}\frac{\partial}{\partial t_1} \eta\right)(\textbf{t})
+ f(\textbf{t})\left(K_{P_{t_2}}^{1-\alpha}\frac{\partial}{\partial t_2}
\eta\right)(\textbf{t})\right]dt_2 dt_1\\
&=\int_{a_1}^{b_1}\int_{a_2}^{b_2}\left[\frac{\partial}{\partial t_1}\eta(\textbf{t})
\left(K_{P_{t_1}^*}^{1-\alpha} g\right)(\textbf{t})
+ \frac{\partial}{\partial t_2}\eta(\textbf{t})
\left(K_{P_{t_2}^*}^{1-\alpha} f\right)(\textbf{t})\right]dt_2 dt_1\\
&=-\int_{a_1}^{b_1}\int_{a_2}^{b_2}\eta(\textbf{t})\left[\frac{\partial}{\partial t_1}
\left(K_{P_{t_1}^{*}}^{1-\alpha}g\right)(\textbf{t})
+ \frac{\partial}{\partial t_2}\left(
K_{P_{t_2}^{*}}^{1-\alpha}f\right)(\textbf{t})\right]dt_2 dt_1\\
&\quad + \oint_{\partial\Delta_2}\eta(\textbf{t})\left[\left(K_{P_{t_1}^*}^{1-\alpha}g\right)(\textbf{t})dt_2
-\left(K_{P_{t_2}^*}^{1-\alpha}f\right)(\textbf{t})dt_1\right].
\end{split}
\end{equation*}
\end{proof}
\begin{corollary}
Let $0<\alpha<1$ and $f,g,\eta\in C^1\left(\Delta_2\right)$.
If $\left({_{t_1}}\textsl{I}^{1-\alpha}_{b_1} g\right)(\textbf{t})$
and $\left({_{t_2}}\textsl{I}^{1-\alpha}_{b_2} f\right)(\textbf{t})$
are continuously differentiable on the rectangle $\Delta_2$, then
\begin{multline*}
\int_{a_1}^{b_1}\int_{a_2}^{b_2}\left[g(\textbf{t})\left({^{C}_{a_1}}\textsl{D}^{\alpha}_{t_1}
\eta\right)(\textbf{t})+f(\textbf{t})\left({^{C}_{a_2}}\textsl{D}^{\alpha}_{t_2}
\eta\right)(\textbf{t})\right]dt_2 dt_1\\
=\int_{a_1}^{b_1}\int_{a_2}^{b_2}\eta(\textbf{t})\left[
\left({_{t_1}}\textsl{D}^{\alpha}_{b_1} g\right)(\textbf{t})
+\left({_{t_2}}\textsl{D}^{\alpha}_{b_2} f\right)(\textbf{t})\right]dt_2 dt_1\\
+\oint_{\partial\Delta_2}\eta(\textbf{t})\left[\left({_{t_1}}\textsl{I}^{1-\alpha}_{b_1}
g\right)(\textbf{t})dt_2-\left({_{t_2}}\textsl{I}^{1-\alpha}_{b_2} f\right)(\textbf{t})dt_1\right].
\end{multline*}
\end{corollary}
\section*{Acknowledgements}
This work received \emph{The Grunwald--Letnikov Award: Best Student Paper (theory)},
at the 2012 Symposium on Fractional Differentiation and Its Applications (FDA'2012),
May 16, 2012, Hohai University, Nanjing.
It was supported by {\it FEDER} funds through
{\it COMPETE} (Operational Program Factors of Competitiveness)
and by Portuguese funds through the
{\it Center for Research and Development
in Mathematics and Applications} (CIDMA)
and the Portuguese Foundation for Science and Technology (FCT),
within project PEst-C/MAT/UI4106/2011
with COMPETE number FCOMP-01-0124-FEDER-022690.
Odzijewicz was also supported by FCT under Ph.D. fellowship
SFRH/BD/33865/2009; Malinowska by Bia{\l}ystok
University of Technology grant S/WI/02/2011;
and Torres by FCT through the project PTDC/MAT/113470/2009.
https://arxiv.org/abs/2210.12371 | Structure of singular and nonsingular tournament matrices | A tournament is a directed graph resulting from an orientation of the complete graph; so, if $M$ is a tournament's adjacency matrix, then $M + M^T$ is a matrix with $0$s on its diagonal and all other entries equal to $1$. An outstanding question in tournament theory asks to classify the adjacency matrices of tournaments which are singular (or nonsingular). We study this question using the structure of tournaments as graphs, in particular their cycle structure. More specifically, we find, as precisely as possible, the number of cycles of length three that dictates whether the corresponding tournament matrix is singular or nonsingular. We also give structural classifications of the tournaments that have the specified numbers of cycles of length three. | \section{Introduction}
A \underline{tournament matrix} of order $n$ is an $n \times n$ $(0,1)$-matrix $M = [m_{ij}]$ which satisfies
$$M + M^T = J_n - I_n,$$
where $J_n$ denotes the $n \times n$ matrix of all 1s and $I_n$ denotes the $n \times n$ identity matrix. A \underline{tournament} of order $n$ is a digraph obtained by arbitrarily orienting each edge of the complete graph on $n$ vertices. Thus a tournament matrix is merely the adjacency matrix of a tournament. Whenever a tournament has property $P$, we say that the corresponding tournament matrix has property $P$ and vice versa. We denote the set of order $n$ tournament matrices by $\mathcal{T}_{n}$ and the set of order $n$ singular tournament matrices by $\S$. We denote the number of $3$-cycles in a tournament matrix $M$ by $C_3(M)$.
The \underline{score vector} of an order $n$ tournament matrix $M$ is $s := M\bm{1}_n$ where $\bm{1}_n \in \mathbb{C}^n$ is the vector of 1s. An \underline{ordered score vector} is a score vector in nondecreasing order. In this paper, we refer to an ordered score vector simply as a score vector. From the score vector, we can compute the number of $3$-cycles using the following well-known formula.
\begin{proposition}[Score Vector Formula for $3$-Cycles]
Let $M \in \mathcal{T}_{n}$ and $s = (s_1,...,s_n)$ its score vector. The number of three cycles in $M$ is
$$C_3(M) = \binom{n}{3} - \sum_{i=1}^n \binom{s_i}{2}.$$
\end{proposition}
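As a sanity check, the score vector formula can be verified against a direct count of cyclic triples. The sketch below (plain Python, not from the paper) enumerates every tournament of a given small order; a triple of vertices spans a $3$-cycle exactly when it is not transitively ordered.

```python
from itertools import combinations, product
from math import comb

def tournaments(n):
    """Yield every n x n tournament matrix, one orientation per vertex pair."""
    pairs = list(combinations(range(n), 2))
    for bits in product((0, 1), repeat=len(pairs)):
        M = [[0] * n for _ in range(n)]
        for (i, j), b in zip(pairs, bits):
            M[i][j], M[j][i] = b, 1 - b
        yield M

def c3_direct(M):
    """Count 3-cycles directly: a triple {i,j,k} is cyclic iff the three
    arcs all follow (sum 3) or all oppose (sum 0) the orientation i->j->k->i."""
    return sum(
        1
        for i, j, k in combinations(range(len(M)), 3)
        if M[i][j] + M[j][k] + M[k][i] in (0, 3)
    )

def c3_formula(M):
    """Score vector formula: C3(M) = C(n,3) - sum_i C(s_i, 2)."""
    n = len(M)
    return comb(n, 3) - sum(comb(sum(row), 2) for row in M)
```

For instance, both counts agree on all $2^{10} = 1024$ tournaments of order $5$, where the maximum of $C_3$ is $5$, attained by the regular tournament.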
For two vertices $u$ and $v$ in a digraph, if there is an arc from $u$ to $v$, then we say that $u$ beats $v$ and we write $u \to v$. A \underline{transitive tournament} is a tournament such that if $u \to v$ and $v \to w$, then $u \to w$. An order $n$ transitive tournament matrix has the score vector $(0,1,2,...,n-1)$. A \underline{regular tournament} is a tournament whose vertices all have the same outdegree. A regular tournament matrix must have odd order $n$, and its score vector is $(\frac{n-1}{2},...,\frac{n-1}{2})$. An \underline{almost regular} tournament is a tournament whose maximum difference in outdegrees between vertices is $1$. An almost regular tournament matrix must have even order $n$, and its score vector is $(\frac{n}{2}-1,...,\frac{n}{2}-1,\frac{n}{2},...,\frac{n}{2})$.
A pair of vertices in a digraph are called \underline{strongly connected} if there exists a directed walk from one to the other and vice versa. A digraph is called \underline{strongly connected} or \underline{strong} if each pair of vertices is strongly connected. A digraph is called \underline{Hamiltonian} if it contains a directed cycle that traverses through all the vertices, i.e. if it contains a \underline{Hamiltonian cycle}.
Being strongly connected is an equivalence relation on the set of vertices of a digraph. Thus a tournament may be partitioned into strongly connected components. The \ul{strongly connected component} in a digraph $D$ containing the vertex $v$ is the induced subdigraph which contains all vertices of $D$ which are strongly connected to $v$. Throughout this paper, we denote the strongly connected components of a tournament matrix $M$ by $M(V_1),...,M(V_k)$ where $V_1,...,V_k$ are their vertex sets.
An \underline{upset tournament} is generated by taking a transitive tournament, finding a directed path from the vertex of degree $n-1$ to the vertex of degree 0, and switching the directions of the arcs on that path. An order $n$ upset tournament has the score vector $(1,1,2,...,n-3,n-2,n-2)$. Burzio showed in \cite{Burzio} that the strongly connected tournaments which contain the least number of $3$-cycles are precisely the upset tournaments.
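For illustration, here is a minimal sketch (plain Python, not from the paper) of one upset tournament per order, obtained by reversing the single arc from the top vertex to the bottom vertex, which is the shortest admissible path. Its score vector and $3$-cycle count match the values quoted above.

```python
from math import comb

def upset_tournament(n):
    """One member of the upset family (n >= 3): start from the transitive
    tournament in which vertex i beats every j < i, then reverse the single
    arc from the top vertex (score n-1) to the bottom vertex (score 0)."""
    M = [[1 if i > j else 0 for j in range(n)] for i in range(n)]
    M[n - 1][0], M[0][n - 1] = 0, 1  # reverse the arc top -> bottom
    return M

def c3(M):
    """Score vector formula for the number of 3-cycles."""
    n = len(M)
    return comb(n, 3) - sum(comb(sum(row), 2) for row in M)
```

The resulting score vector is $(1,1,2,...,n-3,n-2,n-2)$ and the $3$-cycle count is $n-2$, as stated below in the discussion of \Cref{thm:upset}.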
Colloquially, for a fixed $n$, tournament matrices are on a spectrum from `transitive' to `regular'. $C_3$ is a measurement of this spectrum: transitive tournaments have the least number of $3$-cycles, while regular and almost regular tournaments have the most number of $3$-cycles. Transitive tournament matrices are singular. Regular and almost regular tournament matrices are nonsingular for $n \ge 3$. Thus singular tournament matrices attain an interesting maximum of $C_3$ and nonsingular tournament matrices attain an interesting minimum of $C_3$.
In this paper, we investigate the extrema in Figure \ref{fig:Spectrum}.
\begin{figure}[H]
\centering
\begin{tikzpicture}[scale=1.5,>=stealth,
every node/.style={align=center,scale=.8}]
\draw[black,line width=1pt] (1,0)--(8,0);
\foreach \i in {1, 3, 6, 8}
\draw[black, ultra thick] (\i,.1)--(\i,-.1);
\draw (1,0.2) node[above]{Transitive};
\draw (8,0.2) node[above]{Regular};
\draw (2,-0.1) node[below]{Singular};
\draw (4.5,-0.1) node[below]{Singular/Nonsingular};
\draw (7,-0.1) node[below]{Nonsingular};
\draw (3,0.2) node[above]{Nonsingular \\ Minimum};
\draw (6,0.2) node[above]{Singular \\ Maximum};
\end{tikzpicture}
\caption{Transitive-Regular Spectrum with Extrema}
\label{fig:Spectrum}
\end{figure}
We give structural classifications of nonsingular tournament matrices which minimize $C_3$ and the number of $3$-cycles they contain in \Cref{thm:nonsingularextreme}.
\begin{restatable*}{theorem}{nonsingularextreme}
\label{thm:nonsingularextreme}
Let $n \ge 3$ and let $M \in \mathcal{T}_{n}$. If $M \in \mathcal{T}_{n}\setminus\S$, then $\displaystyle C_3(M) \ge n - 2\floor*{\frac{n}{3}}$. Furthermore, $M \in \mathcal{T}_{n}\setminus\S$ and $\displaystyle C_3(M) = n - 2\floor*{\frac{n}{3}}$ if and only if the strongly connected components of $M$ are upset tournaments and the number of strongly connected components is $\displaystyle\floor*{\frac{n}{3}}$.
\end{restatable*}
Furthermore, we give the number of $3$-cycles of singular tournament matrices which maximize $C_3$ for $n$ even, and we give bounds for $n$ odd.
\begin{restatable*}{proposition}{singularextreme}
\label{prop:singularextreme}
If $n$ is even, then
$$\max_{M \in \S} C_3(M) = \frac{1}{4}\binom{n}{3}.$$
If $n$ is odd, then
$$2\binom{\frac{n+1}{2}}{3} \le \max_{M \in \S} C_3(M) \le \frac{1}{4}\binom{n}{3}.$$
\end{restatable*}
We do not have an encompassing structural characterization of singular tournament matrices which maximize $C_3$. We are aware of a trivial subset: tournaments of order $n$ that are obtained by adding either a sink or a source to a regular or almost regular tournament of order $n-1$ (see \Cref{thm:TSE}). We prove in \Cref{prop:nontrivialifandonlyifstrong} that singular tournament matrices which maximize $C_3$ are nontrivial if and only if they are strongly connected.
\section{Maximum Number of $3$-Cycles for Singular Tournament Matrices}
The following proposition has been discovered a multitude of times by different authors. We reference Moon \cite{Moon} for a proof and a small list of original discoverers.
\begin{proposition}\label{prop:max3cycles}
Every tournament matrix $M \in \mathcal{T}_{n}$ satisfies
$$C_3(M) \le \begin{cases}
\frac{1}{4}\binom{n+1}{3}, & \text{if $n$ is odd}, \\
2\binom{\frac{n}{2}+1}{3}, & \text{if $n$ is even},
\end{cases}$$
and equality holds if and only if $M$ is regular or almost regular.
\end{proposition}
Next, we investigate the maximum of $C_3$ over singular tournament matrices.
\begin{lemma}\label{lem:maxSing>=maxPrevTourn}
$$\max_{M \in \mathcal{S}_{n+1}} C_3(M) \ge \max_{M \in \mathcal{T}_{n}} C_3(M).$$
\end{lemma}
\begin{proof}
Let $M \in \mathcal{T}_{n}$ maximize $C_3$ over $\mathcal{T}_{n}$. From the tournament $M$, create a new tournament $N \in \mathcal{T}_{n+1}$ which contains all the vertices and arcs of $M$, plus an additional vertex that is a sink (resp. source), i.e., a vertex that is beaten by (resp. beats) all other vertices. It is straightforward to check that $C_3(N) = C_3(M)$. By construction, the row (resp. column) of $N$ corresponding to the sink (resp. source) is a row (resp. column) of 0s. Therefore, $N \in \mathcal{S}_{n+1}$.
\end{proof}
Shader showed in \cite{Shader} (Corollary 3.2) that if $C_3(M) > \displaystyle\frac{1}{4}\binom{n}{3}$, then $M$ is nonsingular. Taking the contrapositive yields the following lemma.
\begin{lemma}[Shader's Inequality]\label{lem:Max}
If $M \in \S$, then $C_3(M) \le \displaystyle\frac{1}{4}\binom{n}{3}$.
\end{lemma}
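Shader's inequality can be checked exhaustively for small orders. The sketch below (plain Python, not from the paper) uses an exact Leibniz-expansion determinant and enumerates all tournaments of orders $4$ and $5$; combined with the sink construction of \Cref{lem:maxSing>=maxPrevTourn}, the maximum of $C_3$ over singular matrices is $1$ for $n=4$ and $2$ for $n=5$.

```python
from itertools import combinations, permutations, product
from math import comb

def det(M):
    """Exact integer determinant via the Leibniz expansion (small n only)."""
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        sign, seen = 1, [False] * n
        for i in range(n):  # permutation parity via cycle decomposition
            if not seen[i]:
                j, length = i, 0
                while not seen[j]:
                    seen[j], j, length = True, p[j], length + 1
                if length % 2 == 0:
                    sign = -sign
        term = sign
        for i in range(n):
            term *= M[i][p[i]]
        total += term
    return total

def tournaments(n):
    """Yield every n x n tournament matrix."""
    pairs = list(combinations(range(n), 2))
    for bits in product((0, 1), repeat=len(pairs)):
        M = [[0] * n for _ in range(n)]
        for (i, j), b in zip(pairs, bits):
            M[i][j], M[j][i] = b, 1 - b
        yield M

def c3(M):
    """Score vector formula for the number of 3-cycles."""
    n = len(M)
    return comb(n, 3) - sum(comb(sum(row), 2) for row in M)
```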
The following proposition gives a precise quantity for $\displaystyle\max_{M \in \S}C_3(M)$ when $n$ is even and bounds when $n$ is odd.
\singularextreme
\begin{proof}
If $n$ is even, then by Shader's inequality,
$$\frac{1}{4}\binom{n}{3} = \max_{M \in \mathcal{T}_{n-1}} C_3(M) \le \max_{M \in \S} C_3(M) \le \frac{1}{4}\binom{n}{3}$$
whence $\displaystyle\max_{M \in \S} C_3(M) = \frac{1}{4}\binom{n}{3}$. If $n$ is odd, then
$$2\binom{\frac{n+1}{2}}{3} = \max_{M \in \mathcal{T}_{n-1}} C_3(M) \le \max_{M \in \S} C_3(M)\le \frac{1}{4}\binom{n}{3}.$$
\end{proof}
Almost nothing is known about the tournament matrices that maximize $C_3$ over $\S$. Some of these matrices are obtained by adding a vertex to a regular or almost regular tournament of order $n-1$ (see the proof of \Cref{lem:maxSing>=maxPrevTourn} and \Cref{thm:TSE}), but not every maximizer arises this way: the first $n$ for which other maximizers appear is $n = 7$. From computer computation, there are exactly $3$ nonisomorphic singular tournament matrices which maximize $C_3$ over $\mathcal{S}_7$. These tournament matrices have the score vector $(1,2,2,3,4,4,5)$ and are
\begin{equation*}
\begin{bmatrix}
0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 1 & 0 \\
1 & 0 & 0 & 1 & 0 & 0 & 0 \\
1 & 1 & 0 & 0 & 1 & 0 & 0 \\
1 & 1 & 1 & 0 & 0 & 1 & 0 \\
1 & 0 & 1 & 1 & 0 & 0 & 1 \\
1 & 1 & 1 & 1 & 1 & 0 & 0 \end{bmatrix}, \qquad \begin{bmatrix}
0 & 0 & 1 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 1 & 0 & 1 & 0 & 0 & 0 \\
1 & 1 & 0 & 0 & 1 & 0 & 0 \\
1 & 1 & 1 & 0 & 0 & 1 & 0 \\
1 & 0 & 1 & 1 & 0 & 0 & 1 \\
1 & 1 & 1 & 1 & 1 & 0 & 0 \end{bmatrix}, \qquad \begin{bmatrix}
0 & 0 & 1 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 1 & 0 \\
1 & 0 & 1 & 0 & 1 & 0 & 0 \\
1 & 1 & 1 & 0 & 0 & 0 & 1 \\
1 & 1 & 0 & 1 & 1 & 0 & 0 \\
1 & 1 & 1 & 1 & 0 & 1 & 0 \end{bmatrix}.
\end{equation*}
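As a quick sanity check (plain Python, not from the paper), the first displayed matrix can be verified to be a tournament matrix with the claimed score vector, and the score vector formula gives $C_3 = 8 = 2\binom{4}{3}$, consistent with the lower bound of \Cref{prop:singularextreme} at $n = 7$.

```python
from math import comb

# First of the three displayed matrices, transcribed row by row.
M = [
    [0, 1, 0, 0, 0, 0, 0],
    [0, 0, 1, 0, 0, 1, 0],
    [1, 0, 0, 1, 0, 0, 0],
    [1, 1, 0, 0, 1, 0, 0],
    [1, 1, 1, 0, 0, 1, 0],
    [1, 0, 1, 1, 0, 0, 1],
    [1, 1, 1, 1, 1, 0, 0],
]

# Tournament property: exactly one arc between each pair of distinct vertices.
is_tournament = all(
    M[i][j] + M[j][i] == 1 for i in range(7) for j in range(i + 1, 7)
)

scores = sorted(sum(row) for row in M)                     # ordered score vector
c3_count = comb(7, 3) - sum(comb(s, 2) for s in scores)    # 35 - 27 = 8
```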
\begin{definition}
A singular tournament matrix $M \in \S$ is called a \ul{singular maximizer of $C_3$} if $C_3(M) = \displaystyle \max_{N \in \S} C_3(N)$. A singular maximizer of $C_3$ is called \ul{trivial} if it is obtained by adding a vertex to a regular or almost regular tournament of order $n-1$.
\end{definition}
We classify trivial singular maximizers of $C_3$ in \Cref{thm:TSE}; to do so, we require the following lemma, which may be proven using the score vector formula for $3$-cycles.
\begin{lemma}
Let $T_1$ be a tournament with score vector $s = (s_1,...,s_n)$ and $v_i \to v_j$. Let $T_2$ be the tournament obtained by reversing the arc between $v_i$ and $v_j$ in $T_1$. Then $C_3(T_2) - C_3(T_1) = s_i - s_j - 1$.
\end{lemma}
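The lemma can be confirmed by brute force; the following sketch (plain Python, not from the paper) reverses every arc of every tournament of a small order and compares the change in $C_3$ with $s_i - s_j - 1$.

```python
from itertools import combinations, product
from math import comb

def c3(M):
    """Score vector formula for the number of 3-cycles."""
    n = len(M)
    return comb(n, 3) - sum(comb(sum(row), 2) for row in M)

def tournaments(n):
    """Yield every n x n tournament matrix."""
    pairs = list(combinations(range(n), 2))
    for bits in product((0, 1), repeat=len(pairs)):
        M = [[0] * n for _ in range(n)]
        for (i, j), b in zip(pairs, bits):
            M[i][j], M[j][i] = b, 1 - b
        yield M

def reversal_lemma_holds(M):
    """Check C3(T2) - C3(T1) = s_i - s_j - 1 for every arc i -> j of M."""
    n, s, before = len(M), [sum(row) for row in M], c3(M)
    for i in range(n):
        for j in range(n):
            if M[i][j]:
                M[i][j], M[j][i] = 0, 1          # reverse the arc i -> j
                ok = c3(M) - before == s[i] - s[j] - 1
                M[i][j], M[j][i] = 1, 0          # restore it
                if not ok:
                    return False
    return True
```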
\begin{theorem}\label{thm:TSE}
Let $M \in \S$ be a singular maximizer of $C_3$. The following conditions are equivalent:
\begin{enumerate}[\normalfont(i)]
\item $M$ is a trivial singular maximizer of $C_3$;
\item $M$ is obtained by adding a source or a sink to a regular or almost regular tournament of order $n-1$;
\item $M$ has a strongly connected component of order $1$.
\end{enumerate}
\end{theorem}
\begin{proof}
(ii) $\implies$ (iii) is immediate.
(i) $\implies$ (ii) By definition, there exists a regular or almost regular tournament $R$ of order $n-1$, with vertex set $\{v_1,...,v_{n-1}\}$, such that $M = R + v_n$. Let $N$ be the tournament obtained by adding a vertex $v_n$ to $R$ that is a sink; that is, $v_i \to v_n$ for all $i \neq n$. Then $M$ is obtained from $N$ by reversing some number of arcs ending at $v_n$, say $k$ of them. We split the proof into two cases: when $R$ is regular and when $R$ is almost regular. \\
\underline{Case 1:} $R$ is regular. \\
By the previous lemma,
$$C_3(M) - C_3(N) = \sum_{i=0}^{k-1}\left[\frac{n}{2} - i - 1\right] = \frac{n-k-1}{2}k.$$
Since $R$ is regular, its order $n-1$ is odd, so $n$ is even. Moreover, $C_3(N) = C_3(R) = \frac{1}{4}\binom{n}{3}$, so $N$ is also a singular maximizer of $C_3$ and, by \Cref{prop:singularextreme},
$$C_3(M) = C_3(N) + \frac{n-k-1}{2}k = \frac{1}{4}\binom{n}{3} + \frac{n-k-1}{2}k.$$
Note that for $M$ to be a singular maximizer of $C_3$, the term $\displaystyle\frac{n-k-1}{2}k$ must equal $0$ (otherwise $C_3(M)$ would exceed Shader's bound), which only happens when $k = 0$ or $k = n-1$. Therefore the added vertex $v_n$ in $M$ must be a sink or source. \\
\underline{Case 2:} $R$ is almost regular. \\
Let $k = k_1 + k_2$ where $k_1$ of the arcs begin at a vertex with degree $\displaystyle\frac{n+1}{2}$ and $k_2$ with degree $\displaystyle\frac{n-1}{2}$. By the previous lemma,
$$C_3(M) - C_3(N) = \sum_{i=0}^{k_2-1}\left[\frac{n-1}{2} -i - 1\right] + \sum_{i=k_2}^{k-1}\left[\frac{n+1}{2} -i - 1\right] = \frac{n-k}{2}k - k_2.$$
Since $R$ is almost regular, we know that $n$ is odd and hence $\displaystyle C_3(N) = 2\binom{\frac{n+1}{2}}{3}$. By \Cref{prop:singularextreme},
$$C_3(M) - C_3(N) = \frac{n-k}{2}k - k_2 \le \frac{1}{4}\binom{n}{3} - 2\binom{\frac{n+1}{2}}{3} = \frac{n-1}{8}.$$
We know that this inequality holds for $k = 0$ and $k = n-1$; that is, when $v_n$ is a source or a sink. Assume, by way of contradiction, that $1 \le k \le n-2$. We know that $\displaystyle k_2 \le \min\left\{k,\frac{n-1}{2}\right\}$. Thus
$$\frac{n-k}{2}k - \min\left\{k,\frac{n-1}{2}\right\} \le \frac{n-k}{2}k - k_2.$$
It is straightforward to show that the function on the LHS is minimized when $k = 1$ and when $k = n-2$, and that its minimum is $\displaystyle\frac{n-3}{2}$. The inequality
$$\frac{n-3}{2} \le \frac{n-k}{2}k - \min\left\{k,\frac{n-1}{2}\right\} \le \frac{n-k}{2}k - k_2 \le \frac{n-1}{8}$$
implies that $n \le 3$. It is easy to check that the proposition holds for $n \le 3$.
(iii) $\implies$ (i) Assume that $M$ has a strongly connected component of order $1$. Let $M(V_1),...,M(V_k)$ be the strongly connected components of $M$. Assume, without loss of generality, that $M(V_k)$ is the strongly connected component of order $1$. Let $N$ be a tournament matrix whose strongly connected components are $M(V_1),...,M(V_{k-1})$. The order of $N$ is $n-1$. By \Cref{prop:singularextreme} and \Cref{prop:max3cycles},
$$\begin{cases}
\frac{1}{4}\binom{n}{3}, & \text{if $n$ is even}, \\
2\binom{\frac{n+1}{2}}{3}, & \text{if $n$ is odd},
\end{cases} \le C_3(M) = C_3(N) \le \begin{cases}
\frac{1}{4}\binom{n}{3}, & \text{if $n$ is even}, \\
2\binom{\frac{n+1}{2}}{3}, & \text{if $n$ is odd},
\end{cases}.$$
In order for equality to hold, $N$ must be a regular or almost regular tournament of order $n-1$.
\end{proof}
\begin{lemma}
If $M \in \mathcal{T}_{n}$ ($n \ge 6$) is not strongly connected and the strongly connected components of $M$ each contain at least $3$ vertices, then
$$C_3(M) \le \frac{1}{4}\binom{n-2}{3} + 1.$$
\end{lemma}
\begin{proof}
Let $M(V_1),...,M(V_k)$ be the strongly connected components of $M$. Let $N$ be a tournament matrix whose strongly connected components are $M(V_1),...,M(V_{k-1})$. Let $\ell = |V_1| + \cdots + |V_{k-1}|$ be the order of $N$. By \Cref{prop:max3cycles},
$$C_3(N) = \sum_{i=1}^{k-1}C_3(M(V_i)) \le \begin{cases}
\frac{1}{4}\binom{\ell+1}{3}, & \text{if $\ell$ is odd}, \\
2\binom{\frac{\ell}{2}+1}{3}, & \text{if $\ell$ is even},
\end{cases} \le \frac{1}{4}\binom{\ell+1}{3}.$$
Therefore,
$$C_3(M) = \sum_{i=1}^{k}C_3(M(V_i)) \le \frac{1}{4}\binom{\ell+1}{3} + \frac{1}{4}\binom{n-\ell+1}{3} = \frac{n}{24}(3\ell^2 - 3n\ell + n^2 - 1).$$
The expression $\frac{n}{24}(3\ell^2 - 3n\ell + n^2 - 1)$ is a quadratic polynomial in $\ell$ with a positive leading coefficient, and since each strongly connected component contains at least $3$ vertices, $3 \le \ell \le n-3$. Hence the expression is maximized at the endpoints $\ell = 3$ and $\ell = n-3$, where its value is $\displaystyle\frac{1}{4}\binom{n-2}{3} + 1$. Therefore,
$$C_3(M) \le \frac{1}{4}\binom{n-2}{3} + 1.$$
\end{proof}
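The endpoint computation in this proof is easy to double-check numerically. The sketch below (plain Python, not from the paper) evaluates $24$ times the quadratic bound, to keep the arithmetic in exact integers, and confirms that its maximum over $3 \le \ell \le n-3$ equals $\frac{1}{4}\binom{n-2}{3} + 1$.

```python
from math import comb

def bound24(n, l):
    """24 times the proof's quadratic bound (n/24)(3 l^2 - 3 n l + n^2 - 1),
    scaled by 24 so all values are exact integers."""
    return n * (3 * l * l - 3 * n * l + n * n - 1)
```

Here $6\binom{n-2}{3} + 24 = 24\left(\frac{1}{4}\binom{n-2}{3} + 1\right)$, which is the scaled endpoint value.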
\begin{proposition}\label{prop:nontrivialifandonlyifstrong}
Let $M \in \S$ ($n \ge 7$) be a singular maximizer of $C_3$. $M$ is nontrivial if and only if $M$ is strongly connected.
\end{proposition}
\begin{proof}
Clearly if $M$ is trivial, then $M$ is not strongly connected. On the other hand, assume, by way of contradiction, that $M$ is nontrivial and not strongly connected. By \Cref{thm:TSE}, the strongly connected components of $M$ each have at least $3$ vertices. By the previous lemma,
$$C_3(M) \le \frac{1}{4}\binom{n-2}{3} + 1.$$
However, by \Cref{lem:maxSing>=maxPrevTourn} and \Cref{prop:max3cycles}, it is easy to check that, for $n \ge 7$,
$$C_3(M) \le \frac{1}{4}\binom{n-2}{3} + 1 < \max_{N \in \mathcal{T}_{n-1}} C_3(N) \le \max_{N \in \S} C_3(N) = C_3(M),$$
a contradiction.
Therefore, if $M$ is nontrivial, then $M$ is strongly connected.
\end{proof}
\section{Minimum Number of $3$-Cycles for Nonsingular Tournament Matrices}
The goal of this section is to prove \Cref{thm:nonsingularextreme}. The following proposition may be proven by looking at the condensation of a tournament matrix, but the proof is omitted here.
\begin{proposition}\label{prop:Determinant}
Let $M$ be a tournament matrix and let $M(V_1),...,M(V_k)$ be its strongly connected components. Then $$\det(M) = \det(M(V_1)) \cdots \det(M(V_k)).$$
\end{proposition}
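A small hypothetical instance of the proposition (plain Python, not from the paper): glue two directed $3$-cycles together with every arc between them pointing from the first component to the second. The tournament matrix is block upper triangular, so its determinant equals the product of the component determinants.

```python
from itertools import permutations

def det(M):
    """Exact integer determinant via the Leibniz expansion (small n only)."""
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        sign, seen = 1, [False] * n
        for i in range(n):  # permutation parity via cycle decomposition
            if not seen[i]:
                j, length = i, 0
                while not seen[j]:
                    seen[j], j, length = True, p[j], length + 1
                if length % 2 == 0:
                    sign = -sign
        term = sign
        for i in range(n):
            term *= M[i][p[i]]
        total += term
    return total

F1 = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]  # directed 3-cycle, det = 1

# Tournament with two strongly connected components, each a 3-cycle,
# where every vertex of the first component beats every vertex of the second.
M = [[0] * 6 for _ in range(6)]
for i in range(3):
    for j in range(3):
        M[i][j] = F1[i][j]
        M[3 + i][3 + j] = F1[i][j]
        M[i][3 + j] = 1  # arcs from component 1 to component 2
```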
\begin{corollary}\label{cor:SingIffStrCnnCmpts}
A tournament matrix $M$ is singular if and only if one of its strongly connected components is singular.
\end{corollary}
\begin{corollary}\label{cor:nonsingularscc3verts}
If $M$ is a nonsingular tournament matrix, then the strongly connected components of $M$ contain at least three vertices.
\end{corollary}
\begin{proof}
We prove the contrapositive. Assume $M$ has a strongly connected component with fewer than 3 vertices. A strongly connected component with exactly 2 vertices is impossible, since there is exactly one arc between any two vertices; hence $M$ has a strongly connected component with exactly 1 vertex, say $M(V_\ell)$. Since $M(V_\ell) = [0]$ is singular, we get that $M$ is singular, as desired.
\end{proof}
Note that if $M \in \mathcal{T}_{n}$ is an upset tournament, then $C_3(M) = n-2$ (this may be computed using the score vector formula for $3$-cycles). As mentioned above, Burzio showed in \cite{Burzio} that the strongly connected tournaments which contain the least number of $3$-cycles are precisely the upset tournaments. This fact gives us our next theorem.
\begin{theorem}\label{thm:upset}
If $M$ is a strongly connected tournament matrix of order $n \ge 3$, then $C_3(M) \ge n-2$ and equality holds if and only if $M$ is an upset tournament.
\end{theorem}
The following theorem gives structural classifications of nonsingular tournament matrices at the nonsingular extreme.
\nonsingularextreme
\begin{proof}
Suppose that $M \in \mathcal{T}_{n}\setminus\S$. Let $M(V_1),M(V_2),...,M(V_k)$ be the strongly connected components of $M$ with orders $\ell_1$, $\ell_2$, ..., $\ell_k$, respectively. By \Cref{cor:nonsingularscc3verts}, $\ell_i \ge 3$. By \Cref{thm:upset}, $C_3(M(V_i)) \ge \ell_i-2$. Therefore,
$$C_3(M) = \sum_{i=1}^k C_3(M(V_i)) \ge \sum_{i=1}^k (\ell_i -2) = \sum_{i=1}^k \ell_i - \sum_{i=1}^k 2 = n - 2k.$$ Since $\ell_i \ge 3$, the number of strongly connected components satisfies $k \le \displaystyle\floor*{\frac{n}{3}}$. Therefore,
\begin{equation}\label{eq:inequality}
C_3(M) \ge n - 2k \ge n - 2\floor*{\frac{n}{3}}.
\end{equation}
For the forward implication, assume that $M \in \mathcal{T}_{n}\setminus\S$ and that equality holds. If not all of the strongly connected components of $M$ are upset tournaments, then \Cref{thm:upset} gives $C_3(M) > n - 2k$, so $n - 2\displaystyle\floor*{\frac{n}{3}} = C_3(M) > n - 2k$, which contradicts \eqref{eq:inequality}. Thus the strongly connected components are upset tournaments. Moreover, \eqref{eq:inequality} forces $C_3(M) = n - 2k = n - 2\displaystyle\floor*{\frac{n}{3}}$, and it follows that $k = \displaystyle\floor*{\frac{n}{3}}$.
For the backward implication, assume that the strongly connected components of $M$ are upset tournaments and the number of strongly connected components is $k = \displaystyle\floor*{\frac{n}{3}}$. We first show that $M \in \mathcal{T}_{n}\setminus\S$. Since the $\displaystyle\floor*{\frac{n}{3}}$ components each have order at least $3$ and their orders sum to $n$, the strongly connected components of $M$ are upset tournaments of order $3$, $4$, or $5$. Thus the strongly connected components of $M$ have score vectors $(1,1,1)$, $(1,1,2,2)$, or $(1,1,2,3,3)$. The only tournament matrix (up to isomorphism) with the score vector $(1,1,1)$ is
\begin{equation*}
F_1 = \begin{bmatrix}
0 & 1 & 0 \\
0 & 0 & 1 \\
1 & 0 & 0
\end{bmatrix}.
\end{equation*}
The only tournament matrix (up to isomorphism) with the score vector $(1,1,2,2)$ is
\begin{equation*}
F_2 = \begin{bmatrix}
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
1 & 0 & 0 & 1 \\
1 & 1 & 0 & 0\end{bmatrix}.
\end{equation*}
The only tournament matrices (up to isomorphism) with the score vector $(1,1,2,3,3)$ are
\begin{equation*}
F_3 = \begin{bmatrix}
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
1 & 0 & 0 & 1 & 0 \\
1 & 1 & 0 & 0 & 1 \\
1 & 1 & 1 & 0 & 0 \end{bmatrix} \quad \text{and} \quad
F_4 = \begin{bmatrix}
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
1 & 1 & 0 & 0 & 0 \\
1 & 0 & 1 & 0 & 1 \\
1 & 1 & 1 & 0 & 0 \end{bmatrix}.
\end{equation*}
Observe that $\det(F_1) = \det(F_3) = \det(F_4) = 1$ and $\det(F_2) = -1$. Since isomorphic tournament matrices are permutation similar, the determinants of isomorphic tournaments are the same. By \Cref{cor:SingIffStrCnnCmpts}, $M \in \mathcal{T}_{n}\setminus\S$. Switching inequalities to equalities in the derivation of \Cref{eq:inequality} yields that $C_3(M) = n - 2\displaystyle\floor*{\frac{n}{3}}.$
\end{proof}
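The four determinants and score vectors quoted in the proof can be confirmed mechanically (plain Python, not from the paper, with an exact Leibniz-expansion determinant):

```python
from itertools import permutations

def det(M):
    """Exact integer determinant via the Leibniz expansion (small n only)."""
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        sign, seen = 1, [False] * n
        for i in range(n):  # permutation parity via cycle decomposition
            if not seen[i]:
                j, length = i, 0
                while not seen[j]:
                    seen[j], j, length = True, p[j], length + 1
                if length % 2 == 0:
                    sign = -sign
        term = sign
        for i in range(n):
            term *= M[i][p[i]]
        total += term
    return total

F1 = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]
F2 = [[0, 1, 0, 0], [0, 0, 1, 0], [1, 0, 0, 1], [1, 1, 0, 0]]
F3 = [[0, 1, 0, 0, 0], [0, 0, 1, 0, 0], [1, 0, 0, 1, 0],
      [1, 1, 0, 0, 1], [1, 1, 1, 0, 0]]
F4 = [[0, 1, 0, 0, 0], [0, 0, 0, 1, 0], [1, 1, 0, 0, 0],
      [1, 0, 1, 0, 1], [1, 1, 1, 0, 0]]
```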
\begin{corollary}
Let $M \in \mathcal{T}_{n}\setminus\S$. If $\displaystyle C_3(M) = n - 2\floor*{\frac{n}{3}}$, then
$$\det(M) = \begin{cases}
-1, & \text{if } n \equiv 1 \bmod 3 \\
1, & \text{otherwise.} \end{cases}$$
\end{corollary}
\begin{proof}
If $n \equiv 0 \bmod 3$, then the strongly connected components are all isomorphic to $F_1$. Since isomorphic tournament matrices are permutation similar, their determinants are the same. By \Cref{prop:Determinant}, $\det(M) = \det(F_1) \cdots \det(F_1) = 1$. If $n \equiv 1 \bmod 3$, then all but one strongly connected component is isomorphic to $F_1$. The other has 4 vertices and must be isomorphic to $F_2$. Therefore, $\det(M) = \det(F_1) \cdots \det(F_1)\det(F_2) = -1$. If $n \equiv 2 \bmod 3$, then at most two strongly connected components are not isomorphic to $F_1$. Either they both have $4$ vertices or one has $3$ and the other has $5$. In the former case, both are isomorphic to $F_2$ and hence $\det(M) = \det(F_1) \cdots \det(F_1)\det(F_2)\det(F_2) = 1$. In the latter case, one is isomorphic to $F_1$ and the other is either isomorphic to $F_3$ or $F_4$. Nonetheless, $\det(M) = \det(F_1) \cdots \det(F_1)\det(F_3) = \det(F_1) \cdots \det(F_1)\det(F_4) = 1$.
\end{proof}
This corollary shows that if $M$ is nonsingular and minimizes $C_3$, then $M$ also minimizes $|\det(M)|$ over nonsingular tournament matrices. It also shows that the nonsingular minimizers of $C_3$ are \ul{unimodular} (i.e. have determinant $\pm 1$). Recall that a matrix is \ul{totally unimodular} if every square submatrix has determinant $\pm 1$ or $0$. Nonsingular minimizers of $C_3$ are not totally unimodular, but they are \textit{almost} totally unimodular.
\begin{corollary}
Let $M \in \mathcal{T}_{n}\setminus\S$. If $\displaystyle C_3(M) = n - 2\floor*{\frac{n}{3}}$, then every subtournament matrix of $M$ has determinant $\pm 1$ or $0$.
\end{corollary}
\begin{proof}
Let $N$ be a subtournament of $M$. Then $N$ is obtained by deleting a set of vertices in $M$. Thus the strongly connected components of $N$ are subtournaments of the corresponding strongly connected components of $M$. It is easy to check that $F_1$, $F_2$, $F_3$, and $F_4$ are totally unimodular. Therefore the strongly connected components of $N$ have determinants $\pm 1$ or $0$. Therefore $N$ has determinant $\pm 1$ or $0$.
\end{proof}
| {
"timestamp": "2022-10-25T02:07:00",
"yymm": "2210",
"arxiv_id": "2210.12371",
"language": "en",
"url": "https://arxiv.org/abs/2210.12371",
"abstract": "A tournament is a directed graph resulting from an orientation of the complete graph; so, if $M$ is a tournament's adjacency matrix, then $M + M^T$ is a matrix with $0$s on its diagonal and all other entries equal to $1$. An outstanding question in tournament theory asks to classify the adjacency matrices of tournaments which are singular (or nonsingular). We study this question using the structure of tournaments as graphs, in particular their cycle structure. More specifically, we find, as precisely as possible, the number of cycles of length three that dictates whether the corresponding tournament matrix is singular or nonsingular. We also give structural classifications of the tournaments that have the specified numbers of cycles of length three.",
"subjects": "Combinatorics (math.CO)",
"title": "Structure of singular and nonsingular tournament matrices",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9896718480899426,
"lm_q2_score": 0.8152324848629215,
"lm_q1q2_score": 0.8068126399172437
} |
https://arxiv.org/abs/2105.10618 | Tight bounds on the maximal perimeter of convex equilateral small polygons | A small polygon is a polygon that has diameter one. The maximal perimeter of a convex equilateral small polygon with $n=2^s$ sides is not known when $s \ge 4$. In this paper, we construct a family of convex equilateral small $n$-gons, $n=2^s$ and $s \ge 4$, and show that their perimeters are within $O(1/n^4)$ of the maximal perimeter and exceed the previously best known values from the literature. In particular, for the first open case $n=16$, our result proves that Mossinghoff's equilateral hexadecagon is suboptimal. | \section{Introduction}
The {\em diameter} of a polygon is the largest Euclidean distance between pairs of its vertices. A polygon is said to be {\em small} if its diameter equals one. For an integer $n \ge 3$, the maximal perimeter problem consists in finding a convex small $n$-gon with the longest perimeter. The problem was first investigated by Reinhardt~\cite{reinhardt1922} in 1922, and later by Datta~\cite{datta1997} in 1997. They proved that for $n \ge 3$
\begin{itemize}
\item the value $2n\sin \frac{\pi}{2n}$ is an upper bound on the perimeter of any convex small $n$-gon;
\item when $n$ is odd, the regular small $n$-gon is an optimal solution, but it is unique only when $n$ is prime;
\item when $n$ is even, the regular small $n$-gon is not optimal;
\item when $n$ has an odd factor, there are finitely many optimal solutions~\cite{mossinghoff2011,hare2013,hare2019} and they are all equilateral.
\end{itemize}
When $n$ is a power of $2$, the maximal perimeter problem is solved for $n \le 8$. The case $n=4$ was solved by Tamvakis~\cite{tamvakis1987} in 1987 and the case $n=8$ by Audet, Hansen, and Messine~\cite{audet2007} in 2007. Neither the optimal $4$-gon nor the optimal $8$-gon, shown respectively in Figure~\ref{figure:4gon:R3+} and Figure~\ref{figure:8gon:V8}, is equilateral. For $n=2^s$ with integer $s\ge 4$, exact solutions in the maximal perimeter problem appear to be presently out of reach. However, tight lower bounds can be obtained analytically. Recently, Bingane~\cite{bingane2021b} constructed a family of convex non-equilateral small $n$-gons, for $n=2^s$ with $s\ge 2$, and proved that the perimeters obtained cannot be improved for large $n$ by more than $\pi^7/(32n^6)$.
The diameter graph of a small polygon is defined as the graph whose vertices are those of the polygon and in which two vertices are adjacent if and only if the distance between them equals one. Figure~\ref{figure:4gon}, Figure~\ref{figure:6gon}, and Figure~\ref{figure:8gon} show diameter graphs of some convex small polygons. The solid lines illustrate pairs of vertices which are unit distance apart. In 1950, Vincze~\cite{vincze1950} studied the problem of finding the minimal diameter of a convex polygon with unit-length sides. This problem is equivalent to the equilateral case of the maximal perimeter problem. He showed that a necessary condition for a convex equilateral small polygon to have maximal perimeter is that each vertex should have an opposite vertex at a distance equal to the diameter. It is easy to see that for $n=4$, the maximal perimeter of a convex equilateral small $4$-gon is only attained by the regular $4$-gon. Vincze also described a convex equilateral small $8$-gon, shown in Figure~\ref{figure:8gon:X8}, with longer perimeter than the regular $8$-gon. In 2004, Audet, Hansen, Messine, and Perron~\cite{audet2004} used both geometrical arguments and methods of global optimization to determine the unique convex equilateral small $8$-gon with the longest perimeter, illustrated in Figure~\ref{figure:8gon:H8}.
For $n=2^s$ with integer $s\ge 4$, the equilateral case of the maximal perimeter problem remains unsolved and, as in the general case, exact solutions appear to be presently out of reach. In 2008, Mossinghoff~\cite{mossinghoff2008} constructed a family of convex equilateral small $n$-gons, for $n=2^s$ with $s\ge 4$, and proved that the perimeters obtained cannot be improved for large $n$ by more than $3\pi^4/n^4$. By contrast, the perimeters of the regular $n$-gons cannot be improved for large $n$ by more than $\pi^3/(8n^2)$ when $n$~is even. In the present paper, we propose tighter lower bounds on the maximal perimeter of convex equilateral small $n$-gons when $n=2^s$ and integer $s \ge 4$ by a constructive approach. Thus, our main result is the following:
\begin{theorem}\label{thm:Bn}
Suppose $n=2^s$ with integer $s\ge 4$. Let $\ub{L}_n := 2n \sin \frac{\pi}{2n}$ denote an upper bound on the perimeter $L(\geo{P}_n)$ of a convex small $n$-gon $\geo{P}_n$. Let $\geo{M}_n$ denote the convex equilateral small $n$-gon constructed by Mossinghoff~\cite{mossinghoff2008}. Then there exists a convex equilateral small $n$-gon $\geo{B}_n$ such that
\[
\ub{L}_n - L(\geo{B}_n) = \frac{\pi^4}{n^4} + O\left(\frac{1}{n^5}\right)
\]
and
\[
L(\geo{B}_n) - L(\geo{M}_n) = \frac{2\pi^4}{n^4} + O\left(\frac{1}{n^5}\right).
\]
\end{theorem}
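To put numbers on the bound, here is a minimal sketch (plain Python, not from the paper). The upper bound $\ub{L}_n = 2n\sin\frac{\pi}{2n}$ increases toward $\pi$ with $\pi - \ub{L}_n \approx \pi^3/(24n^2)$, while the theorem says the maximal equilateral perimeter falls short of $\ub{L}_n$ by roughly $\pi^4/n^4$ (leading term only; lower-order terms are ignored here).

```python
from math import pi, sin

def upper_bound(n):
    """Reinhardt--Datta upper bound 2 n sin(pi/(2n)) on the perimeter of a
    convex small n-gon; it increases toward pi as n grows."""
    return 2 * n * sin(pi / (2 * n))

def leading_gap(n):
    """Leading term pi^4 / n^4 of the gap in the theorem (asymptotic only)."""
    return pi ** 4 / n ** 4
```

For example, `upper_bound(8)` reproduces the value $3.121445$ quoted for the optimal non-equilateral $8$-gon's neighborhood in Figure captions below, and `leading_gap(16)` indicates how close the constructed hexadecagon is expected to be to the bound.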
In addition, we show that the resulting polygons for $n=32$ and $n=64$ are not optimal by providing two convex equilateral small polygons with longer perimeters.
The remainder of this paper is organized as follows. Section~\ref{sec:ngon} recalls principal results on the maximal perimeter of convex small polygons. Section~\ref{sec:Bn} considers the polygons $\geo{B}_n$ and shows that they satisfy Theorem~\ref{thm:Bn}. Section~\ref{sec:optimal} shows that the polygons $\geo{B}_{32}$ and $\geo{B}_{64}$ are not optimal by constructing a $32$-gon and a $64$-gon with larger perimeters. Concluding remarks are presented in Section~\ref{sec:conclusion}.
\begin{figure}[h]
\centering
\subfloat[$(\geo{R}_4,2.828427)$]{
\begin{tikzpicture}[scale=4]
\draw[dashed] (0,0) -- (0.5000,0.5000) -- (0,1) -- (-0.5000,0.5000) -- cycle;
\draw (0,0) -- (0,1);
\draw (0.5000,0.5000) -- (-0.5000,0.5000);
\end{tikzpicture}
}
\subfloat[$(\geo{R}_3^+,3.035276)$]{
\begin{tikzpicture}[scale=4]
\draw[dashed] (0.5000,0.8660) -- (0,1) -- (-0.5000,0.8660);
\draw (0,1) -- (0,0) -- (0.5000,0.8660) -- (-0.5000,0.8660) -- (0,0);
\end{tikzpicture}
\label{figure:4gon:R3+}
}
\caption{Two convex small $4$-gons $(\geo{P}_4,L(\geo{P}_4))$: (a) Regular $4$-gon; (b) Optimal non-equilateral $4$-gon~\cite{tamvakis1987}}
\label{figure:4gon}
\end{figure}
\begin{figure}[h]
\centering
\subfloat[$(\geo{R}_6,3)$]{
\begin{tikzpicture}[scale=4]
\draw[dashed] (0,0) -- (0.4330,0.2500) -- (0.4330,0.7500) -- (0,1) -- (-0.4330,0.7500) -- (-0.4330,0.2500) -- cycle;
\draw (0,0) -- (0,1);
\draw (0.4330,0.2500) -- (-0.4330,0.7500);
\draw (0.4330,0.7500) -- (-0.4330,0.2500);
\end{tikzpicture}
}
\subfloat[$(\geo{R}_{3,6},3.105829)$]{
\begin{tikzpicture}[scale=4]
\draw[dashed] (0,0) -- (0.3660,0.3660) -- (0.5000,0.8660) -- (0,1) -- (-0.5000,0.8660) -- (-0.3660,0.3660) -- cycle;
\draw (0,0) -- (0.5000,0.8660) -- (-0.5000,0.8660) -- cycle;
\draw (0,0) -- (0,1);
\draw (0.3660,0.3660) -- (-0.5000,0.8660);
\draw (0.5000,0.8660) -- (-0.3660,0.3660);
\end{tikzpicture}
\label{figure:6gon:R36}
}
\caption{Two convex equilateral small $6$-gons $(\geo{P}_6,L(\geo{P}_6))$: (a) Regular $6$-gon; (b) Reinhardt $6$-gon~\cite{reinhardt1922}}
\label{figure:6gon}
\end{figure}
\begin{figure}[h]
\centering
\subfloat[$(\geo{R}_8,3.061467)$]{
\begin{tikzpicture}[scale=4]
\draw[dashed] (0,0) -- (0.3536,0.1464) -- (0.5000,0.5000) -- (0.3536,0.8536) -- (0,1) -- (-0.3536,0.8536) -- (-0.5000,0.5000) -- (-0.3536,0.1464) -- cycle;
\draw (0,0) -- (0,1);
\draw (0.3536,0.1464) -- (-0.3536,0.8536);
\draw (0.5000,0.5000) -- (-0.5000,0.5000);
\draw (0.3536,0.8536) -- (-0.3536,0.1464);
\end{tikzpicture}
}
\subfloat[$(\geo{X}_8,3.090369)$]{
\begin{tikzpicture}[scale=4]
\draw[dashed] (0,0) -- (0.3335,0.1950) -- (0.4799,0.5525) -- (0.3790,0.9254) -- (0,1) -- (-0.3737,0.9021) -- (-0.5201,0.5446) -- (-0.3225,0.2127) -- cycle;
\draw (0.3335,0.1950) -- (-0.3737,0.9021);\draw (0.4799,0.5525) -- (-0.5201,0.5446);
\draw (0,1) -- (0,0) -- (0.3790,0.9254) -- (-0.3225,0.2127);
\draw[thick,dashed] (0.4799,0.3438) -- (-0.5201,0.7533);
\end{tikzpicture}
\label{figure:8gon:X8}
}
\subfloat[$(\geo{H}_8,3.095609)$]{
\begin{tikzpicture}[scale=4]
\draw[dashed] (0,1) -- (0.3796,0.9251) -- (0.5000,0.5574) -- (0.3228,0.2134) -- (0,0) -- (-0.3228,0.2134) -- (-0.5000,0.5574) -- (-0.3796,0.9251) -- cycle;
\draw (0,0) -- (0,1);
\draw (0,0) -- (0.3796,0.9251);\draw (0,0) -- (-0.3796,0.9251);
\draw (0.3796,0.9251) -- (-0.3228,0.2134);\draw (-0.3796,0.9251) -- (0.3228,0.2134);
\draw (0.5000,0.5574) -- (-0.5000,0.5574);
\end{tikzpicture}
\label{figure:8gon:H8}
}
\subfloat[$(\geo{V}_8,3.121147)$]{
\begin{tikzpicture}[scale=4]
\draw[dashed] (0,0) -- (0.2983,0.2128) -- (0.5000,0.5188) -- (0.4217,0.9067) -- (0,1) -- (-0.4217,0.9067) -- (-0.5000,0.5188) -- (-0.2983,0.2128) -- cycle;
\draw (0,0) -- (0,1);
\draw (0,0) -- (0.4217,0.9067) -- (-0.5000,0.5188) -- (0.5000,0.5188)-- (-0.4217,0.9067) -- cycle;
\draw (0.4217,0.9067) -- (-0.2983,0.2128);\draw (-0.4217,0.9067) -- (0.2983,0.2128);
\end{tikzpicture}
\label{figure:8gon:V8}
}
\caption{Four convex small $8$-gons $(\geo{P}_8,L(\geo{P}_8))$: (a) Regular $8$-gon; (b) Vincze $8$-gon~\cite{vincze1950}; (c) Optimal equilateral $8$-gon~\cite{audet2004}; (d) Optimal non-equilateral $8$-gon~\cite{audet2007}}
\label{figure:8gon}
\end{figure}
\section{Perimeters of convex equilateral small polygons}\label{sec:ngon}
Let $L(\geo{P})$ denote the perimeter of a polygon $\geo{P}$. For a given integer $n\ge 3$, let $\geo{R}_n$ denote the regular small $n$-gon. We have
\[
L(\geo{R}_n) =
\begin{cases}
2n\sin \frac{\pi}{2n} &\text{if $n$ is odd,}\\
n\sin \frac{\pi}{n} &\text{if $n$ is even.}\\
\end{cases}
\]
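As a quick sanity check (illustrative code, not part of the paper's OPTIGON package), this formula reproduces the perimeters quoted in the captions of Figures~\ref{figure:4gon}--\ref{figure:8gon}:

```python
from math import sin, pi

def perimeter_regular_small_ngon(n: int) -> float:
    """Perimeter L(R_n) of the regular small n-gon (diameter 1)."""
    if n % 2 == 1:
        return 2 * n * sin(pi / (2 * n))
    return n * sin(pi / n)

# Reproduces the values in the figure captions.
print(perimeter_regular_small_ngon(4))  # 2.828427...
print(perimeter_regular_small_ngon(6))  # 3.0 (up to rounding)
print(perimeter_regular_small_ngon(8))  # 3.061467...
```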
When $n$ has an odd factor $m$, consider the family of convex equilateral small $n$-gons constructed as follows:
\begin{enumerate}
\item Transform the regular small $m$-gon $\geo{R}_m$ into a Reuleaux $m$-gon by replacing each edge with a circular arc that passes through its two endpoints and is centered at the opposite vertex;
\item Add $n/m-1$ vertices at regular intervals along each arc;
\item Take the convex hull of all vertices.
\end{enumerate}
These $n$-gons are denoted $\geo{R}_{m,n}$ and $L(\geo{R}_{m,n}) = 2n\sin \frac{\pi}{2n}$. The $6$-gon $\geo{R}_{3,6}$ is illustrated in Figure~\ref{figure:6gon:R36}.
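The three-step construction is easy to carry out numerically. The sketch below (an illustration, not the paper's code) builds $\geo{R}_{3,6}$ from the regular triangle placed as in Figure~\ref{figure:6gon} and recovers the perimeter $12\sin(\pi/12) = 3.105829\ldots$ quoted in Figure~\ref{figure:6gon:R36}:

```python
from math import sin, cos, pi, atan2, hypot

# Step 1: regular small 3-gon (equilateral triangle of diameter 1).
tri = [(0.0, 0.0), (0.5, 3**0.5 / 2), (-0.5, 3**0.5 / 2)]

# Steps 2-3: on each unit-radius Reuleaux arc (centered at the opposite
# vertex), insert n/m - 1 = 1 vertex at the arc's midpoint.
pts = []
for i in range(3):
    cx, cy = tri[i]                      # arc center: vertex opposite the arc
    p, q = tri[(i + 1) % 3], tri[(i + 2) % 3]
    pts.append(p)
    a1 = atan2(p[1] - cy, p[0] - cx)
    a2 = atan2(q[1] - cy, q[0] - cx)
    if a2 < a1:                          # walk counterclockwise from p to q
        a2 += 2 * pi
    mid = (a1 + a2) / 2
    pts.append((cx + cos(mid), cy + sin(mid)))

# pts now lists the 6 vertices of R_{3,6} in cyclic order.
perimeter = sum(hypot(pts[i][0] - pts[i - 1][0], pts[i][1] - pts[i - 1][1])
                for i in range(6))
print(perimeter)  # 3.105828...
```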
\begin{theorem}[Reinhardt~\cite{reinhardt1922}, Vincze~\cite{vincze1950}, Datta~\cite{datta1997}]\label{thm:perimeter}
For all $n \ge 3$, let $L_n^*$ denote the maximal perimeter among all convex small $n$-gons, $\ell_n^*$ the maximal perimeter among all equilateral ones, and $\ub{L}_n := 2n \sin \frac{\pi}{2n}$.
\begin{itemize}
\item When $n$ has an odd factor $m$, $\ell_n^* = L_n^* = \ub{L}_n$ is achieved by finitely many equilateral $n$-gons~\cite{mossinghoff2011,hare2013,hare2019}, including~$\geo{R}_{m,n}$. The optimal $n$-gon $\geo{R}_{m,n}$ is unique if $m$ is prime and $n/m \le 2$.
\item When $n=2^s$ with $s\ge 2$, $L(\geo{R}_n) < L_n^* < \ub{L}_n$.
\end{itemize}
\end{theorem}
When $n=2^s$, both $L_n^*$ and $\ell_n^*$ are only known for $s \le 3$. Tamvakis~\cite{tamvakis1987} found that $L_4^* = 2+\sqrt{6}-\sqrt{2}$, and this value is only achieved by $\geo{R}_3^+$, shown in Figure~\ref{figure:4gon:R3+}. Audet, Hansen, and Messine~\cite{audet2007} proved that $L_8^* = 3.121147\dots$, and this value is only achieved by $\geo{V}_8$, shown in Figure~\ref{figure:8gon:V8}. For the equilateral quadrilateral, it is easy to see that $\ell_4^* = L(\geo{R}_4) = 2\sqrt{2}$. Audet, Hansen, Messine, and Perron~\cite{audet2004} studied the equilateral octagon and determined that $\ell_8^* = 3.095609\ldots > L(\geo{R}_8) = 4\sqrt{2-\sqrt{2}}$, and this value is only achieved by $\geo{H}_8$, shown in Figure~\ref{figure:8gon:H8}. If $u := {\ell_8^*}^2/64$ denotes the square of the side length of $\geo{H}_8$, one can show that $u$ is the unique root of the polynomial equation
\[
2u^6 - 18u^5 + 57u^4 -78u^3+46u^2-12u+1=0
\]
that belongs to $(\sin^2(\pi/8),4\sin^2(\pi/16))$. Note that the following inequalities are strict: $\ell_4^* < L_4^*$ and $\ell_8^* < L_8^*$.
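This characterization is straightforward to verify numerically. The stdlib bisection sketch below (illustrative) locates the root of the sextic in the stated interval and recovers $\ell_8^* = 8\sqrt{u} = 3.095609\ldots$:

```python
from math import sin, pi, sqrt

def p(u: float) -> float:
    return 2*u**6 - 18*u**5 + 57*u**4 - 78*u**3 + 46*u**2 - 12*u + 1

lo, hi = sin(pi / 8)**2, 4 * sin(pi / 16)**2
assert p(lo) > 0 > p(hi)          # a sign change brackets the root
for _ in range(80):               # bisect down to machine precision
    mid = (lo + hi) / 2
    if p(mid) > 0:
        lo = mid
    else:
        hi = mid
u = (lo + hi) / 2                 # square of the side length of H_8
print(8 * sqrt(u))                # 3.095609... = ell_8^*
```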
For $n=2^s$ with $s\ge 4$, exact solutions of the maximal perimeter problem appear to be presently out of reach. However, tight lower bounds can be obtained analytically. Recently, Bingane~\cite{bingane2021b} proved that, for $n=2^s$ with $s\ge 2$,
\[
L_n^* \ge 2n \sin \frac{\pi}{2n} \cos \left(\frac{\pi}{2n}-\frac{1}{2}\arcsin\left(\frac{1}{2}\sin \frac{2\pi}{n}\right)\right) > L(\geo{R}_n),
\]
which implies
\[
\ub{L}_n - L_n^* \le \frac{\pi^7}{32n^6} + O\left(\frac{1}{n^8}\right).
\]
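The bound is easy to evaluate. In the sketch below (illustrative code, not from~\cite{bingane2021b}), it strictly exceeds $L(\geo{R}_n)$ for the first few powers of two; interestingly, for $n=4$ it simplifies (by a product-to-sum identity) to $2+\sqrt{6}-\sqrt{2}$, i.e., Tamvakis's exact value $L_4^*$:

```python
from math import sin, cos, asin, pi, sqrt

def bingane_bound(n: int) -> float:
    """Lower bound on L_n^* for n = 2^s, s >= 2 (Bingane, 2021)."""
    return 2*n*sin(pi/(2*n)) * cos(pi/(2*n) - 0.5*asin(0.5*sin(2*pi/n)))

def L_regular(n: int) -> float:   # perimeter of R_n for even n
    return n * sin(pi / n)

for s in range(2, 9):
    n = 2**s
    assert bingane_bound(n) > L_regular(n)   # strictly better than R_n

# For n = 4 the bound evaluates to 2 + sqrt(6) - sqrt(2) = L_4^*.
print(bingane_bound(4))  # 3.035276...
```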
On the other hand, Mossinghoff~\cite{mossinghoff2008} constructed a family of convex equilateral small $n$-gons $\geo{M}_n$, illustrated in Figure~\ref{figure:Mn}, such that
\[
\ub{L}_n - L(\geo{M}_n) = \frac{3\pi^4}{n^4} + O\left(\frac{1}{n^5}\right)
\]
and
\[
L(\geo{M}_n) - L(\geo{R}_n) = \frac{\pi^3}{8n^2} + O\left(\frac{1}{n^4}\right)
\]
for $n=2^s$ with $s\ge 4$. The next section proposes tighter lower bounds for $\ell_n^*$.
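As a numerical sanity check of the first of these rates (an illustration using the perimeters reported in Table~\ref{table:perimeter}), the gap $\ub{L}_n - L(\geo{M}_n)$ is already within roughly $10\%$ of the leading term $3\pi^4/n^4$ at $n=128$ and $n=256$:

```python
from math import pi

# (L(M_n), ub_L_n) for n = 128 and 256, taken from Table 1.
data = {128: (3.1415127924, 3.1415138011),
        256: (3.1415728748, 3.1415729404)}

for n, (LM, ub) in data.items():
    gap, predicted = ub - LM, 3 * pi**4 / n**4
    print(n, gap, predicted, predicted / gap)  # ratio tends to 1
```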
\begin{figure}
\centering
\subfloat[$(\geo{M}_{16},3.134707)$]{
\begin{tikzpicture}[scale=6]
\draw[dashed] (0,0) -- (0.1875,0.0568) -- (0.3390,0.1811) -- (0.4315,0.3538) -- (0.4885,0.5412) -- (0.4922,0.7311) -- (0.3678,0.8885) -- (0.1950,0.9808) -- (0,1) -- (-0.1950,0.9808) -- (-0.3678,0.8885) -- (-0.4922,0.7311) -- (-0.4885,0.5412) -- (-0.4315,0.3538) -- (-0.3390,0.1811) -- (-0.1875,0.0568) -- cycle;
\draw[red,thick] (0,0)--(0,1);
\draw[blue,thick] (0,0) -- (0.1950,0.9808) -- (-0.1875,0.0568) -- (0.3678,0.8885) -- (-0.3390,0.1811) -- (0.4922,0.7311) -- (-0.4885,0.5412);
\draw[blue,thick] (0,0) -- (-0.1950,0.9808) -- (0.1875,0.0568) -- (-0.3678,0.8885) -- (0.3390,0.1811) -- (-0.4922,0.7311) -- (0.4885,0.5412);
\draw (0.4922,0.7311) -- (-0.4315,0.3538);\draw (-0.4922,0.7311) -- (0.4315,0.3538);
\end{tikzpicture}
}
\subfloat[$(\geo{M}_{32},3.140134)$]{
\begin{tikzpicture}[scale=6]
\draw[dashed] (0,0) -- (0.0971,0.0144) -- (0.1895,0.0475) -- (0.2736,0.0979) -- (0.3525,0.1564) -- (0.4184,0.2291) -- (0.4603,0.3178) -- (0.4842,0.4129) -- (0.4986,0.5100) -- (0.4966,0.6081) -- (0.4635,0.7005) -- (0.4131,0.7847) -- (0.3546,0.8635) -- (0.2819,0.9294) -- (0.1932,0.9713) -- (0.0980,0.9952) -- (0,1) -- (-0.0980,0.9952) -- (-0.1932,0.9713) -- (-0.2819,0.9294) -- (-0.3546,0.8635) -- (-0.4131,0.7847) -- (-0.4635,0.7005) -- (-0.4966,0.6081) -- (-0.4986,0.5100) -- (-0.4842,0.4129) -- (-0.4603,0.3178) -- (-0.4184,0.2291) -- (-0.3525,0.1564) -- (-0.2736,0.0979) -- (-0.1895,0.0475) -- (-0.0971,0.0144) -- cycle;
\draw[red,thick] (0,0) -- (0,1);
\draw[blue,thick] (0,0) -- (0.0980,0.9952) -- (-0.0971,0.0144) -- (0.1932,0.9713) -- (-0.1895,0.0475) -- (0.2819,0.9294) -- (-0.3525,0.1564) -- (0.3546,0.8635) -- (-0.4184,0.2291) -- (0.4635,0.7005) -- (-0.4603,0.3178) -- (0.4966,0.6081) -- (-0.4986,0.5100);
\draw[blue,thick] (0,0) -- (-0.0980,0.9952) -- (0.0971,0.0144) -- (-0.1932,0.9713) -- (0.1895,0.0475) -- (-0.2819,0.9294) -- (0.3525,0.1564) -- (-0.3546,0.8635) -- (0.4184,0.2291) -- (-0.4635,0.7005) -- (0.4603,0.3178) -- (-0.4966,0.6081) -- (0.4986,0.5100);
\draw (0.2819,0.9294) -- (-0.2736,0.0979);\draw (-0.2819,0.9294) -- (0.2736,0.0979);
\draw (-0.4184,0.2291) -- (0.4131,0.7847);\draw (0.4184,0.2291) -- (-0.4131,0.7847);
\draw (0.4966,0.6081) -- (-0.4842,0.4129);\draw (-0.4966,0.6081) -- (0.4842,0.4129);
\end{tikzpicture}
}
\caption{Mossinghoff polygons $(\geo{M}_n,L(\geo{M}_n))$: (a) Hexadecagon~$\geo{M}_{16}$; (b) Triacontadigon~$\geo{M}_{32}$}
\label{figure:Mn}
\end{figure}
\section{Proof of Theorem~\ref{thm:Bn}}\label{sec:Bn}
We use Cartesian coordinates to describe an $n$-gon $\geo{P}_n$: vertex $\geo{v}_i$, $i=0,1,\ldots,n-1$, is positioned at abscissa $x_i$ and ordinate $y_i$. Sums and differences of vertex indices are taken modulo $n$. Placing the vertex $\geo{v}_0$ at the origin, we set $x_0 = y_0 = 0$. We also assume that the $n$-gon $\geo{P}_n$ lies in the half-plane $y\ge 0$ and that the vertices $\geo{v}_i$, $i=1,2,\ldots,n-1$, are arranged in counterclockwise order as illustrated in Figure~\ref{figure:model}, i.e., $x_iy_{i+1} \ge y_ix_{i+1}$ for all $i=1,2,\ldots,n-2$.
The $n$-gon $\geo{P}_n$ is small if $\max_{i,j} \|\geo{v}_i - \geo{v}_j\| = 1$. It is equilateral if $\|\geo{v}_i - \geo{v}_{i-1}\| = c$ for all $i=1,2,\ldots,n$ and some constant $c>0$. Requiring that the determinants of the $2\times 2$ matrices satisfy
\[
\sigma_i :=
\begin{vmatrix}
x_i - x_{i-1} & x_{i+1} - x_{i-1}\\
y_i - y_{i-1} & y_{i+1} - y_{i-1}
\end{vmatrix}
\ge 0
\]
for all $i=1,2,\ldots,n-1$ ensures the convexity of the $n$-gon.
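As a concrete illustration of the determinant test (using the 4-decimal vertex coordinates of the regular small $8$-gon from Figure~\ref{figure:8gon}; the code is ours, not OPTIGON's):

```python
from math import hypot

# Vertices of R_8 in counterclockwise order (rounded to 4 decimals).
V = [(0.0, 0.0), (0.3536, 0.1464), (0.5, 0.5), (0.3536, 0.8536),
     (0.0, 1.0), (-0.3536, 0.8536), (-0.5, 0.5), (-0.3536, 0.1464)]
n = len(V)

def sigma(i: int) -> float:
    """Determinant sigma_i for consecutive vertices v_{i-1}, v_i, v_{i+1}."""
    (x0, y0), (x1, y1), (x2, y2) = V[i - 1], V[i], V[(i + 1) % n]
    return (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)

assert all(sigma(i) >= 0 for i in range(n))     # convexity test passes
diameter = max(hypot(V[i][0] - V[j][0], V[i][1] - V[j][1])
               for i in range(n) for j in range(i + 1, n))
print(diameter)  # ~1 (up to the 4-decimal rounding), so R_8 is small
```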
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=4]
\draw[dashed] (0,0) node[below]{$\geo{v}_0(0,0)$} -- (0.3228,0.2134) node[right]{$\geo{v}_1(x_1,y_1)$} -- (0.5000,0.5574) node[right]{$\geo{v}_2(x_2,y_2)$} -- (0.3796,0.9251) node[right]{$\geo{v}_3(x_3,y_3)$} -- (0,1) node[above]{$\geo{v}_4(x_4,y_4)$} -- (-0.3796,0.9251) node[left]{$\geo{v}_5(x_5,y_5)$} -- (-0.5000,0.5574) node[left]{$\geo{v}_6(x_6,y_6)$} -- (-0.3228,0.2134) node[left]{$\geo{v}_7(x_7,y_7)$} -- cycle;
\draw[->] (-0.25,0)--(0.25,0)node[below]{$x$};
\draw[->] (0,0)--(0,0.5)node[left]{$y$};
\end{tikzpicture}
\caption{Definition of variables: Case of $n=8$ vertices}
\label{figure:model}
\end{figure}
For any $n=2^s$, where $s\ge 4$ is an integer, we introduce a convex equilateral small $n$-gon, denoted~$\geo{B}_n$, constructed as follows. Its diameter graph has the edge $\geo{v}_0-\geo{v}_{\frac{n}{2}}$ as axis of symmetry and can be described by the half-path $\geo{v}_0 - \geo{v}_{\frac{n}{2}-1} - \ldots - \geo{v}_{\frac{3n}{4}+1} - \geo{v}_{\frac{n}{4}}$ of length $3n/8-1$, together with the pendant edges $\geo{v}_0 - \geo{v}_{\frac{n}{2}}$ and $\geo{v}_{4k-1} - \geo{v}_{4k-1+\frac{n}{2}}$, $k=1,\ldots,n/8$. The polygons $\geo{B}_{16}$ and $\geo{B}_{32}$ are shown in Figure~\ref{figure:Bn}. They are symmetric with respect to the vertical diameter.
\begin{figure}[h]
\centering
\subfloat[$(\geo{B}_{16},3.135288)$]{
\begin{tikzpicture}[scale=6]
\draw[dashed] (0,0) -- (0.1875,0.0569) -- (0.3604,0.1492) -- (0.4847,0.3006) -- (0.4960,0.4963) -- (0.4390,0.6838) -- (0.3465,0.8565) -- (0.1950,0.9808) -- (0,1) -- (-0.1950,0.9808) -- (-0.3465,0.8565) -- (-0.4390,0.6838) -- (-0.4960,0.4963) -- (-0.4847,0.3006) -- (-0.3604,0.1492) -- (-0.1875,0.0569) -- cycle;
\draw[red,thick] (0,0)--(0,1);
\draw[blue,thick] (0,0) -- (0.1950,0.9808) -- (-0.3604,0.1492) -- (0.3465,0.8565) -- (-0.4847,0.3006) -- (0.4960,0.4963);
\draw[blue,thick] (0,0) -- (-0.1950,0.9808) -- (0.3604,0.1492) -- (-0.3465,0.8565) -- (0.4847,0.3006) -- (-0.4960,0.4963);
\draw (0.1950,0.9808) -- (-0.1875,0.0569);\draw (-0.1950,0.9808) -- (0.1875,0.0569);
\draw (-0.4847,0.3006) -- (0.4390,0.6838);\draw (0.4847,0.3006) -- (-0.4390,0.6838);
\end{tikzpicture}
}
\subfloat[$(\geo{B}_{32},3.140246)$]{
\begin{tikzpicture}[scale=6]
\draw[dashed] (0,0) -- (0.0971,0.0144) -- (0.1923,0.0382) -- (0.2810,0.0802) -- (0.3537,0.1461) -- (0.4121,0.2249) -- (0.4626,0.3091) -- (0.4957,0.4015) -- (0.4995,0.4995) -- (0.4851,0.5966) -- (0.4613,0.6918) -- (0.4193,0.7805) -- (0.3534,0.8532) -- (0.2746,0.9117) -- (0.1904,0.9621) -- (0.0980,0.9952) -- (0,1) -- (-0.0980,0.9952) -- (-0.1904,0.9621) -- (-0.2746,0.9117) -- (-0.3534,0.8532) -- (-0.4193,0.7805) -- (-0.4613,0.6918) -- (-0.4851,0.5966) -- (-0.4995,0.4995) -- (-0.4957,0.4015) -- (-0.4626,0.3091) -- (-0.4121,0.2249) -- (-0.3537,0.1461) -- (-0.2810,0.0802) -- (-0.1923,0.0382) -- (-0.0971,0.0144) -- cycle;
\draw[red,thick] (0,0)--(0,1);
\draw[blue,thick] (0,0) -- (0.0980,0.9952) -- (-0.1923,0.0382) -- (0.1904,0.9621) -- (-0.2810,0.0802) -- (0.3534,0.8532) -- (-0.3537,0.1461) -- (0.4193,0.7805) -- (-0.4626,0.3091) -- (0.4613,0.6918) -- (-0.4957,0.4015) -- (0.4995,0.4995);
\draw[blue,thick] (0,0) -- (-0.0980,0.9952) -- (0.1923,0.0382) -- (-0.1904,0.9621) -- (0.2810,0.0802) -- (-0.3534,0.8532) -- (0.3537,0.1461) -- (-0.4193,0.7805) -- (0.4626,0.3091) -- (-0.4613,0.6918) -- (0.4957,0.4015) -- (-0.4995,0.4995);
\draw (0.0980,0.9952) -- (-0.0971,0.0144);\draw (-0.0980,0.9952) -- (0.0971,0.0144);
\draw (-0.2810,0.0802) -- (0.2746,0.9117);\draw (0.2810,0.0802) -- (-0.2746,0.9117);
\draw (0.4193,0.7805) -- (-0.4121,0.2249);\draw (-0.4193,0.7805) -- (0.4121,0.2249);
\draw (-0.4957,0.4015) -- (0.4851,0.5966);\draw (0.4957,0.4015) -- (-0.4851,0.5966);
\end{tikzpicture}
}
\caption{Polygons $(\geo{B}_n,L(\geo{B}_n))$ defined in Theorem~\ref{thm:Bn}: (a) Hexadecagon $\geo{B}_{16}$; (b) Triacontadigon $\geo{B}_{32}$}
\label{figure:Bn}
\end{figure}
Place the vertex $\geo{v}_{\frac{n}{2}}$ at $(0,1)$ in the plane. Let $t \in (0,\pi/n)$ denote the angle formed at the vertex~$\geo{v}_0$ by the edge $\geo{v}_0-\geo{v}_{\frac{n}{2}-1}$ and the edge $\geo{v}_0-\geo{v}_{\frac{n}{2}}$. This implies that the side length of~$\geo{B}_{n}$ is $2\sin (t/2)$. Since $\geo{B}_n$ is equilateral and symmetric, we have, from the half-path $\geo{v}_0 - \ldots - \geo{v}_{\frac{n}{4}}$,
\[
\begin{aligned}
x_{\frac{3n}{4}+1} &= \sin t - \sum_{k=1}^{n/8-1} (-1)^{k-1} (\sin (4k-1)t - \sin 4kt + \sin (4k+1)t)\\
&= \sin t - \frac{(2\cos t -1)(\sin 2t + \sin (n/2-2)t)}{2\cos 2t} &&= -x_{\frac{n}{4}-1},\\
x_{\frac{n}{4}} &= x_{\frac{3n}{4}+1} + \sin (n/2-1)t &&= -x_{\frac{3n}{4}},\\
y_{\frac{3n}{4}+1} &= \cos t - \sum_{k=1}^{n/8-1} (-1)^{k-1} (\cos (4k-1)t - \cos 4kt + \cos (4k+1)t)\\
&= \cos t - \frac{(2\cos t -1)(\cos 2t + \cos (n/2-2)t)}{2\cos 2t} &&= y_{\frac{n}{4}-1},\\
y_{\frac{n}{4}} &= y_{\frac{3n}{4}+1} + \cos (n/2-1)t &&= y_{\frac{3n}{4}}.
\end{aligned}
\]
Finally, the angle $t$ is chosen so that $\|\geo{v}_{\frac{3n}{4}+1}-\geo{v}_{\frac{3n}{4}}\| = 2\sin (t/2)$, i.e.,
\[
(2x_{\frac{3n}{4}+1} + \sin (n/2-1)t)^2+\cos^2 (n/2-1)t = 4\sin^2(t/2).
\]
An asymptotic analysis shows that, for large $n$, this equation has a solution $t_0(n)$ satisfying
\[
t_0(n) = \frac{\pi}{n} - \frac{\pi^4}{n^5} + \frac{\pi^5}{n^6} -\frac{11\pi^6}{6n^7} + \frac{35\pi^7}{12n^8} + O\left(\frac{1}{n^9}\right).
\]
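For the first open case $n=16$, the defining equation can also be solved numerically to high precision. The stdlib bisection sketch below (illustrative, not the authors' code) recovers the perimeter $L(\geo{B}_{16}) = 3.1352878\ldots$ reported in Table~\ref{table:perimeter} and confirms the smallness condition $2x_{n/4} < 1$:

```python
from math import sin, cos, pi

n = 16  # first open case

def x_tail(t: float) -> float:
    """x_{3n/4+1} from the explicit sum (for n = 16 the sum has one term)."""
    return sin(t) - sum((-1)**(k - 1) *
                        (sin((4*k - 1)*t) - sin(4*k*t) + sin((4*k + 1)*t))
                        for k in range(1, n // 8))

def f(t: float) -> float:
    """Residual of the defining equation ||v_{3n/4+1} - v_{3n/4}|| = 2 sin(t/2)."""
    m = n // 2 - 1
    return (2*x_tail(t) + sin(m*t))**2 + cos(m*t)**2 - 4*sin(t/2)**2

lo, hi = 0.15, pi / n          # f(lo) > 0 > f(hi) brackets the root
for _ in range(100):
    mid = (lo + hi) / 2
    if f(mid) > 0:
        lo = mid
    else:
        hi = mid
t0 = (lo + hi) / 2
perimeter = 2 * n * sin(t0 / 2)
width = 2 * (x_tail(t0) + sin((n // 2 - 1) * t0))   # = 2 x_{n/4}
print(perimeter, width)  # perimeter ~3.1352878..., width < 1 so B_16 is small
```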
By setting $t = t_0(n)$, the perimeter of $\geo{B}_n$ is
\[
\begin{aligned}
L(\geo{B}_n) &= 2n\sin \frac{t_0(n)}{2} = 2n \sin \left(\frac{\pi}{2n} - \frac{\pi^4}{2n^5} + O\left(\frac{1}{n^6}\right)\right)\\
&= \pi - \frac{\pi^3}{24n^2} + \left(\frac{\pi^5}{1920}-\pi^4\right)\frac{1}{n^4} + \frac{\pi^5}{n^5} - \left(\frac{\pi^7}{322560}+\frac{41\pi^6}{24}\right)\frac{1}{n^6} + O\left(\frac{1}{n^7}\right)
\end{aligned}
\]
and
\[
\ub{L}_n - L(\geo{B}_n) = \frac{\pi^4}{n^4} - \frac{\pi^5}{n^5} + O\left(\frac{1}{n^6}\right).
\]
Since the polygon $\geo{M}_n$ proposed by Mossinghoff~\cite{mossinghoff2008} satisfies
\[
L(\geo{M}_n) = \pi - \frac{\pi^3}{24n^2} + \left(\frac{\pi^5}{1920}-3\pi^4\right)\frac{1}{n^4} + \frac{9\pi^5}{n^5} - \left(\frac{\pi^7}{322560}+\frac{9\pi^6}{8}\right)\frac{1}{n^6} + O\left(\frac{1}{n^7}\right),
\]
it follows that
\[
L(\geo{B}_n) - L(\geo{M}_n) = \frac{2\pi^4}{n^4} - \frac{8\pi^5}{n^5} - \frac{7\pi^6}{12n^6} + O\left(\frac{1}{n^7}\right).
\]
To verify that $\geo{B}_n$ is small, we calculate
\[
\|\geo{v}_{\frac{n}{4}} - \geo{v}_{\frac{3n}{4}}\| = 2x_{\frac{n}{4}} = 1 - \frac{\pi^3}{n^3} - \frac{7\pi^5}{4n^5} + O\left(\frac{1}{n^7}\right) < 1.
\]
To verify that $\geo{B}_n$ is convex, we compute
\[
\sigma_{\frac{n}{4}} = \frac{2\pi^3}{n^3} - \frac{\pi^4}{n^4} + O\left(\frac{1}{n^5}\right) > 0.
\]
This completes the proof of Theorem~\ref{thm:Bn}.\qed
All polygons presented in this work are implemented in OPTIGON~\cite{optigon}, a MATLAB package freely available at \url{https://github.com/cbingane/optigon}. OPTIGON provides MATLAB functions that return the vertex coordinates of each polygon. For example, the vertex coordinates of a regular small $n$-gon are obtained by calling {\tt [x,y] = cstrt\_regular\_ngon(n)}. The command {\tt calc\_perimeter\_ngon(x,y)} computes the perimeter of a polygon given by its vertex coordinates $(\rv{x},\rv{y})$. OPTIGON also includes an algorithm, developed in~\cite{bingane2021a}, that estimates the maximal area of a small $n$-gon when $n \ge 6$ is even.
Table~\ref{table:perimeter} shows the perimeters of $\geo{B}_n$, along with the upper bounds $\ub{L}_n$ and the perimeters of the regular polygons $\geo{R}_n$ and the Mossinghoff polygons $\geo{M}_n$. When $n = 2^s$ and $s\ge 4$, $\geo{B}_n$ provides a tighter lower bound on the maximal perimeter $\ell_n^*$ than the best prior convex equilateral small $n$-gon~$\geo{M}_n$. The last column reports the fraction $\frac{L(\geo{B}_n) - L(\geo{M}_n)}{\ub{L}_n - L(\geo{M}_n)}$ of the length of the interval $[L(\geo{M}_n), \ub{L}_n]$ containing $L(\geo{B}_n)$. Since $L(\geo{B}_n) - L(\geo{M}_n) \sim \frac{2\pi^4}{n^4}$ and $\ub{L}_n - L(\geo{M}_n) \sim \frac{3\pi^4}{n^4}$ for large~$n$, it is not surprising that this fraction approaches $2/3$, i.e., that $L(\geo{B}_n)$ approaches $\frac{1}{3} L(\geo{M}_n) + \frac{2}{3}\ub{L}_n$ as $n$ increases.
\begin{table}[h]
\footnotesize
\centering
\caption{Perimeters of $\geo{B}_n$}
\label{table:perimeter}
\begin{tabular}{@{}rllllr@{}}
\toprule
$n$ & $L(\geo{R}_n)$ & $L(\geo{M}_n)$ & $L(\geo{B}_n)$ & $\ub{L}_n$ & $\frac{L(\geo{B}_n) - L(\geo{M}_n)}{\ub{L}_n - L(\geo{M}_n)}$ \\
\midrule
16 & 3.1214451523 & 3.1347065475 & 3.1352878881 & 3.1365484905 & 0.3156 \\
32 & 3.1365484905 & 3.1401338091 & 3.1402460942 & 3.1403311570 & 0.5690 \\
64 & 3.1403311570 & 3.1412623836 & 3.1412717079 & 3.1412772509 & 0.6272 \\
128 & 3.1412772509 & 3.1415127924 & 3.1415134468 & 3.1415138011 & 0.6487 \\
256 & 3.1415138011 & 3.1415728748 & 3.1415729180 & 3.1415729404 & 0.6589 \\
\bottomrule
\end{tabular}
\end{table}
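The entries of Table~\ref{table:perimeter} are internally consistent, as the illustrative snippet below verifies: the strict ordering $L(\geo{R}_n) < L(\geo{M}_n) < L(\geo{B}_n) < \ub{L}_n$, the identity $\ub{L}_n = 2n\sin\frac{\pi}{2n}$, and the fraction column (up to the displayed rounding).

```python
from math import sin, pi

# n, L(R_n), L(M_n), L(B_n), ub_L_n, fraction  (rows of Table 1)
rows = [
    (16,  3.1214451523, 3.1347065475, 3.1352878881, 3.1365484905, 0.3156),
    (32,  3.1365484905, 3.1401338091, 3.1402460942, 3.1403311570, 0.5690),
    (64,  3.1403311570, 3.1412623836, 3.1412717079, 3.1412772509, 0.6272),
    (128, 3.1412772509, 3.1415127924, 3.1415134468, 3.1415138011, 0.6487),
    (256, 3.1415138011, 3.1415728748, 3.1415729180, 3.1415729404, 0.6589),
]
for n, LR, LM, LB, ub, frac in rows:
    assert LR < LM < LB < ub                      # strict ordering
    assert abs(ub - 2*n*sin(pi/(2*n))) < 1e-9     # ub column is 2n sin(pi/2n)
    assert abs((LB - LM) / (ub - LM) - frac) < 5e-4
print("table consistent")
```

Note also that the $L(\geo{R}_n)$ column of each row coincides with the $\ub{L}_n$ column of the previous row, since $L(\geo{R}_{2n}) = 2n\sin\frac{\pi}{2n} = \ub{L}_n$.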
\section{Improved triacontadigon and hexacontatetragon}\label{sec:optimal}
It is natural to ask whether the constructed polygon $\geo{B}_n$ might be optimal for some $n$. Using constructive arguments, Propositions~\ref{thm:Z32} and~\ref{thm:Z64} show that $\geo{B}_{32}$ and $\geo{B}_{64}$ are suboptimal.
\begin{proposition}\label{thm:Z32}
There exists a convex equilateral small $32$-gon whose perimeter exceeds that of~$\geo{B}_{32}$.
\end{proposition}
\begin{proof}
Consider the $32$-gon $\geo{Z}_{32}$, illustrated in Figure~\ref{figure:Z32}. Its diameter graph has the edge $\geo{v}_0-\geo{v}_{16}$ as axis of symmetry and can be described by the $4$-length half-path $\geo{v}_0-\geo{v}_{11}-\geo{v}_{24}-\geo{v}_{10}-\geo{v}_{23}$ and the pendant edges $\geo{v}_{0} - \geo{v}_{15}, \ldots, \geo{v}_{0} - \geo{v}_{12}$, $\geo{v}_{11} - \geo{v}_{31}, \ldots, \geo{v}_{11} - \geo{v}_{25}$.
Place the vertex $\geo{v}_0$ at $(0,0)$ in the plane, and the vertex $\geo{v}_{16}$ at $(0,1)$. Let $t \in (0,\pi/32)$ denote the angle formed at the vertex $\geo{v}_0$ by the edge $\geo{v}_0-\geo{v}_{15}$ and the edge $\geo{v}_0-\geo{v}_{16}$. We have, from the half-path $\geo{v}_0 - \ldots -\geo{v}_{23}$,
\[
\begin{aligned}
x_{10} &= \sin 5t - \sin 13t + \sin 14t &&= -x_{22}, & y_{10} &= \cos 5t - \cos 13t + \cos 14t &&= y_{11},\\
x_{23} &= x_{10} - \sin 15t &&= -x_9, & y_{23} &= y_{10} - \cos 15t &&= y_9.
\end{aligned}
\]
Finally, $t$ is chosen so that $\|\geo{v}_{10}-\geo{v}_9\| = 2\sin (t/2)$, i.e.,
\[
(2(\sin 5t - \sin 13t + \sin 14t) - \sin 15t)^2+\cos^2 15t = 4\sin^2(t/2).
\]
We obtain $t = 0.0981744286\ldots$ and $L(\geo{Z}_{32}) = 64\sin (t/2) = 3.1403202339\ldots > L(\geo{B}_{32})$. One can verify that $\geo{Z}_{32}$ is small and convex.
\end{proof}
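The angle $t$ in this proof can be reproduced with a few lines of stdlib Python (an illustrative bisection, not the authors' code):

```python
from math import sin, cos, pi

def g(t: float) -> float:
    """Residual of the defining equation for Z_32: ||v_10 - v_9|| = 2 sin(t/2)."""
    x10 = sin(5*t) - sin(13*t) + sin(14*t)
    return (2*x10 - sin(15*t))**2 + cos(15*t)**2 - 4*sin(t/2)**2

lo, hi = 0.09, pi / 32        # g(lo) > 0 > g(hi) brackets the root
for _ in range(100):
    mid = (lo + hi) / 2
    if g(mid) > 0:
        lo = mid
    else:
        hi = mid
t = (lo + hi) / 2
print(t, 64 * sin(t / 2))     # 0.0981744286..., 3.1403202339...
```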
\begin{proposition}\label{thm:Z64}
There exists a convex equilateral small $64$-gon whose perimeter exceeds that of~$\geo{B}_{64}$.
\end{proposition}
\begin{proof}
Consider the $64$-gon $\geo{Z}_{64}$, illustrated in Figure~\ref{figure:Z64}. Its diameter graph has the edge $\geo{v}_0-\geo{v}_{32}$ as axis of symmetry and can be described by the $23$-length half-path $\geo{v}_0-\geo{v}_{31}-\geo{v}_{63}-\geo{v}_{30}-\geo{v}_{61} - \geo{v}_{29} - \geo{v}_{60} - \geo{v}_{28} - \geo{v}_{58} - \geo{v}_{27} - \geo{v}_{57} - \geo{v}_{26} - \geo{v}_{56} - \geo{v}_{25} - \geo{v}_{55} - \geo{v}_{24} - \geo{v}_{54} - \geo{v}_{23} - \geo{v}_{53} - \geo{v}_{21} - \geo{v}_{52} - \geo{v}_{19} - \geo{v}_{51} - \geo{v}_{16}$, the pendant edges $\geo{v}_{30}-\geo{v}_{62}$, $\geo{v}_{28} - \geo{v}_{59}$, $\geo{v}_{53} - \geo{v}_{22}$, $\geo{v}_{52} - \geo{v}_{20}$, $\geo{v}_{51} - \geo{v}_{18}$, $\geo{v}_{51}-\geo{v}_{17}$, and the $4$-length path $\geo{v}_{15}-\geo{v}_{50} - \geo{v}_{14} - \geo{v}_{49}$.
Place the vertex $\geo{v}_0$ at $(0,0)$ in the plane, and the vertex $\geo{v}_{32}$ at $(0,1)$. Let $t \in (0,\pi/64)$ denote the angle formed at the vertex $\geo{v}_0$ by the edge $\geo{v}_0-\geo{v}_{31}$ and the edge $\geo{v}_0-\geo{v}_{32}$. We have, from the half-path $\geo{v}_0 - \ldots -\geo{v}_{31}$,
\[
\begin{aligned}
x_{51} &= \sin t - \sin 2t + \sin 3t - \sin 5t + \sin 6t -\sin 7t + \sin 8t\\
&-\sum_{k=10}^{20} (-1)^k \sin kt + \sin 22t - \sin 23t + \sin 25t - \sin 26t &&= -x_{13},\\
y_{51} &= \cos t - \cos 2t + \cos 3t - \cos 5t + \cos 6t -\cos 7t + \cos 8t\\
&-\sum_{k=10}^{20} (-1)^k \cos kt + \cos 22t - \cos 23t + \cos 25t - \cos 26t &&= y_{13},\\
x_{16} &= x_{51} + \sin 29t &&= -x_{48},\\
y_{16} &= y_{51} + \cos 29t &&= y_{48},
\end{aligned}
\]
and, from the path $\geo{v}_{15} - \ldots -\geo{v}_{49}$,
\[
\begin{aligned}
x_{50} &= -1/2 &&= -x_{14}, & y_{50} &= y &&= y_{14},\\
x_{15} &= x_{50} + \cos t &&= -x_{49}, & y_{15} &= y_{50} + \sin t &&= y_{49}.
\end{aligned}
\]
Finally, $t$ and $y$ are chosen so that $\|\geo{v}_{51}-\geo{v}_{50}\| = \|\geo{v}_{16}-\geo{v}_{15}\| = 2\sin (t/2)$. We obtain $t = 0.0490873533\ldots$ and $L(\geo{Z}_{64}) = 128 \sin(t/2)= 3.1412752155\ldots > L(\geo{B}_{64})$. One can verify that $\geo{Z}_{64}$ is small and convex.
\end{proof}
Polygons $\geo{Z}_{32}$ and $\geo{Z}_{64}$ significantly improve the lower bounds on the optimal values. We note that
\[
\begin{aligned}
\ell_{32}^* - L(\geo{Z}_{32}) &< \ub{L}_{32} - L(\geo{Z}_{32}) = 1.09\ldots \times 10^{-5} < \ub{L}_{32} - L(\geo{B}_{32}) = 8.50\ldots \times 10^{-5},\\
\ell_{64}^* - L(\geo{Z}_{64}) &< \ub{L}_{64} - L(\geo{Z}_{64}) = 2.03\ldots \times 10^{-6} < \ub{L}_{64} - L(\geo{B}_{64}) = 5.54\ldots \times 10^{-6}.
\end{aligned}
\]
Also, the fractions
\[
\begin{aligned}
\frac{L(\geo{Z}_{32}) - L(\geo{B}_{32})}{\ub{L}_{32} - L(\geo{B}_{32})} &= 0.8715\ldots,\\
\frac{L(\geo{Z}_{64}) - L(\geo{B}_{64})}{\ub{L}_{64} - L(\geo{B}_{64})} &= 0.6327\ldots
\end{aligned}
\]
indicate that the perimeters of the improved polygons are quite close to the maximal perimeter. This suggests that another family of convex equilateral small polygons might further improve Theorem~\ref{thm:Bn}.
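Using $\ub{L}_n = 2n\sin\frac{\pi}{2n}$ and the perimeters reported above, these gaps and fractions can be rechecked directly (illustrative snippet):

```python
from math import sin, pi

ub32, ub64 = 64 * sin(pi / 64), 128 * sin(pi / 128)
B32, Z32 = 3.1402460942, 3.1403202339   # perimeters from Section 3 and Prop. 1
B64, Z64 = 3.1412717079, 3.1412752155   # perimeters from Table 1 and Prop. 2

print(ub32 - Z32, ub32 - B32)           # ~1.09e-5, ~8.50e-5
print(ub64 - Z64, ub64 - B64)           # ~2.03e-6, ~5.54e-6
print((Z32 - B32) / (ub32 - B32))       # ~0.8715
print((Z64 - B64) / (ub64 - B64))       # ~0.6327
```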
\begin{figure}[h]
\centering
\subfloat[$(\geo{Z}_{32},3.140320)$]{
\begin{tikzpicture}[scale=8]
\draw[dashed] (0,0) -- (0.0842,0.0505) -- (0.1630,0.1089) -- (0.2357,0.1748) -- (0.3016,0.2475) -- (0.3601,0.3263) -- (0.4105,0.4105) -- (0.4525,0.4992) -- (0.4855,0.5916) -- (0.4999,0.6887) -- (0.4952,0.7867) -- (0.4714,0.8819) -- (0.3827,0.9239) -- (0.2903,0.9569) -- (0.1951,0.9808) -- (0.0980,0.9952) -- (0,1) -- (-0.0980,0.9952) -- (-0.1951,0.9808) -- (-0.2903,0.9569) -- (-0.3827,0.9239) -- (-0.4714,0.8819) -- (-0.4952,0.7867) -- (-0.4999,0.6887) -- (-0.4855,0.5916) -- (-0.4525,0.4992) -- (-0.4105,0.4105) -- (-0.3601,0.3263) -- (-0.3016,0.2475) -- (-0.2357,0.1748) -- (-0.1630,0.1089) -- (-0.0842,0.0505) -- cycle;
\draw[red,thick] (0,0)--(0,1);
\draw (0,0)--(0.0980,0.9952);\draw (0,0)--(-0.0980,0.9952);
\draw (0,0)--(0.1951,0.9808);\draw (0,0)--(-0.1951,0.9808);
\draw (0,0)--(0.2903,0.9569);\draw (0,0)--(-0.2903,0.9569);
\draw (0,0)--(0.3827,0.9239);\draw (0,0)--(-0.3827,0.9239);
\draw[blue,thick] (0,0)--(0.4714,0.8819);\draw[blue,thick] (0,0)--(-0.4714,0.8819);
\draw (0.0842,0.0505)--(-0.4714,0.8819);\draw (-0.0842,0.0505)--(0.4714,0.8819);
\draw (0.1630,0.1089)--(-0.4714,0.8819);\draw (-0.1630,0.1089)--(0.4714,0.8819);
\draw (0.0842,0.0505)--(-0.4714,0.8819);\draw (-0.0842,0.0505)--(0.4714,0.8819);
\draw (0.2357,0.1748)--(-0.4714,0.8819);\draw (-0.2357,0.1748)--(0.4714,0.8819);
\draw (0.3016,0.2475)--(-0.4714,0.8819);\draw (-0.3016,0.2475)--(0.4714,0.8819);
\draw (0.3601,0.3263)--(-0.4714,0.8819);\draw (-0.3601,0.3263)--(0.4714,0.8819);
\draw (0.4105,0.4105)--(-0.4714,0.8819);\draw (-0.4105,0.4105)--(0.4714,0.8819);
\draw (0.4525,0.4992)--(-0.4714,0.8819);\draw (-0.4525,0.4992)--(0.4714,0.8819);
\draw[blue,thick] (0.4855,0.5916)--(-0.4714,0.8819);\draw[blue,thick] (-0.4855,0.5916)--(0.4714,0.8819);
\draw[blue,thick] (0.4952,0.7867)--(-0.4855,0.5916);\draw[blue,thick] (-0.4952,0.7867)--(0.4855,0.5916);
\draw[blue,thick] (0.4952,0.7867)--(-0.4999,0.6887);\draw[blue,thick] (-0.4952,0.7867)--(0.4999,0.6887);
\end{tikzpicture}
\label{figure:Z32}
}
\subfloat[$(\geo{Z}_{64},3.141275)$]{
\begin{tikzpicture}[scale=8]
\draw[dashed] (0,0) -- (0.0490,0.0036) -- (0.0973,0.0120) -- (0.1452,0.0228) -- (0.1918,0.0382) -- (0.2367,0.0580) -- (0.2805,0.0801) -- (0.3220,0.1064) -- (0.3607,0.1366) -- (0.3962,0.1704) -- (0.4283,0.2076) -- (0.4565,0.2477) -- (0.4786,0.2915) -- (0.4940,0.3382) -- (0.5000,0.3869) -- (0.4988,0.4359) -- (0.4952,0.4849) -- (0.4868,0.5332) -- (0.4760,0.5811) -- (0.4629,0.6284) -- (0.4453,0.6742) -- (0.4254,0.7191) -- (0.4012,0.7618) -- (0.3749,0.8033) -- (0.3447,0.8420) -- (0.3109,0.8775) -- (0.2737,0.9096) -- (0.2336,0.9378) -- (0.1909,0.9620) -- (0.1451,0.9797) -- (0.0978,0.9928) -- (0.0491,0.9988) -- (0,1) -- (-0.0491,0.9988) -- (-0.0978,0.9928) -- (-0.1451,0.9797) -- (-0.1909,0.9620) -- (-0.2336,0.9378) -- (-0.2737,0.9096) -- (-0.3109,0.8775) -- (-0.3447,0.8420) -- (-0.3749,0.8033) -- (-0.4012,0.7618) -- (-0.4254,0.7191) -- (-0.4453,0.6742) -- (-0.4629,0.6284) -- (-0.4760,0.5811) -- (-0.4868,0.5332) -- (-0.4952,0.4849) -- (-0.4988,0.4359) -- (-0.5000,0.3869) -- (-0.4940,0.3382) -- (-0.4786,0.2915) -- (-0.4565,0.2477) -- (-0.4283,0.2076) -- (-0.3962,0.1704) -- (-0.3607,0.1366) -- (-0.3220,0.1064) -- (-0.2805,0.0801) -- (-0.2367,0.0580) -- (-0.1918,0.0382) -- (-0.1452,0.0228) -- (-0.0973,0.0120) -- (-0.0490,0.0036) -- cycle;
\draw[red,thick] (0,0) -- (0,1);
\draw[blue,thick] (0,0) -- (0.0491,0.9988) -- (-0.0490,0.0036) -- (0.0978,0.9928) -- (-0.1452,0.0228) -- (0.1451,0.9797) -- (-0.1918,0.0382) -- (0.1909,0.9620) -- (-0.2805,0.0801) -- (0.2336,0.9378) -- (-0.3220,0.1064) -- (0.2737,0.9096) -- (-0.3607,0.1366) -- (0.3109,0.8775) -- (-0.3962,0.1704) -- (0.3447,0.8420) -- (-0.4283,0.2076) -- (0.3749,0.8033) -- (-0.4565,0.2477) -- (0.4254,0.7191) -- (-0.4786,0.2915) -- (0.4629,0.6284) -- (-0.4940,0.3382) -- (0.4952,0.4849);
\draw[blue,thick] (0,0) -- (-0.0491,0.9988) -- (0.0490,0.0036) -- (-0.0978,0.9928) -- (0.1452,0.0228) -- (-0.1451,0.9797) -- (0.1918,0.0382) -- (-0.1909,0.9620) -- (0.2805,0.0801) -- (-0.2336,0.9378) -- (0.3220,0.1064) -- (-0.2737,0.9096) -- (0.3607,0.1366) -- (-0.3109,0.8775) -- (0.3962,0.1704) -- (-0.3447,0.8420) -- (0.4283,0.2076) -- (-0.3749,0.8033) -- (0.4565,0.2477) -- (-0.4254,0.7191) -- (0.4786,0.2915) -- (-0.4629,0.6284) -- (0.4940,0.3382) -- (-0.4952,0.4849);
\draw[orange,thick] (0.4988,0.4359) -- (-0.5000,0.3869) -- (0.5000,0.3869) -- (-0.4988,0.4359);
\draw (0.0978,0.9928) -- (-0.0973,0.0120);\draw (-0.0978,0.9928) -- (0.0973,0.0120);
\draw (0.1909,0.9620) -- (-0.2367,0.0580);\draw (-0.1909,0.9620) -- (0.2367,0.0580);
\draw (-0.4565,0.2477) -- (0.4012,0.7618);\draw (0.4565,0.2477) -- (-0.4012,0.7618);
\draw (-0.4786,0.2915) -- (0.4453,0.6742);\draw (0.4786,0.2915) -- (-0.4453,0.6742);
\draw (-0.4940,0.3382) -- (0.4760,0.5811);\draw (0.4940,0.3382) -- (-0.4760,0.5811);
\draw (-0.4940,0.3382) -- (0.4868,0.5332);\draw (0.4940,0.3382) -- (-0.4868,0.5332);
\end{tikzpicture}
\label{figure:Z64}
}
\caption{Improved convex equilateral small $n$-gons $(\geo{Z}_n,L(\geo{Z}_n))$: (a) Triacontadigon $\geo{Z}_{32}$ with larger perimeter than $\geo{B}_{32}$; (b) Hexacontatetragon $\geo{Z}_{64}$ with larger perimeter than $\geo{B}_{64}$}
\end{figure}
\section{Conclusion}\label{sec:conclusion}
Lower bounds on the maximal perimeter of convex equilateral small $n$-gons were provided when $n$ is a power of $2$, and these bounds are tighter than the previous ones from the literature. For any $n=2^s$ with integer $s\ge 4$, we constructed a convex equilateral small $n$-gon $\geo{B}_n$ whose perimeter is within $\pi^4/n^4 + O(1/n^5)$ of the optimal value. For $n=32$ and $n=64$, we proposed polygons with even larger perimeters.
\bibliographystyle{ieeetr}
% arXiv:2105.10618 --- "Tight bounds on the maximal perimeter of convex equilateral small polygons"
% Subjects: Optimization and Control (math.OC); Combinatorics (math.CO); Metric Geometry (math.MG)
% Source: https://arxiv.org/abs/cs/9910009
% Title: Locked and Unlocked Polygonal Chains in 3D
% Abstract: In this paper, we study movements of simple polygonal chains in 3D. We say that an open, simple polygonal chain can be straightened if it can be continuously reconfigured to a straight sequence of segments in such a manner that both the length of each link and the simplicity of the chain are maintained throughout the movement. The analogous concept for closed chains is convexification: reconfiguration to a planar convex polygon. Chains that cannot be straightened or convexified are called locked. While there are open chains in 3D that are locked, we show that if an open chain has a simple orthogonal projection onto some plane, it can be straightened. For closed chains, we show that there are unknotted but locked closed chains, and we provide an algorithm for convexifying a planar simple polygon in 3D. All our algorithms require only O(n) basic ``moves'' and run in linear time.

\section{Introduction}
\seclab{Introduction}
A {\em polygonal chain\/} $P=(v_0,v_1,\ldots,v_{n-1})$ is a sequence
of consecutively joined segments (or edges)
$e_i =v_iv_{i+1}$ of fixed lengths $\ell_i = |e_i|$,
embedded in space.\footnote{
All index arithmetic throughout the paper is mod $n$.
}
A chain is {\em closed\/} if
the line segments are joined in cyclic fashion, i.e., if $v_{n-1}=v_0$;
otherwise, it is {\em open}.
A closed chain is also called a {\em polygon}.
If the line segments are regarded as obstacles, then the chains must remain
{\em simple\/} at all times, i.e., self-intersection is not allowed.
The edges of a simple chain are
pairwise disjoint except for adjacent edges, which share the common
endpoint between them.
We will often use {\em chain\/} to abbreviate ``simple polygonal chain.''
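As a small 2D illustration of the simplicity condition (the chains studied here live in 3D, but the pairwise-disjointness requirement is the same), an open polygonal chain with integer coordinates can be tested with exact orientation arithmetic; the function names below are our own:

```python
def orient(a, b, c):
    """Twice the signed area of triangle abc (positive if counterclockwise)."""
    return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

def on_segment(a, b, p):
    """True iff p lies on the closed segment ab."""
    return (orient(a, b, p) == 0 and
            min(a[0], b[0]) <= p[0] <= max(a[0], b[0]) and
            min(a[1], b[1]) <= p[1] <= max(a[1], b[1]))

def segments_intersect(p, q, r, s):
    o1, o2 = orient(p, q, r), orient(p, q, s)
    o3, o4 = orient(r, s, p), orient(r, s, q)
    if o1*o2 < 0 and o3*o4 < 0:          # proper crossing
        return True
    return (on_segment(p, q, r) or on_segment(p, q, s) or
            on_segment(r, s, p) or on_segment(r, s, q))

def is_simple_open_chain(v):
    """Edges must be pairwise disjoint, except adjacent ones at their shared vertex."""
    m = len(v) - 1                       # number of edges
    for i in range(m):
        for j in range(i + 1, m):
            if j == i + 1:               # adjacent edges: no fold-back overlap
                if on_segment(v[i], v[i+1], v[j+1]) or on_segment(v[j], v[j+1], v[i]):
                    return False
            elif segments_intersect(v[i], v[i+1], v[j], v[j+1]):
                return False
    return True

print(is_simple_open_chain([(0, 0), (2, 0), (2, 2), (0, 2)]))  # True
print(is_simple_open_chain([(0, 0), (2, 2), (2, 0), (0, 2)]))  # False (crossing)
```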
For an open chain our goal is to straighten it;
for a closed chain the goal is to {\em convexify\/} it,
i.e., to reconfigure it to a planar convex polygon.
Both goals are to be achieved by continuous motions that
maintain simplicity of the chain throughout, i.e.,
links are not permitted to intersect.
A chain that cannot be straightened or convexified we call {\em locked};
otherwise the chain is {\em unlocked}.
Note that a chain in 3D can be continuously moved between any of its
unlocked configurations, for example via straightened or convexified
intermediate configurations.
Basic questions concerning open and closed chains have proved surprisingly
difficult.
For example, the question of whether every planar, simple open chain
can be straightened in the plane while maintaining simplicity
has circulated in the computational geometry community for years,
but remains open at this writing.
Whether locked chains exist in dimensions $d \ge 4$ was only settled
(negatively, in~\cite{co-pccl4d-99})
as a result of the open problem we posed in a preliminary
version of this paper~\cite{bddlloorstw-lupc3d-99}.
In piecewise linear knot theory, complete classification of the 3D
embeddings of closed chains with $n$ edges has been found to be difficult,
even for $n = 6$. These types of questions are basic to the study of
embedding and reconfiguration of edge-weighted graphs, where the weight
assigned to an edge specifies the desired distance between the vertices
it joins. Graph embedding and reconfiguration problems, with or without
a simplicity requirement, have arisen in many contexts,
including molecular conformation,
mechanical design,
robotics,
animation,
rigidity theory,
algebraic geometry,
random walks,
and knot theory.
We obtain several results for chains in 3D:
open chains with a simple orthogonal projection,
or embedded in the surface of a polytope,
may be straightened
(Sections~\secref{Open.3D} and~\secref{Open.Polytope});
but there exist open and closed chains that are locked
(Section~\secref{Knitting.Needles}).
For closed chains initially taking the form of a polygon lying in a plane,
it has long been known that
they may be convexified in 3D, but only via a procedure
that may require an unbounded number of moves.
We
provide an algorithm to perform the
convexification
(Section~\secref{StLouis}) in $O(n)$ moves.
Previous computational geometry research on the reconfiguration
of chains
(e.g., \cite{k-rpuras-97}, \cite{ksw-frit-96}, \cite{w-aigplm-92})
typically concerns planar chains with
crossing links, moving in the presence of obstacles;
\cite{s-scsc-73}
and \cite{lw-rcpce-95}
reconfigure closed chains with crossing links in all dimensions $d \ge 2$.
In contrast, throughout this paper we work in 3D
and
require that chains remain simple throughout their motions.
Our algorithmic methods
complement the algebraic and topological approaches to these
problems, offering constructive proofs for topological results and
raising computational, complexity, and algorithmic issues.
Several open problems are listed in Section~\secref{Open}.
\subsection{Background}
Thinking about movements of polygonal chains goes back at least
to A.~Cauchy's 1813 theorem on the rigidity of polyhedra~\cite[Ch.~6]{c-p-97}.
His proof employed a key lemma on opening angles at the joints of
a planar convex open polygonal chain.
This lemma, now known as Steinitz's Lemma (because E.~Steinitz gave
the first correct proof in the 1930's), is similar in spirit to
our Lemma~\lemref{barb}.
Planar linkages, objects more general than polygonal chains in
that a graph structure is permitted, have been studied
intensively by mechanical engineers since at least Peaucellier's 1864 linkage.
Because the goals of this linkage work are so different from ours,
we could not find directly relevant results in the literature
(e.g., \cite{h-kgm-78}). However, we have no doubt that simple
results like our convexification of quadrilaterals (Lemma~\lemref{quad.M02})
are known to that community.
Work in algorithmic robotics is relevant.
In particular, the Schwartz-Sharir cell decomposition approach~\cite{ss-pmp2g-83}
shows that all the problems we consider in this paper are decidable,
and Canny's roadmap algorithm~\cite{c-crmp-87} leads to
an algorithm singly-exponential in $n$, the number of vertices of the
polygonal chain.
Although hardness results are known for more general
linkages~\cite{hjw-mp2dl-84},
we know of no nontrivial lower bounds for the problems discussed in
this paper.
See, e.g.,
\cite{hjw-mp2dl-84},
\cite{k-gir-85},
\cite{ch-dgmc-88},
or
\cite{w-rsa-97}
for other weighted graph embedding and reconfiguration problems.
\subsection{Measuring Complexity}
\seclab{Complexity}
As usual, we compute the time and space
complexity of our algorithms as a function of
$n$, the number of vertices of the polygonal chain.
This, however, will not be our focus, for
it is of perhaps greater interest
to measure the geometric complexity
of a proposed reconfiguration of a chain.
We first define what constitutes a ``move'' for these
counting purposes.
Define a {\em joint movement\/} at $v_i$ to be a monotonic rotation
of $e_i$ about an axis through $v_i$ fixed with respect to a reference frame
rigidly attached to some other edges of the chain.
For example,
a joint movement could feasibly be executed by a motor at $v_i$
mounted in a frame attached to $e_{i-1}$ and $e_{i-2}$.
The axis might be moving in absolute space (due to other joint movements),
but it must be fixed in the reference frame.
Although more general movements could be explored, these will
suffice for our purposes.
A {\em monotonic rotation\/} does not stop or reverse direction.
Note we ignore the angular velocity profile of a joint movement,
which might not be appropriate in some applications.
Our primary measure of complexity is a {\em move\/}:
a reconfiguration of the chain $P$ of $n$ links to
$P'$ that may be composed of a constant number of simultaneous
joint movements. Here the constant number should be independent
of $n$, and is small ($\le 4$)
in our algorithms.
All of our algorithms achieve reconfiguration in $O(n)$ moves.
One of our open problems (Section~\secref{Open}) asks for
exploration of a measure of the complexity of movements.
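To make the notion of a move concrete, a joint movement can be modeled as a rigid rotation of the subchain beyond $v_i$ about a fixed axis through $v_i$; link lengths are automatically preserved. A minimal sketch (the function names are ours, not from the paper):

```python
import math

def rotate_about_axis(p, origin, axis, theta):
    """Rodrigues' formula: rotate point p by angle theta about the line
    through `origin` with direction `axis`."""
    norm = math.sqrt(sum(a * a for a in axis))
    k = tuple(a / norm for a in axis)
    v = tuple(p[i] - origin[i] for i in range(3))
    dot = sum(k[i] * v[i] for i in range(3))
    cross = (k[1] * v[2] - k[2] * v[1],
             k[2] * v[0] - k[0] * v[2],
             k[0] * v[1] - k[1] * v[0])
    c, s = math.cos(theta), math.sin(theta)
    return tuple(origin[i] + v[i] * c + cross[i] * s + k[i] * dot * (1 - c)
                 for i in range(3))

def joint_movement(chain, i, axis, theta):
    """Rotate the subchain v_{i+1}, ..., v_{n-1} rigidly about an axis
    through v_i; every link length is preserved, so this is a valid
    joint movement at v_i."""
    return chain[:i + 1] + [rotate_about_axis(p, chain[i], axis, theta)
                            for p in chain[i + 1:]]
```

For example, straightening the joint at $v_i$ is the monotonic rotation that brings the angle at $v_i$ to $\pi$.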
\section{Open Chains with Simple Projections}
\seclab{Open.3D}
This section considers an open polygonal chain $P$ in 3D with a simple
orthogonal projection $P'$ onto a plane.
Note that there is a polynomial-time
algorithm to determine whether $P$ admits such a projection,
and to output a projection plane if it exists~\cite{bgrt-dnpos-96}.
We choose our coordinate system so that
the $xy$-plane ${\Pi_{xy}}$ is parallel to this plane;
we will refer to lines and planes parallel to the $z$-axis as ``vertical.''
We will describe an algorithm that straightens $P$, working from
one end of the chain.
We use the notation
$P[i,j]$ to represent the chain of edges
$(v_i, v_{i+1},\ldots,v_j)$, including $v_i$ and $v_j$,
and $P(i,j)$ to represent the chain without its endpoints:
$P(i,j) = P[i,j] \setminus \{ v_i , v_j \}$.
Any object lying in plane
${\Pi_{xy}}$ will be labelled with a prime.
Consider the projection $P'=(v'_0,v'_1,\ldots,v'_{n-1})$ on ${\Pi_{xy}}$.
Let $r_i = \min_{j\not\in \{i-1,i\}} d(v'_i,e'_j)$, where
$d(v',e')$ is the minimum distance from vertex $v'$ to a point on edge
$e'$. Construct a disk of radius $r_i$ around each vertex $v'_i$.
The interior of each disk
does not intersect any other vertex
of $P'$ and does not intersect any edges other than the two incident
to $v'_i$: $e'_{i-1}$ and $e'_i$; see Fig.~\figref{simple.proj}.
\begin{figure}[htbp]
\begin{center}
\ \psfig{figure=simple.proj.eps,height=2.5in}
\end{center}
\caption{The projection $P'$ of $P$. Each vertex $v'_i$ is surrounded by
an ``empty'' disk of radius $r_i$. Several such disks are shown.}
\figlab{simple.proj}
\end{figure}
We construct in 3D a vertical cylinder $C_i$ centered on each vertex
$v_i$ of radius $r = \frac{1}{3}\min_i \{ r_i \}$. This choice of $r$
ensures that no two cylinders intersect one another (the choice of the
fraction $\frac{1}{3} < \frac{1}{2}$ guarantees that cylinders do not even touch),
and no edges of $P$, other than those incident to $v_i$, intersect
$C_i$, for all $i$.
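The radii $r_i$ and the common cylinder radius $r$ can be computed by brute force over all vertex--edge pairs in $O(n^2)$ time (a faster medial-axis method is discussed at the end of this section). A sketch, with illustrative names:

```python
import math

def point_segment_dist(p, a, b):
    """Distance from point p to the segment ab in the plane."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    L2 = dx * dx + dy * dy
    if L2 == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / L2))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def cylinder_radius(proj):
    """r = (1/3) min_i r_i, where r_i is the distance from v'_i to the
    nearest edge of the projected open chain not incident to v'_i."""
    n = len(proj)
    r = math.inf
    for i in range(n):
        for j in range(n - 1):       # edge e'_j = v'_j v'_{j+1}
            if j in (i - 1, i):      # skip the two edges incident to v'_i
                continue
            r = min(r, point_segment_dist(proj[i], proj[j], proj[j + 1]))
    return r / 3.0
```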
The straightening algorithm proceeds in two stages.
In the first stage, the links are squeezed like an accordion
into the cylinders, so that after step $i$ all the links
of $P_{i+1}=P[0,i+1]$ are packed into $C_{i+1}$.
Let $\Pi_i$ be the vertical plane containing
$e_i$ (and therefore $e'_i$).
After the first stage, the chain is {\em monotone\/} in $\Pi_i$,
i.e., it is monotone with respect to the line $\Pi_i \cap \Pi_{xy}$
in that the intersection of the chain with a vertical line in $\Pi_i$ is either
empty or a single point.
In stage two, the chain
is unraveled link by link
into a straight line.
The rest of this section describes the first stage.
Let ${\delta} = r/n$.
\subsection{Stage 1}
We describe Step~0 and the general Step~$i$ separately,
although the former is a special case of the latter.
\begin{enumerate}
\conf{\setlength{\itemsep}{0pt}}
\item[0.]
\conf{\setlength{\itemsep}{0pt}}
\begin{enumerate}
\conf{\setlength{\itemsep}{0pt}}
\item Rotate $e_0$ about $v_1$, within $\Pi_0$,
so that the projection of $e_0$
on ${\Pi_{xy}}$ is contained in $e'_0$ throughout the motion.
The direction of rotation is determined by the relative heights
($z$-coordinates) of $v_0$ and $v_1$.
Thus
if $v_0$ is at or above $v_1$, $e_0$ is rotated upwards
($v_0$ remains above $v_1$ during the rotation);
see Fig.~\figref{cylinder}.
If $v_0$ is lower than $v_1$, $e_0$ is rotated downwards
($v_0$ remains below $v_1$ during the rotation).
The rotation stops when $v_0$ lies within ${\delta}$ of the vertical
line through $v_1$, i.e., when $v_0$ lies in the cylinder $C_1$
and is very close to its axis.
The value of ${\delta}$ is chosen to be $r/n$ so that in later steps
more links can be accommodated in the cylinder.
Again see Fig.~\figref{cylinder}.
\item
Now we rotate $e_0$ about the axis of $C_1$ away from $e_1$,
until $e'_0$ and $e'_1$
are collinear (but not overlapping),
i.e., until $e_0$ lies in the vertical plane $\Pi_1$.
\end{enumerate}
\begin{figure}[htbp]
\begin{center}
\ \psfig{figure=cylinder.eps,height=3in}
\end{center}
\caption{
Step~0: (a) $e_0$ is first rotated within $\Pi_0$ into $C_1$,
and then (b) rotated into the vertical plane $\Pi_1$ containing $e_1$.
}
\figlab{cylinder}
\end{figure}
After completion of Step~0, $(v_0, v_1, v_2)$ forms a
chain in $\Pi_1$ monotone with respect to the line $\Pi_1 \cap {\Pi_{xy}}$.
\item[$i$.]
At the start of Step~$i>0$, we
have a monotone chain
$P_{i+1} = P[0,i+1]$
contained in the vertical plane $\Pi_i$ through $e_i$,
with $P_i=P[0,i]$ in $C_i$ and $v_0$ within a distance of $i{\delta}$
of the axis of $C_i$.
\begin{enumerate}
\conf{\setlength{\itemsep}{0pt}}
\item
As in Step~0(a), rotate
$e_i$ within $\Pi_i$
(in the direction that shortens the vertical projection of $e_i$)
so that
$v_i$ lies within a distance ${\delta}$ of the axis of $C_{i+1}$.
The difference now is that $v_i$ is not the start of the chain,
but rather is connected to the chain $P_i$.
During the rotation of $e_i$ we ``drag'' $P_i$ along in such a way that
only joints $v_i$ and $v_{i+1}$ rotate, keeping
the joints $v_1,\ldots,v_{i-1}$ frozen.
Furthermore, we constrain the motion of $P_i$
(by appropriate rotation about joint $v_i$)
so that it does
not undergo a rotation. Thus at any instant of time during the
rotation of $e_i$, the position of $P_i$ remains within $\Pi_i$
and is a translated copy of the initial $P_i$.
See Fig.~\figref{accordion}.
\begin{figure}[htbp]
\begin{center}
\ \psfig{figure=accordion.eps,height=3in}
\end{center}
\caption{The chain $P_i$ translates within $\Pi_i$.}
\figlab{accordion}
\end{figure}
\item
Following Step~0(b), rotate
$P_{i+1}$
about the axis of $C_{i+1}$ until $e'_i$ and $e'_{i+1}$ are collinear
(but not overlapping), i.e., until $P_{i+1}$ lies in the vertical
plane $\Pi_{i+1}$.
\end{enumerate}
At the completion of Step~$i$ we therefore have a chain
$P_{i+2} =P[0,i+2]$ in the vertical plane $\Pi_{i+1}$, with
$P_{i+1}$ in $C_{i+1}$ and $v_0$ within a distance of $(i+1){\delta}$ of its axis.
The chain is monotone in $\Pi_{i+1}$
with respect to the line $\Pi_{i+1} \cap {\Pi_{xy}}$.
\end{enumerate}
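The geometry of Step~$i$(a) is elementary: $v_i$ moves on a circle of radius $\ell_i$ about $v_{i+1}$ inside the vertical plane $\Pi_i$, stopping at horizontal distance ${\delta}$ from the axis of $C_{i+1}$, so its height relative to $v_{i+1}$ is forced by the fixed link length. A sketch of the final position (illustrative names; the in-plane direction is given as a horizontal unit vector):

```python
import math

def lifted_position(v_next, ell, delta, direction, above=True):
    """Final position of v_i after Step i(a): at horizontal distance
    `delta` from the vertical axis through v_next (= v_{i+1}), at height
    +/- sqrt(ell^2 - delta^2) forced by the link length ell.
    `direction` is a horizontal unit vector in the vertical plane Pi_i."""
    assert 0 <= delta < ell          # v_i can reach within delta of the axis
    h = math.sqrt(ell * ell - delta * delta)
    x, y, z = v_next
    dx, dy = direction
    return (x + delta * dx, y + delta * dy, z + h if above else z - h)
```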
\subsection{Stage 2}
Now it is trivial to unfold this monotone chain by straightening one
joint at a time, i.e., rotating each joint angle to $\pi$,
starting at either end of the chain.
We have therefore established the first claim of the following theorem:
\begin{theorem}
A polygonal chain of $n$ links
with a simple orthogonal projection may be straightened,
in $O(n)$ moves, with an algorithm of
$O(n)$ time and space complexity.
\theolab{simple.proj}
\end{theorem}
Counting the number of moves is straightforward.
Stage~1, Step~$i$(a) requires one move:
only joints $v_i$ and ${v_{i+1}}$ rotate.
Step~$i$(b) is again one move: only ${v_{i+1}}$ rotates.
So Stage~1 is completed in $2n$ moves.
As Stage~2 takes $n-1$ moves, the whole procedure is accomplished
with $O(n)$ moves.
Each move can be computed in constant time, so the time complexity
is dominated by the computation of the cylinder radii $r_i$.
These can be trivially computed in $O(n^2)$ time, by computing
each vertex-vertex and vertex-edge distance.
However, a more efficient computation is possible, based on the
medial axis of a polygon, as follows.
Given the projected chain $P'$ in the plane
(Fig.~\figref{medial}a), form two simple polygons
$P_1$ and $P_2$, by doubling the chain from its endpoint
$v'_0$ until the convex hull is reached (say at point $x$),
and from there connecting along the line bisecting the
hull angle at $x$ to a large surrounding rectangle, and similarly
connecting from $v'_{n-1}$ to the hull to the rectangle.
For $P_1$ close the polygon above $P'$, and below for $P_2$.
See Figs.~\figref{medial}bc. Note that $P_1 \cup P_2$ covers
the rectangle, which, if chosen large, effectively covers the plane
for the purposes of distance computation.
\begin{figure}[htbp]
\begin{center}
\ \psfig{figure=medial.eps,height=12cm}
\end{center}
\caption{(a) Chain $P'$; (b) Polygon $P_1$; (c) Polygon $P_2$.}
\figlab{medial}
\end{figure}
Compute the medial axes of $P_1$ and $P_2$ using
the linear-time algorithm of~\cite{csw-fmasp-95}.
The distances $r_i$ can now be determined from the
distance information in the medial axes.
For a convex vertex $v_i$ of $P_k$, its minimum
``feature distance'' can be found from axis information
at the junction of the axis edge incident to $v_i$.
For a reflex vertex, the information is with the associated
axis parabolic arc.
Because the bounding box is chosen to be large, no
vertex's closest feature is part of the bounding box,
so the closest feature must be part of the chain.
\section{Open Chains on a Polytope}
\seclab{Open.Polytope}
In this section we show that any open chain embedded on the
surface of a convex polytope may be straightened. We start with a
planar chain which we straighten in 3D.
Let $P$ be an open chain in 2D, lying in ${\Pi_{xy}}$.
It may be easily straightened by the following procedure.
Rotate $e_0$ within $\Pi_0$ until it is vertical; now
$v_0$ projects into $v_1$ on ${\Pi_{xy}}$.
In general, rotate $e_i$ within $\Pi_i$ until $v_i$
sits vertically above $v_{i+1}$.
Throughout this motion, keep the previously straightened
chain $P_i=P[0,i]$ above $v_i$ in a vertical ray through $v_i$.
This process clearly maintains simplicity throughout,
as the projection at any stage is a subset of the original
simple chain in ${\Pi_{xy}}$.
In fact, this procedure can be seen as a special case of
the algorithm described in the preceding section.
An easy generalization of this ``pick-up into a vertical ray''
idea permits straightening any open chain lying on the
surface of a convex polytope ${\cal P}$.
The same procedure is followed, except that the
surface of ${\cal P}$ plays the role of ${\Pi_{xy}}$, and surface
normals play the roles of vertical rays.
When a vertex $v_i$ of the polygonal chain $P$ lies on
an edge $e$ between two faces $f_1$ and $f_2$ of ${\cal P}$,
then the line containing $P_i$ is rotated from $R_1$,
the ray through $v_i$ and normal to $f_1$, through
an angle of measure $\pi - {\delta}(e)$,
where ${\delta}(e)$ is the (interior) dihedral angle at $e$,
to $R_2$,
the ray through $v_i$ and normal to $f_2$.
This algorithm uses $O(n)$ moves and can be executed in $O(n)$ time.
Note that it is possible to draw a polygonal chain on a polytope
surface that has no simple projection.
So this algorithm handles some cases not covered by
Theorem~\theoref{simple.proj}.
We believe that the sketched algorithm applies to
a class of polyhedra
wider than convex polytopes,
but we will not pursue this further here.
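The swivel at a polytope edge can be computed from the two face normals: since the interior dihedral angle ${\delta}(e)$ and the angle between the outward unit normals sum to $\pi$, the rotation from $R_1$ to $R_2$ is through exactly the angle between the normals. A sketch (names are ours):

```python
import math

def swivel_angle(n1, n2):
    """Angle pi - delta(e) through which the straightened subchain swivels
    when crossing edge e.  For outward unit normals n1, n2 of the two
    incident faces, this equals the angle between n1 and n2, since the
    interior dihedral angle delta(e) is pi minus that angle."""
    dot = sum(a * b for a, b in zip(n1, n2))
    return math.acos(max(-1.0, min(1.0, dot)))  # clamp for safety
```

For adjacent faces of a cube, the interior dihedral angle is $\pi/2$, and the subchain indeed swivels through $\pi - \pi/2 = \pi/2$.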
\section{Locked Chains}
\seclab{Knitting.Needles}
Having established that two classes of open chains may be
straightened, we show in this section that not all open chains
may be straightened, describing one locked open chain of
five links (Section~\secref{Locked.Open}).
A modification of this example establishes
the same result for closed chains (Section~\secref{Locked.Closed}).
Both of these results were obtained independently by other
researchers~\cite{cj-nepiu-98}.
Our proofs are, however, sufficiently different
to be of independent interest.
\subsection{A Locked Open Chain}
\seclab{Locked.Open}
Consider the chain $K=(v_0,\ldots,v_5)$ configured as in
Fig.~\figref{knitting},
where the standard knot theory convention is followed to denote
``over'' and ``under'' relations.
Let $L = \ell_1 + \ell_2 + \ell_3$ be the total length of the
short central links, and
let $\ell_0$ and $\ell_4$ be both larger than $L$;
in particular, choose $\ell_0 = L + {\delta}$ and $\ell_4 = 2L + {\delta}$ for ${\delta} > 0$.
(One can think of this as
composed of two rigid knitting needles, $e_0$
and $e_4$, connected by a flexible cord of length $L$.)
Finally, center a ball $B$ of radius $r = L + {\epsilon}$ on $v_1$,
with $0 < 2{\epsilon} < {\delta}$.
The two vertices $v_0$ and $v_5$ are exterior to $B$, while the
other four are inside $B$.
See Fig.~\figref{knitting}.
\begin{figure}[htbp]
\begin{center}
\ \psfig{figure=knitting.eps,height=3.5in}
\end{center}
\caption{A locked open chain $K$ (``knitting needles'').
(The first and last edges $e_0$ and $e_4$ are longer than
they appear in this view.)}
\figlab{knitting}
\end{figure}
Assume now that the chain $K$ can be straightened by some motion.
During the entire process,
$\{ v_1, v_2, v_3, v_4 \} \subset B$ because $L < r$.
Of course $v_0$ remains outside of $B$ because $\ell_0 > r$.
Now because $v_4 \in B$ and
$\ell_4 = |v_4v_5| = 2L+{\delta}$ is more than the diameter $2r = 2(L+{\epsilon})$ of $B$,
$v_5$ also remains exterior to $B$ throughout the motion.
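The length constraints driving this argument can be checked mechanically for any admissible choice of $L$, ${\delta}$, and ${\epsilon}$; the following sanity check uses illustrative values only:

```python
# Knitting-needle parameters: any L > 0 and 0 < 2*eps < delta will do.
L, delta, eps = 3.0, 0.5, 0.2
assert 0 < 2 * eps < delta

ell0 = L + delta          # |e_0|
ell4 = 2 * L + delta      # |e_4|
r = L + eps               # radius of the ball B centered at v_1

assert L < r              # the short central links keep v_2, v_3, v_4 inside B
assert ell0 > r           # v_0 stays outside B
assert ell4 > 2 * r       # |v_4 v_5| exceeds diam(B), so v_5 stays outside B
```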
Before proceeding with the proof, we recall some terms from
knot theory.
The {\em trivial knot\/} is an unknotted closed curve homeomorphic
to a circle.
The {\em trefoil knot\/} is the simplest
knot, the only knot that may be drawn with three crossings.
See, e.g.,~\cite{Livingston} or~\cite{a-kb-94}.
Because of the constant separation between
$\{ v_0, v_5 \}$ and
$\{ v_1, v_2, v_3, v_4 \}$
by the boundary of $B$,
we could have attached
a sufficiently long unknotted string $P'$ from $v_0$ to $v_5$
exterior to $B$ that would not have hindered the straightening
of $K$.
But this would imply that $K \cup P'$ is the
trivial knot, whereas it is clearly a trefoil knot.
We have reached a contradiction; therefore, $K$ cannot be straightened.
\subsection{A Locked, Unknotted Closed Chain}
\seclab{Locked.Closed}
It is easy to obtain locked closed chains in 3D: simply tie the
polygonal chain into a knot. Convexifying such a chain would
transform it to the trivial knot, an impossibility.
More interesting for our goals is whether there exists a
locked, closed polygonal chain that is {\em unknotted},
i.e., whose topological
structure is that of the trivial knot.
We achieve this by ``doubling'' $K$: adding
vertices $v'_i$ near $v_i$ for $i=1,2,3,4$, and connecting
the whole into a chain $K^2 = (v_0,\ldots,v_5, v'_4,\ldots,v'_1)$.
See Fig.~\figref{knitting2}.
\begin{figure}[htbp]
\begin{center}
\ \psfig{figure=knitting2.eps,width=12cm}
\end{center}
\caption{$K^2$ ($K$ doubled): a locked but unknotted chain.}
\figlab{knitting2}
\end{figure}
Because $K \subset K^2$, the preceding argument applies when
the second copy of $K$ is ignored:
any convexifying motion will have the property that $v_0$ and $v_5$
remain exterior to $B$, and
$\{ v_1, v_2, v_3, v_4 \}$ remain interior to $B$ throughout
the motion.
Thus the extra copy of $K$ provides
no additional freedom of motion to $v_5$ with respect to $B$.
Consequently, we can argue as before: if $K^2$ is somehow
convexified, this motion could be used to unknot $K \cup P'$,
where $P'$ is an unknotted chain exterior to $B$ connecting $v_0$ to $v_5$.
This is impossible, therefore $K^2$ is locked.
\section{Convexifying a Planar Simple Polygon in 3D}
\seclab{StLouis}
An interesting open problem is to generalize our result from Section~\secref{Open.3D}
to convexify a general closed chain.
We show now that the special case of a closed chain
lying in a plane, i.e., a planar simple polygon,
may be convexified in 3D.
Such a polygon may be
convexified in 3D by ``flipping'' out the reflex pockets,
i.e., rotating the pocket chain into 3D and back down to the plane;
see Fig.~\figref{pocket}.
This simple procedure was suggested by Erd\H{o}s~\cite{e-p3763-35}
and proved to work by de Sz.~Nagy~\cite{sn-sp3763-39}.
The number of flips, however, cannot be bounded as a function
of the number of vertices $n$ of the polygon, as
first proved by Joss and Shannon~\cite{g-hcp-95}.
See~\cite{t-entr-99} for the complex history of these results.
\begin{figure}[htbp]
\begin{center}
\ \psfig{figure=pocket.eps,height=2.5in}
\end{center}
\caption{(a) A pocket $p$; (b) The polygon after flipping $p$.}
\figlab{pocket}
\end{figure}
We offer a new algorithm for convexifying planar closed chains,
which we call the ``St. Louis Arch'' algorithm.
It is more complicated than flipping
but uses a bounded number of moves, in fact $O(n)$ moves.
It models the intuitive approach of picking up the polygon
into 3D. We discretize this to lifting vertices one by one, accumulating
the attached links into a convex
``arch''\footnote{
We call this the {\em St.\ Louis Arch Algorithm\/}
because of the resemblance to the arch in St.\ Louis, Missouri.
}
$A$
in a
vertical plane above the remaining polygonal chain;
see Fig.~\figref{A0}.
Although the
algorithm is conceptually simple, some care is required to make it
precise, and to then establish that simplicity is maintained
throughout the motions.
Let $P$ be a simple polygon in the $xy$-plane, ${\Pi_{xy}}$.
Let ${\Pi_\epsilon}$ be the plane $z = {\epsilon}$ parallel to ${\Pi_{xy}}$, for ${\epsilon} > 0$;
the value of ${\epsilon}$ will be specified later.
We use this plane to convexify the arch safely above the
portion of the polygon not yet picked up.
We will use primes to indicate positions of moved (raised) vertices;
unprimed labels refer to the original positions.
After a generic step $i$ of the algorithm,
$P(0,i)$ has been lifted above ${\Pi_\epsilon}$ and convexified,
$v_0$ and $v_i$ have been raised to $v'_0$ and $v'_i$ on ${\Pi_\epsilon}$,
and
$P[i+1,n-1]$ remains in its original
position on ${\Pi_{xy}}$.
We first give a precise description of the conditions that
hold after the $i$th step.
Let $\Pi_z(v_i,v_j)$ be the (vertical) plane containing
$v_i$ and $v_j$,
parallel to the $z$-axis.
\begin{enumerate}
\conf{\setlength{\itemsep}{0pt}}
\item[H1:]
${\Pi_\epsilon}$ splits the vertices of $P$ into three sets: $v'_0$ and $v'_i$
lie in ${\Pi_\epsilon}$, $v'_1, \ldots, v'_{i-1}$ lie above the plane, and
$v_{i+1}, \ldots, v_{n-1}$ lie below it.
\item[H2:]
The arch $A = P(0,i)$ lies in the plane $\Pi_z(v'_0, v'_i)$, and is convex.
\item[H3:] $v'_0$ and $v'_i$ project onto ${\Pi_{xy}}$ within distance ${\delta}$ of
their original positions $v_0$ and $v_i$.
(Here, $\delta>0$ is
a constant that depends only on the input positions; it will
be specified later.)
\item[H4:]
Edges ${v_{n-1}} v'_0$ and $v'_i {v_{i+1}}$ connect between
${\Pi_{xy}}$ and ${\Pi_\epsilon}$.
\item[H5:]
$P[i+1,n-1]$ remains in its original
position in ${\Pi_{xy}}$.
\end{enumerate}
See Fig.~\figref{A0}.
\begin{figure}[htbp]
\begin{center}
\ \psfig{figure=A0.eps,width=10cm}
\end{center}
\caption{The arch $A$ after the $i$th step, i.e.,
after ``picking up'' $P(0,i)$ into $A$.
(The planes ${\Pi_{xy}}$ and ${\Pi_\epsilon}$ are not distinguished in
this figure, nor in Figs.~\protect\figref{A1} or~\protect\figref{A2}.)
}
\figlab{A0}
\end{figure}
A central aspect of the algorithm will be choosing ${\epsilon}$ small
enough to guarantee a ${\delta}$ (see H3) that maintains simplicity
throughout all movements.
The algorithm consists of an initialization step S0, followed by
repetition of steps S1--S4.
\subsection{S0}
The algorithm is initialized at $i=2$ by selecting an arbitrary
(strictly) convex vertex $v_1$,
and raising $\{ v_0, v_1, v_2 \}$ in four
steps:
\begin{enumerate}
\conf{\setlength{\itemsep}{0pt}}
\item Rotate $v_1$ about the line through $v_0 v_2$ up to ${\Pi_\epsilon}$.
Call its new position $v''_1$.
\item Rotate $v_0$ about the line through $v_{n-1} v''_1$ up to ${\Pi_\epsilon}$.
Call its new position $v'_0$.
\item Rotate $v_2$ about the line through $v''_1 v_3$ up to ${\Pi_\epsilon}$.
Call its new position $v'_2$.
\item Rotate $v''_1$ about the line through $v'_0 v'_2$ upwards
until it lies in the plane $\Pi_z(v'_0,v'_2)$.
Call its new position $v'_1$.
\end{enumerate}
So long as the joint at $v''_1$ is not straight,
the $4$th step above is unproblematic, simply rotating a triangle
from a horizontal to a vertical plane.
That this joint does not become straight
depends on ${\epsilon}$ and ${\delta}$, and will be established
under the discussion of S1 below. Ditto for establishing that
the first three steps can be accomplished without causing
self-intersection.
After completion of Step S0, the hypotheses H1--H5 are all satisfied.
The remaining steps S1--S4 are repeated for each $i > 2$.
\subsection{S1}
The purpose of Step S1 is to lift $v_{i}$ from ${\Pi_{xy}}$ to ${\Pi_\epsilon}$.
This will be accomplished by a rotation of $v_{i}$ about
the line through $v'_{i-1}$ and
$v_{i+1}$, the same rotation used
in substeps~(2) and~(3), and in a modified form in~(1), of Step S0.
Although this rotation is conceptually simple, it is this key movement
that demands a value of ${\epsilon}$ to guarantee a ${\delta}$ that ensures correctness.
The values of ${\epsilon}$ and ${\delta}$ will be computed directly from the
initial geometric structure of $P$.
Specifying the conditions on ${\epsilon}$ is one of the more delicate
aspects of our argument, to which we now turn.
Let ${\alpha}_j$ be the smaller of the two (interior and exterior)
angles at $v_j$. Also let ${\beta}_j = \pi - {\alpha}_j$, the deviation
from straightness at joint $v_j$.
We assume that $P$ has no three consecutive collinear vertices.
If a vertex is collinear with its two adjacent vertices,
we freeze and eliminate that joint.
So we may assume that ${\beta}_j > 0$ for all $j$.
\subsubsection{Determination of ${\delta}$}
\seclab{delta}
As in our earlier Figure~\figref{simple.proj}, the simplicity of $P$
guarantees ``empty'' disks around each vertex. Here we need
disks to meet more stringent conditions than used
in Section~\secref{Open.3D}.
Let ${\delta} > 0$ be such that:
\begin{enumerate}
\conf{\setlength{\itemsep}{0pt}}
\item Disks around each vertex $v_j$ of radius ${\delta}$ include no
other vertices of $P$, and only intersect the two edges incident
to $v_j$.
\item
A perturbed polygon, obtained by displacing the vertices within
the disks (ignoring the fixed link lengths),
\begin{enumerate}
\item remains simple, and
\item has no straight vertices.
\end{enumerate}
\end{enumerate}
It should be clear that the simplicity of $P$ together with ${\beta}_j > 0$
guarantees that such a ${\delta} > 0$ exists.
As a technical aside,
we sketch how ${\delta}$ could be computed.
Finding a radius that satisfies condition~(1) is easy.
Half this radius guarantees the simplicity condition~(2a),
for this keeps a maximally displaced vertex separated from a
maximally displaced edge.
To prevent an angle ${\beta}_j$ from reaching zero,
condition~(2b),
displacements of
the three points $v_{j-1}$, $v_j$, and $v_{j+1}$ must be
considered.
Let $\ell = \min_j \{ \ell_j \}$ be the length of the shortest
edge, and let ${\beta}' = \min_j \{ {\beta}_j \}$
be the minimum deviation from collinearity.
Lemma~\lemref{rho}, which we prove in the Appendix, shows that
choosing
${\delta} < \frac{1}{2} \ell \sin ({\beta}'/2)$
prevents straight vertices.
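Putting the pieces together, a concrete ${\delta}$ can be computed from the empty-disk radius of condition~(1) and the bound ${\delta} < \frac{1}{2}\ell\sin({\beta}'/2)$ above. A sketch (with a safety factor to keep the inequalities strict; the names are ours):

```python
import math

def safe_delta(empty_disk_radius, edge_lengths, betas):
    """A delta satisfying all three conditions:
    (1)/(2a): at most half the empty-disk radius, so displaced vertices
              stay separated from displaced edges;
    (2b):     strictly below (1/2) * min edge length * sin(min beta / 2),
              so no joint can become straight."""
    ell = min(edge_lengths)           # shortest edge length
    beta_min = min(betas)             # smallest deviation from straightness
    bound_2b = 0.5 * ell * math.sin(beta_min / 2.0)
    return 0.9 * min(empty_disk_radius / 2.0, bound_2b)  # strictness margin
```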
Let ${\sigma}$ be the minimum separation
$|v_j v_k|$ for all positions of $v_j$ and $v_k$ within
their ${\delta}$ disks, for all $j$ and $k$.
Condition~(2a) guarantees that ${\sigma} > 0$.
Note that ${\sigma} \le \ell$.
Let ${\beta}$ be the minimum of all ${\beta}_j$ for all positions of $v_j$ within
their ${\delta}$ disks.
Condition~(2b) guarantees that ${\beta} > 0$.
Our next task is to derive ${\epsilon}$ from ${\sigma}$, ${\beta}$, and ${\delta}$.
To this end, we must detail the ``lifting'' step of the algorithm.
\subsubsection{S1 Lifting}
\seclab{S1.lifting}
Throughout the algorithm, $v'_0$ remains fixed at the position on ${\Pi_\epsilon}$
it reached in Step S0.
During the lifting step, $v'_{i-1}$ also remains fixed, while $v_i$ is lifted.
Thus $v'_0 v'_{i-1}$, the base of the arch $A$, remains fixed during the
lifting, which permits us, by hypothesis H1, to safely ignore
the arch during this step.
We now concentrate on the $2$-link chain $(v'_{i-1}, v_i, v_{i+1})$.
By H5, $v_i v_{i+1}$ has not moved on ${\Pi_{xy}}$;
by H3, $v'_{i-1}$ has not moved horizontally more than ${\delta}$ from $v_{i-1}$.
Let ${\alpha}'_{i}$ be the measure in $[0,\pi]$ of
angle $\angle(v'_{i-1}, v_i, v_{i+1})$, i.e., the angle at $v_i$ measured
in the slanted plane determined by the three points.
Because $v_i$ and $v_{i+1}$ lie on ${\Pi_{xy}}$ and $v'_{i-1}$ is on ${\Pi_\epsilon}$,
${\alpha}'_{i} \neq \pi$ and
the chain $(v'_{i-1}, v_i, v_{i+1})$ is kinked at the joint $v_i$.
Now imagine holding $v'_{i-1}$ and $v_{i+1}$ fixed.
Then $v_i$ is free to move on a circle $C$ with center on $v'_{i-1} v_{i+1}$.
See Fig.~\figref{double.cone}.
This circle might lie partially below ${\Pi_{xy}}$,
and is tilted from the vertical (because $v'_{i-1}$ lies on ${\Pi_\epsilon}$).
\begin{figure}[htbp]
\begin{center}
\ \psfig{figure=double.cone.eps,width=10cm}
\end{center}
\caption{$v_{i}$ rotates up the circle $C$ until it hits ${\Pi_\epsilon}$.}
\figlab{double.cone}
\end{figure}
The lifting step consists simply in rotating $v_i$ on $C$ upward
until it lies on ${\Pi_\epsilon}$; its position there we call $v'_i$.
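The lifted position can be computed in closed form: $v_i$ stays on the circle of points at fixed distances from $v'_{i-1}$ and $v_{i+1}$, and we seek the point of that circle at height ${\epsilon}$. The following Python sketch uses our own conventions (points as 3-tuples, and we take the smaller of the two rotations reaching height ${\epsilon}$); it assumes the top of the circle lies above ${\Pi_\epsilon}$.

```python
import math

def lift_vertex(A, B, P, eps):
    # Rotate P about the axis through A and B (so |PA| and |PB| are preserved)
    # until P reaches height eps.  Assumes the circle's top is above eps.
    sub = lambda p, q: tuple(p[i] - q[i] for i in range(3))
    add = lambda p, q: tuple(p[i] + q[i] for i in range(3))
    dot = lambda p, q: sum(p[i] * q[i] for i in range(3))
    cross = lambda p, q: (p[1]*q[2] - p[2]*q[1],
                          p[2]*q[0] - p[0]*q[2],
                          p[0]*q[1] - p[1]*q[0])
    norm = lambda p: math.sqrt(dot(p, p))
    scl = lambda p, s: tuple(s * p[i] for i in range(3))

    u = scl(sub(B, A), 1.0 / norm(sub(B, A)))   # unit axis direction
    O = add(A, scl(u, dot(sub(P, A), u)))       # center of the circle C
    rad = sub(P, O)
    r = norm(rad)
    e1 = scl(rad, 1.0 / r)
    e2 = cross(u, e1)                           # orthonormal frame of C
    # Height along C: z(t) = O_z + r*(e1_z cos t + e2_z sin t) = O_z + R sin(t + phi).
    R = r * math.hypot(e1[2], e2[2])
    phi = math.atan2(e1[2], e2[2])
    s = math.asin(max(-1.0, min(1.0, (eps - O[2]) / R)))
    wrap = lambda t: math.atan2(math.sin(t), math.cos(t))
    t = min((wrap(c) for c in (s - phi, math.pi - s - phi)), key=abs)
    return add(O, add(scl(e1, r * math.cos(t)), scl(e2, r * math.sin(t))))
```

By construction the returned point lies on $C$, so its distances to the two fixed joints are unchanged and its height is exactly ${\epsilon}$.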
\subsubsection{Determination of ${\epsilon}$}
\seclab{Epsilon}
We now choose ${\epsilon}>0$ so that two conditions are satisfied:
\begin{enumerate}
\conf{\setlength{\itemsep}{0pt}}
\item The highest point of $C$ is above ${\Pi_\epsilon}$
(so that $v_i$ can reach ${\Pi_\epsilon}$).
\item $v'_i$ projects no more than ${\delta}$ away from $v_i$
(to satisfy H3).
\end{enumerate}
It should be clear that both goals may be achieved by choosing
${\epsilon}$ small enough.
We sketch a computation of ${\epsilon}$ in the Appendix.
The computation of ${\epsilon}$
ultimately depends solely on ${\sigma}$ and ${\beta}$---the
shortest vertex separation
and the smallest deviation from straightness---because
these determine ${\delta}$, and then $r$ and
${\delta}_1$ and ${\delta}_2$ and ${\epsilon}$.
Although we have described the computation of ${\epsilon}$ within
Step S1, in fact it can be
performed prior to starting any movements;
${\epsilon}$ then remains fixed throughout.
As we mentioned earlier, two of the three lifting rotations used
in Step~S0 match the lifting just detailed.
The exception is the first lifting, of $v_1$ to $v'_1$ in Step~S0.
This only differs in that the cone axis $v_{0} v_2$ lies
on ${\Pi_{xy}}$ rather than connecting ${\Pi_{xy}}$ to ${\Pi_\epsilon}$.
But it should be clear this only changes the above
computation in that the tilt angle $\psi$ is zero,
which only improves the inequalities. Thus the ${\epsilon}$ computed
for the general situation already suffices for this special case.
\subsubsection{Collinearity}
\seclab{Collinearity}
We mention here, for reference in the following steps,
that it is possible that $v'_i$ might be collinear with
$v'_0$ and $v'_{i-1}$ on ${\Pi_\epsilon}$.
There are two possible orderings of these three vertices
along a line:
\begin{enumerate}
\conf{\setlength{\itemsep}{0pt}}
\item $(v'_0, v'_i ,v'_{i-1})$.
\item $(v'_0, v'_{i-1}, v'_i)$.
\end{enumerate}
The ordering
$(v'_{i}, v'_0, v'_{i-1})$
is not possible
because that would violate the simplicity condition 2(a),
as all three vertices project to within ${\delta}$ of their
original positions on ${\Pi_{xy}}$, and no vertex comes within
$\delta$ of an edge.
Despite this possible degeneracy, we will refer to ``the
triangle $\triangle v'_0 v'_{i-1} v'_i$,'' with the understanding that
it may be degenerate.
This possibility will be dealt with in Lemma~\lemref{strict}.
We now turn to the remaining three steps of the algorithm for
iteration $i$.
We use the
notation $A^{(k)}$ to represent
the arch $A=A^{(0)}$ at various stages of its processing, incrementing
$k$ whenever the shape of the arch might change.
\subsection{S2}
After the completion of Step~S1, $v'_{i-1} v'_i$ lies in ${\Pi_\epsilon}$.
We now rotate the arch $A^{(0)}$ into the plane ${\Pi_\epsilon}$,
rotating about its base $v'_0 v'_{i-1}$,
away from $v'_{i-1} v'_i$. This guarantees that
$A^{(1)} = A^{(0)} \cup \triangle v'_0 v'_{i-1} v'_i$ is a planar
weakly-simple polygon.
Moreover, while $\triangle v'_0 v'_{i-1} v'_i$
may be degenerate,
the chain
$(v'_0 ,\ldots, v'_i)$ lies strictly to one side of the line
through $(v'_0,v'_{i-1})$ and so is simple.
See Fig.~\figref{A1}.
\begin{figure}[htbp]
\begin{center}
\ \psfig{figure=A1.eps,width=10cm}
\end{center}
\caption{$A^{(1)} = A^{(0)} \cup \triangle v'_0 v'_{i-1} v'_i$ lies
in the plane ${\Pi_\epsilon}$ just slightly above ${\Pi_{xy}}$.
}
\figlab{A1}
\end{figure}
\subsection{S3}
Now that $A^{(1)}$ lies in its ``own'' plane ${\Pi_\epsilon}$,
it may be convexified without worry about intersections
with the remaining polygon $P[i+1,n-1]$ in ${\Pi_{xy}}$.
The polygon $A^{(1)}$ is a ``barbed polygon'': one
that is a union of a convex polygon ($A^{(0)}$) and a triangle
($\triangle v'_0 v'_{i-1} v'_i$).
We establish in Theorem~\theoref{barb.strict}
that $A^{(1)}$ may be convexified in such a way
that neither $v'_0$ nor $v'_i$ move, and
$v'_0$ and $v'_i$ end up strictly convex vertices of
the resulting convex polygon $A^{(2)}$.
\subsection{S4}
We next rotate $A^{(2)}$ up into the vertical plane $\Pi_z(v'_0,v'_{i})$.
Because of strict convexity at $v'_0$ and $v'_i$, the arch stays
above ${\Pi_\epsilon}$.
See Fig.~\figref{A2}.
\begin{figure}[htbp]
\begin{center}
\ \psfig{figure=A2.eps,width=10cm}
\end{center}
\caption{$A^{(2)}$, which has incorporated the
edge $v_{i-1} v'_i$ of $P$, is rotated up into the plane $\Pi_z(v'_0,v'_{i})$.
}
\figlab{A2}
\end{figure}
We have now reestablished the induction hypothesis conditions
H1--H5.
After the penultimate step, for $i=n{-}2$,
only $v_{n-1}$ lies on ${\Pi_{xy}}$, and
the final execution of the lifting Step S1 rotates $v_{n-1}$
about $v'_0 v'_{n-2}$ to raise it to ${\Pi_\epsilon}$.
A final execution of Steps S2 and S3 yields a convex polygon.
Thus, assuming Theorem~\theoref{barb.strict}
in Section~\secref{barbed} below,
we have established the correctness of the algorithm:
\begin{theorem}
The ``St.~Louis Arch'' Algorithm convexifies a planar simple polygon
of $n$ vertices.
\theolab{StLouis}
\end{theorem}
We will analyze its complexity in Section~\secref{Complexity.StLouis}.
We now return to Step S3, convexifying a barbed polygon.
We perform the convexification entirely within the plane ${\Pi_\epsilon}$.
We found two strategies for this task.
One maintains $A$ as a convex quadrilateral,
and the goal of Step S3 can be achieved by convexifying the
(nonconvex) pentagon $A^{(1)}$, and then reducing it to a
convex quadrilateral.
Although this approach is possible, we found it somewhat easier
to leave $A$ as a convex $(i{+}1)$-gon,
and prove that
$A^{(1)} = A^{(0)} \cup \triangle v'_0 v'_{i-1} v'_i$ can be convexified.
This is the strategy we pursue in the next two sections.
Section~\secref{quad} concentrates on the base case,
convexifying a quadrilateral, and Section~\secref{barbed}
achieves Theorem~\theoref{barb.strict}, the final piece needed to
complete Step S3.
\subsection{Convexifying Quadrilaterals}
\seclab{quad}
It will come as no surprise that every planar, nonconvex quadrilateral
can be convexified. Indeed, recent
work has shown that any star-shaped polygon may be
convexified~\cite{elrsw-cssp-98},
and this implies the result for quadrilaterals.
However, because we need several variations on basic quadrilateral
convexification, we choose to develop our results independently,
although relegating some details to the Appendix.
Let $Q = (v_0,v_1,v_2,v_3)$ be a weakly simple, nonconvex quadrilateral,
with $v_2$ the reflex vertex.
By {\em weakly simple\/} we mean that either $Q$ is simple,
or $v_2$ lies in the relative interior of one of the edges
incident to $v_0$
(i.e., no two of $Q$'s edges properly cross).
This latter violation of simplicity is permitted so that we can
handle a collapsed triangle inherited
from step S1 of the arch algorithm
(Section~\secref{Collinearity}).
As before, let ${\alpha}_i$ be the smaller of the two (interior and exterior)
angles at $v_i$.
Call a joint $v_i$ {\em straightened\/} if ${\alpha}_i = \pi$,
and {\em collapsed\/} if ${\alpha}_i = 0$.
All motions throughout this section (\secref{quad}) and the next
(\secref{barbed}) are in 2D.
We will convexify $Q$ with
one motion $M$, whose intuition is as follows;
see Fig.~\figref{quad.0}.
Think of the two links adjacent to the
reflex vertex $v_2$ as constituting a rope.
$M$ then opens the joint at $v_0$ until the rope becomes taut.
Because the rope is shorter than the sum of the lengths of the other
two links, it becomes taut prior to any other ``event.''
Any motion $M$ that transforms a shape such as $Q$ can take on rather
different appearances when different parts of $Q$ are fixed in
the plane, providing different frames of reference for the motion.
Although all such fixings represent the same intrinsic
shape transformation $M$, when convenient we distinguish two fixings:
$M_{02}$, which fixes the line $L$ containing $v_0 v_2$,
and $M_{03}$, which fixes the line containing $v_0 v_3$.
The convexification motion $M$ is easiest to see when
viewed as motion
$M_{02}$. Here the two $2$-link chains
$(v_0, v_1, v_2)$ and $(v_0, v_3, v_2)$ perform a
{\em line-tracking\/} motion~\cite{lw-rltm-92}:
fix $v_0$, and move $v_2$ away from $v_0$
along the fixed directed line $L$ containing
$v_0 v_2$, until $v_2$ straightens.
\begin{lemma}
A weakly simple quadrilateral $Q$ nonconvex at $v_2$ may be convexified
by motion $M_{02}$, which
straightens the reflex joint $v_2$, thereby converting $Q$
to a triangle $T$.
Throughout the motion, all four angles ${\alpha}_i$ increase only,
and remain within $(0,\pi)$ until ${\alpha}_2=\pi$.
{\em See Fig.~\figref{quad.0}a.}
\lemlab{quad.M02}
\end{lemma}
Although this lemma is intuitively obvious, and implicit
in work on linkages (e.g., \cite{gn-ogp4bm-86}),
we have not found an explicit statement of it in the literature,
and we therefore present a proof in the Appendix
(Lemma~\lemref{linetrack-theorem}).
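The monotone opening of all four joints can be checked numerically: with $v_0$ fixed and $v_2$ moving out along $L$, every angle of $Q$ is determined by the diagonal length $d = |v_0 v_2|$ via the law of cosines. The following Python sketch is our own parametrization (the sample edge lengths are assumptions chosen so that $Q$ starts nonconvex at $v_2$).

```python
import math

def quad_angles(l, d):
    """Angles alpha_i of Q = (v0, v1, v2, v3) as a function of the diagonal
    d = |v0 v2|; l = (l0, l1, l2, l3) are the edge lengths
    |v0 v1|, |v1 v2|, |v2 v3|, |v3 v0|.  v1 and v3 sit on opposite sides of
    the diagonal, and each angle follows from the law of cosines."""
    l0, l1, l2, l3 = l
    acos = lambda x: math.acos(max(-1.0, min(1.0, x)))
    a1 = acos((l0 * l0 + l1 * l1 - d * d) / (2 * l0 * l1))
    a3 = acos((l2 * l2 + l3 * l3 - d * d) / (2 * l2 * l3))
    a0 = acos((l0 * l0 + d * d - l1 * l1) / (2 * l0 * d)) \
       + acos((l3 * l3 + d * d - l2 * l2) / (2 * l3 * d))
    interior2 = acos((l1 * l1 + d * d - l0 * l0) / (2 * l1 * d)) \
              + acos((l2 * l2 + d * d - l3 * l3) / (2 * l2 * d))
    a2 = min(interior2, 2 * math.pi - interior2)  # smaller of interior/exterior
    return a0, a1, a2, a3
```

Sampling $d$ over the motion for lengths $(2, 1.5, 1.5, 2)$, all four angles increase until ${\alpha}_2$ reaches $\pi$, consistent with Lemma~\lemref{quad.M02}.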
\begin{figure}[htbp]
\begin{center}
\ \psfig{figure=quad.0.eps,height=5in}
\end{center}
\caption{(a) Convexifying a quadrilateral by $M_{02}$: moving $v_2$ out the
$v_0 v_2$ diagonal;
(b) The same motion viewed as $M_{03}$:
opening ${\alpha}_0$ with $v_0 v_3$ fixed.
}
\figlab{quad.0}
\end{figure}
We note that the same motion convexifies a degenerate quadrilateral,
where the triangle $\triangle v_0 v_1 v_2$ has zero area with
$v_2$ lying on the edge $v_0 v_1$.
See Fig.~\figref{quad.degen}.
As long as we open ${\alpha}_2$ in the direction that makes the
quadrilateral simple, as illustrated, the proof of Lemma~\lemref{quad.M02}
carries through.
\begin{figure}[htbp]
\begin{center}
\ \psfig{figure=quad.degen.eps,height=4cm}
\end{center}
\caption{Motion $M_{02}$ also convexifies a weakly simple quadrilateral.}
\figlab{quad.degen}
\end{figure}
The motion $M_{02}$ used in Lemma~\lemref{quad.M02} is equivalent to
the motion $M_{03}$ obtained by
fixing $v_0 v_3$ and opening ${\alpha}_0$ by
rotating $v_1$ clockwise (cw) around the circle
of radius $\ell_0$ centered on $v_0$.
Throughout this motion, the polygon stays right of the fixed edge $v_0 v_3$.
See Fig.~\figref{quad.0}b.
This yields the following
easy corollary of
Lemma~\lemref{quad.M02}:
\begin{lemma}
Let $P = Q \cup P'$ be a polygon
obtained by gluing
edge $v_0 v_3$ of a weakly simple quadrilateral $Q$ nonconvex at $v_2$,
to an equal-length edge of a convex polygon $P'$, such that
$Q$ and $P'$ are on opposite sides of the diagonal $v_0 v_3$.
Then applying the motion $M_{03}$ to $Q$ while keeping $P'$ fixed
maintains simplicity of $P$ throughout.
\lemlab{quad.halfplane}
\end{lemma}
\subsubsection{Strict Convexity}
Motion $M$ converts a nonconvex quadrilateral into a triangle,
but we will need to convert it to a strictly convex
quadrilateral. This can always be achieved by continuing $M_{02}$
beyond the straightening of ${\alpha}_2$.
\begin{lemma}
Let $Q = (v_0,v_1,v_2,v_3)$ be a quadrilateral,
with $(v_1,v_2,v_3)$
collinear so that ${\alpha}_2=\pi$, and such that $\triangle v_0 v_1 v_3$
is nondegenerate.
As in Lemma~\lemref{quad.halfplane},
let $P = Q \cup P'$ be a convex polygon obtained by gluing $P'$
to edge $v_0 v_3$ of $Q$, with $v_0$ and $v_3$
strictly convex vertices of $P$.
The motion $M_{02}$ (moving $v_2$ along the line determined by $v_0 v_2$)
transforms $Q$ to a strictly convex quadrilateral $Q'$
such that $Q' \cup P'$ remains a convex polygon.
{\em (See Fig.~\figref{quad.strict}.)}
\lemlab{quad.strict}
\end{lemma}
\begin{figure}[htbp]
\begin{center}
\ \psfig{figure=quad.strict.eps,height=2in}
\end{center}
\caption{Converting $Q$ to the strictly convex quadrilateral $Q'$
via $M_{02}$.
Attachment $P'$ is carried along rigidly.
}
\figlab{quad.strict}
\end{figure}
\begin{pf}
Because $v_0$ and $v_3$ are strictly convex vertices, and $v_1$
must be strictly convex because $\triangle v_0 v_1 v_3$ is nondegenerate,
all the interior angles at these
vertices are bounded away from $\pi$. By assumption, they
are also bounded away from $0$. Thus there is some freedom
of motion for $v_2$ along the line determined by $v_0 v_2$
before the next event, when one of these angles reaches $0$ or $\pi$.
\end{pf}
A lower bound on ${\beta}' = \pi - {\alpha}'_2$, the amount that $v_2$
can be bent before an event is reached, could be computed
explicitly in $O(1)$ time
from the local geometry of $Q \cup P'$, but we will not
do so here.
\subsection{Convexifying Barbed Polygons}
\seclab{barbed}
Call a polygon $P$ {\em barbed\/} if removal of one ear $\triangle abc$
leaves a convex polygon $P'$.
$\triangle abc$ is called the {\em barb\/} of $P$.
Note that either or both of vertices $a$ and $c$ may be reflex vertices of $P$.
In order to permit $\triangle abc$ to be degenerate (of zero area),
we extend the definition as follows.
A weakly simple polygon
(Section~\secref{quad}, Figure~\figref{quad.degen})
is {\em barbed\/} if, for three consecutive vertices $a$, $b$, $c$,
deletion of $b$ (i.e., removal of the possibly degenerate $\triangle abc$)
leaves a simple convex polygon $P'$.
Note this definition only permits weak simplicity at the barb $\triangle abc$.
The following lemma (for simple barbed polygons) is
implicit in~\cite{s-scsc-73},
and explicit (for star-shaped polygons, which includes barbed
polygons) in~\cite{elrss-cssp-98},
but we will need to subsequently
extend it, so we provide our own proof.
\begin{lemma}
A weakly simple barbed polygon may be convexified, with $O(n)$ moves.
\lemlab{barb}
\end{lemma}
\begin{pf}
Let $P = (v_0, v_1, \ldots, v_{n-1})$, with
$\triangle v_0 v_{n-2} v_{n-1}$ the barb.
See Fig.~\figref{barb}.
\begin{figure}[htbp]
\begin{center}
\ \psfig{figure=barb.eps,height=12cm}
\end{center}
\caption{(a) A barbed polygon with barb $\triangle v_0 v_{n-2} v_{n-1}$.
The nonconvex quadrilateral $Q$ is transformed to $T$, resulting
in a new barbed polygon $T \cup P''$.
(b) and (c) show the remaining convexification steps.}
\figlab{barb}
\end{figure}
The proof is by induction.
Lemma~\lemref{quad.M02} establishes the base case, $n=4$,
for every quadrilateral is a barbed polygon.
So assume the lemma holds for all barbed polygons of up to $n-1$
vertices.
If both $v_0$ and $v_{n-2}$ are convex, $P$ is already convex and we
are finished.
So assume that $P$ is nonconvex, and without loss of generality
let $v_0$ be reflex in $P$.
It must be that $v_1 v_{n-2}$ is a diagonal, as it lies within
the convex portion of $P$.
Let $Q = (v_0, v_1, v_{n-2}, v_{n-1})$ be the quadrilateral cut off
by diagonal $v_1 v_{n-2}$,
and let $P'' = (v_1,\ldots,v_{n-2})$ be the remaining portion of
$P$, so that $P = Q \cup P''$.
$Q$ is nonconvex at $v_0$.
Lemma~\lemref{quad.halfplane} shows that motion $M$ (appropriately
relabeled) may be applied to convert $Q$
to a triangle $T$ by straightening $v_0$,
leaving $P''$ unaffected.
At the end of this motion, we have reduced $P$ to a polygon $P'$
of one fewer vertex.
Now note that $T$ is a barb for $P'$ (because $P''$ is convex):
$P' = T \cup P''$.
Apply the induction hypothesis to $P'$.
The result is a convexification of $P$.
Each reduction uses one move $M$, and so $O(n)$ moves suffice for $P$.
\end{pf}
Note that although each step of the convexification straightens
one reflex vertex, it may also introduce a new reflex vertex:
$v_1$ is convex in Fig.~\figref{barb}a but reflex in
Fig.~\figref{barb}b. We could make the procedure more
efficient by ``freezing'' any joint as soon as it straightens,
but it suffices for our analysis to freeze each straightened
reflex vertex, thenceforth treating the segment on which it lies
as a single rigid link.
As is evident in
Fig.~\figref{barb}c, the convexification leaves a polygon
with several vertices straightened. One of the edges $e$ of the
barbed polygon is the base of the arch $A$ from Section~\secref{S1.lifting}.
If either of $e$'s endpoints is straightened, then part
of the arch will lie directly in the plane ${\Pi_\epsilon}$, and could
cause a simplicity violation during the S1 lifting step.
Therefore we must ensure that both of $e$'s endpoints are
strictly convex:
\begin{lemma}
Any convex polygon with a distinguished edge $e$ can be
reconfigured so that
both endpoints of $e$
become strictly convex vertices.
\lemlab{strict}
\end{lemma}
\begin{pf}
Suppose the counterclockwise endpoint $v_2$ of $e$
has internal angle ${\alpha}=\pi$;
see Fig.~\figref{barb.strict}.
Let $v_1$ be the next strictly convex vertex in clockwise order before $v_2$
(it may be that $v_1$ is the other endpoint of $e$),
and $v_3, v_0$ be the next two strictly
convex vertices adjacent to
$v_2$ counterclockwise.
Let $Q=(v_0,v_1,v_2,v_3)$.
\begin{figure}[htbp]
\begin{center}
\ \psfig{figure=barb.strict.eps,width=8cm}
\end{center}
\caption{Making one endpoint of $e$ strictly convex.}
\figlab{barb.strict}
\end{figure}
Then apply Lemma~\lemref{quad.strict} to $Q$ to make $v_2$ strictly convex
via motion $M_{02}$.
Apply the same procedure to the other endpoint of $e$
if necessary.
\end{pf}
Using Lemma~\lemref{barb} to convexify the barbed polygon arch,
and Lemma~\lemref{strict} to make its base endpoints strictly
convex, yields:
\begin{theorem}
A weakly simple barbed polygon may be convexified in such a
way that the endpoints of a distinguished edge are strictly convex.
\theolab{barb.strict}
\end{theorem}
This completes the description of the St.\ Louis Arch Algorithm,
as $A^{(1)} = A^{(0)} \cup \triangle v'_0 v'_{i-1} v'_i$ is a barbed
polygon, and Step S4 may proceed because of the strict convexity
at the arch base endpoints.
\subsection{Complexity of St.\ Louis Arch Algorithm}
\seclab{Complexity.StLouis}
It is not difficult to see that only a constant number of moves
are used in steps S0, S1, S2, and S4.
Step S3 is the only exception; as we have seen
in Lemma~\lemref{barb},
it can be executed in $O(n)$ moves.
So the resulting procedure can be accomplished in $O(n^2)$ moves.
The algorithm actually only uses $O(n)$ moves, as the following
amortization argument shows:
\begin{lemma}
The St.\ Louis Arch Algorithm runs in $O(n)$ time and
uses $O(n)$ moves.
\lemlab{amort}
\end{lemma}
\begin{pf}
Each barb convexification move used in the proof of Lemma~\lemref{barb}
constitutes a single move according to the definition in
Section~\secref{Complexity}, as four joints open monotonically
(cf.~Lemma~\lemref{quad.M02}).
Each such convexification move necessarily straightens one reflex
joint, which is subsequently ``frozen.''
The number of such freezings is at most $n$ over the life of
the algorithm. So although any one barbed polygon might
require $\Omega(n)$ moves to convexify, the convexifications
over all $n$ steps of the algorithm use only $O(n)$ moves.
Making the base endpoint angles strictly convex requires
at most two moves per step, again $O(n)$ overall.
Each step of the algorithm can be executed in constant time,
leading to a time complexity of $O(n)$.
Again we must consider computation of the minimum distances
around each vertex to obtain ${\delta}$ (Section~\secref{delta}),
but we can employ the same medial axis technique used
in Section~\secref{Open.3D} to compute these distances
in $O(n)$ time.
\end{pf}
Note that at most four joints rotate at any one time,
in the barb convexification step.
\section{Open problems}
\seclab{Open}
Although we have mapped out some basic distinctions between
locked and unlocked chains in three dimensions, our results leave many
aspects unresolved:
\begin{enumerate}
\conf{\setlength{\itemsep}{0pt}}
\item What is the complexity of deciding whether a chain in 3D can be unfolded?
\item Theorem~\theoref{simple.proj} only covers
chains with simple orthogonal projections.
Extension to
perspective (central) projections, or
other types of projection,
seems possible.
\item
Can a closed chain with a simple projection always be convexified?
None of the algorithms presented in this paper seem to settle
this case.
\item
Find unfolding algorithms that minimize the number
of simultaneous
joint rotations.
Our quadrilateral convexification procedure,
for example, moves four joints at once, whereas pocket flipping
moves only two at once.
\item Can an open chain of unit-length links lock in 3D?
Cantarella and Johnson show in~\cite{cj-nepiu-98}
that the answer is {\sc no} if $n \le 5$.
\end{enumerate}
\subsection*{Acknowledgements}
We thank
W.~Lenhart for co-suggesting the knitting needles
example in Fig.~\figref{knitting},
J.~Erickson for the amortization argument that reduced the time
complexity in Lemma~\lemref{amort} to $O(n)$,
and H.~Everett for useful comments.
https://arxiv.org/abs/1904.08002 | Quotients of Hurwitz Primes
\section{\baselineskip=17pt}
\title{Quotients of Hurwitz Primes}
\author{Minghao Pan}
\address{Department of Mathematics, University of California, Los Angeles, CA 90095, United States}
\email{minghaopan@g.ucla.edu}
\author{Wentao Zhang}
\address{Shenzhen Middle School, No.18 Shenzhong Street, Luohu, Shenzhen, Guangdong, 518001, China}
\email{wtzhang@shenzhong.net}
\date{\today}
\begin{document}
\maketitle
\begin{abstract}
Quotient sets have attracted the attention of mathematicians in the past three decades. The set of quotients of primes is dense in the positive real numbers, and the set of all quotients of Gaussian primes is also dense in the complex plane. Sittinger has proved that the set of quotients of primes in an imaginary quadratic ring is dense in the complex plane and the set of quotients of primes in a real quadratic number ring is dense in $\mathbb{R}.$ An interesting open question was posed by Sittinger: Is the set of quotients of Hurwitz primes dense in the quaternions? In this paper, we answer the question and prove that the set of all quotients of Hurwitz primes is dense in the quaternions.
\end{abstract}
\section{Introduction}
Quotient sets like $\{p/q:p,q\mathrm{\ are\ primes}\}$ have attracted the attention of mathematicians in the past three decades. It has been proved (or observed) many times that the set $\{p/q:p,q\mathrm{\ are\ primes}\}$ is dense in the positive real numbers (e.g., \cite[Exercise 218]{de}, \cite[Corollary 5]{gs}, \cite[Theorem 4]{ho}). In 2013, Garcia
\cite{ga1} considered the set of all quotients of Gaussian primes and proved that it is dense in the complex plane. Later Garcia and Luca \cite{ga2} proved that the set of quotients of nonzero Fibonacci numbers is dense in the $p$-adic numbers
for every prime $p$. Sanna \cite{sa} generalized Garcia and Luca's result and proved that for any integer $k\ge2$ and any prime number $p$, the set of quotients of nonzero
$k$-generalized Fibonacci numbers is dense in the $p$-adic numbers.
Recently, Sittinger \cite{si} proved that the set of quotients of primes in an imaginary quadratic ring is dense in the complex plane and the set of quotients of primes in a real quadratic number ring
is dense in $\mathbb{R}.$ Sittinger also asked an interesting open question in his paper: Is the set of quotients of Hurwitz primes dense in the quaternions (see Section 2 for the definitions)? In this paper, we answer Sittinger's question and prove the following theorem.
\begin{theorem}\label{thm1}
The set of all quotients of Hurwitz primes is dense in the quaternions.
\end{theorem}
\begin{remark}
As the multiplication of quaternions is not commutative, for any two non-zero quaternions $\mathfrak{a},\mathfrak{b}$, their quotient can be defined either by $\frac{\mathfrak{b}}{\mathfrak{a}}=\mathfrak{b}\mathfrak{a}^{-1}$ or by $\frac{\mathfrak{b}}{\mathfrak{a}}=\mathfrak{a}^{-1}\mathfrak{b}$, and Theorem \ref{thm1} holds in both cases. In this paper, we only prove the former case; the proof of the latter case is very similar, with obvious modifications.
\end{remark}
\begin{remark}
In fact we prove a slightly stronger result than Theorem \ref{thm1}. Hurwitz quaternions can be divided into two disjoint subsets (see Section 2 for the notation and definitions)
$$
H_1=\left\{{x_1+x_2i+x_3j+x_4k}:x_1,x_2,x_3,x_4\in\mathbb{Z}\right\}
$$
and
$$
H_2=\left\{{x_1+x_2i+x_3j+x_4k}:x_1,x_2,x_3,x_4\in\mathbb{Z}+\frac12\right\}.
$$
Our proof indicates that the set
$$
\left\{\frac{\mathfrak{p}}{\mathfrak{q}}:{\mathfrak{p}}\ \textrm{and}\ {\mathfrak{q}}\ \textrm{are Hurwitz primes in } H_1\right\}
$$
is dense in the quaternions. Moreover, for any two Hurwitz primes ${\mathfrak{p}},{\mathfrak{q}}\in H_1$ with odd norms, it is easy to see that $\mathfrak{pu},\mathfrak{qu}$ are Hurwitz primes belonging to $H_2$ and $\frac{\mathfrak{pu}}{\mathfrak{qu}}=(\mathfrak{pu})(\mathfrak{qu})^{-1}=\frac{\mathfrak{p}}{\mathfrak{q}}$, where $\mathfrak{u}=\frac{1+i+j+k}{2}$ is a unit in Hurwitz quaternions. Therefore, the set
$$
\left\{\frac{\mathfrak{p}}{\mathfrak{q}}:{\mathfrak{p}}\ \textrm{and}\ {\mathfrak{q}}\ \textrm{are Hurwitz primes in } H_2\right\}
$$
is also dense in the quaternions.
\end{remark}
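The identity used in the remark can be verified exactly with rational arithmetic. The following Python sketch is ours (the sample primes $1+i+j$ and $1+i+k$, both of norm $3$, are our choices); it uses the standard product formula displayed in Section 2.

```python
from fractions import Fraction

def qmul(a, b):
    # Quaternion x1 + x2 i + x3 j + x4 k encoded as the tuple (x1, x2, x3, x4).
    a1, a2, a3, a4 = a
    b1, b2, b3, b4 = b
    return (a1*b1 - a2*b2 - a3*b3 - a4*b4,
            a1*b2 + a2*b1 + a3*b4 - a4*b3,
            a1*b3 + a3*b1 + a4*b2 - a2*b4,
            a1*b4 + a4*b1 + a2*b3 - a3*b2)

def qnorm(a):
    return sum(c * c for c in a)

def qinv(a):
    n = qnorm(a)
    return tuple(c / n for c in (a[0], -a[1], -a[2], -a[3]))  # conjugate / norm

half = Fraction(1, 2)
u = (half, half, half, half)                                # the unit (1+i+j+k)/2
p = (Fraction(1), Fraction(1), Fraction(1), Fraction(0))    # norm 3, in H_1
q = (Fraction(1), Fraction(1), Fraction(0), Fraction(1))    # norm 3, in H_1

assert qnorm(u) == 1
pu, qu = qmul(p, u), qmul(q, u)
assert all(c.denominator == 2 for c in pu)  # pu lies in H_2
assert all(c.denominator == 2 for c in qu)  # qu lies in H_2
assert qmul(pu, qinv(qu)) == qmul(p, qinv(q))  # (pu)(qu)^{-1} = p q^{-1}
```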
\section{Hurwitz quaternions}
In this section, we introduce some properties of quaternions and most of the materials can be found in \cite{co}.
The quaternions were discovered by the Irish mathematician Hamilton in 1843. They have been widely used in electrodynamics, general relativity, navigation, satellite attitude control, and other fields.
\begin{definition}
The set of quaternions is defined as
$$
Q=\left\{x_1+x_2i+x_3j+x_4k:x_1,x_2,x_3,x_4\in\mathbb{R}\right\}
$$
where $i,j,k$ commute with every real number and satisfy
\begin{equation*}\label{ijk}
ijk=i^2=j^2=k^2=-1.
\end{equation*}
\end{definition}
Let $\mathfrak{a}=a_1+a_2i+a_3j+a_4k$ and $\mathfrak{b}=b_1+b_2i+b_3j+b_4k$ be any two quaternions. The addition of quaternions is defined by
$$
\mathfrak{a}+\mathfrak{b}=a_1+b_1+(a_2+b_2)i+(a_3+b_3)j+(a_4+b_4)k.
$$
For any real number $\lambda$, the scalar multiplication is defined by
$$
\lambda\mathfrak{a}=\lambda a_1+\lambda a_2i+\lambda a_3j+\lambda a_4k.
$$
Then the quaternions form a vector space with these two operations. Moreover, we can define the multiplication of quaternions by
\begin{align*}
\mathfrak{a}\mathfrak{b}
=&(a_1b_1-a_2b_2-a_3b_3-a_4b_4)+(a_1b_2+a_2b_1+a_3b_4-a_4b_3)i\\
&+(a_1b_3+a_3b_1+a_4b_2-a_2b_4)j+(a_1b_4+a_4b_1+a_2b_3-a_3b_2)k.
\end{align*}
Clearly, we have
$$
ij=k\mathrm{\ and\ }ji=-k
$$
so the multiplication of quaternions is not commutative.
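A quick computational check of the product formula (a sketch of ours, with quaternions encoded as 4-tuples of coefficients):

```python
# Quaternion x1 + x2 i + x3 j + x4 k encoded as the tuple (x1, x2, x3, x4).
def qmul(a, b):
    a1, a2, a3, a4 = a
    b1, b2, b3, b4 = b
    return (a1*b1 - a2*b2 - a3*b3 - a4*b4,
            a1*b2 + a2*b1 + a3*b4 - a4*b3,
            a1*b3 + a3*b1 + a4*b2 - a2*b4,
            a1*b4 + a4*b1 + a2*b3 - a3*b2)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
assert qmul(i, j) == k and qmul(j, i) == (0, 0, 0, -1)     # ij = k, ji = -k
assert qmul(i, i) == qmul(j, j) == qmul(k, k) == (-1, 0, 0, 0)
assert qmul(qmul(i, j), k) == (-1, 0, 0, 0)                # ijk = -1
```

The same encoding also exhibits the multiplicativity of the norm, $\|\mathfrak{a}\mathfrak{b}\|=\|\mathfrak{a}\|\,\|\mathfrak{b}\|$.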
For any $\mathfrak{a}=a_1+a_2i+a_3j+a_4k\in Q$, $\overline{\mathfrak{a}}=a_1-a_2i-a_3j-a_4k$ is called the conjugate of $\mathfrak{a}$. It is easy to see that
$$
\mathfrak{a}\overline{\mathfrak{a}}=a_1^2+a_2^2+a_3^2+a_4^2.
$$
\begin{definition}
For any $\mathfrak{a}=a_1+a_2i+a_3j+a_4k\in Q$, its norm is defined by
$$
\|\mathfrak{a}\|=\mathfrak{a}\overline{\mathfrak{a}}=a_1^2+a_2^2+a_3^2+a_4^2.
$$
\end{definition}
The norm induces a metric $d(\mathfrak{a},\mathfrak{b})=|\mathfrak{a}-\mathfrak{b}|$ on the quaternions by
$$
|\mathfrak{a}-\mathfrak{b}|=\sqrt{\|\mathfrak{a}-\mathfrak{b}\|}
$$
and the quaternions form a metric space.
\begin{definition}
A subset $D$ of quaternions is said to be dense in the quaternions if for any quaternion $\mathfrak{a}$ and any $\varepsilon>0$, there exists a quaternion $\mathfrak{b}\in D$ such that
$$
|\mathfrak{a}-\mathfrak{b}|<\varepsilon.
$$
\end{definition}
\begin{definition}
For any $\mathfrak{a}=a_1+a_2i+a_3j+a_4k\in Q$ and ${\|\mathfrak{a}\|}\ne 0$, its inverse is defined by
$$
\mathfrak{a}^{-1}=\frac{\overline{\mathfrak{a}}}{\|\mathfrak{a}\|}=\frac{a_1-a_2i-a_3j-a_4k}{\|\mathfrak{a}\|}.
$$
\end{definition}
In this paper, the quotient of two quaternions is defined by $$\frac{\mathfrak{b}}{\mathfrak{a}}=\mathfrak{b}\mathfrak{a}^{-1}.$$
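These definitions can be exercised with exact rational arithmetic; the sketch below (ours, for integer-coefficient quaternions) verifies that $\mathfrak{a}\mathfrak{a}^{-1}=1$ and that the quotient satisfies $\left(\frac{\mathfrak{b}}{\mathfrak{a}}\right)\mathfrak{a}=\mathfrak{b}$.

```python
from fractions import Fraction

def qmul(a, b):
    # Product formula for x1 + x2 i + x3 j + x4 k as a 4-tuple.
    a1, a2, a3, a4 = a
    b1, b2, b3, b4 = b
    return (a1*b1 - a2*b2 - a3*b3 - a4*b4,
            a1*b2 + a2*b1 + a3*b4 - a4*b3,
            a1*b3 + a3*b1 + a4*b2 - a2*b4,
            a1*b4 + a4*b1 + a2*b3 - a3*b2)

def qnorm(a):
    return sum(c * c for c in a)

def qinv(a):
    # Inverse: conjugate divided by the norm (requires integer coefficients here).
    n = qnorm(a)
    return (Fraction(a[0], n), Fraction(-a[1], n),
            Fraction(-a[2], n), Fraction(-a[3], n))

def quotient(b, a):
    return qmul(b, qinv(a))   # b / a = b * a^{-1}
```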
One interesting subset of quaternions is the set of Hurwitz quaternions, which was introduced by Hurwitz in 1919.
\begin{definition}
The set of Hurwitz quaternions $H$ is a subset of quaternions, defined as
$$
H=\left\{{x_1+x_2i+x_3j+x_4k}:x_1,x_2,x_3,x_4\in\mathbb{Z}\mathrm{\ or\ }x_1,x_2,x_3,x_4\in\mathbb{Z}+\frac12\right\}.
$$
We say that $\mathfrak{a}$ is a unit in $H$ if $\|\mathfrak{a}\|=1$.
\end{definition}
It is easy to see that for any $\mathfrak{a}\in H$, $\|\mathfrak{a}\|\in\mathbb{Z}$. Moreover, we have the following result.
\begin{lemma}\label{lemma 1}
Let $n$ be any positive integer. Then the number of Hurwitz quaternions with norm $n$ is $$24\sum\limits_{d|n\atop 2\nmid d}d.$$
\end{lemma}
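Lemma \ref{lemma 1} can be checked by brute force for small $n$: writing $x_i = y_i/2$, a Hurwitz quaternion of norm $n$ corresponds to an integer vector $(y_1,\ldots,y_4)$ with all components of equal parity and $y_1^2+\cdots+y_4^2=4n$. A sketch of ours:

```python
import math

def hurwitz_count(n):
    """Count Hurwitz quaternions of norm n by brute force: x_i = y_i / 2 with
    the y_i integers of equal parity and y1^2 + ... + y4^2 = 4n."""
    target = 4 * n
    m = math.isqrt(target)
    count = 0
    rng = range(-m, m + 1)
    for y1 in rng:
        for y2 in rng:
            for y3 in rng:
                for y4 in rng:
                    if (y1 * y1 + y2 * y2 + y3 * y3 + y4 * y4 == target
                            and y1 % 2 == y2 % 2 == y3 % 2 == y4 % 2):
                        count += 1
    return count

def lemma_count(n):
    """24 times the sum of the odd divisors of n, as in Lemma 1."""
    return 24 * sum(d for d in range(1, n + 1) if n % d == 0 and d % 2 == 1)
```

For example, both functions return $24$ for $n=1$ (the units) and $96$ for $n=3$.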
\begin{definition}
We say that $\mathfrak{p}\in H$ is a Hurwitz prime if $\mathfrak{p}$ is not zero or a unit and is not a product of non-units in $H$.
\end{definition}
We have the following result to determine whether a Hurwitz quaternion is a Hurwitz prime.
\begin{lemma}\label{lemma 2}
For any $\mathfrak{p}\in H$, $\mathfrak{p}$ is a Hurwitz prime if and only if $\|\mathfrak{p}\|$ is a prime number.
\end{lemma}
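Lemma \ref{lemma 2} reduces primality testing in $H$ to integer primality testing. A sketch of ours for integer-coordinate Hurwitz quaternions (trial division suffices for small norms):

```python
def is_hurwitz_prime(q):
    """q = (x1, x2, x3, x4) with integer coordinates; by Lemma 2, q is a
    Hurwitz prime iff its norm is a prime number."""
    n = sum(c * c for c in q)
    if n < 2:
        return False
    d = 2
    while d * d <= n:          # trial division on the norm
        if n % d == 0:
            return False
        d += 1
    return True

assert is_hurwitz_prime((1, 1, 1, 0))        # norm 3, a Hurwitz prime
assert is_hurwitz_prime((1, 1, 0, 0))        # norm 2
assert not is_hurwitz_prime((1, 1, 1, 1))    # norm 4 = 2 * 2
assert not is_hurwitz_prime((0, 1, 0, 0))    # a unit
```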
\section{Preliminaries}
In this section, we introduce some tools which will be used later. We begin with some well-known properties of $\mathbb{R}^4$.
For any two vectors $\overrightarrow{x}=(x_1,x_2,x_3,x_4),\overrightarrow{y}=(y_1,y_2,y_3,y_4)\in\mathbb{R}^4$, the metric is defined by
$$
|\overrightarrow{x}-\overrightarrow{y}|
=\sqrt{(x_1-y_1)^2+(x_2-y_2)^2+(x_3-y_3)^2+(x_4-y_4)^2}
$$
and the inner product is defined by
$$
\langle\overrightarrow{x},\overrightarrow{y}\rangle=x_1y_1+x_2y_2+x_3y_3+x_4y_4.
$$
Clearly $|\overrightarrow{x}|=\sqrt{\langle\overrightarrow{x},\overrightarrow{x}\rangle}$.
It is well-known that
\begin{equation}\label{para}
|\overrightarrow{x}-\overrightarrow{y}|^2
=|\overrightarrow{x}|^2+|\overrightarrow{y}|^2
-2
{\langle\overrightarrow{x},\overrightarrow{y}\rangle}
.
\end{equation}
Define a map $\sigma$ from $Q$ to $\mathbb{R}^4$ by
\begin{align*}
\sigma:\qquad\qquad Q\qquad&\rightarrow\qquad\mathbb{R}^4\\
x_1+x_2i+x_3j+x_4k\ &\rightarrow\ (x_1,x_2,x_3,x_4).
\end{align*}
Then it is easy to see that $\sigma$ is an isomorphism of real vector spaces. Moreover, $\sigma$ is also an isometry, i.e.,
\begin{equation}\label{equiv}
|\sigma(\mathfrak{a})-\sigma(\mathfrak{b})|=|\mathfrak{a}-\mathfrak{b}|
\end{equation}
for any $\mathfrak{a},\mathfrak{b}\in Q$.
Next, we introduce our main tool.
Denote by $S$ the unit hypersphere in $\mathbb{R}^4$ given by
$$
x_1^2+x_2^2+x_3^2+x_4^2=1.
$$
For any $0<\theta<\pi$ and any nonzero $\overrightarrow{x}\in\mathbb{R}^4$, define
$$
\Omega(\overrightarrow{x},\theta)
=\left\{\overrightarrow{y}\in\mathbb{R}^4:|\overrightarrow{y}|=1\mathrm{\ and\ }\arccos
\frac{\langle \overrightarrow{x},\overrightarrow{y}\rangle}
{|\overrightarrow{x}||\overrightarrow{y}|}\le \theta\right\}.
$$
$\Omega(\overrightarrow{x},\theta)$ is a hyperspherical cap in $S$; denote by $A(\Omega(\overrightarrow{x},\theta))$ its surface area. Clearly $A(\Omega(\overrightarrow{x},\theta))$ is a positive real number and, by rotational symmetry, depends only on $\theta$.
Define
\begin{equation}\label{def r omega}
r(n,\Omega(\overrightarrow{x},\theta))
=\#\left\{\overrightarrow{y}\in\mathbb{Z}^4:|\overrightarrow{y}|=\sqrt{n}
\mathrm{\ and\ }\frac{\overrightarrow{y}}{\sqrt{n}}\in\Omega(\overrightarrow{x},\theta)\right\}.
\end{equation}
The following theorem is a special case of \cite[Theorem 1]{fo} with $Q(X)=x_1^2+x_2^2+x_3^2+x_4^2$ and $\Omega=\Omega(\overrightarrow{x},\theta)$.
\begin{theorem}\label{thm fo}
Let notation be as above. For any positive integer $n$ with $(n,2)=1$ and $\varepsilon>0$, we have
$$
r(n,\Omega(\overrightarrow{x},\theta))
=r(n)\frac{A(\Omega(\overrightarrow{x},\theta))}{A(S)}\left(1+O\left(n^{-1/7+\varepsilon}\right)\right),
$$
where $r(n)$ is the number of integral solutions of $x_1^2+x_2^2+x_3^2+x_4^2=n$ and $A(S)$ is the surface area of $S$.
\end{theorem}
\begin{remark}
By Jacobi's famous four-square theorem, we have
\begin{equation}\label{jacobi}
r(n)=\left\{ \begin{aligned}
&8\sum_{m|n}m \ \ \ \mathrm{if\ }n\ \mathrm{is\ odd},&\\
& 24\sum_{m|n\atop2\nmid m}m\ \ \ \mathrm{if\ }n\ \mathrm{is\ even}.&
\end{aligned} \right.
\end{equation}
\end{remark}
\section{Proof of Theorem \ref{thm1}}
It is sufficient to prove that for any quaternion $\mathfrak{h}$ and any $\varepsilon>0$, there exist two Hurwitz primes $\mathfrak{p},\mathfrak{q}$ such that $$\left|\mathfrak{h}-\frac{\mathfrak{p}}{\mathfrak{q}}\right|<\varepsilon.$$
We first consider the case $\|\mathfrak{h}\|=0$, that is, $\mathfrak{h}=0$. Since the set of all quotients of prime numbers is dense in the positive real numbers, there exist two prime numbers $p,q$ such that $p/q<\varepsilon^2$. By Lemma \ref{lemma 1} and Lemma \ref{lemma 2}, there exist two Hurwitz primes $\mathfrak{p},\mathfrak{q}$ such that $\|\mathfrak{p}\|=p$, $\|\mathfrak{q}\|=q$ and $$\left|\frac{\mathfrak{p}}{\mathfrak{q}}\right|=\sqrt{\frac pq}<\varepsilon.$$
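The Hurwitz primes invoked here can be produced explicitly by brute force. An illustrative sketch (the choice $\varepsilon=0.2$, $p=3$, $q=101$ is ours, for illustration only; note $3/101<\varepsilon^2$):

```python
import math

def hurwitz_of_norm(n):
    # brute-force search for a Hurwitz quaternion of norm n; coordinates are
    # doubled, so "all even" is the integer case and "all odd" the half-integer case
    N = 4 * n
    bound = math.isqrt(N)
    for m1 in range(-bound, bound + 1):
        for m2 in range(-bound, bound + 1):
            for m3 in range(-bound, bound + 1):
                s = N - m1 * m1 - m2 * m2 - m3 * m3
                if s < 0:
                    continue
                m4 = math.isqrt(s)
                if m4 * m4 == s and m1 % 2 == m2 % 2 == m3 % 2 == m4 % 2:
                    return (m1 / 2, m2 / 2, m3 / 2, m4 / 2)
    return None  # cannot happen: the count in Lemma 1 is positive

# eps = 0.2: p = 3 and q = 101 are primes with p/q < eps^2, and by Lemma 2
# any Hurwitz quaternions of these norms are Hurwitz primes
p_quat, q_quat = hurwitz_of_norm(3), hurwitz_of_norm(101)
```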
In what follows, we assume $\|\mathfrak{h}\|\neq0$ and
without loss of generality, we assume
\begin{equation}\label{varepsilon}
\varepsilon<\min(\|\mathfrak{h}\|,1/\|\mathfrak{h}\|)\le1.
\end{equation}
Put
\begin{equation}\label{varepsilon1}
\varepsilon_1=\frac{\varepsilon^2}{10(\|\mathfrak{h}\|+\varepsilon)}
\le 1.
\end{equation}
By Theorem \ref{thm fo} and \eqref{jacobi}, for any positive odd integer $n$, we have
$$
r(n,\Omega(\sigma(\mathfrak{h}),\varepsilon_1))
=8\frac{A(\Omega(\sigma(\mathfrak{h}),\varepsilon_1))}{A(S)}\sum_{m|n}m
\left(1+O\left(n^{-1/7+\varepsilon_1}\right)\right).
$$
Since $\frac{A(\Omega(\sigma(\mathfrak{h}),\varepsilon_1))}{A(S)}$ is positive and depends only on $\varepsilon$ and $\|\mathfrak{h}\|$, there exists $N_1=N_1(\varepsilon,\mathfrak{h})$ such that
$$
r(n,\Omega(\sigma(\mathfrak{h}),\varepsilon_1))>1
$$
for every odd $n>N_1$.
By the same argument, there exists $N_2=N_2(\varepsilon_1,\overrightarrow{e_1})$ such that
$$
r(n,\Omega(\overrightarrow{e_1},\varepsilon_1))>1
$$
for every odd $n>N_2$, where $\overrightarrow{e_1}=(1,0,0,0)$.
Moreover, by the Prime Number Theorem, there exists $N_3=N_3(\varepsilon,\mathfrak{h})$ such that the interval $$(n(\|\mathfrak{h}\|-\varepsilon^2/10),n(\|\mathfrak{h}\|+\varepsilon^2/10))$$ contains at least one prime number if $n>N_3$.
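As an illustration (with hypothetical sample values $\|\mathfrak{h}\|=2$ and $\varepsilon=1/2$, not taken from the proof), such intervals are easily checked to contain primes:

```python
def is_prime(n):
    # trial division, adequate for this illustration
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def interval_has_prime(n, h_norm, eps):
    # does (n(||h|| - eps^2/10), n(||h|| + eps^2/10)) contain a prime?
    lo = n * (h_norm - eps ** 2 / 10)
    hi = n * (h_norm + eps ** 2 / 10)
    return any(is_prime(m) for m in range(int(lo) + 1, int(hi) + 1))
```

For example, with $n=1000$, $\|\mathfrak{h}\|=2$, $\varepsilon=1/2$ the interval is $(1975,2025)$, which does contain primes.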
Let $q$ be a prime number satisfying $$q>\max\left(\frac{N_1}{\|\mathfrak{h}\|-\varepsilon^2/10},N_2,N_3\right).$$
Then
\begin{equation}\label{existence q}
r(q,\Omega(\overrightarrow{e_1},\varepsilon_1))>1
\end{equation}
and there exists a prime
\begin{equation}\label{p}
p\in (q(\|\mathfrak{h}\|-\varepsilon^2/10),q(\|\mathfrak{h}\|+\varepsilon^2/10)).
\end{equation}
By our choice of $q$, we get that
$$
p>q(\|\mathfrak{h}\|-\varepsilon^2/10)>N_1.
$$
Hence, we obtain
\begin{equation}\label{existence p}
r(p,\Omega(\sigma(\mathfrak{h}),\varepsilon_1))>1.
\end{equation}
By \eqref{def r omega}, \eqref{existence q} and \eqref{existence p}, there exist $$\overrightarrow{x}=(x_1,x_2,x_3,x_4)\in\mathbb{Z}^4\mathrm{\ and\ }\overrightarrow{y}=(y_1,y_2,y_3,y_4)\in\mathbb{Z}^4$$ such that $|\overrightarrow{x}|=\sqrt{q}$, $|\overrightarrow{y}|=\sqrt{p}$,
\begin{equation}\label{upper bound 1}
\arccos\frac{x_1}{\sqrt{q}}
=
\arccos\frac{\langle \overrightarrow{x},\overrightarrow{e_1}\rangle}{|\overrightarrow{x}|}
\le \varepsilon_1
\end{equation}
and
\begin{equation}\label{upper bound 2}
\arccos
\frac{\langle \overrightarrow{y},\sigma(\mathfrak{h})\rangle}
{|\overrightarrow{y}||\sigma(\mathfrak{h})|}\le \varepsilon_1.
\end{equation}
By \eqref{upper bound 1}, we have
\begin{align}\label{upper bound 3}
0 \le 1-\frac{x_1}{\sqrt{q}}\le 1-\cos\varepsilon_1=2\sin^2\frac{\varepsilon_1}{2}
\le\frac{\varepsilon_1^2}{2}
\end{align}
and for $\ell=2,3,4$
\begin{align}\label{upper bound 4}
0 \le \frac{x_\ell^2}{{q}}\le 1-\frac{x_1^2}{{q}}\le 1-\cos^2\varepsilon_1
=\sin^2\varepsilon_1\le\varepsilon_1^2.
\end{align}
Here we have used the well-known inequality $0\le \sin t\le t$ if $0\le t\le 1$.
Moreover, by \eqref{p}, we have
\begin{equation}\label{norm 1}
\left|\frac{\overrightarrow{y}}{\sqrt{q}}\right|
=\sqrt{\frac pq}\le \sqrt{\|\mathfrak{h}\|+\varepsilon^2/10}
\end{equation}
and
\begin{align*}
\left(|\sigma(\mathfrak{h})|-\left|\frac{\overrightarrow{y}}{\sqrt{q}}\right|\right)^2
=\left(\frac{|\sigma(\mathfrak{h})|^2-\left|\frac{\overrightarrow{y}}{\sqrt{q}}\right|^2}
{|\sigma(\mathfrak{h})|+\left|\frac{\overrightarrow{y}}{\sqrt{q}}\right|}\right)^2
\le\left(\frac{\|\mathfrak{h}\|-\frac{p}{{q}}}
{|\sigma(\mathfrak{h})|}\right)^2\le \frac{\varepsilon^4}{100\|\mathfrak{h}\|}.
\end{align*}
Therefore, by \eqref{para}, \eqref{upper bound 2} and the last inequality in \eqref{upper bound 3}, we obtain
\begin{align}\label{difference 1}
\left|\sigma(\mathfrak{h})-\frac{\overrightarrow{y}}{\sqrt{q}}\right|^2
&=|\sigma(\mathfrak{h})|^2
+\left|\frac{\overrightarrow{y}}{\sqrt{q}}\right|^2
-2|\sigma(\mathfrak{h})|\left|\frac{\overrightarrow{y}}{\sqrt{q}}\right|
\frac{\langle\sigma(\mathfrak{h}),\frac{\overrightarrow{y}}{\sqrt{q}}\rangle}
{|\sigma(\mathfrak{h})|\left|\frac{\overrightarrow{y}}{\sqrt{q}}\right|}\nonumber\\
&\le|\sigma(\mathfrak{h})|^2
+\left|\frac{\overrightarrow{y}}{\sqrt{q}}\right|^2
-2|\sigma(\mathfrak{h})|\left|\frac{\overrightarrow{y}}{\sqrt{q}}\right|\cos\varepsilon_1\nonumber\\
&=\left(|\sigma(\mathfrak{h})|-
\left|\frac{\overrightarrow{y}}{\sqrt{q}}\right|\right)^2
+2|\sigma(\mathfrak{h})|\left|\frac{\overrightarrow{y}}{\sqrt{q}}\right|(1-\cos\varepsilon_1)\nonumber\\
&\le\frac{\varepsilon^4}{100\|\mathfrak{h}\|}
+{\varepsilon_1^2}\sqrt{\|\mathfrak{h}\|(\|\mathfrak{h}\|+\varepsilon^2/10)}
\le \frac{\varepsilon^4}{50\|\mathfrak{h}\|}\le \frac{\varepsilon^2}{9}.
\end{align}
Here we have applied \eqref{varepsilon} and \eqref{varepsilon1} in the last two steps.
Put
$$
\mathfrak{q}=x_1+x_2i+x_3j+x_4k
$$
and
$$
\mathfrak{p}=y_1+y_2i+y_3j+y_4k.
$$
Then $\|\mathfrak{p}\|=p$ and $\|\mathfrak{q}\|=q$. By Lemma \ref{lemma 2}, $\mathfrak{p}$ and $\mathfrak{q}$ are Hurwitz primes. Furthermore, by the triangle inequality we have
\begin{align}\label{upper bound 5}
\left|\mathfrak{h}-\frac{\mathfrak{p}}{\mathfrak{q}}\right|
&=\left|\mathfrak{h}-\frac{\mathfrak{p}(x_1-x_2i-x_3j-x_4k)}{\|\mathfrak{q}\|}\right|\nonumber\\
&\le \left|\mathfrak{h}-\frac{x_1}{q}\mathfrak{p}\right|
+\left|\frac{\mathfrak{p}(x_2i)}{q}\right|
+\left|\frac{\mathfrak{p}(x_3j)}{q}\right|
+\left|\frac{\mathfrak{p}(x_4k)}{q}\right|.
\end{align}
By \eqref{upper bound 4} and \eqref{p}, we obtain
\begin{align}\label{upper bound 6}
&\left|\frac{\mathfrak{p}(x_2i)}{q}\right|
+\left|\frac{\mathfrak{p}(x_3j)}{q}\right|
+\left|\frac{\mathfrak{p}(x_4k)}{q}\right|\nonumber\\
&=\sum_{\ell=2}^4\sqrt{\frac{x_\ell^2\|\mathfrak{p}\|}{q^2}}
=\sqrt{\frac{p}{q}}\sum_{\ell=2}^4\sqrt{\frac{x_\ell^2}{q}}\nonumber\\
&\le 3\sqrt{(\|\mathfrak{h}\|+\varepsilon^2/10)}\varepsilon_1\le \frac{3\varepsilon^2}{10\sqrt{\|\mathfrak{h}\|}}\le\frac{\varepsilon}{3}.
\end{align}
Here we have applied \eqref{varepsilon} and \eqref{varepsilon1} in the last two steps.
On the other hand, by \eqref{equiv}, \eqref{difference 1}, \eqref{upper bound 3} and \eqref{norm 1}, we get
\begin{align}\label{upper bound 7}
\left|\mathfrak{h}-\frac{x_1}{q}\mathfrak{p}\right|
=\left|\sigma(\mathfrak{h})-\frac{x_1}{q}\sigma(\mathfrak{p})\right|
&=\left|\sigma(\mathfrak{h})-\frac{\overrightarrow{y}}{\sqrt{q}}
+\left(1-\frac{x_1}{\sqrt{q}}\right)\frac{\overrightarrow{y}}{\sqrt{q}}\right|
\nonumber\\
&\le\left|\sigma(\mathfrak{h})-\frac{\overrightarrow{y}}{\sqrt{q}}
\right|
+\left(1-\frac{x_1}{\sqrt{q}}\right)\left|\frac{\overrightarrow{y}}{\sqrt{q}}\right|\nonumber\\
&\le \frac{\varepsilon}{3}+\frac{\varepsilon_1^2}{2}\sqrt{(\|\mathfrak{h}\|+\varepsilon^2/10)}
\le\frac{2\varepsilon}{3}.
\end{align}
Here we have applied \eqref{varepsilon} and \eqref{varepsilon1} again in the last step.
Combining \eqref{upper bound 5}, \eqref{upper bound 6} and \eqref{upper bound 7}, we have
$$
\left|\mathfrak{h}-\frac{\mathfrak{p}}{\mathfrak{q}}\right|\le \frac{2\varepsilon}{3}+\frac{\varepsilon}{3}=\varepsilon.
$$
Since $\varepsilon>0$ was arbitrary, applying the argument with $\varepsilon/2$ in place of $\varepsilon$ yields the required strict inequality. The proof is complete.
\section{Acknowledgements}
It is our pleasure to thank Professor Yingnan Wang of Shenzhen University for his help and valuable advice throughout this project.
% End of arXiv:1904.08002, ``Quotients of Hurwitz Primes'' (math.NT).
% arXiv:1401.1736: Statistical Topology of Three-Dimensional Poisson-Voronoi Cells and Cell Boundary Networks
\begin{abstract}
Voronoi tessellations of Poisson point processes are widely used for modeling many types of physical and biological systems. In this paper, we analyze simulated Poisson-Voronoi structures containing a total of 250,000,000 cells to provide topological and geometrical statistics of this important class of networks. We also report correlations between some of these topological and geometrical measures. Using these results, we are able to corroborate several conjectures regarding the properties of three-dimensional Poisson-Voronoi networks and refute others. In many cases, we provide accurate fits to these data to aid further analysis. We also demonstrate that topological measures represent powerful tools for describing cellular networks and for distinguishing among different types of networks.
\end{abstract}
\section{Introduction}
Poisson-Voronoi tessellations are random subdivisions of space that have found applications as models for many physical systems \cite{1992okabe, stoyan1995stochastic}. They have been used to study how galaxies are distributed throughout space \cite{icke1988voronoi, yoshioka1989large} and have aided in discovering new galaxies \cite{ramella2001finding}. They have been used to study how animals establish territories \cite{1980tanemura}, how crops can be planted to minimize weed growth \cite{1973fischer}, and how atoms are arranged in crystals \cite{mackay2011stereological}, liquids \cite{finney1970random}, and glasses \cite{hentschel2007statistical, luchnikov2000voronoi}. A more complete list of applications can be found in standard references on the subject \cite{1992okabe, stoyan1995stochastic}.
A Poisson-Voronoi tessellation is constructed as follows. Points called {\it seeds} are obtained as the realization of a uniform Poisson point process (e.g.~see \cite{daley2003introduction, daley2007introduction, stoyan1995stochastic, kingman1992poisson, cox1980point}) in a fixed region. {\it Cells} are the sets of all points in the region that are closer to a particular seed than to any other. If a point is equidistant to multiple nearest seeds, then it lies on the boundary of the associated cells. In three dimensions, there is a zero probability that a point will be equidistant to five or more seeds. All cells are convex, and this network of cells partitions the entire region.
Many exact results have been obtained in connection with three-dimensional Poisson-Voronoi structures. Meijering \cite{1953meijering} proved that the average number of faces per cell is $48\pi^2/35 + 2 \approx 15.535$, the average number of edges per face is $144\pi^2/(35+24\pi^2) \approx 5.228$, the average surface area per cell is $\left(256\pi/3\right)^{1/3}\Gamma(\frac{5}{3})\rho^{-2/3}$ and the average edge length per cell is $(3072\pi^5/125)^{1/3}\Gamma(\frac{4}{3})\rho^{-1/3}$, where $\rho$ is the number of seeds or cells per unit volume. Gilbert \cite{1962gilbert} expressed the variance of the cell volume distribution as a double integral. Using a more general approach, Brakke \cite{1985brakke} obtained integral expressions for the variances of number of cell faces, volumes, surface areas, number of face edges, face areas, and perimeters, as well as variances and covariances of several other quantities of interest and the distribution of edge lengths. In all of these cases, Brakke also solved these integrals numerically. Much more is understood about Poisson-Voronoi structures than can be detailed here, and the interested reader is referred to standard references \cite{1992okabe, 1994moller, stoyan1995stochastic} and the more recent surveys of M{\o}ller and Stoyan \cite{moller2007stochastic} and of Calka in \cite{kendall2010new}.
Additional properties of Poisson-Voronoi structures have been investigated through simulation. Using a data set with 358,000 cells, Kumar {\it et al.}~\cite{1992kumar} reported the distributions of faces with fixed numbers of edges and cells with fixed numbers of faces, volumes, face areas and cell surface areas. They also reported distributions of volumes and surface areas restricted to cells with fixed numbers of faces, and distributions of areas and perimeters restricted to faces with fixed numbers of edges. Although their data set was relatively small by current standards, their results are the most complete set of three-dimensional Poisson-Voronoi cell statistics available in the literature.
The Kumar {\it et al.}~data set has since been augmented by additional results. Marthinsen \cite{1996marthinsen} used a set of 100,000 cells to compute the distribution of cell volumes and surface areas. Tanemura \cite{2003tanemura} later used a substantially larger data set of 5,000,000 cells to obtain more precise data for the distributions of volumes, surfaces areas, and faces, as well as volumes for cells with fixed numbers of faces. Ferenc and N{\'e}da \cite{2007ferenc} later used a data set with 18,000,000 cells to calculate the distribution of cell volumes.
Thorvaldsen \cite{thorvaldsen1992simulation} and Reis {\it et al.} \cite{ferro2006geometry} used the ratio between the surface area of a cell and the surface area of a sphere of equal volume to describe the ``shape isotropy'' of a cell. Using a system of 250,000 cells, Thorvaldsen reported the distribution of this parameter among Poisson-Voronoi cells, and observed that this parameter decreases with increasing cell volume. Using a smaller set of 10,000 cells, Reis {\it et al.} considered how this parameter depends on the number of faces of a cell. A more sophisticated, higher-order method of measuring shape isotropy using Minkowski tensors has been recently introduced \cite{schroder2010minkowski} and used to characterize a number of natural structures \cite{kapfer2010local, schroder2010disordered}. In particular, Kapfer {\it et al.} \cite{kapfer2010local} have used this method to characterize a data set of 160,000 Poisson-Voronoi cells.
In prior studies, the topology of individual cells has been described by counting their numbers of faces. As we discuss below, this is a simplistic and incomplete description of the topology of a cell.\footnote{When referring to the topology of a cell we have in mind the topology of the cell and its immediate neighborhood, which includes the network of edges and faces which intersect it.} In this report, we present distributions of many important topological features of Poisson-Voronoi structures based on a dataset of a combined total of 250,000,000 cells. This is the largest data set available today and provides the most precise characterization of topological properties of the Poisson-Voronoi network. This resolution allows us to examine the validity of conjectures made on the basis of smaller data sets, some of which we now show are qualitatively incorrect. We supplement the discussion of topological properties with analysis of some purely geometrical descriptions and the interrelationship between some topological and geometrical features. We leave many results in the Supplemental Material and make the entire data set available online \cite{suppmat}.
\section{Method}
We employ the computer code {\tt vor3dsim}, developed by Ken Brakke \cite{vor3dsim}, to generate 250 Poisson-Voronoi tessellations, each of which contains 1,000,000 cells; periodic boundary conditions are used to eliminate boundary effects. Because the statistics we consider measure neighborhoods of the structure which are small compared to the total size of the system, we expect that statistics observed in this set of smaller systems will be consistent with what we would observe in a single system with an identical number of total cells. Details of the algorithms used to perform some of the more complex topological analyses were reported previously \cite{2012lazar, 2012mason}.
\section{Topological characteristics}
\subsection{Distribution of faces}
The simplest way to classify the topology of a Poisson-Voronoi cell involves counting its number of faces. This is the topological characterization most commonly quoted in the literature \cite{1952smith, 1974rhines}. Figure \ref{faces} shows this distribution of faces per cell; these data are consistent with those reported in \cite{1992kumar} and \cite{2003tanemura}.
\begin{figure}[b]
\centering
\includegraphics[width=1.\columnwidth]{multifaces.eps}
\caption{Distribution of the number of faces per cell; squares show the discrete probability distribution of Eq.~(\ref{kumar_faces}). The mean and standard deviation are 15.535 and 3.335, respectively, to within the accuracy of the data. The inset shows a subset of the data on a semilogarithmic plot; error bars show the standard error from the mean. }
\label{faces}
\end{figure}
As noted earlier, Meijering \cite{1953meijering} proved that the mean of this distribution is $48\pi^2/35 + 2$. Brakke \cite{1985brakke} obtained an integral form of the variance, which he numerically evaluated to be 11.1246. Our data reproduce these exact results to within 0.001\% and 0.004\%, respectively. While the distribution of faces is approximately symmetric about 15, there are no cells with fewer than 4 faces. We note that while Kumar {\it et al.}~\cite{1992kumar} and Tanemura \cite{2003tanemura} reported no cells with more than 36 faces based on their more limited data set, we find cells with up to 41 faces.
Kumar {\it et al.}~\cite{1992kumar} suggested that the distribution of the number of faces $F$ per cell can be described by the discretized two-parameter $\Gamma$ function:
\begin{equation}
p(F) = \int_{F-1/2}^{F+1/2}\frac{x^{a-1}}{b^a\Gamma(a)}e^{-x/b}dx.
\label{kumar_faces}
\end{equation}
The best fit to their data yielded $a=21.6292$ and $b=0.7199$. The form of $p(F)$ in Eq.~(\ref{kumar_faces}) gives $p(F)>0$ for all positive integers $F$, including 1, 2 and 3. Of course, this is incorrect since there can be no polyhedra with fewer than four faces in a Voronoi tessellation. Moreover, since we know exactly both the mean of the distribution as well as its variance, we are left with no free parameters. Regardless of whether we choose to include $p(1)$, $p(2)$, and $p(3)$ in normalizing the distribution, these parameters must be $a=21.85892$ and $b=0.710714$ to match the exact results.
Careful inspection of the data in the inset to Fig.~\ref{faces} reveals that Eq.~(\ref{kumar_faces}) does not accurately describe the decay in $p(F)$ for large $F$. This further demonstrates that this conjectured equation is not an exact representation of $p(F)$; we know of no such exact relation.
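The moment-matching claim above is easy to verify numerically. A sketch (using the constants $a=21.85892$, $b=0.710714$ stated above; the sum is truncated at $F=60$, where the tail is negligible):

```python
import math

# parameter values forced by the exact mean and variance quoted above
a, b = 21.85892, 0.710714

def gamma_pdf(x):
    # two-parameter Gamma density
    return x ** (a - 1) * math.exp(-x / b) / (b ** a * math.gamma(a))

def p(F, steps=200):
    # the discretized distribution: integrate the Gamma density over
    # [F - 1/2, F + 1/2] by Simpson's rule
    lo = F - 0.5
    h = 1.0 / steps
    s = gamma_pdf(lo) + gamma_pdf(lo + 1.0)
    for i in range(1, steps):
        s += gamma_pdf(lo + i * h) * (4 if i % 2 else 2)
    return s * h / 3

mean = sum(F * p(F) for F in range(1, 61))
variance = sum(F * F * p(F) for F in range(1, 61)) - mean ** 2
```

The computed mean agrees with $48\pi^2/35+2$ and the variance with Brakke's $11.1246$. Note that the discrete variance exceeds the continuous value $ab^2\approx 11.04$ by roughly $1/12$, consistent with Sheppard's correction for binning.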
\subsection{Distribution of edges}
We next consider the distribution $p(n)$ of faces with $n$ edges. Meijering \cite{1953meijering} proved that the mean of this distribution is $144\pi^2/(35+24\pi^2)$, and Brakke \cite{1985brakke} obtained an integral form of the variance, which he evaluated numerically to be 2.4846. Our data reproduce the exact result of the mean to within 0.0002\%, and the exact result for the variance to within 0.00001\%.
\begin{figure}
\centering
\includegraphics[width=1.\columnwidth]{multiedges.eps}
\caption{Distribution of number of edges per face; squares show the discrete probability distribution of Eq.~(\ref{newfit}). The mean and standard deviation for this data set are 5.228 and 1.576, respectively, to within the accuracy of the data. The inset shows the same data on a semilogarithmic plot; error bars show the standard error from the mean.}
\label{multiedges}
\end{figure}
Figure \ref{multiedges} shows this distribution for our Poisson-Voronoi data set. This distribution is similar to that reported earlier \cite{1992kumar}, albeit with more accurate statistics and over a greater range of $n$. The distribution has a maximum at 5 edges per face, which is close to the mean; there are no faces with fewer than 3 edges. While Kumar {\it et al.}~\cite{1992kumar} reported no faces with more than 15 edges, our data set shows faces with up to 18 edges.
While the mean and variance of this distribution are known exactly, little else is known and, to our knowledge, there are no proposed forms for this distribution. We suggest the following empirical form
\begin{equation}
p(n) =
\begin{cases}
A(n-2)e^{-B(n-\hat{n})^2} & n\geq 2 \\
0 & n<2,
\end{cases}
\label{newfit}
\end{equation}
where $n$ is the number of edges of a face. Although this empirical relation fits the data remarkably well, its origin is unclear. By requiring that the distribution is properly normalized and that the mean and variance reproduce the exact results, all three parameters are determined: $A=0.13608070$, $B=0.093483172$, and $\hat{n}=2.64631320$. Overall, this empirical functional form provides an excellent fit to the Poisson-Voronoi data set, including the large $n$ tail.
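With these constants, the normalization and the two exact moments can be verified numerically. A sketch (truncating the sum at $n=60$, where the terms are negligible):

```python
import math

# constants fixed by normalization and the exact mean and variance
A, B, nhat = 0.13608070, 0.093483172, 2.64631320

def p_edges(n):
    # the empirical form: A (n - 2) exp(-B (n - nhat)^2) for n >= 2, else 0
    return A * (n - 2) * math.exp(-B * (n - nhat) ** 2) if n >= 2 else 0.0

total = sum(p_edges(n) for n in range(2, 61))
mean = sum(n * p_edges(n) for n in range(2, 61))
variance = sum(n * n * p_edges(n) for n in range(2, 61)) - mean ** 2
```

The computed mean agrees with Meijering's $144\pi^2/(35+24\pi^2)$ and the variance with Brakke's $2.4846$.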
\subsection{Aboav-Weaire relation}
In studying two-dimensional cross-sections of polycrystalline magnesium oxide, Aboav \cite{1970aboav} and Weaire \cite{1974weaire} explored the relationship between the number of edges $n$ of a cell and the expected number of edges $m(n)$ of its $n$ neighbors. They observed that this relationship can be described by $m(n) \approx A + B/n$, which can be understood as follows. In two dimensions, the average number of edges per cell is $\langle n \rangle = 6$. If this average is approximately maintained among every cluster of cells, then a cluster with an $n$-sided cell in its center should have on average $(n+nm(n))/(n+1) \approx \langle n \rangle$ edges per cell. This gives us an expression for $m(n)$ in terms of $n$ and $\langle n \rangle$ of the above form: $m(n) \approx 5 + 6/n$. This equivalence is only approximate because the $\langle n \rangle = 6$ average is {\it not} maintained among every cluster of cells. This leads to a correction term that in part depends on the variance of the distribution. We note, for later, that this form of $m(n)$ decreases monotonically with increasing $n$. This relationship has been used to analyze biological tissue \cite{jeune1998interactions, mombach1993mitosis}, soap foams \cite{weaire1999physics, mejia2000evolution}, and other cellular structures \cite{elias1997two, earnshaw1994topological, moriarty2002nanostructured}.
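The bookkeeping behind this heuristic is exact for $m(n)=5+6/n$; a quick check in exact arithmetic:

```python
from fractions import Fraction

# if every cluster consisting of an n-sided cell and its n neighbors averaged
# exactly <n> = 6 edges per cell, the neighbors would average m(n) = 5 + 6/n
for n in range(3, 30):
    m = Fraction(5) + Fraction(6, n)
    cluster_average = (n + n * m) / (n + 1)
    assert cluster_average == 6
```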
In two dimensions, it was originally believed that this relationship also describes Poisson-Voronoi structures \cite{boots1982arrangement, kumar1993properties}. However, Hilhorst \cite{hilhorst2005planar, hilhorst2006planar} has shown that the correct form of the relationship is $m(n) = 4+3(\pi/n)^{1/2} + O(1/n)$, in the limit of large $n$.
We now investigate the extension of this relationship to three-dimensional Poisson-Voronoi structures, i.e., the relationship between the number of faces $F$ of a cell and the expected number of faces $m(F)$ of its neighbors. Figure \ref{weaireaboav} shows that $m(F)$ increases for small $F$, reaches a maximum at $F = 12$, and then decreases in a nearly linear manner for large $F$. The existence of the increasing region of $m(F)$ has not been previously reported for the Poisson-Voronoi structure.
Based on limited three-dimensional Poisson-Voronoi data (3729 cells), Kumar {\it et al.}~\cite{1992kumar} fit their data to a linear function as follows:
\begin{equation}
m(F) = A - BF,
\end{equation}
and found $A=16.57$ and $B=0.02$. Of course, such a fit is unreasonable because it suggests that $m(F)<0$ for sufficiently large $F$. Using the same data set as Kumar {\it et al.}, Fortes \cite{fortes1993applicability} proposed fitting this data to an Aboav-Weaire-type of relation:
\begin{equation}
m(F) = A + B/F,
\end{equation}
where the constants $A=15.96$ and $B=4.60$ were found using a least square fit to this data set.
\begin{figure}[h!]
\centering
\vspace{3mm}
\includegraphics[width=1.\columnwidth]{weaireaboav.eps}
\caption{(Color online) Expected number of faces $m(F)$ of neighbors of cells with $F$ faces; error bars indicate standard error from the mean. The red, yellow, green, and blue curves show the forms suggested by Kumar {\it et al.}, Fortes, Hilhorst (truncated at $i=3$), and Mason {\it et al.}~(truncated at $i=4$), respectively.}
\label{weaireaboav}
\end{figure}
The forms suggested by Kumar {\it et al.}~and Fortes do not provide even a qualitative fit to $m(F)$ at small $F$; they both decrease monotonically with increasing $F$, contrary to the data for $F < 12$. Clearly, the general form of the Aboav-Weaire relation provides a poor representation of the topological correlations between nearest neighbor cells in three-dimensional Poisson-Voronoi structures.
Hilhorst \cite{hilhorst2009heuristic}, building on his earlier work on two-dimensional Poisson-Voronoi structures \cite{hilhorst2005planar, hilhorst2006planar}, provided strong theoretical arguments for a relationship of the form:
\begin{equation}
m(F) = k_0 + \sum_{i=1}^{\infty}k_iF^{-i/6}
\label{hilhorsteq}
\end{equation}
with $k_0 = 8$. A least squares fit of this expression (truncated at $i=3$) to our data yields $k_1=2.474$, $k_2=49.36$, and $k_3=-51.50$. This form provides an excellent fit to the present three-dimensional Poisson-Voronoi data set, over the entire range of $F$.
Finally, Mason {\it et al.}~\cite{2012masonB}, also based on theoretical arguments, developed the following expression for $m(F)$:
\begin{equation}
m(F) = \langle F \rangle + \frac{\langle F \rangle+\mu_2}{F} - 1 - \frac{1}{\xi F} \sum_{i=1}^{\infty}k_i\left[(F-\langle F \rangle)^i - \mu_i \right],
\label{masoneq}
\end{equation}
where $\xi = 4\pi-6\cos^{-1}(-1/3)$ is a constant for three-dimensional structures, $\mu_i$ is the $i^{\text{th}}$ central moment of the distribution of faces, and $k_i$ are fitting parameters. The equation shown in Fig.~\ref{weaireaboav} corresponds to a best fit when considering $i\leq 4$; we find that $k_1=-1.567$, $k_2=0.0478$, $k_3=-0.00109$, and $k_4=0.000022$. Except at the tail end of $m(F)$, Eqs.~(\ref{hilhorsteq}) and (\ref{masoneq}) are indistinguishable.
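The constants entering these relations are easily evaluated; a quick numerical check (values rounded):

```python
import math

# the three-dimensional structural constant xi = 4*pi - 6*arccos(-1/3) ~ 1.1026
xi = 4 * math.pi - 6 * math.acos(-1.0 / 3.0)

# Meijering's exact mean number of faces, <F> = 48*pi^2/35 + 2 ~ 15.535
mean_F = 48 * math.pi ** 2 / 35 + 2
```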
\subsection{$p$-vectors}
Although counting faces can often distinguish between topologically distinct cells, it cannot do so in general. Figure \ref{sixfaces} shows topologically distinct cells, each with six faces.
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc}
\resizebox{0.33\columnwidth}{!}{\includegraphics{cube6.eps}} &\quad\quad
\resizebox{0.38\columnwidth}{!}{\includegraphics{noncube4.eps}}\\
(a)&\quad\quad(b)
\end{tabular}
\vspace{-5mm}
\end{center}
\caption{Topologically distinct cells with six faces. Type (a) appears more than twice as frequently as type (b) in the Poisson-Voronoi structure.}
\label{sixfaces}
\end{figure}
A more refined description of the topology of a cell involves recording not only its number of faces, but also its particular types of faces. Figure \ref{sixfaces}(a) has six four-sided faces, while Fig.~\ref{sixfaces}(b) has two three-sided faces, two four-sided faces, and two five-sided faces. These two topological types are the only ones with six faces that appear in the Poisson-Voronoi structures.
Barnette \cite{1969barnette}, in describing the combinatorial properties of three-dimensional polytopes, defined a $p$-vector as a vector of integer entries in which the $i^{\text{th}}$ entry denotes the number of $i$-sided faces of a polyhedron. Table \ref{pvectortable} lists the 48 most frequent $p$-vectors of the Poisson-Voronoi structure and their relative frequencies in the Poisson-Voronoi dataset. The table shows that the Poisson-Voronoi structure is not dominated by a small set of $p$-vectors; rather, the distribution is quite broad -- no $p$-vector occurs with a frequency greater than 0.39\%. In the 250,000,000 Poisson-Voronoi cell data set, there are 375,410 distinct $p$-vectors; the complete distribution of observed $p$-vectors may be found in the Supplemental Material.
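Computing a $p$-vector from a list of face sizes is straightforward. A sketch, using the two six-faced cell types of Fig.~\ref{sixfaces} (truncated at nine entries for display):

```python
from collections import Counter

def p_vector(face_edge_counts, max_edges=9):
    # the i-th entry counts the i-sided faces of the polyhedron
    c = Counter(face_edge_counts)
    return tuple(c.get(i, 0) for i in range(1, max_edges + 1))

# the two six-faced types, given by the edge counts of their faces
type_a = [4, 4, 4, 4, 4, 4]        # six quadrilateral faces (cube-like)
type_b = [3, 3, 4, 4, 5, 5]        # two triangles, two quadrilaterals, two pentagons
```

Type (a) yields $(0,0,0,6,0,\ldots)$ and type (b) yields $(0,0,2,2,2,\ldots)$, in the same convention as Table \ref{pvectortable}.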
\begin{table*}
\centering
\begin{tabular}{|r|c|r|c|}
\hline
& $p$-vector & \multicolumn{1}{c|}{$F$} & {\it f} \\
\hline
1 & $(001343100...)$ & 12 & 0.388\% \\
2 & $(001342100...)$ & 11 & 0.342\% \\
3 & $(001433200...)$ & 13 & 0.298\% \\
4 & $(001344100...)$ & 13 & 0.289\% \\
5 & $(001423100...)$ & 11 & 0.288\% \\
6 & $(002333110...)$ & 13 & 0.284\% \\
7 & $(001332000...)$ & 9 & 0.274\% \\
8 & $(000442000...)$ & 10 & 0.265\% \\
9 & $(001352200...)$ & 13 & 0.263\% \\
10 & $(002233100...)$ & 11 & 0.261\% \\
11 & $(001432200...)$ & 12 & 0.258\% \\
12 & $(001353200...)$ & 14 & 0.258\% \\
\hline
\end{tabular}
\hspace{0.6mm}
\begin{tabular}{|r|c|r|c|}
\hline
& $p$-vector & \multicolumn{1}{c|}{$F$} & {\it f} \\
\hline
13 & $(002332110...)$ & 12 & 0.256\% \\
14 & $(001422100...)$ & 10 & 0.254\% \\
15 & $(002322200...)$ & 11 & 0.252\% \\
16 & $(002242200...)$ & 12 & 0.248\% \\
17 & $(002342210...)$ & 14 & 0.247\% \\
18 & $(001443110...)$ & 14 & 0.244\% \\
19 & $(000443000...)$ & 11 & 0.239\% \\
20 & $(002343210...)$ & 15 & 0.233\% \\
21 & $(001442110...)$ & 13 & 0.232\% \\
22 & $(001424100...)$ & 12 & 0.231\% \\
23 & $(001434200...)$ & 14 & 0.223\% \\
24 & $(002243200...)$ & 13 & 0.217\% \\
\hline
\end{tabular}
\hspace{0.6mm}
\begin{tabular}{|r|c|r|c|}
\hline
& $p$-vector & \multicolumn{1}{c|}{$F$} & {\it f} \\
\hline
25 & $(002323200...)$ & 12 & 0.213\% \\
26 & $(002232100...)$ & 10 & 0.210\% \\
27 & $(002423210...)$ & 14 & 0.203\% \\
28 & $(002334110...)$ & 14 & 0.202\% \\
29 & $(001252000...)$ & 10 & 0.201\% \\
30 & $(000533100...)$ & 12 & 0.199\% \\
31 & $(001263100...)$ & 13 & 0.198\% \\
32 & $(001341100...)$ & 10 & 0.196\% \\
33 & $(002234100...)$ & 12 & 0.193\% \\
34 & $(001354200...)$ & 15 & 0.192\% \\
35 & $(001345100...)$ & 14 & 0.191\% \\
36 & $(001334000...)$ & 11 & 0.190\% \\
\hline
\end{tabular}
\hspace{0.6mm}
\begin{tabular}{|r|c|r|c|}
\hline
& $p$-vector & \multicolumn{1}{c|}{$F$} & {\it f} \\
\hline
37 & $(002422210...)$ & 13 & 0.190\% \\
38 & $(002333300...)$ & 14 & 0.188\% \\
39 & $(002324200...)$ & 13 & 0.188\% \\
40 & $(001442300...)$ & 14 & 0.186\% \\
41 & $(001444110...)$ & 15 & 0.185\% \\
42 & $(001443300...)$ & 15 & 0.185\% \\
43 & $(002332300...)$ & 13 & 0.180\% \\
44 & $(001351200...)$ & 12 & 0.179\% \\
45 & $(000453100...)$ & 13 & 0.178\% \\
46 & $(001533210...)$ & 15 & 0.175\% \\
47 & $(001333000...)$ & 10 & 0.173\% \\
48 & $(001453210...)$ & 16 & 0.172\% \\
\hline
\end{tabular}
\caption{The 48 most frequent $p$-vectors in the Poisson-Voronoi structure, their number of faces $F$ and their relative frequency ${\it f}$. The 250,000,000 cell data set contains 375,410 distinct $p$-vectors; the complete distribution of $p$-vectors may be found in the Supplemental Material.}
\label{pvectortable}
\end{table*}
Poisson-Voronoi cells may be contrasted with those found in other natural structures. Matzke \cite{1946matzke} carefully recorded $p$-vector data for 1000 soap bubbles in a foam, and Williams and Smith \cite{1952williams} reported $p$-vector data for 91 individual cells in an aluminum polycrystal. More recently, Kraynik {\it et al.}~\cite{kraynik2003structure} reported $p$-vector data to characterize over 1000 simulated monodisperse foam bubbles, and we have reported the distribution of $p$-vectors in a set of 269,555 grains in simulated grain growth microstructures \cite{2012lazar}.
Although soap foams and grain growth microstructures have much in common with Poisson-Voronoi tessellations, it is important to emphasize that they result from qualitatively different processes. Capillarity, surface tension, and curvature play significant roles in the formation and evolution of soap foams \cite{weaire1999physics} and grain growth microstructures \cite{ratke2002growth}. Since these forces tend to minimize interfacial areas (subject to certain constraints), we can expect the microstructures that result from these processes to somehow reflect the underlying physics. More specifically, we might expect to find qualitatively different microstructures than those that result from a Poisson-Voronoi tessellation, in which these forces play no role.
The data reported in \cite{1946matzke}, \cite{1952williams}, and \cite{kraynik2003structure} are insufficient to provide definitive $p$-vector distributions for either polycrystalline aluminum or soap foam structures. However, they are sufficient to clearly distinguish those structures from the Poisson-Voronoi one. Of the 91 aluminum cells examined by Williams and Smith, the most common $p$-vector was $(0004420...)$, which appeared 8 times. Seven other $p$-vectors appeared 2 or 3 times each, and the remaining 66 distinct $p$-vectors appeared only once each.
Considering only the interior bubbles of his original sample, Matzke found that the most common $p$-vector was $(0001\,10\,2...)$. These bubbles accounted for 20\% of the 600 interior bubbles. Three more $p$-vectors each accounted for at least 8\% of all bubbles, five more accounted for at least 3\% each, and five more accounted for at least 1.5\% of all bubbles.
Kraynik {\it et al.}~\cite{kraynik2003structure} found that data from simulated monodisperse foams closely resembled the experimental results of Matzke. In particular, they found that the most common $p$-vector was $(0001\,10\,2...)$, which accounted for just under 20\% of all relaxed and annealed monodisperse foam bubbles, a result almost identical to that of Matzke. The next most common $p$-vector in simulated monodisperse structures was $(0002840...)$, which accounted for almost 14\% of all bubbles, and then $(0001\,10\,3...)$, which accounted for just under 11\% of all bubbles. Four other $p$-vectors each accounted for at least 5\% of bubbles.
The distribution of $p$-vectors in grain growth structures \cite{2012lazar}, which evolve through mean curvature flow, is substantially more concentrated than in the Poisson-Voronoi microstructure, but not nearly as much as in the data of Matzke, Williams and Smith, and Kraynik {\it et al.}. In the grain growth data, the most common $p$-vector is $(0004400...)$, and it accounts for nearly 3\% of all cells; each of the 10 most common $p$-vectors accounts for at least 1\% of all cells.
Although the exact nature of the heavy bias towards certain $p$-vectors in each of these structures is not completely understood, it can already be used to distinguish the various structures from the Poisson-Voronoi structure, and from each other, by standard statistical tests (e.g., a chi-squared test). The distribution of $p$-vectors hence provides a useful means of distinguishing between cellular structures of different physical origin. Despite its early introduction, this method has not been widely adopted.
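A minimal sketch of such a test follows: a hand-rolled Pearson chi-squared statistic comparing two samples of $p$-vector (or topological-type) counts over the same categories. The toy counts are invented purely for illustration.

```python
def chi2_statistic(counts_a, counts_b):
    """Pearson chi-squared statistic for a 2 x k contingency table of
    category counts from two samples; large values indicate that the
    samples are unlikely to come from the same distribution."""
    n_a, n_b = sum(counts_a), sum(counts_b)
    total = n_a + n_b
    stat = 0.0
    for a, b in zip(counts_a, counts_b):
        col = a + b                      # total count in this category
        for obs, n in ((a, n_a), (b, n_b)):
            expected = n * col / total   # expected count under homogeneity
            stat += (obs - expected) ** 2 / expected
    return stat

# Toy counts over three invented p-vector categories.
similar   = chi2_statistic([100, 60, 40], [98, 62, 40])
dissimilar = chi2_statistic([100, 60, 40], [40, 60, 100])
# 95% critical value of chi-squared with (2-1)(3-1) = 2 d.o.f. is 5.991.
assert similar < 5.991 < dissimilar
```

In practice one would compare observed $p$-vector counts category by category (pooling rare $p$-vectors so that expected counts are not too small) and compare the statistic against the appropriate chi-squared quantile.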
\subsection{Distribution of topological types}
Although the $p$-vector of a cell provides more information than a mere count of its faces, it too does not completely describe its topology. For example, Fig.~\ref{gons8} shows three cells that share the $p$-vector $(0004420...)$ and yet are topologically distinct.
\begin{figure}[h]
\begin{center}
\begin{tabular}{ccc}
\includegraphics[height=0.23\columnwidth]{type2}\quad\quad&
\includegraphics[height=0.25\columnwidth]{type1}\quad\quad&
\includegraphics[height=0.25\columnwidth]{type3}\\
(a)\quad&(b)\quad&(c)
\end{tabular}
\end{center}
\vspace{-5mm}
\caption{Three topologically distinct cells, each with four quadrilateral, four pentagonal, and two hexagonal faces. Type (a) appears roughly twice as frequently as type (b), which appears roughly five times as frequently as type (c), in the Poisson-Voronoi structure.}
\label{gons8}
\end{figure}
In an earlier paper \cite{2012lazar}, we developed a method to succinctly characterize the complete topology of a cell. That work built on earlier results of Weinberg \cite{weinberg1965plane, weinberg1966simple}, who developed an efficient graph-theoretic algorithm to determine whether two triply-connected planar graphs are isomorphic. We showed that the edges and vertices of a cell can be treated as a planar graph, and Weinberg's method can then be used to calculate what we call a {\it Weinberg vector} for each cell. A Weinberg vector for a cell with $F$ faces is a vector with $6(F-2)$ integer entries that can be computed in $O(F^2)$ time. Two cells are topologically identical if and only if their Weinberg vectors are identical. Moreover, the method by which the Weinberg vector is calculated also determines the order of the cell's associated symmetry group \cite{1966weinberg2}. We record the topological type of each cell in the structures, along with its $p$-vector, symmetry order, and frequency. We do not reproduce the algorithm for computing a Weinberg vector here, but refer the interested reader to \cite{2012lazar}.
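To convey the flavor of this construction without reproducing Weinberg's exact traversal rule, the sketch below (our own simplified variant, not Weinberg's algorithm) computes a canonical code for a graph embedded in the plane, given as a rotation system, i.e., the cyclic order of neighbors around each vertex. It relabels vertices by order of first visit for every starting directed edge and both orientations, and keeps the lexicographically smallest relabeled rotation table; two embedded graphs receive the same code exactly when they are isomorphic as embedded graphs.

```python
from itertools import product

def canonical_code(rot):
    """Canonical code of a connected embedded graph; rot[v] lists the
    neighbors of v in cyclic order. A simplified stand-in for
    Weinberg's edge-traversal rule."""
    def code_from(start, first, flip):
        # Optionally reverse all rotations to handle mirror images.
        r = {v: list(reversed(ns)) if flip else list(ns)
             for v, ns in rot.items()}
        label, entry, order = {start: 0}, {start: first}, [start]
        i = 0
        while i < len(order):            # breadth-first discovery
            v = order[i]; i += 1
            ns = r[v]
            k = ns.index(entry[v])       # anchor the cyclic order
            for w in ns[k:] + ns[:k]:
                if w not in label:
                    label[w] = len(order)
                    entry[w] = v         # neighbor by which w was found
                    order.append(w)
        code = []
        for v in order:                  # relabeled rotation table
            ns = r[v]
            k = ns.index(entry[v])
            code.append(tuple(label[w] for w in ns[k:] + ns[:k]))
        return tuple(code)

    return min(code_from(v, w, flip)
               for v, ns in rot.items()
               for w, flip in product(ns, (False, True)))
```

For example, two differently labeled rotation systems of the cube graph receive identical codes, while any non-isomorphic embedded graph receives a different one.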
In the remainder of this paper, we use Schlegel diagrams to help visualize topological types. A {\it Schlegel diagram} is constructed by projecting the boundary of a cell onto one of its faces in such a way that vertices not belonging to that face lie inside it and no edges cross \cite{schlegel1883theorie, schlegel1886uber}. Figure \ref{schlegel-diagrams} shows Schlegel diagrams for the 24 most common topological types that appear in the Poisson-Voronoi structure. Along with the Schlegel diagram of each type, we show its frequency $f$, number of faces $F$, $p$-vector, and the order $S$ of its symmetry group.
\setlength{\tabcolsep}{3pt}
\begin{figure*}[ht]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
\specialcell[t]{1. $f$=0.274\%\\\includegraphics[scale=0.275]{top01}\\(0013320...)\\$F$=9, $S$=1 } &
\specialcell[t]{2. $f$=0.166\%\\\includegraphics[scale=0.275]{top02}\\(0013310...)\\$F$=8, $S$=2 } &
\specialcell[t]{3. $f$=0.158\%\\\includegraphics[scale=0.275]{top03}\\(0004420...)\\$F$=10, $S$=2 } &
\specialcell[t]{4. $f$=0.120\%\\\includegraphics[scale=0.275]{top04}\\(0013411...)\\$F$=10, $S$=1 } &
\specialcell[t]{5. $f$=0.117\%\\\includegraphics[scale=0.275]{top05}\\(0004410...)\\$F$=9, $S$=4 } &
\specialcell[t]{6. $f$=0.101\%\\\includegraphics[scale=0.275]{top06}\\(0004400...)\\$F$=8, $S$=8 } &
\specialcell[t]{7. $f$=0.096\%\\\includegraphics[scale=0.275]{top07}\\(0014221...)\\$F$=10, $S$=1 } &
\specialcell[t]{8. $f$=0.095\%\\\includegraphics[scale=0.275]{top08}\\(0003620...)\\$F$=11, $S$=2 } \\
\hline
\hline
\specialcell[t]{9. $f$=0.095\%\\\includegraphics[scale=0.275]{top09}\\(0013330...)\\$F$=10, $S$=1 } &
\specialcell[t]{10. $f$=0.094\%\\\includegraphics[scale=0.275]{top10}\\(0004430...)\\$F$=11, $S$=1 } &
\specialcell[t]{11. $f$=0.094\%\\\includegraphics[scale=0.275]{top11}\\(0014221...)\\$F$=10, $S$=1 } &
\specialcell[t]{12. $f$=0.093\%\\\includegraphics[scale=0.275]{top12}\\(0014231...)\\$F$=11, $S$=1 } &
\specialcell[t]{13. $f$=0.091\%\\\includegraphics[scale=0.275]{top13}\\(0004420...)\\$F$=10, $S$=2 } &
\specialcell[t]{14. $f$=0.091\%\\\includegraphics[scale=0.275]{top14}\\(0012510...)\\$F$=9, $S$=2 } &
\specialcell[t]{15. $f$=0.090\%\\\includegraphics[scale=0.275]{top15}\\(0005220...)\\$F$=9, $S$=4 } &
\specialcell[t]{16. $f$=0.088\%\\\includegraphics[scale=0.275]{top16}\\(0012520...)\\$F$=10, $S$=2 } \\
\hline
\hline
\specialcell[t]{17. $f$=0.082\%\\\includegraphics[scale=0.275]{top18}\\(0012520...)\\$F$=10, $S$=2 } &
\specialcell[t]{18. $f$=0.082\%\\\includegraphics[scale=0.275]{top19}\\(0012611...)\\$F$=11, $S$=1 } &
\specialcell[t]{19. $f$=0.081\%\\\includegraphics[scale=0.275]{top17}\\(0022321...)\\$F$=10, $S$=1 } &
\specialcell[t]{20. $f$=0.080\%\\\includegraphics[scale=0.275]{top20}\\(0022311...)\\$F$=9, $S$=2 } &
\specialcell[t]{21. $f$=0.079\%\\\includegraphics[scale=0.275]{top21}\\(0003610...)\\$F$=10, $S$=6 } &
\specialcell[t]{22. $f$=0.077\%\\\includegraphics[scale=0.275]{top22}\\(0014140...)\\$F$=10, $S$=2 } &
\specialcell[t]{23. $f$=0.075\%\\\includegraphics[scale=0.275]{top23}\\(0004430...)\\$F$=11, $S$=2 } &
\specialcell[t]{24. $f$=0.073\%\\\includegraphics[scale=0.275]{top24}\\(0032122...)\\$F$=10, $S$=1 } \\
\hline
\end{tabular}
\caption{Schlegel diagrams of the 24 most common topological types among the Poisson-Voronoi cells. Listed for each type is its frequency $f$, $p$-vector, number of faces $F$, and order $S$ of its associated symmetry group. In these data, there are four pairs of Weinberg vectors which share $p$-vectors.}
\label{schlegel-diagrams}
\end{figure*}
Each of the six most common topological types has 10 or fewer faces. This may be surprising in light of the fact that of the 48 most frequently occurring $p$-vectors, only one has fewer than 10 faces. It can be understood by noting that many distinct topological types can share the same $p$-vector, as illustrated earlier in Fig.~\ref{gons8}. This degeneracy increases with the number of faces, and so $p$-vectors of cells with many faces can appear frequently even if no single topological type with that $p$-vector appears frequently. Conversely, $p$-vectors with few faces are typically shared by few distinct topological types. The most frequently occurring $p$-vector $(001343100...)$ is shared by 38 distinct topological types,\footnote{This can be extracted from data available on {\it The Manifold Page} \cite{2013lutz}; data for these 38 types are included in Fig.~20 of the Supplemental Material.} not one of which appears among the 24 most common types.
The most common topological type in the Poisson-Voronoi structure (Fig.~\ref{schlegel-diagrams}, entry 1) has $p$-vector $(0013320...)$ and occurs with frequency 0.274\%. Two factors appear to contribute to its relatively high frequency. First, its distribution of face types closely resembles that of the structure as a whole (Fig.~\ref{multiedges}): four- and five-sided faces appear most frequently, followed by six-sided and then three-sided faces. Second, no other topological type shares this $p$-vector. Despite its frequency, however, it is difficult to describe this as a ``typical'' Poisson-Voronoi cell, given how few cells are of this type.
Figure \ref{famous-diagrams} illustrates Schlegel diagrams of a number of highly symmetric polyhedra: the tetrahedron, truncated tetrahedron, cube, truncated cube, pentagonal dodecahedron, truncated pentagonal prism, antiprism over a heptagon, and truncated octahedron. The first, third, and fifth of these are Platonic solids that occur with non-zero probability in the Poisson-Voronoi structures. The last of these shapes, often referred to as the {\it Kelvin tetrakaidecahedron} or {\it Kelvin cell}, was conjectured by Lord Kelvin \cite{sir1887division, kelvin1894homogeneous} to tile three-dimensional space with minimal surface area, much as the regular hexagon tiles the plane with minimal perimeter \cite{hales2001honeycomb}.
\setlength{\tabcolsep}{3pt}
\begin{figure*}[ht!]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
\specialcell[t]{$N=325$\\$F$=4, $S$=24 \\\includegraphics[height=1.9cm, angle=0]{schlegel-tetrahedronR.eps}\\ \\$tetrahedron$\\} &
\specialcell[t]{$N=22227$\\$F$=8, $S$=24 \\\includegraphics[height=1.9cm, angle=0]{truncated_tetrahedron.eps}\\\\$truncated$\\$tetrahedron$} &
\specialcell[t]{$N=$ 23744\\$F$=6, $S$=48\\\includegraphics[height=1.9cm]{schlegel-cubeR.eps}\\ \\$cube$\\} &
\specialcell[t]{$N=41$\\$F$=14, $S$=48 \\\includegraphics[height=1.9cm, angle=0]{truncated_cube.eps}\\\\$truncated$\\$cube$} &
\specialcell[t]{$N=3612$\\$F$=12, $S$=120\\\includegraphics[height=1.9cm]{schlegel-pentdodecR.eps}\\\\ $pentagonal$\\$dodecahedron$} &
\specialcell[t]{$N=1$\\$F$=17, $S$=20 \\\includegraphics[height=1.9cm, angle=0]{truncated_5_prism.eps}\\$truncated$\\$pentagonal$\\$prism$} &
\specialcell[t]{$N=$55\\$F$=16, $S$=28\\\includegraphics[height=1.9cm]{anti_7_prism.eps}\\ $antiprism$\\$over$ $a$\\$heptagon$} &
\specialcell[t]{$N=623$\\$F$=14, $S$=48\\\includegraphics[height=1.9cm]{schlegel-kelvinR.eps}\\ \\$Kelvin$ $cell$} \\
\hline
\end{tabular}
\caption{Highly symmetric polyhedra. For each type, we include the number of times $N$ it appears in the 250,000,000 cell data set, its number of faces $F$, and the order $S$ of its symmetry group.}
\label{famous-diagrams}
\end{figure*}
It can be shown that every topological type appears in the Poisson-Voronoi tessellation with non-zero frequency, and so the appearance of these highly symmetric shapes is not surprising. However, their relative frequencies warrant attention. The truncated cube and the Kelvin cell both have 14 faces and a symmetry group of order $S=48$, and yet they occur with substantially different frequencies. Frequencies are thus determined neither by the number of faces of a cell alone nor by the order of its associated symmetry group; exactly how these topological features impact frequency remains unclear.
The 24 most common topological types in Poisson-Voronoi structures account for less than 2.5\% of all cells. By contrast, the distribution of topological types in grain growth structures is substantially more concentrated \cite{2012lazar}. There, the 24 most common types account for over 25\% of all cells \cite{2012lazar}. While space-filling constraints in both the Poisson-Voronoi and grain growth structures create a bias towards certain topological types, the curvature flow process that governs the evolution of grain growth structures leads to a secondary bias towards cells that exhibit a low surface area to volume ratio. Distributions of topological types have not been collected, to the best of our knowledge, for other cellular structures.
\subsection{Order of symmetry groups}
As noted earlier, the algorithm which determines the Weinberg vector of a cell also determines the order of its associated symmetry group.
\begin{figure}
\centering
\includegraphics[width=1.\columnwidth]{symmetries.eps}
\caption{Distribution of orders of cell symmetry groups. Error bars indicate standard error from the mean; in many cases the error bars are not visible because they are smaller than the points.}
\label{symmetries}
\end{figure}
Figure \ref{symmetries} shows the distribution of symmetry orders among all cells. Roughly 91.71\% of cells have only the trivial symmetry (order 1), 6.61\% have a symmetry of order 2, and 1.00\% have a symmetry of order 4. The remaining 0.68\% have symmetries of order 3 or of order greater than 4. The probability of finding a cell with a particular symmetry order generally decreases quickly with the order, subject to certain secondary rules. More specifically, odd orders, and orders whose prime factors are all large, appear very infrequently. Odd orders appear only in topological types with rotational symmetries but no mirror or inversion symmetries. We therefore find no cells with symmetry order 13, for example, even though we find many with symmetry orders 16, 24, and 48.
The average symmetry order of Poisson-Voronoi cells is 1.16. This may be contrasted with grain growth structures \cite{2011lazar}, where the average observed symmetry order is 3.09 \cite{2012lazar}. This discrepancy may be due to the tendency of mean curvature flow to minimize surface area, although how this geometric tendency correlates with topological symmetry remains unclear.
We note that although some symmetries of a cell can be observed in its Schlegel diagram, the diagram can often obscure other symmetries. Entries 5 and 21 in Fig.~\ref{schlegel-diagrams}, for example, might appear at first sight more symmetric than entry 6, and yet the latter has the highest symmetry order of the three. To understand this apparent inconsistency between the diagram and the data, we note that entries 5 and 21 both have only one hexagonal face. Therefore, aside from rotations or reflections, there is no way to redraw identical graphs using a different face as the outside polygon. Entry 6, in contrast, has four pentagonal faces, and the graph can be redrawn with each of those faces as the outside polygon. These contribute additional symmetries which might be initially overlooked when considering the Schlegel diagrams.
\subsection{Cloths and swatches}
\label{subsec_cloths_and_swatches}
The types of topological information considered up to this point concern the configuration of faces and edges on cell surfaces, but not the topology of the network of cells extending throughout the tessellation. The latter is more difficult to address for at least two reasons. First, much more information is involved in characterizing the topology of the cell network than that of a single cell. Second, collecting statistics on the topological features of the boundary network requires a much larger computational effort.
One approach \cite{2012mason} is to construct and collect statistics of {\it swatches}, where a swatch is roughly a collection of labels for the vertices (intersections of four cells) in a portion of the tessellation. The labeling procedure is performed as follows. Let one of the vertices of the structure be designated as the root, and assign a label to this vertex. The swatch is expanded by a canonical procedure that assigns labels to any vertices connected by a single edge to one of the most recently labeled vertices. While performing this procedure on a quadrivalent Cayley tree would give a single, unique label for every vertex, in practice the network of edges contains loops around every face. The result is that vertices are often assigned multiple labels; the labels of such a vertex are considered to be equivalent, and define an equivalence relation. After $r$ iterations of assigning labels, the set of equivalence relations is known as a swatch of order $2r$. A swatch contains all of the topological information about the network of cells in the region around the root; as evidence of this, consider that applying the equivalence relations to a labeled quadrivalent Cayley tree exactly reproduces the network of edges. A swatch therefore classifies the topology of the locale, analogous to the way a Weinberg vector classifies a single cell.
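The label-propagation and equivalence bookkeeping can be sketched as follows. Note that the canonical ordering of neighbors is the crux of the construction in \cite{2012mason} and is elided here: this toy (names and simplifications ours) orders neighbors by vertex id, so its output depends on the vertex labeling and illustrates only the mechanics.

```python
def swatch_equivalences(adj, root, r):
    """Propagate labels outward from `root` for r steps (a swatch of
    order 2r); labels that land on the same vertex are equivalent.
    On a tree every vertex receives a single label; loops in the
    network force multiple labels and hence nontrivial equivalences."""
    labels_at = {root: [0]}        # vertex -> labels it has received
    frontier = [(0, root, None)]   # (label, vertex, parent vertex)
    next_label = 1
    for _ in range(r):
        new_frontier = []
        for _, v, parent in frontier:
            # Stand-in for the canonical neighbor ordering of the
            # original construction: sort by vertex id.
            for w in sorted(adj[v]):
                if w == parent:    # do not step straight back
                    continue
                labels_at.setdefault(w, []).append(next_label)
                new_frontier.append((next_label, w, v))
                next_label += 1
        frontier = new_frontier
    return [labs for labs in labels_at.values() if len(labs) > 1]

# On a 4-cycle, the two label paths meet at the far vertex after two
# steps, producing one equivalence class; a tree produces none.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
assert swatch_equivalences(cycle, 0, 2) == [[3, 4]]
```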
For a positive integer $k$, vertices may be randomly selected from the tessellation to serve as root vertices for the construction of swatches of order $k$. The frequencies at which the different types of swatches appear during this sampling gives a probability distribution that effectively describes the distribution of local topological environments. Allowing $k$ to vary over the positive integers gives an infinite set of probability distributions, collectively known as the {\it cloth} of the cell network.
The probability distribution for a given value of $k$ further defines a {\it $k$-entropy} via the Shannon entropy formula \cite{1948shannon}. The $k$-entropy indicates the variability of the local topological environment, and is a well-defined property of an infinite and statistically homogeneous \cite{2012mason} cellular structure. The $k$-entropies are reported here as functions of $k$.
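Concretely, for a given $k$ the entropy is computed from the empirical frequencies of the sampled swatch types; a minimal sketch (in nats):

```python
from collections import Counter
from math import log

def k_entropy(swatch_samples):
    """Shannon entropy of the empirical distribution of swatch types
    observed while sampling root vertices from the tessellation."""
    counts = Counter(swatch_samples)
    n = sum(counts.values())
    return -sum(c / n * log(c / n) for c in counts.values())
```

A uniform distribution over $m$ types gives entropy $\ln m$, the maximum possible, while a single repeated type gives entropy 0; this is the sense in which the $k$-entropy measures the variability of local topological environments.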
We constructed swatches for all $V$ = 1,691,911,665 vertices in the data set, and report the $k$-entropies for $k = 0$ to $8$ in Fig.~\ref{kentropy}.
\begin{figure}[h]
\centering
\vspace{3mm}
\includegraphics[width=1.\columnwidth]{kentropy.eps}
\caption{The $k$-entropy calculated using all $V$ = 1,691,911,665 vertices in the three-dimensional Poisson-Voronoi structures, containing 250,000,000 cells. The red line is $0.0457e^{0.972k}$.}
\label{kentropy}
\end{figure}
The $k$-entropy of the system is 0 for $k = 0, 1, 2$, since a sufficiently small neighborhood around any vertex is topologically trivial (e.g., an isolated vertex, or a vertex connected to four edges). On the other hand, for $k = 7, 8$ there are so many possible local environments that the number of swatch types is much larger than the number of vertices in the system. As a result, no swatch type is sampled more than once during the sampling procedure, and the finite system size caps the measured $k$-entropy from above. This probably affects the $k$-entropy for $k = 6$ as well, where the probability distribution of swatch types appears to be insufficiently sampled. The $k$-entropy for the remaining values $k = 3, 4, 5$ is adequately fit by a least-squares procedure to an exponential function.
Suppose that the $k$-entropy is roughly equal to the natural logarithm of the number of swatch types (this is exactly true in the case of a uniform distribution), and that the exponential form suggested above holds, i.e.,
\begin{equation}
\ln N_k \sim c_0 \exp(c_1 k),
\end{equation}
where $N_k$ is the number of swatch types of order $k$. This implies that $N_k$ grows roughly as a double exponential,
\begin{equation}
N_k \sim \exp(c_0 \exp(c_1 k)).
\end{equation}
Although more data points would certainly help to validate this suggestion, the apparent growth rate means that adequately sampling the $k$-entropy even for $k = 6$ is extremely computationally demanding.
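Under the stated assumptions, the fitted constants from the $k$-entropy plot give a rough sense of scale; a sketch (the estimates are only as good as the exponential fit and the uniformity assumption):

```python
from math import exp

# Fitted constants from the k-entropy data: H(k) ~ c0 * exp(c1 * k).
c0, c1 = 0.0457, 0.972

def n_swatch_estimate(k):
    """Rough estimate of the number N_k of swatch types of order k,
    assuming the k-entropy approximates ln(N_k) (exact only for a
    uniform distribution): N_k ~ exp(c0 * exp(c1 * k))."""
    return exp(c0 * exp(c1 * k))
```

At $k=6$ this already gives several million swatch types, comparable to what can be meaningfully sampled from $1.7\times 10^9$ vertices, and at $k=7$ roughly $10^{18}$, far beyond any feasible sample.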
Comparing our results with the $k$-entropies for grain growth structures \cite{2012mason}, we find that the $k$-entropies of the Poisson-Voronoi tessellation are slightly higher. The slightly higher values are consistent with the greater variability of cell types in the Poisson-Voronoi tessellation, as is evidenced by the differences in the distributions of the $p$-vectors or of the Weinberg vectors for the two structures. That said, the similarity of the $k$-entropies does not imply the similarity of the local topological environments, but only that the amount of variability in the two structures is similar.
\section{Geometrical data}
One of the most frequently studied geometrical-topological relations is that between the number of faces of a cell and its expected volume. Before considering that relationship, we look at the distribution of volumes over all cells and at the partial distributions of volumes limited to cells with fixed numbers of faces. Likewise, we consider the distribution of surface areas of cells, as well as areas and perimeters of faces.
\subsection{Distribution of volumes}
\label{Distribution of volumes}
Despite much interest in understanding the distribution of volumes among three-dimensional Poisson-Voronoi cells, few rigorous results are available. Throughout this section we use $x_v = v/\langle v \rangle$ to denote normalized cell volumes, where $v$ is the volume of a particular cell and $\langle v \rangle$ is the average volume per cell; throughout the paper we use $p(x)$ to denote the probability distribution of a variable $x$. Gilbert \cite{1962gilbert} and Brakke \cite{1985brakke} obtained exact integral expressions for the variance of this distribution; numerical integration yields a variance of 0.1790. Figure \ref{volumefits} shows the distribution of cell volumes in our data set. The distribution exhibits a maximum at roughly $x_v = 0.831$ with a probability density of $p(x_v)=1.006$. Several suggestions have been made for the form of this distribution.
Hanson \cite{hanson1983voronoi} suggested that the volume is distributed according to a Maxwell distribution:
\begin{equation}
p(x) = \frac{32}{\pi^2}x^2e^{-4x^2/\pi}.
\label{maxwelleq}
\end{equation}
Hanson acknowledged that this suggestion lacks physical motivation and that this form does not provide a particularly good fit to the data for all $x$. In addition, the variance of this distribution, $3\pi/8-1 \approx 0.178$, is inconsistent with the exact results of Gilbert \cite{1962gilbert} and Brakke \cite{1985brakke}.
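Both moments of Eq.~(\ref{maxwelleq}) are easily checked numerically; a short sketch using the trapezoidal rule (the integration cutoff and grid are ours, and are more than adequate given the Gaussian tail):

```python
from math import pi, exp

def maxwell_pdf(x):
    # Hanson's suggested form: p(x) = (32/pi^2) x^2 exp(-4 x^2 / pi).
    return 32.0 / pi**2 * x**2 * exp(-4.0 * x**2 / pi)

def moment(k, upper=20.0, n=200_000):
    """k-th moment of the pdf by the trapezoidal rule; the tail
    beyond `upper` is utterly negligible here."""
    h = upper / n
    total = 0.5 * upper**k * maxwell_pdf(upper)  # x = 0 endpoint vanishes
    for i in range(1, n):
        x = i * h
        total += x**k * maxwell_pdf(x)
    return total * h
```

The computation confirms the unit mean and a variance of $3\pi/8-1 \approx 0.178$, slightly below the exact Poisson-Voronoi value of 0.1790.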
Ferenc and N{\'e}da \cite{2007ferenc}, motivated by a known result for one-dimensional Poisson-Voronoi structures and based on a study of 18,000,000 three-dimensional Poisson-Voronoi cells, proposed
\begin{equation}
p(x) = \frac{3125}{24}x^4e^{-5x}.
\label{ferenceq}
\end{equation}
The variance of this distribution is 1/5 which, again, is inconsistent with the known exact result. Ferenc and N{\'e}da acknowledged that this form is empirical and not completely consistent with the true distribution. Because Eqs.~(\ref{maxwelleq}) and (\ref{ferenceq}) are inconsistent with the exact, known properties of this distribution, we do not consider them further.
Kumar {\it et al.}~\cite{1992kumar} considered a lognormal distribution as an approximation of the Poisson-Voronoi volume distribution:
\begin{equation}
p(x) = \frac{1}{x\sqrt{2\pi}\sigma}\exp\left[-\frac{(\ln x - \mu)^2}{2\sigma^2}\right],
\label{lognormaleq}
\end{equation}
where $\mu$ and $\sigma$ are determined by fitting. Using simulation data, Kumar {\it et al.}~\cite{1992kumar} obtained $\sigma = 0.4332$ and $\mu = -0.0735$. However, since we know the mean and variance exactly \cite{1962gilbert,1985brakke}, these two parameters are completely determined: $\sigma=0.4058$ and $\mu=-0.0823$. As noted by Kumar {\it et al.}~\cite{1992kumar} and others \cite{fatima1988grain}, a lognormal distribution appears to have little physical justification, and given its weakness in fitting the data, can serve only as a rough guide to the actual distribution.
Another suggested form for the Poisson-Voronoi volume distribution is a $\Gamma$ distribution function with one, two, or three fitting parameters. Kiang \cite{kiang1966random} combined results known for one-dimensional systems with limited simulation data to suggest a volume distribution of the form:
\begin{equation}
p(x) = \frac{\gamma}{\Gamma(\gamma)}(\gamma x)^{\gamma-1}e^{-\gamma x},
\label{gammaeqA}
\end{equation}
where $\gamma$ is a constant which Kiang believed to be 6. Andrade and Fortes \cite{andrade1988distribution}, using a larger data set, concluded that $\gamma\approx 5.56$. Kumar {\it et al.}~\cite{1992kumar} found $\gamma = 5.7869$. All these fitted values should be considered only approximations, since the variance $\sigma^2$ is known exactly; this determines $\gamma = 1/\sigma^2 = 5.586$.
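Both parameter determinations reduce to a few lines of arithmetic; a sketch (the tolerances are ours, and 0.1790 is the numerically evaluated exact variance quoted above):

```python
from math import log, sqrt

var = 0.1790  # numerically evaluated exact variance (Gilbert, Brakke)

# Lognormal, Eq. (lognormaleq): a unit mean forces mu = -sigma^2/2, so
# the variance reduces to exp(sigma^2) - 1; matching it to `var` then
# pins down both parameters.
sigma = sqrt(log(1 + var))
mu = -sigma**2 / 2

# One-parameter Gamma, Eq. (gammaeqA): shape = rate = g gives a unit
# mean and variance 1/g, so g = 1/var.
g = 1 / var
```

These few lines reproduce the values in the text: $\sigma = 0.4058$, $\mu = -0.0823$, and $\gamma = 5.586$.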
Kumar {\it et al.}~\cite{1992kumar} also suggested a two-parameter version of this distribution,
\begin{equation}
p(x) = \frac{x^{\gamma-1}}{\beta^\gamma\Gamma(\gamma)}e^{-x/\beta}.
\label{gammaeqB}
\end{equation}
Using simulation data, Kumar {\it et al.}~obtained best fit values of the constants, $\beta=0.1782$ and $\gamma=5.6333$. However, the exact variance results require $\beta=\sigma^2=0.1790$ and $\gamma=1/\sigma^2 = 5.586$. With these values, Eq.~(\ref{gammaeqB}) reduces to Eq.~(\ref{gammaeqA}).
Tanemura \cite{2003tanemura} suggested a three-parameter version of the distribution,
\begin{equation}
p(x) = \frac{\alpha\beta^{\gamma/\alpha}}{\Gamma(\gamma/\alpha)}x^{\gamma-1}e^{-\beta x^\alpha}.
\label{gammaeqC}
\end{equation}
Fitting to simulation data, Tanemura found $\alpha=1.409$, $\beta=2.813$, and $\gamma=4.120$. However, this can be simplified using the exact values for the mean and variance; hence, there is only one free parameter. Fitting to our own data and using these exact results yields $\alpha=1.1580$, which fixes $\beta=4.0681$ and $\gamma=4.7868$.
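Given $\alpha$, the remaining two parameters follow from the moment identities of Eq.~(\ref{gammaeqC}), $E[x^k] = \beta^{-k/\alpha}\,\Gamma((\gamma+k)/\alpha)/\Gamma(\gamma/\alpha)$: fixing the mean to 1 and the variance to 0.1790 leaves a one-dimensional root-finding problem. The sketch below (function name and bisection bracket are ours) recovers values close to those quoted above; small residual differences reflect rounding and the details of the fitting procedure.

```python
from math import gamma as G  # Euler Gamma function

def constrained_params(alpha, var=0.1790):
    """Solve for beta and the shape g in the three-parameter Gamma
    form, given alpha, a unit mean, and the exact variance, using
    E[x^k] = beta**(-k/alpha) * G((g + k)/alpha) / G(g/alpha)."""
    target = 1 + var  # E[x^2] when E[x] = 1

    def second_moment_ratio(g):
        # E[x^2]/E[x]^2: beta cancels, leaving a function of g alone.
        return G((g + 2) / alpha) * G(g / alpha) / G((g + 1) / alpha) ** 2

    lo, hi = 0.5, 50.0  # the ratio decreases monotonically in g
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if second_moment_ratio(mid) > target:
            lo = mid
        else:
            hi = mid
    g = 0.5 * (lo + hi)
    beta = (G((g + 1) / alpha) / G(g / alpha)) ** alpha  # unit-mean condition
    return beta, g
```

With $\alpha = 1.1580$ the constraints are satisfied essentially exactly, and the solution lands near the quoted $\beta$ and $\gamma$.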
Figure \ref{volumefits} shows a comparison of the volume distribution for our large data set and the various suggested fits.
\begin{figure}[h]
\centering
\vspace{3mm}
\includegraphics[width=1.\columnwidth, trim = 4mm 0 2mm 0mm]{multivolumes.eps}
\caption{(Color online) Distribution of normalized cell volumes, $x_v = v/\langle v \rangle$. The standard deviation of the data set is 0.4231, consistent with analytical results to within numerical accuracy. The curves represent suggested forms of the distribution, as described in the text. The inset shows a subset of the data on a semilogarithmic plot.}
\label{volumefits}
\end{figure}
The parameters in Eqs.~(\ref{lognormaleq}), (\ref{gammaeqA}), and (\ref{gammaeqB}) are determined using the known exact results, with the single free parameter in Eq.~(\ref{gammaeqC}) determined via a least squares fit to our data set.
Inspection of Fig.~\ref{volumefits} shows that Eq.~(\ref{lognormaleq}) does a poor job of reproducing the simulation data. Equations (\ref{gammaeqA}) and (\ref{gammaeqB}) exhibit systematic errors compared with the simulation data (see both the peak position and the large-$x$ behavior), although they are far superior to Eq.~(\ref{lognormaleq}). The three-parameter $\Gamma$ distribution function [Eq.~(\ref{gammaeqC})] provides the best fit to the data.
We next consider the partial distributions $p_{_F}(x_v)$ of cell volumes for each number of faces $F$; the partial distributions are normalized so that $p(x_v) = \sum_{F=1}^{\infty} p_{_F}(x_v)$. Data for $4\leq F \leq 32$ are shown in Fig.~\ref{faces_volumes_distributions}; a semilogarithmic scale is used to help differentiate the data for very small and very large volumes.
\begin{figure}
\centering
\vspace{3mm}
\includegraphics[width=1.\columnwidth]{faces_volumes_distributions.eps}
\caption{(Color online) Partial distributions $p_{_F}(x_v)$ of cell volumes for each number of faces $F$. The red-most curve, at the bottom left of the plot, corresponds to $F=5$; the blue-most curve, at the bottom right of the plot, corresponds to $F=32$. Data were binned in intervals of width 0.04.}
\label{faces_volumes_distributions}
\end{figure}
Tanemura \cite{2003tanemura} used a relatively large data set (5 million cells) and suggested that each of these partial distributions could be accurately described by the three-parameter $\Gamma$ function considered earlier [Eq.~(\ref{gammaeqC})], where $\alpha$, $\beta$, and $\gamma$ for each curve are parameters that depend on $F$. We test this suggestion using a least squares fit to obtain parameters $\alpha$, $\beta$, and $\gamma$ for each $F$. Figure \ref{faces_volumes_distributions} shows least squares fits of Eq.~(\ref{gammaeqC}) for each $F$. Obtained parameters are provided in Table IX of the Supplemental Material. While we know of no theoretical reason to expect this form, it appears to match the data very well.
\subsection{Distribution of surface areas}
\label{Distribution of surface areas}
We next consider the distribution of surface areas over all cells. In this section we use $x_s = s/\langle s \rangle$ to denote normalized surface area, where $s$ is the surface area of a particular cell and $\langle s \rangle$ is the average surface area per cell. Brakke \cite{1985brakke} provided an integral expression for the variance of this distribution, and numerically evaluated it to be 0.064679. Our data reproduce this result to within 0.0001\%. Figure \ref{surface_areas_distribution} plots the distribution of surface areas in our data set. The curve appears to peak at roughly $x_s=0.96$ with a probability density of $p(x_s)= 1.57$.
\begin{figure}
\centering
\vspace{3mm}
\includegraphics[width=1.\columnwidth]{multisurfaces.eps}
\caption{(Color online) Distribution of surface areas among all cells. Data were binned in intervals of width 0.04. The standard deviation is 0.254, shown to three decimal places. The inset shows a subset of the data on a semilogarithmic plot.}
\label{surface_areas_distribution}
\end{figure}
Kumar {\it et al.}~\cite{1992kumar} suggested that this distribution can be described by a two-parameter $\Gamma$ function [Eq.~(\ref{gammaeqB})] with fitted parameters $\alpha=15.4847$ and $\beta=0.06490$. However, since both the mean and variance are known, there are no degrees of freedom in fitting two parameters. The analytic constraints yield $\alpha=15.461$ and $\beta=0.06468$.
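The constrained two-parameter values can be checked directly: for a $\Gamma$ distribution with shape $\alpha$ and scale $\beta$, the mean is $\alpha\beta$ and the variance is $\alpha\beta^{2}$, so unit mean and Brakke's variance fix both parameters. A short sketch, assuming the standard shape-scale parametrization for Eq.~(\ref{gammaeqB}):

```python
# Gamma distribution with shape alpha and scale beta:
#   mean = alpha * beta,  variance = alpha * beta**2.
# Imposing mean = 1 and Brakke's variance fixes both parameters.
var = 0.064679      # Brakke's numerically evaluated variance
beta = var          # beta = variance / mean = variance when mean = 1
alpha = 1.0 / var   # alpha = mean**2 / variance
print(alpha, beta)  # ~15.461 and ~0.06468, the constrained values in the text
```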
If we consider a three-parameter $\Gamma$ function [Eq.~(\ref{gammaeqC})], then we are left with one degree of freedom in choosing the parameters. A least squares fit finds that $\alpha=1.845$, $\beta=4.416$, and $\gamma=8.557$ fit the data most closely, while satisfying the known analytic constraints.
Figure \ref{surface_areas_distribution} shows both fits and the collected data. Although the three-parameter version slightly underestimates $p(x)$ for small $x$, as can be seen on the inset plot, overall it provides excellent agreement with the data.
Figure \ref{faces_surface_areas_distributions} shows the partial distributions $p_{_F}(x_s)$ of surface areas for cells with fixed numbers of faces.
\begin{figure}
\centering
\vspace{3mm}
\includegraphics[width=1.\columnwidth]{faces_surface_areas_distributions.eps}
\caption{(Color online) Partial distributions $p_{_F}(x_s)$ of cell surface areas for each number of faces $F$. The red-most curve, at the bottom left of the plot, corresponds to $F=4$; the blue-most curve, at the bottom right of the plot, corresponds to $F=33$. Data were binned in intervals of width 0.04.}
\label{faces_surface_areas_distributions}
\end{figure}
We show the data on a semi-log plot to focus attention on data of very large and small surface areas. Tanemura \cite{2003tanemura} suggested that these distributions could also be accurately described by the three-parameter $\Gamma$ function considered earlier [Eq.~(\ref{gammaeqC})], where $\alpha$, $\beta$, and $\gamma$ for each curve are parameters that depend on $F$. We test this suggestion using a least squares fit to obtain parameters $\alpha$, $\beta$, and $\gamma$ for each $F$. Figure \ref{faces_surface_areas_distributions} shows least squares fits of Eq.~(\ref{gammaeqC}) for each $F$; the parameters are provided in Table X of the Supplemental Material. While we cannot provide theoretical justification for this form, it appears to match the data very well.
\subsection{Distribution of face areas}
We next consider the distribution of areas of faces. In this section we use $x_a = a/\langle a \rangle$ to denote normalized areas, where $a$ is the area of a particular face and $\langle a \rangle$ is the average area over all faces. Brakke \cite{1985brakke} provided an integral expression for the variance of this distribution; numerical evaluation shows that it is equal to 1.01426. Our data reproduce this exact result to within 0.005\%. The black curves in Fig.~\ref{faces_areas_distributions} show the distribution of areas among all faces in our data set. Unlike the distributions considered earlier, this one is far from symmetric; instead, it is strongly biased towards faces with very small areas.
We also consider the partial distributions $p_n(x_a)$ of areas limited to faces with fixed numbers of edges $n$. The colored curves in Fig.~\ref{faces_areas_distributions} show these distributions.
\begin{figure}
\centering
\vspace{3mm}
\includegraphics[width=1.\columnwidth]{multiareas.eps}
\caption{(Color online) Partial distributions $p_n(x_a)$ of face areas for each number of edges $n$. The red-most curve, located to the left of the other curves, corresponds to $n=3$; the blue-most curve, located at the bottom-center of the plot, corresponds to $n=15$. The black curves show the distribution of areas summed over all $n$. Data were binned in intervals of width 0.04.}
\label{faces_areas_distributions}
\end{figure}
It appears from the figure that $p_n(0)>0$ for $n=3$ and $4$. Hence, these curves cannot be fitted using a $\Gamma$ function [Eq.~(\ref{gammaeqC})], for which $p(0)$ always evaluates to $0$.
\subsection{Distribution of face perimeters}
Last, we consider the distribution of perimeters of faces. In this section we use $x_l = l/\langle l \rangle$ to denote a normalized perimeter, where $l$ is the perimeter of a particular face and $\langle l \rangle$ is the average perimeter over all faces. Again, Brakke \cite{1985brakke} derived an exact analytical expression for the variance that evaluates to 0.2898. Our data reproduce this result to within 0.004\%. The black curves in Fig.~\ref{faces_perim_distributions} show the distribution of perimeters among all faces in our data set. The shape of this figure is similar to that calculated analytically by Brakke \cite{brakke1987statistics} for the distribution of edge lengths in two-dimensional Poisson-Voronoi structures.
We also consider the partial distributions $p_n(x_l)$ of perimeters limited to faces with fixed numbers of edges $n$. The colored curves in Fig.~\ref{faces_perim_distributions} show these distributions.
\begin{figure}
\centering
\vspace{3mm}
\includegraphics[width=1.\columnwidth]{multiperims.eps}
\caption{(Color online) Partial distributions $p_n(x_l)$ of face perimeters for each number of edges $n$. The red-most curve, located to the left of the other curves, corresponds to $n=3$; the blue-most curve, located at the bottom-center of the plot, corresponds to $n=15$. The black curves show the distribution of perimeters summed over all $n$. Data were binned in intervals of width 0.04.}
\label{faces_perim_distributions}
\end{figure}
It appears from the data that $p_n(0)>0$ for $n=3$; this implies that the partial perimeter distributions cannot be fitted to a $\Gamma$ function.
\section{Correlations between geometry and topology}
We now consider how the average volume and surface area of a cell depend on its number of faces $F$, and how the average area and perimeter of a face depend on its number of edges $n$. In two-dimensional systems, the study of this type of relationship was pioneered by Lewis \cite{1928lewis}, who observed in some natural structures that the area of a cell was proportional to its number of edges.
Figure \ref{nvolumes} shows the average volume of a cell as a function of its number of faces; we use $\langle x_v \rangle_F$ to denote the average volume of cells with $F$ faces. Based on a data set with 102,000 cells, Kumar {\it et al.}~\cite{1992kumar} suggested that $\langle x_v \rangle_F = AF^b$, where $A$ and $b$ are fitting parameters. Kumar {\it et al.}~found $A=0.0164$ and $b=1.498$; the more extensive data collected here yield similar values, $A=0.0176$ and $b=1.468$.
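The power-law form $\langle x_v \rangle_F = AF^{b}$ can be fitted by linear least squares in log-log coordinates, since $\ln\langle x_v \rangle_F = \ln A + b\ln F$. A minimal sketch with noise-free synthetic data standing in for the measured per-$F$ averages (the paper's exact fitting procedure is not specified; this is one standard choice):

```python
import numpy as np

# Fit <x_v>_F = A * F**b as a straight line in log-log space:
#   ln <x_v>_F = ln A + b * ln F.
# Synthetic, noise-free data stand in for the measured averages;
# A_true and b_true are the values quoted in the text.
F = np.arange(4, 41)
A_true, b_true = 0.0176, 1.468
xv = A_true * F**b_true

b_fit, logA_fit = np.polyfit(np.log(F), np.log(xv), 1)
A_fit = np.exp(logA_fit)
print(A_fit, b_fit)  # recovers A_true and b_true to numerical precision
```

With real, noisy averages the log-log fit weights small-$F$ and large-$F$ points differently from a direct nonlinear fit, which is one reason fitted exponents can differ slightly between studies.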
\begin{figure}
\centering
\vspace{3mm}
\includegraphics[width=1.\columnwidth, trim=5mm 0 0 0]{nvolumes.eps}
\caption{Average normalized volume $\langle x_v \rangle_{_F}$ as a function of number of faces $F$; error bars indicate standard error from the mean. The red curve is a least squares fit to $AF^b$, as explained in the text.}
\label{nvolumes}
\end{figure}
The curve appears to fit the data well for $10\leq F \leq 20$, though not for large or small $F$.
A similar relation might be considered for the average surface area of a cell. Based on simulation results, Kumar {\it et al.}~\cite{1992kumar} suggested that $\langle x_s \rangle_F = AF^b$, where $\langle x_s \rangle_F$ is the average surface area of cells with $F$ faces and $A$ and $b$ are fitting parameters (Fig.~\ref{nareas}). Kumar {\it et al.}~found $A=0.09645$ and $b=0.8526$; our data yield similar values, $A=0.0993$ and $b=0.843$.
\begin{figure}
\centering
\vspace{3mm}
\includegraphics[width=1.\columnwidth, trim=5mm 0 0 0]{nareas.eps}
\caption{Average normalized surface area $\langle x_s \rangle_{_F}$ as a function of number of faces $F$; error bars indicate standard error from the mean. The red curve is a least squares fit to $AF^b$, as explained in the text.}
\label{nareas}
\end{figure}
This curve, too, appears to fit the data well for $10\leq F \leq 20$, though it also fails for large and small $F$.
Finally, we turn to the dependence of the expected area and perimeter of a face on its number of edges $n$, illustrated in Figs.~\ref{eareas} and \ref{eperims}.
\begin{figure}
\centering
\vspace{3mm}
\includegraphics[width=1.\columnwidth, trim=5mm 0 0 0]{eareas.eps}
\caption{Average normalized area $\langle x_a \rangle_n$ as a function of number of edges $n$; error bars indicate standard error from the mean.}
\label{eareas}
\end{figure}
\begin{figure}
\centering
\vspace{3mm}
\includegraphics[width=1.\columnwidth, trim=5mm 0 0 0]{eperims.eps}
\caption{Average normalized perimeter $\langle x_l \rangle_n$ as a function of number of edges $n$; error bars indicate standard error from the mean.}
\label{eperims}
\end{figure}
Although the exact forms of these relationships cannot be determined, it is clear that neither the average area nor the average perimeter of a face increases linearly with $n$.
\section{Conclusions}
Poisson-Voronoi networks are widely used across the physical and biological sciences as canonical cell structures. While two-dimensional Poisson-Voronoi networks have been widely studied and often used as surrogates for three-dimensional applications, such three-dimensional networks have been much less widely examined. In this report, we have provided a much more complete characterization of three-dimensional Poisson-Voronoi networks than exists in the literature.
In particular, we report a wide range of statistical properties of three-dimensional Poisson-Voronoi structures containing a combined total of 250,000,000 cells. The data demonstrate that although the Poisson-Voronoi structure is generated using a random distribution of points, it exhibits a rich topological and geometrical structure.
The size of the data set considered here has enabled us to resolve properties of such structures that have been impossible to investigate previously. While some of the results corroborate earlier work at much higher precision, the results also clearly contradict other conjectures.
In particular, we found that the natural extension of the Aboav-Weaire relation to three dimensions is {\bf not} consistent with our very large data set, contrary to what was previously reported \cite{1992kumar, fortes1993applicability}. Specifically, for $F < 12$ faces, the average number of faces of a cell's neighbors increases with the number of faces $F$ of the central cell. This is consistent with recent theoretical results \cite{hilhorst2009heuristic, 2012masonB}.
Considering more refined topological data, we observed that some $p$-vectors appear significantly more frequently than others. We also observed that even when considering a fixed $p$-vector, not all topological types appear with equal frequencies. Understanding such topological distributions may provide new insight into the topological structure of other natural cellular structures and the forces under which those systems evolve. One particularly interesting set of results shows that the order of the symmetry groups of the three-dimensional Poisson-Voronoi cells shows clear trends that can be used to distinguish it from other types of cellular networks.
Our data set supports the conjecture of Tanemura \cite{2003tanemura} regarding the distribution of cell volumes and surface areas when restricted to cells with fixed numbers of faces. In particular, a three-parameter $\Gamma$ function [Eq.~(\ref{gammaeqC})] appears to fit these data precisely. This equation also appears to fit the distribution of volumes over all cells. However, this functional form does not accurately describe the distribution of cell surface areas or cell face areas and perimeters.
We considered the dependence of the expected volume and surface area of a cell on its number of faces. The data presented here counter the conjectures of Kumar {\it et al.}~\cite{1992kumar} regarding the form of this relationship. Unfortunately, we were unable to provide a well-founded alternative.
Extensive geometrical and topological statistics from our data are included in the Supplemental Material, and an extensive set of measures of the cells in the entire 250,000,000-cell data set is available online at \cite{webpage2}.
{\bf Acknowledgments.} We thank Ken Brakke for providing computer programs to generate Poisson-Voronoi structures, and for ongoing support of his Surface Evolver program. Most of the computations reported herein were performed using the computational resources of the Institute for Advanced Study.
\bibliographystyle{apsrev4-1}
% arXiv:1401.1736 (2014-01-09): Statistical Topology of Three-Dimensional Poisson-Voronoi Cells and Cell Boundary Networks
% arXiv:2005.03921: Closed formulas and determinantal expressions for higher-order Bernoulli and Euler polynomials in terms of Stirling numbers
% Abstract: In this paper, applying the Fa\`{a} di Bruno formula and some properties of Bell polynomials, several closed formulas and determinantal expressions involving Stirling numbers of the second kind for higher-order Bernoulli and Euler polynomials are presented.
\section{Introduction}
The classical Bernoulli polynomials $B_{n}(x)$ and Euler polynomials
$E_{n}(x)$ are usually defined by means of the following generating functions
\[
\dfrac{te^{xt}}{e^{t}-1}=
{\displaystyle\sum\limits_{n=0}^{\infty}}
B_{n}(x)\dfrac{t^{n}}{n!}\text{ }\left( \left\vert t\right\vert <2\pi\right)
\text{ and }\dfrac{2e^{xt}}{e^{t}+1}=
{\displaystyle\sum\limits_{n=0}^{\infty}}
E_{n}(x)\dfrac{t^{n}}{n!}\text{ }\left( \left\vert t\right\vert <\pi\right) .
\]
In particular, the rational numbers $B_{n}=B_{n}(0)$ and integers $E_{n}=2^{n}E_{n}(1/2)$ are called classical Bernoulli numbers and Euler numbers, respectively.
As is well known, the classical Bernoulli and Euler polynomials play important
roles in different areas of mathematics such as number theory, combinatorics,
special functions and analysis.
Numerous generalizations of these polynomials and numbers are defined and many
properties are studied in a variety of context. One of them can be traced back
to N\"{o}rlund \cite{n}: The higher-order Bernoulli polynomials $B_{n}^{(\alpha)}(x)$ and higher-order Euler polynomials $E_{n}^{(\alpha)}(x)$, each
of degree $n$ in $x$ and in $\alpha$, are defined by means of the generating
functions
\begin{equation}
\left( \dfrac{t}{e^{t}-1}\right) ^{\alpha}e^{xt}=
{\displaystyle\sum\limits_{n=0}^{\infty}}
B_{n}^{(\alpha)}(x)\dfrac{t^{n}}{n!} \label{0}
\end{equation}
and
\begin{equation}
\left( \dfrac{2}{e^{t}+1}\right) ^{\alpha}e^{xt}=
{\displaystyle\sum\limits_{n=0}^{\infty}}
E_{n}^{(\alpha)}(x)\dfrac{t^{n}}{n!}, \label{15}
\end{equation}
respectively. For $\alpha=1$, we have $B_{n}^{(1)}(x)=B_{n}(x)$ and
$E_{n}^{(1)}(x)=E_{n}(x)$.
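The definitions above are easy to verify with a computer algebra system: expanding the generating functions (\ref{0}) and (\ref{15}) in $t$ and multiplying the $t^{n}$ coefficient by $n!$ recovers the polynomials. A brief check with SymPy for $\alpha=1$, recovering the classical $B_{2}(x)$ and $E_{2}(x)$:

```python
import sympy as sp

t, x = sp.symbols('t x')
N = 4

# Coefficients of t^n/n! in the generating functions (0) and (15) with alpha = 1
bern_gf = sp.expand(sp.series(t/(sp.exp(t) - 1) * sp.exp(x*t), t, 0, N + 1).removeO())
euler_gf = sp.expand(sp.series(2/(sp.exp(t) + 1) * sp.exp(x*t), t, 0, N + 1).removeO())

B2 = sp.expand(bern_gf.coeff(t, 2) * sp.factorial(2))
E2 = sp.expand(euler_gf.coeff(t, 2) * sp.factorial(2))
print(B2)  # x**2 - x + 1/6
print(E2)  # x**2 - x
```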
According to Wikipedia \cite{closed}, ``In mathematics, a closed-form expression is
a mathematical expression that can be evaluated in a finite number of
operations. It may contain constants, variables, certain `well-known'
operations (e.g., $+$, $-$, $\times$, $\div$), and functions (e.g., $n$th root, exponent,
logarithm, trigonometric functions, and inverse hyperbolic functions), but
usually no limit.''
From this point of view, Wei and Qi \cite{wei} studied several closed-form
expressions for Euler polynomials in terms of determinants and the Stirling
numbers of the second kind. Also, Qi and Chapman \cite{qi2} established two
closed forms for the Bernoulli polynomials and numbers involving the Stirling
numbers of the second kind and in terms of a determinant of combinatorial
numbers. Moreover, some special determinantal expressions and recursive
formulas for Bernoulli polynomials and numbers can be found in \cite{qi6}.
In 2018, Hu and Kim \cite{hu-kim} presented two closed formulas for
Apostol-Bernoulli polynomials with the aid of the properties of the Bell polynomials
of the second kind $B_{n,k}\left( x_{1},x_{2},...,x_{n-k+1}\right) $ (see
Lemma \ref{3}, below). Very recently, Dai and Pan \cite{d} have obtained the
closed forms for degenerate Bernoulli polynomials.
In this paper, we focus on higher-order Bernoulli and Euler polynomials in
the respects mentioned above. Firstly, we find some novel closed formulas
for higher-order Bernoulli and Euler polynomials in terms of the Stirling numbers
of the second kind $S(n,k)$ via the Fa\`{a} di Bruno formula for the Bell
polynomials of the second kind and generating function methods. Secondly,
we establish some determinantal expressions by applying a formula for the
higher-order derivative of the ratio of two differentiable functions. Consequently,
taking some special cases in our results provides further formulas for the
classical Bernoulli and Euler polynomials and numbers.
\section{Some Lemmas}
In order to prove our main results, we recall several lemmas below.
\begin{lemma}
\label{3}(\cite[p. 134 and 139]{Comtet}) The Bell polynomials of the second
kind, or say, partial Bell polynomials, denoted by $B_{n,k}\left( x_{1},x_{2},...,x_{n-k+1}\right) $ for $n\geq k\geq0,$ are defined by
\[
B_{n,k}\left( x_{1},x_{2},...,x_{n-k+1}\right) =
{\displaystyle\sum\limits_{\substack{1\leq i\leq n,\text{ }l_{i}\in\left\{
0\right\} \cup\mathbb{N}\\{{\textstyle\sum\nolimits_{i=1}^{n}}
il_{i}=n,\text{ }{{\textstyle\sum\nolimits_{i=1}^{n}}}l_{i}=k}}}
\frac{n!}{{\textstyle\prod\nolimits_{i=1}^{n-k+1}} l_{i}!}
{\textstyle\prod\limits_{i=1}^{n-k+1}}
\left( \frac{x_{i}}{i!}\right) ^{l_{i}}.
\]
The Fa\`{a} di Bruno formula can be described in terms of the Bell polynomials
of the second kind $B_{n,k}\left( x_{1},x_{2},...,x_{n-k+1}\right) $ by
\begin{equation}
\frac{d^{n}}{dt^{n}}f\circ h\left( t\right) =
{\displaystyle\sum\limits_{k=0}^{n}}
f^{\left( k\right) }\left( h\left( t\right) \right) B_{n,k}\left(
h^{\prime}\left( t\right) ,h^{\prime\prime}\left( t\right)
,...,h^{(n-k+1)}\left( t\right) \right) . \label{2}
\end{equation}
\end{lemma}
\begin{lemma}
\label{4}(\cite[p. 135]{Comtet}) For $n\geq k\geq0,$ we have
\begin{equation}
B_{n,k}\left( abx_{1},ab^{2}x_{2},...,ab^{n-k+1}x_{n-k+1}\right) =a^{k}b^{n}B_{n,k}\left( x_{1},x_{2},...,x_{n-k+1}\right) , \label{5}
\end{equation}
where $a$ and $b$ are any complex numbers.
\end{lemma}
\begin{lemma}
(\cite[p. 135]{Comtet}) For $n\geq k\geq0,$ we have
\begin{equation}
B_{n,k}\left( 1,1,...,1\right) =S\left( n,k\right) , \label{12}
\end{equation}
where $S\left( n,k\right) $ is the Stirling number of the second kind,
defined by \cite[p. 206]{Comtet}
\[
\frac{\left( e^{t}-1\right) ^{k}}{k!}=
{\displaystyle\sum\limits_{n=k}^{\infty}}
S\left( n,k\right) \frac{t^{n}}{n!}.
\]
\end{lemma}
\begin{lemma}
(\cite{guo}) For $n\geq k\geq1,$ we have
\begin{equation}
B_{n,k}\left( \frac{1}{2},\frac{1}{3},...,\frac{1}{n-k+2}\right) =\frac{n!}{\left( n+k\right) !}
{\displaystyle\sum\limits_{i=0}^{k}}
\left( -1\right) ^{k-i}\binom{n+k}{k-i}S\left( n+i,i\right) . \label{6}
\end{equation}
\end{lemma}
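Both identities (\ref{12}) and (\ref{6}) can be verified symbolically: SymPy exposes the partial Bell polynomials as `bell(n, k, symbols)`. A small check over $1\le k\le n\le 6$ (the range is our choice for illustration):

```python
import sympy as sp
from sympy.functions.combinatorial.numbers import stirling

# Check  B_{n,k}(1,...,1) = S(n,k)                                  [Eq. (12)]
# and    B_{n,k}(1/2, 1/3, ..., 1/(n-k+2))
#          = n!/(n+k)! * sum_i (-1)^{k-i} C(n+k, k-i) S(n+i, i)     [Eq. (6)]
for n in range(1, 7):
    for k in range(1, n + 1):
        ones = tuple(1 for _ in range(n - k + 1))
        assert sp.bell(n, k, ones) == stirling(n, k)
        fracs = tuple(sp.Rational(1, m + 1) for m in range(1, n - k + 2))
        lhs = sp.bell(n, k, fracs)
        rhs = sp.Rational(sp.factorial(n), sp.factorial(n + k)) * sum(
            (-1)**(k - i) * sp.binomial(n + k, k - i) * stirling(n + i, i)
            for i in range(k + 1))
        assert lhs == rhs
print("identities (12) and (6) hold for 1 <= k <= n <= 6")
```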
\begin{lemma}
\label{1}(\cite[p. 40, Entry 5]{bourbaki}) For two differentiable functions
$p\left( x\right) $ and $q\left( x\right) \neq0,$ we have for $k\geq0$
\begin{align}
& \frac{d^{k}}{dx^{k}}\left[ \frac{p\left( x\right) }{q\left( x\right)
}\right] \nonumber\\
& =\frac{\left( -1\right) ^{k}}{q^{k+1}}\left\vert
\begin{array}
[c]{cccccccc}
p & q & 0 & . & . & . & 0 & 0\\
p^{\prime} & q^{\prime} & q & . & . & . & 0 & 0\\
p^{\prime\prime} & q^{\prime\prime} & \binom{2}{1}q^{\prime} & . & . & . & 0 &
0\\
. & . & . & . & . & . & . & .\\
. & . & . & . & . & . & . & .\\
. & . & . & . & . & . & . & .\\
p^{\left( k-2\right) } & q^{\left( k-2\right) } & \binom{k-2}{1}q^{(k-3)}
& . & . & . & q & 0\\
p^{\left( k-1\right) } & q^{\left( k-1\right) } & \binom{k-1}{1}q^{(k-2)}
& . & . & . & \binom{k-1}{k-2}q^{\prime} & q\\
p^{\left( k\right) } & q^{\left( k\right) } & \binom{k}{1}q^{(k-1)} & . &
. & . & \binom{k}{k-2}q^{\prime\prime} & \binom{k}{k-1}q^{\prime}
\end{array}
\right\vert . \label{10}
\end{align}
In other words, the formula (\ref{10}) can be represented as
\[
\frac{d^{k}}{dx^{k}}\left[ \frac{p\left( x\right) }{q\left( x\right)
}\right] =\frac{\left( -1\right) ^{k}}{q^{k+1}}\left\vert W_{\left(
k+1\right) \times\left( k+1\right) }\left( x\right) \right\vert ,
\]
where $\left\vert W_{\left( k+1\right) \times\left( k+1\right) }\left(
x\right) \right\vert $ denotes the determinant of the matrix
\[
W_{\left( k+1\right) \times\left( k+1\right) }\left( x\right) =\left[
U_{\left( k+1\right) \times1}\left( x\right) \text{ \ \ }V_{\left(
k+1\right) \times k}\left( x\right) \right] .
\]
Here $U_{\left( k+1\right) \times1}\left( x\right) $ has the elements
$u_{l,1}\left( x\right) =p^{\left( l-1\right) }\left( x\right) $ for
$1\leq l\leq k+1$ and $V_{\left( k+1\right) \times k}\left( x\right) $ has
the entries of the form
\[
v_{i,j}\left( x\right) =
\begin{cases}
\binom{i-1}{j-1}q^{\left( i-j\right) }\left( x\right) , & \text{if
}i-j\geq0\text{;}\\
0, & \text{if }i-j<0\text{,}
\end{cases}
\]
for $1\leq i\leq k+1$ and $1\leq j\leq k.$
\end{lemma}
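Lemma \ref{1} can be sanity-checked symbolically by building $W_{\left( k+1\right) \times\left( k+1\right) }(x)$ exactly as described and comparing its determinant with a directly computed derivative. A sketch with the arbitrary test pair $p=e^{x}$, $q=1+x^{2}$ (our choice, not from the source):

```python
import sympy as sp

xs = sp.symbols('x')
p, q = sp.exp(xs), 1 + xs**2   # arbitrary smooth test functions with q != 0
k = 3

# First column: u_{l,1} = p^{(l-1)}; remaining columns: v_{i,j} = C(i-1, j-1) q^{(i-j)}
W = sp.zeros(k + 1, k + 1)
for i in range(1, k + 2):
    W[i - 1, 0] = sp.diff(p, xs, i - 1)
    for j in range(1, k + 1):
        if i - j >= 0:
            W[i - 1, j] = sp.binomial(i - 1, j - 1) * sp.diff(q, xs, i - j)

lhs = sp.diff(p / q, xs, k)
rhs = (-1)**k / q**(k + 1) * W.det()
assert sp.simplify(lhs - rhs) == 0
print("determinant formula verified for k = 3")
```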
\section{Closed formulas}
In this section, we give closed formulas for higher-order Bernoulli and Euler polynomials.
\begin{theorem}
\label{main}The higher-order Bernoulli polynomials $B_{n}^{(\alpha)}(x)$ can
be expressed as
\[
B_{n}^{(\alpha)}(x)=
{\displaystyle\sum\limits_{k=0}^{n}}
\binom{n}{k}
{\displaystyle\sum\limits_{i=0}^{k}}
\left\langle -\alpha\right\rangle _{i}\frac{k!}{(k+i)!}
{\displaystyle\sum\limits_{j=0}^{i}}
\left( -1\right) ^{i-j}\binom{k+i}{i-j}S(k+j,j)x^{n-k},
\]
where $S(n,k)$ is the Stirling numbers of the second kind and $\left\langle
x\right\rangle _{n}$ denotes the falling factorial, defined for $x\in\mathbb{R}$ by
\[
\left\langle x\right\rangle _{n}=\prod\limits_{k=0}^{n-1}\left( x-k\right) =
\begin{cases}
x\left( x-1\right) ...(x-n+1), & \text{if }n\geq1\text{;}\\
1, & \text{if }n=0.
\end{cases}
\]
In particular, for $x=0,$ the higher-order Bernoulli numbers $B_{n}^{(\alpha)}$
possess the following form
\begin{equation}
B_{n}^{(\alpha)}=
{\displaystyle\sum\limits_{i=0}^{n}}
\left\langle -\alpha\right\rangle _{i}\frac{n!}{(n+i)!}
{\displaystyle\sum\limits_{j=0}^{i}}
\left( -1\right) ^{i-j}\binom{n+i}{i-j}S(n+j,j). \label{11}
\end{equation}
\end{theorem}
\begin{proof}
Let us begin by writing $\left( \dfrac{e^{t}-1}{t}\right) ^{\alpha}=\left(
{\displaystyle\int_{1}^{e}}
s^{t-1}ds\right) ^{\alpha}.$ From (\ref{2}) and (\ref{6}), we have
\begin{align*}
& \frac{d^{k}}{dt^{k}}\left( \dfrac{e^{t}-1}{t}\right) ^{-\alpha}\\
& =
{\displaystyle\sum\limits_{i=0}^{k}}
\left\langle -\alpha\right\rangle _{i}\left( \dfrac{e^{t}-1}{t}\right)
^{-\alpha-i}B_{k,i}\left(
{\displaystyle\int_{1}^{e}}
s^{t-1}\ln s\,ds,
{\displaystyle\int_{1}^{e}}
s^{t-1}\ln^{2}s\,ds,...,
{\displaystyle\int_{1}^{e}}
s^{t-1}\ln^{k-i+1}s\,ds\right) \\
& \rightarrow
{\displaystyle\sum\limits_{i=0}^{k}}
\left\langle -\alpha\right\rangle _{i}B_{k,i}\left( \frac{1}{2},\frac{1}{3},...,\frac{1}{k-i+2}\right) ,\text{ \ \ \ \ as }t\rightarrow0\\
& =
{\displaystyle\sum\limits_{i=0}^{k}}
\left\langle -\alpha\right\rangle _{i}\frac{k!}{\left( k+i\right) !}
{\displaystyle\sum\limits_{j=0}^{i}}
\left( -1\right) ^{i-j}\binom{k+i}{i-j}S\left( k+j,j\right) .
\end{align*}
Also $\left( e^{xt}\right) ^{(k)}=x^{k}e^{xt}\rightarrow x^{k}$ as
$t\rightarrow0.$ So, using the Leibniz formula for the $n$th derivative of
the product of two functions yields that
\begin{align*}
& \lim_{t\rightarrow0}\frac{d^{n}}{dt^{n}}\left[ \left( \dfrac{e^{t}-1}{t}\right) ^{-\alpha}e^{xt}\right] \\
& =
{\displaystyle\sum\limits_{k=0}^{n}}
\binom{n}{k}
{\displaystyle\sum\limits_{i=0}^{k}}
\left\langle -\alpha\right\rangle _{i}\frac{k!}{\left( k+i\right) !}
{\displaystyle\sum\limits_{j=0}^{i}}
\left( -1\right) ^{i-j}\binom{k+i}{i-j}S\left( k+j,j\right) x^{n-k},
\end{align*}
which is equal to $B_{n}^{(\alpha)}(x)$ from the generating function
(\ref{0}). For $x=0,$ we immediately get the identity (\ref{11}).
\end{proof}
\begin{remark}
For $\alpha=1,$ noting the fact $\left\langle -1\right\rangle _{i}=\left(
-1\right) ^{i}i!$, the equation (\ref{11}) becomes
\begin{align*}
B_{n} & =
{\displaystyle\sum\limits_{i=0}^{n}}
i!\frac{n!}{(n+i)!}
{\displaystyle\sum\limits_{j=0}^{i}}
\left( -1\right) ^{j}\binom{n+i}{i-j}S(n+j,j)\\
& =
{\displaystyle\sum\limits_{j=0}^{n}}
\left( -1\right) ^{j}\frac{S(n+j,j)}{\binom{n+j}{j}}
{\displaystyle\sum\limits_{i=j}^{n}}
\binom{i}{j}\\
& =
{\displaystyle\sum\limits_{j=0}^{n}}
\left( -1\right) ^{j}\frac{\binom{n+1}{j+1}}{\binom{n+j}{j}}S(n+j,j),
\end{align*}
which is \cite[Eq. 1.3]{qi2}.
\end{remark}
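As a numerical sanity check (not part of the original argument), the closed formula of Theorem \ref{main} can be compared against the generating function (\ref{0}) with SymPy; `sp.ff` implements the falling factorial $\left\langle x\right\rangle _{i}$ and `stirling` the Stirling numbers of the second kind:

```python
import sympy as sp
from sympy.functions.combinatorial.numbers import stirling

t, x = sp.symbols('t x')

def B_higher(n, alpha, xval=x):
    """Closed formula of the theorem above for B_n^{(alpha)}(x)."""
    total = 0
    for k in range(n + 1):
        inner = 0
        for i in range(k + 1):
            s = sum((-1)**(i - j) * sp.binomial(k + i, i - j) * stirling(k + j, j)
                    for j in range(i + 1))
            inner += sp.ff(-alpha, i) * sp.factorial(k) / sp.factorial(k + i) * s
        total += sp.binomial(n, k) * inner * xval**(n - k)
    return sp.expand(total)

# Cross-check against (t/(e^t - 1))^alpha * e^{x t} for alpha = 2, n <= 5
alpha, N = 2, 5
series = sp.expand(sp.series((t/(sp.exp(t) - 1))**alpha * sp.exp(x*t),
                             t, 0, N + 1).removeO())
for n in range(N + 1):
    assert sp.expand(series.coeff(t, n) * sp.factorial(n) - B_higher(n, alpha)) == 0
print("closed formula matches the generating function for alpha = 2, n <= 5")
```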
\begin{theorem}
\label{main1}The higher-order Euler polynomials $E_{n}^{(\alpha)}(x)$ can be
represented as
\[
E_{n}^{(\alpha)}(x)=
{\displaystyle\sum\limits_{k=0}^{n}}
\binom{n}{k}
{\displaystyle\sum\limits_{i=0}^{k}}
\left\langle -\alpha\right\rangle _{i}\frac{S(k,i)}{2^{i}}x^{n-k}.
\]
In particular, the higher-order Euler numbers $E_{n}^{(\alpha)}$ have the
following form
\begin{equation}
E_{n}^{(\alpha)}=
{\displaystyle\sum\limits_{k=0}^{n}}
\binom{n}{k}
{\displaystyle\sum\limits_{i=0}^{k}}
\left\langle -\alpha\right\rangle _{i}S(k,i)2^{k-i}. \label{16}
\end{equation}
\end{theorem}
\begin{proof}
By (\ref{2}), (\ref{5}) and (\ref{12}), we have
\begin{align*}
\frac{d^{k}}{dt^{k}}\left( \dfrac{e^{t}+1}{2}\right) ^{-\alpha} & =
{\displaystyle\sum\limits_{i=0}^{k}}
\left\langle -\alpha\right\rangle _{i}\left( \dfrac{e^{t}+1}{2}\right)
^{-\alpha-i}B_{k,i}\left( \frac{e^{t}}{2},\frac{e^{t}}{2},...,\frac{e^{t}}{2}\right) \\
& =
{\displaystyle\sum\limits_{i=0}^{k}}
\left\langle -\alpha\right\rangle _{i}\left( \dfrac{e^{t}+1}{2}\right)
^{-\alpha-i}\left( \frac{e^{t}}{2}\right) ^{i}B_{k,i}\left(
1,1,...,1\right) \\
& \rightarrow
{\displaystyle\sum\limits_{i=0}^{k}}
\left\langle -\alpha\right\rangle _{i}\frac{S\left( k,i\right) }{2^{i}},\text{ \ \ \ \ \ \ as }t\rightarrow0.
\end{align*}
So, from the Leibniz rule again and the generating function for
higher-order Euler polynomials, given by (\ref{15}), we obtain that
\begin{align*}
E_{n}^{(\alpha)}(x) & =\lim_{t\rightarrow0}\frac{d^{n}}{dt^{n}}\left[
\left( \dfrac{e^{t}+1}{2}\right) ^{-\alpha}e^{xt}\right] \\
& =
{\displaystyle\sum\limits_{k=0}^{n}}
\binom{n}{k}
{\displaystyle\sum\limits_{i=0}^{k}}
\left\langle -\alpha\right\rangle _{i}\frac{S\left( k,i\right) }{2^{i}}x^{n-k}.
\end{align*}
Taking the special case $x=1/2$ and multiplying by $2^{n}$ gives the closed form (\ref{16}) immediately for the higher-order Euler numbers.
\end{proof}
\begin{remark}
For $\alpha=1,$ the counterpart closed formula for the classical Euler polynomials can be
derived. Moreover, setting further special cases leads to similar formulas for the
Euler numbers.
\end{remark}
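Similarly, the formula of Theorem \ref{main1} can be checked against the generating function (\ref{15}) symbolically (again a verification sketch, not part of the original argument):

```python
import sympy as sp
from sympy.functions.combinatorial.numbers import stirling

t, x = sp.symbols('t x')

def E_higher(n, alpha):
    """Closed formula of the theorem above for E_n^{(alpha)}(x)."""
    return sp.expand(sum(
        sp.binomial(n, k) * sum(sp.ff(-alpha, i) * sp.Rational(stirling(k, i), 2**i)
                                for i in range(k + 1)) * x**(n - k)
        for k in range(n + 1)))

# Cross-check against (2/(e^t + 1))^alpha * e^{x t} for alpha = 3, n <= 5
alpha, N = 3, 5
series = sp.expand(sp.series((2/(sp.exp(t) + 1))**alpha * sp.exp(x*t),
                             t, 0, N + 1).removeO())
for n in range(N + 1):
    assert sp.expand(series.coeff(t, n) * sp.factorial(n) - E_higher(n, alpha)) == 0
print("closed formula matches the generating function for alpha = 3, n <= 5")
```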
\section{Determinantal expressions}
This section is devoted to demonstrating some determinantal expressions for
higher-order Bernoulli and Euler polynomials.
\begin{theorem}
\label{main2}The higher-order Bernoulli polynomials $B_{n}^{(\alpha)}(x)$ can
be represented in terms of the following determinant as
\[
B_{n}^{(\alpha)}\left( x\right) =\left( -1\right) ^{n}\left\vert
\begin{array}
[c]{cccccc}
1 & \gamma_{0} & 0 & \ldots & 0 & 0\\
x & \gamma_{1} & \gamma_{0} & \ldots & 0 & 0\\
x^{2} & \gamma_{2} & \binom{2}{1}\gamma_{1} & \ldots & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
x^{n-2} & \gamma_{n-2} & \binom{n-2}{1}\gamma_{n-3} & \ldots & \gamma_{0} & 0\\
x^{n-1} & \gamma_{n-1} & \binom{n-1}{1}\gamma_{n-2} & \ldots & \binom{n-1}{n-2}\gamma_{1} & \gamma_{0}\\
x^{n} & \gamma_{n} & \binom{n}{1}\gamma_{n-1} & \ldots & \binom{n}{n-2}\gamma_{2} & \binom{n}{n-1}\gamma_{1}
\end{array}
\right\vert ,
\]
where
\[
\gamma_{n}=
{\displaystyle\sum\limits_{i=0}^{n}}
\left\langle \alpha\right\rangle _{i}\frac{n!}{\left( n+i\right) !}
{\displaystyle\sum\limits_{j=0}^{i}}
\left( -1\right) ^{i-j}\binom{n+i}{i-j}S\left( n+j,j\right) .
\]
\end{theorem}
\begin{proof}
We use Lemma \ref{1} for $p\left( t\right) =e^{xt}$ and $q\left( t\right)
=\left( \left( e^{t}-1\right) /t\right) ^{\alpha}.$ Note that if we
proceed with manipulations similar to the proof of Theorem \ref{main}, then we
deduce that
\[
\lim_{t\rightarrow0}\frac{d^{n}}{dt^{n}}q\left( t\right) =
{\displaystyle\sum\limits_{i=0}^{n}}
\left\langle \alpha\right\rangle _{i}\frac{n!}{\left( n+i\right) !}
{\displaystyle\sum\limits_{j=0}^{i}}
\left( -1\right) ^{i-j}\binom{n+i}{i-j}S\left( n+j,j\right) :=\gamma_{n}.
\]
So, we have
\begin{align*}
& \frac{d^{n}}{dt^{n}}\left[ \frac{e^{xt}}{\left( \left( e^{t}-1\right)
/t\right) ^{\alpha}}\right] \\
& =\frac{\left( -1\right) ^{n}}{\left( \left( e^{t}-1\right) /t\right)
^{(n+1)\alpha}}\left\vert
\begin{array}
[c]{cccccc}
e^{xt} & q & 0 & \ldots & 0 & 0\\
xe^{xt} & q^{\prime} & q & \ldots & 0 & 0\\
x^{2}e^{xt} & q^{\prime\prime} & \binom{2}{1}q^{\prime} & \ldots & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
x^{n-2}e^{xt} & q^{(n-2)} & \binom{n-2}{1}q^{(n-3)} & \ldots & q & 0\\
x^{n-1}e^{xt} & q^{(n-1)} & \binom{n-1}{1}q^{(n-2)} & \ldots & \binom{n-1}{n-2}q^{\prime} & q\\
x^{n}e^{xt} & q^{(n)} & \binom{n}{1}q^{(n-1)} & \ldots & \binom{n}{n-2}q^{\prime\prime} & \binom{n}{n-1}q^{\prime}
\end{array}
\right\vert \\
& \rightarrow\left( -1\right) ^{n}\left\vert
\begin{array}
[c]{cccccc}
1 & \gamma_{0} & 0 & \ldots & 0 & 0\\
x & \gamma_{1} & \gamma_{0} & \ldots & 0 & 0\\
x^{2} & \gamma_{2} & \binom{2}{1}\gamma_{1} & \ldots & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
x^{n-2} & \gamma_{n-2} & \binom{n-2}{1}\gamma_{n-3} & \ldots & \gamma_{0} & 0\\
x^{n-1} & \gamma_{n-1} & \binom{n-1}{1}\gamma_{n-2} & \ldots & \binom{n-1}{n-2}\gamma_{1} & \gamma_{0}\\
x^{n} & \gamma_{n} & \binom{n}{1}\gamma_{n-1} & \ldots & \binom{n}{n-2}\gamma_{2} & \binom{n}{n-1}\gamma_{1}
\end{array}
\right\vert
\end{align*}
as $t\rightarrow0.$ From the generating function, given by (\ref{0}), we reach
the desired result.
\end{proof}
\begin{remark}
We mention that for the special case $x=0,$ an analogous determinantal
expression for the higher-order Bernoulli numbers $B_{n}^{(\alpha)}$ can be
obtained. Moreover, for $\alpha=1,$ and for $\alpha=1$ with $x=0,$ similar
representations can be obtained for the classical Bernoulli polynomials and
numbers, respectively.
\end{remark}
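As an independent sanity check of Theorem \ref{main2} (not part of the original argument), the determinant can be evaluated in exact rational arithmetic and compared with the coefficients of the generating function $e^{xt}(t/(e^{t}-1))^{\alpha}$. The sketch below computes $\gamma_{n}=q^{(n)}(0)$ directly from the Taylor series of $q(t)=((e^{t}-1)/t)^{\alpha}$, sidestepping the $\left\langle \alpha\right\rangle _{i}$ notation entirely; the values $\alpha=3$ and $x=2/3$ are arbitrary test choices.

```python
from fractions import Fraction
from math import comb

N = 6                # verify the theorem for n = 0, ..., N-1
alpha = 3            # arbitrary positive-integer test value (assumption)
x = Fraction(2, 3)   # arbitrary rational test point (assumption)

fact = [1] * (2 * N)
for i in range(1, 2 * N):
    fact[i] = fact[i - 1] * i

# Truncated power series are lists of Fractions: coefficients of t^0..t^{N-1}.
def mul(a, b):
    c = [Fraction(0)] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

def inv(a):          # multiplicative inverse of a series with a[0] != 0
    b = [Fraction(0)] * N
    b[0] = 1 / a[0]
    for k in range(1, N):
        b[k] = -sum(a[i] * b[k - i] for i in range(1, k + 1)) / a[0]
    return b

# q(t) = ((e^t - 1)/t)^alpha and gamma_n = q^{(n)}(0)
base = [Fraction(1, fact[k + 1]) for k in range(N)]
q = [Fraction(1)] + [Fraction(0)] * (N - 1)
for _ in range(alpha):
    q = mul(q, base)
gamma = [fact[n] * q[n] for n in range(N)]

# Reference values from the generating function e^{xt} (t/(e^t - 1))^alpha
exp_xt = [x ** k / fact[k] for k in range(N)]
gen = mul(exp_xt, inv(q))
B_ref = [fact[n] * gen[n] for n in range(N)]

def det(m):          # cofactor expansion; fine for these small matrices
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def B_det(n):        # Theorem: B_n^{(alpha)}(x) = (-1)^n det(M)
    M = [[x ** k if j == 0 else
          (comb(k, j - 1) * gamma[k - j + 1] if k - j + 1 >= 0 else Fraction(0))
          for j in range(n + 1)] for k in range(n + 1)]
    return (-1) ** n * det(M)

assert all(B_det(n) == B_ref[n] for n in range(N))
assert B_det(1) == x - Fraction(alpha, 2)   # known value: B_1^{(a)}(x) = x - a/2
```

The matrix entries are read off the displayed determinant as $M_{k,0}=x^{k}$ and $M_{k,j}=\binom{k}{j-1}\gamma_{k-j+1}$ for $1\leq j\leq k+1$ (zero otherwise).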
\begin{theorem}
The higher-order Euler polynomials $E_{n}^{(\alpha)}(x)$ can be represented in
terms of the following determinant as
\[
E_{n}^{(\alpha)}\left( x\right) =\left( -1\right) ^{n}\left\vert
\begin{array}
[c]{cccccc}
1 & \beta_{0} & 0 & \ldots & 0 & 0\\
x & \beta_{1} & \beta_{0} & \ldots & 0 & 0\\
x^{2} & \beta_{2} & \binom{2}{1}\beta_{1} & \ldots & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
x^{n-2} & \beta_{n-2} & \binom{n-2}{1}\beta_{n-3} & \ldots & \beta_{0} & 0\\
x^{n-1} & \beta_{n-1} & \binom{n-1}{1}\beta_{n-2} & \ldots & \binom{n-1}{n-2}\beta_{1} & \beta_{0}\\
x^{n} & \beta_{n} & \binom{n}{1}\beta_{n-1} & \ldots & \binom{n}{n-2}\beta_{2} & \binom{n}{n-1}\beta_{1}
\end{array}
\right\vert ,
\]
where
\[
\beta_{n}=
{\displaystyle\sum\limits_{i=0}^{n}}
\left\langle \alpha\right\rangle _{i}\frac{S\left( n,i\right) }{2^{i}}.
\]
\end{theorem}
\begin{proof}
The proof proceeds along the same lines as that of Theorem \ref{main2}, so we
omit it.
\end{proof}
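A similar check can be run for the Euler case (again ours, not part of the paper). Here we additionally test the closed form for $\beta_{n}$, reading $\left\langle \alpha\right\rangle _{i}$ as the falling factorial $\alpha(\alpha-1)\cdots(\alpha-i+1)$; this reading of the notation is an assumption, which the comparison with the Taylor coefficients of $q(t)=((e^{t}+1)/2)^{\alpha}$ confirms numerically. The values $\alpha=3$ and $x=1/2$ are arbitrary.

```python
from fractions import Fraction
from math import comb

N = 6                # verify for n = 0, ..., N-1
alpha = 3            # arbitrary positive-integer test value (assumption)
x = Fraction(1, 2)   # arbitrary rational test point (assumption)

fact = [1] * N
for i in range(1, N):
    fact[i] = fact[i - 1] * i

def mul(a, b):       # product of truncated power series (lists of Fractions)
    c = [Fraction(0)] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

def inv(a):          # series inverse, a[0] != 0
    b = [Fraction(0)] * N
    b[0] = 1 / a[0]
    for k in range(1, N):
        b[k] = -sum(a[i] * b[k - i] for i in range(1, k + 1)) / a[0]
    return b

def stirling2(n, k):  # Stirling numbers of the second kind S(n, k)
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def falling(a, i):    # <a>_i read as the falling factorial (assumption)
    r = 1
    for m in range(i):
        r *= a - m
    return r

# q(t) = ((e^t + 1)/2)^alpha; its n-th Taylor coefficient times n! is beta_n
base = [Fraction(1)] + [Fraction(1, 2 * fact[k]) for k in range(1, N)]
q = [Fraction(1)] + [Fraction(0)] * (N - 1)
for _ in range(alpha):
    q = mul(q, base)
beta_taylor = [fact[n] * q[n] for n in range(N)]

# beta_n from the stated closed form
beta = [sum(Fraction(falling(alpha, i) * stirling2(n, i), 2 ** i)
            for i in range(n + 1)) for n in range(N)]
assert beta == beta_taylor

# Reference E_n^{(alpha)}(x) from the generating function (2/(e^t+1))^alpha e^{xt}
exp_xt = [x ** k / fact[k] for k in range(N)]
gen = mul(exp_xt, inv(q))
E_ref = [fact[n] * gen[n] for n in range(N)]

def det(m):
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def E_det(n):         # Theorem: E_n^{(alpha)}(x) = (-1)^n det(M)
    M = [[x ** k if j == 0 else
          (comb(k, j - 1) * beta[k - j + 1] if k - j + 1 >= 0 else Fraction(0))
          for j in range(n + 1)] for k in range(n + 1)]
    return (-1) ** n * det(M)

assert all(E_det(n) == E_ref[n] for n in range(N))
```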
\begin{remark}
The special values of the Bell polynomials of the second kind $B_{n,k}$ are of
notable value in combinatorics and number theory. In this respect, $B_{n,k}$
has been applied to tackle some difficult problems and to obtain significant
results in many studies (see for example \cite{qi9,qi12,qi13,qi15,qi16}).
\end{remark}
| {
"timestamp": "2020-05-11T02:09:12",
"yymm": "2005",
"arxiv_id": "2005.03921",
"language": "en",
"url": "https://arxiv.org/abs/2005.03921",
"abstract": "In this paper, applying the Faà di Bruno formula and some properties of Bell polynomials, several closed formulas and determinantal expressions involving Stirling numbers of the second kind for higher-order Bernoulli and Euler polynomials are presented.",
"subjects": "Number Theory (math.NT)",
"title": "Closed formulas and determinantal expressions for higher-order Bernoulli and Euler polynomials in terms of Stirling numbers",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.978384668497878,
"lm_q2_score": 0.8244619220634457,
"lm_q1q2_score": 0.8066409043071676
} |
https://arxiv.org/abs/1608.06905 | Pólya's conjecture fails for the fractional Laplacian | The analogue of Pólya's conjecture is shown to fail for the fractional Laplacian (-Delta)^{alpha/2} on an interval in 1-dimension, whenever 0 < alpha < 2. The failure is total: every eigenvalue lies below the corresponding term of the Weyl asymptotic.In 2-dimensions, the fractional Pólya conjecture fails already for the first eigenvalue, when 0 < alpha < 0.984. | \subsection*{\bf Introduction}
The Weyl asymptotic for the $n$-th eigenvalue of the Dirichlet Laplacian on a bounded domain of volume $V$ in ${\mathbb R}^d$ says that
\[
\lambda_n \sim (nC_d/V)^{2/d} \qquad \text{as $n \to \infty$,}
\]
where $C_d = (2\pi)^d/\omega_d$
and $\omega_d=$ volume of the unit ball in ${\mathbb R}^d$. In $1$-dimension, ``volume'' means length and in $2$-dimensions it means area, so that $C_1 = \pi, C_2 = 4\pi$. P\'{o}lya suggested that the Weyl asymptotic provides more than a limiting relation. He conjectured that it gives a lower bound on each eigenvalue:
\[
\lambda_n \geq (nC_d/V)^{2/d} , \qquad n=1,2,3,\ldots .
\]
He proved this inequality for tiling domains \cite{Pol61}, but it remains open in general.
In this note, we deduce from existing results in the literature that the analogue of
P\'{o}lya's conjecture \emph{fails} for the fractional Laplacian $(-\Delta)^{\alpha/2}$ on the simplest domain imaginable --- an interval in $1$-dimension. In $2$-dimensions we show it fails on the disk and square, at least for some values of $\alpha$.
\subsection*{\bf Fractional P\'{o}lya conjecture}
The fractional Laplacian $(-\Delta)^{\alpha/2}$ is a Fourier multiplier operator, with
\[
\big( (-\Delta)^{\alpha/2} u \big)\widehat{\ }(\xi) = |\xi|^\alpha \widehat{u}(\xi) , \qquad \alpha > 0 ,
\]
where the Fourier transform is defined by
\[
\widehat{u}(\xi) = \frac{1}{(2\pi)^{d/2}} \int_{{\mathbb R}^d} u(x) e^{-i x \cdot \xi} \, dx .
\]
The fractional Laplacian is known to have discrete Dirichlet spectrum on the bounded domain $\Omega \subset {\mathbb R}^d$, with weak eigenfunctions belonging to the fractional Sobolev space
\[
H^{\alpha/2}_0(\Omega) = \{ u \in H^{\alpha/2}({\mathbb R}^d) : \text{$u=0$ a.e.\ on ${\mathbb R}^d \setminus \Omega$} \, \} .
\]
For further information on the fractional Sobolev space see \cite{DNPV12}; for the fractional Laplacian see \cite{Kwa}; and for the variational formulation of the spectrum see \cite{Fra}.
Write $\lambda_n(\alpha)$ for the $n$-th eigenvalue of $(-\Delta)^{\alpha/2}$ on $\Omega$. The Weyl asymptotic (see \cite[Theorem~3.1]{Fra} and associated references) says that
\begin{equation} \label{eq:Weyl}
\lambda_n(\alpha) \sim (nC_d/V)^{\alpha/d} \qquad \text{as $n \to \infty$.}
\end{equation}
Thus the fractional analogue of the P\'{o}lya conjecture is the assertion that
\[
\lambda_n(\alpha) \geq (nC_d/V)^{\alpha/d} , \qquad n=1,2,3,\ldots .
\]
This inequality is what we shall disprove.
\subsection*{\bf Fractional P\'{o}lya conjecture fails for the unit interval, for all eigenvalues}
In $1$-dimension on an interval of length $L$, the conjecture says $\lambda_n(\alpha) \geq (n\pi/L)^\alpha$. Equality holds when $\alpha=2$, the classical case of a vibrating string, but the equality is broken as soon as $\alpha$ drops below $2$, according to the next theorem.
\begin{theorem}[Interval] \label{th:polyafalse}
Suppose $\Omega=(0,L)$ is an interval in $1$-dimension, and let $0<\alpha<2$. Then $\lambda_n(\alpha) < (n\pi/L)^\alpha$ for all $n$.
\end{theorem}
Hence the fractional P\'{o}lya conjecture fails on intervals, which contradicts a claim made about tiling domains (in all dimensions) in the literature \cite{YYY12}. See also our remark later in the paper about the square, which is a tiling domain in $2$ dimensions.
\begin{proof}
The eigenvalues of the fractional Laplacian are known to be bounded above by powers of the usual Laplacian eigenvalues, with strict inequality:
\[
\lambda_n(\alpha) < \lambda_n(2)^{\alpha/2} , \qquad n=1,2,3,\ldots ,
\]
whenever $0<\alpha<2$. See \autoref{pr:spectralcomparison} and the discussion at the end of the paper.
On an interval in $1$-dimension this last inequality says $\lambda_n(\alpha) < (n\pi/L)^\alpha$, which proves the theorem.
For an alternative proof when $\alpha=1$ that provides more explicit estimates, we recall an estimate of Kulczycki, Kwa\'{s}nicki, Ma{\l}ecki and Stos \cite[Theorem~6]{KKMS10}. It implies for the interval of length $L=2$ that
\[
\lambda_n(1) < \frac{n\pi}{L} - \frac{\pi}{40}
\]
whenever $n \geq 4$. When $n=1,2,3$, those authors give the following numerical estimates \cite[Section 11]{KKMS10}:
\[
\lambda_n(1) <
\begin{cases}
1.16 , & n=1 , \\
2.76 , & n=2 , \\
4.32 , & n=3 .
\end{cases}
\]
Their 12 digit estimates have been rounded up to 2 decimal places. The numerical estimates obviously satisfy $\lambda_n(1) < n\pi/L$ for $n=1,2,3$, with $L=2$.
A similarly explicit approach when $\alpha \neq 1$ proceeds through an asymptotic estimate of Kwa\'{s}nicki \cite[Theorem~1]{Kwa12}, which asserts that for the interval of length $L=2$,
\[
\lambda_n(\alpha) = \Big( \frac{n\pi}{2} - \frac{(2-\alpha)\pi}{8} \Big)^{\! \alpha} + O \Big( \frac{1}{n} \Big) .
\]
Rearranging, we find
\[
\lambda_n(\alpha) = \Big( \frac{n\pi}{2} \Big)^{\! \alpha}
\Big( 1 - \frac{\alpha(2-\alpha)}{4n} + o(1/n) \Big) .
\]
Clearly the second factor on the right is less than $1$ for all large $n$, and so $\lambda_n(\alpha) < (n\pi/L)^\alpha$ for all large $n$. Thus once again we see P\'{o}lya's conjecture fails for the fractional Laplacian.
\end{proof}
\subsection*{\bf Relation to Laptev's inequality of Berezin--Li--Yau type}
Laptev \cite[Corollary~2.3]{Lap97a} extended Berezin's eigenvalue sum inequality from the Laplacian to the fractional Laplacian, working on general domains and with an even more general class of operators. The resulting lower bound of ``Li--Yau'' form (see \cite[formula (4.2)]{Fra}) says for an interval in $1$-dimension that
\[
\Big( \frac{\pi}{L} \Big)^{\! \alpha} \frac{n^{1+\alpha}}{1+\alpha} \leq \sum_{k=1}^n \lambda_k(\alpha) , \qquad n=1,2,3,\ldots .
\]
For more information, see Frank's survey \cite[Theorem~4.1]{Fra}, and the improvements by Yildirim--Yolcu and Yolcu \cite[Theorem~1.4]{YYY13}, who strengthened the inequality with a lower order term.
Combining this lower bound by Laptev with the upper bound on individual eigenvalues from \autoref{th:polyafalse} yields a two-sided bound, which in the special case $\alpha=1$ has a particularly simple form:
\[
\frac{\pi}{2L} n^2 \leq \sum_{k=1}^n \lambda_k(1) < \frac{\pi}{2L} n(n+1) , \qquad n=1,2,3,\ldots .
\]
\subsection*{\bf Fractional P\'{o}lya conjecture fails for the unit disk, for the first eigenvalue}
Take $n=1$ and consider the unit disk in dimension $d=2$, which has area $\pi$. Then the corresponding term in the Weyl asymptotic \autoref{eq:Weyl} is $(1 \cdot C_2/\pi)^{\alpha/2} = 2^\alpha$. The next theorem shows that the fractional P\'{o}lya conjecture fails already for the first eigenvalue of the disk, when $\alpha$ is not too large.
\begin{theorem}[Disk]
For the unit disk, $\lambda_1(\alpha) < 2^\alpha$ for all $\alpha \in (0,0.802)$.
\end{theorem}
The theorem can be extended to $\alpha \in (0,0.984)$ provided one accepts a numerical plot as part of the proof; see part (iii) below.
\begin{proof}
We rely on several bounds from the literature for the unit ball in ${\mathbb R}^d$.
(i) The first bound is the simplest, but handles only $\alpha \in (0, 0.699)$. By work of Ba{\~n}uelos and Kulczycki \cite[Corollary~2.2]{BK04},
\[
\lambda_1(\alpha) \leq \frac{2^{\alpha + 1} \Gamma(\tfrac{\alpha}{2} + 1)^2 \Gamma(\tfrac{d}{2} + \alpha + 1)}{(d + \alpha) \Gamma(\alpha + 1) \Gamma(\tfrac{d}{2})} = \frac{2^{\alpha + 1} (\alpha + 1) \Gamma(\tfrac{\alpha}{2} + 1)^2}{\alpha + 2}
\]
after substituting the dimension $d=2$. Plotting this bound shows that $\lambda_1(\alpha) < 2^\alpha$ when $\alpha \in (0, 0.699)$. We will not justify this claim rigorously, since part (ii) below gives an analytic proof for an even larger interval of $\alpha$-values.
(ii) A somewhat stronger estimate by Dyda, Kuznetsov and Kwa{\' s}nicki, namely \cite[formula~(13)]{DKK15a}, says for $d=2$ that
\begin{equation} \label{eq:partii}
\lambda_1(\alpha)
\leq \frac{2^{\alpha-1} (\alpha + 2) (7 \alpha + 24) \Gamma(\tfrac{\alpha}{2} + 1)^2}{(\alpha + 4) (\alpha + 6)} .
\end{equation}
By plotting, we verify the desired inequality $\lambda_1(\alpha) < 2^\alpha$ on the larger interval $\alpha \in (0,0.802)$. This inequality can be checked rigorously, as follows: to show the right side of \autoref{eq:partii} is less than $2^\alpha$ is equivalent to showing
\[
2 \log \Gamma (\tfrac{\alpha}{2} + 2) - \log \frac{\alpha+2}{7\alpha+24} - \log (\alpha+4) - \log (\alpha+6) + \log 2 < 0 .
\]
Each term on the left is convex as a function of $\alpha$, and so it suffices to check that the left side equals $0$ at $\alpha=0$ and is negative at $\alpha=0.802$, which is easily done.
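The endpoint evaluations in this convexity argument are easy to reproduce. The following sketch (ours, not part of the paper) checks that the left-hand side of the displayed inequality vanishes at $\alpha=0$ and is negative at $\alpha=0.802$, and also that the bound from part (ii) stays below $2^\alpha$ on a grid in $(0,0.802)$; the grid spacing is an arbitrary choice.

```python
from math import lgamma, log, gamma

def lhs(a):
    # left side of the displayed inequality in part (ii)
    return (2 * lgamma(a / 2 + 2) - log((a + 2) / (7 * a + 24))
            - log(a + 4) - log(a + 6) + log(2))

assert abs(lhs(0.0)) < 1e-12      # equals 0 at alpha = 0
assert lhs(0.802) < 0             # negative at alpha = 0.802

def dkk_bound(a):
    # right side of formula (13) of Dyda-Kuznetsov-Kwasnicki, with d = 2
    return (2 ** (a - 1) * (a + 2) * (7 * a + 24) * gamma(a / 2 + 1) ** 2
            / ((a + 4) * (a + 6)))

# the bound certifies lambda_1(alpha) < 2^alpha throughout (0, 0.802)
for k in range(1, 81):
    a = 0.01 * k
    assert dkk_bound(a) < 2 ** a
```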
(iii) To get the desired inequality for the interval $\alpha \in (0,0.984)$, we apply an even stronger (and more complicated) bound of Dyda \cite[Section~5]{Dyd12}. It says for the unit ball that
\[
\lambda_1(\alpha) \leq \frac{P - \sqrt{P^2 - Q R}}{2 R}
\]
where the quantities are defined (when $d=2$) by
\begin{align*}
& \hspace*{1.3cm} P
= \frac{2^{\alpha - 1} \pi^2 (\alpha + 4) (\alpha^2 + 3 \alpha + 6) \Gamma(\tfrac{\alpha}{2} + 1)^2}{(\alpha + 1) (\alpha + 3) (\alpha + 6)} , \\
Q
= & \frac{4^{\alpha + 1} \pi^2 (\alpha + 2) \Gamma(\tfrac{\alpha}{2} + 1)^4}{\alpha + 6} ,
\qquad
R = \frac{\pi^2 (\alpha + 4)^2}{4 (\alpha + 1) (\alpha + 2)^2 (\alpha + 3)} ;
\end{align*}
the above formulation is taken from \cite[formula~(12)]{DKK15a}. Substituting these values of $P,Q,R$ and then plotting as a function of $\alpha$ shows $\lambda_1(\alpha) < 2^\alpha$ when $\alpha \in (0,0.984)$. We do not attempt an analytic proof of this last inequality.
\end{proof}
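For completeness, the plot underlying part (iii) can be reproduced numerically. The sketch below (ours) evaluates Dyda's bound with the stated $P,Q,R$ for $d=2$ and confirms that it lies below $2^\alpha$ on a sample grid up to $\alpha=0.95$, while it exceeds $2^\alpha$ at $\alpha=1$, consistent with the failure threshold near $0.984$; the grid is an arbitrary choice.

```python
from math import gamma, pi, sqrt

def dyda_bound(a):
    # Dyda's upper bound for lambda_1(alpha) on the unit disk (d = 2),
    # in the form (12) of Dyda-Kuznetsov-Kwasnicki
    g = gamma(a / 2 + 1)
    P = (2 ** (a - 1) * pi ** 2 * (a + 4) * (a ** 2 + 3 * a + 6) * g ** 2
         / ((a + 1) * (a + 3) * (a + 6)))
    Q = 4 ** (a + 1) * pi ** 2 * (a + 2) * g ** 4 / (a + 6)
    R = pi ** 2 * (a + 4) ** 2 / (4 * (a + 1) * (a + 2) ** 2 * (a + 3))
    return (P - sqrt(P * P - Q * R)) / (2 * R)

# below the level 2^alpha well past alpha = 0.802 ...
for k in range(1, 20):
    a = 0.05 * k          # grid 0.05, 0.10, ..., 0.95
    assert dyda_bound(a) < 2 ** a

# ... but no longer at alpha = 1, consistent with failure near 0.984
assert dyda_bound(1.0) > 2.0
```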
\subsection*{The square} To disprove P\'{o}lya's conjecture on the square $(-1,1) \times (-1,1)$ of sidelength $2$, it would suffice to show $\lambda_1(\alpha) < (C_2/4)^{\alpha/2} = \pi^{\alpha/2}$. Domain monotonicity of eigenvalues means it would be enough in fact to show the first eigenvalue of the unit disk (which lies inside the square) is less than $\pi^{\alpha/2}$. This last inequality can be verified when $\alpha < 0.417$ by using the estimate in (iii) above. The simpler bound in (ii) suffices for the square when $\alpha < 0.298$, while the bound in (i) is not good enough for any $\alpha$, for this purpose.
Hence in $2$-dimensions, the fractional P\'{o}lya conjecture can fail even for a tiling domain, namely, the square.
\subsection*{\bf Concluding discussion} We have shown that the analogue of P\'{o}lya's conjecture fails for the fractional Laplacian. The conjecture is known to fail for another variant of the Laplacian too, the so-called magnetic Laplacian, by work of Frank, Loss and Weidl \cite{FLW09}.
Thus any technique that might prove the original P\'{o}lya conjecture for the Dirichlet Laplacian must be rather special, because it must break down for both the magnetic Laplacian and the fractional Laplacian.
\section*{Appendix. Spectral comparison} \autoref{th:polyafalse} depended on the fact that the eigenvalues of the fractional Laplacian are bounded above by powers of the classical Laplacian eigenvalues. We give a direct proof of this fact in the next Proposition, and then discuss earlier work. The proof relies on Jensen's inequality and the Poincar\'{e} minimax characterization of eigenvalues, and it is new to the best of our knowledge.
\begin{proposition} \label{pr:spectralcomparison}
The function $\alpha \mapsto \lambda_n(\alpha)^{1/\alpha}$ is strictly increasing when $\alpha>0$, for each $n \geq 1$. Hence $\lambda_n(\alpha) < \lambda_n(2)^{\alpha/2}$ when $0<\alpha<2$.
\end{proposition}
\begin{proof}
Suppose $0<\alpha<\beta<\infty$. Take $u \in H^{\beta/2}({\mathbb R}^d)$ with $\int_{{\mathbb R}^d} |u|^2 \, dx = 1$, so that $\int_{{\mathbb R}^d} |\widehat{u}(\xi)|^2 \, d\xi = 1$ by Plancherel's identity. Then
\[
\Big( \int_{{\mathbb R}^d} |\xi|^\alpha |\widehat{u}(\xi)|^2 \, d\xi \Big)^{\! \beta/\alpha} < \int_{{\mathbb R}^d} |\xi|^\beta |\widehat{u}(\xi)|^2 \, d\xi
\]
by Jensen's inequality applied with the strictly convex function $t \mapsto t^{\beta/\alpha}$ and with measure $d\mu(\xi) = |\widehat{u}(\xi)|^2 \, d\xi$, and where the inequality is shown to be strict by the following argument. If equality held then the equality conditions for Jensen would imply that $|\xi|^\alpha$ is constant $\mu$-a.e., meaning $\mu(|\xi| \neq c)=0$ for some constant $c$. Also the sphere $|\xi| = c$ has $\mu$-measure zero, and so we conclude $\mu \equiv 0$ and hence $\widehat{u} = 0$ a.e.\ with respect to Lebesgue measure. That contradiction shows that Jensen's inequality must hold strictly.
Next, recall that the eigenvalues are characterized variationally \cite[p.~97]{B80}, with
\[
\lambda_n(\alpha) = \min_{S \in S_n(\alpha)} \max \Big\{ \int_{{\mathbb R}^d} |\xi|^\alpha |\widehat{u}|^2 \, d\xi : u \in S \text{\ with\ } \int_{{\mathbb R}^d} |u|^2 \, dx = 1 \Big\}
\]
for $\alpha > 0$, where $S_n(\alpha)$ is the collection of all $n$-dimensional subspaces of $H^{\alpha/2}_0(\Omega)$. The minimum is attained when $S$ is spanned by the first $n$ eigenfunctions of $(-\Delta)^{\alpha/2}$.
Choose $S \in S_n(\beta)$ to be the subspace of $H^{\beta/2}_0(\Omega)$ spanned by the first $n$ eigenfunctions of $(-\Delta)^{\beta/2}$. Then $S \in S_n(\alpha)$, just because $H^{\beta/2}_0(\Omega) \subset H^{\alpha/2}_0(\Omega)$, and so the variational characterization and strict Jensen inequality imply that
\begin{align*}
\lambda_n(\alpha)
& \leq \max \Big\{ \int_{{\mathbb R}^d} |\xi|^\alpha |\widehat{u}|^2 \, d\xi : u \in S \text{\ with\ } \int_{{\mathbb R}^d} |u|^2 \, dx = 1 \Big\} \\
& < \max \Big\{ \Big( \int_{{\mathbb R}^d} |\xi|^\beta |\widehat{u}|^2 \, d\xi \Big)^{\! \alpha/\beta} : u \in S \text{\ with\ } \int_{{\mathbb R}^d} |u|^2 \, dx = 1 \Big\} \\
& = \lambda_n(\beta)^{\alpha/\beta} ,
\end{align*}
which completes the proof.
\end{proof}
Earlier work proved the non-strict inequality $\lambda_n(\alpha) \leq \lambda_n(2)^{\alpha/2}$ for $\alpha=1$ \cite[Theorem~3.14]{BK04}, and for rational $\alpha \in (0,2)$ \cite[Theorem~1.3]{DeB04}, and for general $\alpha \in (0,2)$ \cite[Theorem~3.4]{CS05}. Further, $\alpha \mapsto \lambda_n(\alpha)^{1/\alpha}$ is continuous \cite[Theorem~1.3]{DMH07}, \cite[Example~5.1]{CS06}, and is increasing by work of Chen and Song \cite[Example~5.4]{CS05}, while \autoref{pr:spectralcomparison} shows it is strictly increasing.
A stronger result than \autoref{pr:spectralcomparison} is true when $0<\alpha<\beta=2$: the fractional Laplacian is bounded above as an operator by the $\alpha/2$-th power of the Dirichlet Laplacian. References for the non-strict version of this operator inequality are in Frank's survey paper \cite[Theorem~2.3]{Fra}. For the strict operator inequality, see the paper of Musina and Nazarov \cite[Corollary~4]{MN14}.
Finally, \autoref{pr:spectralcomparison} and its proof by Jensen's inequality extend to eigenvalues of other families of operators, provided the corresponding Fourier multipliers are related by (strictly) convex transformations, just as $|\xi|^\alpha$ is related to $|\xi|^\beta$ by the transformation $t \mapsto t^{\beta/\alpha}$. Additionally, the result extends from eigenvalues to the more general ``inf--max'' values defined by a variational formula in the case of non-discrete spectrum, although the inequality is no longer strict in that case.
\section*{Acknowledgments}
This research was supported by grants from the Simons Foundation (\#204296 and \#429422 to Richard Laugesen) and the statutory fund of the Department of Mathematics, Faculty of Pure and Applied Mathematics, Wroc{\l}aw University of Science and Technology (Mateusz Kwa\'snicki).
The paper was initiated at the Stefan Banach Mathematical International Center (B\k{e}dlewo, Poland), during the 3rd Conference on Nonlocal Operators and Partial Differential Equations, June 2016. The authors are grateful for the financial support and hospitality received during the conference.
| {
"timestamp": "2016-08-25T02:06:15",
"yymm": "1608",
"arxiv_id": "1608.06905",
"language": "en",
"url": "https://arxiv.org/abs/1608.06905",
"abstract": "The analogue of Pólya's conjecture is shown to fail for the fractional Laplacian (-Delta)^{alpha/2} on an interval in 1-dimension, whenever 0 < alpha < 2. The failure is total: every eigenvalue lies below the corresponding term of the Weyl asymptotic.In 2-dimensions, the fractional Pólya conjecture fails already for the first eigenvalue, when 0 < alpha < 0.984.",
"subjects": "Spectral Theory (math.SP); Analysis of PDEs (math.AP)",
"title": "Pólya's conjecture fails for the fractional Laplacian"
} |
https://arxiv.org/abs/1502.07805 | On Hopital-style rules for monotonicity and oscillation | We point out the connection of the so-called Hôpital-style rules for monotonicity and oscillation to some well-known properties of concave/convex functions. From this standpoint, we are able to generalize the rules under no differentiability requirements and greatly extend their usability. The improved rules can handle situations in which the functions involved have non-zero initial values and when the derived functions are not necessarily monotone. This perspective is not new; it can be dated back to Hardy, Littlewood and Polya. | \section{Introduction and historical remarks}
Since the 1990's, many authors have successfully applied the so-called
monotone L'H\^opital's\footnote{A well-known anecdote, recounted in some undergraduate
textbooks and the Wikipedia, claims that H\^opital might have
cheated his teacher Johann Bernoulli to earn the credit for this classical
rule.}
rules to establish many new and useful inequalities.
In this article we make use of the connection of these rules
to some well-known properties of concave{/}convex functions (via a change of variable)
to extend the usability of the rules.
We show how
the rules can be formulated under no differentiability requirements,
and how to characterize all possible situations, which are
normally not covered by the conventional form of the rules.
These include situations in which the functions involved have non-zero
initial values and when the derived functions are not necessarily monotone.
This perspective is not new; it can be dated back to Hardy, Littlewood
and P\'olya (abbreviated as HLP).
The concepts of non-decreasing, increasing (we use this term in the sense of what
some authors prefer to call ``strictly increasing''), non-increasing, and decreasing functions are
defined as usual. The term ``monotone'' can refer to any
one of these senses. For convenience, we use the symbols
$ \nearrow $ and $ \searrow $ to denote increasing and decreasing, respectively.
The rules have appeared in various formulations. Let us
start with the most popularly known form. Let $ a<b\leq \infty $.
\par\vspace*{-3mm}\par
\begin{center}
\fbox{
\begin{minipage}[t]{5in}
\em
\par\vspace*{2mm}\par
Let $ f,g:[a,b)\rightarrow \mathbb R $ be two continuous real-valued functions,
such that $ f(a)=g(a)=0 $, and $ g(x)>0 $ for $ x>a $.
Assume that they are differentiable at each point in $ (a,b) $, and
$ g'(x)>0 $ for $ x>a $.
\par\vspace*{2mm}\par
\CENTERLINE{If \,\, $\displaystyle \frac{f'(x)}{g'(x)} $ \,\, is monotone in $ [a,b) $, so is \,\, $\displaystyle \frac{f(x)}{g(x)} $ ``in the same sense''.}
\par\vspace*{2mm}\par
\end{minipage}
}
\end{center}
A dual form assumes $ b<\infty $, $ f(b)=g(b)=0 $ and $ g'(x)<0 $.
The same conclusion holds.
Note that even though the rule allows
$ b=\infty $, and that $ f $ or $ g $ need not be defined at $ b $, as far as the proof
is concerned we may
assume without loss of generality that $ b $ is finite, and $ f $ and $ g $ are defined and continuous up to $ b $,
because we can first study the functions in a smaller subinterval
$ [\alpha ,\beta ]\subset(a,b) $ and then let $ \alpha \rightarrow a $ and $ \beta \rightarrow b $.
When $ f $ and $ g $ are assumed to be differentiable, we take $ f'(a) $ and
$ g'(a) $ to mean the righthand derivative at $ x=a $, and $ f'(b) $, $ g'(b) $ to mean
the lefthand derivative at $ x=b $.
Moreover, we can take $ a=0 $ after a suitable translation.
These simplifications will be assumed in the rest of the paper.
Some of the variations in other formulations are merely cosmetic. For
example, if $ f(0)\neq 0 $, then use $ f(x)-f(0) $ instead, or if $ f $ is not defined
at $ 0 $, use the righthand limit $ f(0+) $, if it exists. We will not dwell
further on these types.
Some other variations are attempts to
weaken the differentiability requirement on $ f $ and~ $ g $. A different type
concerns stronger formulations in which strict monotonicity can
be deduced from
non-strict hypothesis. We defer a discussion of such variations to Section~2.
In the majority of concrete practical applications, however,
the functions involved
are fairly smooth, often infinitely differentiable and the stronger
forms are seldom used.
Propositions 147 and 148 (page 106) in the famous classic by HLP
\cite{hlp} (First Edition 1934) read:
\par\vspace*{-3mm}\par
\begin{center}
\fbox{
\begin{minipage}[t]{5in}
\em
\par\vspace*{2mm}\par
{\bf 147.} The function
\par\vspace*{-6mm}\par
$$ \sigma (x) = \frac{\vp{17}{13}\displaystyle \int_{0}^{x} (1+ \sec t)\,\log\,\sec t \,dt }{\vp{18}{16}\displaystyle \log\,\sec x \int_{0}^{x} (1+\sec t)\,dt} $$
increases steadily from $ \frac{1}{3} $ to $ \frac{1}{2} $ as $ x $ increases from $ 0 $ to $ \frac{1}{2} \pi $.
\par\vspace*{\baselineskip}\par
There is a general theorem which will be found useful in the proof of Theorem 147.
\par\vspace*{\baselineskip}\par
{\bf 148.} If $ f $, $ g $, and $ f'/g' $ are {\color{red}positive increasing} functions, then $ f/g $ either increases
for all $ x $ in question, {\color{red} or decreases for all such $ x $, or decreases to
a minimum and then increases.} In particular, if $ f(0)=g(0)=0 $, then $ f/g $
increases for $ x>0 $.
\par\vspace*{2mm}\par
\end{minipage}
}
\end{center}
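Proposition 147 is also easy to check numerically. The sketch below (ours, not part of HLP) evaluates $\sigma(x)$ with composite Simpson's rule on an arbitrary grid and confirms that the values lie strictly between $\frac13$ and $\frac12$ and increase along the grid.

```python
from math import cos, log

def simpson(f, a, b, m=2000):     # composite Simpson's rule, m even
    h = (b - a) / m
    s = f(a) + f(b)
    for i in range(1, m):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def sigma(x):
    # sigma(x) from Proposition 147
    sec = lambda t: 1.0 / cos(t)
    num = simpson(lambda t: (1 + sec(t)) * log(sec(t)), 0.0, x)
    den = log(sec(x)) * simpson(lambda t: 1 + sec(t), 0.0, x)
    return num / den

xs = [0.1 * k for k in range(2, 16)]     # 0.2, 0.3, ..., 1.5  (pi/2 ~ 1.5708)
vals = [sigma(x) for x in xs]
assert all(1.0 / 3.0 < v < 0.5 for v in vals)              # between 1/3 and 1/2
assert all(v1 < v2 for v1, v2 in zip(vals, vals[1:]))      # increasing
```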
Other than the phrases in red, Proposition 148 is essentially the
increasing part of the monotone
rule, and Proposition 147 is an application of the rule in the same
spirit as in the more recent work. At first reading, Proposition 148
appears to be weaker than the modern rule because it requires the
additional conditions that $ f $ is + and $ \nearrow $, and $ f'/g' $ is +.
Let us take a closer look at HLP's short and
elegant proof which is reproduced below.
\begin{PROOF}
[\,Hardy, Littlewood, P\'olya\,] To prove this, observe that
$$ \frac{d}{dx} \left( \frac{f}{g} \right) = \left( \frac{f'}{g'} - \frac{f}{g} \right) \,\frac{g'}{g} $$
and consider the possible intersections of the curves $ y=f/g $, $ y=f'/g' $.
At one of these intersections the first curve has a horizontal and the
second a rising tangent, and therefore there can be at most one intersection.
If we take $ g $ as the independent variable, write $ f(x)=\phi (g) $, and suppose,
as in the last clause of the theorem, that
$$ f(0)=g(0)=0, $$
or $ \phi (0)=0 $, then the theorem takes the form: {\em if $ \phi (0)=0 $ and $ \phi '(g) $
increases for $ g>0 $, then $ \phi /g $ increases for $ g>0 $.} This is a slight
generalization of part of Theorem 127.
\end{PROOF}
In the proof, the + $ \nearrow $ property
of $ g $ is needed to guarantee that the denominators of the fractions
$ f/g $ and $ f'/g' $ will not become 0. The + $ \nearrow $ property of
$ f $ and the positivity of $ f'/g' $, however, is not needed anywhere.
Once the extra conditions are disposed of, we see that the $ \searrow $ version of rule
also holds, by considering $ -f(x) $ instead of $ f(x) $.
HLP did not attribute the result to anyone. This could mean that the result
is their own, or that it was widely known. We did not attempt to track it
down further in earlier literature.
In one of the Bourbaki books, \cite{bo} (1958), Exercise 10 of Chapter 1, \S2
(page 38) is exactly HLP's Proposition 148, minus those extra
conditions. It seems that in the intervening years,
someone must have figured out that those are superfluous.
Neither a solution nor any attribution is given.
Mitrinovi\'c \cite{mi} quoted Bourbaki's Exercise as \S3.9.49.
HLP's Proposition, on the other hand, is broader than the modern rule.
It says something about the general situation when neither $ f(0)=0 $
nor $ g(0)=0 $ is assumed. In Section~3, we will fully characterize all such
situations.
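A concrete example of the third alternative in Proposition 148 (our illustration, not from HLP): take $f(x)=x^{2}+1$ and $g(x)=x$ on $(0,\infty)$. Then $f$, $g$, and $f'/g'=2x$ are positive increasing, yet $f/g=x+1/x$ decreases to a minimum at $x=1$ and then increases. This can be confirmed on a sample grid:

```python
# f(x) = x^2 + 1, g(x) = x on (0, inf): f, g, and f'/g' = 2x are positive
# and increasing, yet f/g = x + 1/x is not monotone -- it decreases to a
# minimum at x = 1 and then increases, as Proposition 148 allows.
ratio = lambda x: (x * x + 1) / x

left = [0.1 * k for k in range(1, 11)]    # 0.1 .. 1.0
right = [0.1 * k for k in range(10, 31)]  # 1.0 .. 3.0
assert all(ratio(a) > ratio(b) for a, b in zip(left, left[1:]))   # decreasing
assert all(ratio(a) < ratio(b) for a, b in zip(right, right[1:])) # increasing
assert min(ratio(x) for x in left + right) == ratio(1.0) == 2.0   # minimum at 1
```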
A classroom note by Mott \cite{mo} (1963) in the Monthly
presented a much weaker version of the monotone rule, in the
integral formulation (see Section~2). In the (more or less)
equivalent differential formulation,
the hypotheses require $ f'\nearrow $ and $ g'\searrow $ (which implies
$ f'/g' $ is $ \nearrow $). Two follow-up papers by Redheffer \cite{re} (1964) and
Boas \cite{boa} (1965) contained alternative proofs and further comments.
The authors were unaware of the earlier results.
Then came the work of Gromov \cite{cgt}, Anderson, Vamanamurthy, and Vuorinen
\cite{avv} \cite{avv2}, Pinelis \cite{pi1}, and many subsequent authors who have
done a tremendous amount of good work to enrich the subject area.
The readers should have no difficulty finding them by searching for
``monotone L'H\^opital rule'' on the internet, and by referring to the references
cited in the papers listed.
Analogous to the classical L'H\^opital's rule for finding limits of indeterminate
forms, one can also consider the situation when $ f(0)=g(0)=\pm \infty $. Results,
examples and counterexamples in this regard are presented
in Anderson, et al. \cite{avv2}. In this article,
we confine ourselves to the case when $ f(0) $ and $ g(0) $ are finite.
\section{Different formulations of the rules.}
If we let $ p(x)=f'(x) $ and $ q(x)=g'(x) $, then the rule stated in Section~1
becomes:
\par\vspace*{-3mm}\par
\begin{center}
\fbox{
\begin{minipage}[t]{5in}
\em
\par\vspace*{0mm}\par
\CENTERLINE{If \,\, $\displaystyle \frac{p(x)}{q(x)} $ \,\, is monotone, so is \,\, $\displaystyle \frac{\vp{12}{6}\int_0^xp(t)\,dt}{\vp{12}{0}\int_0^xq(t)\,dt} $ ``in the same sense''.}
\par\vspace*{1mm}\par
\end{minipage}
}
\end{center}
This integral formulation has two advantages.
One is that now we do not have to
separately assume that $ f(0)=g(0)=0 $. Second is that the rule, stated as
it is, can be applied to functions $ p $ and $ q $ with discontinuities.
For instance, $ p(x) $ and $ q(x) $ may be piecewise continuous, such as
step functions, as long as their quotient is still monotone.
In such cases, the differential form would have failed because the
requirement that $ f $ and $ g $ be differentiable at every point is not
satisfied. If one has proved the differential form of the rule previously based
on this assumption, such as invoking the generalized Cauchy mean value
theorem, one has to seek a different proof.
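As a concrete instance (our illustration): take the step function $p(x)=1$ for $x<1$ and $p(x)=3$ for $x\geq1$, with $q(x)=1$. Then $p/q$ is non-decreasing, and the quotient of integrals is non-decreasing as the integral form predicts, even though $F(x)=\int_0^x p(t)\,dt$ fails to be differentiable at $x=1$.

```python
# p(x) = 1 on [0,1), 3 on [1,2]; q(x) = 1.  Then p/q is non-decreasing,
# F(x) = int_0^x p = min(x,1) + 3*max(x-1,0) and G(x) = int_0^x q = x, and
# the integral form of the rule says F/G is non-decreasing -- although F is
# not differentiable at x = 1, so the differential form does not apply.
def F(x):
    return min(x, 1.0) + 3.0 * max(x - 1.0, 0.0)

xs = [0.05 * k for k in range(1, 41)]    # grid on (0, 2]
vals = [F(x) / x for x in xs]
assert all(v1 <= v2 for v1, v2 in zip(vals, vals[1:]))   # non-decreasing
assert abs(vals[-1] - 2.0) < 1e-12                       # F(2)/2 = (1 + 3)/2
```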
Alternatively, one can use an approximation technique, as suggested by
Redheffer \cite{re}, to deduce the integral form from the differential form.
Choose a sequence of continuous functions $ q_n(x) $ that converge to $ q(x) $ in the uniform norm
in the finite interval $ [0,b\,] $. Next, choose another sequence of continuous functions $ h_n(x) $ that
converge to $ p(x)/q(x) $, this time requiring that each
$ h_n(x) $ preserves the same
monotone property of the latter. This can be done, for instance, by
using the well-known method of mollifiers of S. Sobolev and K. O. Friedrichs
in the theory of partial differential equations. Since the
functions $ \vp{14}{8}\int_{0}^{x} h_n(t)q_n(t)\,dt $ and $ \int_{0}^{x} q_n(t)\,dt $ are now differentiable
everywhere, the differential form of the rule implies that
$ \vp{14}{8}\int_{0}^{x} h_n(t)q_n(t)\,dt /\int_{0}^{x} q_n(t)\,dt $ is monotone.
Letting $ \vp{10}{0}n\rightarrow \infty $ gives the desired conclusion.
There are other ways to relax the differentiability requirement
in the differential formulation. For example, one can use one-sided Dini
derivatives instead of regular derivatives. In the next section, we will
see how the rule can be formulated even without mentioning
differentiability.
As we have remarked before, such extensions may be of theoretical value, but
they are often not needed for practical applications.
Another direction of extension is to strengthen the conclusion of the
rule. Pinelis \cite{pi4} showed that if $ f'/g' $ is $ \nearrow $, then
$ (f/g)' $ is in fact strictly positive (with strict
monotonicity following as a corollary).
One can compare this with the regular and strong forms of the maximum
principle in the theory of differential equations. The weaker form states that the global maximum
must be attained at the boundary of the region in consideration, while
the strong form maintains that at the boundary point where the global
maximum is attained, the directional derivative along the outward normal
must be strictly positive.
One is also able to deduce strict monotonicity in a certain sense
from non-strict monotone assumptions. One such case will be discussed
in the next section.
\section{Convex functions}
In the particular case when $ g(x)=x $, the increasing
monotone rule reduces to:
\begin{quote}
\em If $ f(0)=0 $ and $ f'\nearrow $, so is $ f/x $.
\end{quote}
The slightly weaker (since $ f $ is required to be twice differentiable)
version, $ f''>0\,\,\Longrightarrow \,\,f(x)/x\,\,\nearrow $, is what
HLP referred to as Proposition 127. Both of these results are subsumed by
the following well-known property of convex functions.
\[
\fbox{
\begin{minipage}[t]{5in}
\par\vspace*{2mm}\par
\em If $ f $ is a continuous, strictly
convex function in $ [0,b) $, satisfying $ f(0)\leq 0 $,
then $ f/x\,\,\nearrow $ in $ [0,b) $.
\par\vspace*{1mm}\par
\end{minipage}
}
\]
A function is defined to be convex in $ [0,b\,] $ if
for all $ 0\leq x_1<x_2\leq b $,
\begin{equation} f\left( \frac{x_1+x_2}{2} \right) \leq \frac{f(x_1)+f(x_2)}{2} \,. \end{equation}
It is said to be strictly convex if $ \leq $
is replaced by $ < $. Concavity is defined by reversing the inequality signs.
No differentiability requirement is assumed.
It is well-known that
for a continuous convex function, the following inequality holds:
\begin{equation} f(\lambda x_1+(1-\lambda )x_2) \leq \lambda f(x_1) + (1-\lambda ) f(x_2), \qquad 0<\lambda <1 . \Label{con} \end{equation}
If the convexity is strict, replace $ \leq $ by $ < $.
Its geometric interpretation is that the arc of the
graph of $ f(x) $, $ x\in[x_1,x_2] $, lies below the chord joining the
two points $ A_1=(x_1,f(x_1)) $ and $ A_2=(x_2,f(x_2)) $.
\begin{center}
\FG{50}{conv1} \qquad \qquad \FG{50}{conv2}
\par\vspace*{6mm}\par
Figure 1. Convex functions with $ f(0)=0 $ and $ f(0)<0 $.
\end{center}
\par\vspace*{-31mm}\par
\hspace*{7mm} $ O $
\par\vspace*{-14mm}\par
\hspace*{75mm} $ O $
\par\vspace*{9mm}\par
\hspace*{32mm} $ A_1 $
\par\vspace*{-6mm}\par
\hspace*{98mm} $ A_1 $
\par\vspace*{-44mm}\par
\hspace*{57mm} $ A_2 $
\par\vspace*{-7mm}\par
\hspace*{124mm} $ A_2 $
\par\vspace*{50mm}\par
Figure 1 depicts two such functions (green curves);
the first has $ f(0)=0 $ and the second $ f(0)<0 $.
The quantity $ f(x)/x $ represents the slope of the straight line $ OA $
joining the origin $ O $ and the point $ A=(x,f(x)) $ on the curve.
As $ x $ increases, the point $ A $ slides along the curve towards the
right and it is intuitively clear that
the slope of $ OA $ increases. A rigorous proof can be
given using (\ref{con}). The proof is not new, but we include it here for
easy reference.
\begin{PROOF}
In the case $ f(0)=0 $, by convexity,
the arc of the curve between $ O $ and $ A_2 $
lies below the straight line $ OA_2 $. In particular, the point $ A_1 $ lies
below $ OA_2 $ and the desired conclusion follows.
For the case $ f(0)<0 $, we
modify the function $ f $ in $ [0,x_1] $ by replacing the arc
over $ [0,x_1] $ with
the chord $ OA_1 $. The resulting new curve represents the function
$ \max(f(x),f(x_1)x/x_1) $ which is again convex.
Then we are back to the first case $ f(0)=0 $.
\end{PROOF}
{\bf Alternative Proof}.
Another simple proof uses the fact that a straight line
cannot intersect a strictly convex{/}concave curve at more than two points.
As above, we only have to consider the case $ f(0)=0 $.
Suppose $ f/x $ is not monotone; then there are $ x_1\neq x_2 $ such that
$ f(x_1)/x_1=f(x_2)/x_2=\lambda $. Then the straight line $ y=\lambda x $ intersects
the graph of $ f $ at the two points $ (x_1,f(x_1)) $ and $ (x_2,f(x_2)) $.
A third intersection point, however, is $ (0,0) $, giving a contradiction.
{{\ \vrule height7pt width4pt depth1pt} \par \vspace{2ex} }
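Both cases of the boxed property are easy to corroborate numerically; the two test functions below are our own hypothetical choices ($ e^x-1 $ is strictly convex with $ f(0)=0 $, and $ x^2-1 $ is strictly convex with $ f(0)=-1 $).

```python
import math

def slope_through_origin(f, xs):
    # f(x)/x is the slope of the chord from the origin O to (x, f(x))
    return [f(x) / x for x in xs]

xs = [0.1 * k for k in range(1, 51)]           # grid in (0, 5]

# f(0) = 0 case: f(x) = e^x - 1 is strictly convex
r1 = slope_through_origin(lambda x: math.exp(x) - 1.0, xs)
# f(0) < 0 case: f(x) = x^2 - 1 is strictly convex with f(0) = -1
r2 = slope_through_origin(lambda x: x * x - 1.0, xs)

inc1 = all(u < v for u, v in zip(r1, r1[1:]))  # f/x strictly increasing?
inc2 = all(u < v for u, v in zip(r2, r2[1:]))
```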
But how do we make the quantum jump from the special case $ g(x)=x $
to the general case?
The trick of change of variable, just as HLP pointed out in their proof,
is the key. By the way, this trick has been known to work for the
classical L'H\^opital rule for indeterminate limits as well.
See, for example, Taylor \cite{ta}. Let us rephrase
HLP's argument to make it more transparent.
Suppose that $ g(0)=0 $ and $ g(x) $ is continuous and $ \nearrow $ in $ [0,b\,] $.
The inverse function $ x=g^{-1}(u) $ is well-defined.
Substitute this into the definition of $ f(x) $ to get the composite function
$ \phi (u)=f(g^{-1}(u)) $. We have then the following generalized monotone rule.
\par\vspace*{-3mm}\par
\begin{center}
\fbox{
\begin{minipage}[t]{5in}
\par\vspace*{2mm}\par
\em If $ \phi (u) $ is a continuous, strictly convex {\rm(}concave{\rm)} function of $ u $ in
$ [0,g(b)] $ and $ f(0)\leq 0 $ $ (\geq 0) $,
then $ \displaystyle \vp{20}{0}\frac{f(x)}{g(x)} =\frac{\phi (u)}{u} $ is $ \nearrow $ {\rm(}$\searrow${\rm)}.
\par\vspace*{2mm}\par
\end{minipage}
}
\end{center}
In practical
applications, to check the convexity or concavity of $ \phi (u) $, we
often resort to showing that
$ \displaystyle \phi '(u) $
is monotone. By the chain rule,
$ \displaystyle \phi '(u) $
is nothing but $ \displaystyle \frac{f'(x)}{g'(x)} $.
This new rule is more general than the popular
one described on p.\ \!\!1,
because it does not impose any differentiability on $ f $ and $ g $, and
it covers situations when $ f(0)\neq 0 $.
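A numerical sketch of the change of variable (with the hypothetical pair $ f(x)=e^{x^2}-1 $, $ g(x)=x^2 $, so that $ \phi (u)=e^u-1 $ is strictly convex and $ f(0)=0 $): the quotient $ f/g $ is increasing, and $ \phi '(u) $ agrees with $ f'(x)/g'(x) $, as the chain rule asserts.

```python
import math

g = lambda x: x * x                    # continuous, increasing, g(0) = 0
f = lambda x: math.exp(x * x) - 1.0    # phi(u) = f(g^{-1}(u)) = e^u - 1, strictly convex

xs = [0.05 * k for k in range(1, 41)]  # grid in (0, 2]
ratio = [f(x) / g(x) for x in xs]      # f/g = phi(u)/u with u = g(x)
inc = all(u < v for u, v in zip(ratio, ratio[1:]))

# chain rule: phi'(u) at u = g(x0) equals f'(x0)/g'(x0) = e^{x0^2}
x0 = 0.7
fp = 2 * x0 * math.exp(x0 * x0)        # f'(x0)
gp = 2 * x0                            # g'(x0)
chain_ok = abs(fp / gp - math.exp(g(x0))) < 1e-12
```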
If we omit ``strictly'' in the hypotheses, we cannot guarantee that $ f/g $ is
strictly monotone, as the degenerate example $ f(x)=g(x)=x $ shows.
However, this is pretty much the only exceptional situation, in the following
sense. If there exist two points $ x_1<x_2 $ in $ [0,b) $ such that
$ f(x_1)/g(x_1)=f(x_2)/g(x_2) $, then $ f(x)=\lambda g(x) $ in $ [0,x_2] $
for some constant $ \lambda $. To prove this, one only has to study the special case
when $ g(x)=x $. Then the assertion becomes geometrically obvious. We omit
the details.
\begin{center}
\FG{50}{conv3}
\par\vspace*{2mm}\par
Figure 2. Convex function with $ f(0)>0 $.
\end{center}
\par\vspace*{-40mm}\par
\hspace*{32mm} $ O $
\par\vspace*{-14mm}\par
\hspace*{42mm} $ A $
\par\vspace*{12mm}\par
\hspace*{52mm} $ A_c $
\newpage
Next, let us look at the case $ f(0)>0 $ when $ g(x)=x $, and $ f(x) $ is strictly
convex. Figure 2 depicts such a curve.
Starting at $ x=0 $, the point $ A $ is where the curve intersects the vertical axis
and the slope of $ OA $ is $ \infty $. As $ x $ increases, the slope of $ OA $ decreases
until we reach the point $ A_c $ such that $ OA_c $ is tangent to the curve.
Beyond that the slope of $ OA $ increases again. This is an example of
the general situation described in the conclusion of HLP's Proposition.
The quintessential conclusion is that there exists at most one point $ c\in(0,b) $
such that $ f/x $ is $ \searrow $ in $ (0,c) $ and $ \nearrow $ in $ (c,b) $.
As the example $ f(x)=1/(1+x) $ shows, the point $ c $ may not exist at all.
After uplifting to the case of general $ g(x) $, the pertinent part of
HLP's Proposition can be generalized as follows. The symbol $ \exists\,! $
is a shorthand for ``there exists a unique''.
\par\vspace*{-3mm}\par
\begin{center}
\fbox{
\begin{minipage}[t]{5in}
\par\vspace*{2mm}\par
\em $ \phi (u) $ is continuous, strictly convex {\rm(}concave{\rm)} in $ u $ and $ f(0)>0 $ $ (<0) $.
Then either $ \displaystyle \vp{20}{0}\frac{f(x)}{g(x)} $ is $ \searrow $ {\rm(}$\nearrow${\rm)} in $ (0,b) $, or
$ \exists\,! $
$ c\in(0,b) $ such that $ \displaystyle \vp{20}{0}\frac{f(x)}{g(x)} $ is $ \searrow $ {\rm(}$\nearrow${\rm)} in $ (0,c) $ and $ \nearrow $ {\rm(}$\searrow${\rm)} in $ (c,b) $.
\par\vspace*{2mm}\par
\end{minipage}
}
\end{center}
\begin{PROOF}
Of course, HLP's proof no
longer works under these minimal assumptions. A rigorous proof
can be given using only properties of convex functions.
The quotient $ f/x $ is continuous in $ (0,b\,] $. Although
it blows up at $ x=0 $, it is easy to see that
it attains a global minimum at some point $ c\in(0,b\,] $.
By its very construction,
the line $ OA_c $ that joins the origin $ O $ and the point $ A_c=(c,f(c)) $
lies below the curve of $ f $. We can show that $ f/x $ is decreasing in $ [0,c] $
as follows. Let $ 0<x_1<x_2<c $.
Since the line joining $ O $ and the point $ A_1=(x_1,f(x_1)) $
is above the line $ OA_c $, the former must intersect the vertical line $ x=c $
above $ A_c $, say at a point $ D $.
The arc of $ f(x) $ from $ A_1 $ to $ A_c $
must be below the line $ A_1A_c $, which lies below $ A_1D $. In particular, the
point $ A_2=(x_2,f(x_2)) $ is below the
line $ A_1D $ and hence $ f(x_2)/x_2<f(x_1)/x_1 $.
If it happens that $ c=b $,
$ f/x $ has no chance to bounce back from
$ \searrow $ to $ \nearrow $. If $ c<b $, then we can argue in a similar way as above that
$ f/x $ is $ \nearrow $ in $ (c,b) $. This completes the proof.
\end{PROOF}
Is there a practical way to delineate the two situations in the conclusion?
In the special case $ g(x)=x $ and $ f(x) $ is convex, it is easy to see that
a necessary and sufficient condition for the existence of an interior $ c\in(0,b) $ is that
in a left neighborhood of $ b $, the curve of $ f $ lies below the chord joining
$ O $ and $ B=(b,f(b)) $. In the general setting, this criterion can be expressed as
\begin{equation} \liminf _{x\rightarrow b^-} \, \frac{f(b)-f(x)}{g(b)-g(x)} > \frac{f(b)}{g(b)} \,. \end{equation}
One can also use limsup instead of liminf.
For differentiable $ f(x) $ and $ g(x) $, it simplifies to
$$ f(b) g'(b) - f'(b) g (b) < 0 \, . $$
For concave $ \phi (u) $, the inequality sign is reversed. If $ c<b $ exists, it is
determined by
solving the equation $ f'(x)g(x)=f(x)g'(x) $.
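For a concrete (hypothetical) instance, take $ f(x)=1+x^2 $ and $ g(x)=x $ on $ (0,2) $, so $ f(0)=1>0 $ and $ f'/g'=2x $ is increasing; a bisection on $ f'g-fg' $ locates the turning point $ c=1 $, and the endpoint criterion $ f(b)g'(b)-f'(b)g(b)<0 $ confirms that $ c $ lies inside the interval.

```python
# Hypothetical data: f(x) = 1 + x^2 (so f(0) = 1 > 0), g(x) = x, b = 2.
# The turning point c solves f'(x)g(x) = f(x)g'(x), i.e. 2x·x = 1 + x^2.
def F(x):
    return 2 * x * x - (1 + x * x)     # f'(x)g(x) - f(x)g'(x) = x^2 - 1

lo, hi = 0.1, 2.0                      # F changes sign on (0.1, 2)
for _ in range(60):                    # plain bisection
    mid = 0.5 * (lo + hi)
    if F(lo) * F(mid) <= 0:
        hi = mid
    else:
        lo = mid
c = 0.5 * (lo + hi)                    # should converge to c = 1

# endpoint criterion f(b)g'(b) - f'(b)g(b) < 0 guarantees that c < b exists
b = 2.0
criterion = (1 + b * b) * 1.0 - (2 * b) * b    # = 1 - b^2
```

Indeed $ f/g = 1/x + x $ decreases on $ (0,1) $ and increases on $ (1,2) $, matching the boxed rule.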
If we allow the function $ \phi (u) $ to be convex but not strictly convex in the hypotheses, then
the curve of $ \phi (u) $ (versus $u$) may contain flat portions
(line segments). If it
happens that the tangent line from the origin touches the curve
and contains one of these flat portions,
then the unique turning point $ c $ in the above
rule becomes an entire interval of turning points. We have to modify the rule to
say that now there exists a subinterval $ [c_1,c_2]\subset(0,b) $ such that
$ f/g $ is $ \searrow $ in $ (0,c_1) $, constant in $ [c_1,c_2] $ and $ \nearrow $ in $ (c_2,b) $.
Finally, let us consider the case when $ g(0)>0 $, which is also covered
by HLP's Proposition. In the special case when
$ g(x)=x+\gamma $, $ \gamma >0 $, and $ f $ is convex,
the corresponding problem is to investigate the $ \nearrow $ and $ \searrow $
properties of $ f(x)/(x+\gamma ) $, which is the slope of the line joining the
point $ (-\gamma ,0) $ on the $ x $-axis and the point $ (x,f(x)) $ on the graph of $ f $.
Equivalently, we may apply a translation to shift the point $ (-\gamma ,0) $
to be the new origin. In this perspective, we can exploit the same figures
earlier in this section, only that now the curve of $ f $ starts from $ x=\gamma $
instead of $ x=0 $.
It is easy to check with simple examples that both
possibilities discussed in the case $ g(0)=0 $, $ f(0)>0 $ can occur.
Besides those, an additional
possible third situation is that $ f/g $
is $ \nearrow $ in $ (0,b) $. In the special case $ g(x)=x+\gamma $, this happens when
the line joining the points $ (-\gamma ,0) $ and $ (0,f(0)) $ lies below the
curve of $ f $ in a right neighborhood of $ x=0 $. In the general setting, if $ f $ and
$ g $ are differentiable, this happens when
$ f(0)/g(0) - f'(0)/g'(0) \leq 0. $
However, the possibility of $ f/g $ having the shape $ \nearrow\searrow $ is not allowed.
We summarize all the findings:
\begin{THEOREM}
Suppose that $ g(x) $ is a continuous, positive $ \nearrow $ function in $ [0,b\,] $ and $ \phi (u)=f(g^{-1}(u)) $
is a continuous strictly convex {\rm(}concave{\rm)} function of $ u $ in $ [g(0),g(b)] $.
\par\vspace*{-2mm}\par
\begin{itemize}
\item[\rm(1)\,\,] If $ g(0)=0 $ and $ f(0)\leq (\geq )\,\,0 $,
then $ f/g $ is $ \nearrow $ $(\searrow)$.
\item[\rm(2)\,\,] $ g(0)=0 $ and $ f(0)>(<)\,\,0 $\quad or \quad $ g(0)>0 $.
\begin{itemize}
\item[\rm i)\,\,] If \,\, $ g(0)\neq 0 $ and
$ \displaystyle \left( \frac{f(0)}{g(0)} - \liminf_{x\rightarrow 0+} \, \frac{f(x)-f(0)}{g(x)-g(0)} \right) \leq (\geq )\,\, 0 , $
then $ f/g $ is $ \nearrow $ $ (\searrow) $.
\item[\rm ii)\,\,] If \,\,
$ \displaystyle \left( \frac{f(b)}{g(b)} - \liminf_{x\rightarrow b-} \, \frac{f(b)-f(x)}{g(b)-g(x)} \right) \geq (\leq )\,\, 0 , $
then $ f/g $ is $ \searrow $ $ (\nearrow) $.
\item[\rm iii)\,\,] Otherwise,
$ f/g $ has the shape $ \searrow\nearrow $ $ (\nearrow\searrow) $ with a unique turning point in $ (0,b) $.
\end{itemize}
\end{itemize}
Suppose that the convexity{/}concavity of $ \phi (u) $ is not assumed to be
strict. Let $ [0,\alpha )\subset[0,b\,] $ be a maximal
subinterval $($which is possibly void$)$
in which $ f(x)=\lambda g(x) $ for a constant $ \lambda $. Then in $ [\alpha ,b\,] $, the same conclusions
as above $($with all monotonicity being strict$)$ hold.
\end{THEOREM}
\begin{REM} \rm
Case (1) can actually be combined with (2) i). We prefer to separate it
out since it is historically as well as application-wise
the most prominent case.
\end{REM}
\begin{REM} \rm
Two points in the result are worth noting. The function $ f/g $ cannot change monotonicity
more than once, and in the convex case, the shape $ \nearrow\searrow $ is ruled out.
\end{REM}
\begin{REM} \rm
The usefulness of the criteria given in the cases (2) i) and ii)
(in addition to the $ f'/g'\nearrow $ condition) lies
in the fact that there is only one boundary condition at
one of the endpoints to verify in order to
deduce monotonicity of $ f/g $ over the entire interval.
\end{REM}
\begin{REM} \rm
The last part of the Theorem is a strong form of the rule. Even if strict
convexity{/}concavity is not assumed, we can still obtain
strict monotonicity, except possibly in an initial subinterval in a very
special situation.
\end{REM}
If $ g(x) $ is a positive $ \searrow $ function in $ (0,b) $, an analogous result holds. We only have
to do a reflection $ x\mapsto (b-x) $ to reduce it to the $ \nearrow $ case. The
roles of $ 0 $ and $ b $ are now exchanged. One has to be careful in chasing
the signs and inequalities in the conditions. We state the result for ease of
reference. It is useful in handling functions such as $ f(x)/(b-x) $.
\begin{THEOREM}
Suppose that $ g(x) $ is a continuous, positive $ \searrow $ function in $ (0,b) $ and $ \phi (u)=f(g^{-1}(u)) $
is a continuous strictly convex $($concave$)$ function of $ u $ in $ [g(b),g(0)] $.
\par\vspace*{-2mm}\par
\begin{itemize}
\item[\rm(1)\,\,] If $ g(b)=0 $ and $ f(b)\leq (\geq )\,\,0 $,
then $ f/g $ is $ \searrow $ $(\nearrow)$.
\item[\rm(2)\,\,] $ g(b)=0 $ and $ f(b)>(<)\,\,0 $\quad or \quad $ g(b)>0 $.
\begin{itemize}
\item[\rm i)\,\,] If \,\,
$ \displaystyle \left( \frac{f(0)}{g(0)} - \liminf_{x\rightarrow 0+} \, \frac{f(x)-f(0)}{g(x)-g(0)} \right) \geq (\leq )\,\, 0 , $
then $ f/g $ is $ \nearrow $ $ (\searrow) $.
\item[\rm ii)\,\,] If \,\, $ g(b)\neq 0 $\,\, and
$ \displaystyle \left( \frac{f(b)}{g(b)} - \liminf_{x\rightarrow b-} \, \frac{f(b)-f(x)}{g(b)-g(x)} \right) \leq (\geq )\,\, 0 , $
then $ f/g $ is $ \searrow $ $ (\nearrow) $.
\item[\rm iii)\,\,] Otherwise,
$ f/g $ has the shape $ \searrow\nearrow $ $ (\nearrow\searrow) $ with a unique turning point in $ (0,b) $.
\end{itemize}
\end{itemize}
\end{THEOREM}
In practical applications when $ f $ and $ g $ are differentiable, the
following simplified rule is easier to use. Corollary 1 deals with increasing
$ g $ and Corollary 2 deals with decreasing $ g $.
\begin{COR}
Suppose $ f $ and $ g $ are differentiable,
$ g,\,g'>0 $, and $ f'/g'\nearrow(\searrow) $ in $ (0,b) $.
\par\vspace*{-2mm}\par
\begin{itemize}
\item[\rm(1)\,\,] If $ g(0)=0 $ and $ f(0)\leq (\geq )\,\,0 $, then $ f/g $ is $ \nearrow $ $(\searrow)$.
\item[\rm(2)\,\,] $ g(0)=0 $ and $ f(0)>(<)\,\,0 $\quad or \quad $ g(0)>0 $.
\par\vspace*{-2mm}\par
\begin{itemize}
\item[\rm i)\,\,] If \,\, $ g(0)\neq 0 $\,\, and
$ \displaystyle (f/g)(0) \leq (\,\geq \,)\, (f'/g')(0), $
then $ f/g $ is $ \nearrow $ $ (\searrow) $.
\item[\rm ii)\,\,] If \,\,
$ \displaystyle (f/g)(b) \geq (\,\leq \,)\, \hspace*{0.6mm} (f'/g')(b), $
then $ f/g $ is $ \searrow $ $ (\nearrow) $.
\item[\rm iii)\,\,] Otherwise,
$ f/g $ has the shape $ \searrow\nearrow $ $ (\nearrow\searrow) $ with a unique turning point in $ (0,b) $.
\end{itemize}
\end{itemize}
\end{COR}
\newpage
\begin{REM} \rm
It is instructive to visualize the three possibilities in the rule.
Figure 3 shows the plots of $ f/g $ (red curves) and $ f'/g' $
(green dashed curves)
in three typical examples. All other examples exhibit the same features.
By hypotheses, the dashed curves are $ \nearrow $.
\end{REM}
\begin{center}
\FG{42}{conv5} \qquad \FG{42}{conv6} \qquad \FG{42}{conv7}
Cases (1) and (2) i) \hspace*{26mm} (2) ii) \hspace*{40mm} (2) iii) \hspace*{5mm}
\par\vspace*{6mm}\par
Figure 3. Three possibilities of $ f/g $ (red curve) when $ f'/g'\,\,\nearrow $.
\end{center}
\par\vspace*{1mm}\par
The first plot shows that if $ f/g $ lies below $ f'/g' $, then $ f/g $ is $ \nearrow $.
As the rule asserts, to guarantee this situation, you only have to check
whether $ f(0)=g(0)=0 $ (case (1)) or in case (2) i), to check whether
$ f(0)/g(0)\leq f'(0)/g'(0) $. In other words, only the behavior at the left
endpoint matters.
The second plot shows that if $ f/g $ lies above $ f'/g' $, then $ f/g $ is $ \searrow $.
Again, to guarantee this situation, you only need to
check whether $ f(b)/g(b)\geq f'(b)/g'(b) $ at the right endpoint.
The third plot shows the $ \searrow\nearrow $ possibility, which happens when the dashed
curve intersects the red curve at some point $ c\in(0,b) $. This situation can
be considered as a hybrid case: before the intersection, the plot looks
like the second one, and after that it looks like the first one.
Right at the intersection, the red curve has a horizontal tangent.
HLP have recorded the same observation in their proof of Proposition 148.
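The horizontal tangent at the crossing reflects the identity $ (f/g)'=(g'/g)\,(f'/g'-f/g) $, valid wherever $ f $ and $ g $ are differentiable and $ g\neq 0 $. A quick numerical check with the hypothetical pair $ f(x)=1+x^2 $, $ g(x)=x $, whose curves cross at $ x=1 $:

```python
f  = lambda x: 1 + x * x
fp = lambda x: 2 * x                   # f'
g  = lambda x: x
gp = lambda x: 1.0                     # g'

x0 = 1.0                               # here f'/g' = f/g = 2: the curves cross
cross = abs(fp(x0) / gp(x0) - f(x0) / g(x0))

# central-difference slope of f/g at the crossing: numerically zero,
# i.e. the red curve has a horizontal tangent there
step = 1e-6
deriv = ((f(x0 + step) / g(x0 + step)) - (f(x0 - step) / g(x0 - step))) / (2 * step)
```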
\par\vspace*{3mm}\par
\begin{COR}
Suppose $ f $ and $ g $ are differentiable,
$ g>0 $, $ g'<0 $, and $ f'/g'\searrow(\nearrow) $ in $ (0,b) $.
\par\vspace*{-2mm}\par
\begin{itemize}
\item[\rm(1)\,\,] If $ g(b)=0 $ and $ f(b)\leq (\geq )\,\,0 $, then $ f/g $ is $ \searrow $ $(\nearrow)$.
\item[\rm(2)\,\,] $ g(b)=0 $ and $ f(b)>(<)\,\,0 $\quad or \quad $ g(b)>0 $.
\par\vspace*{-2mm}\par
\begin{itemize}
\item[\rm i)\,\,] If \,\,
$ \displaystyle (f/g)(0) \geq (\,\leq \,)\, (f'/g')(0), $
then $ f/g $ is $ \nearrow $ $ (\searrow) $.
\item[\rm ii)\,\,] If \,\, $ g(b)\neq 0 $\,\, and
$ \displaystyle (f/g)(b) \leq (\,\geq \,)\, \hspace*{0.6mm} (f'/g')(b), $
then $ f/g $ is $ \searrow $ $ (\nearrow) $.
\item[\rm iii)\,\,] Otherwise,
$ f/g $ has the shape $ \searrow\nearrow $ $ (\nearrow\searrow) $ with a unique turning point in $ (0,b) $.
\end{itemize}
\end{itemize}
\end{COR}
\newpage
\begin{REM} \rm
The analogous plots of $ f/g $ and $ f'/g' $ are shown in Figure 4.
\end{REM}
\begin{center}
\FG{42}{conv8} \qquad \FG{42}{conv9} \qquad \FG{42}{conv10}
Cases (1) and (2) i) \hspace*{26mm} (2) ii) \hspace*{40mm} (2) iii) \hspace*{5mm}
\par\vspace*{6mm}\par
Figure 4. Three possibilities of $ f/g $ (red curve) when $ f'/g'\,\,\searrow $.
\end{center}
\section{Rules for oscillation}
Pinelis \cite{pi3} \cite{pi4} discovered an interesting extension of the
monotone rule to the situation when $ f'/g' $ is no longer
monotone. In this section, we look at this extension from the
perspective of convex{/}concave functions.
\begin{center}
\FG{50}{conv4}
Figure 5. A function that changes convexity three times.
\end{center}
\par\vspace*{-29mm}\par
\hspace*{24mm} $ O $
\par\vspace*{-3mm}\par
\hspace*{47mm} $ b_1 $\hspace*{18mm} $ b_2 $\hspace*{19mm} $ b_3 $
\par\vspace*{20mm}\par
Suppose that $ (0,b) $ can be divided into $ n+1 $ subintervals with points
$$ (0=b_0)\,<\,b_1\,<\,b_2\,<\,\cdots\,<\,b_n\,<\,(b_{n+1}=b), $$
and $ f'/g' $ is assumed to be $ \nearrow $
in the odd subintervals $ (0,b_1) $, $ (b_2,b_3) $, $ \cdots $ and $ \searrow $ in
the even ones. The function changes its monotonicity $ n $ times.
Figure 5 depicts one such function with $ f(0)\leq 0 $. It is convex in
$ [0,b_1] $, concave in $ [b_1,b_2] $, etc. The points $ b_i $ are inflection points
of $ f $.
For convenience, we can also say that $ f'/g' $\, ``oscillates''
$ n $ times. Do not confuse this use of the term with the more
conventional meaning, as in the ``oscillation'' of the pendulum. When
a pendulum oscillates 2 times, it has ``oscillated in our sense''
(changed directions) 3 times.
The case when $ f'/g' $ is $ \searrow $ in the odd subintervals and $ \nearrow $
in the even ones can be studied in a similar way.
\underline{$\vphantom{y}$Question}: {\em What can we say about the oscillatory property of
$ f/g $? }
Again, we first appeal to the special case when $ g(x)=x $,
and $ f $ is assumed to be alternately convex and concave in the subintervals.
By Theorem~1 (1), $ f/x $ is $ \nearrow $ in $ [0,b_1] $.
In $ [b_1,b_2] $, $ f $ is concave. Using the concave version of Theorem~1 (2),
we see that $ f/x $ can have three possible behaviors. Since before $ b_1 $,
it is $ \nearrow $, it will continue to $ \nearrow $ at least for a little while after $ b_1 $.
This rules out the case (2) i), that it is $ \searrow $ in the entire subinterval.
In the remaining two possibilities,
there may or may not exist a turning point $ c_1\in(b_1,b_2) $, depending
on whether a tangent line through the origin can be drawn touching the
curve inside $ (b_1,b_2) $. If such a $ c_1 $ exists, then
$ f/x $ will change monotonicity once in $ [0,b_2] $. Otherwise, $ f/x $
remains $ \nearrow $ in $ [0,b_2] $.
To summarize, in $ [0,b_2] $, $ f/x $ cannot oscillate more
times than $ f' $. Furthermore, the turning point $ c_1 $ of $ f/x $
(if it exists) lags behind the corresponding turning point of $ f' $, namely, $ b_1 $. In other
words, $ c_1>b_1 $.
We can continue with similar arguments in subsequent intervals and it is
easy to see that the first statement in the above summary remains true
throughout the entire interval $ [0,b) $. The second statement has to be interpreted
in the following way. In each subinterval $ (b_i,b_{i+1}) $, there is at most
one turning point $ c_i $ of $ f/x $. The sense of the change of monotonicity
(i.e. from $ \nearrow $ to $ \searrow $, or from $ \searrow $ to $ \nearrow) $ is the same as that of $ f' $
at $ b_i $. In the list of all turning points of $ f(x)/x $, some $ c_i $ can be
missing.
The case when $ f(0)>0 $ is just a little more complicated. In view of
Theorem~1 (2), we have to append the possibility of a
first turning point $ c_0\in(0,b_1) $ at which $ f/x $ switches from $ \searrow $
to $ \nearrow $; $ f/x $ can change monotonicity at most $ n+1 $ times.
If there is no such $ c_0 $, then either $ f/x $ is $ \nearrow $ in $ [0,b_1) $
and the behavior is exactly the same as the case when $ f(0)\leq 0 $,
or $ f/x $ is $ \searrow $ in $ [0,b_2] $ and in the remaining subintervals the behavior
mimics that of the case $ f(0)\leq 0 $. In the first situation, $ f/x $ changes
monotonicity at most $ n $ times. In the second situation, it changes at
most $ n-1 $ times.
The case with $ g(x)=x+\gamma $, $ \gamma >0 $ can be analyzed in a similar way.
After translating to the more general setting, we derive the following
generalization of Pinelis' oscillation rules.
\begin{THEOREM}
Suppose that $ g>0 $ and $ \nearrow $ in $ [0,b\,] $.
Suppose that $ \phi (u)=f(g^{-1}(u)) $, as defined before,
is a continuous function of $ u $ and is alternately
strictly convex $($concave$)$ and strictly concave $($convex$)$
in the $ (n+1) $ subintervals corresponding to the decomposition of
$ [0,b) $ described above.
If $ g(0)=0 $ and $ f(0)\leq (\geq )\,0 $, then $ f/g $ is initially
$ \nearrow $ $(\searrow)$ in $ [0,b_1) $, while
in each subsequent subinterval $ (b_i,b_{i+1}) $, $ f/g $ can
change monotonicity at most once $($in the same sense as the change
of monotonicity of $ f'/g' $ at $b_i )$. Hence, $ f/g $ can oscillate at most $ n $
times.
If $ g(0)=0 $ and $ f(0)>(<)\,0 $, or if $ g(0)>0 $,
then $ f/g $ may or may not have one additional change of
monotonicity in $ [0,b_1) $. The behavior in subsequent subintervals is the same
as in the previous case. Hence,
$ f/g $ can change monotonicity at most $ n+1 $ times.
\end{THEOREM}
\begin{REM} \rm
Theorem~3 has an obvious analog for positive $ \searrow $ $ g $.
\end{REM}
Two corollaries have found applications (to be described in the next
section) in some recent work of the author.
\begin{COR}
Suppose $ f $ and $ g $ are continuous in $ [a,b] $, differentiable, with $ g,g'>0 $ in $ (a,b) $.
If $ f'/g' $ is initially $ \nearrow(\searrow) $ and changes monotonicity only once
in $ [a,b] $, then $ f/g $ has a unique global maximum $($minimum$)$
in $ [a,b] $.
\end{COR}
\begin{COR}
Let $ f $ and $ g $ be continuous and differentiable with $ g,g'>0 $ in $ (a,b) $.
Suppose that $ f'/g' $ is initially $ \nearrow $ and changes monotonicity once
in $ [a,b] $.
If in addition $ f'(a)g(a)\geq f(a)g'(a) $
and $ f'(b)g(b)\leq f(b)g'(b) $, then $ f/g $ is $ \nearrow $ in $ [a,b] $.
\end{COR} \begin{PROOF}
Suppose that $ f'/g' $ is $ \nearrow $ in $ [a,b_1] $ and $ \searrow $ in $ [b_1,b] $.
By Corollary 1 (2) i),
the boundary condition at $ x=a $ implies that $ f/g $ is $ \nearrow $ in $ [a,b_1] $.
In $ [b_1,b] $, we use the concave version of Corollary 1 (2) ii),
applied to the boundary condition at $ b $, to
conclude that $ f/g $ is also $ \nearrow $ there.
\end{PROOF}
We can push Corollary 4 a little further to still get $ f/g $ $ \nearrow $
when $ f'/g' $ changes monotonicity two times.
\begin{COR}
Let $ f $ and $ g $ be continuous and differentiable with $ g,g'>0 $ in $ (a,b) $.
Suppose that $ f'/g' $ is initially $ \nearrow $ and changes monotonicity exactly
twice in $ [a,b] $. Let $ b_2\in(a,b) $ be the second turning point of $ f'/g' $.
If $ f'(a)g(a)\geq f(a)g'(a) $
and $ f'(b_2)g(b_2)\leq f(b_2)g'(b_2) $, then $ f/g $ is increasing in $ [a,b] $.
\end{COR}
\begin{REM} \rm
Corollaries 4 and 5 have analogs for proving $ f/g\searrow $.
\end{REM}
\section{Examples}
\noindent
\underline{$\vphantom{y}$Example 1}. In some recent work with H. Alzer (on studying some
properties of the error function) we need to compute the global maximum
value of the function
\begin{equation} k_1(x) = \frac{h(x^2)}{h(x)} \end{equation}
in $ [0,\infty ) $, where
\begin{equation} h(x) = \int_{0}^{x} \mbox{e}^{-t^2}\,dt \end{equation}
is a multiple of the error function.
Any numerical software can easily produce the estimate
$1.0541564714695\cdots$, attained at $x = 1.246574335142\cdots$.
From a theoretical viewpoint, no matter how accurate the maximization
algorithm is, these values cannot be simply taken to be the correct ones.
That is
because most algorithms can only guarantee to return a local maximum, which
is not necessarily the global maximum. Before any further
justification, the best we can conclude is that the computed value
represents a local maximum out of possibly multiple local maxima.
Hence, it
can only be taken as a lower bound of the true value sought. To put any
doubt to rest, we need to
affirm that $ k_1(x) $ changes monotonicity only once.
A first attempt is to show that $ k_1'(x) $ changes sign only
once. Plotting its graph seems to support the claim. Yet no easy
proof is apparent.
Letting $ f(x)=h(x^2) $, we compute the ``H\^opital derivative'' (for lack of
a better name) of $ k_1(x) $
\begin{equation} \xi (x) = \frac{f'(x)}{h'(x)} = 2x\,\mbox{e}^{x^2-x^4} \end{equation}
the derivative of which is
\begin{equation} \xi '(x) = 2\,\mbox{e}^{x^2-x^4}(1+2x^2-4x^4) . \end{equation}
It is easy to verify that $ \xi '(x) $ has a unique positive root $ b_1=\sqrt{\sqrt5+1}/2 $,
and that $ \xi (x) $ is $ \nearrow $ in $ (0,b_1) $ and $ \searrow $ in $ (b_1,\infty ) $.
By Corollary 3, we conclude that $ k_1=f/h $ also
changes monotonicity only once in the same sense, just as desired.
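Since $ h(x)=(\sqrt{\pi}/2)\,\mathrm{erf}(x) $, the analysis is easy to corroborate numerically; the sketch below (not a proof) checks on a grid that $ k_1 $ increases to a single maximum near the quoted values and then decreases.

```python
import math

def h(x):                       # h(x) = ∫₀ˣ e^{-t²} dt = (√π/2)·erf(x)
    return 0.5 * math.sqrt(math.pi) * math.erf(x)

xs = [0.001 * k for k in range(1, 4001)]       # grid in (0, 4]
k1 = [h(x * x) / h(x) for x in xs]             # k1(x) = h(x²)/h(x)

i_max = max(range(len(k1)), key=k1.__getitem__)
x_star, k1_star = xs[i_max], k1[i_max]

# unimodality on the grid: strictly increasing before the max, decreasing after
inc = all(u < v for u, v in zip(k1[:i_max], k1[1:i_max + 1]))
dec = all(u > v for u, v in zip(k1[i_max:], k1[i_max + 1:]))
```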
\par\vspace*{1.5\baselineskip}\par
\noindent
\underline{$\vphantom{y}$Example 2}. We also need to know the global maximum of
\begin{equation} k_2(x) = \frac{k_1(x)}{x} = \frac{h(x^2)}{xh(x)} \end{equation}
in $ [0,\infty ) $. The numerical estimate is
$1.0785966957414\cdots$, attained at $x = .68355125808421\cdots$
and we need to ensure that $ k_2(x) $ has only one local maximum.
The case of $ k_2(x) $ is a bit more complicated. With $ f(x)=h(x^2) $
and $ g(x)=xh(x) $, its H\^opital derivative is
\begin{equation} \xi _1(x) = \frac{f'(x)}{g'(x)} = \frac{4x\,\mbox{e}^{-x^4}}{2x\,\mbox{e}^{-x^2}+2h(x)} \,. \end{equation}
It suffices to show that $ \xi _1(x) $ is initially increasing and then changes
monotonicity only once in $ (0,\infty ) $.
One is tempted to apply the oscillation
rule one more time by computing
\begin{equation} \xi _2(x) = \frac{(4x\,\mbox{e}^{-x^4})'}{(2x\,\mbox{e}^{-x^2}+2h(x))'} = \frac{(4x^4-1)\mbox{e}^{x^2-x^4}}{x^2-1} \,. \end{equation}
However, the conditions of the rule are not satisfied because the denominator of
$ \xi _1(x) $ is not a monotone function of $ x $. In fact, it increases in $ (0,1) $
and decreases in $ (1,\infty ) $. We have to investigate the behaviors of $ \xi _1(x) $
in these two subintervals separately.
First we study the monotonicity of $ \xi _2(x) $ in $ [0,1) $ and $ (1,\infty ) $.
The derivative of $ \xi _2(x) $, after some simplification, is
\begin{equation} \xi _2'(x) = \frac{2x\,\mbox{e}^{x^2-x^4}}{(x^2-1)^2} \, (2-11x^2+2x^4+12 x^6-8x^8) \,. \end{equation}
Since the fraction on the right-hand side is nonnegative, the monotonicity of
$ \xi _2(x) $ depends on the sign of the polynomial in parentheses. Letting $ x^2=y $,
the polynomial can be written as
\begin{equation} p(y) = 2-11y+2y^2+12y^3-8y^4. \end{equation}
Since $ p(0)=2 $ and $ p(1)=-3 $, $ p(y) $ has a root $ \sigma $ in $ (0,1) $. We claim
that this is the only positive root, by showing that $ p(y) $ is strictly decreasing
for $ y>0 $. To this end, we note that
$ p'(y)=-11+4y+36y^2-32y^3 $ attains its global maximum in $ (0,\infty ) $ when
$ p''(y)=4+72y-96y^2=0 $. It is easy to verify that this global maximum
is negative.
It follows that $ \xi _2(x) $ is $ \nearrow $ in $ (0,\sqrt\sigma ) $ and $ \searrow $ in $ (\sqrt\sigma ,1)\cup(1,\infty ) $.
We are now ready to study $ \xi _1(x) $ in $ [1,\infty ) $. Since its denominator
is decreasing in $ [1,\infty ) $, we invoke Corollary 2 instead of Corollary~1.
Since $ \xi _2(x) $ is decreasing, the function $ \phi (u) $ in the hypotheses is concave and we
have to use the concave version of Corollary~2. It is easy to verify that
\begin{equation} \lim_{x\rightarrow \infty } \frac{\xi _1(x)}{\xi _2(x)} = \lim_{x\rightarrow \infty } \frac{4x(x^2-1)\mbox{e}^{-x^2}}{(2x\,\mbox{e}^{-x^2}+2h(x))(4x^4-1)} = 0 . \end{equation}
Hence, for $ b $ very large, $ \xi _1(b)<\xi _2(b) $. By Corollary~2 (2) ii),
we conclude that $ \xi _1\searrow $ in $ [1,b] $ for large $ b $.
In $ [0,1] $, $ \xi _2 $ changes monotonicity once, implying that $ \xi _1 $ changes
monotonicity at most once. The only way that this is compatible with
$ \xi _1\searrow $ in $ (1,\infty ) $ is that $ \xi _1 $ changes monotonicity exactly once in
$ [0,\infty ) $, as desired.
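As with Example 1, a grid check corroborates that $ k_2 $ has a single interior maximum near the quoted values (a numerical sketch, not a proof):

```python
import math

def h(x):                       # h(x) = ∫₀ˣ e^{-t²} dt = (√π/2)·erf(x)
    return 0.5 * math.sqrt(math.pi) * math.erf(x)

xs = [0.001 * k for k in range(1, 4001)]       # grid in (0, 4]
k2 = [h(x * x) / (x * h(x)) for x in xs]       # k2(x) = h(x²)/(x·h(x))

i_max = max(range(len(k2)), key=k2.__getitem__)
x_star, k2_star = xs[i_max], k2[i_max]

# a single local (hence global) maximum on the grid
inc = all(u < v for u, v in zip(k2[:i_max], k2[1:i_max + 1]))
dec = all(u > v for u, v in zip(k2[i_max:], k2[i_max + 1:]))
```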
\par\vspace*{1.5\baselineskip}\par
\noindent
\underline{$\vphantom{y}$Example 3}. The function
\begin{equation} k_3(x) = \frac{h(x)-x\,\mbox{e}^{-x^2}}{x^2} \end{equation}
occurs in the same study. We want to show that it is $ \nearrow $ in the interval
$ I=[0,0.967857163] $.
Note that we cannot extend the claim to $ [0,1] $ because $ k_3 $ is
not $ \nearrow $ at $ x=1 $. Letting $ f(x)=h(x)-x\,\mbox{e}^{-x^2} $
and $ g(x)=x^2 $, we find that
\begin{equation} \frac{f'(x)}{g'(x)} = x\,\mbox{e}^{-x^2} \,. \end{equation}
However, $ f'/g' $ is not monotone in $ I $;
it changes monotonicity at
$ c=1/\sqrt2 $. Hence, the regular monotone rule fails. We can easily verify
that the hypotheses of Corollary 4 are satisfied and thus conclude that $ k_3 $ is
$ \nearrow $ in the interval.
Figure 6 shows the graphs of $ k_3(x) $ and its H\^opital derivative. It should
be compared with Figures 3 and 4. The green
dashed curve is not monotone, but as long as it stays above the red
curve, the latter is $ \nearrow $.
\begin{center}
\FG{50}{conv11}
\par\vspace*{2mm}\par
Figure 6. The graphs of $ k_3(x) $ and its H\^opital derivative.
\end{center}
\par\vspace*{1.5\baselineskip}\par
\noindent
\underline{$\vphantom{y}$Example 4}. The function
\begin{equation} k_4(x) = \frac{(2x^2-1)h(x)}{h(x)-x\,\mbox{e}^{-x^2}} \end{equation}
is $ \nearrow $ in $ [0,\infty ) $.
The H\^opital derivative is
$$ \xi _3(x) = \frac{(2x^2-1)\,\mbox{e}^{-x^2}+2xh(x)}{2x^2\mbox{e}^{-x^2}} \,. $$
Note that at $ x=0 $, the denominator becomes 0 while the numerator is $ -1 $.
Therefore, the usual monotone rule cannot be used. Instead of using the
extended rule established in this article, an easier way is to note that
$$ \xi _3(x) = 1 - \frac{1}{2x^2} + \frac{h(x)}{x\,\mbox{e}^{-x^2}} \,. $$
The sum of the first two terms is $ \nearrow $. Thus, it suffices to show that the
last term is $ \nearrow $. Yet the usual rule still cannot be used directly
because the denominator of the last term is not monotone in $ [0,\infty ) $.
We can overcome that obstacle by showing that its reciprocal
$ x\,\mbox{e}^{-x^2}/h(x) $ is $ \searrow $. Then the usual rule can be applied.
% https://arxiv.org/abs/1404.2443
\title{Polygons as sections of higher-dimensional polytopes}
\begin{abstract}
We show that every heptagon is a section of a $3$-polytope with $6$ vertices. This implies that every $n$-gon with $n\geq 7$ can be obtained as a section of a $(2+\lfloor\frac{n}{7}\rfloor)$-dimensional polytope with at most $\lceil\frac{6n}{7}\rceil$ vertices; and provides a geometric proof of the fact that every nonnegative $n\times m$ matrix of rank $3$ has nonnegative rank not larger than $\lceil\frac{6\min(n,m)}{7}\rceil$. This result has been independently proved, algebraically, by Shitov (J. Combin. Theory Ser. A 122, 2014).
\end{abstract}
\section{Introduction}
Let $P$ be a (convex) polytope.
An \defn{extension} of~$P$ is any polytope~$Q$ such that~$P$ is the image of $Q$ under a linear projection;
the \defn{extension
complexity} of~$P$, denoted~\defn{$\ec(P)$}, is the minimal number of facets
of an extension of~$P$. This concept is relevant in combinatorial optimization
because if a polytope has low extension complexity, then it is possible to
use an extension with few facets to efficiently optimize a linear
functional over it.
A \defn{section} of a polytope is its intersection with an affine subspace.
We will work with the polar formulation of the problem above, which
asks for the minimal number of vertices of a polytope $Q$ that has $P$ as a section.
If we call this quantity the \defn{intersection complexity} of $P$, \defn{$\ic(P)$},
then by definition it holds that $\ic(P)=\ec(\pol P)$, where $\pol P$ is the polar dual of~$P$.
However, extension complexity is preserved under polarity
(see~\cite[Proposition~2.8]{ThomasParriloGouveia2013}),
so these four quantities actually coincide:
\[\ic(P)=\ec(\pol P)=\ec(P)=\ic(\pol P).\]
Despite the increasing amount of attention that this topic has received recently
(see \cite{FioriniRothvosTiwary2012}, \cite{ThomasParriloGouveia2013}, \cite{GouveiaRobinsonThomas2013}, \cite{Shitov2014} and references therein),
it is still far from being well understood. For example, even the possible range of values of the intersection complexity of an $n$-gon is still unknown.
Obviously, every $n$-gon has intersection complexity at most~$n$, and for those with
$n\leq 5$ it is indeed exactly~$n$. It is not hard to
check that hexagons can have complexity $5$ or $6$
(cf. \cite[Example~3.4]{ThomasParriloGouveia2013}) and, as we show in Proposition~\ref{prop:ichexagon},
it is easy to decide which is the exact value.
By proving that a certain pseudo-line arrangement is not stretchable, we show that
every heptagon is a section of a $3$-polytope with no more than $6$ vertices.
This reveals the geometry behind a result found independently by
Shitov in~\cite{Shitov2014},
and allows us to settle the intersection complexity of heptagons.
\begin{reptheorem}{thm:icheptagon}
Every heptagon has intersection complexity $6$.
\end{reptheorem}
In general, the minimal intersection complexity of an $n$-gon is $\Theta(\log n)$,
which is attained by regular $n$-gons \cite{BenTalNemirovski2001,FioriniRothvosTiwary2012}.
On the other hand, there exist $n$-gons whose intersection complexity is at least $\sqrt{2n}$~\cite{FioriniRothvosTiwary2012}.
As a consequence of Theorem~\ref{thm:icheptagon} we automatically get upper bounds for the
complexity of arbitrary $n$-gons.
\begin{reptheorem}{thm:icngon}
Any $n$-gon with $n\geq 7$ is a section of a $(2+\ffloor{n}{7})$-dimensional
polytope with at most $\fceil{6n}{7}$ vertices. In particular, $\ic(P)\leq
\fceil{6n}{7}$.
\end{reptheorem}
Of course, this is just a first step towards understanding the
intersection complexity of polygons. By counting degrees of freedom, it is
conceivable that every $n$-gon could be represented as a section of an $O(\sqrt{n})$-dimensional polytope
with $O(\sqrt{n})$ vertices. For sections of $3$-polytopes, our result only
shows that every $n$-gon is a section of a $3$-polytope with not more than $n-1$ vertices, whereas
we could expect an order of $\frac23 n$ vertices.
There is an alternative formulation of these results. The \defn{nonnegative rank} of
a nonnegative $n\times m$ matrix $M$,
denoted \defn{$\pr(M)$}, is the minimal number~$r$ such that there
exists an $n\times r$ nonnegative matrix~$R$ and an $r\times m$ nonnegative matrix
$S$ such that \(M=RS.\)
A classical result of Yannakakis~\cite{Yannakakis1991}
states that the intersection complexity of a polytope coincides with the
{nonnegative rank} of its slack matrix.
In this setting, it is not hard to
deduce the following theorem from Theorem~\ref{thm:icngon} (it is easy to
deal with matrices of rank~$3$ that are not slack
matrices).
\begin{theorem}[{{\cite[Theorem~3.2]{Shitov2014}}}]\label{thm:nonnegativerank}
Let $M$ be a nonnegative $n\times m$ matrix of rank~$3$. %
Then $\pr(M)\leq \fceil{6\min(n,m)}{7}$.
\end{theorem}
This disproved a conjecture of
Beasley and Laffey (originally stated in \cite[Conjecture~3.2]{BeasleyLaffey2009} in a more
general setting), namely that for every $n\geq 3$ there is an $n\times n$ nonnegative matrix $M$ of rank~$3$ with $\pr(M)=n$. While this paper was under review, Shitov improved Theorem~\ref{thm:icngon} and provided a sublinear upper bound for the intersection/extension complexity of $n$-gons~\cite{Shitov2014b}.
\subsection{Notation}
We assume throughout that the vertices $\set{p_i}{i\in \mathbb{Z}/n\mathbb{Z}}$
of every $n$-gon $P$ are cyclically clockwise labeled, \ie the edges of $P$
are $\conv\{p_i,p_{i+1}\}$ for $i\in\mathbb{Z}/n\mathbb{Z}$
and the triangles $(p_{i+2},p_{i+1},p_i)$ are positively oriented for $i\in\mathbb{Z}/n\mathbb{Z}$.
We regard the $p_i$ as points of the Euclidean plane $\mathbb{E}^2$, embedded in the
real projective plane $\mathbb{P}^2$ as $\mathbb{E}^2 =
\smallset{\trans{(x, y, 1)}}{ x, y \in\mathbb{R}}$. For any pair of points $p, q \in
\mathbb{E}^2$, we denote by $\li{p}{q} = p \wedge q$ the line joining them. It is well
known that $\li{p}{q}$ can be identified with the point $\li{p}{q} = p \times q$
in the dual space $(\mathbb{P}^2)^*$, where
\[p\times q =\trans{\left( \begin{vmatrix}p_2& p_3\\q_2&q_3\end{vmatrix},\; -\begin{vmatrix}p_1& p_3\\q_1&q_3\end{vmatrix},\; \begin{vmatrix}p_1& p_2\\q_1&q_2\end{vmatrix} \right)}\]
denotes the cross-%
product in Euclidean $3$-space. Similarly, the meet
$\ell_1\vee\ell_2$ of two lines $\ell_1,\ell_2\in (\mathbb{P}^2)^*$ is their
intersection point in $\mathbb{P}^2$, which has coordinates~$\ell_1\times\ell_2$.
\section{The intersection complexity of hexagons}
As an introduction for the techniques that we use later with heptagons, we study the intersection complexity of hexagons. Hexagons can have intersection
complexity either~$5$ or $6$~\cite[Example~3.4]{ThomasParriloGouveia2013}. In this section we provide a geometric condition to decide between the two values.
This section is mostly independent from the next two, and the reader can safely skip it.
First, we introduce a lower bound for the $3$-dimensional intersection complexity of $n$-gons that we will use later.
\begin{proposition}\label{prop:ic3bound}
No $n$-gon can be obtained as a section of a $3$-polytope with fewer than $\fceil{n+4}{2}$ vertices.
\end{proposition}
\begin{proof}
Let $Q$ be a $3$-polytope with $m$ vertices such that its intersection with the
plane~$H$ coincides with $P$, and let $k$ be the number of vertices of $Q$ that lie on $H$.
By Euler's formula, the number of edges of $Q$ is at most~$3m-6$, of which at least $3k$ have an endpoint on $H$. Moreover, the subgraphs $G^+$ and $G^-$ consisting of edges of $Q$ lying in the open halfspaces $H^+$ and $H^-$ are both connected. Indeed, if $H=\set{x}{\sprod{a}{x}=b}$, then the linear function $\sprod{a}{x}$ induces an acyclic partial orientation on $G^+$ and $G^-$ by setting $v\rightarrow w$ when $\sprod{a}{v}<\sprod{a}{w}$. Following this orientation we can connect each vertex of $G^+$ to the face of $Q$ that maximizes $\sprod{a}{x}$, and following the reverse orientation, each vertex of $G^-$ to the face that minimizes $\sprod{a}{x}$ (compare~\cite[Theorem~3.14]{Ziegler1995}).
Hence, there are at least $m-k-2$ edges in $G^+\cup G^-$. These are edges of $Q$ that do not intersect $H$. There are also at least $3k$ edges that have an endpoint on~$H$. Now, observe that every vertex of $P$ is either a vertex of $Q$ in $H$ or is the intersection with~$H$ of an edge of $Q$ that has an endpoint at each side of~$H$. Hence,
\begin{equation}\label{eq:boundnumvertices}n-k\leq (3m-6)-(3k)-(m-k-2)=2m-4-2k,\end{equation}
and since $k\geq 0$, we get the desired bound.
\end{proof}
The lower bound of Proposition~\ref{prop:ic3bound} is optimal: for every $m\geq 2$ there are $2m$-gons appearing as sections of $3$-polytopes with $m+2$ vertices (Figure~\ref{fig:optimalcuts}).
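As an illustrative sanity check, the vertex count of this construction can be compared against the bound of Proposition~\ref{prop:ic3bound}:

```python
import math

def lower_bound(n):
    # Proposition: an n-gon needs a 3-polytope with at least ceil((n+4)/2) vertices
    return math.ceil((n + 4) / 2)

# the stacked polytope (join of an m-path with an edge) has m+2 vertices
# and admits a 2m-gon as a section, matching the bound exactly for even n = 2m
for m in range(2, 100):
    assert lower_bound(2 * m) == m + 2
print("bound is tight for all even n tested")
```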
\begin{figure}[htpb]
\centering
\includegraphics[width=.22\textwidth]{optimalcut}
\caption{For $m\geq 2$, the join of an $m$-path with an edge is the graph of a stacked $3$-polytope with $2m+2$ vertices that has a $2m$-gon as a section (by a plane that truncates the edge).}\label{fig:optimalcuts}
\end{figure}
\begin{corollary}
The intersection complexity of a hexagon is either $5$ or $6$.
\end{corollary}
\begin{proposition}\label{prop:ichexagon}
The intersection complexity of a hexagon is $5$ if and only if the lines
$\li{p_0}{p_5}$, $\li{p_1}{p_4}$ and $\li{p_2}{p_3}$ intersect in a common point
of the projective plane for some cyclic labeling of its vertices $\set{p_i}{i\in\mathbb{Z}/6\mathbb{Z}}$.
\end{proposition}
\begin{figure}[htpb]
\centering
\subcaptionbox{Non-hexagonal cuts.}[
.58\textwidth ]
{\includegraphics[width=.55\textwidth]{Cuts2}}
\subcaptionbox{The only hexagonal cut.\label{fig:bipyrcut}}[.4\textwidth ]
{\includegraphics[width=.3\textwidth]{hexagoncut}}
\caption{All cuts of the quadrangular pyramid and the triangular bipyramid into two connected components, up to symmetry.}\label{fig:cuts}
\end{figure}
\begin{proof}
The only $4$-polytope with $5$ vertices is the simplex, which has only $5$ facets;
since each facet can contribute at most one edge to a $2$-dimensional section, none of its sections is a hexagon. Therefore, if $P$ is a hexagon, then
$\ic(P)=5$ if and only if it is the intersection of a $2$-plane~$H$ with a $3$-polytope~$Q$ with $5$~vertices.
There are only two combinatorial types of $3$-polytopes with $5$~vertices: the quadrangular pyramid and the triangular bipyramid. By \eqref{eq:boundnumvertices}, $H$ does not contain any vertex of~$Q$. Hence, $H$ induces a cut of the graph of $Q$ into two (nonempty) disjoint connected components.
A small case-by-case analysis (cf. Figure~\ref{fig:cuts}) tells us that the only
possibility is that $Q$ is the bipyramid and $H$ cuts its graph as shown in Figure~\ref{fig:bipyrcut}. However, in every geometric realization of such a cut (with the same labeling), the lines $\li{p_0}{p_5}$, $\li{p_1}{p_4}$ and $\li{p_2}{p_3}$ intersect in a common (projective) point: the point of intersection of $\li{q_0}{q_1}$
with $H$ (compare Figure~\ref{fig:hexagonsection}).
\begin{figure}[htpb]
\centering{\includegraphics[width=.9\textwidth]{hexagonslice}}
\caption{A hexagon as a section of a triangular bipyramid.}\label{fig:hexagonsection}
\end{figure}
For the converse, we prove only the case when the point of intersection is finite (the
case with parallel lines is analogous). Then we can apply an affine transformation
and assume that the coordinates of the hexagon are
\begin{align*}
p_0&=(0,\alpha),& p_1&=(\beta x,\beta y),& p_2&=(\gamma,0)\\
p_3&=(1,0),&p_{4}&=(x,y),& p_{5}&=(0,1);
\end{align*}
for some $x,y>0$ and $\alpha,\beta,\gamma>1$.
Now, let $K>\max(\alpha,\beta,\gamma)$ and consider the polytope $Q$ with vertices
\begin{align*}
q_0&=(0,0,-K),\qquad q_1=(0,0,-1),\qquad q_2=\frac{\big((K-1)\gamma,0, K(\gamma-1)\big)}{K-\gamma},\\
q_3&=\frac{\big(x(K-1)\beta,y(K-1)\beta, K(\beta-1)\big)}{K-\beta},
\quad q_4=\frac{\big(0,(K-1)\alpha, K(\alpha-1)\big)}{K-\alpha}.
\end{align*}
If $H$ denotes the plane of vanishing third coordinate, then $q_0$ and $q_1$ lie below~$H$, while $q_2$, $q_3$ and $q_4$ lie above. The intersections $\li{q_i}{q_j}\cap H$ for $i\in\{0,1\}$ and $j\in\{2,3,4\}$ coincide with the vertices of $P\times\{0\}$. This proves that $P\times\{0\}=Q\cap H$, and hence that $\ic(P)=5$.
\end{proof}
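The construction in the proof is easy to verify numerically. In the sketch below, the values of $\alpha,\beta,\gamma,x,y,K$ are arbitrary illustrative choices subject to the stated constraints; the check recovers the six hexagon vertices as the intersections $\li{q_i}{q_j}\cap H$:

```python
# illustrative (hypothetical) parameter choices with alpha, beta, gamma > 1, x, y > 0
alpha, beta, gamma, x, y = 1.5, 1.4, 1.3, 0.5, 1.1
K = 2.0  # any K > max(alpha, beta, gamma)

q = [
    (0.0, 0.0, -K),                                                       # q0
    (0.0, 0.0, -1.0),                                                     # q1
    ((K - 1) * gamma / (K - gamma), 0.0, K * (gamma - 1) / (K - gamma)),  # q2
    (x * (K - 1) * beta / (K - beta), y * (K - 1) * beta / (K - beta),
     K * (beta - 1) / (K - beta)),                                        # q3
    (0.0, (K - 1) * alpha / (K - alpha), K * (alpha - 1) / (K - alpha)),  # q4
]

def meet_H(a, b):
    """Intersection of the line through a and b with the plane z = 0."""
    t = a[2] / (a[2] - b[2])
    return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))

vertices = {
    (0, 4): (0, alpha), (0, 3): (beta * x, beta * y), (0, 2): (gamma, 0),
    (1, 2): (1, 0),     (1, 3): (x, y),               (1, 4): (0, 1),
}
for (i, j), v in vertices.items():
    got = meet_H(q[i], q[j])
    assert max(abs(got[0] - v[0]), abs(got[1] - v[1])) < 1e-9
print("all six hexagon vertices recovered")
```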
\begin{remark}
Let $P$ be a regular hexagon, and let $Q$ be a polytope with $5$ vertices such that $Q\cap H=P$ for some plane~$H$.
By the proof of Proposition~\ref{prop:ichexagon}, $Q$ is a triangular bipyramid and one of the two halfspaces
defined by $H$ contains only two vertices of $Q$: $q_0$ and $q_1$. Even more, since $P$ is regular,
the line $\li{q_0}{q_1}$ must be parallel to one of the edge directions of $P$ because, as we saw in the previous proof, the projective point $\li{q_0}{q_1}\cap H$ must coincide with the intersection of two opposite edges of $P$ (at infinity in this case).
This means that there are three different choices for the direction of the line $\li{q_0}{q_1}$; and shows that
the set of minimal extensions of~$P$ (which can be parametrized by the vertex coordinates) is not connected, even if we consider
its quotient space obtained after identifying extensions related by an admissible projective
transformation that fixes $P$ and those related by a relabeling of the vertices of $Q$.
A similar behavior was already observed in~\cite{MondSmith2003} for the space
of nonnegative factorizations of nonnegative matrices of rank~$3$ and nonnegative rank~$3$.
\end{remark}
\begin{figure}[htpb]
\centering
{\includegraphics[width=.4\textwidth]{hexagonRS}}
\caption{The intersection complexity of $P$ according to the position of the last vertex.}\label{fig:hexagonRS}
\end{figure}
\begin{remark}
Consider the set of all hexagons with $5$ fixed vertices. The position of the last vertex determines its intersection/extension complexity. This is depicted in Figure~\ref{fig:hexagonRS}. The hexagon fulfills the condition of Proposition~\ref{prop:ichexagon} if and only if the last point lies on any of the three dark lines. Hence, $\ic(P)=5$ if the last point lies on a dark line and $\ic(P)=6$ otherwise.
Actually, an analogous picture appears for any choice for the position of the
initial $5$ points (the dark lines are always concurrent because of Pappus's Theorem).
In addition, the dark lines depend continuously on the coordinates of the first $5$ points.
This implies that, if we take two realizations that have the last point in two different
$\ic(P)=6$ regions in Figure~\ref{fig:hexagonRS}, then we cannot continuously transform one into the other. Said otherwise, the realization space of the hexagon (as considered by Richter-Gebert in~\cite{RG}) restricted to those that have intersection complexity~$6$ is disconnected.
\end{remark}
\section{The complexity of heptagons}
In this section we prove our main result, Theorem~\ref{thm:icheptagon}, in two steps.
The easier part consists of showing that the heptagons in a special family,
which we call standard heptagons, always have intersection complexity at most~$6$ (Proposition~\ref{prop:icstandardheptagon}).
The remainder of the section is devoted to proving the second step, Proposition~\ref{prop:noncrossingstandardization}: every heptagon is projectively equivalent to a standard heptagon.
\subsection{A standard heptagon}
Here, and throughout this section, $P$ denotes a heptagon and $\set{p_i}{i\in \mathbb{Z}/7\mathbb{Z}}$ is its set of vertices, cyclically clockwise labeled.
\begin{definition}
We say that $P$ is a \defn{standard heptagon} if
$p_0=(0,0)$, $p_3=(0,1)$ and $p_{-3}=(1,0)$; and
the lines $\li{p_1}{p_2}$ and $\li{p_{-1}}{ p_{-2}}$ are respectively parallel to the
lines $\li{p_0}{p_3}$ and $\li{p_0}{p_{-3}}$
(see~Figure~\ref{fig:heptagon}).
\end{definition}
We can easily prove that standard heptagons have intersection complexity at
most~$6$.
\begin{figure}[htbp]
\centering
\subcaptionbox{Coordinates of a standard
heptagon.\label{fig:heptagon}}[.45\textwidth]
{\raisebox{1cm}{\includegraphics[width=.4\textwidth]{StandardHeptagon}}}
\qquad
\subcaptionbox{The setup of
Proposition~\ref{prop:icstandardheptagon}.\label{fig:intersection}}[
.45\textwidth ]
{\includegraphics[width=.4\textwidth]{Intersection}}
\caption{Standard heptagons.}
\end{figure}
\begin{proposition}\label{prop:icstandardheptagon}
Every standard heptagon $P$ is a section of a $3$-polytope $Q$ with $6$ vertices.
In particular $\ic(P)\leq 6$.
\end{proposition}
\begin{proof}
For any standard heptagon~$P$, there are real numbers $b,c < 0 < a,d,\lambda,\mu$
such that the coordinates of the vertices of $P$ are
\begin{align*}
p_0&=(0,0),& p_1&=(c,d),& p_2&=(c,d+\mu),&p_3&=(0,1),\\
&&p_{-1}&=(a,b),& p_{-2}&=(a+\lambda,b), & p_{-3}&=(1,0).
\end{align*}
\noindent Fix some $K>\max(\lambda -1,\mu -1)$ and consider the points
\begin{align*}
q_0&:=(0,0,1),\,\, q_1:=(0,0,-K),\,\,q_2:=(1+K,0,-K),\,\,q_3:=(0,1+K,-K),\\
q_4&:=\frac{\big(a(1+K),b(1+K),
\lambda K\big)}{(1+K)-\lambda},\,\,
q_5:=\frac{\big((1+K)c,(1+K)d,\mu
K\big)}{(1+K)-\mu}.
\end{align*}
\noindent We claim that $P$ is the intersection of the $3$-polytope
$Q:=\conv\{q_0,q_1,\dots,q_5\}$ with the plane $H:=\set{(x,y,z)\in\mathbb{R}^3}{z=0}$:
\[
Q\cap H=P\times \{0\}.
\]
Observe that every vertex of $Q\cap H$ corresponds to the intersection of~$H$ with
an edge of~$Q$ that has one endpoint on each side of the plane. Since
$q_0$, $q_4$ and $q_5$ lie above $H$ and $q_1$, $q_2$ and $q_3$
lie below, the intersections of the relevant lines $\li{ q_i}{q_j}$ with~$H$
are (see Figure~\ref{fig:intersection}):
\begin{align*}
\li{q_0}{q_1}\cap H&=(0,0,0),& \li{q_0}{q_2}\cap H&=(1,0,0),& \li{q_0}{q_3}\cap H&=(0,1,0),\\
\li{q_4}{q_1}\cap H&=(a,b,0),&\li{q_4}{q_2}\cap H&=(a+\lambda,b,0),&\li{q_4}{q_3}\cap H&=(a,b+\lambda,0),\\
\li{q_5}{q_1}\cap H&=(c,d,0),&\li{q_5}{q_2}\cap H&=(c+\mu,d,0),&\li{q_5}{q_3}\cap H&=(c,d+\mu,0) .
\end{align*}
These are the vertices of $P\times\{0\}$ together with $(a,b+\lambda,0)$,
$(c+\mu,d,0)$, which proves that $Q\cap H\supseteq P\times \{0\}$.
To prove that indeed $Q\cap H= P\times \{0\}$, we need to see that
both $(a,b+\lambda)$ and $(c+\mu,d)$ belong to $P$.
The convexity of $P$ implies the following conditions on the coordinates of its vertices by comparing, respectively, $p_{-1}$ with the lines $\li{p_0}{p_3}$ and $\li{p_0}{p_{-3}}$, $p_{-1}$ with $p_{-2}$, and $p_{-2}$ with the line $\li{p_3}{p_{-3}}$:
\begin{align*}
a&>0, & -b&>0, & \lambda&>0, & 1-a-b-\lambda>0.
\end{align*}
Hence, the real numbers $\frac{1-a-b-\lambda}{1-b}$, $\frac{a}{1-b}$ and $\frac{\lambda}{1-b}$ are all greater than $0$. Since they add up to $1$, we can exhibit $(a,b+\lambda)$ as a convex combination
of $p_{-1}$, $p_{-2}$ and $p_3$:
\begin{equation*}
\frac{1-a-b-\lambda}{1-b}\,p_{-1}+\frac{a}{1-b}\,p_{-2}+\frac{\lambda}{1-b}\,p_3=(a,b+\lambda).
\end{equation*}
This proves that $(a,b+\lambda)\in P$. That $(c+\mu,d)\in P$ is proved analogously.
\end{proof}
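The convex-combination computation at the end of the proof can be double-checked numerically; the parameter values below are arbitrary illustrative choices satisfying the stated sign conditions:

```python
# hypothetical parameters with a, lam > 0, b < 0 and 1 - a - b - lam > 0
a, b, lam = 0.5, -0.3, 0.2
p_m1, p_m2, p3 = (a, b), (a + lam, b), (0.0, 1.0)  # p_{-1}, p_{-2}, p_3

# the three coefficients from the proof: positive and summing to 1
w1 = (1 - a - b - lam) / (1 - b)
w2 = a / (1 - b)
w3 = lam / (1 - b)
assert min(w1, w2, w3) > 0
assert abs(w1 + w2 + w3 - 1) < 1e-12

# the convex combination lands exactly on (a, b + lam)
pt = tuple(w1 * u + w2 * v + w3 * w for u, v, w in zip(p_m1, p_m2, p3))
assert abs(pt[0] - a) < 1e-12 and abs(pt[1] - (b + lam)) < 1e-12
print("(a, b+lam) lies in conv{p_{-1}, p_{-2}, p_3}")
```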
\subsection{Standardization lines of heptagons}
Our next goal is to show that every heptagon is projectively equivalent to a standard heptagon. For this, the key concept is that of a standardization line.
\begin{definition}
Consider a heptagon $P$, embedded in the projective space~$\mathbb{P}^2$, whose vertices are cyclically labeled $\set{p_i}{i\in \mathbb{Z}/7\mathbb{Z}}$.
For $i\in \mathbb{Z}/7\mathbb{Z}$, and abbreviating $\li{i}{j}:=\li{p_i}{p_j}$, construct
\begin{align*}
p_i^+ &:= \li{i+ 1}{i+ 2}\vee\li{i}{i+ 3},&
p_i^- &:= \li{i- 1}{i- 2}\vee\li{i}{i- 3},&
\ell_i &:= p_i^+ \wedge p_i^-.
\end{align*}
We call the line $\ell_i$ the $i$th \defn{standardization line} of~$P$. If $\ell_i\cap P=\emptyset$, it is a \defn{non-crossing} standardization line.
\end{definition}
Figure~\ref{fig:standardizationlines} shows a heptagon and its
standardization lines $\ell_0$ and $\ell_ {-3}$. Observe that $\ell_0$~is a non-crossing standardization line, while $\ell_{-3}$ is not.
\begin{figure}[htpb]
\includegraphics[width=.45\linewidth]{StandardizationLine1}
\includegraphics[width=.45\linewidth]{StandardizationLine2}
\vspace{-.8cm}
\caption{The standardization lines $\ell_0$ and
$\ell_ {-3}$.}\label{fig:standardizationlines}
\end{figure}
\begin{lemma}\label{lem:converttostandard}
A heptagon $P$ is projectively equivalent to a standard heptagon if and only if it has at least one non-crossing standardization line.
\end{lemma}
\begin{proof}
The line at infinity of a standard heptagon must be one of its standardization
lines, which is obviously non-crossing. Conversely, the projective
transformation that sends a non-crossing standardization line of $P$ to
infinity, followed by a suitable affine transformation, maps $P$ onto a
standard heptagon.
\end{proof}
Hence, having a non-crossing standardization line characterizes standard
heptagons up to projective equivalence. Our next step is to show that every
heptagon has a non-crossing standardization line
(Proposition~\ref{prop:noncrossingstandardization}). But to prove this, we still
need to introduce a couple of concepts.
Observe that $\ell_i$ cannot cross any
of the lines $\li{p_{i+1}}{p_{i+2}}$, $\li{p_{i+0}}{p_{i+3}}$,
$\li{p_{i-1}}{p_{i-2}}$ and $\li{p_{i+0}}{p_{i-3}}$ in the interior of $P$,
since by construction their intersection point is $p_i^\pm$, which lies outside~$P$
(compare Figure~\ref{fig:standardizationlines}).
In particular, if $\ell_i$ intersects~$P$, either it separates $p_{i+1}$ and~$p_{i+2}$ from the remaining vertices of~$P$, or it separates $p_{i-1}$ and~$p_{i-2}$.
\begin{definition}
If the standardization line $\ell_i$ separates $p_{i+1}$ and $p_{i+2}$ from the remaining vertices of $P$, we say that it is \defn{$+$-crossing}; if it separates $p_{i-1}$~and~$p_{i-2}$ it is \defn{$-$-crossing}.
In the example of Figure~\ref{fig:standardizationlines}, $\ell_{-3}$ is $-$-crossing.
\end{definition}
\begin{definition}
The lines $\li{p_{i}}{p_{i+3}}$ and $\li{p_{i+1}}{p_{i+2}}$ partition the projective plane~$\mathbb{P}^2$ into two disjoint angular sectors (cf.
Figure~\ref{fig:sector}). One of them contains the points
$p_{i-1}$, $p_{i-2}$ and $p_{i-3}$, while the interior of the other is empty of vertices of~$P$. We denote this empty sector~\defn{$S_i^+$}.
Similarly, \defn{$S_i^-$}~is the sector formed by
$\li{p_{i}}{p_{i-3}}$ and $\li{p_{i-1}}{p_{i-2}}$ that contains no vertices of~$P$.
\end{definition}
\begin{figure}
\centering
\subcaptionbox{The sector $S_0^+$ (shaded). The point $p_0^-$ lies in
$S_0^+$ if and only if $\ell_{0}$ is
$+$-crossing.\label{fig:sector}}[.45\textwidth]
{\includegraphics[width=.4\textwidth]{CrossingSector}}\qquad
\subcaptionbox{The point $p_0^-$ cannot lie in both shaded sectors
simultaneously.\label{fig:sectorcompatibility}}[.45\textwidth]
{\includegraphics[width=.4\textwidth]{SectorCompatibility}}
\caption{The relevant angular sectors.}\label{fig:sectors}
\end{figure}
These sectors allow us to characterize $\pm$-crossing standardization lines.
\begin{lemma}\label{lem:crossingcharacterization}
The standardization line $\ell_i$ is $+$-crossing if and only if $p_i^-\in S_i^+$.
Analogously, $\ell_i$ is $-$-crossing if and only if $p_i^+\in S_i^-$.
\end{lemma}
\begin{proof}
The line $\ell_i$ is $+$-crossing when it separates $p_{i+1}$ and $p_{i+2}$ from the rest of~$P$; this happens if and only if $\ell_i\subset S_i^+$. Since $\li{p_{i-1}}{p_{i-2}}\cap\ell_i = p_i^-$, this is equivalent to $p_i^-\in S_i ^+$.
The case of $-$-crossing follows analogously.
\end{proof}
With this characterization, we can easily prove the following compatibility condition.
\begin{lemma}\label{lem:sectorcompatibility}
If $\ell_i$ is $+$-crossing, then $\ell_{i-3}$ cannot be $-$-crossing. Analogously, if $\ell_i$ is $-$-crossing, then $\ell_{i+3}$ cannot be $+$-crossing.
\end{lemma}
\begin{proof}
Both statements are equivalent by symmetry. We assume that $\ell_i$ is $+$-crossing and $\ell_{i-3}$ is $-$-crossing to reach a contradiction.
Observe that $p_i^-=p_{i-3}^+$ by definition. By Lemma~\ref{lem:crossingcharacterization}, $p_i^-$ must lie both in the sector formed by $\li{p_{i}}{p_{i+3}}$ and $\li{p_{i+1}}{p_{i+2}}$ and in the sector formed by $\li{p_{i-3}}{p_{i+1}}$ and $\li{p_{i+3}}{p_{i+2}}$. However, the intersection of these two sectors lies in the interior of the polygon (cf. Figure~\ref{fig:sectorcompatibility}), while $p_i^-$ lies outside.
\end{proof}
\begin{corollary}
\label{cor:allcrossing}
If all the standardization lines $\ell_i$ intersect $P$, they are either all $+$-crossing or all $-$-crossing.
\end{corollary}
\subsection{Every heptagon has a non-crossing standardization line}
We are finally ready to present and prove
Proposition~\ref{prop:noncrossingstandardization}.
In essence, we prove that the combinatorics of the pseudo-line arrangement in
Figure~\ref{fig:nonrealizable} are not realizable by an arrangement of
straight lines in the projective plane. Here the ``combinatorics'' refers to the order
of the intersection points in each projective pseudo-line.
However, any heptagon that had only $+$-crossing standardization lines would provide such
a realization (compare the characterization of Lemma~\ref{lem:crossingcharacterization}).
\begin{figure}[htpb]
\centering
\includegraphics[width=.55\linewidth]{NonRealizable}
\caption{A non-stretchable pseudo-line arrangement.}\label{fig:nonrealizable}
\end{figure}
For the proof, we will need the formula \begin{equation}\label{eq:quadprod} ({a \times b} ){\times} ({c}\times {d}) = [{a,\, b,\, d}]\, c - [{a,\, b,\, c}] \, d ,
\end{equation}
where $[{a,\, b,\, c}]= \det(a,b,c)$ is the $3\times 3$-determinant formed by
the homogeneous coordinates of the corresponding points. That is,
$[{p_x,\ p_y, \ p_z}]=\pm 2 \Vol\left(\conv\{{p_x,\ p_y, \ p_z}\}\right)$.
Observe that, since the vertices of the heptagon are labeled clockwise and are in convex position, $ [{p_x,\ p_y, \ p_z}]>0$ whenever $z$ lies in the interval $(x,y)$.
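Identity \eqref{eq:quadprod} follows from the standard triple-product expansion; as a sanity check (not part of the argument), it can also be tested on random points:

```python
import random

def cross(u, v):
    # cross product in Euclidean 3-space
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def det3(u, v, w):
    # [u, v, w] = det(u, v, w) = u . (v x w)
    return sum(u[k] * cross(v, w)[k] for k in range(3))

random.seed(1)
for _ in range(100):
    a, b, c, d = [tuple(random.uniform(-5, 5) for _ in range(3)) for _ in range(4)]
    lhs = cross(cross(a, b), cross(c, d))
    rhs = tuple(det3(a, b, d) * c[k] - det3(a, b, c) * d[k] for k in range(3))
    assert all(abs(l - r) < 1e-6 for l, r in zip(lhs, rhs))
print("identity verified on random quadruples")
```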
To simplify the notation, in what follows we abbreviate $[{p_{i+x},\ p_{i+y}, \ p_{i+z}}]$ as $[x,y,z]_i$, for any $x,y,z,i\in \mathbb{Z}/7\mathbb{Z}$.
\begin{lemma}\label{lem:alg_characterization}
With the notation from above, the standardization line $\ell_i$ is $+$-crossing
if and only if
\begin{align}\label{eq:bdycondition1}
[p_{i+2},p_{i+1}, p_i^-]= [- 1,-2,-3]_{i}[2, 1, 0]_{i} - [- 1,-2,0]_i[2, 1,-3]_{i}\geq 0;
\end{align}
and is $-$-crossing if and only if
\begin{align}\label{eq:bdycondition2}
[p_{i-2},p_{i-1},p_i^+]= [1,2,3]_i[-2,-1, 0]_{i} - [1,2,0]_{i}[-2,-1, 3]_{i} \geq 0.
\end{align}
\end{lemma}
\begin{proof}
Using \eqref{eq:quadprod}, the coordinates of the standardization point
$p_i^-$ are given by
\begin{align*}
p_i^-
&= (p_{i- 1}\wedge p_{i- 2})\vee(p_{i}\wedge p_{i- 3})
=(p_{i- 1}\times p_{i- 2})\times(p_{i}\times p_{i- 3})\\
&\stackrel{\eqref{eq:quadprod}}{=}
\phantom{-}[- 1,-2, -3]_ i\ p_{i} - [- 1,-2,0]_i\ p_{i-3}.
\end{align*}
Observe that $[p_{i+3},p_i,p_i^-]<0$ since
\begin{align*}
[p_{i+3},p_i,p_i^-]&=
[- 1,-2, -3]_ i[3,0, 0]_ i - [- 1,-2,0]_i[3,0,-3]_ i\\
&=\phantom{[- 1,-2, -3]_ i[3,0, 0]_ i} - [- 1,-2,0]_i[3,0, -3]_ i
\ < \ 0,
\end{align*}
because $[3,0,0]_i=0$ and $[- 1,-2,0]_i > 0$, $[3,0, -3]_ i > 0$ by convexity. %
Therefore, in view of Lemma~\ref{lem:crossingcharacterization}, requiring $\ell_i$ to be $+$-crossing reduces to the equation $[p_{i+2},p_{i+1}, p_i^-]\geq 0$, since otherwise $ p_i^-$ would not lie in the desired sector. This expression can be reformulated as~\eqref{eq:bdycondition1}. The proof of~\eqref{eq:bdycondition2} is analogous.
\end{proof}
\begin{proposition}\label{prop:noncrossingstandardization}
Every heptagon has at least one non-crossing standardization line.
\end{proposition}
\begin{proof}
We want to prove that $P$ has at least one non-crossing standardization line.
By Corollary~\ref{cor:allcrossing} (and symmetry), it is enough to prove that
it is impossible for all $\ell_i$ to be $+$-crossing. We will assume this to be
the case and reach a contradiction.
If $\ell_i$ is $+$-crossing for all $0\leq i\leq 6$, then by Lemma~\ref{lem:alg_characterization}, the coordinates of the vertices of $P$ fulfill
\eqref{eq:bdycondition1} for all $i\in\mathbb{Z}/7\mathbb{Z}$. Moreover, if $\ell_i$ is $+$-crossing then it cannot be $-$-crossing. Therefore, again by Lemma~\ref{lem:alg_characterization}, one can see that the coordinates of the vertices of $P$ fulfill
\begin{align}\label{eq:bdycondition3}
[2,1,0]_{i}[-1,-2, 3]_{i} - [2,1,3]_i[-1,-2, 0]_{i}> 0,
\end{align}
for all $i\in\mathbb{Z}/7\mathbb{Z}$.
Therefore, if all the $\ell_i$ are $+$-crossing, the sum of the left-hand sides of \eqref{eq:bdycondition1} and \eqref{eq:bdycondition3} over all $i\in\mathbb{Z}/7\mathbb{Z}$ must be positive.
With the abbreviations
\begin{align*}
A_i&:=[- 1,-2,-3]_{i},& B_i&:=[2, 1, 0]_{i},& C_i&:=[- 1,-2,0]_i,\\
D_i&:=[2, 1,-3]_{i},& E_i&:=[2,1,0]_{i},& F_i&:=[-1,-2, 3]_{i},\\
G_i &:=[2,1,3]_i,& H_i&:=[-1,-2, 0]_{i};
\end{align*}
this can be expressed as
\begin{equation}\label{eq:globalcondition}
\sum_{i\in \mathbb{Z}/7\mathbb{Z}}A_iB_i-C_iD_i+E_iF_i-G_iH_i>0 .
\end{equation}
However, it turns out that for every heptagon the equation
\begin{equation}\label{eq:heptagonidentity}
\sum_{i\in \mathbb{Z}/7\mathbb{Z}}A_iB_i-C_iD_i+E_iF_i-G_iH_i=0
\end{equation}
holds by the upcoming Lemma~\ref{lem:invariant}. This contradiction concludes the proof
that every heptagon has at least one non-crossing standardization line.
\end{proof}
\begin{lemma}\label{lem:invariant}
Let $A$ be a configuration of $7$ points in $\mathbb{E}^2\subset\mathbb{P}^2$ labeled
$\set{a_i}{i\in \mathbb{Z}/7\mathbb{Z}}$. Denote the determinant $[{a_{i+x},\ a_{i+y}, \
a_{i+z}}]$ as $[x,y,z]_i$, for any $x,y,z,i\in \mathbb{Z}/7\mathbb{Z}$. Finally, let
\begin{align*}
A_i&:=[- 1,-2,-3]_{i},& B_i&:=[2, 1, 0]_{i},& C_i&:=[- 1,-2,0]_i,\\
D_i&:=[2, 1,-3]_{i},& E_i&:=[2,1,0]_{i},& F_i&:=[-1,-2, 3]_{i},\\
G_i &:=[2,1,3]_i,& H_i&:=[-1,-2, 0]_{i}.
\end{align*}
Then
\begin{equation}\tag{\ref{eq:heptagonidentity}}
\sum_{i\in \mathbb{Z}/7\mathbb{Z}}A_iB_i-C_iD_i+E_iF_i-G_iH_i=0.
\end{equation}
\end{lemma}
\begin{proof}
Although~\eqref{eq:heptagonidentity} can be checked purely algebraically, we provide a geometric interpretation. Observe that
$[x,y,z]_i=\pm2\Vol(a_{i+x},a_{i+y},a_{i+z})$,
which implies that the identity in~\eqref{eq:heptagonidentity} can be proved in terms of
(signed) areas of certain triangles spanned by $A$. Figure~\ref{fig:invariant} depicts some of these triangles when the points are
in convex position.
\begin{figure}[htpb]
\centering
\begin{tabular}{cccc}
\includegraphics[width=.18\linewidth]{A0}&
\includegraphics[width=.18\linewidth]{B0}&
\includegraphics[width=.18\linewidth]{C0}&
\includegraphics[width=.18\linewidth]{D0}\\
$A_0$&$B_0$&$C_0$&$D_0$\\
\includegraphics[width=.18\linewidth]{E0}&
\includegraphics[width=.18\linewidth]{F0}&
\includegraphics[width=.18\linewidth]{G0}&
\includegraphics[width=.18\linewidth]{H0}\\
$E_0$&$F_0$&$G_0$&$H_0$
\end{tabular}
\caption{The determinants involved in Lemma~\ref{lem:invariant}.}\label{fig:invariant}
\end{figure}
To see \eqref{eq:heptagonidentity}, we prove the two stronger identities
\begin{align}
\sum_{i\in \mathbb{Z}/7\mathbb{Z}}A_iB_i& =\sum_{i\in \mathbb{Z}/7\mathbb{Z}}G_iH_i \qquad \text{ and }\label{eq:identity1}\\\sum_{i\in \mathbb{Z}/7\mathbb{Z}}E_iF_i&=\sum_{i\in \mathbb{Z}/7\mathbb{Z}}C_iD_i.\label{eq:identity2}
\end{align}
Indeed, the identity \eqref{eq:identity1} is easy, because
$A_iB_i=G_{i+3}H_{i+3}$ for all $i\in \mathbb{Z}/7\mathbb{Z}$ since $A_i=G_{i+3}$ and $B_i=H_{i+3}$.
Moreover, it is straightforward to check that
\begin{align}
C_i&=E_{i-2},\label{eq:Ci}
\end{align}
and it is also not hard to see that
\begin{equation}
D_i+C_{i-3}=F_{i-2}+E_{(i+3)-2}.\label{eq:square}
\end{equation}
Finally, we subtract the right-hand side of \eqref{eq:identity2} from the left-hand side:
\begin{align*}
& \sum_{i\in \mathbb{Z}/7\mathbb{Z}}E_iF_i -\sum_{i\in \mathbb{Z}/7\mathbb{Z}}C_iD_i
\ = \
\sum_{i\in \mathbb{Z}/7\mathbb{Z}}E_{i-2}F_{i-2}-\sum_{i\in \mathbb{Z}/7\mathbb{Z}}C_iD_i
\\ &\stackrel{\phantom{\eqref{eq:Ci}}}{=}
\sum_{i\in \mathbb{Z}/7\mathbb{Z}}E_{i-2}(F_{i-2}+E_{i+1}-E_{i+1})
-\sum_{i\in \mathbb{Z}/7\mathbb{Z}}C_i(D_i+C_{i-3}-C_{i-3})
\\ &
\stackrel{\eqref{eq:Ci}}{=}
\sum_{i\in\mathbb{Z}/7\mathbb{Z}}C_i
\underbrace{\big(F_{i-2}+E_{i+1}-D_i-C_{i-3}\big)}_{{}=0 \text{ by }\eqref{eq:square}}
-\sum_{i\in \mathbb{Z}/7\mathbb{Z}}E_{i-2}E_{i+1}
+\sum_{i\in \mathbb{Z}/7\mathbb{Z}}C_iC_{i-3}
\\ &\stackrel{\eqref{eq:Ci}}{=}
-\sum_{i\in \mathbb{Z}/7\mathbb{Z}}C_{i}C_{i+3}
+\sum_{i\in \mathbb{Z}/7\mathbb{Z}} C_iC_{i-3} \ = \ 0,
\end{align*}
and this concludes our proof of~\eqref{eq:identity2}.
\end{proof}
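Relations \eqref{eq:Ci} and \eqref{eq:square}, and hence the identity \eqref{eq:identity2}, hold for every configuration of seven labeled points, so they can also be tested numerically. The following Python sketch (all function names are ours) does so with integer coordinates, so every determinant identity is checked exactly:

```python
import random

random.seed(1)

def det3(p, q, r):
    # [p, q, r]: 3x3 determinant of the homogeneous coordinates (x, y, 1),
    # i.e. twice the signed area of the triangle p q r
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def bracket(a, x, y, z, i):
    # [x, y, z]_i = [a_{i+x}, a_{i+y}, a_{i+z}], indices taken mod 7
    return det3(a[(i + x) % 7], a[(i + y) % 7], a[(i + z) % 7])

# an arbitrary configuration of 7 labeled points (integral, hence exact)
a = [(random.randint(-50, 50), random.randint(-50, 50)) for _ in range(7)]

C = [bracket(a, -1, -2, 0, i) for i in range(7)]
D = [bracket(a, 2, 1, -3, i) for i in range(7)]
E = [bracket(a, 2, 1, 0, i) for i in range(7)]
F = [bracket(a, -1, -2, 3, i) for i in range(7)]

# eq:Ci -- C_i = E_{i-2}
assert all(C[i] == E[(i - 2) % 7] for i in range(7))
# eq:square -- D_i + C_{i-3} = F_{i-2} + E_{i+1}
assert all(D[i] + C[(i - 3) % 7] == F[(i - 2) % 7] + E[(i + 1) % 7]
           for i in range(7))
# eq:identity2 -- sum E_i F_i = sum C_i D_i
assert sum(E[i] * F[i] for i in range(7)) == sum(C[i] * D[i] for i in range(7))
```

Since the relations are identities, the assertions hold for any choice of the seven points, not just the sampled one.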
\begin{observation}
In contrast to Proposition~\ref{prop:noncrossingstandardization}, there exist
heptagons with $6$~crossing standardization lines.
For example, the convex hull of
\begin{align*}
p_0&=(\tfrac{7}{5},\tfrac{1}{2}),&p_1&=(\tfrac{6}{5},\tfrac{1}{10}), &p_2&=(1,0), &p_3&=(0,0), \\p_{-3}&=(0,1), &p_{-2}&=(1,1), &p_{-1}&=(\tfrac{6}{5},\tfrac{9}{10})\end{align*}
has $6$ crossing standardization lines (Figure~\ref{fig:6crossings}). Notice that the symmetry of the configuration about the horizontal line $y=\tfrac12$ makes $\ell_2$ and $\ell_{-2}$ coincide.
\end{observation}
\begin{figure}[htpb]
\centering
\includegraphics[width=.6\linewidth]{6crossing}
\caption{A heptagon with $6$ crossing standardization
lines.}\label{fig:6crossings}
\end{figure}
\subsection{The intersection complexity of heptagons}
Using projective transformations to standardize heptagons is the last step towards
Theorem~\ref{thm:icheptagon}.
\begin{lemma}\label{lem:projective}
Projective equivalence preserves intersection complexity.
\end{lemma}
\begin{proof}
Let $\sigma:P_1\to P_2$ be a projective transformation between $k$-dimensional
polytopes. Let $Q_1\subset\mathbb{R}^d$ be a polytope with $\ic(P_1)$ many vertices and
let~$H$ be an affine $k$-flat such that $Q_1\cap H=P_1$. Finally, let $\tau$ be a
projective transformation of~$\mathbb{R}^d$ that leaves invariant both $H$ and its
orthogonal complement, and such that $\tau|_H=\sigma$. Then $\tau(Q_1)\cap
H=\sigma (P_1)=P_2$.
\end{proof}
\begin{lemma}\label{lem:icheptagon}
Any heptagon $P$ is a section of a $3$-polytope with no more than $6$ vertices.
\end{lemma}
\begin{proof}
Let $P$ be a heptagon. By Proposition~\ref{prop:noncrossingstandardization}
it has a non-crossing standardization line, which implies that $P$ is
projectively equivalent to a standard heptagon by
Lemma~\ref{lem:converttostandard}.
Our claim follows by combining Lemma~\ref{lem:projective} with
Proposition~\ref{prop:icstandardheptagon}.
\end{proof}
The combination of this lemma with the lower bound of Proposition~\ref{prop:ic3bound}
finally yields our claimed result.
\begin{theorem}\label{thm:icheptagon}
Every heptagon has intersection complexity $6$.
\end{theorem}
\section{The intersection complexity of $n$-gons}
We can use Lemma~\ref{lem:icheptagon} to derive bounds for the complexity of
arbitrary polygons. We begin with a trivial bound that presents $n$-gons as sections of $3$-polytopes.
\begin{theorem}\label{thm:ic3ngon}
Any $n$-gon $P$ with $n\geq 7$ is a section of a $3$-polytope with at most $n-1$
vertices.
\end{theorem}
\begin{proof}
The proof is by induction. The case $n=7$ is Lemma~\ref{lem:icheptagon}. For
$n\ge8$, let $x=(a,b)$ be a vertex of $P$, and consider the $(n-1)$-gon $P'$
obtained by taking the convex hull of the remaining vertices of $P$. By
induction there is a $3$-polytope $Q'$ with at most $n-2$ vertices such that
$Q'\cap H_0=P'\times \{0\}$, where $H_0=\set{(x,y,z)\in\mathbb{R}^3}{z=0}$. Then
$Q=\conv\big(Q'\cup \{(a,b,0)\}\big)$ satisfies $Q\cap H_0=P\times \{0\}$ and has at most $n-1$ vertices.
\end{proof}
\begin{question}
What is the smallest $f(n)$ such that any $n$-gon is a section of a $3$-polytope with at most $f(n)$ vertices? Is $f(n)\sim \frac23 n$?
\end{question}
We can derive more interesting bounds when we allow ourselves to increase the
dimension. We only need the following result (compare \cite[Proposition~2.8]{ThomasParriloGouveia2013}).
\begin{lemma}\label{lem:icunion}
Let $P_1$ and $P_2$ be polytopes in $\mathbb{R}^d$, and let $P=\conv(P_1\cup P_2)$.
If $P_i$ is a section of a $d_i$-polytope with $n_i$ vertices for $i=1,2$, then $P$ is a section of a $(d_1+d_2-d)$-polytope with at most $n_1+n_2$ vertices. In particular, $\ic(P)\leq \ic(P_1)+\ic(P_2)$.
\end{lemma}
\begin{proof}
For $i=1,2$, let $Q_i$ be a polytope in $\mathbb{R}^{d_i}$ with $n_i$ vertices
and such that $Q_i\cap H_i=P_i$,
where $H_i=\set{x\in \mathbb{R}^{d_i}}{x_j =0 \text{ for }d< j\leq d_i}$ is the
$d$-flat consisting of the points whose last $d_i-d$ coordinates vanish.
Now consider the following embeddings of $Q_1$ and $Q_2$ in $\mathbb{R}^{d_1+d_2-d}$:
\begin{itemize}
\item for $q\in Q_1$ let $f_1(q)=(q_1,\dots,q_{d},q_{d+1},\dots,q_{d_1},
0,\dots,0)$, and
\item for $q\in Q_2$ let
$f_2(q)=(q_1,\dots,q_{d},0,\dots,0,q_{d+1},\dots,q_{d_2})$.
\end{itemize}
Finally, consider the polytope $Q:=\conv\big(f_1(Q_1)\cup f_2(Q_2)\big)$,
which has at most $n_1+n_2$ vertices, and the $d$-flat $H:=\set{x\in\mathbb{R}^{d_1+d_2-d}}{x_j=0 \text{ for }d<j\leq d_1+d_2-d}$;
then $P=Q\cap H$.
\end{proof}
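The coordinate bookkeeping of the two embeddings is easy to get wrong; the following small sketch (with hypothetical helper names of our own) checks that a point of the common $d$-flat receives the same image under $f_1$ and $f_2$, and that this image lies in $H$:

```python
d, d1, d2 = 2, 3, 4   # toy dimensions with d <= d1 and d <= d2

def f1(q):
    # q in R^{d1}: keep all d1 coordinates, append d2 - d zeros
    return tuple(q) + (0,) * (d2 - d)

def f2(q):
    # q in R^{d2}: keep the first d coordinates, insert d1 - d zeros,
    # then append the last d2 - d coordinates
    return tuple(q[:d]) + (0,) * (d1 - d) + tuple(q[d:])

# a point of P = Q_1 cap H_1 = Q_2 cap H_2, in both coordinate systems
x = (5, 7)
q1 = x + (0,) * (d1 - d)    # as a point of H_1 inside R^{d1}
q2 = x + (0,) * (d2 - d)    # as a point of H_2 inside R^{d2}

assert len(f1(q1)) == len(f2(q2)) == d1 + d2 - d
assert f1(q1) == f2(q2)                  # both embeddings agree on H
assert all(c == 0 for c in f1(q1)[d:])   # the image lies in the flat H
```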
\begin{theorem}\label{thm:icngon}
Any $n$-gon with $n\geq 7$ is a section of a $(2+\ffloor{n}{7})$-dimensional
polytope with at most $\fceil{6n}{7}$ vertices. In particular, $\ic(P)\leq
\fceil{6n}{7}$.
\end{theorem}
\begin{proof}
This is a direct consequence of Lemmas~\ref{lem:icheptagon}
and~\ref{lem:icunion}.
\end{proof}
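The arithmetic behind this bound can be traced explicitly: split the $n$-gon into $\lfloor n/7\rfloor$ heptagons plus, possibly, one smaller polygon, and combine the pieces with Lemma~\ref{lem:icunion}. The following sketch (our own bookkeeping, not part of the proof) confirms that the greedy combination reproduces the claimed dimension and vertex counts:

```python
import math

def section_bound(n):
    # q heptagons, each a section of a 3-polytope with 6 vertices
    # (Lemma lem:icheptagon), plus one r-gon (a section of itself, d = 2)
    q, r = divmod(n, 7)
    pieces = [(3, 6)] * q + ([(2, r)] if r else [])
    dim, verts = pieces[0]
    for di, vi in pieces[1:]:
        dim += di - 2          # Lemma lem:icunion with d = 2
        verts += vi
    return dim, verts

for n in range(7, 100):
    assert section_bound(n) == (2 + n // 7, math.ceil(6 * n / 7))
```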
\begin{question}
What is the smallest $f(n)$ such that any $n$-gon is a section of a $g(n)$-dimensional polytope with at most $f(n)$ vertices? Are $f(n)=O(\sqrt{n})$ and $g(n)=O(\sqrt{n})$?
\end{question}
\section*{Acknowledgements}
The authors want to thank G\"unter Ziegler and Vincent Pilaud for many enriching discussions on this subject.
| {
"timestamp": "2015-02-11T02:18:15",
"yymm": "1404",
"arxiv_id": "1404.2443",
"language": "en",
"url": "https://arxiv.org/abs/1404.2443",
"abstract": "We show that every heptagon is a section of a $3$-polytope with $6$ vertices. This implies that every $n$-gon with $n\\geq 7$ can be obtained as a section of a $(2+\\lfloor\\frac{n}{7}\\rfloor)$-dimensional polytope with at most $\\lceil\\frac{6n}{7}\\rceil$ vertices; and provides a geometric proof of the fact that every nonnegative $n\\times m$ matrix of rank $3$ has nonnegative rank not larger than $\\lceil\\frac{6\\min(n,m)}{7}\\rceil$. This result has been independently proved, algebraically, by Shitov (J. Combin. Theory Ser. A 122, 2014).",
"subjects": "Metric Geometry (math.MG); Combinatorics (math.CO)",
"title": "Polygons as sections of higher-dimensional polytopes",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9893474879039229,
"lm_q2_score": 0.8152324960856175,
"lm_q1q2_score": 0.8065482220599504
} |
https://arxiv.org/abs/2201.12949 | The shallow permutations are the unlinked permutations | Diaconis and Graham studied a measure of distance from the identity in the symmetric group called total displacement and showed that it is bounded below by the sum of length and reflection length. They asked for a characterization of the permutations where this bound is an equality; we call these the shallow permutations. Cornwell and McNew recently interpreted the cycle diagram of a permutation as a knot diagram and studied the set of permutations for which the corresponding link is an unlink. We show the shallow permutations are precisely the unlinked permutations. As Cornwell and McNew give a generating function counting unlinked permutations, this gives a generating function counting shallow permutations. | \section{Introduction}
There are many measures for how far a given permutation $w\in S_n$ is from being the identity. The most classical are length and reflection length, which are defined as follows. Let $s_i$ denote the adjacent transposition $s_i=(i\,\,i+1)$ and $t_{ij}$ the transposition $t_{ij}=(i\,\,j)$. The {\bf length} of $w$, denoted $\ell(w)$, is the smallest integer $\ell$ such that there exist indices $i_1,\ldots,i_\ell$ with $w=s_{i_1}\cdots s_{i_\ell}$. It is classically known that the length of $w$ is equal to the number of inversions of $w$; an {\bf inversion} is a pair $(a,b)$ such that $a<b$ but $w(a)>w(b)$. The {\bf reflection length} of $w$, which we will denote $\ell_T(w)$, is the smallest integer $r$ such that there exist indices $i_1,\ldots,i_r$ and $j_1,\ldots,j_r$ with $w=t_{i_1j_1}\cdots t_{i_rj_r}$. It is classically known that $\ell_T(w)$ is equal to $n-\cyc(w)$, where $\cyc(w)$ denotes the number of cycles in the cycle decomposition of $w$.
Another such measure is {\bf total displacement}, defined by Knuth~\cite{KnuAOCP} as $\td(w)=\sum_{i=1}^n |w(i)-i|$ and first studied by Diaconis and Graham~\cite{DG} under the name Spearman's disarray. Diaconis and Graham showed that $\ell(w)+\ell_T(w)\leq\td(w)$ for all permutations $w$ and asked for a characterization of those permutations for which equality holds. More recently, Petersen and Tenner~\cite{PT} defined a statistic they call {\bf depth} on arbitrary Coxeter groups and showed that, for any permutation, its total displacement is always twice its depth. Following their terminology, we call the permutations for which the Diaconis--Graham bound is an equality the {\bf shallow} permutations.
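All three statistics are straightforward to compute from the one-line notation. The following Python sketch (the helper names are ours) verifies the values $\ell(w)=19$, $\ell_T(w)=5$, and $\td(w)=24$ for the running example $w=7563421$:

```python
from itertools import combinations

def length(w):
    # number of inversions of w (one-line notation, values 1..n)
    return sum(1 for a, b in combinations(range(len(w)), 2) if w[a] > w[b])

def reflection_length(w):
    # n minus the number of cycles of w
    n = len(w)
    seen, cycles = set(), 0
    for i in range(n):
        if i not in seen:
            cycles += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = w[j] - 1
    return n - cycles

def total_displacement(w):
    return sum(abs(w[i] - (i + 1)) for i in range(len(w)))

def is_shallow(w):
    # the Diaconis--Graham bound holds with equality
    return length(w) + reflection_length(w) == total_displacement(w)

w = (7, 5, 6, 3, 4, 2, 1)
assert (length(w), reflection_length(w), total_displacement(w)) == (19, 5, 24)
assert is_shallow(w)
```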
\begin{figure}[hbtp]
\cyclefig{7,5,6,3,4,2,1}
\caption{\label{diag:7563421}Knot diagram for $w=7563421$}
\end{figure}
In a recent paper, Cornwell and McNew~\cite{CM} interpreted the cycle diagram of a permutation as a knot diagram and studied the permutations whose corresponding knots are the trivial knot or the trivial link. Given a permutation $w$, to obtain the {\bf cycle diagram}, draw a horizontal line between the points $(i,i)$ and $(w^{-1}(i),i)$ for each $i$ and a vertical line between $(j,j)$ and $(j,w(j))$ for each $j$. Turn the cycle diagram into a {\bf knot diagram} by designating every vertical line to cross over any horizontal line it meets. For example, Figure~\ref{diag:7563421} shows the knot diagram for $w=7563421$. They say that a permutation is {\bf unlinked} if the knot diagram of the permutation is a diagram for the unlink, a collection of circles embedded trivially in $\mathbb{R}^3$. In their paper, they mainly consider derangements, but it is easy to modify their definitions to consider all permutations by treating each fixed point as a tiny unknotted loop.
Our main result is the following:
\begin{thm}
A permutation is shallow if and only if it is unlinked.
\end{thm}
Readers can check that Figure~\ref{diag:7563421} shows that the diagram of $w=7563421$ is a diagram of the unlink with 2 components, and $\ell(w)=19$, $\ell_T(w)=5$, and $\td(w)=24$, so $\ell(w)+\ell_T(w)=\td(w)$.
Using this theorem and further results of Cornwell and McNew~\cite[Theorem 6.5]{CM}, we obtain a generating function counting shallow permutations. Let $P$ be the set of shallow permutations, and let
$$G(x)=\sum_{n=0}^\infty |P\cap S_n|\, x^n.$$
\begin{cor}
The generating function $G$ satisfies the algebraic equation
$$x^2G^3 + (x^2 - 3x + 1)G^2 + (3x-2)G + 1 =0.$$
\end{cor}
This is sequence A301897 (defined as the number of shallow permutations) in the OEIS~\cite{OEIS}.
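Both the initial coefficients and the algebraic relation can be checked by brute force for small $n$: count shallow permutations directly and substitute the truncated series into the polynomial. A sketch (our own code, with shallowness tested via the Diaconis--Graham equality):

```python
from itertools import permutations

def is_shallow(w):
    # Diaconis--Graham equality: l(w) + l_T(w) = td(w)
    n = len(w)
    inv = sum(1 for a in range(n) for b in range(a + 1, n) if w[a] > w[b])
    seen, cyc = set(), 0
    for i in range(n):
        if i not in seen:
            cyc += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = w[j] - 1
    td = sum(abs(w[i] - i - 1) for i in range(n))
    return inv + (n - cyc) == td

N = 6
# g[n] = |P cap S_n|; g[0] = 1 accounts for the empty permutation
g = [1] + [sum(map(is_shallow, permutations(range(1, n + 1))))
           for n in range(1, N)]
assert g[:5] == [1, 1, 2, 6, 23]   # 3412 is the only non-shallow element of S_4

def polymul(p, q):
    # product of truncated power series mod x^N
    r = [0] * N
    for i, pi in enumerate(p):
        for j, qj in enumerate(q[:N - i]):
            r[i + j] += pi * qj
    return r

G2 = polymul(g, g)
G3 = polymul(G2, g)
# residue of x^2 G^3 + (x^2 - 3x + 1) G^2 + (3x - 2) G + 1, mod x^N
res = [0] * N
for k in range(N):
    if k >= 2:
        res[k] += G3[k - 2] + G2[k - 2]
    if k >= 1:
        res[k] += -3 * G2[k - 1] + 3 * g[k - 1]
    res[k] += G2[k] - 2 * g[k]
res[0] += 1
assert res == [0] * N
```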
While this paper was being prepared, Berman and Tenner~\cite{BM21} gave another characterization of shallow cycles that could also be compared with the work of Cornwell and McNew to give our results.
Our proof relies on a recursive description of the set of unlinked permutations due to Cornwell and McNew and a different recursive description of the set of shallow permutations due to Hadjicostas and Monico~\cite{HM}. We show by induction that all permutations satisfying the description of Cornwell and McNew are shallow and separately that all permutations satisfying the description of Hadjicostas and Monico are unlinked.
The shallow permutations have another surprising connection not previously noted in the literature. Given a permutation $w$, Bagno, Biagioli, Novick, and the last author~\cite{BBNW} defined the {\bf reduced reflection length} $\ell_R(w)$ as the smallest integer $q$ such that there exist $i_1,\ldots,i_q$ and $j_1,\ldots,j_q$ such that $w=t_{i_1j_1}\cdots t_{i_qj_q}$ and $\ell(w)=\sum_{k=1}^q \ell(t_{i_kj_k})$ and show that the shallow permutations are equivalently the permutations for which $\ell_T(w)=\ell_R(w)$. Bennett and Blok~\cite{BB} show, using somewhat different language, that reduced reflection length is the rank function on the universal Grassman order introduced by Bergeron and Sottile~\cite{BS} to study questions in Schubert calculus.
Section 2 describes the recursive characterizations of Cornwell and McNew and of Hadjicostas and Monico, while the proof of our main theorem is given in Section 3.
I originally conjectured Theorem 1.1 out of work on a related conjecture in an undergraduate directed research seminar in Spring 2019. I thank the students in the seminar, specifically Jacob Alderink, Noah Jones, Sam Johnson, and Matthew Mills, for ideas that helped spark this work. I also thank Nathan McNew for the Tikz code to draw the figures. Finally, I learned about the work of Cornwell and McNew at Permutation Patterns 2018 and thank the organizers of that conference.
\section{Characterizations of shallow and unlinked permutations}
We now describe the recursive characterizations of unlinked and shallow permutations.
Let $w\in S_n$ be a permutation. Denote by $\fl_i(w)$ the {\bf $i$-th flattening} of $w$, which is defined by removing the $i$-th entry of $w$ (in one-line notation) and then renumbering down by 1 every entry greater than $w(i)$.
Formally, $$\fl_i(w)(k)=\begin{cases}
w(k) &\mbox{if } k<i \mbox{and } w(k)<w(i) \\
w(k)-1 &\mbox{if } k<i \mbox{and } w(k)>w(i) \\
w(k+1) &\mbox{if } k>i \mbox{and } w(k)<w(i) \\
w(k+1)-1 &\mbox{if } k>i \mbox{and } w(k)>w(i) \\
\end{cases}$$
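A direct implementation of the flattening map may be useful; the following sketch (the function name is ours) uses one-line notation with $1$-indexed positions, as above:

```python
def flatten(w, i):
    # fl_i(w): delete the i-th entry of the one-line notation (1-indexed)
    # and renumber every remaining entry larger than w(i) down by one
    v = w[i - 1]
    return tuple(x - 1 if x > v else x
                 for j, x in enumerate(w, start=1) if j != i)

w = (7, 5, 6, 3, 4, 2, 1)
assert abs(w[4 - 1] - 4) == 1                 # |w(4) - 4| = 1
assert flatten(w, 4) == (6, 4, 5, 3, 2, 1)    # fl_4(w) = 645321
```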
Cornwell and McNew~\cite{CM} give the following recursive characterization of permutations with unlinked cycle diagrams.
\begin{thm}
Suppose $w$ is unlinked. Then either
\begin{itemize}
\item $w\in S_1$ (so $w=1$ in one-line notation), OR
\item There exists $i$ with $|w(i)-i|\leq 1$, and $\fl_i(w)$ is unlinked.
\end{itemize}
\end{thm}
This characterization is assembled from several statements in their paper, and we consider all permutations instead of only derangements, so we explain how to obtain this statement from their work. References to specific statements are by the numbering in~\cite{CM}.
\begin{proof}
Suppose $w\in S_n$ is unlinked. If $w(i)=i$ for some $i$, then $|w(i)-i|=0$ and $\fl_i(w)$ is unlinked. This handles the case where $w$ has a fixed point.
Applying Lemma 6.3 repeatedly until some $\tau_i$ is a single cycle, we see that $w$ has some cycle involving the consecutive entries $j, j+1, \ldots, k$. Now Proposition 5.10 applied to this cycle shows that there is some index $i$ with $j\leq i\leq k$ such that $|w(i)-i|=1$. The process of going from the diagram $D$ to the diagram $D_0$ described in the second paragraph of the proof of Proposition 5.11 is precisely $\fl_i$.
\end{proof}
\begin{example}
Let $w=7563421$. Then $w(4)=3$, so $|w(4)-4|=1$. Furthermore, $\fl_4(w)=645321$, which is also unlinked.
\end{example}
Given a permutation $w\in S_n$, an index $j$ is a {\bf left-to-right maximum} if $w(j)>w(i)$ for all $i<j$. An index $j$ is a {\bf right-to-left minimum} if $w(j)<w(i)$ for all $i>j$.
Hadjicostas and Monico~\cite[Theorem 4.1]{HM} give the following recursive characterization of shallow permutations.
\begin{thm}
\label{thm:shallow}
Suppose $w\in S_n$ is shallow. Then either
\begin{itemize}
\item $w\in S_1$ (so $w=1$ in one line notation), OR
\item $w(n)=n$, and the permutation $w'\in S_{n-1}$ with $w'(i)=w(i)$ for all $i$ is shallow, OR
\item $w(n)=k$, $w^{-1}(n)=j$, and the permutation $w'\in S_{n-1}$ defined by setting $w'(i)=w(i)$ for $i\neq j$ and $w'(j)=k$ is shallow with either a left-to-right maximum or right-to-left minimum at $j$.
\end{itemize}
\end{thm}
\begin{example}
If $w=7563421$, then $w'=156342$ is shallow with both a left-to-right maximum and a right-to-left minimum at position $1$. If $w=45231$, then $w'=4123$ is shallow with a right-to-left minimum at position $2$.
\end{example}
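The reduction step of Theorem~\ref{thm:shallow} is easy to implement; the following sketch (our own helper names) reproduces both computations in the example above:

```python
def hm_reduce(w):
    # one Hadjicostas--Monico step for w with w(n) != n:
    # if w(n) = k and w^{-1}(n) = j, form w' in S_{n-1} with w'(j) = k
    n = len(w)
    k, j = w[-1], w.index(n) + 1
    wp = list(w[:-1])
    wp[j - 1] = k
    return tuple(wp), j

def is_ltr_max(w, j):
    # j is a left-to-right maximum if w(j) > w(i) for all i < j
    return all(w[i] < w[j - 1] for i in range(j - 1))

def is_rtl_min(w, j):
    # j is a right-to-left minimum if w(j) < w(i) for all i > j
    return all(w[i] > w[j - 1] for i in range(j, len(w)))

wp, j = hm_reduce((7, 5, 6, 3, 4, 2, 1))
assert (wp, j) == ((1, 5, 6, 3, 4, 2), 1)
assert is_ltr_max(wp, j) and is_rtl_min(wp, j)

wp, j = hm_reduce((4, 5, 2, 3, 1))
assert (wp, j) == ((4, 1, 2, 3), 2)
assert is_rtl_min(wp, j) and not is_ltr_max(wp, j)
```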
\section{Proof of Main Theorem}
To prove our main theorem, we use the two recursive characterizations. We split the proof into two parts, first using the characterization of Cornwell and McNew to prove the following.
\begin{proposition}
Every unlinked permutation is shallow.
\end{proposition}
\begin{proof}
We prove this proposition by induction on $n$. Let $w$ be an unlinked permutation.
For the base case, clearly $\ell(w)+\ell_T(w)=\td(w)$ for the permutation $w=1$. (Both sides are 0.)
For the inductive case, suppose there exists $i$ with $|w(i)-i|\leq 1$ and $\fl_i(w)$ unlinked. Given integers $a$ and $b$, let $a'=a$ if $a<i$ and $a'=a-1$ if $a>i$,
and similarly $b'=b$ if $b<i$ and $b'=b-1$ if $b>i$. Then note that for $a,b\neq i$, $(a,b)$ is an inversion of $w$ if and only if
$(a',b')$ is an inversion of $\fl_i(w)$. Hence $\ell(w)-\ell(\fl_i(w))$ is equal to the number of inversions involving $i$, or, in notation, the number of pairs $(a,i)$ with $a<i$ and $w(a)>w(i)$ and pairs $(i,b)$ with $i<b$ and $w(i)>w(b)$.
We now split into three cases depending on whether $w(i)-i$ is $0$, $1$, or $-1$.
If $w(i)-i=0$, then $\ell_T(\fl_i(w))=\ell_T(w)$, as $\fl_i(w)$ has one fewer cycle, namely the fixed point $i$ that was removed, and $\fl_i(w)$ is a permutation of one fewer element.
Furthermore, since $w(i)=i$, $|\fl_i(w)(a')-a'|=|w(a)-a|$ if and only if $(a,i)$ or $(i,a)$ is not an inversion of $w$, and $|\fl_i(w)(a')-a'|=|w(a)-a|-1$ if it is an inversion. (Note that this is
so simple because the sign of $w(a)-a$ is determined by whether $(a,i)$ is an inversion or $(i,a)$ is an inversion.) Also $w(i)-i=0$. Hence $\ell(w)-\ell(\fl_i(w))=\td(w)-\td(\fl_i(w))$.
By the inductive hypothesis we can assume $\ell(\fl_i(w))+\ell_T(\fl_i(w))=\td(\fl_i(w))$, so $\ell(w)+\ell_T(w)=\td(w)$.
If $w(i)-i=-1$, then the cycle decomposition of $\fl_i(w)$ is the same as that of $w$ except that $i$ is removed and every $b>i$ is replaced by $b-1$. (In particular, $\fl_i(w)(w^{-1}(i))=w(i)=i-1$.) Hence $\ell_T(\fl_i(w))=\ell_T(w)-1$. Furthermore, also in this case, $|\fl_i(w)(a')-a'|=|w(a)-a|$ if and only if $(a,i)$ or $(i,a)$ is not an inversion of $w$, and $|\fl_i(w)(a')-a'|=|w(a)-a|-1$ if it is an inversion. However, $|w(i)-i|=1$, so $\td(w)-\td(\fl_i(w))=\ell(w)-\ell(\fl_i(w))+1$.
Again by the inductive hypothesis we can assume $\ell(\fl_i(w))+\ell_T(\fl_i(w))=\td(\fl_i(w))$, so $\ell(w)+\ell_T(w)=\td(w)$.
The proof where $w(i)-i=1$ is similar to the previous case, as again we have $\ell_T(\fl_i(w))=\ell_T(w)-1$ and $\td(w)-\td(\fl_i(w))=\ell(w)-\ell(\fl_i(w))+1$.
\end{proof}
\begin{example}
Let $w=7563421$, and let $i=4$, so $w(i)-i=-1$. Then $\fl_4(w)=645321$, with $\ell_T(\fl_4(w))=4$. Furthermore, $\td(w)-\td(\fl_4(w))=6$, and $\ell(w)-\ell(\fl_4(w))=5$.
\end{example}
We now follow the recursive characterization of Hadjicostas and Monico to prove the following:
\begin{proposition}
Every shallow permutation is unlinked.
\end{proposition}
\begin{proof}
We prove this by induction on $n$.
If $w\in S_1$, then the associated link is a single small unknotted and unlinked loop.
If $w(n)=n$, $w'\in S_{n-1}$ is defined by $w'(i)=w(i)$ for all $i$ with $1\leq i\leq n-1$, and $w'$ is unlinked, then the cycle diagram of $w$ is obtained from that of $w'$ by adding a small unknotted and unlinked loop at the top right, so it is also unlinked.
Now suppose $w(n)=k$, $w^{-1}(n)=j$, $w'$ as defined in Theorem~\ref{thm:shallow} is shallow, and $w'(j)=k$ is a right-to-left minimum. The cycle diagram of $w$ can be obtained from the cycle diagram of $w'$ by deleting the vertical segment
from $(j,j)$ to $(j,k)$ and replacing it with segments from $(j,j)$ to $(j,n)$ to $(n,n)$ to $(n,k)$ to $(j,k)$. Since $(j,k)$ is a right-to-left minimum in $w'$, the only crossings made by the new
segments are on the vertical segment from $(j,j)$ to $(j,n)$. Since they are on a vertical segment, these are all overcrossings. Hence this long loop in the link associated to $w$ can be
slid around over the top of the knot and shrunk to the vertical segment from $(j,j)$ to $(j,k)$, which also only has overcrossings. Therefore, the link types of $w$ and $w'$ are the same.
By induction, $w'$ is unlinked, so $w$ is also unlinked.
One has a similar argument if $w'(j)=k$ is a left-to-right maximum, except that the crossings are undercrossings associated to horizontal segments and hence the isotopy takes place under the rest of the link. If $w'(j)=k$ is both a left-to-right maximum and a right-to-left minimum, then $j=k$ and the new segments make no crossings at all, forming a free unknotted link component.
\end{proof}
\begin{example}
Let $w=7563421$. Here we have $k=1$, $j=1$, and $j=k$ is both a left-to-right maximum and a right-to-left minimum. One can see that the cycle $(17)$ produces a free unknotted link component that can be shrunk to a little loop at $1$.
\begin{figure}[htbp]
\cyclefig{4,5,2,3,1}\,\,\cyclefig{4,1,2,3}
\caption{\label{diag:45231isotopy}Knot diagrams for $w=45231$ and $w'=4123$}
\end{figure}
Now let $w=45231$. Then $k=1$, $j=2$, and $w'=4123$. One can see from Figure~\ref{diag:45231isotopy} that the knot diagrams for $w=45231$ and $w'=4123$ are isotopic as described above.
\end{example}
| {
"timestamp": "2022-02-01T02:32:50",
"yymm": "2201",
"arxiv_id": "2201.12949",
"language": "en",
"url": "https://arxiv.org/abs/2201.12949",
"abstract": "Diaconis and Graham studied a measure of distance from the identity in the symmetric group called total displacement and showed that it is bounded below by the sum of length and reflection length. They asked for a characterization of the permutations where this bound is an equality; we call these the shallow permutations. Cornwell and McNew recently interpreted the cycle diagram of a permutation as a knot diagram and studied the set of permutations for which the corresponding link is an unlink. We show the shallow permutations are precisely the unlinked permutations. As Cornwell and McNew give a generating function counting unlinked permutations, this gives a generating function counting shallow permutations.",
"subjects": "Combinatorics (math.CO)",
"title": "The shallow permutations are the unlinked permutations",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9893474879039229,
"lm_q2_score": 0.8152324938410784,
"lm_q1q2_score": 0.8065482198393212
} |
https://arxiv.org/abs/1405.2805 | Cross-intersecting families of vectors | Given a sequence of positive integers $p = (p_1, . . ., p_n)$, let $S_p$ denote the family of all sequences of positive integers $x = (x_1,...,x_n)$ such that $x_i \le p_i$ for all $i$. Two families of sequences (or vectors), $A,B \subseteq S_p$, are said to be $r$-cross-intersecting if no matter how we select $x \in A$ and $y \in B$, there are at least $r$ distinct indices $i$ such that $x_i = y_i$. We determine the maximum value of $|A|\cdot|B|$ over all pairs of $r$- cross-intersecting families and characterize the extremal pairs for $r \ge 1$, provided that $\min p_i >r+1$. The case $\min p_i \le r+1$ is quite different. For this case, we have a conjecture, which we can verify under additional assumptions. Our results generalize and strengthen several previous results by Berge, Frankl, Füredi, Livingston, Moon, and Tokushige, and answers a question of Zhang. | \section{Introduction}
The Erd\H os-Ko-Rado theorem~\cite{EKR61} states that for $n\geq 2k$, every family of pairwise intersecting $k$-element subsets of an $n$-element set consists of at most ${n-1\choose k-1}$ subsets, as many as the star-like family of all subsets containing a fixed element of the underlying set. This was the starting point of a whole new area within combinatorics: extremal set theory; see~\cite{GK78}, \cite{Bol86}, \cite{DeF83}, \cite{F95}. The Erd\H os-Ko-Rado theorem has been extended and generalized to other structures: to multisets, divisors of an integer, subspaces of a vector space, families of permutations, etc. It was also generalized to ``cross-intersecting" families, i.e., to families $A$ and $B$ with the property that every element of $A$ intersects all elements of $B$; see Hilton~\cite{Hi77}, Moon~\cite{Mo82}, and Pyber~\cite{Py86}.
\smallskip
For any positive integer $k$, we write $[k]$ for the set $\{1,\dots,k\}$. Given a sequence of positive integers $p=(p_1,\dots,p_n)$, let
$$S_p=[p_1]\times\cdots\times[p_n]=\{(x_1,\dots,x_n)\; :\; x_i\in[p_i]\hbox{ for }i\in[n]\}.$$
We will refer to the elements of $S_p$ as {\em vectors}. The {\em Hamming
distance} between the vectors $x,y\in S_p$ is $|\{i\in[n]\;:\; x_i\ne y_i\}|$
and is denoted by $d(x,y)$. Let $r\ge1$ be an
integer. Two vectors $x,y\in
S_p$ are said to be {\em $r$-intersecting} if $d(x,y)\le n-r$. (This term
originates in the
observation that if we represent a vector $x=(x_1,\dots,x_n)\in S_p$ by the
set $\{(i,x_i)\; :\; i\in[n]\}$, then $x$ and $y\in S_p$ are $r$-intersecting if
and only if the sets representing them have at least $r$ common elements.)
Two families $A,B\subseteq S_p$ are {\em $r$-cross-intersecting}, if every
pair $x\in A$, $y\in B$ is $r$-intersecting. If $(A,A)$ is an
$r$-cross-intersecting pair, we say $A$ is {\em $r$-intersecting}. We simply say {\em
intersecting} or {\em cross-intersecting} to mean $1$-intersecting or
$1$-cross-intersecting, respectively.
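The equivalence between the Hamming-distance formulation and the set formulation is immediate, and it can be confirmed exhaustively on a small example. In the following sketch (the names are our own choices), note that $|\{(i,x_i)\}\cap\{(i,y_i)\}| = n-d(x,y)$:

```python
from itertools import product

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

def as_set(x):
    # represent x = (x_1, ..., x_n) by {(i, x_i) : i in [n]}
    return {(i, xi) for i, xi in enumerate(x, start=1)}

p = (2, 3, 2, 3)
n, r = len(p), 2
S_p = list(product(*[range(1, pi + 1) for pi in p]))
for x in S_p:
    for y in S_p:
        # x, y are r-intersecting iff d(x, y) <= n - r
        # iff their set representations share at least r elements
        assert (hamming(x, y) <= n - r) == (len(as_set(x) & as_set(y)) >= r)
```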
\smallskip
The investigation of the maximum value for $|A|\cdot|B|$ for cross-intersecting pairs of families $A,B\subseteq S_p$ was initiated by Moon~\cite{Mo82}. She proved, using a clever induction argument, that in the special case when $p_1=p_2=\dots=p_n=k$ for some $k\ge 3$, every cross-intersecting pair $A,B\subseteq S_p$ satisfies
$$|A|\cdot|B|\le k^{2n-2},$$
with equality if and only if $A=B=\{x\in S_p\; :\; x_i=j\}$, for some $i\in[n]$ and $j\in[k]$. In the case $A=B$, Moon's theorem had been discovered by Berge~\cite{Be74}, Livingston~\cite{Liv79}, and Borg~\cite{Bo08}. See also Stanton~\cite{St80}. In his report on Livingston's paper, published in the {\em Mathematical Reviews}, Kleitman gave an extremely short proof for the case $A=B$, based on a shifting argument. Zhang~\cite{Zh13} established a somewhat weaker result, using a generalization of Katona's circle method~\cite{K72}. Note that for $k=2$, we can take $A=B$ to be any family of $2^{n-1}$ vectors without containing a pair $(x_1,\ldots,x_n), (y_1,\ldots,y_n)$ with $x_i+y_i=3$ for every $i$. Then $A$ is an intersecting family with $|A|^2=2^{2n-2}$, which is not of the type described in Moon's theorem.
Moon also considered $r$-cross-intersecting pairs in $S_p$ with $p_1=p_2=\dots=p_n=k$ for some $k > r+1$, and characterized all pairs for which $|A|\cdot|B|$ attains its maximum, that is, we have
$$|A|\cdot|B|= k^{2(n-r)}.$$
The assumption $k>r+1$ is necessary. See Tokushige~\cite{To13}, for a somewhat weaker result, using algebraic techniques.
\smallskip
Zhang~\cite{Zh13} suggested that Moon's results may be extended to arbitrary
sequences of positive integers $p=(p_1,\dots,p_n)$. The aim of this note is
twofold: (1) to establish such an extension under the assumption $\min_i p_i > r+1$,
and (2) to formulate a conjecture that covers essentially all
other interesting cases. We verify this conjecture in several special cases.
We start with the special case $r=1$, which has also been settled independently by Borg~\cite{Bo14}, using different techniques.
\medskip
\noindent
{\bf Theorem 1.} {\em Let $p=(p_1,\dots,p_n)$ be a sequence of positive integers
and let $A,B\subseteq S_p$ form a pair of cross-intersecting families of
vectors.
We have $|A|\cdot|B|\le|S_p|^2/k^2$, where $k=\min_ip_i$. Equality holds for
the case $A=B=\{x\in S_p\; :\; x_i=j\}$, whenever $i\in[n]$ satisfies $p_i=k$
and $j\in[k]$. For $k\ne2$, there are no other extremal cross-intersecting
families.}
\medskip
We say that a coordinate $i\in[n]$ is {\em irrelevant} for a set $A\subseteq
S_p$ if, whenever two elements of $S_p$ differ only in coordinate $i$ and $A$
contains one of them, it also contains the other. Otherwise, we say that $i$
is {\em relevant} for $A$.
Note that no coordinate $i$ with $p_i=1$ can be relevant for any family. Each such
coordinate forces an intersection between every pair of vectors. So, if we
delete it, every $r$-cross-intersecting pair becomes $(r-1)$-cross-intersecting.
Therefore, from now on we will always assume that we have
$p_i\ge2$ for every $i$.
We call a sequence of integers
$p=(p_1,\dots,p_n)$ a {\em size vector} if $p_i\ge2$ for all $i$. The {\em
length} of $p$ is $n$.
We say that an $r$-cross-intersecting pair $A,B\subseteq
S_p$ is {\em maximal} if it maximizes the value $|A|\cdot|B|$.
\smallskip
Using this notation and terminology, Theorem~1 can be rephrased as follows.
\medskip
\noindent{\bf Theorem 1'.}
{\em Let $p=(p_1,\dots,p_n)$ be a sequence of integers with $k=\min_ip_i>2$.
For any maximal pair of cross-intersecting families,
$A,B\subseteq S_p$, we have $A=B$, and there is a single coordinate which is
relevant for $A$. The relevant coordinate $i$ must satisfy $p_i=k$.}
\medskip
See Section~\ref{cr} for a complete characterization of maximal
cross-intersecting pairs in the $k=2$ case. Here we mention that only the
coordinates with $p_i=2$ can be relevant for them, but
for certain pairs, {\em all} such coordinates are relevant
simultaneously. For example, let $n$ be odd, $p=(2,\ldots,2)$, and let $A=B$
consist of all vectors in $S_p$ which have at most $\lfloor n/2\rfloor$
coordinates that are $1$. This makes $(A,B)$ a maximal cross-intersecting pair.
\smallskip
Let $T\subseteq[n]$ be a subset of the coordinates, let $x_0\in S_p$ be an
arbitrary vector, and let $k$ be an integer satisfying $0\le k\le|T|$. The {\em Hamming ball} of
radius $k$ around $x_0$ in the coordinates $T$ is defined as the family
$$B_k=\{x\in S_p\; :\;|\{i\in T\; :\; x_i\ne(x_0)_i\}|\le k\}.$$
Note that the pair $(B_k,B_l)$ is
$(|T|-k-l)$-cross-intersecting. We use the word {\em ball} to refer to any
Hamming ball without specifying its center, radius
or its set of coordinates. A Hamming ball of radius $0$ in coordinates $T$ is
said to be obtained by {\em fixing} the coordinates in $T$.
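The observation that $(B_k,B_l)$ is $(|T|-k-l)$-cross-intersecting can be illustrated computationally (0-based coordinate indices in this editorial sketch): any $x\in B_k$ and $y\in B_l$ both agree with $x_0$ on at least $|T|-k-l$ coordinates of $T$, hence agree with each other there.

```python
from itertools import product

p = (3, 3, 3, 3)
S = list(product(range(1, 4), repeat=4))
x0 = (1, 1, 1, 1)
T = (0, 1, 2)          # coordinates of the ball (0-based here)

def ball(radius):
    # Vectors differing from x0 in at most `radius` coordinates of T.
    return [x for x in S if sum(x[i] != x0[i] for i in T) <= radius]

k, l = 1, 1
r = len(T) - k - l     # the pair (B_k, B_l) should be r-cross-intersecting
agreements = min(sum(a == b for a, b in zip(x, y))
                 for x in ball(k) for y in ball(l))
print(agreements >= r)  # True
```

Here `ball(0)` is the family obtained by fixing the coordinates in $T$.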
For the proof of Theorem~1, we need the following statement, which will be established by induction on $n$, using the idea in~\cite{Mo82}.
\medskip
\noindent
{\bf Lemma 2.} {\em Let $1\le r<n$, let $p=(p_1,\dots,p_n)$ be a size
vector satisfying $3\le p_1\le p_2\le\cdots\le p_n$ and let
$A,B\subseteq S_p$ form a pair of $r$-cross-intersecting families. If
$$\frac{2}{p_{r+1}}+\sum_{i=1}^r\frac{1}{p_i}\le1,$$
then
$|A|\cdot|B|\le\prod_{i=r+1}^np_i^2$. In case of equality, we have
$A=B$ and this family can be obtained by fixing $r$ coordinates in
$S_p$.}
\medskip
By fixing any $r$ coordinates, we obtain a ``trivial'' $r$-intersecting
family $A=B\subseteq S_p$. As was observed by Frankl and F\"uredi \cite{FF80}, for certain size vectors not all maximal size $r$-intersecting families can be obtained in this way. They considered size vectors $p=(k,\ldots,k)$
with $n\ge r+2$ coordinates, and noticed that a Hamming ball of radius $1$ in $r+2$ coordinates is $r$-intersecting. Moreover, for $k\le r$, this family is strictly larger than the trivial $r$-intersecting family. See also~\cite{AhK98}.
On the other hand, as was mentioned before, for $k\ge r+2$, Moon \cite{Mo82} proved that among all $r$-intersecting families, the trivial ones are maximal.
\smallskip
This leaves open only the case $k=r+1$, where the trivial $r$-intersecting
families and the radius $1$ balls in $r+2$ coordinates have precisely the same
size. We believe that in this case there are no larger $r$-intersecting
families. For $r=1$, this is easy to verify and has been verified (it also
follows from our Theorem~1, which deals with the asymmetric case, where $A$
and $B$ do not necessarily coincide). Our Theorem~7 also settles the problem
for $r>3$. The intermediate cases $r=2$ and $r=3$ are still open, but they could possibly be handled by computer search.
\smallskip
Therefore, to characterize maximal size $r$-intersecting families $A$ or
maximal $r$-cross-intersecting pairs of families $(A,B)$ for {\em all} size
vectors, we cannot restrict ourselves to fixing $r$ coordinates. We make the
following conjecture that can roughly be considered as a generalization of the
Frankl-F\"uredi conjecture \cite{FF80} that has been proved by Frankl and Tokushige \cite{FT99}. The
generalization is twofold: we consider $r$-cross-intersecting pairs rather
than $r$-intersecting families and we allow arbitrary size vectors not just
vectors with all-equal coordinates.
\medskip
\noindent
{\bf Conjecture 3.} {\em If $1\le r\le n$ and $p$ is a size
vector of length $n$, then there exists a maximal pair of
$r$-cross-intersecting families $A,B\subseteq S_p$, where $A$ and $B$ are
balls. If we further have $p_i\ge3$ for all $i\in[n]$, then all maximal
pairs of $r$-cross-intersecting families consist of balls.}
\medskip
Note that the $r=1$ special case of Conjecture~3 is established by
Theorem~1. Some further special cases of the conjecture are settled in Theorem~7.
It is not hard to narrow down the range of possibilities for maximal
$r$-cross-intersecting
pairs that are formed by two balls, $A$ and $B$. In fact, the following theorem
implies that all such pairs are determined up to isomorphism by the
number of relevant coordinates. Assuming that Conjecture~3 is true, finding
$\max|A|\cdot|B|$ for $r$-cross-intersecting families $A,B\subseteq S_p$ boils
down to numeric comparisons between pairs of balls with various radii. In case $p_i\ge3$ for all $i$ (and still assuming
Conjecture~3), the same process also finds all maximal $r$-cross-intersecting
pairs.
\medskip
\noindent
{\bf Theorem 4.} {\em Let $1\le r\le n$ and let $p=(p_1,\dots,p_n)$ be a size
vector. If $A,B\subseteq S_p$ form a maximal pair of $r$-cross-intersecting
families, then either of them determines the other. In particular, $A$ and
$B$ have the same set of relevant coordinates. Moreover, if $A$ is a ball of radius
$l$ around $x_0\in S_p$ in a set of coordinates $T\subseteq[n]$, then
$|T|\ge l+r$, and $B$ is a ball of radius $|T|-l-r$ around $x_0$ in the same set
of the coordinates. Furthermore, we have $p_i\le p_j$ for every $i\in T$ and
$j\in[n]\setminus T$, and the radii of the balls differ by at most 1, that is,
$\big||T|-2l-r\big|\le1$.
}
\medskip
Note that if $A=B$ for a maximal pair $(A,B)$ of $r$-cross-intersecting
families, then $A$ is also a maximal size $r$-intersecting family. This is the
case, in particular, if $A$ and $B$ are balls of equal radii. However, for many size
vectors, no maximal $r$-cross-intersecting pair consists of $r$-intersecting families, as the maximal $r$-cross-intersecting
pairs are often formed by balls whose radii differ by one. For example, for the size
vector $p=(3,3,3,3,3)$, the largest $4$-intersecting family $C$ is obtained by
fixing four coordinates, while the maximal $4$-cross-intersecting pair is formed
by a singleton $A=\{x\}$ and a ball $B$ of radius $1$ around $x$ in all
coordinates. Here we have $|A|\cdot|B|=11>|C|^2=9$.
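The numbers in this example are easy to reproduce; the following editorial sketch checks that the singleton together with the radius-$1$ ball is indeed $4$-cross-intersecting and compares the two products:

```python
from itertools import product

# Size vector p = (3,3,3,3,3); A = {x}, B = radius-1 ball around x in all
# five coordinates, C = the trivial family fixing four coordinates.
S = list(product(range(1, 4), repeat=5))
x = (1, 1, 1, 1, 1)

B = [y for y in S if sum(a != b for a, b in zip(x, y)) <= 1]
C = [y for y in S if y[:4] == (1, 1, 1, 1)]

# (A, B) is 4-cross-intersecting: every y in B agrees with x in >= 4 places.
assert all(sum(a == b for a, b in zip(x, y)) >= 4 for y in B)
print(len(B), len(C) ** 2)  # 11 9
```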
As we have indicated above, we have been unable to prove Conjecture~3 in its full generality, but we were able to verify it
in several interesting special cases. We will proceed in two steps. First we argue, using {\em entropies}, that the number of relevant coordinates in a maximal $r$-cross-intersecting family is bounded. Then we apply combinatorial methods to prove the conjecture under the assumption that the number of relevant coordinates is small.
In the case where there are many relevant coordinates for a pair of maximal
$r$-cross-intersecting families, we use entropies to bound the size of the
families and to prove
\medskip
\noindent
{\bf Theorem 5.} {\em Let $1\le r\le n$, let $p=(p_1,\dots,p_n)$ be a size
vector, let $A,B\subseteq S_p$ form a maximal pair of $r$-cross-intersecting
families, and let $T$ be the set of coordinates that are relevant for $A$ or
$B$. Then neither the size of $A$ nor the size of $B$ can exceed
$$\frac{|S_p|}{\prod_{i\in T}(p_i-1)^{1-2/p_i}}.$$
}
\medskip
We use this theorem to bound the number of relevant coordinates $i$ with
$p_i>2$. The number of relevant coordinates $i$ with $p_i=2$ can be unbounded; see Section~5.
\medskip
\noindent
{\bf Theorem 6.} {\em Let $1\le r\le n$, let $p=(p_1,\dots,p_n)$ be a size
vector, and let $A,B\subseteq S_p$ form a maximal pair of
$r$-cross-intersecting families.
For the set of coordinates $T$ relevant for
$A$ or $B$, we have $$\prod_{i=1}^rp_i\ge\prod_{i\in T}(p_i-1)^{1-2/p_i},$$
which implies that $|\{i\in T\; :\; p_i>2\}|<5r$.}
\medskip
We can characterize the maximal $r$-cross-intersecting pairs for
all size vectors $p$ satisfying $\min p_i>r+1$, and in many other cases.
\medskip
\noindent
{\bf Theorem 7.} {\em Let $1\le r\le n$, let $p=(p_1,\dots,p_n)$ be a size
vector with $p_1\le p_2\le\cdots\le p_n$, and let $A,B\subseteq
S_p$ form a pair of $r$-cross-intersecting families.
\begin{enumerate}
\item If $p_1>r+1$,
we have $|A|\cdot|B|\le\prod_{i=r+1}^np_i^2$. In case of equality,
$A=B$ holds and this family can be obtained by fixing $r$ coordinates in $S_p$.
\item If $p_1=r+1>4$, we have $|A|\cdot|B|\le\prod_{i=r+1}^np_i^2$. In case of equality,
$A=B$ holds and this family can be obtained either by fixing
$r$ coordinates in $S_p$ or by taking a Hamming ball of radius $1$ in
$r+2$ coordinates $i$, all satisfying $p_i=r+1$.
\item There is a function $t(r)=r/2+o(r)$ such that if $p_1\ge t(r)$ and
$(A,B)$ is a maximal $r$-cross-intersecting pair,
then the families $A$ and $B$ are balls in at most $r+3$
coordinates.
\end{enumerate}}
\medskip
The proof of Theorem~7 relies on the following result.
\medskip
\noindent
{\bf Theorem 8.} {\em Let $1\le r\le n$ and let
$p$ be a size vector of length $n$.
\begin{enumerate}
\item If there exists a maximal pair of $r$-cross-intersecting families in
$S_p$ with at most $r+2$ relevant coordinates,
then there exists such a pair consisting of balls.
\item If $p_i>2$ for all $i\in[n]$ and
$A,B\subseteq S_p$ form a maximal pair of $r$-cross-intersecting families
with at most $r+3$ relevant coordinates, then $A$ and $B$ are balls.
\end{enumerate}}
\medskip
With an involved case analysis, it may be possible to extend Theorem~8 to
pairs with more relevant coordinates. Any such improvement would carry
over to Theorem~7.
All of our results remain meaningful in the symmetric case where $A=B$. For instance, in
this case, Theorem~1 (also proved by Borg \cite{Bo14}) states that every intersecting family $A\subseteq S_p$
has at most $|S_p|/k$ members, where $k=\min_ip_i$. In case $k>2$,
equality can be achieved only by fixing some coordinate $i$ with $p_i=k$. Note
that in the case $A=B$ (i.e., $r$-intersecting families) the exact maximum
size is known for size vectors $(q,\ldots,q)$, \cite{FT99}.
\section{Proofs of Theorems 4 and 1}
First, we verify Theorem~4 and a technical lemma (see Lemma~9 below) which generalizes the corresponding result in~\cite{Mo82}. Our proof is slightly simpler. Lemma~9 will enable us to deduce Lemma~2, the main ingredient of the proof of Theorem~1, presented at the end of the section.
\medskip
\noindent
{\em Proof of Theorem 4.} The first statement is self-evident: if $A,B\subseteq
S_p$ form a maximal pair of $r$-cross-intersecting families, then
$$B=\{x\in S_p\; :\; x\hbox{ $r$-intersects $y$ for all }y\in A\}.$$
If a coordinate is irrelevant for $A$, then it is also irrelevant for $B$ defined by this formula. Therefore, by symmetry, $A$ and $B$ have the same set of relevant coordinates.
If $A$ is the Hamming ball around $x_0$ of radius $l$ in
coordinates $T$, then we have $B=\emptyset$ if $|T|<l+r$, which is not
possible for a maximal cross-intersecting family. If $|T|\ge l+r$, we obtain
the ball claimed in the theorem. For every $i\in T$, $j\in[n]\setminus T$,
consider the set $T'=(T\setminus\{i\})\cup\{j\}$ and the Hamming balls $A'$
and $B'$ of radii $l$ and $|T|-l-r$ around $x_0$ in the coordinates
$T'$. These balls form an $r$-cross-intersecting pair and in case $p_i>p_j$, we
have $|A'|>|A|$ and $|B'|>|B|$, contradicting the maximality of the pair
$(A,B)$.
Finally let $B_l$ be a ball of radius $l$ around some fixed vector $x$ in
a fixed set $T$ of coordinates. We claim that the size $|B_l|$ of these balls
is strictly log-concave, that is, we have
$$|B_l|^2>|B_{l-1}|\cdot|B_{l+1}|$$
for $1\le l<|T|$. As balls around different centers have the same size, we can
represent the left-hand side as $|B_l|^2=|C|$, where
$$C=\{(y,z)\mid y,z\in S_p, d(x,y)\le l,d(y,z)\le l\}.$$
Similarly, the right-hand side can be
represented as $|B_{l-1}|\cdot|B_{l+1}|=|D|$ with
$$D=\{(y,z)\mid y,z\in S_p,d(x,y)\le l-1,d(y,z)\le l+1\}.$$
We say that two pairs $(y,z)$ and $(y',z')$ (all four terms from $S_p$)
are {\em equivalent} if $z=z'$ and for every $i\in[n]$ we have either
$y_i=y'_i$ or $y_i,y'_i\in\{x_i,z_i\}$. Let us fix an equivalence class
$O$. For all $(y,z)\in O$, the element $z$ and some
coordinates $y_i$ of $y$ are {\em fixed}. We call the remaining coordinates $i$ {\em
open}. For an open coordinate $i$, the value of $y_i$ must be one of the
two distinct values $x_i$ and $z_i$. If $m$ denotes the number of open
coordinates in $O$, we have $|O|=2^m$. For a pair $(y,z)\in O$, the
distance $d(x,y)=d_1+d_2(y)$, where $d_1$ is the number of fixed
coordinates $i$ with $x_i\ne y_i$, while $d_2(y)$ is the number of open
coordinates $i$ with $x_i\ne y_i$. Note that $d_1$ is constant for all
elements of $O$, while $d_2(y)$ takes any value $j$ for ${m\choose j}$ members
of $O$. Similarly, we can write $d(y,z)=d_1+(m-d_2(y))$, as $y_i\ne z_i$ holds
for the same fixed coordinates where $x_i\ne y_i$, and $y_i$ is equal to
exactly one of $x_i$ and $z_i$ for an open coordinate $i$. Summarizing, we
have
$$C\cap O=\{(y,z)\in O\mid d_1+m-l\le d_2(y)\le l-d_1\},$$
$$D\cap O=\{(y,z)\in O\mid d_1+m-l-1\le d_2(y)\le l-d_1-1\}.$$
We claim that $|C\cap O|\ge|D\cap O|$. Indeed, if $l-d_1<m/2$, then $C\cap
O=D\cap O=\emptyset$. Otherwise, we have $|C\cap O|-|D\cap O|={m\choose
l-d_1}-{m\choose l-d_1+1}\ge0$. Note also that equality holds only if
$l-d_1<m/2$ or $l-d_1>m$, in which cases $C$ and $D$ are disjoint from $O$ or
contain $O$, respectively.
As $C$ contains at least as many pairs from every equivalence class as $D$
does, we have $|C|\ge|D|$. Equality cannot hold for all equivalence classes, so
we have $|C|>|D|$, as claimed.
To finish the proof of the theorem, we need to verify that the pair
$(B_{l_1},B_{l_2})$ is not maximal $r$-cross-intersecting if $r=|T|-l_1-l_2$
and $|l_1-l_2|\ge2$. This follows from the log-concavity, because in case $l_1\ge
l_2+2$ the pair $(B_{l_1-1},B_{l_2+1})$ is also $r$-cross-intersecting and $|B_{l_1-1}|\cdot|B_{l_2+1}|>|B_{l_1}|\cdot|B_{l_2}|$.
\qed
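The strict log-concavity used in this proof can be illustrated numerically (an editorial check, not part of the argument). For balls around the all-$1$ vector in all four coordinates of $p=(3,3,3,3)$ we have $|B_l|=\sum_{j\le l}{4\choose j}2^j$:

```python
from math import comb

n = 4  # p = (3,3,3,3), balls in the full set of coordinates
sizes = [sum(comb(n, j) * 2 ** j for j in range(l + 1)) for l in range(n + 1)]
print(sizes)  # [1, 9, 33, 65, 81]
for l in range(1, n):
    # Strict log-concavity of the ball sizes, as shown in the proof.
    assert sizes[l] ** 2 > sizes[l - 1] * sizes[l + 1]
```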
\medskip
The following lemma will also be used in the proof of Theorem~5, presented in the next section.
\medskip
\noindent
{\bf Lemma 9.} {\em Let $1\le r\le n$, let $p=(p_1,\dots,p_n)$ be a size vector, and let $A,B\subseteq S_p$ form a maximal pair of $r$-cross-intersecting families.
If $i\in[n]$ is a relevant coordinate for $A$ or $B$, then there exists a value $l\in[p_i]$ such that
$$|\{x\in A\; :\; x_i\ne l\}|\le|A|/p_i,$$
$$|\{y\in B\; :\; y_i\ne l\}|\le|B|/p_i.$$
}
\medskip
\noindent
{\em Proof.} Let us fix $r, n, p, i, A,$ and $B$ as in the
lemma. By Theorem~4, if a coordinate is irrelevant for $A$, then
it is also irrelevant for $B$ and vice versa.
In the case $n=r$, we have $A=B$ and this family must be a singleton, so the lemma is trivially true. From now on, we assume that $n>r$ and hence the notion of $r$-cross-intersecting families is meaningful for $n-1$ coordinates.
Let $q=(p_1,\ldots,p_{i-1},p_{i+1},\ldots,p_n)$. For any $l\in[p_i]$, let
$$A'_l=\{x\in A\; :\; x_i=l\},$$
$$B'_l=\{y\in B\; :\; y_i=l\},$$
and let $A_l$ and $B_l$ stand for the families obtained from $A'_l$ and $B'_l$, respectively, by dropping their $i$\/th coordinates. By definition, we have $A_l,B_l\subseteq S_q$, and $|A|=\sum_l|A_l|$ and $|B|=\sum_l|B_l|$. Furthermore, for any two distinct elements $l,m\in[p_i],$ the families $A_l$ and $B_m$ are $r$-cross-intersecting, since the vectors in $A'_l$ differ from the vectors in $B'_m$ in the $i$\/th coordinate, and therefore the $r$ indices where they agree must be elsewhere.
Let $Z$ denote the maximum product $|A^*|\cdot|B^*|$ of an
$r$-cross-intersecting pair $A^*,B^*\subseteq S_q$. We have
$|A_l|\cdot|B_m|\le Z$ for all
$l,m\in[p_i]$ with $l\ne m$. Adding an irrelevant $i$\/th coordinate to the
maximal $r$-cross-intersecting pair $A^*,B^*\subseteq S_q$, we obtain a pair
$A^{*\prime},B^{*\prime}\subseteq S_p$ with
$|A^{*\prime}|\cdot|B^{*\prime}|=p_i^2Z$. Using the maximality of $A$ and
$B$, we have $|A|\cdot|B|\ge p_i^2Z$. Let $l_0$ be chosen so as to maximize
$|A_{l_0}|\cdot |B_{l_0}|$, and let $c=|A_{l_0}|\cdot|B_{l_0}|/Z$.
Assume first that $c\le1$. Then we have
$$p_i^2Z\le|A|\cdot|B|=\sum_{l,m\in[p_i]}|A_l|\cdot|B_m|
\le\sum_{l,m\in[p_i]}Z=p_i^2Z.$$
Hence, we must have equality everywhere. This yields that $c=1$ and that $A_l$ and
$B_m$ form a maximal $r$-cross-intersecting pair for all
$l,m\in[p_i]$, $l\ne m$. This also implies that $|A_l|=|A_m|$ for $l,m\in[p_i]$, from which
the statement of the lemma follows, provided that $p_i=2$.
If $p_i\ge3$, then all families $A_l$ must be equal to one another, since one member in a maximal
$r$-cross-intersecting family determines the other, by Theorem~4. This contradicts our
assumption that the $i$\/th coordinate was relevant for $A$.
Thus, we may assume that $c>1$.
\smallskip
For $m\in[p_i]$, $m\ne l_0$, we have
$|A_{l_0}|\cdot|B_m|\le Z=|A_{l_0}|\cdot|B_{l_0}|/c$. Thus,
\begin{equation}\label{eq0}
|B_m|\le|B_{l_0}|/c,
\end{equation}
which yields that
$|B|=\sum_{m\in[p_i]}|B_m|\le(1+(p_i-1)/c)|B_{l_0}|$. By symmetry, we also
have
\begin{equation}\label{eq0b}
|A_m|\le|A_{l_0}|/c
\end{equation}
for $m\ne l_0$ and $|A|\le(1+(p_i-1)/c)|A_{l_0}|$. Combining these inequalities, we obtain
$$p_i^2Z\le|A|\cdot|B|\le(1+(p_i-1)/c)^2|A_{l_0}|\cdot|B_{l_0}|=(1+(p_i-1)/c)^2cZ.$$
We solve the resulting inequality $p_i^2\le c(1+(p_i-1)/c)^2$ for
$c>1$ and conclude that $c\ge(p_i-1)^2$. This inequality, together with Equations
(\ref{eq0}) and (\ref{eq0b}), completes the proof of Lemma~9. \qed
\medskip
\noindent
{\em Proof of Lemma 2.} We proceed by induction on $n$.
Let $A$ and $B$ form a maximal $r$-cross-intersecting pair. It is sufficient to
show that they have only $r$ relevant coordinates. Let us suppose that
the set $T$ of their relevant coordinates satisfies $|T|>r$, and choose a subset
$T'\subseteq T$ with $|T'|=r+1$. By Lemma~9, for every $i\in T'$ there exists
$l_i\in[p_i]$ such that the family $X_i=\{x\in B\; :\; x_i\ne l_i\}$ has cardinality $|X_i|\le|B|/p_i$.
If we assume that
$$\frac{2}{p_{r+1}}+\sum_{i=1}^r\frac{1}{p_i}<1$$
holds (with strict inequality), then this bound on $|X_i|$ would suffice.
In order to also be able to deal with the case
$$\frac{2}{p_{r+1}}+\sum_{i=1}^r\frac{1}{p_i}=1,$$
we show that $|X_i|=|B|/p_i$ is not possible. Considering the proof of
Lemma~9, equality here would mean that the families $A_l$ and $B_l$ (obtained
by dropping the $i$\/th coordinate from the vectors in the sets $\{x\in A\;
:\; x_i=l\}$ and $\{y\in B\; :\; y_i=l\}$, respectively) satisfy the following
condition: both $(A_{l_i},B_m)$ and
$(A_m,B_{l_i})$ should be maximal $r$-cross-intersecting pairs for all $m\ne
l_i$. By the induction hypothesis, this would imply that $A_{l_i}=B_m$ and
$A_m=B_{l_i}$, contradicting that $|A_m|<|A_{l_i}|$ and $|B_m|<|B_{l_i}|$ (see
(\ref{eq0}), in view of $c>1$). Therefore, we have $|X_i|<|B|/p_i$.
Let $C=\{x\in S_p\; :\; x_i=1\hbox{ for all }i\in[r]\}$ be the $r$-intersecting
family obtained by fixing $r$ coordinates in $S_p$.
In the family $D=B\setminus(\bigcup_{i\in T'}X_i)$, the coordinates in $T'$
are fixed. Thus, we have $$|D|\le\prod_{i\in[n]\setminus
T'}p_i\le\prod_{i=r+2}^np_i=|C|/p_{r+1}.$$
On the other hand, we have
$$|D|=|B|-\sum_{i\in T'}|X_i|>|B|(1-\sum_{i\in T'}1/p_i)\ge|B|(1-\sum_{i=1}^{r+1}1/p_i).$$
Comparing the last two inequalities, we obtain
$$|B|<\frac{|C|}{p_{r+1}(1-\sum_{i=1}^{r+1}1/p_i)}.$$
By our assumption on $p$, the denominator is at least $1$, so that we have
$|B|<|C|$. By symmetry, we also have $|A|<|C|$. Thus, $|A|\cdot|B|<|C|^2$
contradicting the maximality of the pair $(A,B)$. This completes the proof of
Lemma~2. \qed
\medskip
Now we can quickly finish the proof of Theorem~1.
\medskip
\noindent
{\em Proof of Theorem~1.} Notice that Lemma~2 implies Theorem~1, whenever
$k=\min_ip_i\ge3$. It remains to verify the statement for $k=1$ and $k=2$.
For $k=1$, it follows from the fact that all pairs of vectors in $S_p$ are
intersecting, thus the only maximal cross-intersecting pair is $A=B=S_p$.
Suppose next that $k=2$. For $x\in S_p$, let $x'\in S_p$ be defined
by $x'_i=(x_i\bmod p_i)+1$ for $i\in[n]$.
Note that $x\mapsto x'$ is a permutation of $S_p$.
Clearly, $x$ and $x'$ are not intersecting, so we either
have $x\notin A$ or $x'\notin B$. As a
consequence, we obtain that $|A|+|B|\le|S_p|$, which, in turn, implies that
$|A|\cdot|B|\le|S_p|^2/4$, as claimed. It also follows that all maximal
pairs satisfy $|A|=|B|=|S_p|/2$. \qed
\section{Using entropy: Proofs of Theorems~5 and~6}
\noindent
{\em Proof of Theorem~5.} Let $r, n, p, A, B$ and $T$ be as in the theorem. Let
us write $y$ for a randomly and uniformly selected element of $B$.
Lemma~9 implies that, for each $i\in T$, there exists a value $l_i\in[p_i]$
such that
\begin{equation}\label{eq1}
Pr[y_i=l_i]\ge1-1/p_i.
\end{equation}
We bound the {\em entropy} $H(y_i)$ of $y_i$ from above by the entropy of the indicator variable of the event $y_i=l_i$ plus the contribution coming from the entropy of $y_i$ assuming $y_i\ne l_i$:
$$H(y_i)\le h(1-1/p_i)+(1/p_i)\log(p_i-1)=\log p_i-(1-2/p_i)\log(p_i-1),$$
where $h(z)=-z\log z-(1-z)\log(1-z)$ is the entropy function, and we used that $1-1/p_i\ge1/2$.
For any $i\in[n]\setminus T$, we use the trivial estimate $H(y_i)\le\log p_i$. By the subadditivity of the entropy, we obtain
$$\log|B|=H(y)\le\sum_{i\in[n]}H(y_i)\le\sum_{i\in T}(\log
p_i-(1-2/p_i)\log(p_i-1))+\sum_{i\in[n]\setminus T}\log p_i,$$
or, equivalently,
$$|B|\le\prod_{i\in T}\frac{p_i}{(p_i-1)^{1-2/p_i}}\prod_{i\in[n]\setminus
T}p_i=\frac{|S_p|}{\prod_{i\in T}(p_i-1)^{1-2/p_i}}$$
as required. The bound on $|A|$ follows by symmetry and completes the proof of
the theorem. \qed
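The algebraic identity behind the entropy estimate above, $h(1-1/p)+(1/p)\log(p-1)=\log p-(1-2/p)\log(p-1)$, is easy to confirm numerically (an editorial check, logs base $2$):

```python
from math import log2

def h(z):
    # Binary entropy function, logs base 2.
    return -z * log2(z) - (1 - z) * log2(1 - z)

for p in range(2, 50):
    lhs = h(1 - 1 / p) + (1 / p) * log2(p - 1)
    rhs = log2(p) - (1 - 2 / p) * log2(p - 1)
    assert abs(lhs - rhs) < 1e-9
print("identity verified")
```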
\medskip
Theorem~6 is a simple corollary of Theorem~5.
\medskip
\noindent
{\em Proof of Theorem~6.}
Fixing the first $r$ coordinates, we obtain the family
$$C=\{x\in S_p\; :\; x_i=1\hbox{ for all }i\in[r]\}.$$
This family is $r$-intersecting. Thus, by the maximality of the pair $(A,B)$, we have
\begin{equation}\label{eq3}
|A|\cdot|B|\ge|C|^2=\left(\prod_{i=r+1}^np_i\right)^2.
\end{equation}
Comparing this with our upper bounds on $|A|$ and $|B|$, we obtain the
inequality claimed in the theorem.
To prove the required bound on the number of relevant coordinates $i$ with
$p_i\ne2$, we assume that the coordinates are ordered, that is, $p_1\le
p_2\le\cdots\le p_n$. Applying the above estimate on $\prod_{i\in[r]}p_i$
and using $(p_i-1)^{1-2/p_i}>p_i^{1/5}$ whenever $p_i\ge3$, the theorem
follows. \qed
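The elementary estimate $(p-1)^{1-2/p}>p^{1/5}$ invoked in this proof can be checked directly; it is tightest at $p=3$ ($2^{1/3}\approx1.2599$ versus $3^{1/5}\approx1.2457$) and the gap widens as $p$ grows:

```python
def holds(p):
    # The estimate (p-1)^(1-2/p) > p^(1/5) used in the proof of Theorem 6.
    return (p - 1) ** (1 - 2 / p) > p ** 0.2

assert all(holds(p) for p in range(3, 10000))
print("estimate verified for 3 <= p < 10000")
```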
\section{Monotone families: Proofs of Theorems~8 and~7}
Given a vector $x\in S_p$, the set ${\mbox{supp}}(x)=\{i\in[n] : x_i>1\}$ is called the
{\em support} of $x$. A family $A\subseteq S_p$ is said to be {\em monotone}, if for any
$x\in A$ and $y\in S_p$ satisfying ${\mbox{supp}}(y)\subseteq{\mbox{supp}}(x)$, we have $y\in A$.
For a family $A\subseteq S_p$, let us define its {\em support} as
${\mbox{supp}}(A)=\{{\mbox{supp}}(x)\; :\; x\in A\}$. For a monotone family $A$, its support is
clearly subset-closed and it uniquely determines $A$, as $A=\{x\in
S_p\, :\,{\mbox{supp}}(x)\in{\mbox{supp}}(A)\}$.
\medskip
The next result shows that if we want to prove Conjecture~3, it is sufficient to prove it for monotone families. This will enable us to establish Theorems~8 and~7, that is, to verify the conjecture for maximal $r$-cross-intersecting pairs with a limited number of relevant
coordinates. Note that similar reduction to monotone families appears also in
\cite{FF80}.
\medskip
\noindent
{\bf Lemma 10.} {\em Let $1\le r\le n$ and let $p$ be a size vector of length
$n$.
There exists a maximal pair of $r$-cross-intersecting families $A,B\subseteq S_p$ such that both $A$ and $B$ are monotone.
If $p_i\ge3$ for all $i\in[n]$, and $A,B\subseteq S_p$ are maximal $r$-cross-intersecting families that are {\em not} balls, then there exists a pair of maximal $r$-cross-intersecting families that consists of monotone families that are not balls and have no more relevant coordinates than
$A$ or $B$.}
\medskip
\noindent
{\em Proof.} Consider the following {\em shift operations}. For any $i\in[n]$ and
$j\in[p_i]\setminus\{1\}$, for any family $A\subseteq S_p$ and any element $x\in A$, we define
\begin{align*}
\phi_i(x)&=(x_1,\dots,x_{i-1},1,x_{i+1},\dots,x_n),\\
\phi_{i,j}(x,A)&=\begin{cases}\phi_i(x)&\mbox{if }x_i=j\mbox{ and }\phi_i(x)\notin A\\x&\mbox{otherwise,}\end{cases}\\
\phi_{i,j}(A)&=\{\phi_{i,j}(x,A)\; :\; x\in A\}.
\end{align*}
Clearly, we have $|\phi_{i,j}(A)|=|A|$ for any family $A\subseteq S_p$. We claim that for
any pair of $r$-cross-intersecting families $A,B\subseteq S_p$, the families $\phi_{i,j}(A)$ and
$\phi_{i,j}(B)$ are also $r$-cross-intersecting. Indeed, if $x\in A$ and
$y\in B$ are $r$-intersecting vectors, then $\phi_{i,j}(x,A)$ and
$\phi_{i,j}(y,B)$ are also $r$-intersecting, unless $x$ and $y$ have exactly
$r$ common coordinates, one of them is $x_i=y_i=j$, and this common coordinate
gets ruined as $\phi_{i,j}(x,A)=x$ and $\phi_{i,j}(y,B)=\phi_i(y)$ (or vice
versa). However,
this is impossible, because this would imply that the vector $\phi_i(x)$ belongs to $A$, in spite of the fact that $\phi_i(x)$ and $y\in B$ are not $r$-intersecting.
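This claim lends itself to an exhaustive sanity check on a tiny example. The sketch below (editorial, with 0-based coordinate indices) enumerates all cross-intersecting pairs for $r=1$ and $p=(2,2)$ and confirms that every shift $\phi_{i,j}$ preserves both the sizes of the families and the cross-intersecting property:

```python
from itertools import product

S = list(product((1, 2), repeat=2))  # S_p for p = (2, 2)

def intersects(x, y):
    return any(a == b for a, b in zip(x, y))

def phi(i, j, A):
    # The shift phi_{i,j}: replace x_i = j by 1 when the result is not in A.
    shifted = set()
    for x in A:
        y = x[:i] + (1,) + x[i + 1:]
        shifted.add(y if x[i] == j and y not in A else x)
    return shifted

for ma in range(1, 1 << len(S)):
    for mb in range(1, 1 << len(S)):
        A = {S[i] for i in range(len(S)) if ma >> i & 1}
        B = {S[i] for i in range(len(S)) if mb >> i & 1}
        if all(intersects(x, y) for x in A for y in B):
            for i in range(2):
                A2, B2 = phi(i, 2, A), phi(i, 2, B)
                assert len(A2) == len(A) and len(B2) == len(B)
                assert all(intersects(x, y) for x in A2 for y in B2)
print("shift preserves cross-intersection")
```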
If $(A,B)$ is a {\em maximal} $r$-cross-intersecting pair, then so is
$(\phi_{i,j}(A),\phi_{i,j}(B))$. Whenever one of these shift operations
changes either of the families $A$ or $B$, the total sum of all coordinates of all elements decreases. Therefore,
after a finite number of shifts we arrive at a maximal pair of
$r$-cross-intersecting families that cannot be changed by further shifting. We claim that
this pair $(A,B)$ is monotone. Let $y\in B$ and $y'\in S_p\setminus B$ be
arbitrary. We show that $B$ is monotone by showing that ${\mbox{supp}}(y')$ is not
contained in ${\mbox{supp}}(y)$. Indeed, by the maximality of the pair $(A,B)$ and using the fact that
$y'\notin B,$ there must exist $x'\in A$ such that $x'$ and $y'$ are not
$r$-cross-intersecting, and hence
$|{\mbox{supp}}(x')\cup{\mbox{supp}}(y')|>n-r$. Applying ``projections'' $\phi_i$ to
$x'$ in the coordinates $i\in{\mbox{supp}}(x')\cap{\mbox{supp}}(y)$, we obtain $x$ with
${\mbox{supp}}(x)={\mbox{supp}}(x')\setminus{\mbox{supp}}(y)$. The shift operations $\phi_{i,j}$
do not change the family $A$, thus $A$ must be closed under the projections
$\phi_i$ and we have $x\in A$. The
supports of $x$ and $y$ are disjoint. Thus, their Hamming distance is
$|{\mbox{supp}}(x)\cup{\mbox{supp}}(y)|$, which is at most $n-r$, as they are
$r$-intersecting. Therefore, ${\mbox{supp}}(x)\cup{\mbox{supp}}(y)={\mbox{supp}}(x')\cup{\mbox{supp}}(y)$ is smaller
than ${\mbox{supp}}(x')\cup{\mbox{supp}}(y')$, showing that ${\mbox{supp}}(y')\not\subseteq{\mbox{supp}}(y)$. This
proves that $B$ is monotone. By symmetry, $A$ is also monotone,
which proves the first claim of the lemma.
\smallskip
To prove the second claim, assume that $p_i\ge3$ for all $i\in[n]$. Note that
Theorem~1 establishes the lemma in the case $r=1$, so from now on we can assume without loss of generality that
$r\ge2$. Let
$A,B\subseteq S_p$ form a maximal $r$-cross-intersecting pair. By the previous
paragraph, this pair can be transformed into a monotone pair by repeated
applications of the shift
operations $\phi_{i,j}$. Clearly, these operations do not introduce new
relevant coordinates. It remains to check that the shifting operations do not
produce balls from non-balls, that is, if $A,B\subseteq S_p$ are
maximal $r$-cross-intersecting families, and $A'=\phi_{i,j}(A)$ and
$B'=\phi_{i,j}(B)$ are balls, then so are $A$ and $B$. In fact, by Theorem~4
it is sufficient to prove that one of them is a ball.
\smallskip
We saw that $A'$ and
$B'$ must also form a maximal $r$-cross-intersecting pair. Thus, by Theorem~4, there
is a set of coordinates $T\subseteq[n]$, a vector $x_0\in S_p$, and
radii $l$ and $m$ satisfying $|T|=r+l+m$ and that $A'$ and $B'$ are the
Hamming balls of radius $l$ and $m$ in coordinates $T$ around the vector
$x_0$. We can assume that $i\in T$, because otherwise $A=A'$ and we are done.
We also have that $(x_0)_i=1$, as otherwise $A'=\phi_{i,j}(A)$ is impossible.
The vectors $x\in S_p$ such that $x_i=j$ and
$$|\{k\in T\, :\, x_k\ne(x_0)_k\}|=l+1$$
are called {\em $A$-critical}. Analogously, the vectors $y\in S_p$ such that $y_i=j$ and $$|\{k\in T\, :\,y_k\ne(x_0)_k\}|=m+1$$
are said to be {\em $B$-critical}. By the definition of
$\phi_{i,j}$, the family $A$ differs from $A'$ by including some
$A$-critical vectors $x$ and losing the corresponding vectors
$\phi_i(x)$. Symmetrically, $B\setminus B'$ consists of some $B$-critical vectors
$y$ and $B'\setminus B$ consists of the corresponding vectors $\phi_i(y)$.
Let us consider the bipartite graph $G$ whose vertices on one side are the
$A$-critical vectors $x$, the vertices on the other side are the $B$-critical
vectors $y$ (considered as disjoint families, even if $l=m$), and $x$ is
adjacent to $y$ if and only if
$|\{k\in[n]\, :\, x_k=y_k\}|=r$. If $x$ and $y$ are
adjacent, then neither the pair $(x,\phi_i(y))$, nor the pair $(\phi_i(x),y)$ is
$r$-intersecting. As $A$ and $B$ are $r$-cross-intersecting, for any pair of adjacent
vertices $x$ and $y$ of $G$, we have $x\in A$ if and only if $y\in B$.
The crucial observation is that the graph $G$ is connected. Note that this is
not the case if
$p_k=2$ for some index $k\notin T$, since all $A$-critical vectors $x$ in a
connected component of $G$ would have the same value $x_k$. However, we
assumed that $p_k>2$ for all $k\in[n]$. In this case, the $A$-critical vectors $x$
and $x'$ have a common $B$-critical neighbor (and, therefore, their distance in
$G$ is $2$) if and only if the symmetric difference of the $l$-element
sets $\{k\in T\setminus\{i\}\, :\, x_k\ne(x_0)_k\}$ and $\{k\in T\setminus\{i\}\, :\,
x'_k\ne(x_0)_k\}$ has at most $2r-2$ elements. We assumed that $r>1$, so this
means that all $A$-critical vectors are indeed in the same component of the
graph $G$. Therefore, either all $A$-critical vectors belong to $A$ or none of
them does. In the
latter case, we have $A=A'$. In the former case, $A$ is the Hamming ball of
radius $l$ in coordinates $T$ around the vector $x_0'$, where $x_0'$ agrees
with $x_0$ in all coordinates but in $(x_0')_i=j$. In either case, $A$ is a
ball as required. \qed
\medskip
\noindent
{\em Proof of Theorem~8.} By Lemma~10, it is enough to restrict our attention to
monotone families $A$ and $B$. We may also assume that all coordinates are
relevant (simply drop the irrelevant coordinates). Thus, we have $n\le
r+3$.
Denote by $U_l$ the Hamming ball of radius $l$
around the all-$1$ vector in the entire set of coordinates $[n]$.
Notice that the monotone families $A$ and $B$ are $r$-cross-intersecting if and
only if for all $a\in{\mbox{supp}}(A)$ and $b\in{\mbox{supp}}(B)$ we have $|a\cup b|\le n-r$.
We consider all possible values of $n-r$, separately.
\smallskip
If $n=r$, both families $A$ and $B$ must coincide with the singleton $U_0$.
\smallskip
If $n=r+1$, it is still true that either $A$ or $B$ is $U_0$, and hence both
families are balls. Otherwise, both
${\mbox{supp}}(A)$ and ${\mbox{supp}}(B)$ have to contain at least one non-empty set, but the
union of these sets has size at most $n-r=1$, so we
have ${\mbox{supp}}(A)={\mbox{supp}}(B)=\{\emptyset,\{i\}\}$ for some $i\in[n]$. This
contradicts our assumption that the coordinate $i$ is relevant for $A$.
\smallskip
If $n=r+2$, we are done if $A=B=U_1$. Otherwise, we must have a $2$-element set
either in ${\mbox{supp}}(A)$ or in ${\mbox{supp}}(B)$. Let us assume that a $2$-element set
$\{i,j\}$ belongs to ${\mbox{supp}}(A)$. Then each set $b\in{\mbox{supp}}(B)$ must satisfy
$b\subseteq\{i,j\}$. This leaves five possibilities for a non-empty monotone
family $B$, as ${\mbox{supp}}(B)$ must be one of the following set systems:
\begin{enumerate}
\item $\{\emptyset\}$,
\item $\{\emptyset,\{i\}\}$,
\item $\{\emptyset,\{j\}\}$,
\item $\{\emptyset,\{i\},\{j\}\}$, and
\item$\{\emptyset,\{i\},\{j\},\{i,j\}\}$.
\end{enumerate}
Cases 2, 3, and 5 are not possible, because either $i$ or $j$ would not be
relevant for $B$.
In case 1, $A$ and $B$ are balls, as claimed. However, this case cannot
occur, as the radii of $A$ and $B$ differ by $2$, contradicting Theorem~4.
It remains to deal with case 4. Here ${\mbox{supp}}(A)$ consists of the sets of
size at most $1$ and the $2$-element set $\{i,j\}$. Define
$$C=\{x\in S_p\; :\; x_k=1\mbox{ for all }k\in[n]\setminus\{i,j\}\}.$$
Note that $|A|+|B|=|U_1|+|C|$, because each vector in $S_p$ appears in the same
number of sets on both sides. Thus, we have either $|A|+|B|\le2|U_1|$ or
$|A|+|B|\le2|C|$. Since $|A|>|B|$, the above inequalities imply $|A|\cdot|B|<|U_1|^2$ or $|A|\cdot|B|<|C|^2$. This
contradicts the maximality of the pair $(A,B)$, because both $U_1$ and $C$ are
$r$-intersecting. The contradiction completes the proof of the case $n-r=2$.
\smallskip
To complete the proof of Theorem~8, we need to deal with the case $n-r=3$, i.e., when
there are $r+3$ relevant coordinates. Note that, as part 1 of
Theorem~8 does not apply to this case, we have $p_i\ge 3$ for $i\in[n]$. This
slightly simplifies the following case analysis, where we consider all
containment-maximal pairs of families $({\mbox{supp}}(A),{\mbox{supp}}(B))$ with the required
condition on the size of the pairwise unions.
Before considering the individual cases, we make a few simple observations.
First, we have $${\mbox{supp}}(A)=\{T\subseteq[n]\mid\forall U\in{\mbox{supp}}(B):|T\cup
U|\le3\},$$
$${\mbox{supp}}(B)=\{U\subseteq[n]\mid\forall T\in{\mbox{supp}}(A):|T\cup U|\le3\}.$$
These are non-empty sets and they determine the monotone sets $A$ and
$B$.
We say that $i$ {\em dominates} $j$ in a set system $C$, if whenever $j\in T$
but $i\notin T$ for a set $T\in C$, then we have
$(T\setminus\{j\})\cup\{i\}\in C$. We say that
$i$ is {\em equivalent} to $j$ in $C$ if $i$ dominates $j$ in $C$ and $j$ also
dominates $i$. If $i$ dominates $j$ but $j$ does not dominate $i$, then
we say that $i$ {\em strictly dominates} $j$.
Note that if one of the statements {\em ``$i$ dominates $j$," ``$i$ is equivalent to $j$,"}
or {\em ``$i$ strictly dominates $j$"} holds in either ${\mbox{supp}}(A)$ or ${\mbox{supp}}(B)$, then
the same statement holds in both families. If $i$ strictly dominates $j$ in
${\mbox{supp}}(A)$, then we have $p_i\ge p_j$. Indeed, otherwise we would have
$|A'|>|A|$ and $|B'|>|B|$ for the monotone families $A'$ and $B'$ whose
supports ${\mbox{supp}}(A')$, resp.\ ${\mbox{supp}}(B')$, are obtained from ${\mbox{supp}}(A)$,
resp.\ ${\mbox{supp}}(B)$, by switching the roles of $i$ and $j$. Since $A'$ and
$B'$ are $r$-cross-intersecting, this contradicts the maximality of $(A,B)$.
For equivalent coordinates $i$ and $j$ in ${\mbox{supp}}(A)$, we may assume by symmetry
that $p_i\ge p_j$.
\smallskip
\noindent{\bf Case 1.} First assume that ${\mbox{supp}}(A)$ contains a $3$-element
set $\{i,j,k\}$. Then all sets in ${\mbox{supp}}(B)$ are contained in $\{i,j,k\}$.
Therefore, ${\mbox{supp}}(B)$ must be one of the following sets, up to a suitable permutation
of the indices $i$, $j$, and $k$.
\begin{enumerate}
\item$\{\emptyset\}$,
\item$\{\emptyset,\{i\}\}$,
\item$\{\emptyset,\{i\},\{j\}\}$,
\item$\{\emptyset,\{i\},\{j\},\{k\}\}$,
\item$\{\emptyset,\{i\},\{j\},\{i,j\}\}$,
\item$\{\emptyset,\{i\},\{j\},\{k\},\{i,j\}\}$,
\item$\{\emptyset,\{i\},\{j\},\{k\},\{i,j\},\{i,k\}\}$,
\item$\{\emptyset,\{i\},\{j\},\{k\},\{i,j\},\{i,k\},\{j,k\}\}$,
\item$\{\emptyset,\{i\},\{j\},\{k\},\{i,j\},\{i,k\},\{j,k\},\{i,j,k\}\}$.
\end{enumerate}
In all of these families, $i$ dominates $j$ and $j$ dominates $k$. So we may
assume $p_i\ge p_j\ge p_k\ge3$.
Subcases 2, 5, 7, and 9 are not possible, because $i$ is not a relevant
coordinate for $A$ in them.
In subcase 1, $A$ and $B$ are balls, but, as before, this case is still
impossible, because the radii of $A$ and $B$ differ by 3.
In subcase 3, we apply Lemma~9 to the set $B$ and coordinate $i$ to obtain
$(p_i-1)^2\le p_j$, a contradiction.
In subcase 4, a similar application of Lemma~9 yields $(p_i-1)^2\le p_j+p_k-1$
with the only solution $p_i=p_j=p_k=3$. We have
${\mbox{supp}}(A)={\mbox{supp}}(U_2)\cup\{\{i,j,k\}\}$ and thus $|A|=|U_2|+8$. We further have
$|B|=7$. We must have $n\ge4$, so that $|U_1|\ge9$ and
$|U_2|\ge33$. Using these estimates, we obtain $|A|\cdot|B|<|U_1|\cdot|U_2|$, a
contradiction.
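The numerical estimates in subcase 4 can be double-checked by direct enumeration. The following sketch (our own verification, not part of the proof) takes the minimal configuration $p=(3,3,3,3)$, i.e.\ $n=4$ and $r=1$, with $i,j,k$ the first three coordinates:

```python
from itertools import product

# minimal configuration for subcase 4: n = 4, r = 1, all p_m = 3 (our choice)
p = (3, 3, 3, 3)
S = list(product(*(range(1, q + 1) for q in p)))

def supp(x):
    # coordinates where x differs from the all-1 vector
    return frozenset(m for m, v in enumerate(x) if v != 1)

U1 = sum(1 for x in S if len(supp(x)) <= 1)   # Hamming ball of radius 1
U2 = sum(1 for x in S if len(supp(x)) <= 2)   # Hamming ball of radius 2

# supp(A) = all sets of size <= 2 together with {i,j,k} = {0,1,2}
A = sum(1 for x in S if len(supp(x)) <= 2 or supp(x) == {0, 1, 2})
# supp(B) = {emptyset, {i}, {j}, {k}}
B = sum(1 for x in S if len(supp(x)) <= 1 and supp(x) <= {0, 1, 2})

print(U1, U2, A, B)        # 9 33 41 7
assert A == U2 + 8 and B == 7 and U1 >= 9 and U2 >= 33
assert A * B < U1 * U2     # 287 < 297, as claimed
```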
In subcase 6, we again start with Lemma~9. It yields that $(p_i-1)^2p_j\le
p_j+p_k-1$, a contradiction.
Finally, in subcase 8, we have $(p_i-1)^2(p_j+p_k-1)\le p_jp_k$ from Lemma~9, a
contradiction.
\smallskip
\noindent{\bf Case 2.} Now we assume that ${\mbox{supp}}(A)$ contains no $3$-element
sets, but it contains two disjoint $2$-element sets $\{i,j\}$ and $\{k,l\}$. In
this case, ${\mbox{supp}}(B)$ contains the empty set and all singletons plus one of the
following families of $2$-element sets, up to a suitable symmetry on the
indices $i$, $j$, $k$, and $l$:
\begin{enumerate}
\item$\emptyset$,
\item$\{\{i,k\}\}$,
\item$\{\{i,k\},\{i,l\}\}$,
\item$\{\{i,k\},\{j,l\}\}$,
\item$\{\{i,k\},\{i,l\},\{j,l\}\}$,
\item$\{\{i,k\},\{i,l\},\{j,k\},\{j,l\}\}$.
\end{enumerate}
Note that in subcase 1, $A$ and $B$ are balls, and subcase~6 is identical with
subcase~4 with the roles of $A$ and $B$ reversed. We
use Lemma~9 and numeric comparisons to rule out the remaining cases.
Consider the monotone ball $C$ of radius $1$ in the set of coordinates
$[n]\setminus\{i\}$. This is an $r$-intersecting family. In subcases~2 and
3, $i$ dominates all other coordinates, so we may assume that $p_i$ is maximal among
all the $p_m$ ($m\in[n]$). In both subcases considered, we have $2|C|\ge|A|+|B|$. In
subcase~2, this follows from $p_i\ge p_k$, while in subcase~3, we again need to
apply the inequality in Lemma~9. In both subcases, we have $|A|>|B|$. Thus,
$|A|\cdot|B|<|C|^2$, contradicting the maximality of the pair $(A,B)$.
A similar argument works in subcases~4 and 5. Here $i$ does not dominate $l$
(nor does it dominate $j$, in subcase~4), but it dominates all other indices, and we can
still assume by symmetry that $p_i$ is maximal. This implies that $|A|+|B|<2|C|$,
so that $|A|\cdot|B|<|C|^2$, a contradiction.
\smallskip
\noindent{\bf Case 3.} Finally, assume that Cases~1 and 2 do not hold. In this
case, for any pair $T,U\in{\mbox{supp}}(A)$, we have $|T\cup U|\le3$ and, hence,
${\mbox{supp}}(B)\supseteq{\mbox{supp}}(A)$. We can further assume by symmetry that ${\mbox{supp}}(B)$
contains no $3$-element set and no pair of disjoint $2$-element sets. This
implies ${\mbox{supp}}(A)={\mbox{supp}}(B)$ so that $A=B$. In this case, ${\mbox{supp}}(A)$ contains the
empty set, the singletons, and a containment-maximal intersecting family of
pairs. There are only two types of such families to consider:
\begin{enumerate}
\item (star) ${\mbox{supp}}(A)$ contains the empty set, all singletons, and all pairs
containing some fixed coordinate $i\in[n]$.
\item (triangle) ${\mbox{supp}}(A)$ contains the empty set, all singletons, and the pairs
formed by two of the three distinct coordinates $i,j,k\in[n]$.
\end{enumerate}
Here subcase~1 is not possible, as $i$ is not a relevant coordinate for
$A$. In subcase~2, we may once again assume that $p_i$ is maximal. We use the
same $r$-intersecting family $C$ as in Case~2. To see that $|A|=|B|<|C|$ (a
contradiction), we use Lemma~9. \qed
\medskip
To extend Theorem~8 to somewhat larger values
of relevant coordinates (that is, to verify Conjecture~3, for instance, for the case
where there are $r+4$ relevant coordinates), we would have to go through a similar case analysis as
above. We would have to consider many more cases, corresponding
to containment-maximal pairs of set systems $(U,V)$ with $|u\cup v|$ bounded
for $u\in U$ and $v\in V$. This seems to be doable, but the number of cases to
consider grows quickly.
\medskip
Now we can prove our main theorem, verifying Conjecture~3 in several special cases.
\medskip
\noindent
{\em Proof of Theorem~7.} The statement about the case $p_1>r+1$ readily
follows from Lemma~2, as in this case the condition
$$\frac{2}{p_{r+1}}+\sum_{i=1}^r\frac{1}{p_i}\le1$$
holds.
To prove the other two statements in the theorem, we assume that $A$ and $B$ form a maximal $r$-cross-intersecting pair. We also assume without loss of generality that all coordinates are relevant for both families (simply drop the
irrelevant coordinates).
By Theorem~6, we have $\prod_{i=1}^rp_i\ge\prod_{i=1}^n(p_i-1)^{1-2/p_i}$, and
thus
$$\prod_{i=1}^r\frac{p_i}{(p_i-1)^{1-2/p_i}}\ge\prod_{i=r+1}^n(p_i-1)^{1-2/p_i}.$$
Here the function $x/(x-1)^{1-2/x}$ is decreasing for $x\ge3$, while $(x-1)^{1-2/x}$ is
increasing, and we have $p_i\ge p_1\ge3$. Therefore, we also have
$$\prod_{i=1}^r\frac{p_1}{(p_1-1)^{1-2/p_1}}\ge\prod_{i=r+1}^n(p_1-1)^{1-2/p_1},$$
$$p_1^r\ge(p_1-1)^{n(1-2/p_1)},$$
$$n\le\frac{r\log p_1}{(1-2/p_1)\log(p_1-1)}.$$
Simple calculation shows that the right-hand side of the last inequality is
strictly smaller than $r+4$ if $p_1\ge t(r)$ for some function $t(r)=r/2+o(r)$
and, in particular, for $p_1=r+1\ge5$. In this case, we have $n\le r+3$ relevant
coordinates. Thus, Theorem~8 applies, yielding that $A$ and $B$ are balls. This
proves the last statement of Theorem~7.
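The ``simple calculation'' invoked above can be checked numerically. The sketch below (our own check; the range of $r$ tested is an arbitrary choice) confirms that for $p_1=r+1\ge5$ the bound stays below $r+4$, whereas for a small fixed $p_1$ it does not:

```python
import math

def rhs(r, p1):
    # right-hand side of  n <= r*log(p1) / ((1 - 2/p1) * log(p1 - 1))
    return r * math.log(p1) / ((1 - 2 / p1) * math.log(p1 - 1))

# with p1 = r + 1 >= 5 the bound is strictly below r + 4
for r in range(4, 500):
    assert rhs(r, r + 1) < r + 4

# for a small fixed p1 (here p1 = 3) the bound is useless when r is large
assert rhs(100, 3) > 104
```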
\smallskip
For the proof of the second statement, note that we have already established
that $A$ and $B$ are balls in up to $r+3$ coordinates. Theorem~4 tells us that
the pair of radii must be $(0,0)$, $(0,1)$, $(1,1)$, or $(1,2)$. Simple calculation
shows that the first possibility (fixing the smallest $r$ coordinates) is
always optimal, and the cases where the two radii are unequal never yield maximal
$r$-cross-intersecting pairs. Finally, the construction with a ball of radius $1$ in $r+2$
coordinates matches the family obtained by fixing the $r$ smallest coordinates if and
only if all relevant coordinates satisfy $p_i=r+1$. This completes the proof of
Theorem~7. \qed
\section{Coordinates with $p_i=2$}\label{cr}
In many of our results, we had to assume $p_i>2$ for all coordinates of the
size vector. Here we elaborate on why the coordinates $p_i=2$ behave
differently.
For the simple characterization of the cases of equality in Theorem~1, the
assumption $k\ne 2$ is necessary. Here we characterize all maximal
cross-intersecting pairs in the case $k=2$.
Let $p=(p_1,\dots,p_n)$ be a size vector of positive integers with $k=\min_ip_i=2$ and let $I=\{i\in[n]\; :\; p_i=2\}$. For any set $W$ of functions $I\to[2]$, define the families
$$A_W=\{x\in S_p\; :\;\exists f\in W\; \mbox{such that } x_i=f(i)\; \mbox{for every } i\in I\},$$
$$B_W=\{y\in S_p\; :\;\not\exists f\in W\; \mbox{such that } y_i\ne f(i)\; \mbox{for every } i\in I\}.$$
The families $A_W$ and $B_W$ are cross-intersecting for any $W$. Moreover, if
$|W|=2^{|I|-1}$, we have $|A_W|\cdot|B_W|=|S_p|^2/4$, so they form a maximal
cross-intersecting pair. Note
that these include more examples than just the pairs of families described in
Theorem~1, provided that $|I|>1$.
We claim that all maximal cross-intersecting pairs are of the
form constructed above. To see this, consider a maximal pair $A,B\subseteq S_p$.
We know from the proof of Theorem~1 that $x\in A$ if and only if $x'\notin B$, where $x'$
is defined by $x'_i=(x_i+1\bmod p_i)$ for all $i\in[n]$. Let $j\in[n]$ be a
coordinate with $p_j>2$. By the same argument, we also have that
$x\in A$ holds if and only if $x''\notin B$, where $x''_i=x'_i$ for
$i\in[n]\setminus\{j\}$ and $x''_j=(x_j+2\bmod p_j)$. Thus, both $x'$ and
$x''$ belong to $B$ or neither of them does. This holds for every vector $x'$,
implying that $j$ is irrelevant for the family $B$ and thus also for $A$.
As there are no relevant coordinates for $A$ and $B$ outside the set $I$ of
coordinates with $p_i=2$, we can choose a set $W$ of functions from $I$ to
$[2]$ such that $A=A_W$. This makes
$$B=\{y\in S_p\;:\;y\mbox{ intersects all } x\in A\}=B_W.$$
We have $|A|+|B|=|S_p|$ and $|A|\cdot|B|=|S_p|^2/4$ if and only if
$|W|=2^{|I|-1}$.
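These claims are easy to confirm exhaustively for a small size vector. A sketch of such a check (the choice $p=(2,2,3)$ is ours; coordinates are $0$-indexed):

```python
from itertools import combinations, product

p = (2, 2, 3)                                  # I = first two coordinates (p_i = 2)
I = [i for i, q in enumerate(p) if q == 2]
S = list(product(*(range(1, q + 1) for q in p)))
funcs = list(product((1, 2), repeat=len(I)))   # all functions I -> [2]

def pair(W):
    # A_W: x agrees with some f in W on I;  B_W: y conflicts with no f in W on I
    A = [x for x in S if any(all(x[i] == f[k] for k, i in enumerate(I)) for f in W)]
    B = [y for y in S if not any(all(y[i] != f[k] for k, i in enumerate(I)) for f in W)]
    return A, B

for W in combinations(funcs, 2 ** (len(I) - 1)):           # |W| = 2^{|I|-1}
    A, B = pair(W)
    # cross-intersecting: every x in A_W, y in B_W agree in some coordinate
    assert all(any(x[i] == y[i] for i in range(len(p))) for x in A for y in B)
    assert len(A) * len(B) == len(S) ** 2 // 4             # maximal product
```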
\smallskip
The size vector $p=(2,\dots,2)$ of length $n$ is well studied. In this case, $S_p$ is the $n$-dimensional hypercube. If $r>1$, then all maximal
$r$-cross-intersecting pairs have an unbounded number of relevant coordinates,
as a function of $n$. Indeed, the density $|A|\cdot|B|/|S_p|^2$ is at most
$1/4$ for cross-intersecting pairs $A,B\subseteq S_p$, and strictly less than
$1/4$ for $r$-cross-intersecting families if $r>1$. Furthermore, if the number of
relevant coordinates is bounded, then this density is bounded away from $1/4$,
while if $A=B$ is the ball of radius $(n-r)/2$ in all the coordinates, then
the same density approaches $1/4$.
One can also find many maximal $2$-cross-intersecting pairs that are not
balls. For example, in the $3$-dimensional hypercube the families
$A=\{(0,0,0),(0,1,1)\}$ and $B=\{(0,0,1),(0,1,0)\}$ form a maximal
$2$-cross-intersecting pair.
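This example, including its maximality, can be confirmed by brute force over all pairs of families in the $3$-dimensional hypercube:

```python
from itertools import combinations, product

cube = list(product((0, 1), repeat=3))

def agree(x, y):
    return sum(a == b for a, b in zip(x, y))

A = [(0, 0, 0), (0, 1, 1)]
B = [(0, 0, 1), (0, 1, 0)]
assert all(agree(x, y) >= 2 for x in A for y in B)   # 2-cross-intersecting

def families():
    for r in range(len(cube) + 1):
        yield from combinations(cube, r)

# precompute the 2-agreement relation, then check all 2^8 x 2^8 pairs
ok = {(x, y): agree(x, y) >= 2 for x in cube for y in cube}
best = max(len(F) * len(G)
           for F in families() for G in families()
           if all(ok[x, y] for x in F for y in G))
assert best == len(A) * len(B) == 4                  # no pair does better
```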
\smallskip
Finally, we mention that there is a simple connection between the problem
discussed in this paper and a question related to communication
complexity. Consider the following two-person communication game: Alice and
Bob each receive a vector from $S_p$, and they have to decide whether the
vectors are $r$-intersecting. In the {\em communication matrix} of such a game, the
rows are indexed by the possible inputs of Alice, the columns by the possible
inputs of Bob, and an entry of the matrix is $1$ or $0$ corresponding to the
``yes'' or ``no'' output the players have to compute for the corresponding
inputs. In the study of communication games, the submatrices of this
matrix in which all entries are equal play a special role. The largest
area of an all-$1$ submatrix is
the maximal value of $|A|\cdot|B|$ for $r$-cross-intersecting families
$A,B\subseteq S_p$.
\bigskip
\noindent{\bf Acknowledgment.} We are indebted to G. O. H. Katona, R. Radoi\v ci\'c, and D. Scheder for their valuable remarks, and to an anonymous referee for calling our attention to the manuscript of Borg~\cite{Bo14}.
| {
"timestamp": "2015-02-02T02:09:05",
"yymm": "1405",
"arxiv_id": "1405.2805",
"language": "en",
"url": "https://arxiv.org/abs/1405.2805",
"abstract": "Given a sequence of positive integers $p = (p_1, . . ., p_n)$, let $S_p$ denote the family of all sequences of positive integers $x = (x_1,...,x_n)$ such that $x_i \\le p_i$ for all $i$. Two families of sequences (or vectors), $A,B \\subseteq S_p$, are said to be $r$-cross-intersecting if no matter how we select $x \\in A$ and $y \\in B$, there are at least $r$ distinct indices $i$ such that $x_i = y_i$. We determine the maximum value of $|A|\\cdot|B|$ over all pairs of $r$- cross-intersecting families and characterize the extremal pairs for $r \\ge 1$, provided that $\\min p_i >r+1$. The case $\\min p_i \\le r+1$ is quite different. For this case, we have a conjecture, which we can verify under additional assumptions. Our results generalize and strengthen several previous results by Berge, Frankl, Füredi, Livingston, Moon, and Tokushige, and answers a question of Zhang.",
"subjects": "Combinatorics (math.CO)",
    "title": "Cross-intersecting families of vectors"
} |
https://arxiv.org/abs/1203.5207 | Linear extensions of partial orders and Reverse Mathematics | We introduce the notion of \tau-like partial order, where \tau is one of the linear order types \omega, \omega*, \omega+\omega*, and \zeta. For example, being \omega-like means that every element has finitely many predecessors, while being \zeta-like means that every interval is finite. We consider statements of the form "any \tau-like partial order has a \tau-like linear extension" and "any \tau-like partial order is embeddable into \tau" (when \tau\ is \zeta\ this result appears to be new). Working in the framework of reverse mathematics, we show that these statements are equivalent either to B\Sigma^0_2 or to ACA_0 over the usual base system RCA_0. | \section{Introduction}
Szpilrajn's Theorem (\cite{Szp30}) states that any partial order has a
linear extension. This theorem raises many natural questions, where in
general we search for properties of the partial order which are
preserved by some or all its linear extensions. For example it is
well-known that a partial order is a well partial order if and only if
all its linear extensions are well-orders.
A question which has been widely considered is the following: given a
linear order type $\tau$, is it the case that any partial order that
does not embed $\tau$ can be extended to a linear order that also does
not embed $\tau$? If the answer is affirmative, $\tau$ is said to be
extendible, while $\tau$ is weakly extendible if the same holds for any
countable partial order. For instance, the order types of the natural
numbers, of the integers, and of the rationals are extendible. Bonnet
(\cite{Bon69}) and Jullien (\cite{Jul}) characterized all countable
extendible and weakly extendible linear order types respectively.
We are interested in a similar question: given a linear order type
$\tau$ and a property characterizing $\tau$ and its suborders, is it
true that any partial order which satisfies that property has a linear
extension which also satisfies the same property? In our terminology:
does any $\tau$-like partial order have a $\tau$-like linear extension?
Here we address this question for the linear order types $\omega$,
$\omega^*$ (the inverse of $\omega$), $\omega+\omega^*$ and $\zeta$
(the order of integers). So, from now on, $\tau$ will denote one of
these.
\begin{definition}
Let $(P,\leq_P)$ be a countable partial order. We say that $P$ is
\begin{itemize}
\item \emph{$\omega$-like} if every element of $P$ has finitely
many predecessors;
\item \emph{$\omega^*$-like} if every element of $P$ has finitely
many successors;
\item \emph{$\omega+\omega^*$-like} if every element of $P$ has
finitely many predecessors or finitely many successors;
\item \emph{$\zeta$-like} if for every pair of elements $x,y\in
P$ there exist only finitely many elements $z$ with $x<_P
z<_P y$.
\end{itemize}
\end{definition}
The previous definition resembles Definition 2.3 of Hirschfeldt and
Shore (\cite{HirSho07}), where linear orders of type $\omega$,
$\omega^*$ and $\omega+\omega^*$ are introduced. The main difference is
that the order properties defined by Hirschfeldt and Shore are meant to
uniquely determine a linear order type up to isomorphism, whereas our
definitions apply to partial orders in general and do not determine an
order type. Notice also that, for instance, an $\omega$-like partial
order is also $\omega+\omega^*$-like and $\zeta$-like.
We introduce the following terminology:
\begin{definition}
We say that $\tau$ is \emph{linearizable} if every $\tau$-like partial
order has a linear extension which is also $\tau$-like.
\end{definition}
With this definition in hand, we are ready to formulate the results we
want to study:
\begin{theorem}\label{main}
The following hold:
\begin{enumerate}
\item $\omega$ is linearizable;
\item $\omega^*$ is linearizable;
\item $\omega+\omega^*$ is linearizable;
\item $\zeta$ is linearizable.
\end{enumerate}
\end{theorem}
A proof of the linearizability of $\omega$ can be found in Fra\"iss\'{e}'s
monograph (\cite[\S 2.15]{Fra00}), where the result is attributed to
Milner and Pouzet. $(2)$ is similar to $(1)$ and the proof of $(3)$
easily follows from $(1)$ and $(2)$. The linearizability of $\zeta$ is
apparently a new result (for a proof see Lemma \ref{lemma 2}
below).\medskip
In this paper we study the statements contained in Theorem \ref{main}
from the standpoint of reverse mathematics (the standard reference is
\cite{Sim09}), whose goal is to characterize the axiomatic assumptions
needed to prove mathematical theorems. We assume the reader is familiar
with systems such as {\ensuremath{\mathsf{RCA}_0}}\ and {\ensuremath{\mathsf{ACA}_0}}. The reverse mathematics of weak
extendibility is studied in \cite{DHLS} and \cite{Mon06}. The existence
of maximal linear extensions of well partial orders is studied from the
reverse mathematics viewpoint in \cite{MarSho11}.\medskip
Our main result is that the linearizability of $\tau$ is equivalent
over {\ensuremath{\mathsf{RCA}_0}}\ to the $\SI02$ bounding principle \ensuremath{\mathsf{B}\SI02}\ when $\tau \in
\{\omega, \omega^*, \zeta\}$, and to {\ensuremath{\mathsf{ACA}_0}}\ when $\tau=\omega+\omega^*$.
For more details on \ensuremath{\mathsf{B}\SI02}, including an apparently new equivalent (simply
asserting that a finite union of finite sets is finite), see
\S\ref{Section FUF} below.
The linearizability of $\omega$ appears to be the first example of a
genuine mathematical theorem (actually appearing in the literature for
its own interest, and not for its metamathematical properties) that
turns out to be equivalent to \ensuremath{\mathsf{B}\SI02}.\medskip
To round out our reverse mathematics analysis, we also consider a
notion closely related to linearizability:
\begin{definition}
We say that $\tau$ is \emph{embeddable} if every $\tau$-like partial
order $P$ embeds into $\tau$, that is there exists an order preserving
map from $P$ to $\tau$.\footnote{To formalize this definition in {\ensuremath{\mathsf{RCA}_0}},
we need to fix a canonical representative of the order type $\tau$,
which we do in Definition 1.5.}
\end{definition}
Classically, it is rather obvious that $\tau$ is linearizable if and only if
$\tau$ is embeddable. Moreover, {\ensuremath{\mathsf{RCA}_0}}\ easily proves that embeddable
implies linearizable. Not surprisingly, the converse implication is not
provable in {\ensuremath{\mathsf{RCA}_0}}: we show that embeddability is strictly stronger when
$\tau \in \{\omega, \omega^*, \zeta\}$, being in fact equivalent to {\ensuremath{\mathsf{ACA}_0}}.
The only exception is $\omega+\omega^*$, for which both properties are
equivalent to {\ensuremath{\mathsf{ACA}_0}}.\medskip
We use the following definitions in {\ensuremath{\mathsf{RCA}_0}}.
\begin{definition}[{\ensuremath{\mathsf{RCA}_0}}]
Let $\leq$ denote the usual ordering of natural numbers. The linear
order $\omega$ is $(\mathbb{N},{\leq})$, while $\omega^*$ is $(\mathbb{N},{\geq})$.
Let $\{P_i\colon i\in Q\}$ be a family of partial orders indexed by a
partial order $Q$. The \emph{lexicographic sum} of the $P_i$ along $Q$,
denoted by $\sum_{i\in Q}P_i$, is the partial order on the set
$\{(i,x)\colon i\in Q \land x\in P_i\}$ defined by
\[
(i,x) \leq (j,y) \iff i <_Q j \lor (i=j \land x \leq_{P_i} y).
\]
The \emph{sum} $\sum_{i<n}P_i$ can be regarded as the lexicographic sum
along the $n$-element chain. In particular $P_0+P_1$ is the
lexicographic sum along the $2$-element chain (and we have thus defined
$\omega + \omega^*$ and $\zeta = \omega^*+\omega$).
Similarly, the \emph{disjoint sum} $\bigoplus_{i<n} P_i$ is the
lexicographic sum along the $n$-element antichain.
\end{definition}
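The lexicographic sum is straightforward to express as code. A minimal sketch (encodings and names are our own), building a truncation of $\omega+\omega^*$ as the sum along the $2$-element chain:

```python
def lex_sum(Q, Q_less, parts, part_leq):
    """Lexicographic sum of the orders parts[i] along the strict order Q_less on Q."""
    elems = [(i, x) for i in Q for x in parts[i]]
    def leq(a, b):
        (i, x), (j, y) = a, b
        # (i,x) <= (j,y)  iff  i <_Q j, or i = j and x <= y inside part i
        return Q_less(i, j) or (i == j and part_leq[i](x, y))
    return elems, leq

# omega + omega^*, truncated to three elements on each side
Q = [0, 1]
parts = {0: [0, 1, 2], 1: [0, 1, 2]}
part_leq = {0: lambda x, y: x <= y,      # omega: usual order
            1: lambda x, y: x >= y}      # omega^*: reversed order
elems, leq = lex_sum(Q, lambda i, j: i < j, parts, part_leq)

assert leq((0, 2), (1, 0))               # the first part lies entirely below
assert leq((1, 2), (1, 1))               # reversed inside the second part
assert not leq((1, 0), (1, 2))
```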
\section{$\SI02$ bounding and finite union of finite sets}\label{Section FUF}
Let us recall that \ensuremath{\mathsf{B}\SI02}\ (standing for \SI02 bounding, and also known as
\SI02 collection) is the scheme:
\[
\tag{\ensuremath{\mathsf{B}\SI02}} (\forall i<n) (\exists m) \varphi(i,n,m) \implies
(\exists k) (\forall i<n) (\exists m<k) \varphi(i,n,m),
\]
where $\varphi$ is any $\SI02$ formula.
It is well-known that {\ensuremath{\mathsf{RCA}_0}}\ does not prove \ensuremath{\mathsf{B}\SI02}, which is strictly
weaker than \SI02 induction. Neither {\ensuremath{\mathsf{WKL}_0}}\ nor \ensuremath{\mathsf{B}\SI02}\ implies the
other, and Hirst (\cite{Hirst87}, for a widely available proof see
\cite[Theorem 2.11]{ChoJocSla01}) showed that \ensuremath{\mathsf{RT}^2_2}\ (Ramsey's theorem for
pairs and two colors) implies \ensuremath{\mathsf{B}\SI02}.
A few combinatorial principles are known to be equivalent to \ensuremath{\mathsf{B}\SI02}\ over
{\ensuremath{\mathsf{RCA}_0}}.
Hirst (\cite{Hirst87}, for a widely available proof see \cite[Theorem
2.10]{ChoJocSla01}) showed that, over {\ensuremath{\mathsf{RCA}_0}}, \ensuremath{\mathsf{B}\SI02}\ is equivalent to the
infinite pigeonhole principle, i.e. the statement
\[
\tag{\ensuremath{\mathsf{RT}^1_{<\infty}}} (\forall n) (\forall f:\mathbb{N} \to n) (\exists A \subseteq \mathbb{N} \text{ infinite}) (\exists c<n) (\forall m\in A) (f(m)=c).
\]
(The notation arises from viewing the infinite pigeonhole principle as
Ramsey theorem for singletons and an arbitrary finite number of
colors.)
Chong, Lempp and Yang (\cite{ChoLemYan10}) showed that a combinatorial
principle \ensuremath{\mathsf{PART}}\ about infinite $\omega+\omega^*$ linear orders,
introduced by Hirschfeldt and Shore (\cite[\S4]{HirSho07}), is also
equivalent to \ensuremath{\mathsf{B}\SI02}. More recently, Hirst (\cite{Hirst12}) also proved
that \ensuremath{\mathsf{B}\SI02}\ is equivalent to a statement apparently similar to Hindman's
theorem, but much weaker from the reverse mathematics viewpoint.
We consider the statement that a finite union of finite sets is finite:
\[
\tag{\ensuremath{\mathsf{FUF}}} (\forall i<n) (X_i \text{ is finite}) \implies
\bigcup_{i<n} X_i \text{ is finite}.
\]
Here ``$X$ is finite'' means $(\exists m)(\forall x \in X) (x<m)$. This
statement can be viewed as a second-order version of $\Pi_0$
regularity, which in the context of first-order arithmetic is known to
be equivalent to $\Sigma_2$ bounding (see e.g.\ \cite[Theorem
2.23.4]{HajPud}).
\begin{lemma}\label{lemma 0}
Over {\ensuremath{\mathsf{RCA}_0}}, \ensuremath{\mathsf{B}\SI02}\ is equivalent to \ensuremath{\mathsf{FUF}}.
\end{lemma}
\begin{proof}
First notice that \ensuremath{\mathsf{FUF}}\ follows immediately from the instance of \ensuremath{\mathsf{B}\SI02}\
relative to the \PI01, and hence \SI02, formula $(\forall x \in X_i)
(x<m)$.
For the other direction we use Hirst's result recalled above: it
suffices to prove that \ensuremath{\mathsf{FUF}}\ implies \ensuremath{\mathsf{RT}^1_{<\infty}}. Let $f\colon\mathbb{N}\to n$ be
given. Define for each $i<n$ the set $X_i=\{m\colon f(m)=i\}$. Clearly
$\bigcup_{i<n}X_i=\mathbb{N}$ is infinite. By \ensuremath{\mathsf{FUF}}, there exists $i<n$ such
that $X_i$ is infinite. Now $X_i$ is an infinite homogeneous set for
$f$.
\end{proof}
\section{Linearizable types}
Notice that Szpilrajn's Theorem is easily seen to be computably true
(see \cite[Observation 6.1]{Dow98}) and provable in {\ensuremath{\mathsf{RCA}_0}}. We use this
fact several times without further notice.
We start by proving that \ensuremath{\mathsf{B}\SI02}\ suffices to establish the linearizability
of $\omega$, $\omega^*$ and $\zeta$.
\begin{lemma}\label{lemma 1}
{\ensuremath{\mathsf{RCA}_0}}\ proves that \ensuremath{\mathsf{B}\SI02}\ implies the linearizability of $\omega$ and
$\omega^*$.
\end{lemma}
\begin{proof}
We argue in {\ensuremath{\mathsf{RCA}_0}}\ and, by Lemma \ref{lemma 0}, we may assume \ensuremath{\mathsf{FUF}}. Let
us consider first $\omega$. So let $P$ be an $\omega$-like partial
order which, to avoid trivialities, we may assume to be infinite. We
recursively define a sequence $z_n\in P$ by letting $z_n$ be the least
(w.r.t.\ the usual ordering of $\mathbb{N}$) $x\in P$ such that $(\forall
i<n)(x\nleq_P z_i)$.
We show by \SI01 induction that $z_n$ is defined for all $n\in\mathbb{N}$.
Suppose that $z_i$ is defined for all $i<n$. We want to prove $(\exists
x\in P)(\forall i<n)(x\nleq_P z_i)$. Define $X_i=\{x\in P\colon x\leq
_P z_i\}$ for $i<n$. Since $P$ is $\omega$-like, each $X_i$ is finite.
By \ensuremath{\mathsf{FUF}}, $\bigcup_{i<n} X_i$ is also finite. The claim follows from the
fact that $P$ is infinite.
Now define for each $n\in\mathbb{N}$ the finite set
\[ P_n=\{x\in P\colon x\leq_P z_n\land(\forall i<n)(x\nleq_P z_i)\}.\]
It is not hard to see that the $P_n$'s form a partition of $P$, and
that if $x\leq_P y$ with $x\in P_i$ and $y\in P_j$, then $i\leq j$.
Then let $L$ be a linear extension of the lexicographic sum
$\sum_{n\in\omega} P_n$. $L$ is clearly a linear order and extends $P$
by the remark above. To prove that $L$ is $\omega$-like, note that the
set of $L$-predecessors of an element of $P_n$ is included in
$\bigcup_{i \leq n} P_i$, which is finite, by \ensuremath{\mathsf{FUF}}\ again.
For $\omega^*$, repeat the same construction using $\geq_P$ in place of
$\leq_P$, and let $L$ be a linear extension of
$\sum_{n\in\omega^*}P_n$.
\end{proof}
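The construction in this proof is effective, and a finite toy run may clarify it. The sample order below is our own choice: $x\leq_P y$ iff $13-x$ divides $13-y$ on $\{1,\dots,12\}$, which (being finite) is trivially $\omega$-like; each finite block is linearized by sorting on its number of $\leq_P$-predecessors.

```python
# toy omega-like partial order on {1,...,12}:  x <=_P y  iff  (13-x) | (13-y)
P = list(range(1, 13))
leq = lambda x, y: (13 - y) % (13 - x) == 0

# number of <=_P-predecessors; sorting by it linearizes each finite block
height = {x: sum(leq(w, x) for w in P) for x in P}

zs, blocks, rest = [], [], set(P)
while rest:
    # z_n: least x (in the usual order of N) not below any earlier z_i
    z = min(x for x in rest if not any(leq(x, zi) for zi in zs))
    block = sorted((x for x in rest if leq(x, z)), key=lambda x: (height[x], x))
    zs.append(z)
    blocks.append(block)
    rest -= set(block)

L = [x for block in blocks for x in block]   # concatenate the blocks P_n
pos = {x: k for k, x in enumerate(L)}
assert sorted(L) == P                        # a genuine reordering of P
assert all(pos[x] <= pos[y] for x in P for y in P if leq(x, y))   # extends <=_P
```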
\begin{lemma}\label{lemma 2}
{\ensuremath{\mathsf{RCA}_0}}\ proves that \ensuremath{\mathsf{B}\SI02}\ implies the linearizability of $\zeta$.
\end{lemma}
\begin{proof}
In {\ensuremath{\mathsf{RCA}_0}}\ assume \ensuremath{\mathsf{FUF}}. Let $P$ be a $\zeta$-like partial order, which we
may again assume to be infinite. It is convenient to use the notation
$[x,y]_P = \{z\in P\colon x\leq_P z\leq_P y \lor y \leq_P z\leq_P x\}$,
so that $[x,y]_P \neq \emptyset$ if and only if $x$ and $y$ are
comparable.
We define by recursion a sequence $z_n\in P$ by letting $z_n$ be the
least (w.r.t.\ the ordering of $\mathbb{N}$) $x\in P$ such that
\[
x\notin \bigcup_{i,j<n}[z_i,z_j]_P.
\]
As before, since $P$ is infinite and $\zeta$-like, one can prove using
\SI01 induction and \ensuremath{\mathsf{FUF}}\ that $z_n$ is defined for every $n\in\mathbb{N}$. It
is also easy to prove that
\[
P=\bigcup_{i,j\in\mathbb{N}}[z_i,z_j]_P.
\]
Define for each $n\in\mathbb{N}$ the set
\[
P_n=\bigcup_{i\le n}[z_i, z_n]_P\setminus \bigcup_{i,j<n} [z_i,z_j]_P.
\]
By \ensuremath{\mathsf{FUF}}, the $P_n$'s are finite. Moreover, they clearly form a
partition of $P$. Note also that $z_n\in P_n$ and every element of
$P_n$ is comparable with $z_n$. Furthermore, every interval $[x,y]_P$
is included in some $[z_i,z_j]_P$. Notice that the same holds for any
partial order extending $\leq_P$.
We now extend $\leq_P$ to a partial order $\preceq_P$ such that any
linear extension of $(P,{\preceq_P})$ is $\zeta$-like. We say that $n$
is left if $z_n\leq_P z_i$ for some $i<n$; otherwise, we say that $n$
is right. Notice that, since $z_n \in P_n$, $n$ is right if and only if
$z_i\leq_P z_n$ for some $i<n$ or $z_n$ is incomparable with every
$z_i$ with $i<n$.
The order $\preceq_P$ places $P_n$ below or above every $P_i$ with
$i<n$ depending on whether $n$ is left or right. Formally, for $x,y\in
P$ such that $x\in P_n$ and $y\in P_m$ let
\[
x\preceq_P y \iff
(n=m\land x\leq_P y)\lor(n<m\land m\ \text{is right})\lor(m<n\land n\ \text{is left}).
\]
We claim that $\preceq_P$ extends $\leq_P$. Let $x\leq_P y$ with $x\in
P_n$ and $y\in P_m$. If $n=m$, $x\preceq_P y$ by definition. Suppose
now that $n<m$, so that we need to prove that $m$ is right. As $x\in
P_n$, $z_i\leq_P x$ for some $i\leq n$. Since $y\in P_m$, $y$ is
comparable with $z_m$. Suppose that $z_m<_P y$. Then $y\leq_P z_j$ for
some $j<m$, and so $z_i\leq_Px\leq_P y\leq_P z_j$ with $i,j<m$,
contrary to $y\in P_m$. It follows that $y\leq_P z_m$ and thereby
$z_i\leq_P z_m$ with $i<m$. Therefore, $m$ is right, as desired. The
case $n>m$ (where we need to prove that $n$ is left) is similar.
We claim that $(P,\preceq_P)$ is still $\zeta$-like. To see this, it is
enough to show that for all $i,j<n$
\[
\{x\in P\colon z_i\preceq_P x\preceq_P z_j\} \subseteq \bigcup_{k<n} P_k
\]
and apply \ensuremath{\mathsf{FUF}}. Let $x\in P_k$ be such that $z_i\prec_P x\prec_P z_j$.
Suppose, for a contradiction, that $k\geq n$ and hence that $i,j<k$. By
the definition of $\preceq_P$, $z_i\prec_P x$ implies that $k$ is
right. At the same time, $x\prec_P z_j$ implies that $k$ is left, a
contradiction.
Now let $L$ be any linear extension of $(P,{\preceq_P})$ and hence of
$(P,{\leq_P})$. We claim that $L$ is $\zeta$-like. To prove this, we
show that for all $i,j\in\mathbb{N}$
\[
\{ x\in P\colon z_i\leq_L x\leq_L z_j\}=\{x\in P\colon z_i\preceq_P x\preceq_P z_j\}.
\]
One inclusion is obvious because $\leq_L$ extends $\preceq_P$. For the
converse, observe that the $z_n$'s are $\preceq_P$-comparable with any
other element.
\end{proof}
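This two-sided construction is also effective. A toy sketch (assuming, as a sample, the order $x\leq_P y$ iff $13-x$ divides $13-y$ on $\{1,\dots,12\}$, finite and hence trivially $\zeta$-like), which places each block $P_n$ on the left or right end according to the dichotomy in the proof:

```python
from collections import deque

# toy (finite, hence zeta-like) order:  x <=_P y  iff  (13-x) | (13-y)
P = list(range(1, 13))
leq = lambda x, y: (13 - y) % (13 - x) == 0
height = {x: sum(leq(w, x) for w in P) for x in P}

def interval(a, b):                     # [a,b]_P, symmetric in a and b
    return {z for z in P if (leq(a, z) and leq(z, b)) or (leq(b, z) and leq(z, a))}

zs, layout, placed = [], deque(), set()
while placed != set(P):
    covered = {w for a in zs for b in zs for w in interval(a, b)}
    z = min(x for x in P if x not in covered)            # z_n
    P_n = {w for a in zs + [z] for w in interval(a, z)} - covered
    block = sorted(P_n, key=lambda x: (height[x], x))
    if any(leq(z, zi) for zi in zs):    # n is "left": z_n <=_P some earlier z_i
        layout.appendleft(block)
    else:                               # n is "right"
        layout.append(block)
    zs.append(z)
    placed |= P_n

L = [x for block in layout for x in block]
pos = {x: k for k, x in enumerate(L)}
assert sorted(L) == P
assert all(pos[x] <= pos[y] for x in P for y in P if leq(x, y))   # extends <=_P
```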
We can now state and prove our reverse mathematics results.
\begin{theorem}\label{theorem 0}
Over {\ensuremath{\mathsf{RCA}_0}}, the following are pairwise equivalent:
\begin{enumerate}
\item \ensuremath{\mathsf{B}\SI02};
\item $\omega$ is linearizable;
\item $\omega^*$ is linearizable;
\item $\zeta$ is linearizable.
\end{enumerate}
\end{theorem}
\begin{proof}
Lemma \ref{lemma 1} gives $(1)\rightarrow(2)$ and $(1)\rightarrow(3)$.
The implication $(1)\rightarrow(4)$ is Lemma \ref{lemma 2}.
To show $(2)\rightarrow(1)$, we assume linearizability of $\omega$ and
prove \ensuremath{\mathsf{FUF}}. So let $\{X_i\colon i<n\}$ be a finite family of finite
sets. We define $P= \bigoplus_{i<n} (X_i+\{m_i\})$, where the $m_i$'s
are distinct and every $X_i$ is regarded as an antichain. $P$ is
$\omega$-like, and so by $(2)$ there exists an $\omega$-like linear
extension $L$ of $P$. Let $m_j$ be the $L$-maximum of $\{m_i \colon
i<n\}$. Then $\bigcup_{i<n} X_i$ is included in the set of
$L$-predecessors of $m_j$, and is therefore finite because $L$ is
$\omega$-like.
The implication $(3)\to(1)$ is analogous. For $(4)\to(1)$, one proves \ensuremath{\mathsf{FUF}}\ by using the partial order $\bigoplus_{i<n} (\{\ell_i\} + X_i +
\{m_i\})$.
\end{proof}
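The reversal $(2)\to(1)$ turns a linearization into an instance of \ensuremath{\mathsf{FUF}}, and the construction can be replayed concretely. In this sketch (ours, with illustrative finite sets $X_i$) we build $P=\bigoplus_{i<n}(X_i+\{m_i\})$, pick an $\omega$-like linear extension by hand, and check that the union of the $X_i$ sits below the $L$-maximum of the $m_i$'s.

```python
# The poset P = (X_0 + {m_0}) + ... + (X_{n-1} + {m_{n-1}}) from the
# proof, for illustrative finite sets X_i (each X_i is an antichain below
# its own top element m_i; distinct summands are incomparable).

xs = [{0, 1}, {2}, {3, 4, 5}]
elems = [('x', i, v) for i, X in enumerate(xs) for v in X]
elems += [('m', i) for i in range(len(xs))]

def leq(a, b):
    return a == b or (a[0] == 'x' and b[0] == 'm' and a[1] == b[1])

# An omega-like linear extension: list each X_i, then m_i, summand by summand.
order = []
for i, X in enumerate(xs):
    order += [('x', i, v) for v in sorted(X)]
    order.append(('m', i))
pos = {e: j for j, e in enumerate(order)}

extension_ok = all(pos[a] <= pos[b] for a in elems for b in elems if leq(a, b))

# Below the L-maximum of the m_i's we find every X_i: the union of the
# X_i is thus contained in a finite set, which is the FUF instance.
top = max((('m', i) for i in range(len(xs))), key=lambda e: pos[e])
below_top = {e for e in elems if pos[e] < pos[top]}
union_covered = all(('x', i, v) in below_top
                    for i, X in enumerate(xs) for v in X)
```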
We now show that the linearizability of $\omega+\omega^*$ requires
{\ensuremath{\mathsf{ACA}_0}}.
\begin{theorem}\label{theorem 1}
Over {\ensuremath{\mathsf{RCA}_0}}, the following are equivalent:
\begin{enumerate}
\item {\ensuremath{\mathsf{ACA}_0}};
\item $\omega+\omega^*$ is linearizable.
\end{enumerate}
\end{theorem}
\begin{proof}
We begin by proving $(1)\to(2)$. Let $P$ be an $\omega+\omega^*$-like
partial order. In {\ensuremath{\mathsf{ACA}_0}}\ we can define the set $P_0$ of elements
having finitely many predecessors; then $P_1=P\setminus P_0$ consists of
the elements having finitely many successors. Clearly, $P_0$ is
$\omega$-like and $P_1$ is $\omega^*$-like. Since {\ensuremath{\mathsf{ACA}_0}}\ is strong
enough to prove \ensuremath{\mathsf{B}\SI02}, by Lemma \ref{lemma 1}, $P_0$ has an $\omega$-like
linear extension $L_0$ and $P_1$ has an $\omega^*$-like linear
extension $L_1$. Since $P_0$ is downward closed and $P_1$ is upward
closed, it is not difficult to check that the linear order $L=L_0+L_1$
is $\omega+\omega^*$-like and extends $P$.\smallskip
For the converse, let $f\colon\mathbb{N}\to\mathbb{N}$ be a one-to-one function. We set
out to define an $\omega+\omega^*$-like partial order $P$ such that any
$\omega+\omega^*$-like linear extension of $P$ encodes the range of
$f$. To this end, we use an $\omega+\omega^*$-like linear order
$A=\{a_n\colon n\in\mathbb{N}\}$ given by the false and true stages of $f$.
Recall that $n\in\mathbb{N}$ is said to be true (for $f$) if $(\forall
m>n)(f(m)>f(n))$ and false otherwise, and note that the range of $f$ is
\DE01 definable from any infinite set of true stages.
The idea for $A$ comes from the well-known construction of a computable
linear order such that any infinite descending sequence computes
$\emptyset'$. This construction can be carried out in {\ensuremath{\mathsf{RCA}_0}}\ (see
\cite[Lemma 4.2]{MarSho11}). Here, we define $A$ by letting $a_n\leq
a_m$ if and only if either
\begin{gather*}
f(k)<f(n) \text{ for some }n<k\leq m, \text{ or}\\
m\leq n \text{ and } f(k)>f(m) \text{ for all }m<k\leq n.
\end{gather*}
It is not hard to see that $A$ is a linear order. Moreover, if $n$ is
false, then $a_n$ has finitely many predecessors and infinitely many
successors. Similarly, if $n$ is true, then $a_n$ has finitely many
successors and infinitely many predecessors. In particular, $A$ is an
$\omega+\omega^*$-like linear order.
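The order $A$ is computable from $f$, so its two defining clauses can be evaluated directly on an initial segment. The sketch below (ours; the particular one-to-one $f$ is an illustrative choice with false stages $0$ and $2$) checks that the definition yields a linear order in which the false stages lie below the true stages.

```python
# The order A computed from an illustrative one-to-one f (ours).

f = [5, 1, 3, 2, 4, 6, 7, 8, 9, 10]
N = len(f)

def a_leq(n, m):
    """a_n <= a_m per the two displayed clauses."""
    return any(f[k] < f[n] for k in range(n + 1, m + 1)) or \
        (m <= n and all(f[k] > f[m] for k in range(m + 1, n + 1)))

true_stage = [all(f[m] > f[n] for m in range(n + 1, N)) for n in range(N)]

# A is a linear order on the sampled indices ...
total = all(a_leq(n, m) or a_leq(m, n) for n in range(N) for m in range(N))
antisym = all(not (a_leq(n, m) and a_leq(m, n))
              for n in range(N) for m in range(N) if n != m)
trans = all(a_leq(n, l)
            for n in range(N) for m in range(N) for l in range(N)
            if a_leq(n, m) and a_leq(m, l))

# ... and every false stage lies strictly below every true stage,
# matching the intended omega + omega* shape.
separated = all(a_leq(n, m) and not a_leq(m, n)
                for n in range(N) if not true_stage[n]
                for m in range(N) if true_stage[m])
```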
Now let $P=A\oplus B$ where $B=\{b_n\colon n\in\mathbb{N}\}$ is a linear order
of order type $\omega^*$, defined by letting $b_n \leq b_m$ if and only
if $n \geq m$. It is clear that $P$ is an $\omega+\omega^*$-like
partial order. By hypothesis, there exists an $\omega+\omega^*$-like
linear extension $L$ of $P$. We claim that $n$ is a false stage if and
only if it satisfies the \PI01 formula $(\forall m)(a_n<_L b_m)$.
In fact, if $n$ is false and $b_m\leq_L a_n$, then $b_m$ has infinitely
many successors in $L$, since $a_n$ has infinitely many successors in
$P$ and a fortiori in $L$. On the other hand, $b_m$ has infinitely many
predecessors in $P$, and hence also in $L$, contradiction. Likewise, if
$n$ is true and $a_n<_L b_m$ for all $m$, then $a_n$ has infinitely
many successors as well as infinitely many predecessors in $L$, which
is a contradiction again.
Therefore, the set of false stages is \DE01, and so is the set of true
stages, which thus exists in {\ensuremath{\mathsf{RCA}_0}}. As the range of $f$ is \DE01
definable from the set of true stages, it exists as well. This completes
the proof.
\end{proof}
\section{Embeddable types}
We turn our attention to embeddability. As noted before, {\ensuremath{\mathsf{RCA}_0}}\ suffices
to prove that ``$\tau$ is embeddable'' implies ``$\tau$ is
linearizable''. The converse is provable in {\ensuremath{\mathsf{ACA}_0}}. In fact, for each of
the order types under consideration, embeddability is equivalent to
{\ensuremath{\mathsf{ACA}_0}}. We thus prove the following.
\begin{theorem}
The following are pairwise equivalent over {\ensuremath{\mathsf{RCA}_0}}:
\begin{enumerate}
\item {\ensuremath{\mathsf{ACA}_0}};
\item $\omega$ is embeddable;
\item $\omega^*$ is embeddable;
\item $\zeta$ is embeddable;
\item $\omega+\omega^*$ is embeddable.
\end{enumerate}
\end{theorem}
\begin{proof}
We first show that $(1)$ implies the other statements. Since \ensuremath{\mathsf{B}\SI02}\ is
provable in {\ensuremath{\mathsf{ACA}_0}}, it follows from Theorem \ref{theorem 0} that {\ensuremath{\mathsf{ACA}_0}}\
proves the linearizability of $\omega$, $\omega^*$ and $\zeta$. By
Theorem \ref{theorem 1}, {\ensuremath{\mathsf{ACA}_0}}\ proves the linearizability of $\omega
+\omega^*$. We now claim that in {\ensuremath{\mathsf{ACA}_0}}\ ``$\tau$ is linearizable''
implies ``$\tau$ is embeddable'' for each $\tau$ we are considering.
The key fact is that the property of having finitely many predecessors
(successors) in a partial order, as well as having exactly $n\in\mathbb{N}$
predecessors (successors), is arithmetical. Analogously, for a set, and
hence for an interval, being finite or having size exactly $n\in\mathbb{N}$ is
arithmetical too. (All these properties are in fact $\SI02$.)
We consider explicitly the case of $\omega+\omega^*$ (the other cases
are similar). So let $L$ be an $\omega+\omega^*$-like linear extension
of a given $\omega+\omega^*$-like partial order. We want to show that
$L$ is embeddable into $\omega+\omega^*$. Define $f\colon L\to
\omega+\omega^*$ by
\[ f(x)=\begin{cases}
(0,|\{y\in L\colon y<_L x\}|) & \text{if}\ x\ \text{has finitely many predecessors,}\\
(1,|\{y\in L\colon x<_L y\}|) & \text{otherwise}.
\end{cases}\]
It is easy to see that $f$ preserves the order.\smallskip
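On a finite truncation of an $\omega+\omega^*$-like linear order the map $f$ can be computed exactly, provided the truncation already contains all predecessors of its $\omega$-part elements and all successors of its $\omega^*$-part elements. The following sketch (ours; the concrete $L$, evens ascending followed by odds descending, is an illustrative choice with that property) verifies that $f$ is an order embedding into a coded copy of $\omega+\omega^*$.

```python
# A finite truncation of an omega + omega*-like linear order L:
# 0 < 2 < ... < 10 < 11 < 9 < ... < 1 (evens ascending, odds descending).

M = 12
L = list(range(0, M, 2)) + list(range(M - 1, 0, -2))
pos = {x: i for i, x in enumerate(L)}

# In the full infinite order the evens form the omega part; this
# truncation contains every predecessor of an even and every successor
# of an odd, so the counts below agree with the true ones.
def embed(x):
    if x % 2 == 0:
        return (0, sum(1 for y in L if pos[y] < pos[x]))   # predecessor count
    return (1, sum(1 for y in L if pos[y] > pos[x]))       # successor count

def ow_leq(u, v):
    """The order of omega + omega* on codes (0, k) and (1, k)."""
    if u[0] != v[0]:
        return u[0] < v[0]          # the omega copy lies below the omega* copy
    return u[1] <= v[1] if u[0] == 0 else u[1] >= v[1]

order_preserving = all(ow_leq(embed(x), embed(y))
                       for x in L for y in L if pos[x] <= pos[y])
injective = len({embed(x) for x in L}) == len(L)
```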
For the reversals, notice that $(5)\to(1)$ immediately follows from
Theorem \ref{theorem 1}.
As the others are quite similar, we only prove $(2)\to(1)$ with a
construction similar to that used in the proof of Theorem 3.1 in
\cite{FriHir90}. Let $f\colon\mathbb{N}\to\mathbb{N}$ be a given one-to-one function.
We want to prove that the range of $f$ exists. We fix an antichain
$A=\{a_m\colon m\in\mathbb{N}\}$ and elements $b^n_j$ for $n\in\mathbb{N}$ and $j\leq
n$. The partial order $P$ is obtained by putting for each $n\in\mathbb{N}$ the
$n+1$ elements $b^n_j$ below $a_{f(n)}$. Formally, $b^n_j \leq_P a_m$
when $f(n) = m$, and there are no other comparabilities.
$P$ is clearly an $\omega$-like partial order. Apply the hypothesis and
obtain an embedding $h\colon P\to\omega$. Now, we claim that $m$
belongs to the range of $f$ if and only if $(\exists
n<h(a_m))(f(n)=m)$. One implication is trivial. For the other, suppose
that $f(n)=m$. By construction, $a_m$ has at least $n+1$ predecessors
in $P$, and thus it must be $h(a_m)>n$.
\end{proof}
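The poset of this reversal and the $\Sigma^0_1$ range criterion can likewise be tested on a finite fragment. In the sketch below (ours; the injective $f$ and the particular embedding $h$ are illustrative), $h$ is read off from a linear extension that lists each batch $b^n_0,\ldots,b^n_n$ just before $a_{f(n)}$, and the displayed criterion is checked for all sampled $m$.

```python
# The reversal poset for an illustrative injective f: the a_m form an
# antichain, and the n+1 elements b^n_0, ..., b^n_n sit below a_{f(n)}.

f = [4, 0, 6, 2]                      # injective; range {0, 2, 4, 6}
M = 8                                 # we sample a_0, ..., a_{M-1}
a = [('a', m) for m in range(M)]
b = [('b', n, j) for n in range(len(f)) for j in range(n + 1)]

def leq(x, y):
    return x == y or (x[0] == 'b' and y[0] == 'a' and f[x[1]] == y[1])

# An embedding h into omega, read off from a linear extension listing
# the elements b^n_j just before a_{f(n)}.
order = []
for m in range(M):
    for n in range(len(f)):
        if f[n] == m:
            order += [('b', n, j) for j in range(n + 1)]
    order.append(('a', m))
h = {e: i for i, e in enumerate(order)}
embedding_ok = all(h[x] < h[y] for x in a + b for y in a + b
                   if leq(x, y) and x != y)

# The criterion from the proof: m is in ran(f) iff there is n < h(a_m)
# with f(n) = m (an embedding forces h(a_m) > n whenever f(n) = m).
criterion = all((m in f) == any(n < h[('a', m)] and f[n] == m
                                for n in range(len(f)))
                for m in range(M))
```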
% arXiv:1203.5207 -- "Linear extensions of partial orders and Reverse Mathematics" (math.LO)
% arXiv:1810.13418 -- "Sharp error estimates for spline approximation: explicit constants, $n$-widths, and eigenfunction convergence"

\begin{abstract}
In this paper we provide a priori error estimates in standard Sobolev (semi-)norms for approximation in spline spaces of maximal smoothness on arbitrary grids. The error estimates are expressed in terms of a power of the maximal grid spacing, an appropriate derivative of the function to be approximated, and an explicit constant which is, in many cases, sharp. Some of these error estimates also hold in proper spline subspaces, which additionally enjoy inverse inequalities. Furthermore, we address spline approximation of eigenfunctions of a large class of differential operators, with a particular focus on the special case of periodic splines. The results of this paper can be used to theoretically explain the benefits of spline approximation under $k$-refinement by isogeometric discretization methods. They also form a theoretical foundation for the outperformance of smooth spline discretizations of eigenvalue problems that has been numerically observed in the literature, and for optimality of geometric multigrid solvers in the isogeometric analysis context.
\end{abstract}

\section{Introduction}
Splines are piecewise polynomial functions that are glued together in a certain smooth way.
When using them in an approximation method, the availability of sharp error estimates is of utmost importance.
Depending on the problem to be addressed, one needs to tailor the norm to measure the error, the properties -- degree and smoothness -- of the approximant, and the space the function to be approximated belongs to. As it is difficult to trace all the works on spline approximation, we refer the reader to \cite{Schumaker2007} for an extended bibliography on the topic.
The emerging field of isogeometric analysis (IGA) triggered a renewed interest in the topic of spline approximation and related error estimates.
In particular, isogeometric Galerkin methods aim to approximate solutions of variational formulations of differential problems by using spline spaces of possibly high degree and maximal smoothness \cite{Cottrell:09}. In this context, a priori error estimates in Sobolev (semi-)norms and corresponding projectors to a suitably chosen spline space are crucial.
Classical error estimates in Sobolev (semi-)norms for spline approximation are expressed in terms of
\begin{enumerate}
\item[(a)] a certain power of the maximal grid spacing (this is the approximation power),
\item[(b)] an appropriate derivative of the function to be approximated, and
\item[(c)] a ``constant'' which is independent of the previous quantities but usually depends on the spline degree.
\end{enumerate}
An explicit expression of the constant in (c) is not always available in the literature \cite{deBoor2001}, because it is a minor issue in the most standard approximation analysis; the latter is mainly interested in the approximation power of spline spaces of a given degree.
These estimates are perfectly suited to study approximation under
$h$-refinement, i.e., refinement of the mesh, which is obtained by the
insertion of new knots; see \cite{Hughes:18} and references therein.
On the other hand, one of the most interesting features in IGA is $k$-refinement, which denotes degree elevation with increasing interelement smoothness (and requires the use of splines of high degree and smoothness). The above mentioned error estimates are not sufficient to explain the benefits of approximation under $k$-refinement
as long as it is not well understood how the degree of the spline affects the whole estimate, including the ``constant'' in (c).
In this paper we focus on a priori error estimates with explicit constants for approximation by spline functions defined on arbitrary knot sequences. We are able to provide accurate estimates, which are sharp or very close to sharp in several interesting cases. These a priori estimates are actually good enough to cover convergence to eigenfunctions of classical differential operators
under $k$-refinement.
The key tools to get these results are the theory of Kolmogorov $L^2$ $n$-widths and the representation of the considered Sobolev spaces in terms of integral operators described by suitable kernels \cite{Kolmogorov:36,Pinkus:85}.
The main theoretical contributions and the structure of the paper are outlined in the next subsections.
\subsection{Error estimates}
For $k\geq0$, let $C^k[a,b]$ be the classical space of functions with continuous derivatives of order $0,1,\ldots,k$ on the interval~$[a,b]$.
We further let $C^{-1}[a,b]$ denote the space of bounded, piecewise continuous functions on $[a,b]$ that are discontinuous only at a finite number of points.
Suppose ${\boldsymbol \tau} := (\tau_0,\ldots,\tau_{N+1})$ is a sequence of (break) points
such that
\begin{equation*}
a=:\tau_0 < \tau_1 < \cdots < \tau_{N} < \tau_{N+1}:= b,
\end{equation*}
and let
$I_j := [\tau_j,\tau_{j+1})$,
$j=0,1,\ldots,N-1$, and $I_N := [\tau_N,\tau_{N+1}]$.
For any $p \ge 0$, let ${\cal P}_p$ be the space of polynomials of
degree at most $p$. Then, for $-1\leq k\leq p-1$, we define the space $\mathcal{S}^k_{p,{\boldsymbol \tau}}$ of splines of degree $p$ and smoothness $k$ by
$$ \mathcal{S}^k_{p,{\boldsymbol \tau} } := \{s \in C^{k}[a,b] : s|_{I_j} \in {\cal P}_p,\, j=0,1,\ldots,N \}, $$
and we set
$$ \mathcal{S}_{p,{\boldsymbol \tau}} := \mathcal{S}^{p-1}_{p,{\boldsymbol \tau}}. $$
With a slight misuse of terminology, we will refer to ${\boldsymbol \tau}$ as a knot sequence and to its elements as knots.
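The standard dimension count behind these definitions, $\dim \mathcal{S}^k_{p,{\boldsymbol \tau}} = (N+1)(p+1)-N(k+1)$, can be verified numerically. The following check is ours, not part of the paper: glue $N+1$ polynomial pieces and impose $C^k$ continuity at the interior break points, then read off the dimension as the nullity of the constraint matrix.

```python
# Numerical check (ours) of dim S^k_{p,tau} = (N+1)(p+1) - N(k+1):
# each piece is represented in the monomial basis, and C^k continuity
# is imposed at the interior break points.

import numpy as np
from math import factorial

def spline_dim(p, k, breaks):
    N = len(breaks) - 2                     # number of interior break points
    rows = []
    for i in range(1, N + 1):
        t = breaks[i]
        for d in range(k + 1):              # match derivatives of order 0..k
            row = np.zeros((N + 1) * (p + 1))
            for c in range(d, p + 1):       # d-th derivative of x^c at t
                val = factorial(c) // factorial(c - d) * t ** (c - d)
                row[(i - 1) * (p + 1) + c] = val      # piece to the left of t
                row[i * (p + 1) + c] = -val           # piece to the right of t
            rows.append(row)
    A = np.array(rows)
    return A.shape[1] - np.linalg.matrix_rank(A)

breaks = [0.0, 0.3, 0.7, 1.1, 2.0]           # N = 3, hence N + 1 = 4 pieces
dim_cubic = spline_dim(3, 2, breaks)         # maximal smoothness: N + p + 1 = 7
dim_quadratic_c0 = spline_dim(2, 0, breaks)  # 4*3 - 3*1 = 9
```

In particular, for maximal smoothness $k=p-1$ the nullity equals $N+p+1$, the familiar dimension of $\mathcal{S}_{p,{\boldsymbol \tau}}$.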
For real-valued functions $f$ and $g$ we denote the norm and inner product on $L^2:=L^2(a,b)$ by
$$ \| f\|^2 := (f,f), \quad (f,g) := \int_a^b f(x) g(x) dx, $$
and we consider the Sobolev spaces
$$ H^r:= H^r(a,b)=\{u\in L^2 : \, \partial^\alpha u \in L^2(a,b),\, \alpha=1,\ldots,r\}. $$
Classical results in spline approximation read as follows: for any $u\in H^{r}$ and any ${\boldsymbol \tau}$ there exists $s_p\in \mathcal{S}^k_{p,{\boldsymbol \tau}}$
such that
\begin{equation}
\label{eq:classical-err-est}
\|\partial^{\ell}(u-s_p) \|\leq C(p,k,\ell,r)h^{r-\ell} \| \partial^r u \|, \quad 0\leq \ell \leq r\leq p+1, \quad \ell\leq k+1\leq p,
\end{equation}
where
\begin{equation}\label{eq:hmax}
h:=\max_{j=0,\ldots,N} h_j, \quad h_j:=\tau_{j+1}-\tau_j.
\end{equation}
The above estimates can be generalized to any $L^q$-norm; see, e.g., \cite{Schumaker2007,Lyche:18}.
A common way to construct a spline approximant is to use a quasi-interpolant, that is, a linear combination of the elements of a suitable basis -- usually the B-spline basis -- whose coefficients are obtained by a convenient (local) approximation scheme \cite{deBoorF1973,LycheS1975,Lyche:18}.
Several quasi-interpolants with optimal approximation power are available in the literature; see, e.g., \cite{Lyche:18} for a constructive example.
When dealing with a specific (B-spline) quasi-interpolant, the constant in \eqref{eq:classical-err-est} often behaves quite badly (exponentially) with respect to the spline degree \cite{Lyche:18}. However, this unpleasant feature is more related to the condition number of the considered basis than to the approximation properties of the spline space. Indeed, it can be proved that the condition number of the B-spline basis in any $L^q$-norm grows like $2^p$ for arbitrary knot sequences \cite{Lyche1978,SchererS1999}.
Mainly motivated by the interest of $k$-refinement in IGA, explicit $p$-dependence in approximation bounds of the form \eqref{eq:classical-err-est} has recently received a renewed attention. In Theorem 2 of \cite{Buffa:11} a representation in terms of Legendre polynomials has been exploited to provide a constant which behaves like
${1}/(p-k)^{r-\ell} $ for spline spaces of degree $p\geq 2k+1$ and smoothness $k$.
The important case of splines with maximal smoothness has been addressed in \cite{Takacs:2016}. By considering a proper spline subspace and using Fourier analysis, see Theorems~1.1 and~7.3 of \cite{Takacs:2016}, it has been proved that for any $u\in H^{r}$ there exists $s_p\in \mathcal{S}_{p,{\boldsymbol \tau}}$ such that
\begin{equation}
\label{eq:Takacs}
\| u-s_p \|\leq (\sqrt{2}h)^{r} \| \partial^r u \|, \quad 0\leq r\leq p+1,
\end{equation}
under the assumption of sufficiently fine uniform grids, that is $h_j=h,$ $ j=0,\ldots,N$ and
$hp<b-a$.
The relevance of \eqref{eq:Takacs} is twofold: it covers the case of maximal smoothness and provides a uniform estimate for all the degrees. However, it still suffers from the serious limitation of uniform grid spacing, which is intrinsically related to the use of Fourier analysis. Moreover, it requires a restriction on the grid spacing with respect to the degree.
An interesting framework to examine approximation properties is the theory of Kolmogorov $n$-widths which defines and gives a characterization of optimal $n$-dimensional spaces for approximating function classes and their associated norms \cite{Babuska:2002,Kolmogorov:36,Pinkus:85}.
Kolmogorov $n$-widths and optimal subspaces in $L^2(a,b)$ with respect to the $L^2$-norm were studied in \cite{Evans:2009} with the goal to (numerically) assess the approximation properties of smooth splines in IGA.
In a recent sequence of papers, \cite{Floater:2017,Floater:2018,Floater:per}, it has been proved that subspaces of smooth splines of any degree on uniform grids, identified by suitable boundary conditions, are optimal subspaces for $L^2$ Kolmogorov $n$-width problems for certain function classes of importance in IGA and finite element analysis (FEA). The subspaces used in \cite{Takacs:2016} to prove \eqref{eq:Takacs} are a particular instance of a class of optimal spaces considered in \cite{Floater:2018}. As a byproduct, the results in \cite{Takacs:2016} were improved, providing a better constant, in \cite{Floater:2017} for special sequences ${\boldsymbol \tau}$ and in \cite{Floater:2018} for restricted function classes and uniform sequences ${\boldsymbol \tau}$.
The results in \cite{Floater:2017,Floater:2018} were then applied in \cite{Bressan:preprint} to show that, for uniform sequences ${\boldsymbol \tau}$ and by comparing the same number of degrees of freedom, $k$-refined spaces in IGA provide better a priori error estimates than $C^0$ FEA and $C^{-1}$ discontinuous Galerkin (DG) spaces in almost all cases of practical interest.
In this paper we complete the extension of the results in \cite{Takacs:2016} to arbitrary knot sequences and to any function in $H^r$.
More precisely, we first show the following theorem.
\begin{theorem}\label{thm:one}
For any knot sequence ${\boldsymbol \tau}$, let $h$ denote its maximum knot distance, and let $P_p$ denote the $L^2(a,b)$-projection onto the spline space $\mathcal{S}_{p,{\boldsymbol \tau}}$. Then, for any function $u\in H^r(a,b)$,
\begin{equation} \label{eq:Sande}
\|u-P_pu\|\leq \Big(\frac{h}{\pi}\Big)^{r}\|\partial^r u\|,
\end{equation}
for all $p\geq r-1$.
\end{theorem}
We then show that this theorem also holds with $P_p$ replaced by a suitable Ritz projection.
When comparing \eqref{eq:Sande} with \eqref{eq:Takacs} we see that it does not only allow for general knot sequences but also improves on the constant with a factor $(\sqrt{2}\pi)^r$.
Theorem~\ref{thm:one} is a special case of Theorem~\ref{thm:gen}, which additionally provides estimates in higher order semi-norms and for Ritz-type projections.
We further remark that while Theorem~\ref{thm:one} is only stated for the space consisting of maximally smooth splines, $\mathcal{S}_{p,{\boldsymbol \tau}}$, it also holds for any spline space of lower smoothness, since $\mathcal{S}^{k}_{p,{\boldsymbol \tau}}\supseteq \mathcal{S}_{p,{\boldsymbol \tau}}$ for any $k=-1,\ldots,p-1$ and making the space larger does not make the approximation worse. However, it could make the approximation constant better; see Theorem 2 of \cite{Buffa:11} for cases where $k$ is small enough.
For $r=1$, the error bound in Theorem~\ref{thm:one} still holds if the full spline space is replaced by proper subspaces satisfying certain boundary conditions; see Theorems~\ref{thm:Sp1} and~\ref{thm:reduced}.
Moreover, any element $s$ in such subspaces satisfies the following inverse inequality
\begin{equation}\label{ineq:inv}
\|s'\|\leq \frac{2\sqrt{3}}{h_\mathrm{min}}\|s\|,
\end{equation}
where $h_\mathrm{min}$ is the minimum knot distance; see Theorem~\ref{thm:inv}. This generalizes the results in Corollary 5.1 and Theorem 6.1 of \cite{Takacs:2016} to arbitrary knot sequences and a large class of spline subspaces. They are the main tool for proving optimality of geometric multigrid solvers for linear systems arising from spline discretization methods \cite{Hofreither:2017}.
Note that an extension of \eqref{ineq:inv} to the whole space $\mathcal{S}_{p,{\boldsymbol \tau}}$ is not possible; see Remark 1.2 of \cite{Takacs:2016}.
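For degree $p=1$ a direct computation shows that \eqref{ineq:inv} even holds elementwise: on a knot interval with endpoint values $c,d$ one has $\int s'^2 = (d-c)^2/h_j$ and $\int s^2 = h_j(c^2+cd+d^2)/3$, and $(d-c)^2\leq 4(c^2+cd+d^2)$. The following numerical sketch (ours; grid and samples are illustrative) confirms the bound for random continuous piecewise linear functions on a nonuniform grid.

```python
# Random test (ours) of ||s'|| <= (2*sqrt(3)/h_min) ||s|| for continuous
# piecewise linear splines; the per-interval integrals of s^2 and (s')^2
# are computed exactly from the nodal values.

import math, random

def norms(breaks, vals):
    ns2 = nds2 = 0.0
    for t0, t1, c, d in zip(breaks, breaks[1:], vals, vals[1:]):
        h = t1 - t0
        ns2 += h * (c * c + c * d + d * d) / 3.0    # exact integral of s^2
        nds2 += (d - c) ** 2 / h                    # exact integral of (s')^2
    return math.sqrt(ns2), math.sqrt(nds2)

random.seed(1)
breaks = [0.0, 0.2, 0.5, 0.55, 1.3, 2.0]
h_min = min(t1 - t0 for t0, t1 in zip(breaks, breaks[1:]))
inequality_ok = True
for _ in range(200):
    vals = [random.uniform(-1.0, 1.0) for _ in breaks]
    ns, nds = norms(breaks, vals)
    inequality_ok = inequality_ok and nds <= (2.0 * math.sqrt(3.0) / h_min) * ns
```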
\subsection{Convergence to eigenfunctions}
Spectral analysis can be used to study the error in each eigenvalue and eigenfunction of a numerical discretization of an eigenvalue problem.
For a large class of boundary and initial-value problems the total discretization error on a given mesh can be recovered from its spectral error \cite{Hughes:2008,Hughes:2014}. It is argued in \cite{Garoni:symbol} that this is of primary interest in engineering applications, since practical computations are not performed in the limit of mesh refinement. Usually the computation is performed on a few meshes, or even just a single one, and then the asymptotic information deduced from classical error analysis is insufficient. It is more relevant to understand which eigenvalues/eigenfunctions are well approximated for a given mesh size.
In this paper we use the explicit constant in our a~priori error estimates for the Ritz projections to deduce the spectral error on a given mesh.
As we shall see later, the theory of Kolmogorov $n$-widths and optimal subspaces is closely related to spectral analysis. Assume $A$ is a function class defined in terms of an integral operator $K$. Then, the space spanned by the first $n$ eigenfunctions of the self-adjoint operator $KK^*$ is an optimal subspace for $A$. We show that the general sequence of optimal $n$-dimensional subspaces for $A$, introduced in \cite{Floater:2018}, then converges to this $n$-dimensional space of eigenfunctions as some parameter $p\to\infty$.
In the most interesting cases, this sequence of optimal $n$-dimensional subspaces consists of spline spaces of degree $p$.
This is naturally connected to a differential operator through the kernel of $KK^*$ being a Green's function.
By using this general framework, we analyze how well the eigenfunctions of a given differential/integral operator are approximated in optimal subspaces of fixed dimension. In particular, for fixed dimension $n$,
we identify the optimal spline subspaces that converge to spaces spanned by the first $n$ eigenfunctions of the Laplacian subject to different types of boundary conditions, as their degree $p$ increases; see Corollaries~\ref{cor:per} and~\ref{cor:eig}. These results complement those already known in the literature about the convergence of uniform spline spaces to trigonometric functions; see, e.g., \cite{Goodman:83,Ganzburg:2006} and references therein.
We detail and fine-tune our analysis for the relevant case of the eigenfunction approximation of the Laplacian with periodic boundary conditions in the space of periodic splines by the Ritz projection, a projector that can be used to prove convergence of eigenvalues and eigenfunctions of the standard Galerkin eigenvalue problem \cite{Strang:73,Boffi:2010}.
In the case of maximal smoothness $C^{p-1}$ and uniform knot sequence ${\boldsymbol \tau}$ we consider the periodic $n$-dimensional spline space of degree $p$ and show convergence to the first $n$ or $n-1$ eigenfunctions (depending on the parity of $n$) of the Laplacian with periodic boundary conditions. We conjecture that there is convergence to the first $n$ eigenfunctions for all $n$; see Conjecture~\ref{conj:per} and Remark~\ref{rem:outliers}.
For general smoothness $C^k$, $0\leq k\leq p-1$, and fixed dimension $\mu$, our error estimate ensures convergence of the projection for increasing $p$ only for a fraction of the eigenfunctions. This fraction decreases as the maximum knot distance $h$ increases. In particular, if $h=(p-k)/\mu$ then, roughly speaking, convergence is ensured only for the first $\mu/(p-k)$ of the $\mu$ considered eigenfunctions.
It is known that the spectral discretization by B-splines of degree $p$ and smoothness $C^k$ presents $p-k$ branches and only a single branch (the so-called \emph{acoustical} branch) converges to the true spectrum \cite{Hughes:2014,Garoni:symbol}.
This $1/(p-k)$ spectral convergence is in complete agreement with our results; see Remark~\ref{rem:branches} for the details.
\subsection{Outline of the paper}
The remainder of this paper is organized as follows. In Section~\ref{sec:error} we introduce a general framework for obtaining error estimates that we make use of in Section~\ref{sec:spline} to first prove Theorem~\ref{thm:one}, and then to generalize it to higher order semi-norms and to the tensor-product case. This framework is then applied to the periodic case in Section~\ref{sec:per}, where we first obtain an error estimate for periodic splines and then prove convergence, in $p$, to the first eigenfunctions of the Laplacian with periodic boundary conditions. How our error estimates relate to the theory of $n$-widths is explained in Section~\ref{sec:nw}, and their sharpness is discussed in Section~\ref{sec:sharp}. A general convergence result to the eigenfunctions of various differential/integral operators is proved in Section~\ref{sec:eig} and then applied
to show convergence of certain spline subspaces towards the eigenfunctions of the Laplacian with other boundary conditions.
Section~\ref{sec:reduced} provides error estimates for the class of reduced spline spaces considered in \cite{Takacs:2016}, and
inverse inequalities for various spline subspaces are covered in Section~\ref{sec:inverse}.
Finally, we conclude the paper in Section~\ref{sec:conclusions} by summarizing the main theoretical results and some of their practical consequences.
\section{General error estimates}\label{sec:error}
For $f\in L^2$, let $K$ be the integral operator
$$ K f(x) := \int_a^b K(x,y) f(y) dy. $$
As in \cite{Pinkus:85}, we use the notation $K(x,y)$ for the kernel of~$K$. In this paper we only consider kernels that are continuous or piecewise continuous.
We denote by $K^*$ the adjoint, or dual, of the operator $K$,
defined by
$$ (f,K^\ast g) = (Kf, g). $$
The kernel of $K^\ast$ is $K^\ast(x,y) = K(y,x)$.
Similar to matrix multiplication,
the kernel of the composition of two integral operators $K$ and $M$
is
$$ (KM)(x,y) = (K(x,\cdot),M(\cdot,y)). $$
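Since the kernel-composition rule is the continuum analogue of matrix multiplication, it admits a quick quadrature check (ours; the kernels below are illustrative choices): discretize $K$ and $M$ at midpoints and compare the weighted sum with the exact kernel of $KM$.

```python
# Midpoint-rule check (ours; illustrative kernels on [0,1]) of the
# composition rule: (KM)(x, y) is the integral of K(x, t) M(t, y) dt.

a, b, n = 0.0, 1.0, 2000
w = (b - a) / n
t = [a + (i + 0.5) * w for i in range(n)]

K = lambda x, y: x * y
M = lambda x, y: x + y
KM_exact = lambda x, y: x * (1.0 / 3.0 + y / 2.0)   # int_0^1 (x*t)*(t+y) dt

composition_ok = all(
    abs(w * sum(K(x, s) * M(s, y) for s in t) - KM_exact(x, y)) < 1e-4
    for x in (0.1, 0.5, 0.9) for y in (0.2, 0.7))
```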
Let $\mathcal{X}$ be any finite dimensional subspace of $L^2$ and let $P$ denote the $L^2$-projection onto $\mathcal{X}$. In this paper we are interested in finding explicit constants $C$ in approximation results of the type
\begin{equation}\label{ineq:u}
\|(I-P)u\|\leq C\|\partial^r u\|,
\end{equation}
that hold for all functions $u$ in some Sobolev space of order $r$. For example, if $r=1$ and the functions $u$ are of the form $u=Kf=\int_a^xf(y)dy$, then \eqref{ineq:u} can equivalently be written as
$$\|(I-P)Kf\|\leq C \|f\|,$$
where $C=\|(I-P)K\|$, the $L^2$-operator norm of $(I-P)K$.
Now, given any finite dimensional subspace $\mathcal{Z}_0\supseteq\mathcal{P}_0$ of $L^2$, and any integral operator $K$, we let $\mathcal{Z}_{\bar{p}}$ for ${\bar{p}}\geq 1$ be defined by $\mathcal{Z}_{\bar{p}}:=\mathcal{P}_0+K(\mathcal{Z}_{{\bar{p}}-1})$. We further assume that they satisfy the equality
\begin{equation}\label{eq:Xsimpl}
\mathcal{Z}_{\bar{p}}:=\mathcal{P}_0+K(\mathcal{Z}_{{\bar{p}}-1}) =\mathcal{P}_0+K^*(\mathcal{Z}_{{\bar{p}}-1}),
\end{equation}
where the sums do not need to be orthogonal (or even direct).
Moreover, let $P_{\bar{p}}$ be the $L^2$-projection onto $\mathcal{Z}_{\bar{p}}$, and define $C\in\mathbb{R}$ to be
\begin{equation}\label{eq:C}
C:=\max\{\|(I-P_0)K\|,\|(I-P_0)K^*\|\}.
\end{equation}
Observe that if $K$ satisfies $K^*=\pm K$, then $C=\|(I-P_0)K\|$ and \eqref{eq:Xsimpl} is true for any initial space $\mathcal{Z}_0$. An integral operator satisfying $K^*=- K$ will be considered in Section~\ref{sec:per}.
Using the argument of Lemma~1 in \cite{Floater:2017}, we obtain the following result.
\begin{lemma}\label{lem:1simpl}
If the spaces $\mathcal{Z}_{\bar{p}}$ satisfy \eqref{eq:Xsimpl} and $P_{\bar{p}}$ denotes the $L^2$-projection onto $\mathcal{Z}_{\bar{p}}$, then
\begin{equation*}
\|(I-P_{\bar{p}})K\|\leq \|K-KP_{{\bar{p}}-1}\|\leq C, \quad \forall {\bar{p}}\geq 1.
\end{equation*}
\end{lemma}
\begin{proof}
We see from \eqref{eq:Xsimpl} that $KP_{{\bar{p}}-1}$ maps into the space $\mathcal{Z}_{\bar{p}}$ for ${\bar{p}}\geq 1$. Now, since $P_{\bar{p}}$ is the best approximation into $\mathcal{Z}_{\bar{p}}$ we have,
\begin{align*}
\|(I-P_{\bar{p}})K\|\leq \|K-KP_{{\bar{p}}-1}\|=\|(I-P_{{\bar{p}}-1})K^*\|.
\end{align*}
Continuing this procedure gives
\begin{align*}
\|(I-P_{\bar{p}})K\|\leq\begin{cases}
\|(I-P_0)K\|, & {\bar{p}} \text{ even},\\
\|(I-P_0)K^*\|, & {\bar{p}} \text{ odd},
\end{cases}
\end{align*}
and the result follows from the definition of $C$ in \eqref{eq:C}.
\end{proof}
We now generalize the above result using an argument similar to Lemma~1 in \cite{Floater:2018}.
\begin{lemma}\label{lem:2simpl}
Let $r\geq 1$ be given.
If the spaces $\mathcal{Z}_{\bar{p}}$ satisfy \eqref{eq:Xsimpl} and $P_{\bar{p}}$ denotes the $L^2$-projection onto $\mathcal{Z}_{\bar{p}}$, then
\begin{align*}
\|(I-P_{\bar{p}})K^r\|\leq \|K^r-KP_{{\bar{p}}-1}K^{r-1}\|\leq C\,\|(I-P_{{\bar{p}}-1})K^{r-1}\|,
\end{align*}
for all ${\bar{p}}\geq 1$.
\end{lemma}
\begin{proof}
Similar to the previous lemma, $KP_{{\bar{p}}-1}K^{r-1}f\in \mathcal{Z}_{\bar{p}}$ is some approximation to $K^rf$ and so, since $P_{\bar{p}} K^rf$ is the best approximation,
\begin{align*}
\|K^r-P_{\bar{p}} K^r\| &\leq \|K^r-KP_{{\bar{p}}-1}K^{r-1}\| =\|K(I-P_{{\bar{p}}-1})K^{r-1}\|\\
&\leq \|K(I-P_{{\bar{p}}-1})\|\,\|(I-P_{{\bar{p}}-1})K^{r-1}\|\\
&=\|(I-P_{{\bar{p}}-1})K^*\|\,\|(I-P_{{\bar{p}}-1})K^{r-1}\|,
\end{align*}
where we used $(I-P_{{\bar{p}}-1})=(I-P_{{\bar{p}}-1})^2$ and, in the last step, that $\|K(I-P_{{\bar{p}}-1})\|=\|(I-P_{{\bar{p}}-1})K^*\|$ since $P_{{\bar{p}}-1}$ is self-adjoint.
The result now follows from Lemma~\ref{lem:1simpl} since $\|(I-P_{{\bar{p}}-1})K^*\|\leq C$ for all ${\bar{p}}\geq 1$.
\end{proof}
Similar to Theorem~4 in \cite{Floater:2018}, we obtain the following result.
\begin{theorem}\label{thm:simple}
If the spaces $\mathcal{Z}_{\bar{p}}$ satisfy \eqref{eq:Xsimpl} and $P_{\bar{p}}$ denotes the $L^2$-projection onto $\mathcal{Z}_{\bar{p}}$, then
\begin{align*}
\|(I-P_{\bar{p}})K^r\|\leq \|K^r-KP_{{\bar{p}}-1}K^{r-1}\|\leq C^r,
\end{align*}
for all ${\bar{p}}\geq r-1$.
\end{theorem}
\begin{proof}
The case $r=1$ is Lemma \ref{lem:1simpl}. The cases $r\geq 2$ then follow from Lemma~\ref{lem:2simpl} and induction on $r$.
\end{proof}
\section{Spline approximation}\label{sec:spline}
In this section we prove, and generalize, Theorem \ref{thm:one}.
Consider the integral operator $K$ defined by integrating from the left,
\begin{equation}\label{eq:Kint}
(Kf)(x):=\int_a^xf(y)dy.
\end{equation}
One can check that the operator $K^*$ is then integration from the right,
\begin{equation*}
(K^*f)(x)=\int_x^bf(y)dy;
\end{equation*}
see, e.g., Section~7 of \cite{Floater:2018}.
The space $H^r$ can then be given as
\begin{equation}\label{eq:Hr}
H^r=\mathcal{P}_{0} + K(H^{r-1})=\mathcal{P}_{0} + K^*(H^{r-1})
=\mathcal{P}_{r-1}+K^r(H^0),
\end{equation}
with $H^0=L^2$, and the spline spaces $\mathcal{S}_{p,{\boldsymbol \tau}}$ satisfy
\begin{equation}\label{eq:Sp}
\mathcal{S}_{p,{\boldsymbol \tau}} = \mathcal{P}_0+K(\mathcal{S}_{p-1,{\boldsymbol \tau}}) = \mathcal{P}_0+K^*(\mathcal{S}_{p-1,{\boldsymbol \tau}}).
\end{equation}
Note that none of the sums in \eqref{eq:Hr} and \eqref{eq:Sp} is orthogonal.
Next, recall the following Poincar\'e inequality (see, e.g., \cite{Payne:60}): for any $u\in H^1$ on the interval $(a,b)$ we have
\begin{equation}\label{ineq:Poinc}
\|u-\bar{u}\|\le \frac{b-a}{\pi}\|u'\|,
\end{equation}
where ${\bar u}:=(b-a)^{-1}\int_a^b u(x)dx$ is the mean value of $u$. This result can be proved using Fourier analysis; it is also the case $n=1$ of \cite{Kolmogorov:36}.
Let $P_0$ be the $L^2$-projection onto $\mathcal{S}_{0,{\boldsymbol \tau}}$ and $\|\cdot\|_{j}$ be the $L^2$-norm on the knot interval $I_j$. Then, using the Poincar\'e inequality on each knot interval,
we have for all $u\in H^1$ that
\begin{equation}\label{ineq:deg0proof}
\|u-P_0u\|^2= \sum_{j=0}^N\|u-P_0u\|_j^2\le \sum_{j=0}^N \Big(\frac{h_j}{\pi}\Big)^2\|u'\|^2_j.
\end{equation}
In combination with \eqref{eq:hmax}, we obtain
\begin{equation}\label{ineq:deg0}
\|u-P_0u\|\le \frac{h}{\pi}\|u'\|.
\end{equation}
Since $K(L^2), K^*(L^2)\subset H^1$ for $K$ in \eqref{eq:Kint}, it follows that for $\mathcal{Z}_0=\mathcal{S}_{0,{\boldsymbol \tau}}$ the constant $C$ in \eqref{eq:C} satisfies
\begin{equation}\label{ineq:deg0-C}
C\leq h/\pi.
\end{equation}
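For completeness, the short computation behind \eqref{ineq:deg0-C} is the following: if $g=Kf$ or $g=K^*f$ with $\|f\|\leq 1$, then $g\in H^1$ with $g'=f$ or $g'=-f$, and \eqref{ineq:deg0} gives
\begin{align*}
\|(I-P_0)Kf\|\leq \frac{h}{\pi}\,\|(Kf)'\|=\frac{h}{\pi}\,\|f\|,\qquad
\|(I-P_0)K^*f\|\leq \frac{h}{\pi}\,\|(K^*f)'\|=\frac{h}{\pi}\,\|f\|.
\end{align*}
Taking the supremum over $\|f\|\leq 1$ in the definition \eqref{eq:C} yields $C\leq h/\pi$.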
Using Theorem~\ref{thm:simple} we can now prove Theorem~\ref{thm:one}.
\begin{proof}[Proof of Theorem \ref{thm:one}]
Recall that $P_p$ denotes the $L^2$-projection onto $\mathcal{S}_{p,{\boldsymbol \tau}}$, and observe that $u=g+K^rf\in H^r$ for $f\in L^2$ and $g\in\mathcal{P}_{r-1}\subset \mathcal{S}_{p,{\boldsymbol \tau}}$. Then, using \eqref{eq:C} with \eqref{ineq:deg0-C} in Theorem~\ref{thm:simple} (with ${\bar{p}}=p$) we arrive at
\begin{equation*}
\|u-P_pu\|=\|(g+K^rf)-P_p(g+K^rf)\|\leq \|(I-P_p)K^r\|\,\|f\|\leq \Big(\frac{h}{\pi}\Big)^r\|\partial^ru\|,
\end{equation*}
for all $p\geq r-1$.
\end{proof}
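As a simple instance of the theorem, every $u\in H^2$ satisfies, for all degrees $p\geq 1$,
\begin{equation*}
\|u-P_pu\|\leq \Big(\frac{h}{\pi}\Big)^{2}\|u''\|,
\end{equation*}
with a constant that is independent of both the degree $p$ and the knot positions beyond the maximal knot distance $h$.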
\subsection{Higher order semi-norms}\label{subsec:firstgen}
We now generalize Theorem \ref{thm:one} to higher order semi-norms.
To do this, we define a sequence of projection operators $Q_p^q:H^q\to \mathcal{S}_{p,{\boldsymbol \tau}}$, for $q=0,\ldots,p$, by $Q_p^0:=P_p$ and
\begin{equation}\label{eq:Qproj}
(Q_p^qu)(x) := c(u) + (KQ_{p-1}^{q-1}\partial u)(x) = c(u) + \int_a^x(Q_{p-1}^{q-1}\partial u)(y) d y,
\end{equation}
where $c(u)\in\mathcal{P}_0$ is chosen such that
\begin{equation}\label{eq:Qconst}
\int_a^b(Q_p^qu)(x) d x=\int_a^b u(x)dx.
\end{equation}
Observe that these projections, by definition, commute with the derivative: $\partial Q_p^q=Q_{p-1}^{q-1}\partial$. Note also that the range of $Q_p^q$ is $ \mathcal{S}_{p,{\boldsymbol \tau}}$ for any $q=0,\ldots,p$, since the spline spaces themselves satisfy $\partial^q \mathcal{S}_{p,{\boldsymbol \tau}} = \mathcal{S}_{p-q,{\boldsymbol \tau}}$ for any $q=0,\ldots,p$.
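Iterating the commuting relation gives the identity that will be used repeatedly below:
\begin{equation*}
\partial^qQ_p^q=\partial^{q-1}Q_{p-1}^{q-1}\partial=\cdots=Q_{p-q}^{0}\,\partial^q=P_{p-q}\,\partial^q.
\end{equation*}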
\begin{theorem}\label{thm:gen}
Let $u\in H^r$ for $r\geq 1$ be given.
For any $q=1,\ldots,r-1$ and knot sequence ${\boldsymbol \tau}$, let $Q_p^q$ be the projection onto $\mathcal{S}_{p,{\boldsymbol \tau}}$ defined in \eqref{eq:Qproj}. Then,
\begin{align}
\|\partial^{q-1}(u-Q^q_pu)\|&\leq \Big(\frac{h}{\pi}\Big)^{r-q+1}\|\partial^ru\|,
\label{ineq:semi1}\\
\|\partial^q(u-Q^q_pu)\|&\leq \Big(\frac{h}{\pi}\Big)^{r-q}\|\partial^ru\|,
\label{ineq:semi2}
\end{align}
for all $p\geq r-1$.
\end{theorem}
\begin{proof}
From \eqref{eq:Hr} we know that $u\in H^r$ can be written as $u=g+K^{q}v$ for $g\in\mathcal{P}_{q-1}\subset\mathcal{S}_{p,{\boldsymbol \tau}}$ and $v\in H^{r-q}$.
Then, by using the fact that $\partial^qQ^q_p=Q^{q-q}_{p-q}\partial^q=P_{p-q}\partial^q$, inequality \eqref{ineq:semi2} immediately follows from Theorem~\ref{thm:one}:
\begin{equation*}
\|\partial^q(u-Q_p^qu)\|
=\|v-P_{p-q}v\|\leq \Big(\frac{h}{\pi}\Big)^{r-q}\|\partial^{r-q}v\|= \Big(\frac{h}{\pi}\Big)^{r-q}\|\partial^ru\|, \quad p\geq r-1.
\end{equation*}
Next, we look at inequality \eqref{ineq:semi1}. First observe that with $u$ as above we have
\begin{align*}
\|\partial^{q-1}(u-Q^q_pu)\|&=\|\partial^{q-1}(g+K^qv-Q^q_p(g+K^qv))\|=\inf_{c\in\mathcal{P}_0}\|Kv-c-KP_{p-q}v\|,
\end{align*}
where we used the commuting property $\partial^{q-1}Q^q_p=Q^{1}_{p-q+1}\partial^{q-1}$ together with the definition in \eqref{eq:Qproj} and \eqref{eq:Qconst}. The above infimum is taken over all $c\in\mathcal{P}_0$, and so by making the choice $c=0$ we obtain
\begin{align*}
\|\partial^{q-1}(u-Q^q_pu)\| \leq \|Kv-KP_{p-q}v\|=\|K(I-P_{p-q})v\|.
\end{align*}
The function $v\in H^{r-q}$ can be written as $v=\hat g+K^{r-q}f$ for $f\in L^2$ and $\hat g\in\mathcal{P}_{r-q-1}\subset \mathcal{S}_{p-q,{\boldsymbol \tau}}$, and so
$$\|K(I-P_{p-q})v\|= \|(K^{r-q+1}-KP_{p-q}K^{r-q})f\|.$$
Inequality \eqref{ineq:semi1} now follows from Theorem \ref{thm:simple} (with $\mathcal{Z}_0=\mathcal{S}_{0,{\boldsymbol \tau}}$ and ${\bar{p}}=p-q+1$) and \eqref{ineq:deg0-C}.
\end{proof}
\begin{remark}
The above proof of inequality \eqref{ineq:semi1} can also be used to obtain an error estimate in the case $q=r$. Specifically, we have
\begin{equation}
\|\partial^{r-1}(u-Q^r_pu)\|\leq \frac{h}{\pi}\|\partial^ru\|,\quad\forall p\geq r,
\end{equation}
where the extra requirement on the degree, $p\geq r$, is needed to ensure that the projection $Q^r_p$ (or equivalently $P_{p-r}$) is well-defined. By using $\partial^rQ^r_p=P_{p-r}\partial^r$, one can also obtain the stability estimate $\|\partial^{r}(u-Q^r_pu)\|\leq \|\partial^ru\|$ for $p\geq r$.
\end{remark}
\begin{example}\label{ex:H1}
Let $q=1$. Since $\partial(\mathcal{S}_{p,{\boldsymbol \tau}})=\mathcal{S}_{p-1,{\boldsymbol \tau}}$, the projection operator $Q_p^1$ can equivalently be defined as the solution to the Neumann problem: find $Q_p^1u\in\mathcal{S}_{p,{\boldsymbol \tau}}$ such that
\begin{equation*}
\begin{aligned}
(\partial Q_p^1u,\partial v) &= (\partial u,\partial v),\quad \forall v\in \mathcal{S}_{p,{\boldsymbol \tau}},\\
(Q_p^1u,1)&=(u,1),
\end{aligned}
\end{equation*}
and this projection is usually referred to as a Ritz (or Rayleigh--Ritz) projection.
Theorem \ref{thm:gen} then states that this approximation of $u\in H^r$, $r\geq 2$, satisfies the error estimates
\begin{equation}\label{ineq:RitzH1}
\begin{aligned}
\|u-Q^1_pu\|&\leq \Big(\frac{h}{\pi}\Big)^{r}\|\partial^ru\|, &&\forall p\geq r-1,\\
\|\partial(u-Q^1_pu)\|&\leq \Big(\frac{h}{\pi}\Big)^{r-1}\|\partial^ru\|, &&\forall p\geq r-1.
\end{aligned}
\end{equation}
Thus $Q_p^1u$ provides a good approximation of both the function $u$ itself, and its first derivative.
\end{example}
\subsection{Extension to higher dimensions}\label{subsec:tens}
In this subsection we briefly mention how to extend the error estimate in Theorem~\ref{thm:one} to the tensor-product case.
Let $\Omega:=(a_1,b_1)\times(a_2,b_2)$ and let $\|\cdot\|_{\Omega}$ denote the $L^2(\Omega)$-norm.
The following corollary can be concluded from Theorem \ref{thm:one} with the aid of Theorem~8 in \cite{Bressan:preprint}, but for the sake of completeness we also provide a short proof here.
\begin{corollary}
Let $P_{p_1,p_2}:=P_{p_1}\otimes P_{p_2}$ be the $L^2(\Omega)$-projection onto $\mathcal{S}_{p_1,{\boldsymbol \tau}_1}\otimes \mathcal{S}_{p_2,{\boldsymbol \tau}_2}$, and
let $h:=\max\{h_{{\boldsymbol \tau}_1},h_{{\boldsymbol \tau}_2}\}$ where $h_{{\boldsymbol \tau}_i}$ denotes the maximum knot distance in ${\boldsymbol \tau}_i$, $i=1,2$.
Then, for any $u\in H^{r}(\Omega)$ we have
\begin{align*}
\|u-P_{p_1,p_2}u\|_{\Omega}\leq \Big(\frac{h}{\pi}\Big)^r\Big(\|\partial_x^{r}u\|_{\Omega}+\|\partial_y^{r}u\|_\Omega\Big),
\end{align*}
for all $p_1,p_2\geq r-1$.
\end{corollary}
\begin{proof}
From the triangle inequality and the fact that $P_{p_1}\otimes P_{p_2}=P_{p_1}\circ P_{p_2}$, we obtain
\begin{align*}
\|u-P_{p_1}\otimes P_{p_2}u\|_{\Omega}&\leq \|u-P_{p_1}u\|_{\Omega}+\|P_{p_1}u-P_{p_1}\circ P_{p_2}u\|_{\Omega}\\
&\leq \|u-P_{p_1}u\|_{\Omega}+\|P_{p_1}\|\,\|u-P_{p_2}u\|_{\Omega}\\
&\leq\Big(\frac{h}{\pi}\Big)^r\Big(\|\partial_x^{r}u\|_{\Omega}+\|\partial_y^{r}u\|_\Omega\Big),
\end{align*}
where we used Theorem~\ref{thm:one} in each direction, together with the fact that the $L^2(\Omega)$-operator norm of $P_{p_1}$ is equal to $1$.
\end{proof}
\section{Results for periodic spline spaces}\label{sec:per}
In this section we consider the Sobolev space of periodic functions,
$$ H^r_{\mathrm{per}}:=\{u\in H^r: \, \partial^\alpha u(0)=\partial^\alpha u(1),\, \alpha=0,1,\ldots,r-1\}, $$
and the periodic spline space $\mathcal{S}_{p,{\boldsymbol \tau},\mathrm{per}}$ defined by
$$ \mathcal{S}_{p,{\boldsymbol \tau},\mathrm{per}} := \{s\in \mathcal{S}_{p,{\boldsymbol \tau}}:\, \partial^\alpha s(0)=\partial^\alpha s(1),\,\alpha=0,1,\ldots,p-1\}. $$
We remark that we only consider the interval $(a,b)=(0,1)$ to simplify the exposition below.
Note that $\mathcal{S}_{0,{\boldsymbol \tau},\mathrm{per}}=\mathcal{S}_{0,{\boldsymbol \tau}}$.
Later (in Section~\ref{subsec:pereig}) we will make use of the dimension of $\mathcal{S}_{p,{\boldsymbol \tau},\mathrm{per}}$ and so in this section we index the break points in ${\boldsymbol \tau}$ such that $n=\dim \mathcal{S}_{p,{\boldsymbol \tau},\mathrm{per}}$, i.e.,
${\boldsymbol \tau}=(\tau_0,\ldots,\tau_n)$.
Now, let $K$ be the integral operator of \cite{Floater:per}.
On the interval $(0,1)$ its kernel has the explicit representation
\begin{equation}\label{eq:Kper}
K(x,y)=\begin{cases}
-x+y-1/2, & x<y,\\
-x+y+1/2, & x\geq y.
\end{cases}
\end{equation}
Using this kernel one can check that the integral operator $K$ satisfies $K^*=-K$.
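The identity $K^*=-K$ can be verified directly from \eqref{eq:Kper}: for $x>y$ the point $(y,x)$ lies in the opposite branch, so
\begin{equation*}
K(y,x)=-y+x-\frac12=-\Big(-x+y+\frac12\Big)=-K(x,y),
\end{equation*}
and similarly for $x<y$; since the kernel of $K^*$ is $K(y,x)$, the claim follows.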
If $f\perp 1$ then $K(x,y)$ is the Green's function to the boundary value problem (see Lemma~1 in \cite{Floater:per})
\begin{equation}\label{bvp:per}
u'(x)=f(x),\quad x\in (0,1),\quad u(0)=u(1),\quad u\perp 1,
\end{equation}
meaning that $u$ is the solution of \eqref{bvp:per} if and only if it satisfies $u=Kf$. We note that if $f$ is not orthogonal to $1$ then $Kf=K(f-\int_0^1 f(x)dx)$.
By using \eqref{bvp:per} it was shown in \cite{Floater:per} that the space $H^r_{\mathrm{per}}$ is equal to
\begin{equation}\label{eq:Hper}
H^r_{\mathrm{per}} = \mathcal{P}_0\oplus K(H^{r-1}_{\mathrm{per}})
\end{equation}
with $H^0_\mathrm{per}=L^2$, and the spline spaces $\mathcal{S}_{p,{\boldsymbol \tau},\mathrm{per}}$ satisfy
\begin{equation}\label{eq:Sper}
\mathcal{S}_{p,{\boldsymbol \tau},\mathrm{per}}=\mathcal{P}_0\oplus K(\mathcal{S}_{p-1,{\boldsymbol \tau},\mathrm{per}}).
\end{equation}
The sums in \eqref{eq:Hper} and \eqref{eq:Sper} are orthogonal.
Again, for $q=0,\ldots,p$, we can define a sequence of projection operators $Q_p^q: H^q_{\mathrm{per}}\to \mathcal{S}_{p,{\boldsymbol \tau},\mathrm{per}}$, in exactly the same way as in Section~\ref{subsec:firstgen}, by letting $Q_p^0$ be the $L^2$-projection and
\begin{equation}\label{eq:Qprojper}
(Q_p^qu)(x) := c(u) + (KQ_{p-1}^{q-1}\partial u)(x),
\end{equation}
where $K$ now has kernel \eqref{eq:Kper}, and
where $c(u)\in\mathcal{P}_0$ is chosen such that
\begin{equation*}
\int_0^1(Q_p^qu)(x) d x=\int_0^1 u(x)dx.
\end{equation*}
Just as before, these projections commute with the derivative, $\partial Q_p^q=Q_{p-1}^{q-1}\partial$.
Now, using \eqref{eq:Hper} and \eqref{eq:Sper}, together with the fact that $\partial^q(\mathcal{S}_{p,{\boldsymbol \tau},\mathrm{per}})=\mathcal{S}_{p-q,{\boldsymbol \tau},\mathrm{per}}$,
one can check that $Q_p^q$ is a Ritz projection and solves the problem
\begin{equation} \label{eq:biharmonic}
\begin{aligned}
(\partial^qQ_p^qu,\partial^qv)&=(\partial^qu,\partial^qv), \quad \forall v\in \mathcal{S}_{p,{\boldsymbol \tau},\mathrm{per}},\\
(Q_p^qu,1)&=(u,1),
\end{aligned}
\end{equation}
for all $0\leq q\leq p$.
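Indeed, the first equation in \eqref{eq:biharmonic} follows from the commuting property: for any $v\in \mathcal{S}_{p,{\boldsymbol \tau},\mathrm{per}}$ we have $\partial^qv\in\mathcal{S}_{p-q,{\boldsymbol \tau},\mathrm{per}}$, and so
\begin{equation*}
(\partial^qQ_p^qu,\partial^qv)=(P_{p-q}\partial^qu,\partial^qv)=(\partial^qu,\partial^qv),
\end{equation*}
where $P_{p-q}$ denotes the $L^2$-projection onto $\mathcal{S}_{p-q,{\boldsymbol \tau},\mathrm{per}}$.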
\subsection{Error estimates}
In many applications one would be interested in finding a single spline function that can provide a good approximation of all derivatives of $u$ up to a given number $q$. The next theorem shows that $Q_p^qu$ is such a spline function, for $p$ large enough.
\begin{theorem}\label{thm:per}
Let $u\in H^r_{\mathrm{per}}$ for $r\geq 1$ be given.
For any $q=0,\ldots,r-1$ and knot sequence ${\boldsymbol \tau}$, let $Q_p^q$ be the projection onto $\mathcal{S}_{p,{\boldsymbol \tau},\mathrm{per}}$ defined in \eqref{eq:Qprojper}. Then, for any $\ell=0,\ldots,q$ we have
\begin{align} \label{ineq:per}
\|\partial^{\ell}(u-Q^q_pu)\|&\leq \Big(\frac{h}{\pi}\Big)^{r-\ell}\|\partial^ru\|,
\end{align}
for all $p$ satisfying both $p\geq r-1$ and $p\geq 2q-\ell-1$.
\end{theorem}
\begin{proof}
From \eqref{eq:Hper} we know that $u\in H_{\mathrm{per}}^r$ can be written as the orthogonal sum $u=c+K^{r}f$ for $c\in\mathcal{P}_{0}$, $f\in L^2$ and $K$ in \eqref{eq:Kper}. Thus,
\begin{align*}
\|\partial^{\ell}(u-Q^q_pu)\|&=\|K^{r-\ell}f-Q_{p-\ell}^{q-\ell}K^{r-\ell}f\|
=\|K^{r-\ell}f-K^{q-\ell}P_{p-q}K^{r-q}f\| \\
&\leq \|K^{q-\ell}(I-P_{p-q})K^{r-q}\|\,\|f\| \\
&\leq \|K^{q-\ell}(I-P_{p-q})\|\,\|(I-P_{p-q})K^{r-q}\|\,\|\partial^r u\|,
\end{align*}
where we used that $(I-P_{p-q})=(I-P_{p-q})^2$ and that $\|f\|=\|\partial^ru\|$.
Using Theorem \ref{thm:simple} and the Poincar\'e inequality \eqref{ineq:deg0}, now applied to functions in $H^1_\mathrm{per}\subset H^1$, we then find that
\begin{align*}
\|(I-P_{p-q})K^{r-q}\|&\leq \Big(\frac{h}{\pi}\Big)^{r-q}, &&\forall p\geq r-1,\\
\|K^{q-\ell}(I-P_{p-q})\|&=\|(I-P_{p-q})(K^{q-\ell})^*\|\leq \Big(\frac{h}{\pi}\Big)^{q-\ell}, &&\forall p\geq 2q-\ell-1.
\end{align*}
\end{proof}
We remark that the case $\ell=0$ in the above theorem improves upon the constant in \cite{Takacs:2016} for uniform knot sequences and generalizes the approximation results for periodic splines in \cite{Floater:per,Pinkus:85,Takacs:2016} to an arbitrary knot sequence ${\boldsymbol \tau}$.
\begin{example}
Similar to Example \ref{ex:H1}, let $q=1$ and $r\geq 2$.
Theorem \ref{thm:per} then states that the above approximation $Q_p^1u$ of $u\in H^r_{\mathrm{per}}$ satisfies the error estimates
\begin{equation*}
\begin{aligned}
\|u-Q^1_pu\|&\leq \Big(\frac{h}{\pi}\Big)^{r}\|\partial^ru\|, &&\forall p\geq r-1,\\
\|\partial(u-Q^1_pu)\|&\leq \Big(\frac{h}{\pi}\Big)^{r-1}\|\partial^ru\|, &&\forall p\geq r-1.
\end{aligned}
\end{equation*}
Thus $Q_p^1u$ provides a good approximation of both the function $u$ itself, and its first derivative.
\end{example}
\begin{example}
Let $q=2$ and $r=3$. For $Q_p^2u$ to approximate $u\in H^3_{\mathrm{per}}$ in the $L^2$-norm, the above theorem requires the degree to be at least $2q-1=3$, and not $r-1=2$ as one might expect. In view of \eqref{eq:biharmonic}, this is consistent with the known fact that the biharmonic equation must be solved with piecewise polynomials of at least cubic degree to obtain an optimal rate of convergence in $L^2$; see, e.g., p.~118 in \cite{Strang:73}.
\end{example}
\subsection{Convergence to eigenfunctions}\label{subsec:pereig}
Consider the periodic eigenvalue problem
\begin{equation}\label{eq:per}
-u''(x) = \nu u(x), \quad x\in (0,1), \quad u(0)=u(1), \quad u'(0)=u'(1).
\end{equation}
It has eigenvalues given by $\nu_0=0$ and
\begin{equation}\label{eq:eigvper}
\nu_{2i-1}=\nu_{2i}=(2\pi i)^2,\quad i=1,2,\ldots,
\end{equation}
with corresponding orthonormal eigenfunctions $\psi_0=1$ and
\begin{equation}\label{eq:eigper}
\psi_j=\sqrt{2}\begin{cases}\sin(2\pi i x), &j=2i-1,\\ \cos(2\pi i x), &j=2i,\end{cases} \quad j=1,2,\ldots.
\end{equation}
Since $\psi_j\in H^r_{\mathrm{per}}$ for any $r$, we can plug these eigenfunctions into estimate \eqref{ineq:per} and obtain the following result.
\begin{corollary}\label{cor:per}
Let $q\geq \ell\geq 0$ be given and let $Q_p^q$ be the projection onto $\mathcal{S}_{p,{\boldsymbol \tau},\mathrm{per}}$ defined in \eqref{eq:Qprojper}.
Then, for all $j$ satisfying $2\ceil{j/2}h<1$, we have
\begin{equation}\label{ineq:eigper}
\begin{aligned}
\|\partial^\ell(\psi_j-Q_p^q\psi_j)\| &\leq \big(2\pi\ceil{j/2}\big)^{\ell}\big(2\ceil{{j}/{2}} h\big)^{p+1-\ell} \xrightarrow[p\to\infty]{} 0.
\end{aligned}
\end{equation}
\end{corollary}
\begin{proof}
First note that $\psi_0=1\in \mathcal{S}_{p,{\boldsymbol \tau},\mathrm{per}}$ for all $p\geq 0$, and so $\psi_0-Q_p^q\psi_0=0$. From \eqref{eq:eigper} we have
$$\|\partial^r\psi_j\| = \big(2\pi\ceil{j/2}\big)^r,\quad j=1,2,\ldots.$$
Using Theorem~\ref{thm:per} with $r=p+1$ we then obtain \eqref{ineq:eigper}.
\end{proof}
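For $\ell=0$ the substitution in the proof reads explicitly
\begin{equation*}
\|\psi_j-Q_p^q\psi_j\|\leq \Big(\frac{h}{\pi}\Big)^{p+1}\|\partial^{p+1}\psi_j\|=\Big(\frac{h}{\pi}\Big)^{p+1}\big(2\pi\ceil{j/2}\big)^{p+1}=\big(2\ceil{j/2}h\big)^{p+1}.
\end{equation*}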
Let $[\ldots]$ denote the span of a set of functions. Then, any $v\in[\psi_0,\ldots,\psi_{l}]$ can be written as $v(x)=\sum_{j=0}^{l}c_j\psi_j(x)$. So if we let $l$ be the largest $j$ satisfying $2\ceil{j/2}h<1$, then from Corollary~\ref{cor:per} we have
\begin{align*}
\frac{\|v-Q_p^qv\|^2}{\|v\|^2}&=\frac{\|\sum_{j=0}^{l}c_j(\psi_j-Q_p^q\psi_j)\|^2}{\|\sum_{j=0}^{l}c_j\psi_j\|^2}
\leq \frac{(\sum_{j=0}^{l}|c_j|\,\|\psi_j-Q_p^q\psi_j\|)^2}{\sum_{j=0}^{l}c_j^2} \\
&\leq \sum_{j=0}^{l}\|\psi_j-Q_p^q\psi_j\|^2
\leq (l+1)\Big(2\ceil{l/2}h\Big)^{2(p+1)} \xrightarrow[p\to\infty]{} 0.
\end{align*}
Thus, the $n$-dimensional spline space $\mathcal{S}_{p,{\boldsymbol \tau},\mathrm{per}}$ approximates the whole $(l+1)$-dimensional space $[\psi_0,\ldots,\psi_{l}]$ as $p\to\infty$.
\begin{remark}\label{rem:perGal}
The error estimate in Corollary~\ref{cor:per} can be used to prove convergence, in $p$, to eigenvalues and eigenfunctions of the standard Galerkin eigenvalue problem: find $\psi^h_j\in \mathcal{S}_{p,{\boldsymbol \tau},\mathrm{per}}$ and $\nu_j^h\in\mathbb{R}$, $j=0,1,\ldots,n-1$, such that
\begin{align}\label{eq:per-discr}
(\partial \psi^h_j,\partial \phi) = \nu_j^h(\psi^h_j,\phi),\quad \forall \phi\in\mathcal{S}_{p,{\boldsymbol \tau},\mathrm{per}}.
\end{align}
In Chapter~6.3 of \cite{Strang:73} (see also Part~2.8 of \cite{Boffi:2010}) it is shown that the error in the eigenvalues, $|\nu_j-\nu_j^h|$, and the error in the eigenfunctions, $\|\psi_j-\psi_j^h\|$, can be bounded in terms of $\|\psi_j-Q_p^1\psi_j\|$, which goes to $0$ as $p\to\infty$ for all $j$ satisfying the requirement of Corollary~\ref{cor:per} (with $q=1$). The argument can also be extended to periodic eigenvalue problems of higher order ($q$-harmonic with $q>1$).
\end{remark}
\begin{example}\label{ex:per}
Let ${\boldsymbol \tau}$ be the uniform knot sequence. Then, $h=1/n$ and if
\begin{itemize}
\item $n$ is an odd number, say $n=2m-1$, then
$$2\ceil{j/2}h=2\ceil{j/2}/(2m-1)<1,\quad j=0,\ldots,2m-2,$$
and so, as $p\to\infty$, the $(2m-1)$-dimensional spline space $\mathcal{S}_{p,{\boldsymbol \tau},\mathrm{per}}$ approximates the whole $(2m-1)$-dimensional space of eigenfunctions,
\begin{align}\label{pereig}
[1,\sin(2\pi x),\cos(2\pi x),\ldots,\sin(2\pi(m-1) x),\cos(2\pi(m-1) x)].
\end{align}
\item $n$ is an even number, say $n=2m$, then
$$2\ceil{j/2}h=\ceil{j/2}/m<1,\quad j=0,\ldots,2m-2,$$
and so, as $p\to\infty$, the $2m$-dimensional spline space $\mathcal{S}_{p,{\boldsymbol \tau},\mathrm{per}}$ also approximates the $(2m-1)$-dimensional space of eigenfunctions in \eqref{pereig}. Note that for $j=2m-1$ we have $\ceil{j/2}/m=1$ and so the a priori estimate in Theorem~\ref{thm:per} does not imply convergence to the last eigenfunction in this case. This is reasonable since both $\sin(2\pi m x)$ and $\cos(2\pi m x)$ (and any linear combination of them) are ``candidates'' for being the $2m$-th eigenfunction.
\end{itemize}
\end{example}
In the above example we observed that if ${\boldsymbol \tau}$ is a uniform knot sequence and if the dimension of $\mathcal{S}_{p,{\boldsymbol \tau},\mathrm{per}}$ is equal to $2m$, then the a priori estimate in Theorem~\ref{thm:per} only guarantees convergence to the first $2m-1$ periodic eigenfunctions in \eqref{pereig}. One can check that in this case the piecewise constant spline space $\mathcal{S}_{0,{\boldsymbol \tau},\mathrm{per}}$ is orthogonal to $\cos(2\pi m x)$, since $\cos(2\pi m x)$ integrates to $0$ on each knot interval. Using integration by parts one can then find that $\mathcal{S}_{1,{\boldsymbol \tau},\mathrm{per}}$ is orthogonal to $\sin(2\pi m x)$, and in general, that the even-degree spline spaces are orthogonal to $\cos(2\pi m x)$ and the odd-degree spline spaces are orthogonal to $\sin(2\pi m x)$. We therefore make the following conjecture.
\begin{conjecture}\label{conj:per}
Let ${\boldsymbol \tau}$ be the uniform knot sequence such that the dimension of $\mathcal{S}_{p,{\boldsymbol \tau},\mathrm{per}}$ is equal to $2m$.
For any $q\geq 0$ let $Q_p^q$ be the projection onto $\mathcal{S}_{p,{\boldsymbol \tau},\mathrm{per}}$ defined in \eqref{eq:Qprojper}. We then conjecture that
\begin{equation*}
\begin{aligned}
\|\sin(2\pi m\cdot)-Q^q_{2i}\sin(2\pi m\cdot)\| &\xrightarrow[i\to\infty]{} 0, &&p=2i,\\
\|\cos(2\pi m\cdot)-Q^q_{2i+1}\cos(2\pi m\cdot)\| &\xrightarrow[i\to\infty]{} 0, &&p=2i+1.
\end{aligned}
\end{equation*}
\end{conjecture}
In Section~\ref{sec:optS} we look at the Laplacian with different (non-periodic) boundary conditions and find $n$-dimensional spline spaces where a corresponding error estimate guarantees convergence, in $p$, to the $n$ first eigenfunctions for all $n$ (and not just for $n$ odd/even).
\begin{remark}\label{rem:outliers}
The periodic eigenvalue problem \eqref{eq:per} could also be discretized with a Galerkin method as in \eqref{eq:per-discr} using the larger spline space
\begin{equation}\label{eq:larger-per-space}
\{s\in \mathcal{S}_{p,{\boldsymbol \tau}}:\, \partial^\alpha s(0)=\partial^\alpha s(1), \, \alpha=0,\ldots,k\},
\end{equation}
for some $0\leq k\leq p-1$ and uniform knot sequence ${\boldsymbol \tau}=(\tau_0,\ldots,\tau_n)$.
With such a discretization, a very poor approximation of the largest $p-k-1$ eigenvalues is observed numerically for all tested $n$; see Figure~\ref{fig:outliers} for some examples. These are usually referred to as \emph{outlier} modes \cite{Hughes:2014}.
Note that the number of outlier modes depends on the kind of boundary conditions of the eigenvalue problem to be solved (see \cite{Hughes:2014} for homogeneous Dirichlet boundary conditions).
Since $\mathcal{S}_{p,{\boldsymbol \tau},\mathrm{per}}$ is a subspace of \eqref{eq:larger-per-space}, it follows from Remark~\ref{rem:perGal} and Example~\ref{ex:per} that we have convergence of the Galerkin eigenvalue approximation in the space \eqref{eq:larger-per-space} for the first $n$ or $n-1$ eigenvalues according to the parity of $n$. If Conjecture~\ref{conj:per} is true, this number can be raised to $n$ in all cases.
This is in agreement with the number of outlier modes observed numerically, because the dimension of the space \eqref{eq:larger-per-space} is $n+p-k-1$.
\end{remark}
\begin{figure}
\centering
\subfigure[$p=3$, $k=0$]{\includegraphics[trim=15 5 15 0,width=0.45\textwidth]{fig_outliers_n50_p3_kper1}}
\subfigure[$p=6$, $k=0$]{\includegraphics[trim=15 5 15 0,width=0.45\textwidth]{fig_outliers_n50_p6_kper1}} \\
\subfigure[$p=3$, $k=1$]{\includegraphics[trim=15 5 15 0,width=0.45\textwidth]{fig_outliers_n50_p3_kper2}}
\subfigure[$p=6$, $k=4$]{\includegraphics[trim=15 5 15 0,width=0.45\textwidth]{fig_outliers_n50_p6_kper5}} \\
\subfigure[$p=3$, $k=2$]{\includegraphics[trim=15 5 15 0,width=0.45\textwidth]{fig_outliers_n50_p3_kper3}}
\subfigure[$p=6$, $k=5$]{\includegraphics[trim=15 5 15 0,width=0.45\textwidth]{fig_outliers_n50_p6_kper6}}
\caption{Discretization of the periodic eigenvalue problem \eqref{eq:per} in the space \eqref{eq:larger-per-space} with fixed $n=50$ and varying $p\in\{3,6\}$, $k\in\{0,p-2,p-1\}$: exact eigenvalues (in {blue $\circ$}) and approximated ones (in {red $*$}).
One clearly observes $p-k-1$ outlier modes. Note that in the two top pictures the last $p-2$ approximated eigenvalues are not shown because their value exceeds the adopted scale.
} \label{fig:outliers}
\end{figure}
\begin{remark}\label{rem:branches}
Let us now consider the spline space $\mathcal{S}^k_{p,{\boldsymbol \tau}}$ for $0\leq k\leq p-1$ and uniform knot sequence ${\boldsymbol \tau}=(\tau_0,\ldots,\tau_n)$. One can then discretize problem \eqref{eq:per} with a Galerkin method using the $C^k$ periodic spline space
\begin{equation}\label{eq:even-larger-per-space}
\{s\in \mathcal{S}^k_{p,{\boldsymbol \tau}}:\, \partial^\alpha s(0)=\partial^\alpha s(1), \, \alpha=0,\ldots,k\}.
\end{equation}
The dimension of this space is $n(p-k)$.
Hence, by increasing the degree of this space one substantially increases its dimension (when $k$ is small). However, numerical evidence shows that the spectral discretization of the Laplacian by splines of degree~$p$, smoothness $C^k$, and uniform grid spacing, possesses $p-k$ branches of equal length and only a single branch (the so-called \emph{acoustical} branch \cite{Hughes:2014,Garoni:symbol}) converges to the true spectrum; see Figure~\ref{fig:branches} for some examples.
Since our results can only guarantee convergence to eigenvalues in this acoustical branch, they are in complete agreement with the numerical evidence.
\end{remark}
\begin{figure}[t!]
\centering
\subfigure[$p=3$]{\includegraphics[trim=15 5 15 0,width=0.49\textwidth]{fig_branches_n100_p3_kper1-2-p}}
\subfigure[$p=4$]{\includegraphics[trim=15 5 15 0,width=0.49\textwidth]{fig_branches_n100_p4_kper1-2-p}} \\
\subfigure[$p=3$, magnified view]{\includegraphics[trim=15 5 15 0,width=0.49\textwidth]{fig_branches_n100_p3_kper1-2-p-zoom}}
\subfigure[$p=4$, magnified view]{\includegraphics[trim=15 5 15 0,width=0.49\textwidth]{fig_branches_n100_p4_kper1-2-p-zoom}}
\caption{Discretization of the periodic eigenvalue problem \eqref{eq:per} in the space \eqref{eq:even-larger-per-space} with fixed $n=100$ and varying $k$ ($k=0$ in black, {$k=1$ in red}, {$k=p-1$ in blue}): relative eigenvalue approximation error $\nu^h_j/\nu_j-1$, $j=2,\ldots,n(p-k)$, where each $\nu^h_j$ denotes the approximated value of the $j$-th eigenvalue $\nu_j$. All cases are plotted in the interval $[0,1]$, after a proper rescaling, as is common in the literature. One clearly observes $p-k$ spectral branches of equal length.
} \label{fig:branches}
\end{figure}
\section{$n$-Widths and kernels}\label{sec:nw}
Our next goal is to discuss the sharpness of our error estimates. To do this, we first introduce the theory of $n$-widths \cite{Kolmogorov:36,Pinkus:85}.
As before, we denote by $P$ the $L^2$-projection onto a finite dimensional subspace $\mathcal{X}$ of $L^2$.
For a subset $A$ of $L^2$, let
$$ E(A, \mathcal{X}) := \sup_{u \in A} \|u-Pu\| $$
be the distance to $A$ from $\mathcal{X}$ relative to the $L^2$-norm.
Then, the Kolmogorov $L^2$ $n$-width
of $A$ is defined by
$$ d_n(A) := \inf_{\substack{\mathcal{X}\subset L^2\\ \dim \mathcal{X}=n}} E(A, \mathcal{X}). $$
If $\mathcal{X}$ has dimension at most $n$ and satisfies
\begin{equation*}
d_n(A) = E(A, \mathcal{X}),
\end{equation*}
then we call $\mathcal{X}$ an \emph{optimal} subspace for $d_n(A)$.
\begin{example}
Let $A=\{u\in H^r : \|\partial^r u\|\leq 1\}$.
Then, by considering $u/\|\partial^r u\|$ for functions $u\in H^r$, we have, for any subspace $\mathcal{X}$ of $L^2$, the sharp estimate
\begin{equation}\label{ineq:sharp}
\|u-Pu\|\leq E(A, \mathcal{X})\|\partial^r u\|.
\end{equation}
Here $E(A, \mathcal{X})$ is the least possible constant for the subspace $\mathcal{X}$.
Moreover, if $\mathcal{X}$ is optimal for the $n$-width of $A$, then
\begin{equation}\label{ineq:optimal}
\|u-Pu\|\leq d_n(A)\|\partial^r u\|,
\end{equation}
and $d_n(A)$ is the least possible constant over all $n$-dimensional subspaces $\mathcal{X}$.
\end{example}
Now, let $B$ be the unit ball in $L^2$, then in \cite{Pinkus:85,Melkman:78,Floater:2017,Floater:2018}, subsets $A$ of the form
\begin{equation*}
A = K(B) = \{ K f : \|f\| \le 1 \},
\end{equation*}
for various different integral operators $K$, are considered.
Observe that for such an $A$ we have the equality
\begin{equation}\label{eq:E1}
E(A,\mathcal{X}) = \sup_{\|f\| \le 1} \| (I-P) K f \|= \|(I-P)K\|,
\end{equation}
where $\|(I-P)K\|$ is the $L^2$-operator norm of $(I-P)K$ which was used in Section~\ref{sec:error}.
The operator $K^\ast K$, being self-adjoint and positive semi-definite,
has eigenvalues
\begin{equation}\label{eq:lambda}
\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_j \ge \cdots \ge 0,
\end{equation}
and corresponding orthogonal eigenfunctions
\begin{equation}\label{eq:phi}
K^\ast K \phi_j = \lambda_j \phi_j, \quad j=1,2,\ldots.
\end{equation}
If we further define $\psi_j := K \phi_j$, then
\begin{equation}\label{eq:psi}
K K^\ast \psi_j = \lambda_j \psi_j, \quad j=1,2,\ldots,
\end{equation}
and the $\psi_j$ are also orthogonal.
The square roots of the $\lambda_j$ are known as
the singular values of $K$ (or $K^*$).
\begin{example}
Let $K$ be the integral operator studied in Section~\ref{sec:per}. Recall that it satisfies $K^*=-K$, and so $KK^*=K^*K=-K^2$. Since the kernel of $K$ is the Green's function to problem \eqref{bvp:per}, one can show that the kernel of $-K^2$ is the Green's function to problem \eqref{eq:per}. Thus the $\lambda_j$ in \eqref{eq:lambda} satisfy $\lambda_j=1/\nu_j$ for $\nu_j$ as in \eqref{eq:eigvper}, $j=1,2,\ldots$. Moreover, the eigenfunctions $\psi_j$ in \eqref{eq:psi} are (up to a constant) equal to the $\psi_j$ in \eqref{eq:eigper} for $j=1,2,\ldots$. Note that $\nu_0=\lambda_\infty=0$.
\end{example}
From \eqref{eq:E1} we find that
\begin{align*}
E(A,\mathcal{X}) &= \|(I-P)K\| = \|K^*(I-P)\| \\
&= \sup_{\|f\| \le 1} (K^*(I-P)f,K^*(I-P)f)^{1/2}\\
&= \sup_{\substack{\|f\| \le 1 \\ f \perp \mathcal{X}}}(K^*f,K^*f)^{1/2} = \sup_{\substack{\|f\| \le 1 \\ f \perp \mathcal{X}}}(KK^*f,f)^{1/2},
\end{align*}
and by taking the infimum of the latter expression over all
$n$-dimensional subspaces $\mathcal{X}$ of $L^2$ we arrive at the following result (see p.~6 in \cite{Pinkus:85}).
\begin{theorem}\label{thm:pinkus}
$d_n(A) =\lambda_{n+1}^{1/2}$, and the space $[\psi_1,\ldots,\psi_n]$
is optimal for $d_n(A)$.
\end{theorem}
\begin{example}\label{ex:per_nwidths}
Similar to \cite{Floater:per}, we define $A^r_{\mathrm{per}} := \{u\in H^r_\mathrm{per}: \|\partial^r u\|\le 1\}$. It can be written as
$A^r_{\mathrm{per}} = \mathcal{P}_0\oplus K^r(B)$, where $K$ is the integral operator in Section~\ref{sec:per}.
Using Theorem~\ref{thm:pinkus} together with \eqref{eq:eigvper} and \eqref{eq:eigper}, we see that the $n$-widths in the periodic case, on the interval $(0,1)$, are given by
$$d_{2m-1}(A^r_\mathrm{per})=d_{2m}(A^r_\mathrm{per})=\Big(\frac{1}{2\pi m}\Big)^r,$$
and that the space of eigenfunctions in \eqref{pereig} is optimal for $d_{2m-1}(A^r_\mathrm{per})$. Moreover, since the $(2m-1)$-width is equal to the $2m$-width, the $(2m-1)$-dimensional space in \eqref{pereig} is also optimal for $d_{2m}(A^r_\mathrm{per})$.
If we let ${\boldsymbol \tau}$ be the uniform knot sequence with $h=1/(2m)$, then $\dim \mathcal{S}_{p,{\boldsymbol \tau},\mathrm{per}}=2m$, and Theorem~\ref{thm:per} gives an alternative proof of Theorem~2 in \cite{Floater:per}, that $\mathcal{S}_{p,{\boldsymbol \tau},\mathrm{per}}$ is in this case optimal for $d_{2m}(A^r_\mathrm{per})$ for all $p\geq r-1$.
It was shown in \cite{Floater:per} that there is no knot sequence ${\boldsymbol \tau}$ such that $\mathcal{S}_{p,{\boldsymbol \tau},\mathrm{per}}$ is optimal for $d_{2m-1}(A^r_\mathrm{per})$. However,
if $h=1/(2m-1)$ then we obtain from Theorem~\ref{thm:per} that
\begin{equation*}
\|u-P_pu\|\le \Big(\frac{1}{\pi(2m-1)}\Big)^r\|\partial^r u\|,
\end{equation*}
and so it follows that periodic splines of dimension $2m-1$ on uniform knot sequences are asymptotically optimal as $m$ increases, i.e.,
$$\frac{1/(\pi(2m-1))^r}{d_{2m-1}(A^r_{\mathrm{per}})}\xrightarrow[m\to\infty]{} 1.$$
\end{example}
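The asymptotic optimality at the end of this example reduces to the elementary limit $(2m/(2m-1))^r\to 1$. The following small Python sketch (with arbitrary illustrative values of $r$ and $m$) evaluates the ratio of the uniform-knot error bound to the $(2m-1)$-width:

```python
import math

# Ratio of the uniform-knot error bound (1/(pi*(2m-1)))^r
# to the (2m-1)-width (1/(2*pi*m))^r; it simplifies to (2m/(2m-1))^r.
def ratio(m, r):
    bound = (1.0 / (math.pi * (2 * m - 1))) ** r
    width = (1.0 / (2.0 * math.pi * m)) ** r
    return bound / width

for r in (1, 2, 3):
    vals = [ratio(m, r) for m in (10, 100, 1000)]
    # The ratio decreases towards 1 as m grows, for every fixed r.
    assert vals[0] > vals[1] > vals[2] > 1.0
    print(r, vals)
```

The ratio decreases monotonically to $1$ for every fixed $r$, in agreement with the displayed limit.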
\section{Sharpness of error bounds}\label{sec:sharp}
In this section we discuss the sharpness of the error estimates obtained in this paper. To simplify the exposition, we only consider the estimates for the $L^2$-projection. We call an error estimate \emph{sharp} if it is of the form \eqref{ineq:sharp} for some approximation $Pu$. If, additionally, the considered subspace is optimal, and the error estimate achieves the $n$-width (i.e., is of the form \eqref{ineq:optimal}), we refer to the error estimate as \emph{optimal}.
As we discussed in the previous section, in the periodic case the error estimate in Theorem~\ref{thm:per} achieves the $n$-width for all the optimal even-dimensional periodic spline spaces, and so the estimate is both sharp and optimal for these spaces.
For non-optimal, periodic spline spaces $\mathcal{S}_{p,{\boldsymbol \tau},\mathrm{per}}$ it is unknown whether the estimate in Theorem~\ref{thm:per} is sharp. However, the $n$-widths in Example~\ref{ex:per_nwidths} provide a strict lower bound on $E(A^r_{\mathrm{per}},\mathcal{S}_{p,{\boldsymbol \tau},\mathrm{per}})$, for $n=\dim \mathcal{S}_{p,{\boldsymbol \tau},\mathrm{per}}$, that is ``very close'' to the upper bound of Theorem~\ref{thm:per}. Thus, the error estimate in Theorem~\ref{thm:per} is either sharp or very close to sharp in all cases.
The error estimate in Theorem~\ref{thm:one}, on the other hand, is optimal only in the simplest case given by the Poincar\'e inequality \eqref{ineq:deg0} for $r=1$, $p=0$ and with ${\boldsymbol \tau}$ being the uniform knot sequence.
For $r=1$ and $p\geq 1$ one can use a theorem of Karlovitz \cite{Karlovitz:76}, in a similar way to Section~6 of \cite{Floater:per}, to show that the spline spaces $\mathcal{S}_{p,{\boldsymbol \tau}}$ are not optimal for the function class $\{u\in H^1: \|u'\|\le 1\}$ for any ${\boldsymbol \tau}$. For $p\geq 1$ the optimal spline spaces in this case are defined by choosing a degree-dependent knot sequence ${\boldsymbol \tau}$ and imposing certain boundary conditions on the space $\mathcal{S}_{p,{\boldsymbol \tau}}$ \cite{Floater:2017,Floater:2018}. These optimal spaces will be discussed further in Section~\ref{sec:optS}.
Turning back to the spaces $\mathcal{S}_{p,{\boldsymbol \tau}}$, one can use the $n$-width to find that for a fixed degree $p$, and for uniform knot sequences ${\boldsymbol \tau}$, the error estimate in Theorem~\ref{thm:one} is asymptotically optimal as $n=\dim \mathcal{S}_{p,{\boldsymbol \tau}}$ increases. In other words, we have for $A=\{u\in H^1: \|u'\|\le 1\}$, that
$(h/\pi)/d_n(A)\to 1$ as $n\to\infty$,
since it is known that $d_n(A)=(b-a)/(n\pi)$ on the interval $(a,b)$ \cite{Kolmogorov:36,Floater:2018}.
For $r>1$ it was shown in \cite{Melkman:78} that there exists a non-uniform, $r$-dependent, knot sequence ${\boldsymbol \xi}$ such that $\mathcal{S}_{p,{\boldsymbol \xi}}$ is optimal for $\{u\in H^r: \|\partial^r u\|\le 1\}$ for $p=r-1$. Since the error estimate in Theorem~\ref{thm:one} is minimized for uniform knot sequences, it cannot be sharp for this optimal spline space. In other words, for each degree $p>0$ there exist an $r$ ($r=p+1$) and a knot sequence ${\boldsymbol \tau}$ (${\boldsymbol \tau}={\boldsymbol \xi}$) such that the constant in \eqref{eq:Sande} can be improved. It is then natural to ask whether it can be improved in all cases. Since the choice of ${\boldsymbol \xi}$ is $r$-dependent, one could expect that picking the knot sequence that gives the best possible approximation with respect to the $r$-th semi-norm could lead to worse approximation with respect to semi-norms of order different from $r$.
We therefore conjecture that for any given degree $p$ there exists a knot sequence ${\boldsymbol \tau}$ such that the factor $h/\pi$ in \eqref{eq:Sande} cannot be improved for all $r=1,\ldots,p+1$. In other words, we conjecture that for an arbitrary knot sequence~${\boldsymbol \tau}$, Theorem~\ref{thm:one}, as stated, cannot be improved with a better approximation constant.
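To illustrate the constant $h/\pi$ in the simplest case $r=1$, $p=0$, of Theorem~\ref{thm:one}, the following Python sketch computes the $L^2$-projection onto piecewise constants (the cell averages) for a random non-uniform knot sequence and checks the Poincar\'e-type bound by composite quadrature; the test function, knots, and quadrature resolution are all illustrative assumptions:

```python
import math
import random

random.seed(0)

# A random (non-uniform) knot sequence on (0, 1).
knots = sorted([0.0, 1.0] + [random.random() for _ in range(8)])
h_max = max(b - a for a, b in zip(knots, knots[1:]))

u = math.exp          # test function u(x) = e^x (any H^1 function would do)
du = math.exp         # its derivative

def cell_quad(f, a, b, n=400):
    """Composite midpoint rule for the integral of f over (a, b)."""
    w = (b - a) / n
    return w * sum(f(a + (k + 0.5) * w) for k in range(n))

# Squared L^2 error of the projection onto piecewise constants:
# on each cell the L^2-projection is the cell average.
err2 = 0.0
for a, b in zip(knots, knots[1:]):
    avg = cell_quad(u, a, b) / (b - a)
    err2 += cell_quad(lambda x: (u(x) - avg) ** 2, a, b)

norm_du = math.sqrt(cell_quad(lambda x: du(x) ** 2, 0.0, 1.0))
bound = (h_max / math.pi) * norm_du
assert math.sqrt(err2) <= bound * (1 + 1e-6)
print(math.sqrt(err2), bound)
```

For a smooth non-extremal function the error sits strictly below the bound, consistent with the discussion above on when the constant can or cannot be improved.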
\section{General convergence to eigenfunctions}\label{sec:eig}
Motivated by the convergence results for the eigenfunctions in Corollary~\ref{cor:per},
in this section we will use results of \cite{Floater:2018} to provide a general framework for obtaining convergence to the eigenfunctions of several differential operators. We then apply this framework to the Laplacian with different boundary conditions. This will lead us to different optimal spline spaces.
\subsection{General framework}
The starting point of our analysis is to look at the function classes $A^r$ and $A^r_*$ studied in \cite{Floater:2018}.
For an arbitrary integral operator $K$, they can be defined as $A^1:=A:=K(B)$, $A^1_*:=A_*:=K^*(B)$ and
\begin{equation}\label{eq:Ar}
A^r:=K(A^{r-1}_*),\quad A^r_*:=K^*(A^{r-1}),
\end{equation}
for $r\geq 2$, where we recall that $B$ is the unit ball in $L^2$. As stated in \cite{Floater:2018}, it follows from Theorem \ref{thm:pinkus} that
\begin{equation*}
d_n(A^r_*)=d_n(A^r)=d_n(A)^r,
\end{equation*}
and the space $[\psi_1,\ldots,\psi_n]$ is optimal for $A^r$, and the space $[\phi_1,\ldots,\phi_n]$ is optimal for $A^r_*$, for all $r\geq 1$. Moreover, let $\mathcal{X}_0$ and $\mathcal{Y}_0$ be any finite dimensional subspaces of $L^2$ and define the subspaces $\mathcal{X}_p$ and $\mathcal{Y}_p$ in an analogous way to \eqref{eq:Ar}, by
\begin{equation*}
\mathcal{X}_p:=K(\mathcal{Y}_{p-1}), \quad \mathcal{Y}_p:=K^*(\mathcal{X}_{p-1}),
\end{equation*}
for $p\geq 1$.
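The relation $d_n(A^r_*)=d_n(A^r)=d_n(A)^r$ above has a transparent finite-dimensional analogue: for a matrix $K$, the $n$-width of the image of the unit ball under $K$ is the $(n+1)$-th singular value, and the alternating compositions in \eqref{eq:Ar} correspond to $(KK^T)^iK$, whose singular values are the $(2i+1)$-th powers of those of $K$. A hedged numpy sketch (random matrix, arbitrary sizes):

```python
import numpy as np

rng = np.random.default_rng(0)
K = rng.standard_normal((8, 8))

n = 3       # "n-width" index: the (n+1)-th singular value
i = 2       # so (K K^T)^i K plays the role of A^r with r = 2i + 1

M = np.linalg.matrix_power(K @ K.T, i) @ K
s_M = np.linalg.svd(M, compute_uv=False)
s_K = np.linalg.svd(K, compute_uv=False)

# Singular values of (K K^T)^i K are the (2i+1)-th powers of those of K,
# as seen from the SVD K = U S V^T, which gives (K K^T)^i K = U S^{2i+1} V^T.
assert np.allclose(s_M, s_K ** (2 * i + 1))
print(s_M[n], s_K[n] ** (2 * i + 1))
```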
It was then shown in Theorem~4 of \cite{Floater:2018} that if $\mathcal{X}_0$ is optimal for the $n$-width of $A$ and $\mathcal{Y}_0$ is optimal for the $n$-width of $A_*$ then, for $r\geq 1$,
\begin{itemize}
\item the subspaces $\mathcal{X}_p$ are optimal for the $n$-width of $A^{r}$, and
\item the subspaces $\mathcal{Y}_p$ are optimal for the $n$-width of $A^{r}_*$,
\end{itemize}
for all $p\geq r-1$. Using this we can prove the following.
\begin{theorem}\label{thm:eig}
Suppose $\mathcal{X}_0$ is optimal for the $n$-width of $A$ and $\mathcal{Y}_0$ is optimal for the $n$-width of $A_*$. Let $P_p$ be the $L^2$-projection onto $\mathcal{X}_p$ and $\Pi_p$ be the $L^2$-projection onto $\mathcal{Y}_p$. Then, if there exists an index $l\in\{1,2,\ldots,n\}$ such that $\lambda_l>\lambda_{n+1}$ in \eqref{eq:lambda}, we have
\begin{align*}
\frac{\|(I-P_p)\psi_j\|}{\|\psi_j\|} \xrightarrow[p\to\infty]{} 0,\quad
\frac{\|(I-\Pi_p)\phi_j\|}{\|\phi_j\|} \xrightarrow[p\to\infty]{} 0,
\end{align*}
for all $j=1,2,\ldots,l$.
\end{theorem}
\begin{proof}
The two cases are analogous and so we only look at the $\mathcal{X}_p$. It follows from Theorem~4 in \cite{Floater:2018} that $\mathcal{X}_p$ is optimal for $A^{p+1}$, and so by Theorem \ref{thm:pinkus},
\begin{equation*}
E(A^{p+1},\mathcal{X}_p)=d_n(A^{p+1}) =\lambda_{n+1}^{(p+1)/2}.
\end{equation*}
Equivalently, we have for all $f\in L^2$,
\begin{equation}\label{ineq:eig}
\begin{aligned}
\|(I-P_p)(KK^*)^{i}f\| &\leq \lambda_{n+1}^i\|f\|, &&p=2i-1,\\
\|(I-P_p)(KK^*)^{i}Kf\| &\leq \lambda_{n+1}^{i+1/2}\|f\|, &&p=2i.\\
\end{aligned}
\end{equation}
First, consider $p=2i$. Let $f=\phi_j$ for some $j=1,\ldots,l$. Then,
\begin{equation*}
\|(I-P_p)(KK^*)^{i}K\phi_j\| = \|(I-P_p)\lambda_j^{i}\psi_j\| = \lambda_j^{i}\|(I-P_p)\psi_j\|,
\end{equation*}
and from \eqref{ineq:eig},
\begin{equation*}
\|(I-P_p)\psi_j\|\leq \Big(\frac{\lambda_{n+1}}{\lambda_j}\Big)^{i}(\lambda_{n+1})^{1/2}\|\phi_j\|.
\end{equation*}
Now, using $\psi_j=K\phi_j$, we have $\|\psi_j\|=(K^*K\phi_j,\phi_j)^{1/2}=\lambda_j^{1/2}\|\phi_j\|$, and
\begin{equation*}
\frac{\|(I-P_p)\psi_j\|}{\|\psi_j\|}\leq \Big(\frac{\lambda_{n+1}}{\lambda_j}\Big)^{i+1/2}\xrightarrow[p\to\infty]{} 0,
\end{equation*}
since $\lambda_j\geq \lambda_l>\lambda_{n+1}$, and $p=2i$.
Next, consider $p=2i-1$. In this case we let $f=\psi_j$ for some $j=1,\ldots,l$. Then, by a similar argument,
\begin{equation*}
\frac{\|(I-P_p)\psi_j\|}{\|\psi_j\|}\leq \Big(\frac{\lambda_{n+1}}{\lambda_j}\Big)^i \xrightarrow[p\to\infty]{} 0.
\end{equation*}
\end{proof}
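The inequalities \eqref{ineq:eig} used in the proof also admit a finite-dimensional sanity check: if $P$ is the orthogonal projection onto the span of the first $n$ left singular vectors of a matrix $K$, then $\|(I-P)(KK^T)^i\|=\lambda_{n+1}^i$, where $\lambda_{n+1}$ is the $(n+1)$-th eigenvalue of $KK^T$. A numpy sketch (random data, arbitrary $n$ and $i$):

```python
import numpy as np

rng = np.random.default_rng(1)
K = rng.standard_normal((10, 10))
U, s, Vt = np.linalg.svd(K)

n, i = 4, 3
P = U[:, :n] @ U[:, :n].T                 # projection onto the optimal subspace
M = np.linalg.matrix_power(K @ K.T, i)    # analogue of (K K^*)^i

lam = s ** 2                              # eigenvalues of K K^T, descending
op_norm = np.linalg.norm((np.eye(10) - P) @ M, 2)   # spectral norm
assert np.isclose(op_norm, lam[n] ** i)
print(op_norm, lam[n] ** i)
```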
\begin{remark}\label{rem:ritz}
The above result is only proved for $L^2$-projections and so it is not sufficient to conclude convergence of eigenfunctions and eigenvalues in Galerkin eigenvalue problems. However, a more careful analysis of the optimality results in \cite{Floater:2018}, similar to the arguments of Section~\ref{sec:error}, could be used to show that Theorem~\ref{thm:eig} is true also when $P_p$ and $\Pi_p$ are replaced by certain Ritz-type projections. This is an interesting topic of further research.
\end{remark}
\subsection{Optimal spline spaces}\label{sec:optS}
Again, we consider the interval $(a,b)=(0,1)$, and
we define
the following subspaces of $\mathcal{S}_{p,{\boldsymbol \tau}}$ with certain derivatives vanishing at the boundary:
\begin{equation*}
\begin{aligned}
\mathcal{S}_{p,{\boldsymbol \tau},0} &:= \{s\in \mathcal{S}_{p,{\boldsymbol \tau}} :\, \partial^\alpha s(0)=\partial^\alpha s(1)=0,\ \ 0\leq \alpha\leq p,\ \ \alpha \text{ even}\}, \\
\mathcal{S}_{p,{\boldsymbol \tau},1} &:= \{s\in \mathcal{S}_{p,{\boldsymbol \tau}} :\, \partial^\alpha s(0)=\partial^\alpha s(1)=0,\ \ 0\leq \alpha\leq p,\ \ \alpha \text{ odd}\}, \\
\mathcal{S}_{p,{\boldsymbol \tau},2}&:= \{s\in \mathcal{S}_{p,{\boldsymbol \tau}} :\, \partial^{\alpha_0} s(0)=\partial^{\alpha_1} s(1)=0,\ \ 0\leq \alpha_0,\alpha_1\leq p, \ \ \alpha_0 \text{ even}, \ \ \alpha_1 \text{ odd}\}.
\end{aligned}
\end{equation*}
For the special (degree-dependent) knot sequences ${\boldsymbol \tau}_{p,i}$, $i=0,1,2$, where
\begin{equation*}
\begin{aligned}
{\boldsymbol \tau}_{p,0} &:= \begin{cases}
(0,\frac{1}{n+1},\frac{2}{n+1},\ldots,\frac{n}{n+1},1),\quad\ &p \text{ odd},\\
(0,\frac{1/2}{n+1},\frac{3/2}{n+1},\ldots,\frac{n+1/2}{n+1},1),\quad\ &p \text{ even},
\end{cases}\\
{\boldsymbol \tau}_{p,1} &:= \begin{cases}
(0,\frac{1/2}{n},\frac{3/2}{n},\ldots,\frac{n-1/2}{n},1),\qquad &p \text{ odd},\\
(0,\frac{1}{n},\frac{2}{n},\ldots,\frac{n-1}{n},1),\qquad &p \text{ even},
\end{cases}\\
{\boldsymbol \tau}_{p,2} &:= \begin{cases}
(0,\frac{1}{2n+1},\frac{3}{2n+1},\ldots,\frac{2n-1}{2n+1},1),\quad &p \text{ even},\\
(0,\frac{2}{2n+1},\frac{4}{2n+1},\ldots,\frac{2n}{2n+1},1),\quad &p \text{ odd},
\end{cases}
\end{aligned}
\end{equation*}
it was shown in \cite{Floater:2018} that the spline spaces $\mathcal{S}_{p,i}:=\mathcal{S}_{p,{\boldsymbol \tau}_{p,i},i}$, $i=0,1,2$ are optimal, respectively, for the function classes
\begin{equation*}
\begin{aligned}
A^r_0&:=\{u\in H^r:\, \|\partial^r u\|\leq 1,\ \ \partial^\alpha u(0)=\partial^\alpha u(1)=0,\ \ 0\leq \alpha<r,\ \ \alpha\text{ even}\},
\\
A^r_1&:=\{u\in H^r:\, \|\partial^r u\|\leq 1,\ \ \partial^\alpha u(0)=\partial^\alpha u(1)=0,\ \ 0\leq \alpha<r,\ \ \alpha\text{ odd}\},
\\
A^r_2&:=\{u\in H^r:\, \|\partial^r u\|\leq 1,\ \ \partial^{\alpha_0} u(0)=\partial^{\alpha_1} u(1)=0,\ \ 0\leq \alpha_0,\alpha_1<r,\\
&\hspace*{8cm} \alpha_0 \text{ even}, \ \ \alpha_1 \text{ odd}\},
\end{aligned}
\end{equation*}
for all $p\geq r-1$.
It was further shown that the function classes $A^r_0$ and $A^r_2$ are examples of the function classes $A^r_*$ and $A^r$ in \eqref{eq:Ar}, while $A^r_1$ is of the form $\mathcal{P}_0\oplus A^r$.
The optimal $n$-dimensional space of eigenfunctions for $A^r_i$, $i=0,1,2$, consists of the first $n$ eigenfunctions of the Laplacian satisfying the following zero boundary conditions, respectively,
\begin{equation*}
\begin{gathered}
u(0)=u(1)=0, \\
u'(0)=u'(1)=0,\\
u(0)=u'(1)=0;
\end{gathered}
\end{equation*}
in other words, the functions
\begin{equation*}
\begin{gathered}
\{\sin(\pi x), \sin(2\pi x), \ldots,\sin(n\pi x)\},\\
\{1,\cos(\pi x), \ldots,\cos((n-1)\pi x)\},\\
\{\sin((1/2)\pi x), \sin((3/2)\pi x), \ldots, \sin((n-1/2)\pi x)\}.
\end{gathered}
\end{equation*}
Since the eigenvalues are in these cases strictly decreasing, it then follows from Theorem~\ref{thm:eig}
that the $n$-dimensional spline spaces $\mathcal{S}_{p,i}$ converge, respectively, to the first $n$ sines, cosines, and shifted sines.
We make this statement more precise in the next corollary.
\begin{corollary}\label{cor:eig}
If $P_{p,i}$ denotes the $L^2$-projection onto the $n$-dimensional spline space $\mathcal{S}_{p,i}$, then
\begin{align*}
\|(I-P_{p,0})\sin(j\pi\cdot)\| &\xrightarrow[p\to\infty]{} 0,\\
\|(I-P_{p,1})\cos((j-1)\pi\cdot)\| &\xrightarrow[p\to\infty]{} 0,\\
\|(I-P_{p,2})\sin((j-1/2)\pi\cdot)\| &\xrightarrow[p\to\infty]{} 0,
\end{align*}
for $j=1,\ldots,n$. Here $\cos(0)$ refers to the constant function $1$.
\end{corollary}
\begin{proof}
As shown in \cite{Floater:2018}, the optimal spline spaces $\mathcal{S}_{p,i}$, $i=0,1,2$, are examples of the spaces $\mathcal{X}_p$ and $\mathcal{Y}_p$ for different choices of $K$. Since in all cases we have $\lambda_n>\lambda_{n+1}$, Theorem~\ref{thm:eig} concludes the proof. Note that $\mathcal{S}_{p,1}$ is in fact of the form $\mathcal{P}_0\oplus \mathcal{X}_{p}$, and so the constant functions are in the space $\mathcal{S}_{p,1}$ for all~$p$.
\end{proof}
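As a numerical cross-check of the boundary conditions and eigenfunctions listed above, one can discretize $-u''=\lambda u$ on $(0,1)$ with the mixed conditions $u(0)=u'(1)=0$ by finite differences and compare the smallest eigenvalues with $((j-1/2)\pi)^2$. This is only an illustrative sketch; the grid size and tolerances are arbitrary:

```python
import numpy as np

M = 400                      # number of grid points x_k = k*h, k = 1..M
h = 1.0 / M

# Second-order finite differences for -u'' on (0,1) with u(0) = 0 (Dirichlet,
# handled by dropping x_0) and u'(1) = 0 (Neumann, via the mirror condition
# u_{M+1} = u_M, which turns the last diagonal entry into 1/h^2).
A = (np.diag(2.0 * np.ones(M)) - np.diag(np.ones(M - 1), 1)
     - np.diag(np.ones(M - 1), -1)) / h**2
A[-1, -1] = 1.0 / h**2

eigs = np.sort(np.linalg.eigvalsh(A))
exact = np.array([((j - 0.5) * np.pi) ** 2 for j in range(1, 4)])
# The smallest discrete eigenvalues approach ((j - 1/2)*pi)^2.
assert np.max(np.abs(eigs[:3] - exact) / exact) < 0.01
print(eigs[:3], exact)
```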
\begin{remark}
The above corollary can be generalized to the tensor-product case.
Specifically, one can use Theorem~8 in \cite{Bressan:preprint} to obtain an error estimate for $\mathcal{S}_{p,0}\otimes\mathcal{S}_{p,0}$ in an analogous way to Section~\ref{subsec:tens}. Then, similarly to the proof of Corollary~\ref{cor:per}, one can plug the eigenfunctions of the Laplacian with zero boundary conditions on the square $(0,1)^2$ into this error estimate; since these eigenfunctions are just tensor products of the above sines (the eigenfunctions of the 1D Laplacian with zero boundary conditions), this shows convergence of $\mathcal{S}_{p,0}\otimes\mathcal{S}_{p,0}$ to the first $n^2$ tensor-product eigenfunctions as $p\to\infty$.
Similar arguments apply to $\mathcal{S}_{p,1}\otimes\mathcal{S}_{p,1}$ and $\mathcal{S}_{p,2}\otimes\mathcal{S}_{p,2}$.
\end{remark}
\section{Error estimates for reduced spline spaces}\label{sec:reduced}
In this section we focus on error estimates for the subspaces $\mathcal{S}_{p,{\boldsymbol \tau},1}\subset \mathcal{S}_{p,{\boldsymbol \tau}}$ defined in Section~\ref{sec:optS} and for the following variations
\begin{equation*}
\widetilde{\mathcal{S}}_{p,{\boldsymbol \tau}} :=
\{s\in \mathcal{S}_{p,{\boldsymbol \tau}} :\, \partial^\alpha s(0)=\partial^\alpha s(1)=0,\ \ 0\leq \alpha< p,\ \ \alpha \text{ odd}\}.
\end{equation*}
For uniform knot sequences, the latter are the ``reduced spline spaces'' investigated in \cite{Takacs:2016} (see Definition 5.1 of \cite{Takacs:2016}).
For even degrees $p$, the spaces $\mathcal{S}_{p,{\boldsymbol \tau},1}$ are exactly $\widetilde{\mathcal{S}}_{p,{\boldsymbol \tau}}$.
If we further remove the last two boundary conditions $\partial^p s(0)=\partial^p s(1)=0$ in $\mathcal{S}_{p,{\boldsymbol \tau},1}$ for $p$ odd (and thus increase the dimension of $\mathcal{S}_{p,{\boldsymbol \tau},1}$ by two in this case), then we again obtain a space $\widetilde{\mathcal{S}}_{p,{\boldsymbol \tau}}$.
Using the Poincar\'e inequality \eqref{ineq:deg0} and Lemma~1 in \cite{Floater:2017} one can prove that the case $r=1$ of Theorem~\ref{thm:one} holds for these reduced spline spaces $\widetilde{\mathcal{S}}_{p,{\boldsymbol \tau}}$. However, if we do not remove the last two boundary conditions $\partial^p s(0)=\partial^p s(1)=0$ in $\mathcal{S}_{p,{\boldsymbol \tau},1}$ for $p$ odd, then the obtained error estimate would, for some knot sequences ${\boldsymbol \tau}$, be worse than Theorem~\ref{thm:one} by a factor of~$2$. This is the content of Theorems~\ref{thm:Sp1} and~\ref{thm:reduced}. We start by proving the following intermediate result.
\begin{lemma}\label{lem:Sp01}
For any knot sequence ${\boldsymbol \tau}$, let $\widehat h:=\max\{2h_0,h_1,h_2,\ldots, h_{N-1},2h_{N}\}$. If $P_0$ denotes the $L^2$-projection onto the spline space $\mathcal{S}_{0,{\boldsymbol \tau},0}$, then for any function $u\in H^1_0$ we have
\begin{equation*}
\|u-P_0u\|\leq \frac{\widehat h}{\pi}\|u'\|.
\end{equation*}
\end{lemma}
\begin{proof}
Recall that any element in $\mathcal{S}_{0,{\boldsymbol \tau},0}$ is identically zero on the first and last knot intervals and piecewise constant in the interior. The result then follows by the same argument as in \eqref{ineq:deg0proof}. We apply the Poincar\'e inequality in \eqref{ineq:Poinc} for the interior knot intervals. For the first and last knot intervals we apply the inequality
\begin{equation*}
\|u\|\leq \frac{2(b-a)}{\pi}\|u'\|,
\end{equation*}
that holds for all $u\in H^1$ on an interval $(a,b)$ with either $u(a)=0$ or $u(b)=0$ (see, e.g., the case $n=0$ and $i=2$ of Theorem~1 in \cite{Floater:2018}).
\end{proof}
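The constant $2(b-a)/\pi$ in the auxiliary inequality is attained on $(0,1)$ by $u(x)=\sin(\pi x/2)$, which vanishes at the left endpoint only. A quick quadrature check in Python (the quadrature resolution is an arbitrary choice):

```python
import math

def quad(f, a, b, n=50000):
    """Composite midpoint rule for the integral of f over (a, b)."""
    w = (b - a) / n
    return w * sum(f(a + (k + 0.5) * w) for k in range(n))

u = lambda x: math.sin(0.5 * math.pi * x)                 # u(0) = 0
du = lambda x: 0.5 * math.pi * math.cos(0.5 * math.pi * x)

norm_u = math.sqrt(quad(lambda x: u(x) ** 2, 0.0, 1.0))
norm_du = math.sqrt(quad(lambda x: du(x) ** 2, 0.0, 1.0))

# Equality case of ||u|| <= (2(b-a)/pi) ||u'|| on (a, b) = (0, 1).
assert abs(norm_u / norm_du - 2.0 / math.pi) < 1e-6
print(norm_u / norm_du, 2.0 / math.pi)
```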
In the proof of the next theorem we apply Lemma \ref{lem:1simpl} using an integral operator that integrates the spline space $\mathcal{S}_{p,{\boldsymbol \tau},1}$ twice. As a consequence, the ${\bar{p}}$ in Lemma \ref{lem:1simpl} will, in this case, not correspond to the degree $p$ of $\mathcal{S}_{p,{\boldsymbol \tau},1}$.
\begin{theorem}\label{thm:Sp1}
For any knot sequence ${\boldsymbol \tau}$, let $h$ denote its maximum knot distance and let $\widehat h:=\max\{2h_0,h_1,h_2,\ldots, h_{N-1},2h_{N}\}$. If $P_{p}$ denotes the $L^2$-projection onto the spline space $\mathcal{S}_{p,{\boldsymbol \tau},1}$, then for any function $u\in H^1$ we have
\begin{equation*}
\begin{aligned}
\|u-P_{p}u\|&\leq \frac{h}{\pi}\|u'\|, &&p \text{ even},\\
\|u-P_{p}u\|&\leq \frac{\widehat h}{\pi}\|u'\|, &&p \text{ odd}.
\end{aligned}
\end{equation*}
\end{theorem}
\begin{proof}
Let $\Pi$ denote the $L^2$-projection onto $\mathcal{P}_0$ and define the integral operator $K_1:=(I-\Pi)K$, where $K$ is the integral operator in \eqref{eq:Kint}. From \eqref{eq:Hr} it follows that $H^1=\mathcal{P}_0\oplus K_1(L^2).$ Furthermore, as shown in \cite{Floater:2017}, the kernel of the self-adjoint operator $K_1K_1^*$ is the Green's function to the boundary value problem
\begin{equation}\label{bvp:neumann}
-u''(x)=f(x),\quad x\in (0,1),\quad u'(0)=u'(1)=0,\quad u,f\perp 1,
\end{equation}
and we have the orthogonal decomposition (see \cite{Floater:2018})
\begin{equation}\label{eq:Sp1orth}
\mathcal{S}_{p,{\boldsymbol \tau},1} = \mathcal{P}_0\oplus K_1K_1^*(\mathcal{S}_{p-2,{\boldsymbol \tau},1}),\quad p\geq 2.
\end{equation}
For $p=0$ the result follows from the Poincar\'e inequality as shown in \eqref{ineq:deg0}. If $p\geq 2$ is even, the result then follows from Lemma~\ref{lem:1simpl} with $K_1K_1^*$ playing the role of $K$ and with $p=2{\bar{p}}$.
Next, we consider the case $p=1$. Using the definition of $K_1$ we know that $u\in H^1$ can be written as the orthogonal sum $u=c+K_1f$ for $c\in\mathcal{P}_0$ and $f\in L^2$. From \cite{Floater:2018} we recall the decomposition
\begin{equation}\label{eq:Sp1orth2}
\mathcal{S}_{p,{\boldsymbol \tau},1} = \mathcal{P}_0\oplus K_1(\mathcal{S}_{p-1,{\boldsymbol \tau},0}),\quad p\geq 1,
\end{equation}
and we define $\hat{P}_0$ to be the $L^2$-projection onto $\mathcal{S}_{0,{\boldsymbol \tau},0}$. Using \eqref{eq:Sp1orth2} it follows that $K_1\hat{P}_0$ maps into the space $\mathcal{S}_{1,{\boldsymbol \tau},1}$, and so
\begin{align*}
\|u-P_1u\| &= \|K_1f-P_1K_1f\|\leq \|K_1f-K_1\hat{P}_0f\| \\
&\leq \|K_1(I-\hat{P}_0)\|\,\|f\| = \|(I-\hat{P}_0)K_1^*\|\,\|f\|.
\end{align*}
Since $H^1_0=K_1^*(L^2)$ (see \cite{Floater:2018}), we deduce from Lemma~\ref{lem:Sp01} that
$$\|(I-\hat{P}_0)K_1^*\|\leq \frac{\widehat h}{\pi},$$
which proves the case $p=1$.
If $p\geq 3$ is odd, the result then follows from Lemma~\ref{lem:1simpl} and \eqref{eq:Sp1orth}.
\end{proof}
\begin{theorem}\label{thm:reduced}
For any knot sequence ${\boldsymbol \tau}$,
let $h$ be the maximum knot distance of ${\boldsymbol \tau}$, and let $P_{p}$ denote the $L^2$-projection onto the spline space $\widetilde{\mathcal{S}}_{p,{\boldsymbol \tau}}$. Then, for any function $u\in H^1$ we have
\begin{equation*}
\|u-P_{p}u\|\leq \frac{h}{\pi}\|u'\|.
\end{equation*}
\end{theorem}
\begin{proof}
The case of $p$ even is covered by Theorem \ref{thm:Sp1}, since $\widetilde{\mathcal{S}}_{p,{\boldsymbol \tau}}=\mathcal{S}_{p,{\boldsymbol \tau},1}$ in this case. For $p=1$ the result is the case $r=1$ of Theorem~\ref{thm:one}, since $\widetilde{\mathcal{S}}_{1,{\boldsymbol \tau}}=\mathcal{S}_{1,{\boldsymbol \tau}}$. We now consider odd degrees $p>1$. Letting $K_1$ be the integral operator in the proof of Theorem \ref{thm:Sp1}, it follows from \eqref{bvp:neumann} that
\begin{equation*}
\widetilde{\mathcal{S}}_{p,{\boldsymbol \tau}} = \mathcal{P}_0\oplus K_1K_1^*(\widetilde{\mathcal{S}}_{p-2,{\boldsymbol \tau}}),\quad p\geq 2,
\end{equation*}
since the derivative of a spline is a spline on the same knot vector of one degree lower. Using Lemma~\ref{lem:1simpl} with $K_1K_1^*$ playing the role of $K$ we obtain the claimed result.
\end{proof}
\section{Inverse inequalities} \label{sec:inverse}
In this section we show that the spline spaces $\mathcal{S}_{p,{\boldsymbol \tau},\mathrm{per}}$ (see Section~\ref{sec:per}), $\mathcal{S}_{p,{\boldsymbol \tau},i}$, $i=0,1,2$ (see Section~\ref{sec:optS}), and $\widetilde{\mathcal{S}}_{p,{\boldsymbol \tau}}$ (see Section~\ref{sec:reduced}) all satisfy an inverse inequality for any knot sequence ${\boldsymbol \tau}$.
The proof of the following theorem proceeds by induction on the degree $p$. The base case can be found in Theorem~3.91 of \cite{Schwab:99} and the induction step in Theorem~6.1 of \cite{Takacs:2016}, but only for the reduced spline spaces $\widetilde{\mathcal{S}}_{p,{\boldsymbol \tau}}$. The case $\mathcal{S}_{p,{\boldsymbol \tau},0}$ was later shown in \cite{Sogn:2018}. For the sake of completeness we give the full proof here in a general form.
\begin{theorem}\label{thm:inv}
For any knot sequence ${\boldsymbol \tau}$, let $h_\mathrm{min}$ denote its minimum knot distance.
For $p\geq 1$, assume $\mathcal{S}_p$ is any subspace of $\mathcal{S}_{p,{\boldsymbol \tau}}$ such that the boundary conditions
\begin{equation}\label{cond:inv}
\partial^\alpha s(0)\partial^{\alpha-1} s(0)= \partial^\alpha s(1)\partial^{\alpha-1} s(1), \quad \alpha=1,\ldots,p-1
\end{equation}
are satisfied for all $s\in \mathcal{S}_p$. Then, the inverse inequality in \eqref{ineq:inv}
holds.
\end{theorem}
\begin{proof}
We first look at $p=1$. We will use a scaling argument. Let $\hat s$ be a linear function on the interval $[-1,1]$. Since it can be written as $\hat s(x)= a_0+a_1x$, we get
$$\|\hat s'\|^2=2a_1^2\leq 3\Big(2a_0^2+\frac{2}{3}a_1^2\Big)=3\|\hat s\|^2.$$
By a linear change of variables mapping each knot interval $I_j$ onto $[-1,1]$, which rescales the derivative by a factor $2/h_j$, we have
$$\|s'\|_j\leq \frac{2\sqrt{3}}{h_j}\|s\|_j,$$
for all $s\in \mathcal{S}_{1,{\boldsymbol \tau}}$.
Finally, we arrive at
$$\|s'\|^2=\sum_{j=0}^N\|s'\|_j^2\leq \sum_{j=0}^N \Big(\frac{2\sqrt{3}}{h_j}\|s\|_j\Big)^2\leq \Big(\frac{2\sqrt{3}}{h_\mathrm{min}}\|s\|\Big)^2.$$
Next, we assume the result is true for $\mathcal{S}_{p-1}$ and consider the case of $\mathcal{S}_p$.
Using integration by parts and the Cauchy-Schwarz inequality we have
$$
\|s'\|^2=\int_0^1(s'(x))^2 d x = [s's]_0^1 -\int_0^1 s''(x)s(x) dx\leq \|s''\|\,\|s\|,
$$
where the boundary terms disappeared since $s\in\mathcal{S}_p$. Now, using the induction hypothesis together with the fact that $s'\in\mathcal{S}_{p-1}$ whenever $s\in\mathcal{S}_p$, we obtain
$$
\|s'\|^2\leq \|s''\|\,\|s\|\leq \frac{2\sqrt{3}}{h_\mathrm{min}}\|s'\|\,\|s\|,
$$
and the result follows.
\end{proof}
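The base case $p=1$ of the proof can be tested numerically: for a piecewise linear spline with nodal values $v_j$, both norms are available in closed form on each knot interval, namely $\|s\|_j^2=\tfrac{h_j}{3}(v_j^2+v_jv_{j+1}+v_{j+1}^2)$ and $\|s'\|_j^2=(v_{j+1}-v_j)^2/h_j$. The following sketch uses random knots and nodal values (illustrative only):

```python
import math
import random

random.seed(2)
knots = sorted([0.0, 1.0] + [random.random() for _ in range(6)])
vals = [random.uniform(-1, 1) for _ in knots]   # nodal values of s
h_min = min(b - a for a, b in zip(knots, knots[1:]))

ns2, nds2 = 0.0, 0.0
for (a, b), (va, vb) in zip(zip(knots, knots[1:]), zip(vals, vals[1:])):
    h = b - a
    ns2 += h / 3.0 * (va * va + va * vb + vb * vb)   # ||s||_j^2, exact
    nds2 += (vb - va) ** 2 / h                       # ||s'||_j^2, exact

# Inverse inequality ||s'|| <= (2*sqrt(3)/h_min) ||s|| for p = 1.
assert math.sqrt(nds2) <= (2.0 * math.sqrt(3.0) / h_min) * math.sqrt(ns2)
print(math.sqrt(nds2), (2.0 * math.sqrt(3.0) / h_min) * math.sqrt(ns2))
```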
The spline spaces $\mathcal{S}_{p,{\boldsymbol \tau},\mathrm{per}}$, $\mathcal{S}_{p,{\boldsymbol \tau},i}$, $i=0,1,2$, and $\widetilde{\mathcal{S}}_{p,{\boldsymbol \tau}}$ all satisfy the boundary conditions in~\eqref{cond:inv}.
This brings us to the following corollary.
\begin{corollary}
The spline spaces $\mathcal{S}_{p,{\boldsymbol \tau},\mathrm{per}}$, $\mathcal{S}_{p,{\boldsymbol \tau},i}$, $i=0,1,2$, and $\widetilde{\mathcal{S}}_{p,{\boldsymbol \tau}}$ satisfy the inverse inequality in~\eqref{ineq:inv}.
\end{corollary}
\section{Conclusions}\label{sec:conclusions}
In this paper we have introduced a general framework for deriving error estimates based on the theory of Kolmogorov $L^2$ $n$-widths and the representation of Sobolev spaces in terms of integral operators described by suitable kernels. By applying this framework we have obtained sharp (or close to sharp) error estimates for spline approximation, in both the periodic and the non-periodic case. These generalize and/or improve the results known in the literature.
More precisely, for the important case of spline spaces of maximal smoothness, we have presented the following contributions:
\begin{itemize}
\item we have provided error estimates for the $L^2$-projection and Ritz projections of any function in $H^r$ for arbitrary knot sequences and with explicit constants;
\item focusing on the periodic case, we have
used the error estimate for the Ritz projection to prove convergence of the Galerkin method, in $p$, to the eigenvalues and eigenfunctions of the Laplacian with periodic boundary conditions;
\item we have related the problem of spectral convergence to the theory of Kolmogorov $n$-widths and proved a general convergence result for optimal subspaces;
\item we have identified $n$-dimensional spline spaces, all satisfying an inverse inequality and all possessing optimal approximation order for function classes in $H^1$, that converge, in $p$, to the first $n$ eigenfunctions of the Laplacian with various boundary conditions.
\end{itemize}
Besides the direct theoretical interests of the presented results, we also see several practical consequences in the IGA context:
\begin{itemize}
\item they can be a starting point for the theoretical understanding of the benefits of spline approximation under $k$-refinement by isogeometric discretization methods;
\item they provide theoretical insights into the outperformance of smooth spline discretizations of eigenvalue problems, which has been numerically observed in the literature;
\item they form a theoretical foundation for proving optimality, in $n$ and $p$, of geometric multigrid solvers for linear systems arising from (non-uniform) smooth spline discretizations.
\end{itemize}
\section*{Acknowledgements}
The authors are indebted to M.~S.~Floater for the numerous discussions which improved the content and the presentation of the paper.
This work was supported by the MIUR Excellence Department Project awarded to the Department of Mathematics, University of Rome Tor Vergata (CUP E83C18000100006) and received funding from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013) / ERC grant agreement 339643.
C.~Manni and H.~Speleers are members of Gruppo Nazionale per il Calcolo Scientifico, Istituto Nazionale di Alta Matematica.
\bibliographystyle{ws-m3as}
% arXiv:1810.13418 -- Sharp error estimates for spline approximation: explicit constants, $n$-widths, and eigenfunction convergence
% arXiv:1805.04759
\title{An Analog of Matrix Tree Theorem for Signless Laplacians}
\begin{abstract}
A spanning tree of a graph is a connected subgraph on all vertices with the minimum number of edges. The number of spanning trees in a graph $G$ is given by the Matrix Tree Theorem in terms of principal minors of the Laplacian matrix of $G$. We show a similar combinatorial interpretation for principal minors of the signless Laplacian $Q$. We also prove that the number of odd cycles in $G$ is less than or equal to $\frac{\det(Q)}{4}$, where equality holds if and only if $G$ is a bipartite graph or an odd-unicyclic graph.
\end{abstract}
\section{Introduction}
For a simple graph $G$ on $n$ vertices $1,2,\ldots,n$ and $m$ edges $1,2,\ldots, m$ we define its \emph{degree matrix} $D$, \emph{adjacency matrix} $A$, and \emph{incidence matrix} $N$ as follows:
\begin{enumerate}
\item $D=[d_{ij}]$ is an $n \times n$ diagonal matrix where $d_{ii}$ is the degree of the vertex $i$ in $G$ for $i=1,2,\ldots,n$.
\item $A=[a_{ij}]$ is an $n \times n$ matrix with zero diagonal entries, where $a_{ij}=1$ if vertices $i$ and $j$ are adjacent in $G$ and $a_{ij}=0$ otherwise, for $i,j=1,2,\ldots,n$.
\item $N=[n_{ij}]$ is an $n \times m$ matrix whose rows are indexed by vertices and columns are indexed by edges of $G$. The entry $n_{ij} = 1$ whenever vertex $i$ is incident with edge $j$ (i.e., vertex $i$ is an endpoint of edge $j$) and $n_{ij}=0$ otherwise.
\end{enumerate}
We define the {\it Laplacian matrix} $L$ and {\it signless Laplacian matrix} $Q$ to be $L=D-A$ and $Q=D+A$, respectively. It is well-known that both $L$ and $Q$ have nonnegative real eigenvalues \cite[Sec 1.3]{BH}. Note the relation between the spectra of $L$ and $Q$:
\begin{theorem}\cite[Prop $1.3.10$]{BH}\label{CB2}
Let $G$ be a simple graph on $n$ vertices. Let $L$ and $Q$ be the Laplacian matrix and the signless Laplacian matrix of $G$, respectively, with eigenvalues $0=\mu_1\leq \mu_2\leq \cdots \leq \mu_n$ for $L$, and $\lambda_1\leq \lambda_2\leq \cdots \leq \lambda_n$ for $Q$. Then $G$ is bipartite if and only if
$\{\mu_1,\mu_2, \ldots ,\mu_n\}=\{\lambda_1, \lambda_2, \ldots ,\lambda_n\}.$
\end{theorem}
\begin{theorem}\cite[Prop $2.1$]{CRC}
The smallest eigenvalue of the signless Laplacian of a connected graph is equal to $0$ if and only if the graph is bipartite. In this case $0$ is a simple eigenvalue.
\end{theorem}
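Both spectral facts are easy to verify numerically, for instance for the bipartite $4$-cycle $C_4$ and the triangle $C_3$ (a small numpy sketch; the graphs are illustrative choices):

```python
import numpy as np

def lap_pair(adj):
    """Laplacian L = D - A and signless Laplacian Q = D + A."""
    A = np.array(adj, dtype=float)
    D = np.diag(A.sum(axis=1))
    return D - A, D + A

# C_4 (bipartite) and C_3 (odd cycle, not bipartite).
C4 = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
C3 = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]

L4, Q4 = lap_pair(C4)
L3, Q3 = lap_pair(C3)

# Bipartite: spec(L) = spec(Q), and the smallest Q-eigenvalue is 0.
assert np.allclose(np.linalg.eigvalsh(L4), np.linalg.eigvalsh(Q4))
assert abs(np.linalg.eigvalsh(Q4)[0]) < 1e-10
# Non-bipartite connected graph: smallest Q-eigenvalue is strictly positive.
assert np.linalg.eigvalsh(Q3)[0] > 1e-10
print(np.linalg.eigvalsh(Q4), np.linalg.eigvalsh(Q3))
```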
We use the following notation for submatrices of an $n \times m$ matrix $M$: for sets $I \subset \{ 1,2,\ldots,n\}$ and $J\subset \{ 1,2,\ldots,m\}$,
\begin{itemize}
\item $M[I ; J]$ denotes the submatrix of $M$ whose rows are indexed by $I$ and columns are indexed by $J$.
\item $M(I ; J)$ denotes the submatrix of $M$ obtained by removing the rows indexed by $I$ and removing the columns indexed by $J$.
\item $M(I ; J]$ denotes the submatrix of $M$ whose columns are indexed by $J$, and obtained by removing rows indexed by $I$.
\end{itemize}
We often list the elements of $I$ and $J$, separated by commas, in this submatrix notation, rather than writing them as sets. For example, $M(2 ; 3,7,8]$ is an $(n-1) \times 3$ matrix whose rows are the rows of $M$ with the second row deleted and whose columns are, respectively, the third, seventh, and eighth columns of $M$. Moreover, if $I=J$, we abbreviate $M(I ; J)$ and $M[I ; J]$ as $M(I)$ and $M[I]$, respectively. Also, we abbreviate $M(\varnothing ; J]$ and $M(I;\varnothing)$ as $M(;J]$ and $M(I;)$, respectively.
A {\it spanning tree} of $G$ is a connected subgraph of $G$ on all $n$ vertices with the minimum possible number of edges, namely $n-1$ edges. The number of spanning trees of $G$ is denoted by $t(G)$ and is given by the Matrix Tree Theorem:
\begin{theorem}[Matrix Tree Theorem]\cite[Prop $1.3.4$]{BH}\label{MTT}
Let $G$ be a simple graph on $n$ vertices and $L$ be the Laplacian matrix of $G$ with eigenvalues $0=\mu_1\leq \mu_2\leq \cdots \leq \mu_n$. Then the number $t(G)$ of spanning trees of $G$ is
$$t(G)=\det\left(L(i)\right)=\frac{\mu_2\cdot \mu_3 \cdots \mu_n}{n},$$
for all $i=1,2,\ldots,n$.
\end{theorem}
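As a concrete sanity check (ours, not from the paper), the first equality $t(G)=\det\left(L(i)\right)$ can be verified for the paw graph of Figure \ref{paw}; the exact rational determinant routine below is an illustrative implementation choice.

```python
from fractions import Fraction

def det(M):
    """Exact determinant via Gaussian elimination over the rationals."""
    A = [[Fraction(x) for x in row] for row in M]
    n, sign = len(A), 1
    for k in range(n):
        piv = next((i for i in range(k, n) if A[i][k] != 0), None)
        if piv is None:
            return 0
        if piv != k:
            A[k], A[piv] = A[piv], A[k]
            sign = -sign
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            A[i] = [a - f * b for a, b in zip(A[i], A[k])]
    p = Fraction(sign)
    for k in range(n):
        p *= A[k][k]
    return int(p)

def principal_submatrix(M, i):
    """M(i): remove row i and column i (0-indexed)."""
    return [[M[r][c] for c in range(len(M)) if c != i]
            for r in range(len(M)) if r != i]

# Paw graph: triangle on vertices 2,3,4 plus pendant edge {1,2} (0-indexed below).
edges = [(0, 1), (1, 2), (2, 3), (1, 3)]
n = 4
L = [[0] * n for _ in range(n)]
for u, v in edges:
    L[u][u] += 1; L[v][v] += 1
    L[u][v] -= 1; L[v][u] -= 1

tG = det(principal_submatrix(L, 0))
assert tG == 3                                   # the paw has 3 spanning trees
assert all(det(principal_submatrix(L, i)) == tG  # det(L(i)) independent of i
           for i in range(n))
```

The paw has exactly three spanning trees (drop any one of the three triangle edges), matching the computed minor.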
We explore whether there is an analog of the Matrix Tree Theorem for the signless Laplacian matrix $Q$. First note that, unlike $\det\left(L(i)\right)$, the minor $\det\left(Q(i)\right)$ is not necessarily the same for all $i$, as illustrated in the following example.
\begin{example}
For the paw graph $G$ with its signless Laplacian matrix $Q$ in Figure \ref{paw}, $\det\left(Q(1)\right)=7\neq 3=\det\left(Q(2)\right)=\det\left(Q(3)\right)=\det\left(Q(4)\right)$.
\begin{figure}
\begin{center}
\begin{tikzpicture}[colorstyle/.style={circle, fill, black, scale = .5}]
\node (1) at (0,1)[colorstyle, label=above:$1$]{};
\node (2) at (0,0)[colorstyle, label=below:$2$]{};
\node (3) at (-1,-1)[colorstyle, label=below:$3$]{};
\node (4) at (1,-1)[colorstyle, label=below:$4$]{};
\draw [] (1)--(2)--(3)--(4)--(2);
\node at (2,0)[label=right:{$Q=\left[\begin{array}{cccc}
1&1&0&0\\
1&3&1&1\\
0&1&2&1\\
0&1&1&2
\end{array}\right] $}]{};
\end{tikzpicture}
\caption{Paw $G$ and its signless Laplacian matrix $Q$}\label{paw}
\end{center}
\end{figure}
\end{example}
The Matrix Tree Theorem can be proved by the Cauchy-Binet formula:
\begin{theorem}[Cauchy-Binet]\cite[Prop $1.3.5$]{BH}\label{CB}
Let $m \leq n$. For $m\times n$ matrices $A$ and $B$, we have
\[
\det(AB^T)=\sum_{S} \det(A(;S]) \det(B(;S]),
\]
where the summation runs over $\binom{n}{m}$ $m$-subsets $S$ of $\{1,2,\ldots,n\}$.
\end{theorem}
The following observation provides a decomposition of the signless Laplacian matrix $Q$ which enables us to apply the Cauchy-Binet formula on it.
\begin{obs}\label{CB3}
Let $G$ be a simple graph on $n\geq 2$ vertices and $m \geq n-1$ edges. Suppose $N$ and $Q$ are the incidence matrix and the signless Laplacian matrix of $G$, respectively. Then
\begin{enumerate}
\item[(a)] $Q=NN^T$,
\item[(b)] $Q(i)=N(i;)N(i;)^T$, $i=1,2,\ldots,n$, and
\item[(c)] $\det(Q(i))=\det(N(i;)N(i;)^T)=\sum_{S} \det(N(i;S])^2,$
where the summation runs over all $(n-1)$-subsets $S$ of $\{1,2,\ldots,m\}$ (by the Cauchy-Binet formula, Theorem \ref{CB}).
\end{enumerate}
\end{obs}
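The decomposition in the observation is easy to check directly. The following sketch (our illustration, assuming the edge ordering $e_1=\{1,2\}$, $e_2=\{2,3\}$, $e_3=\{3,4\}$, $e_4=\{2,4\}$ for the paw graph of Figure \ref{paw}) verifies parts (a) and (c):

```python
from fractions import Fraction
from itertools import combinations

def det(M):
    """Exact determinant via Gaussian elimination over the rationals."""
    A = [[Fraction(x) for x in row] for row in M]
    n, sign = len(A), 1
    for k in range(n):
        piv = next((i for i in range(k, n) if A[i][k] != 0), None)
        if piv is None:
            return 0
        if piv != k:
            A[k], A[piv] = A[piv], A[k]
            sign = -sign
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            A[i] = [a - f * b for a, b in zip(A[i], A[k])]
    p = Fraction(sign)
    for k in range(n):
        p *= A[k][k]
    return int(p)

# Incidence matrix N of the paw (rows = vertices 1..4, columns = e1..e4).
N = [[1, 0, 0, 0],
     [1, 1, 0, 1],
     [0, 1, 1, 0],
     [0, 0, 1, 1]]
n = m = 4

# (a) Q = N N^T reproduces the matrix in Figure \ref{paw}.
Q = [[sum(N[i][k] * N[j][k] for k in range(m)) for j in range(n)]
     for i in range(n)]
assert Q == [[1, 1, 0, 0], [1, 3, 1, 1], [0, 1, 2, 1], [0, 1, 1, 2]]

# (c) det(Q(i)) equals the Cauchy-Binet sum over (n-1)-subsets of columns.
i = 0  # vertex 1
Qi = [[Q[r][c] for c in range(n) if c != i] for r in range(n) if r != i]
Ni = [row for r, row in enumerate(N) if r != i]          # N(i;)
cb = sum(det([[row[j] for j in S] for row in Ni]) ** 2
         for S in combinations(range(m), n - 1))
assert det(Qi) == cb == 7
```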
\section{Principal minors of signless Laplacians}
In this section we find a combinatorial formula for the principal minors $\det(Q(i))$ of the signless Laplacian matrix $Q$ of a given graph $G$. We mainly use Observation \ref{CB3}(c), obtained from the Cauchy-Binet formula, which involves determinants of submatrices of incidence matrices. This approach is completely different from the methods applied for related spectral results in \cite{CRC}. But we borrow the definition of $TU$-subgraphs from \cite{CRC}, slightly modified as follows: a {\it $TU$-graph} is a graph whose connected components are trees or odd-unicyclic graphs. A {\it $TU$-subgraph} of $G$ is a spanning subgraph of $G$ that is a $TU$-graph. The following lemma gives the number of trees in a $TU$-graph.
\begin{lemma}\label{noofcyclesinTU}
If $G$ is a $TU$-graph on $n$ vertices with $n-k$ edges consisting of $c$ odd-unicyclic graphs and $s$ trees, then $s=k$.
\end{lemma}
\begin{proof}
Suppose the numbers of vertices of the odd-unicyclic components are $n_1,n_2,\ldots,n_c$ and those of the trees are $t_1,t_2,\ldots,t_s$. Then the total number of edges is
\begin{align*}
n-k = \sum_{i=1}^{c} n_i + \sum_{i=1}^{s} (t_i-1) = n - s
\end{align*}
which implies $s = k$.
\end{proof}
Now we find the determinant of incidence matrices of some special graphs in the following lemmas.
\begin{lemma}\label{incidencedet}
If $G$ is an odd (resp. even) cycle, then the determinant of its incidence matrix is $\pm 2$ (resp. zero).
\end{lemma}
\begin{proof}
Let $G$ be a cycle with the incidence matrix $N$. Then up to permutation we have
$$N=PN'Q=P
\left[\begin{array}{cccccc}
1&0&0&\cdots&0&1\\
1&1&0&\cdots&0&0\\
0&1&1&\ddots&\vdots&\vdots\\
\vdots&\ddots &\ddots&\ddots&\ddots&\vdots\\
\vdots&\vdots&\ddots&\ddots&1&0\\
0&\cdots&\cdots&0&1&1\\
\end{array}\right]Q,
$$
for some permutation matrices $P$ and $Q$. By a cofactor expansion across the first row we have
$$\det(N)=\det(P)\det(N')\det(Q)=(\pm 1)(1+(-1)^{n+1})(\pm 1).$$
If $n$ is odd (resp. even), then $\det(N)= \pm 2$ (resp. zero).
\end{proof}
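A quick computational check of the lemma (ours, not from the paper) for a few small cycles, using the edge labeling $e_j=\{j, j+1 \bmod n\}$:

```python
from fractions import Fraction

def det(M):
    """Exact determinant via Gaussian elimination over the rationals."""
    A = [[Fraction(x) for x in row] for row in M]
    n, sign = len(A), 1
    for k in range(n):
        piv = next((i for i in range(k, n) if A[i][k] != 0), None)
        if piv is None:
            return 0
        if piv != k:
            A[k], A[piv] = A[piv], A[k]
            sign = -sign
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            A[i] = [a - f * b for a, b in zip(A[i], A[k])]
    p = Fraction(sign)
    for k in range(n):
        p *= A[k][k]
    return int(p)

def cycle_incidence(n):
    """Incidence matrix of the n-cycle whose edge j joins vertices j and j+1 (mod n)."""
    N = [[0] * n for _ in range(n)]
    for j in range(n):
        N[j][j] = 1
        N[(j + 1) % n][j] = 1
    return N

assert abs(det(cycle_incidence(3))) == 2   # odd cycles: determinant +-2
assert abs(det(cycle_incidence(5))) == 2
assert det(cycle_incidence(4)) == 0        # even cycles: determinant 0
assert det(cycle_incidence(6)) == 0
```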
\begin{lemma}\label{incidencedetoddcyclic}
If $G$ is an odd unicyclic (resp. even unicyclic) graph, then the determinant of its incidence matrix is $\pm 2$ (resp. $0$).
\end{lemma}
\begin{proof}
Let $G$ be a unicyclic graph with incidence matrix $N$ and $t$ vertices not on the cycle. We prove the statement by induction on $t$. If $t = 0$, then $G$ is an odd (resp. even) cycle and $\det(N) = \pm 2$ (resp. $0$) by Lemma \ref{incidencedet}. Assume the statement holds for some $t \geq 0$. Let $G$ be a unicyclic graph with $t+1$ vertices not on the cycle. Then $G$ has a pendant vertex, say vertex $i$. The vertex $i$ is incident with exactly one edge of $G$, say $e_l = \{ i,j \}$. Then the $i$th row of $N$ has only one nonzero entry, namely the $(i,l)$ entry, which is equal to $1$. A cofactor expansion across the $i$th row gives
\[
\det(N)=\pm 1 \cdot \big( \pm \det(N(i ; l)) \big).
\]
Note that $N(i ; l)$ is the incidence matrix of $G(i)$, which is a unicyclic graph with $t$ vertices not on the cycle. By the induction hypothesis, $\det(N(i ; l)) = \pm 2$ (resp. $0$). Thus $\det(N)=\pm 1 \cdot \big( \pm \det(N(i ; l)) \big)=\pm 2$ (resp. $0$).
\end{proof}
By a similar induction on the number of pendant vertices we get the following result.
\begin{lemma}\label{tree_det}
Let $H$ be a tree with at least one edge and $N$ be the incidence matrix of $H$. Then $\det(N(i;)) = \pm 1$ for all vertices $i$ of $H$.
\end{lemma}
\begin{lemma}\label{lem_tree_and_one_more_edge}
Let $H$ be a graph on $n$ vertices and $n-1$ edges with incidence matrix $N$. If $H$ has a connected component which is a tree and an edge which is not on the tree, then $\det(N(i;)) = 0$ for all vertices $i$ not on the tree.
\end{lemma}
\begin{proof}
Let $H$ have a connected component $T$ which is a tree and an edge $e_j$ which is not on $T$. Suppose $i$ is a vertex of $H$ that is not on $T$. If $T$ consists of just one vertex, then the corresponding row of $N(i;)$ is a zero row, giving $\det(N(i;))=0$. Suppose then that $T$ has at least two vertices.
Now consider the square submatrix $N'$ of $N(i;)$ with rows corresponding to the vertices of $T$ and columns corresponding to the edges of $T$ together with $e_j$. Then the column of $N'$ corresponding to $e_j$ is a zero column, giving $\det(N')=0$. Since the entries in the rows of $N(i;)$ corresponding to $T$ that lie outside of $N'$ are zero, the rows of $N(i;)$ corresponding to $T$ are linearly dependent and consequently $\det(N(i;))=0$.
\end{proof}
Now we break down different scenarios that can happen to a graph with $n$ vertices and $m=n-1$ edges.
\begin{proposition}\label{prop_enn_vert_ennminusone_edg}
Let $H$ be a graph on $n$ vertices and $m = n-1$ edges. Then one of the following is true for $H$.
\begin{enumerate}
\item \label{prop_enn_vert_ennminusone_edg_case_tree} $H$ is a tree.
\item \label{prop_enn_vert_ennminusone_edg_case_even_cycle} $H$ has an even cycle and a vertex not on the cycle.
\item \label{prop_enn_vert_ennminusone_edg_case_non_tu} $H$ has no even cycles, but $H$ has a connected component with at least two odd cycles and at least two connected components which are trees.
\item \label{prop_enn_vert_ennminusone_edg_case_tu} $H$ is a disjoint union of odd unicyclic graphs and exactly one tree, i.e., $H$ is a $TU$-graph.
\end{enumerate}
\end{proposition}
\begin{proof}
If $H$ is connected, then it is a tree. This implies Case \ref{prop_enn_vert_ennminusone_edg_case_tree}. Now assume $H$ is not connected. If $H$ has no cycles, then it is a forest with at least two connected components, which would imply $m \leq n-2 < n-1$, contradicting the assumption that $m = n-1$. Thus $H$ has at least one cycle. Suppose $H$ has $t \geq 2$ connected components $H_i$ with $m_i$ edges and $n_i$ vertices, where the first $k$ of them contain at least one cycle and the rest are trees. For $i= 1,\ldots, k$ we have $m_i \geq n_i$. Note that
\begin{equation}\label{treenumber}
-1 = m - n = \sum_{i=1}^{t} (m_i - n_i) = \sum_{i=1}^{k} (m_i - n_i) + \sum_{i=k+1}^{t} (m_i - n_i)
\end{equation}
Since $H_i$ has a cycle for $i =1,\ldots,k$ and $H_i$ is a tree for $i = k+1,\ldots,t$,
\[
\ell := \sum_{i=1}^{k} (m_i - n_i) \geq 0,
\]
and
\[
\sum_{i=k+1}^{t} (m_i - n_i) = -(t - k).
\]
Then $t-k = \ell + 1$ by (\ref{treenumber}). In other words, in order to make up for the extra edges in the connected components with cycles, $H$ has to have exactly $\ell + 1$ connected components which are trees.
If $H$ has an even cycle, then $\ell \geq 0$ and hence $t-k \geq 1$. This means there is at least one connected component which is a tree, and it contains a vertex which is not on the cycle. This implies Case \ref{prop_enn_vert_ennminusone_edg_case_even_cycle}. Otherwise, all of the cycles of $H$ are odd. If some connected component contains more than one cycle, then $\ell \geq 1$ and thus $t-k \geq 2$. This implies Case \ref{prop_enn_vert_ennminusone_edg_case_non_tu}. Otherwise, each $H_i$ with $i=1,\ldots,k$ contains exactly one cycle, which implies $\ell = 0$, and then $t-k =1$. This implies Case \ref{prop_enn_vert_ennminusone_edg_case_tu}.
\end{proof}
\begin{theorem}\label{thm_casesQ_redo}
Let $G$ be a simple connected graph on $n\geq 2$ vertices and $m$ edges with the incidence matrix $N$. Let $i$ be an integer from $\{1,2,\ldots,n\}$. Let $S$ be an $(n-1)$-subset of $\{1,2,\ldots,m\}$ and $H$ be a spanning subgraph of $G$ with edges indexed by $S$. Then one of the following holds for $H$.
\begin{enumerate}
\item \label{thm_casesQ_redo_case_tree} $H$ is a tree. Then $\det(N(i;S]) = \pm 1$.
\item \label{thm_casesQ_redo_case_even_cycle} $H$ has an even cycle and a vertex not on the cycle. Then $\det(N(i;S]) = 0$.
\item \label{thm_casesQ_redo_case_non_tu} $H$ has no even cycles, but it has a connected component with at least two odd cycles and at least two connected components which are trees. Then $\det(N(i;S]) = 0$.
\item \label{thm_casesQ_redo_case_tu} $H$ is a $TU$-subgraph of $G$ consisting of $c$ odd-unicyclic graphs $U_1, U_2, \ldots , U_c$ and a unique tree $T$. If $i$ is a vertex of $U_j$ for some $j=1,2,\ldots,c$, then $\det(N(i;S]) =0$. If $i$ is a vertex of $T$, then $\det(N(i;S]) =\pm 2^c$.
\end{enumerate}
\end{theorem}
\begin{proof}
Suppose vertices and edges of $G$ are $1,2,\ldots,n$ and $e_1,e_2,\ldots,e_m$, respectively. Note that $m\geq n-1$ since $G$ is connected.
\begin{enumerate}
\item Suppose $H$ is a tree. Since $n\geq 2$, $H$ has an edge. Then by Lemma \ref{tree_det}, $\det(N(i;S]) = \pm 1$.
\item Suppose $H$ contains an even cycle $C$ as a subgraph and a vertex $j$ not on $C$.
{\it Case 1.} Vertex $i$ is not in $C$.\\
Then the square submatrix $N'$ of $N(i;S]$ corresponding to $C$ has determinant zero by Lemma \ref{incidencedetoddcyclic}. Since entries in columns of $N(i;S]$ corresponding to $C$ that are outside of $N'$ are zero, the columns of $N(i;S]$ corresponding to $C$ are linearly dependent and consequently $\det(N(i;S])=0$. \\
{\it Case 2.} Vertex $i$ is in $C$.\\
Since $i$ is in $C$, we have $j\neq i$. Consider the square submatrix $N'$ of $N(i;S]$ that has rows corresponding to vertex $j$ and the vertices of $C$ excluding $i$, and columns corresponding to the edges of $C$. Since vertex $j$ is not on $C$, the row of $N'$ corresponding to vertex $j$ is a zero row and consequently $\det(N')=0$. Since the entries in the columns of $N(i;S]$ corresponding to $C$ that lie outside of $N'$ are zero, the columns of $N(i;S]$ corresponding to $C$ are linearly dependent and consequently $\det(N(i;S])=0$.
\item Suppose $H$ has no even cycles, but it has a connected component with at least two odd cycles and at least two connected components which are trees. Then vertex $i$ is not in one of the trees. Then $\det(N(i;S])=0$ by Lemma \ref{lem_tree_and_one_more_edge}.
\item Suppose $H$ is a $TU$-subgraph of $G$ consisting of $c$ odd-unicyclic graphs $U_1, U_2, \ldots , U_c$ and a unique tree $T$. If $i$ is a vertex of $U_j$ for some $j = 1, \ldots, c$, then $\det(N(i;S])=0$ by Lemma \ref{lem_tree_and_one_more_edge}. If $i$ is a vertex of the tree $T$, then $N(i;S]$ is a direct sum of the incidence matrices of the odd-unicyclic graphs $U_1, U_2, \ldots , U_c$ and the incidence matrix of the tree $T$ with one row deleted (which does not exist when $T$ is a tree on the single vertex $i$). By Lemmas \ref{incidencedetoddcyclic} and \ref{tree_det}, $\det(N(i;S])=(\pm 2)^c\cdot (\pm 1)= \pm 2^c$.
\end{enumerate}
\end{proof}
The preceding results are summarized in the following theorem.
\begin{theorem}\label{mainlemma}
Let $G$ be a simple connected graph on $n\geq 2$ vertices and $m$ edges with the incidence matrix $N$. Let $i$ be an integer from $\{1,2,\ldots,n\}$. Let $S$ be an $(n-1)$-subset of $\{1,2,\ldots,m\}$ and $H$ be a spanning subgraph of $G$ with edges indexed by $S$.
\begin{enumerate}
\item[(a)] If $H$ is not a $TU$-subgraph of $G$, then $\det(N(i;S])=0$.
\item[(b)] Suppose $H$ is a $TU$-subgraph of $G$ consisting of $c$ odd-unicyclic graphs $U_1, U_2, \ldots , U_c$ and a unique tree $T$. If $i$ is a vertex of $U_j$ for some $j=1,2,\ldots,c$, then $\det(N(i;S]) =0$. If $i$ is a vertex of $T$, then $\det(N(i;S]) =\pm 2^c$.
\end{enumerate}
\end{theorem}
For a $TU$-subgraph $H$ of $G$, the number of connected components that are odd-unicyclic graphs is denoted by $c(H)$. So a $TU$-subgraph $H$ on $n-1$ edges with $c(H)=0$ is a spanning tree of $G$.
\begin{theorem}\label{mainresult}
Let $G$ be a simple connected graph on $n\geq 2$ vertices $1,2,\ldots,n$ with the signless Laplacian matrix $Q$. Then
$$\det(Q(i))=\sum_{H}4^{c(H)},$$
where the summation runs over all $TU$-subgraphs $H$ of $G$ with $n-1$ edges consisting of a unique tree on vertex $i$ and $c(H)$ odd-unicyclic graphs.
\end{theorem}
\begin{proof}
By Observation \ref{CB3}, we have,
$$\det(Q(i))=\sum_{S} \det(N(i;S])^2,$$
where the summation runs over all $(n-1)$-subsets $S$ of $\{1,2,\ldots,m\}$. By Theorem \ref{mainlemma}, we have,
$$\det(Q(i))=\sum_{S} \det(N(i;S])^2=\sum_{H}(\pm 2^{c(H)})^2=\sum_{H}4^{c(H)},$$
where the summation runs over all $TU$-subgraphs $H$ of $G$ with $n-1$ edges consisting of a unique tree on vertex $i$ and $c(H)$ odd-unicyclic graphs.
\end{proof}
\begin{example}
Consider the Paw $G$ and its signless Laplacian matrix $Q$ in Figure \ref{paw}. To determine $\det(Q(1))$, consider the $TU$-subgraphs of $G$ with $3$ edges consisting of a unique tree on vertex $1$: $H_1$, $H_2$, $H_3$, $H_4$ in Figure \ref{tu}. Note $c(H_1)=c(H_2)=c(H_3)=0$ and $c(H_4)=1$. Then by Theorem \ref{mainresult},
$$\det(Q(1))=\sum_{H}4^{c(H)}=4^{c(H_1)}+4^{c(H_2)}+4^{c(H_3)}+4^{c(H_4)}=4^{0}+4^{0}+4^{0}+4^{1}=7.$$
\begin{figure}
\begin{center}
\begin{tikzpicture
[scale=1,colorstyle/.style={circle, draw=black!100,fill=black!100, thick, inner sep=0pt, minimum size=2mm},>=stealth]
\node (1) at (0,1)[colorstyle, label=above:$1$]{};
\node (2) at (0,0)[colorstyle, label=below:$2$]{};
\node (3) at (-1,-1)[colorstyle, label=below:$3$]{};
\node (4) at (1,-1)[colorstyle, label=below:$4$]{};
\node at (0,-1.5)[label=below:$H_1$]{};
\draw [] (1)--(2)--(3)--(4);
\node (1b) at (3,1)[colorstyle, label=above:$1$]{};
\node (2b) at (3,0)[colorstyle, label=below:$2$]{};
\node (3b) at (2,-1)[colorstyle, label=below:$3$]{};
\node (4b) at (4,-1)[colorstyle, label=below:$4$]{};
\node at (3,-1.5)[label=below:$H_2$]{};
\draw [] (1b)--(2b)--(4b)--(3b);
\node (1c) at (6,1)[colorstyle, label=above:$1$]{};
\node (2c) at (6,0)[colorstyle, label=below:$2$]{};
\node (3c) at (5,-1)[colorstyle, label=below:$3$]{};
\node (4c) at (7,-1)[colorstyle, label=below:$4$]{};
\node at (6,-1.5)[label=below:$H_3$]{};
\draw [] (1c)--(2c)--(3c);
\draw [] (2c)--(4c);
\node (1d) at (9,1)[colorstyle, label=above:$1$]{};
\node (2d) at (9,0)[colorstyle, label=below:$2$]{};
\node (3d) at (8,-1)[colorstyle, label=below:$3$]{};
\node (4d) at (10,-1)[colorstyle, label=below:$4$]{};
\node at (9,-1.5)[label=below:$H_4$]{};
\draw [] (4d)--(2d)--(3d)--(4d);
\end{tikzpicture}
\caption{$TU$-subgraphs of Paw $G$ with $3$ edges consisting of a unique tree on vertex $1$}\label{tu}
\end{center}
\end{figure}
\end{example}
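The count in this example can also be reproduced by brute force. The sketch below (ours, not from the paper) enumerates all $3$-edge spanning subgraphs of the paw, keeps the $TU$-subgraphs whose unique tree contains vertex $1$, and sums $4^{c(H)}$:

```python
from itertools import combinations

n = 4
edges = [(0, 1), (1, 2), (2, 3), (1, 3)]  # paw; the paper's vertex 1 is index 0

def components(S):
    """Vertex sets of the connected components of the spanning subgraph (V, S)."""
    adj = {v: [] for v in range(n)}
    for u, v in S:
        adj[u].append(v); adj[v].append(u)
    seen, comps = set(), []
    for s in range(n):
        if s not in seen:
            stack, comp = [s], set()
            while stack:
                u = stack.pop()
                if u not in comp:
                    comp.add(u)
                    stack.extend(adj[u])
            seen |= comp
            comps.append(comp)
    return comps

def has_odd_cycle(S, comp):
    """True iff this component is non-bipartite (greedy 2-coloring fails)."""
    adj = {v: [] for v in comp}
    for u, v in S:
        if u in comp:
            adj[u].append(v); adj[v].append(u)
    color = {}
    for s in comp:
        if s in color:
            continue
        color[s] = 0
        stack = [s]
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w not in color:
                    color[w] = 1 - color[u]
                    stack.append(w)
                elif color[w] == color[u]:
                    return True
    return False

total = 0
for S in combinations(edges, n - 1):
    trees, odd_uni, is_tu = [], 0, True
    for comp in components(S):
        e = sum(1 for u, v in S if u in comp)  # number of edges inside this component
        if e == len(comp) - 1:
            trees.append(comp)
        elif e == len(comp) and has_odd_cycle(S, comp):
            odd_uni += 1
        else:
            is_tu = False
    if is_tu and len(trees) == 1 and 0 in trees[0]:  # unique tree containing vertex 1
        total += 4 ** odd_uni

assert total == 7  # agrees with det(Q(1)) in Figure \ref{paw}
```

The four subsets found are exactly $H_1,\ldots,H_4$ of Figure \ref{tu}, contributing $1+1+1+4=7$.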
\begin{corollary}
Let $G$ be a simple connected graph on $n\geq 2$ vertices $1,2,\ldots,n$. Let $Q$ be the signless Laplacian matrix of $G$ with eigenvalues $\lambda_1\leq \lambda_2\leq \cdots \leq \lambda_n$. Then
\begin{enumerate}
\item[(a)] $\det(Q(i))\geq t(G)$, the number of spanning trees of $G$, where the equality holds if and only if all odd cycles of $G$ contain vertex $i$.
\item[(b)] $$\frac{1}{n}\displaystyle\sum_{1\leq i_1<i_2<\cdots<i_{n-1}\leq n} \lambda_{i_1}\lambda_{i_2}\cdots \lambda_{i_{n-1}}
=\frac{1}{n}\displaystyle\sum_{i=1}^n\det(Q(i))\geq t(G),$$
where the equality holds if and only if $G$ is an odd cycle or a bipartite graph.
\end{enumerate}
\end{corollary}
\begin{proof}
\begin{enumerate}
\item[(a)] First note that a $TU$-subgraph $H$ on $n-1$ edges with $c(H)=0$ is a spanning tree of $G$. Then $\det(Q(i))=\sum_{H}4^{c(H)}\geq \sum_{T}4^{0}$, where the sum runs over all spanning trees $T$ of $G$ containing vertex $i$. So $\det(Q(i))$ is greater than or equal to the number of spanning trees of $G$ containing vertex $i$. Since each spanning tree contains vertex $i$, $\det(Q(i))\geq t(G)$ where the equality holds if and only if all odd-unicyclic subgraphs of $G$ contain vertex $i$ by Theorem \ref{mainresult}. Finally note that all odd-unicyclic subgraphs of $G$ contain vertex $i$ if and only if all odd cycles of $G$ contain vertex $i$.
\item[(b)] The first equality follows from the well-known linear algebraic result
\[
\displaystyle\sum_{1\leq i_1<i_2<\cdots<i_{n-1}\leq n} \lambda_{i_1}\lambda_{i_2}\cdots \lambda_{i_{n-1}}=\displaystyle\sum_{i=1}^n\det(Q(i)).
\]
Now by (a) $\det(Q(i))\geq t(G)$ where the equality holds if and only if all odd cycles of $G$ contain vertex $i$. Then
\[
\frac{1}{n}\displaystyle\sum_{i=1}^n\det(Q(i))\geq t(G)
\]
where the equality holds if and only if $\det(Q(i))=t(G)$ for all $i=1,2,\ldots,n$. So the equality holds if and only if all odd cycles of $G$ contain every vertex of $G$ which means $G$ is an odd cycle or a bipartite graph ($G$ has no odd cycles).
\end{enumerate}
\end{proof}
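Both parts of the corollary can be seen numerically for the paw graph. In this illustrative check (ours, not from the paper) the principal minors are $7,3,3,3$, and equality $\det(Q(i))=t(G)=3$ holds exactly at the vertices $2,3,4$ of the unique odd cycle:

```python
from fractions import Fraction

def det(M):
    """Exact determinant via Gaussian elimination over the rationals."""
    A = [[Fraction(x) for x in row] for row in M]
    n, sign = len(A), 1
    for k in range(n):
        piv = next((i for i in range(k, n) if A[i][k] != 0), None)
        if piv is None:
            return 0
        if piv != k:
            A[k], A[piv] = A[piv], A[k]
            sign = -sign
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            A[i] = [a - f * b for a, b in zip(A[i], A[k])]
    p = Fraction(sign)
    for k in range(n):
        p *= A[k][k]
    return int(p)

def principal_submatrix(M, i):
    """M(i): remove row i and column i (0-indexed)."""
    return [[M[r][c] for c in range(len(M)) if c != i]
            for r in range(len(M)) if r != i]

# Signless Laplacian of the paw (Figure \ref{paw}); the paw has t(G) = 3 spanning trees.
Q = [[1, 1, 0, 0], [1, 3, 1, 1], [0, 1, 2, 1], [0, 1, 1, 2]]
tG = 3

minors = [det(principal_submatrix(Q, i)) for i in range(4)]
assert minors == [7, 3, 3, 3]
assert all(mi >= tG for mi in minors)          # part (a): det(Q(i)) >= t(G)
# equality exactly at vertices 2,3,4, which lie on the triangle
assert [mi == tG for mi in minors] == [False, True, True, True]
assert sum(minors) / 4 > tG                    # part (b) strict: vertex 1 misses the triangle
```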
\section{Number of odd cycles in a graph}
In this section we find a combinatorial formula for $\det(Q)$ for the signless Laplacian matrix $Q$ of a given graph $G$. As a corollary we show that the number of odd cycles in $G$ is less than or equal to $\frac{\det(Q)}{4}$.
\begin{proposition}\label{prop_enn_vert_enn_edg}
Let $H$ be a graph on $n$ vertices and $m = n$ edges. Then one of the following is true for $H$.
\begin{enumerate}
\item $H$ has a connected component which is a tree.
\item All connected components of $H$ are unicyclic and at least one of them is even-unicyclic.
\item All connected components of $H$ are odd-unicyclic.
\end{enumerate}
\end{proposition}
\begin{proof}
Suppose $H$ has $t \geq 2$ connected components $H_i$ with $m_i$ edges and $n_i$ vertices, where the first $k$ of them have at least one cycle and the rest are trees. For $i= 1,\ldots, k$, $H_i$ has $m_i \geq n_i$. Note that
\begin{equation}\label{treenumber_m=n}
0= m - n = \sum_{i=1}^{t} (m_i - n_i) = \sum_{i=1}^{k} (m_i - n_i) + \sum_{i=k+1}^{t} (m_i - n_i)
\end{equation}
Since $H_i$ has a cycle for $i =1,\ldots,k$ and $H_i$ is a tree for $i = k+1,\ldots,t$,
\[
\ell := \sum_{i=1}^{k} (m_i - n_i) \geq 0,
\]
and
\[
\sum_{i=k+1}^{t} (m_i - n_i) = -(t - k).
\]
Then $t-k = \ell$ by (\ref{treenumber_m=n}). If $H$ has a connected component which is a tree, we are in Case 1. Otherwise $t-k =0$, which implies $\ell = \sum_{i=1}^{k} (m_i - n_i)=0$. Then $m_i= n_i$ for $i=1,2,\ldots,k$, i.e., all connected components of $H$ are unicyclic. If one of the unicyclic components is even-unicyclic, we are in Case 2. Otherwise all connected components of $H$ are odd-unicyclic, which is Case 3. Finally, if $H$ is connected, then it is unicyclic and consequently we are in Case 2 or 3.
\end{proof}
\begin{lemma}\label{tree_and_one_more_edge}
Let $H$ be a graph on $n$ vertices and $n$ edges with incidence matrix $N$. If $H$ has a connected component which is a tree and an edge which is not on the tree, then $\det(N) = 0$.
\end{lemma}
\begin{proof}
Let $H$ have a connected component $T$ which is a tree and an edge $e_j$ which is not on $T$. If $T$ consists of just one vertex, say $i$, then row $i$ of $N$ is a zero row giving $\det(N)=0$. Suppose $T$ has at least two vertices.
Now consider the square submatrix $N'$ of $N$ with rows corresponding to the vertices of $T$ and columns corresponding to the edges of $T$ together with $e_j$. Then the column of $N'$ corresponding to $e_j$ is a zero column, giving $\det(N')=0$. Since the entries in the rows of $N$ corresponding to $T$ that lie outside of $N'$ are zero, the rows of $N$ corresponding to $T$ are linearly dependent and consequently $\det(N)=0$.
\end{proof}
\begin{theorem}\label{oddunicycle}
Let $G$ be a simple graph on $n$ vertices and $m\geq n$ edges with the incidence matrix $N$. Let $S$ be an $n$-subset of $\{1,2,\ldots,m\}$ and $H$ be a spanning subgraph of $G$ with edges indexed by $S$. Then one of the following is true for $H$:
\begin{enumerate}
\item $H$ has a connected component which is a tree. Then $\det(N(;S])=0$.
\item All connected components of $H$ are unicyclic and at least one of them is even-unicyclic. Then $\det(N(;S])=0$.
\item $H$ has $k$ connected components which are all odd-unicyclic. Then $\det(N(;S])=\pm 2^k$.
\end{enumerate}
\end{theorem}
\begin{proof}
$\;$
\begin{enumerate}
\item Suppose $H$ has a connected component which is a tree. Since $H$ has $n$ edges, $H$ has an edge not on the tree. Then $\det(N(;S])=0$ by Lemma \ref{tree_and_one_more_edge}.
\item Suppose all connected components of $H$ are unicyclic and at least one of them is even-unicyclic. Since $N(;S]$ is a direct sum (up to a permutation of rows and columns) of incidence matrices of unicyclic graphs, at least one of which is even-unicyclic, $\det(N(;S])=0$ by Lemma \ref{incidencedetoddcyclic}.
\item Suppose $H$ has $k$ connected components which are all odd-unicyclic. Since $N(;S]$ is a direct sum (up to a permutation of rows and columns) of incidence matrices of $k$ odd-unicyclic graphs, $\det(N(;S])=(\pm 2)^k=\pm 2^k$ by Lemma \ref{incidencedetoddcyclic}.
\end{enumerate}
\end{proof}
By Theorems \ref{CB} and \ref{oddunicycle}, we have the following theorem.
\begin{theorem}\label{detQ}
Let $G$ be a simple graph with signless Laplacian matrix $Q$. Then
$$\det(Q)=\sum_{H} 4^{c(H)},$$
where the summation runs over all spanning subgraphs $H$ of $G$ all of whose connected components are odd-unicyclic.
\end{theorem}
\begin{proof}
By Theorem \ref{CB} and Observation \ref{CB3},
$$\det(Q)=\det(NN^T)=\sum_{S} \det(N(;S])^2,$$
where the summation runs over all $n$-subsets $S$ of $\{1,2,\ldots,m\}$. By Theorem \ref{oddunicycle}, we have
$$\det(Q)=\sum_{S} \det(N(;S])^2=\sum_{H} (\pm 2^{c(H)})^2=\sum_{H} 4^{c(H)},$$
where the summation runs over all spanning subgraphs $H$ of $G$ all of whose connected components are odd-unicyclic.
\end{proof}
Let $ous(G)$ denote the number of spanning subgraphs $H$ of a graph $G$ each of whose connected components is an odd-unicyclic graph. So $ous(G)$ is the number of $TU$-subgraphs of $G$ all of whose connected components are odd-unicyclic. Note that $c(H)\geq 1$ for every such subgraph $H$. By Theorem \ref{detQ}, we have an upper bound for $ous(G)$.
\begin{corollary}
Let $G$ be a simple graph with signless Laplacian matrix $Q$. Then $\det(Q)\geq 4ous(G)$.
\end{corollary}
For example, if $G$ is a bipartite graph, then $\frac{\det(Q)}{4}=0=ous(G)$. If $G$ is an odd-unicyclic graph, then $\frac{\det(Q)}{4}=1=ous(G)$.\\
Note that by appending edges of $G$ to an odd cycle we get at least one $TU$-subgraph of $G$ with a unique odd-unicyclic connected component. Let $oc(G)$ denote the number of odd cycles in a graph $G$. Then $oc(G)\leq ous(G)$, where the equality holds if and only if $G$ is a bipartite graph or an odd-unicyclic graph. This yields the following corollary.
\begin{corollary}\label{numberofoddcycles}
Let $G$ be a simple graph with signless Laplacian matrix $Q$. Then $\frac{\det(Q)}{4}\geq oc(G)$, the number of odd cycles in $G$, where the equality holds if and only if $G$ is a bipartite graph or an odd-unicyclic graph.
\end{corollary}
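Two small illustrative checks of the corollary (ours, not from the paper): the paw is odd-unicyclic with $oc(G)=1$ and $\det(Q)=4$, so equality holds, while the bipartite $4$-cycle has $oc(G)=0$ and $\det(Q)=0$:

```python
from fractions import Fraction

def det(M):
    """Exact determinant via Gaussian elimination over the rationals."""
    A = [[Fraction(x) for x in row] for row in M]
    n, sign = len(A), 1
    for k in range(n):
        piv = next((i for i in range(k, n) if A[i][k] != 0), None)
        if piv is None:
            return 0
        if piv != k:
            A[k], A[piv] = A[piv], A[k]
            sign = -sign
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            A[i] = [a - f * b for a, b in zip(A[i], A[k])]
    p = Fraction(sign)
    for k in range(n):
        p *= A[k][k]
    return int(p)

# Paw (Figure \ref{paw}): odd-unicyclic, oc(G) = 1, and det(Q)/4 = 1 (equality case).
Q_paw = [[1, 1, 0, 0], [1, 3, 1, 1], [0, 1, 2, 1], [0, 1, 1, 2]]
assert det(Q_paw) == 4 == 4 * 1

# 4-cycle: bipartite, oc(G) = 0, and det(Q) = 0.
Q_c4 = [[2, 1, 0, 1], [1, 2, 1, 0], [0, 1, 2, 1], [1, 0, 1, 2]]
assert det(Q_c4) == 0
```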
\section{Open Problems}
In this section we pose some problems related to results in Sections 2 and 3. First recall Corollary \ref{numberofoddcycles} which gives a linear algebraic sharp upper bound for the number of odd cycles in a graph. So an immediate question would be the following:
\begin{question}
Find a linear algebraic (sharp) upper bound of the number of even cycles in a simple graph.
\end{question}
To answer this, one may wish to apply the Cauchy-Binet formula as done in Sections 2 and 3. Then a special $n\times m$ matrix $R$ would be required with the following properties:
\begin{enumerate}
\item $RR^T$ equals a fixed matrix associated with the given graph $G$.
\item If $G$ is an even (resp. odd) cycle, then $\det(R)$ is $\pm c$ (resp. zero) for some fixed nonzero number $c$.
\end{enumerate}
For other open questions consider a simple connected graph $G$ on $n$ vertices and $m\geq n$ edges with signless Laplacian matrix $Q$. The characteristic polynomial of $Q$ is
$$P_Q(x)=\det(x I_n-Q)=x^n+\sum_{i=1}^n a_i x^{n-i}.$$
It is not hard to see that $a_1=-2m$ and $a_2=2m^2-m-\frac{1}{2}\sum_{i=1}^nd_i^2$ where $(d_1,d_2,\ldots,d_n)$ is the degree-sequence of $G$.
Theorem 4.4 in \cite{CRC} provides a broad combinatorial interpretation for $a_i$, $i=1,2,\ldots,n$. A combinatorial expression for $a_3$ is obtained in \cite[Thm 2.6]{WHAB} by using mainly Theorem 4.4 in \cite{CRC}. Note that
$$a_3=(-1)^3\displaystyle\sum_{1\leq i_1<i_2<i_3\leq n} \det(Q[i_1,i_2,i_3]).$$
So it may not be difficult to find a corresponding combinatorial interpretation of $\det(Q[i_1,i_2,i_3])$ in terms of subgraphs on three edges. Similarly we can investigate other coefficients and the corresponding minors, which we essentially did for $a_{n}$ and $a_{n-1}$ in Sections 3 and 2, respectively. So the next coefficient to study is $a_{n-2}$, which entails the following question:
\begin{question}
Find a combinatorial expression or a lower bound for $\det(Q(i_1,i_2))$.
\end{question}
By the Cauchy-Binet formula,
$$\det(Q(i_1,i_2))=\sum_{S} \det(N(i_1,i_2;S])^2,$$
where the summation runs over all $(n-2)$-subsets $S$ of the edge set $\{1,2,\ldots,m\}$. So it comes down to finding a combinatorial interpretation of $\det(N(i_1,i_2;S])$.
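For intuition about the size of these summands, here is an illustrative check (ours, not from the paper) of the displayed Cauchy-Binet identity for the paw graph of Figure \ref{paw} with $i_1, i_2$ taken to be vertices $1$ and $2$:

```python
from fractions import Fraction
from itertools import combinations

def det(M):
    """Exact determinant via Gaussian elimination over the rationals."""
    A = [[Fraction(x) for x in row] for row in M]
    n, sign = len(A), 1
    for k in range(n):
        piv = next((i for i in range(k, n) if A[i][k] != 0), None)
        if piv is None:
            return 0
        if piv != k:
            A[k], A[piv] = A[piv], A[k]
            sign = -sign
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            A[i] = [a - f * b for a, b in zip(A[i], A[k])]
    p = Fraction(sign)
    for k in range(n):
        p *= A[k][k]
    return int(p)

# Paw incidence matrix (rows = vertices 1..4, columns = e1..e4) and Q = N N^T.
N = [[1, 0, 0, 0], [1, 1, 0, 1], [0, 1, 1, 0], [0, 0, 1, 1]]
Q = [[sum(N[i][k] * N[j][k] for k in range(4)) for j in range(4)]
     for i in range(4)]

drop = (0, 1)  # vertices i1 = 1, i2 = 2 (0-indexed)
Q12 = [[Q[r][c] for c in range(4) if c not in drop]
       for r in range(4) if r not in drop]
N12 = [row for r, row in enumerate(N) if r not in drop]  # N(i1,i2;)
cb = sum(det([[row[j] for j in S] for row in N12]) ** 2
         for S in combinations(range(4), 2))
assert det(Q12) == cb == 3
```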
\vspace*{24pt}
\bigskip
\begin{center}
{\bf Optimal stretching for lattice points and eigenvalues}\\
\texttt{https://arxiv.org/abs/1609.06172}
\end{center}
\noindent{\it Abstract.} We aim to maximize the number of first-quadrant lattice points in a convex domain with respect to reciprocal stretching in the coordinate directions. The optimal domain is shown to be asymptotically balanced, meaning that the stretch factor approaches $1$ as the ``radius'' approaches infinity. In particular, the result implies that among all $p$-ellipses (or Lam\'e curves), the $p$-circle encloses the most first-quadrant lattice points as the radius approaches infinity. The case $p=2$ corresponds to minimization of high eigenvalues of the Dirichlet Laplacian on rectangles, and so our work generalizes a result of Antunes and Freitas. Similarly, we generalize a Neumann eigenvalue maximization result of van den Berg, Bucur and Gittins. The case $p=1$ remains open: which right triangles in the first quadrant with two sides along the axes will enclose the most lattice points, as the area tends to infinity?
\section{\bf Introduction}
Among ellipses of given area centered at the origin and symmetric about both axes, which one encloses the most integer lattice points in the open first quadrant? One might guess the optimal ellipse would be circular, but a non-circular ellipse can enclose more lattice points, as shown in \autoref{fig:counterexample}. Nonetheless, optimal ellipses must become more and more circular as the area increases to infinity, by a striking result of Antunes and Freitas \cite{AF13}.
To formulate the problem more precisely, consider the number of positive-integer lattice points lying in the elliptical region
\[
\Big( \frac{x}{s^{-1}} \Big)^{\! 2} + \Big( \frac{y}{s} \Big)^{\! 2} \leq r^2 ,
\]
where the ellipse has ``radius'' $r>0$ and semiaxes proportional to $s^{-1}$ and $s$. Notice that the area $\pi r^2$ of the ellipse is independent of the ``stretch factor'' $s$. Denote by $s=s(r)$ a value (not necessarily unique) of the stretch factor that maximizes the lattice point count. The theorem of Antunes and Freitas says $s(r) \to 1$ as $r \to \infty$, as illustrated in \autoref{fig:p2}. In other words, optimal ellipses become circular in the infinite limit. (Their theorem was stated differently, in terms of minimizing the $n$-th eigenvalue of the Dirichlet Laplacian on rectangles, with the square being asymptotically minimal. \autoref{sec:relation} explains the connection.) The analogous result for optimal ellipsoids becoming asymptotically spherical was proved recently in three dimensions by van den Berg and Gittins \cite{BBG16b} and in higher dimensions by Gittins and Larson \cite{GL17}, once again in the eigenvalue formulation.
\begin{figure}
\includegraphics[scale=0.35]{p=2_lattice.png}
\caption{\label{fig:counterexample}Circle $s=1$ and ellipse $s=1.15$, for radius $r=4.96$. The ellipse encloses three more points than the circle, as shown in bold.}
\end{figure}
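The count in \autoref{fig:counterexample} is easy to reproduce. The following sketch (ours, not from the paper; plain float truncation is adequate at this radius since no lattice point lies on either boundary curve) counts first-quadrant lattice points inside the stretched ellipse:

```python
import math

def count(s, r):
    """Number of positive-integer points (x, y) with (s*x)**2 + (y/s)**2 <= r**2."""
    total, x = 0, 1
    while (s * x) ** 2 <= r ** 2:
        # largest admissible y for this x is floor(s * sqrt(r^2 - (s x)^2))
        total += int(s * math.sqrt(r ** 2 - (s * x) ** 2))
        x += 1
    return total

r = 4.96
assert count(1.0, r) == 13    # circle, s = 1
assert count(1.15, r) == 16   # equal-area ellipse, s = 1.15: three more points
```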
This paper extends the result of Antunes and Freitas from circles to essentially arbitrary concave curves in the first quadrant that decrease between the intercept points $(0,1)$ and $(1,0)$. The ``ellipses'' in this situation are the images of the concave curve under rescaling by $s^{-1}$ and $s$ in the horizontal and vertical directions, respectively. \autoref{thm:s_bounded} says the maximizing $s(r)$ is bounded. \autoref{th:S_limit} shows under a mild monotonicity hypothesis on the second derivative of the curve that $s(r) \to 1$ as $r \to \infty$. Thus the most ``balanced'' curve in the family will enclose the most lattice points in the limit.
Marshall \cite{marshall} recently extended this result to higher dimensions by somewhat different methods. We have also generalized to translated lattices in two dimensions \cite{Lau_Liu2}.
\begin{figure}
\includegraphics[scale=0.45]{shift_optimal_p_2.png}
\caption{\label{fig:p2}Optimal $s$-values for maximizing the number of lattice points in the $2$-ellipse. The graph plots the largest value $s(r)$ versus $\log r$. The plotted $r$-values are multiples of $\sqrt{3}/10$, an irrational number chosen in the hope of exhibiting generic behavior. The horizontal axis is at height $s=1$.}
\end{figure}
\autoref{th:S_limit_general} allows the curvature to blow up or degenerate at the intercept points, which permits us to treat the family of $p$-ellipses for $1<p<\infty$. In each case the $p$-circle is asymptotically optimal for the lattice counting problem in the first quadrant. The case $p=1$ is an open problem. Our numerical evidence in \autoref{sec:oneellipse} suggests that the first-quadrant right triangle enclosing the most lattice points does \emph{not} necessarily approach a 45--45--90 degree triangle as $r \to \infty$. Instead one seems to get an infinite limit set of optimal triangles. See \autoref{conj:p_1}, where we describe the recent proof by Marshall and Steinerberger \cite{marshall_steinerberger}.
If one counts lattice points in the \emph{closed} first quadrant, that is, counting points on the axes as well, then the results reverse direction from maximization to minimization of the lattice count. \autoref{th:S_limit_neumann} shows that the value $s=s(r)$ minimizing the number of enclosed lattice points will tend to $1$ as $r \to \infty$. In the case of circles and ellipses, this result was obtained recently by van den Berg, Bucur and Gittins \cite{BBG16a} (and in higher dimensions by Gittins and Larson \cite{GL17}, generalized by Marshall \cite{marshall}). As explained in \autoref{sec:relation}, they showed that the maximizing rectangle for the $n$-th eigenvalue of the Neumann Laplacian must approach a square as $n \to \infty$.
This paper builds on the framework of Antunes and Freitas for ellipses, with new ingredients introduced to handle general concave curves. First we develop a new non-sharp bound on the counting function (\autoref{prop:counting_ineq}) in order to control the stretch factor $s(r)$. Then we prove more precise lattice counting estimates (\autoref{th:asymptotic}) of Kr\"{a}tzel type, relying on a theorem of van der Corput (\autoref{app-exponential}).
Convex decreasing curves in the first quadrant, such as $p$-ellipses with $0<p<1$, have been treated by Ariturk and Laugesen \cite{AL17} by building on this paper's results.
\subsection*{Spectral motivations and results}
This paper is inspired by recent efforts to understand the behavior of high eigenvalues of the Laplacian. Write $\lambda_n$ for the $n$-th eigenvalue of the Dirichlet Laplacian on a bounded domain $\Omega$ of area $1$ in the plane. (We restrict to $2$ dimensions for simplicity.) Denote the minimum value of this eigenvalue over all such domains by $\lambda_n^*$, and suppose it is achieved on a domain $\Omega_n^*$. What can one say about the shape of this minimizing domain?
For the first eigenvalue, the minimizing domain $\Omega_1^*$ is a disk, by the Faber--Krahn inequality. For the second eigenvalue, $\Omega_2^*$ is a union of two disjoint equal-area disks, as Krahn and later P. Szego showed. A long-standing conjecture says $\Omega_3^*$ should be a disk and $\Omega_4^*$ should be a union of disjoint non-equal-area disks. For higher eigenvalues ($n \geq 5$), minimizing domains found numerically do not have recognizable shapes; see \cite{AF12,Oud04} and references therein. Antunes and Freitas remark, though, that the ``most natural guess'' is $\Omega_n^*$ approaches a disk as $n \to \infty$, which is known to occur if the area normalization is strengthened to a perimeter normalization \cite{AF16,BF13}. This conjecture would imply the famous P\'olya conjecture $\lambda_n \geq 4\pi n /|\Omega|$, as Colbois and El Soufi \cite[Corollary~2.2]{colbois_soufi} showed using subadditivity of $n \mapsto \lambda_n^*$.
A partial result of Freitas \cite{f16} succeeds in determining the leading order asymptotic as $n \to \infty$ of the minimum value of the eigenvalue sum $\lambda_1+\dots+\lambda_n$ (rather than of $\lambda_n$ itself). This result provides no information on the shapes of the minimizing domains. Larson \cite{Larson} shows among convex domains that the disk asymptotically maximizes the Riesz means of the Laplace eigenvalues, for Riesz exponents $\geq 3/2$. If this Riesz exponent could be lowered to $0$, giving asymptotic maximality of the disk for the eigenvalue counting function, then one would obtain the desired conjecture about the eigenvalue minimizing domain $\Omega_n^*$.
A complete resolution for rectangular domains was found by Antunes and Freitas \cite{AF13}, using lattice counting methods as explained in \autoref{sec:relation}. They proved that the minimizing domain for $\lambda_n$ among rectangles approaches a square as $n \to \infty$. Similarly, the cube is asymptotically minimal in $3$ and higher dimensions \cite{BBG16b,GL17}.
\vspace*{-5pt}
\subsection*{Open problem for the harmonic oscillator}
Asymptotic optimality of the square for minimizing Dirichlet eigenvalues of the Laplacian on rectangles suggests an analogous open problem for harmonic oscillators. Consider the Schr\"odinger operator in $2$ dimensions with parabolic potential $(sx)^2 + (y/s)^2$, where $s> 0$ is a parameter. Write $s(n)$ for a parameter value that minimizes the $n$-th eigenvalue of this operator. What is the limiting behavior of $s(n)$ as $n \to \infty$?
The results on rectangular domains (which may be regarded as infinite potential wells) might suggest $s(n) \to 1$, but we believe that is not the case. Instead, $s(n)$ might cluster around infinitely many values as $n \to \infty$. Indeed, after rescaling, the Schr\"odinger operator has eigenvalues of the form $s(j-1/2)+(k-1/2)/s$, which leads to a lattice point counting problem inside right triangles, as in \autoref{sec:oneellipse} for $p = 1$, except now the lattices are shifted by $1/2$ to the left and downwards. For the unshifted lattice, numerical work in \autoref{sec:oneellipse} suggests that the optimal stretching parameter $s$ does not converge to $1$, and instead has many cluster points as $r \to \infty$. Recent investigations \cite[Section~10]{Lau_Liu2} suggest that this clustering phenomenon persists for shifted lattices and hence for the harmonic oscillator.
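The shifted-lattice count described here is easy to explore numerically. The sketch below is an illustration only (the energy threshold and the $s$-grid are chosen arbitrarily by us): it counts oscillator eigenvalues $s(j-1/2)+(k-1/2)/s \leq E$ with $j,k \geq 1$, then scans a grid of stretch factors for the count-maximizing value.

```python
def oscillator_count(E, s):
    """Count eigenvalues s*(j - 1/2) + (k - 1/2)/s <= E with j, k >= 1,
    i.e. shifted-lattice points under the line s*x + y/s = E."""
    count, j = 0, 1
    while s * (j - 0.5) <= E:
        # number of k >= 1 with (k - 1/2)/s <= E - s*(j - 1/2)
        count += int(s * (E - s * (j - 0.5)) + 0.5)
        j += 1
    return count

# Scan a grid of stretch factors for a fixed (arbitrary) energy threshold.
grid = [0.50 + 0.01 * i for i in range(151)]      # s from 0.5 to 2.0
best_s = max(grid, key=lambda s: oscillator_count(20.0, s))
```

Repeating the scan for increasing thresholds $E$ is one way to observe numerically whether the maximizing $s$ settles down or keeps moving among several cluster values.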
\section{\bf Assumptions and definitions}
\label{sec:assumptions}
The first quadrant is the \emph{open} set $\{ (x,y) : x,y>0 \}$.
Assume throughout the paper that $\Gamma$ is a concave, strictly decreasing curve in the first quadrant. Our theorems assume the $x$- and $y$-intercepts of the curve are equal, occurring at $x=L$ and $y=L$ respectively, as shown in \autoref{fig:gammafig}. Write $\operatorname{Area}(\Gamma)$ for the area enclosed by the curve $\Gamma$ and the $x$- and $y$-axes.
\begin{figure}
\includegraphics[scale=0.35]{gammafig.png}
\caption{\label{fig:gammafig}A concave decreasing curve $\Gamma$ in the first quadrant, with intercepts at $L$. The point $(\alpha,\beta)$ on the curve is relevant to \autoref{th:S_limit}.}
\end{figure}
We represent the curve $\Gamma$ by $y=f(x)$ for $x\in [0,L]$, so that $f$ is a concave strictly decreasing function, and of course $f$ is continuous. In particular
\[
L=f(0)>f(x)>f(L)=0
\]
whenever $x\in (0,L)$. Denote the inverse function of $f(x)$ by $g(y)$ for $y \in[0,L]$, so that $g$ also is concave and strictly decreasing.
We define a rescaling of the curve by parameter $r>0$:
\begin{align*}
r\Gamma
& = \text{image of $\Gamma$ under the radial scaling $\left(\begin{smallmatrix}
r & 0 \\
0 & r
\end{smallmatrix}\right)$} \\
& = \text{graph of $rf({x/r})$,}
\end{align*}
and define an area-preserving stretch of the curve by:
\begin{align*}
\Gamma(s)
& = \text{image of $\Gamma$ under the diagonal scaling $\left(\begin{smallmatrix}
s^{-1} & 0\\
0 & s
\end{smallmatrix}\right)$} \\
& = \text{graph of $s f({sx})$,}
\end{align*}
where $s>0$. In other words, $\Gamma(s)$ is obtained from $\Gamma$ after compressing the $x$-direction by $s$ and stretching the $y$-direction by $s$. Define the counting function for
$r\Gamma(s)$ by
\begin{align*}
N(r,s) &=\text{number of positive-integer lattice points lying inside or on $r\Gamma(s)$ } \\
&=\# \big\{ (j,k)\in\mathbb{N} \times \mathbb{N}:k\leq rsf(js/r) \big\}.
\end{align*}
For each $r>0$, consider the set
\[
S(r) = \argmax_{s >0} N(r,s)
\]
consisting of the $s$-values that maximize the number of first-quadrant lattice points enclosed by the curve $r\Gamma(s)$. The set $S(r)$ is well-defined because the maximum is indeed attained, as the following argument shows. The curve $r\Gamma(s)$ has $x$-intercept at $rs^{-1}L$, which is less than $1$ if $s>rL$ and so in that case the curve encloses no positive-integer lattice points. Similarly if $s<(rL)^{-1}$, then $r\Gamma(s)$ has height less than $1$ and contains no lattice points in the first quadrant. Thus for each fixed $r>0$, if $s$ is sufficiently small or sufficiently large then the counting function $N(r,s)$ equals zero, while obviously for intermediate values of $s$ the integer-valued function $s \mapsto N(r,s)$ is bounded. Hence $N(r,s)$ attains its maximum at some $s>0$.
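These definitions can be made concrete with a short brute-force computation. The following sketch is illustrative only (the tolerance `eps` is our choice, guarding against points that lie exactly on the curve being lost to floating-point roundoff): it evaluates $N(r,s)$ for a given profile $f$ and scans a finite grid of $s$-values for the maximizers.

```python
import math

def count(r, s, f, L):
    """N(r, s): positive-integer points (j, k) with k <= r*s*f(j*s/r)."""
    eps = 1e-9          # tolerance for points lying exactly on the curve
    total, j = 0, 1
    while j * s / r <= L + eps:
        total += math.floor(r * s * f(min(j * s / r, L)) + eps)
        j += 1
    return total

def S_approx(r, f, L, grid):
    """Approximate the maximizing set S(r) over a finite grid of s-values."""
    counts = {s: count(r, s, f, L) for s in grid}
    best = max(counts.values())
    return [s for s in grid if counts[s] == best]

# Quarter circle f(x) = sqrt(1 - x^2), intercept L = 1: N(5, 1) counts
# the points with j^2 + k^2 <= 25 and j, k >= 1.
f = lambda x: math.sqrt(max(0.0, 1.0 - x * x))
print(count(5.0, 1.0, f, 1.0))    # → 15
```

A grid scan of course only approximates $S(r)$; since $s \mapsto N(r,s)$ is integer-valued and piecewise constant, several grid points typically tie for the maximum.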
For later reference, we write down this bound on optimal $s$-values.
\begin{lemma}[$r$-dependent bound on optimal stretch factors]\label{lemma:bound}
If $\Gamma$ is a concave, strictly decreasing curve in the first quadrant with equal intercepts (as in \autoref{fig:gammafig}), then
\[
S(r)\subset \big[(rL)^{-1}, rL\big] \qquad \text{whenever $r \geq 2/L$.}
\]
\end{lemma}
\begin{proof}
The curve $r\Gamma(1)$ has horizontal and vertical intercepts at $rL\geq 2$. Hence by concavity, $r\Gamma(1)$ encloses the point $(1,1)$, and so the counting function $s \mapsto N(r,s)$ is greater than zero when $s=1$. On the other hand when $s< (rL)^{-1}$ or $s> rL$, we know $N(r,s)=0$ by the paragraph before the lemma. Thus the maximum can only be attained when $s$ lies in the interval $\big[(rL)^{-1}, rL\big]$.
\end{proof}
\section{\bf Main results}
\label{sec:mainresults}
The curve $\Gamma$ has $x$- and $y$-intercepts at $L$ in the theorems that follow; see \autoref{fig:gammafig}. We start by improving \autoref{lemma:bound} to show the maximizing set $S(r)$ is bounded, and the bounds can be evaluated explicitly in the limit as $r \to \infty$.
\begin{theorem}[Uniform bound on optimal stretch factors] \label{thm:s_bounded}
If $\Gamma$ is a concave, strictly decreasing curve in the first quadrant then
\[
S(r) \subset [s_1,s_2] \qquad \text{for all $r \geq 2/L$,}
\]
for some constants $s_1,s_2>0$. Furthermore, given $\varepsilon>0$,
\[
S(r) \subset \big[\frac{1}{4+\varepsilon},4+\varepsilon\big] \qquad \text{for all large $r$.}
\]
\end{theorem}
The proof appears in \autoref{sec:boundedness}.
If the concave decreasing curve is smooth with monotonic second derivative, then, in addition to being bounded above and below, the maximizing set $S(r)$ converges to $\{ 1 \}$, as the next theorem shows. Recall that $g$ is the inverse function of $f$.
\begin{theorem}[Optimal concave curve is asymptotically balanced]\label{th:S_limit}
Assume $(\alpha,\beta) \in \Gamma$ is a point in the first quadrant such that $f \in C^2[0, \alpha]$ with $f'<0$ on $(0,\alpha]$ and $f'' < 0$ on $[0, \alpha]$, and similarly $g \in C^2[0, \beta]$ with $g'<0$ on $(0,\beta]$ and $g'' < 0$ on $[0, \beta]$. Further suppose $f''$ is monotonic on $[0,\alpha]$ and $g''$ is monotonic on $[0,\beta]$.
Then the optimal stretch factor for maximizing $N(r,s)$ approaches $1$ as $r$ tends to infinity, with
\[
S(r) \subset \big[ 1-O(r^{-1/6}),1+O(r^{-1/6}) \big] ,
\]
and the maximal lattice count has asymptotic formula
\[
\max_{s > 0} N(r,s) = r^2\operatorname{Area} (\Gamma)-rL + O(r^{2/3}) .
\]
\end{theorem}
The theorem is proved in \autoref{sec:mainproof}. Slight improvements to the decay rate $O(r^{-1/6})$ and the error term $O(r^{2/3})$ are possible, as explained after \autoref{th:asymptotic}.
\subsection*{More general curves for lattice counting}
We want to weaken the smoothness and monotonicity assumptions in \autoref{th:S_limit}. We start with a definition of piecewise smoothness.
\begin{definition}[$PC^2$]\
(i) We say a function $f$ is piecewise $C^2$-smooth on a half-open interval $(0,\alpha]$ if $f$ is continuous and a partition $0=\alpha_0<\alpha_1 < \dots < \alpha_l = \alpha$ exists such that $f \in C^2(0,\alpha_1]$ and $f \in C^2[\alpha_{i-1},\alpha_i]$ for $i=2,\dots,l$. Write $PC^2(0,\alpha]$ for the class of such functions.
(ii) Write $f^\prime<0$ to mean that $f^\prime$ is negative on the subintervals $(0,\alpha_1]$ and $[\alpha_{i-1},\alpha_i]$ for $i=2,\dots,l$, with the derivative being taken in the one-sided senses at the partition points $\alpha_1,\dots,\alpha_l$. The meaning of $f^{\prime \prime}<0$ is analogous.
(iii) We label partition points using the same letter as for the right endpoint. In particular, the partition for $g \in PC^2(0,\beta]$ is $0=\beta_0<\dots<\beta_\ell=\beta$.
\end{definition}
For the next theorem, take a point $(\alpha ,\beta) \in \Gamma$ lying in the first quadrant and suppose we have numbers $a_1,a_2,b_1,b_2>0$ and positive valued functions $\delta(r)$ and $\epsilon(r)$ such that as $r \to \infty$:
\begin{align}
\delta(r) = O(r^{-2a_1}) , \qquad & f''\big(\delta(r)\big)^{-1} = O(r^{1-4a_2}) , \label{eq:f_sup}\\
\epsilon(r) = O(r^{-2b_1}) , \qquad & g''\big(\epsilon(r)\big)^{-1} = O(r^{1-4b_2}) . \label{eq:g_sup}
\end{align}
(The second condition in \autoref{eq:f_sup} says that $f''(x)$ cannot be too small as $x \to 0$.)
Let
\[
e=\min \{ \tfrac{1}{6},a_1,a_2,b_1,b_2 \} .
\]
Now we extend \autoref{th:S_limit} to a larger class of concave decreasing curves.
\begin{theorem}[Optimal concave curve is asymptotically balanced]\label{th:S_limit_general}\
\noindent Assume $f\in PC^2(0,\alpha]$ with $f'<0$ and $f''<0$, and $f^{\prime \prime}$ is monotonic on each subinterval of the partition. Similarly assume $g\in PC^2(0,\beta]$ with $g'<0$ and $g''<0$, and $g^{\prime \prime}$ is monotonic on each subinterval of the partition. Suppose the positive functions $\delta(r)$ and $\epsilon(r)$ satisfy conditions \autoref{eq:f_sup} and \autoref{eq:g_sup}.
Then the optimal stretch factor for maximizing $N(r,s)$ approaches $1$ as $r$ tends to infinity, with
\[
S(r) \subset \big[ 1-O(r^{-e}),1+O(r^{-e}) \big] ,
\]
and the maximal lattice count has asymptotic formula
\[
\max_{s > 0} N(r,s) = r^2\operatorname{Area} (\Gamma)-rL + O(r^{1-2e}) .
\]
\end{theorem}
The proof is presented in \autoref{sec:generalproof}.
\begin{example}[Optimal $p$-ellipses for lattice point counting]\rm \label{ex:p-ellipse}
Fix $1<p<\infty$, and consider the $p$-circle
\[
\Gamma : |x|^p+|y|^p=1 ,
\]
which has intercept $L=1$. That is, the $p$-circle is the unit circle for the $\ell^p$-norm on the plane. Then the $p$-ellipse
\[
r\Gamma(s) : |sx|^p + |s^{-1}y|^p \leq r^p
\]
has first-quadrant counting function
\[
N(r,s) = \# \{ (j,k) \in {\mathbb N} \times {\mathbb N} : (js)^p+(ks^{-1})^p \leq r^p \} .
\]
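As an illustration, this counting function is straightforward to evaluate directly. The following sketch (with a small tolerance for boundary points, chosen by us) reproduces the counts behind \autoref{fig:counterexample}: at $r=4.96$ the ellipse with $s=1.15$ encloses three more lattice points than the circle $s=1$.

```python
def N_p(r, s, p):
    """First-quadrant lattice count for the p-ellipse (j*s)^p + (k/s)^p <= r^p."""
    eps = 1e-9          # tolerance for points lying exactly on the curve
    count, j = 0, 1
    while (j * s) ** p <= r ** p + eps:
        rem = max(r ** p - (j * s) ** p, 0.0)
        count += int(s * rem ** (1.0 / p) + eps)   # largest k with (k/s)^p <= rem
        j += 1
    return count

print(N_p(4.96, 1.0, 2), N_p(4.96, 1.15, 2))       # → 13 16
```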
We will show that the $p$-ellipse containing the maximum number of positive-integer lattice points must approach a $p$-circle in the limit as $r \to \infty$, with
\[
S(r) \subset [1-O(r^{-e}),1+O(r^{-e})]
\]
where $e=\min \{ \tfrac{1}{6},\tfrac{1}{2p} \}$.
\autoref{th:S_limit} fails to apply to $p$-ellipses when $1<p <2$, because the second derivative of the curve is not monotonic (see $f^{\prime \prime}(x)$ below), and the theorem fails to apply when $2<p<\infty$ because $f^{\prime \prime}(0)=0$ in that case. Instead we will apply \autoref{th:S_limit_general}.
To verify that the $p$-circle satisfies the hypotheses of \autoref{th:S_limit_general}, we let $\alpha=\beta=2^{-1/p}$ and choose
\[
\delta(r)=r^{-1/p} , \qquad \epsilon(r)=r^{-1/p} ,
\]
for all large $r$. Then $\delta(r)=r^{-2a_1}$ with $a_1=1/2p$. Next,
\begin{align*}
f(x) & = (1-x^p)^{1/p} , \\
f'(x) & = -x^{p-1} (1-x^p)^{-1+1/p} , \\
f''(x) & =-(p-1)x^{p-2}(1-x^p)^{-2+1/p} ,
\end{align*}
so that
\[
\big| f''\big(\delta(r)\big) \big|^{-1} \leq (\text{const.}) r^{1-2/p} ,
\]
and hence $a_2=1/2p$ in \autoref{eq:f_sup}. Thus $f$ satisfies hypothesis \autoref{eq:f_sup}.
Further, the interval $(0,\alpha)$ can be partitioned into subintervals on which $f''$ is monotonic, because the third derivative
\[
f'''(x) = -(p-1) x^{p-3} (1 - x^p)^{-3 + 1/p} \big( (1+p)x^p + p-2 \big)
\]
vanishes at most once in the unit interval.
The calculations are the same for $g$, and so the desired conclusion for $p$-ellipses follows from \autoref{th:S_limit_general} when $1<p<\infty$.
For $p=\infty$, the $\infty$-circle is a Euclidean square and the $\infty$-ellipse is a rectangle. Many different rectangles of given area can contain the same number of lattice points. For example, a $4 \times 1$ rectangle and $2 \times 2$ square each contain $4$ lattice points in the first quadrant. All such matters can be handled by the explicit formula $N(r,s)=\lfloor rs^{-1} \rfloor \lfloor rs \rfloor$ for the counting function when $p=\infty$.
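The closed-form counting function for $p=\infty$ can be checked in two lines against the $4 \times 1$ versus $2 \times 2$ example above (an illustrative check, nothing more):

```python
import math

def N_inf(r, s):
    """Counting function for the rectangle [0, r/s] x [0, rs]: the p = infinity case."""
    return math.floor(r / s) * math.floor(r * s)

# r = 2 with s = 1/2 gives a 4 x 1 rectangle; s = 1 gives a 2 x 2 square.
print(N_inf(2.0, 0.5), N_inf(2.0, 1.0))     # → 4 4
```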
The case $p=1$ is an open problem, as discussed in \autoref{sec:oneellipse}. The case $0<p<1$ has been handled by Ariturk and Laugesen \cite{AL17} using results in this paper.
Incidentally, an explicit estimate on the number of lattice points in the full $p$-ellipse in all four quadrants was obtained by Kr\"{a}tzel \cite[Theorem 2]{kratzel04} for $p \geq 2$. See the informative survey by Ivi\'c \emph{et al.}\ \cite[{\S}3.1]{IKKN06}.
\end{example}
\subsection*{Lattice points in the closed first quadrant, and Neumann eigenvalues} Our results have analogues for lattice point counting in the closed (rather than open) first quadrant, as we now explain. When counting nonnegative-integer lattice points, which means we include lattice points on the axes, the counting function for $r\Gamma(s)$ is
\begin{align*}
\mathcal{N}(r,s) =\#\{(j,k)\in\mathbb{Z}_{+} \times \mathbb{Z}_{+}:k\leq rsf(js/r)\} ,
\end{align*}
where $\mathbb{Z}_+ = \{0,1,2,3,\ldots\}$. Define
\[
\mathcal{S}(r) = \argmin_{s >0} \mathcal{N}(r,s) .
\]
In other words, the set $\mathcal{S}(r)$ consists of the $s$-values that minimize the number of lattice points inside the curve $r\Gamma(s)$ in the \emph{closed} first quadrant. Notice we employ the calligraphic letters $\mathcal N$ and $\mathcal S$ when working with nonnegative-integer lattice points.
\begin{theorem}[Uniform bound on optimal stretch factors] \label{thm:s_bounded_neumann}
If $\Gamma$ is a concave, strictly decreasing curve in the first quadrant then
\[
\mathcal{S}(r) \subset [s_1,s_2] \qquad \text{for all $r \geq 2/L$,}
\]
for some constants $s_1,s_2>0$.
\end{theorem}
\begin{theorem}[Optimal concave curve is asymptotically balanced]\label{th:S_limit_neumann}
Under the assumptions of \autoref{th:S_limit}, the optimal stretch factor for minimizing $\mathcal{N}(r,s)$ approaches $1$ as $r$ tends to infinity:
\begin{align*}
\mathcal{S}(r) & \subset [1-O(r^{-1/6}),1+O(r^{-1/6})] , \\
\min_{s > 0} \mathcal{N}(r,s) & = r^2\operatorname{Area} (\Gamma) + rL + O(r^{2/3}) ,
\end{align*}
and under the assumptions of \autoref{th:S_limit_general} we have similarly that:
\begin{align*}
\mathcal{S}(r) & \subset [1-O(r^{-e}),1+O(r^{-e})] , \\
\min_{s > 0} \mathcal{N}(r,s) & = r^2\operatorname{Area} (\Gamma) + rL + O(r^{1-2e}) .
\end{align*}
\end{theorem}
The proofs are in \autoref{sec:closedquadrant}.
Consequently, we reprove a recent theorem of van den Berg, Bucur and Gittins \cite{BBG16a} saying that the optimal rectangle of area $1$ for maximizing the $n$-th Neumann eigenvalue of the Laplacian approaches a square as $n \to \infty$. See \autoref{sec:relation} for discussion, and an explanation of why the approach in this paper is simpler.
\section{\bf Proof of \autoref{thm:s_bounded}}
\label{sec:boundedness}
To control the stretch factors and hence prove \autoref{thm:s_bounded}, we will first derive a rough lower bound on the counting function, and then a more sophisticated upper bound. The leading order term in these bounds is simply the area inside the rescaled curve and thus is best possible, while the second term scales like the length of the curve and so at least has the correct order of magnitude.
Assume $\Gamma$ is concave and decreasing in the first quadrant, with $x$- and $y$-intercepts at $L$ and $M$ respectively. The intercepts need not be equal, in the lemmas and proposition below. Recall that $N(r,s)$ counts the positive-integer lattice points under the curve $\Gamma$, while $\mathcal{N}(r,s)$ counts nonnegative-integer lattice points.
\begin{lemma}[Relation between counting functions] \label{le:relation}
For each $r,s>0$,
\[
\mathcal{N}(r,s)=N(r,s)+r(s^{-1}L+sM ) + \rho(r,s)
\]
for some number $\rho(r,s) \in [-1,1]$.
\end{lemma}
\begin{proof}
The difference between the two counting functions is simply the number of lattice points lying on the coordinate axes inside the intercepts of $r\Gamma(s)$. There are
\[
\lfloor rs^{-1}L \rfloor + \lfloor rsM \rfloor +1
\]
such lattice points, and so the lemma follows immediately.
\end{proof}
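The relation in the lemma can be sanity-checked on the quarter circle ($L = M = 1$, $s = 1$), where both counts reduce to exact integer computations; the sketch below is purely illustrative.

```python
import math

def open_count(R):
    """N: points (j, k) with j, k >= 1 and j^2 + k^2 <= R^2 (integer R)."""
    return sum(math.isqrt(R * R - j * j) for j in range(1, R + 1))

def closed_count(R):
    """Calligraphic N: points with j, k >= 0 and j^2 + k^2 <= R^2."""
    return sum(math.isqrt(R * R - j * j) + 1 for j in range(0, R + 1))

# Per the lemma, the difference is floor(R) + floor(R) + 1:
# the lattice points on the two axes, counting the origin once.
R = 5
print(closed_count(R) - open_count(R), 2 * R + 1)    # → 11 11
```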
\begin{lemma}[Rough lower bound] \label{le:basic}
The number $N(r,s)$ of positive-integer lattice points lying inside $r\Gamma(s)$ in the first quadrant satisfies
\[
N(r,s) \geq r^2\operatorname{Area}(\Gamma) - r(s^{-1}L+sM) - 1 , \qquad r,s > 0 .
\]
\end{lemma}
\begin{proof}
Notice $\mathcal{N}(r,s)$ equals the total area of the squares of sidelength $1$ having lower left vertices at nonnegative-integer lattice points inside the curve $r\Gamma(s)$. The union of these squares contains $r\Gamma(s)$, since the curve is decreasing. Hence $\mathcal{N}(r,s) \geq r^2\operatorname{Area}(\Gamma)$, and so
\begin{align*}
N(r,s)
& \geq \mathcal{N}(r,s) - r(s^{-1}L+sM ) - 1 \qquad \text{by \autoref{le:relation}} \\
& \geq r^2\operatorname{Area}(\Gamma) - r(s^{-1}L+sM ) - 1 .
\end{align*}
\end{proof}
For the upper bound in the next proposition, remember $\Gamma$ is the graph of $y=f(x)$, where $f$ is concave and decreasing on $[0,L]$, with $f(0)=M, f(L)=0$. We do not assume $f$ is differentiable in the next result, although in order to guarantee the constant $C$ in the proposition is positive, we assume $f$ is \emph{strictly} decreasing.
\begin{proposition}[Two-term upper bound on counting function]\label{prop:counting_ineq}
Let $C=M-f(L/2)$.
\noindent (a) The number $N$ of positive-integer lattice points lying inside $\Gamma$ in the first quadrant satisfies
\begin{equation}
N \leq \operatorname{Area}(\Gamma) - \frac{1}{2} C
\end{equation}
provided $L \geq 1$.
\noindent (b) The number $N(r,s)$ of positive-integer lattice points lying inside $r\Gamma(s)$ in the first quadrant satisfies
\[
N(r,s)\leq r^2\operatorname{Area}(\Gamma) - \frac{1}{2}Crs
\]
whenever $r \geq s/L$.
\end{proposition}
\begin{proof}
\begin{figure}
\includegraphics[scale=0.4]{rectangle_dirichlet.png}
\caption{\label{fig:dirichlet_counting}Positive integer lattice count satisfies $N \leq \operatorname{Area}(\Gamma) - \operatorname{Area}(\text{triangles})$, in proof of \autoref{prop:counting_ineq}(a).}
\end{figure}
Part (a).
Clearly $N$ equals the total area of the squares of sidelength $1$ having upper right vertices at positive-integer lattice points inside the curve $\Gamma$. Consider also the right triangles of width $1$ formed by secant lines on $\Gamma$ (see \autoref{fig:dirichlet_counting}), that is, the triangles with vertices $\big( i-1,f(i-1) \big),\big( i,f(i) \big),\big( i-1,f(i) \big)$, where $i=1,\ldots,\lfloor L \rfloor$. These triangles lie above the squares by construction, and lie below $\Gamma$ by concavity. Hence
\begin{equation}\label{eq:total_area}
N+\operatorname{Area}(\text{triangles})\leq \operatorname{Area}(\Gamma) .
\end{equation}
Since $f$ is decreasing, we find
\begin{align}
\operatorname{Area}(\text{triangles})&=\sum_{i=1}^{\lfloor L\rfloor} \frac{1}{2}\big(f(i-1) - f(i)\big) \nonumber\\
&=\frac{1}{2}\big(f(0)-f(\lfloor L\rfloor)\big)\label{eq:triangle-area-exact}\\
&\geq \frac{1}{2}\big(M- f(L/2)\big) = \frac{1}{2} C, \label{eq:area}
\end{align}
because $\lfloor L \rfloor \geq L/2$ when $L \geq 1$. Combining \autoref{eq:total_area} and \autoref{eq:area} proves Part (a).
\smallskip
Part (b).
Simply replace $\Gamma$ in Part (a) with the curve $r\Gamma(s)$, meaning we replace $L, M, f(x)$ with $rs^{-1}L, rsM, rsf(sx/r)$ respectively.
\end{proof}
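For instance, for the quarter circle one has $L=M=1$ and $C = 1 - \sqrt{3}/2$; a quick numerical scan (illustrative only, with a floating-point tolerance of our choosing) confirms the two-term upper bound of part (b) on a few sample values of $r$ and $s$.

```python
import math

def N_circle(r, s):
    """First-quadrant count for the ellipse (j*s)^2 + (k/s)^2 <= r^2."""
    eps = 1e-9
    count, j = 0, 1
    while (j * s) ** 2 <= r * r + eps:
        count += int(s * math.sqrt(max(r * r - (j * s) ** 2, 0.0)) + eps)
        j += 1
    return count

C = 1 - math.sqrt(3) / 2        # C = M - f(L/2) for f(x) = sqrt(1 - x^2)
area = math.pi / 4
for r in (2.0, 5.0, 17.3):
    for s in (0.5, 1.0, 1.6):
        if r >= s:               # hypothesis r >= s/L of the proposition, L = 1
            assert N_circle(r, s) <= r * r * area - 0.5 * C * r * s
```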
\subsection*{Proof of \autoref{thm:s_bounded}} Recall the intercepts are assumed equal ($L=M$) in this theorem. Let $r \geq 2/L$ and suppose $s \in S(r)$. Then $r \geq s/L$ by \autoref{lemma:bound}, and so the upper bound in \autoref{prop:counting_ineq}(b) gives
\[
N(r,s)\leq r^2\operatorname{Area}(\Gamma) - \frac{1}{2}Crs .
\]
The lower bound in \autoref{le:basic} with ``$s=1$'' says
\begin{equation}\label{eq:s-1-lower-bound}
N(r,1) \geq r^2\operatorname{Area}(\Gamma) - 2rL - 1 .
\end{equation}
The value $s \in S(r)$ is a maximizing value, and so $N(r,1) \leq N(r,s)$. The preceding inequalities therefore imply
\[
\frac{1}{2}Crs \leq 2rL+1 \leq \frac{5}{2}rL .
\]
Hence $s \leq 5L/C \equiv s_2$, and so the set $S(r)$ is bounded above.
Interchanging the roles of the horizontal and vertical axes, we similarly find $s^{-1} \leq 5L/\widetilde{C} \equiv s_1^{-1}$, so that the set $S(r)$ is bounded below away from $0$, completing the first part of the proof.
The fact that $S(r)$ is bounded will help imply an improved bound in the limit as $r \to \infty$. Going back to the proof of \autoref{prop:counting_ineq}(a), we see from \autoref{eq:total_area} and \autoref{eq:triangle-area-exact} that
\[
N+\frac{1}{2}\big(f(0)-f(\lfloor L\rfloor) \big) \leq \operatorname{Area}(\Gamma).
\]
Rescaling the curve from $\Gamma$ to $r\Gamma(s)$, so that $N$ and $f(x)$ become $N(r,s)$ and $rsf\big(\frac{s}{r}x\big)$, respectively, and the $x$-intercept $L$ becomes $rL/s$, we see the last inequality becomes
\[
N(r,s) + \frac{1}{2} rs\big(f(0)-f(\frac{s}{r}\lfloor \frac{rL}{s}\rfloor)\big)\leq r^2\operatorname{Area}(\Gamma).
\]
Hence
\begin{equation*}\label{eq:p_optimal}
N(r,s) \leq r^2\operatorname{Area}(\Gamma)-\frac{1}{2}rsL + o(r) \qquad \text{as $r \to \infty$,}
\end{equation*}
where to get the error term $o(r)$ we used that $s \in S(r)$ is bounded above and below ($s_1 \leq s \leq s_2$) and $f(L)=0$.
Since $s$ is a maximizing value we have $N(r,1)\leq N(r,s)$, and so \autoref{eq:s-1-lower-bound} and the above inequality imply
\[
\frac{1}{2}rsL+o(r) \leq 2rL + 1 ,
\]
which implies $\limsup_{r \to \infty} s \leq 4$. Similarly $\limsup_{r\to \infty} s^{-1} \leq 4$, by interchanging the axes.
\section{\bf Two-term counting estimates with explicit remainder}
We start with a result for $C^2$-smooth curves. What matters in the following proposition is that the right side of estimate \autoref{eq:gamma_a_asymptotic} below has the form $O(r^\theta)$ for some $\theta<1$, and that the $s$-dependence in the estimate can be seen explicitly. The detailed dependence on the functions $f$ and $g$ will not be important for our purposes.
The horizontal and vertical intercepts $L$ and $M$ need not be equal, in this section.
\begin{proposition}[Two-term counting estimate]\label{th:asymptotic}
Take a point $(\alpha ,\beta) \in \Gamma$ lying in the first quadrant, and assume that $f \in C^2[0, \alpha]$ with $f'<0$ on $(0,\alpha]$ and $f'' < 0$ on $[0, \alpha]$, and similarly $g \in C^2[0, \beta]$ with $g'<0$ on $(0,\beta]$ and $g'' < 0$ on $[0, \beta]$. Further suppose $f''$ is monotonic on $[0,\alpha]$ and $g''$ is monotonic on $[0,\beta]$.
\noindent (a) The number $N$ of positive-integer lattice points inside $\Gamma$ in the first quadrant satisfies:
\begin{align*}
&\big|N-\operatorname{Area} (\Gamma)+(L+M)/2\big|\nonumber\\
&\leq 6\Big(\int_0^\alpha |f''(x)|^{1/3} \,\mathrm{d} x+\int_0^\beta |g''(y)|^{1/3}\,\mathrm{d} y\Big) +175\big(\max_{ [0,\alpha]}\frac{1}{|f''|^{1/2}}+\max_{ [0,\beta]}\frac{1}{|g''|^{1/2}}\big)\nonumber\\
& \hspace{4cm}+\frac{1}{4} \big( |f'(\alpha)|+|g'(\beta)| \big)+3 .
\end{align*}
\noindent (b) The number $N(r,s)$ of positive-integer lattice points lying inside $r\Gamma(s)$ in the first quadrant satisfies (for $r,s>0$):
\begin{align}
&\big|N(r,s)-r^2\operatorname{Area} (\Gamma)+r(s^{-1}L+sM)/2\big|\nonumber\\
&\leq 6r^{2/3}\Big(\int_0^\alpha |f''(x)|^{1/3} \,\mathrm{d} x+\int_0^\beta |g''(y)|^{1/3}\,\mathrm{d} y\Big) +175r^{1/2}\big(\max_{ [0,\alpha]}\frac{s^{-3/2}}{|f''|^{1/2}}+\max_{[0,\beta]}\frac{s^{3/2}}{|g''|^{1/2}}\big)\nonumber\\
& \hspace{4cm}+\frac{1}{4}(s^2|f'(\alpha)|+s^{-2}|g'(\beta)|)+3. \label{eq:gamma_a_asymptotic}
\end{align}
\end{proposition}
\autoref{th:asymptotic} and its proof are closely related to work of Kr\"{a}tzel \cite[Theorem~1]{kratzel04}. We give a direct proof below for two reasons: we want the estimate \autoref{eq:gamma_a_asymptotic} that depends explicitly on the stretching parameter $s$, and we want a proof that can be modified to use a weaker monotonicity hypothesis, in \autoref{th:asymptotic_general}.
A better bound on the right side of \autoref{eq:gamma_a_asymptotic}, giving order $O(r^{\theta+\epsilon})$ with $\theta = 131/208 \simeq 0.63 < 2/3$, can be found in work of Huxley \cite{Hux03}, with precursors in \cite[Theorems~18.3.2 and 18.3.3]{Hux96}. That bound is difficult to prove, though, and the improvement is not important for our purposes since it leads to only a slight improvement in the rate of convergence for $S(r)$, namely from $O(r^{-1/6})$ to $O(r^{(\theta+\epsilon-1)/2})$ in \autoref{th:S_limit}.
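Before the proof, the estimate \autoref{eq:gamma_a_asymptotic} can be sanity-checked for the circle with $s=1$, where the constants are computable in closed form: for $f(x)=\sqrt{1-x^2}$ and $\alpha=\beta=2^{-1/2}$ one finds $\int_0^\alpha |f''|^{1/3}\,\mathrm{d} x = \pi/4$, $\max 1/|f''|^{1/2} = 1$, and $|f'(\alpha)| = 1$. The brute-force count below (an illustrative check, using exact integer arithmetic) then verifies the inequality at a few radii.

```python
import math

def N_quarter(r):
    """Positive-integer lattice points with j^2 + k^2 <= r^2 (exact for integer r)."""
    return sum(math.isqrt(r * r - j * j) for j in range(1, r + 1))

for r in (50, 200, 800):
    lhs = abs(N_quarter(r) - math.pi * r * r / 4 + r)   # |N - r^2 Area + r(L+M)/2|
    # RHS of the estimate with the closed-form constants noted above:
    rhs = 6 * r ** (2 / 3) * 2 * (math.pi / 4) + 175 * math.sqrt(r) * 2 + 0.25 * 2 + 3
    assert lhs <= rhs
```

The left side grows like $r^{2/3}$ while the bound is dominated by the $175 r^{1/2}$ term at these radii, so the check passes with a wide margin.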
\begin{proof}
Part (a).
We divide the region under $\Gamma$ into three parts. Let $N_1$ count the lattice points lying to the left of the line $x=\alpha$ and above $y=\beta$, and $N_2$ count the lattice points to the right of $x=\alpha$ and below $y=\beta$, and $N_3$ count the lattice points in the remaining rectangle $(0,\alpha] \times (0,\beta]$. That is,
\begin{align*}
N_1&=\sum_{0<m\leq \alpha} \ \sum_{\beta <n\leq f(m)} 1 = \sum_{0<m\leq \alpha} \big( \lfloor f(m)\rfloor-\lfloor \beta \rfloor \big), \\
N_2&=\sum_{0<n\leq \beta} \ \sum_{\alpha<m\leq g(n)} 1 = \sum_{0<n\leq \beta} \big( \lfloor g(n)\rfloor-\lfloor \alpha \rfloor \big) , \\
N_3&=\lfloor \alpha \rfloor \lfloor \beta\rfloor .
\end{align*}
In terms of the \emph{sawtooth} function $\psi$, defined by
\[
\psi(x)=x-\lfloor x \rfloor -1/2 ,
\]
one can evaluate
\[
N_1
=\sum_{0<m\leq \alpha} \big( f(m) -\psi\big(f(m)\big)-1/2-\lfloor \beta \rfloor \big) .
\]
Then we apply the Euler--Maclaurin summation formula
\[
\sum_{0<m\leq \alpha} f(m)=\int_0^\alpha f(x) \,\mathrm{d} x -\psi (\alpha)f(\alpha)+\psi(0)f(0) +\int_0^\alpha f'(x) \psi(x) \,\mathrm{d} x
\]
(which we observe for later reference holds whenever $f$ is piecewise $C^1$-smooth) to deduce that
\begin{align*}
N_1
&=\int_0^{\alpha} f(x) \,\mathrm{d} x -\psi(\alpha) f(\alpha) +\psi(0) f(0)+\int_0^{\alpha} f'(x) \psi(x) \,\mathrm{d} x \\
& \hspace{3cm} - \sum_{0<m\leq \alpha} \psi \big(f(m)\big) - \lfloor \alpha \rfloor (1/2+\lfloor \beta \rfloor) \\
& =\int_0^{\alpha } f(x) \,\mathrm{d} x -\psi(\alpha) \beta -M/2+\int_0^{\alpha} f'(x) \psi(x) \,\mathrm{d} x \\
& \hspace{3cm} - \sum_{0<m\leq \alpha} \psi \big(f(m)\big) - \lfloor \alpha \rfloor (1/2+\lfloor \beta \rfloor) .
\end{align*}
Similarly
\begin{align*}
N_2& =\int_0^{\beta } g(y) \,\mathrm{d} y -\psi(\beta) \alpha -L/2+\int_0^{\beta} g'(y) \psi(y) \,\mathrm{d} y \\
& \hspace{3cm} - \sum_{0<n\leq \beta} \psi \big(g(n)\big) - \lfloor \beta \rfloor (1/2+\lfloor \alpha \rfloor) ,
\end{align*}
and so
\begin{align}
N&=N_1+N_2+N_3\nonumber\\
&=\int_0^{\alpha } f(x) \,\mathrm{d} x+\int_0^{\beta } g(y) \,\mathrm{d} y - \lfloor \alpha \rfloor \lfloor \beta \rfloor - (L+M)/2 \nonumber\\
& \hspace{1cm} -\psi(\alpha) \beta -\lfloor \alpha \rfloor/2 -\psi(\beta) \alpha -\lfloor \beta \rfloor/2 \nonumber\\
& \hspace{1cm} +\int_0^{\alpha} f'(x) \psi(x) \,\mathrm{d} x +\int_0^{\beta} g'(y) \psi(y) \,\mathrm{d} y \nonumber \\
& \hspace{1cm} - \sum_{0<m\leq \alpha} \psi \big(f(m)\big) - \sum_{0<n\leq \beta} \psi \big(g(n)\big) \nonumber\\
&=\operatorname{Area}(\Gamma) -(L+M)/2+\int_0^{\alpha} f'(x) \psi(x) \,\mathrm{d} x +\int_0^{\beta} g'(y) \psi(y) \,\mathrm{d} y \nonumber\\
& \hspace{1cm}- \sum_{0<m\leq \alpha} \psi \big(f(m)\big) - \sum_{0<n\leq \beta} \psi \big(g(n)\big) + \text{remainder} \label{eq:lattice_count}
\end{align}
where
\begin{equation}\label{eq:const_bound}
\text{remainder} =-(\alpha -\lfloor \alpha \rfloor)(\beta- \lfloor \beta \rfloor) + (\alpha -\lfloor \alpha \rfloor + \beta -\lfloor \beta \rfloor)/2 .
\end{equation}
This remainder lies between $0$ and $1$, since $0 \leq -xy+(x+y)/2 \leq 1$ when $x,y \in [0,1]$.
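The elementary bound invoked here is easy to confirm numerically. The following Python sketch (an illustration only, not part of the proof) samples the expression on a fine grid over the unit square:

```python
# Sample -x*y + (x+y)/2 on a fine grid over [0,1]^2 and confirm the
# bound 0 <= -x*y + (x+y)/2 <= 1 used for the remainder term.
n = 200
vals = [-(i/n)*(j/n) + (i/n + j/n)/2 for i in range(n+1) for j in range(n+1)]
assert min(vals) >= 0 and max(vals) <= 1
# On the grid the maximum is in fact 1/2, attained at (0,1) and (1,0).
assert abs(max(vals) - 0.5) < 1e-12
```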
We estimate the sum of sawtooth functions in \autoref{eq:lattice_count} by using \autoref{lemma:kratzel} (which is due to van der Corput): since $f''$ is monotonic and nonzero on $[0,\alpha]$, that lemma implies
\begin{align}
\Big|\sum_{0<m\leq \alpha }\psi\big(f(m)\big)\Big|& \leq 6 \int_0^{\alpha} |f''(x)|^{1/3} \,\mathrm{d} x + 175 \max_{[0,\alpha]} \frac{1}{|f''|^{1/2}}+1 \label{eq:psi_f_bound}
\end{align}
and similarly
\begin{align}\label{eq:psi_g_bound}
\Big|\sum_{0<n\leq \beta }\psi\big(g(n)\big)\Big|& \leq 6\int_0^{\beta } |g''(y)|^{1/3} \,\mathrm{d} y + 175 \max_{[0,\beta]} \frac{1}{|g''|^{1/2}}+1 .
\end{align}
To estimate the integrals of $f'\psi$ and $g'\psi$ in \autoref{eq:lattice_count}, we introduce the antiderivative of the sawtooth function, $\Psi(t)=\int_0^t \psi(z) \,\mathrm{d} z$, and observe that $-1/8 \leq \Psi(t) \leq 0$ for all $t \in {\mathbb R}$. By integration by parts and the fact that $f''<0$, we have
\begin{align}\label{eq:int_f_bound}
\big| \int_0^{\alpha} f'(x) \psi(x) \,\mathrm{d} x \big|&=\Big| \big[ f'(x) \Psi(x) \big]_{x=0}^{x=\alpha}- \int_0^{\alpha} f''(x)\Psi(x) \,\mathrm{d} x\Big| \nonumber\\
&\leq \frac{1}{8} |f'(\alpha)| + \frac{1}{8} \big| \int_0^\alpha f''(x) \,\mathrm{d} x \big| \nonumber\\
&= \frac{1}{8}|f'(\alpha)|+\frac{1}{8} \big( f'(0) - f'(\alpha) \big) \nonumber\\
&\leq \frac{1}{4}|f'(\alpha)|
\end{align}
since $f^\prime(\alpha) \leq f^\prime(0) \leq 0$. The same argument gives
\begin{equation}\label{eq:int_g_bound}
\big| \int_0^{\beta} g'(y) \psi(y) \,\mathrm{d} y \big|\leq \frac{1}{4}|g'(\beta)| .
\end{equation}
Combining \autoref{eq:lattice_count}--\autoref{eq:int_g_bound} completes the proof of Part~(a).
\smallskip
Part (b). Simply apply Part (a) to the curve $r\Gamma(s)$ by replacing $L, M, f(x), g(y), \alpha, \beta$ with $rs^{-1}L, rsM, rsf(sx/r), rs^{-1}g(s^{-1}y/r), rs^{-1}\alpha, rs\beta$ respectively.
\end{proof}
\begin{remark}
\autoref{th:asymptotic} continues to hold if the point $(\alpha,\beta)=(L,0)$ lies at the right endpoint of the curve. One simply removes all mention of $g,\beta_j$ and $\epsilon$ from the hypotheses of the proposition, and removes all such terms from the conclusions, as can be justified by inspecting the proof above. The same remark holds for the advanced counting estimate in the following \autoref{th:asymptotic_general}.
\end{remark}
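The Euler--Maclaurin identity used at the start of the preceding proof lends itself to a quick numerical spot-check. The sketch below (with the hypothetical sample choice $f(x)=x^2$ and $\alpha=3.3$; an illustration only) approximates both integrals by the midpoint rule, which handles the jumps of $\psi$ at the integers with error of order $1/N$:

```python
from math import floor

def psi(t):                       # sawtooth function psi(t) = t - floor(t) - 1/2
    return t - floor(t) - 0.5

# Spot-check the Euler--Maclaurin identity
#   sum_{0<m<=a} f(m) = int_0^a f - psi(a) f(a) + psi(0) f(0) + int_0^a f' psi
# for the sample choice f(x) = x^2, a = 3.3.
f  = lambda x: x * x
fp = lambda x: 2 * x
a  = 3.3

lhs = sum(f(m) for m in range(1, floor(a) + 1))     # 1 + 4 + 9 = 14

N = 400_000                       # midpoint rule; psi jumps at the integers
dx = a / N
int_f  = sum(f((i + 0.5) * dx) for i in range(N)) * dx
int_fp = sum(fp((i + 0.5) * dx) * psi((i + 0.5) * dx) for i in range(N)) * dx
rhs = int_f - psi(a) * f(a) + psi(0) * f(0) + int_fp
assert abs(lhs - rhs) < 1e-2
```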
\subsection*{Advanced counting estimate}
The hypotheses in the last result are somewhat restrictive. In particular, we would like to handle infinite curvature at the intercepts of the curve $\Gamma$, meaning $f^{\prime \prime}$ must be allowed to blow up at $x=0$. Further, we would like to relax the monotonicity assumption on $f^{\prime \prime}$. The next result achieves these goals.
Two numbers $\delta$ and $\epsilon$ appear in the next proposition. Their role in the proof is that on the intervals $0<x \leq \delta$ and $0<y \leq \epsilon$ we bound the sawtooth function trivially by $|\psi| \leq 1/2$, while on the remaining intervals we seek cancellations.
\begin{proposition}[Two-term counting estimate for more general curve]\label{th:asymptotic_general}
Take a point $(\alpha ,\beta) \in \Gamma$ lying in the first quadrant, and assume $f\in PC^2(0,\alpha]$ with $f'<0$ and $f''<0$, and that $f^{\prime \prime}$ is monotonic on $(\alpha_{i-1},\alpha_i]$ for $i=1,\dots,l$. Similarly assume $g\in PC^2(0,\beta]$ with $g'<0$ and $g''<0$, and that $g^{\prime \prime}$ is monotonic on $(\beta_{j-1},\beta_j]$ for $j=1,\dots,\ell$.
\noindent (a) If $\delta \in (0,\alpha)$ and $\epsilon \in (0,\beta)$ then the number $N$ of positive-integer lattice points inside $\Gamma$ in the first quadrant satisfies:
\begin{align*}
\big|N- & \operatorname{Area} (\Gamma)+(L+M)/2\big|\nonumber\\
& \leq 6\Big(\int_0^\alpha |f''(x)|^{1/3} \,\mathrm{d} x+\int_0^\beta |g''(y)|^{1/3}\,\mathrm{d} y\Big) \nonumber\\
& \quad + 175 \Big(\frac{1}{|f''(\delta)|^{1/2}}+\frac{1}{|g''(\epsilon)|^{1/2}}\Big) + 350 \Big(\sum_{i=1}^l \frac{1}{|f''(\alpha_i)|^{1/2}} + \sum_{j=1}^\ell \frac{1}{|g''(\beta_j)|^{1/2}}\Big) \nonumber \\
& \quad + \frac{1}{4} \big( \sum_{i=1}^l |f'(\alpha_i)| + \sum_{j=1}^\ell |g'(\beta_j)|\big)+\frac{1}{2}\big(\delta+\epsilon\big)+l+\ell+1 .
\end{align*}
\noindent (b) If functions
\[
\delta : (0,\infty) \to (0,\alpha) , \qquad \epsilon : (0,\infty) \to (0,\beta) ,
\]
are given, then the number $N(r,s)$ of positive-integer lattice points inside $r\Gamma(s)$ in the first quadrant satisfies (for $r,s>0$):
\begin{align}
& \big|N(r,s)- r^2\operatorname{Area} (\Gamma)+r(s^{-1}L+sM)/2\big|\nonumber\\
&\leq 6r^{2/3}\Big(\int_0^\alpha |f''(x)|^{1/3} \,\mathrm{d} x+\int_0^\beta |g''(y)|^{1/3}\,\mathrm{d} y\Big) \nonumber \\
& \quad + 175 r^{1/2}\Big(\frac{s^{-3/2}}{|f'' \big( \delta(r) \big)|^{1/2}}+\frac{s^{3/2}}{|g'' \big( \epsilon(r) \big)|^{1/2}}\Big) + 350r^{1/2} \Big( \sum_{i=1}^l \frac{s^{-3/2}}{|f''(\alpha_i)|^{1/2}} + \sum_{j=1}^\ell \frac{s^{3/2}}{|g''(\beta_j)|^{1/2}}\Big) \nonumber \\
& \quad+\frac{1}{4} \big( \sum_{i=1}^l s^2|f'(\alpha_i)| + \sum_{j=1}^\ell s^{-2}|g'(\beta_j)|\big)+\frac{r}{2}\big(s^{-1}\delta(r)+s\epsilon(r)\big)+l+\ell+1 . \label{eq:gamma_a_asymptotic_general}
\end{align}
\end{proposition}
The integral of $|f''|^{1/3}$ appearing in the conclusion of \autoref{th:asymptotic_general} is finite, because by H\"{o}lder's inequality and the fact that $f''<0$ (so that $f'$ is negative and decreasing, and hence $f'(0^+)$ exists and is finite), we have
\[
\int_0^{\alpha_1} |f''(x)|^{1/3} \,\mathrm{d} x \leq \alpha_1^{2/3} \Big| \int_0^{\alpha_1} f''(x) \,\mathrm{d} x \Big|^{\! 1/3} = \alpha_1^{2/3} |f'(0^+)-f'(\alpha_1^-)|^{1/3} < \infty .
\]
The integral of $|g^{\prime \prime}|^{1/3}$ is finite for similar reasons.
\begin{proof}
Part (a).
The lattice point counting equation \autoref{eq:lattice_count} holds just as in the proof of \autoref{th:asymptotic}, and so the task is to estimate each of the terms on the right side of that equation.
Estimate \autoref{eq:psi_f_bound} on the sum of the sawtooth function is no longer valid, because $f''$ is no longer assumed to be monotonic on the whole interval $[0,\alpha]$. To control this sawtooth sum, we first observe
\[
\big|\sum_{0 < m \leq \delta }\psi\big(f(m)\big)\big| \leq \frac{1}{2} \delta
\]
since $|\psi| \leq 1/2$ everywhere. Next, we have $\delta \in (\alpha_{j-1},\alpha_j]$ for some $j \in \{ 1,\ldots,l \}$, and
\[
\big|\sum_{\delta < m \leq \alpha_j}\psi\big(f(m)\big)\big| \leq 6 \int_\delta^{\alpha_j} |f''(x)|^{1/3} \,\mathrm{d} x + 175 \max \Big\{ \frac{1}{|f''(\delta)|^{1/2}} , \frac{1}{|f''(\alpha_j)|^{1/2}} \Big\} + 1
\]
by \autoref{lemma:kratzel} applied on the interval $[\delta,\alpha_j]$. Applying that theorem again on each interval $[\alpha_{i-1},\alpha_i]$ with $i=j+1,\ldots,l$ gives that
\[
\big|\sum_{\alpha_{i-1} < m \leq \alpha_i}\psi\big(f(m)\big)\big| \leq 6 \int_{\alpha_{i-1}}^{\alpha_i} |f''(x)|^{1/3} \,\mathrm{d} x + 175 \max \Big\{ \frac{1}{|f''(\alpha_{i-1})|^{1/2}} , \frac{1}{|f''(\alpha_i)|^{1/2}} \Big\} + 1 .
\]
By summing the last three displayed inequalities, we deduce a sawtooth bound
\begin{align}
& \Big|\sum_{0<m\leq \alpha }\psi\big(f(m)\big)\Big| \notag \\
& \leq \frac{1}{2} \delta+ 6 \int_\delta^\alpha |f''(x)|^{1/3} \,\mathrm{d} x + \frac{175}{|f''(\delta)|^{1/2}} + \sum_{i=j}^{l-1} \frac{350}{|f''(\alpha_i)|^{1/2}} + \frac{175}{|f''(\alpha)|^{1/2}} + l-j+1 \nonumber \\
& \leq \frac{1}{2} \delta+ 6 \int_0^\alpha |f''(x)|^{1/3} \,\mathrm{d} x + \frac{175}{|f''(\delta)|^{1/2}} + \sum_{i=1}^l \frac{350}{|f''(\alpha_i)|^{1/2}} + l . \label{eq:psi_f_bound_gd}
\end{align}
Next, we adapt estimate \autoref{eq:int_f_bound} on the integral of $f^\prime \psi$ by simply applying the same argument on each interval $[\alpha_{i-1},\alpha_i]$, hence finding
\begin{align}\label{eq:int_f_bd_gd}
\Big| \int_0^{\alpha} f'(x) \psi(x) \,\mathrm{d} x \Big|
&\leq \sum_{i=1}^l \Big[ \frac{1}{8}|f'(\alpha_i)|+\frac{1}{8} \big( f'(\alpha_{i-1})-f'(\alpha_i) \big) \Big] \nonumber\\
&\leq \frac{1}{4}\sum_{i=1}^l|f'(\alpha_i)|.
\end{align}
By combining \autoref{eq:lattice_count}, \autoref{eq:const_bound} with \autoref{eq:psi_f_bound_gd}, \autoref{eq:int_f_bd_gd} and the analogous estimates on $g$, we complete the proof of Part~(a).
\smallskip
Part (b).
Apply Part (a) to the curve $r\Gamma(s)$ by replacing $L, M, f(x), g(y), \alpha, \beta,\delta,\epsilon$ with $rs^{-1}L, rsM, rsf(sx/r), rs^{-1}g(s^{-1}y/r), rs^{-1}\alpha, rs\beta,rs^{-1}\delta(r),rs\epsilon(r)$ respectively.
\end{proof}
\section{\bf A unified approach}
\label{sec:structure}
The next proposition provides a unified framework for proving our theorems later in the paper. It adapts the scheme of proof employed by Antunes and Freitas \cite{AF13}.
\begin{proposition}\label{prop:unified}
Let $A \in {\mathbb R}$, $L>0$, and $0<\theta<1$. Consider a real-valued function $H(r,s)$ (for $r,s>0$) such that for each closed interval $[s_1,s_2] \subset (0,\infty)$ one has
\begin{equation}\label{eq:two-term-ineq}
H(r,s) = Ar^2 - Lr(s+s^{-1})/2+O(r^\theta),
\end{equation}
with $s \in [s_1,s_2]$ allowed to vary as $r \to \infty$.
Assume the function $s \mapsto H(r,s)$ attains its maximum value, for each $r>0$, and write $S(r) = \argmax_{s>0} H(r,s)$ for the set of maximizing points. Suppose
\begin{equation}\label{eq:two-term-upper-bound}
S(r) \subset [s_1,s_2] \qquad \text{for all large $r>0$,}
\end{equation}
for some constants $s_1,s_2>0$.
Then the maximizing set $S(r)$ converges to $\{ 1 \}$ as $r \to \infty$, with
\[
S(r) \subset \big[1-O(r^{-(1-\theta)/2}),1+O(r^{-(1-\theta)/2})\big],
\]
and the maximum value of $H$ has asymptotic formula
\[
\max_{s > 0} H(r,s) = Ar^2-Lr + O(r^{\theta}) .
\]
\end{proposition}
The error term $O(r^\theta)$ in \eqref{eq:two-term-ineq} has implied constant depending on the interval $[s_1,s_2]$.
\begin{proof}
Since $S(r) \subset [s_1,s_2]$ by hypothesis \autoref{eq:two-term-upper-bound}, the asymptotic estimate \autoref{eq:two-term-ineq} implies
\begin{align*}
H(r,s) & = Ar^2-Lr(s+s^{-1})/2 + O(r^{\theta}) , \\
H(r,1) & = Ar^2-Lr + O(r^{\theta}),
\end{align*}
for $s \in S(r)$ and $r \to \infty$. Since $s$ is a maximizing value, we have $H(r,1)\leq H(r,s)$ and so
\begin{equation} \label{eq:s-sinverse}
s+s^{-1} \leq 2 +O(r^{-(1-\theta)}) .
\end{equation}
Hence $s=1+O(r^{-(1-\theta)/2})$ by \autoref{le:squarecompletion}, which proves the first claim in the proposition. For the second claim, when $s \in S(r)$ we have
$
H(r,s) = Ar^2-Lr + O(r^{\theta})
$
as $r \to \infty$, by \autoref{eq:two-term-ineq} and using also that $1 \leq (s+s^{-1})/2 \leq 1+O(r^{-(1-\theta)})$ by \autoref{eq:s-sinverse}.
\end{proof}
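The mechanism behind \autoref{prop:unified} can be illustrated numerically. In the Python sketch below we take $H(r,s)=Ar^2-Lr(s+s^{-1})/2$ plus an oscillatory term of size $O(r^\theta)$, and watch the grid maximizer approach $s=1$ as $r$ grows. The choices $A=L=1$, $\theta=2/3$, and the noise term $\tfrac12 r^\theta \sin(37s)$ are hypothetical, made for this sketch only:

```python
from math import sin

A, L, theta = 1.0, 1.0, 2/3

def H(r, s):
    # two-term expansion plus bounded noise of size O(r^theta)
    return A*r*r - L*r*(s + 1/s)/2 + (r**theta) * sin(37*s) / 2

def argmax_s(r):
    grid = [0.5 + 0.001*i for i in range(1501)]   # s in [0.5, 2]
    return max(grid, key=lambda s: H(r, s))

# The maximizer approaches s = 1 as r grows, at rate O(r^{-(1-theta)/2}).
for r, tol in [(1e4, 0.5), (1e6, 0.25), (1e8, 0.12)]:
    assert abs(argmax_s(r) - 1) < tol
```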
\section{\bf Proof of \autoref{th:S_limit} and \autoref{th:S_limit_general}}
\label{sec:mainproof}
\subsection*{Proof of \autoref{th:S_limit}}
The theorem follows directly from \autoref{prop:unified} with $H(r,s)$ being the lattice counting function $N(r,s)$. The hypotheses of the proposition are verified as follows.
Suppose $0<s_1<s_2<\infty$. By \autoref{th:asymptotic}(b) with $L=M$ one has
\begin{equation}\label{eq:two-term-ineqN}
N(r,s) = \operatorname{Area}(\Gamma)r^2 - Lr(s+s^{-1})/2+O(r^{2/3}),
\end{equation}
with $s \in [s_1,s_2]$ as $r \to \infty$. Thus hypothesis \eqref{eq:two-term-ineq} holds for $N(r,s)$ with the choices $A=\operatorname{Area}(\Gamma), \theta=2/3$, and $L$ equalling the intercept value of $\Gamma$.
The boundedness hypothesis \eqref{eq:two-term-upper-bound} holds by \autoref{thm:s_bounded}.
\subsection*{Proof of \autoref{th:S_limit_general}}
\label{sec:generalproof}
Again let $H(r,s)$ be the lattice counting function $N(r,s)$, take $A=\operatorname{Area}(\Gamma)$, let $L$ be the intercept value of $\Gamma$, and note the boundedness hypothesis \eqref{eq:two-term-upper-bound} holds by \autoref{thm:s_bounded}. To finish verifying the hypotheses of \autoref{prop:unified}, we suppose $0<s_1<s_2<\infty$ and show that \autoref{eq:two-term-ineq} holds.
Take $\theta=1-2e$, where the number $e=\min \{ \tfrac{1}{6},a_1,a_2,b_1,b_2 \}$ was defined in \autoref{th:S_limit_general}. Hypothesis \eqref{eq:two-term-ineq} is the assertion that
\begin{equation}\label{eq:two-term-ineqNgeneral}
N(r,s) = \operatorname{Area}(\Gamma)r^2 - Lr(s+s^{-1})/2+O(r^{1-2e}),
\end{equation}
with $s \in [s_1,s_2]$ as $r \to \infty$. To verify this asymptotic, we will estimate the remainder terms in \autoref{th:asymptotic_general}(b) as follows. In that proposition take $L=M$, and note $\delta(r)<\alpha$ and $\epsilon(r)<\beta$ for all large $r$ by assumptions \autoref{eq:f_sup} and \autoref{eq:g_sup}. We will show the right side of estimate \autoref{eq:gamma_a_asymptotic_general} in \autoref{th:asymptotic_general}(b) is bounded by
\begin{align*}
& O(r^{2/3}) + s^{-3/2} O(r^{1-2a_2}) + s^{3/2} O(r^{1-2b_2}) + (s^{-3/2}+s^{3/2}) O(r^{1/2}) \\
& \quad \qquad + (s^2+s^{-2}) O(1) + s^{-1} O(r^{1-2a_1}) + s O(r^{1-2b_1}) + O(1)
\end{align*}
for large enough $r$, where the implied constants in the $O(\cdot)$-terms depend only on the curve $\Gamma$ and are independent of $s$. Since each one of these $O(\cdot)$-terms is bounded by $O(r^{1-2e})$, and $s$ and $s^{-1}$ are bounded when $s \in [s_1,s_2]$, hypothesis \eqref{eq:two-term-ineq} will hold as desired.
Examining now the right side of \autoref{eq:gamma_a_asymptotic_general}, we see the first two terms are obviously $O(r^{2/3})$. For the next term, observe by assumption in \autoref{eq:f_sup} that
\[
\frac{r^{1/2} s^{-3/2}}{|f''(\delta(r))|^{1/2}}=s^{-3/2}O(r^{1-2a_2}) ,
\]
and similarly for the analogous term involving $g''$. Since $f''(\alpha_i)$ and $g''(\beta_j)$ are constant, the corresponding terms in \autoref{eq:gamma_a_asymptotic_general} can be estimated by $(s^{-3/2}+s^{3/2}) O(r^{1/2})$. Similarly, the terms in \eqref{eq:gamma_a_asymptotic_general} involving $f'(\alpha_i)$ and $g'(\beta_j)$ can be estimated by $(s^2+s^{-2}) O(1)$. Next, $s^{-1}r\delta(r)=s^{-1}O(r^{1-2a_1})$ by the assumption in \autoref{eq:f_sup}, and similarly for $\epsilon(r)$. And, of course, $l+\ell+1$ is constant, which completes the verification of hypothesis \autoref{eq:two-term-ineq}.
\section{\bf Proof of \autoref{thm:s_bounded_neumann} and \autoref{th:S_limit_neumann}}
\label{sec:closedquadrant}
First we need a two-term bound on the counting function in the closed first quadrant, as provided by the next proposition. The result is an analogue of \autoref{prop:counting_ineq}, although the constant $\mathcal{C}$ is slightly different from the one in that result.
Assume $f$ is concave and strictly decreasing on $[0,L]$, with $f(0)=M, f(L)=0$. The intercepts $L$ and $M$ need not be equal.
\begin{proposition}[Two-term lower bound on counting function]\label{prop:counting_ineq_neumann}
Let $\mathcal{C}=M-f(L/4)$.
\noindent (a) The number $\mathcal{N}$ of nonnegative-integer lattice points lying inside $\Gamma$ in the closed first quadrant satisfies:
\begin{equation}
\mathcal{N} \geq \operatorname{Area}(\Gamma) + \frac{1}{2} \mathcal{C} .
\end{equation}
\noindent (b) The number of nonnegative-integer lattice points lying inside $r\Gamma(s)$ in the closed first quadrant satisfies (for $r,s>0$):
\[
\mathcal{N}(r,s)\geq r^2\operatorname{Area}(\Gamma) + \frac{1}{2}\mathcal{C}rs .
\]
\end{proposition}
\begin{proof}
Part (a).
Clearly $\mathcal{N}$ equals the total area of the squares of sidelength $1$ having lower left vertices at the nonnegative-integer lattice points inside the curve $\Gamma$. The union of these squares contains the region enclosed by $\Gamma$ and the coordinate axes, since the curve is decreasing.
We separate the proof into cases according to the value of $L$.
Case (i): Suppose $L\leq 2$, so that $L/4 \leq 1/2$. Consider the rectangle whose lower left vertex sits on the curve at $x=L/4$ and whose vertices are
\[
\big( L/4,f(L/4) \big), \quad \big( 1, f(L/4) \big), \quad \big( 1, M \big), \quad \big( L/4, M \big) .
\]
By construction, this rectangle lies inside the union of squares of sidelength $1$, and it lies above $\Gamma$ because the curve is decreasing. Hence
\begin{align*}
\mathcal{N}
& \geq \operatorname{Area}(\Gamma) + \operatorname{Area}(\text{rectangle}) \\
& = \operatorname{Area}(\Gamma) +(1-L/4)\big(M - f(L/4)\big) \\
& \geq \operatorname{Area}(\Gamma)+\frac{1}{2}\big(M-f(L/4)\big)
\end{align*}
as desired.
Case (ii): Suppose $L\geq 2$. Consider the right triangles of width $1$ formed by tangent lines to $\Gamma$ from the right, that is, the triangles with vertices $\big( i,f(i) \big),\big( i+1,f(i) \big),\big( i+1,f(i)+f'(i^+) \big)$, where $i=0,1,\ldots,\lfloor L \rfloor-1$. These triangles all lie above the horizontal axis, since by concavity $f(i)+f'(i^+) \geq f(i+1) \geq 0$; the last inequality explains why the biggest $i$-value we consider is $\lfloor L \rfloor - 1$.
\begin{figure}
\includegraphics[scale=0.4]{rectangle_neumann.png}
\caption{\label{fig:neumann_counting}Nonnegative integer lattice count satisfies $\mathcal{N} \geq \operatorname{Area}(\Gamma) + \operatorname{Area}(\text{triangles})$, in proof of \autoref{prop:counting_ineq_neumann}(a) when $L \geq 2$.}
\end{figure}
Thus these triangles lie inside the union of squares of sidelength $1$, and lie above $\Gamma$ by concavity. Hence
\[
\mathcal{N} \geq \operatorname{Area}(\Gamma) + \operatorname{Area}(\text{triangles}) .
\]
To complete the proof of Case (ii), we estimate
\begin{align*}
\operatorname{Area}(\text{triangles})
& \geq \frac{1}{2} \sum_{i=1}^{\lfloor L \rfloor - 1} |f'(i^+)| \\
& \geq \frac{1}{2} \sum_{i=1}^{\lfloor L \rfloor - 1} \big( f(i-1)-f(i) \big) \qquad \text{by concavity} \\
& = \frac{1}{2} \big( f(0) - f(\lfloor L \rfloor - 1) \big) \\
& \geq \frac{1}{2}\big( M-f(L/4) \big) ,
\end{align*}
because $\lfloor L \rfloor-1 \geq L/4$ when $L \geq 2$: indeed, if $2 \leq L < 4$ then $\lfloor L \rfloor-1 \geq 1 > L/4$, while if $L \geq 4$ then $\lfloor L \rfloor-1 \geq L-2 \geq L/2 \geq L/4$.
\smallskip
Part (b).
Replace $\Gamma$ in Part (a) with the curve $r\Gamma(s)$, meaning we replace $L, M, f(x)$ with $rs^{-1}L, rsM, rsf(sx/r)$ respectively.
\end{proof}
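As a sanity check, \autoref{prop:counting_ineq_neumann}(b) can be verified numerically for a concrete curve. The Python sketch below (an illustration, not part of the proof) takes $\Gamma$ to be the quarter of the unit circle, so that $f(x)=\sqrt{1-x^2}$, $L=M=1$, and $\mathcal{C}=1-\sqrt{15}/4$, with $s=1$:

```python
from math import sqrt, pi, floor

# Quarter unit circle: f(x) = sqrt(1 - x^2), L = M = 1, C = M - f(L/4).
C = 1 - sqrt(15)/4

def count_closed_quadrant(r):
    # nonnegative-integer lattice points (j,k) with j^2 + k^2 <= r^2
    return sum(floor(sqrt(r*r - j*j)) + 1 for j in range(floor(r) + 1))

# Proposition (b) with s = 1: count >= r^2 * Area(Gamma) + C*r/2.
for r in [5, 10, 20, 50, 100]:
    assert count_closed_quadrant(r) >= r*r*pi/4 + 0.5*C*r
```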
\subsection*{Proof of \autoref{thm:s_bounded_neumann}} Since $N(r,s)\leq r^2\operatorname{Area}(\Gamma)$, taking $s=1$ and $L=M$ in \autoref{le:relation} gives that
\[
\mathcal{N}(r,1)\leq r^2 \operatorname{Area}(\Gamma) + 2rL + 1.
\]
Now suppose $s\in \mathcal{S}(r)$ is a minimizing value, so that $\mathcal{N}(r,s)\leq \mathcal{N}(r,1)$. Since
\[
\mathcal{N}(r,s) \geq r^2 \operatorname{Area}(\Gamma) +\frac{1}{2}\mathcal{C}rs
\]
by \autoref{prop:counting_ineq_neumann}(b), we conclude from above that
\[
\frac{1}{2}\mathcal{C}rs \leq 2rL+1\leq \frac{5}{2}rL ,
\]
where the last inequality holds for $r \geq 2/L$. Hence $s \leq 5L/\mathcal{C}$, and so the set $\mathcal{S}(r)$ is bounded above. Interchanging the horizontal and vertical axes and recalling $L=M$ (\emph{i.e.}, the intercepts are equal in this theorem), one finds similarly that $s^{-1}\leq 5L/\widetilde{\mathcal{C}}$. Hence $\mathcal{S}(r)$ is bounded below away from $0$, which completes the proof.
\subsection*{Proof of \autoref{th:S_limit_neumann}}
The theorem will follow from \autoref{prop:unified} with the choice $H(r,s)=-\mathcal{N}(r,s)$, since maximizing $s \mapsto H(r,s)$ corresponds to minimizing $s \mapsto \mathcal{N}(r,s)$. The boundedness hypothesis \eqref{eq:two-term-upper-bound} of the proposition holds by \autoref{thm:s_bounded_neumann}. The other hypothesis \eqref{eq:two-term-ineq} is verified as follows.
Taking $L=M$ in the relation between $\mathcal{N}(r,s)$ and $N(r,s)$ in \autoref{le:relation}, and calling on the asymptotic for $N(r,s)$ in either \autoref{eq:two-term-ineqN} (under the assumptions of \autoref{th:S_limit}) or \autoref{eq:two-term-ineqNgeneral} (under the assumptions of \autoref{th:S_limit_general}), we deduce
\[
\mathcal{N}(r,s) = \operatorname{Area}(\Gamma)r^2+Lr(s^{-1}+s)/2+ O(r^\theta)
\]
with $s \in [s_1,s_2]$ allowed to vary as $r \to \infty$, where
\[
\theta =
\begin{cases}
2/3 & \text{under the assumptions of \autoref{th:S_limit},} \\
1-2e & \text{under the assumptions of \autoref{th:S_limit_general}.}
\end{cases}
\]
That is, we have verified hypothesis \eqref{eq:two-term-ineq} with $H=-\mathcal{N},A=-\operatorname{Area}(\Gamma)$, and $L$ the intercept value of $\Gamma$.
\section{\bf Open problem for $1$-ellipses --- lattice points in right triangles}
\label{sec:oneellipse}
Lattice point maximization for right triangles appears to be an open problem. Consider the $p$-circle with $p=1$, which is a diamond with vertices at $(\pm 1,0)$ and $(0,\pm 1)$. It intersects the first quadrant in the line segment $\Gamma$ joining the points $(0,1)$ and $(1,0)$. Here $L=M=1$. Stretching the $1$-circle in the $x$- and $y$-directions gives a $1$-ellipse
\[
|sx|+|s^{-1}y|=1 ,
\]
which together with the coordinate axes forms a right triangle of area $1/2$ in the first quadrant, with one vertex at the origin and hypotenuse $\Gamma(s)$ joining the vertices at $(s^{-1},0)$ and $(0,s)$. As previously, we write $S(r)$ for the set of $s$-values that maximize the number of positive-integer (first quadrant) lattice points below or on $r\Gamma(s)$, when $r>0$.
First of all, the 45--45--90 degree triangle ($s=1$) does not always enclose the most lattice points: \autoref{fig:counterexample_p=1} shows an example.
\begin{figure}
\includegraphics[scale=0.35]{p=1.png}
\caption{\label{fig:counterexample_p=1}The $1$-ellipse $sx+s^{-1}y=r$ with $r=4.96$, for $s=1$ (solid) and $s=\sqrt{2}$ (dashed). The dashed line encloses three more lattice points (shown in bold) than the solid line.}
\end{figure}
The open problem is to understand the limiting behavior of the maximizing $s$-values. Does $S(r)$ converge to $\{ 1 \}$ as $r \to \infty$? We proved the answer is ``Yes'' for $p$-ellipses when $1<p<\infty$ (\autoref{ex:p-ellipse}), but for $p=1$ we suggest the answer is ``No''. Numerical evidence in \autoref{fig:optimal_s_p_1} suggests that the set $S(r)$ does not converge to $\{ 1 \}$ as $r \to \infty$. Indeed, the plotted heights appear to cluster at a large number of values, possibly dense in some interval around $s=1$. These cluster values presumably have some number theoretic significance.
\begin{figure}
\includegraphics[scale=0.45]{optimal_s_p_1.png}
\caption{\label{fig:optimal_s_p_1}Optimal $s$-values for maximizing the number of lattice points in the $1$-ellipse (triangle). The graph plots $\sup S(r)$ versus $\log r$. The plotted $r$-values are multiples of $\sqrt{3}/10$, and the horizontal axis is at height $s=1$.}
\end{figure}
In the remainder of the section we remark that maximizing $s$-values are $\leq 3$ in the limit as $r \to \infty$, and we describe the numerical scheme that generates \autoref{fig:optimal_s_p_1}. Lastly, we explain why $s= \sqrt{2}$ is a good candidate for a cluster value as $r \to \infty$.
\subsection*{The bound on maximizing $s$-values for right triangles ($p=1$)} Given $\varepsilon>0$, we claim
\[
S(r) \subset \big[\frac{1}{3+\varepsilon},3+\varepsilon\big] \qquad \text{for all large $r$.}
\]
This bound is slightly better than the one in \autoref{thm:s_bounded} (which had $4$ instead of $3$), and can be proved in the same way with the help of a special formula for $N(r,1)$:
\begin{align}
N(r,1)
&= \# \ \text{first-quadrant lattice points under the line $y=r-x$}\nonumber\\
&= \frac{1}{2} \lfloor r \rfloor \lfloor r -1\rfloor \nonumber\\
&\geq \frac{1}{2}(r -1)(r -2)
= \frac{1}{2} r^2 -\frac{3}{2}r + 1.\label{eq:p_1_lower_bound}
\end{align}
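The closed-form count $N(r,1)=\tfrac12 \lfloor r \rfloor \lfloor r-1 \rfloor$ is easy to confirm against a brute-force count; the following Python sketch (illustration only) does so for a few sample $r$-values:

```python
from math import floor

# Positive-integer lattice points under the line x + y = r, counted
# directly, versus the closed form floor(r)*floor(r-1)/2.
def N_brute(r):
    return sum(1 for j in range(1, floor(r) + 1)
                 for k in range(1, floor(r) + 1) if j + k <= r)

for r in [2.0, 3.5, 4.96, 10.0, 12.25]:
    # floor(r) and floor(r-1) are consecutive integers, so the product is even
    assert N_brute(r) == floor(r) * floor(r - 1) // 2
```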
\subsection*{How can one efficiently maximize the lattice counting function for the $1$-ellipse?} A brute force method of counting how many lattice points lie under the line $r\Gamma(s)$, and then varying $s$ to maximize that number of lattice points, is simply unworkable in practice. The counting function $N(r,s)$ jumps up and down in value as $s$ varies, sometimes jumping quite rapidly, and a brute force method of sampling at a finite collection of $s$-values can never be expected to capture all such jump points or their precise locations.
Instead, for a given $r$ we should pre-identify the possible jump values of $s$, and use that information to count the lattice points. We start with the simple observation that a lattice point $(j,k)$ lies under the line $r\Gamma(s)$ if and only if
\[
sj+s^{-1}k \leq r ,
\]
which is equivalent to
\begin{equation}\label{eq:s_quadratic}
js^2 -r s + k \leq 0 .
\end{equation}
For this quadratic inequality to have a solution, the discriminant must be nonnegative, $r^2-4jk\geq 0$, and thus we need only consider lattice points beneath the hyperbola $r^2=4xy$. For each such lattice point, equality holds in \autoref{eq:s_quadratic} for two positive $s$-values, namely
\[
s_{min}(j,k;r) = \frac{r-\sqrt{r^2-4jk}}{2j} , \quad
s_{max}(j,k;r) = \frac{r+\sqrt{r^2-4jk}}{2j}.
\]
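That $s_{min}$ and $s_{max}$ are the two roots of the quadratic $js^2-rs+k=0$ can be checked directly; the Python sketch below does so for the sample lattice point $(j,k)=(2,3)$ with $r=6$ (hypothetical values chosen for illustration):

```python
from math import sqrt

# The two s-values at which the segment r*Gamma(s) passes through (j,k)
# are the roots of j*s^2 - r*s + k = 0.
j, k, r = 2, 3, 6.0
disc = sqrt(r*r - 4*j*k)
s_lo = (r - disc) / (2*j)
s_hi = (r + disc) / (2*j)
for s in (s_lo, s_hi):
    assert abs(s*j + k/s - r) < 1e-9   # (j,k) lies on the line s*x + y/s = r
```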
The geometric meaning of these values is as follows: as $s$ increases from $0$ to $\infty$, one endpoint of the line segment $r\Gamma(s)$ slides up the $y$-axis while the other endpoint moves left on the $x$-axis. The line segment passes through the point $(j,k)$ twice: first when $s=s_{min}(j,k;r)$ and again when $s=s_{max}(j,k;r)$. The point $(j,k)$ lies below the line when $s$ belongs to the closed interval between these two values.
Thus the counting function is
\begin{align*}
N(r,s)
&=\# \big\{ (j,k):s_{min}(j,k;r) \leq s \leq s_{max}(j,k;r) \big\} \\
& = \sum_{j,k>0}\mathbbm{1}_{s_{min}(j,k;r)\leq s}-\sum_{j,k>0}\mathbbm{1}_{s_{max}(j,k;r)<s}
\end{align*}
where we sum only over positive-integer lattice points with $4jk \leq r^2$.
The last formula says that the counting function $N(r,s)$ equals the number of values $s_{min}(j,k;r)$ that are less than or equal to $s$ minus the number of values $s_{max}(j,k;r)$ that are less than $s$. To facilitate the evaluation in practice, one should sort the list of values of $s_{min}(j,k;r)$ into increasing order, and similarly sort the list of values of $s_{max}(j,k;r)$. The numbers in these two lists are the only numbers where $N(r,s)$ can change value, as $s$ increases. In particular, when $s$ increases to $s_{min}(j,k;r)$, the point $(j,k)$ is picked up by the line segment for the first time and so $N(r,s)$ increases by $1$. When $s$ increases strictly beyond $s_{max}(j,k;r)$, the point $(j,k)$ is dropped by the line segment and so $N(r,s)$ decreases by $1$. Note the counting function might increase or decrease by more than $1$ at some $s$-values, if the sorted lists of $s_{min}$ and $s_{max}$ values have repeated entries (arising from lattice points that are picked up by, or else dropped by, the line segment at the same $s$-value).
After sorting the $s_{min}$ and $s_{max}$ lists, we evaluate the maximum of $N(r,s)$ by scanning through the two lists, increasing a counter by $1$ at each number in the sorted $s_{min}$ list, and decreasing the counter just after each number in the sorted $s_{max}$ list. The largest value achieved by the counter is the maximum of $N(r,s)$, and $S(r)$ consists of the closed interval or intervals of $s$-values on which this maximum count is attained.
By this method, we can maximize the lattice counting function for the $1$-ellipse in a computationally efficient manner, for any given $r>0$. The code is available in \cite[Appendix~B]{thesis}.
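A minimal Python sketch of this sweep follows. It is an illustration written for this exposition, not the code from \cite[Appendix~B]{thesis}; a brute-force count at a single $s$-value is included for comparison.

```python
from math import sqrt, floor

def sweep_max_count(r):
    # list every s-value at which a lattice point is picked up (s_min)
    # or dropped (s_max), sort the events, then scan with a counter
    events = []                      # (s-value, +1 for pickup / -1 for drop)
    j = 1
    while 4*j <= r*r:                # a point (j,k) requires 4*j*k <= r^2
        for k in range(1, floor(r*r/(4*j)) + 1):
            disc = sqrt(max(r*r - 4*j*k, 0.0))
            events.append(((r - disc)/(2*j), +1))
            events.append(((r + disc)/(2*j), -1))
        j += 1
    events.sort(key=lambda e: (e[0], -e[1]))   # pickups before drops at ties
    best = count = 0
    for _, delta in events:
        count += delta
        best = max(best, count)      # a point is still counted at its s_max
    return best

def brute_count(r, s):               # direct count at one s, for comparison
    return sum(1 for j in range(1, floor(r/s) + 1)
                 for k in range(1, floor((r - s*j)*s) + 1))

# Example: for r = 4.96 the best stretch factor captures 9 lattice points,
# three more than the 45-45-90 triangle (s = 1) does.
assert brute_count(4.96, 1.0) == 6
assert sweep_max_count(4.96) == 9
```

Note the tie-breaking in the sort: at a shared $s$-value, pickups are processed before drops, so the running counter attains exactly $N(r,s)$ at each pickup value, where the maximum over $s$ is achieved.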
When presenting the results of this method graphically, in \autoref{fig:optimal_s_p_1}, we plot only the largest $s$ value in $S(r)$, because the family of $1$-ellipses is invariant under the map $s \mapsto 1/s$ and so the smallest value in $S(r)$ will be just the reciprocal of the largest value.
\subsection*{Why is the $1$-ellipse not covered by our theorems?} For the $p$-ellipse with $p=1$, \autoref{th:S_limit_general} does not apply because $f$ is linear and so $f^{\prime \prime} \equiv 0$. Specifically, in the proof we see inequalities \autoref{eq:psi_f_bound} and \autoref{eq:psi_g_bound} are no longer useful, since their right sides are infinite. The situation cannot easily be rescued, because the left side of \autoref{eq:gamma_a_asymptotic} need not even be $o(r)$. For example, when $s=1$ and $r$ is an integer, by evaluating the number $N(r,1)$ of lattice points under the curve $y=r-x$ we find
\[
N(r,1) - r^2 \operatorname{Area}(\Gamma) + r(L+M)/2
= \frac{1}{2} r(r-1) - \frac{1}{2} r^2 + r = \frac{1}{2} r ,
\]
which is of order $r$ and hence has the same order as the ``boundary term'' $r(L+M)/2$ on the left side. Thus the method breaks down completely for $p=1$. We seek instead to illuminate the situation through numerical investigations.
\subsection*{A cluster value at $s = \sqrt{2}${\,}?}
Inspired by the numerical calculations in \autoref{fig:optimal_s_p_1}, we will show that $s=\sqrt{2}$ gives a substantially higher count of lattice points than $s=1$, for a certain sequence of $r$-values tending to infinity. This observation suggests (but does not prove) that $\sqrt{2}$ or some number close to it should belong to $S(r)$ for those $r$-values. To be clear: we have not found a proof of this claim. Doing so would provide a counterexample to the idea that the set $S(r)$ converges to $\{ 1 \}$ as $r \to \infty$.
To compare the counting functions for $s=1$ and $s=\sqrt{2}$, we first notice that for $s=1$ the counting function for the $1$-circle is given by
\[
N(r,1) = \lfloor r \rfloor \lfloor r-1 \rfloor / 2 , \qquad r>0 .
\]
At $s=\sqrt{2}$ the slope of the $1$-ellipse is $-2$, and for the special choice $r=\sqrt{2}(m+1/2)$ with $m \geq 1$ the counting function can be evaluated explicitly as
\[
N(r,\sqrt{2}) = m^2 .
\]
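This closed-form value can be confirmed by direct counting; in the Python sketch below (illustration only), the tolerance $10^{-9}$ guards against boundary lattice points, which satisfy $sj+s^{-1}k=r$ exactly, being lost to rounding:

```python
from math import sqrt, floor

# Check N(sqrt(2)*(m + 1/2), sqrt(2)) = m^2 by direct counting.
# The condition s*j + k/s <= r is equivalent to 2j + k <= 2m + 1 here.
s = sqrt(2)
for m in range(1, 30):
    r = s * (m + 0.5)
    count = sum(1 for j in range(1, floor(r/s) + 1)
                  for k in range(1, floor((r - s*j)*s + 1e-9) + 1))
    assert count == m * m
```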
We further choose $m$ such that $r \in (n-1/4,n)$ for some integer $n$; an increasing sequence of such $m$-values exists because the multiples of $\sqrt{2}$ are dense modulo $1$. Then, writing $r=n-\epsilon$ where $\epsilon<1/4$, we have
\begin{align*}
N(r,\sqrt{2})-N(r,1)
& = m^2 - (n-1)(n-2)/2 \\
& = \frac{1}{2} (r^2-\sqrt{2}r+1/2) - \frac{1}{2} (r+\epsilon-1)(r+\epsilon-2) \\
& \geq \frac{1}{2}r - (\text{constant}) .
\end{align*}
Hence $\limsup_{r \to \infty} \big( N(r,\sqrt{2})-N(r,1) \big)/r \geq 1/2$, and so $s=\sqrt{2}$ can give (for certain choices of $r$) a substantially higher count of lattice points than $s=1$, as we wanted to show.
The work above implies that $1 \notin S(r)$ for a sequence of $r$-values tending to infinity. More generally, Marshall and Steinerberger showed that if $x>0$ is rational then $\sqrt{x}\notin S(r)$ for a sequence of $r$-values tending to infinity (see \cite[Theorem~1]{marshall_steinerberger}), while if $x>0$ is irrational then $\sqrt{x}\notin S(r)$ for all sufficiently large $r$ (see \cite[Lemma~2]{marshall_steinerberger} and its associated discussion).
\subsection*{Conjecture for $p=1$}
To finish the chapter, we state some of our numerical observations as a conjecture. Let
\begin{align*}
S &= \{ (r,s): r>0, s \in S(r)\} \subset (0,\infty) \times (0,\infty),\\
\overline{S} &= \text{closure of $S$ in $[0,\infty] \times [0,\infty]$}, \\
S(\infty) & = \{ s\in [0, \infty]: (\infty,s) \in \overline{S}\}.
\end{align*}
Earlier in the chapter we proved that $S(\infty) \subset [1/3,3]$.
The clustering behavior of $S(r)$ observed in \autoref{fig:optimal_s_p_1} suggests the following conjecture.
\begin{conjecture}[$p=1$] \label{conj:p_1}
The limiting set $S(\infty)$ is countably infinite, and is contained in
\[
[1/3,3] \cap \{\sqrt{x}:x\in \mathbb{Q}, x>0 \}.
\]
\end{conjecture}
Marshall and Steinerberger \cite[Theorem~2]{marshall_steinerberger} recently proved that $S(\infty)$ contains (countably) infinitely many square roots of rational numbers and is contained in $[1/\sqrt{5},\sqrt{5}]$. For example, they showed the set $S(\infty)$ contains $1$ and $\sqrt{3/2}$. Yet a precise characterization of the set remains elusive. One would like a characterization in terms of some number theoretic condition.
\section{\bf Connection between counting function maximization and eigenvalue minimization}
\label{sec:relation}
Maximizing a counting function is morally equivalent to minimizing the size of the things being counted. Let us apply this general principle to the case of the circle
\[
\Gamma:x^2+y^2=1 \quad \text{in the first quadrant,}
\]
and its associated ellipses $r\Gamma(s)$. In this section, $L=M=1$ and $\operatorname{Area} (\Gamma) = \pi/4$.
\subsection*{Minimizing eigenvalues of the Dirichlet Laplacian on rectangles}
Write
\begin{equation} \label{eq:Dirichleteigen}
\{ \lambda_n(s) : n=1,2,3,\ldots \} = \{ (js)^2+(ks^{-1})^2 : j,k =1,2,3,\ldots \}
\end{equation}
so that $\lambda_n(s)$ is the $n$th eigenvalue of the Dirichlet Laplacian on a rectangle of side lengths $s^{-1}\pi$ and $s\pi$. (The eigenfunctions have the form $\sin(jsx) \sin(ks^{-1}y)$.) Then the lattice point counting function is the eigenvalue counting function, because
\begin{align*}
N(r,s)
& = \# \{ (j,k) : (js)^2+(ks^{-1})^2 \leq r^2 \} \\
& = \# \{ n : \lambda_n(s) \leq r^2 \} .
\end{align*}
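As a quick numerical check of this identity (our illustration, not part of the paper), one can count both sides directly for a few sample values of $r$ and $s$:

```python
def lattice_count(r, s):
    # N(r,s): lattice points (j,k) with j,k >= 1 and (j*s)^2 + (k/s)^2 <= r^2
    total = 0
    j = 1
    while (j * s) ** 2 < r ** 2:
        total += sum(1 for k in range(1, int(r * s) + 2)
                     if (j * s) ** 2 + (k / s) ** 2 <= r ** 2)
        j += 1
    return total

def eigenvalue_count(r, s):
    # number of n with lambda_n(s) <= r^2, read off the sorted eigenvalue list
    jmax, kmax = int(r / s) + 1, int(r * s) + 1
    lams = sorted((j * s) ** 2 + (k / s) ** 2
                  for j in range(1, jmax + 1) for k in range(1, kmax + 1))
    return sum(1 for lam in lams if lam <= r ** 2)
```

Both functions enumerate the same finite set of pairs $(j,k)$, so agreement is guaranteed; the point is that eigenvalue counting for the rectangle is literally lattice-point counting for the ellipse.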
Define
\[
S_*(n) = \argmin_{s>0} \lambda_n(s) ,
\]
so that $S_*(n)$ is the set of $s$-values that minimize the $n$th eigenvalue.
The next result says that the rectangle minimizing the $n$th eigenvalue will converge to a square as $n \to \infty$.
\begin{corollary}[Optimal Dirichlet rectangle is asymptotically balanced, due to Antunes and Freitas \protect{\cite[Theorem~2.1]{AF13}}; Gittins and Larson \protect{\cite{GL17}}] \label{co:relationDirichlet} \
\noindent The optimal stretch factor for minimizing $\lambda_n(s)$ approaches $1$ as $n \to \infty$, with
\[
S_*(n) \subset [1-O(n^{-1/12}),1+O(n^{-1/12})] ,
\]
and the minimal Dirichlet eigenvalue satisfies the asymptotic formula
\[
\min_{s > 0} \lambda_n(s) = \frac{4}{\pi} n + \Big( \frac{4}{\pi} \Big)^{\! 3/2} n^{1/2} + O(n^{1/3}) .
\]
\end{corollary}
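A brute-force sketch (ours; the names, truncation, and grid are arbitrary choices, not from the paper) of the minimization behind this corollary: for moderate $n$ the eigenvalue $\lambda_n(s)$ can be computed by enumerating and sorting the values $(js)^2+(ks^{-1})^2$, and one can scan a grid of stretch factors. For $n=500$ the square already beats strongly stretched rectangles, and $\lambda_n(s)=\lambda_n(1/s)$ exactly, by swapping $j$ and $k$.

```python
def dirichlet_lambda(n, s, jkmax=120):
    # n-th smallest eigenvalue (j*s)^2 + (k/s)^2 over j,k >= 1; valid while the
    # truncation jkmax is large enough that every omitted pair exceeds vals[n-1]
    vals = sorted((j * s) ** 2 + (k / s) ** 2
                  for j in range(1, jkmax + 1)
                  for k in range(1, jkmax + 1))
    return vals[n - 1]

# coarse scan of the stretch factor for n = 500; the minimizer sits near s = 1,
# though for modest n it can be displaced by lattice-point fluctuations
grid = [0.5 + 0.05 * i for i in range(31)]        # s in [0.5, 2.0]
best = min(grid, key=lambda s: dirichlet_lambda(500, s))
```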
The proof is a modification of our \autoref{th:S_limit}. Full details are provided in the ArXiv version of this paper \cite[Corollary 10]{Lau_Liu}. In the proof one relies on \autoref{prop:counting_ineq} to bound the stretch factor $s$ of the optimal rectangle. \autoref{prop:counting_ineq} is simpler in both statement and proof than the corresponding Theorem~3.1 of Antunes and Freitas \cite{AF13}, which contains an additional lower order term with an unhelpful sign.
\begin{remark}
One would like to prove using only the definition of the counting function that
\[
\text{$S_*(n) \to 1$ \quad if and only if \quad $S(r) \to 1$,}
\]
or in other words that the rectangle minimizing the $n$th eigenvalue will converge to a square if and only if the ellipse maximizing the number of lattice points converges to a circle. Then \autoref{co:relationDirichlet} would follow qualitatively from \autoref{th:S_limit}, without needing any additional proof. Our attempts to find such an abstract equivalence have failed due to possible multiplicities in the eigenvalues. Perhaps an insightful reader will see how to succeed where we have failed.
\end{remark}
\subsection*{Maximizing eigenvalues of the Neumann Laplacian on rectangles}
If one considers lattice points in the closed first quadrant, that is, allowing also the lattice points on the axes, then one obtains the Neumann eigenvalues of the rectangle having side lengths $s^{-1}\pi$ and $s\pi$:
\[
\{ \mu_n(s) : n=1,2,3,\ldots \} = \{ (js)^2+(ks^{-1})^2 : j,k =0,1,2,\ldots \} .
\]
Notice the first eigenvalue is always zero: $\mu_1(s)=0$ for all $s$. The lattice point counting function is once again an eigenvalue counting function, because
\[
\mathcal{N}(r,s)
= \# \{ (j,k) : (js)^2+(ks^{-1})^2 \leq r^2 \} = \# \{ n : \mu_n(s) \leq r^2 \} .
\]
The appropriate problem is to maximize the $n$th eigenvalue (rather than minimizing as in the Dirichlet case), and so we let
\[
\mathcal{S}_*(n) = \argmax_{s>0} \mu_n(s) .
\]
The corollary below says that the rectangle maximizing the $n$th Neumann eigenvalue will converge to a square as $n \to \infty$.
\begin{corollary}[Optimal Neumann rectangle is asymptotically balanced, due to van den Berg, Bucur and Gittins \protect{\cite{BBG16a}}; Gittins and Larson \protect{\cite{GL17}}] \label{co:relationNeumann} \
\noindent The optimal stretch factor for maximizing $\mu_n(s)$ approaches $1$ as $n \to \infty$, with
\[
\mathcal{S}_*(n) \subset [1-O(n^{-1/12}),1+O(n^{-1/12})] ,
\]
and the maximal Neumann eigenvalue satisfies the asymptotic formula
\[
\max_{s > 0} \mu_n(s) = \frac{4}{\pi} n - \Big( \frac{4}{\pi} \Big)^{\! 3/2} n^{1/2} + O(n^{1/3}) .
\]
\end{corollary}
One adapts the arguments used for \autoref{th:S_limit_neumann}. A complete proof is in \cite[Corollary 11]{Lau_Liu}. Note that our lower bound on the counting function in \autoref{prop:counting_ineq_neumann}, which one uses to control the stretch factor $s$ of the optimal rectangle, is simpler in both statement and proof than the corresponding Lemma~2.2 by van den Berg \emph{et al.}\ \cite{BBG16a}. Further, our \autoref{prop:counting_ineq_neumann} holds for all $r>0$, whereas \cite[Lemma~2.2]{BBG16a} holds only for $r \geq 2s$. Consequently we need not establish an \emph{a priori} bound on $s$ as was done in \cite[Lemma~2.3]{BBG16a}.
Those authors did obtain a slightly better rate of convergence than we do, by calling on sophisticated lattice counting estimates of Huxley; see the comments after \autoref{th:asymptotic}.
| {
"timestamp": "2017-09-07T02:05:14",
"arxiv_id": "1609.06172",
"language": "en",
"url": "https://arxiv.org/abs/1609.06172",
"abstract": "We aim to maximize the number of first-quadrant lattice points in a convex domain with respect to reciprocal stretching in the coordinate directions. The optimal domain is shown to be asymptotically balanced, meaning that the stretch factor approaches 1 as the \"radius\" approaches infinity. In particular, the result implies that among all p-ellipses (or Lamé curves), the p-circle encloses the most first-quadrant lattice points as the radius approaches infinity.The case p=2 corresponds to minimization of high eigenvalues of the Dirichlet Laplacian on rectangles, and so our work generalizes a result of Antunes and Freitas. Similarly, we generalize a Neumann eigenvalue maximization result of van den Berg, Bucur and Gittins.The case p=1 remains open: which right triangles in the first quadrant with two sides along the axes will enclose the most lattice points, as the area tends to infinity?",
"subjects": "Metric Geometry (math.MG); Number Theory (math.NT); Spectral Theory (math.SP)",
"title": "Optimal stretching for lattice points and eigenvalues"
} |
https://arxiv.org/abs/2109.09928 | A numerical study of L-convex polyominoes and 201-avoiding ascent sequences | For L-convex polyominoes we give the asymptotics of the generating function coefficients, obtained by analysis of the coefficients derived from the functional equation given by Castiglione et al. \cite{CFMRR7}. For 201-avoiding ascent sequences, we conjecture the solution, obtained from the first 23 coefficients of the generating function. The solution is D-finite, indeed algebraic. The conjectured solution then correctly generates all subsequent coefficients. We also obtain the asymptotics, both from direct analysis of the coefficients, and from the conjectured solution. As well as presenting these new results, our purpose is to illustrate the methods used, so that they may be more widely applied. | \section{Introduction}
\label{introduction}
In \cite{CFMRR7}, Castiglione et al. gave a functional equation for the number of $L$-convex polyominoes. These are defined as polyominoes with the property that any two cells may be joined by an $L$-shaped path. An example is shown in Fig. \ref{Lconvex},
and it can be seen that a typical such polygon can be considered as a stack polyomino placed atop an upside-down stack polyomino. A stack polyomino is just a column-convex bargraph polyomino. The perimeter generating function has a simple, rational expression, $$P(x)=\frac{(1-x)^2}{2(1-x)^2-1} = 1+2x+7x^2+24x^3+\cdots,$$ and is given in the On-line Encyclopaedia of Integer Sequences (OEIS) as A003480, \cite{OEIS}.
This is readily solved to give $$[x^n]P(x) =\frac{(2+\sqrt{2})^{n+1}-(2-\sqrt{2})^{n+1}}{4\sqrt{2}} \sim \frac{1+\sqrt{2}}{4}(2+\sqrt{2})^n.$$
The area generating function is \cite{CFMRR7}
\begin{equation} \label{eq:lconv}
A(q)=1+\sum_{k \ge 0} \frac{q^{k+1}f_k(q)}{(1-q)^2(1-q^2)^2\cdots(1-q^k)^2(1-q^{k+1})} = 1+q+2q^2+6q^3+15q^4+ \cdots,
\end{equation}
where $$f_k(q)=2f_{k-1}(q)-(1-q^k)^2 f_{k-2}(q),$$ with initial conditions $f_0(q)=1,$ and $f_1(q)=1+2q-q^2.$
We used this expression to generate 2000 terms of the sequence, and these are given in the OEIS \cite{OEIS} as sequence A126764. Analysis of this sequence allowed us to derive the asymptotics as
$$[q^n]A(q) \sim \frac{13\sqrt{2}}{768\cdot n^{3/2}}\exp(\pi\sqrt{13n/6}).$$ In the next section we will describe how this result was obtained.
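Equation (\ref{eq:lconv}) is straightforward to iterate with truncated power series. The following Python sketch (ours, for illustration; the paper's 2000-term computation follows the same scheme at higher order) reproduces the quoted coefficients:

```python
N = 50  # truncation order: all series are computed mod q^N

def mul(a, b):
    # product of two truncated integer power series
    c = [0] * N
    for i in range(N):
        if a[i]:
            for j in range(N - i):
                c[i + j] += a[i] * b[j]
    return c

def divide(a, m):
    # divide the truncated series a by (1 - q^m): h_i = a_i + h_{i-m}
    h = list(a)
    for i in range(m, N):
        h[i] += h[i - m]
    return h

one = [1] + [0] * (N - 1)
f = {0: one[:], 1: [1, 2, -1] + [0] * (N - 3)}    # f_0 = 1, f_1 = 1 + 2q - q^2
A = one[:]                                        # the leading 1 in A(q)
for k in range(N - 1):                            # the q^{k+1} factor kills k+1 >= N
    if k >= 2:
        g = one[:]                                # g = (1 - q^k)^2 = 1 - 2q^k + q^{2k}
        g[k] -= 2
        if 2 * k < N:
            g[2 * k] += 1
        f[k] = [2 * x - y for x, y in zip(f[k - 1], mul(f[k - 2], g))]
        del f[k - 2]
    term = [0] * (k + 1) + f[k][: N - k - 1]      # q^{k+1} f_k(q)
    for j in range(1, k + 1):
        term = divide(divide(term, j), j)         # the (1-q^j)^2 factors
    term = divide(term, k + 1)                    # the final (1-q^{k+1}) factor
    A = [x + y for x, y in zip(A, term)]
```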
\begin{figure}[ht]
\centerline{ \includegraphics[width=0.5\linewidth]{for_tony.pdf} }
\caption{An $L$-convex polyomino.}
\label{Lconvex}
\end{figure}
The second problem we are considering is that of 201-avoiding ascent sequences, defined below.
Given a sequence of non-negative integers $n_1 n_2 n_3 \ldots n_k,$ the number of {\em ascents} in this sequence is $$asc(n_1 n_2 n_3 \ldots n_k) = |\{ 1 \le j < k : n_j < n_{j+1} \} |.$$
The given sequence is an {\em ascent sequence} of length $k$ if it satisfies $n_1=0$ and $n_i \in [0,1+asc(n_1 n_2 n_3 \ldots n_{i-1} )] $ for all $2 \le i \le k.$ For example, $(0,1,0,2,3,1,0,2)$ is an ascent sequence,
but $(0,1,2,1,4,3)$ is not, as $4 > asc(0121)+1=3.$
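The definition is easy to mechanize; a short Python sketch (ours) checks the two examples above:

```python
def asc(seq):
    # number of ascents: positions j with seq[j] < seq[j+1]
    return sum(1 for j in range(len(seq) - 1) if seq[j] < seq[j + 1])

def is_ascent_sequence(seq):
    # n_1 = 0, and each later entry is at most 1 + asc(prefix before it)
    if not seq or seq[0] != 0:
        return False
    return all(0 <= seq[i] <= 1 + asc(seq[:i]) for i in range(1, len(seq)))
```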
Ascent sequences came to prominence when Bousquet-M\'elou et al. \cite{BCDK10} related them to $(2+2)$-free posets. They have subsequently been linked to other combinatorial structures. See \cite{K11} for a number of examples. Later, Duncan and Steingrimsson \cite{DS11} studied {\em pattern-avoiding ascent sequences. }
A pattern is simply a word on nonnegative integers (repetitions allowed). Given an ascent sequence $(n_1 n_2 n_3 \ldots n_k),$ an occurrence of a pattern $p$ is a subsequence $n_{i_1}n_{i_2}\ldots n_{i_j},$ where $j$ is the length of $p,$ whose letters appear in the same relative order of size as those in $p.$ For example, the ascent sequence $(0,1,0,2,3,1)$ has three occurrences of the pattern $001,$ namely $002,$ $003$ and $001.$ If an ascent sequence does not contain a given pattern, it is said to be {\em pattern avoiding}.
The connection between pattern-avoiding ascent sequences and other combinatorial objects, such as set partitions, is the subject of \cite{DS11}, while the connection between pattern-avoiding ascent sequences and a number of stack sorting problems is explored in \cite{CCF20}.
Considering patterns of length three, the number of ascent sequences of length $n$ avoiding the patterns $001,$ $010,$ $011,$ and $012$ is given in the OEIS (sequence A000079) as $2^{n-1}.$ For the pattern $102$ the number is $(3^{n-1}+1)/2$ (OEIS A007051), while for $101$ and $021$ the number is just given by the $n^{th}$ Catalan number, $C_n,$ given in the OEIS as sequence A000108.
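These counts are easy to confirm by brute force for small $n$ (our sketch, not from the paper; the pattern test checks order-isomorphism of a subsequence, respecting equalities):

```python
from itertools import combinations

def ascents(seq):
    return sum(1 for j in range(len(seq) - 1) if seq[j] < seq[j + 1])

def ascent_sequences(n):
    # generate all ascent sequences of length n by repeated extension
    seqs = [(0,)]
    for _ in range(n - 1):
        seqs = [s + (v,) for s in seqs for v in range(ascents(s) + 2)]
    return seqs

def contains(seq, pat):
    # does seq contain a subsequence order-isomorphic to pat (ties respected)?
    def iso(sub):
        return all((sub[i] < sub[j]) == (pat[i] < pat[j]) and
                   (sub[i] == sub[j]) == (pat[i] == pat[j])
                   for i in range(len(pat)) for j in range(i + 1, len(pat)))
    return any(iso(sub) for sub in combinations(seq, len(pat)))

def count_avoiding(n, pat):
    return sum(1 for s in ascent_sequences(n) if not contains(s, pat))
```

The same enumeration also reproduces the small initial counts for the pattern $201$ studied below.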
For the pattern 201, the first twenty-eight coefficients of the generating function are given in the OEIS as sequence A202062, and it is this sequence that we have used in our investigation.
First, we found that the coefficients $u(n)$ given in the OEIS satisfied a recurrence relation, given in Sec. \ref{201} below.
This recurrence can be converted to a second-order inhomogeneous ODE, or, as we prefer, a third-order homogeneous ODE. The root of smallest modulus of the cubic factor of the polynomial multiplying the third derivative in the ODE is $x=0.1370633395\ldots$ and is the radius of convergence of the generating function, and of course the reciprocal of the growth constant $\mu=7.295896946 \ldots.$
This ODE, readily converted into differential operator form, can be factored into the direct sum of two differential operators, one of first order and one of second order. The solution of the first order ODE is a rational function
while the solution of the second turns out to satisfy a cubic algebraic equation.
This can be solved by one's favourite computer algebra package (we give the solution below), and expanding this solution, and adding it to the expansion of the solution of the first-order ODE, gives the required expansion.
This analysis required only the first 24 terms given in OEIS, so the correct prediction of the next four terms gives us confidence that this is indeed the exact solution.
Expanding this solution and analysing the coefficients, as described below, leads us to the result
\begin{equation}
u(n) \sim C\frac{\mu^n}{n^{9/2}},
\end{equation}
where $$C=\frac{35}{16}\left (\frac{4107}{\pi} - \frac{84}{\pi}\sqrt{9289}\cos\left (\frac{\pi}{3} +\frac{1}{3} \arccos\left [\frac{255709\sqrt{9289}}{24653006} \right ]\right )\right )^{1/2}.$$
In the next two sections we give the derivation of the results given above.
\section{$L$-convex polyominoes} \label{Lconv}
As mentioned above, a typical $L$-convex polyomino can be considered as a stack polyomino placed atop an upside-down stack polyomino. Stack polyominos counted by area have generating function
$$S(q)=\sum s_n q^n =\sum_{n \ge 1} \frac{q^n}{(q)_{n-1}(q)_n},$$ where $(q)_n \equiv \prod_{k=1}^n (1-q^k),$ and
$$s_n \sim \frac{\exp(2\pi\sqrt{n/3})}{8\cdot 3^{3/4} \cdot n^{5/4}},$$ as first shown by Auluck \cite{A51}.
Thus putting two such objects together, one would expect a similar expression for the asymptotic form of the coefficients, that is to say, an expression of the form $l_n \sim \frac{\exp(a\pi n^\beta)}{c n^\delta},$ where
we write $L(x)=\sum l_n x^n$ for the ordinary generating function of $L$-convex polyominoes. We expect both exponents $\beta$ and $\delta$ to be simple rationals, as for stack polyominoes, and the constants $a$ and $c$ to be products of integers and small fractional powers.
The analysis of series with asymptotics of this type is described in detail in \cite{G15} and we will not repeat that discussion here, but simply apply the methods described there.
First, we consider the ratios of successive coefficients, $r_n = l_n/l_{n-1}.$ For a power-law singularity, one expects the sequence of ratios to approach the growth constant linearly when plotted against $1/n.$ In our case the growth constant is 1. That is to say, there is no exponential growth.
For a singularity of the assumed type, which is called a {\em stretched exponential,} it follows that the ratio of coefficients behaves as
\begin{equation} \label{eq:rn}
r_n = \frac{l_n}{l_{n-1}} = 1+\frac{a\beta\pi}{n^{1-\beta}}+ O \left ( \frac{1}{n} \right ),
\end{equation}
so we expect the ratios to approach a limit of 1 linearly when plotted against $1/n^{1-\beta},$ and to display curvature when plotted against $1/n.$
\begin{figure}[h!]
\begin{minipage}[t]{0.45\textwidth}
\centerline{\includegraphics[width=\textwidth,angle=0]{r1L.jpg} }
\caption{$L$-convex ratios plotted against $1/n.$}
\label{fig:r1}
\end{minipage}
\hspace{0.05\textwidth}
\begin{minipage}[t]{0.45\textwidth}
\centerline{\includegraphics[width=\textwidth,angle=0]{r2L.jpg}}
\caption{$L$-convex ratios plotted against $1/\sqrt{n}.$}
\label{fig:r2}
\end{minipage}
\end{figure}
We show the ratios plotted against $1/n$ and $1/\sqrt{n}$ in Figs. \ref{fig:r1} and \ref{fig:r2} respectively. These plots are behaving as expected, with the plot against $1/n$ displaying considerable curvature, while the plot against $1/\sqrt{n}$ is
visually linear. This is strong evidence that $\beta=1/2,$ just as is the case for stack polyominoes.
In fact we can easily refine this estimate.
From (\ref{eq:rn}), one sees that
\begin{equation} \label{eq:rbeta}
r_n-1 = a\beta \pi \cdot n^{\beta-1} + O\left ( \frac{1}{n} \right ).
\end{equation}
Accordingly, a plot of $\log(r_n-1)$ versus $\log{n}$ should be linear, with gradient $\beta-1.$ We would expect an estimate of $\beta$ close to that which linearised the ratio plot. In Fig. \ref{fig:ll1} we show the log-log plot, and in Fig. \ref{fig:grad} we show the local gradient plotted against $1/\sqrt{n}.$ The linearity of the first plot is obvious, while the second is convincingly going to a limit of $-0.5$ as $n \to \infty.$
\begin{figure}[h!]
\begin{minipage}[t]{0.45\textwidth}
\centerline{\includegraphics[width=\textwidth,angle=0]{rllL.jpg} }
\caption{Log-log plot of $r_n-1$ against $n.$}
\label{fig:ll1}
\end{minipage}
\hspace{0.05\textwidth}
\begin{minipage}[t]{0.45\textwidth}
\centerline{\includegraphics[width=\textwidth,angle=0]{betaL.jpg}}
\caption{Gradient of log-log plot.}
\label{fig:grad}
\end{minipage}
\end{figure}
Having convincingly established that $\beta=1/2,$ just as for stack polyominoes, it remains to determine the other parameters. There are several ways one might proceed, but here is one that works quite well. From the conjectured asymptotic form, we write
\begin{equation} \label{eqn:fit}
\lambda_n \equiv \frac{\log(l_n)}{\pi\sqrt{n}} \sim a -\frac{\delta\log{n}}{\pi\sqrt{n}} -\frac{\log{c}}{\pi\sqrt{n}} ,
\end{equation}
so one can readily fit successive triples of values $\lambda_{k-1}, \lambda_k, \lambda_{k+1},$ to the linear equation $\lambda_n=e_1+e_2\frac{\log{n}}{\pi\sqrt{n}} +e_3\frac{1}{\pi\sqrt{n}}, $ with $k$ increasing until one runs out of known coefficients. Then $e_1$ should give an estimator of $a,$ $e_2$ an estimator of $-\delta,$ and $e_3$ an estimator of $-\log(c).$ The result of doing this is shown for $e_1$ and $e_2$ in Figs. \ref{fig:e1} and \ref{fig:e2} respectively.
We estimate the limits as $n \to \infty$ of $e_1$ as approximately $1.472,$ and $e_2$ as $-1.5.$ From the asymptotic expression for $s_n,$ we expect $a$ to likely involve a square root. So we look at $e_1^2 = 2.16678,$ which we conjecture to be $13/6.$ The exponent $\delta$ we expect to be a simple rational, and $3/2$ is indeed a simple rational. We don't show the plot for $e_3,$ as it does not give a precise enough estimate to conjecture the value of $\log(c)$ with any precision.
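The triple-fit just described can be illustrated on synthetic data with known parameters (our sketch; the values of $a,$ $\delta,$ and $c$ below are made up). Since the synthetic model is exact, solving the $3\times3$ linear system at any triple recovers the parameters:

```python
import math

a_true, delta_true, c_true = math.sqrt(13 / 6), 1.5, 41.77   # made-up "known" values

def l(n):
    # synthetic coefficients obeying the assumed form exactly (no correction terms)
    return math.exp(a_true * math.pi * math.sqrt(n)) / (c_true * n ** delta_true)

def solve3(M, v):
    # Gaussian elimination with partial pivoting for a 3x3 system
    A = [M[i][:] + [v[i]] for i in range(3)]
    for col in range(3):
        p = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[p] = A[p], A[col]
        for r in range(col + 1, 3):
            fac = A[r][col] / A[col][col]
            for c in range(col, 4):
                A[r][c] -= fac * A[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (A[r][3] - sum(A[r][c] * x[c] for c in range(r + 1, 3))) / A[r][r]
    return x

def fit_triple(k):
    # fit lambda_m = e1 + e2*log(m)/(pi*sqrt(m)) + e3/(pi*sqrt(m)) at m = k-1, k, k+1
    rows, rhs = [], []
    for m in (k - 1, k, k + 1):
        w = math.pi * math.sqrt(m)
        rows.append([1.0, math.log(m) / w, 1.0 / w])
        rhs.append(math.log(l(m)) / w)
    return solve3(rows, rhs)          # [e1, e2, e3] ~ [a, -delta, -log c]

e1, e2, e3 = fit_triple(10)
```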
\begin{figure}[h!]
\begin{minipage}[t]{0.45\textwidth}
\centerline{\includegraphics[width=\textwidth,angle=0]{e1.jpg} }
\caption{Plot of $e_1$ against $1/\sqrt{n}.$}
\label{fig:e1}
\end{minipage}
\hspace{0.05\textwidth}
\begin{minipage}[t]{0.45\textwidth}
\centerline{\includegraphics[width=\textwidth,angle=0]{e2.jpg}}
\caption{Plot of $e_2$ against $1/\sqrt{n}.$}
\label{fig:e2}
\end{minipage}
\end{figure}
So at this stage we can reasonably conjecture that $$l_n \sim \frac{\exp(\pi\sqrt{13n/6})}{c \cdot n^{3/2}}.$$ We reached this stage based on only 100 terms in the expansion. In order to both gain more confidence in the conjectured form, and to calculate the constant, we needed more terms, and eventually generated 2000 terms from eqn. (\ref{eq:lconv}).
With hindsight, an arguably more elegant way to analyse this series is to consider only the coefficients $l_{n^2}.$ Denote $l_{n^2} \equiv \ell_n.$ With $m=n^2,$ the conjectured form becomes $$\ell_n \sim \frac{\exp(n\pi\sqrt{13/6})}{c \cdot n^{3}}.$$ We have 44 coefficients of the series $\ell_n$ available, and these grow in the standard manner of an exponential modified by a power law, that is, $\ell_n \sim D\cdot \mu^n \cdot n^g.$ This sequence can then be analysed by the standard methods of series analysis.
We assume the asymptotic form to be $\ell_n \sim \frac{\exp(n\pi\sqrt{a})}{c \cdot n^{b}},$ with $a,$ $b,$ and $c$ to be determined.
We first form the ratios $$r_n^{(sq)}=\ell_n/\ell_{n-1} = \mu(1+g/n+ o(1/n)),$$ where $\mu=\exp(\pi \sqrt{a})$ and $g=-b.$ Plotting the ratios against $1/n$ should give a linear plot with gradient $\mu\cdot g$ and intercept $\mu.$ For a pure power-law the term $o(1/n)$ is $O(1/n^2),$ and the estimate of $\mu$ can be refined by plotting the linear intercepts $l_n^{(sq)}=n\cdot r_n^{(sq)}-(n-1)\cdot r_{n-1}^{(sq)}$ against $1/n^2.$ The results of doing this are shown in Figs. \ref{fig:r3} and \ref{fig:r4} for the ratios and linear intercepts respectively.
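The ratio and linear-intercept constructions are easily illustrated on a synthetic sequence with known $\mu$ and $g$ (our sketch; the numbers are made up):

```python
mu_true, g_true = 101.93, -3.0       # made-up test values

def ell(n):
    # synthetic sequence of the assumed exponential-times-power form
    return mu_true ** n * n ** g_true

r = {n: ell(n) / ell(n - 1) for n in range(2, 45)}                     # ratios
li = {n: n * r[n] - (n - 1) * r[n - 1] for n in range(3, 45)}          # linear intercepts
t = {n: (n**2 * li[n] - (n - 1)**2 * li[n - 1]) / (2 * n - 1)          # second stage
     for n in range(4, 45)}
```

The intercepts remove the $O(1/n)$ term and the sequence $t_n$ removes the $O(1/n^2)$ term, so each converges to $\mu$ markedly faster than the last.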
\begin{figure}[h!]
\begin{minipage}[t]{0.45\textwidth}
\centerline{\includegraphics[width=\textwidth,angle=0]{Lr3.jpg} }
\caption{Plot of ratios $r_n^{(sq)}$ against $1/{n}.$}
\label{fig:r3}
\end{minipage}
\hspace{0.05\textwidth}
\begin{minipage}[t]{0.45\textwidth}
\centerline{\includegraphics[width=\textwidth,angle=0]{Lr4.jpg}}
\caption{Plot of linear intercepts $l_n^{(sq)}$ against $1/{n^2}.$}
\label{fig:r4}
\end{minipage}
\end{figure}
It can be seen that the linear intercepts are much better converged. We can go further and eliminate the $O(1/n^2)$ term by forming the sequence $t_n=(n^2\cdot l_n^{(sq)}-(n-1)^2 \cdot l_{n-1}^{(sq)})/(2n-1),$ and these are shown in Fig. \ref{fig:r5}, plotted against $1/n^3.$
From this we estimate that the intercept of the plot with the ordinate is about 101.931. This is the growth constant $\mu$ of this sequence, and should be equal to $\exp(\pi\sqrt{a}),$ from which we find $a \approx 2.16666,$ which strongly suggests that $a=13/6$ exactly.
To estimate the exponent $b$ we find from the expression for the ratios that the exponent $g \sim (r_n^{(sq)}/\mu-1)\cdot n.$ We define $g_n=(r_n^{(sq)}/\mu-1)\cdot n,$ and using the estimate of $\mu$ just given, we obtain the plot shown in Fig. \ref{fig:g2}. This is rather convincingly approaching $g=-3.$
We can do better by calculating the linear intercepts, so forming the sequence $g2_n=n\cdot g_n - (n-1)\cdot g_{n-1}.$ A plot of $g2_n$ against $1/n^2$ is shown in Fig. \ref{fig:g3}. The result $g=-3$ is totally convincing.
\begin{figure}[h!]
\begin{minipage}[t]{0.45\textwidth}
\centerline{\includegraphics[width=\textwidth,angle=0]{Lr5.jpg} }
\caption{Plot of sequence $t_n$ against $1/{n^3}.$}
\label{fig:r5}
\end{minipage}
\hspace{0.05\textwidth}
\begin{minipage}[t]{0.45\textwidth}
\centerline{\includegraphics[width=\textwidth,angle=0]{Lgsq.jpg}}
\caption{Plot of exponent estimates $g_n$ against $1/{n}.$}
\label{fig:g2}
\end{minipage}
\end{figure}
\begin{figure}[h!]
\centerline{\includegraphics[width=8.5cm,angle=0]{Lg2sq.jpg} }
\caption{Plot of sequence $g2_n$ against $1/{n^2}.$ }
\label{fig:g3}
\end{figure}
In order to calculate the constant $c,$ we form the sequence $$c_n \equiv \frac{l_n\cdot n^{3/2}}{\exp(\pi\sqrt{13n/6})},$$ which converges to $1/c,$ and extrapolate the sequence $c_n$ using any of a variety of standard methods.
We used the Bulirsch-Stoer method \cite{SB80}, applied to the coefficient sequence $\{\ell_n\},$ with parameter $1/2,$ and $44$ terms in the sequence (corresponding to $44^2=1936$ terms in the original series). This gave the estimate $1/c \approx 0.023938510821419.$ This unknown number is likely to involve a square root, cube root or fourth root of a small integer, just as did the amplitude for $s_n.$
We investigated this by dividing by various powers of small integers, and tried to identify the result. Fortuitously, dividing the approximate value by $\sqrt{2}$ gave a result that the Maple\copyright\, command {\em identify} reported as $13/768.$ This implies $1/c=13\sqrt{2}/768=0.023938510821419577\cdots,$ agreeing to all quoted digits with the approximate value. The occurrence of $13$ in this fraction, as well as in the exponent square-root, is a reassuring feature, as is the factorisation of 768 as $3\cdot 2^8.$
Thus we conclude with the confident conjecture that the asymptotic form of the coefficients of $L$-convex polyominoes is
$$l_n \sim \frac{13\sqrt{2}\cdot \exp(\pi\sqrt{13n/6})}{3\cdot 2^8\cdot n^{3/2}},$$ in agreement with the form quoted in the Introduction.
\section{201-avoiding ascent sequences}\label{201}
From the coefficients $u(n), \,\,\,n=0..27$ given as A202062 in the OEIS \cite{OEIS},
we used the {\em gfun} package of Maple$^\copyright$ and immediately found that the coefficients satisfy the recurrence relation
\begin{align}
\notag
&(2n^2 + n)u(n) + (6n^2 + 45n + 60)u(n + 1) + (-34n^2 - 263n - 480)u(n + 2)\\
\notag
&+ (44n^2 + 421n + 984)u(n + 3) + (-20n^2 - 235n - 684)u(n + 4)\\
\notag
& + (2n^2 + 31n + 120)u(n + 5)=0, \\
&{\rm with\,\,\,}u(0) = 1, u(1) = 1, u(2) = 2, u(3) = 5, u(4) = 15.
\end{align}
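The recurrence is trivial to iterate exactly (a sketch of ours, using rational arithmetic so that the division by the coefficient of $u(n+5)$ can be seen to be exact):

```python
from fractions import Fraction

u = [1, 1, 2, 5, 15]                              # the stated initial conditions
for n in range(30):
    s = ((2*n**2 + n) * u[n]
         + (6*n**2 + 45*n + 60) * u[n + 1]
         - (34*n**2 + 263*n + 480) * u[n + 2]
         + (44*n**2 + 421*n + 984) * u[n + 3]
         - (20*n**2 + 235*n + 684) * u[n + 4])
    nxt = Fraction(-s, 2*n**2 + 31*n + 120)       # solve for u(n+5)
    u.append(int(nxt))                            # the division is exact here
```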
This recurrence can be converted to a second-order inhomogeneous ODE, again using {\em gfun} or, as we prefer, a third-order homogeneous ODE, using the gfun command {\em diffeqtohomdiffeq}, giving
$$P_3(x)f^{'''}(x)+P_2(x)f^{''}(x)+P_1(x)f^{'}(x)+P_0(x)f(x)=0,$$ where
\begin{align}
\notag
&P_3(x) =-2x^2(x^3 + 5x^2 - 8x + 1)(4x^4 - 30x^3 + 48x^2 - 36x + 15)(x - 1)^2,\\
\notag
&P_2(x)=-3x(x - 1)(12x^8 - 30x^7 - 652x^6 + 2734x^5 - 4767x^4 + 4758x^3\\
\notag
& - 2843x^2 + 870x - 85),\\
\notag
&P_1(x)=-24x^9 + 30x^8 + 2754x^7 - 13278x^6 + 28884x^5 - 38106x^4 \\
\notag
&+ 32436x^3 - 16620x^2 + 4350x - 420,\\
\notag
&P_0(x)=30(3x - 2)(3x^5 - 10x^4 + 19x^3 - 28x^2 + 24x - 7).
\end{align}
The root of smallest modulus of the cubic factor in $P_3(x)$ is $x=0.1370633395\ldots$ and is the radius of convergence of the solution, and of course the reciprocal of the growth constant $\mu=7.295896946 \ldots.$ Explicitly, $$\mu=\frac{14}{3}\cos\left ( \frac{\arccos(\frac{13}{14})} {3} \right ) + \frac{8}{3}.$$
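Both the numerical root and the closed trigonometric form are easy to verify (our sketch):

```python
import math

cubic = lambda x: x**3 + 5 * x**2 - 8 * x + 1

lo, hi = 0.1, 0.2            # bracket for the relevant root of the cubic factor
for _ in range(100):         # plain bisection
    mid = (lo + hi) / 2
    if cubic(lo) * cubic(mid) <= 0:
        hi = mid
    else:
        lo = mid
x_c = (lo + hi) / 2
mu = 1 / x_c
mu_closed = 14 / 3 * math.cos(math.acos(13 / 14) / 3) + 8 / 3
```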
This ODE can then be studied using the Maple\copyright\, package {\em DEtools}. We first convert the ODE to differential operator form through the command {\em de2diffop}, then factor this into the direct sum of two differential operators by the command {\em DFactorLCLM}. One of these operators is first order and one is second order.
The solution of the first order ODE is immediately given by the {\em dsolve} command, and is the rational function
\begin{equation} \label{eqn:ODE1}
y_1(x) = \frac{x^4 + 26x^3 - 45x^2 + 18x + 1}{12(x - 1)x^3}.
\end{equation}
To solve the second-order ODE, we obtain a series solution, the first term of which is O$(x^{-3}).$ We multiply the solution by $x^3$ to obtain a regular power series, then use the {\em gfun} command {\em seriestoalgeq} to discover the cubic equation,
\begin{align} \label{cubic}
\notag
&4(x-1)^3y_2(x)^3-\\
\notag
&3(x - 1)(x^2 - x + 1)(x^6 - 235x^5 + 1430x^4 - 1695x^3 + 270x^2 + 229x + 1)y_2(x)\\
\notag
&+x^{12} + 510x^{11} - 14631x^{10} + 80090x^9 - 218058x^8 + 316290x^7 - 253239x^6\\
& + 131562x^5 - 70998x^4 + 37950x^3 - 8955x^2 - 522x + 1=0.
\end{align}
This can be solved by Maple's\copyright\, {\em solve} command, giving three solutions. Inspection of their expansion reveals the appropriate one, and simplifying this gives the following rather cumbersome solution:
Let
\begin{align}
\notag
&P_1=x^{12} + 510x^{11} - 14631x^{10} + 80090x^9 - 218058x^8 + 316290x^7 - 253239x^6\\
\notag
& + 131562x^5 - 70998x^4 + 37950x^3- 8955x^2 - 522x + 1-24\sqrt{3x(x - 1)(x^3 + 5x^2 - 8x + 1)^7} ,
\end{align}
$$P_2=(x^2 - x + 1)(x - 1)^4(x^6 - 235x^5 + 1430x^4 - 1695x^3 + 270x^2 + 229x + 1),\,\, {\rm and}$$
$$P_3=(3^{5/6}i + 3^{1/3}), \,\,P_4=(-3^{5/6}i + 3^{1/3}), $$ then
\begin{equation}
y_2(x)=\frac{-3^{2/3}\left [ P_4\left (-P_1\cdot (x-1)^6\right )^{2/3}+P_2\cdot P_3 \right ]}{12(-P_1\cdot (x-1)^6)^{1/3}(x-1)^3}.
\end{equation}
The solution to the original ODE is then
$$y(x)=\frac{y_2(x)}{12x^3}-y_1(x)=1+x+2x^2+5x^3+15x^4+\cdots.$$
This analysis required only the first 24 terms given in the OEIS, so the correct prediction of the next four terms gives us confidence that this is indeed the exact solution.
We next obtained the first 5000 terms in only a few minutes of computer time by expanding this solution.
We used these terms to calculate the amplitude. That is to say, we now know that the coefficients behave asymptotically as
\begin{equation}
u(n) \sim C\frac{\mu^n}{n^{9/2}}.
\end{equation}
Equivalently, near its singularity the generating function has singular part
$$U(x)=\sum u(n)x^n \sim A\,(1-\mu \cdot x)^{7/2},$$
where $C=A/\Gamma(-7/2) = 105A/(16\sqrt{\pi}).$
We estimate $C$ by assuming a pure power law, so that $$\frac{u(n)\cdot n^{9/2}}{\mu^n} = C(1+\sum_{k \ge 1} a_k/n^k).$$ We calculated the first twenty coefficients of this expansion, which
allowed us to estimate $C=13.4299960869\cdots$ with 74-digit accuracy.
Unless one is very fortunate and the Maple\copyright\, command {\em identify} determines an expression for this constant (which it does not in this case), identifying this constant requires some experience-based guesswork.
Such constants in favourable cases are a product of rational numbers and square roots of small integers, sometimes with integer or half-integer powers of $\pi.$
These powers of $\pi$ usually arise from the conversion factor in going from the generating function amplitude $A$ to the coefficient amplitude $C.$ That is to say, we might expect the amplitude $A$ to be simpler than $C.$ And, to eliminate square-roots, we will try and identify $A^2$ rather than $A.$
We do this by seeking the minimal polynomial with root $A^2,$ using the command {\em MinimalPolynomial} in either Maple\copyright\, or Mathematica\copyright. In fact, one only requires 20 digit accuracy in the estimate of $A^2$ to
establish the minimal polynomial, $A^6 - 1369A^4 + 17839A^2 + 1,$ which can be solved to give
$$C=\frac{35}{16}\left (\frac{4107}{\pi} - \frac{84}{\pi}\sqrt{9289}\cos\left (\frac{\pi}{3} +\frac{1}{3} \arccos\left [\frac{255709\sqrt{9289}}{24653006} \right ]\right )\right )^{1/2}.$$
This derivation includes a degree of hindsight. In fact we searched for the minimal polynomial for the amplitude $C,$ by including various powers of $\pi,$ and then choose the polynomial of minimal degree. This required a much greater degree of precision in our estimate of $C$ to ensure we found the correct minimal polynomial.
It has been pointed out to us by Jean-Marie Maillard that the amplitude $A$ can be obtained directly from the solution of the cubic eqn. (\ref{cubic}), by extracting the coefficient of $(1-\mu \cdot x)^{7/2}.$ Proceeding in that manner, Maillard directly obtained the minimal polynomial that we obtained by numerical experimentation. While that is a more elegant method in this case, it is usually not possible, as in the case of L-convex polyominoes, where the numerical approach can still yield conjecturally exact results.
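As a numerical cross-check (ours, not from the paper), the amplitude can be recovered from the minimal polynomial for $A^2$ and compared with the closed form for $C$ quoted above:

```python
import math

minpoly = lambda z: z**3 - 1369 * z**2 + 17839 * z + 1   # in z = A^2

lo, hi = 10.0, 20.0                    # bracket for the relevant root near 13.16
for _ in range(200):                   # plain bisection
    mid = (lo + hi) / 2
    if minpoly(lo) * minpoly(mid) <= 0:
        hi = mid
    else:
        lo = mid
A = math.sqrt((lo + hi) / 2)
C_from_A = 105 * A / (16 * math.sqrt(math.pi))           # C = A * 105/(16 sqrt(pi))

# the closed trigonometric form for C quoted in the text
C_closed = (35 / 16) * math.sqrt(
    4107 / math.pi
    - (84 / math.pi) * math.sqrt(9289)
      * math.cos(math.pi / 3 + math.acos(255709 * math.sqrt(9289) / 24653006) / 3)
)
```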
\section{Conclusion}
We have shown how experimental mathematics can be used to conjecture exact asymptotics, in the case of $L$-convex polyominoes, and an exact solution, in the case of $201$-avoiding ascent sequences.
We hope that the results will be of interest, and that the methods will be more widely applied, as there are many outstanding combinatorial problems that lend themselves to such an approach.
We recognise that these results are conjectural, and not rigorous. We leave proofs to those more capable, and in the hope that the maxim of the late lamented J. M. Hammersley to the effect that ``it is much easier to prove something when you know that it is true" will aid that endeavour.
\section{Acknowledgements}
AJG wishes to acknowledge helpful discussions with Jean-Marie Maillard, and with Paolo Massazza on the topic of L-convex polyominoes, and to thank the ARC Centre of Excellence for Mathematical and Statistical Frontiers (ACEMS) for support.
| {
"timestamp": "2021-09-30T02:05:50",
"arxiv_id": "2109.09928",
"language": "en",
"url": "https://arxiv.org/abs/2109.09928",
"abstract": "For L-convex polyominoes we give the asymptotics of the generating function coefficients, obtained by analysis of the coefficients derived from the functional equation given by Castiglione et al. \\cite{CFMRR7}. For 201-avoiding ascent sequences, we conjecture the solution, obtained from the first 23 coefficients of the generating function. The solution is D-finite, indeed algebraic. The conjectured solution then correctly generates all subsequent coefficients. We also obtain the asymptotics, both from direct analysis of the coefficients, and from the conjectured solution. As well as presenting these new results, our purpose is to illustrate the methods used, so that they may be more widely applied.",
"subjects": "Combinatorics (math.CO)",
"title": "A numerical study of L-convex polyominoes and 201-avoiding ascent sequences"
} |
https://arxiv.org/abs/0909.5024 | Generalized Sidon sets | We give asymptotically sharp estimates for the cardinality of a set of residue classes with the property that the representation function is bounded by a prescribed number. We then use this to obtain an analogous result for sets of integers, answering an old question of Simon Sidon.

\section{Introduction}
A \emph{Sidon set} $A$ in a commutative group is a set with the property
that the sums $a_1+a_2$, $a_i\in A$ are all distinct except when
they coincide because of commutativity. We consider the case when,
instead of that, a bound is imposed on the number of such
representations. When this bound is $g$, these sets are often called
$B_2[g]$ sets. This being both clumsy and ambiguous, we will avoid
it, and fix our notation and terminology below.
Our main interest is in sets of integers and residue classes, but we
formulate our concepts and some results in a more general setting.
Let $G$ be a commutative group.
\begin{Def} \label{defrep}
For $A \subset G$, we define the corresponding \emph{representation function} as
$$r(x) = \sharp \{(a_1,a_2): a_i \in A, \, a_1 + a_2 = x\}.$$
The \emph{restricted representation function} is
$$r'(x) = \sharp \{(a_1,a_2): a_i \in A,\, a_1 + a_2 = x,\, a_1\ne a_2\}.$$
Finally, the \emph{unordered representation function} $r^*(x)$ counts the pairs
$(a_1, a_2)$ where
$(a_1, a_2)$ and $(a_2, a_1)$ are identified. With an ordering given on $G$
(not necessarily in any connection with the group operation) we can write this
as
$$r^*(x) = \sharp \{(a_1,a_2): a_i \in A,\, a_1 + a_2 = x, \,a_1\leq a_2\}.$$
\end{Def}
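The three representation functions are easy to tabulate for a concrete finite set. The following Python sketch is our illustration, not part of the paper (the name \texttt{rep\_functions} is ours); it computes $r$, $r'$ and $r^*$ for a small set of integers and checks the identity relating them.

```python
from collections import Counter

def rep_functions(A):
    """Ordered (r), restricted (r') and unordered (r*) representation
    functions of a finite set A of integers, returned as Counters."""
    r, r_res, r_un = Counter(), Counter(), Counter()
    for a1 in A:
        for a2 in A:
            r[a1 + a2] += 1
            if a1 != a2:
                r_res[a1 + a2] += 1
            if a1 <= a2:
                r_un[a1 + a2] += 1
    return r, r_res, r_un

r, r_res, r_un = rep_functions({0, 1, 3})
# x = 1 has two ordered representations (0+1 and 1+0) but one unordered:
assert r[1] == 2 and r_res[1] == 2 and r_un[1] == 1
# The identity r*(x) = r(x) - r'(x)/2 holds for every x:
assert all(r_un[x] == r[x] - r_res[x] / 2 for x in r)
```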
These functions are not independent; we always have the equality
\[ r^*(x) = r(x) - \frac{r'(x)}{2} \]
and the inequalities
\[ r'(x) \leq r(x) \leq 2r^*(x) . \]
We have $r(x)=r'(x)$ except for $x=2a$ with $a\in A$. In the latter case, if there are no
elements of order 2 in $G$, then necessarily $r(x)=r'(x)+1$, and
the quantities are more closely connected:
\[ r'(x) = 2 \low{\frac{r(x)}{2}}, \ r^*(x) = \up{\frac{r(x)}{2}} . \]
This is the case in ${\mathbb {Z}}$, or in ${\mathbb {Z}}_q$ for odd values of $q$. For even $q$
this is not necessarily true, but both for constructions and estimates the
difference seems to be negligible, as we shall see. In a group with many
elements of order 2, such as ${\mathbb {Z}}_2^m$, the difference is substantial.
Observe that $r$ and $r'$ make sense in a noncommutative group as well,
while $r^*$ does not.
\begin{Def}
We say that $A$ is a \emph{$g$-Sidon set}, if $r(x)\leq g$ for all $x$.
It is a \emph{weak $g$-Sidon set}, if $r'(x)\leq g$ for all $x$.
It is an \emph{unordered $g$-Sidon set}, if $r^*(x)\leq g$ for all $x$.
\end{Def}
\begin{Note}
When we have a set of integers $C \ {\subseteq} \ [1, m]$, we say that it is a $g$-Sidon set $\pmod m$ if the residue classes $\{c \pmod{m} : c \in C\}$ form a $g$-Sidon set in $\mathbb{Z}_m$.
\end{Note}
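As a concrete instance of the Note (our illustration, not the authors'; the name \texttt{is\_g\_sidon\_mod} is ours): the classical perfect difference set $\{0,1,3,9\}$ modulo 13 has all nonzero differences distinct, so it is a $2$-Sidon set in $\mathbb{Z}_{13}$ but not a $1$-Sidon set.

```python
from collections import Counter

def is_g_sidon_mod(C, m, g):
    """Do the residues {c mod m : c in C} form a g-Sidon set in Z_m?"""
    residues = {c % m for c in C}
    r = Counter((a + b) % m for a in residues for b in residues)
    return max(r.values()) <= g

# {0, 1, 3, 9} mod 13 is a perfect difference set: every nonzero residue
# is a difference a1 - a2 exactly once, so every sum a1 + a2 with
# a1 != a2 has exactly two ordered representations, never more.
assert is_g_sidon_mod({0, 1, 3, 9}, 13, 2)
assert not is_g_sidon_mod({0, 1, 3, 9}, 13, 1)
```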
The strongest possible of these concepts is that of an unordered 1-Sidon
set, and this is what is generally simply called a Sidon set. A weak 2-Sidon
set is sometimes called a weak Sidon set.
These concepts are closely connected. If there are no elements of order 2,
then $2k$-Sidon sets and unordered $k$-Sidon sets coincide, in
particular, a Sidon set is the same as a 2-Sidon set. Also, in this
case $(2k+1)$-Sidon sets and weak $2k$-Sidon sets coincide.
Thus, a 3-Sidon set and a weak 2-Sidon set are the same.
Our aim is to find estimates for the maximal size of a $g$-Sidon set
in a finite group, or in an interval of integers.
\subsection{The origin of the problem: g-Sidon sets in the integers}
In 1932, the analyst S. Sidon asked a young P. Erd\H os about the
maximal cardinality of a $g$-Sidon set of integers in $\{1,\dots , n\}$.
Sidon was interested in this problem in connection
with the study of the $L_p$ norms of Fourier series with frequencies
in these sets, but Erd\H os was captivated by the combinatorial and
arithmetical flavour of the problem, and it became one of his favorite
problems; indeed, it has since been one of the main topics in
Combinatorial Number Theory.
\begin{Def}
For a positive integer $n$
\[ \beta _g(n) = \max \ab A : A \subset \{1, \dots , n\}, \, A \text{ is a }
g\text{-Sidon set} . \]
We define $\beta '_g(n)$ and $\beta ^*_g(n)$ analogously.
\end{Def}
The behaviour of this quantity is known only for classical
Sidon sets and for weak Sidon
sets: we have $\beta _2(n)\sim \sqrt n$ and $\beta _3(n)\sim \sqrt
n$.
What makes the case $g=2$ easier is that in a
2-Sidon set the differences $a-a'$
are all distinct. Erd\H os and Tur\'an \cite{ET} used this to prove that
$\beta_2(n)\le \sqrt n+O(n^{1/4})$ and Lindstr\"om \cite{L} refined that
to get $\beta_2(n)\le \sqrt n+n^{1/4}+1$. For weak Sidon sets Ruzsa \cite{Ru} proved that
$\beta_3(n)\le \sqrt n+4n^{1/4}+11$.
For the lower bounds, the classical constructions of Sidon sets of Singer \cite{Si}, Bose \cite{BC} and Ruzsa \cite{Ru} in some finite groups, $\mathbb Z_m$,
give $\beta_3(n)\ge \beta_2(n)\ge \sqrt n(1+o(1))$. Then,
$\lim_{n\to \infty}\frac{\beta_2(n)}{\sqrt n}=\lim_{n\to \infty}\frac{\beta_3(n)}{\sqrt
n}=1$.
However, for $g\ge 4$ it has not even been proved that $\lim_{n\to
\infty}\beta_g(n)/\sqrt n$ exists.
For this reason we write
\[ \overline \beta _g = \limsup_{n \to \infty} \beta _g(n)/\sqrt n \qquad \text{and} \qquad \underline \beta _g = \liminf_{n \to \infty} \beta _g(n)/\sqrt n . \]
It is very likely that these limits coincide, but this has only been proved for
$g=2,3$.
An extensive literature provides bounds on
$\overline{\beta}_g$ and $\underline{\beta}_g$ for arbitrary $g$. The trivial
counting argument (summing $r(x)\le g$ over the fewer than $2n$ possible sums,
so that $|A|^2\le 2gn$) gives $\overline{\beta}_g\le \sqrt{2g}$,
while pasting Sidon sets in $\mathbb Z_m$ in the obvious way gives
$\underline{\beta}_g\ge \sqrt{g/2}$.
The problem of narrowing this gap has attracted the attention of many mathematicians
in recent years.
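For very small $n$ the quantity $\beta_g(n)$ can be computed by exhaustive search, which is a convenient way to get a feel for the bounds just quoted. A Python sketch (ours, not from the paper; hopelessly slow beyond tiny $n$):

```python
from collections import Counter
from itertools import combinations

def beta(n, g):
    """beta_g(n): maximal size of a g-Sidon set in {1,...,n}, by
    exhaustive search over all subsets (only feasible for very small n)."""
    for size in range(n, 0, -1):
        for A in combinations(range(1, n + 1), size):
            if max(Counter(a + b for a in A for b in A).values()) <= g:
                return size
    return 0

# In {1,...,10} a Sidon (= 2-Sidon) set of size 5 would need 10 distinct
# positive differences, all at most 9 -- impossible.  Size 4 is attained,
# e.g. by {1, 2, 4, 8}, so beta_2(10) = 4.
assert beta(10, 2) == 4
assert beta(10, 2) <= (2 * 2 * 10) ** 0.5   # trivial bound sqrt(2*g*n)
```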
For example, while for $g=4$ the trivial upper bound gives
$\overline{\beta}_4\le \sqrt 8$, it was proved in \cite{Ci1} that
$\overline{\beta}_4\le \sqrt 6$, which was refined to
$\overline{\beta}_4\le 2.3635...$ in \cite{P} and to
$\overline{\beta}_4\le 2.3218...$ in \cite{HP}.
On the other hand, Kolountzakis \cite{K} proved that $\underline{\beta}_4\ge \sqrt 2$, which was improved to $\underline{\beta}_4\ge 3/2$ in \cite{CRT} and to $\underline{\beta}_4\ge 4/\sqrt
7=1.5118...$ in \cite{HP}.
We describe below the progress made for large $g$:
\smallskip
\begin{center}\begin{tabular}{ll}
$\frac{\overline{\beta}_g}{\sqrt g}$ & $\le \sqrt 2 = 1.4142...$ (trivial) \\
& $\le 1.3180... $ (J. Cilleruelo - I. Z. Ruzsa - C. Trujillo,\ \cite{CRT})\\
& $\le 1.3039... $ (B. Green,\ \cite{G}) \\
& $\le 1.3003... $ (G. Martin - K. O'Bryant,\ \cite{Mar1})\\
& $\le 1.2649... $ (G. Yu,\ \cite{Yu})\\
& $\le 1.2588... $ (G. Martin - K. O'Bryant,\ \cite{Mar2})
\end{tabular}\end{center}
\smallskip
\begin{center}\begin{tabular}{ll}
$\lim_{g\to \infty}\frac{\underline{\beta}_g}{\sqrt g}$ & $\ge 1/\sqrt 2 = 0.7071...$ (M. Kolountzakis,\ \cite{K}) \\
& $\ge 0.75 $ (J. Cilleruelo - I. Z. Ruzsa - C. Trujillo,\ \cite{CRT})\\
& $\ge 0.7933... $ (G. Martin - K. O'Bryant,\ \cite{Mar})\\
& $\ge \sqrt{2/\pi}=0.7978... $ (J. Cilleruelo - C. Vinuesa,\ \cite{CV}).
\end{tabular}\end{center}
\bigskip
Our main result connects this problem with a quantity arising from the analogous
continuous problem, first studied by Schinzel and Schmidt
\cite{Schin}. Consider all nonnegative real functions $f$ satisfying
$f(x)=0$ for all $x\notin [0,1]$, and
\[ \int _0^1 f(t) f(x-t) \, dt \leq 1 \]
for all $x$. Define the constant $\sigma $ by
\begin{equation} \label{sigma}
\sigma = \sup \int _0^1 f(x) \, dx
\end{equation}
where the supremum is taken over all functions $f$ satisfying the above
restrictions.
\begin{Th} \label{integer}
\[ \lim_{g\to \infty} \frac{\underline \beta _g}{\sqrt g} =
\lim_{g\to \infty} \frac{\overline \beta _g}{\sqrt g} = \sigma . \]
\end{Th}
In other words, the theorem above says that the maximal cardinality of a $g$-Sidon set in $\{1, \dots, n\}$ is
$$\beta_g(n) = \sigma \sqrt{gn}(1-\varepsilon(g,n))$$
where $\varepsilon(g,n) \to 0$ when both $g$ and $n$ go to infinity.
Schinzel and Schmidt \cite{Schin} and Martin and O'Bryant
\cite{Mar2} conjectured that $\sigma =2/\sqrt \pi = 1.1283...$, the conjectured extremal
function being $f(x)=1/\sqrt {\pi x}$ for $0<x\leq 1$. But
this conjecture has recently been disproved in \cite{MV} with an explicit $f$
which gives a greater value. The current state of the art for this
constant is
$$1.1509... \le \sigma\le 1.2525...$$
both bounds coming from \cite{MV}.
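The conjectured extremal function can be checked directly: for $f(x)=1/\sqrt{\pi x}$ on $(0,1]$ one has $|f|_1 = 2/\sqrt\pi$, and since an antiderivative of $1/\sqrt{t(x-t)}$ is $\arcsin((2t-x)/x)$, the autoconvolution equals $1$ identically on $(0,1]$ and $(2/\pi)\arcsin((2-x)/x)\le 1$ on $[1,2]$; this exhibits $\sigma \ge 2/\sqrt\pi$, even though $f$ is now known not to be extremal. A numerical sanity check (ours, not from the cited papers):

```python
import math

# f(x) = 1/sqrt(pi*x) on (0,1]: the conjectured (now disproved) extremal
# function.  Closed-form autoconvolution, using that an antiderivative
# of 1/sqrt(t(x-t)) is arcsin((2t-x)/x):
#   (f*f)(x) = 1                       for 0 < x <= 1,
#   (f*f)(x) = (2/pi)*asin((2-x)/x)    for 1 <= x <= 2.
def autoconv(x):
    if x <= 1:
        return 1.0
    return (2 / math.pi) * math.asin((2 - x) / x)

# Midpoint-rule check of the closed form on (1,2), where the integrand
# 1/(pi*sqrt(t*(x-t))) is bounded on the integration range [x-1, 1]:
def autoconv_numeric(x, n=100000):
    lo, hi = x - 1, 1.0
    h = (hi - lo) / n
    s = sum(1.0 / math.sqrt(t * (x - t))
            for t in (lo + (i + 0.5) * h for i in range(n)))
    return s * h / math.pi

assert abs(autoconv(1.5) - autoconv_numeric(1.5)) < 1e-4
# f*f <= 1 everywhere, and |f|_1 = 2/sqrt(pi), so sigma >= 2/sqrt(pi):
assert all(autoconv(0.01 * j) <= 1 + 1e-12 for j in range(1, 200))
```

The larger value of $\sigma$ obtained in \cite{MV} comes from a different, explicitly constructed $f$.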
The main difficulty in Theorem \ref{integer} is establishing the lower
bound for $\lim \frac{\underline \beta _g}{\sqrt g}$. Indeed, the
upper bound $\lim \frac{\overline \beta _g}{\sqrt g} \le \sigma$
was already proved in \cite{CV} using a result of Schinzel and Schmidt from \cite{Schin}.
We nevertheless include a complete proof of the theorem.
The usual strategy for constructing large $g$-Sidon sets of integers
is to paste large Sidon sets modulo $m$ in a suitable way. The strategy
of pasting $g$-Sidon sets modulo $m$ had not been tried before, since
no sufficiently large $g$-Sidon sets modulo $m$ were known.
Indeed, the heart of the proof of this theorem is the construction of large $g$-Sidon sets modulo $m$.
\subsection{g-Sidon sets in finite groups}
\begin{Def}
For a finite commutative group $G$ write
\[ \alpha _g(G) = \max \ab A : A \subset G, \, A \text{ is a } g\text{-Sidon set} . \]
We define $\alpha '_g(G)$ and $\alpha ^*_g(G)$ analogously.
For the cyclic group $G={\mathbb {Z}}_q$, with an abuse of notation, we write
$\alpha _g(q) = \alpha _g({\mathbb {Z}}_q)$.
\end{Def}
An obvious estimate of this quantity is
\[ \alpha _g(q) \leq \sqrt {gq} . \]
Our aim is to show that, for large $g$ and for some values of $q$, this is
asymptotically the correct value. More precisely, write
\[ \alpha _g = \limsup_{q \to \infty} \alpha _g(q)/\sqrt q . \]
The case $g=2$ (Sidon sets) is well known: we have $\alpha _2 = 1$.
It is also known \cite{Ru} that $\alpha_3 =1$. Very little is known
about $\alpha_g$ for $g \ge 4$.
For $g = 2k^2$, Martin and O'Bryant \cite{Mar} generalized the well
known constructions of Singer \cite{Si}, Bose \cite{BC} and Ruzsa
\cite{Ru}, obtaining $\alpha_g \ge \sqrt{g/2}$ for these values of
$g$.
We are unable to determine $\alpha _g$ exactly for any $g\geq 4$, but we will find its
asymptotic behaviour. Our main result reads as follows.
\begin{Th} \label{modular}
We have
\[ \alpha _g = \sqrt g + O\left( g^{3/10} \right), \]
in particular,
\[ \lim_{g\to \infty} \frac{\alpha _g}{\sqrt g} = 1. \]
\end{Th}
\bigskip
In Section 2, as a warm-up, we give a slight improvement of the obvious upper estimate.
In Section 3 we construct dense $g$-Sidon sets in groups ${\mathbb
{Z}}_p^2$. In Section 4 we use this to construct $g$-Sidon sets
modulo $q$ for certain values of $q$.
Section 5 is devoted to the proof of the upper bound of Theorem
\ref{integer}. In Section 6 we prove the lower bound of Theorem \ref{integer}
pasting copies of the large g-Sidon sets in $\mathbb Z_q$ which we
constructed in Section 4. In these two sections, we connect the discrete
and the continuous world, combining some ideas from Schinzel and Schmidt
and some probabilistic arguments used in \cite{CV}.
\section{An upper estimate}
The representation function $r(x)$ behaves differently
at elements of $2\cdot A=\{2a:a\in A\}$ and the rest; in particular, it can be odd only
on this set. Hence we formulate our result in a flexible
form that takes this into account.
\begin{Th}
Let $G$ be a finite commutative group with $\ab G =q$. Let $k\geq 2$ and $l\geq 0$ be
integers and $A\subset G$ a set such that the corresponding representation function
satisfies
\[ r(x) \leq \left\{ \begin{array}{lcc}
k, & \text{ if } x \notin 2\cdot A, \\
k+l, & \text{ if } x \in 2\cdot A . \\
\end{array} \right. \]
We have
\begin{equation} \label{Aup}
\ab A < \sqrt {(k-1)q} + 1 + \frac{l}{2} + \frac{l(l+1)}{2(k-1)} . \end{equation}
\end{Th}
\begin{Cor}
Let $G$ be a finite commutative group with $\ab G =q$, and let
$A\subset G$ be a $g$-Sidon set. If $g$ is even, then
\[ \ab A \leq \sqrt {(g-1)q} + 1 . \]
If $g$ is odd, then
\[ \ab A \leq \sqrt {(g-2)q} + \frac{3}{2} + \frac{1}{g-2} . \]
\end{Cor}
Indeed, these are the cases $k=g$, $l=0$ and $k=g-1$, $l=1$ of the previous
theorem.
\begin{Cor}
Let $A \subset {\mathbb {Z}}_q$ be a weak $g$-Sidon set. If $q$ is even, then
\[ \ab A \leq \sqrt {(g-1)q} + 2 + \frac{3}{g-1} . \]
If $q$ is odd, then
\[ \ab A \leq \sqrt {(g-1)q} + \frac{3}{2} + \frac{1}{g-1} . \]
\end{Cor}
To deduce this, we put $k=g$ and $l=2$ if $q$ is even, $l=1$ if $q$ is
odd.
\begin{proof}
Write $\ab A = m$.
We shall estimate the quantity
\[ R = \sum r(x)^2 \]
in two ways.
First, observe that
\[ r(x)^2 - kr(x) = r(x) \left( r(x)-k\right) \leq \left\{ \begin{array}{lcc}
0, & \text{ if } x \notin 2\cdot A, \\
l(k+l), & \text{ if } x \in 2\cdot A , \\
\end{array} \right. \]
hence
\[ R \leq k \sum r(x) + l(k+l) \ab{2\cdot A} . \]
Since clearly $\sum r(x)= m^2$ and $ \ab {2\cdot A}\leq m$, we conclude
\begin{equation} \label{Rup} R \leq km^2 + l(k+l)m . \end{equation}
Write
$$ d(x) = \sharp \{(a_1,a_2): a_i \in A, \, a_1 - a_2 = x\}.$$
Clearly $d(0)=m$. We also have $\sum d(x) = m^2$, and, since the equations
$x+y=u+v$ and $x-u=v-y$ are equivalent,
\[ \sum d(x)^2 = R . \]
We separate the contribution of $x=0$ and use the inequality of the arithmetic
and quadratic mean to conclude
\[ R = m^2 + \sum _{x\ne 0} d(x)^2 \geq m^2 + \frac{1}{q-1} \left( \sum _{x\ne 0} d(x)
\right)^2 > m^2 + \frac{m^2 (m-1)^2}{q} . \]
A comparison with the upper estimate \eqref {Rup} yields
\[ \frac{m^2 (m-1)^2}{q} < (k-1)m^2 + l(k+l)m . \]
This can be rearranged as
\[ (m-1)^2 < (k-1)q + \frac{l(k+l)q}{m} . \]
Now if $m<\sqrt {(k-1)q}$, then we are done; if not, we use the opposite
inequality to estimate the second summand and we get
\[ (m-1)^2 < (k-1)q + \frac{l(k+l)\sqrt q}{\sqrt {k-1}} . \]
We take square root and use the inequality $\sqrt {x+y}\leq \sqrt x + \frac{y}{2\sqrt x}$ to
obtain
\[ m-1 < \sqrt {(k-1)q} + \frac{l(k+l)}{2(k-1)} \]
which can be written as \eqref {Aup}.
\end{proof}
\section{Construction in certain groups}
In this section we construct large $g$-Sidon sets in groups $G={\mathbb {Z}}_p^2$, for
primes $p$. We shall establish the following result.
\begin{Th} \label{pxp}
Given $k$, for every sufficiently large prime $p \geq p_0(k)$ there is
a set $A {\subseteq} {\mathbb {Z}}_p^2$ with $kp - k + 1$ elements which is a $g$-Sidon
set for $g= \lfloor k^2 + 2 k^{3/2} \rfloor$.
\end{Th}
Observe that the trivial upper bound in this case is
$$|A| \leq \sqrt{gq} \le kp \sqrt{1 + \frac{2}{\sqrt{k}}} < (k+\sqrt k)p . $$
\begin{proof}
Let $p$ be a prime.
For every $u \not \equiv 0$ in ${\mathbb {Z}}_p$ consider the set
$$A_u = \left\{ \left( x, \frac{x^2}{u} \right) : x \in {\mathbb {Z}}_p \right\} \subset {\mathbb {Z}}_p^2.$$
Clearly $|A_u| = p$.
We are going to study the sumset of two such sets.
For any ${\underline{a}} = (a, b) \in {\mathbb {Z}}_p^2$ we shall calculate the representation function
$$r_{u,v}({\underline{a}}) = \sharp \{({\underline{a}}_1, {\underline{a}}_2): {\underline{a}}_1 \in A_u, {\underline{a}}_2 \in A_v, {\underline{a}}_1 + {\underline{a}}_2 = {\underline{a}} \}.$$
The most important property for us reads as follows.
\begin{Lemma} \label{u+v}
If $u + v \equiv u' + v'$ and $\left(\frac{uvu'v'}{p} \right) = -1$ then $r_{u,v}(x) + r_{u',v'}(x) = 2$ for all $x$.
\end{Lemma}
\begin{proof}
If $a \equiv x + y$ and $b \equiv \frac{x^2}{u} + \frac{y^2}{v}$,
with $uv \not \equiv 0$, then $y \equiv a- x$ and we have $b \equiv
\frac{x^2}{u} + \frac{(a - x)^2}{v}$. We can rewrite this equation
as $(u + v)x^2 - 2aux + ua^2 - buv \equiv 0$. The discriminant of
this quadratic equation is
$\Delta \equiv 4uv((u+v)b - a^2)$. The number of solutions is
\begin{equation*}
r_{u,v}(a,b) =
\left\{
\begin{array}{lccrl}
1 \qquad \text{ if } & \left (\frac{\Delta}p\right ) & = &0 & \\
2 \qquad \text{ if } &\left( \frac{\Delta}{p} \right) &=& + 1&\ \text{(}
\Delta \text{ quadratic residue)}\\
0 \qquad \text{ if } &\left( \frac{\Delta}{p} \right) &=& -1& \ \text{(}
\Delta \text{ quadratic nonresidue)}.\\ \end{array} \right.
\end{equation*}
We can express this as
\begin{equation*}
r_{u,v}(a,b) = 1 + \left( \frac{\Delta}{p} \right) .
\end{equation*}
\smallskip
Now, since
$$\Delta\Delta'\equiv 4uv((u+v)b-a^2)4u'v'((u'+v')b-a^2)\equiv
16uvu'v'((u+v)b-a^2)^2$$ we have
$$\left (\frac{\Delta}p\right )\left (\frac{\Delta'}p\right )=\left (\frac{\Delta \Delta'}p\right )=\left (\frac{uvu'v'}p\right
)\left (\frac{((u+v)b-a^2)^2}p\right )=-\left
(\frac{((u+v)b-a^2)^2}p\right ).$$ If $(u+v)b-a^2\equiv 0$, we have
$\left (\frac{\Delta}p\right )=\left (\frac{\Delta'}p\right )=0$. If
not, we have $\left (\frac{\Delta}p\right )\left (\frac{\Delta'}p\right )=-1$. In either case we get
$$\left (\frac{\Delta}p\right )+\left (\frac{\Delta'}p\right )=0.$$
\end{proof}
We resume the proof of the theorem.
We put
$$A = \bigcup_{u = t + 1}^{t + k} A_u$$
and we will show that, for a suitable choice of $t$, this is a good set.
Since $(0,0)\in A_u$ for every $u$ and the sets $A_u\setminus\{(0,0)\}$ are pairwise disjoint, we have
$|A| = k (p-1) + 1$.
We can estimate the corresponding representation function as
$$r(x) \leq \sum _{u,v=t+1}^{t+k} r_{u,v} (x)$$
(equality fails sometimes, because representations involving $(0,0)$ are
counted once on the left and several times on the right).
We parametrize the variables of summation as
$u = t + i, v = t + j$ with $1 \leq i, j \leq k$. So $2 \leq i + j \leq 2k$ and we can write
$i + j = k + 1 + l$ with $|l| \leq k - 1$.
For fixed $l$, we have $k - |l|$ pairs $i, j$ (which means $k - |l|$ pairs
$u, v$). These pairs can be split into two groups: $n^+$ of them will have
$\left( \frac{uv}{p} \right) = 1$ and $n^-$ will have
$\left( \frac{uv}{p} \right) = -1$. Clearly
\[ n^+ + n^- = k - |l|, \ n^+ - n^- = \sum \left(\frac{uv}{p} \right) .\]
Of these $n^+ + n^-$ pairs we can combine
$\min\{n^+, n^-\}$ into pairs of pairs with opposite quadratic character, that
is, with $\left(\frac{uvu'v'}{p} \right) = -1$. For these
we use Lemma \ref{u+v} to estimate the sum of the corresponding representation
functions $r_{u,v}+r_{u',v'}$ by 2. For the uncoupled pairs we can only
estimate the individual values by 2. Altogether this gives
\begin{eqnarray*}
\sum _{i + j= k + 1 + l} r_{u,v} (x) & \leq & 2 (\min\{n^+, n^-\}) + 2(\max\{n^+, n^-\} -
\min\{n^+, n^-\}) \\ & = & 2(\max\{n^+, n^-\}) \\
& = & n^+ + n^- + |n^+ - n^-| \\
& = & k - |l| + \left| \sum \left(\frac{uv}{p} \right) \right|. \\
\end{eqnarray*}
Adding this over all possible values of $l$, for a fixed $t$ we obtain
$$r(x) \leq k^2 + \sum_{|l| \leq k - 1} \left| \sum_{i + j = k + 1 + l} \left( \frac{(t + i)(t
+ j)}{p} \right) \right| = k^2 + S_t . $$
We are going to show that $S_t$ is small on average. Since we need values
with $u, v \not \equiv 0$, we can use only $0\leq t\leq p-1-k$; however, the complete sum is easier
to work with.
Applying the Cauchy-Schwarz
inequality we get
\begin{eqnarray*}
\sum_{t=0}^{p-1} S_t & = & \sum_{t,l} \left| \sum_{i + j = k + 1 + l} \left( \frac{(t + i)(t + j)}{p} \right) \right|\\
& \leq & \sqrt{2kp \sum_{l,t} \left( \sum_{i+j = k+1+l} \left( \frac{(t+i)(t+j)}{p} \right) \right)^2}\\
& \leq & \sqrt{2kp \sum_{i + j = i' + j'} \sum_t \left( \frac{(t+i)(t+j)(t+i')(t+j')}{p} \right)}.
\end{eqnarray*}
To estimate the inner sum we use Weil's Theorem that asserts
$$\left| \sum_{t=0}^{p-1} \left( \frac{f(t)}{p} \right) \right| \leq \deg f
\sqrt{p}$$
for any polynomial $f$ which is not a constant multiple of a square. Hence
$$ \sum_{t=0}^{p-1} \left( \frac{(t+i)(t+j)(t+i')(t+j')}{p} \right) \leq 4 \sqrt{p} $$
except when the numerator, as a polynomial in $t$, is a square.
The numerator will be a square if the four numbers $i,i',j,j'$ form two
equal pairs. This happens exactly $k(2k-1)$ times. Indeed, we may have $i=i'$,
$j=j'$, $k^2$ cases, or $i=j'$, $j=i'$, another $k^2$ cases. The $k$ cases when
all four coincide have been counted twice. Finally, if $i=j$ and $i'=j'$, then
the equality of sums implies that all are equal, so this gives no new case. In
these cases for the sum we use the trivial upper estimate $p$.
The total number of quadruples $i,i',j,j'$ is $\leq k^3$, since three of them
determine the fourth uniquely.
Combining our estimates we obtain
$$\sum_{t=0}^{p-1} S_t \leq \sqrt{2p^2k^2(2k-1)+ 8p^{3/2}k^4}. $$
This implies that there is a value of $t$, $0\leq t\leq p-k-1$, such that
$$ S_t \leq \frac{\sqrt{2p^2k^2(2k-1)+ 8p^{3/2}k^4}}{p-k} < 2k^{3/2} $$
if $p$ is large enough. This yields that $r(x)<k^2 + 2k^{3/2}$ as claimed.
\end{proof}
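Both the parabola construction and Lemma \ref{u+v} are easy to test numerically for a small prime. The sketch below (ours; the function names are ours) verifies the lemma for $p=11$ with $u=v=1$, $u'=3$, $v'=10$, which satisfy $u+v\equiv u'+v'$ and $\left(\frac{uvu'v'}{p}\right)=-1$:

```python
from collections import Counter

def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion."""
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def parabola(u, p):
    """A_u = {(x, x^2/u) : x in Z_p} as a subset of Z_p^2."""
    u_inv = pow(u, p - 2, p)  # u^{-1} mod p, since p is prime
    return [(x, x * x * u_inv % p) for x in range(p)]

def rep(A, B, p):
    """Ordered representation function of pairs from A x B in Z_p^2."""
    return Counter(((x1 + y1) % p, (x2 + y2) % p)
                   for (x1, x2) in A for (y1, y2) in B)

p = 11
u, v, u2, v2 = 1, 1, 3, 10          # u + v = u' + v' = 2 (mod 11)
assert legendre(u * v * u2 * v2, p) == -1
r1 = rep(parabola(u, p), parabola(v, p), p)
r2 = rep(parabola(u2, p), parabola(v2, p), p)
# The lemma: the two representation functions sum to exactly 2 everywhere.
assert all(r1[(a, b)] + r2[(a, b)] == 2
           for a in range(p) for b in range(p))
```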
\section{Construction in certain cyclic groups}
In this section we show how to project a set from ${\mathbb {Z}}_p^2$ into ${\mathbb {Z}}_q$ with
$q=p^2s$.
\begin{Th}
Let $A {\subseteq} \mathbb{Z}_p^2$ be a $g$-Sidon set with $|A| = m$, and put $q=p^2s$ with
a positive integer $s$. There is a $g'$-Sidon set $A' {\subseteq} {\mathbb
{Z}}_q$ with $|A'| = ms$ and $g'=g(s+1)$.
\end{Th}
\begin{proof}
An element of $A$ is a pair of residues modulo $p$, which we shall
represent by integers in $[0, p-1]$. Given an element $(a,b)\in A$, we put into
$A'$ all numbers of the form
$a + cp + bsp$ with $0 \leq c \leq s-1$. Clearly $\ab {A'}=sm$.
To estimate the representation function of $A'$ we need to determine,
given $a, b, c$, how many $a_1, b_1, c_1, a_2, b_2, c_2$ there are such that
\begin{equation} \label{c1}
a + cp + bsp \equiv a_1 + c_1p + b_1sp + a_2 + c_2 p + b_2sp \pmod{ p^2s}\end{equation}
with $ (a_1, b_1), (a_2, b_2) \in A$ and $0 \leq c_1, c_2 \leq s-1$.
First consider congruence \eqref {c1} modulo $p$. We have
$$a \equiv a_1 + a_2 \pmod{ p}, $$
hence $a_1 + a_2 = a + \delta p$ with $ \delta = 0 $ or 1.
We substitute this into \eqref {c1}, subtract $a$ and divide by $p$ to
obtain
$$c + bs \equiv \delta + c_1 + c_2 + (b_1 + b_2)s \pmod{ ps} . $$
We take this modulo $s$:
$$c \equiv \delta + c_1 + c_2 \pmod{ s}, $$
consequently $\delta + c_1 + c_2 = c + \eta s$ with $ \eta = 0 $ or 1.
Again substituting back, subtracting $c$ and dividing by $s$ we obtain
$$b \equiv \eta + b_1 + b_2 \pmod{ p}.$$
So $(a,b) = (a_1, b_1) + (a_2, b_2) + (0,\eta)$ which means that for $a,
b, \eta$ given, we have $\leq g$ possible values of $a_1, b_1, a_2, b_2$.
Now we are going to find the number of possible values of $c_1, c_2$
for $a, b, c, \eta,
a_1, b_1, a_2, b_2$ given.
Observe that from these data we can calculate $\delta = (a_1 + a_2 - a)/p$.
For $c_1, c_2$ we have the equation
$c_1 + c_2 = c - \delta + \eta s$.
If $\eta =0$, we have $c_1\leq c$, at most $c+1$ possibilities.
If $\eta =1$, we have $c_1+c_2\geq c+s-1$, hence $c-1 < c_1\leq s-1$, which gives at
most $s-c$ possibilities.
Hence, if $a,b,c,\eta $ are given, our estimate is $g(c+1)$ or $g(s-c)$,
depending on $\eta $. Adding the two estimates we get the claimed bound $g(s+1)$.
\end{proof}
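The projection in the proof can likewise be checked by brute force. A single parabola $\{(x,x^2)\}$ is a $2$-Sidon set in $\mathbb{Z}_p^2$ (a sum equation reduces to a quadratic in $x$), so the theorem predicts a $2(s+1)$-Sidon set of size $ps$ in $\mathbb{Z}_{p^2 s}$. A sketch (ours, with our function names):

```python
from collections import Counter

def project(A, p, s):
    """Map (a, b) in Z_p^2 to the residues a + c*p + b*s*p (0 <= c < s)
    in Z_{p^2 s}, exactly as in the proof above."""
    return {a + c * p + b * s * p for (a, b) in A for c in range(s)}

def max_rep(A, q):
    """Largest value of the ordered representation function of A in Z_q."""
    return max(Counter((x + y) % q for x in A for y in A).values())

# A single parabola {(x, x^2)} is a 2-Sidon set in Z_p^2, so the theorem
# promises a 2(s+1)-Sidon set of size p*s in Z_{p^2 s}.
p, s = 7, 2
A = {(x, x * x % p) for x in range(p)}
A_proj = project(A, p, s)
assert len(A_proj) == p * s
assert max_rep(A_proj, p * p * s) <= 2 * (s + 1)
```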
Combining this result with Theorem \ref{pxp}, we obtain the following.
\begin{Th} \label{ciclicp2s}
For any positive integers $k,s$, for every sufficiently large prime $p$,
there is a set
$A {\subseteq} {\mathbb {Z}}_{p^2s}$ with $(kp - k + 1)s$ elements which is a $\lfloor k^2 + 2
k^{3/2} \rfloor(s+1)$-Sidon set.
\end{Th}
Put $q=p^2s$ and $g=\lfloor k^2 + 2
k^{3/2} \rfloor (s+1)$. Thus,
\begin{eqnarray*}
\frac{\alpha_g(q)}{\sqrt{gq}} \ge \frac{|A|}{\sqrt{gq}} & = &
\frac{(kp-k+1)s}{\sqrt{\lfloor k^2 + 2 k^{3/2} \rfloor (s+1)
p^2s}} \\
& \ge & \frac{(kp-k)s}{\sqrt{(k^2 + 2 k^{3/2})(s+1) p^2s}}\\
& \ge & \frac{p-1}{p\sqrt{(1 + 2/\sqrt k)(1+1/s)}}.
\end{eqnarray*}
A convenient choice of the parameters is $k=4s^2$ (so $s = \Theta(g^{1/5})$). Assuming that, we get
\begin{equation*}
\frac{\alpha_g(q)}{\sqrt{gq}}\ge \frac{p-1}p\cdot \frac 1{1+1/s}.
\end{equation*}
Thus, the Prime Number Theorem says that
\begin{equation*}
\frac{\alpha_g}{\sqrt g}=\limsup_{q\to
\infty}\frac{\alpha_g(q)}{\sqrt{gq}}\ge \limsup_{p\to \infty
}\frac{p-1}p\cdot \frac 1{1+1/s}=1+O(g^{-1/5}),
\end{equation*}
which completes the proof of Theorem \ref{modular}.
\section{Upper bound}
We turn now to the proof of Theorem \ref{integer}, which says:
$$\lim_{g \rightarrow \infty} \liminf_{N \rightarrow \infty} \frac{\beta_g(N)}{\sqrt{g}{\sqrt{N}}} = \lim_{g \rightarrow \infty} \limsup_{N \rightarrow \infty} \frac{\beta_g(N)}{\sqrt{g}{\sqrt{N}}} = \sigma.$$
We will prove it in two stages:
\begin{enumerate}
\item [Part A.] $$\limsup_{g \rightarrow \infty} \limsup_{N \rightarrow \infty} \frac{\beta_g(N)}{\sqrt{g}{\sqrt{N}}} \le \sigma.$$
\item [Part B.] $$\liminf_{g \rightarrow \infty} \liminf_{N \rightarrow \infty} \frac{\beta_g(N)}{\sqrt{g}{\sqrt{N}}} \ge \sigma.$$
\end{enumerate}
For Part A we will use the ideas of Schinzel and Schmidt
\cite{Schin}, which connect convolutions with numbers
of representations, linking the continuous and the discrete world.
For the sake of completeness we restate the results and the proofs
in a form more convenient for our purposes.
\bigskip
Recall from (\ref{sigma}) the definition of $\sigma$:
$$\sigma = \sup_{f \in \mathcal{F}} |f|_1,$$
where $\mathcal{F} = \{f: f \ge 0, \ \text{supp}(f) {\subseteq} [0,1], \ |f*f|_{\infty} \le 1 \}.$
\smallskip
We will use the next result, which is assertion (ii) of Theorem 1 in
\cite{Schin} (essentially the same result appears in \cite{Mar2} as Corollary 1.5):
\begin{Th} \label{PolSchin}
Let $\sigma$ be the constant defined above and $\mathcal{Q}_N = \{Q \in \mathbb{R}_{\ge 0} [x] : Q \not \equiv 0, \deg Q < N\}$. Then
$$\sup_{Q \in \mathcal{Q}_N} \frac{|Q|_1}{\sqrt{N} \sqrt{|Q^2|_{\infty}}} \le \sigma,$$
where $|P|_1$ is the sum and $|P|_{\infty}$ the maximum of the coefficients of a polynomial $P$.
\end{Th}
\begin{proof}
First of all, observe that the definition of $\sigma$ is equivalent to this one:
$$\sigma = \sup_{g \in \mathcal{G}} \frac{|g|_1}{\sqrt{|g*g|_{\infty}}},$$
where $\mathcal{G} = \{g: g \ge 0, \ \text{supp}(g) {\subseteq} [0,1] \}.$
\smallskip
Given a polynomial $Q = a_0 + a_1 x + \ldots + a_{N-1} x^{N-1}$ in $\mathcal{Q}_N$, we define the step function $g$ with support in $[0,1)$ having
$$g(x) = a_i \ \text{ for } \ \frac{i}{N} \le x < \frac{i+1}{N} \ \text{ for every } \ i = 0, 1, \ldots, N-1.$$
The convolution of this step function with itself is the polygonal function:
$$ g*g(x) = \sum_{i=0}^{j}a_i a_{j-i} \left(x - \frac{j}{N} \right) + \sum_{i=0}^{j-1}a_i a_{j-1-i} \left(\frac{j+1}{N} - x \right) \text{ if } x \in \left[ \frac{j}{N}, \frac{j+1}{N} \right)$$
for every $j=0, 1, \ldots, 2N - 1$, where we define $a_N = a_{N+1} = \ldots = a_{2N-1} = 0$.
So, $$\sup_{x} (g*g)(x) = \frac{1}{N} \sup_{0 \le j \le 2N-2} \left( \sum_{i=0}^{j} a_i a_{j-i} \right).$$
Since, obviously, $\int_0^1 g(x) \ dx = \frac{1}{N} \sum_{i=0}^{N-1} a_i$, we have:
$$\frac{|Q|_1}{\sqrt{N} \sqrt{|Q^2|_{\infty}}} = \frac{\int_0^1 g(x) \ dx}{\sqrt{\sup_{x} (g*g)(x)}} \le \sigma.$$
And because we have this for every $Q$, the theorem is proved.
\end{proof}
Now, given a $g$-Sidon set $A \ {\subseteq} \ \{0, 1, \ldots, N - 1 \}$, we define the polynomial $Q_A(x) = \sum_{a \in A} x^a$, so $Q_A^2(x) = \sum_{n} r(n) x^n$. Then, Theorem \ref{PolSchin} says that
$$\sigma \ge \frac{|Q_A|_1}{\sqrt{|Q_A^2|_{\infty}} \sqrt{N}} \ge \frac{|A|}{\sqrt{g} \sqrt{N}}.$$
Since this happens for every $g$-Sidon set in $\{0, 1, \ldots, N-1\}$, we have that
$$\frac{\beta_g(N)}{\sqrt{g}{\sqrt{N}}} \le \sigma.$$
This proves Part A of Theorem \ref{integer}, which is the easy
part.
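Concretely, for a finite set $A$ the ratio in Theorem \ref{PolSchin} is computable directly from the representation function, since the coefficients of $Q_A^2$ are exactly the $r(n)$. A small illustration of ours (the name \texttt{ratio} is ours): the Sidon set $\{0,1,3,7\}\subseteq\{0,\dots,7\}$ gives ratio exactly $1$, consistent with the bounds on $\sigma$.

```python
from collections import Counter

def ratio(A, N):
    """|Q_A|_1 / (sqrt(N) * sqrt(|Q_A^2|_inf)) for A within {0,...,N-1}:
    the coefficients of Q_A^2 are the representation function of A,
    so |Q_A^2|_inf is just max_x r(x)."""
    r_max = max(Counter(a1 + a2 for a1 in A for a2 in A).values())
    return len(A) / (N * r_max) ** 0.5

# {0, 1, 3, 7} has all differences distinct, so it is a Sidon set and
# |Q_A^2|_inf = 2; the ratio is 4 / sqrt(8 * 2) = 1.
A = {0, 1, 3, 7}
assert abs(ratio(A, 8) - 1.0) < 1e-12
assert ratio(A, 8) <= 1.2525   # consistent with the upper bound for sigma
```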
\begin{Rem}
In fact, Schinzel and Schmidt not only prove the result above in \cite{Schin}, but also prove (see Theorem \ref{SchSch}) that
$$\lim_{N \to \infty} \sup_{Q \in \mathcal{Q}_N} \frac{|Q|_1}{\sqrt{N} \sqrt{|Q^2|_{\infty}}} = \sigma.$$
Newman polynomials are polynomials all of whose coefficients are 0 or 1. In \cite{Yu}, Gang Yu conjectured that for every sequence of Newman polynomials $Q_N$ with $\deg Q_N = N-1$ and $|Q_N|_1 = o(N)$
$$\limsup_{N \to \infty} \frac{|Q_N|_1}{\sqrt{N} \sqrt{|Q_N^2|_{\infty}}} \le 1.$$
Greg Martin and Kevin O'Bryant \cite{Mar2} disproved this conjecture, finding a sequence of Newman polynomials with $\deg Q_N = N-1$, $|Q_N|_1 = o(N)$ and $$\limsup_{N \to \infty} \frac{|Q_N|_1}{\sqrt{N} \sqrt{|Q_N^2|_{\infty}}} = \frac{2}{\sqrt{\pi}}.$$
In fact, with the probabilistic method it can be proved without much effort that there is a sequence of Newman polynomials, with $\deg Q_N = N-1$ and $|Q_N|_1 = O(N^{1/2} (\log N)^{\beta})$ for any given $\beta > 1/2$, such that
$$\limsup_{N \to \infty} \frac{|Q_N|_1}{\sqrt{N} \sqrt{|Q_N^2|_{\infty}}} = \sigma.$$
Our Theorem \ref{integer} says that given $\varepsilon > 0$, there exists a constant $c_\varepsilon$ and a sequence of polynomials, $Q_N$, with $\deg Q_N = N-1$ and $|Q_N|_1 \le c_{\varepsilon} N^{1/2}$ such that
$$\limsup_{N \to \infty} \frac{|Q_N|_1}{\sqrt{N} \sqrt{|Q_N^2|_{\infty}}} \ge \sigma - \varepsilon.$$
Observe that this growth is close to the best possible, since taking $|Q_N|_1 = o(N^{1/2})$ makes $\frac{|Q_N|_1}{\sqrt{N} \sqrt{|Q_N^2|_{\infty}}} \to 0$.
\end{Rem}
\section{Connecting the discrete and the continuous world}
For Part B of the proof of Theorem \ref{integer} we will need
another result of Schinzel and Schmidt (assertion (iii) of Theorem 1
in \cite{Schin}) which we state in a more convenient form for our
purposes:
\begin{Th} \label{SchSch}
For every $0 < \alpha < 1/2$, for any $0 < \varepsilon < 1$ and for every $n > n(\varepsilon)$, there exist non-negative real numbers $a_0, a_1, \ldots, a_n$ such that
\begin {enumerate}
\item $a_i \le n^{\alpha} (1 - \varepsilon)$ for every $i = 0, 1, \ldots, n$.
\item $\sum_{i=0}^{n} a_i \ge n \sigma (1 - \varepsilon)$.
\item $\sum_{0 \le i, m-i \le n} a_i a_{m-i} \le n (1 + \varepsilon)$ for every $m = 0, 1, \ldots, 2n$.
\end {enumerate}
\end{Th}
\begin{proof}
We start with a real nonnegative function defined in $[0,1]$, $g$, with $|g * g|_{\infty} \le 1$ and $|g|_1$ close to $\sigma$, say $|g|_1 \ge \sigma (1 - \varepsilon/2)$.
\bigskip
For $r < s$ we have the estimate
\begin{eqnarray} \label{estim1}
\left( \int_{r}^{s} g(x) \ dx \right)^2 & = & \int_{r}^{s} \int_{r}^{s} g(x) g(y) \ dx \ dy \nonumber \\
& = & \int_{r + x}^{s + x} \int_{r}^{s} g(x) g(z - x) \ dx \ dz \\
& \le & \int_{2r}^{2s} \int_{r}^{s} g(x) g(z - x) \ dx \ dz \le 2 (s - r) \nonumber
\end{eqnarray}
which implies that
\begin{equation} \label{estim}
\int_{r}^{s} g(x) \ dx \le \sqrt{2(s-r)}.
\end{equation}
\bigskip
Trying to ``discretize'' our function $g$, we define for $i = 0, 1, 2, \ldots, n$:
$$a_i = \frac{n}{2L} \int_{(i - L)/n}^{(i + L)/n} g(x) \ dx$$
where $1 \le L \le n/2$ is an integer that will be determined later.
\bigskip
Estimate (\ref{estim}) shows that
\begin {equation} \label{primera}
a_i \le \sqrt{n/L} \ \text{ for } \ i = 0, 1, 2, \ldots, n.
\end {equation}
\bigskip
Now we give a lower bound for the sum $\sum_{i=0}^{n} a_i$:
$$\sum_{i=0}^{n} a_i = \frac{n}{2L} \int_{0}^{1} \nu(x) g(x) \ dx,$$
where
\begin{eqnarray*}
\nu (x) & = & \sharp \left\{ i \in [0,n] : \frac{i-L}{n} \le x \le \frac{i+L}{n} \right\} \\
& = & \sharp \left\{ i : \max\{ 0, nx - L \} \le i \le \min \{ n, nx + L \} \right\}.
\end{eqnarray*}
\smallskip
Taking into account that an interval of length $M$ contains at least $\lfloor M \rfloor$ integers, that an interval of length $M$ starting or ending at an integer contains $\lceil M \rceil$ integers, and that $L \in \mathbb Z$ with $1 \le L \le n/2$, we have
\begin{displaymath}
\nu (x) \ge
\left\{
\begin{array}{lll}
nx + L = 2L - (L - nx) & \textrm{ if } 0 \le x \le L/n \\
2L & \textrm{ if } L/n \le x \le 1 - L/n \\
n - nx + L = 2L - (L - n(1-x)) & \textrm{ if } 1 - L/n \le x \le 1 \\
\end{array}
\right.
\end{displaymath}
and so
$$\sum_{i=0}^{n} a_i \ge n \int_{0}^{1} g(x) \ dx - \frac{n}{2L} \int_{0}^{L/n} (L - nx) g(x) \ dx - \frac{n}{2L} \int_{1 - L/n}^{1} (L - n(1-x)) g(x) \ dx.$$
Now, using the fact that $|g|_1 \ge \sigma (1 - \varepsilon / 2)$ and estimate (\ref{estim}),
\begin{equation} \label{segunda}
\sum_{i=0}^{n} a_i \ge n \sigma (1 - \varepsilon / 2) - \sqrt{2nL}.
\end{equation}
\bigskip
Also, for every $m \le 2n$ we give an upper bound for the sum $\sum_{0 \le i, m-i \le n} a_i a_{m-i}$. First we write:
$$\sum_{0 \le i, m-i \le n} a_i a_{m-i} = \left( \frac{n}{2L} \right)^2 \sum_{0 \le i, m-i \le n} \int_{(m - i - L)/n}^{(m - i + L)/n} \int_{(i - L)/n}^{(i + L)/n} g(x) g(y) \ dx \ dy.$$
\smallskip
Now, as in (\ref{estim1}), we set $z = x + y$ and we consider the set:
$$S_i = \left\{(x, z) : \frac{i - L}{n} \le x \le \frac{i+L}{n} \ \text{ and } \ \frac{m - i - L}{n} \le z - x \le \frac{m - i + L}{n} \right\}.$$
Then,
$$\sum_{0 \le i, m-i \le n} a_i a_{m-i} = \left( \frac{n}{2L} \right)^2 \sum_{0 \le i, m-i \le n} \int \int_{S_i} g(x) g(z - x) \ dx \ dz$$
and, defining $\mu(x,z) = \sharp \{\max\{0, m-n\} \le i \le \min\{m, n\} : i - L \le nx \le i + L \ \text{ and } \ m - i - L \le n(z - x) \le m - i + L \}$,
$$\sum_{0 \le i, m-i \le n} a_i a_{m-i} = \left( \frac{n}{2L} \right)^2 \int \int \mu(x, z) g(x) g(z - x) \ dx \ dz.$$
If we write $h = i - nx$ then the conditions become $-L \le h \le L$ and $m - L - nz \le h \le m + L - nz$, that is, $$-L + \max \{0, m - nz \} \le h \le L + \min \{0, m - nz \},$$
so $\mu (x,z) \le \lambda(z)$, where $\lambda(z)$ is the number of integers $h$ in this interval (which may be empty); clearly $\lambda(z) \le 2L + 1$. Also, for each fixed $h$, the variable $z$ ranges over an interval of length $2L/n$.
This means (remember that $|g * g|_{\infty} \le 1$)
\begin{eqnarray*}
\sum_{0 \le i, m-i \le n} a_i a_{m-i} & \le & \left( \frac{n}{2L} \right)^2 \int \lambda (z) \int g(x) g(z - x) \ dx \ dz \\
& \le & \left( \frac{n}{2L} \right)^2 \int \lambda (z) \ dz \\
& \le & \left( \frac{n}{2L} \right)^2 \frac{2L(2L + 1)}{n}
\end{eqnarray*}
so the sum
\begin{equation} \label{tercera}
\sum_{0 \le i, m-i \le n} a_i a_{m-i} \le n \left( 1 + \frac{1}{2L} \right).
\end{equation}
Finally, looking at (\ref{primera}), (\ref{segunda}) and (\ref{tercera}), and choosing the integer $L = \lceil n^{1 - 2 \alpha} / (1 - \varepsilon)^2 \rceil$ with $0 < \alpha < 1/2$, for sufficiently large $n$ we'll have:
$$a_i \le n^{\alpha} (1 - \varepsilon) \qquad , \qquad \sum_{i=0}^{n} a_i \ge n \sigma (1 - \varepsilon) \qquad \text{and} \qquad \sum_{0 \le i, m-i \le n} a_i a_{m-i} \le n ( 1 + \varepsilon).$$
\end{proof}
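The construction in the proof can be checked numerically. The sketch below (our illustration, not part of the original argument; all function names and parameter values are arbitrary choices) takes the simplest admissible function $g \equiv 1$ on $[0,1]$, for which $|g * g|_{\infty} \le 1$ and $|g|_1 = \sigma = 1$, fixes $\alpha = 1/3$ and $\varepsilon = 0.1$, chooses $L$ as in the proof, and verifies the three conclusions of the theorem.

```python
import math

def a_coeffs(n, L):
    # a_i = (n / 2L) * integral of g over [(i-L)/n, (i+L)/n], with g = 1 on [0,1]
    a = []
    for i in range(n + 1):
        lo = max((i - L) / n, 0.0)
        hi = min((i + L) / n, 1.0)
        a.append(n / (2 * L) * max(hi - lo, 0.0))
    return a

n, alpha, eps, sigma = 1000, 1 / 3, 0.1, 1.0
L = math.ceil(n ** (1 - 2 * alpha) / (1 - eps) ** 2)  # L as chosen in the proof
a = a_coeffs(n, L)

assert max(a) <= n ** alpha * (1 - eps)          # conclusion 1
assert sum(a) >= n * sigma * (1 - eps)           # conclusion 2
for m in range(2 * n + 1):                       # conclusion 3
    s = sum(a[i] * a[m - i] for i in range(max(0, m - n), min(m, n) + 1))
    assert s <= n * (1 + eps)
```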
\begin{Rem}
Now, we will construct random sets. We want to use the numbers obtained in Theorem \ref{SchSch} to define probabilities, $p_i$, and it will be convenient to know the sum of the $p_i$'s. This is the motivation for defining
$$p_i = a_i \cdot \dfrac{\sigma n^{1 - \alpha}}{\sum_{i=0}^{n} a_i} \quad \text{ for } \quad i = 0, 1, \ldots, n.$$
\end{Rem}
\smallskip
Now we fix $\alpha=1/3$, although any $\alpha \in (0,1/2)$ would
work. Then we have $p_i = a_i \cdot \dfrac{\sigma
n^{2/3}}{\sum_{i=0}^{n} a_i}$, so for any $0 < \varepsilon < 1$ and
for every $n > n(\varepsilon)$, we have $p_0, p_1, \ldots, p_n$ such
that:
$$p_i \le 1 \qquad , \qquad \sum_{i=0}^{n} p_i = \sigma n^{2/3} \qquad \text{ and } \qquad \sum_{0 \le i, m-i \le n} p_i p_{m-i} \le n^{1/3} \dfrac{1 + \varepsilon}{(1 - \varepsilon)^2}.$$
\bigskip
In order to prove that the number of elements and the number of representations in our random sets are as expected with high probability, we'll use Chernoff's inequality (see Corollary 1.9 in \cite{Tao}).
\begin{Prop} \label{Cher}
\textbf{(Chernoff's inequality)} Let $X=t_1+\cdots +t_n$ where the $t_i$ are independent Boolean random variables. Then for
any $\delta>0$
\begin{eqnarray}
{\mathbb{P}}(|X-{\mathbb{E}}(X)|\ge \delta{\mathbb{E}}(X)) \le 2e^{-\min(\delta^2/4,\delta/2){\mathbb{E}}(X)}.
\end{eqnarray}
\end{Prop}
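For a sum of independent Boolean variables the two-sided tail can also be computed exactly from the binomial distribution, so the bound is easy to verify in a concrete case (our illustration; the function names and parameters below are arbitrary):

```python
import math

def binom_two_sided_tail(n, p, delta):
    # Exact P(|X - E(X)| >= delta * E(X)) for X ~ Binomial(n, p)
    mean = n * p
    return sum(math.comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(n + 1) if abs(k - mean) >= delta * mean)

def chernoff_bound(mean, delta):
    # The right-hand side of Chernoff's inequality above
    return 2 * math.exp(-min(delta ** 2 / 4, delta / 2) * mean)

n, p, delta = 100, 0.3, 0.5
tail = binom_two_sided_tail(n, p, delta)
bound = chernoff_bound(n * p, delta)
assert tail <= bound
```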
\smallskip
Then, we have the next two lemmas, which also appear in \cite{CV}:
\begin{Lemma} \label{number}
We consider the probability space of all the subsets $A \ {\subseteq} \ \{0, 1, \ldots, n\}$ defined by ${\mathbb{P}}(i \in A) = p_i$.
With the $p_i$'s defined above, given $0 < \varepsilon < 1$, there exists $n_0(\varepsilon)$ such that, for all $n \ge n_0$,
$${\mathbb{P}}(|A| \ge \sigma n^{2/3} (1 - \varepsilon)) > 0.9.$$
\end{Lemma}
\begin{proof}
Since $|A|$ is a sum of independent Boolean variables and ${\mathbb{E}}(|A|) = \sum_{i=0}^{n} p_i = \sigma n^{2/3}$, we can apply Proposition \ref{Cher} to deduce that for large enough $n$
$${\mathbb{P}}(|A| < \sigma n^{2/3} (1 - \varepsilon) ) \le 2 e^{- \sigma n^{2/3} \varepsilon^2 / 4} < 0.1.$$
\end{proof}
\begin{Lemma} \label{repre}
We consider the probability space of all the subsets $A \ {\subseteq} \ \{0, 1, \ldots, n\}$ defined by ${\mathbb{P}}(i \in A) = p_i$.
Again for the $p_i$'s defined above, given $0 < \varepsilon < 1$, there exists $n_1(\varepsilon)$ such that, for all $n \ge n_1$,
$$r (m) \le n^{1/3} \left( \frac{1 + \varepsilon}{1 - \varepsilon} \right)^3 \text{ for all } m = 0, 1, \ldots, 2n$$
with probability $> 0.9$.
\end{Lemma}
\begin{proof}
Since $r(m) = \sum_{0 \le i, m-i \le n} \mathbb I(i \in A) \mathbb I(m - i \in A)$ is a sum of Boolean variables which are not independent, it is convenient to consider
$$r'(m)/2 = \sum_{\substack{0 \le i, m-i \le n \\ i < m/2}} \mathbb I(i \in A) \mathbb I(m - i \in A)$$
keeping in mind that $r(m) = r'(m) + \mathbb I(m/2 \in A)$.
\smallskip
From the independence of the indicator functions, and following the notation introduced in Definition \ref{defrep}, the expected value of $r'(m)/2$ is
\begin{eqnarray*}
\mu_m & = & {\mathbb{E}}(r'(m)/2) = \sum_{\substack{0 \le i, m-i \le n \\ i < m/2}} {\mathbb{E}}(\mathbb I(i \in A)\mathbb I(m - i \in A)) \\
& = & \sum_{\substack{0 \le i, m-i \le n \\ i < m/2}} p_i p_{m-i} \le \frac{n^{1/3}}{2} \cdot \frac{1 + \varepsilon}{(1 - \varepsilon)^2},
\end{eqnarray*}
for every $m = 0, 1, \ldots, 2n$, for $n$ large enough.
\smallskip
If $\mu_m = 0$ then ${\mathbb{P}}(r'(m)/2 > 0) = 0$, so we can consider the next two cases:
\begin{itemize}
\item If $\frac{1}{3} \cdot \frac{n^{1/3}(1 + \varepsilon)}{2 (1 - \varepsilon)^2} \le \mu_m$, we can apply Proposition \ref{Cher} (observe that $\varepsilon < 2$ and then $\varepsilon^2/4 \le \varepsilon/2$) to have
\begin{eqnarray*}
{\mathbb{P}} \left( r'(m)/2 \ge \frac{n^{1/3}}{2} \left( \frac{1 + \varepsilon}{1 - \varepsilon} \right)^2 \right) & \le & {\mathbb{P}} ( r'(m)/2 \ge \mu_m (1 + \varepsilon)) \\
& \le & 2 \exp \left(- \frac{\varepsilon ^2 \mu_m}{4} \right) \\
& \le & 2 \exp \left(- \frac{n^{1/3} \varepsilon ^2 (1 + \varepsilon)}{24 (1 - \varepsilon)^2} \right)
\end{eqnarray*}
\item If $0 < \mu_m < \frac{1}{3} \cdot \frac{n^{1/3}(1 + \varepsilon)}{2 (1 - \varepsilon)^2}$ then we define $\delta = \frac{n^{1/3}}{2 \mu_m} \left( \frac{1 + \varepsilon}{1 - \varepsilon} \right)^2 - 1$ (observe that $\delta \ge 2$ and then $\delta/2 \le \delta^2/4$) and we can apply Proposition \ref{Cher} to have
\begin{eqnarray*}
{\mathbb{P}} \left( r'(m)/2 \ge \frac{n^{1/3}}{2} \left( \frac{1 + \varepsilon}{1 - \varepsilon} \right)^2 \right) & = & {\mathbb{P}}(r'(m)/2 \ge \mu_m (1 + \delta)) \\
& \le & 2 \exp \left(- \frac{\delta \mu_m}{2} \right) \\
& = & 2 \exp \left( - \frac{n^{1/3}}{4} \left( \frac{1 + \varepsilon}{1 - \varepsilon} \right)^2 + \frac{\mu_m}{2} \right) \\
& \le & 2 \exp \left( - \frac{n^{1/3}}{4} \left( \frac{1 + \varepsilon}{1 - \varepsilon} \right)^2 + \frac{n^{1/3}(1 + \varepsilon)}{12(1 - \varepsilon)^2} \right) \\
& \le & 2 \exp \left( - \frac{n^{1/3}}{6} \left( \frac{1 + \varepsilon}{1 - \varepsilon} \right)^2 \right) \\
\end{eqnarray*}
\end{itemize}
Then,
\begin{eqnarray*}
&& {\mathbb{P}} \left( r'(m)/2 \ge \frac{n^{1/3}}{2} \left( \frac{1 + \varepsilon}{1 - \varepsilon} \right)^2 \text{ for some } m \right)\\
&& \le 4n \left( \exp \left(- \frac{n^{1/3} \varepsilon ^2 (1 + \varepsilon)}{24 (1 - \varepsilon)^2} \right) + \exp \left( - \frac{n^{1/3}}{6} \left( \frac{1 + \varepsilon}{1 - \varepsilon} \right)^2 \right) \right)
\end{eqnarray*}
which is $< 0.1$ for $n$ large enough.
\smallskip
Remembering that $r(m) = r'(m) + \mathbb I(m/2 \in A)$,
$${\mathbb{P}} \left( r(m) \ge n^{1/3} \left( \frac{1 + \varepsilon}{1 - \varepsilon} \right)^2 + \ \mathbb I(m/2 \in A) \ \text{ for some } m \right) < 0.1 \text{ for } n \text{ large enough},$$
and finally
$${\mathbb{P}} \left( r(m) \ge n^{1/3} \left( \frac{1 + \varepsilon}{1 - \varepsilon} \right)^3 \text{ for some } m \right) < 0.1 \text{ for } n \text{ large enough}.$$
\end{proof}
Lemmas \ref{number} and \ref{repre} imply that, given $0 < \varepsilon < 1$, for $n \ge \max \{n_0, n_1\}$, the probability that our random set $A$ satisfies $|A| \ge \sigma n^{2/3} (1 - \varepsilon)$ and $r(m) \le n^{1/3} \left( \frac{1 + \varepsilon}{1 - \varepsilon} \right)^3$ for every $m$ is greater than $0.8$. In particular, for every $n \ge \max\{n_0, n_1\}$ we have a set $A {\subseteq} \{0, 1, \ldots, n\}$ satisfying these conditions.
\section{From residues to integers}
In order to prove Part B of Theorem \ref{integer}, we will also need the next lemma, which allows us to ``paste'' copies of a $g_2$-Sidon set in a cyclic group with a dilation of a $g_1$-Sidon set in the integers.
\begin{Lemma} \label{pasting}
Let $A = \{0 = a_1 < \ldots < a_k \}$ be a $g_1$-Sidon set in $\mathbb{Z}$ and let $C \subseteq [1, q]$ be a $g_2$-Sidon set $\pmod{q}$. Then $B = \cup_{i=1}^{k}(C + q a_i)$ is a $g_1 g_2$-Sidon set in $[1, q(a_k + 1)]$ with $k|C|$ elements.
\end{Lemma}
\begin{proof}
Suppose we have $g_1 g_2 + 1$ representations of an element as the sum of two elements of $B$:
$$b_{1,1} + b_{2,1} = b_{1,2} + b_{2,2} = \cdots = b_{1,g_1 g_2 + 1} + b_{2,g_1 g_2 + 1}.$$
Each $b_{i,j}$ can be written in a unique way as $b_{i,j} = c_{i,j} + q a_{i,j}$ with $c_{i,j} \in C$ and $a_{i,j} \in A$. Looking at the equality modulo $q$, we obtain
$$c_{1,1} + c_{2,1} = c_{1,2} + c_{2,2} = \cdots = c_{1,g_1 g_2 + 1} + c_{2,g_1 g_2 + 1} \pmod{q}.$$
Since $C$ is a $g_2$-Sidon set $\pmod{q}$, by the pigeonhole principle, there are at least $g_1 + 1$ pairs $(c_{1,i_1},c_{2,i_1})$, ..., $(c_{1,i_{g_1 + 1}},c_{2,i_{g_1 + 1}})$ such that:
$$c_{1,i_1} = \cdots = c_{1,i_{g_1 + 1}} \qquad \text{and} \qquad c_{2,i_1} = \cdots = c_{2,i_{g_1 + 1}}.$$
So the corresponding $a_i$'s satisfy
$$a_{1,i_1} + a_{2,i_1} = \cdots = a_{1,i_{g_1 + 1}} + a_{2,i_{g_1 + 1}},$$
and since $A$ is a $g_1$-Sidon set, there must be an equality
$$a_{1, s} = a_{1, t} \qquad \text{and} \qquad a_{2, s} = a_{2, t}$$
for some distinct $s, t \in \{ i_1, \ldots, i_{g_1 + 1} \}$.
Then, for these $s$ and $t$ we have
$$b_{1, s} = b_{1, t} \qquad \text{and} \qquad b_{2, s} = b_{2, t},$$
which completes the proof.
\end{proof}
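The pasting can be illustrated with the smallest classical examples (our illustration; here the representation function counts ordered pairs, consistent with the counting used for $r(m)$ above). With $A = \{0, 1, 3\}$, which is $g_1$-Sidon in $\mathbb{Z}$ with $g_1 = 2$, and $C = \{1, 2, 4\}$, a perfect difference set which is $g_2$-Sidon $\pmod 7$ with $g_2 = 2$, the lemma predicts that $B$ is a $4$-Sidon set in $[1, 28]$:

```python
from collections import Counter

def max_rep(S):
    # Ordered representation function: r(m) = #{(x, y) in S x S : x + y = m}
    return max(Counter(x + y for x in S for y in S).values())

def max_rep_mod(S, q):
    # Same, with sums taken modulo q
    return max(Counter((x + y) % q for x in S for y in S).values())

A, C, q = [0, 1, 3], [1, 2, 4], 7
g1, g2 = max_rep(A), max_rep_mod(C, q)            # both equal 2

B = sorted(c + q * a for a in A for c in C)
assert len(B) == len(A) * len(C)                  # k * |C| elements
assert all(1 <= b <= q * (max(A) + 1) for b in B) # B inside [1, q(a_k + 1)]
assert max_rep(B) <= g1 * g2                      # B is a g1*g2-Sidon set
```

For instance, $10 = 1 + 9 = 9 + 1 = 2 + 8 = 8 + 2$ has exactly $4 = g_1 g_2$ ordered representations in $B$, so the bound of the lemma is attained here.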
\bigskip
With all these tools, we are ready to finish our proof.
\bigskip
Given $0 < \varepsilon < 1$ we have that:
\begin{enumerate}
\item [a)] For every large enough $g$ we can define $n = n(g)$ as the least integer such that $g = \lfloor n^{1/3} \left( \frac{1 + \varepsilon}{1 - \varepsilon} \right)^3 \rfloor$, and such an $n$ exists because $n^{1/3} \left( \frac{1 + \varepsilon}{1 - \varepsilon} \right)^3$ grows more slowly than $n$. Observe that $n(g) \to \infty$ when $g \to \infty$.
Now, by Lemmas \ref{number} and \ref{repre}, there is $g_0 = g_0(\varepsilon)$ such that for every $g_1 \ge g_0$ we can consider $n = n(g_1)$, and we have a $g_1$-Sidon set $A {\subseteq} \{0, 1, \ldots, n\}$ such that
$$\frac{|A|}{\sqrt{g_1}\sqrt{n+1}} \ge \sigma \sqrt{\frac{n}{n + 1}} \cdot \frac{(1 - \varepsilon)^{5/2}}{(1 + \varepsilon)^{3/2}}.$$
\item [b)] By Theorem \ref{ciclicp2s}, there are $g_2 = g_2(\varepsilon)$, $s = s(\varepsilon)$ and a sequence $q_0 = p_r^2 s$, $q_1 = p_{r+1}^2 s$, $q_2 = p_{r+2}^2 s$, $\ldots$ (where $p_i$ is the $i$-th prime and $r = r(\varepsilon)$) such that for every $i = 0, 1, 2, \ldots$ there is a $g_2$-Sidon set $A_i {\subseteq} \mathbb{Z}_{q_i}$ with
$$\frac{|A_i|}{\sqrt{g_2 q_i}} \ge 1 - \varepsilon.$$
\end{enumerate}
\smallskip
So, given $0 < \varepsilon < 1$:
\begin{enumerate}
\item [1) ] For every $g \ge g_0(\varepsilon) g_2(\varepsilon)$ there is a $g_1 = g_1(g)$ such that
$$g_1 g_2 \le g < (g_1 + 1) g_2,$$
and we have $n = n(g_1)$ with $g_1 = \lfloor n^{1/3} \left( \frac{1 + \varepsilon}{1 - \varepsilon}\right)^3\rfloor$ and a $g_1$-Sidon set $A {\subseteq} \{0, 1, \ldots, n \}$ with
$$\frac{|A|}{\sqrt{g_1}\sqrt{n + 1}} \ge \sigma \sqrt{\frac{n}{n + 1}} \cdot \frac{(1 - \varepsilon)^{5/2}}{(1 + \varepsilon)^{3/2}}.$$
\smallskip
\item [2) ] For any $N \ge (n + 1) q_0$, there is an $i = i(N)$ such that
$$(n + 1) q_i \le N < (n + 1) q_{i + 1},$$
and we have a $g_2$-Sidon set $\pmod{q_i}$, $A_i$, with
$$\frac{|A_i|}{\sqrt{g_2 q_i}} \ge 1 - \varepsilon.$$
\end{enumerate}
Then, for any $g$ and $N$ large enough, applying Lemma \ref{pasting} we can construct a $g_1 g_2$-Sidon set from $A$ and $A_i$ with $|A||A_i|$ elements in $[1, N]$.
So we have that $\beta_g(N) \ge \beta_{g_1 g_2}(N) \ge |A||A_i|$ and then
\begin{eqnarray*}
\frac{\beta_g(N)}{\sqrt{g} \sqrt{N}} & \ge & \frac{\beta_{g_1 g_2}(N)}{\sqrt{(g_1 + 1)g_2} \sqrt{(n + 1)q_{i+1}}} \\
& \ge & \frac{|A| |A_i|}{\sqrt{g_1 g_2} \sqrt{(n + 1)q_i}} \sqrt{\frac{g_1}{g_1 + 1}} \sqrt{\frac{q_i}{q_{i+1}}} \\
& \ge & \sigma \frac{(1 - \varepsilon)^{7/2}}{(1 + \varepsilon)^{3/2}} \sqrt{\frac{n}{n + 1}} \sqrt{\frac{g_1}{g_1 + 1}} \cdot \frac{p_{r+i}}{p_{r+i+1}}.
\end{eqnarray*}
Finally, as a consequence of the Prime Number Theorem, this means that, given $0 < \varepsilon < 1$, for $g$ and $N$ large enough
$$\frac{\beta_g(N)}{\sqrt{g} \sqrt{N}} \ge \sigma \frac{(1 - \varepsilon)^{9/2}}{(1 + \varepsilon)^{3/2}}$$
i.e.,
$$\liminf_{g \rightarrow \infty} \liminf_{N \rightarrow \infty} \frac{\beta_g(N)}{\sqrt{g} \sqrt{N}} \ge \sigma.$$
| {
"timestamp": "2009-09-28T09:12:44",
"yymm": "0909",
"arxiv_id": "0909.5024",
"language": "en",
"url": "https://arxiv.org/abs/0909.5024",
"abstract": "We give asymptotic sharp estimates for the cardinality of a set of residue classes with the property that the representation function is bounded by a prescribed number. We then use this to obtain an analogous result for sets of integers, answering an old question of Simon Sidon.",
"subjects": "Number Theory (math.NT); Combinatorics (math.CO)",
"title": "Generalized Sidon sets",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9805806540875628,
"lm_q2_score": 0.822189123986562,
"lm_q1q2_score": 0.8062227489824232
} |
https://arxiv.org/abs/1208.3958 | A Note on why Enforcing Discrete Maximum Principles by a simple a Posteriori Cutoff is a Good Idea | Discrete maximum principles in the approximation of partial differential equations are crucial for the preservation of qualitative properties of physical models. In this work we enforce the discrete maximum principle by performing a simple cutoff. We show that for many problems this a posteriori procedure even improves the approximation in the natural energy norm. The results apply to many different kinds of approximations including conforming higher order and $hp$-finite elements. Moreover in the case of finite element approximations there is no geometrical restriction on the partition of the domain. | \section{Introduction}
\label{sec:introduction}
Consider a function $u:\Omega\to \setR$ on some bounded domain
$\Omega\subset\setR^d$ such that
\begin{align}\label{eq:rd}
-\Delta u + c \,u \le 0
\end{align}
in the variational sense for some nonnegative $c$. Then for $u^+=\max\{0,u\}$, we have the estimate
\begin{align}\label{eq:mp}
\sup_{\Omega} u \le \sup_{\partial\Omega} u^+,
\end{align}
which is well known as the weak maximum principle; compare e.g. with
\cite{GilbargTrudinger:83}.
Maximum principles usually reflect fundamental physical
principles, such as the positivity of a density, and it is
desirable that numerical schemes respect such principles.
Therefore, for a conforming approximation $U$,
computed by some numerical scheme, the question arises whether it
satisfies a so-called discrete maximum principle
\begin{align}
\label{eq:DMP}\tag{DMP}
\sup_{\Omega} U \le \sup_{\partial\Omega} U^+.
\end{align}
Suppose for example that the approximation $U$ is generated by a
conforming finite element method.
In this case there are plenty of results on discrete
maximum principles; without attempting to
provide a complete list, we refer to
\cite{CiarletRaviart:73,Santos:82,
BrandtsKorotovKrizek:2008,LiHuang:10,
DieningKreuzerSchwarzacher:12}. All those results have in common that
they are based on piecewise affine finite element spaces and on
strong geometric restrictions on the underlying partitions of the
domain. To be more precise, they are basically restricted to
non-obtuse or even acute simplicial meshes. Although there are some
results on non-obtuse refinement in
\cite{KorotovKrizek:2011,KorotovKrizek:2005},
it is clear that those restrictions introduce serious complications
for the refinement and meshing of the domain $\Omega$.
The situation is even less satisfying for
higher order finite elements, let alone $hp$-finite elements. To the
best of our knowledge, for these schemes discrete maximum principles
are known only in a relaxed sense \cite{Schatz:80} or in very
restrictive situations; see
\cite{HoehnMittelmann:81}.
Based on an analysis of discrete Green's functions, Dr{\u{a}}g{\u{a}}nescu, Dupont, and Scott
\cite{DragDupScott:04} suggest that the discrete maximum principle
for piecewise affine functions may fail only in a small region close
to the boundary. This motivates simply cutting off the unphysical
behavior in those small regions. In other
words, we define
\begin{align}\label{eq:cut}
U^* = \min\big\{U,\,\sup_{\partial\Omega} U^+\big\}\qquad\text{in}~\Omega.
\end{align}
Obviously the modified function
$U^*$ satisfies \eqref{eq:DMP} independently of the polynomial degree
and of any underlying partition of $\Omega$. Although so far
this technique has lacked mathematical justification,
it is actually quite popular in engineering.
In this paper we overcome this drawback. In fact, using variational techniques from
\cite{DieningKreuzerSchwarzacher:12,OttLM98,BilFuc02}, we prove in
section~\ref{sec:cutting} that the modified function $U^*$ is even a
better approximation than $U$. To be more precise, if
$u_{|\partial\Omega}=U_{|\partial\Omega}$, then we have
\begin{align}
\label{eq:main}
\normm{u-U^*}\le\normm{u-U}
\end{align}
in the norm
$\normm{\cdot}^2=\int_\Omega\abs{\nabla\cdot}^2+c\,\abs{\cdot}^2\,{\rm
d}x$, induced by the reaction diffusion differential operator in
\eqref{eq:rd}. In section~\ref{sec:applications} we generalize the
techniques to systems of PDEs and to the $p$-Laplace operator.
We emphasize that the results are restricted neither to finite element
approximations nor to Galerkin schemes. In fact, the results can be
applied to any conforming approximation of $u$.
\section{Enforcing the Discrete Maximum Principle}
\label{sec:cutting}
In this section we shall prove the main result~\eqref{eq:main} for functions
satisfying \eqref{eq:rd}.
To this end, for the bounded domain $\Omega\subset\setR^d$, $d\in\setN$, let
$L^2(\Omega)$ and $H^1(\Omega)$ be
the space of square integrable Lebesgue and
Sobolev functions on $\Omega$, respectively.
We denote by $\big(H^1(\Omega)\big)^*$ the dual of $H^1(\Omega)$
and by $H_0^1(\Omega)$ the subspace of functions in $H^1(\Omega)$
with vanishing trace on the boundary $\partial\Omega$.
The variational formulation of \eqref{eq:rd} reads as follows:
Let $0\le c\in L^\infty(\Omega)$, i.e.,\xspace a nonnegative essentially
bounded function. We assume that $u\in
H^1(\Omega)$ is such that
\begin{align}\label{eq:rdweak}
\int_\Omega\nabla u\cdot\nabla v +c\, uv\,{\rm d}x =: F(v)\le 0 \qquad
\text{for all}~v\in H_0^1(\Omega)~\text{with}~v\ge 0~\text{in}~\Omega.
\end{align}
With this definition we have $F\in \big(H^{1}(\Omega)\big)^*$
and it is well known that $u$ satisfies the weak maximum principle
\eqref{eq:mp}; see \cite{GilbargTrudinger:83}.
Moreover, $u$ is the unique minimizer of
\begin{align}\label{eq:energy}
\ensuremath\mathcal{J}(v):=\frac12 \int_\Omega |\nabla v|^2+ c\,|v|^2\,{\rm d}x - F(v)
\end{align}
in $u+H^1_0(\Omega)$. In other words, $u$ minimizes the energy among all
functions in $H^1(\Omega)$ that coincide with $u$ on the boundary
$\partial\Omega$.
Let now $U\in H^1(\Omega)$ be some approximation to $u$. Note that
there is no restriction on the kind of approximation beyond it being
conforming; we will
come back to this issue in Remark \ref{r:bnd} and the Conclusion \S\ref{sec:conclusion} below.
It follows by
standard arguments that
\begin{align}\label{eq:U^*}
U^*\mathrel{:=} \min\big\{U,\,\sup_{\partial\Omega} U^+\big\}=
\big(U-\sup_{\partial\Omega} U^+\big)^-+\sup_{\partial\Omega} U^+\in H^1(\Omega).
\end{align}
Consequently, $U^*$ satisfies
\eqref{eq:DMP}. If $U_{|\partial\Omega}=u_{|\partial\Omega}$, we have
by \eqref{eq:mp} that
\begin{align*}
|u(x)-U^*(x)| &= \left\{
\begin{alignedat}{2}
&|u(x) - U(x)|, &\quad&\text{if}~ U(x)\le
\sup_{\partial\Omega} U^+\\
&\sup_{\partial\Omega} U^+ -
u(x), &\quad&\text{if}~U(x)>
\sup_{\partial\Omega} U^+
\end{alignedat}\right\}
\le \abs{u(x)-U(x)},
\end{align*}
for almost every $x\in\Omega$. Hence, it follows that
\begin{align}\label{eq:L2}
\norm{u-U^*}_{L^2(\Omega)}\le
\norm{u-U}_{L^2(\Omega)}.
\end{align}
Here $\|\cdot\|_{L^2(\Omega)}^2\mathrel{:=}\int_\Omega|\cdot|^2\,{\rm d}x$
denotes the standard norm on $L^2(\Omega)$.
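The pointwise estimate can be observed in a simple one-dimensional example (our illustration with arbitrarily chosen data): $u(x) = -\sin(\pi x)$ satisfies $-u'' \le 0$ on $(0,1)$ with $c = 0$, so $\sup_\Omega u \le \sup_{\partial\Omega} u^+ = 0$, and we compare it with an approximation $U$ that has the same boundary values but overshoots the bound.

```python
import math

N = 400
x = [i / N for i in range(N + 1)]
# u = -sin(pi x): -u'' = -pi^2 sin(pi x) <= 0 on (0, 1), so the weak
# maximum principle gives sup u <= sup of u^+ over the boundary, which is 0
u = [-math.sin(math.pi * xi) for xi in x]
# An approximation with the same (zero) boundary values overshooting above 0
U = [-math.sin(math.pi * xi) + 1.5 * math.sin(2 * math.pi * xi) for xi in x]
M = max(0.0, U[0], U[-1])          # sup of U^+ over the boundary
Ustar = [min(v, M) for v in U]     # the cutoff

# |u - U*| <= |u - U| pointwise, hence also in L2
assert all(abs(a - s) <= abs(a - b) + 1e-15 for a, s, b in zip(u, Ustar, U))

def l2(w):
    return math.sqrt(sum(t * t for t in w) / N)

assert l2([a - s for a, s in zip(u, Ustar)]) < l2([a - b for a, b in zip(u, U)])
```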
The corresponding estimate \eqref{eq:main} for the energy norm is
less obvious. To prove this estimate we need
some more properties of the truncated function $U^*$.
Similarly as in \cite{OttLM98} we have on the one hand, that
\begin{align*}
U^*(x)=
\begin{cases}
U(x), \qquad&\text{if}~ U(x)\le \sup_{\partial\Omega} U^+,
\\
\sup_{\partial\Omega} U^+\ge0,\qquad&\text{if}~ U(x)> \sup_{\partial\Omega} U^+,
\end{cases}\qquad x\in\Omega,
\end{align*}
and hence we obtain
\begin{subequations}\label{eq:U*props}
\begin{align}
\label{eq:U^*<U}
U^*\le U\qquad\text{and}\qquad\abs{U^*}&\leq
\abs{U}\qquad\text{in}~\Omega.
\end{align}
On the other hand we have
\begin{align*}
\nabla U^*(x) =
\begin{cases}
\nabla U(x),\qquad&\text{if}~ U(x)\le \sup_{\partial\Omega} U^+,
\\
0,\qquad&\text{if}~ U(x)> \sup_{\partial\Omega} U^+,
\end{cases} \qquad x\in\Omega
\end{align*}
and consequently
\begin{align}
\label{eq:nablaU^*<nablaU}
\abs{\nabla U^*}\le \abs{\nabla U}\qquad\text{in}~\Omega.
\end{align}
\end{subequations}
These observations are the key properties for proving the following result.
\begin{theorem}\label{thm:main}
Suppose the conditions of this section. In particular,
let $U\in H^1(\Omega)$ and let $U^*\in H^1(\Omega)$ be the
modification of $U$ according to \eqref{eq:U^*}. Then we have for
the energy $\mathcal{J}\colon H^1(\Omega)\to \setR$ defined in \eqref{eq:energy}, that
\begin{align}\label{eq:reduction}
\ensuremath\mathcal{J}(U^*)\le \ensuremath\mathcal{J}(U)\qquad\text{and}\qquad
U^*-U\in H_0^{1}(\Omega).
\end{align}
\end{theorem}
\begin{proof}
The second claim is a direct consequence of the definition
\eqref{eq:U^*} of $U^*$. For the first claim we observe from
\eqref{eq:nablaU^*<nablaU} that
\begin{align*}
\int_\Omega\abs{\nabla U^*}^2\,{\rm d}x \le \int_\Omega\abs{\nabla U}^2\,{\rm d}x.
\end{align*}
Furthermore, it follows from the second inequality in \eqref{eq:U^*<U} that
\begin{align}\label{ineq:L2}
\int_\Omega \abs{U^*}^2\,{\rm d}x\le\int_\Omega \abs{U}^2\,{\rm d}x
\end{align}
and since $F(v)\le 0$ for all $v\in H_0^1(\Omega)$ with $v\ge 0$, while
$U-U^*\in H_0^1(\Omega)$ satisfies $U-U^*\ge 0$, we
obtain
\begin{align*}
F(U)-F(U^*)=F(U-U^*) \le 0\quad\Rightarrow\quad -F(U^*)\le -F(U).
\end{align*}
Combining these estimates together with the definition
\eqref{eq:energy} of the energy
proves the theorem.
\end{proof}
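The energy reduction can be reproduced with a crude one-dimensional finite-difference quadrature of $\ensuremath\mathcal{J}$ (our sketch; the grid, the data, and the approximation $U$ are arbitrary choices, and $F(v) = \int_\Omega f v \,{\rm d}x$ with $f \le 0$). Both mechanisms of the proof are visible in the code: $t \mapsto \min\{t, M\}$ is $1$-Lipschitz, so each difference quotient can only shrink, and $U - U^* \ge 0$ together with $f \le 0$ makes the load term decrease.

```python
import math

def energy(V, f, c, h):
    # Discrete J(V) = sum of h * ( |V'|^2 / 2 + c V^2 / 2 - f V ),
    # with forward difference quotients for V'
    grad = sum((V[i + 1] - V[i]) ** 2 for i in range(len(V) - 1)) / (2 * h)
    react = 0.5 * c * h * sum(v * v for v in V)
    load = h * sum(fi * vi for fi, vi in zip(f, V))
    return grad + react - load

N = 200
h = 1.0 / N
x = [i * h for i in range(N + 1)]
c = 1.0                          # nonnegative reaction coefficient
f = [-1.0 for _ in x]            # f <= 0, so F(v) <= 0 whenever v >= 0

# A conforming "approximation" that overshoots its own boundary values
U = [math.sin(math.pi * xi) + 0.2 * xi for xi in x]
M = max(0.0, U[0], U[-1])        # sup of U^+ over the boundary {0, 1}
Ustar = [min(u, M) for u in U]   # the cutoff U^*

assert energy(Ustar, f, c, h) <= energy(U, f, c, h)
```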
\begin{corollary}\label{cor:main}
Suppose the conditions of this section and assume further that
\begin{align*}
U-u\in H_0^1(\Omega), \qquad\text{i.e.,\xspace}~u=U~\text{on}~\partial\Omega.
\end{align*}
Then we have
\begin{align*}
\normm{u-U^*}\le \normm{u-U}
\end{align*}
for the energy norm $\normm{\cdot}^2\mathrel{:=}
\int_\Omega\abs{\nabla\cdot}^2 +c\,\abs{\cdot}^2\,{\rm d}x$ on
$H^1_0(\Omega)$ induced by \eqref{eq:rdweak}.
\end{corollary}
\begin{proof}
It follows
from the assumption $u-U\in H_0^1(\Omega)$ that $U,U^*\in
u+H_0^1(\Omega)$. We recall that $u$ is the unique minimizer of the
energy $\ensuremath\mathcal{J}$
in $u+H_0^1(\Omega)$. Therefore, together with Theorem~\ref{thm:main}, we have
\begin{align}\label{eq:4}
0\le \ensuremath\mathcal{J}(U^*)-\ensuremath\mathcal{J}(u)\le \ensuremath\mathcal{J}(U)-\ensuremath\mathcal{J}(u).
\end{align}
On the other hand, for arbitrary $v\in u+H_0^1(\Omega)$ we have
$u-v\in H_0^1(\Omega)$ and it follows from
\eqref{eq:rdweak} that
\begin{align*}
\ensuremath\mathcal{J}(v)&-\ensuremath\mathcal{J}(u)= \frac12 \int_\Omega |\nabla v|^2+
c\,|v|^2\,{\rm d}x -\frac12 \int_\Omega |\nabla u|^2+ c\,|u|^2\,{\rm d}x
+ F(u-v)
\\
&= \frac12 \int_\Omega |\nabla v|^2+ c\,|v|^2\,{\rm d}x - \int_\Omega
\nabla v\cdot\nabla u+ c\,vu\,{\rm d}x + \frac12 \int_\Omega |\nabla
u|^2+ c\,|u|^2\,{\rm d}x
\\
&=\frac12\normm{u-v}^2.
\end{align*}
Using this observation with $v=U$ respectively $v=U^*$ in \eqref{eq:4} proves
the claim.
\end{proof}
\begin{remark}
For $c\equiv 0$ the maximum principle \eqref{eq:mp} reads
as
\begin{align*}
\sup_{\Omega}u\le \sup_{\partial \Omega}u.
\end{align*}
Consequently, in this case, we define
\begin{align*}
U^*\mathrel{:=} \min\big\{U,\,\sup_{\partial\Omega} U\big\}.
\end{align*}
Since $c\equiv0$ we do not need
the second estimate in \eqref{eq:U^*<U} in order to prove
Theorem \ref{thm:main}. All other estimates, namely the first
estimate in \eqref{eq:U^*<U} and \eqref{eq:nablaU^*<nablaU}, stay
true for $U^*$ as defined above. Therefore,
Theorem \ref{thm:main} and Corollary \ref{cor:main} are
still valid in this case; compare also with the examples of Section~\ref{sec:applications}
below.
\end{remark}
\begin{remark}\label{r:bnd}
In the approximation of solutions to partial differential
equations with finite elements one usually
considers the error of the boundary values and the
residual in the interior of $\Omega$ separately; see e.g. \cite{BrennerScott:08}. To be more precise, let
$u$ satisfy \eqref{eq:rdweak} and let $G$ be a discrete
approximation of $u$ on $\partial\Omega$. We consider an approximation $U\in
H^1(\Omega)$ with $U=G$ on $\partial\Omega$ to the weak solution $u_G\in H^1(\Omega)$ of the problem
\begin{align*}
-\Delta u_G +c\,u_G = -\Delta u +c\,u \quad\text{in}~\Omega,\qquad\text{and}\qquad
u_G=G~\text{on}~\partial\Omega.
\end{align*}
Then $U$ satisfies Theorem
\ref{thm:main} and Corollary \ref{cor:main} with $u_G$ instead of
$u$, i.e.,\xspace let $U^*\in H^1(\Omega)$ be defined as in \eqref{eq:U^*}, then
\begin{align*}
\normm{u-U^*}\le \normm{u_G-U^*} + \normm{u-u_G}\le \normm{u_G-U} + \normm{u-u_G}.
\end{align*}
The error $\normm{u-u_G}$ can be estimated by means of the trace $u-G$
on~$\partial\Omega$. Similar
techniques can be applied in the case of a curved boundary.
\end{remark}
\section{Extensions}
\label{sec:applications}
In the previous
section, for ease of presentation, we restricted ourselves to linear
scalar-valued reaction-diffusion problems. However, the presented
ideas can be generalized to more complicated problems. We
shall present two examples.
\subsection{Convex Hull Property}
\label{sec:convex-hull-property}
In this section we shall generalize the results of Section
\ref{sec:cutting} to vector valued functions.
In order to do so, we first have to
generalize the cutoff process in \eqref{eq:cut} to higher
dimensions. To this end, let $K\subset \setR^m$, $m\in\setN$, be a
convex set and define $\Pi_K:\setR^m\to K$ to be the closest point projection
with respect to the Euclidean norm $|\cdot|\colon\setR^m\to\setR$. In other
words
\begin{align}\label{eq:PiK}
\Pi_K x := \ensuremath{\operatorname{argmin}}_{y \in K} \abs{x-y}.
\end{align}
This definition can be
extended to
\begin{align*}
\Pi_K: L^2(\Omega)^m\to L^2(\Omega)^m\qquad\text{by setting}\qquad (\Pi_K
\ensuremath{\boldsymbol{v}})(x):=\Pi_K \ensuremath{\boldsymbol{v}}(x).
\end{align*}
It follows from the convexity of $K$, by elementary computations, that
$\Pi_K$ is $1$-Lipschitz, and hence a generalized chain rule implies $
\Pi_K:H^1(\Omega)^m\to H^1(\Omega)^m$ with
\begin{align}\label{eq:estPiK}
|\nabla
(\Pi_K \ensuremath{\boldsymbol{v}})(x)|\le \operatorname{Lip}(\Pi_K) |\nabla \ensuremath{\boldsymbol{v}}(x)| = |\nabla
\ensuremath{\boldsymbol{v}}(x)|\qquad x\in\Omega;
\end{align}
compare with
\cite{BilFuc02,AmbrosioDalMaso:1990,AmbrosioFuscoPallara:2000}.
This is the replacement for \eqref{eq:nablaU^*<nablaU}.
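For a concrete instance of the Lipschitz bound (our illustration; the function names below are ours), take $K$ to be the closed unit disc in $\setR^2$: the closest point projection is then explicit, $\Pi_K x = x$ for $|x| \le 1$ and $\Pi_K x = x/|x|$ otherwise, and its nonexpansiveness is easy to test:

```python
import math

def proj_unit_disc(v):
    # Closest point projection onto the closed unit disc K in R^2
    r = math.hypot(v[0], v[1])
    return v if r <= 1.0 else (v[0] / r, v[1] / r)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

pts = [(0.3, 0.1), (2.0, -1.0), (-3.0, 4.0), (0.0, 0.99), (5.0, 5.0), (-0.5, -2.5)]
for xp in pts:
    for yp in pts:
        # Pi_K is 1-Lipschitz: it never moves two points further apart
        assert dist(proj_unit_disc(xp), proj_unit_disc(yp)) <= dist(xp, yp) + 1e-12
```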
Let $\ensuremath{\boldsymbol{u}}\in H^1(\Omega)^m$ be such that
\begin{align*}
\int_\Omega \nabla\ensuremath{\boldsymbol{u}}\colon\nabla\ensuremath{\boldsymbol{v}} \,{\rm d}x = 0\qquad\text{for
all}~\ensuremath{\boldsymbol{v}}\in H_0^1(\Omega)^m.
\end{align*}
It is well known that $\ensuremath{\boldsymbol{u}}$ satisfies the
convex hull property
\begin{align*}
\ensuremath{\boldsymbol{u}}(\Omega)\subset\operatorname{conv\,hull}\ensuremath{\boldsymbol{u}}(\partial\Omega)
\end{align*}
(see \cite{BilFuc02}) and that $\ensuremath{\boldsymbol{u}}$ is the unique minimizer of the energy
\begin{align*}
\ensuremath\mathcal{J}(\ensuremath{\boldsymbol{v}})\mathrel{:=} \int_\Omega\frac12 |\nabla \ensuremath{\boldsymbol{v}}|^2\,{\rm d}x\qquad\text{in}~\ensuremath{\boldsymbol{u}}+ H_0^1(\Omega)^m.
\end{align*}
Let $\ensuremath{\boldsymbol{U}}\in H^1(\Omega)^m$
be some approximation of
$\ensuremath{\boldsymbol{u}}$ and define
\begin{align*}
\ensuremath{\boldsymbol{U}}^*\mathrel{:=} \Pi_K\ensuremath{\boldsymbol{U}}, \qquad\text{with}~K\mathrel{:=}
\operatorname{conv\,hull}\ensuremath{\boldsymbol{U}}(\partial\Omega).
\end{align*}
Consequently $\ensuremath{\boldsymbol{U}}^*$
satisfies the discrete convex hull property
\begin{align*}
\ensuremath{\boldsymbol{U}}^*(\Omega)\subset\operatorname{conv\,hull}\ensuremath{\boldsymbol{U}}^*(\partial\Omega)
\end{align*}
and it follows from \eqref{eq:estPiK}, similarly as in the proof of Theorem
\ref{thm:main} that $\ensuremath{\boldsymbol{U}}^*-\ensuremath{\boldsymbol{U}}\in
H_0^1(\Omega)^m$ and
\begin{align}
\label{eq:mainv}
\begin{split}
\ensuremath\mathcal{J}(\ensuremath{\boldsymbol{U}}^*)=\frac12\int_\Omega|\nabla \ensuremath{\boldsymbol{U}}^*|^2\,{\rm d}x\le
\frac12\int_\Omega|\nabla
\ensuremath{\boldsymbol{U}}|^2\,{\rm d}x=\ensuremath\mathcal{J}(\ensuremath{\boldsymbol{U}}).
\end{split}
\end{align}
Moreover, as in Corollary \ref{cor:main}, we obtain for
$\normm{\cdot}^2=\int_\Omega\abs{\nabla \cdot}^2\,{\rm d}x$ that
\begin{align}
\label{eq:mainvb}
\normm{\ensuremath{\boldsymbol{u}}-\ensuremath{\boldsymbol{U}}^*}\le \normm{\ensuremath{\boldsymbol{u}}-\ensuremath{\boldsymbol{U}}}\qquad\text{if}~\ensuremath{\boldsymbol{u}}-\ensuremath{\boldsymbol{U}}\in
H_0^1(\Omega)^m.
\end{align}
\begin{remark}\label{r:reactd}
The results of this section can be generalized
to reaction diffusion problems. To this end we observe that, in
order to prove \eqref{eq:mainv},
we need a replacement of \eqref{eq:U^*<U} for vector-valued
functions. Note that for reaction diffusion problems the convex hull
property reads as
\begin{align*}
\ensuremath{\boldsymbol{u}}(\Omega)\subset\operatorname{conv\,hull}
\big(\{0\}\cup\ensuremath{\boldsymbol{u}}(\partial\Omega)\big).
\end{align*}
Therefore, defining $K\mathrel{:=}
\operatorname{conv\,hull}(\{0\}\cup\ensuremath{\boldsymbol{U}}(\partial\Omega))$ we obtain
$|\ensuremath{\boldsymbol{U}}^*(x)|\le|\ensuremath{\boldsymbol{U}}(x)|$, $x\in\Omega$. This is the required replacement of
\eqref{eq:U^*<U}.
\end{remark}
\subsection{Nonlinear Problems}
\label{sec:nonlinear-problems}
Recalling the proof of Theorem \ref{thm:main} and \eqref{eq:U*props},
we observe that formally \eqref{eq:reduction} holds for energies
\begin{align*}
\ensuremath\mathcal{J}(v)=\int_\Omega \mathcal{F}(x,|v|,|\nabla v|)\,{\rm d}x- F(v)
\end{align*}
with $\mathcal{F}:\Omega\times\setR\times\setR\to \setR$ monotone
in its second and third arguments. In particular,
Theorem~\ref{thm:main} can be applied to many nonlinear problems.
As an example, we consider the nonlinear $p$-Laplace problem. To this
end, for fixed $p\in(1,\infty)$, $\frac1p+\frac1q=1$, let
$W^{1,p}(\Omega)$
be the space of $p$-integrable functions on $\Omega$ with
$p$-integrable weak derivatives.
Let $u\in W^{1,p}(\Omega)$ be such that
\begin{alignat*}{2}
-\operatorname{div} |\nabla u|^{p-2}\nabla u &=: F\le0&\qquad&\text{in}~\Omega
\end{alignat*}
in the distributional sense.
Then $u$
is the unique minimizer of the energy
\begin{align*}
\ensuremath\mathcal{J}(v)\mathrel{:=} \int_\Omega \frac1p\abs{\nabla v}^p\,{\rm d}x
-F(v)\qquad\text{in}~u+W^{1,p}_0(\Omega).
\end{align*}
Here $W^{1,p}_0(\Omega)$ is the subspace of
functions in $W^{1,p}(\Omega)$ with vanishing trace on $\partial\Omega$.
Moreover, it is well known that $u$ satisfies
\begin{align*}
\sup_{\Omega}u\le \sup_{\partial\Omega}u;
\end{align*}
see
e.g. \cite{OttLM98}. Let $U\in W^{1,p}(\Omega)$ be some approximation
to $u$ and let
\begin{align*}
U^*\mathrel{:=} \min\big\{U,\sup_{\partial\Omega}U\big\}.
\end{align*}
This implies \eqref{eq:nablaU^*<nablaU}.
Note that the differential operator contains no reactive term; thus
we do not require \eqref{eq:U^*<U} to conclude, similarly to the proof of Theorem \ref{thm:main}, that
\begin{align*}
\ensuremath\mathcal{J}(U^*)\le\ensuremath\mathcal{J}(U) \qquad\text{and thus}\qquad \ensuremath\mathcal{J}(U^*)-\ensuremath\mathcal{J}(u)\le \ensuremath\mathcal{J}(U)-\ensuremath\mathcal{J}(u).
\end{align*}
In order to prove a result analogous to Corollary \ref{cor:main}, it remains to
relate the energy difference to a reasonable measure of distance.
This can be done using the so-called quasi-norm introduced by Barrett
and Liu in \cite{BarrettLiu:93}. It follows from
\cite[Lemma 13]{Kreuzer:12} that there exist constants $C,c>0$
such that
\begin{align*}
c\big(\ensuremath\mathcal{J}(v)-\ensuremath\mathcal{J}(u)\big)&\le
\normm{v-u}^2_{(\nabla u)}
\mathrel{:=} \int_\Omega (\abs{\nabla u}+\abs{\nabla v})^{p-2}\abs{\nabla u-\nabla v}^2\,{\rm d}x\le C\big(\ensuremath\mathcal{J}(v)-\ensuremath\mathcal{J}(u)\big)
\end{align*}
for all $v\in u+W^{1,p}_0(\Omega)$. Hence,
taking $v=U$ and $v=U^*$, respectively, it follows from the above that
\begin{align*}
\sqrt{c}\,\normm{u-U^*}_{(\nabla u)}\le \sqrt{C}\,\normm{u-U}_{(\nabla u)}.
\end{align*}
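For scalar problems the cutoff itself is a one-line operation on the nodal values. The following self-contained sketch (our own illustration, not code from the paper) truncates a piecewise linear function at the maximum of its boundary values and checks the energy inequality $\ensuremath\mathcal{J}(U^*)\le\ensuremath\mathcal{J}(U)$ for the $p$-Dirichlet energy with $F=0$ on a uniform 1D grid:

```python
import numpy as np

def cutoff(U):
    """Truncate nodal values at the maximum of the two boundary values."""
    return np.minimum(U, max(U[0], U[-1]))

def energy(U, h, p):
    """Dirichlet p-energy (1/p) * int |U'|^p dx of a P1 function on a uniform grid."""
    dU = np.diff(U) / h
    return (h / p) * np.sum(np.abs(dU) ** p)

h, p = 0.1, 3.0
x = np.arange(0.0, 1.0 + h / 2, h)
U = np.sin(np.pi * x) + 0.1 * x      # interior values overshoot the boundary maximum
Ustar = cutoff(U)
assert energy(Ustar, h, p) <= energy(U, h, p)   # truncation never increases the energy
assert Ustar.max() <= max(U[0], U[-1])          # discrete maximum principle holds
```

The energy can only decrease because taking the nodal minimum with a constant is 1-Lipschitz, so the slope on every element is reduced or unchanged, which is exactly the mechanism behind \eqref{eq:nablaU^*<nablaU}.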
\section{Conclusion}
\label{sec:conclusion}
Maximum principles often reflect physical behavior of solutions, and
it is therefore
desirable that numerical approximations satisfy a maximum
principle as well. In this paper we
enforce the maximum principle by applying a simple
cutoff to the approximation. We show that for many problems
this truncation even improves the approximation.
Hence, all error estimates and convergence results for the
approximation can be applied directly to the error of the truncated function.
We emphasize that we do not specify the sort of
approximation beyond requiring that it be conforming.
Therefore, all presented results apply to any conforming finite
element approximation, including high-order elements and conforming
$hp$-methods. There is no restriction on the underlying
partition of $\Omega$, such as non-obtuseness of the triangulation, and
the results even apply to partitions involving complicated
element geometries.
| {
"timestamp": "2012-08-21T02:07:42",
"yymm": "1208",
"arxiv_id": "1208.3958",
"language": "en",
"url": "https://arxiv.org/abs/1208.3958",
"abstract": "Discrete maximum principles in the approximation of partial differential equations are crucial for the preservation of qualitative properties of physical models. In this work we enforce the discrete maximum principle by performing a simple cutoff. We show that for many problems this a posteriori procedure even improves the approximation in the natural energy norm. The results apply to many different kinds of approximations including conforming higher order and $hp$-finite elements. Moreover in the case of finite element approximations there is no geometrical restriction on the partition of the domain.",
"subjects": "Numerical Analysis (math.NA); Mathematical Physics (math-ph)",
"title": "A Note on why Enforcing Discrete Maximum Principles by a simple a Posteriori Cutoff is a Good Idea",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9752018390836985,
"lm_q2_score": 0.8267117940706734,
"lm_q1q2_score": 0.8062108619699045
} |
https://arxiv.org/abs/2211.07227 | Stochastic approximation approaches for CVaR-based variational inequalities | This paper considers variational inequalities (VI) defined by the conditional value-at-risk (CVaR) of uncertain functions and provides three stochastic approximation schemes to solve them. All methods use an empirical estimate of the CVaR at each iteration. The first algorithm constrains the iterates to the feasible set using projection. To overcome the computational burden of projections, the second one handles inequality and equality constraints defining the feasible set differently. Particularly, projection onto to the affine subspace defined by the equality constraints is achieved by matrix multiplication and inequalities are handled by using penalty functions. Finally, the third algorithm discards projections altogether by introducing multiplier updates. We establish asymptotic convergence of all our schemes to any arbitrary neighborhood of the solution of the VI. A simulation example concerning a network routing game illustrates our theoretical findings. | \section{Introduction}
\label{sec:introduction}
This document is a template for \LaTeX. If you are
reading a paper or PDF version of this document, please download the
electronic file, trans\_jour.tex, from the IEEE Web site at \underline
{http://www.ieee.org/authortools/trans\_jour.tex} so you can use it to prepare your manuscript; IEEE's
\LaTeX\ style and sample files are available from the same Web page.
You can also explore using the Overleaf editor at
\underline
{https://www.overleaf.com/blog/278-how-to-use-overleaf-with-}\discretionary{}{}{}\underline
{ieee-collabratec-your-quick-guide-to-getting-started\#.}\discretionary{}{}{}\underline{xsVp6tpPkrKM9}
If your paper is intended for a conference, please contact your conference
editor concerning acceptable word processor formats for your particular
conference.
IEEE will do the final formatting of your paper. If your paper is intended
for a conference, please observe the conference page limits.
\subsection{Abbreviations and Acronyms}
Define abbreviations and acronyms the first time they are used in the text,
even after they have already been defined in the abstract. Abbreviations
such as IEEE, SI, ac, and dc do not have to be defined. Abbreviations that
incorporate periods should not have spaces: write ``C.N.R.S.,'' not ``C. N.
R. S.'' Do not use abbreviations in the title unless they are unavoidable
(for example, ``IEEE'' in the title of this article).
\subsection{Other Recommendations}
Use one space after periods and colons. Hyphenate complex modifiers:
``zero-field-cooled magnetization.'' Avoid dangling participles, such as,
``Using \eqref{eq}, the potential was calculated.'' [It is not clear who or what
used \eqref{eq}.] Write instead, ``The potential was calculated by using \eqref{eq},'' or
``Using \eqref{eq}, we calculated the potential.''
Use a zero before decimal points: ``0.25,'' not ``.25.'' Use
``cm$^{3}$,'' not ``cc.'' Indicate sample dimensions as ``0.1 cm
$\times $ 0.2 cm,'' not ``0.1 $\times $ 0.2 cm$^{2}$.'' The
abbreviation for ``seconds'' is ``s,'' not ``sec.'' Use
``Wb/m$^{2}$'' or ``webers per square meter,'' not
``webers/m$^{2}$.'' When expressing a range of values, write ``7 to
9'' or ``7--9,'' not ``7$\sim $9.''
A parenthetical statement at the end of a sentence is punctuated outside of
the closing parenthesis (like this). (A parenthetical sentence is punctuated
within the parentheses.) In American English, periods and commas are within
quotation marks, like ``this period.'' Other punctuation is ``outside''!
Avoid contractions; for example, write ``do not'' instead of ``don't.'' The
serial comma is preferred: ``A, B, and C'' instead of ``A, B and C.''
If you wish, you may write in the first person singular or plural and use
the active voice (``I observed that $\ldots$'' or ``We observed that $\ldots$''
instead of ``It was observed that $\ldots$''). Remember to check spelling. If
your native language is not English, please get a native English-speaking
colleague to carefully proofread your paper.
Try not to use too many typefaces in the same article. You're writing
scholarly papers, not ransom notes. Also please remember that MathJax
can't handle really weird typefaces.
\subsection{Equations}
Number equations consecutively with equation numbers in parentheses flush
with the right margin, as in \eqref{eq}. To make your equations more
compact, you may use the solidus (~/~), the exp function, or appropriate
exponents. Use parentheses to avoid ambiguities in denominators. Punctuate
equations when they are part of a sentence, as in
\begin{equation}E=mc^2.\label{eq}\end{equation}
Be sure that the symbols in your equation have been defined before the
equation appears or immediately following. Italicize symbols ($T$ might refer
to temperature, but T is the unit tesla). Refer to ``\eqref{eq},'' not ``Eq. \eqref{eq}''
or ``equation \eqref{eq},'' except at the beginning of a sentence: ``Equation \eqref{eq}
is $\ldots$ .''
\subsection{\LaTeX-Specific Advice}
Please use ``soft'' (e.g., \verb|\eqref{Eq}|) cross references instead
of ``hard'' references (e.g., \verb|(1)|). That will make it possible
to combine sections, add equations, or change the order of figures or
citations without having to go through the file line by line.
Please don't use the \verb|{eqnarray}| equation environment. Use
\verb|{align}| or \verb|{IEEEeqnarray}| instead. The \verb|{eqnarray}|
environment leaves unsightly spaces around relation symbols.
Please note that the \verb|{subequations}| environment in {\LaTeX}
will increment the main equation counter even when there are no
equation numbers displayed. If you forget that, you might write an
article in which the equation numbers skip from (17) to (20), causing
the copy editors to wonder if you've discovered a new method of
counting.
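The advice above can be condensed into a short pattern; the equation content and label names below are placeholders of our own choosing:

```latex
% Preferred: {align} with soft cross references via \eqref.
\begin{align}
  E &= m c^2, \label{eq:energy} \\
  p &= \gamma m v. \label{eq:momentum}
\end{align}
% In the text: "combining \eqref{eq:energy} and \eqref{eq:momentum} ..."

% {subequations} increments the main counter even when no numbers
% are displayed, so account for it when checking your numbering:
\begin{subequations}\label{eq:system}
  \begin{align}
    a &= b + c, \label{eq:system-a} \\
    d &= e - f. \label{eq:system-b}
  \end{align}
\end{subequations}
```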
{\BibTeX} does not work by magic. It doesn't get the bibliographic
data from thin air but from .bib files. If you use {\BibTeX} to produce a
bibliography you must send the .bib files.
{\LaTeX} can't read your mind. If you assign the same label to a
subsubsection and a table, you might find that Table I has been cross
referenced as Table IV-B3.
{\LaTeX} does not have precognitive abilities. If you put a
\verb|\label| command before the command that updates the counter it's
supposed to be using, the label will pick up the last counter to be
cross referenced instead. In particular, a \verb|\label| command
should not go before the caption of a figure or a table.
Do not use \verb|\nonumber| inside the \verb|{array}| environment. It
will not stop equation numbers inside \verb|{array}| (there won't be
any anyway) and it might stop a wanted equation number in the
surrounding equation.
If you are submitting your paper to a colorized journal, you can use
the following two lines at the start of the article to ensure its
appearance resembles the final copy:
\smallskip\noindent
\begin{small}
\begin{tabular}{l}
\verb+\+\texttt{documentclass[journal,twoside,web]\{ieeecolor\}}\\
\verb+\+\texttt{usepackage\{\textit{Journal\_Name}\}}
\end{tabular}
\end{small}
\section{Units}
Use either SI (MKS) or CGS as primary units. (SI units are strongly
encouraged.) English units may be used as secondary units (in parentheses).
This applies to papers in data storage. For example, write ``15
Gb/cm$^{2}$ (100 Gb/in$^{2})$.'' An exception is when
English units are used as identifiers in trade, such as ``3\textonehalf-in
disk drive.'' Avoid combining SI and CGS units, such as current in amperes
and magnetic field in oersteds. This often leads to confusion because
equations do not balance dimensionally. If you must use mixed units, clearly
state the units for each quantity in an equation.
The SI unit for magnetic field strength $H$ is A/m. However, if you wish to use
units of T, either refer to magnetic flux density $B$ or magnetic field
strength symbolized as $\mu _{0}H$. Use the center dot to separate
compound units, e.g., ``A$\cdot $m$^{2}$.''
\section{Some Common Mistakes}
The word ``data'' is plural, not singular. The subscript for the
permeability of vacuum $\mu _{0}$ is zero, not a lowercase letter
``o.'' The term for residual magnetization is ``remanence''; the adjective
is ``remanent''; do not write ``remnance'' or ``remnant.'' Use the word
``micrometer'' instead of ``micron.'' A graph within a graph is an
``inset,'' not an ``insert.'' The word ``alternatively'' is preferred to the
word ``alternately'' (unless you really mean something that alternates). Use
the word ``whereas'' instead of ``while'' (unless you are referring to
simultaneous events). Do not use the word ``essentially'' to mean
``approximately'' or ``effectively.'' Do not use the word ``issue'' as a
euphemism for ``problem.'' When compositions are not specified, separate
chemical symbols by en-dashes; for example, ``NiMn'' indicates the
intermetallic compound Ni$_{0.5}$Mn$_{0.5}$ whereas
``Ni--Mn'' indicates an alloy of some composition
Ni$_{x}$Mn$_{1-x}$.
\begin{figure}[!t]
\centerline{\includegraphics[width=\columnwidth]{fig1.png}}
\caption{Magnetization as a function of applied field.
It is good practice to explain the significance of the figure in the caption.}
\label{fig1}
\end{figure}
Be aware of the different meanings of the homophones ``affect'' (usually a
verb) and ``effect'' (usually a noun), ``complement'' and ``compliment,''
``discreet'' and ``discrete,'' ``principal'' (e.g., ``principal
investigator'') and ``principle'' (e.g., ``principle of measurement''). Do
not confuse ``imply'' and ``infer.''
Prefixes such as ``non,'' ``sub,'' ``micro,'' ``multi,'' and ``ultra'' are
not independent words; they should be joined to the words they modify,
usually without a hyphen. There is no period after the ``et'' in the Latin
abbreviation ``\emph{et al.}'' (it is also italicized). The abbreviation ``i.e.,'' means
``that is,'' and the abbreviation ``e.g.,'' means ``for example'' (these
abbreviations are not italicized).
A general IEEE styleguide is available at \underline{http://www.ieee.org/authortools}.
\section{Guidelines for Graphics Preparation and Submission}
\label{sec:guidelines}
\subsection{Types of Graphics}
The following list outlines the different types of graphics published in
IEEE journals. They are categorized based on their construction, and use of
color/shades of gray:
\subsubsection{Color/Grayscale figures}
{Figures that are meant to appear in color, or shades of black/gray. Such
figures may include photographs, illustrations, multicolor graphs, and
flowcharts.}
\subsubsection{Line Art figures}
{Figures that are composed of only black lines and shapes. These figures
should have no shades or half-tones of gray, only black and white.}
\subsubsection{Author photos}
{Head and shoulders shots of authors that appear at the end of our papers. }
\subsubsection{Tables}
{Data charts which are typically black and white, but sometimes include
color.}
\begin{table}
\caption{Units for Magnetic Properties}
\label{table}
\setlength{\tabcolsep}{3pt}
\begin{tabular}{|p{25pt}|p{75pt}|p{115pt}|}
\hline
Symbol&
Quantity&
Conversion from Gaussian and \par CGS EMU to SI $^{\mathrm{a}}$ \\
\hline
$\Phi $&
magnetic flux&
1 Mx $\to 10^{-8}$ Wb $= 10^{-8}$ V$\cdot $s \\
$B$&
magnetic flux density, \par magnetic induction&
1 G $\to 10^{-4}$ T $= 10^{-4}$ Wb/m$^{2}$ \\
$H$&
magnetic field strength&
1 Oe $\to 10^{3}/(4\pi )$ A/m \\
$m$&
magnetic moment&
1 erg/G $=$ 1 emu \par $\to 10^{-3}$ A$\cdot $m$^{2} = 10^{-3}$ J/T \\
$M$&
magnetization&
1 erg/(G$\cdot $cm$^{3}) =$ 1 emu/cm$^{3}$ \par $\to 10^{3}$ A/m \\
4$\pi M$&
magnetization&
1 G $\to 10^{3}/(4\pi )$ A/m \\
$\sigma $&
specific magnetization&
1 erg/(G$\cdot $g) $=$ 1 emu/g $\to $ 1 A$\cdot $m$^{2}$/kg \\
$j$&
magnetic dipole \par moment&
1 erg/G $=$ 1 emu \par $\to 4\pi \times 10^{-10}$ Wb$\cdot $m \\
$J$&
magnetic polarization&
1 erg/(G$\cdot $cm$^{3}) =$ 1 emu/cm$^{3}$ \par $\to 4\pi \times 10^{-4}$ T \\
$\chi , \kappa $&
susceptibility&
1 $\to 4\pi $ \\
$\chi_{\rho }$&
mass susceptibility&
1 cm$^{3}$/g $\to 4\pi \times 10^{-3}$ m$^{3}$/kg \\
$\mu $&
permeability&
1 $\to 4\pi \times 10^{-7}$ H/m \par $= 4\pi \times 10^{-7}$ Wb/(A$\cdot $m) \\
$\mu_{r}$&
relative permeability&
$\mu \to \mu_{r}$ \\
$w, W$&
energy density&
1 erg/cm$^{3} \to 10^{-1}$ J/m$^{3}$ \\
$N, D$&
demagnetizing factor&
1 $\to 1/(4\pi )$ \\
\hline
\multicolumn{3}{p{251pt}}{Vertical lines are optional in tables. Statements that serve as captions for
the entire table do not need footnote letters. }\\
\multicolumn{3}{p{251pt}}{$^{\mathrm{a}}$Gaussian units are the same as cg emu for magnetostatics; Mx
$=$ maxwell, G $=$ gauss, Oe $=$ oersted; Wb $=$ weber, V $=$ volt, s $=$
second, T $=$ tesla, m $=$ meter, A $=$ ampere, J $=$ joule, kg $=$
kilogram, H $=$ henry.}
\end{tabular}
\end{table}
\subsection{Multipart figures}
Figures compiled of more than one sub-figure presented side-by-side, or
stacked. If a multipart figure is made up of multiple figure
types (one part is lineart, and another is grayscale or color) the figure
should meet the stricter guidelines.
\subsection{File Formats For Graphics}\label{formats}
Format and save your graphics using a suitable graphics processing program
that will allow you to create the images as PostScript (PS), Encapsulated
PostScript (.EPS), Tagged Image File Format (.TIFF), Portable Document
Format (.PDF), Portable Network Graphics (.PNG), or Metapost (.MPS) files, size them, and adjust
the resolution settings. When
submitting your final paper, your graphics should all be submitted
individually in one of these formats along with the manuscript.
\subsection{Sizing of Graphics}
Most charts, graphs, and tables are one column wide (3.5 inches/88
millimeters/21 picas) or page wide (7.16 inches/181 millimeters/43
picas). The maximum depth a graphic can be is 8.5 inches (216 millimeters/54
picas). When choosing the depth of a graphic, please allow space for a
caption. Figures can be sized between column and page widths if the author
chooses; however, it is recommended that figures not be sized below
column width unless necessary.
There is currently one publication with column measurements that do not
coincide with those listed above. Proceedings of the IEEE has a column
measurement of 3.25 inches (82.5 millimeters/19.5 picas).
The final printed size of author photographs is exactly
1 inch wide by 1.25 inches tall (25.4 millimeters$\,\times\,$31.75 millimeters/6
picas$\,\times\,$7.5 picas). Author photos printed in editorials measure 1.59 inches
wide by 2 inches tall (40 millimeters$\,\times\,$50 millimeters/9.5 picas$\,\times\,$12
picas).
\subsection{Resolution}
The proper resolution of your figures will depend on the type of figure,
as defined in the ``Types of Figures'' section. Author photographs,
color, and grayscale figures should be at least 300 dpi. Line art, including
tables, should be a minimum of 600 dpi.
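The sizing and resolution rules above lend themselves to a simple pre-submission check. The following hypothetical helper (the rule table and function name are ours, distilled from the stated guidelines) flags violations:

```python
# Minimum resolution (dpi) by figure type, and standard widths/depth in
# inches, as listed in the guidelines above.
MIN_DPI = {"photo": 300, "color": 300, "grayscale": 300, "lineart": 600, "table": 600}
COLUMN_WIDTH_IN, PAGE_WIDTH_IN, MAX_DEPTH_IN = 3.5, 7.16, 8.5

def check_graphic(kind, dpi, width_in, depth_in):
    """Return a list of rule violations for one graphic (empty list = OK)."""
    problems = []
    if dpi < MIN_DPI[kind]:
        problems.append(f"{kind} needs >= {MIN_DPI[kind]} dpi, got {dpi}")
    if width_in < COLUMN_WIDTH_IN:
        problems.append("narrower than one column (3.5 in)")
    if width_in > PAGE_WIDTH_IN:
        problems.append("wider than the page (7.16 in)")
    if depth_in > MAX_DEPTH_IN:
        problems.append("deeper than 8.5 in")
    return problems

print(check_graphic("lineart", 600, 3.5, 4.0))   # []
print(check_graphic("photo", 150, 8.0, 9.0))     # three violations reported
```

Such a check is no substitute for the IEEE Graphics Analyzer described below, but it catches the most common mistakes before upload.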
\subsection{Vector Art}
In order to preserve the figures' integrity across multiple computer
platforms, we accept files in the following formats: .EPS/.PDF/.PS. All
fonts must be embedded or text converted to outlines in order to achieve the
best-quality results.
\subsection{Color Space}
The term color space refers to the entire sum of colors that can be
represented within the said medium. For our purposes, the three main color
spaces are Grayscale, RGB (red/green/blue) and CMYK
(cyan/magenta/yellow/black). RGB is generally used with on-screen graphics,
whereas CMYK is used for printing purposes.
All color figures should be generated in RGB or CMYK color space. Grayscale
images should be submitted in Grayscale color space. Line art may be
provided in grayscale OR bitmap colorspace. Note that ``bitmap colorspace''
and ``bitmap file format'' are not the same thing. When bitmap color space
is selected, .TIF/.TIFF/.PNG are the recommended file formats.
\subsection{Accepted Fonts Within Figures}
When preparing your graphics IEEE suggests that you use of one of the
following Open Type fonts: Times New Roman, Helvetica, Arial, Cambria, and
Symbol. If you are supplying EPS, PS, or PDF files all fonts must be
embedded. Some fonts may only be native to your operating system; without
the fonts embedded, parts of the graphic may be distorted or missing.
A safe option when finalizing your figures is to strip out the fonts before
you save the files, creating ``outline'' type. This converts fonts to
artwork that will appear uniformly on any screen.
\subsection{Using Labels Within Figures}
\subsubsection{Figure Axis labels }
Figure axis labels are often a source of confusion. Use words rather than
symbols. As an example, write the quantity ``Magnetization,'' or
``Magnetization M,'' not just ``M.'' Put units in parentheses. Do not label
axes only with units. As in Fig. 1, for example, write ``Magnetization
(A/m)'' or ``Magnetization (A$\cdot$m$^{-1}$),'' not just ``A/m.'' Do not label axes with a ratio of quantities and
units. For example, write ``Temperature (K),'' not ``Temperature/K.''
Multipliers can be especially confusing. Write ``Magnetization (kA/m)'' or
``Magnetization (10$^{3}$ A/m).'' Do not write ``Magnetization
(A/m)$\,\times\,$1000'' because the reader would not know whether the top
axis label in Fig. 1 meant 16000 A/m or 0.016 A/m. Figure labels should be
legible, approximately 8 to 10 point type.
\subsubsection{Subfigure Labels in Multipart Figures and Tables}
Multipart figures should be combined and labeled before final submission.
Labels should appear centered below each subfigure in 8 point Times New
Roman font in the format of (a) (b) (c).
\subsection{File Naming}
Figures (line artwork or photographs) should be named starting with the
first 5 letters of the author's last name. The next characters in the
filename should be the number that represents the sequential
location of this image in your article. For example, in author
``Anderson's'' paper, the first three figures would be named ander1.tif,
ander2.tif, and ander3.ps.
Tables should contain only the body of the table (not the caption) and
should be named similarly to figures, except that `.t' is inserted
in-between the author's name and the table number. For example, author
Anderson's first three tables would be named ander.t1.tif, ander.t2.ps,
ander.t3.eps.
Author photographs should be named using the first five characters of the
pictured author's last name. For example, four author photographs for a
paper may be named: oppen.ps, moshc.tif, chen.eps, and duran.pdf.
If two authors or more have the same last name, their first initial(s) can
be substituted for the fifth, fourth, third$\ldots$ letters of their surname
until the degree where there is differentiation. For example, two authors
Michael and Monica Oppenheimer's photos would be named oppmi.tif, and
oppmo.eps.
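The naming convention described above is mechanical enough to automate. A small sketch (the function names are our own) covering the figure and table cases:

```python
def figure_name(author_last, index, ext="tif"):
    """First five letters of the last name plus sequence number, e.g. ander1.tif."""
    return f"{author_last.lower()[:5]}{index}.{ext}"

def table_name(author_last, index, ext="tif"):
    """Same stem with '.t' between the name and the number, e.g. ander.t1.tif."""
    return f"{author_last.lower()[:5]}.t{index}.{ext}"

print(figure_name("Anderson", 1))        # ander1.tif
print(table_name("Anderson", 2, "ps"))   # ander.t2.ps
```

Disambiguating authors who share the first five letters of a surname still requires the manual initial-substitution rule described above.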
\subsection{Referencing a Figure or Table Within Your Paper}
When referencing your figures and tables within your paper, use the
abbreviation ``Fig.'' even at the beginning of a sentence. Do not abbreviate
``Table.'' Tables should be numbered with Roman Numerals.
\subsection{Checking Your Figures: The IEEE Graphics Analyzer}
The IEEE Graphics Analyzer enables authors to pre-screen their graphics for
compliance with IEEE Transactions and Journals standards before submission.
The online tool, located at
\underline{http://graphicsqc.ieee.org/}, allows authors to
upload their graphics in order to check that each file is the correct file
format, resolution, size, and colorspace; that no fonts are missing or
corrupt; that figures are not compiled in layers and contain no transparency; and
that they are named according to the IEEE Transactions and Journals naming
convention. At the end of this automated process, authors are provided with
a detailed report on each graphic within the web applet, as well as by
email.
For more information on using the Graphics Analyzer or any other graphics
related topic, contact the IEEE Graphics Help Desk by e-mail at
graphics@ieee.org.
\subsection{Submitting Your Graphics}
Because IEEE will do the final formatting of your paper,
you do not need to position figures and tables at the top and bottom of each
column. In fact, all figures, figure captions, and tables can be placed at
the end of your paper. In addition to, or even in lieu of, submitting figures
within your final manuscript, figures should be submitted individually,
separate from the manuscript in one of the file formats listed above in
Section \ref{formats}. Place figure captions below the figures; place table titles
above the tables. Please do not include captions as part of the figures, or
put them in ``text boxes'' linked to the figures. Also, do not place borders
around the outside of your figures.
\subsection{Color Processing/Printing in IEEE Journals}
All IEEE Transactions, Journals, and Letters allow an author to publish
color figures on IEEE Xplore\textregistered\ at no charge, and automatically
convert them to grayscale for print versions. In most journals, figures and
tables may alternatively be printed in color if an author chooses to do so.
Please note that this service comes at an extra expense to the author. If
you intend to have print color graphics, include a note with your final
paper indicating which figures or tables you would like to be handled that
way, and stating that you are willing to pay the additional fee.
\section{Conclusion}
A conclusion section is not required. Although a conclusion may review the
main points of the paper, do not replicate the abstract as the conclusion. A
conclusion might elaborate on the importance of the work or suggest
applications and extensions.
\appendices
Appendixes, if needed, appear before the acknowledgment.
\section*{Acknowledgment}
The preferred spelling of the word ``acknowledgment'' in American English is
without an ``e'' after the ``g.'' Use the singular heading even if you have
many acknowledgments. Avoid expressions such as ``One of us (S.B.A.) would
like to thank $\ldots$ .'' Instead, write ``F. A. Author thanks $\ldots$ .'' In most
cases, sponsor and financial support acknowledgments are placed in the
unnumbered footnote on the first page, not here.
\section*{References and Footnotes}
\subsection{References}
References need not be cited in text. When they are, they appear on the
line, in square brackets, inside the punctuation. Multiple references are
each numbered with separate brackets. When citing a section in a book,
please give the relevant page numbers. In text, refer simply to the
reference number. Do not use ``Ref.'' or ``reference'' except at the
beginning of a sentence: ``Reference \cite{b3} shows $\ldots$ .'' Please do not use
automatic endnotes in \emph{Word}, rather, type the reference list at the end of the
paper using the ``References'' style.
Reference numbers are set flush left and form a column of their own, hanging
out beyond the body of the reference. The reference numbers are on the line,
enclosed in square brackets. In all references, the given name of the author
or editor is abbreviated to the initial only and precedes the last name. Use
them all; use \emph{et al.} only if names are not given. Use commas around Jr.,
Sr., and III in names. Abbreviate conference titles. When citing IEEE
transactions, provide the issue number, page range, volume number, year,
and/or month if available. When referencing a patent, provide the day and
the month of issue, or application. References may not include all
information; please obtain and include relevant information. Do not combine
references. There must be only one reference with each number. If there is a
URL included with the print reference, it can be included at the end of the
reference.
Other than books, capitalize only the first word in a paper title, except
for proper nouns and element symbols. For papers published in translation
journals, please give the English citation first, followed by the original
foreign-language citation. See the end of this document for formats and
examples of common references. For a complete discussion of references and
their formats, see the IEEE style manual at
\underline{http://www.ieee.org/authortools}.
\subsection{Footnotes}
Number footnotes separately in superscript numbers.\footnote{It is recommended that footnotes be avoided (except for
the unnumbered footnote with the receipt date on the first page). Instead,
try to integrate the footnote information into the text.} Place the actual
footnote at the bottom of the column in which it is cited; do not put
footnotes in the reference list (endnotes). Use letters for table footnotes
(see Table \ref{table}).
\section{Submitting Your Paper for Review}
\subsection{Final Stage}
When you submit your final version (after your paper has been accepted),
print it in two-column format, including figures and tables. You must also
send your final manuscript on a disk, via e-mail, or through a Web
manuscript submission system as directed by the society contact. You may use
\emph{Zip} for large files, or compress files using \emph{Compress, Pkzip, Stuffit,} or \emph{Gzip.}
Also, send a sheet of paper or PDF with complete contact information for all
authors. Include full mailing addresses, telephone numbers, fax numbers, and
e-mail addresses. This information will be used to send each author a
complimentary copy of the journal in which the paper appears. In addition,
designate one author as the ``corresponding author.'' This is the author to
whom proofs of the paper will be sent. Proofs are sent to the corresponding
author only.
\subsection{Review Stage Using ScholarOne\textregistered\ Manuscripts}
Contributions to the Transactions, Journals, and Letters may be submitted
electronically on IEEE's on-line manuscript submission and peer-review
system, ScholarOne\textregistered\ Manuscripts. You can get a listing of the
publications that participate in ScholarOne at
\underline{http://www.ieee.org/publications\_standards/publications/}\discretionary{}{}{}\underline{authors/authors\_submission.html}.
First check if you have an existing account. If there is none, please create
a new account. After logging in, go to your Author Center and click ``Submit
First Draft of a New Manuscript.''
Along with other information, you will be asked to select the subject from a
pull-down list. Depending on the journal, there are various steps to the
submission process; you must complete all steps for a complete submission.
At the end of each step you must click ``Save and Continue''; just uploading
the paper is not sufficient. After the last step, you should see a
confirmation that the submission is complete. You should also receive an
e-mail confirmation. For inquiries regarding the submission of your paper on
ScholarOne Manuscripts, please contact oprs-support@ieee.org or call +1 732
465 5861.
ScholarOne Manuscripts will accept files for review in various formats.
Please check the guidelines of the specific journal for which you plan to
submit.
You will be asked to file an electronic copyright form immediately upon
completing the submission process (authors are responsible for obtaining any
security clearances). Failure to submit the electronic copyright could
result in publishing delays later. You will also have the opportunity to
designate your article as ``open access'' if you agree to pay the IEEE open
access fee.
\subsection{Final Stage Using ScholarOne Manuscripts}
Upon acceptance, you will receive an email with specific instructions
regarding the submission of your final files. To avoid any delays in
publication, please be sure to follow these instructions. Most journals
require that final submissions be uploaded through ScholarOne Manuscripts,
although some may still accept final submissions via email. Final
submissions should include source files of your accepted manuscript, high
quality graphic files, and a formatted pdf file. If you have any questions
regarding the final submission process, please contact the administrative
contact for the journal.
In addition to this, upload a file with complete contact information for all
authors. Include full mailing addresses, telephone numbers, fax numbers, and
e-mail addresses. Designate the author who submitted the manuscript on
ScholarOne Manuscripts as the ``corresponding author.'' This is the only
author to whom proofs of the paper will be sent.
\subsection{Copyright Form}
Authors must submit an electronic IEEE Copyright Form (eCF) upon submitting
their final manuscript files. You can access the eCF system through your
manuscript submission system or through the Author Gateway. You are
responsible for obtaining any necessary approvals and/or security
clearances. For additional information on intellectual property rights,
visit the IEEE Intellectual Property Rights department web page at
\underline{http://www.ieee.org/publications\_standards/publications/rights/}\discretionary{}{}{}\underline{index.html}.
\section{IEEE Publishing Policy}
The general IEEE policy requires that authors submit only original
work that has neither appeared elsewhere for publication nor is under
review for another refereed publication. The submitting author must disclose
all prior publication(s) and current submissions when submitting a
manuscript. Do not publish ``preliminary'' data or results. The submitting
author is responsible for obtaining agreement of all coauthors and any
consent required from employers or sponsors before submitting an article.
The IEEE Transactions and Journals Department strongly discourages courtesy
authorship; it is the obligation of the authors to cite only relevant prior
work.
The IEEE Transactions and Journals Department does not publish conference
records or proceedings, but can publish articles related to conferences that
have undergone rigorous peer review. Minimally, two reviews are required for
every article submitted for peer review.
\section{Publication Principles}
The two types of content published are 1) peer-reviewed and 2)
archival. The Transactions and Journals Department publishes scholarly
articles of archival value as well as tutorial expositions and critical
reviews of classical subjects and topics of current interest.
Authors should consider the following points:
\begin{enumerate}
\item Technical papers submitted for publication must advance the state of knowledge and must cite relevant prior work.
\item The length of a submitted paper should be commensurate with the importance, or appropriate to the complexity, of the work. For example, an obvious extension of previously published work might not be appropriate for publication or might be adequately treated in just a few pages.
\item Authors must convince both peer reviewers and the editors of the scientific and technical merit of a paper; the standards of proof are higher when extraordinary or unexpected results are reported.
\item Because replication is required for scientific progress, papers submitted for publication must provide sufficient information to allow readers to perform similar experiments or calculations and
use the reported results. Although not everything need be disclosed, a paper
must contain new, useable, and fully described information. For example, a
specimen's chemical composition need not be reported if the main purpose of
a paper is to introduce a new measurement technique. Authors should expect
to be challenged by reviewers if the results are not supported by adequate
data and critical details.
\item Papers that describe ongoing work or announce the latest technical achievement, which are suitable for presentation at a professional conference, may not be appropriate for publication.
\end{enumerate}
\section{Reference Examples}
\begin{itemize}
\item \emph{Basic format for books:}\\
J. K. Author, ``Title of chapter in the book,'' in \emph{Title of Published Book}, $x$th ed. City of Publisher, (only U.S. State), Country: Abbrev. of Publisher, year, ch. $x$, sec. $x$, pp. xxx--xxx.\\
See \cite{b1,b2}.
\item \emph{Basic format for periodicals:}\\
J. K. Author, ``Name of paper,'' \emph{Abbrev. Title of Periodical}, vol. $x$, no. $x$, pp. xxx--xxx, Abbrev. Month, year, DOI: 10.1109.\emph{XXX}.123456.\\
See \cite{b3}--\cite{b5}.
\item \emph{Basic format for reports:}\\
J. K. Author, ``Title of report,'' Abbrev. Name of Co., City of Co., Abbrev. State, Country, Rep. \emph{xxx}, year.\\
See \cite{b6,b7}.
\item \emph{Basic format for handbooks:}\\
\emph{Name of Manual/Handbook, x} ed., Abbrev. Name of Co., City of Co., Abbrev. State, Country, year, pp. \emph{xxx--xxx.}\\
See \cite{b8,b9}.
\item \emph{Basic format for books (when available online):}\\
J. K. Author, ``Title of chapter in the book,'' in \emph{Title of
Published Book}, $x$th ed. City of Publisher, State, Country: Abbrev.
of Publisher, year, ch. $x$, sec. $x$, pp. \emph{xxx--xxx}. [Online].
Available: \underline{http://www.web.com}\\
See \cite{b10}--\cite{b13}.
\item \emph{Basic format for journals (when available online):}\\
J. K. Author, ``Name of paper,'' \emph{Abbrev. Title of Periodical}, vol. $x$, no. $x$, pp. \emph{xxx--xxx}, Abbrev. Month, year. Accessed on: Month, Day, year, DOI: 10.1109.\emph{XXX}.123456, [Online].\\
See \cite{b14}--\cite{b16}.
\item \emph{Basic format for papers presented at conferences (when available online): }\\
J. K. Author. (year, month). Title. Presented at abbrev. conference title. [Type of Medium]. Available: site/path/file\\
See \cite{b17}.
\item \emph{Basic format for reports and handbooks (when available online):}\\
J. K. Author. ``Title of report,'' Company. City, State, Country. Rep. no., (optional: vol./issue), Date. [Online] Available: site/path/file\\
See \cite{b18,b19}.
\item \emph{Basic format for computer programs and electronic documents (when available online): }\\
Legislative body. Number of Congress, Session. (year, month day). \emph{Number of bill or resolution}, \emph{Title}. [Type of medium]. Available: site/path/file\\
\textbf{\emph{NOTE: }ISO recommends that capitalization follow the accepted practice for the language or script in which the information is given.}\\
See \cite{b20}.
\item \emph{Basic format for patents (when available online):}\\
Name of the invention, by inventor's name. (year, month day). Patent Number [Type of medium]. Available: site/path/file\\
See \cite{b21}.
\item \emph{Basic format for conference proceedings (published):}\\
J. K. Author, ``Title of paper,'' in \emph{Abbreviated Name of Conf.}, City of Conf., Abbrev. State (if given), Country, year, pp. xxx--xxx.\\
See \cite{b22}.
\item \emph{Example for papers presented at conferences (unpublished):}\\
See \cite{b23}.
\item \emph{Basic format for patents:}\\
J. K. Author, ``Title of patent,'' U.S. Patent \emph{x xxx xxx}, Abbrev. Month, day, year.\\
See \cite{b24}.
\item \emph{Basic format for theses (M.S.) and dissertations (Ph.D.):}
\begin{enumerate}
\item J. K. Author, ``Title of thesis,'' M.S. thesis, Abbrev. Dept., Abbrev. Univ., City of Univ., Abbrev. State, year.
\item J. K. Author, ``Title of dissertation,'' Ph.D. dissertation, Abbrev. Dept., Abbrev. Univ., City of Univ., Abbrev. State, year.
\end{enumerate}
See \cite{b25,b26}.
\item \emph{Basic format for the most common types of unpublished references:}
\begin{enumerate}
\item J. K. Author, private communication, Abbrev. Month, year.
\item J. K. Author, ``Title of paper,'' unpublished.
\item J. K. Author, ``Title of paper,'' to be published.
\end{enumerate}
See \cite{b27}--\cite{b29}.
\item \emph{Basic formats for standards:}
\begin{enumerate}
\item \emph{Title of Standard}, Standard number, date.
\item \emph{Title of Standard}, Standard number, Corporate author, location, date.
\end{enumerate}
See \cite{b30,b31}.
\item \emph{Article number in~reference examples:}\\
See \cite{b32,b33}.
\item \emph{Example when using et al.:}\\
See \cite{b34}.
\end{itemize}
\section{Introduction}
\label{sec:introduction}
\IEEEPARstart{T}{his} document is a template for \LaTeX. If you are
reading a paper or PDF version of this document, please download the
electronic file, trans\_jour.tex, from the IEEE Web site at \underline
{http://www.ieee.org/authortools/trans\_jour.tex} so you can use it to prepare your manuscript, along
with IEEE's \LaTeX\ style and sample files available
from the same Web page. You can also explore using the Overleaf editor at
\underline
{https://www.overleaf.com/blog/278-how-to-use-overleaf-with-}\discretionary{}{}{}\underline
{ieee-collabratec-your-quick-guide-to-getting-started\#.}\discretionary{}{}{}\underline{xsVp6tpPkrKM9}
If your paper is intended for a conference, please contact your conference
editor concerning acceptable word processor formats for your particular
conference.
IEEE will do the final formatting of your paper. If your paper is intended
for a conference, please observe the conference page limits.
\subsection{Abbreviations and Acronyms}
Define abbreviations and acronyms the first time they are used in the text,
even after they have already been defined in the abstract. Abbreviations
such as IEEE, SI, ac, and dc do not have to be defined. Abbreviations that
incorporate periods should not have spaces: write ``C.N.R.S.,'' not ``C. N.
R. S.'' Do not use abbreviations in the title unless they are unavoidable
(for example, ``IEEE'' in the title of this article).
\subsection{Other Recommendations}
Use one space after periods and colons. Hyphenate complex modifiers:
``zero-field-cooled magnetization.'' Avoid dangling participles, such as,
``Using \eqref{eq}, the potential was calculated.'' [It is not clear who or what
used \eqref{eq}.] Write instead, ``The potential was calculated by using \eqref{eq},'' or
``Using \eqref{eq}, we calculated the potential.''
Use a zero before decimal points: ``0.25,'' not ``.25.'' Use
``cm$^{3}$,'' not ``cc.'' Indicate sample dimensions as ``0.1 cm
$\times $ 0.2 cm,'' not ``0.1 $\times $ 0.2 cm$^{2}$.'' The
abbreviation for ``seconds'' is ``s,'' not ``sec.'' Use
``Wb/m$^{2}$'' or ``webers per square meter,'' not
``webers/m$^{2}$.'' When expressing a range of values, write ``7 to
9'' or ``7--9,'' not ``7$\sim $9.''
A parenthetical statement at the end of a sentence is punctuated outside of
the closing parenthesis (like this). (A parenthetical sentence is punctuated
within the parentheses.) In American English, periods and commas are within
quotation marks, like ``this period.'' Other punctuation is ``outside''!
Avoid contractions; for example, write ``do not'' instead of ``don't.'' The
serial comma is preferred: ``A, B, and C'' instead of ``A, B and C.''
If you wish, you may write in the first person singular or plural and use
the active voice (``I observed that $\ldots$'' or ``We observed that $\ldots$''
instead of ``It was observed that $\ldots$''). Remember to check spelling. If
your native language is not English, please get a native English-speaking
colleague to carefully proofread your paper.
Try not to use too many typefaces in the same article. You're writing
scholarly papers, not ransom notes. Also please remember that MathJax
can't handle really weird typefaces.
\subsection{Equations}
Number equations consecutively with equation numbers in parentheses flush
with the right margin, as in \eqref{eq}. To make your equations more
compact, you may use the solidus (~/~), the exp function, or appropriate
exponents. Use parentheses to avoid ambiguities in denominators. Punctuate
equations when they are part of a sentence, as in
\begin{equation}E=mc^2.\label{eq}\end{equation}
Be sure that the symbols in your equation have been defined before the
equation appears or immediately following. Italicize symbols ($T$ might refer
to temperature, but T is the unit tesla). Refer to ``\eqref{eq},'' not ``Eq. \eqref{eq}''
or ``equation \eqref{eq},'' except at the beginning of a sentence: ``Equation \eqref{eq}
is $\ldots$ .''
\subsection{\LaTeX-Specific Advice}
Please use ``soft'' (e.g., \verb|\eqref{Eq}|) cross references instead
of ``hard'' references (e.g., \verb|(1)|). That will make it possible
to combine sections, add equations, or change the order of figures or
citations without having to go through the file line by line.
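A minimal sketch of the recommended practice (the label name \texttt{eq:newton} is an arbitrary placeholder):

```latex
% Label the equation where it is defined ...
\begin{equation}
F = ma \label{eq:newton}
\end{equation}
% ... and refer to it with a soft reference, never a hard-coded "(2)".
As shown in \eqref{eq:newton}, renumbering is handled automatically.
```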
Please don't use the \verb|{eqnarray}| equation environment. Use
\verb|{align}| or \verb|{IEEEeqnarray}| instead. The \verb|{eqnarray}|
environment leaves unsightly spaces around relation symbols.
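For instance, a pair of placeholder equations set with \verb|{align}|, where \verb|&| marks the alignment point at the relation symbol:

```latex
% {align} keeps spacing around = correct and aligns the rows at &.
\begin{align}
a &= b + c,\\
d &= e - f.
\end{align}
```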
Please note that the \verb|{subequations}| environment in {\LaTeX}
will increment the main equation counter even when there are no
equation numbers displayed. If you forget that, you might write an
article in which the equation numbers skip from (17) to (20), causing
the copy editors to wonder if you've discovered a new method of
counting.
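When grouped numbering is actually wanted, a minimal sketch looks like the following (the symbols and label names are placeholders); the group consumes one value of the main counter and its members carry lettered numbers such as (2a) and (2b):

```latex
\begin{subequations}\label{eq:pair}
\begin{align}
x &= r\cos\theta, \label{eq:pair-x}\\
y &= r\sin\theta. \label{eq:pair-y}
\end{align}
\end{subequations}
```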
{\BibTeX} does not work by magic. It doesn't get the bibliographic
data from thin air but from .bib files. If you use {\BibTeX} to produce a
bibliography you must send the .bib files.
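For reference, an entry in a .bib file has roughly the following shape (the citation key, fields, and values below are placeholders, not a real reference):

```latex
@article{author2024example,
  author  = {J. K. Author},
  title   = {Name of paper},
  journal = {Abbrev. Title of Periodical},
  volume  = {1},
  number  = {1},
  pages   = {1--10},
  year    = {2024},
}
```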
{\LaTeX} can't read your mind. If you assign the same label to a
subsubsection and a table, you might find that Table I has been cross
referenced as Table IV-B3.
{\LaTeX} does not have precognitive abilities. If you put a
\verb|\label| command before the command that updates the counter it's
supposed to be using, the label will pick up the last counter to be
cross referenced instead. In particular, a \verb|\label| command
should not go before the caption of a figure or a table.
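Concretely, in a float it is \verb|\caption| that steps the counter, so \verb|\label| belongs immediately after it (the file name and label below are placeholders):

```latex
\begin{figure}[!t]
\centerline{\includegraphics[width=\columnwidth]{fig1.png}}
% \caption updates the figure counter ...
\caption{Example caption text.}
% ... so \label placed here picks up the correct figure number.
\label{fig:example}
\end{figure}
```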
Do not use \verb|\nonumber| inside the \verb|{array}| environment. It
will not stop equation numbers inside \verb|{array}| (there won't be
any anyway) and it might stop a wanted equation number in the
surrounding equation.
If you are submitting your paper to a colorized journal, you can use
the following two lines at the start of the article to ensure its
appearance resembles the final copy:
\smallskip\noindent
\begin{small}
\begin{tabular}{l}
\verb+\+\texttt{documentclass[journal,twoside,web]\{ieeecolor\}}\\
\verb+\+\texttt{usepackage\{\textit{Journal\_Name}\}}
\end{tabular}
\end{small}
\section{Units}
Use either SI (MKS) or CGS as primary units. (SI units are strongly
encouraged.) English units may be used as secondary units (in parentheses).
This applies to papers in data storage. For example, write ``15
Gb/cm$^{2}$ (100 Gb/in$^{2})$.'' An exception is when
English units are used as identifiers in trade, such as ``3\textonehalf-in
disk drive.'' Avoid combining SI and CGS units, such as current in amperes
and magnetic field in oersteds. This often leads to confusion because
equations do not balance dimensionally. If you must use mixed units, clearly
state the units for each quantity in an equation.
The SI unit for magnetic field strength $H$ is A/m. However, if you wish to use
units of T, either refer to magnetic flux density $B$ or magnetic field
strength symbolized as $\mu _{0}H$. Use the center dot to separate
compound units, e.g., ``A$\cdot $m$^{2}$.''
\section{Some Common Mistakes}
The word ``data'' is plural, not singular. The subscript for the
permeability of vacuum $\mu _{0}$ is zero, not a lowercase letter
``o.'' The term for residual magnetization is ``remanence''; the adjective
is ``remanent''; do not write ``remnance'' or ``remnant.'' Use the word
``micrometer'' instead of ``micron.'' A graph within a graph is an
``inset,'' not an ``insert.'' The word ``alternatively'' is preferred to the
word ``alternately'' (unless you really mean something that alternates). Use
the word ``whereas'' instead of ``while'' (unless you are referring to
simultaneous events). Do not use the word ``essentially'' to mean
``approximately'' or ``effectively.'' Do not use the word ``issue'' as a
euphemism for ``problem.'' When compositions are not specified, separate
chemical symbols by en-dashes; for example, ``NiMn'' indicates the
intermetallic compound Ni$_{0.5}$Mn$_{0.5}$ whereas
``Ni--Mn'' indicates an alloy of some composition
Ni$_{x}$Mn$_{1-x}$.
\begin{figure}[!t]
\centerline{\includegraphics[width=\columnwidth]{fig1.png}}
\caption{Magnetization as a function of applied field.
It is good practice to explain the significance of the figure in the caption.}
\label{fig1}
\end{figure}
Be aware of the different meanings of the homophones ``affect'' (usually a
verb) and ``effect'' (usually a noun), ``complement'' and ``compliment,''
``discreet'' and ``discrete,'' ``principal'' (e.g., ``principal
investigator'') and ``principle'' (e.g., ``principle of measurement''). Do
not confuse ``imply'' and ``infer.''
Prefixes such as ``non,'' ``sub,'' ``micro,'' ``multi,'' and ``ultra'' are
not independent words; they should be joined to the words they modify,
usually without a hyphen. There is no period after the ``et'' in the Latin
abbreviation ``\emph{et al.}'' (it is also italicized). The abbreviation ``i.e.,'' means
``that is,'' and the abbreviation ``e.g.,'' means ``for example'' (these
abbreviations are not italicized).
A general IEEE style guide is available at \underline{http://www.ieee.org/authortools}.
\section{Guidelines for Graphics Preparation and Submission}
\label{sec:guidelines}
\subsection{Types of Graphics}
The following list outlines the different types of graphics published in
IEEE journals. They are categorized based on their construction and use of
color/shades of gray:
\subsubsection{Color/Grayscale figures}
{Figures that are meant to appear in color, or shades of black/gray. Such
figures may include photographs, illustrations, multicolor graphs, and
flowcharts.}
\subsubsection{Line Art figures}
{Figures that are composed of only black lines and shapes. These figures
should have no shades or half-tones of gray, only black and white.}
\subsubsection{Author photos}
{Head and shoulders shots of authors that appear at the end of our papers. }
\subsubsection{Tables}
{Data charts which are typically black and white, but sometimes include
color.}
\begin{table}
\caption{Units for Magnetic Properties}
\label{table}
\setlength{\tabcolsep}{3pt}
\begin{tabular}{|p{25pt}|p{75pt}|p{115pt}|}
\hline
Symbol&
Quantity&
Conversion from Gaussian and \par CGS EMU to SI $^{\mathrm{a}}$ \\
\hline
$\Phi $&
magnetic flux&
1 Mx $\to 10^{-8}$ Wb $= 10^{-8}$ V$\cdot $s \\
$B$&
magnetic flux density, \par magnetic induction&
1 G $\to 10^{-4}$ T $= 10^{-4}$ Wb/m$^{2}$ \\
$H$&
magnetic field strength&
1 Oe $\to 10^{3}/(4\pi )$ A/m \\
$m$&
magnetic moment&
1 erg/G $=$ 1 emu \par $\to 10^{-3}$ A$\cdot $m$^{2} = 10^{-3}$ J/T \\
$M$&
magnetization&
1 erg/(G$\cdot $cm$^{3}) =$ 1 emu/cm$^{3}$ \par $\to 10^{3}$ A/m \\
4$\pi M$&
magnetization&
1 G $\to 10^{3}/(4\pi )$ A/m \\
$\sigma $&
specific magnetization&
1 erg/(G$\cdot $g) $=$ 1 emu/g $\to $ 1 A$\cdot $m$^{2}$/kg \\
$j$&
magnetic dipole \par moment&
1 erg/G $=$ 1 emu \par $\to 4\pi \times 10^{-10}$ Wb$\cdot $m \\
$J$&
magnetic polarization&
1 erg/(G$\cdot $cm$^{3}) =$ 1 emu/cm$^{3}$ \par $\to 4\pi \times 10^{-4}$ T \\
$\chi , \kappa $&
susceptibility&
1 $\to 4\pi $ \\
$\chi_{\rho }$&
mass susceptibility&
1 cm$^{3}$/g $\to 4\pi \times 10^{-3}$ m$^{3}$/kg \\
$\mu $&
permeability&
1 $\to 4\pi \times 10^{-7}$ H/m \par $= 4\pi \times 10^{-7}$ Wb/(A$\cdot $m) \\
$\mu_{r}$&
relative permeability&
$\mu \to \mu_{r}$ \\
$w, W$&
energy density&
1 erg/cm$^{3} \to 10^{-1}$ J/m$^{3}$ \\
$N, D$&
demagnetizing factor&
1 $\to 1/(4\pi )$ \\
\hline
\multicolumn{3}{p{251pt}}{Vertical lines are optional in tables. Statements that serve as captions for
the entire table do not need footnote letters. }\\
\multicolumn{3}{p{251pt}}{$^{\mathrm{a}}$Gaussian units are the same as cg emu for magnetostatics; Mx
$=$ maxwell, G $=$ gauss, Oe $=$ oersted; Wb $=$ weber, V $=$ volt, s $=$
second, T $=$ tesla, m $=$ meter, A $=$ ampere, J $=$ joule, kg $=$
kilogram, H $=$ henry.}
\end{tabular}
\end{table}
\subsection{Multipart figures}
Figures composed of more than one sub-figure presented side-by-side, or
stacked. If a multipart figure is made up of multiple figure
types (one part is line art, and another is grayscale or color), the figure
should meet the stricter guidelines.
\subsection{File Formats For Graphics}\label{formats}
Format and save your graphics using a suitable graphics processing program
that will allow you to create the images as PostScript (PS), Encapsulated
PostScript (.EPS), Tagged Image File Format (.TIFF), Portable Document
Format (.PDF), Portable Network Graphics (.PNG), or Metapost (.MPS), size them, and adjust
the resolution settings. When
submitting your final paper, your graphics should all be submitted
individually in one of these formats along with the manuscript.
\subsection{Sizing of Graphics}
Most charts, graphs, and tables are one column wide (3.5 inches/88
millimeters/21 picas) or page wide (7.16 inches/181 millimeters/43
picas). The maximum depth a graphic can be is 8.5 inches (216 millimeters/54
picas). When choosing the depth of a graphic, please allow space for a
caption. Figures can be sized between column and page widths if the author
chooses; however, it is recommended that figures not be sized less than
column width unless necessary.
There is currently one publication with column measurements that do not
coincide with those listed above. Proceedings of the IEEE has a column
measurement of 3.25 inches (82.5 millimeters/19.5 picas).
The final printed size of author photographs is exactly
1 inch wide by 1.25 inches tall (25.4 millimeters$\,\times\,$31.75 millimeters/6
picas$\,\times\,$7.5 picas). Author photos printed in editorials measure 1.59 inches
wide by 2 inches tall (40 millimeters$\,\times\,$50 millimeters/9.5 picas$\,\times\,$12
picas).
\subsection{Resolution}
The proper resolution of your figures will depend on the type of figure,
as defined in the ``Types of Graphics'' section. Author photographs,
color, and grayscale figures should be at least 300 dpi. Line art, including
tables, should be a minimum of 600 dpi.
\subsection{Vector Art}
In order to preserve the figures' integrity across multiple computer
platforms, we accept files in the following formats: .EPS/.PDF/.PS. All
fonts must be embedded or text converted to outlines in order to achieve the
best-quality results.
\subsection{Color Space}
The term color space refers to the entire sum of colors that can be
represented within the said medium. For our purposes, the three main color
spaces are Grayscale, RGB (red/green/blue) and CMYK
(cyan/magenta/yellow/black). RGB is generally used with on-screen graphics,
whereas CMYK is used for printing purposes.
All color figures should be generated in RGB or CMYK color space. Grayscale
images should be submitted in Grayscale color space. Line art may be
provided in grayscale OR bitmap colorspace. Note that ``bitmap colorspace''
and ``bitmap file format'' are not the same thing. When bitmap color space
is selected, .TIF/.TIFF/.PNG are the recommended file formats.
\subsection{Accepted Fonts Within Figures}
When preparing your graphics IEEE suggests that you use of one of the
following Open Type fonts: Times New Roman, Helvetica, Arial, Cambria, and
Symbol. If you are supplying EPS, PS, or PDF files all fonts must be
embedded. Some fonts may only be native to your operating system; without
the fonts embedded, parts of the graphic may be distorted or missing.
A safe option when finalizing your figures is to strip out the fonts before
you save the files, creating ``outline'' type. This converts fonts to
artwork that will appear uniformly on any screen.
\subsection{Using Labels Within Figures}
\subsubsection{Figure Axis labels }
Figure axis labels are often a source of confusion. Use words rather than
symbols. As an example, write the quantity ``Magnetization,'' or
``Magnetization M,'' not just ``M.'' Put units in parentheses. Do not label
axes only with units. As in Fig.~\ref{fig1}, for example, write ``Magnetization
(A/m)'' or ``Magnetization (A$\cdot$m$^{-1}$),'' not just ``A/m.'' Do not label axes with a ratio of quantities and
units. For example, write ``Temperature (K),'' not ``Temperature/K.''
Multipliers can be especially confusing. Write ``Magnetization (kA/m)'' or
``Magnetization (10$^{3}$ A/m).'' Do not write ``Magnetization
(A/m)$\,\times\,$1000'' because the reader would not know whether the top
axis label in Fig.~\ref{fig1} meant 16000 A/m or 0.016 A/m. Figure labels should be
legible, approximately 8 to 10 point type.
\subsubsection{Subfigure Labels in Multipart Figures and Tables}
Multipart figures should be combined and labeled before final submission.
Labels should appear centered below each subfigure in 8 point Times New
Roman font in the format of (a) (b) (c).
\subsection{File Naming}
Figures (line artwork or photographs) should be named starting with the
first 5 letters of the author's last name. The next characters in the
filename should be the number that represents the sequential
location of this image in your article. For example, in author
``Anderson's'' paper, the first three figures would be named ander1.tif,
ander2.tif, and ander3.ps.
Tables should contain only the body of the table (not the caption) and
should be named similarly to figures, except that `.t' is inserted
in-between the author's name and the table number. For example, author
Anderson's first three tables would be named ander.t1.tif, ander.t2.ps,
ander.t3.eps.
Author photographs should be named using the first five characters of the
pictured author's last name. For example, four author photographs for a
paper may be named: oppen.ps, moshc.tif, chen.eps, and duran.pdf.
If two authors or more have the same last name, their first initial(s) can
be substituted for the fifth, fourth, third$\ldots$ letters of their surname
until the degree where there is differentiation. For example, two authors
Michael and Monica Oppenheimer's photos would be named oppmi.tif, and
oppmo.eps.
\subsection{Referencing a Figure or Table Within Your Paper}
When referencing your figures and tables within your paper, use the
abbreviation ``Fig.'' even at the beginning of a sentence. Do not abbreviate
``Table.'' Tables should be numbered with Roman Numerals.
\subsection{Checking Your Figures: The IEEE Graphics Analyzer}
The IEEE Graphics Analyzer enables authors to pre-screen their graphics for
compliance with IEEE Transactions and Journals standards before submission.
The online tool, located at
\underline{http://graphicsqc.ieee.org/}, allows authors to
upload their graphics in order to check that each file is the correct file
format, resolution, size and colorspace; that no fonts are missing or
corrupt; that figures are not compiled in layers or have transparency, and
that they are named according to the IEEE Transactions and Journals naming
convention. At the end of this automated process, authors are provided with
a detailed report on each graphic within the web applet, as well as by
email.
For more information on using the Graphics Analyzer or any other graphics
related topic, contact the IEEE Graphics Help Desk by e-mail at
graphics@ieee.org.
\subsection{Submitting Your Graphics}
Because IEEE will do the final formatting of your paper,
you do not need to position figures and tables at the top and bottom of each
column. In fact, all figures, figure captions, and tables can be placed at
the end of your paper. In addition to, or even in lieu of submitting figures
within your final manuscript, figures should be submitted individually,
separate from the manuscript in one of the file formats listed above in
Section \ref{formats}. Place figure captions below the figures; place table titles
above the tables. Please do not include captions as part of the figures, or
put them in ``text boxes'' linked to the figures. Also, do not place borders
around the outside of your figures.
\subsection{Color Processing/Printing in IEEE Journals}
All IEEE Transactions, Journals, and Letters allow an author to publish
color figures on IEEE Xplore\textregistered\ at no charge, and automatically
convert them to grayscale for print versions. In most journals, figures and
tables may alternatively be printed in color if an author chooses to do so.
Please note that this service comes at an extra expense to the author. If
you intend to have print color graphics, include a note with your final
paper indicating which figures or tables you would like to be handled that
way, and stating that you are willing to pay the additional fee.
\section{Conclusion}
A conclusion section is not required. Although a conclusion may review the
main points of the paper, do not replicate the abstract as the conclusion. A
conclusion might elaborate on the importance of the work or suggest
applications and extensions.
\appendices
Appendixes, if needed, appear before the acknowledgment.
\section*{Acknowledgment}
The preferred spelling of the word ``acknowledgment'' in American English is
without an ``e'' after the ``g.'' Use the singular heading even if you have
many acknowledgments. Avoid expressions such as ``One of us (S.B.A.) would
like to thank $\ldots$ .'' Instead, write ``F. A. Author thanks $\ldots$ .'' In most
cases, sponsor and financial support acknowledgments are placed in the
unnumbered footnote on the first page, not here.
\section*{References and Footnotes}
\subsection{References}
References need not be cited in text. When they are, they appear on the
line, in square brackets, inside the punctuation. Multiple references are
each numbered with separate brackets. When citing a section in a book,
please give the relevant page numbers. In text, refer simply to the
reference number. Do not use ``Ref.'' or ``reference'' except at the
beginning of a sentence: ``Reference \cite{b3} shows $\ldots$ .'' Please do not use
automatic endnotes in \emph{Word}, rather, type the reference list at the end of the
paper using the ``References'' style.
Reference numbers are set flush left and form a column of their own, hanging
out beyond the body of the reference. The reference numbers are on the line,
enclosed in square brackets. In all references, the given name of the author
or editor is abbreviated to the initial only and precedes the last name. Use
them all; use \emph{et al.} only if names are not given. Use commas around Jr.,
Sr., and III in names. Abbreviate conference titles. When citing IEEE
transactions, provide the issue number, page range, volume number, year,
and/or month if available. When referencing a patent, provide the day and
the month of issue, or application. References may not include all
information; please obtain and include relevant information. Do not combine
references. There must be only one reference with each number. If there is a
URL included with the print reference, it can be included at the end of the
reference.
Other than books, capitalize only the first word in a paper title, except
for proper nouns and element symbols. For papers published in translation
journals, please give the English citation first, followed by the original
foreign-language citation. See the end of this document for formats and
examples of common references. For a complete discussion of references and
their formats, see the IEEE style manual at
\underline{http://www.ieee.org/authortools}.
\subsection{Footnotes}
Number footnotes separately in superscript numbers.\footnote{It is recommended that footnotes be avoided (except for
the unnumbered footnote with the receipt date on the first page). Instead,
try to integrate the footnote information into the text.} Place the actual
footnote at the bottom of the column in which it is cited; do not put
footnotes in the reference list (endnotes). Use letters for table footnotes
(see Table \ref{table}).
\section{Submitting Your Paper for Review}
\subsection{Final Stage}
When you submit your final version (after your paper has been accepted),
print it in two-column format, including figures and tables. You must also
send your final manuscript on a disk, via e-mail, or through a Web
manuscript submission system as directed by the society contact. You may use
\emph{Zip} for large files, or compress files using \emph{Compress, Pkzip, Stuffit,} or \emph{Gzip.}
Also, send a sheet of paper or PDF with complete contact information for all
authors. Include full mailing addresses, telephone numbers, fax numbers, and
e-mail addresses. This information will be used to send each author a
complimentary copy of the journal in which the paper appears. In addition,
designate one author as the ``corresponding author.'' This is the author to
whom proofs of the paper will be sent. Proofs are sent to the corresponding
author only.
\subsection{Review Stage Using ScholarOne\textregistered\ Manuscripts}
Contributions to the Transactions, Journals, and Letters may be submitted
electronically on IEEE's on-line manuscript submission and peer-review
system, ScholarOne\textregistered\ Manuscripts. You can get a listing of the
publications that participate in ScholarOne at
\underline{http://www.ieee.org/publications\_standards/publications/}\discretionary{}{}{}\underline{authors/authors\_submission.html}.
First check if you have an existing account. If there is none, please create
a new account. After logging in, go to your Author Center and click ``Submit
First Draft of a New Manuscript.''
Along with other information, you will be asked to select the subject from a
pull-down list. Depending on the journal, there are various steps to the
submission process; you must complete all steps for a complete submission.
At the end of each step you must click ``Save and Continue''; just uploading
the paper is not sufficient. After the last step, you should see a
confirmation that the submission is complete. You should also receive an
e-mail confirmation. For inquiries regarding the submission of your paper on
ScholarOne Manuscripts, please contact oprs-support@ieee.org or call +1 732
465 5861.
ScholarOne Manuscripts will accept files for review in various formats.
Please check the guidelines of the specific journal for which you plan to
submit.
You will be asked to file an electronic copyright form immediately upon
completing the submission process (authors are responsible for obtaining any
security clearances). Failure to submit the electronic copyright could
result in publishing delays later. You will also have the opportunity to
designate your article as ``open access'' if you agree to pay the IEEE open
access fee.
\subsection{Final Stage Using ScholarOne Manuscripts}
Upon acceptance, you will receive an email with specific instructions
regarding the submission of your final files. To avoid any delays in
publication, please be sure to follow these instructions. Most journals
require that final submissions be uploaded through ScholarOne Manuscripts,
although some may still accept final submissions via email. Final
submissions should include source files of your accepted manuscript, high
quality graphic files, and a formatted pdf file. If you have any questions
regarding the final submission process, please contact the administrative
contact for the journal.
In addition to this, upload a file with complete contact information for all
authors. Include full mailing addresses, telephone numbers, fax numbers, and
e-mail addresses. Designate the author who submitted the manuscript on
ScholarOne Manuscripts as the ``corresponding author.'' This is the only
author to whom proofs of the paper will be sent.
\subsection{Copyright Form}
Authors must submit an electronic IEEE Copyright Form (eCF) upon submitting
their final manuscript files. You can access the eCF system through your
manuscript submission system or through the Author Gateway. You are
responsible for obtaining any necessary approvals and/or security
clearances. For additional information on intellectual property rights,
visit the IEEE Intellectual Property Rights department web page at
\underline{http://www.ieee.org/publications\_standards/publications/rights/}\discretionary{}{}{}\underline{index.html}.
\section{IEEE Publishing Policy}
The general IEEE policy requires that authors should only submit original
work that has neither appeared elsewhere for publication, nor is under
review for another refereed publication. The submitting author must disclose
all prior publication(s) and current submissions when submitting a
manuscript. Do not publish ``preliminary'' data or results. The submitting
author is responsible for obtaining agreement of all coauthors and any
consent required from employers or sponsors before submitting an article.
The IEEE Transactions and Journals Department strongly discourages courtesy
authorship; it is the obligation of the authors to cite only relevant prior
work.
The IEEE Transactions and Journals Department does not publish conference
records or proceedings, but can publish articles related to conferences that
have undergone rigorous peer review. Minimally, two reviews are required for
every article submitted for peer review.
\section{Publication Principles}
The two types of content that are published are: 1) peer-reviewed and 2)
archival. The Transactions and Journals Department publishes scholarly
articles of archival value as well as tutorial expositions and critical
reviews of classical subjects and topics of current interest.
Authors should consider the following points:
\begin{enumerate}
\item Technical papers submitted for publication must advance the state of knowledge and must cite relevant prior work.
\item The length of a submitted paper should be commensurate with the importance, or appropriate to the complexity, of the work. For example, an obvious extension of previously published work might not be appropriate for publication or might be adequately treated in just a few pages.
\item Authors must convince both peer reviewers and the editors of the scientific and technical merit of a paper; the standards of proof are higher when extraordinary or unexpected results are reported.
\item Because replication is required for scientific progress, papers submitted for publication must provide sufficient information to allow readers to perform similar experiments or calculations and
use the reported results. Although not everything need be disclosed, a paper
must contain new, useable, and fully described information. For example, a
specimen's chemical composition need not be reported if the main purpose of
a paper is to introduce a new measurement technique. Authors should expect
to be challenged by reviewers if the results are not supported by adequate
data and critical details.
\item Papers that describe ongoing work or announce the latest technical achievement, which are suitable for presentation at a professional conference, may not be appropriate for publication.
\end{enumerate}
\section{Reference Examples}
\begin{itemize}
\item \emph{Basic format for books:}\\
J. K. Author, ``Title of chapter in the book,'' in \emph{Title of His Published Book, x}th ed. City of Publisher, (only U.S. State), Country: Abbrev. of Publisher, year, ch. $x$, sec. $x$, pp. \emph{xxx--xxx.}\\
See \cite{b1,b2}.
\item \emph{Basic format for periodicals:}\\
J. K. Author, ``Name of paper,'' \emph{Abbrev. Title of Periodical}, vol. $x$, no. $x$, pp. xxx--xxx, Abbrev. Month, year, DOI: 10.1109.\emph{XXX}.123456.\\
See \cite{b3}--\cite{b5}.
\item \emph{Basic format for reports:}\\
J. K. Author, ``Title of report,'' Abbrev. Name of Co., City of Co., Abbrev. State, Country, Rep. \emph{xxx}, year.\\
See \cite{b6,b7}.
\item \emph{Basic format for handbooks:}\\
\emph{Name of Manual/Handbook, x} ed., Abbrev. Name of Co., City of Co., Abbrev. State, Country, year, pp. \emph{xxx--xxx.}\\
See \cite{b8,b9}.
\item \emph{Basic format for books (when available online):}\\
J. K. Author, ``Title of chapter in the book,'' in \emph{Title of
Published Book}, $x$th ed. City of Publisher, State, Country: Abbrev.
of Publisher, year, ch. $x$, sec. $x$, pp. \emph{xxx--xxx}. [Online].
Available: \underline{http://www.web.com}\\
See \cite{b10}--\cite{b13}.
\item \emph{Basic format for journals (when available online):}\\
J. K. Author, ``Name of paper,'' \emph{Abbrev. Title of Periodical}, vol. $x$, no. $x$, pp. \emph{xxx--xxx}, Abbrev. Month, year. Accessed on: Month, Day, year, DOI: 10.1109.\emph{XXX}.123456, [Online].\\
See \cite{b14}--\cite{b16}.
\item \emph{Basic format for papers presented at conferences (when available online): }\\
J. K. Author. (year, month). Title. Presented at abbrev. conference title. [Type of Medium]. Available: site/path/file\\
See \cite{b17}.
\item \emph{Basic format for reports and handbooks (when available online):}\\
J. K. Author. ``Title of report,'' Company. City, State, Country. Rep. no., (optional: vol./issue), Date. [Online] Available: site/path/file\\
See \cite{b18,b19}.
\item \emph{Basic format for computer programs and electronic documents (when available online): }\\
Legislative body. Number of Congress, Session. (year, month day). \emph{Number of bill or resolution}, \emph{Title}. [Type of medium]. Available: site/path/file\\
\textbf{\emph{NOTE: }ISO recommends that capitalization follow the accepted practice for the language or script in which the information is given.}\\
See \cite{b20}.
\item \emph{Basic format for patents (when available online):}\\
Name of the invention, by inventor's name. (year, month day). Patent Number [Type of medium]. Available: site/path/file\\
See \cite{b21}.
\item \emph{Basic format for conference proceedings (published):}\\
J. K. Author, ``Title of paper,'' in \emph{Abbreviated Name of Conf.}, City of Conf., Abbrev. State (if given), Country, year, pp. xxx--xxx.\\
See \cite{b22}.
\item \emph{Example for papers presented at conferences (unpublished):}\\
See \cite{b23}.
\item \emph{Basic format for patents:}\\
J. K. Author, ``Title of patent,'' U.S. Patent \emph{x xxx xxx}, Abbrev. Month, day, year.\\
See \cite{b24}.
\item \emph{Basic format for theses (M.S.) and dissertations (Ph.D.):}
\begin{enumerate}
\item J. K. Author, ``Title of thesis,'' M.S. thesis, Abbrev. Dept., Abbrev. Univ., City of Univ., Abbrev. State, year.
\item J. K. Author, ``Title of dissertation,'' Ph.D. dissertation, Abbrev. Dept., Abbrev. Univ., City of Univ., Abbrev. State, year.
\end{enumerate}
See \cite{b25,b26}.
\item \emph{Basic format for the most common types of unpublished references:}
\begin{enumerate}
\item J. K. Author, private communication, Abbrev. Month, year.
\item J. K. Author, ``Title of paper,'' unpublished.
\item J. K. Author, ``Title of paper,'' to be published.
\end{enumerate}
See \cite{b27}--\cite{b29}.
\item \emph{Basic formats for standards:}
\begin{enumerate}
\item \emph{Title of Standard}, Standard number, date.
\item \emph{Title of Standard}, Standard number, Corporate author, location, date.
\end{enumerate}
See \cite{b30,b31}.
\item \emph{Article number in~reference examples:}\\
See \cite{b32,b33}.
\item \emph{Example when using et al.:}\\
See \cite{b34}.
\end{itemize}
\section{Introduction} \label{section introduciton}
\IEEEPARstart{A}{} variety of equilibrium-seeking problems in game theory can be cast as a variational inequality (VI) problem~\cite{FF-JSP:03}. For example, Nash equilibria of a game and Wardrop equilibria of a network routing game both correspond to solutions of a VI under mild conditions. In practice, it is natural to regard the costs or utilities that players optimize in such games as uncertain. Faced with randomness, the risk preferences of players often determine their decisions: players optimize a risk measure of their random costs, and consequently, equilibria can be found by solving a VI involving risk measures of uncertain costs. Motivated by this setup, we consider VIs defined by the conditional value-at-risk (CVaR) of random costs and develop stochastic approximation (SA) schemes to solve them.
\subsubsection*{Literature review}
The most popular way of incorporating uncertainty in VIs is the stochastic variational inequality (SVI) problem; see, e.g.,~\cite{UVS:13} and references therein. Here, the map associated to the VI is the expectation of a random function. SA methods for solving SVIs are well studied~\cite{UVS:13, YC-GL-YO:14}. A key feature of such schemes is the use of an unbiased estimator of the map, formed from any number of samples of the uncertainty. This leads to strong convergence guarantees under a mild set of assumptions. However, the empirical estimator of CVaR, while consistent, is biased~\cite{RPR-NDS:10}. Therefore, depending on the required level of precision, more samples are required to estimate the CVaR. This bias poses challenges in the convergence analysis of SA schemes. For a general discussion on risk-based VIs, including CVaR, and their potential applications, see~\cite{UR:14}. In~\cite{AC:19-cdc}, a sample average approximation method for estimating the solution of a CVaR-based VI was discussed. Our work also broadly relates to~\cite{AT-YC-MG-SM:17} and~\cite{CJ-LAP-FM-SM-CS:18}, where sample-based methods are used for optimizing the CVaR and other risk measures, respectively. The convergence analysis of our iterative methods consists of approximating the asymptotic behavior of the iterates by a trajectory of a continuous-time dynamical system and studying its stability. See~\cite{HJK-DSC:78} and~\cite{VB:08} for a comprehensive account of such analysis.
\subsubsection*{Statement of Contributions}
Our starting point is the definition of the CVaR-based variational inequality (VI), where the map defining the VI consists of components that are the CVaR of random functions. The feasibility set, assumed to be convex and compact, is defined by a set of inequality and linear equality constraints. We motivate the VI problem using two examples from noncooperative games. Our \emph{first contribution} is the first SA scheme, termed the projected method. This iterative method consists of moving along the empirical estimate of the map defining the VI and projecting each iterate onto the feasibility set. We show that under strict monotonicity, the projected algorithm converges asymptotically to any arbitrary neighborhood of the solution of the VI, where the size of the neighborhood influences the number of samples required to form the empirical estimate in each iteration. Our \emph{second contribution} is the subspace-constrained method, which overcomes the computational burden of calculating projections onto the feasibility set by treating equality and inequality constraints differently. In particular, proximity to satisfying the inequality constraints is ensured using penalty functions, while the iterates are constrained to the subspace generated by the linear equality constraints by pre-multiplying the iteration step by an appropriate matrix. We establish that under strict monotonicity, the algorithm converges asymptotically to any neighborhood of the solution of the VI. Our \emph{third contribution} is the multiplier-driven method, where projections are discarded altogether by introducing a multiplier for the inequality constraints. As in the subspace-constrained method, the iterates satisfy the equality constraints throughout the execution by means of matrix pre-multiplication. The iterates are shown to converge asymptotically under strict monotonicity to any neighborhood of the solution of the VI. 
Finally, we demonstrate the behavior of the algorithms using a network routing example. Our work here is an extension of~\cite{JV-AC:20}. Most notably, we have extended the result for the first projection-based algorithm to the case where only a finite number of samples of the uncertainty is available in each iteration. In addition, our second and third algorithms have been modified by introducing a matrix pre-multiplication that constrains the iterates to lie in the affine subspace defined by the equality constraints.
\section{Preliminaries} \label{section preliminaries}
\subsubsection{Notation} \label{sec:notation}
Let $\mathbb{R}$ and $\mathbb{N}$ denote the real and natural numbers, respectively. For $N \in \mathbb{N}$, we write ${\until{N} := \{1, 2, \dots, N\}}$. For a given $x \in \mathbb{R}$, we use the notation $[x]^+ := \max(x,0)$. For $x, y \in \mathbb{R}$, the operator $[x]_{y}^+$ equals $x$ if $y > 0$ and it equals $\max\{0,x\}$ if $y \leq 0$. For $x \in \mathbb{R}^n$, we let $x_i$ denote the $i$-th element of $x$, and the $i$-th element of the vector $[x]^+$ is $[x_i]^+$. For vectors $x,y \in \mathbb{R}^n$, $[x]_{y}^+$ denotes the vector whose $i$-th element is $[x_i]_{y_i}^+$, $i \in \until{n}$. The Euclidean norm of $x$ is given by $\norm{x}$. The Euclidean projection of $x$ onto the set $\mathcal{H}$ is denoted $\Pi_\mathcal{H}(x):=\operatorname{argmin}_{y \in \mathcal{H}} \norm{x-y}$. The $\epsilon$-neighborhood of $x$ is defined as ${\mathcal{B}_{\epsilon}(x) := \setdef{y \in \mathbb{R}^n}{\norm{y-x} < \epsilon}}$. The closure of a set $\SS \subset \mathbb{R}^n$ is denoted by $\mathrm{cl}(\SS)$. The normal cone to a set $\mathcal{X} \subseteq \mathbb{R}^n$ at $x \in \mathcal{X}$ is defined as ${\mathcal{N}_\mathcal{X}(x) := \setdef{y \in \mathbb{R}^n}{y^\top(z - x) \leq 0 \enskip \forall z \in \mathcal{X}}}$. The set ${\mathcal{T}_\mathcal{X}(x) := \mathrm{cl}\big(\cup_{y \in \mathcal{X}} \cup_{\lambda > 0} \lambda(y - x)\big)}$ is referred to as the tangent cone to $\mathcal{X}$ at $x \in \mathcal{X}$.
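The elementwise operators $[\cdot]^+$ and $[\cdot]_{y}^+$ defined above are easy to mis-read; the following minimal Python sketch (the function names `pos` and `pos_if` are ours, chosen for this illustration) makes their action explicit:

```python
import numpy as np

def pos(x):
    """Elementwise [x]^+ := max(x, 0)."""
    return np.maximum(np.asarray(x, dtype=float), 0.0)

def pos_if(x, y):
    """Elementwise [x]_y^+: keeps x_i where y_i > 0 and
    applies max(x_i, 0) where y_i <= 0."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return np.where(y > 0, x, np.maximum(x, 0.0))
```

For instance, `pos_if([-1.0, -1.0, 2.0], [1.0, 0.0, -3.0])` returns `[-1., 0., 2.]`: the first entry is kept as-is because $y_1 > 0$, while the other two are clipped at zero.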
\subsubsection{Variational inequalities and KKT points}\label{sec:vi-kkt}
For a given map $F: \mathbb{R}^n \rightarrow \mathbb{R}^n$ and a closed set $\mathcal{H} \subseteq \mathbb{R}^n$, the associated \textit{variational inequality} (VI) problem, $\operatorname{VI}(\mathcal{H},F)$, is to find ${h^* \in \mathcal{H}}$ such that ${(h - h^*)^\top F(h^*) \geq 0}$ holds for all ${h \in \mathcal{H}}$. The set of all points that solve $\operatorname{VI}(\mathcal{H},F)$ is denoted $\operatorname{SOL}(\mathcal{H},F)$. The map $F$ is called \textit{monotone} if $\big(F(x) \! - \! F(y)\big)^\top \! (x \! - \! y) \! \ge \! 0$ holds for all ${x,y \in \mathbb{R}^n}$. If the inequality is strict for $x \not = y$, then $F$ is \textit{strictly monotone}. If $\mathcal{H}$ is nonempty, convex, and compact, and $F$ is continuous, then $\operatorname{SOL}(\mathcal{H},F)$ is nonempty. If $F$ is strictly monotone, then $\operatorname{VI}(\mathcal{H},F)$ has at most one solution. Under the \textit{linear independence constraint qualification} (LICQ), we next characterize $\operatorname{SOL}(\mathcal{H},F)$ as the set of Karush-Kuhn-Tucker (KKT) points.
\begin{lemma}\longthmtitle{KKT points of $\operatorname{VI}(\mathcal{H},F)$}\label{le:KKT}
Let
\begin{equation*}
\mathcal{H} := \setdef{h \in \mathbb{R}^n }{Ah = b, \enskip q^i(h) \leq 0, \enskip \forall i \in [s]},
\end{equation*}
where $A \in \mathbb{R}^{l \times n}$, $b \in \mathbb{R}^{l}$ and $l \in \mathbb{N}$, and the functions ${q^i: \mathbb{R}^n \rightarrow \mathbb{R}}$, $i \in \until{s}$, $s \in \mathbb{N}$ are convex and continuously differentiable. For $q(h) \! := \! (q^1(h), \dots, q^s(h))^\top \! \in \! \mathbb{R}^s$, let ${D q(h) \! \in \! \mathbb{R}^{s \times n}}$ be the Jacobian at $h$. For any ${h^* \in \mathbb{R}^n}$, if there exists a multiplier ${(\lambda^*, \mu^*) \in \mathbb{R}^s \times \mathbb{R}^l}$ satisfying
\begin{equation} \label{KKT}
\begin{split}
&F(h^*) + \big( D q(h^*)\big)^\top \lambda^* + A^\top \mu^* = 0, \\
Ah^* & = b, \quad q(h^*) \leq 0, \quad \lambda^{*} \geq 0, \quad \lambda^{*\top}q(h^*) = 0,
\end{split}
\end{equation}
then we have $h^* \in \operatorname{SOL}(\mathcal{H},F)$. Such a point $(h^*,\lambda^*, \mu^*)$ is referred to as a KKT point of the $\operatorname{VI}(\mathcal{H},F)$. Conversely, for $h^* \in \operatorname{SOL}(\mathcal{H},F)$, let $\mathcal{I}_{h^*} = \setdef{i \in \until{s}}{q^i(h^*) = 0}$. If the vectors $\{\nabla q^i(h^*)\}_{i \in \mathcal{I}_{h^*}}$ and the row vectors $\{A_j\}_{j \in [l]}$ are linearly independent, or in other words, the LICQ holds at $h^*$, then there exists a $(\lambda^*, \mu^*)$ satisfying \eqref{KKT}.
\end{lemma}
The above result is well known in the context of convex optimization. The extension to the VI setting can be deduced from $\text{\cite[Proposition 3.46]{DS:18}}$, \cite[Theorem 12.1]{UF-WK-GS:10}, and noting that if $h^* \in \operatorname{SOL}(\mathcal{H}, F)$, then it is also a minimizer of the function $y \mapsto y^\top F(h^*)$ subject to $y \in \mathcal{H}$.
\subsubsection{Projected dynamical systems}\label{sec:proj}
For a given map ${F: \mathbb{R}^n \times [0,\infty) \rightarrow \mathbb{R}^n}$ and a closed set $\mathcal{H} \subseteq \mathbb{R}^n$, the associated projected dynamical system is given by
\begin{equation*}
\dot{h}(t) = \Pi_{\mathcal{T}_{\mathcal{H}}(h(t))}\big(F(h(t),t)\big).
\end{equation*}
Here $\mathcal{T}_{\mathcal{H}}(h)$ is the tangent cone of $\mathcal{H}$ at $h$ ${\text{(see Section~\ref{sec:notation})}}$.
We say that a map $\bar{h} : [0,\infty) \rightarrow \mathcal{H}$ with $\bar{h}(0) \in \mathcal{H}$ is a solution of the above system when $\bar{h}(\cdot)$ is absolutely continuous and $\dot{\bar{h}}(t) = \Pi_{\mathcal{T}_{\mathcal{H}}(\bar{h}(t))}\Big(F\big(\bar{h}(t), t\big)\Big)$ for almost all $t \in [0, \infty)$. Note that $\bar{h}(t) \in \mathcal{H}$ for all $t$. We use the terms solution and trajectory interchangeably throughout the paper.
\subsubsection{Conditional Value-at-Risk}
The \textit{Conditional Value-at-Risk} (CVaR) \textit{at level} $\alpha \in (0,1]$ of a real-valued random variable $Z$, defined on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$, is
\begin{equation*}
\operatorname{CVaR}_\alpha[Z] := \inf_{\eta \in \mathbb{R}}\big\{\eta + \alpha^{-1} \mathbb{E}[Z - \eta]^+\big\},
\end{equation*}
where the expectation is with respect to $\mathbb{P}$. The constant $\alpha$ characterizes risk-averseness: smaller values of $\alpha$ model higher aversion to risk, while $\operatorname{CVaR}_1[Z] = \mathbb{E}[Z]$.
Given $N$ i.i.d. samples $\{\widehat{Z}_j\}_{j \in [N]}$ of the random variable $Z$, one can approximate $\operatorname{CVaR}_\alpha[Z]$ using the following empirical estimate:
\begin{equation} \label{estimator CVaR}
\widehat{\operatorname{CVaR}}_\alpha^N[Z] = \inf_{\eta \in \mathbb{R}}\big\{ \eta + (N \alpha)^{-1} \textstyle\sum_{j=1}^N[\widehat{Z}_j - \eta]^+ \big\}.
\end{equation}
This estimator is biased, but consistent \cite[Page 300]{AS-DD-AR:14}. That is, $\widehat{\operatorname{CVaR}}_\alpha^N[Z] \to \operatorname{CVaR}_\alpha[Z]$ almost surely as $N \to \infty$.
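The objective in~\eqref{estimator CVaR} is piecewise linear and convex in $\eta$, with breakpoints at the samples, so the infimum is attained at one of the samples. A minimal Python sketch of the estimator (the function name `cvar_hat` is ours) exploits this fact:

```python
import numpy as np

def cvar_hat(z, alpha):
    """Empirical CVaR estimate at level alpha:
        min over eta of  eta + (1/(N*alpha)) * sum_j [z_j - eta]^+.
    The objective is piecewise linear and convex with breakpoints at
    the samples, so it suffices to evaluate it at each sample point."""
    z = np.asarray(z, dtype=float)
    objective = lambda eta: eta + np.maximum(z - eta, 0.0).sum() / (len(z) * alpha)
    return min(objective(eta) for eta in z)
```

For example, with samples $\{1,2,3,4\}$ and $\alpha = 0.5$ the estimate is $3.5$, the average of the worst half of the samples, while for $\alpha = 1$ it reduces to the sample mean $2.5$.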
\section{Problem statement and motivating examples} \label{section problem statement}
Consider a set of functions $C_i: \mathbb{R}^n \times \mathbb{R}^m \rightarrow \mathbb{R}$, ${i \in \until{n}}$, $(h,\xi) \mapsto C_i(h,\xi)$, where $\xi$ represents a random variable with distribution $\mathbb{P}$. For a fixed $h$, $C_i(h,\xi)$ is therefore a real-valued random variable. Define the map $\map{F_i}{\mathbb{R}^n}{\mathbb{R}}$ as the CVaR of $C_i$ at level $\alpha \in (0,1]$:
\begin{equation} \label{definition elements F}
F_i(h) := \operatorname{CVaR}_\alpha\big[C_i(h,\xi)\big], \quad \text{ for all } i \in \until{n}.
\end{equation}
For notational convenience, let $C: \mathbb{R}^n \times \mathbb{R}^m \rightarrow \mathbb{R}^n$ and $\map{F}{\mathbb{R}^n}{\mathbb{R}^n}$ be the element-wise concatenation of the maps $\{C_i\}_{i \in \until{n}}$ and $\{F_i\}_{i \in \until{n}}$, respectively. Let $\mathcal{H} \subseteq \mathbb{R}^n$ be a nonempty closed set of the form
\begin{equation} \label{explicit contrained set 2}
\mathcal{H} := \setdef{h \in \mathbb{R}^n }{Ah = b, \enskip q^i(h) \leq 0, \enskip \forall i \in [s]},
\end{equation}
where $A \in \mathbb{R}^{l \times n}$, $b \in \mathbb{R}^l$ and $l \in \mathbb{N}$, and the functions ${q^i: \mathbb{R}^n \rightarrow \mathbb{R}}$, ${i \in \until{s}}$, $s \in \mathbb{N}$ are convex and continuously differentiable. The objective of this paper is to provide stochastic approximation (SA) algorithms to solve the variational inequality problem $\operatorname{VI}(\mathcal{H},F)$. Our strategy will be to use an empirical estimator, derived from samples of $C(h,\xi)$, of the map $F$ at each iteration of the algorithm. Below we discuss two motivating examples for our setup.
\subsection{CVaR-based routing games} \label{section routing game}
Consider a directed graph $\mathcal{G} = (\mathcal{V},\mathcal{E})$, where ${\mathcal{V} = \until{n}}$ is the set of vertices, and ${\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}}$ is the set of edges. To such a graph we associate a set $\mathcal{W} \subseteq \mathcal{V} \times \mathcal{V}$ of origin-destination (OD) pairs. An OD-pair $w$ is given by an ordered pair $(v^w_o,v^w_d)$, where $v^w_o, v^w_d \in \mathcal{V}$ are called the origin and the destination of $w$, respectively. The set of all paths in $\mathcal{G}$ from the origin to the destination of $w$ is denoted $\mathcal{P}_w$. The set of all paths is given by $\mathcal{P} = \cup_{w \in \mathcal{W}} \mathcal{P}_w$. Each of the participants, or agents, of the routing game is associated to an OD-pair, and can choose which path to take to travel from its origin to its destination. The choices of all agents give rise to a flow vector $h \in \mathbb{R}^{|\mathcal{P}|}$. We consider a non-atomic routing game and so $h$ is a continuous variable.
For each (OD)-pair $w$, a value $D_w \geq 0$ defines the demand associated to it. The feasible set $\mathcal{H} \subset \mathbb{R}^{|\mathcal{P}|}$ is then given by ${\mathcal{H} = \setdef{h}{\sum_{p \in \mathcal{P}_w}h_p = D_w, \enskip \forall w \in \mathcal{W}, \enskip h_p \geq 0 \enskip \forall p \in \mathcal{P} }}$. To each of the paths $p \in \mathcal{P}$, we associate a cost function $C_p: \mathbb{R}^{|\mathcal{P}|} \times \mathbb{R}^{m} \rightarrow \mathbb{R}, (h,\xi) \mapsto C_p(h,\xi)$, which depends on the flow $h$, as well as on the uncertainty $\xi \in \mathbb{R}^m$. Each agent chooses $p \in \mathcal{P}_w$ that minimizes $\operatorname{CVaR}_\alpha\big[C_p(h,\xi)\big]$. These elements define the CVaR-based routing game~\cite{AC:19-cdc} to which we assign the following notion of equilibrium: the flow $h^* \in \mathcal{H}$ is said to be a CVaR-based Wardrop equilibrium (CWE) of the CVaR-based routing game if, for all $w \in \mathcal{W}$ and all $p,p' \in \mathcal{P}_w$ such that $h^*_p > 0$, we have
\begin{equation*}
\operatorname{CVaR}_\alpha\big[C_p(h^*,\xi)\big] \leq \operatorname{CVaR}_\alpha\big[C_{p'}(h^*,\xi)\big].
\end{equation*}
The intuition is that at equilibrium, no agent can find a path with a smaller conditional value-at-risk than the one it has selected. Under continuity of $C_p$, the set of CWE is equal to the set of solutions of $\operatorname{VI}(\mathcal{H},F)$, where $F: \mathbb{R}^{|\mathcal{P}|} \rightarrow \mathbb{R}^{|\mathcal{P}|}$ takes the form~\eqref{definition elements F}.
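As a concrete illustration (a toy instance of our own construction, not taken from~\cite{AC:19-cdc}), consider two parallel paths serving one OD-pair with unit demand, path costs $C_1(h,\xi) = h_1 + \xi$ with $\xi \sim \mathrm{Uniform}[0,1]$, and $C_2(h) = 2h_2$. For this distribution $\operatorname{CVaR}_\alpha[\xi] = 1 - \alpha/2$ in closed form, and equalizing the CVaR costs of the two used paths yields the CWE:

```python
# Toy CVaR-based routing game: two parallel paths, unit demand,
# C_1(h, xi) = h_1 + xi with xi ~ Uniform[0, 1], and C_2(h) = 2 * h_2.
alpha = 0.5

# For xi ~ Uniform[0, 1], the mean of the worst alpha-fraction of
# outcomes (the interval [1 - alpha, 1]) is 1 - alpha / 2.
cvar_xi = 1.0 - alpha / 2.0

# At a CWE with both paths carrying flow, the CVaR costs equalize:
#   h1 + cvar_xi = 2 * (1 - h1)   =>   h1 = (2 - cvar_xi) / 3.
h1 = (2.0 - cvar_xi) / 3.0
h2 = 1.0 - h1

cost1 = h1 + cvar_xi   # CVaR_alpha[C_1] at equilibrium
cost2 = 2.0 * h2       # C_2 is deterministic
assert abs(cost1 - cost2) < 1e-12 and 0.0 < h1 < 1.0
```

Here $h_1 \approx 0.417$: the risk-adjusted cost of the uncertain path pushes flow toward the deterministic one.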
\subsection{CVaR-based Nash equilibrium}
A more general example of our setup arises in finding the Nash equilibrium of a non-cooperative game~\cite[Section 1.4.2]{FF-JSP:03}. Let there be $N$ players with individual cost functions $\map{\theta_i}{\mathbb{R}^{nN}}{\mathbb{R}}$, $x \mapsto \theta_i(x)$ and strategy sets $\mathcal{X}_i \subseteq \mathbb{R}^n$. Here $x \in \mathbb{R}^{nN}$ denotes the vector containing the strategies of all players, where $x_i \in \mathbb{R}^n$ is the strategy of player $i$. We assume without loss of generality that the strategy sets of all players have the same dimension $n$. An alternative notation is $\theta_i(x) = \theta_i(x_i, x_{-i})$, where $x_{-i}$ is the vector containing the strategies of all players except $i$. Each player $i$ aims to minimize its cost $\theta_i$ by choosing its own strategy optimally. That is, for any fixed $\tilde{x}_{-i}$, player $i$ solves
\begin{align*}
\text{minimize } \quad &\theta_i(x_i,\tilde{x}_{-i}), \\
\text{subject to } \quad &x_i \in \mathcal{X}_i.
\end{align*}
A Nash equilibrium of such a game is a strategy vector $x^*$ such that no player can reduce its cost by unilaterally changing its strategy. Under the assumption that the sets $\mathcal{X}_i$ are convex and closed, and the functions $x_i \mapsto \theta_i(x_i, \tilde{x}_{-i})$ are convex and continuously differentiable for any $\tilde{x}_{-i}$, a joint strategy vector $x^*$ is a Nash equilibrium if and only if it is a solution to $\operatorname{VI}(\mathcal{X}, F)$, where $F(x) := (\nabla_{x_i}\theta_i(x))_{i = 1}^N$ is the concatenation of the gradients of the functions $\theta_i$, and ${\mathcal{X} = \prod_{i = 1}^N\mathcal{X}_i}$. Consider the functions $\theta_i$ of the form
\begin{equation*}
\theta_i(x) := \operatorname{CVaR}_\alpha [f_i(x_i, x_{-i})g(\xi) + \bar{f}_i(x_i, x_{-i})],
\end{equation*}
where functions $f_i$, $g$ and $\bar{f}_i$ are real-valued, $\xi$ models the uncertainty, and $f_i(x_i,x_{-i}) \geq 0$ for all $x$. Then, $\operatorname{VI}(\mathcal{X}, F)$ is a CVaR-based variational inequality as discussed in our paper. Specifically, in this case, since CVaR is positive-homogeneous and shift-invariant~\cite[Chapter 6]{AS-DD-AR:14}, we have
\begin{align*}
\theta_i(x) = \operatorname{CVaR}_\alpha[g(\xi)] f_i(x_i,x_{-i}) + \bar{f}_i(x_i, x_{-i}).
\end{align*}
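For completeness, the positive homogeneity and shift invariance used here can be verified directly from the definition of CVaR: for $a > 0$ and $b \in \mathbb{R}$ (the case $a = 0$ is immediate), the substitution $\eta = a\eta' + b$ gives
\begin{align*}
\operatorname{CVaR}_\alpha[a Z + b] &= \inf_{\eta \in \mathbb{R}}\big\{\eta + \alpha^{-1} \mathbb{E}[a Z + b - \eta]^+\big\} \\
&= \inf_{\eta' \in \mathbb{R}}\big\{a \eta' + b + a \alpha^{-1} \mathbb{E}[Z - \eta']^+\big\} = a \operatorname{CVaR}_\alpha[Z] + b.
\end{align*}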
As a consequence, we get
\begin{align*}
\nabla_{x_i} \theta_i(x) = \operatorname{CVaR}_\alpha[g(\xi)] \nabla_{x_i} f_i(x_i,x_{-i}) + \nabla_{x_i} \bar{f}_i(x_i, x_{-i}).
\end{align*}
Under the assumption that $\nabla_{x_i} f_i$ is nonnegative for all ${x \in \mathcal{X}}$, we get
\begin{align*}
\nabla_{x_i} \theta_i(x) = \operatorname{CVaR}_\alpha\big[g(\xi) \nabla_{x_i} f_i(x_i,x_{-i}) + \nabla_{x_i} \bar{f}_i(x_i, x_{-i})\big],
\end{align*}
where the $\operatorname{CVaR}$ is understood component-wise. Thus, $F$ can be written as a concatenation of the $\operatorname{CVaR}$ of various functions, and finding the Nash equilibrium of this game is equivalent to solving $\operatorname{VI}(\mathcal{X}, F)$, which fits into our presented framework.
\section{Stochastic approximation algorithms for solving $\operatorname{VI}(\mathcal{H}, F)$} \label{section results} \label{section stochastic approximation algorithms}
In this section, we introduce the SA algorithms along with their convergence analysis. All introduced schemes approximate $F$ with the estimator given in~\eqref{estimator CVaR}. Given $N$ independent and identically distributed samples $\big\{(\widehat{C_i(h,\xi)} )_j\big\}_{j=1}^N$ of the random variable $C_i(h,\xi)$, let
\begin{align*}\label{eq:Fhat}
\widehat{F}^N_i(h) := \textstyle \inf_{t \in \mathbb{R}}\Big\{ t + (N \alpha)^{-1} \sum_{j=1}^N\big[(\widehat{C_i(h,\xi)})_j - t\big]^+ \Big\}
\end{align*}
stand for the estimator of $F_i(h)$. Analogously, the estimator of $F(h)$ formed using the element-wise concatenation of $\widehat{F}^N_i(h)$, $i \in \until{n}$, is denoted by $\widehat{F}^N(h)$. We assume that the $N$ samples of each cost function are a result of the same set of $N$ events, that is, the distribution of $\widehat{F}^N(h)$ depends on $\mathbb{P}^N$. We next present our first algorithm.
\subsection{Projected algorithm}
For a given sequence of step-sizes $\{\gamma^k\}_{k=0}^\infty$, with $\gamma^k > 0$ for all $k$, a sequence $\{N_k\}_{k=0}^\infty \subset \mathbb{N}$, and an initial vector $h^0 \in \mathcal{H}$, the first algorithm under consideration, which we will refer to as the \textit{projected algorithm}, is given by
\begin{equation}\label{eq:projected-original-form}
h^{k+1} = \Pi_{\mathcal{H}}\big( h^k - \gamma^k\widehat{F}^{N_k}(h^{k})\big),
\end{equation}
where $\Pi_{\mathcal{H}}$ is the projection operator (see Section~\ref{sec:notation}) and $h^k$ is the $k$-th iterate of $h$ produced by the algorithm. The above algorithm is inspired by the SA schemes for solving a stochastic VI problem; see~\cite{UVS:13} for details on other SA schemes. The key difference from the setup in~\cite{UVS:13} is that there the map $F$ is the expected value of a random variable for which an unbiased estimator $\widehat{F}$ is available. In our case the estimator is biased, which places requirements on the number of samples needed for convergence of the algorithms. We can write the projected algorithm~\eqref{eq:projected-original-form} equivalently as
\begin{equation} \label{algorithm with error term}
h^{k+1} = \Pi_{\mathcal{H}}\Big( h^k - \gamma^k\big(F(h^{k}) + \widehat{\beta}^{N_k} \big) \Big),
\end{equation}
where $\widehat{\beta}^{N_k} := \widehat{F}^{N_k}(h^k) - F(h^k)$ is the error introduced by the estimation. For this and the upcoming algorithms, common assumptions on the sequence $\{\gamma^k\}$ are
\begin{equation} \label{stepsize}
\begin{split}
\textstyle \sum_{k = 0}^\infty \gamma^k = \infty, &\qquad \textstyle \sum_{k = 0}^\infty (\gamma^{k})^2 < \infty.
\end{split}
\end{equation}
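For illustration, the following Python sketch runs iteration \eqref{eq:projected-original-form} on a toy problem of our own making: $F(h) = h - h^*$ (strongly monotone), $\mathcal{H} = [0,1]^2$ (so the projection is a componentwise clip), and a biased noisy estimator whose bias and variance shrink with the sample size $N$. All numerical values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
h_star = np.array([0.3, 0.7])        # known toy solution of the VI (our choice)

def F(h):                            # strongly monotone toy map
    return h - h_star

def F_hat(h, N):                     # biased estimator; bias shrinks with N
    bias = 0.01 / np.sqrt(N)
    noise = rng.normal(scale=1.0 / np.sqrt(N), size=2)
    return F(h) + bias + noise

def project_box(h):                  # projection onto H = [0, 1]^2
    return np.clip(h, 0.0, 1.0)

h = np.array([1.0, 0.0])
for k in range(5000):
    gamma = 1.0 / (k + 1)            # satisfies sum = inf, sum of squares < inf
    h = project_box(h - gamma * F_hat(h, N=100))

assert np.linalg.norm(h - h_star) < 0.05
```

With $\gamma^k = 1/(k+1)$ the step-size conditions \eqref{stepsize} hold, and the iterates settle in a small neighborhood of $h^*$ whose size is governed by the residual bias.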
Our first result gives sufficient conditions for convergence of \eqref{algorithm with error term} to any neighborhood of the solution $h^*$ of $\operatorname{VI}(\mathcal{H},F)$.
\begin{proposition} \longthmtitle{Convergence of the projected algorithm~\eqref{eq:projected-original-form}} \label{projected theorem}
Let $F$ as defined in \eqref{definition elements F} be a strictly monotone, continuous function, and let $\mathcal{H}$ be a compact convex set of the form~\eqref{explicit contrained set 2}. For the algorithm \eqref{algorithm with error term}, assume that the step-sizes $\{\gamma^k\}$ satisfy \eqref{stepsize} and the sequence $\{N_k\}$ is such that $\{\widehat{\beta}^{N_k}\}$ is bounded with probability one. Then, for any $\epsilon > 0$ there exists $N_{\epsilon} \in \mathbb{N}$ such that $N_k \geq N_\epsilon$ for all $k$ implies, with probability one,
\begin{equation*}
\lim_{k \rightarrow \infty}\norm{h^k - h^*} \le \epsilon.
\end{equation*}
\end{proposition}
\begin{proof}
To ease the exposition of this proof, we split the error as $\widehat{\beta}^{N_k} = e^{N_k} + \widehat{\varepsilon}^{N_k}$, where $e^{N_k} = \mathbb{E}[\widehat{\beta}^{N_k}]$. Note that we then have $\mathbb{E}[\widehat{\varepsilon}^{N_k}] = 0$, and by the boundedness assumption, there exists a constant $B_e > 0$ such that $\norm{e^{N_k}} \leq B_e$ for all $k$. The first step of the proof is to show that the sequence $\{h^k\}$ converges to a trajectory of the following continuous-time projected dynamical system:
%
\begin{equation} \label{eq:ODE projected}
\dot{\bar{h}}(t) = \Pi_{\mathcal{T}_\mathcal{H}(\bar{h}(t))} \Big( -F\big(\bar{h}(t)\big) - e(t) \Big), \quad \bar{h}(0) \in \mathcal{H}.
\end{equation}
%
Here $e(\cdot)$ is a uniformly bounded measurable map satisfying $\norm{e(t)} \leq B_e$ for all $t$ (see Section~\ref{sec:proj} for further details on how solutions to projected dynamical systems are defined). For the sake of rigor, we note that the existence of a trajectory of~\eqref{eq:ODE projected} starting from any point in $\mathcal{H}$ is guaranteed by~\cite[Lemma A.1]{MS-AC-NM:22-auto}. To make precise the convergence of the sequence generated by~\eqref{algorithm with error term} to a trajectory of~\eqref{eq:ODE projected}, we say that $\{h^k\}$ converges to a trajectory $\bar{h}( \cdot )$ of \eqref{eq:ODE projected} if
%
\begin{equation} \label{eq:convergence to trajectory}
\lim_{i \rightarrow \infty} \sup_{j \geq i}\Bnorm{h^j - \bar{h}\Bigl(\sum_{k=i}^{j-1} \gamma^k\Bigr)} = 0.
\end{equation}
%
That is, the discrete-time trajectory formed by the linear interpolation of the iterates $\{h^k\}$ approaches the continuous-time trajectory $t \mapsto \bar{h}(t)$. The proof of the existence of a map $\bar{h}(\cdot)$ satisfying~\eqref{eq:convergence to trajectory} is similar to that of~\cite[Theorem 5.3.1]{HJK-DSC:78}, with the only change being the presence of an error term $e(t)$ in the dynamics~\eqref{eq:ODE projected}, which is absent in the cited reference. The inclusion of the error term is facilitated by the reasoning presented in the proof of~\cite[Theorem 5.2.2]{HJK-DSC:78}. We avoid repeating these arguments here in the interest of space.
Convergence of the sequence $\{h^k\}$ can now be analyzed by studying the asymptotic stability of \eqref{eq:ODE projected}. To this end, we consider the candidate Lyapunov function
%
\begin{equation*}
V\big(\bar{h}\big) = \frac{1}{2}\norm{\bar{h} - h^*}^2,
\end{equation*}
%
where $h^*$ is the unique solution of $\operatorname{VI}(\mathcal{H},F)$. We first look at the case $e(\cdot) \equiv 0$. For notational convenience, define the right-hand side of~\eqref{eq:ODE projected} in such a case by the map ${\map{X_{e\equiv0}}{\mathbb{R}^n}{\mathbb{R}^n}}$. The Lie derivative of $V$ along $X_{e\equiv 0}$ is then given by
%
\begin{align} \label{eq:Lie-derivative}
\nabla V(\bar{h})^\top X_{e \equiv 0}(\bar{h}) = (\bar{h} - h^*)^\top \Pi_{\mathcal{T}_\mathcal{H}(\bar{h})} \big( -F(\bar{h}) \big).
\end{align}
%
We want to show that the right-hand side of the above equation is negative for all $\bar{h} \neq h^*$. We first note that by Moreau’s decomposition theorem~\cite[Theorem 3.2.5]{JBHU-CL:93}, for any $v \in \mathbb{R}^n$ and $\bar{h} \in \mathcal{H}$, we have ${\Pi_{\mathcal{T}_\mathcal{H}(\bar{h})}(v) = v - \Pi_{\mathcal{N}_\mathcal{H}(\bar{h})}(v)}$, where $\mathcal{N}_\mathcal{H}(\bar{h})$ is the normal cone to $\mathcal{H}$ at $\bar{h}$. Using the above relation in~\eqref{eq:Lie-derivative} gives
%
\begin{align}
\nabla V(\bar{h})^\top \! X_{e \equiv 0}(\bar{h}) \! = &-(\bar{h} - h^*)^\top \! F(\bar{h}) \notag \\
&+ (h^* - \bar{h})^\top \! \Pi_{\mathcal{N}_\mathcal{H}(\bar{h})} \big( \! - \! F(\bar{h}) \big) \notag \\
\leq &-(\bar{h} - h^*)^\top \! F(\bar{h}), \label{eq:lyap-ineq-1}
\end{align}
%
where the inequality is due to the definition of the normal cone (see Section \ref{sec:notation}) and the fact that $h^* \in \mathcal{H}$.
Due to strict monotonicity of $F$, we have ${(\bar{h} - h^*)^\top F(\bar{h}) > (\bar{h} - h^*)^\top F(h^*)}$ whenever ${\bar{h} \neq h^*}$. Furthermore, since $h^* \in \operatorname{SOL}(\mathcal{H},F)$ we know that ${(\bar{h} - h^*)^\top F(h^*) \geq 0}$ for all $\bar{h} \in \mathcal{H}$. Combining these two facts implies that the function $W(\bar{h}) := (\bar{h} - h^*)^\top F(\bar{h})$ satisfies $W(\bar{h}) > 0$ whenever $\bar{h} \neq h^*$.
Using this in the inequality~\eqref{eq:lyap-ineq-1} yields
%
\begin{equation} \label{eq:Lyapunov-less-than}
\nabla V(\bar{h})^\top X_{e \equiv 0}(\bar{h}) \le - W(\bar{h}) < 0
\end{equation}
%
whenever $\bar{h} \neq h^*$. Now let $\overline{\mathcal{H}}_{\epsilon} := \setdef{h \in \mathcal{H}}{\norm{h - h^*} \geq \epsilon}$. Since $\mathcal{H}$ is compact, $\overline{\mathcal{H}}_{\epsilon}$ is compact. Since $W$ is continuous, there exists a $\delta > 0$ such that $W(\bar{h}) \ge \delta$ for all $\bar{h} \in \overline{\mathcal{H}}_{\epsilon}$. Therefore we get, from~\eqref{eq:Lyapunov-less-than},
%
\begin{align}\label{eq:Lie-delta}
\nabla V(\bar{h})^\top X_{e \equiv 0}(\bar{h}) \le - \delta, \quad \text{for all } \bar{h} \in \overline{\mathcal{H}}_{\epsilon}.
\end{align}
%
Next, we drop the assumption that $e(\cdot) \equiv 0$ and use the map $\map{X}{\mathbb{R}^n \times [0,\infty)}{\mathbb{R}^n}$ to denote the right-hand side of~\eqref{eq:ODE projected}. Consider any trajectory $t \mapsto \bar{h}(t)$ of~\eqref{eq:ODE projected}. Since the map is absolutely continuous and $V$ is differentiable, we have for almost all $t \ge 0$ and for $\bar{h}(t) \in \overline{\mathcal{H}}_\epsilon$,
%
\begin{align*}
\frac{dV}{dt}(t) = \nabla V(\bar{h}(t))^\top X(\bar{h}(t),t) \le - \delta - (\bar{h}(t) - h^*)^\top e(t),
\end{align*}
%
where for obtaining the above inequality we have first used Moreau's decomposition as before to get rid of the projection operator in $X$ and then employed~\eqref{eq:Lie-delta}. Next we bound the error term in the above inequality. Since $\mathcal{H}$ is compact and $ h^* \in \mathcal{H}$, there exists $B_h > 0$ such that $\norm{\bar{h} - h^*} \le B_h$ for all $\bar{h} \in \mathcal{H}$. In addition $\norm{e(t)} \le B_e$ for all $t$, where $B_e$ is the bound satisfying $\norm{e^{N_k}} \le B_e$.
Since the empirical estimate of the CVaR is consistent, we know that $B_e$ can be made arbitrarily small by selecting $N_k$ to be appropriately large for all $k$. That is, there exists $N_\epsilon \in \mathbb{N}$ such that when $N_k > N_\epsilon$ we have $\norm{e^{N_k}} < \frac{\delta}{B_h}$. Consequently, if $N_k > N_\epsilon$ for all $k$, then $\norm{e(t)} < \frac{\delta}{B_h}$ for all $t$. By selecting such a sample size at each iteration and thus bounding the error term, we obtain
%
\begin{equation*}
\begin{split}
\frac{dV}{dt}(t)&\leq -\delta - (\bar{h}(t) - h^*)^\top e(t) \\
&\leq -\delta + \norm{\bar{h}(t) - h^*} \norm{e(t)} < -\delta + B_h \frac{\delta}{B_h} = 0,
\end{split}
\end{equation*}
%
which holds for almost all $t$ whenever $\bar{h}(t) \in \overline{\mathcal{H}}_\epsilon$. That is, the trajectory converges to the set $\setdef{h \in \mathcal{H}}{\norm{h - h^*} \le \epsilon}$ as $t \to \infty$. This concludes the proof.
\end{proof}
In the above result, the restriction $N_k \geq N_\epsilon$ does not need to hold for all $k$. The result also holds if there exists a $K \in \mathbb{N}$ such that $N_k \geq N_\epsilon$ for all $k \geq K$. Regarding boundedness of $\{\widehat{\beta}^{N_k}\}$, it is ensured if, for example, each $C_i$ is bounded over the set $\mathcal{H} \times \Xi$, where $\Xi$ is the support of $\xi$.
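As a side check, Moreau's decomposition invoked in the proof of Proposition~\ref{projected theorem} can be verified numerically in a simple instance (our own construction): for the box $[0,1]^2$ at the boundary point $\bar{h} = (0, 0.5)$, the tangent cone is $\{d : d_1 \geq 0\}$ and the normal cone is $\{x : x_1 \leq 0,\ x_2 = 0\}$.

```python
import numpy as np

# H = [0,1]^2, boundary point h = (0, 0.5): the active constraint is h_1 >= 0.
# Tangent cone  T = { d : d_1 >= 0 };  normal cone  N = { x : x_1 <= 0, x_2 = 0 }.
def proj_tangent(v):
    return np.array([max(v[0], 0.0), v[1]])

def proj_normal(v):
    return np.array([min(v[0], 0.0), 0.0])

rng = np.random.default_rng(2)
for _ in range(100):
    v = rng.normal(size=2)
    # Moreau identity: Pi_T(v) = v - Pi_N(v)
    assert np.allclose(proj_tangent(v), v - proj_normal(v))
```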
Despite the convergence property established in Proposition \ref{projected theorem}, the algorithm in \eqref{algorithm with error term} suffers from some disadvantages. Most notably, the algorithm requires computing projections onto the set $\mathcal{H}$ at each iteration, which can be computationally expensive. To address these issues we propose two algorithms that achieve similar convergence to any neighborhood of the solution of the $\operatorname{VI}(\mathcal{H},F)$. The first requires projection onto inequality constraints only and the second does not involve any projection on the primal iterates and instead ensures feasibility using dual variables. As in Proposition~\ref{projected theorem}, we will impose continuity and monotonicity assumptions on $F$ in the upcoming results. We provide the following general result on the continuity and monotonicity properties of $F$.
\begin{lemma}\longthmtitle{Sufficient conditions for monotonicity and continuity of $F$}
The following hold:
\begin{itemize}
\item If for any $\epsilon > 0$ there exists a $\delta > 0$ such that ${\norm{h - h'} \le \delta}$ implies $\norm{ C_i(h,\xi) - C_i(h',\xi)} \le \epsilon$ for all $i \in \until{n}$ and all $\xi$, then $F$ is continuous.
\item Assume that there exist functions ${f_i:\mathbb{R}^n \rightarrow \mathbb{R}}$ and ${g_i: \mathbb{R}^m \rightarrow \mathbb{R}}$ such that ${C_i(h,\xi) \equiv f_i(h) + g_i(\xi)}$, for all $i \in [n]$. Let ${f(h):= \big(f_1(h), \dots, f_n(h)\big)}$. Then, $F$ is monotone (resp. strictly monotone) if $f$ is monotone (resp. strictly monotone).
\end{itemize}
\end{lemma}
\begin{proof}
Continuity follows by arguments similar to the proof of \cite[Lemma IV.8]{AC:19-cdc}. For the second part, note that CVaR satisfies ${\operatorname{CVaR}_\alpha\big[C_i(h,\xi)\big] = f_i(h) + \operatorname{CVaR}_\alpha\big[g_i(\xi)\big]}$, for all $h$ and $i \in [n]$ \cite[Page 261]{AS-DD-AR:14}. The proof then follows from the fact that $F(h) - F(h^*) = f(h) - f(h^*)$.
\end{proof}
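The shift-invariance used in the second part carries over exactly to the empirical estimator: on a fixed batch of samples, adding the deterministic term $f_i(h)$ shifts the estimate by exactly $f_i(h)$, so differences of $F$ reduce to differences of $f$. A quick numerical check, with $f$, $g$, and the sample size chosen by us purely for illustration:

```python
import numpy as np

def cvar_estimator(samples, alpha):
    # Rockafellar--Uryasev program minimized over the sample points (exact).
    c = np.asarray(samples, dtype=float)
    return min(t + np.maximum(c - t, 0.0).sum() / (len(c) * alpha) for t in c)

rng = np.random.default_rng(6)
xi = rng.normal(size=2000)           # fixed batch of uncertainty samples
g = xi ** 2                          # g_i(xi), our illustrative choice
f = lambda h: 2.0 * h                # monotone f_i, our illustrative choice
alpha = 0.1

for h, hp in [(0.3, 1.1), (-2.0, 0.5)]:
    d = cvar_estimator(f(h) + g, alpha) - cvar_estimator(f(hp) + g, alpha)
    assert abs(d - (f(h) - f(hp))) < 1e-9   # CVaR shift-invariance, exactly
```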
In the above result, the continuity condition, which may be difficult to check in practice, holds if $\xi$ has a compact support and, for any fixed $\xi$, the functions $C_i$ are continuous with respect to $h$. We now introduce our next algorithm.
\subsection{Subspace-constrained algorithm}
In this section, we take a closer look at the form of $\mathcal{H}$ given in~\eqref{explicit contrained set 2} and design an algorithm that handles inequality and equality constraints independently. To this end, we write $\HH_{\mathrm{aff}} := \setdef{h \in \mathbb{R}^n}{Ah=b}$, and ${\HH_{\mathrm{ineq}} := \setdef{h \in \mathbb{R}^n}{q^i(h) \le 0, \enskip \forall i \in [s]}}$ for the sets of points satisfying the equality and inequality constraints, respectively. We then have $\mathcal{H} = \HH_{\mathrm{aff}} \cap \HH_{\mathrm{ineq}}$. It turns out that, using a matrix operation, we can ensure that the iterates of our algorithm always remain in $\HH_{\mathrm{aff}}$. The method works as follows. Let $\{a_1,\cdots,a_l\}$ be the row vectors of $A$, and let $\{u_1,\cdots,u_n\}$ be an orthonormal basis for $\mathbb{R}^n$ such that the first $M \in \mathbb{N}$ vectors $\{u_1,\cdots,u_{M}\}$ form a basis for the span of the vectors $\{a_1,\cdots,a_l\}$. Then, for the subspace ${\SS = \setdef{g \in \mathbb{R}^n}{Ag = 0}}$, we have
\begin{equation*}
\Pi_{\SS}(v) = \textstyle \Big(I - \sum_{i = 1}^{M} u_i u_i^\top \Big)v, \quad \text{ for any } v \in \mathbb{R}^n.
\end{equation*}
This well known fact follows from \cite[Theorem 7.10]{DCL-SRL-JJM:16} and noting that $\Pi_{\SS}(v) = v - \Pi_{\SS^{\perp}}(v)$, where $\SS^{\perp}$ is the set of vectors orthogonal to the subspace $\SS$. Thus, the projection onto $\SS$ is achieved by pre-multiplying with the matrix
\begin{equation} \label{eq:subspace-projection-matrix}
L := I - \sum_{i = 1}^{M} u_i u_i^\top.
\end{equation}
Consequently, for any vector $z$ of the form $z = L v$, ${v \in \mathbb{R}^n}$, we have $A z = 0$. To construct $L$, one can obtain the orthonormal vectors $\{u_1,\cdots,u_M\}$ spanning the rows of $A$, and their extension to a basis of $\mathbb{R}^n$, by the Gram-Schmidt orthogonalization process \cite[Section 6.4]{DCL-SRL-JJM:16}. Alternatively, if $A$ has full row rank, one can use ${L := I - A^\top(AA^\top)^{-1}A}$; see, e.g.,~\cite{CDM:00}. We use this projection operator to define our next method, called the \textit{subspace-constrained algorithm}:
\begin{equation}\label{eq:subspace-algo}
h^{k+1} \! = \! h^k \! - \! \gamma^k L \Big(F(h^k) \! + \! c\big(h^k \! - \! \Pi_{\mathcal{H}_{\mathrm{ineq}}}(h^k)\big) \!+ \! \widehat{\beta}^{N_k}\Big),
\end{equation}
where the initial iterate $h^0 \in \HH_{\mathrm{aff}}$. In the above, $c > 0$ is a parameter to be specified later in the convergence result, the error sequence $\{\widehat{\beta}^{N_k}\}$ is as defined in~\eqref{algorithm with error term}, and $L \in \mathbb{R}^{n \times n}$ is as defined in \eqref{eq:subspace-projection-matrix}.
Due to the presence of $L$ in the above algorithm, the direction in which the iterate moves in each iteration is projected onto the subspace $\SS$. Hence, $h^k \in \HH_{\mathrm{aff}}$ for all $k$. We formally establish this in the result below. Furthermore, convergence to a neighborhood of the set $\HH_{\mathrm{ineq}}$ is achieved through the term $h^k - \Pi_{\HH_{\mathrm{ineq}}}(h^k)$. That is, the higher the value of the design parameter $c$, the closer the limit of $\{h^k\}$ is to $\HH_{\mathrm{ineq}}$. Together, these mechanisms keep the iterates close to $\mathcal{H}$ and ultimately drive them to a neighborhood of $h^*$.
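As an illustration, the Python sketch below builds $L$ in the two ways described above (an orthonormal basis via QR, which performs a Gram-Schmidt-type orthogonalization, and the closed form for full-row-rank $A$) and then runs iteration \eqref{eq:subspace-algo} on a toy problem of our own making: $F(h) = h - h^*$, one equality constraint $\mathbf{1}^\top h = 1$, and the inequalities $h \geq 0$. All numerical values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
A = np.ones((1, n))                        # equality constraint: 1^T h = 1

# L via an orthonormal basis of range(A^T) (QR ~ Gram-Schmidt) ...
Q, _ = np.linalg.qr(A.T)
L = np.eye(n) - Q @ Q.T
# ... agrees with the closed form for full-row-rank A.
assert np.allclose(L, np.eye(n) - A.T @ np.linalg.inv(A @ A.T) @ A)

h_star = np.array([0.5, 0.3, 0.2])         # toy solution of the VI (our choice)
proj_ineq = lambda h: np.maximum(h, 0.0)   # projection onto {h : h >= 0}
c = 10.0                                   # penalty parameter

h = np.array([1.0, 0.0, 0.0])              # h^0 satisfies 1^T h = 1
for k in range(5000):
    gamma = 1.0 / (k + 10)                 # small initial steps temper c
    beta = rng.normal(scale=0.05, size=n)  # estimation-error stand-in
    F_val = h - h_star                     # strictly monotone toy map
    h = h - gamma * L @ (F_val + c * (h - proj_ineq(h)) + beta)

assert abs(h.sum() - 1.0) < 1e-9           # iterates never leave H_aff
assert np.linalg.norm(h - h_star) < 0.05
```

Note that the equality constraint is preserved to machine precision at every iterate, while feasibility with respect to $h \geq 0$ is only enforced approximately through the penalty term, in line with the discussion above.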
\begin{proposition} \label{pr:subspace-cons} \longthmtitle{Convergence of the subspace-constrained algorithm~\eqref{eq:subspace-algo}}
Let $F$ as defined in \eqref{definition elements F} be a strictly monotone, continuous function, and let $\mathcal{H}$ be a compact convex set of the form~\eqref{explicit contrained set 2}. For the algorithm \eqref{eq:subspace-algo}, assume that the step-sizes $\{\gamma^k\}$ satisfy \eqref{stepsize}, that there exists $B_{\mathrm{traj}} > 0$ satisfying $\norm{h^k} \le B_{\mathrm{traj}}$ for all $k$, and that the sequence $\{N_k\}$ is such that $\{\widehat{\beta}^{N_k}\}$ is bounded with probability one. Then, for any $\epsilon > 0$, there exist $c_{\epsilon}(B_{\mathrm{traj}}) > 0$ and $N_{\epsilon}(B_{\mathrm{traj}}) \in \mathbb{N}$ such that $c \geq c_{\epsilon}(B_{\mathrm{traj}})$ and ${N_k \geq N_{\epsilon}(B_{\mathrm{traj}})}$ for all $k$ imply that the iterates of~\eqref{eq:subspace-algo} satisfy, with probability one,
%
\begin{equation*}
\lim_{k \rightarrow \infty} \norm{h^k - h^*} \leq \epsilon.
\end{equation*}
%
\end{proposition}
\begin{proof}
First we show that $h^k \in \HH_{\mathrm{aff}}$ for all $k$. To see this, recall that $AL v= 0$ for any $v \in \mathbb{R}^n$. Using this in~\eqref{eq:subspace-algo} implies $Ah^{k+1} = Ah^k$ for all $k$. Consequently, for all $k$, we have $Ah^k = A h^0 = b$ and therefore $h^k \in \HH_{\mathrm{aff}}$.
Analogous to the proof of Proposition \ref{projected theorem}, it can be established that $\{h^k \}$ converges with probability one, in the sense of \eqref{eq:convergence to trajectory}, to a trajectory of the following dynamics
%
\begin{equation} \label{eq:subspace-ode}
\dot{\bar{h}}(t) \! = \! - \! L \Bigl( F\big(\bar{h}(t)\big) \! + \! c\Big(\bar{h}(t) \! - \! \Pi_{\HH_{\mathrm{ineq}}}\big(\bar{h}(t) \big) \Big) \! - \! e(t) \Bigr) ,
\end{equation}
%
with the initial state $\bar{h}(0) \in \HH_{\mathrm{aff}}$. Here, $e(\cdot)$ is a uniformly bounded measurable map satisfying $\norm{e(t)} \leq B_e$ for all $t$. We will use the above fact to establish convergence of the sequence $\{h^k\}$ by analyzing the asymptotic stability of~\eqref{eq:subspace-ode}. Note that $A\dot{\bar{h}}(t) = 0$ for all $t$, and therefore a trajectory $\bar{h}(\cdot)$ of~\eqref{eq:subspace-ode} satisfies $\bar{h}(t) \in \HH_{\mathrm{aff}}$ for all $t \geq 0$ as $\bar{h}(0) \in \HH_{\mathrm{aff}}$. Now consider the Lyapunov candidate
%
\begin{align*}
V(\bar{h} ) = \frac{1}{2} \norm{\bar{h} - h^*}^2,
\end{align*}
%
where $h^*$ is the unique solution of $\operatorname{VI}(\mathcal{H},F)$; uniqueness follows from strict monotonicity of $F$. As was the case for the previous result, we will first analyze the evolution of $V$ along~\eqref{eq:subspace-ode} when $e \equiv 0$. Therefore, we define the notation ${\map{X_{e\equiv0}}{\mathbb{R}^n}{\mathbb{R}^n}}$ to represent the right-hand side of~\eqref{eq:subspace-ode} with $e \equiv 0$. The Lie derivative of $V$ along $X_{e\equiv0}$ is
%
\begin{align}
\nabla V (\bar{h})^\top X_{e \equiv 0}(\bar{h}) = -(\bar{h} &- h^*)^\top L \Bigl( F(\bar{h}) \notag
\\
& + c\big(\bar{h} - \Pi_{\HHineq} (\bh) \big) \Bigr). \label{eq:subspace-lie}
\end{align}
%
Since $\bar{h},h^* \in \HH_{\mathrm{aff}}$, we have $A(\bar{h} - h^*) = 0$ and so ${(\bar{h} - h^*) \in \SS}$. Consequently, for any vector $v \in \mathbb{R}^n$, we have
%
\begin{align*}
(\bar{h} - h^*)^\top v &= (\bar{h} - h^*)^\top \big( \Pi_{\SS}(v) + \Pi_{\SS^\perp}(v) \big) \notag
\\
& = (\bar{h} - h^*)^\top \Pi_{\SS}(v) = (\bar{h} - h^*)^\top L v.
\end{align*}
%
Using the above equality in~\eqref{eq:subspace-lie} gives
%
\begin{align}
\nabla V (\bar{h})^\top X_{e \equiv 0}(\bar{h}) = -(\bar{h} &- h^*)^\top \Bigl( F(\bar{h}) \notag
\\
& + c\big(\bar{h} - \Pi_{\HHineq} (\bh) \big) \Bigr). \label{eq:Lie-d-subspace}
\end{align}
%
We first upper bound the second term on the right-hand side of the above equality. We have
%
\begin{align}
& - c (\bar{h} - h^*)^\top \big(\bar{h} - \Pi_{\HHineq} (\bh) \big) \notag
\\
& = - c \Big( \bar{h} - \Pi_{\HHineq} (\bh) + \Pi_{\HHineq} (\bh) - h^* \Big)^\top \big(\bar{h} - \Pi_{\HHineq} (\bh) \big) \notag
\\
& = - c \norm{ \bar{h} - \Pi_{\HHineq} (\bh) }^2 \notag
\\
& \qquad + c \big( h^* - \Pi_{\HHineq} (\bh) \big)^\top \big(\bar{h} - \Pi_{\HHineq} (\bh) \big) \le 0, \label{eq:subspace-lie-second}
\end{align}
%
where for the inequality we have used the fact that ${\big(\bar{h} - \Pi_{\HHineq} (\bh)\big)^\top\big(h^* - \Pi_{\HHineq} (\bh) \big) \leq 0}$ for any $\bar{h} \in \mathbb{R}^n$ (see \cite[Thm. 3.1.1]{JBHU-CL:01}). Note that the inequality~\eqref{eq:subspace-lie-second} is strict whenever $\bar{h} \neq \Pi_{\HHineq} (\bh)$. We now turn our attention towards the first term in~\eqref{eq:Lie-d-subspace}. Due to strict monotonicity of $F$ and the fact that $h^* \in \operatorname{SOL}(\mathcal{H},F)$, we obtain
%
\begin{align}
- (\bar{h} - h^*)^\top F(\bar{h}) < - (\bar{h} - h^*)^\top F(h^*) \le 0 \label{eq:subspace-strict}
\end{align}
%
whenever $\bar{h} \in \mathcal{H}$ and $\bar{h} \not = h^*$. The above inequality along with~\eqref{eq:subspace-lie-second} shows $\nabla V (\bar{h})^\top X_{e \equiv 0}(\bar{h}) \le 0$ for any $\bar{h} \in \mathcal{H}$. However, recalling the approach in the proof of Proposition~\ref{projected theorem}, what we require in order to establish convergence is the existence of $\delta > 0$ such that
%
\begin{align}
\nabla V (\bar{h})^\top X_{e \equiv 0}(\bar{h}) \le -\delta \text{ for all } \bar{h} \in \overline{\mathcal{H}}_{\epsilon}, \label{eq:subspace-delta}
\end{align}
%
where $\overline{\mathcal{H}}_{\epsilon} := \setdef{h \in \HH_{\mathrm{aff}}}{\norm{h - h^*} \geq \epsilon}$. We obtain this bound below. Note that the strict inequality~\eqref{eq:subspace-strict} along with continuity of $F$ imply that for any $h \in \mathcal{H} \setminus \{h^*\}$, there exists $\varepsilon_h > 0$ such that
%
\begin{align}
-(\hat{h} - h^*)^\top F(\hat{h}) < 0 \text{ for all } \hat{h} \in \mathcal{B}_{\varepsilon_h}(h), \label{eq:subspace-nbd}
\end{align}
%
where we recall that $\mathcal{B}_{\varepsilon_h}(h)$ is the open $\varepsilon_h$-ball centered at $h$. Now let $\mathcal{H}_{\epsilon} := \mathcal{H} \setminus \mathcal{B}_{\epsilon}(h^*)$. Since $\mathcal{H}$ is compact, so is $\mathcal{H}_{\epsilon}$. Using this property and~\eqref{eq:subspace-nbd}, we deduce that there exists $\varepsilon_0 > 0$ such that for every $h \in \mathcal{H}_\epsilon$ we have
%
\begin{align}
- (\hat{h} - h^*)^\top F(\hat{h}) < 0 \text{ for all } \hat{h} \in \mathcal{B}_{\varepsilon_0}(h). \label{eq:subspace-nbd-two}
\end{align}
%
Next define
%
\begin{equation*}
\Delta_{\varepsilon_0} := \setdef{\bar{h} \in \HH_{\mathrm{aff}} \setminus \mathcal{B}_{\epsilon}(h^*) }{ \bar{h} \not \in \mathcal{B}_{\varepsilon_0}(\mathcal{H}_\epsilon) \text{ and } \norm{\bar{h}} \leq B_{\mathrm{traj}}}.
\end{equation*}
%
Here, $\mathcal{B}_{\varepsilon_0}(\mathcal{H}_\epsilon)$ is the open $\varepsilon_0$-ball of the set $\mathcal{H}_\epsilon$ and ${B_{\mathrm{traj}} > 0}$ is used as an upper bound on any trajectory $\bar{h}(\cdot)$ of~\eqref{eq:subspace-ode}. Note that $\Delta_{\varepsilon_0}$ is compact. Therefore, there exists $B_F > 0$ satisfying
%
\begin{equation}
-(\bar{h} - h^*)^\top F(\bar{h}) \leq B_F \text{ for all } \bar{h} \in \Delta_{\varepsilon_0}. \label{eq:subspace-F-bound}
\end{equation}
%
Furthermore, by definition, if $\bar{h} \in \Delta_{\varepsilon_0}$, then $\bar{h} \not \in \mathcal{H}$ and $\bar{h} \in \HH_{\mathrm{aff}}$. Thus, $\bar{h} \in \Delta_{\varepsilon_0}$ implies $\bar{h} \not \in \HH_{\mathrm{ineq}}$. That is, for such a point, the inequality~\eqref{eq:subspace-lie-second} holds strictly. This along with compactness of $\Delta_{\varepsilon_0}$ implies that there exists $B_{\Pi} > 0$ such that
%
\begin{equation}\label{eq:subspace-negative-pi}
-(\bar{h} - h^*)^\top \big(\bar{h} - \Pi_{\HHineq} (\bh) \big) \leq -B_{\Pi} \text{ for all } \bar{h} \in \Delta_{\varepsilon_0}.
\end{equation}
%
Using~\eqref{eq:subspace-F-bound} and~\eqref{eq:subspace-negative-pi} in~\eqref{eq:subspace-lie-second} and setting $c > \frac{B_F}{B_{\Pi}}$ yields
%
\begin{align}
\nabla V (\bar{h})^\top X_{e \equiv 0}(\bar{h}) < 0 \text{ for all } \bar{h} \in \Delta_{\varepsilon_0}. \label{eq:subspace-first-region}
\end{align}
%
Now consider $\bar{h}$ satisfying $\bar{h} \notin \Delta_{\varepsilon_0} \cup \mathcal{B}_{\epsilon}(h^*)$ and ${\norm{\bar{h}} \leq B_{\mathrm{traj}}}$. Note that such a point belongs to ${\HH_{\mathrm{aff}} \cap \mathcal{B}_{\varepsilon_0}(\mathcal{H}_\epsilon) \cap \mathcal{B}_{B_{\mathrm{traj}}}(0)}$. Thus, by~\eqref{eq:subspace-nbd-two}, we have $-(\bar{h} - h^*)^\top F(\bar{h}) < 0$ for such a point. This fact combined with~\eqref{eq:subspace-first-region} leads us to the conclusion that
%
\begin{align*}
\nabla V(\bar{h})^\top X_{e \equiv 0}(\bar{h}) < 0 \text{ for all } \bar{h} \in \overline{\mathcal{H}}_\epsilon.
\end{align*}
%
Since the left-hand side of the above equation is a continuous function and $\overline{\mathcal{H}}_\epsilon$ is compact, we deduce that~\eqref{eq:subspace-delta} holds. The rest of the proof is then analogous to the corresponding section of the proof in Proposition \ref{projected theorem}.
\end{proof}
\begin{remark}\longthmtitle{Practical considerations of~\eqref{eq:subspace-algo}}\label{equivalent to bounded problem}
In Proposition~\ref{pr:subspace-cons}, for small values of $\epsilon$, one would require a large value of $c$ to ensure convergence. This may result in large oscillations of $h^k$ when $\gamma^k$ remains large. Such behavior can be prevented by either starting with small values of $\gamma^k$ or increasing $c$ along iterations, until it reaches a predetermined size. The result is then still valid but the convergence can only be guaranteed once $c$ reaches the required size.
We note that the required assumption of boundedness of $\{h^k\}$ can be ensured by constraining the iterates in $\{h^k\}$ to lie in a hyper-rectangle containing $\mathcal{H}$ (cf. \cite[Page 40]{HJK-DSC:78}). However, on the boundary of the hyper-rectangle, one would have to make use of steps of the form \eqref{algorithm with error term} to ensure that the iterates remain in the feasible set. \relax\ifmmode\else\unskip\hfill\fi\oprocendsymbol
\end{remark}
\subsection{Multiplier-driven algorithm}
Algorithms \eqref{algorithm with error term} and \eqref{eq:subspace-algo} involve projection onto $\mathcal{H}$ or $\HH_{\mathrm{ineq}}$ at each iteration, which can be computationally burdensome. Our next algorithm overcomes this limitation. We assume $\mathcal{H}$ to be of the form \eqref{explicit contrained set 2} and introduce a multiplier variable ${\lambda \in {\mathbb{R}}_{\ge 0}^s}$ that enforces satisfaction of the inequality constraint as the algorithm progresses. In order to simplify the coming equations we introduce the notation ${H(h,\lambda) := F(h) + Dq(h)^\top \lambda}$, where $Dq(h)$ is the Jacobian of $q$ at $h$. The \emph{multiplier-driven algorithm} is now given as
\begin{equation} \label{Lagrangian algorithm}
\begin{split}
h^{k + 1} &= h^k - \gamma^k L \big(H(h^k,\lambda^k) + \widehat{\beta}^{N_k}\big),
\\
\lambda^{k+1} &= \big[\lambda^{k} + \gamma^k q(h^k) \big]^+.
\end{split}
\end{equation}
Here $L$ is as defined in \eqref{eq:subspace-projection-matrix}. Also recall that $\widehat{\beta}^{N_k}$ is the error due to empirical estimation of $F$. The next result establishes the convergence properties of~\eqref{Lagrangian algorithm} to a KKT point of the VI (see Section~\ref{section preliminaries} for definitions) and so to a solution of the VI.
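A minimal numerical sketch of iteration \eqref{Lagrangian algorithm}, on a toy problem of our own making with a strictly feasible solution (so that $\lambda^* = 0$): $F(h) = h - h^*$, one equality constraint $\mathbf{1}^\top h = 1$, and affine inequalities $q(h) = -h \leq 0$. All numerical values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3
h_star = np.array([0.5, 0.3, 0.2])         # interior toy solution; lambda* = 0
A = np.ones((1, n))                        # equality constraint 1^T h = 1
L = np.eye(n) - A.T @ np.linalg.inv(A @ A.T) @ A
q = lambda h: -h                           # affine inequalities q(h) = -h <= 0
Dq = -np.eye(n)                            # Jacobian of q

h = np.array([1.0, 0.0, 0.0])              # h^0 in H_aff
lam = np.full(n, 0.5)                      # initial multipliers
for k in range(5000):
    gamma = 1.0 / (k + 1)
    beta = rng.normal(scale=0.05, size=n)  # estimation-error stand-in
    H_val = (h - h_star) + Dq.T @ lam + beta
    h_next = h - gamma * L @ H_val
    lam = np.maximum(lam + gamma * q(h), 0.0)   # dual step uses the old h^k
    h = h_next

assert abs(h.sum() - 1.0) < 1e-9           # equality constraint preserved
assert np.linalg.norm(h - h_star) < 0.05
assert lam.max() < 0.01                    # multipliers driven toward 0
```

Since the solution is strictly feasible, the multipliers decay to zero and the primal iterates converge to a neighborhood of $h^*$ without any projection of $h^k$.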
\begin{proposition} \longthmtitle{Convergence of the multiplier-driven algorithm~\eqref{Lagrangian algorithm}} \label{lagrange theorem}
Let $F$, as defined in \eqref{definition elements F}, be a strictly monotone, continuous function, and let $\mathcal{H}$ be a compact convex set of the form \eqref{explicit contrained set 2}, where functions $q^i$, $i \in \until{s}$, are affine. Assume that the LICQ holds for ${h^* \in \operatorname{SOL}(\mathcal{H}, F)}$, and let $(h^*,\lambda^*, \mu^*
)$ be an associated KKT point.
For algorithm \eqref{Lagrangian algorithm}, assume that the step-sizes $\{\gamma^k\}$ satisfy \eqref{stepsize} and let $\{N_k\}$ be such that $\{\widehat{\beta}^{N_k}\}$, $\{h^k\}$, and $\{\lambda^k\}$
are bounded with probability~one.
Then, for any $\epsilon > 0$, there exists an $N_\epsilon \in \mathbb{N}$ such that if $N_k \geq N_\epsilon$ for all $k$, then, with probability one,
%
\begin{equation*}
\lim_{k \rightarrow \infty} \norm{h^k - h^*} \leq \epsilon.
\end{equation*}
%
\end{proposition}
\begin{proof}
Analogous to the proof of Proposition~\ref{projected theorem}, the first step establishes convergence with probability one of the sequence $\big\{(h^k, \lambda^k)\big\}$, in the sense of \eqref{eq:convergence to trajectory}, to a trajectory $\big(\bar{h}(\cdot),\bar{\lambda}(\cdot) \big)$ of the following dynamics
%
\begin{subequations}\label{eq:ODE-h-lm}
\begin{align}
\dot{\bar{h}}(t) &= -L \Bigl( H\big(\bar{h}(t), \bar{\lambda}(t)
\big) + e(t) \Bigr), \label{ODE lambda part0}
\\
\dot{\bar{\lambda}}(t) &= \Big[q\big(\bar{h}(t)\big)\Big]_{\bar{\lambda}(t)}^+, \label{ODE lambda part}
\end{align}
\end{subequations}
%
with initial condition $\bar{h}(0) \in \HH_{\mathrm{aff}}$ and $\bar{\lambda}(0) \in {\mathbb{R}}_{\ge 0}^s$. Note that due to the presence of the projection operator $[\cdot]_{\bar{\lambda}(t)}^+$ in~\eqref{ODE lambda part}, $\bar{\lambda}$ is contained in the nonnegative orthant along any trajectory of the system.
The map $e(\cdot)$ is uniformly bounded and so, as before, we have $\norm{e(t)} \le B_e$ for all $t$. The proof of convergence of the iterates to a continuous trajectory is similar to that of \cite[Theorem 5.2.2]{HJK-DSC:78} and is not repeated here for brevity. Note that, as was the case for Proposition \ref{pr:subspace-cons}, multiplication with the matrix $L$ ensures that $h^k, \bar{h}(t) \in \HH_{\mathrm{aff}}$ for all $k$ and $t \geq 0$. Next, we analyze the convergence of~\eqref{eq:ODE-h-lm}. We will occasionally use $\bar{x}$ as shorthand for $(\bar{h},\bar{\lambda})$. Define the candidate Lyapunov function
%
\begin{equation} \label{lyapunov function}
V(\bar{h},\bar{\lambda}) := \frac{1}{2}\big(\norm{\bar{h} - h^*}^2 + \norm{\bar{\lambda} - \lambda^*}^2\big),
\end{equation}
%
where $h^*$ is the unique solution of $\operatorname{VI}(\mathcal{H}, F)$ and $\mu^* \in \mathbb{R}^l$ is such that $(h^*,\lambda^*, \mu^*)$ is an associated KKT point. We analyze the evolution of \eqref{lyapunov function} for the case $e \equiv 0$. Denoting the right-hand side of~\eqref{eq:ODE-h-lm} for this case by $X_{e \equiv 0}$, the Lie derivative of $V$ along~\eqref{eq:ODE-h-lm} is
%
\begin{equation} \label{lagrange proof vdot}
\begin{split}
\nabla V(\bar{x})^\top X_{e \equiv 0}&(\bar{x}) = -(\bar{h} - h^*)^\top H(\bar{h},\bar{\lambda})
\\
\! \!&+ (\bar{\lambda} \! - \! \lambda^*)^\top \! \big(q(\bar{h}) \! + \! [q(\bar{h})]_{\bar{\lambda}}^+ \! - \! q(\bar{h})\big) \!.
\end{split}
\end{equation}
%
Here we have dropped the matrix $L$ from the term ${(\bar{h} - h^*)^\top LH(\bar{h},\bar{\lambda})}$, which is justified by the same argument used for deriving \eqref{eq:Lie-d-subspace}. Note that for any $i$, ${([q(\bar{h})]_{\bar{\lambda}}^+)_i = (q(\bar{h}))_i}$ if $\bar{\lambda}_i > 0$. Also, if $\bar{\lambda}_i = 0$, then ${\bar{\lambda}_i - \lambda^*_i \le 0}$. Thus, we find $(\bar{\lambda} \! - \! \lambda^*)^\top([q(\bar{h})]_{\bar{\lambda}}^+ \! - \! q(\bar{h})) \leq 0$. Since $q$ is affine, we have $Dq(\bar{h}) = Dq(h^*)$ for all $\bar{h} \in \mathbb{R}^n$. Combined with strict monotonicity this gives, for $\bar{h} \not = h^*$,
%
\begin{equation} \label{eq:derivation-in-multiplier}
\begin{split}
0&<(\bar{h} - h^*)^\top \big(H(\bar{h},\bar{\lambda})
- H(h^*,\bar{\lambda}) \big)
\\
&=(\bar{h} - h^*)^\top \big(H(\bar{h},\bar{\lambda})
- H(h^*,\lambda^*)
\\
&+ Dq(h^*)^\top\lambda^* - Dq(h^*)^\top\bar{\lambda}\big).
\end{split}
\end{equation}
%
From \eqref{KKT} we have $-H(h^*,\lambda^*) = A^\top \mu^*$. Since $\bar{h},h^* \in \HH_{\mathrm{aff}}$, it follows that $-(\bar{h} - h^*)^\top H(h^*,\lambda^*) = 0$. Then, using the assumption that $q$ is affine, \eqref{eq:derivation-in-multiplier} gives us $- (\bar{h} - h^*)^\top H(\bar{h},\bar{\lambda}) < (\lambda^* - \bar{\lambda})^\top \big(q(\bar{h}) - q(h^*)\big)$. Combining these derivations, and writing $W(\bar{h},\bar{\lambda})$ for the right-hand side of \eqref{lagrange proof vdot}, we get that for ${\bar{h} \not = h^*}$, ${W(\bar{h},\bar{\lambda}) < (\bar{\lambda} -\lambda^{*} )^\top q(h^*)}$. From \eqref{KKT} we have $\lambda^{*\top }q(h^*) = 0$, and feasibility of $h^*$ together with $\bar{\lambda} \geq 0$ gives $\bar{\lambda}^\top q(h^*) \leq 0$. This implies $\nabla V(\bar{h},\bar{\lambda})^\top X_{e \equiv 0}(\bar{h},\bar{\lambda}) < 0$ whenever $\bar{h} \not = h^*$. The rest of the proof is analogous to the corresponding section of the proof of Proposition \ref{projected theorem}.
\end{proof}
\begin{remark} \longthmtitle{Implementation aspect of Proposition~\ref{lagrange theorem}} \label{remark generalization}
In Proposition \ref{lagrange theorem} we require boundedness of the sequences $\{h^k\}$ and $\{\lambda^k\}$. When upper bounds on $\norm{\lambda^*}$ are known beforehand, projection onto hyper-rectangles can ensure boundedness of $\{\lambda^k\}$, while the result remains valid (cf.~\cite[Page 40,~Theorem 5.2.2]{HJK-DSC:78}). For boundedness of $\{h^k\}$, see Remark \ref{equivalent to bounded problem}.
\relax\ifmmode\else\unskip\hfill\fi\oprocendsymbol
\end{remark}
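Such a hyper-rectangle projection is computationally trivial: it is a coordinate-wise clip. A minimal sketch (the function name and scalar bounds are our own illustrative choices, not the paper's implementation):

```python
# Coordinate-wise projection of x onto the hyper-rectangle [lo, hi]^m,
# as can be used to keep the multiplier iterates {lambda^k} bounded.
def project_box(x, lo, hi):
    return [min(max(v, lo), hi) for v in x]

print(project_box([-1.0, 0.5, 9.0], 0.0, 2.0))  # [0.0, 0.5, 2.0]
```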
\section{Simulations} \label{section simulations}
\begin{figure}
\centering
\includegraphics[width = 0.9\columnwidth]{converge_plot_hold.eps}
\vspace*{-2ex}
\caption{\footnotesize Plot illustrating the convergence of the algorithms for the routing example explained in Section~\ref{section simulations}. The initial condition for all algorithms is set as $h_0$ defined as $(h_0)_i = 30$ for $i \in \{1,\dots,10\}$, $(h_0)_i = 60$ for $i \in \{11, \dots, 20\}$ and $(h_0)_i = 20$ for $i \in \{21,\dots, 30\}$.}
\label{figure logscale}
\end{figure}
Here we demonstrate an application of the presented stochastic approximation algorithms for finding the solutions of a CVaR-based variational inequality. The example is an instance of a CVaR-based routing game (see Section~\ref{section routing game}) based on the Sioux Falls network~\cite{120820}. The network consists of $24$ nodes and $76$ edges. To each of the edges, we associate an affine cost function given by $C_e(f_e,u_e) = t_e(1 + u_{e} \frac{100}{c_e}f_e)$, where $f_e$ is the flow over edge $e$, and $t_e$ and $c_e$ are the free-flow travel time and capacity of edge $e$, respectively, as obtained from~\cite{120820}. The uncertainty $u_e$ has the uniform distribution over the interval $[0,0.5]$ for all edges connected to the vertices $10$, $16$, or $17$. For the rest of the edges, $u_e$ is set to zero. This defines the cost functions for all edges, and consequently defines the costs of all paths through the network as well. We consider three origin-destination (OD) pairs $\mathcal{W} = \{(1,19),(13,8),(12,18)\}$, and for each of these pairs we select the ten paths with the smallest free-flow travel time. We denote the set of these $30$ paths by $\mathcal{P}$. The demands for each OD-pair are given by $d_{1,19} = 300$, $d_{13,8} = 600$, $d_{12,18} = 200$. We aim to find a CVaR-based Wardrop equilibrium, which is equivalent to finding a solution of the VI problem defined by a map ${F(h) := A h + b + \operatorname{CVaR}_\alpha[\xi]}$, and a feasible set $\mathcal{H} = \setdef{{h \in \mathbb{R}^{30}}}{{h \geq 0}, {\sum_{i = 1}^{10}h_i = 300}, {\sum_{i = 11}^{20}h_i = 600}, \newline {\sum_{i = 21}^{30}h_i = 200}}$. Here, $h,b \in \mathbb{R}^{30}$, ${A \in \mathbb{R}^{30 \times 30}}$, and ${\alpha = 0.05}$. The exact values of $A$ and $b$ and the distribution of $\xi$ are constructed using the cost functions and the network structure, see~\cite[Section 6]{AC:22} for details.
In Figure \ref{figure logscale}, we see the evolution of the error for each of the different algorithms. The stepsize sequences for the projected, subspace-constrained, and multiplier-driven algorithms are $\gamma^k = \frac{100}{100 + k}$, $\gamma^k = \frac{200}{200 + k}$, and ${\gamma^k = \min(\frac{100}{100 + k},\frac{1}{2})}$, respectively. In addition, for the subspace-constrained algorithm we initially let $c$ depend on $k$ to prevent unstable behaviour, using $c = \min(\frac{1}{\gamma^k},200)$. For the multiplier-driven algorithm, for similar reasons, we used a modified stepsize sequence for updating the multipliers $\lambda$, given by $\gamma^k_{\lambda} = 2 \gamma^k$ for $k < 1000$ and $\gamma^k_{\lambda} = 0.5 \gamma^k$ otherwise. The figure shows that all algorithms converge to a neighbourhood of the solution of the variational inequality, albeit requiring a different number of iterations. Specifically, the number of iterations taken by the projected algorithm to converge is two orders of magnitude less than that of the subspace-constrained and multiplier-driven algorithms. The quality of convergence is summarized in Table \ref{tab:mean}, where we can see both the accuracy of the achieved convergence as well as the effect of increasing the sample sizes. It is important to note that the errors shown in Fig.~\ref{figure logscale} and Table~\ref{tab:mean} are in terms of the deviation in the value of the map $\norm{F(h^k) - F(h^*)}$, rather than deviation in the solution $\norm{h^k - h^*}$. This is because the solution $h^*$ is not unique for the formulated VI. However, for any two solutions ${h^*,\tilde{h}^* \in \operatorname{SOL}(F,\mathcal{H})}$ we do have $F(\tilde{h}^*) = F(h^*)$.
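For intuition, the projected scheme can be illustrated on a one-dimensional toy problem (entirely our construction, not the Sioux Falls instance): solve the VI on $\mathcal{H} = [0,1]$ for $F(h) = h - \operatorname{CVaR}_{0.05}[\xi]$ with $\xi \sim U(0, 0.5)$, whose unique solution is the exact CVaR $h^* = 0.4875$. Each iteration uses an empirical CVaR estimate from a finite batch of samples, as in the paper's algorithms.

```python
import random

def empirical_cvar(samples, alpha):
    # Empirical CVaR_alpha: mean of the largest ceil(alpha*N) samples
    # (here alpha*N is an integer, so int() suffices).
    m = max(1, int(alpha * len(samples)))
    return sum(sorted(samples)[-m:]) / m

def projected_sa(iters=500, batch=100, alpha=0.05, seed=0):
    rng = random.Random(seed)
    h = 0.0
    for k in range(iters):
        xi = [rng.uniform(0.0, 0.5) for _ in range(batch)]
        # Sampled map F(h) = h - CVaR_alpha[xi], with CVaR replaced by its estimate.
        step = h - empirical_cvar(xi, alpha)
        gamma = 1.0 / (k + 1)
        h = min(max(h - gamma * step, 0.0), 1.0)  # projection onto H = [0, 1]
    return h

print(round(projected_sa(), 3))  # close to the exact CVaR 0.4875
```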
\begin{table}[]
\centering
\begin{tabular}{l|l|l|l}
\text{Samples per iteration}&25 &50 &100 \\ \hline
\text{Projected}&0.3875 &0.2062 &0.1157 \\ \hline
\text{Subspace constrained}&0.3780 &0.2015 &0.1332 \\ \hline
\text{Multiplier-driven}&0.3889 &0.1987 & 0.1064
\end{tabular}
\caption{The average error of the iterates for each algorithm after an upper bound on the error of $0.6$, $0.3$, and $0.15$ has been achieved, using $25$, $50$, and $100$ samples per iteration, respectively. The numbers of iterations used are $1000$, $50000$ and $100000$ for the projected, subspace-constrained and multiplier-driven algorithms, respectively.}
\label{tab:mean}
\end{table}
\section{Conclusions} \label{section conclusion}
We have considered variational inequalities defined by the CVaR of cost functions and provided stochastic approximation algorithms for solving them. We have analyzed the asymptotic convergence of these algorithms when, at each iteration, only a finite number of samples are used to estimate the CVaR. Future work will focus on analyzing the finite-time properties of the introduced algorithms and the sample complexity for a desired error tolerance of the last iterate. Finally, we wish to explore accelerated methods to solve the problem.
---
arXiv:2211.07227 (math.OC), 2022-11-16 — Stochastic approximation approaches for CVaR-based variational inequalities
https://arxiv.org/abs/2211.07227
Abstract: This paper considers variational inequalities (VI) defined by the conditional value-at-risk (CVaR) of uncertain functions and provides three stochastic approximation schemes to solve them. All methods use an empirical estimate of the CVaR at each iteration. The first algorithm constrains the iterates to the feasible set using projection. To overcome the computational burden of projections, the second one handles inequality and equality constraints defining the feasible set differently. Particularly, projection onto the affine subspace defined by the equality constraints is achieved by matrix multiplication and inequalities are handled by using penalty functions. Finally, the third algorithm discards projections altogether by introducing multiplier updates. We establish asymptotic convergence of all our schemes to any arbitrary neighborhood of the solution of the VI. A simulation example concerning a network routing game illustrates our theoretical findings.
---
---
arXiv:1404.3469 (math.CO) — Polynomial reconstruction of the matching polynomial
https://arxiv.org/abs/1404.3469
Abstract: The matching polynomial of a graph is the generating function of the numbers of its matchings with respect to their cardinality. A graph polynomial is polynomial reconstructible, if its value for a graph can be determined from its values for the vertex-deleted subgraphs of the same graph. This note discusses the polynomial reconstructibility of the matching polynomial. We collect previous results, prove it for graphs with pendant edges and disprove it for some graphs.
---

\section{Counterexamples for arbitrary graphs}
\label{sec:counterexamples}
While it is true that the matching polynomials of graphs with an odd number of vertices or with a pendant edge are polynomial reconstructible, this does not hold for arbitrary graphs.
There are graphs which have the same polynomial deck and yet different matching polynomials. Although there are already counterexamples with as few as six vertices, it seems that nothing has been published before in connection with the question addressed here.
\begin{rema}
The matching polynomials $M(G, x, y)$ of arbitrary graphs are not polynomial reconstructible. The minimal counterexample for simple graphs (with respect to the number of vertices and edges) is given by the graphs $G_1$, $G_2$ shown in Figure~\ref{fig:counterexample}.
\end{rema}
\begin{figure}
\begin{center}
\includegraphics{figure_counterexample}
\end{center}
\caption{Graphs $G_1$ and $G_2$, which are the minimal simple graphs creating a counterexample for the polynomial reconstructibility of the matching polynomial $M(G, x, y)$. The decks of $G_1$ and $G_2$ consist of six graphs, each isomorphic to $G_1'$ and $G_2'$, respectively. Unlike the matching polynomials of $G_1$ and $G_2$, the matching polynomials of $G_1'$ and $G_2'$ coincide.}
\label{fig:counterexample}
\end{figure}
The graphs creating the minimal counterexample have six vertices; there are three more pairs of such simple graphs, which are given in Figure \ref{fig:more_counterexamples}.
\begin{figure}
\begin{center}
\includegraphics{figure_more_counterexamples}
\end{center}
\caption{The other counterexamples on six vertices for the polynomial reconstructibility of the matching polynomial $M(G, x, y)$.}
\label{fig:more_counterexamples}
\end{figure}
The question arises whether or not there are such counterexamples consisting of graphs with an arbitrary even number of vertices. In the remainder, we give an affirmative answer to this question.
Let $P_n$ and $C_n$ be a \emph{path} and a \emph{cycle} on $n$ vertices, respectively. For a graph $G = (V, E)$, $\overline{G}$ denotes the \emph{complement} of $G$, i.e. $\overline{G} = (V, \binom{V}{2} \setminus E)$. For two graphs $G$ and $H$, the \emph{disjoint union} of $G$ and $H$ is denoted by $G \mathrel{\mathaccent\cdot\cup} H$.
\begin{theo}
Let $k \geq 3$. The matching polynomials $M(G, x, y)$ of the graphs $C_{2k}$, $C_k \mathrel{\mathaccent\cdot\cup} C_k$ and $\overline{C_{2k}}$, $\overline{C_k \mathrel{\mathaccent\cdot\cup} C_k}$ are not polynomial reconstructible.
\end{theo}
\begin{proof}
Due to \textcite[Corollary 2.3]{godsil1981c}, the matching polynomial of a graph is determined by the matching polynomial of the complement of this graph. Furthermore, $\overline{G_{-v}} = \overline{G}_{-v}$. Therefore, it is enough to consider the graphs $C_{2k}$ and $C_k \mathrel{\mathaccent\cdot\cup} C_k$.
The matching polynomials of these two graphs do not coincide because $C_{2k}$ has exactly two perfect matchings, while $C_k \mathrel{\mathaccent\cdot\cup} C_k$ has zero ($k$ odd) or four ($k$ even) perfect matchings.
On the other hand, their polynomial decks are identical. First, observe that $(C_{2k})_{-v}$ is isomorphic to $P_{2k-1}$ and $(C_k \mathrel{\mathaccent\cdot\cup} C_k)_{-v}$ is isomorphic to $C_k \mathrel{\mathaccent\cdot\cup} P_{k-1}$ for every vertex $v$ of the respective graph.
It remains to show that the matching polynomials of these graphs in the deck coincide, i.e. $M(P_{2k-1}, x, y) = M(C_k \mathrel{\mathaccent\cdot\cup} P_{k-1}, x, y)$. To this end, we make use of the well-known recurrence relation for the matching polynomial \cite[Theorem 1]{farrell1979b}:
\begin{align*}
M(G, x, y) = M(G_{-e}, x, y) + y \cdot M(G_{-u-v}, x, y),
\end{align*}
where $e = \{u, v\}$ is an edge of $G$, $G_{-e}$ is the graph with the edge $e$ deleted and $G_{-u-v}$ is the graph with the vertices of $e$ deleted.
Applying the recurrence relation to the edge connecting the $(k-1)$th and the $k$th vertex of $P_{2k-1}$ (counted from one of its ends), we obtain
\begin{align*}
M(P_{2k-1}, x, y) = M(P_{k-1} \mathrel{\mathaccent\cdot\cup} P_k, x, y) + y \cdot M(P_{k-2} \mathrel{\mathaccent\cdot\cup} P_{k-1}, x, y).
\end{align*}
Applying the recurrence relation to an edge of the cycle in $C_k \mathrel{\mathaccent\cdot\cup} P_{k-1}$, we obtain exactly the same term:
\begin{align*}
M(C_k \mathrel{\mathaccent\cdot\cup} P_{k-1}, x, y) = M(P_{k} \mathrel{\mathaccent\cdot\cup} P_{k-1}, x, y) + y \cdot M(P_{k-2} \mathrel{\mathaccent\cdot\cup} P_{k-1}, x, y).
\end{align*}
It follows that the polynomial decks coincide, while the matching polynomials of the original graphs do not. Hence, the latter cannot be determined from the corresponding polynomial decks.
\end{proof}
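The deck coincidence and the differing numbers of perfect matchings can be verified by exhaustive enumeration for small $k$. A brute-force sanity check (our code, not part of the paper; graphs are encoded as edge lists):

```python
from itertools import combinations

def matching_poly(edges, n):
    # Coefficients of M(G, x, y): (i, j) -> number of matchings with j edges
    # leaving i vertices of the n-vertex graph G uncovered.
    coeffs = {}
    for r in range(len(edges) + 1):
        for sub in combinations(edges, r):
            vs = [v for e in sub for v in e]
            if len(vs) == len(set(vs)):  # edges pairwise disjoint: a matching
                coeffs[(n - 2 * r, r)] = coeffs.get((n - 2 * r, r), 0) + 1
    return coeffs

def path(n, offset=0):
    return [(offset + i, offset + i + 1) for i in range(n - 1)]

def cycle(n, offset=0):
    return path(n, offset) + [(offset + n - 1, offset)]

for k in (3, 4):
    # Decks coincide: M(P_{2k-1}, x, y) = M(C_k u P_{k-1}, x, y) ...
    assert matching_poly(path(2 * k - 1), 2 * k - 1) == \
           matching_poly(cycle(k) + path(k - 1, offset=k), 2 * k - 1)
    # ... while the originals differ: C_{2k} has 2 perfect matchings,
    # C_k u C_k has 0 (k odd) or 4 (k even).
    assert matching_poly(cycle(2 * k), 2 * k).get((0, k), 0) == 2
    assert matching_poly(cycle(k) + cycle(k, offset=k), 2 * k).get((0, k), 0) \
           == (0 if k % 2 else 4)
```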
In fact, for $k = 2$ the above construction, applied to the graphs $C_{4}$ and $C_2 \mathrel{\mathaccent\cdot\cup} C_2$, where $C_2$ is a graph on two vertices connected by two parallel edges, provides an even smaller counterexample, though the graphs are not simple.
In addition, to obtain examples on an arbitrary even number of vertices such that the graphs and their complements are connected, the construction of the graphs $G_3$ and $G_4$ as well as of their complements $G_5$ and $G_6$ can be generalized analogously.
\section{Introduction}
The famous (and still unsolved) reconstruction conjecture of \textcite{kelly1957} and \textcite{ulam1960} states that every graph $G$ with at least three vertices can be reconstructed from (the isomorphism classes of) its vertex-deleted subgraphs.
With respect to a graph polynomial $P(G)$, this question may be adapted as follows: Can $P(G)$ of a graph $G = (V, E)$ be reconstructed from the graph polynomials of the vertex-deleted subgraphs, that is from the collection $P(G_{-v})$ for $v \in V$? Here, this problem is considered for the matching polynomial of a graph, which is the generating function of the numbers of its matchings with respect to their cardinality.
This paper aims to prove that graphs with pendant edges are polynomial reconstructible and, on the other hand, to display some evidence that arbitrary graphs are not.
In the remainder of this section the necessary definitions and notation are given. Further, the previous results from the literature are mentioned in Section \ref{sec:known_results}. Section \ref{sec:pendant_edges} and Section \ref{sec:counterexamples} contain the result for pendant edges and the counterexamples in the general case.
Let $G = (V, E)$ be a graph. A \emph{matching} in $G$ is an edge subset $A \subseteq E$, such that no two edges have a common vertex. The \emph{matching polynomial} $M(G, x, y)$ is defined as
\begin{align}
M(G, x, y) = \sum_{\substack{A \subseteq E \\ A \text{ is matching in }G}}{x^{\md(G, A)} y^{\abs{A}}},
\end{align}
where $\md(G, A) = \abs{V} - \abs{\bigcup_{e \in A}{e}}$ is the number of vertices not included in any of the edges of $A$. A matching $A$ is a \emph{perfect matching} if its edges include all vertices, that is, if $\md(G, A) = 0$. A \emph{near-perfect matching} $A$ is a matching that includes all vertices except one, that is, $\md(G, A) = 1$. For more information about matchings and the matching polynomial, see \cite{farrell1979b, gutman1977, lovasz1986}.
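For small graphs the definition can be evaluated directly by enumerating all matchings. A minimal sketch (our code; representing graphs as edge lists is a choice of this example, not of the paper):

```python
from itertools import combinations

def matching_poly(edges, n):
    # M(G, x, y) as a dict mapping (i, j) to the number of matchings of the
    # n-vertex graph G with j edges, leaving i = md(G, A) vertices uncovered.
    coeffs = {}
    for r in range(len(edges) + 1):
        for sub in combinations(edges, r):
            verts = [v for e in sub for v in e]
            if len(verts) == len(set(verts)):  # pairwise disjoint edges
                key = (n - 2 * r, r)
                coeffs[key] = coeffs.get(key, 0) + 1
    return coeffs

# Triangle C_3: one empty matching (x^3) and three single-edge matchings (3 x y).
print(matching_poly([(0, 1), (1, 2), (2, 0)], 3))  # {(3, 0): 1, (1, 1): 3}
```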
There are also two versions of univariate matching polynomials defined in the literature, namely the \emph{matching defect polynomial} and the \emph{matching generating polynomial} \cite[Section 8.5]{lovasz1986}. For simple graphs, the previously mentioned matching polynomials are equivalent to each other.
For a graph $G = (V, E)$ with a vertex $v \in V$, $G_{-v}$ is the graph arising from the \emph{deletion} of $v$, i.e. arising by the removal of all edges incident to $v$ and $v$ itself. The multiset of (the isomorphism classes of) the vertex-deleted subgraphs $G_{-v}$ for $v \in V$ is the \emph{deck} of $G$. The \emph{polynomial deck} $\mathcal{D}_P(G)$
with respect to a graph polynomial $P(G)$ is the multiset of $P(G_{-v})$ for $v \in V$. A graph polynomial $P(G)$ is \emph{polynomial reconstructible}, if $P(G)$ can be determined from $\mathcal{D}_P(G)$.
\section{Previous results}
\label{sec:known_results}
For results about the polynomial reconstruction of other graph polynomials, see the article by \textcite[Section 1]{bresar2005} and the references therein. For additional results, see \cites{li1995}[Section 7]{tittmann2011}[Subsection 4.7.3]{trinks2012c}.
By arguments analogous to those used in Kelly's Lemma \cite{kelly1957}, the derivative of the matching polynomial of a graph $G = (V, E)$ equals the sum of the polynomials in the corresponding polynomial deck.
\begin{prop}[Lemma 1 in \cite{farrell1987}]
Let $G = (V, E)$ be a graph. The matching polynomial $M(G, x, y)$ satisfies
\begin{align}
\frac{\partial}{\partial x} M(G, x, y) = \sum_{v \in V}{M(G_{-v}, x, y)}.
\end{align}
\end{prop}
In other words, all coefficients of the matching polynomial except the one corresponding to the number of perfect matchings can be determined from the polynomial deck and thus also from the deck:
\begin{align}
m_{i, j}(G) = \frac{1}{i} \sum_{v \in V}{m_{i-1, j}(G_{-v})} \qquad \forall i \geq 1,
\end{align}
where $m_{i, j}(G)$ is the coefficient of the monomial $x^i y^j$ in $M(G,x,y)$.
Consequently, the (polynomial) reconstruction of the matching polynomial reduces to the determination of the number of perfect matchings.
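This Kelly-type identity can be checked mechanically on small graphs: differentiating with respect to $x$ shifts the exponent, so $i \, m_{i,j}(G) = \sum_{v} m_{i-1,j}(G_{-v})$ for $i \geq 1$. A self-contained check on $C_4$ (our code):

```python
from itertools import combinations

def matching_coeffs(edges, verts):
    # (i, j) -> number of matchings with j edges leaving i vertices uncovered.
    coeffs = {}
    for r in range(len(edges) + 1):
        for sub in combinations(edges, r):
            vs = [v for e in sub for v in e]
            if len(vs) == len(set(vs)):
                key = (len(verts) - 2 * r, r)
                coeffs[key] = coeffs.get(key, 0) + 1
    return coeffs

def deck_sum(edges, verts):
    # Sum of the coefficient dicts over all vertex-deleted subgraphs.
    total = {}
    for v in verts:
        sub_edges = [e for e in edges if v not in e]
        sub_verts = [u for u in verts if u != v]
        for key, c in matching_coeffs(sub_edges, sub_verts).items():
            total[key] = total.get(key, 0) + c
    return total

# Check i * m_{i,j}(G) = sum_v m_{i-1,j}(G_{-v}) for all i >= 1 on C_4.
edges, verts = [(0, 1), (1, 2), (2, 3), (3, 0)], [0, 1, 2, 3]
g, d = matching_coeffs(edges, verts), deck_sum(edges, verts)
assert all(i * g[(i, j)] == d.get((i - 1, j), 0) for (i, j) in g if i >= 1)
```

Note that the coefficient $m_{0,j}$ counting perfect matchings (here $m_{0,2}(C_4) = 2$) is indeed not constrained by the identity.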
\begin{prop} \label{prop:polynomial_reconstruction}
The matching polynomial $M(G, x, y)$ of a graph $G$ can be determined from its polynomial deck $\mathcal{D}_M(G)$ and its number of perfect matchings. In particular, the matching polynomials $M(G, x, y)$ of graphs with an odd number of vertices are polynomial reconstructible.
\end{prop}
\textcite[Statement 6.9]{tutte1979} has shown that the number of perfect matchings of a simple graph can be determined from its deck and therefore gave an affirmative answer to the reconstruction problem for the matching polynomial.
The matching polynomial of a simple graph can also be reconstructed from the deck of edge-extracted and edge-deleted subgraphs \cite[Theorem 4 and 6]{farrell1987} and from the polynomial deck of the edge-extracted graphs \cite[Corollary 2.3]{gutman1992}. For a simple graph $G$ on $n$ vertices, the matching polynomial is reconstructible from the collection of induced subgraphs of $G$ with $\floor{\frac{n}{2}} + 1$ vertices \cite[Theorem 4.1]{godsil1981b}.
\section*{Acknowledgement}
Many thanks are due to Julian A. Allagan for his suggestions improving the presentation of this paper.
\printbibliography
\end{document}
\section{Result for simple graphs with pendant edges}
\label{sec:pendant_edges}
\begin{theo} \label{theo:forest_perfect_matching}
Let $G = (V, E)$ be a forest. $G$ has a perfect matching if and only if each vertex-deleted subgraph $G_{-v}$ for $v \in V$ has a near-perfect matching.
\end{theo}
\begin{proof}
For the first direction, we assume that $G$ has a perfect matching $M$. Then each vertex-deleted subgraph $G_{-v}$ has a near-perfect matching $M' = M \setminus \{e\}$, where $e$ is the edge in the matching $M$ incident to $v$.
For the second direction, let $w$ be a vertex of degree $1$ and $u$ its neighbor. If each vertex-deleted subgraph has a near-perfect matching, then so does $G_{-u}$, say $M'$. Since $w$ is isolated in $G_{-u}$, the matching $M'$ leaves exactly $w$ unmatched. Hence, $M' \cup \{\{u, w\}\}$ is a perfect matching of $G$.
\end{proof}
Actually, this theorem can be generalized to simple graphs with a pendant edge (or equivalently a vertex of degree $1$).
\begin{theo} \label{theo:pendant_perfect_matching}
Let $G = (V, E)$ be a simple graph with a vertex of degree $1$. $G$ has a perfect matching if and only if each vertex-deleted subgraph $G_{-v}$ for $v \in V$ has a near-perfect matching.
\end{theo}
The proof is exactly the same as for the theorem above. We do not know whether or not this can be further generalized to arbitrary simple connected graphs and are also not aware of publications regarding this question. Therefore, this problem seems to be worth further study.
Forests have at most one perfect matching, because every pendant edge must be contained in any perfect matching (in order to cover the vertices of degree $1$), and the same holds recursively for the subforest arising by deleting all the vertices of the pendant edges. Therefore, the polynomial reconstructibility of the matching polynomial follows from Proposition \ref{prop:polynomial_reconstruction} and Theorem \ref{theo:forest_perfect_matching}.
\begin{coro}
The matching polynomials $M(G, x, y)$ of forests are polynomial reconstructible.
\end{coro}
On the other hand, arbitrary graphs with pendant edges can have more than one perfect matching. However, Theorem \ref{theo:forest_perfect_matching} can be extended to obtain the number of perfect matchings. For a graph $G = (V, E)$, the number of perfect matchings and of near-perfect matchings of $G$ is denoted by $\np(G)$ and $\nnp(G)$, respectively.
\begin{theo} \label{theo:pendant_number_perfect_matching}
Let $G = (V, E)$ be a simple graph with a pendant edge $e = \{u, w\}$ where $w$ is a vertex of degree $1$. Then we have
\begin{align}
&\np(G) = \nnp(G_{-u}) \leq \nnp(G_{-v}) \qquad \forall v \in V \text{ and particularly} \\
&\np(G) = \min{\{\nnp(G_{-v}) \mid v \in V\}}.
\end{align}
\end{theo}
\begin{proof}
For each vertex $v \in V$, each perfect matching of $G$ corresponds to a near-perfect matching of $G_{-v}$ (by removing the edge containing $v$). The converse does not necessarily hold: there may be near-perfect matchings of $G_{-v}$ that leave a non-neighbor of $v$ in $G$ unmatched. Thus, we have $\np(G) \leq \nnp(G_{-v})$.
In case of the vertex $u$, each near-perfect matching $M'$ of $G_{-u}$ corresponds to a perfect matching $M$ of $G$, namely $M' \cup \{e\}$, and vice versa. Thus, we have $\np(G) = \nnp(G_{-u})$, giving the result.
\end{proof}
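The theorem can be verified computationally on a small example (our code): the triangle with a pendant edge attached at $u = 0$.

```python
from itertools import combinations

def count_matchings_leaving(edges, verts, uncovered):
    # Number of matchings leaving exactly `uncovered` vertices unmatched.
    total = 0
    for r in range(len(edges) + 1):
        for sub in combinations(edges, r):
            vs = [v for e in sub for v in e]
            if len(vs) == len(set(vs)) and len(verts) - len(vs) == uncovered:
                total += 1
    return total

# Triangle {0, 1, 2} with pendant edge {0, 3}; u = 0 is the neighbor of the
# degree-1 vertex w = 3.
verts = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 0), (0, 3)]
np_G = count_matchings_leaving(edges, verts, 0)  # perfect matchings of G
nnp = {v: count_matchings_leaving([e for e in edges if v not in e],
                                  [x for x in verts if x != v], 1)
       for v in verts}
# Theorem: np(G) = nnp(G_{-u}) = min_v nnp(G_{-v}).
assert np_G == nnp[0] == min(nnp.values())
```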
By applying this theorem, the number of perfect matchings of a simple graph with pendant edges can be determined from its polynomial deck and the following result is obtained as a corollary.
\begin{coro}
The matching polynomials $M(G, x, y)$ of simple graphs with a pendant edge are polynomial reconstructible.
\end{coro}
---
arXiv:1611.06842 — Almost tiling of the Boolean lattice with copies of a poset
https://arxiv.org/abs/1611.06842
Abstract: Let $P$ be a partially ordered set. If the Boolean lattice $(2^{[n]},\subset)$ can be partitioned into copies of $P$ for some positive integer $n$, then $P$ must satisfy the following two trivial conditions: (1) the size of $P$ is a power of $2$; (2) $P$ has a unique maximal and minimal element. Resolving a conjecture of Lonc, it was shown by Gruslys, Leader and Tomon that these conditions are sufficient as well. In this paper, we show that if $P$ only satisfies condition (2), we can still almost partition $2^{[n]}$ into copies of $P$. We prove that if $P$ has a unique maximal and minimal element, then there exists a constant $c=c(P)$ such that all but at most $c$ elements of $2^{[n]}$ can be covered by disjoint copies of $P$.
---

\section{Introduction}
The \emph{Boolean lattice} $(2^{[n]},\subset)$ is the power set of $[n]=\{1,...,n\}$ ordered by inclusion. If $P$ and $Q$ are partially ordered sets (posets), a subset $P'\subset Q$ is a \emph{copy} of $P$ if the subposet of $Q$ induced on $P'$ is isomorphic to $P$. A \emph{chain} is a copy of a totally ordered set, and an \emph{antichain} is a copy of a totally unordered set.
It is an easy exercise to show that every poset $P$ has a copy in $2^{[n]}$ for $n$ sufficiently large. Moreover, if $P$ has a unique maximal and minimal element, then every element of $2^{[n]}$ is contained in some copy of $P$. Therefore, it is natural to ask whether it is possible to partition $2^{[n]}$ into copies of $P$.
The case when $P$ is a chain of size $2^{k}$ was conjectured by Sands \cite{sands}. A slightly more general conjecture was proposed by Griggs \cite{griggs}: if $h$ is a positive integer and $n$ is sufficiently large, then $2^{[n]}$ can be partitioned into chains such that at most one of these chains have size different from $h$. This conjecture was confirmed by Lonc \cite{lonc}. The author of this paper \cite{me} also established the order of the minimal $n$ for which such a partition exists.
\begin{theorem}\label{chainpartition}
(\cite{me}) Let $h$ be a positive integer. If $n=\Omega(h^{2})$, the Boolean lattice $2^{[n]}$ can be partitioned into chains $C_{1},...,C_{r}$ such that $2h>|C_{1}|\geq h$ and $|C_{2}|=...=|C_{r}|=h$.
\end{theorem}
We note that Theorem \ref{chainpartition} is stated in \cite{me} as Theorem 5.2 without the additional condition that $|C_{1}|\geq h$. However, the proof of Theorem 5.2 presented in \cite{me} actually gives this slightly stronger result, which we shall exploit in this paper.
Lonc \cite{lonc} also proposed two conjectures concerning the cases when $P$ is not necessarily a chain. First, he conjectured that if $P$ has a unique maximal and minimal element and size $2^{k}$, then $2^{[n]}$ can be partitioned into copies of $P$ for $n$ sufficiently large. It is easy to see that these conditions on $P$ are necessary, and it was proved by Gruslys, Leader and the author of this paper \cite{posettiling} that these conditions are sufficient as well.
\begin{theorem}(\cite{posettiling})\label{poset}
Let $P$ be a poset with a unique maximal and minimal element and $|P|=2^{k}$, $k\in \mathbb{N}$. If $n$ is sufficiently large, the Boolean lattice $2^{[n]}$ can be partitioned into copies of $P$.
\end{theorem}
The second conjecture of Lonc targets posets which might not satisfy the conditions stated in Theorem \ref{poset}. In this case, it seems likely that we can still cover almost every element of $2^{[n]}$ with disjoint copies of $P$.
\begin{conjecture}(\cite{lonc})\label{loncconjecture}
Let $P$ be a poset. If $n$ is sufficiently large and $|P|$ divides $2^{n}-2$, then $2^{[n]}\setminus \{\emptyset,[n]\}$ can be partitioned into copies of $P$.
\end{conjecture}
In \cite{posettiling}, a slightly weaker conjecture is proposed.
\begin{conjecture}(\cite{posettiling})\label{mainconjecture}
Let $P$ be a poset. There exists a constant $c=c(P)$ such that for every positive integer $n$, there exists $S\subset 2^{[n]}$ such that $2^{[n]}\setminus S$ can be partitioned into copies of $P$ and $|S|\leq c$.
\end{conjecture}
In other words, Conjecture \ref{mainconjecture} asks if we can cover all but a constant number of the elements of $2^{[n]}$ with disjoint copies of $P$. Note that Conjecture \ref{loncconjecture} implies Conjecture \ref{mainconjecture} in the case $|P|$ is not divisible by $4$.
The main result of this manuscript is a proof of Conjecture \ref{mainconjecture} in the case $P$ has a unique maximal and minimal element. We believe that the method of proof presented here might lead to the proof of this conjecture in its full generality. We prove the following theorem.
\begin{customthm}{A}\label{mainthm}
Let $P$ be a poset with a unique maximal and minimal element. There exists a constant $c=c(P)$ such that the following holds: for every positive integer $n$, there exists $S\subset 2^{[n]}$ such that $2^{[n]}\setminus S$ can be partitioned into copies of $P$, and $|S|\leq c$.
\end{customthm}
This paper is organized as follows. In Section \ref{prelim}, we introduce the main definitions and notation used throughout the paper, and outline the proof of Theorem \ref{mainthm}. In Section \ref{grid}, we prove a modification of Theorem \ref{poset}, where $2^{[n]}$ is replaced with an appropriately sized grid. In Section \ref{grid2}, we show that Conjecture \ref{mainconjecture} holds when $P$ is a grid. In Section \ref{finalproof}, we prove Theorem \ref{mainthm} and conclude with some discussion.
\section{Preliminaries}\label{prelim}
\subsection{Cartesian product of posets}
The proof of Theorem \ref{mainthm} relies heavily on the product structure of $2^{[n]}$. Let us define the \emph{cartesian product} of posets.
If $P_{1},...,P_{k}$ are posets with partial orderings $\preceq_{1},...,\preceq_{k}$ respectively, the cartesian product $P_{1}\times...\times P_{k}$ is also a poset with partial ordering $\preceq$ such that $(p_{1},...,p_{k})\preceq(p_{1}',...,p_{k}')$ if $p_{i}\preceq_{i} p_{i}'$ for $i=1,...,k$. Also, $P^{k}=P\times...\times P$ is the $k$-th \emph{cartesian power} of $P$, where the cartesian product contains $k$ terms.
For every positive integer $m$, we shall view $[m]$ as a poset with the natural total ordering. For positive integers $a_{1},...,a_{d}$, the cartesian product $[a_{1}]\times...\times [a_{d}]$ is called a \emph{$d$-dimensional grid}; $2$-dimensional grids may be referred to as \emph{rectangles}. That is, if $\preceq$ is the partial ordering on $[a_{1}]\times...\times [a_{d}]$, then $\preceq$ is defined such that for $(x_{1},...,x_{d}),(y_{1},...,y_{d})\in [a_{1}]\times...\times [a_{d}]$, we have $(x_{1},...,x_{d})\preceq (y_{1},...,y_{d})$ iff $x_{i}\leq y_{i}$ for $i=1,...,d$. Note that the Boolean lattice $2^{[n]}$ is isomorphic to the $n$-dimensional grid $[2]^{n}$, and we might occasionally switch between the two notations without further comment.
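The isomorphism $2^{[n]} \cong [2]^n$ can be made explicit: send a tuple to the set of coordinates where it equals $2$. A small exhaustive check for $n = 3$ (our code):

```python
from itertools import product

n = 3
grid = list(product([1, 2], repeat=n))

# The claimed isomorphism: a tuple maps to the set of coordinates equal to 2.
to_set = lambda t: frozenset(i for i, x in enumerate(t) if x == 2)

# Componentwise order on the grid vs. inclusion order on subsets.
grid_leq = lambda s, t: all(a <= b for a, b in zip(s, t))
for s in grid:
    for t in grid:
        assert grid_leq(s, t) == (to_set(s) <= to_set(t))
```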
\subsection{Outline of the proof}
The framework used in \cite{posettiling} for the proof of Theorem \ref{poset} can be easily modified to show that if $P$ has a unique maximal and minimal element, then Theorem \ref{poset} holds if $2^{[n]}$ is replaced with some appropriately sized grid. We prove the following theorem in Section \ref{grid}.
\begin{theorem}\label{gridpartitionthm}
Let $P$ be a poset with a unique maximal and minimal element. If $d$ is sufficiently large, then $[2|P|]^{d}$ can be partitioned into copies of $P$.
\end{theorem}
Now our task is reduced to showing that Theorem \ref{mainthm} holds when $P$ is a grid. The new ingredient in our paper is the following result, which shall be proved in Section \ref{grid2}.
\begin{theorem}\label{gridthm}
Let $P$ be a $d$-dimensional grid. If $n=\Omega(d|P|^{4})$, then there exists $S\subset 2^{[n]}$ such that $2^{[n]}\setminus S$ can be partitioned into copies of $P$ and $|S|\leq (24|P|^{2})^{d}$.
\end{theorem}
These two theorems combined immediately yield our main result, see Section \ref{finalproof}.
\section{Partitioning the grid}\label{grid}
In this section, we prove Theorem \ref{gridpartitionthm}. The proof of this theorem follows the same ideas as the proof of Theorem \ref{poset} in \cite{posettiling}, with slight modifications. We shall reuse some of the results of \cite{posettiling} in order to shorten and simplify this manuscript.
\subsection{Weak partitions}
Let $S$ be a finite set and let $\mathcal{F}$ be a family of subsets of $S$. We call a function $w:\mathcal{F}\rightarrow \mathbb{N}$ a \emph{weight function}. If $x\in S$, the \emph{weight of $x$} is the total weight of the sets in $\mathcal{F}$ containing $x$.
Let $t$ be a positive integer. The family $\mathcal{F}$ \emph{contains a $t$-partition} if there exists a weight function on $\mathcal{F}$ such that the weight of each $x\in S$ is $t$. Similarly, $\mathcal{F}$ \emph{contains a $(1\mod t)$-partition} if there exists a weight function on $\mathcal{F}$ such that the weight of each $x\in S$ is congruent to $1$ modulo $t$.
If $\mathcal{F}$ contains a $1$-partition, then the family of sets having weight $1$ in $\mathcal{F}$ is a partition of $S$ in the usual sense. Also, if $\mathcal{F}$ has a $1$-partition, it trivially has a $t$-partition and a ${(1\mod t)}$-partition as well for every positive integer $t$.
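As a small illustration of these definitions (a toy example, not taken from the argument below), consider $S=\{1,2,3\}$ with $\mathcal{F}$ the family of all two-element subsets: the constant weight function $w\equiv 1$ gives every element weight $2$, so $\mathcal{F}$ contains a $2$-partition. A minimal Python sketch:

```python
from itertools import combinations

# Toy example: S = {1, 2, 3}, F = all 2-element subsets of S.
S = {1, 2, 3}
F = [frozenset(c) for c in combinations(S, 2)]

# A weight function w: F -> N; here the constant function w == 1.
w = {A: 1 for A in F}

# The weight of x is the total weight of the sets in F containing x.
def weight(x):
    return sum(w[A] for A in F if x in A)

# Every element has weight 2, so F contains a 2-partition.
assert all(weight(x) == 2 for x in S)
```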
A remarkable result of Gruslys, Leader and Tan \cite{tiling} is that the existence of a $t$-partition and a ${(1\mod t)}$-partition also implies the existence of a $1$-partition of $S^{n}$, for some sufficiently large $n$, into sets that ``look like'' elements of $\mathcal{F}$. Let us define this precisely.
For a positive integer $n$, define $\mathcal{F}(n)$ to be the family of all subsets of $S^{n}$ of the form $$\{s_{1}\}\times...\times \{s_{i-1}\}\times F\times \{s_{i+1}\}\times...\times \{s_{n}\},$$
where $i\in [n]$, $s_{1},...,s_{i-1},s_{i+1},...,s_{n}\in S$ and $F\in \mathcal{F}$. The following result played a key role in both \cite{tiling,posettiling}, and shall play an important role in this paper as well. It was stated explicitly in \cite{posettiling} as Theorem 5.
\begin{theorem}(\cite{posettiling})\label{Spartition}
Let $S$ be a finite set, $\mathcal{F}$ be a family of subsets of $S$ and $t\in \mathbb{N}$. Suppose that $\mathcal{F}$ contains a $t$-partition and a $(1\mod t)$-partition. Then for $n$ sufficiently large, $S^{n}$ can be partitioned into elements of $\mathcal{F}(n)$.
\end{theorem}
Let us make a few remarks about this theorem. Note that if $S$ is endowed with a partial ordering and $\mathcal{F}$ is the family of all copies of some poset $P$, then $\mathcal{F}(n)$ is also a family of copies of $P$ in $S^{n}$. (However, it is not true that every copy of $P$ in $S^{n}$ is an element of $\mathcal{F}(n)$.) Also, at first glance it might not be clear why it is easier to find a $t$-partition or a $(1\mod t)$-partition than a $1$-partition, but as we shall see in the next section, this problem is substantially simpler.
\subsection{$t$ and $(1\mod t)$-partitions}
In order to prove Theorem \ref{gridpartitionthm}, we only need to show that the family of copies of $P$ in $[2|P|]^{m}$ contains a $t$-partition and a $(1\mod t)$-partition for some positive integers $m$ and $t$.
\begin{theorem}\label{tpartition}
Let $P$ be a poset with a unique maximal and minimal element. Then there exist positive integers $m$ and $t$ such that the family of copies of $P$ in $[2|P|]^{m}$ contains a $t$-partition.
\end{theorem}
\begin{proof}
In \cite{posettiling}, Lemma 3 states that there exist $m$ and $t$ such that the family of copies of $P$ in $[2]^{m}$ contains a $t$-partition. Moreover, $[2|P|]^{m}$ can be trivially partitioned into copies of $[2]^{m}$: for $(i_{1},...,i_{m})\in [|P|]^{m}$, let
$$B_{i_{1},...,i_{m}}=\{(2i_{1}+\epsilon_{1}-2,...,2i_{m}+\epsilon_{m}-2):(\epsilon_{1},...,\epsilon_{m})\in [2]^{m}\}.$$
Then $B_{i_{1},...,i_{m}}$ is isomorphic to $[2]^{m}$ and the family of these sets forms a partition of $[2|P|]^{m}$. Combining the $t$-partitions provided by Lemma 3 of \cite{posettiling} on each block $B_{i_{1},...,i_{m}}$ yields a $t$-partition of $[2|P|]^{m}$ by copies of $P$.
\end{proof}
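The block decomposition in the proof above can be verified mechanically for small parameters. The sketch below, with the illustrative choices $|P|=2$ and $m=2$, checks that the blocks $B_{i_{1},...,i_{m}}$ are pairwise disjoint and cover $[2|P|]^{m}$:

```python
from itertools import product

p, m = 2, 2  # stand-ins for |P| and the dimension m; small illustrative values

# B_{i_1,...,i_m} = {(2 i_1 + e_1 - 2, ..., 2 i_m + e_m - 2) : e in [2]^m}
def block(idx):
    return {tuple(2 * i + e - 2 for i, e in zip(idx, eps))
            for eps in product((1, 2), repeat=m)}

blocks = [block(idx) for idx in product(range(1, p + 1), repeat=m)]

# The blocks are pairwise disjoint and cover [2|P|]^m.
covered = set().union(*blocks)
assert sum(len(b) for b in blocks) == len(covered) == (2 * p) ** m
```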
\begin{theorem}\label{modtpartition}
Let $P$ be a poset with a unique maximal and minimal element and let $t$ be a positive integer. Then there exists a positive integer $m$ such that the family of copies of $P$ in $[2|P|]^{m}$ contains a $(1\mod t)$-partition.
\end{theorem}
\begin{proof}
If $|P|=1$, the proof is trivial, so we can suppose that $|P|\geq 2$.
Let $d$ be a positive integer such that $2^{[d]}$ contains a copy of $P$. We show that $m=2d-1$ suffices. For simplicity, write $Q=[2|P|]^{m}$ and let $\preceq$ be the partial ordering on $Q$.
Let $\mathcal{F}$ be the family of copies of $P$ in $Q$. Also, for a subset $A\subset Q$, let $I_{A}:Q\rightarrow \mathbb{N}$ be the indicator function of $A$.
Say that a function $f:Q\rightarrow \mathbb{Z}$ is \emph{realizable} if there exists a weight function $w:\mathcal{F}\rightarrow \mathbb{N}$ such that $$f(x)\equiv\sum_{\substack{F\in\mathcal{F}\\x\in F}} w(F)\mod t$$
for all $x\in Q$. Note that if $f$ and $g$ are realizable, then both $f+g$ and $f-g$ are realizable, as $f-g\equiv f+(t-1)g\mod t$.
Our task is to show that $I_{Q}$ is realizable.
Let $\mathbf{a}$ be the element of $Q$ whose every coordinate is $2$.
\begin{claim}\label{twoelementsclaim}
For every $\mathbf{x}\in Q$, the function $I_{\{\mathbf{x}\}}-I_{\{\mathbf{a}\}}$ is realizable.
\end{claim}
\begin{proof}
Consider two cases according to the number of coordinates of $\mathbf{x}$ equal to $1$.
Case 1: The number of coordinates of $\mathbf{x}$ equal to $1$ is at most $d-1$. Let $J$ be a set of $d$ indices $j\in [m]$ such that $\mathbf{x}(j)\geq 2$. Let
$$R=\{(a_{1},...,a_{m})\in [2]^{m}: a_{j}=1\mbox{, if }j\not\in J\}.$$
Then $R$ is isomorphic to $2^{[d]}$, so it contains a copy of $P$. Let such a copy be $A\subset R$. If $\mathbf{b}$ is the maximal element of $A$, then $\mathbf{b}\preceq \mathbf{x}$ and $\mathbf{b}\preceq \mathbf{a}$. Hence, $A_{1}=(A\setminus\{\mathbf{b}\})\cup\{\mathbf{x}\}$ and $A_{2}=(A\setminus\{\mathbf{b}\})\cup\{\mathbf{a}\}$ are both copies of $P$, which implies that the function
$$I_{A_{1}}-I_{A_{2}}=I_{\{\mathbf{x}\}}-I_{\{\mathbf{a}\}}$$
is realizable.
Case 2: The number of coordinates of $\mathbf{x}$ equal to $1$ is at least $d$. In a similar way as in the previous case, we show that $I_{\{\mathbf{x}\}}-I_{\{\mathbf{a}\}}$ is realizable. Let $J$ be a set of $d$ indices $j\in [m]$ such that $\mathbf{x}(j)= 1$. Let
$$R=\{(a_{1},...,a_{m})\in \{2|P|-1,2|P|\}^{m}: a_{j}=2|P|\mbox{, if }j\not\in J\}.$$
Then $R$ is isomorphic to $2^{[d]}$, so it contains a copy of $P$. Let such a copy be $A\subset R$. If $\mathbf{c}$ is the minimal element of $A$, then $\mathbf{x}\preceq \mathbf{c}$ and $\mathbf{a}\preceq \mathbf{c}$. Hence, $A_{1}=(A\setminus\{\mathbf{c}\})\cup\{\mathbf{x}\}$ and $A_{2}=(A\setminus\{\mathbf{c}\})\cup\{\mathbf{a}\}$ are both copies of $P$, which implies that the function
$$I_{A_{1}}-I_{A_{2}}=I_{\{\mathbf{x}\}}-I_{\{\mathbf{a}\}}$$
is realizable.
\end{proof}
The previous claim immediately yields the following claim.
\begin{claim}\label{sum0claim}
Let $f:Q\rightarrow \mathbb{Z}$ be a function such that
$$\sum_{\mathbf{x}\in Q}f(\mathbf{x})\equiv 0\mod t.$$
Then $f$ is realizable.
\end{claim}
\begin{proof}
Define the function $g:Q\rightarrow \mathbb{Z}$ such that
$$g=\sum_{\mathbf{x}\in Q}f(\mathbf{x})(I_{\{\mathbf{x}\}}-I_{\{\mathbf{a}\}}).$$
Clearly, $g$ is realizable as every term in the sum is realizable by Claim \ref{twoelementsclaim}. Also, $f(\mathbf{x})=g(\mathbf{x})$ for all $\mathbf{x}\neq \mathbf{a}$. But $g$ satisfies the congruence $\sum_{\mathbf{x}\in Q}g(\mathbf{x})\equiv 0\mod t$ as well, so we must have $f(\mathbf{a})\equiv g(\mathbf{a})\mod t$. Hence, $f\equiv g\mod t$, and thus $f$ is realizable.
\end{proof}
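The algebraic identity behind this proof, $g=f-\big(\sum_{\mathbf{x}}f(\mathbf{x})\big)I_{\{\mathbf{a}\}}$, can be sanity-checked numerically. The sketch below uses an arbitrary small ground set and an arbitrary $f$ with $\sum_{\mathbf{x}}f(\mathbf{x})\equiv 0\mod t$ (all concrete values are illustrative):

```python
t = 3
Q = list(range(5))                     # a small stand-in ground set
a = 2                                  # the distinguished element a
f = {0: 1, 1: -2, 2: 4, 3: 0, 4: 6}    # sum = 9, congruent to 0 mod 3
assert sum(f.values()) % t == 0

# g = sum_x f(x) * (I_{x} - I_{a}); expanding the sum gives
# g(y) = f(y) for y != a and g(a) = f(a) - sum_x f(x).
def g(y):
    return sum(f[x] * ((y == x) - (y == a)) for x in Q)

# Hence f and g agree mod t pointwise, as used in the proof.
assert all((f[y] - g(y)) % t == 0 for y in Q)
```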
Now let $A\subset Q$ be a copy of $P$ and $s=|Q|/|P|$. Note that $s$ is an integer, as $|Q|=(2|P|)^{m}$. The function $sI_{A}$ is realizable (give $A$ weight $s$ and every other copy weight $0$), and $f=I_{Q}-sI_{A}$ satisfies $\sum_{\mathbf{x}\in Q}f(\mathbf{x})=|Q|-s|P|=0$, so $f$ is also realizable by Claim \ref{sum0claim}. But then $I_{Q}=f+sI_{A}$ is realizable as well.
\end{proof}
Let us remark that if the family of copies of $P$ in $[2|P|]^{m_{0}}$ contains a $t$-partition or a $(1\mod t)$-partition, then the family of copies of $P$ in $[2|P|]^{m}$ also contains a $t$-partition or $(1\mod t)$-partition, respectively, for $m>m_{0}$, as $[2|P|]^{m}$ can be trivially partitioned into copies of $[2|P|]^{m_{0}}$.
\subsection{Proof of Theorem \ref{gridpartitionthm}}
Now everything is set to prove that if $P$ has a unique maximal and minimal element, then $[2|P|]^{d}$ can be partitioned into copies of $P$ for $d$ sufficiently large.
\begin{proof}[Proof of Theorem \ref{gridpartitionthm}]
By Theorem \ref{tpartition}, there exist positive integers $t$ and $m_{1}$ such that the family of copies of $P$ in $[2|P|]^{m_{1}}$ contains a $t$-partition. Also, by Theorem \ref{modtpartition}, there exists a positive integer $m_{2}$ such that the family of copies of $P$ in $[2|P|]^{m_{2}}$ contains a $(1\mod t)$-partition. Let $m=\max\{m_{1},m_{2}\}$; then the family of copies of $P$ in $[2|P|]^{m}$ contains both a $t$-partition and a $(1\mod t)$-partition. Hence, by Theorem \ref{Spartition}, there exists a positive integer $n$ such that $([2|P|]^{m})^{n}$ can be partitioned into copies of $P$. But then any $d\geq mn$ suffices, as $([2|P|]^{m})^{n}\cong [2|P|]^{mn}$ and $[2|P|]^{d}$ can be trivially partitioned into copies of $[2|P|]^{mn}$.
\end{proof}
\section{Almost partitioning into grids}\label{grid2}
In this section, we prove Theorem \ref{gridthm}. Let us briefly outline our strategy for the proof of this theorem. Let $P$ be a $d$-dimensional grid. First, we use Theorem \ref{chainpartition} to partition $2^{[m]}$ (for sufficiently large $m$) into chains $C_{1},...,C_{r}$ such that all these chains are large (but still of constant size) and at most one of them, $C_{1}$, might have size not divisible by $|P|$. This chain partition induces a partition of $2^{[dm]}\simeq (2^{[m]})^{d}$ into the family of grids $D_{i_{1},...,i_{d}}\simeq C_{i_{1}}\times...\times C_{i_{d}}$, where $(i_{1},...,i_{d})\in [r]^{d}$. We conclude by showing that unless $i_{1}=...=i_{d}=1$, the grid $D_{i_{1},...,i_{d}}$ can be partitioned into copies of $P$; thus the set of elements left uncovered by these disjoint copies of $P$ is $D_{1,...,1}$, a set of constant size.
Now let us first show that as long as one side of a $d$-dimensional grid $G$ is divisible by $2|P|$, and all the sides of $G$ are large, we can partition $G$ into copies of $P$.
\begin{lemma}\label{gridintogrid}
Let $P$ be a $d$-dimensional grid and let $c_{1},...,c_{d-1}$ be positive integers satisfying $c_{i}\geq 12|P|^{2}$ for $i\in [d-1]$. Then the $d$-dimensional grid $[2|P|]\times [c_{1}]\times...\times[c_{d-1}]$ can be partitioned into copies of $P$.
\end{lemma}
\begin{proof}
Let us first prove the following claim, which handles the case $d=2$; the general statement then follows by induction on $d$.
\begin{claim}\label{rectangle}
Let $a,b,c$ be positive integers such that $c\geq a^{2}b+2a$ and $a$ is even. Then $[ab]\times [c]$ can be partitioned into copies of $[a]\times [b]$.
\end{claim}
\begin{proof}
Write $c=ad+r$, where $d$ and $r$ are integers with $d\geq 1$ and $0\leq r<a$. If $r=0$, we can trivially partition $[ab]\times [c]$ into copies of $[a]\times [b]$. So let us assume that $r\geq 1$, and let
$$\epsilon= \begin{cases} 1 &\mbox{if } r=1 \\
2 &\mbox{if } r\geq 2. \end{cases}$$
By the condition $c\geq a^{2}b+2a$, we have $d\geq br+2$. Let $R=[a]\times [b]$ and denote by $\preceq$ the partial ordering on $[ab]\times [c]$.
Consider the following two collections of copies of $R$ in $[ab]\times [c]$. For $(i,j)\in [a/2]\times [d]$, let
$$A_{i,j}=\{(b(i-1)+x,\,r+a(j-1)+y):(x,y)\in [b]\times [a]\}$$
and
$$B_{i,j}=\{(ab/2+b(i-1)+x,\,a(j-1)+y):(x,y)\in [b]\times [a]\},$$
see Figure \ref{image}.
Clearly, the sets $A_{i,j}$ and $B_{i,j}$ are copies of $R$ and pairwise disjoint. Also, their union covers all elements of $[ab]\times [c]$, except for the elements of the sets $S=[ab/2]\times [r]$ and $T=[ab/2+1,ab]\times [ad+1,c]$.
Now we shall modify some of the sets $A_{i,j}$ by switching their maximal elements with an element of $T$, and we shall switch the minimal elements of some of the sets $B_{i,j}$ with an element of $S$. We shall execute these switches in a way that the freed up elements of $[ab]\times [c]$ can be easily partitioned into copies of $R$.
For $(i,j)\in [a/2]\times [br]$, let $x_{i,j}=(bi,r+aj)$, which is the maximal element of $A_{i,j}$, and let $y_{i,j}=(ab/2+b(i-1)+1,a(j+\epsilon-1)+1)$, which is the minimal element of $B_{i,j+\epsilon}$. We remark that we used the inequality $j+2\leq br+2\leq d$ to guarantee that $A_{i,j}$ and $B_{i,j+\epsilon}$ exist. Let $\phi$ be any bijection between $[a/2]\times [br]$ and $T$, and let $\varphi$ be a bijection between $[a/2]\times [br]$ and $S$.
For $(i,j)\in [a/2]\times [br]$, let
$$A'_{i,j}=(A_{i,j}\setminus \{x_{i,j}\})\cup\{\phi(i,j)\},$$
$$B'_{i,j+\epsilon}=(B_{i,j+\epsilon}\setminus \{y_{i,j}\})\cup\{\varphi(i,j)\}.$$
Also, for $(i,j)\in [a/2]\times [br+1,d]$, let $A'_{i,j}=A_{i,j}$, and for $(i,j)\in [a/2]\times([\epsilon]\cup [br+\epsilon+1,d])$, let $B'_{i,j}=B_{i,j}$.
For $(i,j)\in [a/2]\times [br]$, the set $A_{i,j}$ is contained in $[ab/2]\times [abr+r]$. As $abr+r<ad+1$, we have that every element of $T$ is $\preceq$-larger than every element of $A_{i,j}$. Hence, $A'_{i,j}$ is also a copy of $R$, remembering that $x_{i,j}$ is the maximal element of $A_{i,j}$. Similarly, $B'_{i,j+\epsilon}$ is also a copy of $R$, as $B'_{i,j+\epsilon}$ is equal to $B_{i,j+\epsilon}$ with its minimal element replaced by a $\preceq$-smaller element.
For $(i,j)\in [a/2]\times [d]$, the sets $A'_{i,j}$ and $B'_{i,j}$ are disjoint copies of $R$ and their union covers all elements of $[ab]\times [c]$, except for the set $$X=\{x_{i,j}:(i,j)\in [a/2]\times [br]\}\cup\{y_{i,j}:(i,j)\in [a/2]\times [br]\}.$$
But $X$ can be easily partitioned into copies of $R$. For $k\in [r]$, let
$$C_{k}=\{x_{i,b(k-1)+j}:(i,j)\in [a/2]\times [b]\}\cup \{y_{i,b(k-1)+j}:(i,j)\in [a/2]\times [b]\}.$$
Clearly, the sets $C_{k}$ for $k=1,...,r$ partition $X$, so the only thing left is to show that $C_{k}$ is a copy of $R$.
We show that the function $\pi:C_{k}\rightarrow R$ defined by $\pi(x_{i,b(k-1)+j})=(i,j)$ and $\pi(y_{i,b(k-1)+j})=(i+a/2,j)$ is an isomorphism.
As $\pi$ defines an isomorphism between $\{x_{i,b(k-1)+j}:(i,j)\in [a/2]\times [b]\}$ and $[a/2]\times [b]$, and an isomorphism between $\{y_{i,b(k-1)+j}:(i,j)\in [a/2]\times [b]\}$ and $[a/2+1,a]\times [b]$, it is enough to show that $x_{i,b(k-1)+j}\preceq y_{i',b(k-1)+j'}$ if and only if $j\leq j'$. But we have
$$x_{i,b(k-1)+j}=(bi,r+a(b(k-1)+j))$$
and
$$y_{i',b(k-1)+j'}=(ab/2+b(i'-1)+1,a(b(k-1)+j'+\epsilon-1)+1),$$
so $x_{i,b(k-1)+j}\preceq y_{i',b(k-1)+j'}$ holds if and only if $r+aj\leq a(j'+\epsilon-1)+1$. By the choice of $\epsilon$, this inequality is equivalent to $j\leq j'$, finishing our proof.
\begin{figure}
\begin{center}
\includegraphics[scale=1]{image1.eps}
\end{center}
\caption{Partitioning $[ab]\times [c]$ into copies of $R$}
\label{image}
\end{figure}
\end{proof}
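As a sanity check on the bookkeeping in the proof above, the following sketch verifies, for sample parameters chosen only for illustration, that the copies $A_{i,j}$ and $B_{i,j}$ together with $S$ and $T$ account for all $ab\cdot c$ cells of $[ab]\times[c]$, and that the leftover set $X$ has exactly $abr$ elements, i.e. $r$ copies of $R$:

```python
# Sample parameters (illustrative only) with a even and c >= a^2*b + 2a.
a, b = 4, 3
c = a * a * b + 2 * a + 2          # c = 58
d, r = divmod(c, a)                # c = a*d + r with 0 <= r < a
assert a % 2 == 0 and c >= a * a * b + 2 * a and 1 <= r < a
assert d >= b * r + 2              # needed for the sets B_{i,j+eps} to exist

# (a/2)*d copies A_{i,j} and (a/2)*d copies B_{i,j}, each of size ab,
# together with S = [ab/2] x [r] and T (of the same size), cover [ab] x [c].
copies = 2 * (a // 2) * d
assert copies * a * b + 2 * (a * b // 2) * r == a * b * c

# The leftover set X has a*b*r elements, i.e. exactly r copies of R = [a] x [b].
assert 2 * (a // 2) * b * r == a * b * r
assert (a * b * r) % (a * b) == 0
```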
First, we show that if $P=[a_{1}]\times...\times [a_{d}]$ with $a_{1}$ even, $\alpha=|P|=a_{1}\cdots a_{d}$, and $c_{1},...,c_{d-1}>3\alpha^{2}$, then the grid $G=[\alpha]\times [c_{1}]\times...\times [c_{d-1}]$ can be partitioned into copies of $P$.
Let us proceed by induction on $d$. If $d=1$, the statement is trivial. Let us assume that $d\geq 2$ and the statement holds for all grids of dimension $d-1$. Let $a'_{1}=a_{1}a_{2}$ and $a'_{i}=a_{i+1}$ for $i\in [2,d-1]$. Define the $(d-1)$-dimensional grid $P'=[a'_{1}]\times...\times[a'_{d-1}]$. We have that $a'_{1}$ is even, $\alpha=a'_{1}...a'_{d-1}$ and $c_{1},...,c_{d-2}>3\alpha^{2}$. Hence, by our induction hypothesis, the grid $G'=[\alpha]\times [c_{1}]\times...\times [c_{d-2}]$ can be partitioned into copies of $P'$. But this gives a partition of $G=G'\times [c_{d-1}]$ into copies of $P'\times [c_{d-1}]$. Hence, it is enough to show that $P'\times [c_{d-1}]$ can be partitioned into copies of $P$.
As $a_{1}$ is even and $c_{d-1}\geq 3\alpha^{2}\geq a_{1}^{2}a_{2}+2a_{1}$, an immediate application of Claim \ref{rectangle} shows that the rectangle $R=[a_{1}a_{2}]\times [c_{d-1}]$ can be partitioned into copies of $R'=[a_{1}]\times [a_{2}]$. But then $P'\times [c_{d-1}]\cong R\times [a_{3}]\times...\times [a_{d}]$ can be partitioned into copies of $P=R'\times [a_{3}]\times...\times [a_{d}]$ as well.
To finish our proof, let us assume that $P=[a_{1}]\times...\times [a_{d}]$, where $a_{1}$ is not necessarily even, and $c_{1},...,c_{d-1}\geq 12|P|^{2}$. Let $P'=[2a_{1}]\times [a_{2}]\times...\times [a_{d}]$. We have $|P'|=2|P|$, so by the previous argument, we have that $[|P'|]\times [c_{1}]\times...\times[c_{d-1}]$ can be partitioned into copies of $P'$. But $P'$ can be trivially further partitioned into two copies of $P$.
\end{proof}
We shall use the following immediate corollary of the previous lemma.
\begin{corollary}\label{gridcor}
Let $P$ be a $d$-dimensional grid and let $c_{1},...,c_{d}$ be positive integers satisfying $c_{i}\geq 12|P|^{2}$ for $i\in [d]$, and suppose that at least one of $c_{1},...,c_{d}$ is divisible by $2|P|$. Then the $d$-dimensional grid $ [c_{1}]\times...\times[c_{d}]$ can be partitioned into copies of $P$.
\end{corollary}
\begin{proof}
Without loss of generality, suppose that $c_{d}$ is divisible by $2|P|$. Then we can trivially partition $[c_{1}]\times...\times [c_{d}]$ into copies of $[2|P|]\times [c_{1}]\times...\times [c_{d-1}]$. But $[2|P|]\times [c_{1}]\times...\times [c_{d-1}]$ can be partitioned into copies of $P$ by Lemma \ref{gridintogrid}.
\end{proof}
Now we are ready to prove that we can almost partition $2^{[n]}$ into copies of a grid.
\begin{proof}[Proof of Theorem \ref{gridthm}]
Let $h=12|P|^{2}$. By Theorem \ref{chainpartition}, there exists a constant $c$ such that if $m\geq ch^{2}$, then $2^{[m]}$ can be partitioned into chains $C_{1},...,C_{r}$ such that $h\leq |C_{1}|\leq 2h$ and $|C_{2}|=...=|C_{r}|=h$. Suppose that $n\geq cdh^{2}$; then we can find positive integers $m_{1},...,m_{d}$ such that $n=m_{1}+...+m_{d}$ and $m_{i}\geq ch^{2}$ for $i\in [d]$. Then $2^{[m_{i}]}$ has a partition into chains $C_{i,1},...,C_{i,r_{i}}$ such that $h\leq |C_{i,1}|\leq 2h$ and $|C_{i,2}|=...=|C_{i,r_{i}}|=h$.
The sets $C_{1,j_{1}}\times...\times C_{d,j_{d}}$ for $(j_{1},...,j_{d})\in [r_{1}]\times...\times [r_{d}]$ partition $2^{[m_{1}]}\times...\times 2^{[m_{d}]}$. Hence, as $2^{[n]}\cong 2^{[m_{1}]}\times...\times 2^{[m_{d}]}$, the Boolean lattice $2^{[n]}$ also has a partition into sets $B_{j_{1},...,j_{d}}$ for $(j_{1},...,j_{d})\in [r_{1}]\times...\times [r_{d}]$, where $B_{j_{1},...,j_{d}}$ is isomorphic to $C_{1,j_{1}}\times...\times C_{d,j_{d}}$. But then $B_{j_{1},...,j_{d}}$ is a $d$-dimensional grid isomorphic to $[|C_{1,j_{1}}|]\times...\times[|C_{d,j_{d}}|]$, where $|C_{i,j_{i}}|\geq 12|P|^{2}$ for all $i\in [d]$. If $(j_{1},...,j_{d})\neq (1,...,1)$, then at least one of $|C_{1,j_{1}}|,...,|C_{d,j_{d}}|$ equals $h=12|P|^{2}$, which is divisible by $2|P|$. Thus, applying Corollary \ref{gridcor}, $B_{j_{1},...,j_{d}}$ can be partitioned into copies of $P$ unless $(j_{1},...,j_{d})= (1,...,1)$. Setting $S=B_{1,...,1}$, we get that $2^{[n]}\setminus S$ can be partitioned into copies of $P$. As
$$|S|=|C_{1,1}|...|C_{d,1}|\leq 2^{d}h^{d}=(24|P|^{2})^{d},$$
our proof is finished.
\end{proof}
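The closing size computation is elementary and can be checked for sample values of $|P|$ and $d$ (chosen only for illustration): with $h=12|P|^{2}$ and $h\leq |C_{i,1}|\leq 2h$, the product of the $d$ chain sizes is at most $(2h)^{d}=(24|P|^{2})^{d}$:

```python
# With h = 12|P|^2, each chain C_{i,1} has size between h and 2h, so
# |S| = |C_{1,1}| ... |C_{d,1}| <= (2h)^d = (24|P|^2)^d.
for p in (2, 3, 5):              # sample values of |P| (illustrative)
    h = 12 * p * p
    for d in (1, 2, 3):          # sample dimensions
        # worst case: every chain C_{i,1} has the maximal size 2h
        assert (2 * h) ** d == (24 * p * p) ** d
        # any admissible sizes h <= s_i <= 2h give a product within the bound
        sizes = [h + (i * h) // d for i in range(d)]
        prod = 1
        for s in sizes:
            prod *= s
        assert h ** d <= prod <= (24 * p * p) ** d
```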
\section{Conclusion}\label{finalproof}
In this section, we finish the proof of our main theorem and conclude with some discussion.
Let us remind the reader of the statement of our main theorem. We shall prove that if the poset $P$ has a unique maximal and minimal element, then all but a constant number of elements of $2^{[n]}$ can be covered by disjoint copies of $P$.
\begin{proof}[Proof of Theorem \ref{mainthm}]
By Theorem \ref{gridpartitionthm}, there exists a positive integer $d$ such that $[2|P|]^{d}$ can be partitioned into copies of $P$. Also, by Theorem \ref{gridthm} applied to the grid $[2|P|]^{d}$, there exists $n_{0}=O(d(2|P|)^{4d})$ such that for $n>n_{0}$, all but at most $24^{d}(2|P|)^{2d^{2}}$ elements of $2^{[n]}$ can be covered by disjoint copies of $[2|P|]^{d}$.
Setting $c(P)=\max\{2^{n_{0}},24^{d}(2|P|)^{2d^{2}}\}$, we get that all but at most $c(P)$ elements of $2^{[n]}$ can be covered by disjoint copies of $P$ for all positive integers $n$.
\end{proof}
Say that a poset $Q$ is \emph{connected} if its comparability graph is connected. The following generalization of Theorem \ref{mainthm} can be easily proved following the same line of ideas. Let us only sketch its proof.
\begin{customthm}{B}
Let $P,Q$ be posets such that $P$ has a unique maximal and minimal element, and $Q$ is connected. Then there exists a constant $c=c(P,Q)$ such that for every positive integer $n$, all but at most $c$ elements of $Q^{n}$ can be covered by disjoint copies of $P$.
Also, if every prime divisor of $|P|$ also divides $|Q|$, then $Q^{n}$ can be partitioned into copies of $P$ for $n$ sufficiently large.
\end{customthm}
\begin{proof}[Sketch proof.]
We are done if we prove the following generalization of Theorem \ref{gridthm}.
\begin{customthm}{\protect\NoHyper\ref{gridthm}\protect\endNoHyper$^{+}$}
Let $P$ be a grid and let $Q$ be a connected poset. Then there exists $c=c(P,Q)$ such that the following holds. For every positive integer $n$, there exists $S\subset Q^{n}$ such that $Q^{n}\setminus S$ can be partitioned into copies of $P$ and $|S|\leq c$.
Also, if every prime divisor of $|P|$ also divides $|Q|$, then $Q^{n}$ can be partitioned into copies of $P$ for $n$ sufficiently large.
\end{customthm}
The proof of this theorem easily follows from the combination of Corollary \ref{gridcor} and the following generalization of Theorem \ref{chainpartition}, which appeared as Theorem 5.1 in \cite{me}.
\begin{customthm}{\protect\NoHyper\ref{chainpartition}\protect\endNoHyper$^{+}$}(\cite{me}) Let $Q$ be a connected poset and let $c$ be a positive integer. If $n$ is sufficiently large, then $Q^{n}$ can be partitioned into chains $C_{1},...,C_{r}$ such that $c\leq |C_{1}|< 2c$ and $|C_{2}|=...=|C_{r}|=c$.
\end{customthm}
Again, this theorem is stated in \cite{me} without the additional condition that $c\leq |C_{1}|< 2c$, but the proof appearing in \cite{me} yields this slightly stronger result.
\end{proof}
Conjecture \ref{mainconjecture} is still open in the case where $P$ does not have a unique maximal or minimal element. However, we believe that an approach similar to the one presented in this paper may lead to a proof in full generality. Let us propose such an approach.
Let $S_{2k}$ denote the poset on $2k$ elements with partial ordering $\preceq$, whose elements can be partitioned into two $k$ element sets $A_{0},A_{1}$ such that $A_{0}$ and $A_{1}$ are antichains, and every element of $A_{0}$ is $\preceq$-smaller than every element of $A_{1}$.
\begin{enumerate}
\item Show that for fixed $k$, if $n$ is sufficiently large, then $2^{[n]}$ can be partitioned into copies of $S_{2k}$ and a chain of size at least $k$.
\item\label{step2} Show that if $P$ is a poset, then there exist positive integers $m$ and $t$ such that the family of copies of $P$ in $S_{2|P|}^{m}$ has a $t$-partition and a $(1\mod t)$-partition. This would imply that $S_{2|P|}^{n}$ can be partitioned into copies of $P$ for $n$ sufficiently large.
\item Show that for positive integers $k$ and $l$, if the positive integers $a$ and $b$ are sufficiently large, then $S_{4kl}\times [a]\times [b]$ can be partitioned into copies of $S_{2k}\times S_{2l}$. This would imply a statement similar to Lemma \ref{gridintogrid}, saying that the Cartesian product $S_{2^{d}k_{1}...k_{d}}\times [a_{1}]\times...\times [a_{2d-2}]$ can be partitioned into copies of $S_{2k_{1}}\times...\times S_{2k_{d}}$, if $a_{1},...,a_{2d-2}$ are sufficiently large.
\end{enumerate}
These three steps, if proven, imply Conjecture \ref{mainconjecture} in a similar fashion as the combination of Theorems \ref{chainpartition}, \ref{gridpartitionthm} and \ref{gridthm} implies Theorem \ref{mainthm}. The advantage of this approach is that after proving step \ref{step2}, which seems to be the most straightforward of the three, we can forget about the poset $P$ and work with the family of posets $\{S_{2k}\}_{k=1,2,...}$ instead.
% A short note on short pants (arXiv:1304.7515)
\section{Introduction} \label{introduction}
A pants decomposition of a hyperbolic surface is a maximal collection of disjoint simple closed geodesics, which, as its name indicates, decomposes the surface into three holed spheres or pairs of pants. In the case of closed surfaces of genus $g\geq 2$, a pants decomposition contains $3g-3$ curves which decompose the surface into $2g-2$ pairs of pants. Any surface admits an infinite number of pants decompositions and even up to homeomorphism the number of different types of pants decomposition grows quickly (roughly like $g!$). Bers proved that there exists a constant ${\mathcal B}_g$ which only depends on the genus $g$ such that any closed hyperbolic surface of genus $g$ has a pants decomposition with all curves of length less than ${\mathcal B}_g$.
The first notable step in the direction of quantifying ${\mathcal B}_g$ was obtained by Buser \cite{BuserHab}, where upper bounds of order $g \log g$ and lower bounds of order $\sqrt{g}$ were established. The first upper bounds linear in $g$ were obtained by Buser and Sepp\"al\"a \cite{BuserSeppala92} and Buser extended these bounds to the case of variable curvature \cite{BuserBook}. The best bounds known to date \cite[Th. 5.1.4]{BuserBook} are $6\sqrt{3\pi} (g-1)$ so the best known linear factor is $ \approx 18.4$.
It should also be noted that the direct method of computing the optimal constant in each genus seems out of reach as the only known constant is ${\mathcal B}_2$, a result of Gendulphe \cite{GendulpheBers}.
The goal of this note is to offer a short proof of a linear upper bound which provides a slight improvement on previously known bounds.
\begin{theorem}\label{thm:main}
Every closed hyperbolic surface of genus $g\geq 2$ has a pants decomposition with all curves of length at most
$$
4\pi (g-1) + 4 R_g
$$
where $R_g$ is a logarithmic function of $g$ which can be taken to be
$$
R_g= {\,\rm arccosh}\frac{1}{\sqrt{2} \sin\frac{\pi}{12g-6}}< \log(4g-2) + {\,\rm arcsinh} \,1.
$$
\end{theorem}
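The inequality bounding $R_g$ stated in the theorem can be verified numerically (a sanity check, not part of the proof):

```python
import math

def R(g):
    # R_g = arccosh(1 / (sqrt(2) * sin(pi / (12g - 6))))
    return math.acosh(1.0 / (math.sqrt(2) * math.sin(math.pi / (12 * g - 6))))

# Check R_g < log(4g - 2) + arcsinh(1) for a range of genera.
for g in range(2, 1001):
    assert R(g) < math.log(4 * g - 2) + math.asinh(1.0)
```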
The theorem provides an improvement on the factor in front of the genus from $\approx 18.4$ to $\approx 12.6$. The true growth rate of ${\mathcal B}_g$ remains unknown. It follows from the bounds in the closed case that surfaces with $n$ cusps and genus $g$ also have short pants decompositions, where the bounds depend on $n$ and $g$ this time. For fixed genus and growing number of cusps, the optimal constants are known to grow like $\sqrt{n}$ (see \cite{Balacheff-Parlier-Sabourau, Balacheff-Parlier}), which seems to indicate that the growth rate for closed surfaces might be of order $\sqrt{g}$. However, if one considers sums of lengths of curves in a pants decomposition instead of the maximum length, then the case of cusps is very different from the genus case (compare \cite{Balacheff-Parlier-Sabourau, Guth-Parlier-Young}).
{\bf Acknowledgements.}
Some of the ideas in this paper came while I was preparing a master class at the Schr\"odinger Institute in Vienna during the special program on ``Teichm\"uller Theory", Winter 2013, and I thank the organizers for inviting me. I'd also like to thank Robert Young for his helpful comments.
\section{Preliminaries} \label{preliminaries}
To a curve $\gamma$, or a free homotopy class $[\gamma]$ of curves, on a topological surface $\Sigma$ we associate a length function which assigns to a hyperbolic structure $S$ on $\Sigma$ the length $\ell_{S}(\gamma)$ of the unique closed geodesic in $[\gamma]$. A first tool that we shall use is the following lemma, which in particular will allow us to restrict the proof of the main theorem to the case of surfaces with systole of length at least $2 {\,\rm arcsinh} \, 1$.
\begin{lemma}[Length expansion lemma]\label{lem:LE} Let $\Sigma$ be a topological surface with $n > 0$ boundary curves $\gamma_1,\hdots,\gamma_n$. For any hyperbolic surface $S \cong \Sigma$ and any
$(\varepsilon_1,\hdots,\varepsilon_n) \in ({\mathbb R}^+)^n\setminus \{0\}$ there exists a hyperbolic surface $S'\cong \Sigma$ with
$$\ell_{S'}(\gamma_1)=\ell_S(\gamma_1)+\varepsilon_1,\hdots,\ell_{S'}(\gamma_n)=\ell_S(\gamma_n)+\varepsilon_n$$
and such that any non-trivial simple closed curve $\gamma\subset \Sigma$ satisfies
$$
\ell_{S'}(\gamma)>\ell_{S}(\gamma).
$$
\end{lemma}
This result seems to have been known for a long time, as it is claimed in \cite{ThurstonSpine} (also see \cite{ParlierLengths} for a direct proof and \cite{PapaTheretShort} for a stronger version).
The following result, due to Bavard \cite{BavardDisque}, is sharp.
\begin{lemma}[Marked systoles]\label{lem:MS} For any $x\in S$, where $S$ is a closed hyperbolic surface of genus $g$, there exists a geodesic loop $\delta_x$ based at $x$ such that
$$
\ell(\delta_x) \leq 2 {\,\rm arccosh} \frac{1}{2 \sin \frac{\pi}{12g-6}}.
$$
\end{lemma}
What Bavard actually proves is that the above value is a sharp bound on the diameter of the largest embedded open disk of the surface. A weaker version of this lemma can be obtained by comparing the area of an embedded disk to the area of the surface. The area of an embedded disk $D$ of radius $r$ on a hyperbolic surface is the same as the area of such a disk in the hyperbolic plane, so
$$
{\rm area}(D) = 2\pi (\cosh r - 1).
$$
Comparing this to ${\rm area}(S) = 4\pi (g-1)$ shows
$$
r < 2 \log(2g-1 + \sqrt{2g(2g-2)}) < 2 \log(4g-2).
$$
This weaker bound can be found in \cite[Lemma 5.2.1]{BuserBook}, but note that the order of growth of this bound is the same as in Bavard's result.
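The arithmetic behind this weaker bound can be checked directly: the area comparison gives $\cosh r\leq 2g-1$, and ${\,\rm arccosh}(2g-1)=\log(2g-1+\sqrt{2g(2g-2)})$. A quick numeric sketch:

```python
import math

for g in range(2, 101):
    # area comparison: 2*pi*(cosh r - 1) <= 4*pi*(g - 1)  =>  cosh r <= 2g - 1
    r_max = math.acosh(2 * g - 1)
    # arccosh(x) = log(x + sqrt(x^2 - 1)), and (2g-1)^2 - 1 = 2g(2g-2)
    assert abs(r_max - math.log(2 * g - 1 + math.sqrt(2 * g * (2 * g - 2)))) < 1e-9
    assert r_max < 2 * math.log(4 * g - 2)
```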
Consider a hyperbolic surface $S$ possibly with geodesic boundary. In the free homotopy class of a simple closed geodesic loop $\gamma_x$ based at a point $x\in S$ lies a unique simple closed geodesic $\gamma$ (possibly a cusp or a boundary geodesic). In the event where $\gamma$ is not a cusp, it will be useful to bound the Hausdorff distance between $\gamma$ and $x$.
\begin{lemma}
Let $S,\gamma_x,\gamma$ be as above. Then
$$
\max_{y\in \gamma} d(x,y) < {\,\rm arccosh} \left( \cosh\frac{\ell(\gamma_x)}{2} \coth \frac{\ell(\gamma)}{2}\right).
$$
\end{lemma}
\begin{proof}
Note that $\gamma_x$ and $\gamma$ bound a cylinder that can be cut into two tri-rectangles with consecutive sides of length $\sfrac{\ell(\gamma)}{2}$ and $d(\gamma_x,\gamma)$ as in figure \ref{fig:loop}.
\begin{figure}[h]
\leavevmode \SetLabels
\L(.755*.45) $d(\gamma_x,\gamma)$\\
\L(.51*.32) $\frac{\ell(\gamma)}{2}$\\
\L(.51*.84) $\frac{\ell(\gamma_x)}{2}$\\
\endSetLabels
\begin{center}
\AffixLabels{\centerline{\epsfig{file =loop.pdf,width=8.0cm,angle=0}}}
\vspace{-18pt}
\end{center}
\caption{From a geodesic loop to a closed geodesic} \label{fig:loop}
\end{figure}
Hyperbolic trigonometry in the tri-rectangle implies
\begin{equation}\label{eqn:tri}
\sinh d(\gamma_x,\gamma) \sinh \frac{\ell(\gamma)}{2} < 1.
\end{equation}
Now the maximum distance between $x$ and $\gamma$ is at most the length of the diagonal of the tri-rectangle. By hyperbolic trigonometry in one of the right-angled triangles bounded by this diagonal, we obtain for all $y\in \gamma$:
$$
\cosh d(x,y) \leq \cosh d(\gamma_x,\gamma) \cosh \frac{\ell(\gamma_x)}{2}
$$
which via inequality \eqref{eqn:tri} becomes
\begin{eqnarray*}
d(x,y)& < &{\,\rm arccosh} \left(\cosh \left({\,\rm arcsinh} \frac{1}{\sinh\frac{\ell(\gamma)}{2}}\right) \cosh \frac{\ell(\gamma_x)}{2}\right) \\
&=& {\,\rm arccosh} \left( \cosh\frac{\ell(\gamma_x)}{2} \coth \frac{\ell(\gamma)}{2}\right).
\end{eqnarray*}
\end{proof}
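The simplification in the last step of the proof, $\cosh\left({\,\rm arcsinh}\,\frac{1}{\sinh s}\right)=\coth s$, follows from the identity $\cosh({\,\rm arcsinh}\, t)=\sqrt{1+t^2}$. A quick numerical check (a sketch of ours, not part of the paper):

```python
import math

def lhs(s: float) -> float:
    # cosh(arcsinh(1 / sinh s))
    return math.cosh(math.asinh(1.0 / math.sinh(s)))

def rhs(s: float) -> float:
    # coth s
    return math.cosh(s) / math.sinh(s)

# The two expressions agree on a grid of positive values of s.
for k in range(1, 100):
    s = 0.05 * k
    assert math.isclose(lhs(s), rhs(s), rel_tol=1e-12)
```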
It is the following corollary of these lemmas that we shall use in the sequel. It is obtained by replacing $\ell(\gamma_x)$ by Bavard's bound and $\ell(\gamma)$ by $2 {\,\rm arcsinh} 1$, followed by a simple manipulation.
\begin{corollary}\label{cor:distance}
Let $\gamma_x$ be the shortest geodesic loop based at a point $x$ of a closed surface $S$, and let $\gamma$ be the unique closed geodesic in its free homotopy class. If $\ell(\gamma)\geq 2 {\,\rm arcsinh} 1$, then for all $y\in \gamma$
$$
d(x,y) < R_g:={\,\rm arccosh}\frac{1}{\sqrt{2} \sin\frac{\pi}{12g-6}}.
$$
\end{corollary}
A further small manipulation gives the following rougher upper bound on this distance where the order of growth is more apparent:
$$
R_g < \log (4g-2) + {\,\rm arcsinh} 1.
$$
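This rougher bound is again easy to confirm numerically; the sketch below (ours) evaluates $R_g$ and the bound $\log(4g-2)+{\,\rm arcsinh}\,1$:

```python
import math

def R_g(g: int) -> float:
    # R_g = arccosh(1 / (sqrt(2) * sin(pi / (12g - 6)))), as in the corollary.
    return math.acosh(1.0 / (math.sqrt(2) * math.sin(math.pi / (12 * g - 6))))

def rough_bound(g: int) -> float:
    return math.log(4 * g - 2) + math.asinh(1.0)

# R_g stays below the rougher bound for every genus tested.
for g in range(2, 1000):
    assert R_g(g) < rough_bound(g)
```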
\section{Proof of main theorem}
We begin with any surface $S\in {\mathcal M}_g$ and our goal is to find a pants decomposition of $S$ which contains all simple closed geodesics of $S$ of length $\leq 2 {\,\rm arcsinh} 1$ and which has relatively short length. Recall that all simple closed geodesics of length less than $2 {\,\rm arcsinh} 1$ are disjoint, and it is for this reason that this value appears. Note that $S$ may have a pants decomposition of shorter length which doesn't contain all simple closed geodesics of length $\leq 2 {\,\rm arcsinh} 1$ but we restrict ourselves to searching for those that do. We'll call such pants decompositions {\it admissible} pants decompositions.
As we are only looking among admissible pants decompositions, we can immediately apply Lemma \ref{lem:LE} to deform our surface $S$ into a new surface $S'$ on which all simple closed geodesics have length greater than or equal to $2 {\,\rm arcsinh} 1$ and on which every curve $\gamma$ lying in an admissible pants decomposition satisfies $\ell_{S'}(\gamma)\ge \ell_S(\gamma)$. (If $S$ already had this property, then $S'=S$.)
We now construct algorithmically a pants decomposition of $S'$. The algorithm has two main steps and a fail-safe step.
The algorithm is initiated as follows. Consider $x_1\in S'$ and $\gamma_{x_1}$ the shortest geodesic loop based at $x_1$. We set $\gamma_1$ to be the unique closed geodesic in the same free homotopy class and we cut $S'$ along $\gamma_1$ to obtain a surface with boundary (and possibly disconnected)
$$
S_1:= S'\setminus \gamma_1.
$$
Note that as such $S_1$ is an open surface but we could equivalently treat it as a compact surface with two simple closed geodesic boundary curves by considering its closure (but not its closure {\it inside} $S$). We will proceed in the sequel in a similar way.\\
\underline{Main Step 1}
Choose $x_{k+1}\in S_k$ with $d(x_{k+1},\partial S_k) > R_g$. Consider the shortest geodesic loop $\gamma_{x_{k+1}}$ based at $x_{k+1}$. Observe that, in light of Corollary \ref{cor:distance}, $\gamma_{x_{k+1}}$ is not freely homotopic to any of the boundary curves of $S_k$. Set $\gamma_{k+1}$ to be the unique simple closed geodesic in the same free homotopy class and consider the surface
$$
S'_{k+1}:= S_k\setminus \gamma_{k+1}.
$$
We remove any pair of pants from $S'_{k+1}$ to obtain $S_{k+1}$.
If there is no remaining $x\in S_{k+1}$ with $d(x,\partial S_{k+1}) > R_g$, we proceed to the next main step; otherwise the step is repeated. For further reference we note that all curves created in this step have length at most
$$2 {\,\rm arccosh} \frac{1}{2\sin
\frac{\pi}{12g-6}}$$
and thus in particular have length strictly less than $2R_g$.
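The claim that these curves are strictly shorter than $2R_g$ amounts to comparing $\mathrm{arccosh}$ of $1/(2\sin\frac{\pi}{12g-6})$ and of $1/(\sqrt{2}\sin\frac{\pi}{12g-6})$, which follows from $2>\sqrt{2}$ and the monotonicity of $\mathrm{arccosh}$; a numerical sanity check (our sketch):

```python
import math

def step1_bound(g: int) -> float:
    # Length bound for the curves created in Main Step 1.
    return 2 * math.acosh(1.0 / (2.0 * math.sin(math.pi / (12 * g - 6))))

def two_R_g(g: int) -> float:
    # 2 * R_g with R_g as in the corollary above.
    return 2 * math.acosh(1.0 / (math.sqrt(2) * math.sin(math.pi / (12 * g - 6))))

for g in range(2, 1000):
    assert step1_bound(g) < two_R_g(g)
```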
\underline{Main Step 2}
All $x\in S_k$ now satisfy $d(x,\partial S_k) \leq R_g$. Consider a point $x_{k+1} \in S_k$ such that there are two distinct geodesic paths realizing the distance from $x_{k+1}$ to $\partial S_k$. This provides a non-trivial simple path $c'$ from $\partial S_k$ to itself, where by non-trivial we mean that $c'$ does not cut a disk off $S_k$. In particular, in the free homotopy class of $c'$, with endpoints allowed to glide on the corresponding boundary curves, there is a unique simple geodesic arc $c$ of minimal length, perpendicular to $\partial S_k$ at both endpoints.
There are two possible topological configurations for $c$ depending on whether $c$ is a path between two distinct boundary curves or not (see figure \ref{fig:toptypes} for an illustration).
\begin{figure}[h]
\leavevmode \SetLabels
\L(.14*.79) $\alpha_1$\\
\L(.44*.79) $\alpha_2$\\
\L(.3*.08) $\tilde{\alpha}$\\
\L(.72*1.01) $\alpha$\\
\L(.535*.2) $\tilde{\alpha}_1$\\
\L(.845*.2) $\tilde{\alpha}_2$\\
\L(.3*.74) $c$\\
\L(.65*.3) $c$\\
\endSetLabels
\begin{center}
\AffixLabels{\centerline{\epsfig{file =toptypes.pdf,width=12.0cm,angle=0}}}
\vspace{-18pt}
\end{center}
\caption{The two topological types for path $c$} \label{fig:toptypes}
\end{figure}
\underline{Case 1}: If $c$ is a path between distinct boundary curves $\alpha_1$ and $\alpha_2$, then $c\cup \alpha_1 \cup \alpha_2$ is contained in a unique pair of pants $(\alpha_1,\alpha_2,\tilde{\alpha})$. We set
$$
\gamma_{k+1}:=\tilde{\alpha}
$$
and
$$S'_{k+1}:=S_k\setminus (\alpha_1,\alpha_2,\tilde{\alpha}).
$$
\underline{Case 2}: If $c$ is a path with endpoints on a single boundary curve $\alpha$ then $c\cup \alpha$ is contained in a unique pair of pants $(\alpha,\tilde{\alpha}_1,\tilde{\alpha}_2)$.
If $\tilde{\alpha}_1 \neq \tilde{\alpha}_2$ then we set
$$
\gamma_{k+1}:=\tilde{\alpha}_1, \gamma_{k+2}:=\tilde{\alpha}_2
$$
and
$$S'_{k+2}:=S_k\setminus (\alpha,\tilde{\alpha}_1,\tilde{\alpha}_2).
$$
If $\tilde{\alpha}_1=\tilde{\alpha}_2$ then $(\alpha,\tilde{\alpha}_1,\tilde{\alpha}_2)$ is contained in a one-holed torus $T$ and we set
$$
\gamma_{k+1}:=\tilde{\alpha}_1
$$
and
$$S'_{k+1}:=S_k\setminus T.
$$
The algorithm continues until $\gamma_{3g-3}$ is constructed, i.e., when a full pants decomposition is obtained.\\
\underline{Lengths of curves}
Begin by observing that in both types of steps described above, at each step we have
$$
\ell(\partial S_{k+1}) < \ell(\partial S_{k})+4 R_g.
$$
Indeed: if $S_{k+1}$ is obtained by cutting along a curve as in Step 1, then the length of the curve is strictly shorter than $2R_g$ and the boundary increases by at most twice this length.
If $S_{k+1}$ is obtained as in Step 2, case 1, then the curve $\tilde{\alpha}$ is of length at most
$$\ell(\alpha_1)+\ell(\alpha_2)+4R_g.$$
As $S_{k+1}$ is obtained by removing the pair of pants with curves $\alpha_1, \alpha_2$ and $\tilde{\alpha}$, the boundary of $S_{k+1}$ no longer contains $\alpha_1$ and $\alpha_2$ and the boundary length increases by at most $4R_g$. In Step 2, case 2, one argues similarly.
In order to ensure that the length of the constructed curves does not surpass the desired length, the algorithm contains a fail-safe step. \\
\underline{Fail-safe step}
If at any step $\ell(\partial S_k) \geq 4\pi (g-1)$, then the next curve is constructed following a slightly different procedure, which we describe here. First observe that if
$$\ell(\partial S_k) \geq 4\pi (g-1)$$ with $S_k$ obtained as above, then
$$
\ell(\partial S_k) < 4\pi (g-1) +4 R_g
$$
as at every step boundary length cannot increase by more than $4 R_g$.
We consider an $r$-neighborhood of $\partial S_k$. For small enough $r$, this neighborhood is a union of cylinders around the boundary curves. We let $r$ grow until the topology changes, i.e., until the cylinders first bump into each other. We choose one of the geodesic paths $c$ created at the bumping point.
Here we use an area argument to bound the length of $c$. The area of an embedded $r$-neighborhood of $\partial S_k$ is at most that of the surface, thus
$$
\ell(\partial S_k) \sinh r < 4\pi(g-1).
$$
By assumption this implies
$$
r< {\,\rm arcsinh} 1
$$
and thus
$$
\ell(c) < 2 {\,\rm arcsinh} 1.
$$
As before, there are two topological types for $c$, case 1 and case 2 as above. In both cases, we borrow the notation from above, but we argue slightly differently for the lengths.
In case 1 we have a pair of pants with boundary curves $\alpha_1,\alpha_2$ and $\tilde{\alpha}$, which we decompose into two right-angled hexagons. By the hexagon relations, and since $\cosh \ell(c) < \cosh (2 {\,\rm arcsinh}\, 1) = 3$, we have
\begin{eqnarray*}
\cosh \frac{\ell(\tilde{\alpha})}{2}&=& \sinh \frac{\ell(\alpha_1)}{2}\sinh \frac{\ell(\alpha_2)}{2} \cosh \ell(c) - \cosh \frac{\ell(\alpha_1)}{2}\cosh \frac{\ell(\alpha_2)}{2}\\
&<& 3\sinh \frac{\ell(\alpha_1)}{2}\sinh \frac{\ell(\alpha_2)}{2} - \cosh \frac{\ell(\alpha_1)}{2}\cosh \frac{\ell(\alpha_2)}{2}\\
&<& \cosh\left( \frac{\ell(\alpha_1)}{2}+ \frac{\ell(\alpha_2)}{2} \right).
\end{eqnarray*}
From this
$$
\ell(\tilde{\alpha}) < \ell(\alpha_1)+ \ell(\alpha_2).
$$
So at this step we have
$$
\ell(\partial S_{k+1}) < \ell(\partial S_{k}).
$$
A similar (and easier) argument shows that the same conclusion holds in case 2 by looking at a pentagon decomposition of the pants $(\alpha,\tilde{\alpha}_1,\tilde{\alpha}_2)$.
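The chain of inequalities above uses only $\cosh \ell(c) < \cosh(2{\,\rm arcsinh}\,1)=3$ and the addition formula $\cosh(A+B)=\cosh A\cosh B+\sinh A\sinh B$. A numerical check of the chain (a sketch of ours, with made-up sample values):

```python
import math

def chain_holds(a: float, b: float, c: float) -> bool:
    # a, b: half-lengths of alpha_1 and alpha_2; c: length of the arc,
    # assumed < 2*arcsinh(1) so that cosh(c) < 3.
    first = math.sinh(a) * math.sinh(b) * math.cosh(c) - math.cosh(a) * math.cosh(b)
    second = 3 * math.sinh(a) * math.sinh(b) - math.cosh(a) * math.cosh(b)
    third = math.cosh(a + b)
    return first < second < third

# cosh(2*arcsinh(1)) = 1 + 2*sinh(arcsinh 1)^2 = 3.
assert math.isclose(math.cosh(2 * math.asinh(1.0)), 3.0, rel_tol=1e-12)
samples = [(0.5, 0.8, 1.7), (2.0, 3.0, 1.0), (0.1, 4.0, 1.76)]
assert all(chain_holds(a, b, c) for (a, b, c) in samples)
```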
Note that after a fail-safe step the boundary length decreases, so it is possible that we return to Main Step 2; otherwise we continue to create curves while decreasing the total boundary length.
All the curves $\gamma_k$ created are, at some point of the algorithm, boundary curves of a surface $S'_k$ from Main Step 1, of a surface $S_k$ from Main Step 2, or of a surface $S_k$ from the fail-safe step. As such, their lengths are all bounded by the total boundary lengths of these surfaces. Thus
$$
\ell(\gamma_k) < 4\pi (g-1) +4 R_g
$$
and the theorem is proved.
% End of arXiv:1304.7515, "A short note on short pants" (Geometric Topology; Differential Geometry), https://arxiv.org/abs/1304.7515.
% arXiv:2206.13158, https://arxiv.org/abs/2206.13158
\title{Sharp inequalities involving the Cheeger constant of planar convex sets}
\begin{abstract}
We are interested in finding sharp bounds for the Cheeger constant via different geometrical quantities, such as the area $|\cdot|$, the perimeter $P$, the inradius $r$, the circumradius $R$, the minimal width $w$ and the diameter $d$. In particular, we provide new sharp inequalities between these quantities for planar convex bodies and we provide new conjectures based on numerical simulations. In particular, we completely solve the Blaschke--Santal\'o diagrams describing all the possible inequalities involving the Cheeger constant, the perimeter and the inradius, then, the Cheeger constant, the diameter and the inradius, and, finally, the Cheeger constant, the circumradius and the inradius.
\end{abstract}
\section{Introduction}
Let $\Omega$ be a bounded subset of $\mathbb{R}^2$. The Cheeger constant of $\Omega$, introduced by Jeff Cheeger in \cite{cheger} in connection with the first eigenvalue of the Laplacian, is defined as
\begin{equation}
\label{chee}
h(\Omega):=\inf\left\{\frac{P(E)}{\abs{E}} \, : \, E \, \text{measurable and} \, E\subseteq\Omega\right\},
\end{equation}
where $P(E)$ is the perimeter of $E$ in the sense of De Giorgi and $\abs{E}$ is the area of $E$.
The infimum in \eqref{chee} is achieved when $\Omega$ has Lipschitz boundary (see \cite{parini}), and a set $E$ that realizes it is called a \emph{Cheeger set} of $\Omega$, denoted by $C_\Omega$. For the properties of the Cheeger constant and an introductory survey, see for example \cite{caselles,algo,parini}. In particular, in the case of planar convex sets, the authors of \cite{caselles} prove that the Cheeger set is unique, while the authors of \cite{algo} give a characterization of the Cheeger constant.
The problem of finding the Cheeger constant of a domain has been widely considered and has several applications (see \cite{parini} for a general overview). One of the possible interpretations of the Cheeger constant can be found for instance in the context of maximal flow and minimal cut problems: in particular, in \cite{strang}, the authors solve the problem of computing exact continuous optimal curves and surfaces for image segmentation in $2D$ and for $3D$ reconstruction from a stereo image pair, with applications in medical image processing (see \cite{appleton}). The Cheeger problem appears also in the study of plate failure under stress (see \cite{keller}).
For this reason, it is useful to have estimates of the Cheeger constant in terms of geometric quantities that can be easily computed.
In the present paper, we are interested in describing all possible inequalities involving the Cheeger constant of a given open, bounded and convex set $\Omega\subset\mathbb{R}^2$ and two among the following geometrical quantities: the area $\abs{\Omega}$, the perimeter $P(\Omega)$, the inradius $r(\Omega)$, the circumradius $R(\Omega)$, the minimal width $\omega(\Omega)$ and the diameter $d(\Omega)$. So, we aim to study the associated Blaschke--Santal\'o diagrams of these triplets and collect them all in one single paper together with new conjectures.
A Blaschke--Santal\'o diagram is a tool that allows one to visualize all the possible inequalities between three geometric quantities. More precisely, if we consider three shape functionals $(J_1,J_2,J_3)$, we want to find a system of inequalities describing the set
$$\{(J_1(\Omega), J_2(\Omega))|\ J_3(\Omega)=1, \, \Omega \in \mathcal{K}^2\},$$
where we denote by $\mathcal{K}^2$ the class of non-empty sets in $\mathbb{R}^2$ that are open, bounded and convex.
This kind of diagram was introduced by Blaschke in \cite{blaschke2}, in order to investigate all possible relations between the volume, the surface area and the integral mean curvature in the class of compact convex sets in $\mathbb{R}^3$.
Following the idea of Blaschke, Santal\'o in \cite{santalo} proposed the study of these diagrams for all the triplets of the following geometrical quantities: area, perimeter, inradius, circumradius, minimal width and diameter; these diagrams were studied under the constraint of convexity and six of them are still not completely solved. We refer to the introduction of \cite{delyon2} for an accurate state of the art.
Moreover, for classical results about Blaschke--Santal\'o diagram, we refer for example to \cite{cifre3,cifre4,cifre2,cifre_salinas, cifre, cifre_gomis,santalo} and for more recent results we recall \cite{MR3653891,branden,delyon,delyon2,bs-numerics, FL21, LZ}.
In \cite{ftJMAA} and \cite{ftouhi_cheeger} the author studies the Blaschke--Santal\'o diagram involving the Cheeger constant.
More precisely, in \cite{ftJMAA}, the Blaschke--Santal\'o diagram involving the Cheeger constant, the area and the inradius is studied, and it is proved that, if $\Omega$ is in $\mathcal{K}^2$, then
\begin{equation}\label{eq:hra}
\frac{1}{r(\Omega)}+\frac{\pi r(\Omega)}{|\Omega|}\leq h(\Omega) \leq \frac{1}{r(\Omega)}+\sqrt{\frac{\pi}{|\Omega|}},
\end{equation}
where the upper bound in \eqref{eq:hra} is achieved by (and only by) sets which are homothetic to their form body (see Definition \ref{formbody}), for instance circumscribed sets, while the lower one is achieved by (and only by) stadiums. Then, in \cite{ftouhi_cheeger}, the diagram involving the Cheeger constant, the area and the perimeter is studied, and it is proved that if $\Omega\in\mathcal{K}^2$, then
\begin{equation}\label{eq:hpa}
\frac{P(\Omega)+\sqrt{4\pi|\Omega|}}{2|\Omega|}\leq h(\Omega)\leq \frac{P(\Omega)}{|\Omega|},
\end{equation}
where the upper bound is achieved by any set which is the Cheeger set of itself (in particular stadiums), while the lower one is achieved, for example, by circumscribed polygons. We also recall that the maximization of the Cheeger constant among sets of constant width is studied in \cite{henrot_luc}.
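As a concrete illustration (ours, not from the paper), both \eqref{eq:hra} and \eqref{eq:hpa} can be tested numerically on the unit disk, for which $h=2$ (the disk is its own Cheeger set), and on the unit square, for which $h=2+\sqrt{\pi}$ (computed from the characterization $|\Omega_{-t_\Omega}|=\pi t_\Omega^2$, $h=1/t_\Omega$ recalled in Section \ref{sec2}); the square is circumscribed, so it attains the lower bound in \eqref{eq:hpa} and the upper bound in \eqref{eq:hra}:

```python
import math

def bounds_hra(r, A):
    # (eq:hra): 1/r + pi*r/A <= h <= 1/r + sqrt(pi/A)
    return 1 / r + math.pi * r / A, 1 / r + math.sqrt(math.pi / A)

def bounds_hpa(P, A):
    # (eq:hpa): (P + sqrt(4*pi*A)) / (2*A) <= h <= P/A
    return (P + math.sqrt(4 * math.pi * A)) / (2 * A), P / A

eps = 1e-9
# Unit disk: r = 1, A = pi, P = 2*pi, h = 2 (both bounds are attained).
h_disk = 2.0
lo, hi = bounds_hra(1.0, math.pi)
assert lo - eps <= h_disk <= hi + eps
lo, hi = bounds_hpa(2 * math.pi, math.pi)
assert lo - eps <= h_disk <= hi + eps
# Unit square: r = 1/2, A = 1, P = 4, h = 2 + sqrt(pi).
h_sq = 2 + math.sqrt(math.pi)
lo, hi = bounds_hra(0.5, 1.0)
assert lo - eps <= h_sq <= hi + eps  # upper bound attained (circumscribed set)
lo, hi = bounds_hpa(4.0, 1.0)
assert lo - eps <= h_sq <= hi + eps  # lower bound attained
```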
Now we state the main results of the paper.
In order to do that, we need to define the following classes of admissible sets (we refer to \cite[Table 2.1]{inequalities_convex} for the associated constraints):
\begin{enumerate}
\item $\displaystyle{\mathcal{K}^2_{r,P}=\{\Omega \in \mathcal{K}^2: \, r(\Omega)=r, \, P(\Omega)=P\}}$, where $P\ge 2\pi r$;
\vspace{2mm}
\item $\mathcal{K}^2_{d,r}=\{\Omega \in \mathcal{K}^2: \, r(\Omega)=r, \, d(\Omega)=d\}$, where $d\ge 2 r$;
\vspace{2mm}
\item $\mathcal{K}^2_{r,R}=\{\Omega \in \mathcal{K}^2: \, r(\Omega)=r, \, R(\Omega)=R\}$, where $R\ge r$;
\vspace{2mm}
\item $\mathcal{K}^2_{\omega,d}=\{\Omega \in \mathcal{K}^2: \, \omega(\Omega)=\omega, \, d(\Omega)=d\}$, where $\omega\le d$;
\vspace{2mm}
\item $\mathcal{K}^2_{R,\omega}=\{\Omega \in \mathcal{K}^2: \, R(\Omega)=R, \, \omega(\Omega)=\omega\}$, where $2R\ge \omega$;
\vspace{2mm}
\item $\mathcal{K}^2_{\omega,P}=\{\Omega \in \mathcal{K}^2: \, \omega(\Omega)=\omega, \, P(\Omega)=P\}$, where $P\ge \pi \omega$;
\vspace{2mm}
\item $\mathcal{K}^2_{A,\omega}=\{\Omega \in \mathcal{K}^2: \, \abs{\Omega}=A, \, \omega(\Omega)=\omega\}$, where $\sqrt{3}A\ge \omega^2$;
\vspace{2mm}
\item $\mathcal{K}^2_{d,R}=\{\Omega \in \mathcal{K}^2: \, d(\Omega)=d, \, R(\Omega)=R\}$, where $\sqrt{3}R\le d< 2R$;
\vspace{2mm}
\item $\mathcal{K}^2_{\omega,r}=\{\Omega \in \mathcal{K}^2: \, \omega(\Omega)=\omega, \, r(\Omega)=r\}$, where $2r< \omega\le 3r$;
\vspace{2mm}
\item $\mathcal{K}^2_{A,R}=\{\Omega \in \mathcal{K}^2: \, \abs{\Omega}=A, \, R(\Omega)=R\}$, where $A\le \pi R^2$;
\vspace{2mm}
\item $\mathcal{K}^2_{R,P}=\{\Omega \in \mathcal{K}^2: \, R(\Omega)=R, \, P(\Omega)=P\}$, where $4 R<P\le 2\pi R$;
\vspace{2mm}
\item $\mathcal{K}^2_{P, d}=\{\Omega \in \mathcal{K}^2: \, P(\Omega)=P, \, d(\Omega)=d\}$, where $2 d<P\leq \pi d$.
\vspace{2mm}
\item $\mathcal{K}^2_{A,d}=\{\Omega \in \mathcal{K}^2: \, \abs{\Omega}=A, \, d(\Omega)=d\}$, where $\pi d^2\ge 4A$.
\end{enumerate}
\newpage
Firstly, let us state the following existence result.
\begin{theorem}\label{th:existence}
The minimization and the maximization problems for the Cheeger constant $h$ admit a solution in each of the classes of sets defined in $(1)$--$(13)$.
\end{theorem}
In the following theorem, we consider the classes of sets for which we are able to explicitly state the solutions of the maximization and minimization problems, which is equivalent to completely solving the corresponding Blaschke--Santaló diagrams. For the precise definitions of the extremal sets mentioned below see Section \ref{ext}, and for the explicit bounds see Propositions \ref{prop_hrP}, \ref{prop_hdr} and \ref{prop_hRr}.
\begin{theorem}\label{th2}
The following results hold
\begin{enumerate}[label=(\roman*)]
\item The maximum and the minimum in $\mathcal{K}^2_{r,P}$ are achieved respectively by circumscribed sets and stadiums.
\vspace{2mm}
\item The maximum in $\mathcal{K}^2_{d,r}$ is achieved by symmetrical two-cup bodies; moreover, there exists $D_0>0$ such that if $d\geq r D_0$ the minimum in $\mathcal{K}^2_{d,r}$ is achieved by symmetrical spherical slices, while, if $d<rD_0$, the minimum is achieved by regular smoothed nonagons.
\vspace{2mm}
\item The maximum and the minimum in $\mathcal{K}^2_{r,R}$ are achieved respectively by two-cup bodies and symmetrical spherical slices.
\end{enumerate}
\end{theorem}
In the following theorem we analyse the classes of sets for which we can partially solve the related Blaschke--Santaló diagrams, namely $\mathcal{K}^2_{\omega,d}$, $\mathcal{K}^2_{R,\omega}$, $\mathcal{K}^2_{\omega, P}$ and $\mathcal{K}^2_{A,\omega}$. See Propositions \ref{prop_hdw}, \ref{prop_hRw}, \ref{prop_hwP} and \ref{prop_hAw} for the explicit bounds.
\begin{theorem}\label{th3}
The following results hold
\begin{enumerate}[label=(\roman*)]
\item If $\omega\le \sqrt{3}/2\, d$, the maximum in $\mathcal{K}^2_{\omega,d}$ is achieved by subequilateral triangles. Moreover, the minimum is achieved by symmetrical spherical slices.
\item The minimum in
$\mathcal{K}^2_{R,\omega}$ is achieved by symmetrical spherical slices, while, if $\omega\le 3/2 R$, the maximum is achieved by subequilateral triangles.
\item The minimum in $\mathcal{K}^2_{\omega,P}$ is achieved by stadiums, while, if $\omega\le \frac{P}{2\sqrt{3}}$, the maximum is achieved by subequilateral triangles.
\item The maximum in $\mathcal{K}^2_{A,\omega}$ is achieved by subequilateral triangles, while, if $\omega\le \frac{2\sqrt{A}}{\sqrt{\pi}}$, the minimum is achieved by stadiums.
\end{enumerate}
\end{theorem}
In the class $\mathcal{K}^2_{d, R}$ we have only solved the maximization problem, while in the class $\mathcal{K}^2_{\omega,r}$ only the minimization one. See Propositions \ref{prop_hRd} and \ref{prop_hRw} for the explicit bounds.
\begin{theorem}\label{th4}
The following results hold
\begin{enumerate}[label=(\roman*)]
\item The maximum in $\mathcal{K}^2_{d,R}$ is achieved by subequilateral triangles.
\item The minimum in $\mathcal{K}^2_{\omega, r}$ is achieved by subequilateral triangles.
\end{enumerate}
\end{theorem}
As for the remaining classes of sets, namely $\mathcal{K}^2_{A,R}, \mathcal{K}^2_{R,P}, \mathcal{K}^2_{P,d}$ and $\mathcal{K}^2_{A,d}$, we are not able to identify the extremal sets. We refer to Propositions \ref{prop_hAR}, \ref{prop_hRP}, \ref{prop_hPd}, \ref{prop_hdA} for bounds that are not sharp in the Blaschke--Santaló sense, meaning that they do not correspond to parts of the boundary of the diagram.
The paper is organized as follows. In Section \ref{sec2} we state the preliminary results, the definitions used throughout the paper and the known inequalities relating the Cheeger constant to one of the geometric quantities taken into consideration. Section \ref{secnum} is dedicated to the description of the numerical methods used to compute the functionals and to approach the Blaschke--Santal\'o diagrams. In Section \ref{sec3} we prove the main theorems and in Section \ref{secult} we state new conjectures. In the final Appendix, for the convenience of the reader, we have collected all the inequalities proved throughout the paper.
\section{Notations and Preliminaries}\label{sec2}
Throughout this article, $\norma{\cdot}$ will denote the Euclidean norm in $\mathbb{R}^2$,
while $x\cdot y$ denotes the standard Euclidean scalar product of $x,y\in\mathbb{R}^2$. We denote by $P(\Omega)$ the perimeter of $\Omega$ and by $|\Omega|$ its area. Moreover, $B_r$ is the ball of radius $r>0$ centered at the origin, while $\mathbb{S}^1$ is the unit sphere in $\mathbb{R}^2$. In the following, we work with the class of sets $\mathcal{K}^2$ defined as
\begin{equation*}
\mathcal{K}^2:=\{ \Omega\:|\: \Omega \;\, \text{is an open, bounded and convex set of }\; \mathbb{R}^2\}\setminus \{\emptyset\}.
\end{equation*}
We also define the following function, known as the cardinal sine,
\begin{equation}\label{sinc}
{\rm sinc}:x\in\mathbb{R}\longmapsto {\rm sinc}(x)=\frac{\sin(x)}{x},
\end{equation}
and we will denote its inverse, whenever it is defined, as ${\rm arcsinc}(x)$.
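Since ${\rm arcsinc}$ has no closed form, in practice it must be evaluated numerically; a minimal sketch of ours, using bisection on $(0,\pi)$, where ${\rm sinc}$ is strictly decreasing from $1$ to $0$:

```python
import math

def sinc(x: float) -> float:
    return math.sin(x) / x if x != 0 else 1.0

def arcsinc(y: float, tol: float = 1e-12) -> float:
    # Invert sinc on (0, pi) by bisection; sinc decreases from 1 to 0 there.
    if not 0.0 < y < 1.0:
        raise ValueError("arcsinc computed here for y in (0, 1)")
    lo, hi = 1e-15, math.pi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sinc(mid) > y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

assert math.isclose(sinc(arcsinc(0.5)), 0.5, rel_tol=1e-9)
```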
\subsection{Classical results and preliminary lemmas}
We provide the classical definitions and results that we need in the following.
\begin{definition}
\label{minksum}
Let $\Omega,K\subset \mathbb{R}^2$ be two convex bounded sets with non-empty interior. We define the \emph{Minkowski sum} $(+)$ and \emph{difference} $(\sim)$ as
\begin{equation*}
\label{sum}
\Omega+K:=\{x+y \, : \, x\in \Omega, \, y\in K\},
\end{equation*}
\begin{equation*}
\label{diff}
\Omega\sim K:=\{x\in \mathbb{R}^2 \, : \, x+K\subseteq \Omega\}.
\end{equation*}
\end{definition}
We recall now the definition of Hausdorff distance.
\begin{definition}
Let $\Omega,K\subset\mathbb{R}^2$ be two non-empty compact sets. We define the Hausdorff distance between $\Omega$ and $K$ as
\begin{equation*}
\label{disth}
d_{\mathcal{H}}(\Omega,K)=\inf \left\{ \varepsilon>0 \; :\; \Omega\subset K+B_{\varepsilon} \ \text{and} \ K\subset \Omega+B_{\varepsilon} \right\}.
\end{equation*}
\end{definition}
Note that, in the case that $\Omega$ and $K$ are convex sets, we have that $d_\mathcal{H}(\Omega,K)=d_\mathcal{H} (\partial \Omega, \partial K)$.
Let $\{\Omega_k\}_{k\in\mathbb N}$ be a sequence of non-empty, open, bounded convex subsets of $\mathbb{R}^2$. We say that $\Omega_k$ converges to $\Omega$ in the Hausdorff sense, and we write
\[
\Omega_k\stackrel{\mathcal H}{\longrightarrow} \Omega
\]
if and only if $d_{\mathcal H}(\Omega_k,\Omega)\to 0$ as $k\to \infty$. We recall that, by the Blaschke selection theorem (see as a reference \cite{schneider}, Theorem $1.8.7$), every bounded sequence of convex sets has a subsequence that converges in the Hausdorff sense to a convex set.
Let us now recall the following definitions:
\begin{definition}
Let $\Omega\in\mathcal{K}^2$. The \emph{distance function from the boundary of} $\Omega$ is the function $ d(\cdot, \partial \Omega):\Omega \to [0,+\infty[$ defined as
$$d(x,\partial\Omega)=\inf_{y\in\partial\Omega}\norma{x-y}.$$
The \emph{inradius} $r(\Omega)$ of $\Omega$ is defined as $$r(\Omega)=\sup_{x\in \Omega} d(x,\partial\Omega).$$
Finally, the \emph{circumradius} $R(\Omega)$ is defined as
$$R(\Omega)= \min_{x\in\Omega}\max_{y\in\partial\Omega} \norma{x-y}.$$
\end{definition}
We need now to introduce the support function of a convex set.
\begin{definition}\label{support}
Let $\Omega\in \mathcal{K}^2$. The \emph{support} function of $\Omega$ is defined as
\begin{equation*}
p_\Omega(y)=\max_{x\in \Omega} (x\cdot y), \qquad y\in \mathbb{R}^2.
\end{equation*}
\end{definition}
In this paper, we will also consider the minimal width (or thickness) of a convex set, that is to say the minimal distance between two parallel supporting hyperplanes. More precisely, we have
\begin{definition}
Let $\Omega\in\mathcal{K}^2$. The width of $\Omega$ in the direction $y \in \mathbb{S}^1$ is defined as
\begin{equation*}
\omega_{\Omega}(y)=p_\Omega(y)+p_\Omega( -y)
\end{equation*}
and the \emph{minimal width} of $\Omega$ as
\begin{equation*}
\omega(\Omega)=\min\{ \omega_{\Omega}(y)\,|\; y\in\mathbb{S}^{1}\}.
\end{equation*}
\end{definition}
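For a convex polygon the support function is a maximum over the vertices, so the width in any direction, and hence (approximately) the minimal width, is easy to compute. A small sketch of ours, tested on an equilateral triangle of side $1$, whose minimal width is its height $\sqrt{3}/2$:

```python
import math

def support(vertices, y):
    # p_Omega(y) = max of x . y over x in Omega; attained at a vertex.
    return max(vx * y[0] + vy * y[1] for (vx, vy) in vertices)

def width(vertices, y):
    # omega_Omega(y) = p_Omega(y) + p_Omega(-y)
    return support(vertices, y) + support(vertices, (-y[0], -y[1]))

def minimal_width(vertices, n_dirs=20000):
    # Approximate min over directions in the upper half of S^1.
    return min(
        width(vertices, (math.cos(t), math.sin(t)))
        for t in (math.pi * k / n_dirs for k in range(n_dirs))
    )

triangle = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]
assert abs(minimal_width(triangle) - math.sqrt(3) / 2) < 1e-3
```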
We introduce now the inner parallel set of a convex set $\Omega$.
\begin{definition}
\label{inner_parallel}
Let $\Omega$ be an open, bounded and convex set, the \emph{inner parallel set} of $\Omega$ at distance $t\in [0, r(\Omega)]$ is
\begin{equation*}
\Omega_{-t}=\{x\in \Omega \, : \, d(x,\partial\Omega)\ge t\}.
\end{equation*}
\end{definition}
\begin{remark}\label{rem:in}
We remark that, by definition, we have that
$$\Omega_{-t}=\Omega \sim tB_1.$$
Moreover, we observe that for any $y\in \mathbb{S}^1$ and for every $\Omega,K\in \mathcal{K}^2$, one has, see e.g. \cite{schneider},
\begin{equation*}
p_{\Omega\sim K}( y)\le p_\Omega(y)-p_K(y),
\end{equation*}
so, in the case $K=tB_1$, this reads
\begin{equation}\label{diff_parall}
p_{\Omega_{-t}}(y)\le p_\Omega(y)-t.
\end{equation}
Moreover, as it is observed in \cite[Proposition 3.2]{jahn},
one has
\begin{equation}
\label{circpar}
R(\Omega+ K)\le R(\Omega)+R(K)
\end{equation}
and, if $K=tB_1$, equality holds in \eqref{circpar}.
\end{remark}
We are now in a position to prove the following Lemma.
\begin{lemma}\label{lem:diameter_inner_set}
Let $\Omega\in\mathcal{K}^2$. We have for every $ t\in [0,r(\Omega)]$:
\begin{align}
\label{inr}
&r(\Omega_{-t})= r(\Omega)-t, \\
\label{eq:diameter}
&d(\Omega_{-t})\leq d(\Omega)-2t,\\
\label{width}
&\omega (\Omega_{-t})\leq \omega (\Omega)-2t,\\
\label{circumradius}
&R(\Omega_{-t})\leq R(\Omega)-t,\\
\label{perimeter}
&P(\Omega_{-t})\leq P(\Omega)-2\pi t.
\end{align}
\end{lemma}
\begin{proof}
The proof of \eqref{inr} can be found in \cite[Lemma 1.4]{larson}.
Let us now prove \eqref{eq:diameter}.
Let $x_t,y_t\in \Omega_{-t}$ be two diametrical points of $\Omega_{-t}$ (i.e., such that $\|x_t-y_t\|=d(\Omega_{-t})$). We denote by $x,y\in \Omega$ the points corresponding to the intersection of the line containing $x_t$ and $y_t$ with the boundary of $\Omega$. We have
$$d(\Omega)\ge \|x-y\|=\|x-x_t\|+\|x_t-y_t\|+\|y_t-y\|=\|x-x_t\|+d(\Omega_{-t})+\|y_t-y\|\ge d(\Omega_{-t})+2t,$$
where the last inequality is a consequence of the fact that $x_t,y_t\in \Omega_{-t}=\{x\in \Omega\ |\ d(x,\partial \Omega)\ge t\}$. \\
The proof of \eqref{width} follows directly from the definition of width and \eqref{diff_parall}.
We prove now \eqref{circumradius}.
As observed in Remark \ref{rem:in}, and, in particular, by formula \eqref{circpar}, for every $\Omega\in \mathcal{K}^2$, we have that $R(\Omega+tB_1)=R(\Omega)+t$. Thus, we have $$R(\Omega_{-t})= R(\Omega_{-t}+tB_1)-t\leq R(\Omega)-t.$$
The last inequality follows from the inclusion $\Omega_{-t}+t B_1 \subset \Omega$.
For \eqref{perimeter}, we use a similar method as above:
$$P(\Omega_{-t}) = P(\Omega_{-t}+t B_1) - t P(B_1)= P(\Omega_{-t}+t B_1) -2\pi t \leq P(\Omega)-2\pi t.$$
\end{proof}
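For the unit square the inner parallel set at distance $t\in[0,1/2]$ is a square of side $1-2t$, so all five inequalities of Lemma \ref{lem:diameter_inner_set} can be checked by hand; our small numerical sketch:

```python
import math

def square_data(side: float):
    # (inradius, diameter, minimal width, circumradius, perimeter) of a square.
    return (side / 2, side * math.sqrt(2), side, side * math.sqrt(2) / 2, 4 * side)

def lemma_holds(t: float, eps: float = 1e-12) -> bool:
    r0, d0, w0, R0, P0 = square_data(1.0)
    rt, dt, wt, Rt, Pt = square_data(1.0 - 2 * t)
    return (
        abs(rt - (r0 - t)) < eps              # (inr): equality
        and dt <= d0 - 2 * t + eps            # (eq:diameter)
        and wt <= w0 - 2 * t + eps            # (width): equality here
        and Rt <= R0 - t + eps                # (circumradius)
        and Pt <= P0 - 2 * math.pi * t + eps  # (perimeter)
    )

assert all(lemma_holds(0.05 * k) for k in range(10))
```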
Finally, the following Lemma will play a key role in the proof of the main theorem.
\begin{lemma}\label{lem:main}
Let $\Omega\in \mathcal{K}^2$. We assume that there exists a continuous function $g^\Omega:[0,r(\Omega)]\rightarrow \mathbb{R}$ such that
\begin{equation}\label{kawohl}
\forall t\in [0,r(\Omega)],\ \ \ \ |\Omega_{-t}|\leq g^\Omega(t),\ \ \ \text{(resp. $|\Omega_{-t}|\ge g^\Omega(t)$)}.
\end{equation}
We then have the inequality
\begin{equation}\label{kawohl2}
h(\Omega)\ge \frac{1}{t_{g^\Omega}}\ \ \ \ \ \text{(resp. $h(\Omega)\leq \frac{1}{t_{g^\Omega}}$)},
\end{equation}
where $t_{g^\Omega}$ is the smallest (resp. largest) solution to the equation $g^\Omega(t) = \pi t^2$ on $[0,r(\Omega)]$.
\end{lemma}
\begin{proof}
From Theorem 1 in \cite{algo}, we know that there exists a unique $t=t_\Omega>0$ such that $\abs{\Omega_{-t}}=\pi t^2$ and $\displaystyle{h(\Omega)=1/t_\Omega}$. Assume that the first alternative in \eqref{kawohl} holds (the other case is analogous). Since $t\mapsto\abs{\Omega_{-t}}$ is decreasing and $t\mapsto \pi t^2$ is increasing, for every $t<t_\Omega$ we have $g^\Omega(t)\ge \abs{\Omega_{-t}}>\pi t^2$, so every solution of $g^\Omega(t)=\pi t^2$ satisfies $t\ge t_\Omega$; in particular $t_{g^\Omega}\ge t_\Omega$ and \eqref{kawohl2} follows.
\end{proof}
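The characterization of \cite{algo} used in the proof also suggests a simple numerical scheme: bisect $t\mapsto |\Omega_{-t}|-\pi t^2$, which is positive at $0$ and negative at $r(\Omega)$. Our sketch for the unit square, where $|\Omega_{-t}|=(1-2t)^2$ and the solution is $t_\Omega=1/(2+\sqrt{\pi})$, i.e. $h=2+\sqrt{\pi}$:

```python
import math

def cheeger_from_inner_areas(area_inner, r, tol=1e-13):
    # Find the unique t in (0, r] with area_inner(t) = pi*t^2
    # (cf. Theorem 1 of [algo]) and return h = 1/t.
    lo, hi = 0.0, r
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if area_inner(mid) > math.pi * mid ** 2:
            lo = mid
        else:
            hi = mid
    return 1.0 / (0.5 * (lo + hi))

h_square = cheeger_from_inner_areas(lambda t: (1 - 2 * t) ** 2, 0.5)
assert math.isclose(h_square, 2 + math.sqrt(math.pi), rel_tol=1e-9)
```

The same routine recovers $h=2$ for the unit disk, where $|\Omega_{-t}|=\pi(1-t)^2$.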
\begin{tikzpicture}
\draw[-latex] (-.5,0) -- (7,0);
\draw (7,0) node[right] {$t$};
\draw [-latex] (0,-.5) -- (0,5);
\draw [domain=0:7, red ,line width=0.4mm] plot(\x,{.09*(\x)^2});
\draw [domain=0:6.1, green] plot(\x,{sqrt(2.7^2-\x)-.03*\x^2});
\draw [domain=0:6.1] plot(\x,{1.2*(sqrt(2.7^2-\x)-.03*\x^2)+.75});
\path[name path=line1, domain=0:7, variable = \x] plot(\x,{.09*(\x)^2});
\path[name path=line2, domain=0:6.1, variable = \x] plot(\x,{sqrt(2.7^2-\x)-.03*\x^2});
\path[name intersections={of = line1 and line2, by = P1}];
\filldraw[black] (P1) circle(1.5pt);
\path[name path=line3, domain=0:6.1, variable = \x] plot(\x,{1.2*(sqrt(2.7^2-\x)-.03*\x^2)+.75});
\path[name intersections={of = line1 and line3, by = P2}];
\filldraw[black] (P2) circle(1.5pt);
\path[name path = line4] (P1) --++ (-90:3);
\path[name path = line5] (0,0) --++ (0:11);
\path[name intersections={of = line4 and line5, by = P3}];
\draw[black, dashed] (P1) -- (P3);
\draw (P1 |- 0,0) --++ (-90:0.1);
\draw[black] (P1 |- 0,0) node[below] {$\frac{1}{h(\Omega)}$};
\path[name path = line6] (P2) --++ (-90:5);
\path[name intersections={of = line5 and line6, by = P4}];
\draw[black, dashed] (P2) -- (P4);
\draw[black] (P4 |- 0,0) --++ (-90:0.1);
\draw[black] (P4 |- 0,0) node[below] {$t_{g^\Omega}$};
\path[name path = line7] (0,0) --++ (90:7);
\path[name intersections={of = line2 and line7, by = P5}];
\draw[black] (P5) --++ (0:0.1);
\draw[black] (P5) --++ (180:0.1);
\draw (P5) node[left] {$|\Omega|$};
\path[name intersections={of = line3 and line7, by = P6}];
\draw[black] (P6) --++ (0:0.1);
\draw[black] (P6) --++ (180:0.1);
\draw (P6) node[left] {$g^{\Omega}(0)$};
\path[name intersections={of = line2 and line5, by = P7}];
\draw[black] (P7) --++ (90:0.1);
\draw[black] (P7) --++ (-90:0.1);
\draw (P7) node[below] {$r$};
\draw[red] (8.5,3.5)-- (9,3.5);
\draw(9,3.5) node[right] {Graph of the function $t\longmapsto \pi t^2$.};
\draw (8.5,3) -- (9,3);
\draw(9,3) node[right] {Graph of the function $t\longmapsto g^\Omega(t)$.};
\draw [green] (8.5,2.5) -- (9,2.5);
\draw(9,2.5) node[right] {Graph of the function $t\longmapsto |\Omega_{-t}|$.};
\draw (8.2,2.2) -- (8.2,3.8);
\draw (8.2,2.2) -- (15,2.2);
\draw (15,2.2) -- (15,3.8);
\draw (8.2,3.8) -- (15,3.8);
\end{tikzpicture}
\subsection{Inequalities relating the Cheeger constant to one geometric quantity}
In the following paragraph we recall some known inequalities relating the Cheeger constant to a single one of the geometric quantities under consideration, obtained by simply combining classical results.
\begin{proposition}\label{two}
Let $\Omega\in\mathcal{K}^2$. We have:
\begin{enumerate}
\item $h(\Omega)\ge 2\sqrt{\frac{\pi}{|\Omega|}}$;
\item $h(\Omega)\ge\frac{4\pi}{P(\Omega)}$;
\item $h(\Omega)\geq \frac{4}{d(\Omega)} $ ;
\item $\frac{1}{r(\Omega)}\leq h(\Omega)\leq \frac{2}{r(\Omega)}$;
\item $h(\Omega)\geq \frac{2}{R(\Omega)} $.
\end{enumerate}
In $(1)-(2)-(3)-(5)$ the equality is achieved by balls, while the upper bound in $(4)$ is achieved by balls and the lower bound is attained in the limit by a sequence of stadiums degenerating to a segment.
\end{proposition}
\begin{proof}
\begin{enumerate}
\item We have, using the classical isoperimetric inequality $P(E)\ge 2\sqrt{\pi|E|}$,
\begin{equation}\label{h1}
h(\Omega)= \inf_{\underset{E\subset \Omega,\ |E|>0}{E\text{ is measurable }}}\frac{P(E)}{|E|}\ge \inf_{\underset{E\subset \Omega,\ |E|>0}{E\text{ is measurable }}} \frac{2\sqrt{\pi}\sqrt{|E|}}{|E|}= \inf_{\underset{E\subset \Omega,\ |E|>0}{E\text{ is measurable }}} \frac{2\sqrt{\pi}}{\sqrt{|E|}}= \frac{2\sqrt{\pi}}{\sqrt{|\Omega|}}.
\end{equation}
\item Using \eqref{h1} and the classical isoperimetric inequality, we have
$$h(\Omega)\ge \frac{2\sqrt{\pi}}{\sqrt{|\Omega|}} \ge \frac{4\pi}{P(\Omega)}. $$
\item Using \eqref{h1} and the isodiametric inequality $|\Omega|\leq \frac{\pi}{4}d(\Omega)^2$, we have
$$h(\Omega)\ge \frac{2\sqrt{\pi}}{\sqrt{|\Omega|}} \ge \frac{2\sqrt{\pi} }{\sqrt{\frac{\pi}{4}d(\Omega)^2}}=\frac{4}{d(\Omega)}.$$
\item For the upper bound, we have
$$h(\Omega)= \inf_{\underset{E\subset \Omega,\ |E|>0}{E\text{ is measurable }}}\frac{P(E)}{|E|}\leq \frac{P(D_{r(\Omega)})}{|D_{r(\Omega)}|}= \frac{2}{r(\Omega)},$$
where $D_{r(\Omega)}$ is a ball of radius $r(\Omega)$ inscribed in $\Omega$.
\\
For the lower bound, we have
$$h(\Omega)= \inf_{\underset{E\subset \Omega,\ |E|>0}{E\text{ is measurable }}}\frac{P(E)}{|E|} \ge \inf_{\underset{E\subset \Omega,\ |E|>0}{E\text{ is measurable }}}\frac{1}{r(E)}\ge \frac{1}{r(\Omega)},$$
where we use the inequalities $|E|<r(E)P(E)$ (see \cite{inequalities_convex}) and $r(E)\leq r(\Omega)$.
\item Using \eqref{h1} and the inequality $|\Omega|\leq \pi R(\Omega)^2$ (see \cite{inequalities_convex}), we have
$$h(\Omega)\ge \frac{2\sqrt{\pi}}{\sqrt{|\Omega|}} \ge \frac{2}{R(\Omega)}.$$
\end{enumerate}
\end{proof}
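These five bounds can be checked at a glance on the unit ball, where $h=2$, $|\Omega|=\pi$, $P=2\pi$, $d=2$ and $r=R=1$; the following short Python snippet (ours, for illustration only) verifies that $(1)$, $(2)$, $(3)$, $(5)$ and the upper part of $(4)$ hold with equality there.

```python
import math

# Unit ball: h = 2, |Omega| = pi, P = 2*pi, d = 2, r = R = 1.
h, area, per, diam, inr, circ = 2.0, math.pi, 2 * math.pi, 2.0, 1.0, 1.0

lower_bounds = [
    2 * math.sqrt(math.pi / area),  # (1)
    4 * math.pi / per,              # (2)
    4 / diam,                       # (3)
    1 / inr,                        # (4), lower part
    2 / circ,                       # (5)
]
upper_bound = 2 / inr               # (4), upper part

assert all(h >= b - 1e-12 for b in lower_bounds)
assert h <= upper_bound + 1e-12
```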
\subsection{Extremal sets and their properties} \label{ext}
In this Section, we describe special planar shapes that appear in the statement of the main results.
Firstly, we recall the definition of the form body of a convex set $\Omega$, following \cite{schneider}. A point $x\in\partial\Omega$ is called \emph{regular} if the supporting hyperplane at $x$ is uniquely determined. The set of all regular points of $\partial\Omega$ is denoted by ${\rm reg}(\Omega)$.
We also let $U(\Omega)$ denote the set of all outward pointing unit normals to $\partial\Omega$ at points of ${\rm reg}(\Omega)$.
\begin{definition}\label{formbody}
The form body $\Omega^\star$ of a set $\Omega\in\mathcal{K}^2$ is
defined as
$$\Omega^\star =\bigcap_{u\in U(\Omega)} \{x\in \mathbb{R}^2: \, (x,u)\le 1\}.$$
\end{definition}
\begin{definition}
\label{circumscribed}
A circumscribed set is a set which is homothetic to its form body.
\end{definition}
We observe that a polygon whose incircle touches all its sides is a particular circumscribed set.
\begin{definition}\label{stadium}
A \emph{stadium} $\mathcal{R}$ is defined as the convex hull of the union of two balls in $\mathbb{R}^2$ with the same radius (see Figure \ref{fig:stad}).
\end{definition}
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\draw [line width=0.4pt,dashed] (-2,0) circle (1cm);
\draw [line width=0.4pt, dashed] (2,0) circle (1cm);
\draw [line width=2pt] (-2,1)-- (2,1);
\draw [line width=2pt] (-2,-1)-- (2,-1);
\draw [shift={(-2,0)},line width=2pt] plot[domain=1.5707963267948966:4.71238898038469,variable=\t]({1*1*cos(\t r)+0*1*sin(\t r)},{0*1*cos(\t r)+1*1*sin(\t r)});
\draw [shift={(2,0)},line width=2pt] plot[domain=-1.5707963267948966:1.5707963267948966,variable=\t]({1*1*cos(\t r)+0*1*sin(\t r)},{0*1*cos(\t r)+1*1*sin(\t r)});
\end{tikzpicture}
\hspace{-2em}\caption{Stadium} \label{fig:stad}
\end{figure}
\begin{definition}\label{def:slice}
The \emph{symmetrical spherical slice} $\mathcal{S}$ of diameter $d$ and width $\omega$ is the convex set obtained by the intersection of a ball of radius $d/2$ and a strip of width $\omega<d$ centered at the center of the ball (see Figure \ref{fig:slice}).
\end{definition}
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=0.7]
\draw [line width=0.4pt,dashed] (0,0) circle (3cm);
\draw [shift={(0,0)},line width=2pt] plot[domain=2.8017557441356713:3.481429563043915,variable=\t]({1*3*cos(\t r)+0*3*sin(\t r)},{0*3*cos(\t r)+1*3*sin(\t r)});
\draw [shift={(0,0)},line width=2pt] plot[domain=-0.3398369094541218:0.339836909454122,variable=\t]({1*3*cos(\t r)+0*3*sin(\t r)},{0*3*cos(\t r)+1*3*sin(\t r)});
\draw [line width=2pt] (-2.8284271247461903,1)-- (2.82842712474619,1);
\draw [line width=2pt] (-2.8284271247461903,-1)-- (2.82842712474619,-1);
\end{tikzpicture}
\caption{Symmetrical spherical slice.}\label{fig:slice}
\end{figure}
\begin{definition}
A \emph{two-cup body} $\mathcal{C}$ is the convex hull of a ball in $\mathbb{R}^2$ with two points that are symmetric with respect to the center of the ball (see Figure \ref{fig:cup}).
\end{definition}
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=0.7]
\draw [line width=0.4pt,dashed] (0,0) circle (2cm);
\draw [line width=2pt] (-4,0)-- (-1,1.7320508075688774);
\draw [line width=2pt] (1,1.7320508075688772)-- (4,0);
\draw [line width=2pt] (4,0)-- (1,-1.7320508075688774);
\draw [line width=2pt] (-1,-1.7320508075688772)-- (-4,0);
\draw [shift={(0,0)},line width=2pt] plot[domain=1.0471975511965976:2.0943951023931957,variable=\t]({1*2*cos(\t r)+0*2*sin(\t r)},{0*2*cos(\t r)+1*2*sin(\t r)});
\draw [shift={(0,0)},line width=2pt] plot[domain=4.1887902047863905:5.235987755982988,variable=\t]({1*2*cos(\t r)+0*2*sin(\t r)},{0*2*cos(\t r)+1*2*sin(\t r)});
\end{tikzpicture}
\caption{Two-cup body.} \label{fig:cup}
\end{figure}
\begin{definition}
A \emph{subequilateral triangle} $T_I$ is an isosceles triangle with two equal angles greater than $\pi/3$.
\end{definition}
The following class of sets (introduced in \cite{yamanouti}, see also \cite{santalo}) represents a way to pass in a continuous manner from the equilateral triangle to the Reuleaux triangle.
\begin{definition}\label{yamanouti}
A \emph{Yamanouti set} $Y$ is obtained from an equilateral triangle by constructing on each side a circular arc centered at the opposite vertex, with radius less than or equal to the side length; the Yamanouti set is the convex hull of the set obtained in this way (see Figure \ref{yama}).
\end{definition}
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=1.5]
\draw [line width=0.4pt,dashed] (-2,0)-- (0,3.4641016151377553);
\draw [line width=0.4pt,dashed] (2,0)-- (0,3.4641016151377553);
\draw [line width=0.4pt,dashed] (2,0)-- (-2,0);
\draw [line width=2pt] (0,3.4641016151377553)-- (-0.8325821135072361,2.533076897813977);
\draw [line width=2pt] (0,3.4641016151377553)-- (0.8325821135072389,2.5330768978139733);
\draw [line width=2pt] (2,0)-- (1.61,1.186549619695696);
\draw [line width=2pt] (2,0)-- (0.7774178864927629,-0.2555249023719164);
\draw [line width=2pt] (-2,0)-- (-0.77741788649276,-0.2555249023719166);
\draw [line width=2pt] (-2,0)-- (-1.61,1.1865496196956953);
\draw [shift={(0,3.4641016151377553)},line width=2pt] plot[domain=4.506350634077912:4.918427326691467,variable=\t]({1*3.8*cos(\t r)+0*3.8*sin(\t r)},{0*3.8*cos(\t r)+1*3.8*sin(\t r)});
\draw [shift={(2,0)},line width=2pt] plot[domain=2.4119555316847165:2.824032224298272,variable=\t]({1*3.8*cos(\t r)+0*3.8*sin(\t r)},{0*3.8*cos(\t r)+1*3.8*sin(\t r)});
\draw [shift={(-2,0)},line width=2pt] plot[domain=0.3175604292915215:0.7296371219050757,variable=\t]({1*3.8*cos(\t r)+0*3.8*sin(\t r)},{0*3.8*cos(\t r)+1*3.8*sin(\t r)});
\end{tikzpicture}
\caption{Yamanouti set}
\label{yama}
\end{figure}
\begin{definition}
A \emph{symmetrical lens} $\mathcal{L}$ is defined as the intersection of two balls of the same radius in $\mathbb{R}^2$ (see Figure $\ref{Symmetrical lens}$).
\end{definition}
\begin{figure}[h]
\centering
\begin{tikzpicture}
\draw [shift={(0,0)},line width=2pt] plot[domain=0.5235987755982988:2.6179938779914944,variable=\t]({1*3*cos(\t r)+0*3*sin(\t r)},{0*3*cos(\t r)+1*3*sin(\t r)});
\draw [shift={(0,3)},line width=2pt] plot[domain=3.665191429188092:5.759586531581287,variable=\t]({1*3*cos(\t r)+0*3*sin(\t r)},{0*3*cos(\t r)+1*3*sin(\t r)});
\end{tikzpicture}
\caption{Symmetrical lens.}
\label{Symmetrical lens}
\end{figure}
In \cite{delyon2} the authors define the smoothed regular nonagon as follows.
\begin{definition}\label{def:nonagon}
Let $r>0$ and $2 r<d<2\sqrt{3} r$. The \emph{smoothed regular nonagon} of inradius $r$ and diameter $d$, that we denote by $\mathcal{N}$, is the convex set enclosed in an equilateral triangle $T_E$ of inradius $r$, following the construction below.
Let $\eta_i$ be the normal angles to the sides of $T_E$ and let
$$\tau:= (3+\sqrt{d^2-3 r^2})/2, \,\, \, \, \text{and}\, \, \, \, h:=\sqrt{d^2-\tau^2}.$$
We define now the points $A_i, B_i, M_i$, for $i=1,2,3$:
$$A_i:=\begin{pmatrix}
\cos{\eta_i}+h \sin{\eta_i} \\
\sin{\eta_i}-h\cos{\eta_i}
\end{pmatrix}, \, \, B_i:=\begin{pmatrix}
\cos{\eta_i}-h \sin{\eta_i} \\
\sin{\eta_i}+h\cos{\eta_i}
\end{pmatrix}, \, \, \, M_i:=(1-\tau)\times\begin{pmatrix}
\cos{\eta_i} \\
\sin{\eta_i}
\end{pmatrix}.$$
We obtain $\mathcal{N}$ as follows (see Figure \ref{fig:non}):
\begin{itemize}
\item the points $A_i, B_i$ and $M_i$, for $i=1,2,3$, belong to $\partial \mathcal{N}$;
\item $\arc{B_1M_3}$ and $\arc{M_1A_3}$ are diametrically opposed arcs of the same circle of diameter $d$, the same for the pairs $\arc{B_2M_1}$ and $\arc{M_2A_1}$, $\arc{M_2 B_3}$ and $\arc{M_3A_2}$;
\item the boundary contains the segments $\overline{A_iB_i}$, for $i=1,2,3$, and the contact point $I_i$ with the incircle is the middle of the corresponding segment.
\end{itemize}
\end{definition}
\definecolor{uuuuuu}{rgb}{0.26666666666666666,0.26666666666666666,0.26666666666666666}
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=2]
\draw [line width=0.5pt,dashed] (-1.7320508075688776,-1)-- (0,2);
\draw [line width=0.5pt,dashed] (0,2)-- (1.7320508075688772,-1);
\draw [line width=0.5pt,dashed] (1.7320508075688772,-1)-- (-1.7320508075688776,-1);
\draw [shift={(0.26031766672006906,0.1502944749556473)},line width=2pt] plot[domain=1.7941968844018048:2.412567118938454,variable=\t]({1*1.175*cos(\t r)+0*1.175*sin(\t r)},{0*1.175*cos(\t r)+1*1.175*sin(\t r)});
\draw [shift={(0.26031766672006906,0.1502944749556473)},line width=2pt] plot[domain=4.917815739437731:5.536185973974379,variable=\t]({1*1.175*cos(\t r)+0*1.175*sin(\t r)},{0*1.175*cos(\t r)+1*1.175*sin(\t r)});
\draw [shift={(0,-0.30058894991129426)},line width=2pt] plot[domain=-0.3001982179913911:0.31817201654525773,variable=\t]({1*1.175*cos(\t r)+0*1.175*sin(\t r)},{0*1.175*cos(\t r)+1*1.175*sin(\t r)});
\draw [shift={(0,-0.30058894991129426)},line width=2pt] plot[domain=2.8234206370445354:3.441790871581184,variable=\t]({1*1.175*cos(\t r)+0*1.175*sin(\t r)},{0*1.175*cos(\t r)+1*1.175*sin(\t r)});
\draw [shift={(-0.2603176667200691,0.15029447495564732)},line width=2pt] plot[domain=3.888591986795:4.506962221331649,variable=\t]({1*1.175*cos(\t r)+0*1.175*sin(\t r)},{0*1.175*cos(\t r)+1*1.175*sin(\t r)});
\draw [shift={(-0.2603176667200691,0.15029447495564732)},line width=2pt] plot[domain=0.7290255346513395:1.3473957691879885,variable=\t]({1*1.175*cos(\t r)+0*1.175*sin(\t r)},{0*1.175*cos(\t r)+1*1.175*sin(\t r)});
\draw [line width=2pt] (-1.1160254037844388,0.06698729810778098)-- (-0.616025403784439,0.9330127018922192);
\draw [line width=2pt] (0.616025403784439,0.9330127018922192)-- (1.1160254037844388,0.06698729810778058);
\draw [line width=2pt] (-0.5,-1)-- (0.5,-1);
\begin{scriptsize}
\draw [fill=black] (0,-1) circle (1.5pt);
\draw[color=black] (0.06912757788806292,-1.1312016285860344) node {$I_1$};
\draw [fill=uuuuuu] (-0.8660254037844388,0.5) circle (1.5pt);
\draw[color=uuuuuu] (-0.8959352773734842,0.6739581877547015) node {$I_3$};
\draw [fill=uuuuuu] (-0.616025403784439,0.9330127018922192) circle (1.5pt);
\draw[color=uuuuuu] (-0.6409693831911335,1.1110425777815864) node {$A_3$};
\draw [fill=uuuuuu] (0.8660254037844389,0.5) circle (1.5pt);
\draw[color=uuuuuu] (0.9797200571107441,0.6739581877547015) node {$I_2$};
\draw [fill=uuuuuu] (-1.1160254037844388,0.06698729810778098) circle (1.5pt);
\draw[color=uuuuuu] (-1.241795246763608,0.20597972252004358) node {$B_3$};
\draw [fill=uuuuuu] (0.616025403784439,0.9330127018922192) circle (1.5pt);
\draw[color=uuuuuu] (0.688330463759486,1.1110425777815864) node {$B_2$};
\draw [fill=uuuuuu] (1.1160254037844388,0.06698729810778058) circle (1.5pt);
\draw[color=uuuuuu] (1.1891563273319607,0.24597972252004358) node {$A_2$};
\draw [fill=uuuuuu] (0.5,-1) circle (1.5pt);
\draw[color=uuuuuu] (0.5699534414605375,-1.1394134781704879) node {$B_1$};
\draw [fill=uuuuuu] (-0.5,-1) circle (1.5pt);
\draw[color=uuuuuu] (-0.4316982856844117,-1.1194134781704879) node {$A_1$};
\draw [fill=uuuuuu] (0,1.296095379298727) circle (1.5pt);
\draw[color=uuuuuu] (0.06912757788806292,1.4752795694706569) node {$M_1$};
\draw [fill=uuuuuu] (1.1224515242003252,-0.6480476896493634) circle (1.5pt);
\draw[color=uuuuuu] (1.2982622521241877,-0.57338833606587083) node {$M_3$};
\draw [fill=uuuuuu] (-1.1224515242003252,-0.6480476896493633) circle (1.5pt);
\draw[color=uuuuuu] (-1.2909011715558347,-0.49338833606587083) node {$M_2$};
\end{scriptsize}
\end{tikzpicture}
\caption{Smoothed regular nonagon}\label{fig:non}
\end{figure}
We now recall some of the sharp Blaschke--Santaló inequalities, needed in the sequel, between three of the following geometric quantities: perimeter, area, inradius, circumradius, diameter and width. For the other inequalities used in the rest of the paper, we will directly refer, when needed, to the results contained in the survey paper \cite{inequalities_convex}.
Firstly, let us consider the diagram $(A,d,r)$. We have the following two theorems.
\begin{theorem}[\cite{cifre_salinas} \& \cite{delyon2} Theorem 1]
Let $\Omega\in \mathcal{K}^2$. Then, it holds
\begin{equation}
\label{del_2cap}
\abs{\Omega}\ge r(\Omega)\sqrt{d^2(\Omega)-4r^2(\Omega)}+r^2(\Omega)\left(\pi- 2\arccos{\left(\frac{2r(\Omega)}{d(\Omega)}\right)}\right),
\end{equation}
where equality holds if and only if $\Omega$ is a two-cup body.
\end{theorem}
\begin{theorem}[\cite{delyon2}, Theorem 2] \label{delthm}
Let $\Omega\in \mathcal{K}^2$. Then, it holds
\begin{equation}\label{del:non}
\abs{\Omega}\le \psi\left(d\left(\Omega\right),r(\Omega)\right),
\end{equation}
where
\begin{equation}\label{func_del}
\psi(d,r):=\begin{cases}
\displaystyle{\frac{3\sqrt{3}r}{2}(\sqrt{d^2-3r^2}-r)+\frac{3d^2}{2}\left(\frac{\pi}{3}-\arccos{\left(\frac{\sqrt{3}r}{d}\right)}\right)}, & \text{if} \, \, \, d\le r D^*\vspace{1mm} \\
\displaystyle{r\sqrt{d^2-4r^2}+\frac{d^2}{2}\arcsin{\left(\frac{2r}{d}\right)}}, & \text{if} \, \, \, d\ge r D^*
\end{cases}
\end{equation}
and $D^*$ is the unique number in $[2,2\sqrt{3}]$ for which the two expressions of the function $\psi(d,r)$ are equal.
Moreover, if $d(\Omega)\le r(\Omega)D^*$, we have equality in \eqref{del:non} if and only if $\Omega$ is a regular smoothed nonagon, while, if $d(\Omega)>r(\Omega)D^*$, we have equality if and only if $\Omega$ is a symmetrical spherical slice.
\end{theorem}
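As a quick consistency check (ours, not contained in \cite{delyon2}): at $d=2r$, i.e.\ when $\Omega$ is the ball of radius $r$, both expressions in \eqref{func_del} reduce to $\pi r^2$, the area of the ball. A short Python verification:

```python
import math

# The two branches of psi(d, r) from the theorem above.
def psi_branch1(d, r):
    return (3 * math.sqrt(3) * r / 2) * (math.sqrt(d ** 2 - 3 * r ** 2) - r) \
        + (3 * d ** 2 / 2) * (math.pi / 3 - math.acos(math.sqrt(3) * r / d))

def psi_branch2(d, r):
    return r * math.sqrt(d ** 2 - 4 * r ** 2) + (d ** 2 / 2) * math.asin(2 * r / d)

# At d = 2r both branches equal pi r^2, the area of the ball of radius r.
value1 = psi_branch1(2.0, 1.0)
value2 = psi_branch2(2.0, 1.0)
```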
As far as the diagram $(A, \omega, R)$ is concerned, we recall the following.
\begin{theorem}[\cite{cifre_salinas}, Theorem 3] \label{cifrsalth}
Let $\Omega\in\mathcal{K}^2$. Then, it holds
\begin{equation}
\label{henk}
\abs{\Omega}\leq \chi(\omega(\Omega), R(\Omega)),
\end{equation}
where
\begin{equation}\label{henkchi}
\chi(\omega(\Omega), R(\Omega)):=\frac{\omega(\Omega)}{2} \sqrt{4 R(\Omega)^2-\omega(\Omega)^2} + 2R(\Omega)^2 \arcsin{\frac{\omega(\Omega)}{2R(\Omega)}},
\end{equation}
and equality in \eqref{henk} holds if and only if $\Omega$ is a symmetrical spherical slice.
\end{theorem}
\begin{theorem}[\cite{cifre_salinas}, Theorem 6]
Let $\Omega\in\mathcal{K}^2$. Then, if $\omega(\Omega)\leq \frac{3}{2}R(\Omega)$, it holds
\begin{equation}\label{ARw_low}
16\abs{\Omega}^6\geq R^2(\Omega) \omega^2(\Omega)\left(16 \abs{\Omega}^4-R^2(\Omega)\omega^6(\Omega)\right)
\end{equation}
and equality holds if and only if $\Omega$ is a subequilateral triangle.
\end{theorem}
We recall the following inequality from the diagram $(R,r,\omega)$.
\begin{theorem}[\cite{cifre_gomis}, Theorem 2]\label{cifre_gomis_1_thm}
Let $\Omega\in\mathcal{K}^2$. Then, it holds
\begin{equation}\label{cifre_gomis_1_eq}
\left(4 r(\Omega)-\omega(\Omega) \right)\left( \omega(\Omega)-2 r(\Omega)\right)\leq \frac{2 r^3(\Omega)}{R(\Omega)}
\end{equation}
and equality holds if and only if $\Omega$ is an isosceles triangle.
\end{theorem}
\begin{theorem}[\cite{santalo}, Section 10]\label{santalo_thm_1}
Let $\Omega\in\mathcal{K}^2$. Then, it holds
\begin{equation}\label{santalo_eq_1}
\omega(\Omega)\leq R(\Omega)+r(\Omega),
\end{equation}
where equality is achieved by any set obtained from an equilateral triangle of circumradius $R(\Omega)$ by replacing the edges with three equal circular arcs, each arc being centered on the height relative to the corresponding side.
\end{theorem}
The following theorem deals with the $(A, r, R)$ diagram.
\begin{theorem}[\cite{cifre_salinas}, Theorems 1 and 2]\label{inr_circ_thm}
Let $\Omega\in\mathcal{K}^2$. Then, it holds
\begin{equation}\label{secondhRr}
\abs{\Omega}\ge 2r(\Omega)\left(\sqrt{R(\Omega)^2-r(\Omega)^2}+r(\Omega)\arcsin{\frac{r(\Omega)}{R(\Omega)}}\right),
\end{equation}
and equality in \eqref{secondhRr} holds if and only if $\Omega$ is a two-cup body.
Moreover, we have
\begin{equation}\label{cifreRr}
\abs{\Omega}\le \varphi(R(\Omega), r(\Omega)),
\end{equation}
where
\begin{equation}
\label{cifreArR}
\varphi(R(\Omega), r(\Omega)):= 2\left(r(\Omega)\sqrt{R(\Omega)^2-r(\Omega)^2}+R^2(\Omega)\arcsin{\frac{r(\Omega)}{R(\Omega)}}\right),
\end{equation}
and equality in \eqref{cifreArR} holds if and only if $\Omega$ is a symmetrical spherical slice.
\end{theorem}
Now we quote two inequalities concerning the $(A,\omega,r)$ and $(P,\omega,r)$ diagrams.
\begin{theorem}[\cite{cifre_salinas}, Theorem 5] \label{cifre_salinas_thm_rw}
Let $\Omega \in \mathcal{K}^2$. Then, it holds
\begin{equation} \label{cifre_salinas_rwA}
(\omega(\Omega)-2r(\Omega))^2(4r(\Omega)-\omega(\Omega))\abs{\Omega}^2\le r^4(\Omega)\omega^3(\Omega),
\end{equation}
\begin{equation} \label{cifre_salinas_rwP}
(\omega(\Omega)-2r(\Omega))^2(4r(\Omega)-\omega(\Omega))P^2(\Omega)\le 4 r(\Omega)^2\omega^3(\Omega).
\end{equation}
In both inequalities, equality holds if and only if $\Omega$ is a subequilateral triangle.
\end{theorem}
We recall the following result from the $(d,\omega,r)$ diagram.
\begin{theorem}[\cite{cifre4}, Theorems 1 and 2]\label{cifre_wrd_thm}
Let $\Omega\in\mathcal{K}^2$. Then, it holds
\begin{equation}\label{cifre_wrd_eq_1}
d^2(\Omega)(\omega(\Omega)-2r(\Omega))^2(4r(\Omega)-\omega(\Omega))\le 4r^4(\Omega)\omega(\Omega),
\end{equation}
where equality holds if and only if $\Omega$ is a subequilateral triangle $T_I$, and
\begin{equation}\label{cifre_wrd_eq_2}
\omega(\Omega)-r(\Omega)\leq \dfrac{\sqrt{3}}{3} d(\Omega),
\end{equation}
and equality holds if $\Omega$ is a Yamanouti set.
\end{theorem}
Finally, as far as the diagram $(A,P,R)$ is concerned, we have the following.
\begin{theorem}[\cite{santalo}]
Let $\Omega\in\mathcal{K}^2$. Then, it holds
\begin{equation}\label{santalo_APR}
|\Omega|<\frac{P(\Omega)\left(P(\Omega)-4R(\Omega)\cos{\arcsinc{\left(\frac{4R(\Omega)}{P(\Omega)}\right)}}\right)}{8\arcsinc{\left(\frac{4R(\Omega)}{P(\Omega)}\right)}},
\end{equation}
where the function ${\rm arcsinc}$ is the inverse of the function defined in \eqref{sinc} and the equality is attained by symmetrical lenses.
\end{theorem}
\section{Numerical results and Blaschke--Santal\'o diagrams}\label{secnum}
In this Section, we introduce the numerical tools that we use to obtain more information on the diagrams and to state some conjectures.
\subsection{Generation of random convex polygons}
We want to provide a numerical approximation of the diagrams studied in Section \ref{sec3}. To do so, a natural idea is to generate a large number of convex sets (more precisely, convex polygons), for each of which we compute the involved functionals. Nevertheless, the task of (properly) generating random convex polygons is quite challenging and interesting on its own. The main difficulty is to design an efficient and fast algorithm that yields a uniform distribution of the generated random convex polygons. To clarify this point, let us discuss two different (naive) approaches:
\begin{itemize}
\item one easy way to generate random convex polygons is by rejection sampling: we generate a random set of points in a square; if they form a convex polygon, we return it, otherwise we try again. Unfortunately, the probability that a set of $n$ points uniformly generated inside a given square is in convex position is equal to $p_n = \left(\frac{\binom{2n-2}{n-1}}{n!}\right)^2$, see \cite{random_polygon}. Thus, the random variable $X_n$ counting the number of iterations needed to obtain a convex configuration follows a geometric law of parameter $p_n$, which means that its expectation is given by $\mathbb{E}(X_n)=\frac{1}{p_n}=\left(\frac{n!}{\binom{2n-2}{n-1}}\right)^2 $. For example, if $n=20$, the expected number of iterations is approximately equal to $4.7\times 10^{15}$; since one iteration is performed in an average of $0.7$ seconds, the algorithm would need about $10^8$ years to provide one convex polygon with $20$ sides;
\item another natural approach is to generate random points and take their convex hull. This method is quite fast, as one can compute the convex hull of $N$ points in $\mathcal{O}(N\log N)$ time (see \cite{MR475616} for example), but it is not quite suitable, since it yields a biased distribution.
\end{itemize}
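The expectation $\mathbb{E}(X_n)$ above is straightforward to evaluate exactly; the following short Python computation (ours, for illustration) does so, and recovers the classical value $p_4=25/36$ for four points.

```python
import math

# Valtr's probability that n uniform points in a square are in convex
# position, and the expected number of rejection-sampling iterations.
def p_convex(n):
    return (math.comb(2 * n - 2, n - 1) / math.factorial(n)) ** 2

expected_iterations_20 = 1 / p_convex(20)  # expected iterations for n = 20
```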
In order to avoid the issues stated above, we use an algorithm presented in \cite{sander}, which is based on the work of P. Valtr \cite{random_polygon}, where the author computes
the probability that a set of $n$ points uniformly generated inside a given square is in convex position. The author remarks (in Section 4) that the proof yields a fast and
unbiased method to generate random convex sets inside a given square. We also refer to \cite{sander} for a nice description of the steps of the method and a beautiful
animation where one can follow each step; one will also find there an implementation of Valtr's algorithm in Java, which we decided to translate into Matlab.
To obtain the different figures below, we generate $10^5$ random convex polygons of unit area, with number of sides between $3$ and $30$, for which we compute the involved functionals. We then obtain
clouds of dots that provide approximations of the diagrams. This approach has been used in several works; we refer for example
to \cite{AH11}, \cite{ftJMAA} and \cite{FL21}.
\subsection{About the computation of the functionals}
Let us give a few details on the numerical computation of the functionals involved in the paper:
\begin{itemize}
\item The \textbf{Cheeger constant} is computed by using Beniamin Bogosel's code \cite{zbMATH07173414}, based on the characterization of the Cheeger sets of planar convex sets given in \cite{kawohl} and on the toolbox Clipper, a very good implementation of polygon offsetting by Angus Johnson.
\item The \textbf{inradius} is also computed by using the toolbox Clipper and the fact that $r(\Omega)$ is the smallest solution to the equation $|\Omega_{-t}|=0$.
\item The \textbf{diameter} is computed via a fast method, which consists in finding all antipodal pairs of points and looking for the diametrical pair among them. This is classically known as Shamos' algorithm \cite{MR805539}.
\item The \textbf{area} is computed by using Matlab's function \texttt{"polyarea"}.
\item The \textbf{minimum width} of a polygon $\Omega$ with vertices $\{A_1,\dots,A_N\}$ is computed by using the following formula:
$$\omega(\Omega)=\min_{i\in \llbracket 1,N \rrbracket} \max_{j\in \llbracket 1,N \rrbracket} d(A_j,(A_iA_{i+1})),$$
where $d(A_j,(A_iA_{i+1}))$ corresponds to the distance between the point $A_j$ and the line $(A_iA_{i+1})$ (with the convention $A_{N+1}:=A_1$).
\item The \textbf{circumradius} of a convex set $\Omega$ can be written as follows
$$R(\Omega) = \min_{c\in \Omega}\max_{x \in \Omega} \|c-x\|.$$
It is computed by using Matlab's routine \texttt{"fminimax"}.
\end{itemize}
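For instance, the width formula above can be implemented in a few lines; here is a Python sketch (ours, assuming the vertices of the convex polygon are given in cyclic order; the function name \texttt{min\_width} is hypothetical).

```python
import math

def min_width(vertices):
    # omega = min over edges of the max distance from a vertex to the
    # line supporting that edge (valid for convex polygons, vertices in
    # cyclic order, with the convention A_{N+1} = A_1).
    n = len(vertices)
    w = float("inf")
    for i in range(n):
        (x1, y1), (x2, y2) = vertices[i], vertices[(i + 1) % n]
        length = math.hypot(x2 - x1, y2 - y1)
        far = max(abs((x2 - x1) * (y - y1) - (y2 - y1) * (x - x1)) / length
                  for (x, y) in vertices)
        w = min(w, far)
    return w

# Unit square: width 1.
w_square = min_width([(0, 0), (1, 0), (1, 1), (0, 1)])
```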
\section{Proof of the main Results} \label{sec3}
\subsection{Proof of Theorem \ref{th:existence} }
We start this Section by proving the existence results stated in Theorem \ref{th:existence}.
\begin{proof}[Proof of the existence]
Let us consider the minimization problem for the Cheeger constant in the classes of sets $(1)-(13)$; the maximization problem can be dealt with in an analogous way.\\
For all of these classes of sets, in order to prove the existence of a solution, we consider a minimizing sequence $(\Omega_k)_{k\in\mathbb N}$ and we prove that it satisfies the hypothesis of the Blaschke Selection Theorem (see Theorem $1.8.7$ in \cite{schneider}), that is to say, its boundedness up to translations. Concerning the classes of sets involving a diameter or a circumradius constraint, it is clear that, up to a translation, the minimizing sequence is contained in a sufficiently big ball. So it remains to study the problem in $\mathcal{K}^2_{r,P}$, $\mathcal{K}^2_{A,\omega}$, $\mathcal{K}^2_{r,\omega}$ and $\mathcal{K}^2_{\omega,P}$. Concerning $\mathcal{K}^2_{r,P}$ and $\mathcal{K}^2_{\omega,P}$, we know from \cite{inequalities_convex} that $P=P(\Omega_k)>2 d(\Omega_k)$ for every $k$, so the sequence of diameters $d(\Omega_k)$ is equibounded and, consequently, there exists a sufficiently big ball containing the sequence $(\Omega_k)_{k}$.
As for $\mathcal{K}^2_{r,\omega}$, it is possible to prove the boundedness of the minimizing sequence whenever $\omega(\Omega_k)>2r(\Omega_k)$; indeed, it holds (see \cite{inequalities_convex})
$$d(\Omega_k)\le \frac{\omega^2(\Omega_k)}{2(\omega(\Omega_k)-2r(\Omega_k))}.$$
For the last class $\mathcal{K}^2_{A,\omega}$, from \cite{inequalities_convex} we know that, if $2\omega(\Omega_k)\leq \sqrt{3} d(\Omega_k)$, then $$2\abs{\Omega_k}\ge \omega(\Omega_k)d(\Omega_k),$$ and, also in this case, the boundedness follows.
So, for every class of sets considered, the Blaschke Selection Theorem ensures that, up to a subsequence, $\Omega_k$ converges in the Hausdorff sense to a set $\Omega^*$; it remains only to prove that this set belongs to the relevant class of admissible sets. We observe that all the considered constraints are stable under Hausdorff convergence. In particular, the stability of the inradius is proved in \cite{delyon2}, the stability of the diameter in \cite{pierre}, and the stability of area, perimeter and width in \cite{schneider}. It only remains to show that the circumradius is continuous with respect to the Hausdorff distance in the class of admissible sets having a circumradius constraint. Since $R(\Omega_k)=R$ for all $k\in\mathbb N$, we have $\Omega_k\subseteq B_R$. Using the stability of the inclusion under Hausdorff convergence (see \cite{pierre}, Prop $2.2.17$), we have that $\Omega^*\subseteq B_R$, and consequently $R(\Omega^*)\le R$. By contradiction, let us suppose that $R(\Omega^*)<R$; then there exists $\overline{R}>0$ such that $R(\Omega^*)<\overline{R}<R$, so that $\Omega^*\subseteq B_{\overline{R}}$. Therefore, by the Hausdorff convergence, for sufficiently large $k$, $\Omega_k\subseteq B_{\overline{R}}$, but this would imply $R\le \overline{R}$, which is absurd.
In order to conclude, we observe that in all the above cases the set $\Omega^*$ cannot degenerate to a segment. If we are working in a class of sets involving an inradius, area or width constraint, then, thanks to the continuity of the inradius, width and area under Hausdorff convergence, there exists, up to translations, a sufficiently small ball contained in the sets of the minimizing sequence.
On the other hand,
if we consider the minimization problem in $\mathcal{K}^2_{d,R}$, $\mathcal{K}^2_{P,d}$ and $\mathcal{K}^2_{R,P}$, the inradius can be bounded from below by a positive quantity.
In \cite[Section 9]{santalo} it is proved that
\begin{equation*}
r(\Omega_k)\geq \dfrac{d^2(\Omega_k)\sqrt{4 R^2(\Omega_k)-d^2(\Omega_k)}}{2R(\Omega_k)\left(2 R(\Omega_k)+\sqrt{4 R^2(\Omega_k)-d^2(\Omega_k)}\right)};
\end{equation*}
in \cite[Section 3]{henk} it is proved that
\begin{equation*}
r(\Omega_k)\geq \frac{P(\Omega_k)}{4}-\frac{d(\Omega_k)}{2}
\end{equation*}
which also yields
\begin{equation*}
r(\Omega_k)\geq \dfrac{ P(\Omega_k)}{4}-R(\Omega_k),
\end{equation*}
because $d(\Omega_k)\leq 2R(\Omega_k)$.
Recalling now that the Cheeger constant is continuous with respect to the Hausdorff convergence when the sets do not degenerate to a segment (see \cite[Proposition 3.1]{reverse_cheeger}), the existence part of the theorem is proved.
\end{proof}
\subsection{Proof of Theorem \ref{th2}}
The following paragraphs of this Section are dedicated to the proof of the explicit bounds and their sharpness in the Blaschke--Santaló sense.
\subsubsection{The triplet $(h,r,P)$}
\begin{proposition}\label{prop_hrP}
Let $\Omega\in \mathcal{K}^2$. Then, it holds
\begin{equation}\label{eq:hrP}
\frac{1}{r(\Omega)} +\frac{\pi}{P(\Omega)-\pi r(\Omega)} \leq h(\Omega)\leq \frac{1}{r(\Omega)}+\sqrt{\frac{2\pi }{P(\Omega)r(\Omega)}},
\end{equation}
where equality is achieved by circumscribed sets in the upper bound and by stadiums in the lower bound.
\end{proposition}
\begin{proof}
We combine the classical convex geometric inequalities (see \cite{bonnesen2, bonnesen, inequalities_convex})
\begin{equation}\label{pra}
\frac{P(\Omega)r(\Omega)}{2}\leq |\Omega|\leq r(\Omega)(P(\Omega)-\pi r(\Omega)),
\end{equation}
with the estimates \eqref{eq:hra} to obtain the optimal inequalities \eqref{eq:hrP}.
The upper bound in \eqref{eq:hrP} is an equality for circumscribed sets, since both the upper bound in \eqref{eq:hra} and the lower bound in \eqref{pra} are equalities for circumscribed sets.
The lower bound in \eqref{eq:hrP} is achieved by stadiums, since both the lower bound in \eqref{eq:hra} and the upper bound in \eqref{pra} are equalities on this class of sets.
\end{proof}
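\begin{remark}
As a consistency check of \eqref{eq:hrP} (this example is not needed in the sequel), consider the disk $B_R$ of radius $R$, which is at the same time a circumscribed set and a degenerate stadium. Here $r(B_R)=R$, $P(B_R)=2\pi R$ and $h(B_R)=2/R$, and both bounds collapse to the exact value:
$$\frac{1}{R}+\frac{\pi}{2\pi R-\pi R}=\frac{2}{R},\qquad \frac{1}{R}+\sqrt{\frac{2\pi}{2\pi R\cdot R}}=\frac{2}{R}.$$
\end{remark}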
\subsubsection{The triplet $(h,d,r)$}
In the following we denote by $\mathcal{S}_{r,d}$ the symmetrical spherical slice of inradius $r$ and diameter $d$, and by $\mathcal{N}_{r,d}$ the regular smoothed nonagon of the same inradius and diameter, see Definitions \ref{def:nonagon} and \ref{def:slice}.
\begin{proposition}\label{prop_hdr} Let $\Omega\in\mathcal{K}^2$. Then, it holds
\begin{equation}\label{up_hdr}
h(\Omega)\leq \frac{1}{r(\Omega)}+ \sqrt{\frac{\pi}{r(\Omega)\sqrt{d^2(\Omega)-4r^2(\Omega)}+r^2(\Omega)\left(\pi-2\arccos{\left(\frac{2r(\Omega)}{d(\Omega)}\right)}\right)}},
\end{equation}
where equality is achieved if and only if $\Omega$ is a symmetric two-cup body. Moreover, we have
\begin{equation}
\label{hdr_low}
h(\Omega) \ge \dfrac{1}{t_{g_1^\Omega}}
\end{equation}
where $t_{g_1^\Omega}$ is the smallest solution to the equation
$$g_1^\Omega(t):=\psi(d(\Omega)-2t,r(\Omega)-t)=\pi t^2$$ on the interval $[0, r(\Omega)]$ and the function $\psi$ is defined in \eqref{func_del}. Moreover, there exists $D_0$ such that the problem $$\min\{h(\Omega)\ |\ \Omega\in\mathcal{K}^2_{d,r}\}$$ is solved by $\mathcal{N}_{r,d}$ if $d<rD_0$ and by $\mathcal{S}_{r,d}$ if $d\ge r D_0$.
\end{proposition}
\begin{proof}
Let us start by proving \eqref{up_hdr}. We just need to combine the upper bound in \eqref{eq:hra}, which is sharp on circumscribed sets (see Definition \ref{circumscribed}), and \eqref{del_2cap}, which is an equality for, and only for, two-cup bodies, which are particular circumscribed sets.
Let us now prove \eqref{hdr_low}.
For any fixed parameter $r>0$, we consider the function
\begin{equation*}
\Psi_r(x) =\begin{cases}
f_r(x):= \displaystyle{\frac{3\sqrt{3}r}{2}(\sqrt{x^2-3r^2}-r)+\frac{3x^2}{2}\left(\frac{\pi}{3}-\arccos{\left(\frac{\sqrt{3}r}{x}\right)}\right)}, & \text{if} \, \, \, x\le r D^*\vspace{1mm} \\
g_r(x):=\displaystyle{r\sqrt{x^2-4r^2}+\frac{x^2}{2}\arcsin{\left(\frac{2r}{x}\right)}}, & \text{if} \, \, \, x\ge r D^*
\end{cases}
\end{equation*}
defined for $x\ge 2r$. Let us prove that the function $\Psi_r$ is strictly increasing.
We have for $x<r D^*$
$$f_r'(x) = \frac{3 \sqrt{3} r x}{2 \sqrt{x^2-3 r^2}} + 3 x \left(\frac{\pi }{3}-\arccos\left(\frac{\sqrt{3} r}{x}\right)\right)-\frac{3 \sqrt{3} r}{2 \sqrt{1-\frac{3 r^2}{x^2}}}= 3 x \left(\frac{\pi }{3}-\arccos\left(\frac{\sqrt{3} r}{x}\right)\right),$$
and, for $x>r D^* $,
$$g_r'(x) = -\frac{r}{\sqrt{1-\frac{4 r^2}{x^2}}}+\frac{r x}{\sqrt{x^2-4 r^2}}+x \arcsin\left(\frac{2 r}{x}\right)=x \arcsin\left(\frac{2 r}{x}\right)>0. $$
Thus, the function $f_r$ is increasing on $[2r,2\sqrt{3}r]$ and is decreasing on $[2\sqrt{3}r,+\infty)$ and the function $g_r$ is increasing on $[2r,+\infty)$. Moreover, we have by Theorem \ref{delthm} that $D^*\leq 2\sqrt{3}$. So, the function $f_r$ is increasing on the sub-interval $[2r,r D^*]$ and, since $f_r(r D^*)=g_r(r D^*)=\Psi_r(D^*)$, the function $\Psi_r$ is increasing on $[2r,+\infty)$.
Let $t\in [0,r(\Omega)]$; we apply the result of Theorem \ref{delthm} to the convex set $\Omega_{-t}$. Thus, we have
$$|\Omega_{-t}|\leq \Psi_{r(\Omega_{-t})}(d(\Omega_{-t})) = \Psi_{r(\Omega)-t}(d(\Omega_{-t}))\leq \Psi_{r(\Omega)-t}(d(\Omega)-2t)=:g_1^\Omega(t),$$
where we use the monotonicity of the function $\Psi_{r(\Omega)-t}$ and \eqref{inr} and \eqref{eq:diameter}.
Now, using Lemma \ref{lem:main}, we have the following bound for the Cheeger constant
$$h(\Omega)\ge \frac{1}{t_{g_1^\Omega}},$$
where $t_{g_1^\Omega}$ is the smallest solution to the equation $g_1^\Omega(t)=\pi t^2$ on the interval $[0,r(\Omega)]$.
Let $r>0$ and $d\ge 2r$.
We consider two cases:
\begin{itemize}
\item If $d>r D^*$, we have for every $t\in[0,r)$
\begin{equation}\label{eq:proof_hdr}
|(\S_{r,d})_{-t}|=|\S_{r-t,d-2t}|=\Psi_{r-t}(d-2t),
\end{equation}
where the first equality is a consequence of the equality $(\S_{r,d})_{-t}=\S_{r-t,d-2t}$ and the second one is a consequence of \cite[Theorem 2]{delyon2} and the following estimate
$$d((\S_{r,d})_{-t})=d-2t>rD^*-2t>rD^*-tD^*=(r-t)D^*=r((\S_{r,d})_{-t})D^*,$$
where we used $D^*\approx 2.3888>2$ (see \cite[Theorem 2]{delyon2}). Thus, we have by \eqref{eq:proof_hdr}
$$ h(\S_{r,d})=\frac{1}{t_{g_1^\Omega}},$$
with
$$r(\S_{r,d})= r,\ \ d(\S_{r,d})= d.$$
\item If $d<r D^*$, we take $t^*:=\frac{D^*r-d}{2+D^*}$ as the value for which the graphs of the functions $t\longmapsto |(\mathcal{N}_{r,d})_{-t}|$ and $t\longmapsto |(\S_{r,d})_{-t}|$ intersect each other, see Figure \ref{fig:discuss_1}. We note that $(\mathcal{N}_{r,d})_{-t}=\mathcal{N}_{r-t,d-2t}$ for every $t\in [0,t^*]$.
As shown in the figures below, we have the following cases:
\begin{itemize}
\item If $t^*>\frac{1}{h(\mathcal{N}_{r,d})}$, we have $h(\mathcal{N}_{r,d})=\frac{1}{t_{g_1^\Omega}}$.
\item If $t^*<\frac{1}{h(\mathcal{N}_{r,d})}$, we have $h(\S_{r,d})=\frac{1}{t_{g_1^\Omega}}$.
\item If $t^*=\frac{1}{h(\mathcal{N}_{r,d})}$, we have $h(\mathcal{N}_{r,d})=h(\S_{r,d})=\frac{1}{t_{g_1^\Omega}}$.
\end{itemize}
\end{itemize}
\begin{center}
\begin{tikzpicture}
\draw[->] (-.5,0) -- (7,0);
\draw (7,0) node[right] {$t$};
\draw [->] (0,-.5) -- (0,5);
\draw [domain=0:7, red,line width=0.4mm] plot(\x,{.09*(\x)^2});
\draw [domain=0:6.1, green] plot(\x,{sqrt(2.7^2-\x)-.03*\x^2});
\draw [domain=1.3:6.1, dashed, blue, line width=0.8mm] plot(\x,{sqrt(2.7^2-\x)-.03*\x^2});
\draw (4.2,0) node[below] {\tiny $ \frac{1}{h(\mathcal{S}_{r,d})}$};
\draw (3.9,-0.1) -- (3.9,0.1);
\draw (3.1,0) node[below] {\tiny $ \frac{1}{h(\mathcal{N}_{r,d})}$};
\draw (3.25,-0.1) -- (3.25,0.1);
\draw [dotted] (3.25,0) -- (3.25,1);
\draw [dotted] (3.9,0) -- (3.9,1.3);
\draw (1.3,0) node[below] {$t^*$};
\draw (1.3,-0.1) -- (1.3,0.1);
\draw (0,4) node[left] {$|\mathcal{N}_{r,d}|$};
\draw (-0.1,4) -- (0.1,4);
\draw (0,2.7) node[left] {$|\mathcal{S}_{r,d}|$};
\draw (-0.1,2.7) -- (0.1,2.7);
\draw (6.1,0) node[below] {$r$};
\draw (6.1,-0.1) -- (6.1,0.1);
\draw [dotted] (1.3,0) -- (1.3,2.4);
\draw(0,4) .. controls (2,1.3) and (2.5,1) .. (6.1,0);
\draw[ dashed, blue, line width=0.8mm](0,4) .. controls (1,2.7) .. (1.3,2.4);
\draw (1.3,2.4) node {\Large $\bullet$};
\draw[ dashed, blue, line width=0.8mm](8.5,4) -- (9,4);
\draw (9,4) node[right] {Graph of the function $t\longmapsto \Psi_{r-t}(d-2t)$.};
\draw[line width=0.4mm, red](8.5,3.5) -- (9,3.5);
\draw(9,3.5) node[right] {Graph of the function $t\longmapsto \pi t^2$.};
\draw (8.5,3) -- (9,3);
\draw(9,3) node[right] {Graph of the function $t\longmapsto |(\mathcal{N}_{r,d})_{-t}|=|\mathcal{N}_{r-t,d-2t}|$.};
\draw [green] (8.5,2.5) -- (9,2.5);
\draw(9,2.5) node[right] {Graph of the function $t\longmapsto |(\mathcal{S}_{r,d})_{-t}|=|\mathcal{S}_{r-t,d-2t}|$.};
\draw (8.2,2.2) -- (8.2,4.3);
\draw (8.2,2.2) -- (17.5,2.2);
\draw (17.5,2.2) -- (17.5,4.3);
\draw (8.2,4.3) -- (17.5,4.3);
\end{tikzpicture}
\vspace{1cm}
\begin{tikzpicture}
\draw[->] (-.5,0) -- (7,0);
\draw (7,0) node[right] {$t$};
\draw [->] (0,-.5) -- (0,5);
\draw [domain=0:7, red,line width=0.4mm] plot(\x,{.09*(\x)^2});
\draw [domain=0:6.1, green] plot(\x,{sqrt(2.7^2-\x)-.03*\x^2});
\draw [domain=4.8:6.1, dashed, blue, line width=0.8mm] plot(\x,{sqrt(2.7^2-\x)-.03*\x^2});
\draw (3.5,0) node[below] {\tiny $ \frac{1}{h(\mathcal{S}_{r,d})}$};
\draw (3.9,-0.1) -- (3.9,0.1);
\draw (4.25,-.3) node[below] {\tiny $ \frac{1}{h(\mathcal{N}_{r,d})}$};
\draw (4.15,-0.1) -- (4.15,0.1);
\draw [dotted] (4.15,0) -- (4.15,1.5);
\draw [dotted] (3.9,0) -- (3.9,1.3);
\draw (4.8,0) node[below] {$t^*$};
\draw (4.8,-0.1) -- (4.8,0.1);
\draw (0,4) node[left] {$|\mathcal{N}_{r,d}|$};
\draw (-0.1,4) -- (0.1,4);
\draw (0,2.7) node[left] {$|\mathcal{S}_{r,d}|$};
\draw (-0.1,2.7) -- (0.1,2.7);
\draw (6.1,0) node[below] {$r$};
\draw (6.1,-0.1) -- (6.1,0.1);
\draw [dotted] (4.8,0) -- (4.8,1);
\draw(0,4) .. controls (3,3.8) and (5,0) .. (6.1,0);
\draw[ dashed, blue, line width=0.8mm](0,4) .. controls (1.6,3.84) and (3,2.84) .. (4.8,.9);
\draw (4.8,.9) node {\Large $\bullet$};
\end{tikzpicture}
\label{fig:discuss_1}
\end{center}
This ends the proof of the existence of optimal sets.
\end{proof}
\begin{remark}
We note that the symmetrical slices and the smoothed nonagons are not the only sets solving the shape optimization problem $\min\{h(\Omega)\ |\ \Omega\in \mathcal{K}^2_{d,r}\}$. Indeed, if for example we consider a spherical slice $\mathcal{S}$ and denote by $C_\S$ its Cheeger set, we have $h(\S)=h(C_\S)$ and by the explicit characterization of the Cheeger sets given in \cite[Theorem 1]{kawohl}, we have
$$d(C_\S)=d(\S)$$
and
$$r(C_\S)=r\left(\S_{-\frac{1}{h(\S)}}+\frac{1}{h(\S)}B_1\right)=r\left(\S_{-\frac{1}{h(\S)}}\right)+\frac{1}{h(\S)}= r(\S)-\frac{1}{h(\S)}+\frac{1}{h(\S)}=r(\S),$$
while $\S\ne C_\S$, which proves the non-uniqueness.
\end{remark}
\begin{remark}
We give the following explicit lower bound.
In \cite{inequalities_convex}, it is proved that
\begin{equation*}\label{scotty}
|\Omega|< 2 d(\Omega)r(\Omega).
\end{equation*}
By applying the strategy of Lemma \ref{lem:main}, we obtain that
\begin{equation*}
h(\Omega) \geq \dfrac{4-\pi}{d(\Omega)+2 r(\Omega)-\sqrt{(d(\Omega)+2r(\Omega))^2-2(4-\pi)d(\Omega)r(\Omega)}}.
\end{equation*}
\end{remark}
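\begin{remark}
To illustrate the last bound, take $\Omega=B_R$, the disk of radius $R$, so that $d=2R$ and $r=R$: the right-hand side equals
$$\frac{4-\pi}{4R-\sqrt{16R^2-4(4-\pi)R^2}}=\frac{4-\pi}{2R\left(2-\sqrt{\pi}\right)}=\frac{2+\sqrt{\pi}}{2R}\approx \frac{1.886}{R},$$
which is consistent with (and strictly below) the exact value $h(B_R)=2/R$.
\end{remark}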
\subsubsection{The triplet $(h,r,R)$ }
\begin{proposition}\label{prop_hRr}
Let $\Omega\in\mathcal{K}^2$. Then, it holds
\begin{equation}\label{hrRlow}
h(\Omega) \ge \frac{1}{t_{g_2^\Omega}},
\end{equation}
where $t_{g_2^\Omega}$ is the smallest solution of the equation
$${g_2^\Omega}(t):=2\left(\left(r(\Omega)-t\right)\sqrt{(R(\Omega)-t)^2-(r(\Omega)-t)^2}+(R(\Omega)-t)^2\arcsin{\left(\frac{r(\Omega)-t}{R(\Omega)-t}\right)}\right)=\pi t^2.$$
The equality in \eqref{hrRlow} is achieved if and only if $\Omega$ is a symmetrical spherical slice. Moreover, it holds
\begin{equation}
\label{hrRup}
h(\Omega)\le \frac{1}{r(\Omega)}+ \sqrt{\frac{\pi }{2r(\Omega)\left(\sqrt{R(\Omega)^2-r(\Omega)^2}+r(\Omega)\arcsin\left(\frac{r(\Omega)}{R(\Omega)}\right)\right)}},
\end{equation}
where the equality in \eqref{hrRup} is achieved by two-cup bodies.
\end{proposition}
\begin{proof}
In order to prove \eqref{hrRlow}, we apply the result of Lemma \ref{lem:main}. Let us introduce the function $$\varphi:(R,r)\longmapsto 2\left(r\sqrt{R^2-r^2}+R^2\arcsin{\frac{r}{R}}\right),$$ which is increasing with respect to the first variable, indeed
$$\frac{\partial\varphi}{\partial R}(R,r)=2R \arcsin\left(\frac{r}{R}\right)>0.$$
By applying \eqref{cifreRr} (where the equality holds only for symmetrical spherical slices), we have for every $t\in [0,r(\Omega)]$
$$|\Omega_{-t}|\leq \varphi(R(\Omega_{-t}),r(\Omega_{-t}))=\varphi(R(\Omega_{-t}),r(\Omega)-t)\leq \varphi(R(\Omega)-t,r(\Omega)-t)=:{g_2^\Omega}(t),$$
where the last inequality is a consequence of the monotonicity of the function $R\longmapsto \varphi(R,r)$ and the fact that $R(\Omega_{-t})\leq R(\Omega)-t$ (see Lemma \ref{lem:diameter_inner_set}). Finally, we conclude by applying the result of Lemma \ref{lem:main}.
In order to prove \eqref{hrRup}, we combine the upper bound in \eqref{eq:hra}
and the inequality \eqref{secondhRr}.
As far as the sharpness of \eqref{hrRup} is concerned, we observe that \eqref{eq:hra} is sharp on circumscribed sets (see Definition \ref{circumscribed}) and \eqref{secondhRr} is attained by symmetric two-cup bodies, that are also circumscribed sets.
\end{proof}
\begin{remark}
We can give the following explicit lower bound
\begin{equation*}
h(\Omega)\ge \frac{4-\pi}{2(R(\Omega)+r(\Omega))-\sqrt{4(R(\Omega)+r(\Omega))^2-4(4-\pi)R(\Omega)r(\Omega)}},
\end{equation*}
which can be obtained starting from
$$\abs{\Omega}\le 4R(\Omega)r(\Omega),$$
see \cite{henk}, and the strategy from Lemma \ref{lem:main}.
\end{remark}
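\begin{remark}
For instance, for $\Omega=B_R$, the disk of radius $R$ (so that $r=R$), the bound above reads
$$\frac{4-\pi}{4R-\sqrt{16R^2-4(4-\pi)R^2}}=\frac{2+\sqrt{\pi}}{2R}\approx\frac{1.886}{R}\le \frac{2}{R}=h(B_R),$$
so the estimate is consistent, though not sharp, on balls.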
\subsection{Proof of Theorem \ref{th3}}
\subsubsection{The triplet $(h,\omega,d)$}
\begin{proposition}\label{prop_hdw}
Let $\Omega\in\mathcal{K}^2$. Then, it holds
\begin{equation}\label{hdwlow}
h(\Omega)\ge \frac{1}{t_{g_3^\Omega}}
\end{equation}
where $t_{g_3^\Omega}$ is the smallest solution to
$${g_3^\Omega}(t):=\frac{\omega(\Omega)-2t}{2}\sqrt{(d(\Omega)-2t)^2-(\omega(\Omega)-2t)^2}+ \frac{(d(\Omega)-2t)^2}{2}\arcsin\left(\frac{\omega(\Omega)-2t}{d(\Omega)-2t}\right)=\pi t^2.$$
The equality in \eqref{hdwlow} is achieved by symmetrical spherical slices.
Moreover,
\begin{itemize}
\item if $\omega(\Omega)\le \frac{\sqrt{3}}{2}d(\Omega)$, it holds
\begin{equation}\label{hdwup}
h(\Omega)\le h(T_I),
\end{equation}
where $T_I$ is the subequilateral triangle such that $\omega(T_I)=\omega(\Omega)$ and $d(T_I)=d(\Omega)$. The equality in \eqref{hdwup} is achieved by the isosceles triangle $T_I$,
\item and if $\frac{\sqrt{3}}{2}d(\Omega)\leq\omega(\Omega) \leq d(\Omega)$, we have
\begin{equation}\label{hdwup_1}
h(\Omega)\le \frac{\sqrt{3}}{\sqrt{3}\omega(\Omega)-d(\Omega)}+\sqrt{\frac{2\pi}{\pi \omega(\Omega)^2-\sqrt{3}d(\Omega)^2+6\omega(\Omega)^2\left(\tan\left(\arccos\left(\frac{\omega(\Omega)}{d(\Omega)}\right)\right)-\arccos\left(\frac{\omega(\Omega)}{d(\Omega)}\right)\right)}}.
\end{equation}
The equality in \eqref{hdwup_1} is achieved by equilateral triangles.
\end{itemize}
\end{proposition}
\begin{proof}
Let us start by proving the lower bound \eqref{hdwlow}, by using the strategy given in Lemma \ref{lem:main}.
For every $\Omega\in\mathcal{K}^2$, it holds (see \cite{kubota} and also \cite{inequalities_convex}) \begin{equation*}
\label{Adw_up}
\abs{\Omega}\le \frac{\omega(\Omega)}{2}\sqrt{d^2(\Omega)-\omega^2(\Omega)}+ \frac{d^2(\Omega)}{2}\arcsin\left(\frac{\omega(\Omega)}{d(\Omega)}\right)
\end{equation*}
with equality if and only if $\Omega$ is a symmetrical spherical slice. If we denote by
$$f(d,w)=\frac{w}{2}\sqrt{d^2-w^2}+\frac{d^2}{2}\arcsin\left(\frac{w}{d}\right),$$
we have
$$\frac{\partial f}{\partial d}(d,w)= d\arcsin\left(\frac{w}{d}\right)>0,$$
$$\frac{\partial f}{\partial w}(d,w)=\sqrt{d^2-w^2}>0.$$
Thus, using Lemma \ref{lem:main}, we have
$$\abs{\Omega_{-t}}\le f(d(\Omega_{-t}), \omega(\Omega_{-t}))\le f(d(\Omega)-2t, \omega(\Omega)-2t)$$
and
$$h(\Omega)\ge \frac{1}{t_{g_3^\Omega}},$$
where $t_{g_3^\Omega}$ is the smallest solution to
$${g_3^\Omega}(t):=f(d(\Omega)-2t, \omega(\Omega)-2t)=\pi t^2.$$
In order to prove the upper bound \eqref{hdwup}, we consider the following minimization problem for the area in the class of convex sets with given diameter and width, studied in \cite{sholander} and \cite{inequalities_convex}:
\begin{enumerate}[label=(\roman*)]
\item if $2\omega(\Omega)\le \sqrt{3}d(\Omega)$, then
\begin{equation}\label{min1}
2\abs{\Omega}\ge \omega(\Omega)d(\Omega),
\end{equation}
where equality is achieved by triangles;
\item if $\sqrt{3}d(\Omega)\le 2\omega(\Omega)\le 2 d(\Omega)$, then
\begin{equation}\label{min2}
\displaystyle{2\abs{\Omega}\ge\pi \omega^2(\Omega)-\sqrt{3}d^2(\Omega)+6\omega^2(\Omega)\left(\tan\left(\arccos\left(\frac{\omega(\Omega)}{d(\Omega)}\right)\right)-\arccos\left(\frac{\omega(\Omega)}{d(\Omega)}\right)\right)}=|T_Y|,
\end{equation}
where the equality is achieved by the Yamanouti set $T_Y$ such that $\omega(T_Y)=\omega(\Omega)$ and $d(T_Y)=d(\Omega)$.
\end{enumerate}
Moreover, if we consider the minimization problem of the inradius in the class of convex sets with given diameter and width, we have from Theorem \ref{cifre_wrd_thm} (see \eqref{cifre_wrd_eq_1} and \eqref{cifre_wrd_eq_2}):
\begin{equation}\label{min3}
r(\Omega)\geq \begin{cases}
r(T_I), & \text{if}\ \, 2\omega(\Omega)\le \sqrt{3}d(\Omega)\\
r(T_Y) & \text{if}\ \, \sqrt{3}d(\Omega)\le 2\omega(\Omega)\le 2 d(\Omega).
\end{cases}
\end{equation}
So, combining \eqref{eq:hra} with the estimates \eqref{min1}, \eqref{min2} and \eqref{min3}, we obtain
\begin{equation*}
h(\Omega)\leq \begin{cases}
\frac{1}{r(T_I)}+\sqrt{\frac{\pi}{|T_I|}}= h(T_I), & \text{if}\ \, 2\omega(\Omega)\le \sqrt{3}d(\Omega)\\
\frac{1}{r(T_Y)}+\sqrt{\frac{\pi}{|T_Y|}} & \text{if}\ \, \sqrt{3}d(\Omega)\le 2\omega(\Omega)\le 2 d(\Omega).
\end{cases}
\end{equation*}
The explicit formula given in the inequality \eqref{hdwup_1} is obtained by using \eqref{min2} and $r(T_Y)=\omega(T_Y)-\frac{d(T_Y)}{\sqrt{3}}$, see \cite[Theorem 2]{cifre4}.
\end{proof}
\begin{figure}[h]
\centering
\includegraphics[scale=.4]{fig_hwd.eps}
\includegraphics[scale=.5]{fig_hwd_zoom.eps}
\caption{Blaschke--Santal\'o diagram of the triplet $(h,\omega,d)$.}
\label{fig:hwd}
\end{figure}
\newpage
We are also able to give an explicit, but not sharp, lower bound of the Cheeger constant in terms of the width and the diameter.
\begin{remark}
Let $\Omega\in \mathcal{K}^2$. Then, it holds
\begin{equation}\label{lr}
h(\Omega)>\frac{1}{\omega(\Omega)}+\frac{1}{d(\Omega)}+\sqrt{\left(\frac{1}{\omega(\Omega)}+\frac{1}{d(\Omega)}\right)^2-\frac{4-\pi}{\omega(\Omega)d(\Omega)}},
\end{equation}
where equality is asymptotically achieved by a sequence of thin collapsing rectangles.
In order to prove \eqref{lr}, it is enough to consider the inequality
$$\abs{\Omega}\le \omega(\Omega) d(\Omega)$$
and to use the strategy of Lemma \ref{lem:main}.
\end{remark}
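\begin{remark}
As a numerical illustration of \eqref{lr} (not needed elsewhere), consider the unit square $Q$, for which $\omega(Q)=1$, $d(Q)=\sqrt{2}$ and $h(Q)=2+\sqrt{\pi}\approx 3.772$. The right-hand side of \eqref{lr} evaluates to
$$1+\frac{1}{\sqrt{2}}+\sqrt{\left(1+\frac{1}{\sqrt{2}}\right)^2-\frac{4-\pi}{\sqrt{2}}}\approx 3.226,$$
which is indeed below $h(Q)$.
\end{remark}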
\subsubsection{ The triplet $(h, R,\omega)$}
\begin{proposition}\label{prop_hRw}
Let $\Omega\in\mathcal{K}^2$. Then, it holds
\begin{equation}\label{low_hRw}
h(\Omega)\ge \frac{1}{t_{g_4^\Omega}},
\end{equation}
where $t_{g_4^\Omega}$ is the smallest solution of the equation
$$g_4^\Omega(t):= \frac{(\omega(\Omega)-2t)}{2} \sqrt{4 \left(R(\Omega)-t\right)^2-\left(\omega(\Omega)-2t\right)^2} + 2(R(\Omega)-t)^2 \arcsin{\left(\frac{\omega(\Omega)-2t}{2(R(\Omega)-t)}\right)}= \pi t^2$$
on $[0,r(\Omega)]$.
The equality in \eqref{low_hRw} is achieved by symmetrical spherical slices.
Moreover, it holds
\begin{equation}\label{upper_hRw}
h(\Omega)\leq h(T_I), \qquad \text{ if } \, \omega(\Omega)\le \frac{3}{2}R(\Omega),
\end{equation}
where $T_I$ is the subequilateral triangle such that $R(T_I)=R(\Omega)$ and $\omega(T_I)=\omega(\Omega)$. The equality in \eqref{upper_hRw} is realized by the subequilateral triangle $T_I$.
\end{proposition}
\begin{proof}
Let us start by proving the lower bound \eqref{low_hRw}, by using the strategy given in Lemma \ref{lem:main}.
Let us recall the function defined in \eqref{henkchi}: $$\chi:(\omega, R)\longmapsto \frac{\omega}{2} \sqrt{4 R^2-\omega^2} + 2R^2 \arcsin{\frac{\omega}{2R}}.$$
We have, for every $R,\omega>0$,
$$\frac{\partial \chi}{\partial R}(\omega, R) = 4R \arcsin{\frac{\omega}{2 R}}\ge 0 \ \ \ \text{and}\ \ \ \ \frac{\partial \chi}{\partial \omega}(\omega, R) = \sqrt{4R^2-\omega^2}\ge 0.$$
Thus, using Theorem \ref{cifrsalth}, we have, for every $t\in [0,r(\Omega))$,
$$|\Omega_{-t}|\leq \chi(\omega(\Omega_{-t}), R(\Omega_{-t}))\leq \chi(\omega(\Omega)-2t, R(\Omega)-t)=:g_4^\Omega(t),$$
where we use \eqref{width} and \eqref{circumradius}.
By Lemma \ref{lem:main}, we have
$$h(\Omega)\ge \frac{1}{t_{g_4^\Omega}},$$
where $t_{g_4^\Omega}$ is the smallest solution to the equation $g_4^\Omega(t)=\pi t^2$ on the interval $[0,r(\Omega)]$.
Let us now prove the upper bound \eqref{upper_hRw}.
We recall the inequality \eqref{eq:hra}:
\begin{equation}
\label{kf}
h(\Omega)\le \frac{1}{r(\Omega)}+ \sqrt{\frac{\pi}{\abs{\Omega}}},
\end{equation}
where equality is achieved for instance by circumscribed sets, in particular, by triangles.
In order to prove \eqref{upper_hRw}, we consider the following facts:
\begin{enumerate}[label=(\roman*)]
\item In \eqref{ARw_low}, it is proved that, if $\Omega\in\mathcal{K}^2$,
\begin{equation}\label{firsty}
\abs{\Omega}\ge \abs{T_I} \quad \text{if} \, \, \omega(\Omega)\le \frac{3}{2}R(\Omega),
\end{equation}
where $T_I$ is the subequilateral triangle with the given width and circumradius, while nothing is known when $\omega(\Omega)\ge \frac{3}{2}R(\Omega)$;
\item In Theorem \ref{cifre_gomis_1_thm}, see \eqref{cifre_gomis_1_eq}, and Theorem \ref{santalo_thm_1}, see \eqref{santalo_eq_1}, it is proved that
\begin{equation}\label{lasty}
r(\Omega) \geq \begin{cases}
r(T_I) & \text{if} \, \, {\omega(\Omega)\le \frac{3}{2}R(\Omega)}\\
\omega(\Omega)-R(\Omega) & \text{if} \, \, {\omega(\Omega)\ge \frac{3}{2}R(\Omega)},
\end{cases}
\end{equation}
where, if $\omega(\Omega)\ge \frac{3}{2}R(\Omega)$, equality is achieved by any set obtained from an equilateral triangle of circumradius $R(\Omega)$ by replacing its edges with three equal circular arcs, each arc being centered on the height relative to the corresponding side.
\end{enumerate}
So, if $\omega(\Omega)\le \frac{3}{2}R(\Omega)$, combining \eqref{kf} with \eqref{firsty} and \eqref{lasty}, we get the thesis.
\end{proof}
\begin{figure}[h]
\centering
\includegraphics[scale=.4]{fig_hw_circumradius.eps}
\includegraphics[scale=.3]{fig_hw_circumradius_zoom.eps}
\caption{Blaschke--Santal\'o diagram of the triplet $(h,\omega,R)$.}
\label{fig:hwR}
\end{figure}
In the following remark, we give some explicit, though not sharp, bounds.
\begin{remark}
We can prove that
\begin{equation}\label{ns}
h(\Omega)\le \frac{3}{\omega(\Omega)}+\sqrt{\frac{2\pi}{\sqrt{3}R(\Omega)\omega(\Omega)}}.
\end{equation}
We recall the following inequalities, proved in \cite{inequalities_convex}:
\begin{equation}
\label{Arw}
\abs{\Omega}\ge \frac{\sqrt{3}}{2}R(\Omega)\omega(\Omega)\ \ \ \text{and}\ \ \
\omega(\Omega)\le 3 r(\Omega).
\end{equation}
By combining these estimates with the upper bound in \eqref{eq:hra}, we obtain
$$h(\Omega)\leq \frac{3}{\omega(\Omega)}+\sqrt{\frac{2\pi}{\sqrt{3}R(\Omega)\omega(\Omega)}}.$$
Since the equality in \eqref{eq:hra} is achieved by circumscribed sets, while the equalities in \eqref{Arw} are achieved by equilateral triangles, which are particular circumscribed sets, the equality in \eqref{ns} holds for equilateral triangles.
Moreover, another not-sharp lower bound can be obtained by using the strategy from Lemma \ref{lem:main}, starting from
$$\abs{\Omega}< 2 R(\Omega)\omega(\Omega),$$
which is asymptotically achieved by a sequence of rectangles with circumradius that goes to infinity (see \cite{henk}). We get
$\abs{\Omega_{-t}}\le 2 R(\Omega_{-t})\omega(\Omega_{-t})\le 2(R(\Omega)-t)(\omega(\Omega)-2t)$
and, consequently,
$$ h(\Omega)\ge\frac{4-\pi}{(2R(\Omega)+\omega(\Omega))-\sqrt{(2R(\Omega)+\omega(\Omega))^2-(4-\pi)(2R(\Omega)\omega(\Omega))}}.$$
\end{remark}
\subsubsection{The triplet $(h, \omega, P)$}
\begin{proposition}\label{prop_hwP}
Let $\Omega\in\mathcal{K}^2$. Then, it holds
\begin{equation}
\label{hwp_low}
h(\Omega)\ge \frac{2}{\omega(\Omega)}+\frac{2\pi}{2P(\Omega)-\pi \omega(\Omega)},
\end{equation}
where the equality is achieved by stadiums.
Moreover, if $P(\Omega)\ge 2\sqrt{3}\omega(\Omega)$, it holds
\begin{equation}
\label{hwp_up}
h(\Omega)\le h(T_I),
\end{equation}
where $T_I$ is the subequilateral triangle such that $P(T_I)=P(\Omega)$ and $\omega(T_I)=\omega(\Omega)$. The equality in \eqref{hwp_up} is achieved by the subequilateral triangle $T_I$.
\end{proposition}
\begin{proof}
The inequality \eqref{hwp_low} is a consequence of \eqref{hAw_low} and the inequality
$$|\Omega|\leq \frac{\omega(\Omega)}{2}\left(P(\Omega)-\frac{\pi\omega(\Omega)}{2}\right),$$
which is an equality for stadiums (see for example \cite{inequalities_convex} and the references therein).
Let us now assume that $P(\Omega)\ge 2\sqrt{3}\omega(\Omega)$. In order to prove \eqref{hwp_up}, we recall the inequality \eqref{cifre_salinas_rwP}
$$ P(\Omega)\le \sqrt{\frac{4 r^2(\Omega)\omega^3(\Omega)}{(\omega(\Omega)-2r(\Omega))^2(4r(\Omega)-\omega(\Omega))}}=:f_{\omega(\Omega)}(r(\Omega)).$$
By direct computations, we prove that the continuous function
$$f_{\omega(\Omega)}: r\longmapsto \sqrt{\frac{4 r^2 \omega(\Omega)^3}{(\omega(\Omega)-2r)^2(4r-\omega(\Omega))}}$$
is strictly increasing on $\left[\frac{\omega(\Omega)}{3}, \frac{\omega(\Omega)}{2}\right)$. Let us denote by $g_{\omega(\Omega)}$ the inverse function of $f_{\omega(\Omega)}$, which is also continuous and strictly increasing. We have
$$r(\Omega)\ge g_{\omega(\Omega)}(P(\Omega))=r(T_I),$$
where $T_I$ is any subequilateral triangle such that $\omega(T_I) = \omega(\Omega)$ and $P(T_I)=P(\Omega)$. Moreover, since $P(\Omega)\ge 2\sqrt{3}\omega(\Omega)$, we have by the results of \cite{yamanouti}
$$|\Omega|\ge |T_I|,$$
see also \cite{inequalities_convex}.
Finally, we have $$h(\Omega)\leq \frac{1}{r(\Omega)}+\sqrt{\frac{\pi}{|\Omega|}}\leq \frac{1}{r(T_I)}+\sqrt{\frac{\pi}{|T_I|}}=h(T_I).$$
\end{proof}
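\begin{remark}
As a consistency check of \eqref{hwp_low}, consider the disk $B_R$, a degenerate stadium, with $\omega(B_R)=2R$, $P(B_R)=2\pi R$ and $h(B_R)=2/R$: the right-hand side equals
$$\frac{2}{2R}+\frac{2\pi}{4\pi R-2\pi R}=\frac{1}{R}+\frac{1}{R}=\frac{2}{R},$$
so the inequality is attained, as expected from the stadium case.
\end{remark}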
\begin{figure}[h]
\centering
\includegraphics[scale=.4]{fig_hpw.eps}
\includegraphics[scale=.5]{fig_hpw_zoom.eps}
\caption{Blaschke--Santal\'o diagram of the triplet $(h,\omega,P)$.}
\label{fig:hpw}
\end{figure}
\subsubsection{The triplet $(h, |\cdot| , \omega)$}
\begin{proposition}\label{prop_hAw}
Let $\Omega\in\mathcal{K}^2$. Then, it holds
\begin{equation}\label{hAw_low}
h(\Omega)\geq \frac{2}{\omega(\Omega)}+\frac{\pi \omega(\Omega)}{2 \abs{\Omega} },
\end{equation}
where equality is achieved by stadiums. Moreover, it holds
\begin{equation}\label{hAw_up}
h(\Omega)\leq h(T_I),
\end{equation}
where $T_I$ is a subequilateral triangle such that $|\Omega|=|T_I|$ and $\omega(\Omega)=\omega(T_I)$. The equality in \eqref{hAw_up} is achieved by the subequilateral triangle $T_I$.
\end{proposition}
\begin{proof}
Let us prove the inequality \eqref{hAw_low}. We recall the lower bound in \eqref{eq:hra}
$$h(\Omega)\ge \frac{1}{r(\Omega)}+\frac{\pi r(\Omega)}{|\Omega|},$$
which is an equality if and only if $\Omega$ is a stadium. Since the function $r\longmapsto \frac{1}{r}+\frac{\pi r}{|\Omega|}$ is strictly decreasing and $r(\Omega)\leq \frac{\omega(\Omega)}{2}$ (where equality holds for stadiums), we obtain \eqref{hAw_low}.
Let us now prove inequality \eqref{hAw_up}. We recall the inequality \eqref{cifre_salinas_rwA}
$$ |\Omega|\le \sqrt{\frac{ r^4(\Omega)\omega^3(\Omega)}{(\omega(\Omega)-2r(\Omega))^2(4r(\Omega)-\omega(\Omega))}}=:f_{\omega(\Omega)}(r(\Omega)).$$
By direct computations, we prove that the continuous function
$$f_{\omega(\Omega)}: r\longmapsto \sqrt{\frac{r^4 \omega(\Omega)^3}{(\omega(\Omega)-2r)^2(4r-\omega(\Omega))}}$$
is strictly increasing on $\left[\frac{\omega(\Omega)}{3}, \frac{\omega(\Omega)}{2}\right)$. Let us denote by $g_{\omega(\Omega)}$ the inverse function of $f_{\omega(\Omega)}$, which is also continuous and strictly increasing. We have
$$r(\Omega)\ge g_{\omega(\Omega)}(|\Omega|)=r(T_I),$$
where $T_I$ is any subequilateral triangle such that $\omega(T_I) = \omega(\Omega)$ and $|T_I|=|\Omega|$. Thus, we have
$$h(\Omega)\leq \frac{1}{r(\Omega)}+\sqrt{\frac{\pi}{|\Omega|}}\leq\frac{1}{r(T_I)}+\sqrt{\frac{\pi}{|T_I|}}=h(T_I),$$with equality if and only if $\Omega=T_I$.
\end{proof}
\begin{figure}[h]
\centering
\includegraphics[scale=.5]{fig_hwa.eps}
\caption{Blaschke--Santal\'o diagram of the triplet $(h,\omega,|\cdot|)$.}
\label{fig:hwa}
\end{figure}
\begin{remark}\label{rk:hAw}
One may use classical convex geometry inequalities to obtain simpler bounds than the implicit one given in \eqref{hAw_up}. Indeed, if we combine the inequalities in \eqref{eq:hra} and the following classical ones
\begin{equation*}\label{eq:rw}
\frac{2}{\omega(\Omega)}\leq \frac{1}{r(\Omega)}\leq \frac{2}{\omega(\Omega)}+ \frac{\omega(\Omega)}{\sqrt{3}|\Omega|}
\end{equation*}
where the lower bound is realized in particular by stadiums, and the upper bound by equilateral triangles (see for example \cite{inequalities_convex} and the references therein), we obtain the following inequalities
\begin{equation}\label{hAw_up1}
h(\Omega)\leq \frac{2}{\omega(\Omega)}+ \frac{\omega(\Omega)}{\sqrt{3}|\Omega|}+\sqrt{\frac{\pi}{|\Omega|}}
\end{equation}
and
\begin{equation}\label{hAw_up2}
h(\Omega)\leq \frac{2}{\omega(\Omega)-\frac{\omega(\Omega)^3}{4|\Omega|}}+\sqrt{\frac{\pi}{|\Omega|}}.
\end{equation}
The bound \eqref{hAw_up1} is achieved by equilateral triangles and \eqref{hAw_up2} is asymptotically achieved by a sequence of thin subequilateral triangles.
\end{remark}
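\begin{remark}
One can verify the claimed sharpness of \eqref{hAw_up1} on the equilateral triangle $T$ directly: with $\omega(T)=3r(T)$ and $|T|=3\sqrt{3}\,r(T)^2$, we get
$$\frac{2}{\omega(T)}+\frac{\omega(T)}{\sqrt{3}|T|}=\frac{2}{3r(T)}+\frac{3r(T)}{9r(T)^2}=\frac{1}{r(T)},$$
so the right-hand side of \eqref{hAw_up1} reduces to $\frac{1}{r(T)}+\sqrt{\frac{\pi}{|T|}}=h(T)$.
\end{remark}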
\subsection{Proof of Theorem \ref{th4}}
\subsubsection{The triplet $(h, d,R)$}
\begin{proposition}\label{prop_hRd}
Let $\Omega\in\mathcal{K}^2$. Then, it holds
\begin{equation}\label{hRdup}
h(\Omega)\le \frac{2R(\Omega) \left(2R(\Omega)+ \sqrt{4R(\Omega)^2-d(\Omega)^2}\right)}{d(\Omega)^2\sqrt{4R(\Omega)^2-d(\Omega)^2}}+ \sqrt{\frac{4\pi R(\Omega)^2}{d(\Omega)^3 \sqrt{4R(\Omega)^2-d(\Omega)^2}}},
\end{equation}
where the equality is achieved by subequilateral triangles.
\end{proposition}
\begin{proof}
The inequality \eqref{hRdup} is obtained by combining the upper bound in \eqref{eq:hra},
which is an equality for circumscribed sets (in particular subequilateral triangles), and
$$
r(\Omega)\ge \frac{d(\Omega)^2\sqrt{4R(\Omega)^2-d(\Omega)^2}}{2R(\Omega)\left(2R(\Omega)+\sqrt{4R(\Omega)^2-d(\Omega)^2}\right)}\ \ \ \text{and}\ \ \ |\Omega|\ge \frac{d(\Omega)^3\sqrt{4R(\Omega)^2-d(\Omega)^2}}{4R(\Omega)^2},
$$
respectively proved in \cite{santalo} and \cite{cifre2}, where the equalities hold only for subequilateral triangles.
\end{proof}
\begin{figure}[h]
\centering
\includegraphics[scale=.44]{fig_hdR.eps}
\caption{Blaschke--Santal\'o diagram of the triplet $(h,R,d)$.}
\label{fig:hdR}
\end{figure}
\newpage
\subsubsection{The triplet $(h, \omega, r)$}
\begin{proposition}\label{prop_hwr}
Let $\Omega\in\mathcal{K}^2$. Then, it holds
\begin{equation}\label{ineq:hwr_lower}
h(\Omega)\ge \frac{1}{r(\Omega)}+\frac{1}{r(\Omega)}\sqrt{\pi\left(1-\frac{2r(\Omega)}{\omega(\Omega)}\right)\sqrt{\frac{4r(\Omega)}{\omega(\Omega)}-1}}, \end{equation}
where the equality is achieved by subequilateral triangles.
\end{proposition}
\begin{proof}
The proof of \eqref{ineq:hwr_lower} is inspired by the proof of \cite[Theorem 5]{cifre_salinas}. It is known that the incircle of a set $\Omega$ meets the boundary of $\Omega$ either in two diametrically opposite points, or in three points that form the vertices of a triangle, see \cite{bonnesen}. In the first case, we have $\omega(\Omega)=2r(\Omega)$, thus the inequality \eqref{ineq:hwr_lower} is equivalent to $h(\Omega)\ge \frac{1}{r(\Omega)}$, which is proved in Proposition \ref{two}. In the second case, let us denote by $T$ a triangle formed by three lines of support common to $\Omega$ and the incircle.
We have $r(\Omega)=r(T)$ and, by $\Omega \subset T$ and the monotonicity with respect to the inclusion, we get
\begin{equation}\label{h}
h(\Omega)\ge h(T)
\end{equation}
and
\begin{equation}\label{omega}
\omega(\Omega)\leq \omega(T).
\end{equation}
So, inequality \eqref{ineq:hwr_lower} is equivalent to the following one
$$
\frac{1}{r(\Omega)h(\Omega)}f\left(\frac{r(\Omega)}{\omega(\Omega)}\right)\leq 1,
$$
where $f:x\in\left[\frac{1}{3},\frac{1}{2}\right]\longmapsto 1+\sqrt{\pi\left(1-2x\right)\sqrt{4x-1}}$. We observe that the function $g:x\in\left[\frac{1}{3},\frac{1}{2}\right]\longmapsto \left(1-2x\right)\sqrt{4x-1}$ is decreasing. Indeed,
$$g'(x) = \frac{4(1-3x)}{\sqrt{4x-1}}\leq 0.$$
Thus, $f$ is also decreasing on $\left[\frac{1}{3},\frac{1}{2}\right]$. Then, since $\frac{r(\Omega)}{\omega(\Omega)}\ge \frac{r(T)}{\omega(T)}$ by \eqref{omega}, we have
$$f\left(\frac{r(\Omega)}{\omega(\Omega)}\right) \leq f\left(\frac{r(T)}{\omega(T)}\right).$$
Moreover, we have, by \eqref{h}, $$\frac{1}{r(\Omega)h(\Omega)}\leq \frac{1}{r(T)h(T)}.$$
Thus, we have
$$\frac{1}{r(\Omega)h(\Omega)}f\left(\frac{r(\Omega)}{\omega(\Omega)}\right) \leq \frac{1}{r(T)h(T)}f\left(\frac{r(T)}{\omega(T)}\right)=\frac{1}{1+r(T)\sqrt{\frac{\pi}{|T|}}}f\left(\frac{r(T)}{\omega(T)}\right),$$
where we used the equality $h(T) = \frac{1}{r(T)}+\sqrt{\frac{\pi}{|T|}}$, which holds because $T$ is a triangle, see \cite{ftJMAA}.
Now, we use the inequality $$|T|\leq \frac{r(T)^2}{\left(1-\frac{2r(T)}{\omega(T)}\right)\sqrt{\frac{4r(T)}{\omega(T)}-1}},$$
which is an equality if and only if $T$ is a subequilateral triangle, see \cite[Theorem 5]{cifre_salinas}. We then have
$$\frac{1}{r(\Omega)h(\Omega)}f\left(\frac{r(\Omega)}{\omega(\Omega)}\right) \leq\frac{1}{1+r(T)\sqrt{\frac{\pi}{|T|}}}f\left(\frac{r(T)}{\omega(T)}\right)\leq 1,$$
which ends the proof.
\end{proof}
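Although not needed for the argument, inequality \eqref{ineq:hwr_lower} is easy to test numerically. The following Python sketch (helper names are ours) checks equality for the equilateral triangle, using the triangle identity $h(T)=\frac{1}{r(T)}+\sqrt{\pi/|T|}$, and strict inequality for the unit square, for which $h=2+\sqrt{\pi}$ because the square is circumscribed (equality case of the upper bound in \eqref{eq:hra}).

```python
import math

def hwr_lower(r, w):
    # Right-hand side of the bound: h >= (1/r) * (1 + sqrt(pi*(1 - 2r/w)*sqrt(4r/w - 1)))
    x = r / w  # lies in [1/3, 1/2] for every planar convex body
    return (1.0 / r) * (1.0 + math.sqrt(math.pi * (1 - 2 * x) * math.sqrt(4 * x - 1)))

# Equilateral triangle with side 1: r = 1/(2*sqrt(3)), w = sqrt(3)/2, |T| = sqrt(3)/4.
# For a triangle, h(T) = 1/r + sqrt(pi/|T|).
r_t, w_t, area_t = 1 / (2 * math.sqrt(3)), math.sqrt(3) / 2, math.sqrt(3) / 4
h_t = 1 / r_t + math.sqrt(math.pi / area_t)
assert abs(h_t - hwr_lower(r_t, w_t)) < 1e-12   # equality for (sub)equilateral triangles

# Unit square: r = 1/2, w = 1; being circumscribed, h = 1/r + sqrt(pi/|.|) = 2 + sqrt(pi).
h_sq = 2 + math.sqrt(math.pi)
assert h_sq > hwr_lower(0.5, 1.0)               # strict inequality away from triangles
```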
\begin{figure}[h]
\centering
\includegraphics[scale=.40]{fig_hw_inradius.eps}
\caption{Blaschke--Santal\'o diagram of the triplet $(h,\omega,r)$.}
\label{fig:hwr}
\end{figure}
\begin{remark}It is possible to give the following non-sharp upper bound:
\begin{equation*}\label{ineq:hwr_upper}
h(\Omega) \leq \frac{1}{r(\Omega)}+ \frac{\sqrt{\pi\sqrt{3}}}{\omega(\Omega)}.
\end{equation*}
To prove it, we combine the upper bound in \eqref{eq:hra} with the inequality $\omega(\Omega)^2\leq \sqrt{3}|\Omega|$, see \cite{inequalities_convex}.
\end{remark}
\subsection{Some non-sharp bounds}
The following four diagrams remain unsolved. Nevertheless, we provide some partial results.
\subsubsection{The triplet $(h,R,|\cdot|)$}
\begin{proposition}\label{prop_hAR}
Let $\Omega\in\mathcal{K}^2$. Then, it holds
\begin{equation}\label{RA_low}
h(\Omega)\ge \frac{1}{2R(\Omega)}+\frac{\pi R(\Omega)}{2|\Omega|}+\sqrt{\frac{\pi}{|\Omega|}},
\end{equation}
where equality is achieved by balls. Moreover, it holds
\begin{equation}
\label{RA_up}
h(\Omega)<\frac{1}{R(\Omega)}+\frac{4R(\Omega)}{|\Omega|},
\end{equation}
where equality is asymptotically achieved along a sequence of thinning stadiums.
\end{proposition}
\begin{proof}
Let us start by proving \eqref{RA_low}.
Combining the lower bound in \eqref{eq:hpa} with the inequality $$|\Omega|\leq R(\Omega)(P(\Omega)-\pi R(\Omega))$$ (see \cite{inequalities_convex}), we obtain the claimed result.
As for \eqref{RA_up}, we have
$$h(\Omega) \leq \frac{P(\Omega)}{|\Omega|}<\frac{\frac{|\Omega|}{R(\Omega)}+4R(\Omega)}{|\Omega|}=\frac{1}{R(\Omega)}+\frac{4R(\Omega)}{|\Omega|},$$
where we used the inequality (see \cite{inequalities_convex})
\begin{equation}\label{ARP_scott}
|\Omega|>R(\Omega)(P(\Omega)-4R(\Omega)).
\end{equation}
The equality in \eqref{ARP_scott}, and hence the equality in \eqref{RA_up}, is asymptotically achieved by thinning stadiums.
\end{proof}
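As a numerical sanity check of Proposition \ref{prop_hAR} (not part of the proof), the Python sketch below verifies equality in \eqref{RA_low} for the unit disk, whose Cheeger constant is $h=P/|\Omega|=2$, and the asymptotic sharpness of \eqref{RA_up} along thinning stadiums, for which $h=\frac{1}{r}+\frac{\pi r}{|\Omega|}$ (equality case of the $(h,r,|\cdot|)$ diagram).

```python
import math

def h_lower_RA(R, A):   # right-hand side of (RA_low), valid for every planar convex body
    return 1/(2*R) + math.pi*R/(2*A) + math.sqrt(math.pi/A)

def h_upper_RA(R, A):   # right-hand side of (RA_up), a strict upper bound
    return 1/R + 4*R/A

# Unit disk: R = 1, A = pi, and h(disk) = P/A = 2 (the disk is its own Cheeger set).
assert abs(h_lower_RA(1.0, math.pi) - 2.0) < 1e-12     # equality for balls
assert h_upper_RA(1.0, math.pi) > 2.0                  # strict inequality

# Thinning stadium (two half-disks of radius rho glued to a 1 x 2*rho rectangle):
# h = 1/rho + pi*rho/A (equality case for stadiums), R = 1/2 + rho, A = 2*rho + pi*rho^2.
rho = 1e-3
R, A = 0.5 + rho, 2*rho + math.pi*rho**2
h = 1/rho + math.pi*rho/A
assert h < h_upper_RA(R, A) < 1.01 * h                 # bound asymptotically attained
```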
\begin{figure}[h]
\centering
\includegraphics[scale=.42]{fig_hRA.eps}
\caption{Blaschke--Santal\'o diagram of the triplet $(h,R,|\cdot|)$.}
\label{fig:hRA}
\end{figure}
\subsubsection{The triplet $(h, P,R)$}
\begin{proposition}\label{prop_hRP}
Let $\Omega\in\mathcal{K}^2$. Then, it holds
\begin{equation}\label{PR_up}
h(\Omega)<\frac{P(\Omega)}{R(\Omega)(P(\Omega)-4R(\Omega))},
\end{equation}
where equality is asymptotically achieved by thinning stadiums. Moreover, it holds
\begin{equation}
\label{PR_low}
h(\Omega) \ge \frac{4\arcsinc{\left(\frac{4R(\Omega)}{P(\Omega)}\right)}}{P(\Omega)-4R(\Omega)\cos{\left(\arcsinc{\left(\frac{4R(\Omega)}{P(\Omega)}\right)}\right)}} + \sqrt{\frac{8\pi \arcsinc{\left(\frac{4R(\Omega)}{P(\Omega)}\right)}}{P(\Omega)\left(P(\Omega)-4R(\Omega)\cos{\left(\arcsinc{\left(\frac{4R(\Omega)}{P(\Omega)}\right)}\right)}\right)}},
\end{equation}
where equality is achieved by balls.
\end{proposition}
\begin{proof}
The inequality \eqref{PR_up} is a direct consequence of the inequalities $h(\Omega)\leq \frac{P(\Omega)}{|\Omega|}$ and $|\Omega|>R(\Omega)(P(\Omega)-4R(\Omega))$; the equality in \eqref{ARP_scott}, and hence in \eqref{PR_up}, is asymptotically achieved by thinning stadiums.\\
As for \eqref{PR_low}, we combine the lower bound in \eqref{eq:hpa} and inequality \eqref{santalo_APR}.
The equality in the lower bound in \eqref{eq:hpa} is achieved by circumscribed sets, while the equality in \eqref{santalo_APR} is achieved by symmetrical lenses; since balls belong to both families, the equality in \eqref{PR_low} is achieved by balls.
\end{proof}
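Assuming, as the notation suggests, that $\arcsinc$ denotes the inverse of $x\mapsto \sin(x)/x$ on $(0,\pi]$, the lower bound \eqref{PR_low} can be evaluated numerically; the Python sketch below (function names are ours) confirms equality for the unit disk, where $4R/P=2/\pi$ and $\arcsinc(2/\pi)=\pi/2$.

```python
import math

def arcsinc(y, tol=1e-14):
    # Inverse of sinc(x) = sin(x)/x on (0, pi], found by bisection (sinc decreases there).
    lo, hi = 1e-12, math.pi
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if math.sin(mid) / mid > y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def h_lower_PR(P, R):
    # Right-hand side of (PR_low).
    a = arcsinc(4 * R / P)
    den = P - 4 * R * math.cos(a)
    return 4 * a / den + math.sqrt(8 * math.pi * a / (P * den))

# Unit disk: P = 2*pi, R = 1, h = 2; here 4R/P = 2/pi, so arcsinc(2/pi) = pi/2.
assert abs(arcsinc(2 / math.pi) - math.pi / 2) < 1e-9
assert abs(h_lower_PR(2 * math.pi, 1.0) - 2.0) < 1e-9  # equality for balls
```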
\begin{remark}
We note that one can use the inequality (see \cite{inequalities_convex})
$$|\Omega|\leq 2R(\Omega)(P(\Omega)-2R(\Omega))$$
and the Lemma \ref{lem:main} to obtain the following explicit bound:
$$h(\Omega)\ge \frac{P(\Omega)+2(\pi-2)R(\Omega)+\sqrt{P(\Omega)^2-2\pi P(\Omega)R(\Omega)+4\pi(\pi-1)R(\Omega)^2}}{2R(\Omega)(P(\Omega)-2R(\Omega))}.$$
\end{remark}
\begin{figure}[h]
\centering
\includegraphics[scale=.38]{fig_hP_circumradius.eps}
\includegraphics[scale=.5]{fig_hP_circumradius_zoom.eps}
\caption{Blaschke--Santal\'o diagram of the triplet $(h,P,R)$.}
\label{fig:hp_circumradius}
\end{figure}
\pagebreak
\subsubsection{The triplet $(h, P, d)$}
\begin{proposition}\label{prop_hPd}
Let $\Omega\in\mathcal{K}^2$. Then, it holds
\begin{equation}\label{ineq:hpd_lower}
h(\Omega) \ge \frac{4 \arcsinc{\left(\frac{2d(\Omega)}{P(\Omega)}\right)}}{P(\Omega)-2d(\Omega)\cos{\left(\arcsinc{\left(\frac{2d(\Omega)}{P(\Omega)}\right)}\right)}} + \sqrt{\frac{8\pi\arcsinc{\left(\frac{2d(\Omega)}{P(\Omega)}\right)}}{P(\Omega)\left(P(\Omega)-2d(\Omega)\cos{\left(\arcsinc{\left(\frac{2d(\Omega)}{P(\Omega)}\right)}\right)}\right)}},
\end{equation}
where equality is achieved by balls. Moreover, if $P(\Omega) \in (2d(\Omega),3d(\Omega)]$, it holds
\begin{equation}\label{ineq:hpd_up1}
h(\Omega) < \frac{4}{P(\Omega)-2d(\Omega)} + \sqrt{\frac{4\pi}{(P(\Omega)-2d(\Omega))\sqrt{P(\Omega)(4d(\Omega)-P(\Omega))}}},
\end{equation}
and, if $P(\Omega) \in [3d(\Omega),\pi d(\Omega)]$,
\begin{equation}\label{ineq:hpd_up2}
h(\Omega) < \frac{4}{P(\Omega)-2d(\Omega)} + \sqrt{\frac{4\pi}{\sqrt{3} d(\Omega)(P(\Omega)-2d(\Omega))}}.
\end{equation}
\end{proposition}
\begin{proof}
The inequality \eqref{ineq:hpd_lower} is obtained by combining the lower bound in \eqref{eq:hpa}
and the following inequality (see \cite{inequalities_convex})
$$|\Omega|<\frac{P(\Omega)\left(P(\Omega)-2d(\Omega)\cos{\arcsinc{\left(\frac{2d(\Omega)}{P(\Omega)}\right)}}\right)}{8\arcsinc{\left(\frac{2d(\Omega)}{P(\Omega)}\right)}}.$$
The upper bounds \eqref{ineq:hpd_up1} and \eqref{ineq:hpd_up2} are obtained by combining the upper bound in \eqref{eq:hra} and the inequalities (see \cite{inequalities_convex})
$$ P(\Omega)<2d(\Omega)+4r(\Omega)$$
and
\begin{equation*}\label{min4}
4|\Omega|\geq \begin{cases}
(P(\Omega)-2d(\Omega))\sqrt{P(\Omega)(4d(\Omega)-P(\Omega))}, & \text{if}\ \, P(\Omega) \in (2d(\Omega),3d(\Omega)],\\
\sqrt{3}\, d(\Omega)(P(\Omega)-2d(\Omega)), & \text{if}\ \, P(\Omega) \in (3d(\Omega),\pi d(\Omega)].
\end{cases}
\end{equation*}
\end{proof}
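The same kind of numerical check applies here (again reading $\arcsinc$ as the inverse of $x\mapsto\sin(x)/x$ on $(0,\pi]$): for the unit disk $P=2\pi$, $d=2$ and $h=2$, the lower bound \eqref{ineq:hpd_lower} is attained with equality, while \eqref{ineq:hpd_up2} is a strict upper bound. An illustrative Python sketch, with helper names of our choosing:

```python
import math

def arcsinc(y):
    # Inverse of sinc(x) = sin(x)/x on (0, pi], by bisection (sinc decreases there).
    lo, hi = 1e-12, math.pi
    for _ in range(200):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if math.sin(mid) / mid > y else (lo, mid)
    return (lo + hi) / 2

def h_lower_Pd(P, d):
    # Right-hand side of (ineq:hpd_lower).
    a = arcsinc(2 * d / P)
    den = P - 2 * d * math.cos(a)
    return 4 * a / den + math.sqrt(8 * math.pi * a / (P * den))

def h_upper_Pd_case2(P, d):
    # Right-hand side of (ineq:hpd_up2), valid for P in [3d, pi*d].
    return 4 / (P - 2 * d) + math.sqrt(4 * math.pi / (math.sqrt(3) * d * (P - 2 * d)))

# Unit disk: P = 2*pi, d = 2, h = 2; note P lies in [3d, pi*d], so both bounds apply.
P, d = 2 * math.pi, 2.0
assert abs(h_lower_Pd(P, d) - 2.0) < 1e-9        # equality for balls
assert h_upper_Pd_case2(P, d) > 2.0              # strict upper bound
```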
\begin{figure}[h!]
\centering
\includegraphics[scale=.31]{fig_hPd.eps}
\includegraphics[scale=.35]{fih_hPd_zoom1.eps}
\includegraphics[scale=.35]{fig_hPd_zoom2.eps}
\caption{Blaschke--Santal\'o diagram of the triplet $(h,P,d)$.}
\label{fig:hpd}
\end{figure}
\newpage
\subsubsection{The triplet $(h, |\cdot|, d)$}
\begin{proposition}\label{prop_hdA}
Let $\Omega\in\mathcal{K}^2$. Then, it holds
$$ h(\Omega)> \frac{d(\Omega)}{|\Omega|}+\sqrt{\frac{\pi}{|\Omega|}},$$
where the equality is asymptotically achieved by thin vanishing symmetrical two-cup bodies. Moreover, it holds
$$h(\Omega) \leq \frac{4}{d(\Omega)}+\frac{2d(\Omega)}{|\Omega|},$$
and
$$h(\Omega) < \frac{2 d(\Omega)}{|\Omega|}+\sqrt{\frac{\pi}{|\Omega|}},$$
where, in both cases, the equality is asymptotically achieved by thin vanishing rectangles.
\end{proposition}
\begin{proof}
We have $$h(\Omega) \ge \frac{P(\Omega)}{2|\Omega|}+\sqrt{\frac{\pi}{|\Omega|}} > \frac{2 d(\Omega)}{2|\Omega|}+ \sqrt{\frac{\pi}{|\Omega|}}, $$
where we used the lower bound in \eqref{eq:hpa} and $P(\Omega)>2 d(\Omega)$.
Moreover, we have
$$h(\Omega) \leq \frac{P(\Omega)}{|\Omega|} \leq \frac{4}{d(\Omega)}+\frac{2d(\Omega)}{|\Omega|},$$
where we used the inequality $P(\Omega)\leq \frac{4|\Omega|}{d(\Omega)}+ 2d(\Omega)$ (see \cite{inequalities_convex}).
On the other hand, we have
$$h(\Omega) \leq \frac{1}{r(\Omega)}+\sqrt{\frac{\pi}{|\Omega|}}<\frac{2 d(\Omega)}{|\Omega|}+ \sqrt{\frac{\pi}{|\Omega|}},$$
where we used the upper bound in \eqref{eq:hra} and the inequality $|\Omega|<2d(\Omega) r(\Omega)$.
\end{proof}
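For a concrete check of Proposition \ref{prop_hdA} (not part of the proof), one can take the unit square, for which $d=\sqrt{2}$, $|\Omega|=1$ and, the square being circumscribed, $h=2+\sqrt{\pi}$:

```python
import math

# Unit square: d = sqrt(2), |.| = 1, and (being circumscribed about its incircle)
# h = 1/r + sqrt(pi/|.|) = 2 + sqrt(pi).
d, A = math.sqrt(2), 1.0
h = 2 + math.sqrt(math.pi)

assert h > d / A + math.sqrt(math.pi / A)            # lower bound
assert h <= 4 / d + 2 * d / A                        # first upper bound
assert h < 2 * d / A + math.sqrt(math.pi / A)        # second upper bound
```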
\begin{figure}[h]
\centering
\includegraphics[scale=.44]{fig_hdA.eps}
\includegraphics[scale=.44]{fig_hdA_zoom.eps}
\caption{Blaschke--Santal\'o diagram of the triplet $(h,d,|\cdot|)$.}
\label{fig:hdA}
\end{figure}
\newpage
\section{Conclusions and conjectures}\label{secult}
In this last section we collect the conjectures that can be deduced from the numerical approximations of the Blaschke--Santal\'o diagrams plotted above.
\begin{conjecture}
As far as the diagram $(h, \omega, d)$ is concerned, we conjecture that, for all $\Omega\in \mathcal{K}^2$ with $\frac{\sqrt{3}}{2} d(\Omega)\le\omega(\Omega)\le d(\Omega)$, it holds
$$h(\Omega)\le h(Y),$$
where $Y$ is the Yamanouti set with $\omega(Y)=\omega(\Omega)$ and $d(Y)=d(\Omega)$ (see Figure \ref{fig:hwd}).
\end{conjecture}
\begin{conjecture} As far as the diagram $(h, R, \omega)$ is concerned, we conjecture that, if $\omega(\Omega)\ge \frac{3}{2}R(\Omega)$, then, for all $\Omega\in \mathcal{K}^2$,
\begin{equation*}
h(\Omega)\leq h(\Bar{T}),
\end{equation*}
where $\Bar{T}$ is the set obtained from an equilateral triangle of circumradius $R(\Omega)$ by replacing its edges with three equal circular arcs, as described in Theorem \ref{santalo_thm_1} (see Figure \ref{fig:hwR}).
\end{conjecture}
\begin{conjecture} As far as the diagram $(h, \omega, P)$ is concerned, we conjecture that, if $\pi\omega(\Omega)\le P(\Omega)\le 2\sqrt{3}\omega(\Omega)$, then, for all $\Omega\in \mathcal{K}^2$,
\begin{equation*}
h(\Omega)\leq h(Y),
\end{equation*}
where $Y$ is the Yamanouti set with $P(Y)=P(\Omega)$ and $\omega(Y)=\omega(\Omega)$ (see Figure \ref{fig:hpw}).
\end{conjecture}
\begin{conjecture} As far as the diagram $(h, d, R)$ is concerned, we conjecture that
\begin{equation*}
h(\Omega)\geq h(N),
\end{equation*}
where $N$ is the set described as follows (see \cite{cifre2}).
Let $\Gamma$ and $\gamma$ be the circumcircle and the incircle of a constant width set: they are concentric and $d=\omega= R + r$. The extremal set can be constructed in the following way: an equilateral triangle $PQR$ is inscribed in the circle $\Gamma$, and we take the circular arcs of radius $R+r$ drawn about the three vertices. These arcs touch $\gamma$ at the points $\Bar{P}, \Bar{Q}, \Bar{R}$ antipodal to $P, Q, R$, respectively. Furthermore, we construct three circles of radius $(R + r)/2$ that have the sides of the triangle as chords and whose centers lie inside the triangle. The required constant width set has $3$-fold symmetry and is formed by nine arcs
of the six circles constructed above (see Figure \ref{fig:hdR}).
\end{conjecture}
\begin{conjecture}
As far as the diagram $(h,|\cdot|, R)$ is concerned, we conjecture that
$$h(\Omega)\le h(\mathcal{S})$$
and
$$h(\Omega)\ge h(\mathcal{C}),$$
where $\mathcal{S}$ is the symmetrical spherical slice with $\abs{\mathcal{S}}=\abs{\Omega}$ and $R(\mathcal{S})=R(\Omega)$ and $\mathcal{C}$ is the two-cup body with $\abs{\mathcal{C}}=\abs{\Omega}$ and $R(\mathcal{C})=R(\Omega)$ (see Figure \ref{fig:hRA}).
\end{conjecture}
\begin{conjecture}
As far as the diagram $(h, R, P)$ is concerned, we conjecture that
$$h(\Omega)\le h(\mathcal{S})$$
and
$$h(\Omega)\ge h(\mathcal{Q}),$$
where here $\mathcal{S}$ is the symmetrical spherical slice with $R(\mathcal{S})=R(\Omega)$ and $P(\mathcal{S})=P(\Omega)$ and $\mathcal{Q}$ is the quasi-lens with $R(\mathcal{Q})=R(\Omega)$ and $P(\mathcal{Q})=P(\Omega)$, which is defined as the convex hull of the Cheeger set of a symmetrical lens and its two diametrical points (see Figure \ref{fig:hp_circumradius}).
\end{conjecture}
\begin{conjecture}
As far as the diagram $(h, P, d)$ is concerned, we conjecture that
$$h(\Omega)\leq h(T_I) \quad \text{if }\ \ P(\Omega)\le 3d(\Omega)$$
and
$$h(\Omega)\ge h(\mathcal{Q}),$$
where $T_I$ is the subequilateral triangle with $d(T_I)=d(\Omega)$ and $P(T_I)=P(\Omega)$ and $\mathcal{Q}$ is the quasi-lens with $d(\mathcal{Q})=d(\Omega)$ and $P(\mathcal{Q})=P(\Omega)$ (see Figure \ref{fig:hpd}).
\end{conjecture}
\begin{conjecture}
As far as the diagram $(h, |\cdot|,d )$ is concerned, we conjecture that
$$h(\Omega)\ge h(\mathcal{C}),$$
and
$$h(\Omega)\le \begin{cases}
h(\mathcal{S}), & \text{if } d(\Omega)>\frac{2}{\sqrt[4]{3}}\sqrt{\abs{\Omega}},\\
h(\mathcal{N}), & \text{if } d(\Omega) < \frac{2}{\sqrt[4]{3}}\sqrt{\abs{\Omega}},
\end{cases}$$
where $\mathcal{C}$ is the two-cup body with $|\mathcal{C}|=|\Omega|$ and $d(\mathcal{C})=d(\Omega)$, $\mathcal{S}$ is the spherical slice with $|\mathcal{S}|=|\Omega|$ and $d(\mathcal{S})=d(\Omega)$, and $\mathcal{N}$ is the smoothed nonagon with $|\mathcal{N}|=|\Omega|$ and $d(\mathcal{N})=d(\Omega)$ (see Figure \ref{fig:hdA}).
\end{conjecture}
\newpage
\section{Appendix: Summary tables with the results}
In this first table we summarize the results for the diagrams that are completely solved.
\begin{figure}[!h] \label{table1}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
Param. & Condition & Inequality & Extremal sets & Ref. \tabularnewline
\hline
$h,P,A$ & & $h\leq \frac{P}{A}$ & Cheeger to itself & \cite{ftouhi_cheeger} \tabularnewline
& & $h\ge \frac{P+\sqrt{4\pi A}}{2A}$ & circumscribed sets & \tabularnewline \hline
$h,r,A$ & & $h\leq \frac{1}{r}+\sqrt{\frac{\pi}{A}}$ & circumscribed sets & \cite{ftJMAA} \tabularnewline
& & $h\ge \frac{1}{r}+\frac{\pi r}{A}$ & stadiums & \tabularnewline\hline
$h,r, P$ & & ${h\leq \frac{1}{r}+\sqrt{\frac{2\pi }{P r}}}$ & circumscribed sets & Prop. \ref{prop_hrP} \tabularnewline
& & ${h\ge \frac{1}{r}+\frac{\pi }{P -\pi r}}$ & stadiums & \tabularnewline
\hline
$h,d,r$ & & ${h\leq \frac{1}{r}+ \sqrt{\frac{\pi}{r\sqrt{d^2-4r^2}+r^2(\pi-2\arccos{\left(\frac{2r}{d}\right)})}}}$ & two-cup bodies & Prop. \ref{prop_hdr} \tabularnewline
& & ${h\ge \frac{1}{t_{g_1^\Omega}} }$ \textcolor{blue}{$(i)$} & spherical slices/smoothed nonagons &
\tabularnewline
\hline
$h, r, R$ & & $h\le \frac{1}{r}+ \sqrt{\frac{\pi }{2r\left(\sqrt{R^2-r^2}+r\arcsin\left(\frac{r}{R}\right)\right)}}$ & two-cup bodies & Prop. \ref{prop_hRr} \tabularnewline
& & $h\ge\frac{1}{t_{g_2^\Omega}}$ \textcolor{blue}{$(ii)$} & spherical slices & \tabularnewline
\hline
\end{tabular}
\label{fig:polygons}
\end{figure}
\begin{footnotesize}
\newline
\textcolor{blue}{$(i)$} $t_{g_1^\Omega}$ is the smallest solution on $[0, r(\Omega)]$ to $$g_1^\Omega(t):=\psi(d(\Omega)-2t,r(\Omega)-t)=\pi t^2, $$ where
\begin{equation*}
\psi(d,r):=\begin{cases}
\displaystyle{\frac{3\sqrt{3}r}{2}(\sqrt{d^2-3r^2}-r)+\frac{3d^2}{2}\left(\frac{\pi}{3}-\arccos{\left(\frac{\sqrt{3}r}{d}\right)}\right)}, & \text{if} \, \, \, d\le r D^*\vspace{1mm} \\
\displaystyle{r\sqrt{d^2-4r^2}+\frac{d^2}{2}\arcsin{\left(\frac{2r}{d}\right)}}, & \text{if} \, \, \, d\ge r D^*.
\end{cases}
\end{equation*}
and $D^*$ is the unique number in $[2,2\sqrt{3}]$ for which the two expressions of the function $\psi(d,r)$ coincide.
\newline
\textcolor{blue}{$(ii)$} $t_{g_2^\Omega}$ is the smallest solution on $[0, r(\Omega)]$ to $${g_2^\Omega}(t):=2\left((r(\Omega)-t)\sqrt{(R(\Omega)-t)^2-(r(\Omega)-t)^2}+(R(\Omega)-t)^2\arcsin{\left(\frac{r(\Omega)-t}{R(\Omega)-t}\right)}\right)=\pi t^2.$$
\end{footnotesize}
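The implicit quantities $t_{g_1^\Omega}$ and $t_{g_2^\Omega}$ are easy to compute by bisection, since the left-hand sides decrease in $t$ while $\pi t^2$ increases. As an illustrative Python sketch (names of our choosing), for the unit disk $r=R=1$ the equation for $t_{g_2^\Omega}$ reduces to $\pi(1-t)^2=\pi t^2$, so $t_{g_2^\Omega}=1/2$ and the bound $h\ge 1/t_{g_2^\Omega}=2$ is attained by the disk itself:

```python
import math

def g2(t, r, R):
    # Left-hand side of the implicit equation in footnote (ii).
    return 2 * ((r - t) * math.sqrt((R - t)**2 - (r - t)**2)
                + (R - t)**2 * math.asin((r - t) / (R - t)))

def smallest_root_g2(r, R):
    # Smallest t in [0, r] with g2(t) = pi*t^2, by bisection:
    # f(t) = g2(t) - pi*t^2 satisfies f(0) > 0 and f(r) = -pi*r^2 < 0.
    f = lambda t: g2(t, r, R) - math.pi * t * t
    lo, hi = 0.0, r
    for _ in range(100):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

t = smallest_root_g2(1.0, 1.0)   # unit disk: r = R = 1
assert abs(t - 0.5) < 1e-9
assert abs(1 / t - 2.0) < 1e-8   # matches h(disk) = 2
```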
\newpage
In this second table we summarize the results of the partially solved Blaschke--Santal\'o diagrams.
\begin{figure}[!h] \label{table3}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
Param. & Condition & Inequality & Extremal sets & Ref. \tabularnewline
\hline
$h, \omega, d$ & $\omega\le \sqrt{3}/2d$ & $h\le \frac{1}{r(\omega, d)}+\sqrt{\frac{2\pi}{\omega d}} $ \textcolor{blue}{$(iii)$} & subequilateral triangles & Prop. \ref{prop_hdw} \tabularnewline
& $\sqrt{3}/2d\le \omega\le d$ & $ h\le \frac{\sqrt{3}}{\sqrt{3}\omega-d}+\sqrt{\frac{2\pi}{\pi \omega^2-\sqrt{3}d^2+6\omega(\tan\left(\arccos(\frac{\omega}{d})\right)-\arccos(\frac{\omega}{d}))}}$ & equilateral triangles & \tabularnewline
& & $h(\Omega)\ge \frac{1}{t_{g_3^\Omega}}$ \textcolor{blue}{$(iv)$}& spherical slices & \tabularnewline
\hline
$h, R, \omega$ & $\omega\le 3/2 R$ & $h\leq \frac{1}{r(\omega, R)}+\sqrt{\frac{\pi}{A(\omega, R)}}$ \textcolor{blue}{$(v)$} & subequilateral triangles & Prop. \ref{prop_hRw} \tabularnewline
& & $h\ge\frac{1}{t_{g_4^\Omega}}$ \textcolor{blue}{$(vi)$} & spherical slices & \tabularnewline
\hline
$h, \omega, P$ & $P\geq 2\sqrt{3}\omega$ & $h\le \frac{1}{r(\omega, P)}+\sqrt{\frac{\pi}{A(\omega, P)}} $ \textcolor{blue}{$(vii)$} & subequilateral triangles & Prop. \ref{prop_hwP} \tabularnewline
& & $ h\ge \frac{2}{\omega}+\frac{2\pi}{2P-\pi \omega}$ & stadiums & \tabularnewline
\hline
$h, A, \omega$ & & $ h\leq \frac{1}{r(\omega, A)}+ \sqrt{\frac{\pi}{A}}$ \textcolor{blue}{$(viii)$} & subequilateral triangles & Prop. \ref{prop_hAw} \tabularnewline
& & $ h\geq \frac{2}{\omega}+\frac{\pi \omega}{2 A }$ & stadiums & \tabularnewline
\hline
$h, d, R$ & & $h\leq \frac{2R(2R+\sqrt{4R^2-d^2})}{d^2\sqrt{4R^2-d^2}}+\sqrt{\frac{4\pi R^2}{d^3\sqrt{4R^2-d^2}}}$ & subequilateral triangles & Prop. \ref{prop_hRd} \tabularnewline
\hline
$h,\omega,r$ & & $h\geq \frac{1}{r}+\frac{1}{r}\sqrt{\pi\left(1-\frac{2r}{\omega}\right)\sqrt{\frac{4r}{\omega}-1}}$ & subequilateral triangles & Prop. \ref{prop_hwr}
\tabularnewline
\hline
\end{tabular}
\label{fig:polygons1}
\end{figure}
\begin{footnotesize}
\textcolor{blue}{$(iii)$} $r(\omega,d)$ is given by
\begin{equation*}
d^2\left(\omega-2r(\omega,d)\right)^2(4r(\omega,d)-\omega)= 4r^4(\omega,d)\;\omega.
\end{equation*}
\newline
\textcolor{blue}{$(iv)$} $t_{g_3^\Omega}$ is the smallest solution to
$${g_3^\Omega}(t):=f(d(\Omega)-2t, \omega(\Omega)-2t)=\pi t^2, $$
where
$$f(d,w)=\frac{w}{2}\sqrt{d^2-w^2}+\frac{d^2}{2}\arcsin\left(\frac{w}{d}\right)$$
\newline
\textcolor{blue}{$(v)$} $r(\omega,R)$ is given by
\begin{equation*}
\left(4 r(\omega,R)-\omega \right)\left( \omega-2 r(\omega,R)\right)= \frac{2 r^3(\omega,R)}{R}
\end{equation*}
and $A(\omega,R)$ is given by
\begin{equation*}
16A(\omega,R)^6= R^2 \omega^2\left(16 A(\omega,R)^4-R^2\omega^6\right).
\end{equation*}
\newline
\textcolor{blue}{$(vi)$} $t_{g_4^\Omega}$ is the smallest solution on $[0, r(\Omega)]$ to
$$g_4^\Omega(t):= \chi(\omega(\Omega)-2t ,R(\Omega)-t)=\pi t^2,$$
where $$\chi(\omega, R):=\frac{\omega}{2} \sqrt{4 R^2-\omega^2} + 2R^2 \arcsin{\frac{\omega}{2R}}.$$
\newline
\textcolor{blue}{$(vii)$} $r(\omega, P)$ is given by
\begin{equation*}
(\omega-2r(\omega, P))^2(4r(\omega, P)-\omega)P^2= 4 r(\omega, P)^2\omega^3
\end{equation*}
and $A(\omega, P)$ is the middle root of the equation
$$128 P A(\omega, P)^3 - 16\omega(5P^2+\omega^2)A(\omega, P)^2 + 16\omega^2 P^3 A(\omega, P)- \omega^3P^4=0$$
\newline
\textcolor{blue}{$(viii)$} $r(\omega, A)$ is given by
\begin{equation*}
(\omega-2r(\omega, A))^2(4r(\omega, A)-\omega)A^2= r^4(\omega,A) \;\omega^3.
\end{equation*}
\end{footnotesize}
\begin{figure}[!h] \label{table5}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
Param. & Condition & Inequality & Extremal sets & Ref. \tabularnewline
\hline
$h, A, R$ & & $ h<\frac{1}{R}+\frac{4R}{A}$ & thinning stadiums & Prop. \ref{prop_hAR} \tabularnewline
& & $ h\ge \frac{1}{2R}+\frac{\pi R}{2A}+\sqrt{\frac{\pi}{A}}$ & balls & \tabularnewline
\hline
$h, R, P$ && $ h<\frac{P}{R(P-4R)}$ & thinning stadiums & Prop. \ref{prop_hRP} \tabularnewline
& & $ h \ge \frac{4\arcsinc{\left(\frac{4R}{P}\right)}}{P-4R\cos{\left(\arcsinc{\left(\frac{4R}{P}\right)}\right)}} + \sqrt{\frac{8\pi \arcsinc{\left(\frac{4R}{P}\right)}}{P\left(P-4R\cos{\left(\arcsinc{\left(\frac{4R}{P}\right)}\right)}\right)}}$& balls& \tabularnewline
\hline
$h, P, d$ & $2d<P<3d$ & $ h < \frac{4}{P-2d} + \sqrt{\frac{4\pi}{(P-2d)\sqrt{P(4d-P)}}}$ & & Prop. \ref{prop_hPd} \tabularnewline
& $3 d\leq P\leq \pi d$ & $ h < \frac{4}{P-2d} + \sqrt{\frac{4\pi}{\sqrt{3} d(P-2d)}}$ & & \tabularnewline
& & $ h \ge \frac{4 \arcsinc{\left(\frac{2d}{P}\right)}}{P-2d\cos{\left(\arcsinc{\left(\frac{2d}{P}\right)}\right)}} + \sqrt{\frac{8\pi\arcsinc{\left(\frac{2d}{P}\right)}}{P\left(P-2d\cos{\left(\arcsinc{\left(\frac{2d}{P}\right)}\right)}\right)}}$& balls & \tabularnewline
\hline
$h, A, d$ && $ h\leq \frac{4}{d}+\frac{2d}{A}$ & & Prop. \ref{prop_hdA} \tabularnewline
& & $ h< \frac{2 d}{A}+\sqrt{\frac{\pi}{A}}$ & & \tabularnewline
& & $h> \frac{d}{A}+\sqrt{\frac{\pi}{A}}$& thinning two-cup & \tabularnewline
\hline
\end{tabular}
\end{figure}
Finally, the last table collects the inequalities we have obtained that are not sharp in the sense of the Blaschke--Santal\'o diagram.
\newpage
\medskip\noindent
{\bf Acknowledgements}:
The authors would like to thank Jimmy Lamboley for useful discussions.
The first author is supported by the Alexander von Humboldt-Professorship program and by the project ANR-18-CE40-0013 SHAPO financed by the French Agence Nationale de la Recherche (ANR).
The third author is supported by an Alexander von Humboldt Research Fellowship.
\bibliographystyle{abbrv}
% ==================================================================
% arXiv:1801.07238 -- "On a Helly-type question for central symmetry"
% Abstract: We study a certain Helly-type question by Konrad Swanepoel.
% Assume that $X$ is a set of points such that every $k$-subset of $X$ is in
% centrally symmetric convex position; is it true that $X$ must also be in
% centrally symmetric convex position? It is easy to see that this is false
% if $k\le 5$, but it may be true for sufficiently large $k$. We investigate
% this question and give some partial results.
% ==================================================================
\section{Introduction}
The classical Carathéodory theorem in dimension $2$ can be stated in the following equivalent way:
Let $X$ be a set of points in the plane; if any $4$ points from $X$ are in convex position, then $X$ is in convex position. In 2010, Konrad Swanepoel \cite{KS} asked the following Helly-type question, inspired by this formulation of Carathéodory's theorem.
For brevity, we say that a set of points is in \emph{c.s.c. position} (short for centrally symmetric convex position) if it is contained in the boundary of a centrally symmetric convex body.
\begin{question}
Does there exist a number $k$ such that for any planar set $X$ the following holds: if any $k$ points from $X$ are in c.s.c. position, then the whole set $X$ is in c.s.c. position?
\end{question}
It is clear from Carathéodory's theorem that $X$ should be in convex position. One can also see that $k\geq 6$, since any $5$ points in convex position are in c.s.c. position. This follows from the fact that any $5$ points lie on a conic. Since the points must be in convex position, they lie on an ellipse, a parabola, a branch of a hyperbola, or a union of two lines, and in each of these cases there is a centrally symmetric convex body containing the points on its boundary.
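The existence of a conic through any $5$ points is also easy to verify computationally: the conic $ax^2+bxy+cy^2+dx+ey+f=0$ has six coefficients defined up to scale, so five point conditions always admit a nontrivial solution. A small Python sketch (normalising $f=1$, which assumes the conic does not pass through the origin; helper names are ours):

```python
import math

def conic_through(points):
    # Solve a*x^2 + b*x*y + c*y^2 + d*x + e*y + 1 = 0 through 5 points
    # by Gaussian elimination with partial pivoting (f normalised to 1).
    n = 5
    M = [[x*x, x*y, y*y, x, y, -1.0] for x, y in points]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    sol = [0.0] * n
    for r in range(n - 1, -1, -1):
        sol[r] = (M[r][n] - sum(M[r][c] * sol[c] for c in range(r + 1, n))) / M[r][r]
    return sol  # coefficients [a, b, c, d, e]

# Five points on the circle (x-2)^2 + (y-2)^2 = 1, which avoids the origin.
pts = [(2 + math.cos(s), 2 + math.sin(s)) for s in (0.1, 0.9, 2.0, 3.5, 5.0)]
a, b, c, d, e = conic_through(pts)
# The recovered conic vanishes at a sixth point of the same circle.
x, y = 2 + math.cos(4.2), 2 + math.sin(4.2)
assert abs(a*x*x + b*x*y + c*y*y + d*x + e*y + 1) < 1e-8
```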
It is not clear that such a $k$ exists, although we suspect that it does. In this short note, we prove the following two results in Sections \ref{sec:9} and \ref{sec:curve}.
\begin{thm}\label{thm:9}
There is a set $X$ consisting of $9$ points that is not in c.s.c. position such that any $8$ of its points are in c.s.c. position. This implies that, if $k$ exists, then $k\ge 9$.
\end{thm}
\begin{thm}\label{thm:curve}
Let $\Gamma$ be a closed curve such that any $6$ points of $\Gamma$ are in c.s.c. position, then $\Gamma$ bounds a centrally symmetric convex region.
\end{thm}
Before proving these theorems, we describe a way to decide whether a finite set $X$ is in c.s.c. position or not. For more information on Carathéodory's theorem and Helly-type theorems we recommend \cite{Eck} and \cite{Mat}.
\section{Centrally symmetric convex position}
We start with a useful definition.
\begin{defin}
Let $X$ be a point set and $O$ be a point. The set $X_O$ denotes the reflection of $X$ with respect to $O$, i.e., $X_O=2O-X$.
If $X\cup X_O$ is in convex position then we say that $O$ is an \emph{admissible center for $X$}, the set of all admissible centers is denoted by $\mathcal{M}_X$.
\end{defin}
Swanepoel's question can be reformulated in terms of admissible centers, since $X$ is in c.s.c. position if and only if $\mathcal{M}_X$ is non-empty.
The main goal of this section is to give a simple way of constructing $\mathcal{M}_X$.
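For finite sets, admissibility of a given center $O$ can be tested directly from the definition: reflect $X$ through $O$ and check that $X\cup X_O$ is in convex position. A small Python sketch (function names are ours; "convex position" is tested strictly, i.e. every point must be a vertex of the convex hull):

```python
def cross(o, a, b):
    # 2D cross product of vectors o->a and o->b (positive for a left turn)
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def in_strictly_convex_position(pts):
    # True iff every point is a vertex of the convex hull (no duplicates,
    # no point inside or on an edge). Andrew's monotone chain, strict turns.
    pts = sorted(set(pts))
    if len(pts) < 3:
        return False
    def half(points):
        chain = []
        for p in points:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
        return chain
    hull = half(pts)[:-1] + half(pts[::-1])[:-1]
    return len(hull) == len(pts)

def is_admissible(O, X):
    # O is an admissible center for X iff X together with its reflection
    # 2O - X is in convex position.
    reflected = [(2*O[0]-x, 2*O[1]-y) for x, y in X]
    return in_strictly_convex_position(list(X) + reflected)

T = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
assert is_admissible((1/3, 1/3), T)      # the centroid lies in the center-part
assert not is_admissible(T[0], T)        # a vertex is never an admissible center
```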
We start with the simplest possible case. The description of the set of admissible centers for a finite set $X$ can be obtained from the following simple lemmas.
\begin{lem}
Let $\triangle=\{a,b,c\}$ be three non-collinear points. The three lines passing through the midpoints of the sides of $\conv(\triangle)$ divide the plane into $7$ regions. The set $\mathcal{M}_\triangle$, shown in Figure \ref{fig:triang}, is the union of the closed components of this division that do not intersect $\triangle$.
\end{lem}
The set $\mathcal{M}_\triangle$ is naturally represented as the union of $4$ convex subsets. We call these subsets the \emph{center-part}, \emph{$a$-part}, \emph{$b$-part} and \emph{$c$-part} as in Figure \ref{fig:triang}.
\begin{figure}[ht]
\includegraphics{triang.pdf}
\caption{Set of admissible centers for a triangle.}
\label{fig:triang}
\end{figure}
\begin{lem}\label{lem:intersection}
For a given set $X$ in convex position we have that
$$\mathcal{M}_X=\bigcap \left\{\mathcal{M}_Y:Y\subset X,\#(Y)=3\right\}.$$
\end{lem}
These last two lemmas provide us with a way to construct the set of admissible centers of a set with $n$ points in convex position as the intersection of $\binom n3$ sets, each of which is the union of four convex sets. We see below how we can achieve the same thing using fewer sets.
\begin{defin}
Assume $X$ is a finite set of points in convex position such that $X$ is not contained in a line. Let $ab$ be a side of $\conv(X)$ and let $c\in X$ be a farthest point from the line $ab$. We call the triangle $abc$ a \emph{tallest triangle of $X$ with respect to side $ab$}.
\end{defin}
The tallest triangle has appeared before, at least as a source of interesting questions for mathematical Olympiads (see e.g. \cite{TT} or \cite{TT2}).
\begin{thm}\label{thm:tallest}
If $X$ is a finite set of points in convex position, then the set of admissible centers for $X$ is the intersection of the sets of admissible centers of the tallest triangles of $X$, i.e.,
$$\mathcal{M}_X=\bigcap \left\{ \mathcal{M}_{\{a,b,c\}}: abc \text{ is a tallest triangle of } X \right\}.$$
\end{thm}
\begin{proof}
The set $\mathcal{M}_X$ is included in the intersection on the right-hand side of the formula, so we only need to prove that any point from the intersection is in $\mathcal{M}_X$.
Let $O$ be any point from the intersection and let $a$ be any point from $X$. We will show that it is possible to find a supporting line of $\conv(X\cup X_O)$ at $a$.
Let $b$ be one of the neighbors of $a$ on the boundary of $\conv(X)$, say in the counter-clockwise direction. Let $abc$ be a tallest triangle of $X$ with respect to $ab$. If $O$ lies in the $a$-part or the $b$-part of $\mathcal{M}_{\{a,b,c\}}$, then $ab$ is a supporting line for $\conv(X\cup X_O)$ and we are done. Therefore we may assume that $O$ lies in the $c$-part or in the center-part of $\mathcal{M}_{\{a,b,c\}}$.
Similarly, if $d$ is the other neighbor of $a$ on the boundary of $X$ (in the clockwise direction) and $ade$ is a tallest triangle of $X$ with respect to $ad$, then either we are done as before or $O$ lies in the central part or in the $e$-part of $\mathcal{M}_{\{a,d,e\}}$.
There are two possibilities for the positions of $c$ and $e$: either they coincide, or $e$ lies in the clockwise direction from $c$. In the case $c=e$, the only admissible point from $\mathcal{M}_{\{a,b,c\}}$ is the midpoint of $ac$, which also belongs to the $b$-part of $\mathcal{M}_{\{a,b,c\}}$. So, as we have shown before, the line $ab$ is a supporting line of $\conv(X\cup X_O)$.
\begin{figure}[ht]
\includegraphics{tallest.pdf}
\caption{A supporting line of $\conv(X\cup X_O)$ at $a$.}
\label{fig:tall}
\end{figure}
In the latter case, as shown in Figure \ref{fig:tall}, the point $O$ lies in $abc\cap ade$, and $c$ and $e$ are connected by a sequence of sides of $X$. Then there is a side $pq$ of $X$ in the angle $\angle cae$ such that $O$ is inside the triangle $apq$. It is not difficult to see that $apq$ is a tallest triangle of $X$ (with respect to $pq$). Since $O$ is inside $apq$ and belongs to $\mathcal{M}_{\{a,p,q\}}$, it lies in the central part of this set of admissible centers. It follows that the line through $a$ parallel to $pq$ is a supporting line of $\conv(X\cup X_O)$.
\end{proof}
\section{Example showing \texorpdfstring{$k\ge 9$}{k >= 9}}\label{sec:9}
In this section we prove Theorem \ref{thm:9} by giving an explicit example of a set $X$ with $9$ points such that $\mathcal{M}_X=\emptyset$, but $\mathcal{M}_Y\neq\emptyset$ for every $Y\subset X$ with $8$ points.
\begin{proof}[Proof of Theorem \ref{thm:9}]
Start with a regular $9$-gon with center $O$ and label its vertices as $a_1$, $b_1$, $c_1$, $a_2$, $b_2$, $c_2$, $a_3$, $b_3$, $c_3$ in counter-clockwise order. Now, take the triangle $a_1a_2a_3$ and, with center $O$, scale it down by a factor of $0.93$. Then we are left with an almost regular $9$-gon such as the one shown in Figure \ref{fig:9gon}. This will be the set $X$.
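The construction above is easy to reproduce numerically. Here is a minimal Python sketch generating the nine points (the coordinates assume $O=(0,0)$ and $b_1=(1,0)$, as chosen later in the proof; the function name is ours):

```python
import math

def nine_gon(scale=0.93):
    """The 9-point set X: a regular 9-gon centered at O = (0, 0) with
    b1 = (1, 0), vertices labelled a1, b1, c1, a2, b2, c2, a3, b3, c3
    counter-clockwise, with the triangle a1 a2 a3 scaled towards O."""
    labels = ['a1', 'b1', 'c1', 'a2', 'b2', 'c2', 'a3', 'b3', 'c3']
    X = {}
    for k, lab in enumerate(labels):
        # b1 sits at angle 0; each successive label is 40 degrees further
        theta = math.radians(40 * (k - 1))
        r = scale if lab.startswith('a') else 1.0
        X[lab] = (r * math.cos(theta), r * math.sin(theta))
    return X
```

With these coordinates one can verify directly that the points $(0.04,0)$ and $(0.02,0)$ mentioned below are admissible centers for the two kinds of $8$-point subsets.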
\begin{figure}[ht]
\includegraphics{9gon.pdf}
\caption{The $9$-gon for Theorem \ref{thm:9}.}
\label{fig:9gon}
\end{figure}
A subset $Y$ of $X$ with $8$ points can be of two types, depending on whether or not it is missing a point $a_i$ from $X$. For each of these, a point of $\mathcal{M}_Y$ close to $O$ will serve as an admissible center. If we choose coordinates so that $O=(0,0)$ and $b_1=(1,0)$, then points in $\mathcal{M}_Y$ corresponding to $Y=X\setminus\{a_1\}$ and $Y=X\setminus\{b_2\}$ are $(0.04,0)$ and $(0.02,0)$, respectively (see Figure \ref{fig:9gon2}).
\begin{figure}[ht]
\includegraphics{9gon2.pdf}
\caption{The original and reflected $8$-gons with their respective centers.}
\label{fig:9gon2}
\end{figure}
All that is left is to show that $\mathcal{M}_X=\emptyset$. By Lemma \ref{lem:intersection}, we only need to consider the triangles determined by $X$. Consider first the triangle $a_1b_2c_2$: it is not hard to see that $\mathcal{M}_X$ must be a subset of the central part of $\mathcal{M}_{\{a_1,b_2,c_2\}}$. By the threefold symmetry of $X$, the same is true for the triangles $a_2b_3c_3$ and $a_3b_1c_1$. However, the central parts of these sets are triangles that do not intersect, so $\mathcal{M}_X$ must be empty.
\end{proof}
\section{The case of convex curves}\label{sec:curve}
In this section we show that for a convex curve $\Gamma$ the answer to Swanepoel's question is the least possible, i.e., $k=6$. For the remaining part of the section we assume that every $6$ points of $\Gamma$ are in c.s.c. position.
The proof of Theorem \ref{thm:curve} is based on the following simple fact, which can be proved easily using Lemma \ref{lem:intersection}.
\begin{lem}\label{lem:parallelogram}
The set of admissible centers for the vertex-set of a parallelogram $P$ is the union of the two lines passing through the center of $P$, each parallel to a side of $P$.
\end{lem}
First we establish a few facts about $\Gamma$. Since $\Gamma$ is convex, every point $x$ of $\Gamma$ has well-defined one-sided tangents, which are the best linear approximations of $\Gamma$ at $x$ in the clockwise and counter-clockwise directions. If these lines coincide, then $\Gamma$ has a tangent at $x$ and we call $x$ a {\it smooth point} of $\Gamma$. Due to the convexity of $\Gamma$, it contains at most countably many non-smooth points.
\begin{lem}\label{lem:length}
If $\ell$ and $\ell'$ are two parallel supporting lines of $\Gamma$, then the lengths of the segments $\ell\cap\Gamma$ and $\ell'\cap\Gamma$ are equal.
\end{lem}
\begin{rem}
Here a single point counts as a segment of length zero.
\end{rem}
\begin{proof}
Suppose that the length of $\ell\cap\Gamma$ is strictly greater than the length of $\ell'\cap\Gamma$. We choose six points $a,b,c,d,e,f\in\Gamma$ in counter-clockwise order such that $a$ and $c$ are the endpoints of $\ell\cap\Gamma$, $b$ is the midpoint of $\ell\cap\Gamma$, $e$ is a point on $\ell'\cap\Gamma$, and $df$ is a segment parallel to $\ell$ whose length is strictly between the lengths of $\ell\cap\Gamma$ and $\ell'\cap\Gamma$; see Figure \ref{fig:parsupport}.
\begin{figure}[ht]
\includegraphics{parasup.pdf}
\caption{Parallel supporting lines cannot intersect $\Gamma$ at segments of different lengths.}
\label{fig:parsupport}
\end{figure}
It is easy to see that these $6$ points are not in c.s.c. position, which is a contradiction. Therefore the intersections $\ell\cap\Gamma$ and $\ell'\cap\Gamma$ have equal length.
\end{proof}
\begin{lem}\label{lem:inscribed-para}
Let $a$ be a smooth point of $\Gamma$ with tangent $\ell$, and let $b,c,d\in\Gamma$ be points such that $abcd$ is a parallelogram with sides not parallel to $\ell$. Then the line through $c$ parallel to $\ell$ supports $\Gamma$.
\end{lem}
\begin{proof}
Let $\ell'$ be the line parallel to $\ell$ through $c$, and suppose that $\ell'$ does not support $\Gamma$. We may assume that the points $a,b,c,d$ determine a counter-clockwise orientation of $\Gamma$ and that $\ell'$ intersects the arc $bc$ of $\Gamma$. Let $\ell_+$ be the tangent of the arc $bc$ of $\Gamma$ at $c$, and let $\ell_+'$ be the line parallel to $\ell_+$ through $a$ (see Figure \ref{fig:tangent}).
\begin{figure}[ht]
\includegraphics{smooth.pdf}
\caption{A curve with an inscribed parallelogram at a smooth point $a$.}
\label{fig:tangent}
\end{figure}
We can choose a point $x$ on the arc $ab$ that is closer to $\ell$ than to $\ell_+'$: indeed, the line $\ell$ is the best linear approximation of $\Gamma$ at $a$, so every point of $\Gamma$ in a small neighborhood of $a$ is closer to $\ell$ than to $\ell_+'$. Similarly, we can find a point $y$ on the arc $bc$ closer to $\ell_+$ than to $\ell'$.
For the set $X=\{a,b,c,d,x,y\}$, the set of admissible centers can contain at most the center $O$ of the parallelogram $abcd$. Indeed, by Lemma \ref{lem:parallelogram}, $\mathcal{M}_X$ is contained in the union of the two lines through $O$ parallel to $ab$ and $ad$. The point $x$ rules out every point of the line parallel to $ab$ except $O$ as an admissible center, and $y$ rules out every point of the line parallel to $ad$ except $O$. If $y'$ is the reflection of $y$ with respect to $O$, then $a$ lies strictly inside the triangle $cxy'$, as shown in Figure \ref{fig:points}. Thus we have found $6$ points of $\Gamma$ that are not in c.s.c. position, which is a contradiction.
\end{proof}
\begin{figure}[ht]
\includegraphics{smooth2.pdf}
\caption{Six points in $\Gamma$ that are not in c.s.c position.}
\label{fig:points}
\end{figure}
Now we proceed to the proof of the main result of this section.
\begin{proof}[Proof of Theorem \ref{thm:curve}]
The convexity of $\Gamma$ is trivial. Two cases are possible: either each supporting line of $\Gamma$ intersects $\Gamma$ at exactly one point (Case 1), or some supporting line of $\Gamma$ intersects $\Gamma$ in a segment of non-zero length (Case 2).
\noindent{\bf Case 1.} Let $a$ be any smooth point of $\Gamma$, and let $\ell$ be the tangent of $\Gamma$ at $a$. Let $a'$ be the other point of $\Gamma$ with supporting line parallel to $\ell$.
Let $b$ be any point of $\Gamma$ other than $a$ and $a'$. The segment $ab$ is not an affine diameter of $\Gamma$, so there are points $c,d\in\Gamma$ such that $abcd$ is a parallelogram. From Lemma \ref{lem:inscribed-para} we get that the line through $c$ parallel to $\ell$ supports $\Gamma$, and therefore $c=a'$. Thus the central symmetry with center at the midpoint of $aa'$ takes $b$ to another point of $\Gamma$ (the point $d$), and $\Gamma$ is centrally symmetric.
\noindent{\bf Case 2.} Let $ab$ be the intersection of $\Gamma$ with a supporting line $\ell$. From Lemma \ref{lem:length} we know that the other supporting line $\ell'$ of $\Gamma$ parallel to $\ell$ intersects $\Gamma$ in a segment $cd$ equal in length to $ab$. We may assume that the points $a,b,c,d$ are in counter-clockwise orientation, see Figure \ref{fig:segments}.
\begin{figure}[ht]
\includegraphics{twopara.pdf}
\caption{A curve with two equal parallel segments on the boundary.}
\label{fig:segments}
\end{figure}
Let $p$ be a point in the interior of the arc $da$ of $\Gamma$ and let $x$ be a point in the interior of the segment $ab$. Since $px$ is not an affine diameter of $\Gamma$, we can find two more points $q,y\in \Gamma$ (both depending on $x$ and $p$) such that $pxqy$ is a parallelogram. Since $x$ lies in the interior of the segment $ab\subset\Gamma$, it is a smooth point of $\Gamma$ with tangent $\ell$; applying Lemma \ref{lem:inscribed-para} to the parallelogram $pxqy$ with $x$ as the smooth vertex, we conclude that the line parallel to $\ell$ through $y$ supports $\Gamma$, and hence $y$ belongs to the segment $cd$. Also, $q$ must be contained in the arc $bc$ of $\Gamma$.
The center of the parallelogram $pxqy$ is equidistant from the lines $\ell$ and $\ell'$, therefore the distance from $q$ to $\ell$ is equal to the distance from $p$ to $\ell'$ and does not depend on $x$. This means that $q$ only depends on $p$ and not on $x$ and the same is true for the center $O$ of the parallelogram $pxqy$.
If for a fixed $p$ we vary $x$ in the open segment $ab$, then $y$ varies in the interior of $cd$ which has the same length as $ab$. This means that $cd$ is symmetric to $ab$ with respect to $O$ and $O$ is also the center of the parallelogram $abcd$. Thus $O$ does not depend on $p$.
Summarizing, we have shown that for every point $p$ on the arc $da$ of $\Gamma$ we can find another point $q$ of $\Gamma$ symmetric to $p$ with respect to the center of the parallelogram $abcd$. Therefore $O$ is the center of symmetry of $\Gamma$.
\end{proof}
\section{Acknowledgments}
The authors are thankful to Konrad Swanepoel for the interesting questions. We are also thankful to Imre Bárány and Jesús Jerónimo for many fruitful discussions while this work was in progress.
%% Metadata: arXiv:1801.07238, "On a Helly-type question for central symmetry",
%% Metric Geometry (math.MG).
%% Abstract: We study a certain Helly-type question by Konrad Swanepoel. Assume that
%% $X$ is a set of points such that every $k$-subset of $X$ is in centrally symmetric
%% convex position; is it true that $X$ must also be in centrally symmetric convex
%% position? It is easy to see that this is false if $k\le 5$, but it may be true for
%% sufficiently large $k$. We investigate this question and give some partial results.
%% Metadata: arXiv:1109.4930, "Multiset metrics on bounded spaces".
%% Abstract: We discuss five simple functions on finite multisets of metric spaces.
%% The first four are all metrics iff the underlying space is bounded and are complete
%% metrics iff it is also complete. Two of them, and the fifth function, all generalise
%% the usual Hausdorff metric on subsets. Some possible applications are also considered.
\section{Introduction}
Metrics on subsets and multisets (subsets-with-repetition-allowed)
of metric spaces have or could have numerous fields of application
such as credit rating, pattern or image recognition and synthetic
biology. We employ three related models (called $E,F$ and $G$) for
the space of multisets on the metric space $(X,d)$. On each of $E,F$
we define two closely-related functions. These four functions all
turn out to be metrics precisely when $d$ is bounded, and are complete
metrics iff $d$ is also complete. Another function studied in model $G$
has the same properties for (at least) uniformly discrete $d$. $X$
is likely to be finite in many applications anyway.
We show that there is an integer programming algorithm for those in
model $E$. The three in models $F$ and $G$ generalise the Hausdorff
metric. Beyond finiteness, no assumptions about multiset sizes are
required.
Various types of metric on multisets have been described{[}\prettyref{sub:Other-metrics}{]},
but the few that incorporate an underlying metric refer only to $\mathbb{R}$
or $\mathbb{C}$. The simple and more general nature of those described
here suggests that there may be other interesting possibilities.
In this section, after setting out notation and required background,
we mention briefly the existing work in this field. The following
three sections are each dedicated to one of $E$, $F$ and $G$.
\subsection{Notation: metric spaces}
$R$ is the non-negative reals, $\mathbb{N}$ includes $0$, and $(X,d)$
is a metric space of more than one element. $d$ is \emph{uniformly
discrete} if $\exists a>0$ such that $d(x,z)\geq a$ whenever $x\neq z$,
and two metrics on $X$ are \emph{equivalent} if they induce the same
topology.
$d$ is \emph{complete} iff every Cauchy sequence converges (to a
point of $X$), and $d$ is \emph{compact} iff every sequence, Cauchy
or not, has a subsequence that converges to a point of $X$.
The well-known \emph{Hausdorff metric} $d_{H}$ on the space $H$
of all non-empty compact subsets of $X$ is defined for $A,B\in H$
by
\[
d_{H}(A,B)=\max(\max_{x\in A}\min_{y\in B}d(x,y),\max_{y\in B}\min_{x\in A}d(x,y))
\]
in which compactness guarantees that all these extrema are attained.
We use later the simple fact that if $d$ does not satisfy the triangle
inequality, neither does $d_{H}$. It is a standard fact\cite[pp.71-72]{Edg90}
that $d_{H}$ is complete if $d$ is. The converse is also true%
\footnote{Let $x_{i}$ be a non-convergent Cauchy sequence in $X$ so that $S_{i}=\{x_{i}\}$
is Cauchy in $H$ with putative limit $S\in H$, so $S$ is non-empty.
If $S=\{x\}$ then $d(x_{i},x)\rightarrow0$. Thus $S$ contains distinct
$a,b\in X$. But then $d_{H}(S_{i},S)\geq\max(d(x_{i},a),d(x_{i},b))\geq\frac{d(x_{i},a)+d(x_{i},b)}{2}\geq\frac{d(a,b)}{2}$.%
}. A convenient heuristic (for finite $A,B$) is to label the rows
(the columns) of a matrix by the elements of $A$ (of $B$), with
the corresponding $d$-distances as entries. Then $d_{H}(A,B)$ is
the largest of all row and column minima.
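This heuristic translates directly into code. A small Python sketch for finite $A$ and $B$ (the function name is ours; the metric $d$ is passed in as a callable):

```python
def hausdorff(A, B, d):
    """Hausdorff distance between finite non-empty A and B: build the
    matrix (d(x, y)) with rows labelled by A and columns by B, then take
    the largest of all row and column minima."""
    row_minima = [min(d(x, y) for y in B) for x in A]
    col_minima = [min(d(x, y) for x in A) for y in B]
    return max(max(row_minima), max(col_minima))
```

For example, with $d(x,y)=|x-y|$ on the reals, the sets $\{0,1\}$ and $\{0,3\}$ are at Hausdorff distance $2$.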
\label{quotient}Given an equivalence relation $\sim$ on $X$, and
$\alpha,\beta\in X/\!\sim$ write
\[
D(\alpha,\beta)=\inf d(a,p_{1})+d(q_{1},p_{2})+d(q_{2},p_{3})+\ldots+d(q_{n-1},p_{n})+d(q_{n},b)
\]
where $n\in\mathbb{N}$, $a\in\alpha$, $b\in\beta$ and $p_{i}\sim q_{i}$
for each $i$. In general $D$ is a \emph{pseudometric} on $X/\!\sim$,
that is $D(\alpha,\beta)=0\nRightarrow\alpha=\beta$, though $D$
does satisfy the other metric axioms. Clearly $D(\alpha,\beta)\leq\inf_{a\in\alpha,b\in\beta}d(a,b)$.
To simplify notation, we adopt the conventions that $a=q_{0},b=p_{n+1}$
and $p_{i}\nsim p_{i+1}$ for any $i$.
\subsection{Notation: multisets}
A recent survey article on multisets and their applications is \cite{SIYS07}.
The notation and terminology in this article mostly follow \cite{DD09}
and \cite{Pet97}. A convenient definition of multiset also introduces
the model $E${[}\prettyref{sec:Model E}{]}.
A \emph{multiset} of a set $S$ is a function $e:S\rightarrow\mathbb{N}$
taking each $s\in S$ to its \emph{multiplicity} $e(s)$. The \emph{root
set} $R(e)$ of $e$ is $\{s\in S:e(s)>0\}$, always assumed finite.
The \emph{cardinality}%
\footnote{Called the \emph{counting measure} in \cite{DD09}.%
} of $e$ is $C(e)=\sum_{s\in S}e(s)$. So $E$ is the set of functions
of finite support from $S$ to $\mathbb{N}$.
We denote by $e_{s}$, for $s\in S$, the multiset consisting of a
single copy of $s$ and define $e_{0}$ by $R(e_{0})=\phi$. Naturally
any multiset has a unique form $\sum_{s\in S}e(s)e_{s}$; we can add
or subtract them if all the arithmetic is within $\mathbb{N}$.
$E$ forms a lattice under the operations $\cap$ and $\cup$ defined
for $e,f\in E$ by $e\cap f(s)=\min(e(s),f(s))$ and $e\cup f(s)=\max(e(s),f(s))$.
The \emph{multiset difference} $e_{f}$ is $e-e\cap f$, and $e$
and $f$ are \emph{disjoint} if $e\cap f=e_{0}$. For instance $e_{f}$
and $f_{e}$ are disjoint. The \emph{symmetric difference} of $e$
and $f$ is $e\triangle f=e_{f}+f_{e}=e\cup f-e\cap f$. $e$ is a
\emph{submultiset} of $f$, written $e\subseteq f$, if $e(s)\leq f(s)\forall s$
and of course this is equivalent to $e\cap f=e$ or $e_{f}=e_{0}$.
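These lattice and difference operations are conveniently modelled by Python's `collections.Counter`, whose subtraction already drops non-positive multiplicities. A sketch (the function names are ours):

```python
from collections import Counter

def meet(e, f):
    """e ∩ f : pointwise minimum of multiplicities."""
    return Counter({s: min(e[s], f[s])
                    for s in set(e) | set(f) if min(e[s], f[s]) > 0})

def join(e, f):
    """e ∪ f : pointwise maximum of multiplicities."""
    return Counter({s: max(e[s], f[s]) for s in set(e) | set(f)})

def msdiff(e, f):
    """Multiset difference e_f = e - (e ∩ f)."""
    return e - meet(e, f)  # Counter subtraction drops non-positive counts

def symdiff(e, f):
    """Symmetric difference e △ f = e_f + f_e."""
    return msdiff(e, f) + msdiff(f, e)

def card(e):
    """Cardinality C(e)."""
    return sum(e.values())
```

One can check the identity $e\triangle f=e\cup f-e\cap f$ on examples.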
A \emph{function} $h$ from $e$ to $f$ is simply a function $h$
from $R(e)$ to $R(f)$, to guarantee that identical elements of $e$
are not mapped to distinct elements of $f$. We say that $h$ is an
\emph{injection} (resp. \emph{surjection}, \emph{bijection}), according
as (i) its restriction to the root sets has this property in the ordinary
sense, and (ii) for every $s\in R(e)$, $e(s)\leq f(h(s))$\emph{
(resp.} $e(s)\geq f(h(s))$, both of the preceding).
\subsection{Other metrics on multisets\label{sub:Other-metrics}}
We give a short account of the multiset metrics listed at \cite[pp.51-52]{DD09},
described elsewhere in that book, and regrouped here according to
the main idea.
\begin{itemize}
\item The \emph{matching distance}\cite[p.47]{DD09} is defined by $\inf_{g}\max_{x\in e}d(x,g(x))$
where $g$ runs over all (multiset) bijections from $e$ to $f$.
These are used in size theory (image recognition), where a geometric
trick is used to ensure that bijections are always defined. A survey
article is \cite{dFL06}.
\item The \emph{metric space of roots}\cite[p.221]{DD09} is defined on
multisets of $\mathbb{C}$ of fixed cardinality $n$, each identified
with the monic polynomial of which it is the set of roots. Two such
$u_{1},\ldots,u_{n}$ and $v_{1},\ldots,v_{n}$ are separated by $\min_{\rho}\max_{1\leq j\leq n}|u_{j}-v_{\rho(j)}|$
as $\rho$ ranges over the permutations of $1,\ldots,n$. More details
are in \cite{CM06}.
\item Petrovsky has defined several metrics\cite[p.52]{DD09} on $E$ using
a measure $\mu:E\rightarrow\mathbb{R}$, $\mu(e)=\sum_{s\in S}\lambda(s)e(s)$
where $\lambda:S\rightarrow\mathbb{R}^{+}$. Thus $\mu=C$ when $\lambda=1$.
One of them is $d(e,f)=\mu(e\triangle f)=\mu(e_{f})+\mu(f_{e})$ and
the others are variants\cite{Pet97,Pet03}. They are related to the
Jaccard and Hamming metrics on sets\cite[p.299, p.45]{DD09}, and
seem to be primarily used in cluster analysis (decision making).
\item The $\mu$\emph{-metric}\cite[p.281]{DD09} on so-called phylogenetic
$X$-trees (computational biology), again is based on symmetric difference.
See \cite{10.1109/TCBB.2007.70270} for more details.
\item The \emph{bag distance}\cite[p.204]{DD09}, used in string
matching, is defined to be $\max(C(e_{f}),C(f_{e}))$.
\item In approximate string matching (for instance in bioinformatics), so-called
\emph{$q$-gram similarity}\cite[p.206]{DD09} is defined.
This is not a metric.
\end{itemize}
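Two of the symmetric-difference style metrics above are immediate to compute with `Counter`-based multiset difference. A Python sketch (the function names are ours; $\lambda\equiv1$ recovers $\mu=C$, the counting measure):

```python
from collections import Counter

def bag_distance(e, f):
    """The bag distance max(C(e_f), C(f_e)) used in string matching."""
    ef = e - f  # Counter subtraction is exactly multiset difference e_f
    fe = f - e
    return max(sum(ef.values()), sum(fe.values()))

def petrovsky(e, f, lam=lambda s: 1.0):
    """Petrovsky's metric mu(e △ f) with mu(e) = sum of lam(s) * e(s)."""
    sym = (e - f) + (f - e)
    return sum(lam(s) * m for s, m in sym.items())
```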
Note that there are two dominant ideas: minimising over multiset bijections,
and symmetric differences. The latter do not reflect any structure
on $S$ except perhaps if we argue that multiplicity may depend on
that structure. To some extent, the metrics described later mix these
two paradigms.
There are a number of other standard possibilities, such as the metric
induced on $E$ by any injection into a metric space, or those given
by taking the sum (or the supremum) of the $|e(s)-f(s)|$ where $e,f\in E$
and $s\in S$. Any metric on $\mathbb{Z^{+}}$ (multisets on the prime
numbers) is also an example.
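The last example can be made concrete: the fundamental theorem of arithmetic identifies each integer $n\geq1$ with the multiset of its prime factors, so metrics transfer between $\mathbb{Z}^{+}$ and multisets of primes. A small Python sketch of the identification (the function name is ours):

```python
from collections import Counter

def prime_multiset(n):
    """The multiset of prime factors of n >= 1, e.g. 12 -> {2: 2, 3: 1};
    trial division is enough for illustration."""
    factors = Counter()
    p = 2
    while p * p <= n:
        while n % p == 0:
            factors[p] += 1
            n //= p
        p += 1
    if n > 1:  # whatever remains is prime
        factors[n] += 1
    return factors
```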
\section{The multiset model $E$\label{sec:Model E}}
If $a,c\in E$ and $C(a)\leq C(c)$, we find a submultiset $c'$ of
$c$ of cardinality $C(a)$ so that, matching elements of $a$ and
$c'$ as described below, the sum of the $d$-distances is minimised,
and then we add a constant. The result, denoted $d_{E}$, though resembling
the matching distance just described, actually generalises the bag
distance. The other function, $d_{Em}$ ($m$ for 'mean') is obtained
by dividing $d_{E}$ by $C(c)$.
We choose $M>0$, and define $\theta=\frac{\sup d}{M}$ when $d$
is bounded. Given $a,c\in E$, suppose that $C(a)\leq C(c)$ and $c\neq e_{0}$.
Write down all the elements of both in arbitrary order, viz.\
$a_{1},a_{2},\ldots,a_{C(a)}$ and $c_{1},c_{2},\ldots,c_{C(c)}$,
where for each $x\in X$, $\#\{j:a_{j}=x\}=a(x)$ and $\#\{j:c_{j}=x\}=c(x)$.
(In other terminology, we \emph{parametrise} the multisets by enough
positive integers.)
Let $\gamma$ be a member of the permutation group $G_{c}$ on $C(c)$
elements, acting on the subscripts in the $c$-sequence. Write
\[
d^{\gamma}(a,c)=\sum_{j=1}^{C(a)}d(a_{j},c_{\gamma(j)})+M|C(c)-C(a)|
\]
and define the following functions $d_{E}$ and $d_{Em}$ from $E\times E$
to $R$.
\[
d_{E}(a,c)=\min_{\gamma\in G_{c}}d^{\gamma}(a,c)\qquad\textrm{and}\qquad d_{Em}(a,c)=\frac{d_{E}(a,c)}{\max(C(a),C(c))}
\]
with $d_{Em}(e_{0},e_{0})=0$. We call $M|C(c)-C(a)|$ the \emph{notional
part} of $d_{E}(a,c)$. The mappings $\gamma$, regarded as maps from $a$
to $c$, need not be multiset functions.
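Since the definition minimises over permutations, $d_{E}$ and $d_{Em}$ can be computed by brute force for small multisets. A Python sketch (exponential in $C(c)$, so illustrative only; multisets are passed as lists of points, and $d$, $M$ as parameters; the function names are ours):

```python
from itertools import permutations

def d_E(a, c, d, M):
    """Brute-force d_E(a, c) for multisets given as lists of points of X,
    a metric d and a constant M: minimise the matching cost over all
    permutations of the larger multiset, plus the notional part."""
    if len(a) > len(c):
        a, c = c, a
    if not c:
        return 0.0
    notional = M * (len(c) - len(a))
    best = min(sum(d(a[j], perm[j]) for j in range(len(a)))
               for perm in permutations(c))
    return best + notional

def d_Em(a, c, d, M):
    """d_Em(a, c) = d_E(a, c) / max(C(a), C(c)), with d_Em(e0, e0) = 0."""
    if not a and not c:
        return 0.0
    return d_E(a, c, d, M) / max(len(a), len(c))
```

For instance, with $d(x,y)=|x-y|$ on $[0,1]$ and $M=1$ (so $\theta\le1$), taking $a$ to be the single point $0$ and $c=\{0,1\}$ gives $d_{E}(a,c)=1$ and $d_{Em}(a,c)=\tfrac12$.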
\begin{prop}
If $d$ is unbounded then $d_{E}$ and $d_{Em}$ are non-metrics for
all $M$. If $d$ is bounded, then $d_{E}$ is a metric iff $\theta\leq2$,
and $d_{Em}$ is a metric iff $\theta\leq1$.\end{prop}
\begin{proof}
Only the triangle inequality need be verified or could fail. Let $x,y\in X$,
with $x\neq y$. Then
\[
d_{E}(e_{x},e_{x}+e_{y})+d_{E}(e_{x}+e_{y},e_{y})-d_{E}(e_{x},e_{y})=2M-d(x,y)
\]
So if $d_{E}$ is a metric, $d$ is bounded and $\theta\leq2$. The
same argument for $d_{Em}$ implies $\theta\leq1$.
From now on we take $a,b,c\in E$, and assume $C(a)\leq C(c)$. We
look first at $d_{E}$ and suppose $\theta\leq2$: as motivation,
we could verify that whenever
\[
2C(b)\leq C(a)(2-\theta)\quad\textrm{or}\quad2C(b)\geq\theta C(a)+2C(c)
\]
then the notional parts alone in $d_{E}(a,b)+d_{E}(b,c)$ add to
at least
\[
M(C(c)-C(a))+\theta MC(a)\geq d_{E}(a,c)
\]
The value of $C(b)$ determines three cases, all with similar reasoning.
Case $C(b)<C(a)$: there exist $\alpha\in G_{a}$ and $\gamma\in G_{c}$
such that
\[
d_{E}(a,b)=\sum_{i=1}^{C(b)}d(b_{i},a_{\alpha(i)})+M(C(a)-C(b))
\]
and
\[
d_{E}(b,c)=\sum_{i=1}^{C(b)}d(b_{i},c_{\gamma(i)})+M(C(c)-C(b))
\]
So
\begin{align*}
d_{E}(a,b)+d_{E}(b,c) & \geq\sum_{i=1}^{C(b)}d(a_{\alpha(i)},c_{\gamma(i)})+M(C(c)-C(a))+2M(C(a)-C(b))
\end{align*}
\begin{alignat}{1}
& \geq\sum_{i=1}^{C(a)}d(a_{\alpha(i)},c_{\gamma(i)})+M(C(c)-C(a))+(2-\theta)M(C(a)-C(b))\label{eq:B}
\end{alignat}
having added, for each $i$ beyond $C(b)$, the non-positive $d(a_{\alpha(i)},c_{\gamma(i)})-\theta M$.
Then \eqref{eq:B} is at least
\[
d_{E}(a,c)+M(2-\theta)(C(a)-C(b))\geq d_{E}(a,c)
\]
Case $C(a)\leq C(b)\leq C(c)$: there exist $\beta\in G_{b}$ and
$\gamma\in G_{c}$ such that
\[
d_{E}(a,b)=\sum_{i=1}^{C(a)}d(a_{i},b_{\beta(i)})+M(C(b)-C(a))
\]
and
\[
d_{E}(b,c)=\sum_{i=1}^{C(b)}d(b_{i},c_{\gamma(i)})+M(C(c)-C(b))
\]
Then
\begin{equation}
d_{E}(a,b)+d_{E}(b,c)=M(C(c)-C(a))+\sum_{i=1}^{C(a)}d(a_{i},b_{\beta(i)})+\sum_{i=1}^{C(b)}d(b_{i},c_{\gamma(i)})\label{eq:C}
\end{equation}
Now $\sum_{i=1}^{C(b)}d(b_{i},c_{\gamma(i)})=\sum_{i=1}^{C(b)}d(b_{\beta(i)},c_{\gamma\beta(i)})$
since $\beta\in G_{b}$, and so \eqref{eq:C} is at least
\[
M(C(c)-C(a))+\sum_{i=1}^{C(a)}d(a_{i},c_{\gamma\beta(i)})+\sum_{i=1+C(a)}^{C(b)}d(b_{\beta(i)},c_{\gamma\beta(i)})
\]
which is at least $d_{E}(a,c)$, in this case for any $\theta$.
Case $C(b)>C(c)$: For some $\tau\in G_{c}$,
\[
d_{E}(a,c)=M(C(c)-C(a))+\sum_{i=1}^{C(a)}d(a_{i},c_{\tau(i)})
\]
and $\rho,\sigma\in G_{b}$ are given by
\[
d_{E}(a,b)=\sum_{i=1}^{C(a)}d(a_{i},b_{\rho(i)})+M(C(b)-C(a))
\]
and
\[
d_{E}(b,c)=\sum_{i=1}^{C(c)}d(b_{\sigma(i)},c_{i})+M(C(b)-C(c))
\]
We write $\omega=\sigma^{-1}\rho\in G_{b}$, which takes any subscript
of $a$ to a subscript of $c$, and define
\[
l=\#\{1\leq i\leq C(a):\rho(i)=\sigma(j)\textrm{ for some }j\textrm{ in }1,\ldots,C(c)\}
\]
Then $l\geq C(a)+C(c)-C(b)$ since $\rho(1),\ldots,\rho(C(a))$ and
$\sigma(1),\ldots,\sigma(C(c))$ are all chosen from $1,2,\ldots,C(b)$.
Dropping all terms with $i>l$, $d_{E}(a,b)+d_{E}(b,c)$ is at least
\begin{equation}
\sum_{i=1}^{l}[d(a_{i},b_{\rho(i)})+d(b_{\rho(i)},c_{\omega(i)})]+2M(C(b)-C(c))+M(C(c)-C(a))\label{eq:A}
\end{equation}
Just as before, $d(a_{i},c_{\omega(i)})-\theta M\leq0$, so \eqref{eq:A}
is at least as big as
\[
\sum_{i=1}^{C(a)}d(a_{i},c_{\omega(i)})+2M(C(b)-C(c))-\theta M(C(a)-l)+M(C(c)-C(a))
\]
Now $C(b)-C(c)\geq C(a)-l\geq0$ so we get
\[
d_{E}(a,b)+d_{E}(b,c)\geq\sum_{i=1}^{C(a)}d(a_{i},c_{\omega(i)})+M(C(c)-C(a))+M(2-\theta)(C(a)-l)\geq d_{E}(a,c)
\]
concluding the proof that $d_{E}$ is a metric.
Passing to $d_{Em}$, we now assume $\theta\leq1$, which implies
$d_{Em}(a,c)\leq M$. If $C(b)\leq C(c)$ it is certainly true that
\[
d_{Em}(a,b)+d_{Em}(b,c)\geq d_{Em}(a,c)
\]
so we will suppose $C(b)>C(c)$ and reuse the notation just employed
for $d_{E}$. Using \eqref{eq:A} again, we can write
\[
C(b)(d_{Em}(a,b)+d_{Em}(b,c))\geq(C(c)-l)M+\sum_{i=1}^{l}d(a_{i},c_{\omega(i)})+(2C(b)-2C(c)-C(a)+l)M
\]
Since $\theta\leq1$, the sum of the first two terms on the right
is at least $C(c)d_{Em}(a,c)$ and we also have $2C(b)-2C(c)-C(a)+l\geq C(b)-C(c)>0$,
so
\[
d_{Em}(a,b)+d_{Em}(b,c)\geq\frac{C(c)d_{Em}(a,c)+M(C(b)-C(c))}{C(b)}\geq d_{Em}(a,c)
\]
as it is a convex combination of $d_{Em}(a,c)$ and $M$.
\end{proof}
\subsection{Simple properties of $d_{E}$ and $d_{Em}$}
We start with some computational results about $d_{E}$. The first
says that $a$ and $c$ can be taken as disjoint.
\begin{prop}
\label{pro:disj}$d_{E}(a,c)=d_{E}(a_{c},c_{a})$\end{prop}
\begin{proof}
Assume $C(a)\leq C(c)$. We have to show that among the permutations
$\gamma$ in $G_{c}$ which minimise
\[
d^{\gamma}(a,c)=\sum_{j=1}^{C(a)}d(a_{j},c_{\gamma(j)})+M|C(c)-C(a)|
\]
there exists one in which maximally many identical elements (with
multiplicity) of $a$ and $c$ are matched up by $\gamma$. But if
$a_{j}=c_{\gamma(k)}$ then
\[
d(a_{j},c_{\gamma(k)})+d(a_{k},c_{\gamma(j)})\leq d(a_{j},c_{\gamma(j)})+d(a_{k},c_{\gamma(k)})
\]
is certainly true, so if we start with any $\gamma$ that minimises
$d^{\gamma}(a,c)$, we can find another with the required property.\end{proof}
\begin{cor}
If $\frac{d}{M}$ is the discrete metric then $d_{E}(a,c)=M\max(C(a_{c}),C(c_{a}))$,
and so $d_{E}$ generalises the bag distance.
\end{cor}
The next result is needed to establish completeness.
\begin{lem}
\label{squeeze} If $x,y\in X$, $a,c\in E$ and $C(a)=C(c)=n$, then
\[
|d_{E}(a+e_{x},c+e_{y})-d_{E}(a,c)|\leq d(x,y)
\]
\end{lem}
\begin{proof}
$d_{E}(a+e_{x},c+e_{y})\leq d(x,y)+d_{E}(a,c)$, because the right-hand
side is realised by one particular matching in the sense of the definition
of $d_{E}$: pair $x$ with $y$ and match $a$ with $c$ optimally.
Now, renumbering so as to identify $x$ as $a_{1}$ and $y$ as $c_{n+1}$
(if these subscripts were the same we would be finished) suppose that
\[
d_{E}(a+e_{x},c+e_{y})=d(x,c_{1})+d(a_{n+1},y)+\sum_{j=2}^{n}d(a_{j},c_{j})
\]
Then
\[
d_{E}(a+e_{x},c+e_{y})+d(x,y)\geq d(a_{n+1},c_{1})+\sum_{j=2}^{n}d(a_{j},c_{j})\geq d_{E}(a,c)
\]
as required. Simple examples show that the bound $d(x,y)$ is tight.
\end{proof}
Finally we compare sequences in $d_{E}$ and $d_{Em}$.
\begin{prop}
\label{pro: sequence}Let $S_{i}$ be a sequence in $E$. Then any
of the following is true with respect to $d_{E}$ iff it is true with
respect to $d_{Em}$: (i) $S_{i}$ is Cauchy; (ii) $S_{i}$ is convergent;
(iii) $S_{i}$ has limit $l\in E$.\end{prop}
\begin{proof}
We first show that $d_{E}$ and $d_{Em}$ have the same Cauchy sequences.
Since multisets of cardinalities $r$ and $t$ are at least $M|t-r|$
apart in $d_{E}$, it follows that any Cauchy sequence for $d_{E}$
must eventually have constant cardinality, in which case $d_{E}$
and $d_{Em}$ are mutually proportional and so the sequence is also
Cauchy for $d_{Em}$.
Now suppose $S_{i}$ is Cauchy for $d_{Em}$, write $s_{i}=C(S_{i})$
and then for each $\epsilon>0,\exists N=N(\epsilon)$ such that whenever
$i,j>N$, $d_{Em}(S_{i},S_{j})<\epsilon$. But then $d_{Em}(S_{i},S_{j})\geq M(1-\frac{s_{j}}{s_{i}})$
supposing $s_{i}\geq s_{j}$, and as $\frac{s_{j}}{s_{i}}>1-\frac{\epsilon}{M}$,
no subsequence of the $s_{i}$ can go to infinity; hence the
sequence $s_{i}$ is bounded (for each $s_{j}$, and hence in general).
But then $M(1-\frac{s_{j}}{s_{i}})$ only takes finitely many positive
values so for sufficiently small $\epsilon$ this gives a contradiction
unless the $s_{i}$ are eventually constant. So $d_{E}$ and $d_{Em}$
are again proportional and $S_{i}$ is Cauchy with respect to $d_{E}$.
(There is also a trivial case in which $s_{i}=0$ infinitely often.)
An exactly similar argument shows that any limit of such a sequence
(either metric) again has the same cardinality. It follows that $d_{E}$
and $d_{Em}$ also have the same convergent sequences (and limits).
\end{proof}
We are now ready for the main result.
\begin{prop}
(Topology and completeness.)
$d_{E}$ and $d_{Em}$ induce the same topology on $E$. The metrics
$d_{E}$ and $d_{Em}$ are complete iff $d$ is.\end{prop}
\begin{proof}
By \eqref{pro: sequence}, $d_{E}$ and $d_{Em}$ have the same convergent
sequences (and limits), and so induce the same topology on $E$. We
also see that given $d$, either both or neither of $d_{E}$ and $d_{Em}$
are complete metrics.
If $x_{i}$ is a non-convergent Cauchy sequence in $X$, then $S_{i}=\{x_{i}\}$
is a non-convergent Cauchy sequence for both $d_{E}$ and $d_{Em}$.
Supposing that $d$ is complete, let $S_{i}$ be a sequence of multisets
of $X$ which is Cauchy in $d_{E}$, with all $C(S_{i})=n>1$ (the
completeness of $d$ implies the case $n=1$). Given $\epsilon>0$,
$\exists N=N(\epsilon)$ such that $m\geq N\Longrightarrow d_{E}(S_{m},S_{N})<\epsilon$.
As every element%
\footnote{As always, this is with multiplicity. If some element of $X$ occurs
three times in $S_{N}$, then at least three elements (with multiplicity)
of each $S_{m}$ are within $d$-distance $\epsilon$ of it.%
} of each $S_{m}$ for $m\geq N$ is then within $d$-distance $\epsilon$
of some element of $S_{N}$ it follows that there exists a totally
bounded region of $X$ containing all elements of all the $S_{i}$.
Since $X$ is complete, the closure of this region is a compact
subset of $X$, and now we may assume that $X$ is compact.
Recalling that a Cauchy sequence converges iff it has a convergent
subsequence, we select an arbitrary $x_{i}$ from each $S_{i}$ (using
the axiom of choice). Since $X$ is compact, the sequence $x_{i}$
has a convergent subsequence $y_{i}=x_{t(i)}$ with limit $y$ (say).
Writing $T_{i}$ for $S_{t(i)}$, we denote by $T_{i}'$ the multiset
$T_{i}-e_{y_{_{i}}}$. Using \eqref{squeeze} we have
\[
|d_{E}(T_{i},T_{j})-d_{E}(T_{i}',T_{j}')|\leq d(y_{i},y_{j})
\]
and it follows that $T_{i}'$ is a Cauchy sequence of cardinality
$n-1$, and we can assume that $T_{i}'$ has limit $T'$. Using \eqref{squeeze}
again, and denoting $T'+e_{y}$ by $T$,
\[
|d_{E}(T_{i},T)-d_{E}(T_{i}',T')|\leq d(y_{i},y)
\]
and so $T_{i}$ converges to $T$, which is therefore the limit of
the Cauchy sequence $S_{i}$.
\end{proof}
\subsection{An algorithm for $d_{E}$}
We show that calculation of $d_{E}$ is an integer programming problem.
As usual suppose $C(a)\leq C(c)$ and $a\cap c=e_{0}$. Just as in
the Hausdorff heuristic, label the rows (the columns) of a matrix
by elements of $R(a)$ (of $R(c)$), and put the $d$-distances as
entries. Add one more row whose entries are all $M$, to give a matrix
$D$.
Define a new matrix $H$, the same shape as $D$, constrained to satisfy
\[
\sum_{i}h_{ij}=c(j)\;\textrm{for}\; j\leq\#R(c)\qquad\textrm{and}\qquad\sum_{j}h_{ij}=a(i)\;\textrm{for}\; i\leq\#R(a)
\]
implying $\sum_{j}h_{1+\#R(a),j}=C(c)-C(a)$. Then $d_{E}(a,c)$ is
the minimum value of $\sum_{i,j}d_{ij}h_{ij}$ (the trace of $D^{T}H$),
for which all the $h_{ij}\in\mathbb{N}$.
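For tiny multisets the transportation problem above can simply be brute-forced; the following is a minimal sketch (the function name \texttt{d\_E\_bruteforce} and the dict-based multiset encoding are our illustrative conventions, not the paper's notation):

```python
from itertools import permutations

def d_E_bruteforce(a, c, d, M):
    # Multisets are dicts {element: multiplicity}.  Expand both to lists
    # with multiplicity; the C(c) - C(a) surplus elements of the larger
    # multiset are absorbed by the dummy row at cost M each.  Minimise
    # the total assignment cost over all matchings.
    A = [x for x, m in a.items() for _ in range(m)]
    C = [x for x, m in c.items() for _ in range(m)]
    if len(A) > len(C):
        A, C = C, A
    pad_cost = M * (len(C) - len(A))
    return min(sum(d(x, y) for x, y in zip(A, p)) + pad_cost
               for p in permutations(C))
```

For larger instances an off-the-shelf integer programming (or, since transportation polytopes have integral vertices, linear programming) solver would be used instead; the brute force is only for checking small cases.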
\section{The multiset model $F$}
We will define a space $A$ whose finite subsets include the multisets
of $X$.
This time we identify the multiset $re_{x}$ with $(x,r)\in X\times\mathbb{N}$,
as usual interpreted as {}``$r$ copies of $x$''. Let $A$ be the
quotient space of $X\times\mathbb{N}$ in which all points of the
form $(x,0)$ have been identified. $A_{r}$ will denote the (quotient
of the) subset $X\times\{r\}$. We use $\mathbb{N}$ instead of $\mathbb{Z}^{+}$
(which would be simpler) to get a canonical bijection with model $E$.
Note that $A$ consists of the isolated point $e_{0}$ and isolated
copies of $X$; furthermore $A$ coincides with $\{e\in E:\#R(e)\leq1\}$.
Hence a multiset of $X$ is a finite subset $U$ of $A$ whose underlying
elements of $X$ are all distinct, viz.\ $re_{x},se_{x}\in U\Longrightarrow r=s$;
$F$ will denote the space of all such subsets of $A$. The following
result should now be obvious.
\begin{prop}
Let $d'$ be any metric on $A$. Then the restriction of $d'_{H}$
to $F$ is a multiset metric on $X$, and it generalises the Hausdorff
metric iff $d'(1e_{x},1e_{y})=d(x,y)\ \forall\, x,y\in X$.
\end{prop}
We will return later to the question of when this is complete.
For metrics on $A$, as before fix $M>0$ and define $\theta=\frac{\sup d}{M}$
when $d$ is bounded. We start with the functions $d_{A}$ and $d_{Am}$
from $A\times A$ to $R$ defined by
\[
d_{A}(re_{x},te_{z})=M|t-r|+\min(r,t)d(x,z)
\]
and
\[
d_{Am}(re_{x},te_{z})=\frac{d_{A}(re_{x},te_{z})}{\max(r,t)}\quad\textrm{or}\quad0\quad\textrm{when}\quad r=t=0
\]
Noting that (a) these are well-defined on $A\times A$, (b) they are
the respective restrictions to $A\times A$ of $d_{E}$ and $d_{Em}$,
and (c) they both agree with $d$ when $r=t=1$, it follows that they
are metrics on $A$ when $\theta\leq2$ and when $\theta\leq1$ respectively.
Actually there is a small surprise.
\begin{prop}
If $d$ is unbounded then $d_{A}$ and $d_{Am}$ are non-metrics for
all $M$. If $d$ is bounded, then $d_{A}$ and $d_{Am}$ are both
metrics iff $\theta\leq2$. \end{prop}
\begin{proof}
As $d_{A}(2e_{x},e_{x})+d_{A}(e_{x},2e_{z})-d_{A}(2e_{x},2e_{z})=2M-d(x,z)$,
if $d_{A}$ is a metric, $d$ must be bounded and $\theta\leq2$.
Use the same example for $d_{Am}.$
It only remains to show that $d_{Am}$ is a metric when $\theta\leq2$.
We fix $re_{x},te_{z}\in A$, assuming $r\leq t$. Now if $s\leq t$,
it is immediate that
\[
d_{Am}(re_{x},se_{y})+d_{Am}(se_{y},te_{z})\geq d_{Am}(re_{x},te_{z})
\]
so we will take $s>t$. Using the definition of $d_{Am}$,
\begin{equation}
st(d_{Am}(re_{x},se_{y})+d_{Am}(se_{y},te_{z})-d_{Am}(re_{x},te_{z}))\label{eq:D}
\end{equation}
\[
=M(t+r)(s-t)+rtd(x,y)+t^{2}d(y,z)-rsd(x,z)
\]
and using $t^{2}\geq rt$ we get that \eqref{eq:D} is at least as
large as
\[
Mt(s-t)+r(s-t)(M-d(x,z))
\]
which is non-negative provided $2M\geq\frac{2r}{r+t}d(x,z)$, whose
right side cannot exceed $\sup d=\theta M$. So $d_{Am}$ is a metric
when $\theta\leq2$.\end{proof}
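The necessity argument can be checked numerically. A small sketch, taking $X={\mathbb R}$ with $d(x,z)=|x-z|$ and two points at distance $3>2M$ for $M=1$ (our choice of example values):

```python
def d_A(p, q, d, M):
    # p = (x, r) encodes the point r*e_x of A.
    (x, r), (z, t) = p, q
    return M * abs(t - r) + min(r, t) * d(x, z)

d = lambda x, z: abs(x - z)
M = 1.0
# The proof's counterexample 2e_x, e_x, 2e_z with d(x, z) = 3 > 2M:
lhs = d_A((0, 2), (0, 1), d, M) + d_A((0, 1), (3, 2), d, M)  # = 1 + 4
rhs = d_A((0, 2), (3, 2), d, M)                              # = 6
assert lhs < rhs  # triangle inequality fails, so d_A is not a metric here
```

The deficit is exactly $2M-d(x,z)=-1$, as in the proof.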
\begin{rem}
If $r\leq t$, $M(t-r)\leq td_{Am}(re_{x},te_{z})=d_{A}(re_{x},te_{z})\leq tM\max(1,\theta)$.
Actually, $d_{Am}(re_{x},te_{z})$ is a convex combination of $d(x,z)$
and $M$ and therefore lies between them. \end{rem}
\begin{prop}
Let $r_{i}e_{x(i)}$ be a sequence in $A$. Then any of the following
is true with respect to $d_{A}$ iff it is true with respect to $d_{Am}$:
(i) $r_{i}e_{x(i)}$ is Cauchy; (ii) $r_{i}e_{x(i)}$ is convergent;
(iii) $r_{i}e_{x(i)}$ has limit $l\in A$.
\end{prop}
The proof is exactly as in Proposition \ref{pro: sequence}.
\begin{prop}
(Main properties of $A$)\end{prop}
\begin{enumerate}
\item $d_{Am}$ and $d_{A}$ both induce the same topology on $A$, coinciding
with the quotient topology inherited from $X\times\mathbb{N}$.
\item $d_{A}$ and $d_{Am}$ are complete metrics iff $d$ is.
\item The subset $U$ of $A$ is compact iff each $U_{r}=U\cap A_{r}$ is
a compact subset of $A_{r}$, and almost all the $U_{r}$ are empty.\end{enumerate}
\begin{proof}
(Clause 1) We have just seen that $d_{A}$ and $d_{Am}$ have the
same convergent sequences and limits, so they induce the same topology.
Let $re_{x},te_{z}\in A$ with $t>0$ and choose $\epsilon>0$. Now
$M|r-t|\leq d_{A}(re_{x},te_{z})<\epsilon$ implies $r=t$ when $\epsilon$
is sufficiently small, and indeed in this case
\[
d_{A}(te_{x},te_{z})<\epsilon\Leftrightarrow d(x,z)<\frac{\epsilon}{t}
\]
It follows that any sufficiently small open ball around $te_{z}$
in the $d_{A}$-topology is also an open ball in the quotient topology,
and vice versa.
The point $e_{0}$ is isolated in both topologies. So these three
topologies on $A$ coincide.
(Clause 2) By the preceding proposition $d_{A}$ is complete iff $d_{Am}$
is. As any Cauchy sequence eventually lies in a single $A_{r}=X\times\{r\}$,
it converges iff this is true for the same sequence regarded as a
sequence in $X$, and any limits also coincide.
(Clause 3) Suppose $U$ is compact. If infinitely many $U_{r}$ were
non-empty we could find a sequence in $U$ with no convergent subsequence
(compactness being equivalent to sequential compactness in metric
spaces). If $re_{x(i)}$ is a sequence in some $U_{r}$ then it has
a convergent subsequence in $U$ but this must converge to a point
of $U_{r}$. Conversely, if $U_{r}$ is compact in $A_{r}$ then it
is compact in $X$ and then $U$ is a finite union of compact sets,
and so compact.
\end{proof}
Let $d_{F}$ and $d_{Fm}$ be the Hausdorff metrics arising from $d_{A}$
and $d_{Am}$ respectively. Let us write $F'$ for the set of all
finite subsets of $A$.
\begin{prop}
$d_{F}$ and $d_{Fm}$ are metrics on $F'$ iff $\theta\leq2$, and
both coincide with the Hausdorff metric for the case of ordinary subsets.
They are complete metrics on $F'$ iff $d$ is.\end{prop}
\begin{proof}
$\theta\leq2$ is necessary for the triangle inequality for $d_{A}$
(and so for $d_{F}$) or for $d_{Am}$ (and so for $d_{Fm}$) to hold.
The rest of the statement is an immediate consequence of their definitions
and the stated properties of the Hausdorff metric.
\end{proof}
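A minimal sketch of $d_F$ as the Hausdorff construction over $d_A$ (the helper names and the choices $X=[0,1]$, $M=2$, so that $\theta=\tfrac12\leq2$, are our illustrative assumptions):

```python
def d_A(p, q, M=2.0):
    # Points of A are encoded as (x, r); here X = [0, 1] with d = |x - z|,
    # so sup d = 1 and theta = 1/2 <= 2.
    (x, r), (z, t) = p, q
    return M * abs(t - r) + min(r, t) * abs(x - z)

def hausdorff(U, V, dist):
    # Hausdorff distance between finite nonempty sets under `dist`.
    a = max(min(dist(u, v) for v in V) for u in U)
    b = max(min(dist(u, v) for u in U) for v in V)
    return max(a, b)

# On ordinary subsets (all multiplicities 1), d_F is the usual
# Hausdorff metric on subsets of X:
assert hausdorff({(0.0, 1), (1.0, 1)}, {(0.5, 1)}, d_A) == 0.5
```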
\section{The multiset model $G$}
We continue to suppose $\theta\leq2$. Of course the restrictions
of $d_{F}$ and $d_{Fm}$ to $F$ (multisets on $X$) need not be
complete. For instance, if $d(x_{i},x)$ is strictly decreasing to
zero and $y_{i}=x_{i+1}$, then the sequence $2e_{x_{i}}+3e_{y_{i}}$
is Cauchy in $d_{F}$ or $d_{Fm}$ but its limit is $\{2e_{x},3e_{x}\}\in F'\backslash F$.
We deal with this discrepancy in the following way. Observe that to
every $U\in F'$ there is a function $t_{U}:X\rightarrow\mathbb{N}$
defined by
\[
t_{U}(x)=\sum_{ae_{x}\in U}a
\]
and indeed if $U\in F$, $t_{U}$ is its representative in model $E$.
Define an equivalence relation $\sim$ on $F'$ by decreeing $U\sim V$
iff $t_{U}=t_{V}$. For example, if $x\neq y$, one $\sim$-class consists of
$\{e_{x},2e_{x},3e_{x},2e_{y}\},\{e_{x},5e_{x},2e_{y}\},\{2e_{x},4e_{x},2e_{y}\},\{6e_{x},2e_{y}\}$.
Obviously every class is finite, contains exactly one element of $F$,
and is a singleton iff $t_{U}(x)\leq2\ \forall\, x$.
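The finiteness of $\sim$-classes comes down to partitions into distinct parts; a quick enumeration for the class of $6e_{x}+2e_{y}$ above (the helper name is ours):

```python
def distinct_partitions(n, least=1):
    # Yield all partitions of n into strictly increasing (hence distinct) parts.
    if n == 0:
        yield []
        return
    for k in range(least, n + 1):
        for rest in distinct_partitions(n - k, k + 1):
            yield [k] + rest

# The four ways of splitting 6e_x match the four class members listed above.
assert sorted(distinct_partitions(6)) == [[1, 2, 3], [1, 5], [2, 4], [6]]
```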
We now write $G$ for $F'/\!\sim$ and $d_{G}$ for the quotient pseudometric
on $G$ corresponding to $d_{F}$. There are canonical bijections
among $G$, $F$ and $E$. We extend the notations $e_{0}$, $R()$
and $C()$ to $F'$ and $G$ in the obvious way. If $e\in G\backslash\{e_{0}\}$,
it follows that $d_{G}(e,e_{0})\geq M$ since $d_{A}(re_{x},e_{0})\geq M$
for all $re_{x}\in A,r\neq0$.
Now $d_{G}$ is definitely less than $d_{F}$ in general as
\[
d_{G}(3e_{x},3e_{y})\leq d_{F}(\{e_{x},2e_{x}\},\{e_{y},2e_{y}\})\leq2d(x,y)<3d(x,y)=d_{F}(3e_{x},3e_{y})
\]
The most important facts about $d_{G}$ are corollaries of the following
result.
\begin{prop}
\label{prop:dGdFdH}If $e,f\in G\backslash\{e_{0}\}$, then $d_{G}(e,f)\geq d_{H}(R(e),R(f))$.\end{prop}
\begin{proof}
Suppose $x\in R(e),y\in R(f)$ are such that $d(x,y)=d_{H}(R(e),R(f))$.
We can assume $x\notin R(f)$. Let $e=p_{0},p_{1},\ldots,p_{n},p_{n+1}=f$
be a sequence of elements of $G$, referring to the notation of \eqref{quotient}.
If any $p_{i}$ is $e_{0}$ then we have two or more terms $\geq M$
so the path length is at least $2M\geq\sup d\geq d(x,y)$ and we now
assume that all $R(j):=R(p_{j})$ are non-empty.
We will employ the observation that $d_{F}(u,v)\geq\min_{b\in R(v)}d(a,b)$
if $a\notin R(v)$. For any sequence $x_{0},x_{1},\ldots$, all in
$\cup_{j}R_{j}$, define $s_{i}$ by $x_{i}\in R(s_{i})$ where $s_{i}$
is maximal. Take $x_{0}=x$ and choose $x_{1}\in R(1+s_{0})$ such
that $d(x_{0},x_{1})$ is minimal. So the $d_{F}$-distance between
any member of $p_{s_{0}}$ and any member of $p_{1+s_{0}}$ is at
least $d(x_{0},x_{1})$.
If $x_{1}\in R(f)$ we are finished as our path is at least $d(x_{0},x_{1})\geq d(x,y)$.
Otherwise choose $x_{2}\in R(1+s_{1})$ such that $d(x_{1},x_{2})$
is minimal.
Again we are finished if $x_{2}\in R(f)$ as our path is (at least)
$d(x_{0},x_{1})+d(x_{1},x_{2})$. If not, choose $x_{3}\in R(1+s_{2})$
to minimise $d(x_{2},x_{3})$. As the $s_{i}$ are increasing we get
a sequence of terms from $x$ to some $z\in R(f)$ whose sum is at
least $d(x,y)$.\end{proof}
\begin{cor}
(1) $d_{G}$ agrees with the Hausdorff metric on finite subsets of
$X$.
(2) If $d$ is uniformly discrete then $d_{G}$ is a complete metric
on $G$.
(3) $d_{F}(e,f)\geq d_{H}(R(e),R(f))$.\end{cor}
\begin{proof}
(1) For finite subsets $e,f$ of $X$,
\[
d_{F}(e,f)\geq d_{G}(e,f)\geq d_{H}(R(e),R(f))=d_{H}(e,f)=d_{F}(e,f)
\]
(2) $d_{H}$ has the same lower bound as $d$. By clause (1), so does
$d_{G}$, making it a metric. $d_{G}$ is complete because it is uniformly
discrete.
(3) $d_{F}$ is at least as big as $d_{G}$.
\end{proof}
In the notation of the proposition, if we have $t_{e}(x)>t_{f}(x)$
and we define $s_{0}$ to be the maximal $s$ such that $t_{s}(x)>t_{1+s}(x)$,
we cannot use the same argument to show that $d_{G}$ is a metric
in general, because we might have $z=x$.
\section{Concluding remarks}
Aside from the potential applications mentioned at the start or described
in \cite{SIYS07}, these metrics might also be useful in voting theory.
An election is a multiset on the set $X$ of permitted ballot types.
For instance, if $X$ is the total orderings (permutations) of $n$
candidates, one well-known metric on $X$ is the \emph{Kendall $\tau$-distance}\cite[p.211]{DD09},
defined as the fewest adjacent transpositions required to change one ordering into the
other.
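A direct implementation of this distance counts discordant pairs, which equals the minimal number of adjacent transpositions:

```python
def kendall_tau(p, q):
    # Number of pairs of candidates that the two rankings order differently;
    # this equals the bubble-sort (adjacent-transposition) distance.
    pos = {v: i for i, v in enumerate(q)}
    n = len(p)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if pos[p[i]] > pos[p[j]])

assert kendall_tau([1, 2, 3], [1, 3, 2]) == 1  # one adjacent swap
assert kendall_tau([1, 2, 3], [3, 2, 1]) == 3  # full reversal
```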
Future work ought to look at possible applications and clarify the
relationships among $E,F$ and $G$.
\bibliographystyle{amsalpha}
% arXiv:1109.4930 (math.MG, September 2011) --- ``Multiset metrics on bounded spaces''
% https://arxiv.org/abs/1109.4930
% https://arxiv.org/abs/1812.11043
\title{Toric degenerations in symplectic geometry}
\begin{abstract}
A toric degeneration in algebraic geometry is a process in which a given projective variety is degenerated into a toric one. One can then obtain information about the original variety by analyzing the toric one, which is a much easier object to study. Harada and Kaveh described how to incorporate a symplectic structure into this process, providing a very useful tool for solving certain problems in symplectic geometry. Below we present applications of this method to questions about the Gromov width and to cohomological rigidity problems.
\end{abstract}
\section{Introduction}
Manifolds and algebraic varieties equipped with a group action are usually better understood as a presence of an action is a sign of certain symmetries. In particular, {\it toric varieties} form a very well understood class of varieties. These are varieties which contain an algebraic torus $T^n_{{\mathbb C}}:=({\mathbb C}^*)^n$ as a dense open subset and are equipped with an action of $T^n_{\mathbb C}$ which extends the usual action of $T^n_{\mathbb C}$ on itself.
(For more about toric varieties see, for example, \cite{CLS} and \cite{F}.)
To understand a given projective variety $X$ one can try to ``degenerate'' it to a toric one, i.e., form a family of varieties with generic member $X$ and one special member some toric variety $X_0$. The varieties $X$ and $X_0$ are closely related and thus one can obtain information about $X$ by studying $X_0$. Moreover, such a degeneration gives a map from $X$ to $X_0$ which, in certain situations, preserves special structures that $X$ and $X_0$ might be equipped with (for example, a symplectic structure).
One can use the method of toric degenerations to solve problems in symplectic geometry. In this work we discuss the following two applications:
\begin{itemize}
\item calculating lower bounds for the Gromov width, i.e., trying to find the largest ball which can be symplectically embedded into a given symplectic manifold;
\item constructing symplectomorphisms needed for a cohomological rigidity problem for symplectic toric manifolds, that is, the question of whether any two symplectic toric manifolds with isomorphic integral cohomology rings (via an isomorphism preserving the class of the symplectic form) are symplectomorphic.
\end{itemize}
Recall that a $2n$-dimensional symplectic manifold $(M, \omega)$ equipped with an effective Hamiltonian action of an $n$-dimensional torus $T=(S^1)^n$ is called a {\it symplectic toric manifold}.
The action being Hamiltonian means that there exists a moment map, that is, a $T$-invariant map $\mu \colon M \rightarrow \textrm{Lie}(T)^*\cong {\mathbb R}^n$ such that for every $X \in \textrm{Lie}(T)$ it holds that $\iota_{X^\sharp} \omega=d\mu^X$ where $X^\sharp$ denotes the vector field on $M$ induced by $X$ and $\mu^X \colon M \rightarrow {\mathbb R}$ is defined by $\mu^X(p)=\langle \mu(p), X\rangle.$
Such a manifold can be given a complex structure interacting well with the symplectic one so that $\omega$ is a K\"ahler form and $(M,\omega)$ a K\"ahler manifold. In particular, symplectic toric manifolds are toric varieties in the sense of algebraic geometry. A theorem of Delzant states that we have a bijection
\begin{displaymath}
\begin{array}{ccc}
\{2n \mbox{-dim compact symplectic toric manifolds} \}
&& \{\mbox{rational, smooth polytopes
in }{\mathbb R}^n \}\\
\mbox{up to equivariant} & \Leftrightarrow &\mbox{up to translations and }\\
\mbox{ symplectomorphisms}& & \gl(n,{\mathbb Z})\mbox{ transformations}
\end{array}
\end{displaymath}
(A polytope in ${\mathbb R}^n$ is called rational if the directions of its edges are in ${\mathbb Z}^n$. It is called smooth if for every vertex the primitive vectors in the directions of edges meeting at that vertex form a ${\mathbb Z}$-basis for ${\mathbb Z}^n$.)
In this bijection, a manifold corresponds to the image of its moment map, therefore the associated polytope is often called a moment polytope or a moment image.
Not much is known about a classification of symplectic toric manifolds up to symplectomorphisms. The cohomological rigidity mentioned in the second bullet above asks if such classification might be given by the integral cohomology rings.
In Sections \ref{sec gw} and \ref{sec coh rigidity} respectively we describe the above problems in detail and explain how one can use toric degenerations to solve problems of this type. In particular we prove (rather, outline the proofs of) the following two results, obtained in projects joint respectively with I.~Halacheva, X.~Fang and P.~Littelmann, and S.~Tolman.
\begin{theorem} \cite{HP},\cite{FLP}\label{thm gw}
Let $K$ be a compact connected simple Lie group.
The Gromov width of a coadjoint orbit $\mathcal{O}_\lambda$ through a point $\lambda$ lying on some rational line in $(Lie\, K)^*$, equipped with the Kostant--Kirillov--Souriau symplectic form, is at least
\begin{equation}\label{gw formula}
\min\{\, \left|\left\langle \lambda,\alpha^{\vee} \right\rangle \right|;\ \alpha^{\vee} \textrm{ a coroot and }\left\langle \lambda,\alpha^{\vee} \right\rangle \neq0\}.
\end{equation}
\end{theorem}
\begin{theorem}\cite{PT}\label{thm coh rig}
Let $M$ and $N$ be Bott manifolds such that $\hh^*(M;{\mathbb Q})$ and $\hh^*(N;{\mathbb Q})$ are isomorphic to ${\mathbb Q}[x_1,\ldots,x_n]/\langle x_1^2, \ldots, x_n^2 \rangle$. For any ring isomorphism $F \colon \hh^*(M;{\mathbb Z}) \rightarrow \hh^*(N;{\mathbb Z})$ sending the class $[\omega_M]$ of the symplectic form on $M$ to the class $[\omega_N]$ of the symplectic form on $N$, there exists a symplectomorphism $f \colon (N, \omega_N)\rightarrow (M,\omega_M)$ such that the map $\hh^*(f)\colon \hh^*(M;{\mathbb Z}) \rightarrow \hh^*(N;{\mathbb Z})$ induced by $f$ on integral cohomology rings is $F$.
\end{theorem}
There are other applications of toric degenerations in symplectic geometry. For example, one can obtain information about the Ginzburg--Landau potential function on $X$ from that of $X_0$ and thus detect some non-displaceable Lagrangians of $X$ (see \cite{NNU}).
{\bf Acknowledgements.} First of all, the author would like to thank her collaborators: Xin Fang, Iva Halacheva, Peter Littelmann, and Sue Tolman.
Results contained in this manuscript were obtained in collaboration with the above mathematicians (\cite{HP}, \cite{FLP}, \cite{PT}) and therefore all of them could also be considered as authors of this paper. (However, any remaining mistakes are due to me.)
The author also thanks the organizers of the workshops ``Interactions with Lattice Polytopes" for giving her the opportunity to participate and present her results at these workshops.
The author is supported by the DFG (Die Deutsche Forschungsgemeinschaft) grant CRC/TRR 191 ``Symplectic Structures in Geometry, Algebra and Dynamics".
\section{Toric degenerations}\label{sec toric deg}
A {\bf toric degeneration} of a projective variety $X$ is a flat family $\pi \colon \mathfrak{X} \rightarrow {\mathbb C}$ with generic fiber $X$ and one special fiber $ X_0=\pi^{-1}(0)$, a (not necessarily normal) toric variety. A construction of such a degeneration of a projective variety $X$, equipped with a very ample line bundle satisfying certain conditions, can be found in Anderson (\cite[Theorem 2]{A}).
\begin{example}
Using the Pl\"ucker embedding,\footnote{Recall that the Pl\"ucker embedding sends a Grassmannian spanned by vectors $v,w \in {\mathbb C}^4$ to a point $[x_{12}:\ldots:x_{34}]\in {\mathbb C}{\mathbb P}^5$ with $x_{ij}$ equal to the determinant of the $2\times 2$ minor of $[v^T,w^T]$ spanned by rows $i$ and $j$.}
view $X=Gr(2,{\mathbb C}^4)$, the Grassmannian of $2$-planes in ${\mathbb C}^4$, as a subset of ${\mathbb C}{\mathbb P}^5$ with coordinates $\{x_{ij};\ 1\leq i<j \leq 4\}$, consisting of points satisfying
$$x_{12}x_{34}-x_{13}x_{24}+x_{14}x_{23}=0.$$
Consider the subset
$ \mathfrak{X} \subset {\mathbb C}{\mathbb P}^5 \times {\mathbb C}$ consisting of points satisfying
$$x_{12}x_{34}-x_{13}x_{24}+tx_{14}x_{23}=0,$$
where $t$ denotes the coordinate in ${\mathbb C}$. Let $\pi \colon \mathfrak{X} \rightarrow {\mathbb C}$ be the restriction to $\mathfrak{X} $ of the projection onto the second factor. This family constitutes a toric degeneration of $Gr(2,{\mathbb C}^4)$. Clearly $\pi^{-1}(1)$ is $Gr(2,{\mathbb C}^4)$. Moreover, performing a change of coordinates, one can show that $\pi^{-1}(t)$ for $t\neq 0$ is also biholomorphic to $Gr(2,{\mathbb C}^4)$. The central fiber, $\pi^{-1}(0)$, is described by the binomial ideal $\langle x_{12}x_{34}-x_{13}x_{24} \rangle $ and thus is a toric variety.
\end{example}
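The Pl\"ucker relation in the example can be verified numerically; a short sketch with real entries for simplicity (the helper name is ours):

```python
from itertools import combinations

def pluecker(v, w):
    # Pluecker coordinates x_ij = v_i w_j - v_j w_i (0-indexed here)
    # of the 2-plane spanned by v and w in C^4.
    return {(i, j): v[i] * w[j] - v[j] * w[i]
            for i, j in combinations(range(4), 2)}

x = pluecker([1, 2, 3, 4], [5, 6, 7, 8])
# x_12 x_34 - x_13 x_24 + x_14 x_23 = 0 (1-indexed as in the text):
rel = x[0, 1] * x[2, 3] - x[0, 2] * x[1, 3] + x[0, 3] * x[1, 2]
assert rel == 0
```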
Harada and Kaveh in \cite{HK} enriched the construction of Anderson by incorporating a symplectic structure.
They start with a smooth projective variety $X$, of complex dimension $n$, equipped with a very ample line bundle ${\mathcal L}$, with some fixed Hermitian structure. Let $L:=\hh^0(X, \mathcal{L})$ denote the vector space of holomorphic sections, $\Phi_{\mathcal L} \colon X \rightarrow {\mathbb P} (L^*)$ the Kodaira embedding and $\omega=\Phi_{\mathcal L}^*(\omega_{FS})$ the pull back of the Fubini--Study form, i.e., of the standard symplectic structure on complex projective spaces. Then $(X,\omega)$ is a K\"ahler manifold.
With this data they construct (under certain assumptions) not only a flat family $\pi \colon \mathfrak{X} \rightarrow {\mathbb C}$ but also a K\"ahler structure $\tilde{\omega}$ on (the smooth part of) $\mathfrak{X}$ so that $(\pi^{-1}(1), \tilde{\omega}_{|\pi^{-1}(1)})$ is symplectomorphic to $(X,\omega)$.
Moreover, the special fiber $X_0=\pi^{-1}(0)$ inherits a $2$-form, the restriction of $\tilde{\omega}$, defined on its smooth part $U_0:=(X_0)_{\textrm{smooth}}$, and thus it also obtains a divisor. If $X_0$ is normal, then the polytope associated to $X_0$ and this divisor by the usual procedure of toric algebraic geometry (see, for example, Chapter 4 of \cite{CLS}) is the closure of the moment image of the (non-compact) symplectic toric manifold $(U_0,\tilde{\omega}_{|U_0}).$ As we will see,
this polytope can be computed by analyzing the behaviour of the holomorphic sections of ${\mathcal L}$.
Here are more details about this procedure.
Denote by $L^m$ the image of the $\textrm{span }\langle f_1\cdot \ldots \cdot f_m\,; \ f_i \in L \rangle$ in $\hh^0(X, \mathcal{L}^{\otimes m})$ and by
$R={\mathbb C}[X]=\oplus_{m\geq 0}\,L^m$ the homogeneous coordinate ring of $X$ with respect to the embedding $\Phi_{\mathcal L}$.
An important ingredient of the construction is a choice of a {\it valuation with one dimensional leaves}, $\nu \colon {\mathbb C}(X)\setminus \{0\} \rightarrow {\mathbb Z}^n$, from the ring ${\mathbb C}(X)$ of rational functions on $X$.
A precise definition of a general valuation can be found, for example, in \cite[Definition 3.1]{HK}.
In this paper we only use valuations induced by a flag of subvarieties and a special case of these, called {\it lowest/highest term valuations associated to a coordinate system}.
\begin{example}[Lowest/highest term valuations of a coordinate system] \cite[Example 3.2]{HK}\label{eg lh val}
Fix a (smooth) point $p \in X$ and let $(u_1, \ldots, u_n)$ be a system of coordinates in a neighborhood of $p$, meaning that $u_1,\ldots, u_n$ are regular functions at $p$, vanishing at $p$, and such that their differentials $du_1,\ldots, du_n$ are linearly independent at $p$. Then any regular function at $p$ can be represented as a power series $\sum_{\alpha \in {\mathbb Z}_{\geq 0}^n}\,c_\alpha u^\alpha$. Here by $ u^\alpha$, with $\alpha=(\alpha_1, \ldots, \alpha_n)\in {\mathbb Z}_{\geq 0}^n$, we mean $u_1^{\alpha_1}\cdot \ldots \cdot u_n^{\alpha_n}$. Choose and fix a total order $>$ on ${\mathbb Z}^n$ respecting the addition, for example the lexicographic order. Define a map $\nu$ from the set of functions regular at $p$ to ${\mathbb Z}^n$ by
$$\nu\, \big(\sum_{\alpha \in {\mathbb Z}_{\geq 0}^n}\,c_\alpha u^\alpha \big)=\min\{\alpha;\ c_\alpha \neq 0\},$$
and extend it to ${\mathbb C}(X)\setminus \{0\}$ by setting $\nu (f/g)=\nu(f)-\nu(g)$. Then $\nu$ is a valuation with one dimensional leaves, called a \emph{lowest term valuation}. If one uses $\max$ instead of $\min$ in the definition of $\nu$, one obtains a \emph{highest term valuation}.
\end{example}
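For polynomials the lowest term valuation is a one-liner. A sketch with exponents encoded as tuples ordered lexicographically (our encoding, not the paper's), also checking the property $\nu(fg)=\nu(f)+\nu(g)$:

```python
def nu(f):
    # f maps exponent tuples alpha to coefficients c_alpha; nu picks the
    # lexicographically smallest alpha with nonzero coefficient.
    return min(a for a, c in f.items() if c != 0)

def mul(f, g):
    # Multiply two polynomials in this sparse-dict representation.
    h = {}
    for a, c in f.items():
        for b, e in g.items():
            k = tuple(x + y for x, y in zip(a, b))
            h[k] = h.get(k, 0) + c * e
    return h

f = {(2, 1): 1, (1, 3): 5}   # u1^2 u2 + 5 u1 u2^3
g = {(0, 1): 2}              # 2 u2
assert nu(f) == (1, 3)
assert nu(mul(f, g)) == tuple(x + y for x, y in zip(nu(f), nu(g)))
```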
\begin{example}[Valuations induced by a flag of subvarieties] \cite[Example 3.3]{HK}\label{eg flag val}
Take a flag of normal subvarieties (called a Parshin point) of $X$
$$\{p\}=Y_n \subset \ldots \subset Y_0=X,$$
with $\dim_{\mathbb C} (Y_k)=n-k$ and $Y_k$ non-singular along $Y_{k+1}$.
By the non-singularity assumption there exists a collection of rational functions $u_1,\ldots,u_n$ on $X$ such that $u_{k|Y_{k-1}}$ is a rational function on $Y_{k-1}$ which is not identically zero and which has a zero of first order on $Y_{k}$.
Then the lowest term valuation with respect to the lexicographic order can alternatively be described in the following way: for any $f \in {\mathbb C}(X)$, $f \neq 0$, we have $\nu(f)=(k_1,\ldots,k_n)$ where
$k_1$ is the order of vanishing of $f$ on $Y_1$, $k_2$ is the order of vanishing of $f_1:=(u_1^{-k_1} f)_{|Y_1}$ on $Y_2$, etc.
\end{example}
Given such $X$, ${\mathcal L}$, and $\nu$ we form a semigroup $S=S(\nu, {\mathcal L})$,
in the following way. Fix a non-zero element $h \in L$ and use it to identify\label{identification} $L$ with a subspace of ${\mathbb C}(X)$ by mapping $f \in L$ to $f/h \in {\mathbb C}(X)$.
Similarly identify
$L^m$ with a subspace of ${\mathbb C}(X)$ by sending $f \in L^m$ to $f/h^m \in {\mathbb C}(X)$. As any valuation satisfies $\nu(fg)=\nu(f)+\nu(g)$, the set
$$S=S(\nu,{\mathcal L})=\bigcup _{m\geq 0} \{ (m, \nu(f/h^m))\,|\,f \in L^m \setminus \{0\}\,\}$$
is a semigroup with identity (i.e. a monoid).
If $S$ is finitely generated, one can construct a toric degeneration whose special fiber is a toric variety ${\operatorname{Proj}}\, {\mathbb C}[S]$ (which is normal if $S$ is saturated).
Moreover we obtain an Okounkov body
$$ \Delta=\Delta(S)=\overline{\conv \big( \bigcup_{m>0} \{x/m\,|\, (m,x) \in S\}\big)}\subset {\mathbb R}^n.$$
Note that if $S$ is finitely generated, then $\Delta$ is a rational convex polytope.
The toric variety corresponding to $\Delta$ is the normalization of ${\operatorname{Proj}}\, {\mathbb C}[S]$.
In the following theorem we rephrase several results from \cite{HK}.
\begin{theorem}\cite{HK}\label{from hk}
Let ${\mathcal L}$ be a very ample Hermitian line bundle on a
smooth $n$-dimensional projective variety $X$ and $\omega=\Phi_{\mathcal L}^*(\omega_{FS})$ the induced symplectic form.
Let $\nu \colon {\mathbb C}(X) \setminus \{0\}\rightarrow {\mathbb Z}^n$ be a valuation with one dimensional leaves, and such that
the associated semigroup $S$ is finitely generated.
Then
\begin{itemize}
\item There exists a toric degeneration $\pi \colon \mathfrak{X} \rightarrow {\mathbb C}$ with generic fiber $X$ and special fiber $X_0 := {\operatorname{Proj}}\, {\mathbb C}[S]$, and a K\"ahler structure $\tilde{\omega}$ on (the smooth part of) $\mathfrak{X}$ such that
$(\pi^{-1}(1), \tilde{\omega}_{|\pi^{-1}(1)})$ is symplectomorphic to $(X,\omega)$
and the closure of the moment image of symplectic toric manifold
$(U_0, \tilde{\omega}_{|U_0})$, where $U_0:=(X_0)_{\textrm{smooth}}$, is the Okounkov body $\Delta (S)$.
The set $U_0$ contains the preimage of the interior of $\Delta(S)$.
\item Moreover, there exists
a surjective continuous map $\phi \colon X \to X_0$
that restricts
to a symplectomorphism from $(\phi^{-1}(U_0), \omega)$ to $(U_0, \tilde{\omega}_{|U_0})$.
\end{itemize}
In particular, if $X_0={\operatorname{Proj}}\, {\mathbb C}[S]$ is smooth (thus also normal), then $\phi^{-1}(U_0)=X$ and therefore $\phi$ provides a symplectomorphism between $(X,\omega)$ and the symplectic toric manifold $(X_{\Delta(S)}, \omega_{\Delta(S)})$ associated to $\Delta(S)$ via Delzant's construction.
\end{theorem}
Checking whether $S$ is finitely generated is a very difficult problem.
However, it was observed by Kaveh in \cite{K} that even if $S$ is not finitely generated one can still form a (not flat) family with generic fiber $X$ and special fiber $({\mathbb C}^*)^n$. Even though such a construction provides much less information about $X$, it still suffices for the purpose of finding lower bounds on the Gromov width. We describe this idea in Section \ref{sec gw}.
\section{Gromov width}\label{sec gw}
The {\it Gromov width} of a $2n$-dimensional symplectic manifold $(X,\omega)$
is the supremum of the set of the positive real numbers $a$ such that the ball of capacity $a$ (or, equivalently, of radius $\sqrt{\frac{a}{\pi}}$),
$$ B^{2n}_a =B^{2n} \Big(\sqrt{\frac{a}{\pi}}\Big)= \big \{ (x_1,y_1,\ldots,x_n,y_n) \in {\mathbb R}^{2n} \ \Big | \ \pi \sum_{i=1}^n (x_i^2+y_i^2) < a \big
\} \subset ({\mathbb R}^{2n}, \omega_{st}), $$
can be symplectically
embedded in $(X,\omega)$. Here $\omega_{st}=\sum_j\,dx_j \wedge dy_j$ denotes the standard symplectic form on ${\mathbb R}^{2n}$.
This question was motivated by the Gromov non-squeezing theorem which states that a ball $B^{2n}(r) \subset ({\mathbb R}^{2n}, \omega_{st})$ cannot be symplectically embedded into $B^2(R) \times {\mathbb R}^{2n-2} \subset ({\mathbb R}^{2n}, \omega_{st})$ unless $r\leq R$.
$J$-holomorphic curves give obstructions to ball embeddings, while Hamiltonian torus actions can lead to constructions of such embeddings (by extending a Darboux chart using the flow of the vector field induced by the action).
This is why toric degenerations provide a useful tool for finding lower bounds on the Gromov width. Given a toric degeneration of $(X,\omega)$, as described in Theorem \ref{from hk}, one can use the toric action on $X_0$ to construct embeddings of balls into a smooth symplectic toric manifold $(U_0, \tilde{\omega}_{|U_0})$, where $U_0=(X_0)_{\textrm{smooth}}$. Postcomposing such embedding with the symplectomorphism $\phi^{-1}$ produces a symplectic embedding into $(X,\omega)$.
Moreover, many embeddings of balls into symplectic toric manifolds can be read off from the associated (by the Delzant classification theorem) polytope. Identify the dual of the Lie algebra of the compact torus $T$ with the Euclidean space using the convention that $S^1={\mathbb R}/{\mathbb Z}$, i.e., the lattice of $\mathfrak{t}^*$ is mapped to ${\mathbb Z}^{\dim T} \subset {\mathbb R}^{\dim T}$. With this convention, the moment map for the standard $(S^1)^n$ action on $({\mathbb R}^{2n}, \omega_{st})$ maps $ B^{2n}_a$ onto an $n$-dimensional simplex of size $a$, closed on $n$ sides
\begin{equation}\label{simplex}
\mathfrak S^n(a):=\{(x_1,\ldots,x_n) \in {\mathbb R}^n|\ 0\leq x_j< a,\ \sum_{j=1}^n x_j< a\}.
\end{equation}
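A trivial membership test for this half-open simplex (names are ours):

```python
def in_simplex(pt, a):
    # Point of the half-open simplex S^n(a): 0 <= x_j < a and sum x_j < a.
    return all(0 <= x < a for x in pt) and sum(pt) < a

assert in_simplex((0.3, 0.2), 1.0)      # inside S^2(1)
assert not in_simplex((0.6, 0.5), 1.0)  # x + y >= 1
```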
Moreover, if the moment image contains an open simplex of size $a$, then for any $\varepsilon >0$ a ball of capacity $a - \varepsilon$ can be embedded into the given symplectic toric manifold.
\begin{proposition}\cite[Proposition 1.3]{LuSC}\cite[Proposition 2.5]{P}\label{embedding}
For any connected, proper (not necessarily compact) symplectic toric manifold $U$ of dimension $2n$, with a moment map $\mu$, the Gromov width of $U$ is at least
$$\sup \{a>0\,|\, \exists \; \Psi \in \gl(n,{\mathbb Z}), x \in
{\mathbb R}^n,\textrm{ such that }
\Psi (\inter \mathfrak S^n(a))+\,x \subset \mu(U) \}.
$$
\end{proposition}
The appearance of $\Psi$ and $x$ comes from the facts that the identification $\mathfrak{t}^*\cong {\mathbb R}^{\dim T}$ depends on a splitting of $T$ into $\dim T$ circles, and that a translation of a moment map also provides a moment map.
The above results lead to the following method for finding lower bounds on the Gromov width.
\begin{corollary}\label{cor gw tool}
Let $X$ be a smooth projective variety of complex dimension $n$, ${\mathcal L}$ an ample line bundle on $X$, and $\omega=\Phi_{\mathcal L}^*(\omega_{FS}) \in \hh^2(X; {\mathbb Z})$ an integral K\"ahler form obtained using the Kodaira embedding $\Phi_{\mathcal L} \colon X \rightarrow {\mathbb P} (L^*)$. Suppose that there exists a valuation $\nu$ giving a finitely generated and saturated semigroup $S=S(\nu, {\mathcal L})$.
Let $\Delta$ be the associated Okounkov body.
The Gromov width of $(X,\omega)$ is at least
$$\sup \{a>0\,|\, \exists \; \Psi \in \gl(n,{\mathbb Z}), x \in
{\mathbb R}^n,\textrm{ such that }
\Psi (\inter\mathfrak S^n(a))+\,x \subset \Delta \}.
$$
\end{corollary}
\begin{proof}
By the result of \cite{HK} cited above as Theorem \ref{from hk}, there exists a toric degeneration of $(X, \omega)$ to a normal toric variety $X_0={\operatorname{Proj}}\, {\mathbb C}[S]$. The subset $U:=\phi^{-1}(U_0)$ of $X$ inherits a toric action whose moment image contains $\inter\, \Delta$, the interior of $\Delta$ (recall that a moment map sends singular points of a toric variety to the boundary of the moment polytope). The corollary follows from Proposition \ref{embedding}.
\end{proof}
In fact, one does not need $S$ to be saturated: the same corollary holds even if $X_0$ is not a normal toric variety. This is because a normalization map for $X_0$ induces a biholomorphism between $(X_0)_{\textrm{smooth}}$ and an appropriate subset of the normalization of $X_0$.
It is, however, necessary that $S$ be finitely generated for a toric degeneration to exist. Otherwise one can still form a family of manifolds, but one cannot guarantee that this family is flat, and thus $X$ and $X_0$ are no longer so strongly related. As we already mentioned, Kaveh in \cite{K} observed that such a (not necessarily flat) family, with $X_0=({\mathbb C}^*)^n$, still provides information about the Gromov width of $(X,\omega)$.
To state this result we need additional notation. In the notation of Section \ref{sec toric deg}, for any $m\in{\mathbb Z}_{>0}$ let
$${\mathcal A}_m:=\{\nu(f/h^m)\,|\,f \in L^m \setminus \{0\}\,\}\subset {\mathbb Z}^n, \quad \Delta_m=\frac{1}{m} \, \conv ({\mathcal A}_m).$$
Note that $\Delta=\overline{\cup_{m>0} \Delta_m}$.
Fix $m$ and let $r=r_m$ denote the number of elements in ${\mathcal A}_m=\{\beta_1, \ldots, \beta_r\}$.
From these data we form a symplectic form, $\omega_{m}$, on $({\mathbb C}^*)^n$ using a standard procedure: $\omega_{m}$ is the pull back of the Fubini--Study form on ${\mathbb C}{\mathbb P}^{r-1}$ via the map
$\Psi_m \colon ({\mathbb C}^*)^n \rightarrow {\mathbb C}{\mathbb P}^{r-1}$, $u \mapsto (u^{\beta_1}c_1, \ldots,u^{\beta_r}c_r )$, where $c=[(c_1,\ldots,c_r)]$ is some element in ${\mathbb C}{\mathbb P}^{r-1}$ with all $c_i\neq 0$. (In \cite{K} the elements $c_i$ come from coefficients of leading terms of elements in an appropriately chosen basis of $L^m$. One also needs that the differences of elements in ${\mathcal A}_m$ span ${\mathbb Z}^n$ which, by \cite[Remark 5.6]{K}, is always true for lowest term valuations.)
Kaveh proved that
\begin{itemize}
\item For every $m>0$ there exists an open subset $U \subset X$ such that $(U, \omega)$ is symplectomorphic to $(({\mathbb C}^*)^n, \frac{1}{m}\omega_{m})$ (\cite[Theorem 10.5]{K}).
\item The Gromov width of $(({\mathbb C}^*)^n,\frac{1}{m}\omega_{m})$ is at least $R_m$, where $R_m$ is the size of the largest open simplex that fits in the interior of $\Delta_m=\frac{1}{m}\conv\,({\mathcal A}_m)$ (\cite[Corollary 12.3]{K}).
\end{itemize}
This leads to the following corollary.
\begin{corollary}\cite[Corollary 12.4]{K}\label{cor k gw}
Let $X$ be a smooth projective variety of dimension $n$, ${\mathcal L}$ an ample line bundle on $X$, and $\omega=\Phi_{\mathcal L}^*(\omega_{FS}) \in \hh^2(X; {\mathbb Z})$ an integral K\"ahler form. Let $\nu$ be a lowest term valuation on ${\mathbb C}(X)$, with values in ${\mathbb Z}^n$, and $\Delta$ the associated Okounkov body.
The Gromov width of $(X,\omega)$ is at least $R$, where $R$ is the size of the largest open simplex that fits in the interior of $\Delta$.
\end{corollary}
\subsection{Results about coadjoint orbits.}
The methods for finding the Gromov width described in Corollaries \ref{cor gw tool} and \ref{cor k gw} have been used in \cite{HP} and \cite{FLP} for coadjoint orbits of compact Lie groups.
Recall that given a compact Lie group $K$ each orbit
$\mathcal{O}\subset \mathfrak k^*:=(\textrm{Lie}\,K)^*$ of the coadjoint action of $K$ on $\mathfrak k^*$
is naturally a symplectic manifold. Namely, it can be equipped with the Kostant--Kirillov--Souriau symplectic form $\omega^{KKS}$ defined by:
$$\omega^{KKS}_{\xi}(X^\#,Y^\#)=\langle \xi, [X,Y]\rangle,\;\;\;\xi \in \mathcal{O} \subset \mathfrak k^*,\;X,Y \in \mathfrak k,$$
where $X^\#,Y^\#$ are the vector fields on $\mathfrak k^*$ induced by $X,Y \in \mathfrak k$ via the coadjoint action of $K$.
Coadjoint orbits are in bijection with points in a positive Weyl chamber as every coadjoint orbit intersects such a chamber in a single point.
An orbit is called generic (resp. degenerate) if this intersection point is an interior point of the chamber (resp. a boundary point). An orbit passing through a point $\lambda$ in a positive Weyl chamber will be denoted by $\mathcal{O}_{\lambda}$.
For example, when $K=\un(n,{\mathbb C})$ is the unitary group, a coadjoint orbit can be identified with the set of Hermitian matrices with a fixed set of eigenvalues. The orbit is generic if all eigenvalues are different, and in this case it is diffeomorphic to the manifold of complete flags in ${\mathbb C}^n$.
It has been conjectured that the Gromov width of $(\mathcal{O}_{\lambda}, \omega^{KKS})$ should be given by the following neat formula, expressed entirely in Lie-theoretic language:
$$\min\{\, \left|\left\langle \lambda,\alpha^{\vee} \right\rangle \right|;\ \alpha^{\vee} \textrm{ a coroot and }\left\langle \lambda,\alpha^{\vee} \right\rangle \neq0\}.$$
For example, as $\{e_{ii}-e_{jj};\, i\neq j\}$ forms a root system for the unitary group $\un (n, {\mathbb C})$, the Gromov width of its coadjoint orbit $\mathcal{O}_\lambda$ passing through a point $\lambda=\textrm{diag}\,(\lambda_1,\ldots,\lambda_n)\in \mathfrak{u}(n)^*$, lying on some rational line, is equal to
$$\min \{|\lambda_i-\lambda_j|;\ i,j \in \{1,\ldots, n\},\ \lambda_i\neq \lambda_j\}.$$
Here we identified both $\mathfrak{u}(n)$ and $\mathfrak{u}(n)^*$ with the set of $n \times n$ Hermitian matrices.
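To see the formula in action, take for instance the hypothetical point $\lambda=\textrm{diag}\,(5,2,0)\in \mathfrak{u}(3)^*$ (the eigenvalues are chosen only for illustration):

```latex
\min \{|\lambda_i-\lambda_j|;\ \lambda_i\neq \lambda_j\}
= \min\{|5-2|,\,|5-0|,\,|2-0|\} = 2,
```

so the Gromov width of the corresponding generic $\un(3,{\mathbb C})$ orbit, a manifold of complete flags in ${\mathbb C}^3$, equals $2$.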
This conjecture was motivated by the computation of the Gromov width of complex Grassmannians, i.e., degenerate coadjoint orbits of $\un(n,{\mathbb C})$, done by Karshon and Tolman in \cite{KT}, and independently by Lu in \cite{LuGW}. Later, using holomorphic techniques, Zoghi (for generic indecomposable\footnote{A coadjoint orbit through a point $\lambda$ in the interior of a chosen positive Weyl chamber is called indecomposable in \cite{Z} if there exists a simple positive root $\alpha$ such that for any positive root $\alpha'$ there exists a positive integer $k$ such that $\langle \lambda, \alpha' \rangle=k \langle \lambda, \alpha \rangle$. } orbits of $\un(n,{\mathbb C})$, \cite{Z}) and Caviedes (for any coadjoint orbit, \cite{CC}) showed that the above formula provides an upper bound for the Gromov width. The fact that this formula also provides a lower bound was proved using explicit Hamiltonian torus actions by Zoghi (for generic indecomposable orbits of $\un(n,{\mathbb C})$ using the standard action of the maximal torus, \cite{Z}), Lane (for generic orbits of the exceptional group $\textrm{G}_2$, \cite{La}), and the author (for $\un(n,{\mathbb C})$, $\so(2n,{\mathbb C})$ and $\so(2n+1,{\mathbb C})$ orbits
\footnote{The result about $\so(2n+1,{\mathbb C})$ holds only for orbits satisfying a mild technical condition: the point $\lambda$ of intersection of the orbit and a chosen positive Weyl chamber should not belong to a certain subset of one wall of the chamber; see \cite{P} for more details. In particular, all generic orbits satisfy this condition.}
using the Gelfand--Tsetlin torus action, \cite{P}).
\subsection{A sketch of the proof of Theorem \ref{thm gw}}
The first usage of toric degenerations in Gromov width problems appeared in the work \cite{HP} of Halacheva and the author, where the generic orbits of the symplectic group $\Sp(n)=\un(n,\mathbb{H})$ are considered. Then it was used in \cite{FLP} to prove that the formula \eqref{gw formula} is a lower bound for the Gromov width of any coadjoint orbit of any compact connected simple Lie group $K$, passing through a point in the Weyl chamber lying on some rational line, i.e., to prove Theorem \ref{thm gw}.
The rationality assumption comes from the fact that the toric degeneration method can be applied only to the orbits passing through an integral point $\lambda$ of a positive Weyl chamber, i.e., in the language of representation theory, through a dominant weight.
Let $G$ be a simply connected simple complex algebraic group and $K\subset G$ be its maximal compact subgroup.
With a dominant weight $\lambda$ one can associate an irreducible representation $V(\lambda)$ of $G$ of highest weight $\lambda$. Let
$\mathbb C_{v_\lambda}$ be the highest weight line and $P=P_{\lambda}$ be the
normalizer in $G$ of this line. Then the coadjoint orbit $\mathcal{O}_{\lambda}$ of $K$ is diffeomorphic to $G/P$ (and to $K/K \cap P$) and there exists a very ample line bundle $\mathcal L_\lambda$ on $G/P$ such that the pull back of the Fubini--Study form on the projective space
$\mathbb P(\mathrm{H}^0(G/P,\mathcal L_\lambda)^*)=\mathbb P(V(\lambda))$ via the Kodaira embedding $ G/P \rightarrow \mathbb P(\mathrm{H}^0(G/P,\mathcal L_\lambda)^*)$ is exactly the Kostant--Kirillov--Souriau symplectic form $\omega^{KKS}$ on $\mathcal{O}_{\lambda}$ (see for example \cite[Remark 5.5]{CC}).
Thus for integral $\lambda$'s one can try to construct toric degenerations of the projective variety $G/P$ with line bundle $\mathcal L_\lambda$ and obtain lower bounds for the Gromov width of the orbit $\mathcal{O}_{\lambda}$. Rescaling of symplectic forms allows one to extend such a result to orbits $\mathcal{O}_{a\lambda}$, for any $a \in {\mathbb R}_{>0}$.
It remains to discuss how one can construct the desired toric degenerations.
A great advantage of working with coadjoint orbits of a complex algebraic group $G$ is that a lot of information can be obtained from studying representations of $G$. This leads to a beautiful interplay between symplectic geometry and representation theory.
A reduced decomposition of the longest word in the Weyl group, $\underline{w}_0=s_{\alpha_{i_1}} \cdot \ldots \cdot s_{\alpha_{i_N}}$, provides the following items (defined below), related in an interesting way:
\begin{itemize}
\item a valuation $\nu_{\underline{w}_0}$,
\item a string parameterization of a crystal basis of $V_{\lambda}^*$.
\end{itemize}
We continue to denote by $\lambda$ a dominant weight and by $V_{\lambda}$ the finite dimensional irreducible representation of $G$ with highest weight $\lambda$. Let $V_{\lambda}^*$ denote the dual representation.
One often seeks a basis of $V_{\lambda}^*$ consisting of elements which behave nicely under the action of the Kashiwara operators. A crystal basis is a basis whose elements are permuted by the Kashiwara operators. Given a crystal basis one can form the crystal graph of the representation: its vertices are the elements of the crystal basis together with $\{0\}$, and its edges are labelled by simple roots and correspond to the action of the Kashiwara operators. There are (noncanonical) ways of embedding such a graph into ${\mathbb R}^{N}$, $N=\dim_{\mathbb C} G/P$. A reduced decomposition of the longest word in the Weyl group (into a composition of reflections with respect to simple roots), $\underline{w}_0=s_{\alpha_{i_1}} \cdot \ldots \cdot s_{\alpha_{i_N}}$, provides a way of assigning to each vertex of the crystal graph a string of $N$ integers (the string parametrization), and thus gives such an embedding. The convex hull of the image of the string parametrization is called a string polytope. It depends on $\lambda$ and also on the chosen decomposition $\underline{w}_0$. String polytopes have been extensively studied in representation theory.
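The simplest instance, included here only as an illustration: for $G=SL_2({\mathbb C})$ and a dominant weight $\lambda=m\omega$ ($\omega$ the fundamental weight), the crystal basis of $V_\lambda^*$ has $m+1$ elements, the only reduced decomposition is $\underline{w}_0=s_\alpha$ (so $N=1$), and the string parametrization sends the basis elements to the integers $0,1,\ldots,m$:

```latex
\text{string polytope} \;=\; [0,m] \;=\; [0,\langle \lambda, \alpha^\vee\rangle] \subset {\mathbb R}^1.
```

An open simplex (interval) of size $m=\langle \lambda, \alpha^\vee\rangle$ fits inside this polytope, in agreement with the lower bound prescribed by \eqref{gw formula} for the corresponding coadjoint orbit of $SU(2)$, a two-sphere.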
Moreover, a reduced decomposition $\underline{w}_0=s_{\alpha_{i_1}} \cdot \ldots \cdot s_{\alpha_{i_N}}$ defines a sequence of Schubert subvarieties
$$[P]=Y_{N} \subset \ldots \subset Y_{0}= G/P,$$
where $Y_j$ denotes the Schubert variety corresponding to the element $s_{\alpha_{i_{j+1}}} \cdot \ldots \cdot s_{\alpha_{i_N}}$ of the Weyl group.
We denote by $\nu_{\underline{w}_0}$ the highest term valuation associated with this flag of subvarieties.
A theorem of Kaveh relates these two objects.
\begin{theorem}\cite{Kcrbas}
The string parametrization for $V_\lambda^*=\hh^0(G/P, {\mathcal L}_\lambda)$ obtained using the reduced decomposition $\underline{w}_0$ is the restriction of the valuation $\nu_{\underline{w}_0}$ and thus the corresponding string polytope is the Okounkov body $\Delta (\nu_{\underline{w}_0})$.
\end{theorem}
Explicit descriptions of string polytopes for classical Lie groups and some well-chosen reduced decompositions of the longest words were presented in the work of Littelmann \cite{Li}. With a bit of work, one can show that the string polytope for $V_\lambda^*$ with $G=\Sp(2n,{\mathbb C})$ the symplectic group (with maximal compact subgroup $K=\Sp(n)=\un(n,\mathbb{H})$), described in \cite{Li}, contains a simplex of the size prescribed by \eqref{gw formula}. Then the result of Kaveh, \cite{Kcrbas}, quoted above, together with Corollary \ref{cor gw tool}, proves that the Gromov width of an $\Sp(n)$ coadjoint orbit $(\mathcal{O}_\lambda,\omega^{KKS})$ is at least the value prescribed by \eqref{gw formula}, i.e., it proves Theorem \ref{thm gw} in the case of the symplectic group. This is exactly the argument used in \cite{HP}.
A similar method could be applied to other classical Lie groups. However, one would need to consider each type separately, as the descriptions of string polytopes contained in \cite{Li} depend on reduced decompositions which differ from one group type to another.
To obtain a unified proof which works for all group types, in \cite{FLP} we use lowest term valuations arising from a system of parameters induced by an enumeration $\{\beta_1, \ldots, \beta_N\}$ of certain positive roots. In \cite{FFL} the authors gave a representation-theoretic description of the associated semigroup, also in the cases where this enumeration does not come from a reduced decomposition of the longest word. Unfortunately it might be very difficult to show that this semigroup is finitely generated and to find an explicit description of the associated Okounkov body. However, in the case when the enumeration is a good ordering in the sense of \cite{FFL}, building on the results from \cite{FFL} one can at least show that the associated Okounkov body contains a simplex of size prescribed by \eqref{gw formula}. Then, using the result of Kaveh \cite{K} cited here as Corollary \ref{cor k gw} (which does not require the semigroup to be finitely generated), one proves Theorem \ref{thm gw}. The details of this argument are presented in \cite{FLP}.
\section{Cohomological rigidity}\label{sec coh rigidity}
The following section is based on a joint project with Sue Tolman, \cite{PT}.
Cohomological rigidity problems ask whether the integral cohomology ring can distinguish between the manifolds of a certain family, and whether all isomorphisms between integral cohomology rings are induced by maps (homeomorphisms or diffeomorphisms, depending on the setting) between the manifolds.
In general, the integral cohomology ring is too weak an invariant to determine the homeomorphism type of a manifold. However, by a result of Freedman, it does classify closed, smooth, simply connected $4$-manifolds up to homeomorphism.
Masuda and Suh posed the question of whether cohomological rigidity holds for the family of symplectic toric manifolds.
The question was studied by its authors, Choi, and Panov. No counterexample was found and partial positive results were proved.
(Interested readers are encouraged to consult the nice survey \cite{CMSsurvey} and references therein.)
Due to the presence of the symplectic structure, it seems natural to consider the following symplectic variant of the above question.
\begin{q}(Symplectic cohomological rigidity for symplectic toric manifolds)
\begin{itemize}
\item (weak) Are any two symplectic toric manifolds $(M, \omega_M)$ and $(N, \omega_N)$ necessarily symplectomorphic whenever there exists an isomorphism $F \colon \hh^*(M;{\mathbb Z}) \rightarrow \hh^*(N;{\mathbb Z})$ sending the class $[\omega_M]$ to $[\omega_N]$?
\item (strong) Is any such
isomorphism $F \colon \hh^*(M;{\mathbb Z}) \rightarrow \hh^*(N;{\mathbb Z})$ induced by a symplectomorphism?
\end{itemize}
\end{q}
Sue Tolman and the author, in \cite{PT}, prove that weak and strong symplectic cohomological rigidity hold for the family of Bott manifolds with rational cohomology ring isomorphic to that of a product of copies of ${\mathbb C}{\mathbb P}^1$. Bott manifolds can be viewed as higher-dimensional generalizations of the Hirzebruch surfaces discussed in the example below. For the definition see Section \ref{sec bott mfd}.
\begin{rmk}
Strong (not symplectic) cohomological rigidity, with diffeomorphisms, was already proved for this family by Choi and Masuda in \cite{CM}. Their diffeomorphisms usually do not preserve the complex structure. If they did, our result would be an immediate consequence of theirs. Indeed, if $f \colon N \rightarrow M$ is a biholomorphism inducing $F$, then $\omega_N$ and $f^*(\omega_M)$ are both K\"ahler forms on $N$ defining the same cohomology class in $\hh^*(N;{\mathbb Z})$, and thus in this case $(N,\omega_N)$ and $(N,f^*(\omega_M))$ are symplectomorphic by Moser's trick.
\end{rmk}
\begin{example}(Hirzebruch surfaces)\label{eg h}
Hirzebruch surfaces are ${\mathbb C}{\mathbb P}^1$ bundles over ${\mathbb C}{\mathbb P}^1$. As complex manifolds they are classified by integers (encoding the twisting of the bundle): for each $A \in {\mathbb Z}$ we denote by $\mathcal{H}_{-A}$ the bundle ${\mathbb P} (\mathcal{O}(A) \oplus \mathcal{O}(0)) \rightarrow {\mathbb C}{\mathbb P}^1$. In particular, $\mathcal{H}_0={\mathbb C}{\mathbb P}^1 \times {\mathbb C}{\mathbb P}^1$. They can be equipped with a symplectic (even K\"ahler) structure and a toric action. A polytope corresponding to $\mathcal{H}_{-A}$ in Delzant's classification is (up to $\gl(2,{\mathbb Z})$ action) a trapezoid with outward normals $(-1,0)$,$(0,-1)$,$(1,0)$,$(A,1)$. The lengths of the edges of this trapezoid depend on the chosen symplectic structure and can be encoded as $\lambda=(\lambda_1,\lambda_2)\in ({\mathbb R}_{>0})^2$.
We denote by $(\mathcal{H}_{-A},\omega_\lambda )$ the symplectic toric manifold corresponding to the trapezoid $\Delta (A, \lambda):=\conv ((0,0),(\lambda_1,0), (\lambda_1,\lambda_2-A\lambda_1),(0,\lambda_2))$. For example, Figure \ref{fig Hirzebruch} presents $(\mathcal{H}_0,\omega_{(1,3)} )$ and $(\mathcal{H}_{-4},\omega_{(1,5)} )$.
\begin{figure}[h]
\includegraphics[scale=0.5]{TwoDiffHirzebruch.pdf}
\caption{Hirzebruch surfaces $(\mathcal{H}_0,\omega_{(1,3)} )$ and $(\mathcal{H}_{-4},\omega_{(1,5)} )$.}
\label{fig Hirzebruch}
\end{figure}
It was observed by Hirzebruch that $\mathcal{H}_{-A}$ and $\mathcal{H}_{-\wt{A}}$ are diffeomorphic if and only if $A \equiv \wt{A} \mod 2$.
Moreover, the symplectic toric manifolds $(\mathcal{H}_{-A},\omega_\lambda )$ and $(\mathcal{H}_{-\wt{A}},\omega_{\widetilde \lambda} )$ are (not equivariantly) symplectomorphic if and only if $A \equiv \wt{A} \mod 2$ and the widths and the areas of the associated polytopes agree, i.e., $\lambda_1={\widetilde \lambda}_1$ and
$ \lambda_2-\frac{1}{2}A\lambda_1 ={\widetilde \lambda}_2-\frac{1}{2}\wt{A}{\widetilde \lambda}_1$.
For example, the manifolds presented in Figure \ref{fig Hirzebruch} are symplectomorphic.
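The two numerical conditions in this criterion are precisely the matching of width and symplectic area: the trapezoid $\Delta(A,\lambda)$ has parallel vertical edges of lengths $\lambda_2$ and $\lambda_2-A\lambda_1$, a horizontal distance $\lambda_1$ apart, so

```latex
\operatorname{Area}\big(\Delta(A,\lambda)\big)
= \lambda_1 \cdot \frac{\lambda_2 + (\lambda_2 - A\lambda_1)}{2}
= \lambda_1\Big(\lambda_2 - \frac{1}{2}A\lambda_1\Big),
```

and the width $\lambda_1$ together with this area determine the pair $(\lambda_1,\ \lambda_2-\frac{1}{2}A\lambda_1)$.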
The cohomology ring can be presented as
$$\hh^*(\mathcal{H}_{-A}; {\mathbb Z})={\mathbb Z}[x_1,x_2]/\langle x_2^2, x_1^2+Ax_1x_2\rangle,$$
with $[\omega_\lambda]=\lambda_1 x_1 +\lambda_2 x_2$.
If $A \equiv \wt{A} \mod 2$, then the isomorphism ${\mathbb Z}[x_1,x_2] \rightarrow {\mathbb Z}[{\widetilde x}_1,{\widetilde x}_2]$ defined by $x_1 \mapsto {\widetilde x}_1 +\frac{1}{2}(\wt{A}-A){\widetilde x}_2$, $x_2 \mapsto {\widetilde x}_2$ descends to an isomorphism between $\hh^*(\mathcal{H}_{-A}; {\mathbb Z})$ and $\hh^*(\mathcal{H}_{-\wt{A}}; {\mathbb Z})$. Note that this isomorphism sends $[\omega_\lambda]=\lambda_1 x_1 +\lambda_2 x_2$ to $\lambda_1 {\widetilde x}_1+(\lambda_2 + \frac{\wt{A}-A}{2}\lambda_1)\,{\widetilde x}_2 $ which is equal to $[\omega_{\widetilde \lambda}]$ if and only if $\lambda_1={\widetilde \lambda}_1$ and
$\lambda_2-\frac{A}{2}\lambda_1 ={\widetilde \lambda}_2-\frac{\wt{A}}{2}{\widetilde \lambda}_1$.
Therefore, for Hirzebruch surfaces (weak) symplectic cohomological rigidity holds.
\end{example}
To approach the symplectic cohomological rigidity problem one needs a good method of constructing symplectomorphisms. This is where toric degenerations come into play. By Theorem \ref{from hk}, a toric degeneration whose central fiber ${\operatorname{Proj}}\,{\mathbb C}[S]$ is smooth produces a symplectomorphism between the symplectic manifold one started with and the central fiber. The main difficulty in this method of constructing symplectomorphisms lies in finding toric degenerations with smooth central fibers.
A great advantage of working with toric manifolds is that the sections of their line bundles are well understood and one can form very concrete constructions of toric degenerations.
\subsection{Toric degenerations for symplectic toric manifolds}
Let $(X_P,\omega_P)$ be a symplectic toric manifold with $\omega_P \in \hh^2(X_P; {\mathbb Z})$, corresponding to a polytope $P \subset {\mathbb R}^n$ via Delzant's construction. Then $P$ is an integral polytope (i.e., with vertices in ${\mathbb Z}^n$) and there exists a very ample line bundle ${\mathcal L}$ over $X_P$ inducing $\omega_P$. In this situation a basis of the space of holomorphic sections of ${\mathcal L}$ can be identified with the integral points of $P$
(\cite{D}, see also \cite{H}).
Without loss of generality we can assume that, in a neighborhood of some vertex, $P$ looks like $({\mathbb R}_{\geq 0})^n$ in a neighborhood of the origin in ${\mathbb R}^n$. Then we can identify $L=\hh^0(X_P, {\mathcal L})$ with a subset of the ring of rational functions, ${\mathbb C}(X_P)$,
as described on page \pageref{identification},
using the section corresponding to the origin as the fixed element $h$:
$$f \mapsto \frac{f}{\textrm{section corresponding to the origin} }.$$
{\bf Notation.}
For simplicity of notation, given a valuation $\nu$ we will write $\nu(L)$ to denote
$$\nu(L):=\{\nu(f/h);\ f \in L \setminus \{0\}\}.$$
Similarly, let
$\nu(L^m):=\{\nu(f/h^m);\ f \in L^{m} \setminus \{0\}\}$ for any $m >1.$
We denote by $f_j \in {\mathbb C}(X_P)$ the rational function
coming from the section corresponding to the $j$-th basis vector, $j=1, \ldots, n$.
Note that $f_1,\ldots,f_n$ form a coordinate system around the fixed point of $X_P$ corresponding to the origin via the moment map. (To see this, one can, for example, use the description of $X_P$ and $f_j$'s from \cite{H}.)
Choose and fix a non-negative integer $c$ and two indices $k<l$ in $\{1,\dots,n\}$. Then
\begin{equation}
\label{coor sys}
\begin{split}
u_1&=f_1,\\
&\ldots,\\
u_{k-1}&=f_{k-1},\\
u_{k}&=f_{k}-f_l^c,\\
u_{k+1}&=f_{k+1},\\
&\ldots,\\
u_n&=f_n,
\end{split}
\end{equation}
also form a coordinate system.
Let $\nu$ be the associated lowest term valuation (as in Example \ref{eg lh val}).
The image $\nu(L)$ can be obtained by using a ``sliding" operator ${\mathcal F}_{-e_k+c e_l}$, defined as follows.
For each affine line $\ell$ in ${\mathbb R}^n$ in the direction of $-e_k+c e_l$ with $P \cap \ell \cap {\mathbb Z}^n \neq \emptyset$, translate the set $P \cap \ell \cap {\mathbb Z}^n$ by $a(-e_k+c e_l)$, where $a$ is the maximal non-negative number for which $a(-e_k+c e_l) +(P \cap \ell \cap {\mathbb Z}^n)\subset ({\mathbb R}_{\geq 0})^n$.
\begin{lemma}\label{sliding}
One obtains $\nu(L)$ by sliding the integral points of $P$ in the direction $-e_k+c e_l$, inside $({\mathbb R}_{\geq 0})^n$,
i.e.,
$$\nu (L)={\mathcal F}_{-e_k+c e_l}(P \cap {\mathbb Z}^n).$$
\end{lemma}
Instead of the proof, which can be found in \cite{PT}, we give the following example which illustrates the main idea.
\begin{example}\label{eg slide}
Let $(X_P, \omega_P)$ be the symplectic toric manifold corresponding to the polytope $P=\conv\,\{(0,0),(1,0),(1,3),(0,3)\} \subset {\mathbb R}^2$. That is, $X_P$ is diffeomorphic to ${\mathbb C}{\mathbb P}^1 \times {\mathbb C}{\mathbb P}^1$ with product symplectic structure (with different rescaling of the Fubini--Study symplectic form on each factor).
Let $\nu$ be the lowest term valuation associated to the coordinate system $$u_1=f_1-f_2^2,\ u_2=f_2.$$
One can easily compute that $\nu(f_1)=(0,2),\ \nu(f_2)=(0,1),\ \nu(f_1f_2^3)=(0,5),$ and in general
$$\nu(f_1^a f_2^b)=(0,2a+b),\ a,b \in {\mathbb Z}_{\geq 0}.$$
Furthermore, $\nu (f_1-f_2^2)=(1,0)$, $\nu (f_1f_2-f_2^3)=(1,1),$
and one can observe that
$$\nu(L)={\mathcal F}_{(-1,2)}(P \cap {\mathbb Z}^2).$$
The polytopes $P$ and $\conv(\nu(L ))$ are presented in Figure \ref{fig Hirzebruch}.
\end{example}
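The sliding operator ${\mathcal F}_{-e_k+c e_l}$ is easy to implement on a finite set of lattice points. The following sketch (the function name `slide` is ours, not from \cite{PT}) uses the observation that the quantity $c\,x_k + x_l$, together with all coordinates other than the $k$-th and $l$-th, is constant along each line of direction $-e_k+c e_l$; it reproduces the computation of Example \ref{eg slide}:

```python
from collections import defaultdict

def slide(points, k, l, c):
    """Sliding operator F_{-e_k + c e_l}: group lattice points into lines
    of direction -e_k + c*e_l, then translate each group by a*(-e_k + c*e_l)
    with a >= 0 maximal so that the group stays in the non-negative orthant."""
    lines = defaultdict(list)
    for p in points:
        # all coordinates except k and l, plus the invariant c*p[k] + p[l]
        key = tuple(v for i, v in enumerate(p) if i not in (k, l)) + (c * p[k] + p[l],)
        lines[key].append(p)
    result = set()
    for grp in lines.values():
        a = min(p[k] for p in grp)  # maximal slide keeping p[k] - a >= 0
        for p in grp:
            q = list(p)
            q[k] -= a
            q[l] += c * a
            result.add(tuple(q))
    return result

# Example: integral points of P = conv{(0,0),(1,0),(1,3),(0,3)}, direction (-1,2)
P = {(x, y) for x in range(2) for y in range(4)}
print(sorted(slide(P, 0, 1, 2)))
# -> [(0, 0), (0, 1), (0, 2), (0, 3), (0, 4), (0, 5), (1, 0), (1, 1)]
```

The printed set is exactly $\nu(L)$ from Example \ref{eg slide}, whose convex hull is the trapezoid with vertices $(0,0)$, $(1,0)$, $(1,1)$, $(0,5)$.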
Understanding $\nu(L)$ is not enough to construct and understand a toric degeneration. First of all, to construct a flat family with toric fiber $\pi^{-1}(0)$, one needs the associated semigroup $S=S(\nu)$ to be finitely generated. Additionally, the toric fiber $\pi^{-1}(0)={\operatorname{Proj}}\, {\mathbb C}[S]$ is the toric variety associated to the Okounkov body $\Delta$ if ${\operatorname{Proj}}\, {\mathbb C}[S]$ is normal, that is, if $S$ is saturated. Moreover, to describe the Okounkov body one also needs to find $\nu(L^m )$ for $m>1$. Note that in general $L^m$ differs from $\hh^0(X,{\mathcal L}^{\otimes m})$.
The following proposition describes an especially nice situation where all these conditions simplify.
\begin{proposition}\label{nice cond give sympl}
Let $(X, \omega=\Phi_{\mathcal L}^*(\omega_{FS}))$ be a $2n$-dimensional projective symplectic toric manifold associated to a smooth polytope $P$, with the projective embedding induced by a very ample line bundle ${\mathcal L}$.
Let $\nu$ be a lowest term valuation associated to a coordinate system \eqref{coor sys}, and $S$ the induced semigroup.
Assume that there exists a smooth integral polytope $\Delta$ such that
$$S=(\cone \, \Delta)\cap ({\mathbb Z} \times {\mathbb Z}^n).$$
Then $(X, \omega)$ is symplectomorphic to the symplectic toric manifold $(X_\Delta, \omega_\Delta)$ associated to $\Delta$ via Delzant's construction.
\end{proposition}
\begin{proof}[Sketch of a proof]
The assumptions imply that the semigroup $S$ is saturated and (by Gordan's Lemma) finitely generated. Therefore there is a toric degeneration $(\mathfrak{X}, \tilde{\omega})$ with generic fiber $(X, \omega)$ and the special fiber $\pi^{-1}(0)={\operatorname{Proj}}\, {\mathbb C}[S]$ which is a normal toric variety. Moreover, the Okounkov body associated to the semigroup $S$ is precisely $\Delta$ and therefore ${\operatorname{Proj}}\, {\mathbb C}[S]$, equipped with the restriction of $\tilde{\omega}$, is the symplectic toric manifold $(X_\Delta, \omega_\Delta)$ associated to $\Delta$ via Delzant's construction.
\end{proof}
Note that $S=(\cone \, \Delta)\cap ({\mathbb Z} \times {\mathbb Z}^n)$ implies, in particular, that $\nu (L^m)$ contains ``enough'' integral points, namely that
$$\forall_{m\geq 1}\ \nu(L^m)=m\,\Delta\cap {\mathbb Z}^n=\conv(\nu(L^m))\cap {\mathbb Z}^n.$$
To better understand the requirement $\conv\,(\nu(L^m )) \cap {\mathbb Z}^n=\nu(L^m ),$ consider the following example.
\begin{example} (``Enough'' integral points and saturation.)
Let $(X_P, \omega_P)$ be the symplectic toric manifold corresponding to the polytope $P=\conv\,\{(0,0),(2,0),(2,2),(0,2)\} \subset {\mathbb R}^2$, that is, $X_P$ is diffeomorphic to ${\mathbb C}{\mathbb P}^1 \times {\mathbb C}{\mathbb P}^1$ as in the previous example, but with a different symplectic form.
As before, let $\nu$ be the lowest term valuation associated to the coordinate system $$u_1=f_1-f_2^2,\ u_2=f_2.$$
Then
$$\nu(L)={\mathcal F}_{(-1,2)}(P \cap {\mathbb Z}^2)=\{(0,j); j=0,\ldots, 6\}\cup \{(1,0), (1,2)\}\subsetneq \conv\,(\nu(L))\cap {\mathbb Z}^2.$$
The figure below presents the integral points $P\cap {\mathbb Z}^2$ and $\nu(L)={\mathcal F}_{(-1,2)}(P \cap {\mathbb Z}^2)$. The semigroup $S$ is not saturated: we have that $(1,1,1) \notin S$ even though $(2,2,2) \in S$, as $(2,2,2)=\nu ((f_1-f_2^2) \cdot (f_1f_2^2-f_1^2)) \in \{2\}\times \nu (L^2)$.
\begin{figure}[h]
\includegraphics[scale=0.5]{EgSaturation.pdf}
\end{figure}
\end{example}
The following condition is sufficient, though not necessary, to guarantee that there are enough integral points.
\begin{corollary}\label{cor int pt}
Let $A \in {\mathbb Z}$, $\lambda=(\lambda_1,\lambda_2) \in ({\mathbb R}_{>0})^2$, $c \in {\mathbb Z}_{>0}$, and
$$\Delta= \big\{ p \in {\mathbb R}^2 \ \big| \ 0\leq \langle p, e_1\rangle \leq \lambda_1,\ 0\leq \langle p, e_2\rangle \mbox{ and }
\big\langle p, e_2 + A e_1 \big\rangle \leq \lambda_2 \big\}.$$
If
$$\lambda_2-c\lambda_1 >0,$$
then
$$(\conv\, {\mathcal F}_{(-1,c)}(\Delta \cap {\mathbb Z}^2))\cap {\mathbb Z}^2={\mathcal F}_{(-1,c)}(\Delta \cap {\mathbb Z}^2).$$
\end{corollary}
Note that in that case the polytope $\conv\, {\mathcal F}_{(-1,c)}(\Delta \cap {\mathbb Z}^2)$ is also a trapezoid, namely:
$$\big\{ p \in {\mathbb R}^2 \ \big| \ 0\leq \langle p, e_1\rangle \leq \lambda_1,\ 0\leq \langle p, e_2\rangle \mbox{ and }
\big\langle p, e_2 + (2c-A) e_1 \big\rangle \leq \lambda_2+(c-A)\lambda_1 \big\},$$
if $c>A$ (see Figure \ref{fig EgDegeneration}), or $\Delta$ if $c\leq A$.
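For the data of Example \ref{eg slide}, where $A=0$, $c=2$ and $(\lambda_1,\lambda_2)=(1,3)$, the hypothesis $\lambda_2-c\lambda_1=1>0$ holds and the slid trapezoid is

```latex
\big\{ p \in {\mathbb R}^2 \ \big|\ 0\leq \langle p, e_1\rangle \leq 1,\ 0\leq \langle p, e_2\rangle
\mbox{ and } \big\langle p, e_2 + 4 e_1 \big\rangle \leq 5 \big\},
```

with vertices $(0,0)$, $(1,0)$, $(1,1)$, $(0,5)$, matching the convex hull $\conv(\nu(L))$ computed there.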
\begin{figure}[h]
\includegraphics[scale=0.8]{EgDegeneration.pdf}
\caption{Toric degeneration of a Hirzebruch surface.}\label{fig EgDegeneration}
\end{figure}
\subsection{Cohomological rigidity for Bott manifolds}\label{sec bott mfd}
A Bott manifold is a manifold obtained as the total space of a tower of iterated ${\mathbb C}{\mathbb P}^1$-bundles whose first base space is ${\mathbb C}{\mathbb P}^1$.
Such a manifold naturally carries an algebraic torus action and can be viewed as a toric manifold. Note that $4$-dimensional Bott manifolds are exactly the Hirzebruch surfaces discussed in Example \ref{eg h}.
For more information about Bott manifolds see, for example, \cite{GK}.
The simplest example of a $2n$-dimensional Bott manifold is the product of $n$ copies of ${\mathbb C}{\mathbb P}^1$.
Equipped with a product symplectic structure
$\omega=\pi_1^*(a_1 \omega_{FS} ) + \ldots +\pi_n^*(a_n \omega_{FS} )$, for some $a_j \in {\mathbb R}_{>0}$, and the standard toric action\footnote{
In the description of the symplectic structure, $\pi_j \colon {\mathbb C}{\mathbb P}^1 \times \ldots \times {\mathbb C}{\mathbb P}^1 \rightarrow {\mathbb C}{\mathbb P}^1$ denotes the projection onto the $j$-th factor, and $\omega_{FS}$ stands for the Fubini--Study symplectic form.
The standard action of $(S^1)^n$ on $({\mathbb C}{\mathbb P}^1)^n$ is the one where each $S^1$ acts on the respective copy of ${\mathbb C}{\mathbb P}^1$ by $e^{it}\cdot[(z_0,z_1)]=[(z_0,e^{it}z_1)]$.} it becomes a symplectic toric manifold, whose Delzant polytope is a product of intervals, with lengths depending on $a_j$'s. In particular, if all $a_j$'s are equal, then the moment image is a hypercube.
A moment image for a general $2n$-dimensional Bott manifold is combinatorially an $n$-dimensional hypercube. By applying a translation and a $\gl(n,{\mathbb Z})$ transformation one can always arrange for the moment image to be a polytope of the form
$$ \Delta=\Delta(A, \lambda) = \Big\{ p \in {\mathbb R}^n \ \big| \ \langle p, e_j\rangle \geq 0 \mbox{ and }
\big\langle p, e_j + \sum_i A^i_j e_i \big\rangle \leq \lambda_j \textrm{ for all } 1 \leq j \leq n\Big\},$$
where $\lambda \in ({\mathbb R}_{>0})^n$ and the parameters $A^i_j$ satisfy $A^i_j = 0$ unless $i < j$, and thus can be arranged in an $n \times n$ strictly upper-triangular integral matrix $A \in M_n(\mathbb{Z})$. A certain relation between $A$ and $\lambda$ must be satisfied in order for $\Delta(A, \lambda)$ to have $2^n$ facets and be combinatorially equivalent to a hypercube (see \cite{PT}).
In that case we say that $(A, \lambda)$ {\it defines a symplectic toric Bott manifold} $(M_{A}, \omega_{ \lambda})$ corresponding to the Delzant polytope $\Delta(A, \lambda)$. The matrix $A$ encodes the twisting of consecutive ${\mathbb C}{\mathbb P}^1$ bundles, and thus determines a diffeomorphism type of $M_{A}$, while $\lambda$ determines the symplectic structure.
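To make the inequalities defining $\Delta(A,\lambda)$ concrete, here is a small brute-force sketch (our illustration, not from the text) that enumerates its lattice points for $n=2$, $A^1_2=-1$ and $\lambda=(2,1)$; the resulting polytope is the trapezoid with vertices $(0,0)$, $(2,0)$, $(0,1)$, $(2,3)$, a Hirzebruch-type moment image.

```python
from itertools import product

def in_delta(p, A, lam):
    """Check the defining inequalities of Delta(A, lambda):
    <p, e_j> >= 0 and <p, e_j + sum_i A[i][j] e_i> <= lam[j]."""
    n = len(lam)
    for j in range(n):
        if p[j] < 0:
            return False
        if p[j] + sum(A[i][j] * p[i] for i in range(n)) > lam[j]:
            return False
    return True

def lattice_points(A, lam, box=20):
    """All integral points of Delta(A, lam) inside [0, box]^n."""
    n = len(lam)
    return [p for p in product(range(box + 1), repeat=n) if in_delta(p, A, lam)]

# n = 2, strictly upper-triangular A with A^1_2 = -1 (a Hirzebruch-type twist),
# lam = (2, 1): Delta is the trapezoid with vertices (0,0), (2,0), (0,1), (2,3).
A = [[0, -1],
     [0, 0]]
lam = (2, 1)
pts = lattice_points(A, lam)
```

Counting columns ($2$, $3$ and $4$ points over $p_1=0,1,2$) gives $9$ lattice points in total.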
By a classical result of Danilov \cite{D},
\begin{equation}\label{cohomology}
\hh^*(M_A;{\mathbb Z}) ={\mathbb Z}[x_1,\dots,x_n]/\big(x_i^2 + \sum_j A^i_j x_j x_i \big),
\end{equation}
with $[\omega_\lambda]=\sum_i \lambda_i x_i$. Note that this particular presentation of $\hh^*(M_A;{\mathbb Z})$ depends on $A$. (The element $x_j$ is the Poincar\'e dual to the preimage of the facet $\Delta(A,\lambda)\cap \{\big\langle p, e_j + \sum_i A^i_j e_i \big\rangle = \lambda_j\}$.)
Using the above presentation we define the following special elements
\begin{equation}\label{special elements}
\alpha_k = - \sum_j A^k_j x_j \in \hh^*(M_A;{\mathbb Z}), \quad y_k = x_k -\frac{1}{2} \alpha_k \in \hh^*(M_A;{\mathbb Q})
\end{equation}
for all $k$.
We say $x_k$ is of {\it even (respectively odd)
exceptional type} if $\alpha_k = c y_\ell$ for some $\ell > k$, where $c$ is an
even (respectively odd) integer. \label{exceptional def}
In ``coordinates'', this means that $A_j^k = 0$ for $j < \ell$ and $A_j^k = \frac{1}{2} A^k_\ell A_j^\ell$ for $j > \ell$.
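For convenience we record the expansion behind this equivalence (a routine check, using only \eqref{special elements} and the strict upper-triangularity of $A$):

```latex
\alpha_k = -\sum_{j>k} A^k_j\, x_j,
\qquad
c\,y_\ell = c\Big(x_\ell - \tfrac{1}{2}\,\alpha_\ell\Big)
          = c\,x_\ell + \tfrac{c}{2}\sum_{j>\ell} A^\ell_j\, x_j .
```

Comparing coefficients of the $x_j$'s: the coefficient of $x_\ell$ gives $c=-A^k_\ell$, the coefficients with $k<j<\ell$ give $A^k_j=0$, and those with $j>\ell$ give $-A^k_j=\tfrac{c}{2}A^\ell_j$, i.e.\ $A^k_j=\tfrac{1}{2}A^k_\ell A^\ell_j$.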
We say that a Bott manifold is {\it ${\mathbb Q}$-trivial} if $\hh^*(M;{\mathbb Q}) \simeq \hh^*(({\mathbb C}{\mathbb P}^1)^n;{\mathbb Q})$.
For example, observe that all Hirzebruch surfaces are ${\mathbb Q}$-trivial Bott manifolds.
Using toric degenerations one can prove the following result, which is the key ingredient of the proof of Theorem \ref{thm coh rig}.
\begin{proposition}\cite{PT}\label{basic change}
Let $(M,\omega)$ and $(\wt{M},{\widetilde \omega})$ be symplectic Bott manifolds
associated to strictly upper triangular $A$ and $\wt{A}$ in $M_n(\mathbb{Z})$
and $\lambda$ and ${\widetilde \lambda}$ in ${\mathbb Z}^n$, respectively.
Assume that there exist integers $1 \leq k < \ell \leq n$
so that $A_\ell^k$ and $ \wt{A}_\ell^k $ are of the same parity and the isomorphism from ${\mathbb Z}[x_1,\dots,x_n]$ to ${\mathbb Z}[{\widetilde x}_1,\dots,{\widetilde x}_n]$
that sends $x_k$ to ${\widetilde x}_k +\frac{\wt{A}_\ell^k - A_\ell^k}{2} \ {\widetilde x}_\ell$ and $x_i$ to ${\widetilde x}_i$ for all
$i \neq k$ descends to an isomorphism from $\hh^*(M;{\mathbb Z})$ to $\hh^*(\wt{M};{\mathbb Z})$
and takes $\sum \lambda_i x_i$ to $\sum {\widetilde \lambda}_i {\widetilde x}_i$.
If $A_\ell^k + \wt{A}_\ell^k \geq 0$, then
$M$ and $\wt{M}$ are symplectomorphic.
\end{proposition}
\begin{proof}[Sketch of a proof]
Without loss of generality we can assume that the polytope $\Delta(A,\lambda)$ associated to $(A, \lambda)$ is {\it normal}, that is,
any integral point of $m\,\Delta(A,\lambda)$ can be expressed as a sum of $m$ integral points of $\Delta(A,\lambda)$:
$$\forall_{m \in {\mathbb Z}_{>0}}\ \forall_{x \in m\,\Delta(A,\lambda)\cap {\mathbb Z}^n}\ \exists_{x_1,\ldots,x_m \in \Delta(A,\lambda)\cap {\mathbb Z}^n}\textrm{ such that } x=x_1+\ldots+x_m.$$
Indeed, if $\Delta(A,\lambda)$ is not a normal polytope, replace $(M,\omega)$ and $(\wt{M},{\widetilde \omega})$ by $(M,(n-1)\,\omega)$ and $(\wt{M},(n-1)\,{\widetilde \omega})$. This dilates the corresponding polytopes by a factor of $(n-1)$.
For any integral polytope $P \subset {\mathbb R}^n$ its dilate $m P$ with $m \geq n-1$ is normal (see, for example, \cite[Theorem 2.2.12]{CLS}). Obviously, if $(M,(n-1)\,\omega)$ and $(\wt{M},(n-1)\,{\widetilde \omega})$ are symplectomorphic, then so are $(M,\omega)$ and $(\wt{M},{\widetilde \omega})$.
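The normality condition is easy to test by brute force in small cases. The following sketch (our illustration, with hypothetical helper names) verifies it for $m=2$ and the trapezoid with vertices $(0,0)$, $(2,0)$, $(0,1)$, $(2,3)$, as expected, since every $2$-dimensional integral polytope is normal.

```python
from itertools import product

# The trapezoid Delta = {p1, p2 >= 0, p1 <= 2, p2 - p1 <= 1} and its dilates.
def points(m):
    """Integral points of m * Delta."""
    return {(p1, p2) for p1, p2 in product(range(4 * m + 1), repeat=2)
            if p1 <= 2 * m and p2 <= m + p1}

def is_normal_at(m):
    """Every integral point of m*Delta is a sum of m integral points of Delta."""
    base = points(1)
    if m == 1:
        return True
    # For m = 2 it suffices to try all pairs of points of Delta.
    sums = {(a1 + b1, a2 + b2) for (a1, a2) in base for (b1, b2) in base}
    return points(2) <= sums

assert is_normal_at(2)
```

The same exhaustive check extends to larger $m$ by iterating the pairwise sums.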
As usual, let ${\mathcal L}$ denote the very ample line bundle over $M$ corresponding to $\omega$, and $L$ the space of its holomorphic sections.
Note that normality implies that $L^m$ can be identified with $H^0(M, {\mathcal L}^{\otimes m})$ as a basis for both of these vector spaces is given by the integral points $m\,\Delta(A,\lambda)\cap {\mathbb Z}^n$.
Also without loss of generality we can assume that $\wt{A}_\ell^k\geq A_\ell^k$.
Let $c=\frac{1}{2}(A_\ell^k + \wt{A}_\ell^k) \geq 0$.
We will work with a lowest term valuation $\nu$ associated to the coordinate system \eqref{coor sys}.
From Lemma \ref{sliding} and the normality assumption, for all $m \geq 1$ we have that
$$\nu(L^m)=\mathcal{F}_{-e_k+ce_l}(m\,\Delta(A,\lambda)\cap {\mathbb Z}^n).$$
To understand $\nu(L^m)$ consider the action of $\mathcal{F}_{-e_k+ce_l}$ on $2$-dimensional ``slices'', that is, the intersections of $m\,\Delta(A,\lambda)$ with affine subspaces which are translations of $(e_k,e_l)$-planes. Such slices are either empty or are trapezoids as in Example \ref{eg slide} and Corollary \ref{cor int pt}, possibly with a cut.
A slightly tedious computation shows that
$$\forall_{m \geq 1}\ \conv(\nu(L^m))=m\,\Delta(\wt{A}, {\widetilde \lambda}).$$
For that computation one uses relations between $A$, $\lambda$, $\wt{A}$ and ${\widetilde \lambda}$ which are implied by the
facts that $\Delta(A,\lambda)$ and $\Delta(\wt{A}, {\widetilde \lambda})$ are combinatorially hypercubes, and by the existence of the isomorphism described in the statement of the proposition.
These relations also allow one to generalize Corollary \ref{cor int pt} (precisely: to show that the appropriate generalization of the condition $\lambda_2-c\lambda_1>0$ holds) and to show that
$$\nu(L^m)=m\,\Delta(\wt{A}, {\widetilde \lambda})\cap {\mathbb Z}^n.$$
This means that the semigroup $S$ associated to the valuation $\nu$ of $(M,\omega)$ is exactly $S=(\cone \,\Delta(\wt{A}, {\widetilde \lambda}))\cap ({\mathbb Z} \times {\mathbb Z}^n)$.
Then the claim follows from Proposition \ref{nice cond give sympl}.
\end{proof}
Note that if $x_k$ is even (resp. odd) exceptional, say $\alpha_k=my_\ell$, then one can construct an isomorphism as in Proposition \ref{basic change} from
$\hh^*(M_A;{\mathbb Z})$ to
$\hh^*(M_{\wt{A}};{\mathbb Z})$ for some $\wt{A}$ with $\wt{A}_\ell^k$ equal to $0$ (resp. $-1$).
For example, if $x_k$ is of even exceptional type, i.e., $\alpha_k=2my_\ell$ for some $m$ and $\ell$, implying that $A^k_{\ell}=-2m$ and $A^k_j=-mA^\ell_j$ for $j\neq \ell$, then one should put $\wt{A}^k_\ell=0$, $\wt{A}^i_j=A^i_j$ for all $i$ and all $j \neq \ell$, and $\wt{A}^i_\ell=A^i_\ell+mA^i_k$ for all $i \neq k$.
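As a two-dimensional sanity check (our own example, not taken from the text): let $n=2$ and $A^1_2=-2m$, so $\alpha_1=2m\,x_2=2m\,y_2$ and $x_1$ is of even exceptional type; the recipe gives $\wt{A}=0$, and the substitution $x_1 \mapsto {\widetilde x}_1 + m\,{\widetilde x}_2$ from Proposition \ref{basic change} maps the defining relation to zero:

```latex
x_1^2 - 2m\,x_2 x_1 \;\longmapsto\;
({\widetilde x}_1 + m\,{\widetilde x}_2)^2
  - 2m\,{\widetilde x}_2\,({\widetilde x}_1 + m\,{\widetilde x}_2)
= {\widetilde x}_1^{\,2} - m^2\,{\widetilde x}_2^{\,2} = 0
\quad \text{in } \hh^*(M_{\wt{A}};{\mathbb Z}),
```

since ${\widetilde x}_1^{\,2}={\widetilde x}_2^{\,2}=0$ when $\wt{A}=0$; this recovers the fact that the even Hirzebruch surfaces are diffeomorphic to ${\mathbb C}{\mathbb P}^1\times{\mathbb C}{\mathbb P}^1$.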
Therefore, consecutive applications of the above proposition lead to simplifying the description of a given Bott manifold.
\begin{corollary}
Any symplectic toric Bott manifold is symplectomorphic to one for which $A^i_j=0$ (resp. $A^i_j=-1$) whenever $x_i$ has even (resp. odd) exceptional type and $\alpha_i=my_j$.
\end{corollary}
In the case of ${\mathbb Q}$-trivial Bott manifolds all
$x_i$ have exceptional type as shown in \cite[Proposition 3.1]{CM}.
Therefore, such a symplectic toric Bott manifold must be a product of the following standard models of ${\mathbb Q}$-trivial Bott manifolds.
\begin{example}(${\mathbb Q}$-trivial Bott manifolds)
Take $n \in {\mathbb Z}_{>0}$. Let $A_n^i = -1$ for all $1\leq i < n$,
and $A^i_j = 0$ otherwise. For such an upper-triangular matrix $A=[A_j^i]$
and any $\lambda \in ({\mathbb R}_{>0})^n$, the polytope $\Delta(A, \lambda)$
is combinatorially a hypercube, thus it defines a symplectic toric Bott manifold, which we will denote by $\mathcal{H}=\mathcal{H}(\lambda_1,\ldots,\lambda_n)$.
Observe that
$$\hh^*(\mathcal{H};{\mathbb Z}) = {\mathbb Z}[x_1,\dots,x_n]/\big( x_1^2 - x_1x_n, \dots, x_{n-1}^2 - x_{n-1} x_n, x_n^2 \big).$$
Consider elements $y_i \in \hh^*(\mathcal{H};{\mathbb Q})$ defined by
$y_i = x_i - \frac{1}{2} x_n$ for all $i < n$, and $y_n = x_n$, and note that they form a basis for $\hh^*(\mathcal{H};{\mathbb Q})$.
Moreover, as $y_i^2 = 0$ for all $i$, we get that $\hh^*(\mathcal{H};{\mathbb Q}) \simeq {\mathbb Q}[y_1,\dots,y_n]/\big(y_1^2, \ldots, y_n^2)$, that is, $\mathcal{H}$ is ${\mathbb Q}$-trivial.
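The vanishing $y_i^2=0$ is a one-line check, but it can also be done mechanically; the sketch below (our illustration) squares degree-$2$ classes using the reduction $x_i^2=-\sum_j A^i_j x_j x_i$ from \eqref{cohomology}, for $\mathcal{H}(\lambda_1,\lambda_2,\lambda_3)$.

```python
from fractions import Fraction

def square_linear(a, A):
    """Square the class sum_i a[i] * x_i in H^*(M_A; Q), n = len(a).
    Returns coefficients on the square-free basis {x_i x_j : i < j},
    using the relations x_i^2 = -sum_l A[i][l] x_l x_i (0-indexed)."""
    n = len(a)
    out = {}
    def add(i, j, c):
        if c and i != j:
            key = (min(i, j), max(i, j))
            out[key] = out.get(key, Fraction(0)) + c
    for i in range(n):
        for j in range(n):
            c = Fraction(a[i]) * Fraction(a[j])
            if i != j:
                add(i, j, c)
            else:  # substitute x_i^2 = -sum_l A[i][l] x_l x_i
                for l in range(n):
                    add(l, i, -c * A[i][l])
    return {k: v for k, v in out.items() if v}

# H = H(lambda_1, lambda_2, lambda_3): A^1_3 = A^2_3 = -1, so x_i^2 = x_i x_3.
A = [[0, 0, -1],
     [0, 0, -1],
     [0, 0, 0]]
# y_1 = x_1 - (1/2) x_3 squares to zero, confirming Q-triviality.
y1 = [Fraction(1), Fraction(0), Fraction(-1, 2)]
assert square_linear(y1, A) == {}
```

The same routine applied to $x_1$ alone returns the monomial $x_1x_3$, matching the relation $x_1^2-x_1x_3=0$.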
More generally, any partition of $n$, $\sum_{i=1}^m l_i = n$, together with $\lambda \in ({\mathbb R}_{>0})^n$, defines a ${\mathbb Q}$-trivial Bott manifold
$$\mathcal{H}(\lambda_1, \ldots, \lambda_{l_1})\,\times \ldots \times \mathcal{H}(\lambda_{n-l_m+1}, \ldots, \lambda_{n}).$$
\end{example}
\begin{corollary}\label{std form}
Each $2n$-dimensional ${\mathbb Q}$-trivial Bott manifold $M$ with integral symplectic form is symplectomorphic to
$$\mathcal{H}(\lambda_1, \ldots, \lambda_{l_1})\,\times \cdots \times \mathcal{H}(\lambda_{n-l_m+1}, \ldots, \lambda_{n}),$$
for some partition $n=\sum_{i=1}^m l_i$ of $n$ and some $\lambda_1, \ldots, \lambda_n\in {\mathbb Z}_{>0}$.
\end{corollary}
The above standard model is simple enough that one can understand all possible ring isomorphisms between the cohomology rings and prove that they are induced by maps of manifolds.
\begin{lemma}\label{str rigidity hn}
Fix $n \in {\mathbb Z}_{>0}$.
Let $\sum_{i=1}^m l_i = \sum_{i = 1}^{\wt{m}} \wt{l}_i = n$
be partitions of $n$,
and let $\lambda, {\widetilde \lambda} \in ({\mathbb R}_{>0})^n$.
Consider symplectic Bott manifolds
\begin{gather*}
(M,\omega)= \mathcal{H}(\lambda_1,\dots,\lambda_{l_1} )
\times \cdots \times \mathcal{H}(\lambda_{n - l_m + 1},\dots,\lambda_{n}); \\
(\wt{M},{\widetilde \omega})= \mathcal{H}({\widetilde \lambda}_1,\dots,{\widetilde \lambda}_{\wt{l}_1} )
\times \cdots \times \mathcal{H}({\widetilde \lambda}_{n - \wt{l}_{\wt{m}} + 1},\dots,{\widetilde \lambda}_{n}).
\end{gather*}
Given a ring isomorphism $F \colon \hh^*(M;{\mathbb Z}) \to \hh^*(\wt{M};{\mathbb Z})$
such that $F[\omega] = [{\widetilde \omega}]$,
there exists a symplectomorphism $f$ from $(\wt{M},{\widetilde \omega})$ to
$(M,\omega)$ so that $\hh^*(f) = F$.
\end{lemma}
\begin{proof}[Sketch of a proof]
First consider the situation when
$$(M,\omega)= \mathcal{H}(\lambda_1,\dots,\lambda_{n})
\quad \mbox{and} \quad
(\wt{M},{\widetilde \omega})= \mathcal{H}({\widetilde \lambda}_1,\dots,{\widetilde \lambda}_{n}).$$
The ${\mathbb Q}$-triviality assumption implies that there are exactly $2n$ primitive
classes in $\hh^2(M;{\mathbb Z})$ which square to $0$.
A short computation shows that these are $\pm z_1,\dots, \pm z_n$,
where $z_n = x_n$ and $z_i = 2 x_i - x_n$ for all $i < n$.
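The computation behind this count is short enough to record (our expansion, using $x_i^2=x_ix_n$ for $i<n$ and $x_n^2=0$): for $z=\sum_i a_i x_i$,

```latex
z^2 \;=\; \sum_{i<n} a_i\,(a_i+2a_n)\, x_i x_n
        \;+\; 2\sum_{i<j<n} a_i a_j\, x_i x_j ,
```

so $z^2=0$ forces $a_ia_j=0$ for all $i<j<n$ and $a_i(a_i+2a_n)=0$ for all $i<n$; hence at most one $a_i$ with $i<n$ is nonzero, and then $a_i=-2a_n$. The primitive solutions are exactly $\pm x_n$ and $\pm(2x_i-x_n)$.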
Similarly for $\wt{M}$. Any ring isomorphism between $\hh^*(M;{\mathbb Z})$ and $ \hh^*(\wt{M};{\mathbb Z})$ must restrict to a bijection on the set of such elements, that is, there exists $\epsilon=(\epsilon_1, \ldots, \epsilon_n)\in \{-1,1\}^n$ and a permutation $\sigma \in \mathcal{S}_n$ such that $F(z_j)=\epsilon_j {\widetilde z}_{\sigma(j)}$.
Moreover, presenting $[\omega]$ (resp. $[{\widetilde \omega}]$) in the basis $\{z_1,\ldots,z_n\}$ (resp. $\{{\widetilde z}_1,\ldots,{\widetilde z}_n\}$) and recalling that the isomorphism $F$ maps
$[\omega]$ to $[{\widetilde \omega}]$, one can deduce that $F$ acts by a permutation: $F(z_j)={\widetilde z}_{\sigma(j)}$ for some permutation $\sigma \in \mathcal{S}_n$ with $\sigma(n)=n$, and that $\lambda_j={\widetilde \lambda}_{\sigma(j)}$.
Then $F$ also takes $x_i$ to ${\widetilde x}_{\sigma(i)}$ and it holds that
$ A^i_j=\wt{A}^{\sigma(i)}_{\sigma(j)}$ for all $i,j.$
If $\Lambda \in \gl(n,{\mathbb Z})$ denotes the unimodular matrix taking $e_i$ to $e_{\sigma(i)}$,
then $\Lambda^T( \Delta(\wt{A},{\widetilde \lambda})) = \Delta( A, \lambda)$. Therefore, by the Delzant theorem, the manifolds $(M,\omega)$ and $(\wt{M},{\widetilde \omega})$ are (equivariantly) symplectomorphic, by some symplectomorphism $f$. Moreover, as $\Lambda^T$ maps the facet $\{ \langle p, e_{\sigma(j)} \rangle = 0 \} \cap \Delta(\wt{A},{\widetilde \lambda})$ to the facet $\{ \langle p, e_j \rangle = 0 \} \cap \Delta(A,\lambda)$,
and
$\{ \langle p, e_{\sigma(j)} + \sum_i \wt{A}_{\sigma(j)}^i e_i \rangle = {\widetilde \lambda}_{\sigma(j)} \} \cap \Delta(\wt{A},{\widetilde \lambda})$ to $\{ \langle p, e_j + \sum_i A_j^i e_i \rangle = \lambda_j \} \cap \Delta(A,\lambda)$,
the map $\hh^*(f)$ induced by $f$ on cohomology maps the Poincar\'e duals of preimages of these facets accordingly. That is, $\hh^*(f)=F$.
In the general case, denote by $\lambda^{l_s}$ the $l_s$-tuple of
numbers $(\lambda_{l_1+\cdots+l_{s-1}+1},\dots, \lambda_{l_1 +\cdots +l_s}),$ and define ${\widetilde \lambda}^{{\widetilde l}_s}$ similarly.
Again, we look at primitive elements with trivial squares. In $\hh^*(M;{\mathbb Z})$ these are precisely
$$\pm x_{i_s} \mbox{ and } \pm(2x_i-x_{i_s}) \textrm{ for }
s=1,\ldots,m
\mbox{ and }
i_{s-1}< i <i_s,$$
where $i_s = l_1+\cdots+l_s$ (and $i_0=0$).
Note that each such element is contained in some subring $\hh^*(\mathcal{H}(\lambda^{l_s});{\mathbb Z})\subseteq \hh^*(M;{\mathbb Z})$, and that all primitive elements in $\hh^*(\mathcal{H}(\lambda^{l_s});{\mathbb Z})$ whose square is zero
are equal modulo $2$. Therefore $F$ restricts to an isomorphism from $\hh^*(\mathcal{H}(\lambda^{l_s});{\mathbb Z})$ to $\hh^*(\mathcal{H}({\widetilde \lambda}^{{\widetilde l}_r});{\mathbb Z})$ for some $r$ with $l_s={\widetilde l}_r$. This implies that both partitions of $n$ must be equal up to permutation of their factors. Repeating the arguments of the previous paragraph, one can construct a symplectomorphism inducing the ring isomorphism $F$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm coh rig} ]
Theorem~\ref{thm coh rig} follows immediately from Corollary \ref{std form} and Lemma \ref{str rigidity hn}.
\end{proof}
| {
"timestamp": "2018-12-31T02:19:02",
"yymm": "1812",
"arxiv_id": "1812.11043",
"language": "en",
"url": "https://arxiv.org/abs/1812.11043",
"abstract": "A toric degeneration in algebraic geometry is a process where a given projective variety is being degenerated into a toric one. Then one can obtain information about the original variety via analyzing the toric one, which is a much easier object to study. Harada and Kaveh described how one incorporates a symplectic structure into this process, providing a very useful tool for solving certain problems in symplectic geometry. Below we present applications of this method to questions about the Gromov width, and cohomological rigidity problems.",
"subjects": "Symplectic Geometry (math.SG)",
"title": "Toric degenerations in symplectic geometry",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9884918489015607,
"lm_q2_score": 0.8152324893519999,
"lm_q1q2_score": 0.8058506706841803
} |
https://arxiv.org/abs/0804.1268 | k-Wise Independent Random Graphs | We study the k-wise independent relaxation of the usual model G(N,p) of random graphs where, as in this model, N labeled vertices are fixed and each edge is drawn with probability p, however, it is only required that the distribution of any subset of k edges is independent. This relaxation can be relevant in modeling phenomena where only k-wise independence is assumed to hold, and is also useful when the relevant graphs are so huge that handling G(N,p) graphs becomes infeasible, and cheaper random-looking distributions (such as k-wise independent ones) must be used instead. Unfortunately, many well-known properties of random graphs in G(N,p) are global, and it is thus not clear if they are guaranteed to hold in the k-wise independent case. We explore the properties of k-wise independent graphs by providing upper-bounds and lower-bounds on the amount of independence, k, required for maintaining the main properties of G(N,p) graphs: connectivity, Hamiltonicity, the connectivity-number, clique-number and chromatic-number and the appearance of fixed subgraphs. Most of these properties are shown to be captured by either constant k or by some k= poly(log(N)) for a wide range of values of p, implying that random looking graphs on N vertices can be generated by a seed of size poly(log(N)). The proofs combine combinatorial, probabilistic and spectral techniques. |
\newcommand {\mysubsubsection} [1] {\subsubsection{{{#1}}}}
\newcommand {\mysection} [1] {\section{{{#1}}}}
\newcommand {\mysubsection} [1] {\subsection{{{#1}}}}
\newcommand {\myParagraph} [1] {\subsubsection*{#1}}
\newcommand {\SaveSizeParagraph} [1] {\vspace{-1.6mm}\paragraph{#1}}
\newcommand {\SaveaMoreSizeParagraph} [1] {\vspace{-2.7mm}\paragraph{#1}}
\newcommand {\pr} [1] {\Pr \left[ {#1} \right]}
\newcommand {\given } {\hspace{0.2em} \vline \hspace{0.3em}}
\newcommand {\awlg} {assume w.l.o.g.~}
\newcommand {\Awlg} {Assume w.l.o.g.~}
\newcommand {\wlg} {w.l.o.g.~}
\newcommand {\Wlg} {W.l.o.g.~}
\newcommand {\wrt} {w.r.t.~}
\newcommand {\wip } {with probability~}
\newcommand {\wpm} {w.p.~}
\newcommand {\st}{s.t.~}
\newcommand {\as}{a.s.~}
\newcommand {\asns}{a.s.\hspace{-0.2ex}}
\newcommand {\etal} {et al.~\hspace{-0.1ex}}
\newcommand {\qed} {$\blacksquare$}
\newcommand {\qedm} {~\blacksquare}
\newcommand {\eqdef} {{~ \stackrel{\mathrm{def}} {=} ~}}
\newcommand {\poly} {\mathit{poly}}
\newcommand {\Z} {\mathbb{Z}}
\newcommand {\N} {\mathbb{N}}
\newcommand {\lb} {\left(}
\newcommand {\rb} {\right)}
\newcommand {\lsb} {\left[}
\newcommand {\rsb} {\right]}
\newcommand {\ceil}[1] {\lceil {#1} \rceil}
\newcommand {\floor}[1] {\lfloor {#1} \rfloor}
\newcommand {\into} {\rightarrow}
\newcommand {\eps} {\epsilon}
\newcommand {\opmo} {{\scriptstyle (1 \pm o(1))}}
\newcommand {\opo} {{\scriptstyle (1 + o(1))}}
\newcommand {\omo} {{\scriptstyle (1 - o(1))}}
\newif \ifCiteName \CiteNamefalse
\newcommand {\myBib} [2]{\ifCiteName \bibitem[#1]{#2}\else \bibitem{#2}\fi}
\newcommand \myTitle {\emph}
\newcommand \myCite {}
\newcommand \andCoAuthersBib {, }
\newcommand \andCoAuthers {and }
\newcommand {\ER} {Erd\H{o}s \andCoAuthers R\'{e}nyi~}
\newcommand {\ERbib} {Erd\H{o}s and R\'{e}nyi~}
\newcommand {\Erd} {Erd\H{o}s~}
\newcommand {\Erdns} {Erd\H{o}s}
\newcommand {\Bol} {Bollob\'{a}s~}
\begin{document}
\thispagestyle{empty}
\title{\bf{k-wise independent random graphs}}
\author{Noga Alon\thanks{
Schools of Mathematics and Computer Science,
Sackler Faculty of Exact Sciences, Tel Aviv
University, Tel Aviv 69978,
Israel, and IAS, Princeton, NJ 08540, USA.
Email:~{\texttt{nogaa@post.tau.ac.il.}}
Research supported in part by the Israel Science
Foundation and by a USA-Israeli BSF
grant.}
\and
Asaf Nussboim\thanks{Department of Computer Science and
Applied Mathematics, Weizmann Institute of Science,
Rehovot, Israel.
Email:~{\texttt{asaf.nussbaum@weizmann.ac.il.}}
{Partly supported by a grant from the Israel Science
Foundation.}}}
\date{}\maketitle
\begin {abstract}
We study the \kwi relaxation of the usual model \gnp of random
graphs where, as in this model,
$N$ labeled vertices are fixed and each edge
is drawn with probability $p$,
however, it is only required that the distribution of
any subset of $k$ edges is independent.
This relaxation can be relevant in modeling phenomena
where only \kwice is assumed to hold, and is also useful
when the relevant graphs are so huge
that handling \gnp graphs becomes infeasible, and
cheaper random-looking distributions (such as \kwi
ones) must be used instead. Unfortunately, many well-known
properties of random graphs in \gnp are global, and it is
thus not clear if they are
guaranteed to hold
in the \kwi case.
We explore the properties of \kwig by providing
upper-bounds and lower-bounds on the amount of
independence, $k$, required for maintaining the main
properties of \gnp graphs: connectivity,
Hamiltonicity, the connectivity-number, clique-number
and chromatic-number and the appearance of fixed
subgraphs.
Most of these properties are shown to be captured by
either constant $k$ or by some $k=\poly(\log(N))$ for
a wide range of values of $p$, implying that random looking graphs
on $N$ vertices can be generated by a seed of size
$\poly(\log(N))$. The proofs combine combinatorial, probabilistic
and spectral techniques.
\end {abstract}
\thispagestyle{empty}
\newpage \setcounter{page}{1}
\mysection{Introduction}
We study the \kwi relaxation of the usual model \gnp of random graphs
where, as in this model, $N$ labeled vertices are fixed and each
edge is drawn with probability (w.p., for short)~$p=p(N)$, however, it is only
required that the distribution of any subset of $k$
edges is independent (in \gnp all edges are
mutually independent).
These \kwig are natural combinatorial objects that
may prove to be useful in modeling scientific phenomena
where only \kwice is assumed to hold. Moreover,
they can be used when the relevant graphs are so huge, that handling \gnp graphs is infeasible, and cheaper random-looking distributions must be used instead.
However, what happens when the application that uses
these graphs (or the analysis conducted on them)
critically relies on the fact that \rgs are, say,
almost surely connected? After all, \kwice is defined via `local' conditions, so isn't it possible that
\kwig will fail to meet `global' qualities like
connectivity? This motivates studying which global attributes
of \rgs are captured by their \kwi counterparts.
Before elaborating on properties of \kwig we provide
some background on \kwicens, on properties of random
graphs, and on the emulation of huge random graphs.
\mysubsection
{Emulating Huge Random Graphs}
Suppose that one wishes to conduct some simulations on
random graphs. Utilizing \gnp graphs requires
resources polynomial in $N$, which is infeasible
when $N$ is huge (for example, exponential in the
input length, $n$, of the relevant algorithms).
A plausible solution is to replace \gnp by a
cheaper `random looking' distribution $\mathcal G_N$.
To this end, each graph $G$ in the support of
$\mathcal G_N$ is represented by a very short binary
string (called seed) $s(G)$, \st evaluating edge
queries on $G$ can be done efficiently when $s(G)$ is
known; then sampling a graph from $\mathcal G_N$ is
done by picking the seed uniformly at random.
Goldreich \etal were the first to address this scenario
in \cite{ggn}. They studied emulation by computationally
pseudorandom graphs, that are indistinguishable from \gnp from the view of
any $\poly(\log(N))$-time algorithm that inspects
graphs via edge-queries of its choice. They considered
several prominent properties of
\gnp graphs, and constructed computationally
pseudorandom graphs that preserve many of those
properties (see the final paragraph of Section \ref{compareGGN}).
We consider replacing \rgs by \kwi ones. The latter
can be sampled and accessed using only
$\poly(k\log(N))$-bounded resources. This is achieved
thanks to efficient constructions of discrete \kwi
variables by Joffe \cite{jof}, see also Alon, Babai and
Itai \cite{abi}:
the appearance of any potential edge in the graph is
simply decided by a single random bit (that has
probability $p$ to attain the value 1).
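To illustrate the flavor of such constructions, here is a minimal sketch (ours; the cited constructions are more general and achieve essentially optimal seed length) of the degree-$<k$ polynomial method over a prime field: the seed is the coefficient vector of $f$, and the $t$-th variable is $X_t=f(t) \bmod q$. For tiny parameters, exact $k$-wise independence can be verified by exhausting the whole sample space.

```python
from itertools import product
from collections import Counter

def kwise_values(q, k):
    """For every seed (c_0,...,c_{k-1}) in Z_q^k, evaluate the polynomial
    f(t) = c_0 + c_1 t + ... + c_{k-1} t^{k-1} (mod q) at t = 0,...,q-1.
    The q variables X_t = f(t) are uniform on Z_q and k-wise independent."""
    samples = []
    for coeffs in product(range(q), repeat=k):
        samples.append(tuple(sum(c * pow(t, e, q) for e, c in enumerate(coeffs)) % q
                             for t in range(q)))
    return samples

q, k = 5, 2
samples = kwise_values(q, k)
# Pairwise independence: for any two evaluation points, every value pair
# (a, b) in Z_q^2 arises from exactly one seed (by Lagrange interpolation).
for s, t in [(0, 1), (1, 4), (2, 3)]:
    counts = Counter((samp[s], samp[t]) for samp in samples)
    assert all(counts[(a, b)] == 1 for a in range(q) for b in range(q))
```

For edge probability $p=s/q$ one takes the biased bit $B_t=[X_t<s]$; this only realizes rational $p$ with denominator $q$, one of several ways the toy version falls short of \cite{jof,abi}.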
Such \kwig were used by Naor \etal \cite{nnt} to
efficiently capture arbitrary first-order properties
of huge \gnp graphs (see Section \ref{FO_paper}).
\mysubsection
{${\bf k}$-Wise Independent Random Variables}
Distributions of discrete \kwi variables play an
important role in computer science. Such distributions
are mainly used for de-randomizing algorithms (and for
some cryptographic applications). In addition, the
complexity of constructing \kwi variables was studied
in depth, and in particular, the aforementioned constructions
\cite{jof,abi} (based on degree $k$ polynomials over finite fields)
are known to provide essentially the smallest possible sample spaces.
Our work is, however, the first systematic
study of {\em combinatorial properties} of \kwi
objects. Properties of various other \kwi objects
(mainly percolation on $\Z^d$ and on Galton-Watson
trees) were subsequently explored by Benjamini,
Gurel-Gurevich and Peled \cite{bggp}.
\mysubsection
{The Combinatorial Structure of Random Graphs}
What are the principal attributes of \rgs that \kwi ones should maintain? Most theorems that manifest the remarkable structure of \rgs state that certain properties occur either almost surely (\as for
short), or alternatively hardly ever (namely, \wip
tending either to 1 or to 0 as $N$ grows to $\infty$).
These results typically fall into one of the following
categories.
\SaveSizeParagraph{Tight concentration of measure.}
A variety of prominent random variables (regarding
random graphs) \as attain only values that are {\em
extremely close} to their expectation. For instance,
random graphs (with, say, constant $p$) \as have
connectivity number $\kappa=\opmo pN$, clique number
$c=\opmo \frac {2\log(pN)}{\log(1/p)}$ (\Bol
\andCoAuthers \Erd\cite{be}, Matula \cite{mat}, Frieze
\cite{fri}) and chromatic number $\chi=\opmo \frac
{N\log(1/1-p)}{2\log(pN)}$ (\Bol \cite{bolChi},
{\L}uczak \cite{l}).
\SaveSizeParagraph{Thresholds for monotone
properties.}
For a given monotone increasing\footnote{Namely, any
property closed under graph isomorphism and under
addition of edges.} graph property $T$, how large
should $p(N)$ be for the property to hold \asns?
This question had been settled for many prominent properties such as connectivity (\ER
\cite{erConn}), containing a perfect matching (\ER
\cite{erMatch1,erMatch2,erMatch3}), Hamiltonicity
(P\'{o}sa \cite{pos}, Kor\v sunov \cite{kor},
Koml\'{o}s \andCoAuthers Szemer\'{e}di \cite{ks0}), and
the property of containing copies of some fixed
graph $H$ (\ER \cite{erGiant}, \Bol \cite{bolsub}).
For these (and other) graph properties the sufficient
density (for obtaining the property) is surprisingly
small, and moreover, a threshold phenomenon occurs when
by `slightly' increasing the density from
$\underline{p}(N)$ to $\overline{p}(N)$, the
probability that $T$ holds dramatically changes from
$o(1)$ to $1-o(1)$.%
\footnote{Thresholds for prominent properties are
often so sharp that $\overline{p}=(1+o(1))
\underline{p}$. Somewhat coarser thresholds were (later)
established for {\em arbitrary} monotone properties by
\Bol \andCoAuthers Thomason \cite{bt}, and by Friedgut
\andCoAuthers Kalai \cite{fk}.}
Thus, good emulation requires the property $T$ to be guaranteed at densities as close as possible to the true \gnp threshold.
\SaveSizeParagraph{Zero-one laws.}
These well known theorems reveal that {\em any} first-order
property holds either \as or hardly ever for \gnpns. A
first-order property is any graph property that can be
expressed by a single formula in the canonical
language where variables stand for vertices and the
only relations are equality and adjacency (e.g.~``having an isolated vertex'' is specified
by $\exists x \forall y \neg \mbox{\sc edge} (x,y)$).
These Zero-one laws hold for any fixed $p$ (Fagin \cite{fag}, Glebskii, Kogan, Liagonkii \andCoAuthers Talanov \cite{gklt}), and whenever $p(N)=N^{-\alpha}$ for a fixed irrational $\alpha$ (Shelah \andCoAuthers
Spencer \cite {ss}).
\mysection
{Our Contribution}
We investigate the properties of \kwig by providing
upper bounds and lower bounds on the `minimal' amount of
independence, $k_T$, required for maintaining the main properties $T$
of random graphs.\ifPersonal\footnote{When $k={{N \choose 2}}$ we get the original \gnp graphs, while for, say, $k=1$ (e.g., the case where \wip $p$ the graph is complete and otherwise it is empty) the resulting graphs have very little in common with \gnp graphs.} \else{ }\fi
The properties considered are:
connectivity, perfect matchings, Hamiltonicity, the
connectivity-number, clique-number and
chromatic-number and the appearance of copies of a
fixed subgraph $H$.
We mainly establish upper bounds on $k_T$ (where arbitrary \kwig are shown to exhibit the property $T$) but also lower bounds (that provide specific constructions of \kwig that fail to preserve $T$).
Our precise results per each of these properties are
discussed in Section \ref{intro_list_results},
and proved in Section \ref{claims_&_proofs} (and the Appendices).
Interestingly, our results reveal a deep difference
between \kwice and almost \kwice (a$.$k$.$a$.$
$(k,\eps)$-\wicens\footnote{$(k,\eps)$-\wice means
that the joint distribution of any $k$ potential edges
is only required to be within small statistical
distance $\epsilon$ from the corresponding
distribution in the \gnp case.}).
All aforementioned graph properties are guaranteed by
\kwice (even for small $k=\poly(\log(N))$), but are
strongly violated by some almost \kwig -- even when
$k=N^{\Omega(1)}$ is huge and $\eps=N^{-\Omega(1)}$ is
tiny.
For some properties of random graphs, $T$, our results
demonstrate for the first time how to efficiently
construct random-looking distributions on huge graphs
that satisfy $T$.
\SaveSizeParagraph
{Our Techniques \& Relations to Combinatorial Pseudorandomness.}
\label{intro_jumbled}
For positive results (upper bounding $k_T$), we note that the original proofs that establish properties of \gnp
graphs often fail for \kwigns. These proofs use a union
bound over $M=2^{\Theta(N)}$ undesired events, by
giving a $2^{-\Omega(N)}$ upper-bound on the
probability of each of these events.%
\footnote{For instance \wrt connectivity, $M$ is the number of choices for partitioning the vertices into 2 disconnected components.%
\ifPersonal{~For perfect matchings, $M$ counts the
number of subsets of vertices that defy a sufficient
condition for the matching (by either Hall's Theorem or
Tutte's Theorem).
For the chromatic number, $M$ is the (a-priory) number of possible sub-graphs from which a greedy coloring
algorithm might choose a large independent set that is
colored with a single color.}\fi}
Unfortunately, there exist $\poly(\log(N))$-\wig
\ifPersonal that can be constructed consuming only
$\Theta(k \log(N))$ random bits. Hence, any event that
occurs with positive probability (in these
constructions), must hold \wip $\geq 2^{-\Theta(k
\log(N))}$ which is $\gg \frac 1 M$ when $k \leq
\poly(\log(N))$. \else where any event that occurs
with positive probability, has probability $\geq
2^{-o(N)}$. \fi
Therefore, directly `de-randomizing' the original proof fails, and alternative arguments (suitable for the $k$-wise independent case) are provided.
In particular, many properties are inferred via a variant of
Thomason's notion of `jumbledness' \cite{t}
(mostly known in its weaker form as quasirandomness or
pseudorandomness, as defined by Chung, Graham \andCoAuthers Wilson
\cite{cgw}, and related to the so called Expander Mixing Lemma and the
pseudo-random properties of graphs
that follow from their spectral properties, see \cite{ac}).
For our purposes, $\alpha$-jumbledness means that (as expected in
\gnp graphs) for all vertex-sets $U,V$, the number of
edges that pass from $U$ to $V$ should be $p|U||V|
\pm \alpha \sqrt{|U||V|}$.
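As a toy illustration of the definition (ours, with one common convention: $e(U,V)$ counts ordered adjacent pairs in $U\times V$, so edges inside $U\cap V$ are counted twice), one can find the best jumbledness constant of a small graph by brute force:

```python
from itertools import combinations
from math import sqrt

def e(U, V, edges):
    """Ordered adjacent pairs (u, v) with u in U, v in V."""
    return sum(1 for u in U for v in V if (u, v) in edges or (v, u) in edges)

def jumbledness(vertices, edges, p):
    """Smallest alpha with |e(U,V) - p|U||V|| <= alpha*sqrt(|U||V|)
    over all nonempty vertex-sets U, V."""
    subsets = [set(s) for r in range(1, len(vertices) + 1)
               for s in combinations(vertices, r)]
    return max(abs(e(U, V, edges) - p * len(U) * len(V)) / sqrt(len(U) * len(V))
               for U in subsets for V in subsets)

# The 5-cycle: 5 edges out of C(5,2) = 10 pairs, so density p = 1/2.
C5 = {(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)}
assert e({0, 1}, {2, 3}, C5) == 1   # only the pair (1, 2)
assert e({0, 1}, {0, 1}, C5) == 2   # (0, 1) and (1, 0)
alpha = jumbledness(range(5), C5, 0.5)
```

For the $5$-cycle at density $p=1/2$ the maximum deviation is attained at $U=V=\{0,2\}$, a non-adjacent pair, giving $\alpha=1$.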
Jumbledness and quasirandomness had been studied extensively (see
\cite{ks} and its many references), and serve in Graph Theory as
{\em the} common notion of resemblance to random graphs. In
particular, \gnp graphs are known to exhibit (the best
possible) jumbledness parameter,
$\alpha=\Theta(\sqrt{pN})$.
One of our main results (Theorem
\ref{kwise_gives_jumbl}) demonstrates that \kwice for
$k=\Theta( \log(N))$ is stronger than jumbledness, in
the sense that it guarantees the optimal
$\alpha=\Theta(\sqrt{pN})$ even for tiny densities
$p=\Theta(\frac{\ln(N)}N)$.
Therefore, prominent properties of \kwig can be directly deduced from properties of jumbled graphs.
Proving Theorem \ref{kwise_gives_jumbl} exploits a
known connection between jumbledness and the
eigenvalues of (a shifted variant of) the adjacency
matrix of graphs, following the approach in \cite{ac}.
In particular, the analysis of Vu (\cite{vu},
extending \cite{fk2}) regarding the eigenvalues of
random graphs is strengthened, in order to achieve
optimal eigenvalues even for smaller densities $p$
than those captured by \cite{vu}. This improvement
implies, among other results, the remarkable fact that
\kwig for $k=\Theta(\log(N))$ preserve (up to constant
factors) the \gnp sufficient density for connectivity.
\SaveSizeParagraph
{More on Techniques \& Relations to Almost $k$-Wise Independence.}
\label{almost_k_wise}
For negative results (producing random-looking graphs
that defy a given property $T$ of random graphs), the
\cite{ggn,my_thesis} approach is to first construct
some random-looking graph $G$, and later to `mildly'
modify $G$ \st $T$ is defied. This is done \wrt all graph properties considered here. For instance, the
modification of choosing a random vertex and then deleting all its edges violates connectivity while preserving computational pseudorandomness.
Unfortunately, such modifications \ifSaveSize \else are useless in our context because they \fi fail to preserve
\kwice \ifPersonal and it is typically not clear how
to modify the graph for the second time to regain \kwice \else \fi (the resulting graphs are only almost \kw independent).
In contrast, most of our negative results exploit the fact
that some constructions of \kwi bits produce
strings with significantly larger probability than in
the completely independent case. This is translated
(by the construction in Lemma
\ref{kwise_construction_unexpected_sub_graphs}) to the
unexpected appearance of some subgraphs (in \kwigns):
either huge independent-sets inside dense graphs or
fixed subgraphs inside sparse graphs.
\SaveSizeParagraph
{Comparison with Computational Pseudorandomness.}
\label{compareGGN}
Finally, \kwice guarantees all random graphs'
properties that were met by the (specific)
computationally pseudorandom graphs of
\cite{ggn,my_thesis}\ifSaveSize. \else \footnote{A
single exception is preserving the precise \gnp
upper-bound on the chromatic number only up to a
constant factor, whereas no such factors are
introduced by \cite{ggn,my_thesis}.}. \fi
\ifSaveSize In addition, only \kwice captures \else
The list of random graph properties captured by \kwice
but not by \cite{ggn,my_thesis} includes \fi (i)
arbitrary first-order properties of \gnp graphs, (ii)
high connectivity, (iii) strongest possible parameters
of jumbledness, and (iv) almost regular degrees of $(1\pm
o(1))pN$ for all vertices and co-degrees of $(1\pm
o(1))p^2N$ for all vertex pairs.
Importantly, all this holds for any \kwigns, (and in
particular for the very simple and efficiently
constructable ones derived from \cite{jof,abi}),
whereas the approach of \cite{ggn,my_thesis} requires
non-trivial modifications of the construction per each
new property.
\mysection
{Combinatorial Properties of ${\bf k}$-Wise Independent
Graphs} \label{intro_list_results}
We now survey our main results for each of the
aforementioned graph properties $T$. Typically our
arguments establish the following tradeoff: the
smaller $p$ is, the larger $k$ should be to maintain
$T$.
Given this tradeoff, we highlight minimizing $k$ or, alternatively, minimizing $p$. The latter is
motivated by the fact that the \gnp threshold for
many central properties occurs at some $p^*\ll 1$.
Minimizing $p$ is subject to some reasonable choice of
$k$, which is $k\leq \poly(\log(N))$. Indeed, as the
complexity of implementing \kwig is $\poly(k
\log(N))$, we get efficient implementations whenever
$k \leq \poly(\log(N))$ even when the graphs are huge
and $N=2^{\poly(n)}$.
\footnote{Accessing the graphs via edge-queries is
adequate only when $p \geq n^{-\Theta(1)}$; otherwise
\as no edges are detected by a $\poly(n)$-time inspecting
algorithm. For smaller densities our study thus has a mostly
combinatorial flavor.}
\ifSaveSize{\vspace{-2.7mm}}\fi \mysubsection
{Connectivity, Hamiltonicity and Perfect Matchings
(see Section \ref{section_conn_Ham})}
The well-known sufficient \gnp density for all these
properties is $\sim \frac {\ln(N)}N$. For
connectivity, this sufficient density is captured (up
to constant factors) by all $\log(N)$-\wigns. Even
$k=4$ suffices for larger densities $p \gg N^{-\frac 1
{2}}$.
Based on a criterion of Hefetz, Krivelevich
\andCoAuthers Szabo \cite{hks}, Hamiltonicity (and hence
a perfect matching) is guaranteed at $p\geq \frac
{\log^2(N)}{N}$ with $k\geq 4\log(N)$, and at $p \geq
N^{-\frac 1 {2}+o(1)}$ with $k\geq 4$.
On the other hand, some pair\wig are provided that,
despite having constant density, are still \as
disconnected and fail to contain any perfect matching.
\ifPersonal Note that by changing $k$ to 4, and the
density to $p=1/2 + N^{-\Theta(1)}$, all properties are
ensured again (since then all the degrees are $\leq
\frac N 2$). More importantly, note that the combination
of these results immediately rules out the possibility of
establishing a general threshold phenomenon (for all
monotone properties) when $k=2$.} \fi
\ifSaveSize{\vspace{-2.7mm}}\fi \mysubsection
{High Connectivity
(see Section \ref{section_high_conn})}
The connectivity number, $\kappa(G)$, is the largest
integer, $\ell$, \st any pair of vertices is connected
in $G$ by at least $\ell$ internally vertex-disjoint
paths. Since a typical degree in a random graph is
$(1\pm o(1)) pN$, it is remarkable that \gnp graphs
achieve $\kappa = (1\pm o(1)) pN$ \asns.
Surprisingly, such optimal connectivity is guaranteed
by $\Theta(\log(N))$\wice whenever $p \geq
\Theta(\frac {\log(N)}{N})$.
\ifSaveSize{\vspace{-2.7mm}}\fi \mysubsection
{Cliques and Independent-Sets
(see Appendix \ref{appendix_indp_num})}
For $N^{-o(1)} \leq p \leq 1-N^{-o(1)}$ the
independence number, $I$, of random graphs has \as
only two possible values: either $S^*$ or $S^*+1$ for
some $S^*\sim \frac {2\log(pN)}{\log(1/(1-p))}$.
This remarkable phenomenon is observed to hold under
$\Theta(\log^2(N))$-\wice whenever $p$ is
bounded away from 0.
On the other hand, \kwig are provided with $k= \Theta
\lb \frac {\log(N)}{\log \log(N)}\rb$ where $I \geq
(S^*)^{1+\Omega(1)}$ \as (for $k=\Theta(1)$, even huge
$N^{\Omega(1)}$ independent-sets may appear).
For smaller densities, random graphs \as have $I \leq
O(p^{-1} \log(N))$, while $\Theta(\log(N))$\wice gives
a weaker, yet useful, $I \leq O(\sqrt{\frac N p})$
bound whenever $p\geq \Omega (\frac{\log(N)}N)$.
By symmetry (replacing $p$ with $1-p$), analogous
results to all the above hold for the clique number as
well.
Discussing the clique- and independence-number is
deferred to the appendices since the main
relevant techniques here are demonstrated elsewhere in the paper.
\ifSaveSize{\vspace{-2.7mm}}\fi \mysubsection
{Coloring
(see Section \ref{section_color})}
For $1/N \ll p \leq 1-\Omega(1)$, the chromatic number $\chi$ of random graphs
is a.s. $(1+o(1))\frac
{N\log(1/(1-p))} {2\log(pN)}$.
This \gnp lower-bound on $\chi$ is observed to hold
for any $(\log(N))^{\Theta(1)}$\wig with moderately
small densities $p\geq (\log(N))^{-\Theta(1)}$.
More surprisingly, $k=\Theta(\log(N))$ suffices to
capture a similar upper-bound (even for tiny densities
$p=c \log(N)/N$).
This upper-bound is based on results of Alon, Krivelevich
\andCoAuthers Sudakov \cite{aks,aks1} and of Johansson
\cite{joh}.
\ifSaveSize{\vspace{-2.7mm}}\fi \mysubsection
{Thresholds for the Appearance of Subgraphs
(see Section \ref{section_sub_graphs})} \label{intro_H_copies}
For a fixed (non-empty) graph $H$, consider the
appearance of $H$-copies ({\em not necessarily} as an
induced subgraph) in either a random or a \kwi
graph.\ifPersonal\footnote{
When $H$ is empty, the question is trivial because every
graph $G$ (of sufficient order) contains $H$ copies.
This formally translates to $\rho=\infty$ and to
$p^*=0$.}\else~\fi
The \gnp threshold for the occurrence of $H$ sub-graphs
lies at $p^*_H \eqdef N^{-\rho}$, where the constant
$\rho=\rho(H)$ is the minimum, taken over all subgraphs
$H'$ of $H$ with $e(H')\geq 1$ (including $H$ itself), of the ratio $\frac
{v(H')} {e(H')}$ (here, $v(H')$ and $e(H')$ respectively
denote the number of vertices and edges in $H'$).
Thus, no $H$-copies are found when ${p}\ll p^*$, while for any ${p}\gg p^*$, copies of $H$ abound (\ER \cite{erGiant}, \Bol \cite{bolsub}).
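The quantity $\rho(H)$ is easy to compute for small graphs. The following Python sketch (illustrative only; the function name and graph representation are ours, not part of the paper) enumerates vertex subsets, using the fact that for a fixed vertex set the ratio $v/e$ is minimized by the induced subgraph:

```python
from itertools import combinations
from fractions import Fraction

def rho(vertices, edges):
    """rho(H) = min over subgraphs H' with e(H') >= 1 of v(H')/e(H').

    For a fixed vertex subset S the ratio is minimized by taking all
    edges of H inside S, so enumerating induced subgraphs suffices.
    """
    best = None
    for r in range(2, len(vertices) + 1):
        for S in combinations(vertices, r):
            s = set(S)
            e = sum(1 for (u, v) in edges if u in s and v in s)
            if e >= 1:
                ratio = Fraction(r, e)
                if best is None or ratio < best:
                    best = ratio
    return best  # the threshold is p*_H = N ** (-rho)
```

For the triangle this gives $\rho=3/3=1$, recovering the classical $N^{-1}$ threshold; for a single edge it gives $\rho=2$.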
\ifPersonal
\begin{remark} The ratio $\rho$ decides the
threshold, since it is minimized for the most `dense'
subgraph $H'$. Intuitively, this $H'$ is the sub-graph
less likely to appear in a \gnp graph, so $H$-copies
\as appear iff $H'$-copies \as appear.\end{remark}
\begin{remark} Note that $\frac 2 {v-1} \leq \rho \leq
2$, where the lower bound is tight (only) for cliques,
and the upper bound is tight iff $H$ contains only
disjoint edges. As $v=v(H)$ grows, most graphs have
$e(H)=\Theta(v^2)$, so $\rho=\Theta(1/v)$.\end{remark}
\fi
For any graph $H$, this \gnp threshold holds whenever
$k\geq \Theta (v^4(H))$, but as $k$ is decreased to
$\ceil{\frac 2 {\rho}-1}$, the \gnp threshold is defied:
much sparser graphs exist where $p \ll p^*_H$ and yet
copies of $H$ are \as found.
In particular, when $e(H)\geq \Omega(v^2(H))$,
\ifPersonal then $\rho \leq O(1/v(H))$, so \fi the
threshold violation occurs at $k= \Omega(v(H))$.
\ifPersonal Note that our negative results are optimal
in the sense that they capture all possible graphs $H$
(see footmark inside statement of Theorem
\ref{sub_graphs_defy}). The optimality of $k$ is
discussed in remark \ref{remark_rho_cond_reasonable}.
\fi
\ifSaveSize{\vspace{-2.7mm}}\fi \mysubsection
{First Order Zero-One Laws (Previous Results)}
\label{FO_paper}
\ifSaveSize Naor \etal \cite{nnt} have recently \else A recent study (joint work of the second author with Naor \andCoAuthers Tromer \cite{nnt}) \fi studied capturing arbitrary depth-$D(N)$ properties.
These are graph properties expressible by a sequence of first-order formulas $\Phi=$ $\{\phi_N\}_{N \in \mathbb N}$, with quantifier depth $\mathit{depth}(\phi_N) \leq D(N)$%
\ifSaveSize. \else
(e.g.~``having a clique of size $t(N)$" can be specified
by $\phi_N = \exists x_1 ... \exists x_{t(N)}
\bigwedge_{i \neq j} ((x_i \neq x_j) \bigwedge
\mbox{\sc edge}(x_i,x_j))$). \fi
A `threshold' depth function $D^*\sim\frac{\log(N)}{\log(1/{p})}$ was
identified \st arbitrary \kwig resemble \gnp graphs \wrt all
depth $D^*$ properties. The underlying resemblance-definition is in
fact so strong, that even \gnp graphs cannot achieve resemblance
to themselves \wrt properties of higher depth. On the other hand,
\kwig were shown to defy some \gnp properties of depth
$\Theta(\sqrt{k\log(N)}+\log(N))$.
These results are incomparable to the ones in the current paper,
since most of the graph properties studied here require larger
depth than $D^*$.
\ifPersonal\footnote{ Note that the naive recursive
formula for connectivity has depth $3 \log(N)$ but
exponential size, while Savitch's Theorem \cite{sav}
gives linear size and $5 \log(N)$ depth. On the other
hand, by a simple Ehrenfeucht-game argument, depth $2
\log(N)-\Theta(1)$ is insufficient for connectivity.
For optimal Ramsey graphs depth $2 \log(N)$ suffices,
which is again tight by some trivial Ehrenfeucht-game
argument.}
Thus, a more direct approach is needed to establish such
properties for \kwigns.\footnote{Note that, in addition,
direct proofs may reduce the amount of independence $k$
required. For instance, for the appearance of subgraphs of
order $v$, $k=\Theta(v^4)$ suffices instead of $k
\sim \frac{\log^3 N}{\log^2(1/p)}$.}\fi
\mysection{Preliminaries}
\paragraph{Asymptotics.}
Invariably, $k: \N \into \N$, while
$p,\eps,\delta,\gamma,\Delta: \N \into (0,1)$. We
often use $k,p,\eps,\delta,\gamma,\Delta$ instead of
$k(N),p(N),\eps(N),\delta(N),\gamma(N),\Delta(N)$.
Asymptotics are taken as $N \rightarrow \infty$, and
some inequalities hold only for sufficiently large
$N$. The $\floor{\cdot}$ and $\ceil{\cdot}$ operators
are ignored whenever insignificant for the asymptotic
results. Constants $c,\bar c$ are not optimized in
expressions of the form $k=c\log(N)$ or
$p=(\log(N))^{\bar c}/N^{\Delta}$, whereas the
constant $\Delta$ is typically optimized.
\SaveaMoreSizeParagraph{Subgraphs.}
For a graph $H$, let $v(H)$ and $e(H)$ denote the number
of vertices and edges in $H$. For vertex sets $U,V$ let
$e(U,V)$ denote the number of edges that pass from $U$ to
$V$ (if $S=U \bigcap V \neq \emptyset$, then any
internal edge of $S$ is counted twice). Similarly, we
let $e(U)=e(U,U)$.
\SaveaMoreSizeParagraph{Random and ${\bf k}$-Wise
Independent Graphs.}
Throughout, graphs are simple, labeled and undirected.
Given $N,k,p$ as above then $\gnknm$~ (or $\gnkm$~ for short)
denotes some distribution over the set of graphs with
vertex set $\{1,...,N\}$, where each edge appears
w.p.~$p(N)$, and the random variables that indicate the
appearance of any $k(N)$ potential edges are mutually
independent. We use the term `\kwigns' for a sequence of
distributions \ens{\mathcal G^{k}(N,p)} indexed by $N$.
\SaveaMoreSizeParagraph{Almost Sure Graph Properties.}
A graph property $T$ is any property closed under
graph isomorphism. We say that `$T$ holds a.s.~(almost
surely) for $\mathcal G^{k}(N,p)$' or that (abused notation) `$T$
holds for $\mathcal G^{k}(N,p)$' whenever $\Pr_{\mathcal G^{k}(N,p)}[T]$ {}
$\stackrel {N \into \infty} {\longrightarrow} 1$. Similar terminology is used for \gnp graphs.
\SaveaMoreSizeParagraph{Monotonicity in $(\bf{k,p})$.}
Since $\bar k$-\wice implies $k$-\wice for all $\bar k >
k$ we may state claims for arbitrary $k\geq k'$ but
prove them only for $k=k'$.
When establishing monotone increasing properties we
often state claims for arbitrary $p\geq p'$ but prove
them only for $p=p'$.
The latter is valid since for any $N,k,p>p'$, sampling
independently from any $\mathcal G^k(N,p)$ and
$\mathcal G^k(N,p'/p)$ distributions, and defining the
final graph with edge-set being the intersection of the
edge-sets of the two sampled graphs, clearly results in a $\mathcal G^k(N,p')$ distribution.
\ifPersonal Assuming that $k=k',p=p'$ is used only to
ensure the $\frac {M-k}k \mu(1-\mu) \geq 1$ condition in
Lemma \ref{kwise_Chebyshev_bound} (the $k \leq \frac N
3$ condition holds anyway, whenever the Lemma is
meaningful).
Finally, lower-bounds on $p$ are often provided only for
sake of clarity (these lower-bounds are redundant as
they follow immediately from the companion bounds on
$\eps$ or $\gamma$).\fi
\SaveaMoreSizeParagraph{${\bf k}$-Wise Independent
Random Variables.}
The term `$(M,k,p)$-variables' stands for any $M$
binary variables that are $k$-wise independent with
each variable having probability $p$ of attaining
value 1. Lemma \ref{modify_JoffeCG_lem} (proved in
Section \ref{modify_JoffeCG_sec}) adjusts the known
construction of discrete \kwv of \cite{jof},\cite{cg}, \cite{abi} to
provide $(M,k,p)$-variables that induce some
predetermined values with relatively high
probability. Throughout, $e_1$ and $e_0$ resp.~denote
the number of edges and non-edges in a graph $H$.
\blem \label{modify_JoffeCG_lem} Given $0<p<1$ with
binary representation $p=0.b_1...b_{\ell}$, and
natural numbers $e_0,e_1,M$ satisfying $e_0+e_1\leq
M$, let $F= \max \{2^{\ceil{\log_2M}},2^{\ell}\}$.
Then there exist $(M,k,p)$-variables \st
$\Pr[A]=F^{-k}$, where $A$ denotes the event that the
first $e_0$ variables receive value 0 while the next
$e_1$ variables receive value 1. \elem
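The construction behind Lemma \ref{modify_JoffeCG_lem} is a variant of the classical polynomial construction of \cite{jof,abi}. The following Python sketch (all names are ours) illustrates the idea over a prime field rather than the power-of-two field $F$ of the lemma, so the success probability is only approximately $p$; it is a toy illustration, not the exact construction used in the proof:

```python
import itertools
import random

def kwise_bits(M, k, q, threshold):
    """One sample of M k-wise independent Bernoulli(threshold/q) bits.
    A uniformly random polynomial of degree < k over GF(q) (q prime,
    M <= q) is evaluated at M distinct points; any k of the values are
    mutually independent and uniform over GF(q), so thresholding keeps
    the bits k-wise independent."""
    coeffs = [random.randrange(q) for _ in range(k)]
    vals = [sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q
            for x in range(M)]
    return [int(v < threshold) for v in vals]

def joint_dist(k, q, threshold, positions):
    """Exact joint distribution of the bits at the given (distinct)
    evaluation points, by enumerating all q**k polynomials."""
    counts = {}
    for coeffs in itertools.product(range(q), repeat=k):
        vals = [sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q
                for x in positions]
        bits = tuple(int(v < threshold) for v in vals)
        counts[bits] = counts.get(bits, 0) + 1
    return counts
```

With $q=5$, $k=2$ and threshold $2$ (so $p=2/5$), any two positions see the four bit-patterns with the product probabilities $p^2$, $p(1-p)$, $(1-p)p$, $(1-p)^2$, i.e., counts $4,6,6,9$ out of the $25$ polynomials.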
\SaveaMoreSizeParagraph{Tail Bounds for ${\bf k}$-Wise
Independent Random Variables.}
The following strengthened version of standard tail
bounds (proved in Section \ref{sec_inequal})
translates into smaller densities $p$ for which
monotone graph properties are established for \kwigns.
\blem \label{kwise_Chebyshev_bound} Let $X=\sum_{j=1}^M
X_{j}$ be the sum of \kwi binary variables where
$\Pr[X_j=1]=\mu$ holds for all $j$. Let $\delta > 0$,
and let $k$ be even \st $\frac {M-k}k \mu(1-\mu) \geq
1$. Then
$$\Pr[|X-\mathbb E(X)| \geq \delta \mathbb E(X)] \leq
\left [\frac {2k(1-\mu)} {\delta^2 \mu M}\right] ^{\fktb}.$$
\elem
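Since fully independent bits are in particular $k$-wise independent for every $k\leq M$, the bound of Lemma \ref{kwise_Chebyshev_bound} must in particular dominate the exact binomial tail. The following Python sketch compares the two, reading the exponent in the lemma as $k/2$ for even $k$ (as in its applications below); the numeric parameters are ours:

```python
from fractions import Fraction
from math import comb

def binom_two_sided_tail(M, mu, delta):
    """Exact P[|X - EX| >= delta * EX] for X ~ Binomial(M, mu),
    with mu and delta rational, via exact arithmetic."""
    mu, delta = Fraction(mu), Fraction(delta)
    EX = M * mu
    return sum(comb(M, i) * mu**i * (1 - mu)**(M - i)
               for i in range(M + 1) if abs(i - EX) >= delta * EX)

def lemma_bound(M, mu, delta, k):
    """The lemma's bound [2k(1-mu)/(delta^2 mu M)]^(k/2), for even k
    satisfying the condition (M-k)/k * mu(1-mu) >= 1."""
    mu, delta = Fraction(mu), Fraction(delta)
    assert k % 2 == 0 and Fraction(M - k, k) * mu * (1 - mu) >= 1
    return (2 * k * (1 - mu) / (delta**2 * mu * M)) ** (k // 2)
```

For $M=100$, $\mu=\delta=\frac12$ and $k=4$ the bound equals $(8/25)^2=64/625$, while the exact binomial tail is several orders of magnitude smaller.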
\mysection{The properties of ${\bf k}$-wise
independent graphs} \label{claims_&_proofs}
\mysubsection{Degrees, Co-Degrees and Jumbledness}
\ifPersonal For future purposes, we first establish some
fundamental random graphs' properties, such as having
almost regular degrees and co-degrees as well as
achieving strong jumbledness.\fi
\ble{Achieving almost regular degrees}
\label{kwise_gives_deg}
In all \kwig \ens{\gnkm} it \as holds that all vertices have degree~~$p(N-1)(1 \pm \eps)$ whenever $N \big[\frac {3k} {\eps^2
pN}\big]^{\floor{k/2}} \longrightarrow 0,$ and in
particular when either \ben
\item
$k\geq 4$,~$N^{-1/2} \ll p \leq 1-\frac 5 N$, and $1 \geq \eps \gg p^{-1/2}N^{-1/4};$~~or
\item
$k\geq 4\log(N)$,~$\frac{25\log(N)}{N} \leq p \leq
1-\frac {5 \log(N)} N$, and $1 \geq \eps \geq
\sqrt{\frac{25\log(N)}{p N}}.$ \een \ele
\noindent{\bf Proof.} Fix a vertex $v$, and let $X_w$ be
the random variable that indicates the appearance of the
edge $\{v,w\}$ in the graph. Thus, the degree of $v$ is
$X=\sum_{w\neq v}X_w$. Since $X$ is the sum of
$(N-1,k,p)$-variables, Lemma \ref{kwise_Chebyshev_bound}
implies that the probability that $v$ has an unexpected
degree $X \neq p(N-1)(1\pm\eps)$ is bounded by
$\big[\frac {3k} {\eps^2 pN}\big]^{\floor{k/2}}.$
Applying a union-bound over the $N$ possible vertices
$v$ gives that the probability of having {\em some}
vertex with unexpected degree is bounded by $N
\big[\frac {3k} {\eps^2 pN}\big]^{\floor{k/2}},$ which
vanishes for the parameters in items 1 and 2. \qed
\ble{Achieving almost regular
co-degrees} \label{kwise_gives_codeg}
In all \kwig \ens{\gnkm} it \as holds that all vertex pairs
have co-degree~~$p^2(N-2)(1\pm \gamma)$ whenever
either \ben
\item
$k\geq 12$,~$N^{-\frac 1 6} \ll p \leq 1-\frac {13} N$, and $1 \geq \gamma \gg p^{-1} N^{-\frac 1 6};$~~or
\item
$k\geq 12\log(N)$,~$\sqrt{\frac{73 \log(N)}{N}} \leq p
\leq 1-\frac {13 \log(N)} N$~~and~~$1 \geq \gamma \geq
\sqrt{\frac {73 \log(N)} {p^2N}}.$ \een \ele
\ifPersonal{%
\noindent{\bf Proof.} We prove item 1 for $k=12$ and item
2 for $k=12\log(N)$. The claim
follows for $\bar k > k$, since $\bar k$-\wice implies
$k$-\wicens.
Fix a vertex pair $\{u,v\}$, and let $X_w$ be the random variable indicating the appearance of both edges $\{u,w\}$, $\{v,w\}$ in the graph. Thus the co-degree of $\{u,v\}$ is $X=\sum_{w\neq u,v}X_w$. %
Now $X$ is the sum of $N-2$ binary variables that are
$\lb k/2 \rb$-wise independent and each has probability
of success $p^2$. Thus, Lemma
\ref{kwise_Chebyshev_bound} combined with a union-bound
over the $\binom N 2 $ possible vertex pairs $\{u,v\}$,
ensures that the probability that there exists {\em some}
vertex-pair with unexpected co-degree $X \neq
p^2(N-2)(1\pm\gamma)$ is bounded by $\binom N 2 \left
[\frac {3k} {\gamma^2 p^2N}\right]^{\frac k 4}.$
Clearly, the latter expression vanishes for our choice
of parameters. \qed
\else \noindent{\bf Proof.} The proof is completely
analogous to that of Lemma \ref{kwise_gives_deg}. Here the
union-bound is over all $\binom N 2 $ vertex pairs
$\{u,v\}$, and the co-degree of each $\{u,v\}$ is the
sum of $(N-2, \floor {\frac k 2}, p^2)$-variables. \qed
\fi
The following definition is a modified version of the one in
\cite{t,cgw}, see also \cite{ac} and \cite{as}, Chapter 9.
\bdf {Jumbledness} \label{def_jumbl}
For vertex sets $U,V$, let $e(U,V)$ denote the number of edges that
pass from $U$ to $V$ (internal edges of $U \bigcap V$
are counted twice). A graph is $(p,\alpha)$-jumbled if
$e(U,V)=p |U||V| \pm \alpha \sqrt{|U||V|}$ holds for all
$U,V$. \edf
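Definition \ref{def_jumbl} can be checked directly, though only on very small graphs since the search is exponential. The following Python sketch (names ours; for illustration only) returns the smallest $\alpha$ for which a given graph is $(p,\alpha)$-jumbled:

```python
from itertools import combinations, product
from math import sqrt

def min_alpha(N, edges, p):
    """Smallest alpha for which the graph is (p, alpha)-jumbled, by
    brute force over all pairs of nonempty vertex sets (exponential
    in N; tiny graphs only).  Counting ordered pairs (u, v) makes
    internal edges of the intersection count twice, as required."""
    E = {frozenset(e) for e in edges}
    subsets = []
    for r in range(1, N + 1):
        subsets.extend(set(S) for S in combinations(range(N), r))
    alpha = 0.0
    for U, V in product(subsets, repeat=2):
        e_uv = sum((u != v) and (frozenset((u, v)) in E)
                   for u in U for v in V)
        dev = abs(e_uv - p * len(U) * len(V)) / sqrt(len(U) * len(V))
        alpha = max(alpha, dev)
    return alpha
```

For \gnp-like graphs one expects the optimal $\alpha = \Theta(\sqrt{pN})$ of Theorem \ref{kwise_gives_jumbl}; for the complete graph with $p=1$ the deviation $|U\cap V|/\sqrt{|U||V|}$ is maximized at $U=V$, giving $\alpha=1$.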
\btn{Achieving optimal jumbledness}
\label{kwise_gives_jumbl} There exist absolute
constants $c_1,c_2,c_3$ \st all \kwig \ens{\gnkm} are \as
$(p,\alpha)$-jumbled whenever either:
\ben
\item
$k\geq 4$, $p\geq \Omega(\frac {1}{N})$~~and~~$\alpha
\gg \sqrt{p}N^{3/4}$;~~or
\item
$k\geq \log(N)$, $\frac {c_1 \log(N)}{N} \leq p \leq
1- \frac {c_2 \log^4(N)}{N}$~~and~~$\alpha \geq c_3
\sqrt{pN}.$ \een \etn
\noindent{\bf Proof.} The proof is based on spectral techniques
and combines some refined versions of ideas from \cite{ac}, \cite{fk2}
and \cite{vu}, using the fact that traces of the $k$-th power of the
adjacency matrix of a graph are identical in the $k$-wise independent case
and in the totally random one. The details are somewhat lengthy and
are thus deferred to Appendix \ref{proof_kwise_gives_jumbl}.
\mysubsection{Connectivity, Hamiltonicity and Perfect
Matchings}\label{section_conn_Ham}
\btn{Achieving connectivity}
\label{kwise_gives_conn}
There exists a constant $c$ \st the following holds. All \kwig \ens{\gnkm} are \as
connected whenever either:
\begin{itemize}
\item
$k\geq 4$~~and~~$p \gg \frac {1}{\sqrt{N}}$;~~or
\item
$k\geq 4\log(N)$~~and~~$p\geq \frac {c\ln(N)}{N}$.
\end{itemize}
\etn
\noindent{\bf Proof.} Let $U$ be a vertex-set that
induces a connected component. Connectivity follows from
having $|U|>0.5N$ for all such $U$. The following holds
\as for $\mathcal G^{k}(N,p)$. By Lemma \ref{kwise_gives_deg}, all
vertices have degree $\geq 0.9pN$, so $e(U) \geq
0.9pN|U|$. By Theorem \ref{kwise_gives_jumbl}, all sets
$U$ satisfy $e(U)\leq p|U|^2 + \alpha|U|$ with $\alpha=
O(\sqrt{pN})= o(pN)$. Re-arranging gives $(0.9-o(1))N
\leq |U|$. \qed
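Spelling out the final re-arrangement (a routine step, stated for completeness): combining the two displayed bounds on $e(U)$, dividing by $|U|$ and then by $p$, gives

```latex
0.9\,pN\,|U| \;\le\; e(U) \;\le\; p|U|^2 + o(pN)\,|U|
\quad\Longrightarrow\quad
0.9\,pN \;\le\; p|U| + o(pN)
\quad\Longrightarrow\quad
|U| \;\ge\; (0.9-o(1))\,N \;>\; 0.5N .
```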
\btn{Achieving Hamiltonicity}
\label{kwise_gives_Ham}
All \kwig \ens{\gnkm} are \as Hamiltonian (and for even $N$ contain a perfect matching) whenever either:
\begin{itemize}
\item
$k\geq 4$~~and~~$p \geq \frac {\log^2(N)} {\sqrt
N}$;~~or
\item
$k\geq 4\log(N)$~~and~~$p\geq \frac {\log^2(N)}{N}$.
\end{itemize}
\etn
\noindent{\bf Proof.} Let $\bar \Gamma(V)$ denote the
set of vertices $v \notin V$ that are adjacent to some
vertex in the vertex-set $V$. By Theorem 1.1 in
Hefetz, Krivelevich \andCoAuthers Szabo's \cite{hks},
Hamiltonicity follows from the existence of constants $b,c$
such that \as
(i) $|\bar \Gamma(V)|\geq 12 |V|$ holds for all sets
$V$ of size $\leq b N$, and (ii) $e(U,V)\geq 1$ holds
for all disjoint sets $U,V$ of size $\frac {c
N}{\log(N)}$. We remark that (unlike other asymptotic
arguments in this paper), the sufficiency of (i) and
(ii) might hold only for very large $N$.
For (i), let $b= \frac 1 {170}$ and consider an
arbitrary set $V$. By Theorem \ref{kwise_gives_jumbl},
\as all vertex-sets $T$ have $e(T) \leq p|T|^2 +
o(pN)|T|$. By Lemma \ref{kwise_gives_deg} \as all the
degrees are $\opmo pN$, so exactly $\opmo pN|V|$ edges
touch $V$ (where internal edges are counted twice).
Let $T=V \bigcup \bar \Gamma(V)$, and assume that $|\bar
\Gamma(V)| < 12|V|$. We get $\omo pN|V| \leq e(T) \leq p
(13|V|)^2 + o(pN)|V|$. Re-arranging gives $|V| > \frac N
{170}$. Condition (i) follows.
For (ii), by Theorem \ref{kwise_gives_jumbl}, \as all
(equal-sized and disjoint) vertex-sets $U,V$ have
$e(U,V) \geq p|U||V| - O(\sqrt{pN})|U|$. If there is
no edge between $U$ and $V$, then $e(U,V)=0$.
Re-arranging gives $|U|\leq O(\sqrt{N/p}) \leq O(\frac
N{\log(N)})$. Condition (ii) follows. \qed
\btn {Failing to preserve connectivity}\label{kwise_defies_conn} There exist
pair-wise independent graphs \ens{\gnkm} \\
where $p=1/2$, that are (i) \as disconnected
(and contain no Hamiltonian cycles), and that (ii)
contain no perfect matchings \wip $1$. \etn
\noindent{\bf Proof.} Consider the graphs defined by
partitioning all vertices into 2 disjoint sets
$V_0,V_1$ where each $V_j$ induces a clique, no edges
connect $V_0$ to $V_1$, and $V_1$ is chosen randomly and uniformly among
all subsets of odd cardinality of the vertex set. Note that for every
set of $4$ vertices, there are $16$ ways to split its vertices among $V_0$
and $V_1$, and it is not difficult
to check that if $N \geq 5$, then each of these $16$ possibilities
is equally likely.
Therefore, any edge appears w.p.~$\frac 1 2$, and any pair
of edges (whether they share a common vertex or not)
appears w.p.~$\frac 1 4$. Still, the graph is connected
iff all the vertices belong to the same $V_j$ which
happens only w.p.~$2^{-N+1}$ (and only if $N$ is odd). Since
$|V_1|$ is odd, the graph contains no perfect matching.
\qed
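The pairwise-independence claims in this proof can be verified by exact enumeration for small $N$. In the Python sketch below (function name ours, for illustration), $V_1$ ranges over all $2^{N-1}$ odd-cardinality subsets and the probability that a prescribed set of edges appears is computed exactly:

```python
from itertools import combinations
from fractions import Fraction

def edge_probability(N, required_edges):
    """Exact probability that all edges in `required_edges` appear,
    when V_1 is a uniform odd-cardinality subset of {0,...,N-1}, each
    part induces a clique, and no edges cross the parts.  An edge
    {u, v} appears iff u and v land in the same part."""
    odd_subsets = [set(S) for r in range(1, N + 1, 2)
                   for S in combinations(range(N), r)]
    good = sum(all((u in V1) == (v in V1) for (u, v) in required_edges)
               for V1 in odd_subsets)
    return Fraction(good, len(odd_subsets))
```

For $N=5$ this recovers the claimed edge probability $\frac12$, pair probability $\frac14$ (for sharing and disjoint pairs alike), and connectivity probability $2^{-N+1}=\frac1{16}$.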
\ifPersonal %
Note that these graphs are not 3-wise independent, as
the existence of the edges $\{u,v\}$ and $\{v,w\}$
implies the existence of the edge $\{u,w\}$. Also note
that this construction cannot be generalized for $p \neq
\frac 1 2$ (by assigning vertices w.p.~$q$ to $V_0$ and
w.p.~$1-q$ to $V_1$). Indeed, consider the requirements
(i) $p=q^2+(1-q)^2$ and (ii) $p^2=q^3+(1-q)^3$. If we
plug (i) into (ii) we get an equation of degree 4 that
by Vieta's formula has only 3 solutions $p=0,\frac 1
2,1$ (see
\url{http://planetmath.org/encyclopedia/VietasFormula.html}).
We also remark that when $p$ is slightly increased to
$1/2 + N^{-\Theta(1)}$, then 4-\wice {\em does} suffice
for achieving connectivity since then all vertices \as
have degree $> N/2$. Then, by Dirac's Theorem, the
graphs are in fact Hamiltonian. \else
Note that when $p$ is slightly increased to $1/2 +
N^{-\Theta(1)}$, then 4-\wice suffices for achieving
Hamiltonicity (via Dirac's Theorem), because then \as
all vertices have degree $> N/2$.
\fi%
\mysubsection{High-connectivity}
\label{section_high_conn}
\btn{Achieving optimal connectivity}
\label{kwise_gives_kappa_jumbl}
There exists an absolute constant $c$, \st for all
\kwig \ens{\gnkm} the connectivity number is \as $\opmo pN$
when either
\begin{itemize}
\item
$k\geq 4$~~and~~$p \gg N^{-\frac 1 {3}}$;~~or
\item
$k\geq \log(N)$~~and~~$p \geq c \frac{\log(N)}{N}$.
\end{itemize}
\etn
\noindent{\bf Proof.}
The connectivity is certainly not larger than $(1+o(1))pN$, as it
is upper-bounded by the minimum degree.
By Theorem 2.5 in Thomason's \cite{t} $\kappa \geq d-
\alpha/p$ holds for any $(p,\alpha)$-jumbled graph with
minimal degree $\geq d$. Thus, achieving $\kappa \gtrsim
pN$, reduces to obtaining (i) $d = (1\pm o(1))pN$, and
(ii) $\alpha \ll pd$.
Condition (i) \as holds by Lemma \ref{kwise_gives_deg}.
By Theorem \ref{kwise_gives_jumbl}, we \as achieve
$(c_3\sqrt{pN})$-jumbledness for some constant $c_3$,
so condition (ii) becomes $p^2N \gg \sqrt{pN}$. This
proves the first part of the theorem. To prove the
second we note, first, that we may assume that $p \ll
1$ (since otherwise $4$-wise independence suffices).
Let $S$ be a smallest separating set of vertices,
assume that $|S|$ is smaller than $(1-o(1))pN$, let
$U$ be the smallest connected component of $G-S$ and
let $W$ be the set of all vertices but those in $U
\cup S$. Clearly $|W| \geq (\frac{1}{2}-o(1))N$. Note
that $e(U,W)=0$, but by jumbledness $e(U,W) \geq
p|U||W| -c_3 \sqrt {pN |U||W|}$. This implies, using
the fact that $|W|>N/3$, that $|U| \leq
\frac{3c_3^2}{p}$. Using jumbledness again, $e(U,S)
\leq p|U||S|+c_3 \sqrt {pN |U||S|}$ but as all degrees
are at least $(1-o(1))pN$, $e(U,S) \geq
(1-o(1))pN|S|-e(U) \geq (1-o(1))pN|U|-p|U|^2- c_3
\sqrt {pN} |U| \geq |U| (1-o(1))pN$, where here we
used the fact that $|U| \leq O(1/p)$ and that
$\sqrt{pN} =o(pN)$. This implies that either $p|U||S|
\geq \frac{1}{2}|U|pN$, implying that $|S| \geq N/2
\gg pN$, as needed, or $c_3 \sqrt {pN|U||S|} \geq
\frac{1}{3}|U|pN$, implying that $|S| \geq
\frac{1}{9c_3^2} |U| pN$ which is bigger than $pN$
provided $|U| \geq 9c_3^2$. However, if $|U|$ is
smaller, then surely $|S| \geq (1-o(1))pN$, since all
degrees are at least $(1-o(1))pN$ and every vertex in
$U$ has all its neighbors in $U \cup S$.
\qed
\mysubsection{Thresholds for the Appearance of
Subgraphs}\label{section_sub_graphs}
For a fixed non-empty graph $H$, let $\rho(H)$ and
$p^*_H$ be as in Section \ref{intro_H_copies}.
\bob {Preserving the threshold for appearance of
sub-graphs} \label{kwise_gives_subgraph_threshold}
There exists a function $D(v)=\opmo \frac {v^4}{16}$
\st for any graph $H$ with at most $v$ vertices, and
for all \kwig \ens{\gnkm} with $k \geq D(v)$ the
following holds. Let $A$ denote the event that $H$
appears in $\gnkm$~ (not necessarily as an induced
sub-graph). Then
\begin{itemize}
\item
If $p(N) \ll p^*_H(N)$ then $(\neg A)$ \as holds.
\item
If $p(N) \gg p^*_H(N)$ then $A$ \as holds.
\end{itemize} \eob
\noindent{\bf Proof.}
The proof (given in Appendix
\ref{kwise_gives_subgraph_thresholdPrf}) applies
Rucinski \andCoAuthers Vince's \cite{rv} to derive a
lower-bound on the minimal $k$ sufficient for the
original \gnp argument to hold. \qed
\btn {Defying the threshold for appearance of sub-graphs}\label{sub_graphs_defy}
For any (fixed) graph $H$ that satisfies%
\footnote{This condition rules out only graphs $H$ that
are a collection of disjoint edges. For such graphs
$\rho(H)=2$, so clearly no $H$-copies can be produced
(even if $k=1$) when $p(N) \ll p^*_H(N)= N^{-2}$.}
$\rho(H)<2$, there exist \kwig \ens{\gnkm} where
$k=\ceil{\frac 2 {\rho(H)} -1}$ and $p(N) \ll p^*_H(N)$
\st $H$ \as appears in $\gnkm$~ as an induced sub-graph.
\etn
\ifPersonal
\begin{remark}
The term $\ceil{\frac 2 {\rho(H)} -1}$ is merely the
maximal $k$ \st $k \lneq \frac 2 {\rho}$ (verified by
separately checking for either integer or non-integer
$\frac 2 {\rho}$).\end{remark}
\begin{remark} \label{remark_rho_cond_reasonable}
Note that whenever $\rho \lneq 2$, we have
$\ceil{\frac 2 {\rho(H)} -1} \geq 1$, so the condition
on $k$ is reasonable.\end{remark}
\fi
\noindent{\bf Proof.} Theorem \ref{sub_graphs_defy}
relies on Lemma
\ref{kwise_construction_unexpected_sub_graphs}. This
lemma considers the appearance of the sub-graph $H_N$ in
$\gnkm$~ where $\ensm{H_N}$ is any sequence of graphs
(possibly) with unbounded order.
\ble {\kwig with unexpected appearance of sub-graphs}
\label{kwise_construction_unexpected_sub_graphs}
Let $\ensm{H_N}$ be a sequence of graphs where $H_N$ has
exactly $S(N) < \sqrt N$ vertices, $e_1(N)$ edges and
$e_0(N)$ non-edges. Assume that for each $N$ there
exist $(\binom {S(N)} 2,k(N),p(N))$-variables \st with
probability $\Delta(N) \gg (S(N)/N)^2$ it holds that
the first $e_0(N)$ variables attain value $0$
and the next $e_1(N)$ variables attain value
$1$. Then there exist \kwig \ens{\gnkm} that \as contain
$H_N$-copies as induced sub-graphs. \ele
\noindent{\bf Proof (Lemma
\ref{kwise_construction_unexpected_sub_graphs}).} Fix
$N$, and set $H=H_N, S=S(N), e_i=e_i(N), k=k(N), p=p(N),$
\Delta=\Delta(N).$ We construct graphs $\gnkm$~ that \as
contain $H$ copies. Given the $N$ vertices,
let $\{V_j\}_{j=1}^M$ be any maximal collection of {\em
edge-disjoint} vertex-sets, each of size $|V_j|=S$.
For each $j$, decide the internal edges of $V_j$ by some $(\binom S 2,k,p)$-variables \st $H$ is induced by $V_j$ with probability $\Delta$. This can be done by appropriately defining which specific edge in $V_j$ is decided by which specific variable.
Critically, the constructions for distinct sets $V_j$
are totally independent. The $R=\binom N 2 - M\binom S
2$ remaining edges
can be decided by any $(R,k,p)$-variables. The
resulting graph is clearly \kwins.
The main point is that (i) the events of avoiding
$H$-copies on the various sets $V_j$ are totally
independent (by the edge-disjointness of the $V_j$-s),
and that (ii) in our \kw case $\Delta$ is rather large
(compared with the totally independent case). Thus,
avoiding $H$-copies on {\em any} of the $V_j$-s is
unlikely.
Indeed, let $B$ denote the event that no $H$-copies appear in the resulting graph, while $B'$ only denotes the event that none of the $V_j$-s induces $H$.
By Wilson's \cite{wil} and Kuzjurin's \cite{kuz} we have $M= \Theta(N^2/S^2)$\ifPersonal~(which is obviously optimal), \else, \fi so
$$\pr{B} \leq \pr{B'}
= (1-\Delta)^{M} \leq
e^{-\Theta\lb\frac {\Delta N^2}{S^2}\rb},$$
which vanishes by our requirement that
$\Delta \gg (S/N)^2.$ %
\qed %
~(Lemma \ref{kwise_construction_unexpected_sub_graphs})
\noindent{\bf Completing the proof of Theorem
\ref{sub_graphs_defy}}.
For $v=v(H), \rho=\rho(H), p^*=p^*_H$,
and some $1 \ll f(N)\leq N^{o(1)}$, define $p$ \st $p^{-1}$ is the minimal power of 2 that is larger than $\frac {f(N)}{p^*}$. As desired, $p \ll p^*$.
Let $e_1$ and $e_0$ respectively denote the number of edges and non-edges in $H$.
With $M=\binom v 2$ and $F=1/p$, we apply Lemma \ref{modify_JoffeCG_lem} to produce $(M,k,p)$-variables
\st \wip $\geq F^{-k}$ the first $e_0$ variables have value 0, and the remaining $e_1$ variables have value 1.
By Lemma \ref{kwise_construction_unexpected_sub_graphs}, the latter immediately implies the existence of \kwig that \as contain $H$-copies as long as $F^{k} \ll (N/v)^2$. As $F=1/p=N^{\rho+o(1)}$, this $\ll$ requirement translates to $k\rho \lneq 2$. \qed
~(Theorem \ref{sub_graphs_defy})
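For completeness, the final calculation in the proof above reads as follows (here $v$ is fixed, so $(N/v)^2 = \Theta(N^2)$):

```latex
F^{k} \;=\; \left(N^{\rho+o(1)}\right)^{k} \;=\; N^{k\rho+o(1)}
\;\ll\; (N/v)^{2} \;=\; \Theta\!\left(N^{2}\right)
\quad\Longleftrightarrow\quad k\rho \lneq 2 .
```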
\mysubsection{The Chromatic Number}\label{section_color}
\bob {Preserving the chromatic-number lower bound}
\label{chi_lo_bound} For any $c>0$ there exists some
$d>0$, \st all \kwig \ens{\gnkm} with $(\log(N))^{-c} \leq
p \leq 1 -N^{-o(1)}$ and $k \geq d (\log(N))^{c+1}$
\as have chromatic number $\chi \geq \frac
{N\log(1/(1-p))} {2\log(pN)}$. \eob
\ifPersonal
\noindent{\bf Proof.} Let $I(G)$ denote the independence
number of (a single) $N$-vertex graph $G$, so
$\chi(G) \geq \frac N {I(G)}$.
Whenever $k \geq \Theta \lb (\frac 1 p \log(pN))^2
\rb$, our \kwice upper-bound on $I$ (observation
\ref{kwise_gives_exact_indp_num}) shows that \as
$I(\mathcal G^{k}(N,p)) \leq \frac {2\log(pN)}{\log(1/(1-p))}= O\lb
\frac 1 p \log(pN) \rb$ (here the $O(\cdot)$ uses
$\log(1/(1-p)) = \Theta(p)$ as $p \rightarrow 0$),
so $\chi(\mathcal G^{k}(N,p)) \geq \Omega \lb \frac
{pN}{\log(pN)}\rb$. \qed \else \noindent{\bf Proof.}
Let $I(G)$ denote the independence number of (a
single) $N$-vertex graph $G$. Clearly, $\chi(G) \geq
\frac N {I(G)}$, so observation \ref{chi_lo_bound}
follows from our \kwice upper-bound on $I$ (observation
\ref{kwise_gives_exact_indp_num}). \qed \fi
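The lower bound $\chi\geq N\log(1/(1-p))/(2\log(pN))$ is straightforward to evaluate. A small numeric sketch (taking logs base 2, and with sample values of $N$ and $p$ that are illustrative choices):

```python
import math

def chi_lower_bound(N, p):
    """Lower bound chi >= N * log2(1/(1-p)) / (2 * log2(p*N)), coming
    from the a.s. upper bound I <= 2*log2(p*N)/log2(1/(1-p)) on the
    independence number, via chi >= N / I."""
    indep_ub = 2 * math.log2(p * N) / (-math.log2(1 - p))
    return N / indep_ub

# For small p the bound is of order p*N/log(p*N), since
# log2(1/(1-p)) = Theta(p):
lb = chi_lower_bound(N=10 ** 6, p=0.01)
```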
\btn {Preserving the chromatic-number upper bound}
\label{chi_up_bound} There exists an absolute constant
$c$ \st the following holds. All \kwig \ens{\gnkm} with $p
\leq 1/2$ \as have chromatic number $\chi \leq \frac
{cN\log(1/(1-p))} {\log(pN)}$, whenever either: \ben
\item
$k\geq 12$~and~$p \geq N^{-\frac 1 {75}}$;~~or
\item
$k\geq \log(N)$~and~$p \geq c \frac {\log(N)}{N}$.\een
\etn
\noindent {\bf Remark.} No special effort was made to
optimize the constants $\frac 1 2$ and $\frac 1 {75}$.
\noindent{\bf Proof (sketch).} Since $p\leq 1/2$ is bounded away
from $1$, we have $\log(1/(1-p)) = \Theta(p)$, so it suffices to show
that \as $\chi \leq O ( \frac {pN} {\log(pN)} )$.
Item 1 is based on the result of Alon, Krivelevich, and
Sudakov \cite{aks}.
Specifically, choose $\delta=1/25$, \st by item 1 in
Lemma \ref{kwise_gives_deg} (with $\eps=(\log(N))
p^{-1/2} N^{-3/8}$) and by item 1 in Lemma
\ref{kwise_gives_codeg} (with $\gamma= (\log(N))
p^{-1} N^{-1/6}$), \as all the degrees are lower
bounded by $pN(1- p^{-1/2} N^{-3/8+o(1)}) \geq pN-
N^{1-4\delta},$ and all co-degrees are upper bounded
by $p^2N (1+ p^{-1} N^{-1/6+o(1)}) \leq p^2N+
N^{1-4\delta}.$
By Theorem 1.2 in \cite{aks}, these conditions (with
$\delta <1/4$ and $p\geq N^{-\frac {\delta}{3}}$)
imply that $\chi \leq \frac {4pN}{\delta \ln N} \leq
O(\frac{pN}{\log(pN)}).$
Item 2 follows from jumbledness and the main result of
\cite{aks1} (which is based on \cite{joh}), by which
any graph with maximum degree $d$ in which every
neighborhood of a vertex contains at most
$d^{2-\beta}$ edges (for some constant $\beta$) has
chromatic number $\chi \leq O(\frac d {\log d})$. \qed
\paragraph{\Large Acknowledgements}
The second author wishes to thank Oded Goldreich for
his encouragement, and Ori Gurel-Gurevich, Chandan
Kumar Dubey, Ronen Gradwohl, Moni Naor, Eran Ofek, Ron
Peled, and Ariel Yadin for useful discussions.
\title{Exact and Positive Controllability of Boundary Control Systems}
% https://arxiv.org/abs/1610.00954
\begin{abstract}
Using the semigroup approach to abstract boundary control problems we characterize the space of all exactly reachable states. Moreover, we study the situation when the controls of the system are required to be positive. The abstract results are applied to flows in networks with static as well as dynamic boundary conditions.
\end{abstract}
\section{Introduction}
This paper is a continuation of \cite{EKNS:08, EKKNS:10} where we introduced a semigroup approach to boundary control problems and applied it to the control of flows in networks. While in these previous works we concentrated on maximal \emph{approximate} controllability, we now focus on the \emph{exact}- and \emph{positive} controllability spaces. In particular, this will generalize and refine results given in \cite{BBEAM:13, EKNS:08, EKKNS:10} where further references to the related literature can be found.
\smallbreak
As a simple motivation, we consider, as in \cite{EKNS:08}, a transport process along the edges of a finite network. This system is governed through the transmission conditions in the vertices of the network, which represent the ``\emph{boundary space}'' for our problem. We would then like to control the behavior of this system by acting upon a single node only. In this context it is reasonable to ask the following questions.
\begin{itemize}
\item Can we reach all possible states in finite time?
\item If not, can we describe the maximal possible set of reachable states?
\item Is the choice of a particular control node important?
\item Which states can be reached if only positive controls are allowed?
\end{itemize}
In Section~\ref{examples} we will address all these questions. To this end we first recall in Section~\ref{TAF} our abstract framework from \cite{EKKNS:10} as well as some basic results concerning boundary control systems. In Section~\ref{sec:EC} we then characterize boundary admissible control operators and describe the corresponding exact reachability space. In Section~\ref{Sec:pos} we turn our attention to positive boundary control systems on Banach lattices. Finally, in Section~\ref{examples} we apply our results and explicitly compute the exact (positive) reachability spaces for three different examples of a transport equation controlled at the boundary: in $\mathbb R^m$, in a network, and in a network with dynamic boundary conditions.
\section{The Abstract Framework}\label{TAF}
We start by recalling our general setting from \cite{EKKNS:10}.
\begin{af}\label{af-bcs}
We consider
\begin{enumerate}[(i)]
\item three Banach spaces $X$, $\partial X$ and $U$, called the
\emph{state}, \emph{boundary} and \emph{control space}, resp.;
\item a closed, densely defined \emph{system operator} $A_m:D(A_m)\subseteq X\to
X$;
\item a \emph{boundary operator} $Q\in\mathcal{L}([D(\Am)],\partial X)$;
\item a \emph{control operator} $B\in\mathcal{L}(U,\partial X)$.
\end{enumerate}
\end{af}
For these operators and spaces and a \emph{control function}
$u\in\rL^1_{\mathrm{loc}}(\mathbb R_+,U)$ we then consider the \emph{abstract Cauchy problem
with boundary control}\footnote{We denote by $\dot x(t)$ the derivative of $x$ with respect to the ``time'' variable $t$.}
\begin{alignat}{2}\label{ACPBC}
\begin{cases}
\dot x(t)=A_m x(t),&t\ge0,\\
Qx(t)=Bu(t), &t\ge0,\\
x(0)=x_0.
\end{cases}
\end{alignat}
A function $x(\cdot)=x(\cdot,x_0,u)\in\rC^1(\mathbb R_+,X)$ with $x(t)\in D(A_m)$
for all $t\ge0$ satisfying \eqref{ACPBC} is called a \emph{classical
solution}. Moreover, we denote the \emph{abstract boundary control
system} associated to \eqref{ACPBC} by $\Sigma_{\textrm{BC}}(\Am,B,Q)${}.
\smallbreak In order to investigate \eqref{ACPBC} we make the following standing assumptions which in particular ensure that the \emph{un}controlled abstract Cauchy problem, i.e., \eqref{ACPBC} with $B=0$, is well-posed.
\begin{ma}
\makeatletter
\hyper@anchor{\@currentHref}%
\makeatother
\label{ma-bcs}
\begin{enumerate}[(i)]
\item The restricted operator $A\subset A_m$ with domain $D(A):=\ker Q$ generates
a strongly continuous semigroup $(T(t))_{t\ge0}$ on $X$;
\item the boundary
operator $Q:D(A_m)\to\partial X$ is surjective.
\end{enumerate}
\end{ma}
Under these assumptions the following properties have been shown in \cite[Lem.~1.2]{Gre:87}.
\begin{lemma}\label{lem-Gre} Let Assumptions~\ref{ma-bcs} be
satisfied. Then the following assertions are true for all $\lambda,\mu\in\rho(A)$.
\begin{enumerate}[(i)]
\item $D(A_m)=D(A)\oplus\ker(\lambda-A_m)$;
\item $Q|_{\ker(\lambda-A_m)}$ is invertible and the operator
\[Q_\lambda:=(Q|_{\ker(\lambda-A_m)})^{-1}:\partial X\to\ker(\lambda-A_m)\subseteq X\]
is bounded;
\item $R(\mu,A)\,Q_\lambda=
R(\lambda,A)\,Q_\mu$.
\end{enumerate}
\end{lemma}
The following operators are essential to obtain explicit
representations of the solutions of the boundary control problem \eqref{ACPBC}.
\begin{definition}
For $\lambda\in\rho(A)$ we call the operator $Q_\lambda$, introduced in
Lemma~\ref{lem-Gre}.(ii), the abstract \emph{Dirichlet operator} and define
\[B_\lambda:=Q_\lambda B \in \mathcal{L}\bigl(U,\ker(\lambda-A_m)\bigr)\subset\mathcal{L}(U,X).\]
\end{definition}
By \cite[Prop.~2.7]{EKKNS:10} the solutions of \eqref{ACPBC} can be represented by the following extrapolated version of the variation of parameters formula.
\begin{proposition}\label{prop-vpf-ex} Let $x_0\in X$, $u\in\rL^1_{\mathrm{loc}}(\mathbb R_+,U)$ and $\lambda\in\rho(A)$. If
$x(\cdot)=x(\cdot,x_0,u)$ is a classical solution of \eqref{ACPBC}, then it
is given by the \emph{variation of parameters formula}
\begin{equation}\label{eq-vpf-1}
x(t)=T(t)x_0+(\lambda-A_{-1})\int_0^t T(t-s)B_\lambda u(s)\,ds,\quad
t\ge0.
\end{equation}
\end{proposition}
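In the finite-dimensional bounded case the extrapolation is unnecessary and \eqref{eq-vpf-1} reduces to the classical variation of parameters formula for $\dot x=Ax+bu$. The following numerical sketch checks this reduction for an illustrative $2\times2$ matrix $A$, injection vector $b$, and constant control $u\equiv1$ (all choices hypothetical, made only for the sanity check):

```python
import numpy as np
from scipy.linalg import expm

# Finite-dimensional check of the variation of parameters formula:
# with T(t) = e^{tA} and b_lam := R(lam, A) b, the state
#   x(t) = T(t)x0 + (lam - A) int_0^t T(t-s) b_lam u(s) ds
# solves x'(t) = A x(t) + b u(t).
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
b = np.array([1.0, 0.0])
x0 = np.array([1.0, 0.0])
lam = 2.0
b_lam = np.linalg.solve(lam * np.eye(2) - A, b)     # R(lam, A) b

t, n = 1.0, 2000
ds = t / n
mid = (np.arange(n) + 0.5) * ds                     # midpoint rule
integral = sum(expm((t - s) * A) @ b_lam * ds for s in mid)  # u == 1
x_formula = expm(t * A) @ x0 + (lam * np.eye(2) - A) @ integral

# Closed-form reference: x(t) = e^{tA} x0 + A^{-1}(e^{tA} - I) b.
x_exact = expm(t * A) @ x0 + np.linalg.solve(A, (expm(t * A) - np.eye(2)) @ b)
```

Since $(\lambda-A)$ commutes with $e^{(t-s)A}$, the prefactor cancels against $R(\lambda,A)$ and both expressions agree up to quadrature error.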
Our aim in the sequel is to investigate which states in $X$ can be \emph{exactly} reached from $x_0=0$ by solutions of \eqref{ACPBC}. To this end we have to impose an additional assumption which, by \eqref{eq-vpf-1}, ensures that solutions for $\mathrm{L}^p$-controls have values in $X$.
\begin{definition}
Let $1\le p\le+\infty$.
Then the control operator $B\in\mathcal{L}(U,\partial X)$ is called \emph{$p$-boundary admissible} if there exist $t>0$ and $\lambda\in\rho(A)$ such that
\begin{equation}\label{b-admiss}
\int_0^{t} T(t-s)B_\lambda u(s)\,ds\in D(A)\quad\text{for all }u\in{\rL^p\bigl([0,t],U\bigr)}.
\end{equation}
\end{definition}
\begin{remark}\label{rem:Bdd-op}
From Lemma~\ref{lem-Gre}.(iii) it follows that $(\lambda-A_{-1})Q_\lambda\in\mathcal{L}(\partial X,X_{-1})$, hence also
\[
B_A:=(\lambda-A_{-1})B_\lambda\in\mathcal{L}(U,X_{-1})
\]
is independent of $\lambda\in\rho(A)$. Then $B\in\mathcal{L}(U,\partial X)$ is $p$-boundary admissible if and only if $B_A$ is $p$-admissible in the usual sense, cf. \cite[Def.~4.1]{Wei:89a}.
This implies that if \eqref{b-admiss} is satisfied for some $t>0$ then it is satisfied for every $t>0$. Moreover, we note that $B$ is $1$-boundary admissible if $\ker(\lambda-A_m)\subset{F_1^A}$, see \cite[Lem.~A.3]{EKKNS:10}. Finally, since ${\rL^p\bigl([0,t],U\bigr)}\subset\rL^1([0,t],U)$ it follows that $1$-boundary admissibility implies $p$-boundary admissibility for all $p>1$.
\end{remark}
\smallbreak
Now assume that $B\in\mathcal{L}(U,\partial X)$ is $p$-boundary admissible. Then for fixed $\lambda\in\rho(A)$ and $t>0$ the operators $\sB_{t}^{\textrm{BC}}:{\rL^p\bigl([0,t],U\bigr)}\to X$ given by
\begin{equation}\label{bt-bc}
\sB_{t}^{\textrm{BC}} u:= (\lambda- A)\int_0^tT(t-s)B_\lambda u(s)\,ds
=\int_0^tT_{-1}(t-s)B_A u(s)\,ds
\end{equation}
are called the \emph{controllability maps} of the system $\Sigma_{\textrm{BC}}(\Am,B,Q)$, where the second integral initially is taken in the extrapolation space $X_{-1}$. Note that by the closed graph theorem $\sB_{t}^{\textrm{BC}}\in\mathcal{L}({\rL^p\bigl([0,t],U\bigr)},X)$. Hence, this definition is independent of the particular choice of $\lambda\in\rho(A)$ and gives the (unique) classical solution of \eqref{ACPBC} for given $u\in \rW^{2,1}([0,t],U)$ and $x_0=0$. This motivates the following definition.
\begin{definition}
\begin{enumerate}[(a)]
\item
The \emph{exact reachability space in time $t\ge0$} of $\Sigma_{\textrm{BC}}(\Am,B,Q)${} is defined by\footnote{By $\operatorname{rg}(T)$ we denote the range $TX\subseteq Y$ of an operator $T:X\to Y$.}
\begin{equation}\label{eq-reach-sp-t}
e\sR_t^{\textrm{BC}}:=\operatorname{rg}(\sB_{t}^{\textrm{BC}}).
\end{equation}
Moreover, we define the \emph{exact reachability space} (in arbitrary time) by
\begin{equation}\label{eq-reach-sp}
e\sR^{\textrm{BC}}:=\bigcup_{t\ge0}\operatorname{rg}(\sB_{t}^{\textrm{BC}})
\end{equation}
and call $\Sigma_{\textrm{BC}}(\Am,B,Q)${} \emph{exactly controllable} (in arbitrary time) if $e\sR^{\textrm{BC}}=X$.
\item
The \emph{approximate reachability space in time $t\ge0$} of $\Sigma_{\textrm{BC}}(\Am,B,Q)${} is defined by
\begin{equation}\label{eq-app-reach-sp-t}
a\sR_t^{\textrm{BC}}:=\overline{e\sR_t^{\textrm{BC}}}.
\end{equation}
Moreover, we define the \emph{approximate reachability space} (in arbitrary time) by
\begin{equation}\label{eq-app-reach-sp}
a\sR^{\textrm{BC}}:=\overline{\;\bigcup_{t\ge0}a\sR_t^{\textrm{BC}}\,}
\end{equation}
and call $\Sigma_{\textrm{BC}}(\Am,B,Q)${} \emph{approximately controllable} if $a\sR^{\textrm{BC}}=X$.
\end{enumerate}
\end{definition}
From \cite[Thm.~2.12 \& Cor.~2.13]{EKKNS:10} we obtain the following properties and representations of the approximate reachability space.
\begin{proposition}\label{prop:aR}
Assume that $B\in\mathcal{L}(U,\partial X)$ is $p$-boundary admissible.
Then the following holds.
\begin{enumerate}[(i)]
\item $a\sR^{\textrm{BC}}$ is a closed linear subspace, invariant under $(T(t))_{t\ge0}$ and $R(\lambda,A)$ for $\lambda>\operatorname{{\omega_0}}(A)$.
\item $a\sR^{\textrm{BC}}=\overline{\operatorname{span}}\, \bigcup_{\lambda>\omega} \operatorname{rg}(B_\lambda)$ for some $\omega>\operatorname{{\omega_0}}(A)$.
\item $a\sR^{\textrm{BC}}\subseteq\overline{\operatorname{span}}\, \bigcup_{\lambda>\operatorname{{\omega_0}}(A)}\ker(\lambda-A_m)$.
\end{enumerate}
\end{proposition}
Part~(iii) shows that there is an upper bound for the reachability space
depending on the eigenvectors of $A_m$ only, independent of the
control operator $B$. This justifies the following notion.
\begin{definition}\label{RBCmax}
The \emph{maximal reachability space} of $\Sigma_{\textrm{BC}}(\Am,B,Q)${} is defined by
\begin{equation*}
\sR^{\textrm{BC}}_{\textrm{max}}:=\overline{\operatorname{span}}\,\bigcup_{\lambda>\operatorname{{\omega_0}}(A)} \ker
(\lambda-A_m).
\end{equation*}
The system $\Sigma_{\textrm{BC}}(\Am,B,Q)${} is called \emph{maximally controllable} if $e\sR^{\textrm{BC}}=\sR^{\textrm{BC}}_{\textrm{max}}$.
\end{definition}
We stress that $\sR^{\textrm{BC}}_{\textrm{max}}\ne X$ may happen (some basic examples are provided in \cite[Sec.~5]{EKNS:08}), hence the relevant question about exact or approximate
controllability is indeed to compare $e\sR^{\textrm{BC}}$ or $a\sR^{\textrm{BC}}$ to the space
$\sR^{\textrm{BC}}_{\textrm{max}}$ and not to the whole space $X$, as it is usually
done in the classical situation.
After this short summary on boundary control systems $\Sigma_{\textrm{BC}}(\Am,B,Q)${} taken mainly from \cite{EKKNS:10} in the context of approximate controllability, we now turn our attention to the case of \emph{exact} controllability.
\section{Exact controllability}
\label{sec:EC}
\smallbreak
We start this section by giving two characterizations of $p$-boundary admissibility for a control operator $B$ which frequently simplifies the explicit computation of the associated controllability map $\sB_t^{\textrm{BC}}$.
Here for $\lambda\in\mathbb C$ we introduce the function $\varepsilon_{\la}:\mathbb R\to\mathbb C$ by $\varepsilon_{\la}(s):=e^{\lambda s}$.
Moreover, for $f\in{\rL^p[0,t]}$ and $u\in U$ we define
\[
f\otimes u\in{\rL^p\bigl([0,t],U\bigr)}
\quad\text{by}\quad
(f\otimes u)(s):=f(s)\cdot u.
\]
Finally, we denote by $\eins_{[\alpha,\beta]}$ the characteristic function of the interval $[\alpha,\beta]\subset[0,t]$.
\begin{proposition}\label{prop:range-B1}
For a control operator $B\in\mathcal{L}(U,\partial X)$ the following are equivalent.
\begin{enumerate}[(a)]
\item $B$ is $p$-boundary admissible.
\item There exist $\lambda\in\rho(A)$, $t>0$ and $M\in\mathcal{L}\bigl({\rL^p\bigl([0,t],U\bigr)},X\bigr)$ such that for all $0\le\alpha\le\beta\le t$ and $v\in U$
\begin{equation}\label{eq:strange1}
\bigl(e^{\lambda\beta}\,T(t-\beta)-e^{\lambda\alpha}\,T(t-\alpha)\bigr)B_\lambda v=
M(\varepsilon_{\la}\cdot\eins_{[\alpha,\beta]}\otimes v).
\end{equation}
\item There exist $t>0$, $\lambda_0>\operatorname{{\omega_0}}(A)$ and $M\in\mathcal{L}\bigl({\rL^p\bigl([0,t],U\bigr)},X\bigr)$ such that for all $\lambda\ge\lambda_0$ and $v\in U$
\begin{equation}\label{eq:strange2}
\bigl(e^{\lambda t}-T(t)\bigr)B_\lambda v=
M(\varepsilon_{\la}\otimes v).
\end{equation}
\end{enumerate}
Moreover, in this case the controllability map is given by $\sB_t^{\textrm{BC}}=M$.
\end{proposition}
\begin{proof} Let $u=\varepsilon_{\la}\cdot\eins_{[\alpha,\beta]}\otimes v$ for some $\lambda\in\rho(A)$, $0\le\alpha\le\beta\le t$ and $v\in U$. Then
\begin{align}
\int_0^tT(t-s)B_\lambda u(s)\,ds\notag
&=e^{\lambda t}\int_\alpha^\beta e^{-\lambda(t-s)}\, T(t-s)B_\lambda v\,ds\\\notag
&=e^{\lambda t}\int_{t-\beta}^{t-\alpha} e^{-\lambda s}\, T(s)B_\lambda v\,ds\\\label{eq:B-zul0}
&=R(\lambda,A)\cdot\bigl(e^{\lambda\beta}\,T(t-\beta)-e^{\lambda\alpha}\,T(t-\alpha)\bigr)B_\lambda v.
\end{align}
(a)$\Rightarrow$(b). Since by assumption $B$ is $p$-boundary admissible we have $\sB_{t}^{\textrm{BC}}\in\mathcal{L}({\rL^p\bigl([0,t],U\bigr)},X)$. Hence, \eqref{bt-bc} and \eqref{eq:B-zul0} imply \eqref{eq:strange1} for $M=\sB_t^{\textrm{BC}}$.
\smallbreak
(b)$\Rightarrow$(a). We start by proving \eqref{b-admiss}. The idea is to show this first for functions of the type $u=\varepsilon_{\la}\cdot\eins_{[\alpha,\beta]}\otimes v$. Then by linearity it also holds for linear combinations of such functions and a density argument implies \eqref{b-admiss} for arbitrary $u\in{\rL^p\bigl([0,t],U\bigr)}$. To this end let $u=\varepsilon_{\la}\cdot\eins_{[\alpha,\beta]}\otimes v$ for $[\alpha,\beta]\subset[0,t]$ and $v\in U$. Then \eqref{eq:strange1} and \eqref{eq:B-zul0} imply
\begin{align}
\int_0^tT(t-s)B_\lambda u(s)\,ds\notag
&=R(\lambda,A)\cdot M(\varepsilon_{\la}\cdot\eins_{[\alpha,\beta]}\otimes v)\\
&=R(\lambda,A)\cdot Mu\label{eq:B-zul}.
\end{align}
Note that the multiplication operator $\sM_\lambda\in\mathcal{L}\bigl({\rL^p\bigl([0,t],U\bigr)}\bigr)$ defined by $\sM_\lambda u:=\varepsilon_{\la}\cdot u$ is an isomorphism (with bounded inverse $\mathcal{M}_{-\lambda}$). Hence, it maps dense sets of ${\rL^p\bigl([0,t],U\bigr)}$ into dense sets. Since the step functions are dense in ${\rL^p\bigl([0,t],U\bigr)}$ (see \cite[p.14]{ABHN:01}), the linear combinations of functions of the type $\varepsilon_{\la}\cdot\eins_{[\alpha,\beta]}\otimes v$ for $[\alpha,\beta]\subset[0,t]$ and $v\in U$ form a dense subspace of ${\rL^p\bigl([0,t],U\bigr)}$. Thus, we conclude that \eqref{eq:B-zul} holds for all $u\in{\rL^p\bigl([0,t],U\bigr)}$. Clearly this implies that $B$ is $p$-boundary admissible and $\sB_t^{\textrm{BC}}=M$.
\smallbreak
Recall that $B_A=(\lambda-A_{-1})B_\lambda$ is independent of $\lambda\in\rho(A)$. Hence, the equivalence (a)$\Leftrightarrow$(c) follows by similar arguments replacing the total set $\{\varepsilon_{\la}\cdot\eins_{[\alpha,\beta]}\otimes v:0\le\alpha<\beta\le t, v\in U\}$ by the set $\{\varepsilon_{\la}\otimes v:\lambda\ge\lambda_0, v\in U\}$ which by the Stone--Weierstra{\ss} theorem is total as well in ${\rL^p\bigl([0,t],U\bigr)}$ for all $\lambda_0>\operatorname{{\omega_0}}(A)$.
\end{proof}
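The key identity \eqref{eq:B-zul0} is again easy to verify numerically in the matrix case. For $\alpha=0$, $\beta=t$ it states that $\int_0^t e^{(t-s)A}b_\lambda e^{\lambda s}\,ds = R(\lambda,A)\bigl(e^{\lambda t}I-e^{tA}\bigr)b_\lambda$. A sketch with illustrative choices of $A$, $\lambda$, and $t$:

```python
import numpy as np
from scipy.linalg import expm

# Matrix check of \eqref{eq:B-zul0} for u(s) = e^{lam*s} v
# (alpha = 0, beta = t):
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
lam, t = 1.5, 0.8
b_lam = np.linalg.solve(lam * np.eye(2) - A, np.array([1.0, 0.0]))

n = 4000
ds = t / n
s_mid = (np.arange(n) + 0.5) * ds
lhs = sum(expm((t - s) * A) @ b_lam * np.exp(lam * s) * ds for s in s_mid)
rhs = np.linalg.solve(lam * np.eye(2) - A,
                      (np.exp(lam * t) * np.eye(2) - expm(t * A)) @ b_lam)
```

The computation mirrors the one in the proof: substituting $r=t-s$ turns the integral into $e^{\lambda t}\int e^{-\lambda r}T(r)b_\lambda\,dr$, which integrates exactly against the resolvent.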
We note that by linearity it would suffice that Part~(b) of Proposition~\ref{prop:range-B1} is satisfied for $\alpha=0$ and all $0\le\beta\le t$ (or for all $0\le\alpha\le t$ and $\beta=t$).
\begin{corollary}\label{cor:Bn}
Let\footnote{We use the notation $\mathbb N_l:=\{l,l+1,l+2,\ldots\}$ for the set of natural numbers starting at $l\in\mathbb N$.} $n\in\mathbb N_1$ and assume that $B$ is $p$-boundary admissible.
Then for all $u\in{\rL^p\bigl([0,nt],U\bigr)}$
\begin{equation}\label{eq:sBnt}
\sB_{nt}^{\textrm{BC}} u=\sum_{k=0}^{n-1}T(t)^k M u_{k}
\end{equation}
where $u_k\in{\rL^p\bigl([0,t],U\bigr)}$ is defined by
\begin{equation}\label{eq:def-uk}
u_{k}(s)=u\bigl((n-k-1)t+s\bigr)
\end{equation}
and $M\in\mathcal{L}\bigl({\rL^p\bigl([0,t],U\bigr)},X\bigr)$ is the operator from Proposition~\ref{prop:range-B1}.
\end{corollary}
\begin{proof} Let $u\in{\rL^p\bigl([0,nt],U\bigr)}$. Then by \eqref{bt-bc}
\begin{align*}
\sB_{nt}^{\textrm{BC}} u
&=(\lambda-A)\int_0^{nt}T(nt-s)B_\lambda\, u(s)\,ds\\
&=(\lambda-A)\sum_{k=1}^n T\bigl((n-k)t\bigr)\int_{(k-1)t}^{kt} T(kt-s)B_\lambda\, u(s)\,ds\\
&=\sum_{k=1}^n T\bigl((n-k)t\bigr)\cdot(\lambda-A)\int_{0}^t T(t-s)B_\lambda\, u_{n-k}(s)\,ds\\
&=\sum_{k=0}^{n-1} T(t)^k\, \sB_t^{\textrm{BC}} u_{k}.
\qedhere
\end{align*}
\end{proof}
In Section \ref{examples} we will see that \eqref{eq:strange1}, \eqref{eq:strange2}, and \eqref{eq:sBnt} allow us to easily compute the controllability map in the situations studied in \cite[Sect.~4]{EKNS:08} and \cite[Sect.~3]{EKKNS:10} dealing with the control of flows in networks.
\begin{corollary}\label{cor:Reach}
If $B$ is $p$-boundary admissible,
then the exact reachability space in time $nt$ for $n\in\mathbb N_1$ is given by
\begin{equation*}
e\sR_{nt}^{\textrm{BC}}=\l\{\sum_{k=0}^{n-1}T(t)^k M u_k : u_k\in{\rL^p\bigl([0,t],U\bigr)},\; 0\le k\le n-1 \r\},
\end{equation*}
where $M\in\mathcal{L}\bigl({\rL^p\bigl([0,t],U\bigr)},X\bigr)$ is the operator from Proposition~\ref{prop:range-B1}.
\end{corollary}
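Corollary~\ref{cor:Reach} has a familiar finite-dimensional caricature: when $T:=T(t)$ is a matrix and $M$ has finite rank, the set $\{\sum_k T^kMu_k\}$ is the column span of the Kalman-type block matrix $[M,\,TM,\,T^2M,\ldots]$. A minimal sketch (the matrices $T$ and $M$ are illustrative choices):

```python
import numpy as np

# Finite-dimensional caricature: reachable set of x+ = T x + M u is the
# column span of [M, T M, T^2 M, ...] (here n = dim X blocks suffice).
T = np.array([[0.0, 1.0], [-1.0, 0.0]])   # rotation by 90 degrees
M = np.array([[1.0], [0.0]])

n = T.shape[0]
kalman = np.hstack([np.linalg.matrix_power(T, k) @ M for k in range(n)])
rank = np.linalg.matrix_rank(kalman)      # full rank: every state reachable
```

Here $M$ injects along the first coordinate and $T$ rotates it onto the second, so two steps already span the whole state space.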
\section{Positive controllability}\label{Sec:pos}
In this section we are interested in positive control functions yielding positive states. To this end we will make the following
\begin{ada}
The spaces
$X$
and $U$ are Banach lattices.
\end{ada}
Moreover, by $Y^+:=\{y\in Y:y\ge0\}$ we denote the positive cone in a Banach lattice $Y$.
\smallbreak
Note that in the sequel we do \emph{not} make any positivity assumptions on $(T(t))_{t\ge0}$, $B$ or $Q_\lambda$ if not stated otherwise.
\begin{definition}
\begin{enumerate}[(a)]
\item The \emph{exact positive reachability space in time $t\ge0$} of system $\Sigma_{\textrm{BC}}(\Am,B,Q)${} is defined by
\begin{equation}\label{eq-reach-sp-t+}
e^+\sR_t^{\textrm{BC}}:=\Bigl\{\sB_{t}^{\textrm{BC}} u:u\in{\rL^p\bigl([0,t],U^+\bigr)}\Bigr\}.
\end{equation}
Moreover, we define the \emph{exact positive reachability space} (in arbitrary time) by
\begin{equation}\label{eq-reach-sp+}
e^+\sR^{\textrm{BC}}:=\bigcup_{t\ge0}e^+\sR_t^{\textrm{BC}}
\end{equation}
and call $\Sigma_{\textrm{BC}}(\Am,B,Q)${} \emph{exactly positive controllable} (in arbitrary time) if \\$e^+\sR^{\textrm{BC}}=X^+$.
\smallbreak
\item The \emph{approximate positive reachability space in time $t\ge0$} of $\Sigma_{\textrm{BC}}(\Am,B,Q)${} is defined by
\begin{equation}\label{eq-a-reach-sp+}
a^+\sR_t^{\textrm{BC}}:=\overline{e^+\sR_t^{\textrm{BC}}}.
\end{equation}
Moreover, we define the \emph{approximate positive reachability space} (in arbitrary time) by
\begin{equation}\label{eq-app-reach-sp+}
a^+\sR^{\textrm{BC}}:=\overline{\;\bigcup_{t\ge0}a^+\sR_t^{\textrm{BC}}\,}
\end{equation}
and call $\Sigma_{\textrm{BC}}(\Am,B,Q)${} \emph{approximately positive controllable} if $a^+\sR^{\textrm{BC}}=X^+$.
\end{enumerate}
\end{definition}
First we give necessary and sufficient conditions implying that starting from the initial state $x_0=0$ positive controls result in positive states.
\begin{proposition}\label{prop:reach-pos}
Assume that $B\in\mathcal{L}(U,\partial X)$ is $p$-boundary admissible. Then
\begin{equation}\label{eq:pos-R-1}
e^+\sR_t^{\textrm{BC}}\subset X^+
\end{equation}
if and only if
\begin{equation}\label{eq:pos-R-1.5}
a^+\sR_t^{\textrm{BC}}\subset X^+
\end{equation}
if and only if there exists $\lambda\in\mathbb R\cap\rho(A)$ such that
\begin{equation}\label{eq:pos-R-2}
\bigl(e^{\lambda\beta}\,T(t-\beta)-e^{\lambda\alpha}\,T(t-\alpha)\bigr)B_\lambda\ge0
\qquad\text{for all $0\le\alpha\le\beta\le t$.}
\end{equation}
Moreover, if $(T(t))_{t\ge0}$ is positive, then the above assertions are satisfied if and only if
\begin{equation}\label{eq:pos-R-1x}
e^+\sR^{\textrm{BC}}\subset X^+
\end{equation}
if and only if
\begin{equation}\label{eq:pos-R-1.5x}
a^+\sR^{\textrm{BC}}\subset X^+
\end{equation}
if and only if there exists $\lambda>\operatorname{{\omega_0}}(A)$ and $t>0$ such that
\begin{equation}\label{eq:pos-R-3}
\bigl(e^{\lambda s}-T(s)\bigr)B_\lambda\ge0
\qquad\text{for all $0\le s\le t$}
\end{equation}
if and only if there exists $\lambda_0>\operatorname{{\omega_0}}(A)$ such that
\begin{equation}\label{eq:pos-R-4}
B_\lambda\ge0
\qquad\text{for all $\lambda\ge\lambda_0$}.
\end{equation}
\end{proposition}
\begin{proof} The equivalence of \eqref{eq:pos-R-1} and \eqref{eq:pos-R-1.5} follows from the closedness of $X^+$. To show the equivalence of \eqref{eq:pos-R-1} and \eqref{eq:pos-R-2} recall that by \cite[p.14]{ABHN:01} the step functions are dense in ${\rL^p\bigl([0,t],U\bigr)}$. Since the map $u\mapsto u^+$ on ${\rL^p\bigl([0,t],U\bigr)}$ is continuous, we conclude that the positive step functions are dense in ${\rL^p\bigl([0,t],U^+\bigr)}$.
The claim then follows from (the proof of) Proposition~\ref{prop:range-B1} using the boundedness of the controllability map $\sB_t^{\textrm{BC}}$.
Now assume that $(T(t))_{t\ge0}$ is positive. Then the equivalences of \eqref{eq:pos-R-1}, \eqref{eq:pos-R-1.5} with \eqref{eq:pos-R-1x}, \eqref{eq:pos-R-1.5x} follow from Corollary~\ref{cor:Reach} using the fact that the reachability spaces are growing in time. In particular, this implies that if \eqref{eq:pos-R-2} holds for some $t>0$ it holds for arbitrary $t>0$ and choosing $\beta=t$ and $\alpha=0$ we obtain \eqref{eq:pos-R-3} for arbitrary $t>0$.
To show the remaining assertions we fix some $\lambda>\operatorname{{\omega_0}}(A)$ and define on $\mathcal{X}:=X\times U$ the operator matrix
\[
\mathcal{A}:=
\begin{pmatrix}
A-\lambda&0\\0&0
\end{pmatrix},\quad
D(\mathcal{A}):=\Bigl\{\tbinom{x}{y}\in D(A_m)\times U:Qx=By\Bigr\}.
\]
Then by \cite[Cor.~3.4]{Eng:99} the matrix $\mathcal{A}$ generates a $C_0$-semigroup $(\mathcal{T}(t))_{t\ge0}$ given by
\begin{equation}\label{eq:sTt}
\mathcal{T}(s)=\begin{pmatrix}
e^{-\lambda s}T(s)&\bigl(I-e^{-\lambda s}T(s)\bigr)B_{\lambda}\\0&I
\end{pmatrix},
\quad s\ge0.
\end{equation}
Moreover, by \cite[Lem.~3.1]{Eng:99} we have $(0,+\infty)\subset\rho(\mathcal{A})$ and
\begin{equation} \label{eq:RsA}
R(\mu,\mathcal{A})=
\begin{pmatrix}
R(\mu+\lambda,A)&\frac1\mu B_{\mu+\lambda}\\
0&\frac1\mu
\end{pmatrix}
\quad\text{for $\mu>0$}.
\end{equation}
Now, if \eqref{eq:pos-R-3} holds then $\mathcal{T}(s)\ge0$ for all $0\le s\le t$ which implies that $(\mathcal{T}(t))_{t\ge0}$ is positive which is equivalent to the fact that $\mathcal{A}$ is resolvent positive. However, by \eqref{eq:RsA} the latter is the case if and only if \eqref{eq:pos-R-4} is satisfied which shows the equivalence of \eqref{eq:pos-R-3} and \eqref{eq:pos-R-4}.
Finally, if \eqref{eq:pos-R-3} holds, then
\begin{align*}
\bigl(e^{\lambda\beta}\,T(t-\beta)-e^{\lambda\alpha}\,T(t-\alpha)\bigr)B_\lambda
&=e^{\lambda\beta}\,T(t-\beta)\cdot\bigl(I-e^{-\lambda(\beta-\alpha)}\,T(\beta-\alpha)\bigr)B_\lambda\\
&\ge0
\end{align*}
for all $0\le\alpha\le\beta\le t$. This proves \eqref{eq:pos-R-2} and completes the proof.
\end{proof}
In the sequel we use the notation $\operatorname{co} M$ and $\overline{\operatorname{co}}\, M$ to indicate the convex hull and the closed convex hull of a set $M\subset X$, respectively.
\begin{proposition}\label{prop:aRp-char}
Assume that $B\in\mathcal{L}(U,\partial X)$ is $p$-boundary admissible and that $e^+\sR_t^{\textrm{BC}}\subset X^+$.
Then the following holds.
\begin{enumerate}[(i)]
\item $a^+\sR^{\textrm{BC}}$ is a closed convex cone, invariant under $(T(t))_{t\ge0}$ and $R(\lambda,A)$ for $\lambda>\operatorname{{\omega_0}}(A)$.
\item
$
a^+\sR^{\textrm{BC}}=\overline{\operatorname{co}}\,\l\{\bigl(e^{\lambda \beta}T(t-\beta)-e^{\lambda \alpha}T(t-\alpha)\bigr)B_\lambda v:0\le \alpha\le\beta\le t,\,v\in U^+\r\}
$
for all $\lambda>\operatorname{{\omega_0}}(A)$.
\item $a^+\sR^{\textrm{BC}}=\overline{\operatorname{co}}\,\{T(t)B_\lambda v:t\ge0,\,\lambda>w,\,v\in U^+\}$ for all $w>\operatorname{{\omega_0}}(A)$.
\item $a^+\sR^{\textrm{BC}}=\overline{\operatorname{co}}\,\{R(\lambda,A)^nB_\lambda v:n\in\mathbb N_0,\,\lambda>w,\,v\in U^+\}$ for some/all $w>\operatorname{{\omega_0}}(A)$.
\end{enumerate}
\end{proposition}
\begin{proof} (i). Clearly, $a^+\sR^{\textrm{BC}}$ is a closed convex cone. Its invariance under $(T(t))_{t\ge0}$ and $R(\lambda,A)$ for $\lambda>\operatorname{{\omega_0}}(A)$ follows from the representations in (iii) and (iv).
To show (ii) we note that by \eqref{bt-bc} and \eqref{eq:B-zul0} the inclusion ``$\supseteq$'' holds.
Now recall that the positive step functions are dense in ${\rL^p\bigl([0,t],U^+\bigr)}$ and invariant under positive convex combinations. Hence, the boundedness of the controllability maps implies equality of the spaces in (ii).
\smallbreak
To obtain (iii) we note that by \eqref{bt-bc} and \eqref{eq:B-zul0} we have
\[
\bigl(e^{\lambda\beta}\,T(t-\beta)-e^{\lambda\alpha}\,T(t-\alpha)\bigr)B_\lambda v\ine^+\sR^{\textrm{BC}}
\]
for all $0\le\alpha\le\beta\le t$ and $v\in U^+$.
Multiplying this inclusion by $e^{-\lambda\beta}>0$ and putting $s:=t-\beta$ and $r:=t-\alpha$ implies
\[
\bigl(T(s)-e^{\lambda(s-r)}\,T(r)\bigr)B_\lambda v\ine^+\sR^{\textrm{BC}}
\]
for all $0\le s\le r$ and $v\in U^+$. Since $\lambda>\operatorname{{\omega_0}}(A)$ we obtain
\[
\lim_{r\to+\infty}e^{\lambda(s-r)}\,\|T(r)\|=0
\]
and hence
\[T(s)B_\lambda v\ina^+\sR^{\textrm{BC}}\]
for all $s\ge0$ and $v\in U^+$. This shows the inclusion ``$\supseteq$'' in (iii).
For the converse inclusion in (iii) it suffices to prove that
\begin{equation}\label{eq:inc2.(ii)}
e^+\sR_t^{\textrm{BC}}\subset\overline{\operatorname{co}}\,\bigl\{T(s)B_\mu y:s\ge0,\,\mu>w,\,y\in U^+\bigr\}
\end{equation}
for all $t>0$ and $w>\operatorname{{\omega_0}}(A)$. Since $B\in\mathcal{L}(U,\partial X)$ is $p$-boundary admissible the controllability map $\sB_t^{\textrm{BC}}$ is continuous. Moreover, the positive step functions are dense in ${\rL^p\bigl([0,t],U^+\bigr)}$ and $\overline{\operatorname{co}}\,\{T(s)B_\mu y:s\ge0,\,\mu>w,\,y\in U^+\}$ is a convex cone. Combining these facts and \eqref{eq:B-zul0} it follows that \eqref{eq:inc2.(ii)} holds if
\begin{equation}\label{eq::inc2.(ii)-1}
\bigl(e^{\lambda\beta} T(t-\beta)-e^{\lambda\alpha} T(t-\alpha)\bigr)B_\lambda v\in \overline{\operatorname{co}}\,\bigl\{T(s)B_\mu y:s\ge0,\,\mu>w,\,y\in U^+\bigr\}
\end{equation}
for all $0\le\alpha\le\beta\le t$ and $v\in U^+$.
Since $(T(t))_{t\ge0}$ is strongly continuous the following integral is the limit of Riemann sums, hence for $\nu>\max\{0,w\}$ we obtain using Lemma~\ref{lem-Gre}.(iii)
\begin{align*}
\overline{\operatorname{co}}\,\bigl\{T(s)B_\mu y:s\ge0,\,\mu>w,\,y\in U^+\bigr\}&\ni\nu\int_{t-\beta}^{t-\alpha}e^{\lambda(t-r)}T(r)B_\nu v\,dr\\
&=\bigl(e^{\lambda\beta}\,T(t-\beta)-e^{\lambda\alpha}\,T(t-\alpha)\bigr)\nu R(\lambda,A)B_\nu v\\
&=\bigl(e^{\lambda\beta}\,T(t-\beta)-e^{\lambda\alpha}\,T(t-\alpha)\bigr)\nu R(\nu,A)B_\lambda v\\
&\to\bigl(e^{\lambda\beta}\,T(t-\beta)-e^{\lambda\alpha}\,T(t-\alpha)\bigr)B_\lambda v,
\end{align*}
as $\nu\to+\infty$.
This proves \eqref{eq::inc2.(ii)-1} and completes the proof of (iii).
\smallbreak
That the right-hand sides of the equalities in (iii) and (iv) coincide follows from the integral representation of the resolvent (see \cite[Cor.~II.1.11]{EN:00}) and the Post--Widder inversion formula (see \cite[Cor.~III.5.5]{EN:00}). For the details we refer to the proof of \cite[Prop.~3.3]{BBEAM:13}.
\end{proof}
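The convergence $\nu R(\nu,A)B_\lambda v\to B_\lambda v$ used at the end of the proof is an instance of the general fact that $\nu R(\nu,A)x\to x$ as $\nu\to+\infty$ for a generator $A$. For a matrix generator this is easy to check numerically; the matrix $A$ and vector $x$ in the following sketch are arbitrary illustrative choices, not data from the paper.

```python
import numpy as np

# A generic matrix generator A; its resolvent is R(nu, A) = (nu*I - A)^{-1}.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
x = np.array([1.0, -1.0])
I = np.eye(2)

def yosida(nu):
    """Compute nu * R(nu, A) x = nu * (nu*I - A)^{-1} x."""
    return nu * np.linalg.solve(nu * I - A, x)

# The error nu*R(nu,A)x - x decays like ||Ax||/nu as nu grows:
errors = [np.linalg.norm(yosida(nu) - x) for nu in (10.0, 100.0, 1000.0)]
print(errors)  # decreasing towards 0
```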
\begin{corollary}
Assume that $B\in\mathcal{L}(U,\partial X)$ is $p$-boundary admissible and that $a^+\sR^{\textrm{BC}}\subset X^+$. Then the following are equivalent.
\begin{enumerate}[(a)]
\item The system $\Sigma_{\textrm{BC}}(\Am,B,Q)${} is approximately positive controllable.
\item There exists $w>\operatorname{{\omega_0}}(A)$ such that the following implication holds for all $\phi\in X'$
\[
\big<T(s)B_\lambda v,\phi\big>\ge0\text{ for all $v\in U^+$, $s\ge0$ and $\lambda>w$ $\Rightarrow$ $\phi\ge0$.}
\]
\item There exists $w>\operatorname{{\omega_0}}(A)$ such that the following implication holds for all $\phi\in X'$
\[
\bigl<R(\lambda,A)^nB_\lambda v,\phi\bigr>\ge0 \text{ for all $v\in U^+$, $n\in\mathbb N$ and $\lambda>w$ $\Rightarrow$ $\phi\ge0$.}
\]
\end{enumerate}
\end{corollary}
\begin{proof}
This follows from the proof of \cite[Thm.~3.4]{BBEAM:13} by replacing \cite[Prop.~3.3]{BBEAM:13} with our Proposition~\ref{prop:aRp-char}.
\end{proof}
\begin{remark}
The previous two results generalize \cite[Prop.~3.3 and Thm.~3.4]{BBEAM:13}, respectively, where it is assumed that $(T(t))_{t\ge0}$, $B$ and $Q_{\lambda}$ for all $\lambda>\lambda_0$ are all positive and, in particular, the additional hypothesis
\begin{itemize}
\item[(H)] \emph{There exist $\gamma>0$ and $\lambda_0\in\mathbb R$ such that $\|Qx\|\ge\gamma\lambda\|x\|$ for all $\lambda>\lambda_0$ and $x\in\ker(\lambda-A_m)$}
\end{itemize}
is made. We note that in reflexive state spaces $X$ Hypothesis~(H) implies that $A=A_m$, cf. \cite[Lem.~A.1]{ABE:16}. Hence, the results of \cite{BBEAM:13} are, e.g., not applicable to state spaces like $X=\mathrm{L}^p([a,b],Y)$ for $p\in(1,+\infty)$ and reflexive $Y$.
\end{remark}
\smallskip
Combining Corollary~\ref{cor:Bn} and Proposition \ref{prop:reach-pos}
we finally obtain the following characterization of an exact positive reachability space.
\begin{corollary}\label{cor:Reach_pos}
Assume that $B$ is $p$-boundary admissible, $t>0$ and $n\in\mathbb N_1$.
Then the exact positive reachability space in time $nt$ is given by
\begin{equation*}
e^+\sR_{nt}^{\textrm{BC}}=\l\{\sum_{k=0}^{n-1}T(t)^k M u_k : u_k\in{\rL^p\bigl([0,t],U^+\bigr)},\; 0\le k\le n-1 \r\},
\end{equation*}
where $M\in\mathcal{L}\bigl({\rL^p\bigl([0,t],U^+\bigr)},X\bigr)$ is the operator from Proposition~\ref{prop:range-B1}. Moreover, the operator $M$ is positive if and only if $a^+\sR_t^{\textrm{BC}}\subset X^+$.
\end{corollary}
\section{Examples}\label{examples}
In this section we will show how our abstract results can be applied to a transport equation with boundary control and to the vertex control of flows in networks.
\subsection{Exact Boundary Controllability for a Transport Equation}
In this subsection we study a transport equation in $\mathbb R^m$ given by\footnote{We denote by $x'(t,s)$ the derivative of $x(t,s)$ with respect to the ``space'' variable $s$.}
\begin{alignat}{2}\label{eq:TE}
\begin{cases}
\dot x(t,s)=x'(t,s),& s\in[0,1],\ t\ge0,\\
x(t,1)=\mathbb B x(t,0)+u(t)\cdot b,&t\ge0,\\
x(0,s)=0,& s\in[0,1].
\end{cases}
\end{alignat}
Here $x:\mathbb R_+\times[0,1]\to\mathbb C^m$, $\mathbb B\in\rM_{m}(\mathbb C)$, $u:\mathbb R_+\to\mathbb C$ and $b\in\mathbb C^m$. In order to fit this system in our general framework we choose
\begin{itemize}
\item the state space $X:={\rL^p\bigl([0,1],\Cm\bigr)}$ where $1\le p<+\infty$,
\item the boundary space $\partial X:=\mathbb C^m$,
\item the control space $U:=\mathbb C$,
\item the control operator $B:=b\in\mathbb C^m\simeq\mathcal{L}(U,\partial X)=\mathcal{L}(\mathbb C,\mathbb C^m)$,
\item the system operator
\[A_m:=\operatorname{diag}\l(\frac{d}{ds}\r)_{m\times m}\quad\text{with domain}\quad D(A_m):=\rW^{1,p}\bigl([0,1],\Cm\bigr),\]
\item the boundary operator $Q:\rW^{1,p}\bigl([0,1],\Cm\bigr)\to\mathbb C^m$, $Qf:=f(1)-\mathbb B f(0)$,
\item the operator $A\subset A_m$ with domain $D(A)=\ker Q$,
\item the state trajectory $x:\mathbb R_+\to{\rL^p\bigl([0,1],\Cm\bigr)}$, $x(t):=x(t,\cdot)$.
\end{itemize}
With these choices the controlled transport equation~\eqref{eq:TE} can be reformulated as an abstract Cauchy problem with boundary control of the form \eqref{ACPBC}. Clearly, the boundary operator $Q$ is surjective.
\smallbreak
By \cite[Cor.~18.4]{BKR:17}
we know that for $\lambda\in\mathbb C$ and $A=A_m|_{\ker(Q)}$ as above we have
\[
\lambda\in\rho(A)\iff
e^\lambda\in\rho(\mathbb B).
\]
Moreover, by \cite[Prop.~18.7]{BKR:17}
the operator $A$ generates a strongly continuous semigroup given by
\begin{equation}\label{eq:HG-TE}
\bigl(T(t)f\bigr)(s)=\mathbb B^k f(t+s-k)\quad \text{if }\;t+s\in[k,k+1) \text{ for } k\in\mathbb N_0,
\end{equation}
where $\mathbb B^0:=Id$.
This shows that the Assumptions~\ref{ma-bcs} are satisfied. To proceed we have to compute the associated Dirichlet operator.
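As a quick sanity check of the semigroup formula \eqref{eq:HG-TE}, the following numerical sketch evaluates $(T(t)f)(s)=\mathbb B^k f(t+s-k)$ and verifies that $T(1)f=\mathbb B f$ together with the semigroup law $T(0.3)T(0.4)=T(0.7)$ at a sample point. The matrix $\mathbb B$ and the function $f$ are arbitrary illustrative choices.

```python
import numpy as np

# Illustrative data: m = 2, an arbitrary matrix BB and a smooth f on [0, 1].
BB = np.array([[0.5, 1.0],
               [0.25, 0.0]])

def f(s):
    return np.array([np.sin(s), np.cos(s)])

def T(t, g):
    """Semigroup of eq. (HG-TE): (T(t)g)(s) = BB^k g(t+s-k) for t+s in [k, k+1)."""
    def Tg(s):
        k = int(np.floor(t + s))  # the integer k with t + s in [k, k+1)
        return np.linalg.matrix_power(BB, k) @ g(t + s - k)
    return Tg

s = 0.37
# T(1)f = BB f, and the semigroup law T(0.3)T(0.4) = T(0.7):
err1 = np.linalg.norm(T(1.0, f)(s) - BB @ f(s))
err2 = np.linalg.norm(T(0.3, T(0.4, f))(s) - T(0.7, f)(s))
print(err1, err2)  # both essentially zero
```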
\begin{lemma} For $\lambda\in\rho(A)$ the Dirichlet operator $Q_\lambda\in\mathcal{L}\bigl(\mathbb C^m,{\rL^p\bigl([0,1],\Cm\bigr)}\bigr)$ is given by
\begin{equation}\label{eq:Dirich-TE}
Q_\lambda=\varepsilon_{\la}\otimes R(e^{\lambda},\mathbb B).
\end{equation}
\end{lemma}
\begin{proof}
By Lemma~\ref{lem-Gre}.(ii) we know that $Q:{\ker(\lambda-A_m)}\to\partial X$ is invertible. Moreover, for $d\in\mathbb C^m=\partial X$ we have
\begin{align*}
Q\bigl(\varepsilon_{\la}\otimes R(e^{\lambda},\mathbb B)\,d\bigr)
=e^{\lambda}\cdot R(e^{\lambda},\mathbb B)\,d-\mathbb B\cdot R(e^{\lambda},\mathbb B)\,d=d
\end{align*}
which proves \eqref{eq:Dirich-TE}.
\end{proof}
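The defining identity verified in the proof, namely $e^{\lambda}R(e^{\lambda},\mathbb B)\,d-\mathbb B R(e^{\lambda},\mathbb B)\,d=d$, can also be checked numerically. The matrix $\mathbb B$, the value of $\lambda$, and the vector $d$ below are illustrative choices (with $e^\lambda\in\rho(\mathbb B)$).

```python
import numpy as np

# Illustrative data: a 2x2 matrix BB, lambda with e^lambda in rho(BB), and d.
BB = np.array([[0.5, 1.0],
               [0.25, 0.0]])
lam = 1.0
d = np.array([2.0, -3.0])
I = np.eye(2)

R = np.linalg.inv(np.exp(lam) * I - BB)  # R(e^lambda, BB)

def Qlam(s):
    """(Q_lambda d)(s) = e^{lam*s} * R(e^lam, BB) d, cf. eq. (eq:Dirich-TE)."""
    return np.exp(lam * s) * (R @ d)

# Applying the boundary operator Q f := f(1) - BB f(0) returns d:
err = np.linalg.norm(Qlam(1.0) - BB @ Qlam(0.0) - d)
print(err)  # essentially zero
```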
Next we verify that in this context \eqref{eq:strange1} holds.
\begin{lemma}\label{lem:strange-TE}
Let $\lambda\in\rho(A)$. Then for all $0\le\alpha\le1$
\begin{equation}\label{eq:strange-TE}
\bigl(e^{\lambda\alpha}\cdot T(1-\alpha)B_\lambda \bigr)(s)=
\begin{cases}
\varepsilon_{\la}(1+s)\cdot R(e^{\lambda},\mathbb B)\, b&\text{if }0\le s<\alpha,\\
\varepsilon_{\la}(1+s)\cdot R(e^{\lambda},\mathbb B)\, b-\varepsilon_{\la}(s)\cdot b&\text{if }\alpha\le s\le1.
\end{cases}
\end{equation}
Hence, \eqref{eq:strange1} is satisfied for
\begin{align*}
M&=b\in\mathcal{L}\bigl({\rL^p[0,1]},{\rL^p\bigl([0,1],\Cm\bigr)}\bigr),\quad
(M u)(s)=u(s)\cdot b.
\end{align*}
\end{lemma}
\begin{proof} The claim follows from \eqref{eq:Dirich-TE} and \eqref{eq:HG-TE} by the following simple computation.
\begin{align*}
\bigl(e^{\lambda\alpha}\cdot T(1-\alpha)B_\lambda \bigr)(s)
&=e^{\lambda\alpha}\cdot\Bigl(T(1-\alpha)\bigl(\varepsilon_{\la}\otimesR(e^{\lambda},\mathbb B)\,b\bigr)\Bigr)(s)
\\
&=e^{\lambda\alpha}\cdot
\begin{cases}
\varepsilon_{\la}(1-\alpha+s)\cdot R(e^{\lambda},\mathbb B)\, b&\text{if }0\le s<\alpha,\\
\varepsilon_{\la}(s-\alpha)\cdot\mathbb B\,R(e^{\lambda},\mathbb B)\, b&\text{if }\alpha\le s\le1,
\end{cases}
\\
&=
\begin{cases}
\varepsilon_{\la}(1+s)\cdot R(e^{\lambda},\mathbb B)\, b&\text{if }0\le s<\alpha,\\
\varepsilon_{\la}(1+s)\cdot R(e^{\lambda},\mathbb B)\, b-\varepsilon_{\la}(s)\cdot b&\text{if }\alpha\le s\le1.
\end{cases}
\qedhere
\end{align*}
\end{proof}
Thus $B$ is $p$-boundary admissible. Next we compute the appropriate reachability space.
\begin{corollary}\label{cor:Reach-TE}
If $t\ge m$ then the exact reachability space of the controlled transport equation~\eqref{eq:TE} is given by
\begin{equation*}
e\sR_t^{\textrm{BC}}=e\sR^{\textrm{BC}}={\rL^p[0,1]}\otimes\operatorname{span}\l\{b,\mathbb B b,\ldots,\mathbb B^{m-1}b\r\}.
\end{equation*}
\end{corollary}
\begin{proof}
Note that by \eqref{eq:HG-TE} we have $T(1)f=\mathbb B f$. Hence, for $t=m$ the assertion follows immediately from Corollary~\ref{cor:Reach} and Lemma~\ref{lem:strange-TE}. Clearly, $e\sR_t^{\textrm{BC}}$ increases in time $t\ge0$. However, by the Cayley--Hamilton theorem $\operatorname{span}\{b,\mathbb B b,\ldots,\mathbb B^{l}b\}=\operatorname{span}\{b,\mathbb B b,\ldots,\mathbb B^{m-1}b\}$ for all $l\ge m-1$ and the claim follows.
\end{proof}
\begin{remark}\label{rem:C-H}
Let $l\le m$ be the degree of the minimal polynomial of $\mathbb B$. Then the previous proof shows that for all $t\ge l$ we even have
\begin{equation*}
e\sR_t^{\textrm{BC}}=e\sR^{\textrm{BC}}={\rL^p[0,1]}\otimes\operatorname{span}\l\{b,\mathbb B b,\ldots,\mathbb B^{l-1}b\r\}.
\end{equation*}
\end{remark}
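The Cayley--Hamilton argument behind Corollary~\ref{cor:Reach-TE} and the remark above can be illustrated numerically: the dimension of the Krylov space $\operatorname{span}\{b,\mathbb B b,\ldots,\mathbb B^{l-1}b\}$ stabilizes once $l$ reaches the degree of the minimal polynomial of $\mathbb B$. The $3\times3$ matrix and vector below are illustrative choices for which the Kalman-type condition of the following corollary holds.

```python
import numpy as np

# Illustrative 3x3 companion-type matrix: its minimal polynomial has degree 3,
# and the Krylov spaces of (BB, b) exhaust C^3.
BB = np.array([[0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0],
               [0.5, 0.25, 0.0]])
b = np.array([1.0, 0.0, 0.0])

def krylov_rank(l):
    """dim span{b, BB b, ..., BB^{l-1} b}."""
    cols = [np.linalg.matrix_power(BB, k) @ b for k in range(l)]
    return int(np.linalg.matrix_rank(np.column_stack(cols)))

ranks = [krylov_rank(l) for l in range(1, 6)]
print(ranks)  # the rank stabilizes at m = 3 once l >= 3 (Cayley-Hamilton)
```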
\begin{corollary}\label{cor:TE}
The following assertions are equivalent.
\begin{enumerate}[(a)]
\item Equation~\eqref{eq:TE} is exactly boundary controllable in time $t\ge m$, i.e.,
$e\sR_t^{\textrm{BC}}=X$.
\item Equation~\eqref{eq:TE} is maximally controllable in time $t\ge m$, i.e.,
$e\sR_t^{\textrm{BC}}=\sR^{\textrm{BC}}_{\textrm{max}}\,$.
\item $\operatorname{span}\l\{b,\mathbb B b,\ldots,\mathbb B^{m-1}b\r\}=\mathbb C^m$.
\end{enumerate}
\end{corollary}
\begin{proof} Note that $\ker(\lambda-A_m)=\varepsilon_{\la}\otimes\mathbb C^m$. Since by the Stone--Weierstra\ss{} theorem we have
\[\overline{\operatorname{span}}\,\bigcup_{\lambda>\operatorname{{\omega_0}}(A)} \{\varepsilon_{\la}\}={\rL^p[0,1]},\]
the maximal reachability space equals
\[
\sR^{\textrm{BC}}_{\textrm{max}}={\rL^p[0,1]}\otimes\mathbb C^m = X
\]
and the assertions follow immediately from Corollary~\ref{cor:Reach-TE}.
\end{proof}
\begin{remark}
The previous result characterizes the \emph{exact} maximal boundary controllability by a one-dimensional control in terms of a Kalman-type condition which is well-known in control theory.
\end{remark}
Combining Remark~\ref{rem:C-H} and Corollary~\ref{cor:TE} we furthermore obtain the following
\begin{corollary} Let $l\in\mathbb N$ be the degree of the minimal polynomial of $\mathbb B$. If $l < m$, the transport equation~\eqref{eq:TE} is not
maximally controllable, i.e., $e\sR^{\textrm{BC}} \subsetneq\sR^{\textrm{BC}}_{\textrm{max}}$.
\end{corollary}
\smallbreak
Finally, we investigate positive controllability and consider
\begin{itemize}
\item the positive cone $X^+:={\rL^p\bigl([0,1],\mathbb R_+^m\bigr)}$ in the state space $X$,
\item the positive cone $U^+:=\mathbb R_+$ in the control space $U$,
\item a positive matrix $\mathbb B\in\mathrm{M}_m(\mathbb R_+)$,
\item a positive control operator $B:=b\in\mathbb R_+^m$.
\end{itemize}
Then by \eqref{eq:HG-TE}--\eqref{eq:Dirich-TE} the operators $T(t)\in\mathcal{L}(X)$ for $t\ge0$ and $B_\lambda\in\mathcal{L}(U,X)$ for $\lambda>\operatorname{{\omega_0}}(A)$ are positive.
Thus arguing as above using Proposition \ref{prop:reach-pos} and Corollary~\ref{cor:Reach_pos} we obtain the following.
\begin{corollary}\label{cor:pos-Reach-TE}
The exact positive reachability space of the controlled transport equation~\eqref{eq:TE} is given by
\begin{equation*}
e^+\sR^{\textrm{BC}}={\rL^p\bigl([0,1],\mathbb R_+\bigr)}\otimes\operatorname{co}\bigl\{\mathbb B^{k}b \;:\; k\in\mathbb N_0\bigr\}.
\end{equation*}
Hence, the problem is exactly positive controllable if and only if
\begin{equation*}
\operatorname{co}\bigl\{\mathbb B^{k}b \;:\; k\in\mathbb N_0\bigr\} = \mathbb R_+^m.
\end{equation*}
\end{corollary}
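The cone condition $\operatorname{co}\bigl\{\mathbb B^{k}b : k\in\mathbb N_0\bigr\}=\mathbb R_+^m$ can be checked in the illustrative special case of a cyclic permutation matrix: the orbit of $b=e_1$ visits every standard basis vector, so the generated convex cone is all of $\mathbb R_+^m$. The data below are assumptions for this sketch, not taken from the paper.

```python
import numpy as np

# Cyclic permutation matrix on R^3: it maps e1 -> e2 -> e3 -> e1.
m = 3
BB = np.roll(np.eye(m), 1, axis=0)
b = np.eye(m)[0]  # b = e_1

# The orbit {BB^k b : 0 <= k < m} consists of all standard basis vectors,
# hence the convex cone it generates is R_+^3:
orbit = {tuple(np.linalg.matrix_power(BB, k) @ b) for k in range(m)}
basis = {tuple(e) for e in np.eye(m)}
print(orbit == basis)  # True
```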
\subsection{Vertex control of flows in networks}
The previous example can be easily adapted to cover a transport problem on a network controlled in a single vertex. More precisely, consider a network consisting of $n$ vertices $\{v_1,\dots ,v_n\}$ and $m$ edges $\{e_1,\dots ,e_m\}$. As shown in \cite[Sec.~18.1]{BKR:17}, its structure can be described by
either the transposed weighted adjacency matrix
$\mathbb A\in\rM_{n}(\mathbb C)$ given by
\begin{equation*}
\mathbb A_{ij}:=\begin{cases}
w _{jk} & \mbox{if $v_j\stackrel{e_k}{\longrightarrow}v_i$}, \\
0 & \text{otherwise,}
\end{cases}
\end{equation*}
or by the transposed weighted adjacency matrix of the line graph
$\mathbb B\in\rM_{m}(\mathbb C)$ where
\begin{equation*}
\mathbb B_{ij}:=\begin{cases}
w_{ki} & \mbox{if $\stackrel{e_j}{\longrightarrow}v_k \stackrel{e_i}{\longrightarrow}$}\ , \\
0 & \text{otherwise.}
\end{cases}
\end{equation*}
To proceed we also need the transposed weighted outgoing incidence matrix
$(\Fiwm)^\top=:\Psi\in\rM_{m\times n}(\mathbb C)$ defined by
\begin{equation*}
\Psi_{ij} :=
\begin{cases}
w_{ij} & \mbox{if $v_j\stackrel{e_i}{\longrightarrow}$}\ , \\
0 & \text{otherwise}
\end{cases}
\end{equation*}
and the corresponding unweighted outgoing
incidence matrix denoted by $\Phi^{-}\in\rM_{n\times m}(\mathbb C)$.
For the weights we assume $0\le w_{ij} \le 1$, thus all these matrices are positive. Moreover, we assume that $\Psi$ is column stochastic (i.e., the weights on all the outgoing edges from a given vertex sum up to $1$).
For a detailed account of the various graph matrices we refer to \cite[Sec.~18.1]{BKR:17}. Here we only mention the following relations
\begin{equation}\label{eq:ABrel}
\Psi\mathbb A = \mathbb B\Psi,\quad
\Psi R(\lambda,\mathbb A) = R(\lambda,\mathbb B)\Psi,\quad
\text{and}\quad
\Phi^{-}\Psi = Id_{\mathbb C^n}
\end{equation}
which we will need in the sequel.
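The relations \eqref{eq:ABrel} can be verified numerically on a small network. The example below (three vertices, four edges, column-stochastic weights) is an illustrative choice, not taken from the cited references.

```python
import numpy as np

# Illustrative network: v1 -> v2 (edge e1, weight 1/2), v1 -> v3 (e2, 1/2),
# v2 -> v1 (e3, 1), v3 -> v1 (e4, 1); so n = 3 vertices and m = 4 edges.
Psi = np.array([[0.5, 0.0, 0.0],      # transposed weighted outgoing incidence
                [0.5, 0.0, 0.0],
                [0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0]])
Phi_minus = np.array([[1.0, 1.0, 0.0, 0.0],   # unweighted outgoing incidence
                      [0.0, 0.0, 1.0, 0.0],
                      [0.0, 0.0, 0.0, 1.0]])
AA = np.array([[0.0, 1.0, 1.0],       # transposed weighted adjacency matrix
               [0.5, 0.0, 0.0],
               [0.5, 0.0, 0.0]])
BB = np.array([[0.0, 0.0, 0.5, 0.5],  # adjacency matrix of the line graph
               [0.0, 0.0, 0.5, 0.5],
               [1.0, 0.0, 0.0, 0.0],
               [0.0, 1.0, 0.0, 0.0]])

ok1 = np.allclose(Psi @ AA, BB @ Psi)           # Psi A = B Psi
ok2 = np.allclose(Phi_minus @ Psi, np.eye(3))   # Phi^- Psi = Id
lam = 2.0                                       # lam in rho(A) and rho(B)
R_A = np.linalg.inv(lam * np.eye(3) - AA)
R_B = np.linalg.inv(lam * np.eye(4) - BB)
ok3 = np.allclose(Psi @ R_A, R_B @ Psi)         # Psi R(lam,A) = R(lam,B) Psi
print(ok1, ok2, ok3)  # True True True
```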
\smallbreak
We then consider a transport equation on the $m$ edges imposing $n$ boundary conditions in the vertices, controlled in a single vertex $v_i$, i.e.,
\begin{alignat}{2}\label{eq:TE-net}
\begin{cases}
\dot x(t,s)=x'(t,s),& s\in[0,1],\ t\ge0,\\
x(t,1)=\mathbb B x(t,0)+u(t)\cdot \Psi v,&t\ge0,\\
x(0,s)=0,& s\in[0,1],
\end{cases}
\end{alignat}
where $x:\mathbb R_+\times[0,1]\to\mathbb C^m$, $u:\mathbb R_+\to\mathbb C$, and the vertex $v=v_i$ is represented by the $i$-th canonical basis vector in $\mathbb C^n$. To rewrite this equation in an abstract form we take the same state space $X:={\rL^p\bigl([0,1],\Cm\bigr)}$, control space $U:=\mathbb C$ and boundary space $\partial X:=\mathbb C^m$ as above. Adapting the domain of $A_m$ as
\[D(A_m):=\l\{f\in\rW^{1,p}\bigl([0,1],\Cm\bigr): f(1) \in \operatorname{rg}\Psi \r\}\]
and choosing the control operator $B=b:=\Psi v\in\mathbb C^m$ we are in the situation considered in \cite{EKNS:08} and \cite{BBEAM:13}, see also \cite[Sec.~18.4]{BKR:17}.
By our Corollary~\ref{cor:Reach-TE}, the \emph{approximate} controllability space for the network flow problem computed in \cite[Cor.~4.3]{EKNS:08} indeed coincides with the \emph{exact} controllability space.
\begin{corollary}
If $t\ge \min\{m,n\}=:l$ then the exact reachability space of the controlled transport in network problem \eqref{eq:TE-net} equals
\begin{align*}
e\sR_t^{\textrm{BC}}=e\sR^{\textrm{BC}}&={\rL^p[0,1]}\otimes\operatorname{span}\l\{\Psi v,\mathbb B \Psi v,\ldots,\mathbb B^{l-1}\Psi v\r\}\\
& = {\rL^p[0,1]}\otimes \Psi \operatorname{span}\l\{v,\mathbb A v,\ldots,\mathbb A^{l-1} v\r\} .
\end{align*}
\end{corollary}
Note that in large connected networks one usually has $n\le m$, hence the latter space is the more important one for applications.
\smallbreak
Positive control for this problem was already treated in \cite{BBEAM:13}, where the approximate positive reachability space was computed. However, our approach even yields the \emph{exact} reachability space.
\begin{corollary}
The exact positive reachability space of the controlled transport in network problem \eqref{eq:TE-net} is given by
\begin{align*}
e^+\sR^{\textrm{BC}}&={\rL^p\bigl([0,1],\mathbb R_+\bigr)}\otimes\operatorname{co}\l\{\mathbb B^{k} \Psi v : k\in\mathbb N_0\r\}\\
& = {\rL^p\bigl([0,1],\mathbb R_+\bigr)}\otimes \Psi \operatorname{co}\l\{\mathbb A^k v : k\in\mathbb N_0\r\}.
\end{align*}
\end{corollary}
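The equality of the two representations above rests on the intertwining relation $\Psi\mathbb A=\mathbb B\Psi$ from \eqref{eq:ABrel}, which gives $\mathbb B^k\Psi v=\Psi\mathbb A^k v$ for all $k\in\mathbb N_0$. The following sketch checks this on an illustrative three-vertex, four-edge network (assumed data, not from the paper).

```python
import numpy as np

# Illustrative network: v1 -> v2 (e1, weight 1/2), v1 -> v3 (e2, 1/2),
# v2 -> v1 (e3, 1), v3 -> v1 (e4, 1).
Psi = np.array([[0.5, 0.0, 0.0],
                [0.5, 0.0, 0.0],
                [0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0]])
AA = np.array([[0.0, 1.0, 1.0],
               [0.5, 0.0, 0.0],
               [0.5, 0.0, 0.0]])
BB = np.array([[0.0, 0.0, 0.5, 0.5],
               [0.0, 0.0, 0.5, 0.5],
               [1.0, 0.0, 0.0, 0.0],
               [0.0, 1.0, 0.0, 0.0]])
v = np.eye(3)[0]   # control acting in vertex v_1

# Psi A = B Psi implies BB^k (Psi v) = Psi (AA^k v) for every k >= 0:
ok = all(
    np.allclose(np.linalg.matrix_power(BB, k) @ (Psi @ v),
                Psi @ (np.linalg.matrix_power(AA, k) @ v))
    for k in range(6)
)
print(ok)  # True
```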
\subsection{Exact Boundary Controllability for Flows in Networks with Dynamical Boundary Conditions}
In this subsection we investigate exact controllability in the situation of \cite[Sect.~3]{EKKNS:10}. Without going into much detail, we only introduce the facts necessary to state the problem and to compute the exact reachability space $e\sR_t^{\textrm{BC}}$.
\smallskip
We start from the transport problem in the network introduced in the previous example, but now change the transmission process in the vertices allowing for dynamical boundary conditions.
To encode the structure of the underlying network and the imposed boundary conditions we use the incidence matrices introduced above as well as the weighted incoming incidence matrix $\Phi_w^{+}$ given by
\[
\bigl(\Phi_w^{+}\bigr)_{ij}:=
\begin{cases}
w^+_{ij} & \mbox{if $\stackrel{e_j}{\longrightarrow}v_i$}\ , \\
0 & \text{otherwise,}
\end{cases}
\]
for some $0\le w^+_{ij} \le 1$. Defining
\begin{equation}\label{eq:ABrel2}
\mathbb A := \Phi_w^+\Psi \quad\text{and}\quad \mathbb B:=\Psi\Phi_w^+
\end{equation}
we obtain the adjacency matrices as above (with different nonzero weights).
We mention that the relations \eqref{eq:ABrel} remain valid also in this case.
\smallbreak
We are then interested in the network transport problem with dynamical boundary conditions in $s=1$ considered already in \cite{Sik:05} and \cite[Sect.~3]{EKKNS:10}, i.e.,
\begin{alignat}{2}\label{eq:TE-net-db}
\begin{cases}
\dot x(t,s)=x'(t,s),& s\in[0,1],\ t\ge0,\\
\dot x(t,1)=\mathbb B x(t,0)+u(t)\cdot \Psi v,&t\ge0,\\
x(0,s)=0,& s\in[0,1],\\
\Phi^- x(1,0) = 0.
\end{cases}
\end{alignat}
To embed this example in our setting we introduce
\begin{itemize}
\item the state space $X:={\rL^p\bigl([0,1],\Cm\bigr)}\times\mathbb C^n$ where $1\le p<+\infty$,
\item the boundary space $\partial X:=\mathbb C^m$,
\item the control space $U:=\mathbb C$,
\item the control operator $B:=\Psi v\in\mathbb C^m\simeq\mathcal{L}(U,\partial X)=\mathcal{L}(\mathbb C,\mathbb C^m)$
where $v=v_i$ denotes the $i$-th canonical basis vector of $\mathbb C^n$ meaning that the control acts in the $i$-th vertex of the network,
\item the system operator\footnote{By $\delta_s$ we denote the point evaluation in $s\in[0,1]$, i.e., $\delta_s(f)=f(s)$.}
\begin{align*}
A_m:&=\begin{pmatrix}
\operatorname{diag}\bigl(\frac{d}{ds}\bigr)_{m\times m}&0\\\Phi_w^{+}\delta_0&0
\end{pmatrix}
\quad\text{with domain}\\
D(A_m):&=\l\{\tbinom fd\in\rW^{1,p}\bigl([0,1],\Cm\bigr)\times\mathbb C^n:f(1)\in\operatorname{rg}\Psi\r\},
\end{align*}
\item the boundary operator $Q:\rW^{1,p}\bigl([0,1],\Cm\bigr)\times\mathbb C^n\to\mathbb C^m$, $Q\binom fd:=\Phi^{-} f(1)-d$,
\item the operator $A\subset A_m$ with domain $D(A)=\ker Q$.
\end{itemize}
As is shown in \cite[Prop.~3.4]{EKKNS:10} these spaces and operators satisfy all assumptions of Section~\ref{TAF}.
To proceed we first need to compute the associated Dirichlet operator $Q_\lambda$ and an explicit representation of the semigroup operators $T(t)$ for $t\in[0,1]$.
\begin{lemma}
\label{lem:Diri-SGR-dBC}
\begin{enumerate}[(i)]
\item For each $0\neq \lambda\in\rho(A)$, the Dirichlet operator $Q_{\lambda}\in \mathcal{L}(\mathbb C^n, {X})$ is given by
\begin{equation}\label{Ql}
Q_{\lambda} =
\binom{
\lambda \varepsilon_{\lambda}\otimes\Psi R(\lambda e^{\lambda},\mathbb A)}
{\mathbb A R(\lambda e^{\lambda},\mathbb A)}.
\end{equation}
\item The semigroup $(T(t))_{t\ge0}$ generated by $A$ is given by\footnote{We use the notations $\bigl[\binom fd\bigr]_1:=f$ and $\bigl[\binom fd\bigr]_2:=d$ for the canonical projections of $\binom fd\in X$.}
\begin{align}
\l[T(t)\tbinom fd\r]_1(s)&=
\begin{cases}\label{eq:[T(t)]1}
f(t+s)&\text{if }\kern22pt 0\le t<1-s,\\
\mathbb B\, V_{t+s-1} f +\Psi d&\text{if }\,1-s\le t\le1,
\end{cases}
\\
\l[T(t)\tbinom fd\r]_2&=\Phi_w^{+}\, V_t f +d\kern45pt\text{for }0\le t\le1,\label{eq:[T(t)]2}
\end{align}
where
\begin{equation}\label{def:V_s}
V_s f:=\int_0^s f(r)\,dr\quad \text{for }f\in{\rL^p\bigl([0,1],\Cm\bigr)}.
\end{equation}
\end{enumerate}
\end{lemma}
\begin{proof}
Assertion~(i) is proved in \cite[Prop.~3.8]{EKKNS:10}. Equation~\eqref{eq:[T(t)]2} is shown in the proof of \cite[Prop.~3.4.(iii)]{EKKNS:10}. The statement \eqref{eq:[T(t)]1} for the first coordinate then follows from \cite[Lem.~6.1]{Sik:05}.
\end{proof}
Next we apply Proposition~\ref{prop:range-B1} to the present situation.
\begin{lemma}\label{lem:strange-dBC}
Let $\lambda\in\rho(A)$. Then for all $0\le\alpha\le1$
\begin{align}\label{eq:strange-dBC}
\bigl[e^{\lambda\alpha}\cdot T(1-\alpha)B_\lambda \bigr]_1(s)&=
\begin{cases}
\lambda\varepsilon_{\la}(1+s)\cdot\Psi R(\lambda e^{\lambda},\mathbb A)\, v&\text{if }0\le s<\alpha,\\
\lambda\varepsilon_{\la}(1+s)\cdot\Psi R(\lambda e^{\lambda},\mathbb A)\, v-\varepsilon_{\la}(s)\cdot \Psi v&\text{if }\alpha\le s\le1.
\end{cases}
\\
\bigl[e^{\lambda\alpha}\cdot T(1-\alpha)B_\lambda \bigr]_2
&=e^{\lambda}\mathbb A R(\lambda e^{\lambda},\mathbb A)\, v.
\end{align}
Hence the equality in \eqref{eq:strange1} is satisfied with
\begin{align*}
M&=\binom{\Psi v}{0}\in\mathcal{L}\Bigl({\rL^p[0,1]},{\rL^p\bigl([0,1],\Cm\bigr)}\times \mathbb C^n\Bigr),\quad
(M u)(s)=\binom{ u(s)\cdot\Psi v}{0}.
\end{align*}
\end{lemma}
\begin{proof}
Using the explicit representations of $Q_\lambda$ and $T(t)$ given in Lemma~\ref{lem:Diri-SGR-dBC} and the relations \eqref{eq:ABrel} we obtain
\begin{align*}
\bigl[e^{\lambda\alpha}\cdot T(1-\alpha)B_\lambda \bigr]_1(s)\\
&=e^{\lambda\alpha}\cdot
\begin{cases}
\lambda\varepsilon_{\la}(1-\alpha+s)\cdot\Psi R(\lambda e^{\lambda},\mathbb A)\, v&\text{if }0\le s<\alpha,\\
\lambda\mathbb B\, V_{s-\alpha}\,\varepsilon_{\la}\cdot\Psi R(\lambda e^{\lambda},\mathbb A)\, v+\Psi\mathbb A R(\lambda e^{\lambda},\mathbb A) v&\text{if }\alpha\le s\le1,
\end{cases}
\\
&=e^{\lambda\alpha}\cdot
\begin{cases}
\lambda\varepsilon_{\la}(1-\alpha+s)\cdot\Psi R(\lambda e^{\lambda},\mathbb A)\, v&\text{if }0\le s<\alpha,\\
\bigl(\varepsilon_{\la}(s-\alpha)-1\bigr)\cdot\Psi\,\mathbb A R(\lambda e^{\lambda},\mathbb A)\, v+\Psi\mathbb A R(\lambda e^{\lambda},\mathbb A) v&\text{if }\alpha\le s\le1,
\end{cases}
\\
&=
\begin{cases}
\lambda\varepsilon_{\la}(1+s)\cdot\Psi R(\lambda e^{\lambda},\mathbb A)\, v&\text{if }0\le s<\alpha,\\
\varepsilon_{\la}(s)\cdot\Psi\bigl(\lambda e^{\lambda}R(\lambda e^{\lambda},\mathbb A)-Id\bigr)\, v&\text{if }\alpha\le s\le1,
\end{cases}
\\
&=\begin{cases}
\lambda\varepsilon_{\la}(1+s)\cdot\Psi R(\lambda e^{\lambda},\mathbb A)\, v&\text{if }0\le s<\alpha,\\
\lambda\varepsilon_{\la}(1+s)\cdot\Psi R(\lambda e^{\lambda},\mathbb A)\, v-\varepsilon_{\la}(s)\cdot \Psi v&\text{if }\alpha\le s\le1.
\end{cases}
\end{align*}
Similarly, for the second coordinate we have
\begin{align*}
\bigl[e^{\lambda\alpha}\cdot T(1-\alpha)B_\lambda \bigr]_2
&=e^{\lambda\alpha}\Bigl(\lambda\Phi_w^{+}\,V_{1-\alpha}\,\varepsilon_{\la}\cdot\Psi R(\lambda e^{\lambda},\mathbb A)\,v+\mathbb A R(\lambda e^{\lambda},\mathbb A)\, v\Bigr)\\
&=e^{\lambda\alpha}\Bigl(\bigl(\varepsilon_{\la}(1-\alpha)-1\bigr)\cdot\mathbb A R(\lambda e^{\lambda},\mathbb A)\,v+\mathbb A R(\lambda e^{\lambda},\mathbb A)\, v\Bigr)\\
&=e^{\lambda}\mathbb A R(\lambda e^{\lambda},\mathbb A)\, v,
\end{align*}
where we used \eqref{eq:ABrel2}.
\end{proof}
We note that by \cite[Prop.~3.5]{EKKNS:10} the states of the controlled flow at time $t\ge0$ are given by the first coordinate of the states in our ``extended'' state space $X={\rL^p\bigl([0,1],\Cm\bigr)}\times\mathbb C^n$. For this reason we also need to compute the first coordinate of $T(1)^k\binom{\Psi g}{0}$.
\begin{lemma}
We have
\begin{equation*}
\l(T(1)\tbinom fd \r)(s)=
\begin{pmatrix}
\Psi\,\Phi_w^{+}\, V_s&\Psi\\\kern10pt \Phi_w^{+}\,V_1&Id
\end{pmatrix}
\binom{f}{d}
=\binom{\mathbb B\,V_s f+\Psi d}{\Phi_w^{+}\, V_1 f +d},
\end{equation*}
where the operator $V_s\in\mathcal{L}\bigl({\rL^p\bigl([0,1],\Cm\bigr)},\rW^{1,p}\bigl([0,1],\Cm\bigr)\bigr)$ is defined in \eqref{def:V_s}.
Moreover, for $k\in\mathbb N_1$ we have
\begin{equation}\label{eq:T(k)-dBC}
\l[T(1)^k\tbinom{\Psi g}0\r]_1 (s)
=\Psi(\mathbb A V_s +\delta_1)^{k-1}\mathbb A\,V_sg
=(\mathbb B V_s+\delta_1)^{k-1}\mathbb B\Psi\,V_sg.
\end{equation}
\end{lemma}
\begin{proof}
The formula for $T(1)$ follows immediately from Lemma~\ref{lem:Diri-SGR-dBC}.(ii). Since $\Psi\mathbb A = \mathbb B\Psi$, it suffices to show the second equality in \eqref{eq:T(k)-dBC}. Obviously this equation holds for $k=1$. To verify it for $k>1$ we note that by \eqref{eq:ABrel} the matrix $\Psi$ is left invertible with left inverse $\Phi^{-}$. Hence, we obtain
\[
\l[T(1)\tbinom fd\r]_2=\Phi^{-}\delta_1 \l[T(1)\tbinom fd\r]_1.
\]
If $\binom fd\in\operatorname{rg} T(1)$ we can write $f=\Psi h$ and the previous equation implies
\begin{align*}
\l[T(1)\tbinom fd\r]_1 (s)
=\l[T(1)\tbinom {\Psi h}{\delta_1h}\r]_1(s)
=\mathbb B V_s\Psi h+\Psi\delta_1 h
=(\mathbb B V_s +\delta_1)f.
\end{align*}
Now assume that \eqref{eq:T(k)-dBC} holds for some $k\ge1$. Then for $\binom fd=T(1)^k\binom{\Psi g}{0}\in\operatorname{rg} T(1)$ we conclude
\begin{align*}
\l[T(1)^{k+1}\tbinom{\Psi g}0\r]_1(s)
&=\l[T(1)\cdot T(1)^{k}\tbinom{\Psi g}0\r]_1(s)\\
&=(\mathbb B V_s +\delta_1)\cdot(\mathbb B V_s+\delta_1)^{k-1}\,\mathbb B\Psi\,V_sg\\
&=(\mathbb B V_s+\delta_1)^{k}\,\mathbb B\Psi\,V_s g.\qedhere
\end{align*}
\end{proof}
The previous two lemmas together with Corollary~\ref{cor:Bn} imply the following result.
\begin{corollary}\label{cor:Bn-dBC}
For $l\in\mathbb N_2$ and $u\in{\rL^p[0,l]}$ we have
\begin{align}
\bigl[\sB_l u\bigr]_1(s)
&=\Psi\biggl( u_0\otimes v+\sum_{k=1}^{l-1}\bigl(\mathbb A V_s+\delta_1\bigr)^{k-1}\, V_s(u_{k}\otimes\mathbb A v)\biggr) \nonumber\\
&=u_0\otimes \Psi v+\sum_{k=1}^{l-1}\bigl(\mathbb B V_s+\delta_1\bigr)^{k-1}\,V_s(u_{k}\otimes\mathbb B\Psi\, v)\label{eq:edBC}
\end{align}
where $u_k\in{\rL^p[0,1]}$ is defined as in \eqref{eq:def-uk}.
\end{corollary}
Using this explicit representation of the controllability map we now compute the exact reachability space for the control problem given in \eqref{eq:TE-net-db}.
\begin{corollary}\label{cor:Reach-dBC}
If $t\ge\min\{m,n\}=:l$ then the exact reachability space of the controlled flow with dynamic boundary conditions \eqref{eq:TE-net-db} is given by\footnote{Here we define $\mathrm{W}^{0,p}[0,1]:={\rL^p[0,1]}$.}
\begin{align*}
\bigl[e\sR_t^{\textrm{BC}}\bigr]_1
&\subseteq
\l\{
\Psi\sum_{k=0}^{l}\l(u_{k}\otimes\mathbb A^k\, v\r):u_k\in\rW^{k,p}[0,1]\text{ for }0\le k\le l
\r\}\\
&=
\l\{
\sum_{k=0}^{l}\l(u_{k}\otimes\mathbb B^k\Psi\, v\r):u_k\in\rW^{k,p}[0,1]\text{ for }0\le k\le l
\r\}.
\end{align*}
\end{corollary}
\begin{proof}
The equality of the two sets on the right-hand side follows immediately from \eqref{eq:ABrel}.
To show the inclusion in the second set we combine Corollaries~\ref{cor:Reach} and \ref{cor:Bn-dBC}.
First observe that for the operators $\mathbb B$, $V_s$, and $\delta_1$ we have
\[
\mathbb B V_s f = V_s \mathbb B f, \quad \mathbb B \delta_1 f = \delta_1\mathbb B f,\quad
\delta_1 V_s f = V_1f
\]
for every $f\in{\rL^p\bigl([0,1],\Cm\bigr)}$ while
\[ \delta_1^k f = \delta_1f =f(1)\quad\text{for } k\ge 1.\]
So, when expanding $(\mathbb B V_s+\delta_1)^{k-1}V_s$ we can rearrange the terms to obtain expressions of the form
\[\alpha_i \mathbb B^i V_{s_1}\cdots V_{s_{i+1}},\quad 0\le i\le k-1,\]
where $\alpha_i$ are scalar coefficients and $s_j\in\{s,1\}$, $1\le j\le i+1$.
Next, for arbitrary $u\in{\rL^p[0,1]}$
and $0\le k\le l$ we have
\[ V_{s_1}\cdots V_{s_k} u \in \rW^{k,p}[0,1], \quad s_j\in\{s,1\}, 1\le j\le k. \]
Combining these facts we obtain the desired result by considering \eqref{eq:edBC} for all $u\in{\rL^p[0,l]}$.
\end{proof}
From the previous Corollary we immediately obtain the following result which improves \cite[Thm.~3.10]{EKKNS:10} and shows that $ \bigl[a\sR_t^{\textrm{BC}}\bigr]_1$ is constant for $t\ge\min\{m,n\}=:l$.
\begin{corollary}\label{cor:Reach-approx-dBC}
If $t\ge\min\{m,n\}=:l$ then the approximate controllability space of the controlled flow with dynamic boundary conditions \eqref{eq:TE-net-db} is given by
\begin{align*}
\bigl[a\sR_t^{\textrm{BC}}\bigr]_1
&={\rL^p[0,1]}\otimes\operatorname{span}\l\{\Psi v,\mathbb B\,\Psi v,\ldots,\mathbb B^{l-1}\Psi v\r\}\\
&={\rL^p[0,1]}\otimes\Psi\operatorname{span}\l\{v,\mathbb A v,\ldots,\mathbb A^{l-1}v\r\}.
\end{align*}
\end{corollary}
\smallskip
In the same manner as before we also obtain the following result on positive controllability.
\begin{corollary}\label{cor:Reach-pos-dBC}
The approximate positive controllability space of the controlled flow with dynamic boundary conditions \eqref{eq:TE-net-db} is given by
\begin{align*}
\bigl[a^+\sR^{\textrm{BC}}\bigr]_1
&={\rL^p[0,1]}\otimes\overline{\operatorname{co}}\,\l\{\mathbb B^k\Psi v : k\in\mathbb N_0\r\}\\
&={\rL^p[0,1]}\otimes\Psi\,\overline{\operatorname{co}}\,\l\{\mathbb A^k v: k\in\mathbb N_0\r\}.
\end{align*}
\end{corollary}
\section*{Conclusion}
Using a new characterization of admissible boundary control operators (see Proposition~\ref{prop:range-B1}) we are able to describe explicitly the \emph{exact} reachability space of the abstract boundary control system $\Sigma_{\textrm{BC}}(\Am,B,Q)${}, cf. \eqref{ACPBC}. Moreover, this approach also allows us to determine the \emph{positive} reachability space obtained by allowing only positive control functions. Our results generalize and improve those obtained in the earlier works \cite{BBEAM:13, EKNS:08, EKKNS:10}, where only approximate controllability, or positive controllability under quite restrictive assumptions, was studied.
| {
"timestamp": "2016-10-05T02:04:50",
"yymm": "1610",
"arxiv_id": "1610.00954",
"language": "en",
"url": "https://arxiv.org/abs/1610.00954",
"abstract": "Using the semigroup approach to abstract boundary control problems we characterize the space of all exactly reachable states. Moreover, we study the situation when the controls of the system are required to be positive. The abstract results are applied to flows in networks with static as well as dynamic boundary conditions.",
"subjects": "Functional Analysis (math.FA); Optimization and Control (math.OC)",
"title": "Exact and Positive Controllability of Boundary Control Systems",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9773708052400942,
"lm_q2_score": 0.8244619306896956,
"lm_q1q2_score": 0.8058050210879906
} |
https://arxiv.org/abs/2104.01689 | What does a typical metric space look like? | The collection $\mathcal{M}_n$ of all metric spaces on $n$ points whose diameter is at most $2$ can naturally be viewed as a compact convex subset of $\mathbb{R}^{\binom{n}{2}}$, known as the metric polytope. In this paper, we study the metric polytope for large $n$ and show that it is close to the cube $[1,2]^{\binom{n}{2}} \subseteq \mathcal{M}_n$ in the following two senses. First, the volume of the polytope is not much larger than that of the cube, with the following quantitative estimates: \[ \left(\tfrac{1}{6}+o(1)\right)n^{3/2} \le \log \mathrm{Vol}(\mathcal{M}_n)\le O(n^{3/2}). \] Second, when sampling a metric space from $\mathcal{M}_n$ uniformly at random, the minimum distance is at least $1 - n^{-c}$ with high probability, for some $c > 0$. Our proof is based on entropy techniques. We discuss alternative approaches to estimating the volume of $\mathcal{M}_n$ using exchangeability, Szemerédi's regularity lemma, the hypergraph container method, and the Kővári--Sós--Turán theorem. | \section{Introduction}
For a positive integer $n$, let
$\br{n} :=\{1, \dotsc, n\}$ and let $\binom{\br{n}}{2}$ be the set of all
unordered pairs of distinct elements in $\br{n}$. A finite metric space
on $n\ge 2$ points can be regarded as an array $(\dist{i}{j})$ with
$\{i,j\}\in \binom{\br{n}}{2}$, where $\dist{i}{j}$ denotes the
distance between the $i^{\text{th}}$ and $j^{\text{th}}$ points in
the space. Such a metric space may also be regarded as an element of
$\mathbb{R}^{\binom{n}{2}}$ satisfying certain restrictions among its
coordinates. Specifically, the set of all such metric spaces is the
cone
\[
\mathcal{C}_n:=\{(\dist{i}{j})\in\mathbb{R}^{\binom{n}{2}} : \dist{i}{j} >
0\text{ and }\dist{i}{j}\le \dist{i}{k} + \dist{k}{j}\text{ for all
$i,j,k$}\}.
\]
Our goal in this work is to study a
`uniformly chosen metric space on $n$ points'. This is interpreted as a metric space sampled according to the Lebesgue measure from a suitable bounded subset of $\mathcal{C}_n$. There are several natural choices for such a bounded subset. In this work, we focus on the diameter normalisation, that is, we bound the maximal diameter of the space from above.
We thus define the \emph{metric polytope}
\[
\mathcal{M}_n:=\{(\dist{i}{j})\in \mathcal{C}_n : \dist{i}{j}\le 2\text{ for all $i,j$}\}.
\]
The specific upper bound on the diameter amounts only to a scaling factor, with the constant two chosen to simplify some of the later expressions.
Understanding the structure of a uniformly chosen metric space in $\mathcal{M}_n$ is
intimately related to understanding the volume of $\mathcal{M}_n$. By
construction, we have the trivial upper bound
\begin{equation}\label{eq:trivial_volume_upper_bound}
\mathrm{Vol}(\mathcal{M}_n)\le 2^{\binom{n}{2}}.
\end{equation}
To obtain a lower bound, we make the following observation: Any triple
$x,y,z\in[1,2]$ satisfies the triangle inequality $x \le y + z$ and,
consequently, $\mathcal{M}_n$ contains the cube $[1,2]^{\binom{n}{2}}$.
This yields the lower bound
\begin{equation}\label{eq:trivial_volume_lower_bound}
\mathrm{Vol}(\mathcal{M}_n)\ge 1.
\end{equation}
The precise behaviour of the volume $\mathrm{Vol}(\mathcal{M}_n)$ seems
difficult to study. For instance, although it seems intuitively clear, we do not know
whether $\mathrm{Vol}(\mathcal{M}_{n+1})\ge \mathrm{Vol}(\mathcal{M}_n)$ for all $n$ (see also
Section~\ref{sec:further_questions}). It is thus interesting to note
that at least the `radius' $\mathrm{Vol}(\mathcal{M}_n)^{1/\binom{n}{2}}$ exhibits
some regularity.
\begin{prop}\label{prop:decreasing_radius}
The sequence $n\mapsto\mathrm{Vol}(\mathcal{M}_n)^{1/\binom{n}{2}}$ is
non-increasing.
\end{prop}
The proposition is deduced from Shearer's inequality, see Section~\ref{sec:monotonicity}.
It allows one to obtain increasingly refined volume
estimates for $\mathrm{Vol}(\mathcal{M}_n)$ via finite computations. For instance,
one may check that $\mathrm{Vol}(\mathcal{M}_3)=4$ (see Figure~\ref{fig:M3})
and hence we have the inequality
\begin{equation}\label{eq:second_trivial_volume_upper_bound}
\mathrm{Vol}(\mathcal{M}_n)\le 4^{\frac{1}{3}\binom{n}{2}}\quad \text{for all $n\ge 3$},
\end{equation}
improving upon the trivial upper
bound~\eqref{eq:trivial_volume_upper_bound}. Mascioni~\cite{Ma05}
calculated $\mathrm{Vol}(\mathcal{M}_4) = \frac{136}{15}$ and used it to deduce a bound
on $\mathrm{Vol}(\mathcal{M}_n)$ that is stronger than
\eqref{eq:second_trivial_volume_upper_bound} but weaker than what
can be deduced from Proposition~\ref{prop:decreasing_radius}. The
proposition and~\eqref{eq:trivial_volume_lower_bound} imply that the limit
\begin{equation}\label{eq:limiting_volume_constant}
\lim_{n\to\infty} \mathrm{Vol}(\mathcal{M}_n)^{1 / \binom{n}{2}}
\end{equation}
exists, which raises the natural question of finding its value.
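The exact values quoted above can also be checked by simulation. The following sketch (not from the paper; sample sizes, seed, and tolerances are arbitrary choices) estimates the fraction $\mathrm{Vol}(\mathcal{M}_n)/2^{\binom{n}{2}}$ by rejection counting, recovering $\mathrm{Vol}(\mathcal{M}_3)=4$ and Mascioni's value $\mathrm{Vol}(\mathcal{M}_4)=\frac{136}{15}$ up to Monte Carlo error.

```python
import itertools
import random

def is_metric(d, n):
    """Check the three triangle inequalities for every triple of points."""
    for i, j, k in itertools.combinations(range(n), 3):
        a, b, c = d[i, j], d[i, k], d[j, k]
        if a > b + c or b > a + c or c > a + b:
            return False
    return True

def volume_fraction(n, samples=100_000, seed=0):
    """Monte Carlo estimate of Vol(M_n) / 2^binom(n,2): draw each distance
    uniformly from (0, 2] and count how often all triangle inequalities hold."""
    rng = random.Random(seed)
    pairs = list(itertools.combinations(range(n), 2))
    hits = 0
    for _ in range(samples):
        d = {p: 2 * rng.random() for p in pairs}
        hits += is_metric(d, n)
    return hits / samples

# Expected fractions: Vol(M_3)/2^3 = 4/8 = 0.5 and Vol(M_4)/2^6 = (136/15)/64.
```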
Our main result is that a uniformly chosen metric space is `almost
in $[1,2]^{\binom{n}{2}}$', as the following two theorems make
precise. Our first theorem shows that the limiting constant
\eqref{eq:limiting_volume_constant} equals one, that is,
\begin{equation}\label{eq:limit constant is one}
\mathrm{Vol}(\mathcal{M}_n) = 2^{o(n^2)}\quad\text{as $n\to\infty$}.
\end{equation}
In fact, our analysis goes much further and determines the \emph{second-order term} in the logarithm of the volume up to a multiplicative constant.\footnote{Note that~\eqref{eq:main_volume_estimates} hides the first-order term $(2/2)^{\binom{n}{2}}$, which would become $(M/2)^{\binom{n}{2}}$ if we chose to normalise the diameter of $\mathcal{M}_n$ to be $M$, instead of $2$.}
\begin{thm}\label{thm:volume_estimate}
The following asymptotic estimates hold as $n \to \infty$:
\begin{equation}\label{eq:main_volume_estimates}
\exp\big((\nicefrac{1}{6}-o(1))n^{3/2}\big) \le \mathrm{Vol}(\mathcal{M}_n)\le \exp\big(Cn^{3/2}\big)
\end{equation}
for some absolute constant $C$.
\end{thm}
Our second theorem studies the minimum distance in a typical metric space in $\mathcal{M}_n$. Let $d$ be a uniformly sampled metric space from $\mathcal{M}_n$. Since
\begin{equation*}
\P\left(\min_{i,j} d_{ij} > 1-\delta\right) \le \frac{(1+\delta)^{\binom{n}{2}}}{\mathrm{Vol}(\mathcal{M}_n)},
\end{equation*}
the lower bound in Theorem~\ref{thm:volume_estimate} implies that, for any $a<\frac{1}{3}$,
\begin{equation}
\label{eq:min-dij-lower}
\P\left(\min_{i,j} d_{ij} \le 1-\frac{a}{\sqrt{n}}\right) \to 1\quad\text{as $n\to\infty$}.
\end{equation}
Complementing this fact, we show that, in a typical metric space, the minimum distance is polynomially close to one.
\begin{thm}\label{thm:minimal_distance}
There exist constants $C,c>0$ such that, for all $n\ge 2$, if $d$ is a uniformly sampled metric space from $\mathcal{M}_n$, then
\begin{equation*}
\P\left(\min_{i,j} d_{ij} \le 1-n^{-c}\right)\le C n^{-c}.
\end{equation*}
\end{thm}
\begin{figure}
\begin{centering}
\includegraphics{M3.eps}
\end{centering}
\caption{$\mathcal{M}_3$ inside $[0,2]^3$.\label{fig:M3}}
\end{figure}
It would be interesting to find the typical order of $1 - \min_{i,j} d_{ij}$, see also Section~\ref{sec:further_questions}. Our proof shows that it is at most~$n^{-1/30}$.
The heart of this work is the proof of the upper bound on the volume of $\mathcal{M}_n$ in~\eqref{eq:main_volume_estimates},
which relies on entropy techniques. A conceptual outline of our argument is presented in the next section.
Our proof of the lower bound on the volume is a fairly simple application of the Local Lemma of Erd\H{o}s and Lov\'asz~\cite{ErLo75}, see Section~\ref{sec:lower}.
The starting point for the proof of Theorem~\ref{thm:minimal_distance} is an upper bound on the probability
that the distance between a \emph{fixed} pair of points is shorter than one (Proposition~\ref{prop:distance-lower-tail}), which is a by-product of
our proof of the upper bound on the volume of $\mathcal{M}_n$. The assertion of the theorem is then deduced via elementary, but nontrivial,
combinatorial arguments. Several alternative approaches to proving upper bounds on the volume of $\mathcal{M}_n$, which lead to results weaker
than Theorem~\ref{thm:volume_estimate}, are reviewed in Section~\ref{sec:other_approaches}.
\subsection*{A remark on precedence}
This paper has been long in writing and a number of results have appeared in the interim, notably Mubayi and Terry~\cite{mubayi2019discrete} and Balogh and Wagner~\cite{BalWag16}, who considered the number of metric spaces with distances in the discrete set $\{1, \dotsc, M\}$. The methods of~\cite{BalWag16} also yielded the upper bound $\mathrm{Vol}(\mathcal{M}_n) \le \exp(n^{11/6+o(1)})$. (We elaborate on the relation between the discrete and the continuous model in Section~\ref{sec:discrete-problem}.) These papers have kindly acknowledged our precedence, but, for fairness, it should be noted that we did not have the upper bound of Theorem~\ref{thm:volume_estimate} then, only a bound of the form $\mathrm{Vol}(\mathcal{M}_n)\le\exp(n^{2-c})$; in particular, the volume bound of~\cite{BalWag16} was stronger than ours. Several months before we streamlined the entropy argument underlying the proof of the upper bound in Theorem~\ref{thm:volume_estimate} to yield the estimate $\mathrm{Vol}(\mathcal{M}_n) \le \exp(Cn^{3/2})$, in a joint work with Rob Morris, we found a more efficient version of the argument of Balogh and Wagner~\cite{BalWag16}, based on the method of hypergraph containers, that gives the estimate $\mathrm{Vol}(\mathcal{M}_n) \le \exp\big( C n^{3/2} (\log n)^3 \big)$; we present a detailed sketch of this argument in Section~\ref{sec:container-method}.
\subsection*{Further directions and related work}
We believe that the general method of relating entropy and independence that underlies our proof of Theorem~\ref{thm:volume_estimate} will find many further applications. In particular, the first and the fourth named authors adapted the methods of this work, and combined them with the arguments underlying the proofs of the hypergraph container theorems~\cite{BalMorSam, SaxTho}, to study lower tails of random variables that can be expressed as polynomials of independent Bernoulli random variables~\cite{KozSam}.
Similar ideas of relating entropy and independence were used by Tao~\cite[Lemma~4.3]{MR2212136} to develop a probabilistic interpretation of Szemer\'edi's regularity lemma. Concurrently with the writing of this paper, Ellis, Friedgut, Kindler, and Yehudayoff~\cite{EllFriKinYeh15} used a related approach to prove stability versions of the Loomis--Whitney inequality and the more general Uniform Cover inequality.
The pigeonhole principle argument that appears in our proof outline below (see also Lemma~\ref{lem:h_m_bound_with_difference}) is somewhat reminiscent of the Lov\'asz--Szeg\'edy Hilbert space regularity lemma, see the proof of \cite[Lemma~4.1]{MR2306658} (we thank Bal\'azs R\'ath for pointing out this connection).
\subsection{Proof outline}\label{sec:proof outline}
Suppose that $d$ is a uniformly chosen metric space from $\mathcal{M}_n$. Conceptually, our argument consists of three steps.
\subsubsection*{Step I (conditioning)}
We say that a subset $F \subseteq \binom{\br{n}}{2}$ has the \emph{conditioned almost independence} property if the following holds: Conditioned on all the distances $d_f$ with $f \in F$, for each triangle $\{i,j,k\}$ whose edges lie outside of $F$, the distances $d_{ij}$, $d_{ik}$, and $d_{jk}$ become close to mutually independent.
The goal of the first step is to find a `small' set $F$ with the above property.
In order to show this, for $m \ge 0$, define the set
\[
F_m := \Big\{\{s,t\} \in \binom{\br{n}}{2} : \max\{s,t\} > n-m\Big\}
\]
and examine the conditional entropy
\[
h(F_m) := H\big(d_{12} \mid\{d_f : f \in F_m\}\big).
\]
Since $F_{m+1} \supseteq F_m$, monotonicity of conditional entropy implies that the sequence $m \mapsto h(F_m)$ is nonincreasing.
Moreover, it is not difficult to bound $h(F_0)$ from above and $h(F_{\sqrt{n}})$ from below by absolute constants.
Thus, the pigeonhole principle produces an $m_0$ with $0 \le m_0 \le \sqrt{n}$ for which
\begin{equation}
\label{eq:hm-pigeonhole}
h(F_{m_0})-h(F_{{m_0}+1}) \le \frac{C}{\sqrt{n}}
\end{equation}
for some absolute constant $C$. The set $F$ described above is taken to be $F_{m_0}$. Its cardinality is at most $m_0 n \le n^{3/2}$. We now argue that $F$ has the conditioned almost independence property. Since $\{1,n-m_0\}, \{2,n-m_0\} \in F_{m_0+1} \setminus F_{m_0}$, inequality~\eqref{eq:hm-pigeonhole} gives (again using the monotonicity of conditional entropy)
\begin{equation*}
h(F) - h\big(F \cup \{\{1,n-m_0\}, \{2,n-m_0\}\}\big) \le \frac{C}{\sqrt{n}}.
\end{equation*}
Symmetry considerations show that, in fact, for every ordered triple of distinct $i, j, k \in \br{n-m_0}$, we have
\begin{equation}
\label{eq:dijk-relative-entropy}
H\big(d_{ij} \mid \{d_f : f \in F\}\big) - H\big(d_{ij} \mid \{d_f : f \in F\} \cup \{d_{ik}, d_{jk}\}\big) \le \frac{C}{\sqrt{n}}.
\end{equation}
Inequality~\eqref{eq:dijk-relative-entropy} is the notion of almost independence that we need. It may be conveniently restated in terms of the average \emph{Kullback--Leibler divergence} between the conditional (on all $d_f$ with $f \in F$) joint distribution of $d_{ij}$, $d_{ik}$, $d_{jk}$ and the product of the (conditional) marginal distributions of $d_{ij}$, $d_{ik}$, and $d_{jk}$:
\begin{equation}
\label{eq:dijk-relative-DKL}
\mathbb{E}\big[\DKL{(d_{ij}, d_{ik}, d_{jk})}{d_{ij} \times d_{ik} \times d_{jk}}\big] \le \frac{C}{\sqrt{n}}.
\end{equation}
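For completeness, the pigeonhole step above amounts to a telescoping sum: assuming absolute constants $C_1$ and $C_2$ with $h(F_0)\le C_1$ and $h(F_{\lceil\sqrt{n}\rceil})\ge -C_2$ (the bounds alluded to earlier in this step),
\[
\sum_{m=0}^{\lceil\sqrt{n}\rceil - 1}\big(h(F_m)-h(F_{m+1})\big)=h(F_0)-h(F_{\lceil\sqrt{n}\rceil})\le C_1+C_2,
\]
and, since all summands are nonnegative, at least one of them is at most $(C_1+C_2)/\sqrt{n}$.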
\subsubsection*{Step II (subadditivity)}
Since $d$ is a uniformly sampled metric space from $\mathcal{M}_n$,
\[
H(d) = \log\big(\mathrm{Vol}(\mathcal{M}_n)\big).
\]
Using the chain rule for conditional entropy, we may write
\begin{equation}
\label{eq:chain-rule-entropy}
H(d) = \underbrace{H(\{d_f : f \in F\})}_{\alpha} + \underbrace{H(\{d_f : f \notin F\} \mid \{d_f : f \in F\})}_{\beta}.
\end{equation}
Since $d_f \in [0,2]$ for every $f \in F$, we have $\alpha \le |F| \log 2$. By considering an arbitrary Steiner triple system on $n - O(1)$ vertices, one sees that the complement of $F$ can be partitioned into a family $\mathcal{T}$ of edge-disjoint triangles and a leftover set of pairs $G$ with $|G| \le C (|F| + n)$. Using subadditivity of entropy,
\begin{equation}
\label{eq:beta-upper-bound}
\beta \le |G| \log 2 + \sum_{\{i,j,k\} \in \mathcal{T}} H(d_{ij}, d_{ik}, d_{jk} \mid \{d_f : f \in F\}).
\end{equation}
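The edge-disjoint triangle decomposition used above is supplied by classical design theory. As an illustrative sketch (the Bose construction for $3m$ points with $m$ odd; this particular construction is our choice of example, not taken from the paper), a Steiner triple system can be built explicitly:

```python
import itertools

def bose_sts(m):
    """Bose construction of a Steiner triple system on n = 3m points, m odd:
    points are pairs (x, i) in Z_m x {0, 1, 2}, encoded as 3x + i, and every
    pair of points lies in exactly one triple."""
    assert m % 2 == 1, "the construction needs m odd"
    half = (m + 1) // 2  # multiplicative inverse of 2 modulo m
    point = lambda x, i: 3 * x + i
    # Triples of the first kind: a full column {(x,0), (x,1), (x,2)}.
    triples = [frozenset(point(x, i) for i in range(3)) for x in range(m)]
    # Triples of the second kind: two points at level i, one at level i+1.
    for x, y in itertools.combinations(range(m), 2):
        for i in range(3):
            triples.append(frozenset({point(x, i), point(y, i),
                                      point((x + y) * half % m, (i + 1) % 3)}))
    return triples
```

Every pair of the $3m$ points lies in exactly one triple; for a general $n$ one applies this to a slightly smaller point set and places the $O(n)$ leftover pairs into the set $G$.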
\subsubsection*{Step III (entropy-maximising distributions)}
Combining the above two steps, we arrive at the problem of bounding $H(d_{ij}, d_{ik}, d_{jk} \mid \{d_f : f \in F\})$, which, by conditioned almost independence, amounts to estimating the largest entropy of a vector that is supported on~$\mathcal{M}_3$ and satisfies~\eqref{eq:dijk-relative-DKL}. We first observe that~\eqref{eq:dijk-relative-DKL} implies the following inequality:
\[
\mathbb{E}\big[\P(d_{ij} \times d_{ik} \times d_{jk} \notin \mathcal{M}_3)\big] \le \frac{C}{\sqrt{n}},
\]
see Lemma~\ref{lem:Kullback_Leibler_and_support}. This allows us to bound $H(d_{ij}, d_{ik}, d_{jk} \mid \{d_f : f \in F\})$ by the largest entropy of a vector of (fully) independent random variables that is almost supported on~$\mathcal{M}_3$. To this end, we prove a general statement (Theorem~\ref{thm:entropy_bound_for_almost_independent}) showing that the largest entropy of a vector of independent random variables that is almost supported on a convex set $\mathcal{P}$ cannot be much larger than the logarithm of the volume of the largest box contained in $\mathcal{P}$. In the case $\mathcal{P} = \mathcal{M}_3$, the (unique) largest such box is $[1,2]^3$ (Lemma~\ref{lem:independent_max_volume}) and, consequently, the entropy cannot be much larger than zero.
The three steps suffice to show that the volume of $\mathcal{M}_n$ is $\exp(o(n^2))$, that is, that the limiting constant in~\eqref{eq:limiting_volume_constant} is one. Moreover, the various error terms are polynomially related and the above argument shows the quantitative estimate $\mathrm{Vol}(\mathcal{M}_n) \le \exp(Cn^{2-c})$ for an explicit $c > 0$. To obtain the sharp exponent $3/2$, as in the statement of Theorem~\ref{thm:volume_estimate}, several enhancements to the above argument are made. In particular, the following two bounds are proved:
\begin{align}
\label{eq:sketch-chain-rule-Fm}
\log(\mathrm{Vol}(\mathcal{M}_n)) & \le \sum_{m=0}^{n-2} |F_{m+1} \setminus F_m| \cdot h(F_m) \le n \cdot \sum_{m=0}^{n-2} h(F_m), \\
\label{eq:hm-upper-bound}
h(F_m) & \le C \big(h(F_m) - h(F_{m+1})\big)^{1/3}.
\end{align}
The bound~\eqref{eq:hm-upper-bound} implies that $h(F_m) \le C' (m+1)^{-1/2}$, as shown in Lemma~\ref{lem:h_m_bound}, which gives the claimed estimate after substituting it into~\eqref{eq:sketch-chain-rule-Fm}. The estimate~\eqref{eq:sketch-chain-rule-Fm} improves upon the subadditivity step, making use of the symmetry inherent in the specific choice of the sets~$F_m$, see~\eqref{eq:volume_decomposition}. Inequality~\eqref{eq:hm-upper-bound} is obtained using a more careful analysis in steps one and three above.
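The decay $h(F_m) \le C'(m+1)^{-1/2}$ extracted from~\eqref{eq:hm-upper-bound} can be sanity-checked on the extremal sequence, for which the inequality holds with equality at every step (a toy iteration; the initial value and the constant are arbitrary choices, not from the paper):

```python
def extremal_sequence(C=1.0, a0=0.5, steps=10_000):
    """Iterate a_{m+1} = a_m - (a_m / C)**3, the slowest decay compatible
    with the bound a_m <= C * (a_m - a_{m+1})**(1/3)."""
    seq = [a0]
    for _ in range(steps):
        seq.append(seq[-1] - (seq[-1] / C) ** 3)
    return seq

# Comparison with the ODE a' = -a^3 suggests a_m is roughly (2m + 1/a0^2)**(-1/2),
# i.e. decay of order m**(-1/2), matching the claimed bound.
```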
\section{Entropy and almost independence}
\label{sec:entropy}
\subsection{Differential entropy}
\label{sec:differential-entropy}
We now recall the notion and some properties of differential
entropy, the entropy of continuous random variables. Readers who are
used to the entropy of discrete random variables (Shannon's entropy)
should keep in mind that in the continuous case entropies can be
either positive or negative; the value $0$ plays no special role.
Given an absolutely continuous probability measure $\mu$ on $\mathbb{R}^k$
with density $f$ and a~random variable $X \sim \mu$, the
\emph{differential entropy} (or simply \emph{entropy}) of $\mu$ (or
of $X$) is defined as
\begin{equation}\label{eq:H_def}
H(\mu) := H(X) := -\int \log(f(x)) f(x)dx = -\int \log\left(f(x)\right)d\mu(x),
\end{equation}
whenever the above integral is well-defined. As is customary, if random variables
$X_1,\dotsc,X_m$ have a joint density function, we will write $H(X_1, \dotsc, X_m)$
for the entropy of the random vector $(X_1, \dotsc, X_m)$.
Observe that if $X$ takes values in a compact set $K \subseteq \mathbb{R}^k$, then by Jensen's
inequality,
\begin{equation}\label{eq:entropy_on_compact}
\begin{split}
H(X) & = -\int_K \log(f(x)) f(x)dx = \int_{K\cap\{f>0\}} \log\left(\frac{1}{f(x)}\right) f(x) dx \\
& \le \log\left(\int_{K \cap \{f > 0\}} \frac{1}{f(x)} f(x)dx\right) \le \log(\mathrm{Vol}(K)).
\end{split}
\end{equation}
Differential entropy may be negative, e.g., if $\mathrm{Vol}(K) < 1$ above.
It could even happen that $H(X) = -\infty$. However, one easily
checks that if the density $f$ is bounded, then $H(X) >
-\infty$. In view of this, for the sake of simplicity, from now on we
focus on probability measures on $\mathbb{R}^k$ that are
compactly supported and admit a bounded density. We denote
the family of all such measures by $\mathcal{A}(\mathbb{R}^k)$. We emphasize that $\mathcal{A}(\mathbb{R}^k)$
is closed under projections.
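As a toy numerical illustration of~\eqref{eq:H_def} and of the sign discussion above (not from the paper), one can approximate the entropy of two uniform densities by a midpoint rule:

```python
import math

def differential_entropy(density, a, b, n=100_000):
    """Midpoint-rule approximation of H = -integral of f*log(f) over [a, b]."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        fx = density(a + (i + 0.5) * h)
        if fx > 0:
            total -= fx * math.log(fx) * h
    return total

# Uniform on [0, 2] (density 1/2): H = log 2 > 0.
# Uniform on [0, 1/2] (density 2):  H = -log 2 < 0.
H_big = differential_entropy(lambda x: 0.5, 0.0, 2.0)
H_small = differential_entropy(lambda x: 2.0, 0.0, 0.5)
```

Both values attain the bound in~\eqref{eq:entropy_on_compact} with equality, the second being negative since $\mathrm{Vol}(K)=\tfrac12<1$.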
\begin{fact}
\label{fact:hereditary-density}
If $X\in\mathbb{R}^{k_1}$ and $Y\in\mathbb{R}^{k_2}$ have a joint distribution in $\mathcal{A}(\mathbb{R}^{k_1+k_2})$,
then the distribution of $X$ is in $\mathcal{A}(\mathbb{R}^{k_1})$.
\end{fact}
Keeping in mind the
case of equality in Jensen's inequality and applying it to
\eqref{eq:entropy_on_compact}, let us note the following for future
reference.
\begin{lem}
\label{lem:entropy-compact-support}
If the distribution of a random variable $X$ is in $\mathcal{A}(\mathbb{R}^k)$ and $X$ takes values in a compact set $K$, then
\[
-\infty < H(X) \le \log(\mathrm{Vol}(K)).
\]
The second inequality holds with equality if and only if $X$ is uniform on $K$.
\end{lem}
Further use is made of the following generalisation of~\eqref{eq:entropy_on_compact} for which we also provide a quantitative `stability' estimate.
\begin{lem}\label{lem:entropy_bound_for_sub_probability}
Let $K\subseteq\mathbb{R}^k$ be a bounded measurable set and let $f\colon\mathbb{R}^k\to[0,\infty)$ be a bounded measurable function. Set $p := \int_K f(x)dx$. Then
\begin{equation}\label{eq:entropy on bounded set}
-\int_K f(x)\log(f(x))dx \le
p\log\left(\frac{\mathrm{Vol}(K)}{p}\right),
\end{equation}
where we interpret the right-hand side as $0$ if $p=0$, and define $0\log 0=0$ for the left-hand side.
Moreover, if $K$ admits a partition $K = K_1 \cup K_2$ for measurable $K_1, K_2$ and either
\[
\frac{\int_{K_1} f(x)dx}{\int_K f(x)dx} \le \frac{1}{10} \cdot \frac{\mathrm{Vol}(K_1)}{\mathrm{Vol}(K)}\quad\text{or}\quad\frac{\int_{K_1} f(x)dx}{\int_K f(x)dx} \ge 10 \cdot \frac{\mathrm{Vol}(K_1)}{\mathrm{Vol}(K)}
\]
%
then
\begin{multline}
\label{eq:entropy-bound-non-uniform-sub-probability}
-\int_K f(x)\log(f(x)) \, dx \le p\log\left(\frac{\mathrm{Vol}(K)}{p}\right)\\ - \frac{p}{4} \cdot \max\left\{\frac{\mathrm{Vol}(K_1)}{\mathrm{Vol}(K)},\frac{\int_{K_1} f(x)dx}{\int_K f(x)dx}\right\}.
\end{multline}
\end{lem}
\begin{proof}
The bound is trivial if $p=0$. Otherwise, since $g := f/p$ satisfies
\[
-\int_Kf(x) \log(f(x)) \, dx = - p\int_Kg(x) \log(g(x)) \, dx - p\log p,
\]
it suffices to prove the results when $p=1$, as we now assume.
The estimate~\eqref{eq:entropy on bounded set} follows from the same calculation as in~\eqref{eq:entropy_on_compact}.
We proceed to prove~\eqref{eq:entropy-bound-non-uniform-sub-probability}. Set $r := \frac{\int_{K_1} f(x)dx}{\int_K f(x)dx} = \int_{K_1} f(x)dx$ and $q := \frac{\mathrm{Vol}(K_1)}{\mathrm{Vol}(K)}$ so that either $r\le \frac{1}{10}q$ or $r\ge 10q$. Invoking~\eqref{eq:entropy on bounded set} twice yields
\begin{align*}
- \int_K f(x) \log(f(x))dx & =- \left(\int_{K_1} + \int_{K_2}\right) f(x) \log(f(x))dx \\
& \le r \log\left(\frac{\mathrm{Vol}(K_1)}{r}\right) + (1-r) \log\left(\frac{\mathrm{Vol}(K_2)}{(1-r)}\right),
\end{align*}
which, by the definition of $q$, is easily seen to be equivalent to:
\begin{multline*}
\log(\mathrm{Vol}(K)) + \int_K f(x) \log(f(x))dx \\
\ge r \log\left(\frac{r}{q}\right) + (1-r) \log \left(\frac{1-r}{1-q}\right).
\end{multline*}
Define $D(x) := x\log(x/q) + (1-x)\log((1-x)/(1-q))$, so that the right-hand side above is $D(r)$. (An observant reader will recognise that $D(r)$ is the Kullback--Leibler divergence between Bernoulli random variables with success probabilities $r$ and $q$, respectively.) Note that $D(q) = 0$ and that
\begin{equation}
\label{eq:DKL-derivative}
D'(x) = \log\left(\frac{x}{q}\right) - \log\left(\frac{1-x}{1-q}\right).
\end{equation}
Observe further that $D'(x)$ is an increasing function of $x$ and $D'(q)=0$. This implies that, in the case where $r \le q/10$,
\[
D(r) \ge \frac{q-r}{2} \cdot \left(-D'\left(\frac{r+q}{2}\right)\right) \ge \frac{9q}{20} \cdot \log \left(\frac{20}{11}\right) \ge \frac{q}{4}
\]
and, in the case where $r\ge 10 q$,
\[
D(r) \ge \frac{r-q}{2} \cdot D'\left(\frac{r+q}{2}\right) \ge \frac{9r}{20} \cdot \log \left(\frac{11}{2}\right) \ge \frac{r}{4}.\qedhere
\]
\end{proof}
\subsection{Conditional entropy}
\label{sec:conditional-entropy}
For random variables $X \in \mathbb{R}^{k_1}$ and $Y \in \mathbb{R}^{k_2}$ having a joint density $f$ on
$\mathbb{R}^{k_1 + k_2}$, the \emph{conditional entropy} of $X$ given $Y$,
denoted by $H(X \mid Y)$, is the average over $Y$ of the entropy of
the conditional distribution of $X$ given~$Y$. Formally, if we write
\begin{equation}
\label{eq:conditioned-density}
g(y):=\int f(x,y)\,dx\quad\text{and}\quad f_y(x):=\frac{f(x,y)}{g(y)},
\end{equation}
with $f_y$ defined for almost every $y$ with respect to the
distribution of~$Y$, then
\begin{equation}
\label{eq:conditional_entropy_def}
H(X \mid Y) := \int H(X_{\{Y = y\}}) g(y)\,dy = -\iint
\log\left(f_y(x)\right) f_y(x) dx\, g(y)\,dy,
\end{equation}
whenever the above integral is well defined, where $X_{\{Y=y\}}$ denotes the random variable $X$ conditioned on the event $\{Y=y\}$.
Note that, using Fact~\ref{fact:hereditary-density}, $H(X \mid Y)$ is well defined and finite whenever the joint distribution of $X$ and $Y$ is in $\mathcal{A}(\mathbb{R}^{k_1+k_2})$.
\subsection{Kullback--Leibler divergence}
\label{sec:KL-divergence}
Given two absolutely continuous probability measures $\mu$ and $\nu$
on~$\mathbb{R}^k$ with densities $f$ and $g$, respectively, and random
variables $X \sim \mu$ and $Y \sim \nu$, we define the
\emph{Kullback--Leibler divergence} between $\mu$ and $\nu$ (or between $X$
and $Y$) by
\begin{equation}
\label{eq:DKL-def}
\DKL{\mu}{\nu} := \DKL{X}{Y} := \int \log\left( \frac{f(x)}{g(x)}\right)f(x)\,dx.
\end{equation}
Since $\log y \ge 1 - 1/y$ for every $y > 0$, we see that
\begin{equation}
\label{eq:DKL-nonnegative}
\DKL{\mu}{\nu} \ge \int \left(1-\frac{g(x)}{f(x)}\right) f(x)\, dx \ge 0.
\end{equation}
In particular, the integral in~\eqref{eq:DKL-def} is well defined,
possibly as $+\infty$.
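As a toy numerical illustration of~\eqref{eq:DKL-def} and~\eqref{eq:DKL-nonnegative} (not from the paper), consider the divergence between two uniform densities:

```python
import math

def kl_divergence(f, g, a, b, n=100_000):
    """Midpoint-rule approximation of D(f || g) = integral of f*log(f/g) over
    [a, b], where [a, b] contains the support of f and g > 0 there."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h
        fx = f(x)
        if fx > 0:
            total += fx * math.log(fx / g(x)) * h
    return total

# f uniform on [0, 1], g uniform on [0, 2]: D(f || g) = log 2, while the
# reverse divergence D(g || f) is infinite, since g charges the set {f = 0}.
uniform1 = lambda x: 1.0 if x <= 1 else 0.0
uniform2 = lambda x: 0.5
D = kl_divergence(uniform1, uniform2, 0.0, 2.0)
```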
We note a simple relation between the entropy of a pair of random variables and their Kullback--Leibler divergence.
Let $X$ and $Y$ be random variables taking values in $\mathbb{R}^{k_1}$ and $\mathbb{R}^{k_2}$, respectively, with a joint distribution in $\mathcal{A}(\mathbb{R}^{k_1+k_2})$. A direct calculation shows that
\begin{equation}\label{eq:entropy_KL_relation}
H(X,Y) = H(X) + H(Y) - \DKL{(X,Y)}{X\times Y},
\end{equation}
where we use the notation $X\times Y$ to denote a random variable whose distribution is the product of the marginal distribution of $X$ and the marginal distribution of $Y$; in other words, $X \times Y$ is composed of independent copies of $X$ and $Y$.
\subsection{Properties of entropy}
\label{sec:properties-entropy}
We now recall some standard facts about entropy.
\begin{lem}\label{prop:basic_entropy_properties}
Suppose $X\in\mathbb{R}^{k_1}$, $Y\in\mathbb{R}^{k_2}$, $Z\in\mathbb{R}^{k_3}$ have a joint distribution in $\mathcal{A}(\mathbb{R}^{k_1+k_2+k_3})$. Then
\begin{enumerate}[label={\rm(\textit{\roman*})}]
\item
\label{item:entropy-prop-1}
$H(X,Y) = H(X \mid Y) + H(Y)$,
\item
\label{item:entropy-prop-2}
$H(X \mid Y)\le H(X)$,
\item
\label{item:entropy-prop-3}
$H(X,Y) \le H(X) + H(Y)$,
\item
\label{item:entropy-prop-4}
$H(X \mid Y,Z)\le H(X \mid Y)$.
\end{enumerate}
\end{lem}
\begin{proof}
Note first that our assumption on the joint distribution of $X$, $Y$, and $Z$ implies that all entropies appearing
in the statement of the lemma are well defined, see Fact~\ref{fact:hereditary-density} and Lemma~\ref{lem:entropy-compact-support}. To see~\ref{item:entropy-prop-1}, let $f$ be the joint density of $X$ and $Y$ and define $g$ and $f_y$
as in~\eqref{eq:conditioned-density}. Then
\[
\begin{split}
\lefteqn{H(X,Y) = - \iint \log(f(x,y)) f(x,y)\, dx\, dy}\quad & \\
& = - \iint \log(f_y(x)g(y)) f_y(x)g(y)\, dx\,dy \\
& = - \iint \log(f_y(x)) f_y(x)\, dx\, g(y)\, dy - \int \log(g(y)) g(y) \int f_y(x)\,dx\,dy \\
& = H(X \mid Y) + H(Y).
\end{split}
\]
Inequality~\ref{item:entropy-prop-3} is a direct consequence of~\eqref{eq:DKL-nonnegative} and~\eqref{eq:entropy_KL_relation}
whereas~\ref{item:entropy-prop-2} follows immediately from~\ref{item:entropy-prop-1} and~\ref{item:entropy-prop-3}.
To see~\ref{item:entropy-prop-4}, let $f$ be the joint density of $X$, $Y$, and $Z$, and let
\[
g(y) := \iint f(x,y,z)\, dx\, dz.
\]
It is not hard to see that
\begin{align*}
H(X \mid Y,Z) &= \int H(X_{\{Y=y\}} \mid Z_{\{Y=y\}}) g(y) dy \\
&\le \int H(X_{\{Y=y\}}) g(y) dy = H(X \mid Y),
\end{align*}
where the inequality follows from~\ref{item:entropy-prop-2}.
\end{proof}
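As a concrete illustration of~\ref{item:entropy-prop-1} (a toy example, not from the paper), let $(X,Y)$ be uniform on the triangle $\{0<x<y<1\}$, so that the joint density is $2$ and $H(X,Y)=-\log 2$. The marginal density of $Y$ is $2y$, whence
\[
H(Y)=-\int_0^1 2y\log(2y)\,dy=\tfrac12-\log 2,
\]
while, given $Y=y$, the variable $X$ is uniform on $(0,y)$, so
\[
H(X\mid Y)=\int_0^1 2y\log y \,dy=-\tfrac12 .
\]
Indeed $H(X\mid Y)+H(Y)=-\log 2=H(X,Y)$.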
A powerful tool for comparing entropies is the following inequality
originally proved by Shearer (see~\cite{shearer_ineq}) for Shannon's
entropy. As the literature usually deals with the discrete case, we provide a short proof based on the treatment in~\cite{AlSp}.
\begin{thm}[Shearer's inequality]\label{thm:Shearers_inequality}
Let $X_1,\dotsc,X_m$ be random variables with a joint density which is bounded and compactly supported.
Let $\mathcal{I} \subseteq 2^{\br{m}}$ be a collection of subsets which $r$-covers the set $\br{m}$, i.e.,~has the property
that for each $i \in \br{m}$,
\begin{equation}\label{eq:r_cover}
|\{I\in \mathcal{I} : i\in I\}| = r.
\end{equation}
Then
\begin{equation}
\label{eq:Shearer}
H(X_1,\dotsc, X_m) \le \frac{1}{r}\sum_{I \in \mathcal{I}} H(\{X_i: i\in I\}).
\end{equation}
\emph{(Since we did not preclude $\emptyset\in\mathcal{I}$, we define the entropy of an empty collection of random variables to be zero.)}
\end{thm}
\begin{proof}
We prove the statement by induction on $r$. The case $r=1$ follows immediately from~\ref{item:entropy-prop-3} in Lemma~\ref{prop:basic_entropy_properties}.
Suppose now that $r > 1$. If $\br{m} \in \mathcal{I}$, then we easily obtain~\eqref{eq:Shearer} invoking the inductive assumption with $\mathcal{I}$
replaced by the $(r-1)$-cover $\mathcal{I} \setminus \{\br{m}\}$. Otherwise, pick $I_1, I_2 \in \mathcal{I}$ such that both $I_1 \setminus I_2$
and $I_2 \setminus I_1$ are non-empty. It follows
from~\ref{item:entropy-prop-4} in
Lemma~\ref{prop:basic_entropy_properties} that
\begin{equation}\label{eq:17.5}
H(I_1\setminus I_2\mid I_2)\le H(I_1\setminus I_2\mid I_1\cap I_2)
\end{equation}
where we write $H(I)$ as shorthand for $H(\{X_i:i\in I\})$, and
similarly for conditional entropies.
Consequently, by~\ref{item:entropy-prop-1} in
Lemma~\ref{prop:basic_entropy_properties},
\begin{multline*}
H(I_1\cup I_2)+H(I_1\cap I_2)
\stackrel{\textrm{\ref{item:entropy-prop-1}}}{=}
H(I_1\setminus I_2 \mid I_2) + H(I_2) + H(I_1\cap I_2 ) \\
\stackrel{\textrm{(\ref{eq:17.5})}}{\le}
H(I_1\setminus I_2 \mid I_1\cap I_2) + H(I_2) + H(I_1\cap I_2 )
\stackrel{\textrm{\ref{item:entropy-prop-1}}}{=}
H(I_1)+H(I_2).
\end{multline*}
If we now replace $I_1$ and $I_2$ with $I_1 \cup I_2$ and $I_1 \cap I_2$, then $\mathcal{I}$ remains an $r$-cover and the sum on the right-hand side
of~\eqref{eq:Shearer} can only decrease. It is clear that after a finite number of such modifications we will eventually arrive at the case when
$\br{m}\in \mathcal{I}$.
\end{proof}
We remark that, since differential entropy may be negative, and in contrast with the case of Shannon entropy (the entropy of discrete random variables), inequality \eqref{eq:Shearer} need not hold when the equality in the $r$-cover condition \eqref{eq:r_cover} is relaxed to a greater-or-equal sign.
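Shearer's inequality can be tested numerically on the uniform distribution over $\mathcal{M}_3$ with the $2$-cover $\{\{1,2\},\{1,3\},\{2,3\}\}$ of the three coordinates (a sketch, not from the paper; the grid resolution is an arbitrary choice). By symmetry all three pair entropies coincide, so the inequality reads $\log \mathrm{Vol}(\mathcal{M}_3) \le \tfrac{3}{2}\, H(d_{12},d_{13})$:

```python
import math

def section_length(x, y):
    """Length of {z in (0,2] : (x, y, z) satisfies all triangle inequalities}."""
    return max(0.0, min(x + y, 2.0) - abs(x - y))

def pair_entropy(n=400):
    """H(d12, d13) for d uniform on M_3, by a midpoint rule on [0, 2]^2; the
    marginal density of a coordinate pair is section_length / Vol(M_3)."""
    h = 2.0 / n
    H = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        for j in range(n):
            g = section_length(x, (j + 0.5) * h) / 4.0
            if g > 0:
                H -= g * math.log(g) * h * h
    return H

# Numerically H(d12, d13) is about 1.193, and indeed
# log Vol(M_3) = log 4 = 1.386... <= 1.5 * 1.193 = 1.790...
```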
\subsection{A triangle inequality for the Kullback--Leibler divergence}
The proof of Theorem~\ref{thm:volume_estimate} will require the following simple `triangle inequality' for Kullback--Leibler divergences.
\begin{lem}
\label{lem:DKL-triangle-ineq}
Suppose that $X$, $Y$, and $Z$ are $\mathbb{R}$-valued random variables with a joint distribution in $\mathcal{A}(\mathbb{R}^3)$. Then
\begin{multline*}
\DKL{(X,Y,Z)}{X\times Y\times Z} \\
\le \DKL{(X,Y,Z)}{X\times (Y,Z)} + \DKL{(X,Y,Z)}{(X,Y)\times Z}.
\end{multline*}
\end{lem}
\begin{proof}
The definition of Kullback--Leibler divergence gives
\begin{multline*}
\DKL{(X,Y,Z)}{X\times Y\times Z} \\
=\DKL{(X,Y,Z)}{X\times (Y,Z)} + \DKL{(Y,Z)}{Y\times Z}.
\end{multline*}
The second term on the right-hand side may be bounded from above, using~\eqref{eq:entropy_KL_relation} and Lemma~\ref{prop:basic_entropy_properties}~\ref{item:entropy-prop-4}, as follows:
\begin{align*}
\DKL{(Y,Z)}{Y\times Z} & = H(Z) - H(Z \mid Y) \le H(Z) - H(Z \mid X,Y) \\
& = \DKL{(X,Y,Z)}{(X,Y)\times Z}.\qedhere
\end{align*}
\end{proof}
\subsection{Relations between entropy and independence}
\label{sec:entr-total-vari}
As explained above, a key step in our proof of
Theorem~\ref{thm:volume_estimate} is showing that the individual
distances in a uniformly sampled metric space from $\mathcal{M}_n$ become
almost independent random variables after we condition on the values
of some small fraction of all $\binom{n}{2}$ distances. We shall
establish this almost independence property by bounding the
entropies of various vectors of distances in the random metric
space. The connection between almost independence and entropy will
be provided by the following lemma, relating the Kullback--Leibler divergence of two measures to the difference of their supports. The lemma will be used to bound from above the error term in the upper bound on entropy given by Theorem~\ref{thm:entropy_bound_for_almost_independent}, see Claim~\ref{claim:PXnotinM3}.
\begin{lem}\label{lem:Kullback_Leibler_and_support}
Let $\mu, \nu$ be probability measures
in $\mathcal{A}(\mathbb{R}^k)$. Then
\begin{equation}
\label{eq:mu-nu-supp-diff}
\DKL{\mu}{\nu}\ge \sup\{\nu(A) : A\subseteq\mathbb{R}^k\text{ Borel satisfying }
\mu(A) = 0\}.
\end{equation}
\end{lem}
\begin{proof}
Denote by $f(x)$ the density of $\mu$ and by $g(x)$ the density of $\nu$. Let $A\subseteq\mathbb{R}^k$ be a Borel subset and suppose that $\mu(A)
= 0$. Recalling that $\log(y)\ge 1 - \frac{1}{y}$ for all $y>0$ we
conclude that
\begin{align*}
\DKL{\mu}{\nu} &= \int \log\left( \frac{f(x)}{g(x)}\right)f(x)\,dx = \int_{A^c} \log\left(
\frac{f(x)}{g(x)}\right)f(x)\,dx\\
&\ge \int_{A^c} \left(1 -
\frac{g(x)}{f(x)}\right)f(x)\,dx \ge 1 - \nu(A^c) = \nu(A).\qedhere
\end{align*}
\end{proof}
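For a toy illustration of the lemma (not from the paper), let $\mu$ be uniform on $[0,1]$, let $\nu$ be uniform on $[0,2]$, and take $A=(1,2]$, so that $\mu(A)=0$. Then
\[
\DKL{\mu}{\nu}=\int_0^1\log\frac{1}{1/2}\,dx=\log 2\approx 0.693\ \ge\ \nu(A)=\tfrac12,
\]
the gap reflecting that the bound~\eqref{eq:mu-nu-supp-diff} need not be tight.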
Observe that the quantity in the right-hand side of~\eqref{eq:mu-nu-supp-diff} is a lower bound on the \emph{total variation distance} between $\mu$ and $\nu$. Therefore, it seems natural to relate it to the Kullback--Leibler divergence between $\mu$ and $\nu$ using Pinsker's inequality~\cite{Pin60}, which states that\footnote{Originally, Pinsker proved~\eqref{eq:Pinsker} with the multiplicative constant $2$ replaced by $1/(2\log 2)$. The version stated in~\eqref{eq:Pinsker} was obtained somewhat later by Csisz\'ar~\cite{csiszar_inequality1966}, Kemperman~\cite{Ke69}, and Kullback~\cite{Ku67}.}
\begin{equation}
\label{eq:Pinsker}
\DKL{\mu}{\nu} \ge 2 \left(\TV{\mu}{\nu}\right)^2.
\end{equation}
This, in fact, was done in our proof of an earlier, weaker version of Theorem~\ref{thm:volume_estimate}. While~\eqref{eq:Pinsker} is optimal for certain pairs of $\mu$ and $\nu$, in our setting, the more specialised Lemma~\ref{lem:Kullback_Leibler_and_support} yields much better dependence between the two quantities involved. Similar considerations are discussed in~\cite{EllFriKinYeh15}, which also uses a version of Lemma~\ref{lem:Kullback_Leibler_and_support} in place of Pinsker's inequality.
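To illustrate the gain, suppose that $\nu(A) = \delta$ for some Borel set $A$ with $\mu(A) = 0$. Then $\TV{\mu}{\nu} \ge \delta$, so Pinsker's inequality yields only the quadratic bound
\[
\DKL{\mu}{\nu} \ge 2\delta^2,
\]
whereas Lemma~\ref{lem:Kullback_Leibler_and_support} yields the linear bound $\DKL{\mu}{\nu} \ge \delta$, which is stronger whenever $\delta < \frac{1}{2}$.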
\section{Entropy-maximising product distributions}\label{sec:entropy-maximising product distributions}
The first part of this section is devoted to deriving an upper bound on the largest entropy
of a vector of independent random variables that is almost supported in a given compact, convex subset $\mathcal{P} \subseteq \mathbb{R}^d$.
It turns out that this largest entropy is close to the logarithm of the largest volume of a box that is fully contained in $\mathcal{P}$.
In a short, second part of the section, we compute this volume in the specific case that $\mathcal{P}$ is the (closure of the) 3-dimensional metric polytope $\mathcal{M}_3$.
The results of this section are a central ingredient in the proof of the volume upper bound in Theorem~\ref{thm:volume_estimate}.
\subsection{Entropy-maximising product distributions on convex sets}
The following theorem is the main result of this section.
\begin{thm}\label{thm:entropy_bound_for_almost_independent}
Let $M>0$ and let $\mathcal{P}\subseteq[-M,M]^d$, $d\ge 2$, be a closed, convex
set with non-empty interior. Let $V_0$ be the maximal volume of an axis-parallel box fully contained
in $\mathcal{P}$, that is,
\begin{equation}\label{eq:maximal volume of box in polytope}
V_0 := \max\left\{\prod_{i=1}^d (b_i - a_i) : [a_1,b_1]\times\cdots\times[a_d,b_d]\subseteq \mathcal{P}\right\}.
\end{equation}
There exists a finite $C = C(M,\mathcal{P})$ such that the following holds. Suppose
$X_1,\dotsc, X_d$ are \emph{independent} random variables with
bounded densities supported in $[-M,M]$. Then
\begin{equation*}
H(X_1, \dotsc, X_d) = \sum_{i=1}^d H(X_i)\le \log(V_0) + C\cdot\P\big((X_1,\dotsc,X_d)\notin\mathcal{P}\big)^{1/d}.
\end{equation*}
\end{thm}
Let us comment on the assumptions and the conclusion of the theorem. First,
since we will only use this result for a very specific $\mathcal{P}$ (the closure of the 3-dimensional metric polytope $\mathcal{M}_3$), we do not need the exact dependence of $C$ on $\mathcal{P}$, but let us nonetheless note that $C$ depends only on $d$, $M$, and $V_0$.
The assumption $d \ge 2$ is required for the conclusion. Indeed, when $d=1$, there exist examples where the error term has an additional logarithmic factor (see~\eqref{eq:poly theorem weaker bound} below for a complementary upper bound). To see this, consider $\mathcal{P} = [0,1] \subseteq [-2,2]$ and let $X_1$ be a random variable that, with probability $1-\varepsilon$, is uniform on $\mathcal{P}$ and, with probability $\varepsilon$, is uniform on $[1,2]$. Here, $\P(X_1 \notin \mathcal{P}) = \varepsilon$ whereas $H(X_1) = (1-\varepsilon)\log(1/(1-\varepsilon)) + \varepsilon \log(1/\varepsilon) \ge \log(V_0) + \varepsilon \log(1/\varepsilon)$.
Aside from the constant factor $C$, the dependence of the error term on $\P\big((X_1, \dotsc, X_d) \notin \mathcal{P}\big)$ is optimal. To see this, consider the simplex
\[
\mathcal{P} := \left\{(x_1, \dotsc, x_d) \in [0,d]^d : x_1 + \dotsb + x_d \le d\right\}\subseteq[-d,d]^d.
\]
The AM--GM inequality implies that $[0,1]^d$ is the largest box contained in $\mathcal{P}$ and thus $\log(V_0) = 0$. Let $X_1, \dotsc, X_d$ be i.i.d.\ random variables distributed uniformly on the interval $[0,1+\delta]$, for some $\delta \le 1/d$. On the one hand, we have
\[
\P\big((X_1, \dotsc, X_d) \notin \mathcal{P}\big) \le \P\big(\min_i |X_i| > 1-(d-1)\delta\big) \le (d \delta)^d.
\]
On the other hand,
\[
H(X_1, \dotsc, X_d) = d\cdot H(X_1) = d\log(1+\delta) \ge d\delta/2.
\]
Thus, $H(X_1, \dotsc, X_d) \ge \log(V_0) + \P\big((X_1, \dotsc, X_d) \notin \mathcal{P}\big)^{1/d}/2$.
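In the example above, the claim that $[0,1]^d$ is a box of maximal volume in $\mathcal{P}$ can be verified directly: any box $[a_1,b_1]\times\cdots\times[a_d,b_d]\subseteq\mathcal{P}$ satisfies $a_i \ge 0$ for all $i$ and $b_1+\dotsb+b_d \le d$, so the AM--GM inequality gives
\[
\prod_{i=1}^d (b_i-a_i) \le \prod_{i=1}^d b_i \le \left(\frac{b_1+\dotsb+b_d}{d}\right)^d \le 1 = \mathrm{Vol}\big([0,1]^d\big).
\]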
The first step in our proof of Theorem~\ref{thm:entropy_bound_for_almost_independent} is Lemma~\ref{lem:independent-box-poly}, below. The lemma supplies an axis-parallel box fully contained in $\mathcal{P}$ that supports most of the distribution of the vector $(X_1, \dotsc, X_d)$. The existence of such a box already implies an upper bound on $H(X_1, \dotsc, X_d)$ that differs from the one stated in Theorem~\ref{thm:entropy_bound_for_almost_independent} by an extra logarithmic factor in the error term (see~\eqref{eq:poly theorem weaker bound}). The proof of the lemma is short; following it, the bulk of the proof of Theorem~\ref{thm:entropy_bound_for_almost_independent} is devoted to removing this superfluous logarithmic term. (If one substitutes the bound~\eqref{eq:poly theorem weaker bound} for Theorem~\ref{thm:entropy_bound_for_almost_independent} in the argument presented in Section~\ref{sec:volume-upper-bound}, one obtains the following weaker version of the upper bound in Theorem~\ref{thm:volume_estimate}: $\mathrm{Vol}(\mathcal{M}_n)\le \exp\big(C(n\log n)^{3/2}\big)$.)
The second step in the proof of Theorem~\ref{thm:entropy_bound_for_almost_independent} is Proposition~\ref{prop:entropy_bound_for_almost_independent}, below. The proposition (combined with Lemma~\ref{lem:independent-box-poly}) may be regarded as a strengthening of the conclusion of the theorem. The extra information it provides will be used in our analysis of the minimum distance in a typical sample from the metric polytope (Theorem~\ref{thm:minimal_distance}).
\medskip
We start the proof of Theorem~\ref{thm:entropy_bound_for_almost_independent} with several definitions that we will use throughout.
Let $M>0$ and fix a closed convex set $\mathcal{P}\subseteq[-M,M]^d$ with non-empty interior. At this point the dimension is allowed to be any $d\ge 1$ but the restriction $d\ge 2$ will be placed in Proposition~\ref{prop:entropy_bound_for_almost_independent}. Write $V_0$ for the maximal volume of an axis-parallel box fully contained in $\mathcal{P}$, defined formally in~\eqref{eq:maximal volume of box in polytope}. Our assumptions on $\mathcal{P}$ imply that $V_0>0$. Let $X_1,\dotsc, X_d$ be \emph{independent} random variables with bounded densities supported in $[-M,M]$. Define
\begin{equation}\label{eq:eps def for poly}
\varepsilon := \P\big((X_1,\dotsc,X_d)\notin\mathcal{P}\big)^{1/d},
\end{equation}
so that our goal is to show that, for a finite $C = C(M,\mathcal{P})$,
\begin{equation}\label{eq:entropy_bound_goal}
H(X_1, \dotsc, X_d) = \sum_{i=1}^d H(X_i)\le \log(V_0) + C\varepsilon.
\end{equation}
We may (and will) assume without loss of generality that $\varepsilon\le \frac{1}{6}$, as otherwise the statement follows by taking $C=6(d \log(2M) -\log(V_0)) \ge 0$ (as $H(X_i) \le \log(2M)$ by Lemma~\ref{lem:entropy-compact-support} and $V_0 \le (2M)^d$).
For each $i \in \br{d}$, define the upper and lower $\varepsilon$-quantiles of the distribution of $X_i$,
\begin{equation}\label{eq:LQ UQ def}
\begin{split}
a_i & := \sup\{a \in \mathbb{R} : \P(X_i < a) \le \varepsilon\}, \\
b_i & := \inf\{b \in \mathbb{R} : \P(X_i > b) \le \varepsilon\},
\end{split}
\end{equation}
so that
\begin{equation}
\label{eq:X-quantiles}
\P\big(X_i \le a_i\big) = \P\big(X_i \ge b_i\big) = \varepsilon.
\end{equation}
In particular, the interval $[a_i,b_i]$ is nonempty by our assumption that $\varepsilon\le \frac{1}{6}$. Denote the volume of the box spanned by these intervals by
\begin{equation}\label{eq:V def}
V := \mathrm{Vol}\big([a_1,b_1] \times \dotsb \times [a_d,b_d]\big) = \prod_{i=1}^d (b_i-a_i).
\end{equation}
Finally, writing $f_i$ for the density of $X_i$, let
\begin{equation}\label{eq:entropy portion}
H(X_i;A) := -\int_A f_i(x)\log(f_i(x))dx
\end{equation}
be the contribution to the differential entropy of $X_i$ from the measurable set $A$.
Our first lemma shows that the box spanned by the intervals $([a_i, b_i])$ is fully contained in $\mathcal{P}$.
\begin{lem}
\label{lem:independent-box-poly}
In every dimension $d\ge 1$,
\begin{equation}\label{eq:box in polytope}
[a_1,b_1] \times \dotsb \times [a_d, b_d] \subseteq \mathcal{P}.
\end{equation}
In particular, $V\le V_0$.
\end{lem}
\begin{proof}
Suppose, to obtain a contradiction, that
\eqref{eq:box in polytope} fails. Hence, as $\mathcal{P}$ is closed, there exists $(x_1,
\dotsc, x_d)\notin \mathcal{P}$ with $a_i<x_i<b_i$ for all $i$. This implies, as $\mathcal{P}$ is convex, that there is a choice of signs $(s_1,\ldots, s_d)\in\{-1,1\}^d$ so that the orthant
\begin{equation*}
O := \big\{(y_1,\dotsc, y_d) : s_i(y_i - x_i)\ge 0 \text{ for all $i \in \br{d}$}\big\}
\end{equation*}
does not intersect $\mathcal{P}$. In particular,
\begin{align*}
\varepsilon^d & = \P\big((X_1,\dotsc,X_d)\notin\mathcal{P}\big)\ge \P\big((X_1,\dotsc,X_d)\in
O\big)\\
&\ge \prod_{i=1}^d \min\{\P(X_i \le x_i),\P(X_i\ge x_i)\}\\
&>\prod_{i=1}^d \min\{\P(X_i \le a_i),\P(X_i\ge b_i)\} =
\varepsilon^d,
\end{align*}
where the strict inequality uses that $a_i<x_i<b_i$ and the definition~\eqref{eq:LQ UQ def}.
This contradiction shows that~\eqref{eq:box in polytope} must in fact hold.
The volume statement is now deduced from the definition of $V_0$.
\end{proof}
We digress from the proof of Theorem~\ref{thm:entropy_bound_for_almost_independent} to note that Lemma~\ref{lem:independent-box-poly} implies the following version of the theorem, which holds in every dimension $d\ge 1$ but has an extra logarithmic factor in the error term,
\begin{equation}\label{eq:poly theorem weaker bound}
\sum_{i=1}^d H(X_i) \le \log(V_0) + C\varepsilon\log(2/\varepsilon)
\end{equation}
with $\varepsilon$ as in~\eqref{eq:eps def for poly} and $C = C(d,M,V_0)$ finite. The case $\varepsilon>\frac{1}{6}$ is handled directly as before. For $\varepsilon\le \frac{1}{6}$, note first that, for some $C' = C'(d, M)$,
\begin{equation}\label{eq:weaker upper bound on entropy}
\sum_{i=1}^d H(X_i) \le (1-2\varepsilon)\log(V) + C'\varepsilon\log(2/\varepsilon).
\end{equation}
Indeed, for each $i \in \br{d}$, by~\eqref{eq:X-quantiles} and Lemma~\ref{lem:entropy_bound_for_sub_probability},
\begin{equation*}
\begin{split}
H(X_i) & = H(X_i;[-M,M]\setminus[a_i,b_i]) + H(X_i;[a_i,b_i])\\
& \le 2\varepsilon \log\left(\frac{2M-(b_i-a_i)}{2\varepsilon}\right) +
(1-2\varepsilon)\log\left(\frac{b_i-a_i}{1-2\varepsilon}\right)\\
& \le (1-2\varepsilon)\log(b_i-a_i) + 2\varepsilon\log\left(\frac{M}{\varepsilon}\right) +
(1-2\varepsilon)\log\left(\frac{1}{1-2\varepsilon}\right)
\end{split}
\end{equation*}
and the bound~\eqref{eq:weaker upper bound on entropy} follows by summing this estimate over all $i$. To deduce~\eqref{eq:poly theorem weaker bound}, replace $V$ by $V_0$ using Lemma~\ref{lem:independent-box-poly} and absorb the resulting term $2\varepsilon|\log(V_0)|$ into the error term.
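Explicitly, the replacement of $V$ by $V_0$ in the last step uses
\[
(1-2\varepsilon)\log(V) \le (1-2\varepsilon)\log(V_0) \le \log(V_0) + 2\varepsilon\,\big|\log(V_0)\big|,
\]
where the first inequality holds by Lemma~\ref{lem:independent-box-poly} and the final term is at most $C\varepsilon$ for a suitable $C = C(d, M, V_0)$.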
We return to the proof of Theorem~\ref{thm:entropy_bound_for_almost_independent} and will show the following key proposition.
\begin{prop}\label{prop:entropy_bound_for_almost_independent}
In dimensions $d\ge 2$, there exists a finite $C = C(M,\mathcal{P})$ such that
\begin{equation*}
\sum_{i=1}^d H(X_i) \le \frac{1}{2}\left(\log(V_0) + \sum_{i=1}^d H(X_i; [a_i, b_i])\right) + C\varepsilon.
\end{equation*}
\end{prop}
It will be convenient to denote by $c$ and $C$ finite positive constants which depend only on $d$, $M$, and $V_0$. These constants, and their numbered versions, may change from line to line.
Let us see how Proposition~\ref{prop:entropy_bound_for_almost_independent} and Lemma~\ref{lem:independent-box-poly} imply Theorem~\ref{thm:entropy_bound_for_almost_independent}. Combining the proposition with Lemma~\ref{lem:entropy_bound_for_sub_probability}, recalling~\eqref{eq:X-quantiles},~\eqref{eq:V def}, and our assumption that $\varepsilon\le\frac{1}{6}$, and applying Lemma~\ref{lem:independent-box-poly} we have
\begin{equation*}
\begin{split}
H(X_1, \dotsc, X_d) &\le \frac{1}{2}\left(\log(V_0) + (1-2\varepsilon)\sum_{i=1}^d \log\left(\frac{b_i-a_i}{1-2\varepsilon}\right)\right) + C\varepsilon\\
&\le \frac{1}{2}\big(\log(V_0) + (1-2\varepsilon) \log(V)\big) + C\varepsilon\le \log(V_0) + C\varepsilon.
\end{split}
\end{equation*}
\begin{proof}[Proof of Proposition~\ref{prop:entropy_bound_for_almost_independent}]
There are three constants in the proof that deserve their own letters, $\beta$, $\mu$, and $K$, also depending only on $d$, $M$, and $V_0$. We will not specify these explicitly, and only point out here that we first choose $\beta$ (small), then $\mu$ (even smaller), and then $K$ (very large). In symbols (treating $d$, $M$, and $V_0$ as constants),
\[
K^{-1} \ll \mu \ll \beta \ll 1.
\]
Let us introduce the following quantity,
\[
t:=\sup\left\{s > 0:\exists i \; \max\left\{\P(X_i\le a_i-s), \P(X_i\ge b_i+s)\right\} \ge \frac{K\varepsilon^2}{s}\right\}
\]
where we set $t = 0$ if the above set is empty. As each $X_i$ is supported on $[-M,M]$, we have $t \le 2M$. On the other hand, by the definition of $a_i$ and $b_i$, see~\eqref{eq:X-quantiles}, either $t = 0$ or $t \ge K\varepsilon$.
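To spell out why $t > 0$ forces $t \ge K\varepsilon$: by the definition of $t$, there are $s$ arbitrarily close to $t$ from below for which some index $i$ satisfies
\[
\frac{K\varepsilon^2}{s} \le \max\left\{\P(X_i\le a_i-s), \P(X_i\ge b_i+s)\right\} \le \max\left\{\P(X_i\le a_i), \P(X_i\ge b_i)\right\} = \varepsilon,
\]
so every such $s$ satisfies $s \ge K\varepsilon$, and letting $s \to t$ gives $t \ge K\varepsilon$.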
The proposition is obtained by summing the following inequalities:
\begin{multline}
\label{eq:X_i entropy on sides}
M_t := \sum_{i=1}^d H(X_i;[a_i-t, a_i]\cup[b_i, b_i+t]) + \frac{1}{2}H(X_i;[a_i,b_i]) \\
\le \frac{1}{2}\log(V_0) + C\varepsilon
\end{multline}
and, for each $i\in\br{d}$,
\begin{equation}\label{eq:X_i entropy on tail}
H(X_i; \mathbb{R}\setminus[a_i - t, b_i + t])\le C\varepsilon.
\end{equation}
We first prove~\eqref{eq:X_i entropy on tail}. For each $k \ge 1$, let $\lambda_k$ be the Lebesgue measure of the set
\[
\{x \notin [a_i-t,b_i+t] : e^{-k} \le f_i(x) \le e^{-k+1}\}
\]
and note that, as $-y\log y<0$ for $y>1$,
\[
H(X_i; \mathbb{R}\setminus[a_i - t, b_i + t]) \le \sum_{k=1}^\infty ke^{-k+1} \lambda_k.
\]
For every $k \ge 1$,
\[
\begin{split}
\lambda_k & \le 2\varepsilon k^3 + e^k \cdot \left(\P(X_i \le a_i - t - \varepsilon k^3) + \P(X_i \ge b_i+t+\varepsilon k^3)\right) \\
& \stackrel{(*)}{\le} 2 \varepsilon k^3 + e^k \cdot \frac{2K\varepsilon^2}{t + \varepsilon k^3} \le \left(2k^3 + \frac{2Ke^k}{k^3}\right) \cdot \varepsilon
\end{split}
\]
where $(*)$ follows from the definition of $t$. Hence
\[
\sum_{k=1}^\infty ke^{-k+1}\lambda_k \le \left(\sum_{k=1}^\infty \frac{2k^4}{e^{k-1}} + \sum_{k=1}^\infty \frac{2eK}{k^2}\right) \cdot \varepsilon \le C\varepsilon,
\]
which proves~\eqref{eq:X_i entropy on tail}.
It remains to argue that~\eqref{eq:X_i entropy on sides} holds as well. For this we apply Lemma~\ref{lem:entropy_bound_for_sub_probability} and get for each $i$ (using also $\log y\le y-1$ for $y>0$),
\begin{equation}
\label{eq:I-ai-bi}
\begin{split}
H(X_i;[a_i,b_i]) & \le (1-2\varepsilon)\log\left(\frac{b_i-a_i}{1-2\varepsilon}\right) \\
& \le (1-2\varepsilon)\log(b_i-a_i) + 2\varepsilon
\end{split}
\end{equation}
and, if $t > 0$,
\begin{equation}
\label{eq:I-ai-t-bi-t}
H(X_i;[a_i-t,a_i]\cup[b_i, b_i+t]) \le 2\varepsilon\log\left(\frac{t}{\varepsilon}\right),
\end{equation}
where we used the fact that $t \ge K\varepsilon \ge e\varepsilon$, which implies that the function $\delta \mapsto \delta \log(t/\delta)$ is increasing for $\delta \in [0,\varepsilon]$. We split the remainder of the argument into two cases, depending on how close $V$, the volume of $[a_1, b_1] \times \dotsb \times [a_d, b_d]$, is to $V_0$.
\subsubsection*{Case 1.}
We first assume that
\begin{equation}
\label{eq:cube-small}
\log(V) \le \log(V_0) - \beta t.
\end{equation}
In this case, summing~\eqref{eq:I-ai-t-bi-t} and half of~\eqref{eq:I-ai-bi} over all $i$ gives
\begin{align*}
M_t & \le \frac{1}{2}(1-2\varepsilon) \sum_{i=1}^d \log(b_i-a_i) + d\varepsilon + 2d\varepsilon\log\left(\frac{t}{\varepsilon}\right) \cdot \mathbbm{1}_{\{t > 0\}}\\
& \le \frac{1}{2}(1-2\varepsilon)\left(\log(V_0) - \beta t\right) + d\varepsilon + 2d\varepsilon\log\left(\frac{t}{\varepsilon}\right) \cdot \mathbbm{1}_{\{t > 0\}}\\
& \le \frac{1}{2}\log(V_0) + \left(d + 2d\log\left(\frac{t}{\varepsilon}\right) \cdot \mathbbm{1}_{\{t > 0\}} - \frac{\beta}{4} \cdot \frac{t}{\varepsilon} - \log(V_0)\right) \cdot \varepsilon.
\end{align*}
The claimed estimate~\eqref{eq:X_i entropy on sides} now follows as, for every $c > 0$, the function $y \mapsto \log(y) - cy$ is bounded from above.
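For concreteness, the elementary fact used in this last step is that, for every $c > 0$, differentiation gives
\[
\max_{y>0}\big(\log(y) - cy\big) = -\log(c) - 1,
\]
attained at $y = 1/c$; applying this with $y = t/\varepsilon$ bounds the bracketed expression by a constant depending only on $d$, $\beta$, and $V_0$.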
\subsubsection*{Case 2.}
Assume now that~\eqref{eq:cube-small} does not hold. This means, in particular, that $t > 0$ (since $V \le V_0$ by Lemma~\ref{lem:independent-box-poly}). Since our variables are continuous,
\[
\max\left\{\P(X_i\le a_i-t), \P(X_i\ge b_i+t)\right\} = \frac{K\varepsilon^2}{t}
\]
for some index $i$. By permuting and reflecting the coordinates, if necessary, we may assume that
\begin{equation}\label{eq:def t}
\P(X_1\le a_1-t)=K\varepsilon^2/t.
\end{equation}
We claim that
\begin{equation}
\label{eq:box-modified-not-in-poly}
[a_1- t,b_1]\times\prod_{i=2}^d[a_i+\mu t,b_i-\mu t]\nsubseteq\mathcal{P}.
\end{equation}
Indeed, if this were not true, then
\begin{equation}
\label{eq:case-2-vol-V-modified}
\log(V_0) - \log(V) \ge \log\left(\frac{b_1-a_1+t}{b_1-a_1}\right) +\sum_{i=2}^d\log\left(\frac{b_i-a_i-2\mu t}{b_i-a_i}\right).
\end{equation}
Since~\eqref{eq:cube-small} does not hold, we have
\[
\min_i (b_i-a_i) \ge \frac{V}{(2M)^{d-1}} \ge \frac{e^{-\beta t} \cdot V_0}{(2M)^{d-1}} \ge \frac{V_0}{(2M)^d},
\]
where the last inequality holds as $t \le 2M$ and $\beta$ is small. It follows that the first term in the right-hand side of~\eqref{eq:case-2-vol-V-modified} is at least $c_1t$, for some positive constant $c_1=c_1(d, M, V_0)$ (independent of $\beta$ as long as $\beta$ is small), and, if $\mu$ is sufficiently small, each of the $d-1$ summands is at least $-C_1\mu t$, for some positive constant $C_1 = C_1(d, M, V_0)$. In particular, if $\beta$ and $\mu$ are sufficiently small, then the right-hand side of~\eqref{eq:case-2-vol-V-modified} is larger than $\beta t$, contradicting our assumption.
Let $x$ be some point demonstrating~\eqref{eq:box-modified-not-in-poly}, namely
\[
x\in \left([a_1- t,b_1]\times\prod_{i=2}^d[a_i+\mu t,b_i-\mu t]\right)\setminus\mathcal{P},
\]
and notice that $x_1\in[a_1-t,a_1)$, as $\prod_i [a_i,b_i] \subseteq \mathcal{P}$ by Lemma~\ref{lem:independent-box-poly}. Since $\mathcal{P}$ is convex there is a hyperplane separating $x$ from $\mathcal{P}$, that is, a vector $v$ such that
\begin{equation}\label{eq:separating}
\forall y \in \mathcal{P} \quad \langle v, x \rangle < \langle v, y \rangle.
\end{equation}
First, let us apply \eqref{eq:separating} to $y=(a_1,x_2,\dotsc,x_d)$, which we may since $x_i\in[a_i,b_i]$ for all $i\ge 2$. We get $v_1(a_1-x_1)> 0$, so $v_1>0$ and we may normalise $v$ to assume $v_1=1$. Moreover, by permuting the coordinates, if necessary, we may assume that $v_i \ge 0$ for $i \in \{2, \dotsc, j\}$ and $v_i < 0$ for $i \in \{j+1, \dotsc, d\}$. We may also assume that $j \ge 2$, the complementary case $j+1 = 2$ being essentially identical. (Note that we do use the assumption that $d \ge 2$ here.) These assumptions imply that
\[
(-\infty, a_1-t] \times (-\infty,a_2 + \mu t] \times \prod_{i = 3}^j (-\infty, a_i] \times \prod_{i = j+1}^d [b_i, \infty)
\]
is disjoint from $\mathcal{P}$. Indeed, if $z$ is an arbitrary point in this orthant, we have $v_iz_i \le v_ix_i$ for each $i$ and hence $\langle v, z \rangle < \langle v, y \rangle$ for all $y\in\mathcal{P}$. It follows that
\begin{multline*}
\P(X_1 \le a_1-t) \cdot \P(a_2 \le X_2 \le a_2 + \mu t) \\
\cdot \prod_{i=3}^j \P(X_i \le a_i) \cdot \prod_{i=j+1}^d \P(X_i \ge b_i) \le \varepsilon^d,
\end{multline*}
and therefore, as $\P(X_i \le a_i) = \P(X_i \ge b_i) = \varepsilon$,
\[
\P(a_2 \le X_2 \le a_2+\mu t) \le \frac{\varepsilon^2}{\P(X_1 \le a_1-t)}
\stackrel{\textrm{(\ref{eq:def t})}}{=}
\frac{t}{K}.
\]
Thus we have shown some inhomogeneity in the distribution of $X_2$. If we require $K>10(b_2-a_2)/(\mu(1-2\varepsilon))$, then this inhomogeneity is strong enough to apply \eqref{eq:entropy-bound-non-uniform-sub-probability} in Lemma~\ref{lem:entropy_bound_for_sub_probability}, so let us make this requirement. We get
\begin{align}
H(X_2;[a_2,b_2]) & \le (1-2\varepsilon)\log\left(\frac{b_2-a_2}{1-2\varepsilon}\right) - \frac{1-2\varepsilon}{4} \cdot \frac{\mu t}{b_2-a_2} \nonumber\\
& \le (1-2\varepsilon)\log(b_2-a_2) + 2\varepsilon - \frac{\mu}{20M} \cdot t. \label{eq:int-a2-b2}
\end{align}
Summing~\eqref{eq:I-ai-t-bi-t} and half of~\eqref{eq:I-ai-bi} over all $i$, as in Case 1, but using the improved estimate~\eqref{eq:int-a2-b2} in place of~\eqref{eq:I-ai-bi} when $i=2$ yields
\[
M_t \le \frac{1}{2}(1-2\varepsilon) \log(V) + d\varepsilon + 2d\varepsilon\log\left(\frac{t}{\varepsilon}\right) - \frac{\mu}{40M} \cdot t.
\]
A calculation similar to the one done in Case 1, using that $V\le V_0$ by Lemma~\ref{lem:independent-box-poly}, yields the upper bound~\eqref{eq:X_i entropy on sides} on $M_t$. This completes the proof of the proposition (and hence also of Theorem~\ref{thm:entropy_bound_for_almost_independent}).
\end{proof}
\subsection{The largest box in \texorpdfstring{$\mathcal{M}_3$}{M3}}
Theorem~\ref{thm:entropy_bound_for_almost_independent} will be applied to the (closure of the) 3-dimensional metric polytope. To this end, we study here the largest box contained in $\mathcal{M}_3$.
\begin{lem}\label{lem:independent_max_volume}
Suppose that $P$ is an axis-parallel box contained in the closure of the metric polytope $\mathcal{M}_3$, that is,
\[
P = [a_1,b_1] \times [a_2,b_2] \times [a_3,b_3] \subseteq \overline{\mathcal{M}_3} \subseteq \mathbb{R}^{\binom{3}{2}} \cong \mathbb{R}^3.
\]
Then
\begin{equation}\label{eq:vol_P_lemma_estimate}
\mathrm{Vol}(P) \le 1
\end{equation}
and equality holds if and only if
$[a_i,b_i]=[1,2]$ for each $i \in \{1,2,3\}$.
Furthermore, for some absolute constant $C$,
\[
\sum_{i=1}^3 \big( |a_i - 1| + |b_i - 2| \big) \le C(1-\mathrm{Vol}(P)).\]
\end{lem}
The furthermore clause is not required for our analysis of the volume of the metric polytope (Theorem~\ref{thm:volume_estimate}) but will be used in analysing the typical minimum distance (Theorem~\ref{thm:minimal_distance}).
The following proof was suggested to us by Shoni Gilboa; it replaced our previous, less transparent argument.
\begin{proof}[Proof of Lemma~\ref{lem:independent_max_volume}]
Our assumption that $P \subseteq \overline{\mathcal{M}_3}$ implies that $b_1 \le a_2 + a_3$ and, similarly, $b_2 \le a_1 + a_3$ and $b_3 \le a_1 + a_2$. Summing these three inequalities yields
\begin{equation}
\label{eq:three-triangles}
b_1+b_2+b_3 \le 2(a_1+a_2+a_3).
\end{equation}
Consequently, by the AM--GM inequality,
\begin{equation}
\label{eq:AMGM}
\mathrm{Vol}(P) = \prod_{i=1}^3 (b_i-a_i) \le \left(\sum_{i=1}^3\frac{b_i-a_i}{3}\right)^3 \le \left(\frac{b_1+b_2+b_3}{6}\right)^3 \le 1,
\end{equation}
where the second inequality is precisely~\eqref{eq:three-triangles} and the last inequality holds by our assumption that $P \subseteq \overline{\mathcal{M}_3} \subseteq [0,2]^3$, which ensures that $b_1, b_2, b_3 \le 2$.
For the second and third assertions of the lemma, suppose that $\mathrm{Vol}(P) \ge 1-\varepsilon$ for some $\varepsilon \ge 0$. Inequality~\eqref{eq:AMGM}
implies that
\begin{equation}
\label{eq:AMGM-stability}
\frac{b_1+b_2+b_3}{2} \ge \sum_{i=1}^3 (b_i-a_i) \ge 3(1-\varepsilon)^{1/3} \ge 3(1-\varepsilon)
\end{equation}
and, consequently, as $\max_i b_i \le 2$, that $\min_i b_i \ge 2 - 6\varepsilon$. Moreover, summing the inequalities $a_1+a_2 \ge b_3$ and $a_1 + a_3 \ge b_2$, we obtain
\[
a_1 \ge b_2+b_3 - (a_1+a_2+a_3) = \sum_{i=1}^3 (b_i-a_i) - b_1 \ge 1 - 3\varepsilon,
\]
where the last inequality follows from~\eqref{eq:AMGM-stability} and the assumption $b_1 \le 2$. By symmetry, $\min_i a_i \ge 1-3\varepsilon$.
Using again~\eqref{eq:AMGM-stability}, we have
\[
3(1-\varepsilon) \le \sum_{i=1}^3 (b_i-a_i) \le 6 - 2\min_i a_i -a_1,
\]
giving $a_1 \le 1+9\varepsilon$. By symmetry, $\max_i a_i \le 1+9\varepsilon$. Hence
\[
\sum_{i=1}^3\big(|a_i-1|+|b_i-2|\big)\le\sum_{i=1}^3\big( 9\varepsilon+6\varepsilon \big)=45\varepsilon,
\]
as needed.
\end{proof}
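Note that the extremal box is indeed admissible: for any $(x_1,x_2,x_3) \in [1,2]^3$ and any labelling $\{i,j,k\} = \{1,2,3\}$,
\[
x_i \le 2 = 1 + 1 \le x_j + x_k,
\]
so all three triangle inequalities hold and $[1,2]^3 \subseteq \overline{\mathcal{M}_3}$, confirming the equality case $\mathrm{Vol}([1,2]^3) = 1$ in~\eqref{eq:vol_P_lemma_estimate}.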
\section{Estimating the volume of the metric polytope}
In this section, we prove Proposition~\ref{prop:decreasing_radius} and Theorem~\ref{thm:volume_estimate}. The proof of Proposition~\ref{prop:decreasing_radius} is a short application of Shearer's inequality (Theorem~\ref{thm:Shearers_inequality}) and is presented in Section~\ref{sec:monotonicity}. The lower bound on the volume of $\mathcal{M}_n$ is obtained via the Local Lemma of Erd\H{o}s and Lov\'asz~\cite{ErLo75}; it is presented in Section~\ref{sec:lower}. The proof of the upper bound on the volume, whose outline is given in the introduction, is presented in Section~\ref{sec:volume-upper-bound}.
\subsection{Monotonicity}
\label{sec:monotonicity}
In this section, we prove Proposition~\ref{prop:decreasing_radius}, which states that the sequence $n \mapsto \mathrm{Vol}(\mathcal{M}_n)^{1/\binom{n}{2}}$ is non-increasing.
We will show that this fact is a simple consequence of Shearer's inequality
(Theorem~\ref{thm:Shearers_inequality}). Alternatively, one can derive
it from a generalisation of the Loomis--Whitney inequality~\cite{LoWh49} due to
Bollob\'as and Thomason~\cite{BoTh95}.
\begin{proof}[{Proof of Proposition~\ref{prop:decreasing_radius}}]
Let $n\ge 2$ and let $d$ be a uniformly sampled metric space in $\mathcal{M}_{n+1}$.
By Lemma~\ref{lem:entropy-compact-support},
\begin{equation}\label{eq:entropy_as_volume_of_M_n+1}
H(d) = \log\left(\mathrm{Vol}(\mathcal{M}_{n+1})\right).
\end{equation}
For each $i \in \br{n+1}$, let $J_i := \br{n+1}\setminus\{i\}$ and let
$I_i$ be the set of all unordered pairs of distinct elements in
$J_i$. Observe that, for each $\{j,k\}\in\binom{\br{n+1}}{2}$, we have
\[
|\{i \in \br{n+1} : \{j,k\}\in I_i\}| = n-1.
\]
Since $d$ may be naturally viewed as a random vector of distances, Shearer's inequality
(Theorem~\ref{thm:Shearers_inequality}) implies that
\begin{equation}\label{eq:Shearers_inequality_for_metric_space}
H(d)\le \frac{1}{n-1}\sum_{i=1}^{n+1}H\big(\{\dist{j}{k}: \{j,k\}\in I_i\}\big).
\end{equation}
Finally, observe that for each $i \in \br{n+1}$, the restriction of
$d$ to the pairs in $I_i$ is a metric space on
$n$ points that belongs to $\mathcal{M}_n$. Thus, by
Lemma~\ref{lem:entropy-compact-support},
\begin{equation}\label{eq:entropy_at_most_volume_for_M_n}
H\big(\{\dist{j}{k}: \{j,k\}\in I_i\}\big) \le \log(\mathrm{Vol}(\mathcal{M}_n)).
\end{equation}
Putting together \eqref{eq:entropy_as_volume_of_M_n+1},
\eqref{eq:Shearers_inequality_for_metric_space}, and
\eqref{eq:entropy_at_most_volume_for_M_n}, we conclude that
\[
\log(\mathrm{Vol}(\mathcal{M}_{n+1})) \le \frac{n+1}{n-1}\log(\mathrm{Vol}(\mathcal{M}_n)).
\]
Since this inequality holds for any $n \ge 2$ and $(n+1)/(n-1) = \binom{n+1}{2} / \binom{n}{2}$, we conclude that the
sequence $n \mapsto \mathrm{Vol}(\mathcal{M}_n)^{1/\binom{n}{2}}$ is non-increasing, as claimed.
\end{proof}
\subsection{Lower bound}
\label{sec:lower}
In this section, we prove the lower bound on $\mathrm{Vol}(\mathcal{M}_n)$ from Theorem~\ref{thm:volume_estimate}, namely that $\mathrm{Vol}(\mathcal{M}_n)\ge \exp((\nicefrac{1}{6}+o(1))n^{3/2})$. The proof below, which uses the Local Lemma of Erd\H{o}s and Lov\'asz~\cite{ErLo75}, is due to Dor Elboim (our original argument, based on Harris's inequality~\cite{harris1960lower}, gave the constant $\nicefrac{1}{24}$ instead of $\nicefrac{1}{6}$).
We may and will assume that $n$ is sufficiently large. Let $\delta = 1/(2\sqrt{n})$ and let $(\dist{i}{j})$, $\{i,j\}\in\binom{\br{n}}{2}$, be an array
of independent and identically distributed random variables, each uniform on the interval $\left[1 - \delta, 2\right]$.
Define the event
\[
G:=\{(\dist{i}{j})\in\mathcal{M}_n\}.
\]
Observe that, by definition,
\begin{equation}\label{eq:vol_M_n_G_bound}
\mathrm{Vol}(\mathcal{M}_n) \ge \left(1 + \delta \right)^{\binom{n}{2}}\P(G) = \left(1 + \frac{1}{2\sqrt{n}} \right)^{\binom{n}{2}}\P(G).
\end{equation}
We shall derive a lower bound on $\P(G)$ from the Local Lemma.
\begin{lem}[{\cite[Lemma~5.1.1]{AlSp}}]
\label{lem:local-lemma}
Let $B_v$, $v \in V$, be events in an arbitrary probability space. Suppose that there is an integer $k$ such that, for each $v \in V$, there is a set $D_v \subseteq V \setminus \{v\}$ with at most $k$ elements such that $B_v$ is mutually independent of all the events $B_w$ with $w \in V \setminus (D_v \cup \{v\})$. If a real $p \in [0,1]$ satisfies $\P(B_v) \le p (1-p)^k$ for each $v \in V$, then
\[
\P\bigg(\bigcap_{v \in V} B_v^c\bigg) \ge (1-p)^{|V|}.
\]
\end{lem}
For a triple of distinct indices $\{i,j,k\} \in \binom{\br{n}}{3}$, let $B_{ijk}$ denote the event that $(\dist{i}{j}, \dist{i}{k}, \dist{j}{k})$ is
not in $\mathcal{M}_3$, that is, one of the three triangle inequalities is violated. Observe that
\[
\begin{split}
\P(B_{ijk}) & = 3 \P(d_{ij} + d_{jk} < d_{ik}) = \frac{3}{1+\delta} \int_0^{2\delta} \P(d_{ij} + d_{jk} < 2 - x) \, dx \\
& = \frac{3}{(1+\delta)^3} \int_0^{2\delta} \frac{(2\delta - x)^2}{2} \, dx = \frac{4\delta^3}{(1+\delta)^3}.
\end{split}
\]
Since the event $B_{ijk}$ is mutually independent of all events $B_{i'j'k'}$ such that $|\{i,j,k\} \cap \{i',j',k'\}| \le 1$, and since only $3(n-3)$ triples $\{i',j',k'\} \neq \{i,j,k\}$ share at least two indices with $\{i,j,k\}$ (three choices for the shared pair and $n-3$ for the remaining index), we may invoke Lemma~\ref{lem:local-lemma} with $k = 3(n-3)$ to conclude that, for every $p$ satisfying
\begin{equation}
\label{eq:local-lemma-condition}
\frac{4\delta^3}{(1+\delta)^3} \le p(1-p)^{3(n-3)},
\end{equation}
we have
\begin{equation}
\label{eq:local-lemma-conclusion}
\P(G) = \P\left(\bigcap_{\smash{\{i,j,k\} \in \binom{\br{n}}{3}}} B_{ijk}^c\right) \ge (1-p)^{\binom{n}{3}}.
\end{equation}
It is easy to see that if $p = an^{-3/2}$ for some constant $a > 1/2$, then~\eqref{eq:local-lemma-condition} is satisfied for all sufficiently large $n$.
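Indeed, with $\delta = 1/(2\sqrt{n})$ we have $4\delta^3/(1+\delta)^3 \le 4\delta^3 = \tfrac{1}{2}n^{-3/2}$, whereas
\[
p(1-p)^{3(n-3)} = an^{-3/2}\,e^{3(n-3)\log(1-an^{-3/2})} = (a-o(1))\,n^{-3/2},
\]
since $3(n-3)\log(1-an^{-3/2}) = -(3a+o(1))n^{-1/2} \to 0$; this exceeds $\tfrac{1}{2}n^{-3/2}$ for all large $n$ whenever $a > 1/2$.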
In particular, \eqref{eq:vol_M_n_G_bound} and~\eqref{eq:local-lemma-conclusion} imply that, for each $a > 1/2$,
\begin{align*}
\mathrm{Vol}(\mathcal{M}_n) &\ge \left(1 + \frac{1}{2\sqrt{n}} \right)^{\binom{n}{2}} \left(1 - \frac{a}{n^{3/2}}\right)^{\binom{n}{3}} \\
& = \exp\left(\left(\frac{1}{4} - \frac{a}{6} + o(1)\right) n^{3/2}\right).
\end{align*}
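Here the final equality uses $\binom{n}{2}\log\big(1+\tfrac{1}{2\sqrt{n}}\big) = \big(\tfrac{1}{4}+o(1)\big)n^{3/2}$ and $\binom{n}{3}\log\big(1-\tfrac{a}{n^{3/2}}\big) = -\big(\tfrac{a}{6}+o(1)\big)n^{3/2}$, both consequences of the expansion $\log(1+x) = x + O(x^2)$ as $x \to 0$.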
Since $a$ was an arbitrary constant greater than $1/2$, this yields the lower bound in~\eqref{eq:main_volume_estimates}.\qed
\subsection{Upper bound}
\label{sec:volume-upper-bound}
In this section, we deduce the upper bound on $\mathrm{Vol}(\mathcal{M}_n)$ stated in
Theorem~\ref{thm:volume_estimate} from the entropy estimate of Theorem~\ref{thm:entropy_bound_for_almost_independent}.
Let $n \ge 3$ and let $d$ be a uniformly sampled metric space in $\mathcal{M}_n$, which we view as a vector in $\mathbb{R}^{\binom{\br{n}}{2}}$. By Lemma~\ref{lem:entropy-compact-support},
\[
\log(\mathrm{Vol}(\mathcal{M}_n)) = H(d).
\]
For each $m \in \{0, \dotsc, n-1\}$, let us denote by $F_m$ the set of all pairs $ij \in
\binom{\br{n}}{2}$ with $\max\{i,j\} > n-m$, that is,
\begin{equation}\label{eq:F_m_def}
F_m := \binom{\br{n}}{2} \setminus \binom{\br{n-m}}{2}
\end{equation}
and set, for $m \le n-2$,
\begin{equation}\label{eq:h_m def}
h_m := H(d_{12} \mid (d_e)_{e \in F_m}),
\end{equation}
so that $h_0=H(d_{12})$, as $F_0 = \emptyset$. Observe that, by symmetry, $h_m = H(\dist{i}{j} \mid (d_e)_{e \in F_m})$
for every $ij \in \binom{\br{n-m}}{2}$. Since $d_{12}\in[0,2]$, Lemma~\ref{lem:entropy-compact-support} gives $h_0 \le \log 2$.
Since $F_m \subseteq F_{m+1}$ for every $m$, it follows from Lemma~\ref{prop:basic_entropy_properties}~\ref{item:entropy-prop-4} that
\begin{equation}\label{eq:conditional_entropy_monotonicity}
h_{n-2} \le \cdots \le h_1 \le h_0 \le \log 2.
\end{equation}
Additionally, as $F_0 = \emptyset$ and $F_{n-1}= \binom{\br{n}}{2}$, Lemma~\ref{prop:basic_entropy_properties}~\ref{item:entropy-prop-1} and~\ref{item:entropy-prop-3} give
\begin{align}
H\left((d_e)_{e \in \binom{\br{n}}{2}}\right) & = \sum_{m = 0}^{n-2} H\left((d_e)_{e \in F_{m+1} \setminus F_m} \mid (d_f)_{f \in F_m}\right) \nonumber\\
& \le \sum_{m = 0}^{n-2} \sum_{e \in F_{m+1} \setminus F_m} H\left(d_e \mid (d_f)_{f \in F_m}\right) \nonumber\\
& = \sum_{m = 0}^{n-2} |F_{m+1} \setminus F_m| \cdot h_m
= \sum_{m=0}^{n-2} (n-m-1) \cdot h_m. \label{eq:volume_decomposition}
\end{align}
In particular, it suffices to prove the following estimate.
\begin{lem}
\label{lem:h_m_bound}
There exists a $K > 0$ such that, for all $m \in \{0, \dotsc, n-2\}$,
\begin{equation*}
h_m\le \frac{K}{\sqrt{m+1}}.
\end{equation*}
\end{lem}
Indeed, substituting this bound into \eqref{eq:volume_decomposition} gives
\begin{equation*}
\log(\mathrm{Vol}(\mathcal{M}_n)) = H(d) \le K \sum_{m=1}^{n-1}\frac{n-m}{\sqrt{m}} \le C n^{3/2}
\end{equation*}
for some absolute constant $C>0$, establishing the theorem.
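For instance, one may take $C = 2K$, since
\[
\sum_{m=1}^{n-1}\frac{n-m}{\sqrt{m}} \le n\sum_{m=1}^{n-1}\frac{1}{\sqrt{m}} \le n\int_0^{n}\frac{dx}{\sqrt{x}} = 2n^{3/2}.
\]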
Lemma~\ref{lem:h_m_bound} is a fairly simple consequence of the monotonicity of the sequence $(h_m)$ and the following estimate, which lies at the heart of the matter.
\begin{lem}
\label{lem:h_m_bound_with_difference}
There exists a $C>0$ such that, for all $m \in \{0, \dotsc, n-3\}$,
\begin{equation*}
h_m\le C\left(h_m - h_{m+1}\right)^{1/3}.
\end{equation*}
\end{lem}
\begin{proof}
Fix an $m \in \{0, \dotsc, n-3\}$. Since $\{1,n-m\}, \{2,n-m\} \in F_{m+1} \setminus F_m$, Lemma~\ref{prop:basic_entropy_properties}~\ref{item:entropy-prop-4} implies that
\[
h_{m+1} \le H(d_{12} \mid (d_e)_{e \in F_m}, d_{1,n-m}, d_{2,n-m}) \le h_m.
\]
By symmetry, we may replace $n-m$ in the above inequality with any element of $\{3, \dotsc, n-m\}$. In particular,
\begin{equation*}
H(d_{12} \mid (d_e)_{e \in F_m}, d_{13}, d_{23}) \ge h_{m+1}
\end{equation*}
and thus,
\begin{equation}
\label{eq:entropy-bound-hm}
H(d_{12} \mid (d_e)_{e \in F_m}) - H(d_{12} \mid (d_e)_{e \in F_m}, d_{13},
d_{23}) \le h_m - h_{m+1}.
\end{equation}
Condition on all the distances $d_{ij}$ with $ij \in F_m$ and denote by $(X_1, X_2, X_3)$ the random vector whose distribution is the conditioned distribution of $(d_{12}, d_{13}, d_{23})$, so that
\[
h_m = H(d_{12} \mid (d_e)_{e \in F_m}) = \mathbb{E}[H(X_1)] = \mathbb{E}[H(X_2)] = \mathbb{E}[H(X_3)].
\]
We write $X_1 \times X_2 \times X_3$ to denote the random variable whose distribution is the product of the marginal distributions of $X_1$, $X_2$, and $X_3$.
\begin{claim}
\label{claim:PXnotinM3}
We have
\[
\mathbb{E}\big[\P(X_1 \times X_2 \times X_3 \notin \overline{\mathcal{M}_3})\big] \le 2(h_m - h_{m+1}).
\]
\end{claim}
\begin{proof}[Proof of Claim~\ref{claim:PXnotinM3}]
By~\eqref{eq:entropy_KL_relation}, inequality~\eqref{eq:entropy-bound-hm} is equivalent to
\begin{equation}
\label{eq:DKL-bound-hm}
\mathbb{E}\big[\DKL{(X_1, X_2, X_3)}{X_1 \times (X_2, X_3)}\big] \le h_m - h_{m+1}.
\end{equation}
By symmetry, inequality~\eqref{eq:entropy-bound-hm} continues to hold for any permutation of $(d_{12}, d_{13}, d_{23})$ and hence~\eqref{eq:DKL-bound-hm} continues to hold for any permutation of $(X_1, X_2, X_3)$. It thus follows from Lemma~\ref{lem:DKL-triangle-ineq} that
\begin{equation}
\label{eq:conditioned DKL}
\mathbb{E}\big[\DKL{(X_1,X_2,X_3)}{X_1 \times X_2 \times X_3}\big] \le 2(h_m - h_{m+1}).
\end{equation}
Since $(X_1, X_2, X_3) \in \overline{\mathcal{M}_3}$ with probability one, Claim~\ref{claim:PXnotinM3} now follows from~\eqref{eq:conditioned DKL} and Lemma~\ref{lem:Kullback_Leibler_and_support}.
\end{proof}
We may now apply Theorem~\ref{thm:entropy_bound_for_almost_independent} (to the distribution of $X_1 \times X_2 \times X_3$) to bound the entropies of $X_1$, $X_2$, and $X_3$. Lemma~\ref{lem:independent_max_volume} states that the largest volume of an axis-parallel box contained in $\overline{\mathcal{M}_3}$ is one and thus, by Theorem~\ref{thm:entropy_bound_for_almost_independent},
\[
H(X_1) + H(X_2) + H(X_3) \le C \cdot \P(X_1 \times X_2 \times X_3 \notin \overline{\mathcal{M}_3})^{1/3}.
\]
By Jensen's inequality (applied to the concave function $x \mapsto x^{1/3}$) and Claim~\ref{claim:PXnotinM3},
\[
3h_m = \mathbb{E}\big[H(X_1) + H(X_2) + H(X_3) \big] \le C \cdot \big( 2(h_m-h_{m+1}) \big)^{1/3},
\]
as we wanted to prove.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem:h_m_bound}]
Let $K$ be a sufficiently large absolute constant, to be fixed later. If $h_m \le \frac{K}{\sqrt{m+1}}$ for all $m$ then there is nothing to prove. Otherwise, aiming to obtain a contradiction, define
\begin{equation*}
m_0:=\min\left\{m : 0\le m\le n-2\text{ and }h_m > \frac{K}{\sqrt{m+1}}\right\}.
\end{equation*}
Taking $K\ge \log 2$ we necessarily have $m_0\ge 1$ by \eqref{eq:conditional_entropy_monotonicity}. It follows from Lemma~\ref{lem:h_m_bound_with_difference} and the definition of $m_0$ that
\begin{equation*}
h_{m_0-1}\le C\left(h_{m_0-1} - h_{m_0}\right)^{1/3} \le C\left(\frac{K}{\sqrt{m_0}} - \frac{K}{\sqrt{m_0+1}}\right)^{1/3}.
\end{equation*}
As $\frac{1}{\sqrt{x}} - \frac{1}{\sqrt{x+1}}\le \frac{1}{2x^{3/2}}$ for all $x>0$, we may continue the above inequality to obtain
\begin{equation*}
h_{m_0-1}\le \frac{CK^{1/3}}{2^{1/3}\sqrt{m_0}}\le \frac{K}{\sqrt{m_0+1}}
\end{equation*}
provided that $K$ is sufficiently large compared with $C$; fix $K$ to satisfy this condition. Since $h_{m_0}\le h_{m_0-1}$ by~\eqref{eq:conditional_entropy_monotonicity}, this contradicts the definition of $m_0$ and finishes the proof of Lemma~\ref{lem:h_m_bound}, and thus also of Theorem~\ref{thm:volume_estimate}.
\end{proof}
\subsection{The lower tail of a typical distance}
\label{sec:lower-tail-typical}
The proof of Theorem~\ref{thm:minimal_distance}, presented in Section~\ref{sec:distance} below, uses as input an upper bound on $\P(d_{12}<1)$ where, as before, $(d_{ij})$ denotes a uniformly chosen metric space in $\mathcal{M}_n$. We record this in the following result, which further points out a nearly-matching lower bound.
\begin{prop}\label{prop:distance-lower-tail}
There are absolute constants $C, c>0$ such that
\[
\frac{c}{\sqrt{n} \log(n+1)}\le \P( d_{12} < 1) \le \frac{C}{\sqrt{n}}.
\]
\end{prop}
We continue to use the notation $H(X;A):=-\int_A f(x)\log(f(x))dx$ for a random variable $X$ with bounded and compactly-supported density $f$ and measurable $A$.
The lower bound in Proposition~\ref{prop:distance-lower-tail} is a simple consequence of the volume lower bound in Theorem~\ref{thm:volume_estimate}. To see this, assume, to reach a contradiction, that the lower bound does not hold. Then, by Lemma~\ref{lem:entropy_bound_for_sub_probability} and the fact that $\log y\le y-1$ for all $y>0$,
\begin{equation*}
\begin{split}
H(d_{12}) &= H(d_{12};[0,1)) + H(d_{12};[1,2])\\
&\le \P(d_{12}<1)\log\left(\frac{1}{\P(d_{12}<1)}\right) + \P(d_{12}\ge1)\log\left(\frac{1}{\P(d_{12}\ge1)}\right)\\
&\le \P(d_{12}<1)\log\left(\frac{1}{\P(d_{12}<1)}\right) + 1-\P(d_{12}\ge1)\\
&= \P(d_{12}<1)\log\left(\frac{e}{\P(d_{12}<1)}\right)\le \frac{2c}{\sqrt{n}}
\end{split}
\end{equation*}
for each $c>0$ and every sufficiently large $n$ at which the asserted lower bound fails with constant $c$. Thus, by the subadditivity of entropy,
\begin{equation*}
\log(\mathrm{Vol}(\mathcal{M}_n)) \le \binom{n}{2}H(d_{12})\le c n^{3/2}
\end{equation*}
for all such $n$. As $\P(d_{12}<1)>0$ for each fixed $n$, taking $c$ small forces such $n$ to be arbitrarily large, and for small $c$ the resulting bound contradicts the lower bound in Theorem~\ref{thm:volume_estimate}.
We proceed to prove the upper bound in Proposition~\ref{prop:distance-lower-tail}. The following lemma, which relies on the entropy bounds of Section~\ref{sec:entropy-maximising product distributions}, is the main ingredient.
\begin{lem}\label{lem:sum of entropies with lower tail}
There exist absolute constants $C,c>0$ such that the following holds. Let $X_1, X_2, X_3$ be \emph{independent} random variables, each supported in $[0,2]$. Then
\begin{equation*}
\sum_{i=1}^3 H(X_i)\le C\,\P((X_1, X_2, X_3)\notin\overline{\mathcal{M}_3})^{1/3} - c\sum_{i=1}^3\P(X_i < 1).
\end{equation*}
\end{lem}
\begin{proof}
Set $\varepsilon:=\P((X_1, X_2, X_3)\notin\overline{\mathcal{M}_3})^{1/3}$ and use~\eqref{eq:LQ UQ def} to define $(a_i), (b_i)$ for $1\le i\le 3$. Proposition~\ref{prop:entropy_bound_for_almost_independent} and \eqref{eq:vol_P_lemma_estimate}
imply that
\begin{equation}\label{eq:initial entropy bound for minimum distance}
\sum_{i=1}^3 H(X_i)\le \frac{1}{2}\sum_{i=1}^3 H(X_i; [a_i, b_i]) + C\varepsilon
\end{equation}
for an absolute $C>0$.
We proceed to estimate the right-hand side of~\eqref{eq:initial entropy bound for minimum distance}. First, Lemma~\ref{lem:independent-box-poly} shows that $P:=[a_1, b_1]\times [a_2, b_2]\times [a_3,b_3]$ is contained in the closure of $\mathcal{M}_3$. Hence, Lemma~\ref{lem:independent_max_volume} implies that
\begin{equation}\label{eq:V upper bound}
V:=\mathrm{Vol}(P)\le 1 - c_1\sum_{i=1}^3 \big( |a_i - 1| + |b_i - 2| \big)
\end{equation}
for an absolute $0<c_1<1$. Second, we shall prove that
\begin{multline}\label{eq:middle entropy bound}
H(X_i;[a_i,b_i]) \le (1-2\varepsilon)\log(b_i-a_i) + 2\varepsilon \\
+ c_1\left(|a_i-1| + |b_i - 2| - \frac{\P(X_i<1)}{20}\right) + \varepsilon
\end{multline}
for each $1\le i\le 3$. Lastly, plugging this estimate in~\eqref{eq:initial entropy bound for minimum distance} gives
\begin{multline*}
\sum_{i=1}^3 H(X_i)\le \frac{1-2\varepsilon}{2}\log(V) + (C+\tfrac{9}{2})\varepsilon\\ + \frac{c_1}{2}\sum_{i=1}^3 \left(|a_i-1| + |b_i-2| - \frac{\P(X_i<1)}{20}\right).
\end{multline*}
Using $\log(V)\le V-1$ and \eqref{eq:V upper bound} gives
\[
\frac{1-2\varepsilon}{2}\log(V)\le\frac{1}{2}\bigg(-c_1\sum_{i=1}^3(|a_i-1|+|b_i-2|)\bigg)+9c_1\varepsilon,
\]
where we used the inequality $|a_i-1| + |b_i-2| \le 3$ to bound the error term. We see that the terms containing $\sum_i |a_i-1|+|b_i-2|$ cancel and we are left with
\[
\sum_{i=1}^3H(X_i)\le C\varepsilon-\frac{c_1}{40}\sum_{i=1}^3\P(X_i<1),
\]
as needed.
It remains to prove~\eqref{eq:middle entropy bound}. Fix $1\le i\le 3$. Lemma~\ref{lem:entropy_bound_for_sub_probability} and the inequality $\log y \le y -1$, valid for all $y > 0$, give that
\[
H(X_i;[a_i, b_i]) \le(1-2\varepsilon)\log\left(\frac{b_i-a_i}{1-2\varepsilon}\right)\le (1-2\varepsilon)\log(b_i-a_i) + 2\varepsilon.
\]
Thus it suffices to show that the sum of the third and fourth terms in~\eqref{eq:middle entropy bound} is non-negative. This is the case if: (i) $a_i\ge1$, since $c_1<1$ and the definition of $a_i$ implies that $\P(X_i<a_i)=\varepsilon$ (see~\eqref{eq:X-quantiles}); (ii) $b_i\le1$; (iii) $b_i - a_i\le \frac{1}{2}$; or (iv) $a_i<1$ and $\P(a_i<X_i<1)\le 20(1-a_i)$ since
\begin{equation}\label{eq:less than 1 and a_i}
\P(X_i<1) = \P(a_i<X_i<1) + \P(X_i<a_i) = \P(a_i<X_i<1) + \varepsilon.
\end{equation}
We thus assume that $a_i<1$, $b_i>1$, $b_i-a_i>\frac{1}{2}$ and $\P(a_i<X_i<1)>20(1-a_i)$. In particular,
\begin{equation*}
\frac{\P(a_i<X_i<1)}{\P(a_i<X_i<b_i)}\ge 10 \cdot \frac{1-a_i}{b_i-a_i}.
\end{equation*}
Applying the second clause of Lemma~\ref{lem:entropy_bound_for_sub_probability}, with the partition $[a_i,b_i]=[a_i,1)\cup[1,b_i]$, then shows that
\begin{equation*}
H(X_i;[a_i, b_i])\le(1-2\varepsilon)\log\left(\frac{b_i-a_i}{1-2\varepsilon}\right) - \frac{\P(a_i<X_i<1)}{4}.
\end{equation*}
This implies~\eqref{eq:middle entropy bound}, again using~\eqref{eq:less than 1 and a_i} and the fact that $c_1<1$.
\end{proof}
Now recall the notation $F_m$ and $h_m$ from~\eqref{eq:F_m_def} and~\eqref{eq:h_m def}, respectively. The following simple lemma is our second ingredient in the proof of the upper bound in Proposition~\ref{prop:distance-lower-tail}.
\begin{lem}\label{eq:m for conditioning}
There exists an absolute constant $K>0$ and some $\frac{1}{3}n \le m \le \frac{2}{3}n$ for which
\begin{equation*}
h_m>-\frac{K}{\sqrt{n}}\quad\text{and}\quad h_m - h_{m+1}\le \frac{K}{n^{3/2}}.
\end{equation*}
\end{lem}
\begin{proof}
Recall that $m \mapsto h_m$ is non-increasing, by (\ref{eq:conditional_entropy_monotonicity}), and that $h_m\le C/\sqrt{m+1}$ for each $m$, by Lemma~\ref{lem:h_m_bound} (we rename the constant $K$ of that lemma to $C$, reserving $K$ for the present proof). We first claim that, for some $K$ sufficiently large,
\begin{equation}\label{eq:hm from below}
h_{\lceil 2n/3 \rceil}>-\frac{K}{\sqrt{n}}.
\end{equation}
Indeed, if it were not the case then from (\ref{eq:volume_decomposition}) we would get
\begin{align*}
\log(\mathrm{Vol}(\mathcal{M}_n))
& \stackrel{\textrm{\clap{(\ref{eq:volume_decomposition})}}}{\le}\;
\sum_{m=0}^{n-2}(n-m-1)h_m\\
&\le \sum_{m=0}^{\lceil 2n/3 \rceil-1}\frac {C(n-m-1)}{\sqrt{m+1}} -
\sum_{m=\lceil 2n/3 \rceil}^{n-2}\frac{K(n-m-1)}{\sqrt{n}}\\
& \le \;Cn^{3/2}-\frac{K}{20}\,n^{3/2},
\end{align*}
which would contradict the fact that $\mathrm{Vol}(\mathcal{M}_n) \ge 1$, provided that $K$ is sufficiently large.
Finally, it follows from the pigeonhole principle that for some $m$ with $\lceil n/3 \rceil \le m< \lceil 2n/3 \rceil$, we have
\begin{equation}
\label{eq:hm32}
h_m-h_{m+1}\le \frac{h_{\lceil n/3 \rceil} - h_{\lceil 2n/3 \rceil}}{\lfloor n/3 \rfloor} \le \frac{C/\sqrt{n/3+1} + K/\sqrt{n}}{\lfloor n/3 \rfloor} \le \frac{K}{n^{3/2}},
\end{equation}
where the last inequality holds after enlarging the constant $K$ (this only weakens~\eqref{eq:hm from below}, which therefore remains valid). Moreover, $h_m \ge h_{\lceil 2n/3 \rceil} > -K/\sqrt{n}$, as claimed.
\end{proof}
We now finish the proof of the upper bound in Proposition~\ref{prop:distance-lower-tail}.
Let $\frac{1}{3}n \le m \le \frac{2}{3}n$ be as in Lemma~\ref{eq:m for conditioning}. Condition on all the distances $d_{ij}$ with $ij \in F_m$ and write $(X_1,X_2,X_3)$ for the conditional versions of $(d_{12},d_{13},d_{23})$. The distribution of $(X_1, X_2, X_3)$ is regarded as random (a function of the variables conditioned upon). Lemma~\ref{lem:sum of entropies with lower tail} shows that
\begin{equation*}
\sum_{i=1}^3 H(X_i)\le C\,\P(X_1 \times X_2 \times X_3 \notin \overline{\mathcal{M}_3})^{1/3} - c\sum_{i=1}^3\P(X_i < 1).
\end{equation*}
Averaging over the conditioning, using Jensen's inequality (for the concave function $x\mapsto x^{1/3}$), and applying Claim~\ref{claim:PXnotinM3}, we conclude that
\begin{equation*}
3h_m\le C\,\big(2(h_m - h_{m+1})\big)^{1/3} - 3c\,\P(d_{12}<1).
\end{equation*}
Thus, by Lemma~\ref{eq:m for conditioning},
\begin{equation*}
\P(d_{12}<1)\le \frac{C'}{\sqrt{n}}
\end{equation*}
for some absolute constant $C'$, finishing the proof.
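(Explicitly, rearranging the last display and applying the bounds of Lemma~\ref{eq:m for conditioning} gives
\[
3c\,\P(d_{12}<1)\le C\big(2(h_m-h_{m+1})\big)^{1/3}-3h_m\le \frac{C(2K)^{1/3}+3K}{\sqrt{n}},
\]
so one may take $C'=\big(C(2K)^{1/3}+3K\big)/(3c)$.)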
\section{Other approaches to estimating the
volume}\label{sec:other_approaches}
\begin{table}
\begin{centering}
\begin{tabular}{| l | c |}
\hline
Method & Upper bound on $\log \mathrm{Vol}(\mathcal{M}_n)$\\ \hline\hline
Main result (Theorem~\ref{thm:volume_estimate}) & $O(n^{3/2})$\\ \hline
Exchangeability & $o(n^2)$ \\ \hline
Szemer\'edi regularity lemma & $o(n^2)$ \\ \hline
Hypergraph container method & $O\big(n^{3/2} (\log n)^3\big)$ \\ \hline
K\H{o}v\'ari--S\'os--Tur\'an & $O\big(\frac{n^2 (\log\log n)^2}{\log n}\big)$\\ \hline
\end{tabular}
\end{centering}
\smallskip
\caption{The upper bounds on the volume of the metric polytope provided by our main result and by the alternative approaches presented in Section~\ref{sec:other_approaches}.}
\end{table}
A \emph{coloured graph} on vertex set $V$ with \emph{palette} $C$ is simply a function in $C^{\binom{V}{2}}$.
Recall that a \emph{hereditary property} is a family of coloured graphs that is closed under taking subgraphs and isomorphisms.
Questions about the asymptotic growth rate of the volume and the distribution of the edge lengths for a random point in the metric polytope can be viewed as instances of the following very general class of problems:
Describe the distribution of a `uniformly sampled' \emph{coloured graph} (more generally, \emph{coloured hypergraph}) on $n$ vertices, conditioned to satisfy a given \emph{hereditary property}~$\mathcal{P}$, when $n$ is large.
The relevance to our setting is the following: We let $V = \br{n}$ and consider the collection of functions $d: \binom{V}{2} \to (0,2]$ satisfying the hereditary property:
\[
d_{ik} \le d_{ij}+ d_{jk} \quad \text{for all $i,j,k \in V$}.
\]
A few related approaches have been used to study problems from this class:
exchangeable families of random variables,
Szemer\'edi's regularity lemma, graph limits, and the method of hypergraph containers.
We refer to the survey paper \cite{MR2426176} and references within for a discussion of the connection between exchangeability, the regularity lemma, and graph limits;
for an introduction to the method of hypergraph containers, the reader is referred to the survey paper~\cite{BalMorSam-ICM}.
In this section, we discuss the problem of estimating the volume of the metric polytope using some of these approaches. These approaches may also be used to obtain some structural information on typical samples from the metric polytope.
\subsection{Limiting model and exchangeability}
The purpose of this section is to give a `soft' proof of a qualitative version of our main result on the volume. Precisely, we shall show that
\begin{equation}\label{eq:volume exponent}
\log\mathrm{Vol}(\mathcal{M}_n)=o(n^2),
\end{equation}
as in~\eqref{eq:limit constant is one}. The presented proof relies on \emph{exchangeability}.
To motivate the proof method, let us start by recalling de Finetti's theorem \cite{dF59}. It states that the distribution of an exchangeable sequence of random variables is a mixture of distributions of i.i.d.\ sequences of random variables. Here, we recall that: (i) a sequence $(X_n)$ of random variables is called \emph{exchangeable} if, for every finitely-supported permutation~$\sigma$, the sequence $(X_{\sigma(n)})$ has the same joint distribution as the sequence $(X_n)$; (ii) the distribution of a sequence is a \emph{mixture} of distributions of i.i.d.\ random variables if it can be sampled by first randomly sampling a distribution $D$ and then sampling the variables of the sequence independently from the distribution $D$.
De Finetti's theorem implies the following \emph{conditional independence} property: If $(X_n)$ is exchangeable, then, for each $n_0$, after conditioning on $\{X_n : n > n_0\}$ the random variables $X_1,\dotsc,X_{n_0}$ become independent and identically distributed. Indeed, the conditioning determines which distribution $D$ is used in the underlying i.i.d.\ sequence and independence follows.
As metric spaces $(d_{ij})$ are indexed by unordered pairs $\{i,j\}$, their relevant context is not that of exchangeable sequences but rather that of \emph{exchangeable arrays}. An exchangeable array is a two-dimensional array of random variables $(X_{ij})$, with the index set being all unordered pairs of distinct positive integers, such that, for each finitely-supported permutation~$\sigma$, the array $(X_{\sigma(i)\sigma(j)})$ has the same distribution as the array $(X_{ij})$. (The names \emph{weak exchangeability} and \emph{partial exchangeability} are also used for notions of this type. Higher-dimensional versions and variants where different permutations are applied to the coordinates have also been discussed in the literature.) A representation theorem similar to, but more complicated than, de Finetti's theorem exists for exchangeable arrays; see~\cite{aldous_rep1981,hoover_exchange1983,kallenberg_rep_1989} and especially \cite[Theorem 14.21]{ald85}. It again implies a conditional independence property, stated as follows.
\begin{lem}\label{lem:cond_independence}
Let $(X_{ij})$ be an exchangeable array. For each integer $n_0\ge 1$, conditioned on $\{X_{ij} :\max\{i,j\} > n_0\}$ the random variables $\{X_{ij} : i,j\le n_0\}$ become independent.
\end{lem}
We note that, unlike in de Finetti's theorem, the random variables $\{X_{ij} : i,j\le n_0\}$ need not be identically distributed after the conditioning. For completeness, we provide a short proof.
\begin{proof}[Proof of Lemma~\ref{lem:cond_independence}]
The proof is by induction on $n_0$. In the case $n_0 = 2$, there is nothing to prove. Suppose then that $n_0 \ge 3$ and that the result has already been established for $n_0 - 1$.
Write $\mathcal{F}_{n}^N$ and $\mathcal{F}_n$ for the sigma algebras generated by the collections $\{X_{ij} : n\le \max\{i,j\}\le N\}$ and $\{X_{ij} : \max\{i,j\}\ge n\}$, respectively. Let $A\subseteq\mathbb{R}^{\binom{\br{n_0-1}}{2}}$ be a Borel set. L\'evy's upward theorem (a consequence of the martingale convergence theorem) shows that
\begin{align}
&\P\big((X_{ij})_{i,j\le n_0-1}\in A\mid\mathcal{F}_{n_0}^N\big) \to \P\big((X_{ij})_{i,j\le n_0-1}\in A\mid\mathcal{F}_{n_0}\big),\label{eq:Levy upward theorem}\\
&\P\big((X_{ij})_{i,j\le n_0-1}\in A\mid\mathcal{F}_{n_0+1}^{N+1}\big) \to \P\big((X_{ij})_{i,j\le n_0-1}\in A\mid\mathcal{F}_{n_0+1}\big),\nonumber
\end{align}
as $N\to\infty$, almost surely. In addition, the fact that $(X_{ij})$ is an exchangeable array implies that, for each $N\ge n_0+1$,
\begin{equation*}
\P\big((X_{ij})_{i,j\le n_0-1}\in A\mid\mathcal{F}_{n_0}^N\big) \,{\buildrel d \over =}\, \P\big((X_{ij})_{i,j\le n_0-1}\in A\mid\mathcal{F}_{n_0+1}^{N+1}\big).
\end{equation*}
Consequently,
\begin{equation*}
\P\big((X_{ij})_{i,j\le n_0-1}\in A\mid\mathcal{F}_{n_0}\big)\,{\buildrel d \over =}\, \P\big((X_{ij})_{i,j\le n_0-1}\in A\mid\mathcal{F}_{n_0+1}\big),
\end{equation*}
which implies that, in fact,
\begin{equation}\label{eq:conditional equality almost surely}
\P\big((X_{ij})_{i,j\le n_0-1}\in A\mid\mathcal{F}_{n_0}\big)= \P\big((X_{ij})_{i,j\le n_0-1}\in A\mid\mathcal{F}_{n_0+1}\big)
\end{equation}
almost surely. To see the last conclusion, observe that if $X$ is a random variable with finite second moment and $\mathcal{G}_1\subseteq\mathcal{G}_2$ are sigma algebras, then
\begin{equation*}
\mathbb{E}\left[\mathbb{E}[X\mid\mathcal{G}_2]^2\right]=\mathbb{E}\left[\mathbb{E}[X\mid\mathcal{G}_1]^2\right] + \mathbb{E}\left[(\mathbb{E}[X\mid\mathcal{G}_2] - \mathbb{E}[X\mid\mathcal{G}_1])^2\right].
\end{equation*}
Thus, if $\mathbb{E}[X\mid\mathcal{G}_1] \,{\buildrel d \over =}\, \mathbb{E}[X\mid\mathcal{G}_2]$, then $\mathbb{E}[X\mid\mathcal{G}_1]= \mathbb{E}[X\mid\mathcal{G}_2]$ almost surely.
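The displayed identity is the Pythagorean theorem for conditional expectations: expanding the square and using the tower property,
\[
\mathbb{E}\big[\mathbb{E}[X\mid\mathcal{G}_2]\,\mathbb{E}[X\mid\mathcal{G}_1]\big] = \mathbb{E}\big[\mathbb{E}[X\mid\mathcal{G}_1]\,\mathbb{E}\big[\mathbb{E}[X\mid\mathcal{G}_2]\mid\mathcal{G}_1\big]\big] = \mathbb{E}\big[\mathbb{E}[X\mid\mathcal{G}_1]^2\big],
\]
so the cross term in $\mathbb{E}\big[(\mathbb{E}[X\mid\mathcal{G}_2] - \mathbb{E}[X\mid\mathcal{G}_1])^2\big]$ equals $-2\,\mathbb{E}\big[\mathbb{E}[X\mid\mathcal{G}_1]^2\big]$.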
As~\eqref{eq:conditional equality almost surely} holds for arbitrary Borel $A$, we conclude (recalling the definition of $\mathcal{F}_n$) that, conditioned on $\mathcal{F}_{n_0+1}$, the collection of random variables $\{X_{ij} : i,j\le n_0-1\}$ is independent of the collection $\{X_{ij} : \max\{i,j\} = n_0\}$. Together with the induction hypothesis this implies that, conditioned on $\mathcal{F}_{n_0+1}$, the random variables $\{X_{ij} : i,j\le n_0-1\}$ become independent. These facts together with another use of the exchangeability property imply the lemma.
\end{proof}
We proceed to discuss the metric polytope, aiming to prove~\eqref{eq:volume exponent}. It is convenient to pass to a discrete problem, to avoid questions on the existence of densities and convergence issues. Specifically, given integers $M$ and $n$, define the discrete metric polytope $\mathcal{M}_n^M$ by
\begin{equation*}
\mathcal{M}_n^M := \left\{ (\dist{i}{j})\in\{1, \dotsc, M\}^{\binom{\br{n}}{2}} : \dist{i}{j}\le \dist{i}{k} + \dist{k}{j}\text{ for all $i,j,k$} \right\},
\end{equation*}
see also Section~\ref{sec:discrete-problem}. We shall prove that, for each fixed \emph{even} $M$,
\begin{equation}
\label{eq:exchangeability-lemma-claim}
\limsup_{n\to\infty}\frac{\log(|\mathcal{M}_n^M|)}{\binom{n}{2}} \le \log\left(\frac{M+2}{2}\right).
\end{equation}
As $\mathrm{Vol}(\mathcal{M}_n) \le \left(\frac{2}{M}\right)^{\binom{n}{2}}|\mathcal{M}_n^M|$ for all $n,M$, see~\eqref{eq:card_MnM-vol_Mn} in Section~\ref{sec:discrete-problem} below, \eqref{eq:exchangeability-lemma-claim} will imply~\eqref{eq:volume exponent}.
Fix an even $M$. To apply Lemma~\ref{lem:cond_independence}, embed $\mathcal{M}_n^M$ into $\br{M}^{\binom{\mathbb{N}}{2}}$ by setting all distances involving points $i>n$ to zero. Denote by $\mu_n^M$ the uniform distribution on $\mathcal{M}_n^M$, viewed as a distribution on the space $\br{M}^{\binom{\mathbb{N}}{2}}$ via this embedding. As the set of probability measures on this space is compact with respect to convergence in distribution, there exists a subsequence $n_m$ on which the limit superior in~\eqref{eq:exchangeability-lemma-claim} is realized and such that $\mu_{n_m}^M$ converges in distribution. Denote the limit measure by $\mu_\infty^M$ and note that it is necessarily supported on the infinite-dimensional discrete metric polytope
\begin{equation*}
\mathcal{M}_\infty^M:=\left\{ (d_{ij}) \in \{1, \dotsc, M\}^{\binom{\mathbb{N}}{2}} : d_{ij}\le d_{ik}+d_{kj}\text{ for all
$i,j,k$} \right\}.
\end{equation*}
Write $d^\infty = (d_{ij}^\infty)$ for a sample from $\mu_\infty^M$. Note that $d^\infty$ is an exchangeable array, inheriting its exchangeability properties from the measures $(\mu_n^M)$. As before, we write $\mathcal{F}_{n}^N$ and $\mathcal{F}_n$ for the sigma algebras generated by the collections $\{d_{ij}^\infty : n\le \max\{i,j\}\le N\}$ and $\{d_{ij}^\infty : \max\{i,j\}\ge n\}$, respectively.
Lemma~\ref{lem:cond_independence} shows that, conditioned on $\mathcal{F}_4$, the random variables $(d_{12}^\infty,d_{13}^\infty,d_{23}^\infty)$ become independent. Thus the support of their (conditional) joint distribution is in some axis-parallel discrete box fully contained in $\mathcal{M}_3^M$. An analogue of Lemma~\ref{lem:independent_max_volume} (with an analogous proof) shows that such a box has cardinality at most $\left(\frac{M+2}{2}\right)^3$. In particular,
\begin{equation}\label{eq:limiting cond entropy}
H_{\text{S}}(d_{12}^\infty,d_{13}^\infty,d_{23}^\infty\mid \mathcal{F}_4)\le 3\log\left(\frac{M+2}{2}\right),
\end{equation}
where $H_{\text{S}}$ denotes Shannon's entropy. Recalling our use of L\'evy's upward theorem in~\eqref{eq:Levy upward theorem}, and noting that $(d_{12}^\infty, d_{13}^\infty, d_{23}^\infty)$ is supported on a finite set, we see that the conditional distribution of these random variables given $\mathcal{F}_4^N$ converges as $N\to\infty$ to their conditional distribution given $\mathcal{F}_4$, almost surely. In particular (again, by the finite support),
\begin{equation}\label{eq:towards the limiting cond entropy}
\lim_{N\to\infty}H_{\text{S}}(d_{12}^\infty,d_{13}^\infty,d_{23}^\infty\mid \mathcal{F}_4^N) = H_{\text{S}}(d_{12}^\infty,d_{13}^\infty,d_{23}^\infty\mid \mathcal{F}_4).
\end{equation}
Let $\varepsilon>0$. Combining~\eqref{eq:limiting cond entropy} and~\eqref{eq:towards the limiting cond entropy} shows that, for some $N_0$,
\begin{equation*}
H_{\text{S}}(d_{12}^\infty,d_{13}^\infty,d_{23}^\infty\mid \mathcal{F}_4^{N_0})\le 3\log\left(\frac{M+2}{2}\right)+\varepsilon.
\end{equation*}
Let $d^n = (d_{ij}^n)$ be a sample from $\mu_n^M$. Similarly to the above, the fact that $\mu_{n_m}^M\to\mu_\infty^M$ and that $\{d_{ij}^n : i,j\le N_0\}$ is finitely supported implies that
\begin{equation*}
H_{\text{S}}\left(d_{12}^{n_m},d_{13}^{n_m},d_{23}^{n_m}\mid \big\{d_{ij}^{n_m} : 4\le \max\{i,j\}\le N_0\big\}\right)\le 3\log\left(\frac{M+2}{2}\right)+2\varepsilon
\end{equation*}
for all large $m$. By symmetry and monotonicity of conditional entropy, we conclude that, for all large $m$ and all distinct $i, j, k \in \{1, \dotsc, n_m - N_0+3\}$,
\begin{multline*}
H_{\text{S}}\left(d_{ij}^{n_m},d_{ik}^{n_m},d_{jk}^{n_m}\mid \big\{d_{i'j'}^{n_m} : n_m-N_0+4\le \max\{i',j'\}\le n_m\big\}\right)\\
\le 3\log\left(\frac{M+2}{2}\right)+2\varepsilon.
\end{multline*}
We may now apply the subadditivity argument from the proof outline, Section~\ref{sec:proof outline}, to obtain that, for all large $m$,
\begin{equation*}
\log(|\mathcal{M}_{n_m}^M|)\le C \log(M) N_0 n_m + \left(\log\left(\frac{M+2}{2}\right)+\frac{2}{3}\varepsilon\right)\cdot\binom{n_m}{2}
\end{equation*}
for an absolute constant $C$. Finally, recalling that the limit superior in~\eqref{eq:exchangeability-lemma-claim} is realized along $n_m$, and noting that $\varepsilon$ is arbitrary and $N_0$ is a function only of $\varepsilon$ and $\mu_\infty^M$, we conclude that~\eqref{eq:exchangeability-lemma-claim} holds.
\subsection{The Szemer\'edi regularity lemma approach}
In this section, we show how a fairly standard application of (a multi-coloured version of) Szemer\'edi's regularity lemma gives an alternative proof of~\eqref{eq:limit constant is one}. The argument presented here may be seen as an adaptation of the classical argument of Erd\H{o}s, Frankl, and R\"odl~\cite{ErFrRo86} proving that the number of $H$-free graphs with $n$ vertices is $2^{\mathrm{ex}(n,H) + o(n^2)}$, where $\mathrm{ex}(n,H)$ denotes the maximum number of edges in an $H$-free graph with $n$~vertices. This approach was independently pursued by Mubayi and Terry~\cite{mubayi2019discrete}.
Recall that a bipartite graph $G$ with parts $V_1$ and $V_2$ is \emph{$\varepsilon$-regular} if, for every $W_1 \subseteq V_1$ with $|W_1| \ge \varepsilon |V_1|$ and $W_2 \subseteq V_2$ with $|W_2| \ge \varepsilon |V_2|$, we have
\[
\left| \frac{e_G(W_1, W_2)}{|W_1| |W_2|} - \frac{e_G(V_1, V_2)}{|V_1| |V_2|} \right| \le \varepsilon,
\]
where $e_G(W_1,W_2)$ is the number of edges connecting a vertex of $W_1$ to a vertex of $W_2$. An~\emph{equipartition} of a set $V$ is a partition of $V$ into $V_1, \dotsc, V_k$ such that $\big| |V_i| - |V_j| \big| \le 1$ for all $i$ and $j$. The celebrated regularity lemma of Szemer\'edi~\cite{Sz78} states that, for every positive $\varepsilon$, there exists a constant $R$ such that the vertex set of every graph $G$ admits an equipartition into at most $R$ parts with the property that the bipartite subgraphs of $G$ induced by all but at most an $\varepsilon$-proportion of all pairs of parts are $\varepsilon$-regular. We shall need the following generalisation of this statement to edge-coloured graphs. For the remainder of this section, given a positive integer $M$, we shall refer to a colouring of all pairs of elements of a set $V$ with elements of $\br{M}$ as an \emph{$M$-graph} with vertex set $V$. Moreover, given an $M$-graph $G$ and a $c \in \br{M}$, we shall denote by $G(c)$ the graph whose edges are all pairs of vertices to which $G$ assigns the colour $c$. The following straightforward generalisation of Szemer\'edi's regularity lemma to $M$-graphs was formulated in~\cite{AxMa11}. It may be easily deduced from the standard proof of the regularity lemma.
\begin{thm}[{\cite{AxMa11}}]
\label{thm:colored-reg-lemma}
For every $\varepsilon > 0$, $M$, and $r_0$, there exists an integer $R$ with the following property.
The vertex set of an arbitrary $M$-graph $G$ admits an equipartition $\{V_1, \dotsc, V_r\}$, where $r_0 \le r \le R$, such that, for all but at most $\varepsilon \binom{r}{2}$ pairs $\{i,j\} \in \binom{\br{r}}{2}$, the bipartite subgraph of $G(c)$ induced by $V_i$ and $V_j$ is $\varepsilon$-regular for every $c \in \br{M}$.
\end{thm}
For the sake of brevity, we shall refer to partitions satisfying the assertion of the theorem as \emph{$\varepsilon$-regular partitions}. As in most standard applications of the regularity lemma, we shall use the following straightforward property of $\varepsilon$-regular graphs, the \emph{embedding lemma} for triangles. For a more general version of the embedding lemma, we refer the reader to the classical survey of Koml\'os and Simonovits~\cite{KoSi96}.
\begin{prop}
\label{prop:triangle-embedding}
Let $\varepsilon \in (0,1/2)$, suppose that $V_1$, $V_2$, and $V_3$ are
pairwise disjoint sets, and let $G$ be a graph with vertex set $V_1
\cup V_2 \cup V_3$. If, for each pair $\{i,j\} \in \binom{\br{3}}{2}$,
the bipartite subgraph of $G$ induced by $V_i$ and $V_j$ is
$\varepsilon$-regular and satisfies $e_G(V_i, V_j) \ge 2\varepsilon|V_i||V_j|$,
then $G$ contains a triangle.
\end{prop}
As in the previous section, given integers $M$ and $n$, we define
\[
\mathcal{M}_n^M := \left\{(\dist{i}{j})\in\{1, \dotsc, M\}^{\binom{\br{n}}{2}} : \dist{i}{j}\le \dist{i}{k} + \dist{k}{j}\text{ for all $i,j,k$}\right\},
\]
see also Section~\ref{sec:discrete-problem} below. We shall prove that
\begin{equation}
\label{eq:reg-lemma-claim}
|\mathcal{M}_n^M| \le M^{\delta n^2} \cdot \left(\frac{M+2}{2}\right)^{\binom{n}{2}}
\end{equation}
for each fixed even $M$ and $\delta > 0$, provided that $n$ is sufficiently large. We remark here that Mubayi and Terry~\cite{mubayi2019discrete} independently used a similar approach, combined with a delicate stability analysis, to prove the much more accurate estimate $|\mathcal{M}_n^M| = (1+e^{-\Omega(n)}) \left(\frac{M+2}{2}\right)^{\binom{n}{2}}$ for each fixed even $M$. As $\mathrm{Vol}(\mathcal{M}_n) \le \left(\frac{2}{M}\right)^{\binom{n}{2}}|\mathcal{M}_n^M|$, see~\eqref{eq:card_MnM-vol_Mn} in Section~\ref{sec:discrete-problem} below, \eqref{eq:reg-lemma-claim} will imply that $\mathrm{Vol}(\mathcal{M}_n) = 2^{o(n^2)}$.
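For very small $n$ and $M$, both the cardinality $|\mathcal{M}_n^M|$ and the product-set lower bound, in which every distance is taken from $\{M/2, \dotsc, M\}$, can be verified by exhaustive enumeration. The following Python sketch (the function names are ours and purely illustrative) does so directly:

```python
from itertools import combinations, product

def is_metric(d, n):
    """Check all triangle inequalities for distances d[(i, j)], keyed with i < j."""
    dist = lambda i, j: d[min(i, j), max(i, j)]
    return all(dist(i, j) <= dist(i, k) + dist(k, j)
               for i, j, k in product(range(n), repeat=3)
               if len({i, j, k}) == 3)

def count_MnM(n, M):
    """Brute-force |M_n^M|: tuples in {1,...,M}^{binom(n,2)} satisfying all triangle inequalities."""
    pairs = list(combinations(range(n), 2))
    return sum(is_metric(dict(zip(pairs, vals)), n)
               for vals in product(range(1, M + 1), repeat=len(pairs)))
```

For $M=2$ every assignment of distances is metric, so $|\mathcal{M}_n^2| = 2^{\binom{n}{2}} = \left(\frac{M+2}{2}\right)^{\binom{n}{2}}$ exactly, whereas $|\mathcal{M}_3^4| = 52 > 27 = \left(\frac{M+2}{2}\right)^{\binom{3}{2}}$; the two counts only match asymptotically as $n \to \infty$.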
As proofs of both~\eqref{eq:reg-lemma-claim} and the improved estimate of~\cite{mubayi2019discrete} rely on the regularity lemma, the rate of convergence implicit in the $o(n^2)$ term in the exponent is very slow. Possibly, one could use weaker forms of the regularity lemma to improve this rate of convergence. We do not pursue this direction here, but only mention that one such regularity lemma, guaranteeing a regular partition whose number of parts is only exponential in $\varepsilon^{-2}$, was obtained by Frieze and Kannan~\cite{FriKan96,FriKan99}, see also~\cite[Section~1.4]{MR2989432}. (In our context, a multi-coloured version of such a regularity lemma would most likely be required.)
Fix an even integer $M$ and $\delta \in (0,1/2)$ and let $\varepsilon = \frac{\delta}{10M\log(1/\delta)}$ and $r_0 = 2/\delta$. Choose an arbitrary $G \in \mathcal{M}_n^M$, which may be viewed as an $M$-graph with vertex set $\br{n}$, and apply Theorem~\ref{thm:colored-reg-lemma} to $G$ to obtain an $\varepsilon$-regular partition $\{V_1, \dotsc, V_r\}$ of $\br{n}$ with $r_0 \le r \le R$ for some constant $R = R(M, \delta)$. For every pair $\{i,j\} \in \binom{\br{r}}{2}$, define
\[
D_{ij} = \left\{ c \in \br{M} : e_{G(c)}(V_i, V_j) \ge 2\varepsilon |V_i||V_j| \right\}
\]
and observe that all but at most a $2M\varepsilon$-proportion of pairs in $V_i \times V_j$ are coloured with an element of $D_{ij}$. Call a triple $\{i,j,k\} \in \binom{\br{r}}{3}$ \emph{regular} if the bipartite subgraphs of $G(1), \dotsc, G(M)$ induced by $(V_i,V_j)$, $(V_i, V_k)$, and $(V_j, V_k)$ are all $\varepsilon$-regular. It follows from Proposition~\ref{prop:triangle-embedding} that, for every regular triple $\{i, j, k\}$, we must have $D_{ij} \times D_{ik} \times D_{jk} \subseteq \mathcal{M}_3^M$. Indeed, otherwise $G$ would contain a triple of distances that do not satisfy the triangle inequality. A discrete analogue of Lemma~\ref{lem:independent_max_volume} (with an analogous proof) shows that $D_{ij} \times D_{ik} \times D_{jk}$ has cardinality at most $\left(\frac{M+2}{2}\right)^3$. As $\{V_1, \dotsc, V_r\}$ is an $\varepsilon$-regular partition of $G$, all but at most $3\varepsilon\binom{r}{3}$ triples $\{i,j,k\} \in \binom{\br{r}}{3}$ are regular. Consequently,
\begin{multline}
\label{eq:prod-Dij}
\prod_{\{i,j\} \in \binom{\br{r}}{2}} |D_{ij}| = \left( \prod_{\smash{\{i,j,k\} \in \binom{\br{r}}{3}}} |D_{ij}||D_{ik}||D_{jk}|\right)^{\frac{1}{r-2}} \displaybreak[0]\\
\le \left( \left(\frac{M+2}{2}\right)^{3(1-3\varepsilon)\binom{r}{3}} M^{9\varepsilon\binom{r}{3}} \right)^{\frac{1}{r-2}} \le \left(\frac{M+2}{2^{1-3\varepsilon}}\right)^{\binom{r}{2}}.
\end{multline}
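The discrete analogue of Lemma~\ref{lem:independent_max_volume} used above states that no box $D_{12} \times D_{13} \times D_{23} \subseteq \mathcal{M}_3^M$ has cardinality exceeding $\left(\frac{M+2}{2}\right)^3$. For small even $M$ this can be confirmed by exhaustive search over all triples of nonempty subsets of $\br{M}$, as in the following Python sketch (our own naming):

```python
from itertools import combinations, product

def nonempty_subsets(M):
    vals = range(1, M + 1)
    return [set(c) for r in range(1, M + 1) for c in combinations(vals, r)]

def max_box_in_M3(M):
    """Largest |D12|*|D13|*|D23| over boxes contained in the discrete metric polytope M_3^M."""
    subsets = nonempty_subsets(M)
    best = 0
    for D12, D13, D23 in product(subsets, repeat=3):
        # the box lies in M_3^M iff every triple from it satisfies all triangle inequalities
        if all(a <= b + c and b <= a + c and c <= a + b
               for a, b, c in product(D12, D13, D23)):
            best = max(best, len(D12) * len(D13) * len(D23))
    return best
```

For $M = 2$ and $M = 4$ the search returns $8$ and $27$, respectively, matching $\left(\frac{M+2}{2}\right)^3$, with the extremal box being $\{M/2, \dotsc, M\}^3$.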
Since $G$ was arbitrary, the above analysis shows that one may construct each element of $\mathcal{M}_n^M$ as follows. First, choose $r$, the equipartition $\{V_1,\dotsc,V_r\}$, and the sets $D_{ij}\subseteq \br{M}$; the number of choices for all three combined is $2^{O(n)}$ (with implicit constant depending on $M$ and $\delta$). Next, for each $\{i,j\} \in \binom{\br{r}}{2}$, choose a set $X_{ij} \subseteq V_i \times V_j$ of at most $2M\varepsilon |V_i| |V_j|$ pairs whose colour will not belong to $D_{ij}$; there are at most $\binom{\binom{n}{2}}{\lfloor 2M\varepsilon \binom{n}{2}\rfloor} \le \exp\big(M\varepsilon \log(e/(2M\varepsilon)) n^2\big)$ ways to do it. Finally, choose colours for all $\binom{n}{2}$ pairs in such a way that each pair in $V_i \times V_j \setminus X_{ij}$ is assigned a colour from $D_{ij}$; the number of ways one can do this is
\begin{equation}
\label{eq:SzRL-number-of-choices}
\prod_{i,j}|D_{ij}|^{|V_i\times V_j\setminus X_{ij}|} \cdot M^{\binom{n}{2} - \sum_{i,j} |V_i \times V_j \setminus X_{ij}|}.
\end{equation}
Recalling that $|V_i \times V_j| \ge \lfloor n/r \rfloor^2$ and $|X_{ij}| \le 2M\varepsilon |V_i||V_j|$ for each $\{i,j\} \in \binom{\br{r}}{2}$, inequality~\eqref{eq:prod-Dij} implies that~\eqref{eq:SzRL-number-of-choices} is at most
\begin{equation}
\label{eq:SzRL-bound}
\left(\frac{M+2}{2^{1-3\varepsilon}}\right)^{\binom{r}{2} \cdot (1-2M\varepsilon)\lfloor n/r \rfloor^2} \cdot M^{\binom{n}{2} - \binom{r}{2} \cdot (1-2M\varepsilon)\lfloor n/r \rfloor^2}.
\end{equation}
Finally, as $\binom{n}{2} - \binom{r}{2} \lfloor n/r \rfloor^2 \le \frac{1}{r} \binom{n}{2} + r(n-1)$ and $r_0 \le r \le R$, a straightforward calculation shows that~\eqref{eq:SzRL-bound} is at most
\[
\left(\frac{M+2}{2}\right)^{\binom{n}{2}} 2^{\left(\frac{1}{r_0}+2M\varepsilon+3\varepsilon+\frac{2R}{n}\right)\binom{n}{2}}.
\]
This yields the claimed upper bound on $|\mathcal{M}_n^M|$ stated in~\eqref{eq:reg-lemma-claim}, provided that $n$ is sufficiently large, by our choice of $\varepsilon = \varepsilon(M, \delta)$ and~$r_0 = r_0(\delta)$.
\subsection{The hypergraph container method}
\label{sec:container-method}
In this section, which is based on joint work with Rob Morris, we shall show how the method of hypergraph containers can be used to derive a volume estimate of the form
\begin{equation}
\label{eq:vol-Mn-container}
\mathrm{Vol}(\mathcal{M}_n) \le \exp \left( C n^{3/2} (\log n)^3 \right),
\end{equation}
which falls just a little short of the upper bound established in Theorem~\ref{thm:volume_estimate} using entropy methods. We point out that the arguments presented here are inspired by the (earlier) work of Balogh and Wagner~\cite{BalWag16}, who were the first to use the container method for enumerating finite metric spaces and obtained the bound $\mathrm{Vol}(\mathcal{M}_n) \le \exp(n^{11/6+o(1)})$.
The hypergraph container theorems, proved simultaneously, but separately, in~\cite{BalMorSam} and~\cite{SaxTho}, state that the family of independent sets of any uniform hypergraph whose edges are sufficiently evenly distributed can be covered by a small family of \emph{containers}, subsets of vertices of the hypergraph that themselves are nearly independent. The wide applicability of this abstract statement stems from the fact that many discrete structures may be naturally represented as independent sets of some auxiliary hypergraph; in particular, this is the case with the metric spaces in $\mathcal{M}_n^M$. The particular version of the hypergraph container theorem stated below was proved in~\cite{MorSamSax}; see also~\cite{BalMorSam-ICM} for a survey.
Suppose that $\mathcal{H}$ is a $k$-uniform hypergraph, i.e.\ each (hyper)edge has exactly $k$ vertices. We write $V(\mathcal{H})$ to denote the vertex set of $\mathcal{H}$ and we identify $\mathcal{H}$ with its (hyper)edge set; we denote by $v(\mathcal{H})$ and $e(\mathcal{H})$ the numbers of vertices and edges of $\mathcal{H}$, respectively. A set $I\subseteq V(\mathcal{H})$ is called \emph{independent} if it contains no edges of $\mathcal{H}$. We moreover define, for every $\ell \in \{1, \dotsc, k\}$,
\[
\Delta_\ell(\mathcal{H}) = \max\left\{ |\{ S \in \mathcal{H} \colon T \subseteq S \}| \colon T \in \binom{V(\mathcal{H})}{\ell} \right\}.
\]
In other words, $\Delta_\ell(\mathcal{H})$ is the maximum number of edges of $\mathcal{H}$ that a single $\ell$-element set of vertices can be contained in.
We say that a family $\mathcal{C}$ of subsets of $V(\mathcal{H})$ is a family of \emph{containers} for (the independent sets of) $\mathcal{H}$ if every independent set is contained in some $B \in \mathcal{C}$. Every hypergraph $\mathcal{H}$ admits two trivial families of containers: the one-element family $\{V(\mathcal{H})\}$ and the family of all (maximal) independent sets of $\mathcal{H}$. The following proposition guarantees the existence of a family of containers that interpolates between these two extremes: it is much smaller than the family of all independent sets but each of the containers is significantly smaller than $V(\mathcal{H})$.
\begin{prop}
\label{prop:containers-main}
Let $\mathcal{H}$ be a~non-empty $k$-uniform hypergraph. Suppose that positive integers $b$ and $r$ satisfy
\[
\Delta_\ell(\mathcal{H}) \le \left(\frac{b}{v(\mathcal{H})}\right)^{\ell-1} \frac{e(\mathcal{H})}{r}
\]
for every $\ell \in \{1, \dotsc, k\}$. Then there exists a collection $\mathcal{C}$ of at most $\exp\big(kb\log(v(\mathcal{H}))\big)$ subsets of $V(\mathcal{H})$ such that:
\begin{enumerate}[label=(\roman*)]
\item
every independent set of $\mathcal{H}$ is contained in some $B \in \mathcal{C}$;
\item
$|B| \le v(\mathcal{H}) - 2^{-k(k+1)} \cdot r$ for every $B \in \mathcal{C}$.
\end{enumerate}
\end{prop}
In a typical application of the proposition, such as the one presented in this section, one takes $r$ to be close to $v(\mathcal{H})$ while $b = v(\mathcal{H})^\alpha$ for some $\alpha \in (0,1)$.
Call a triple $(a,b,c)$ of numbers \emph{non-metric} if some permutation of $(a,b,c)$ does not satisfy the triangle inequality, that is, if $a+b < c$, $a+c < b$, or $b+c < a$. Given positive integers $n$ and $M$, define the hypergraph $\mathcal{H}_n^M$ of \emph{non-metric triangles} as follows. The vertex set of $\mathcal{H}_n^M$ is $\binom{\br{n}}{2} \times \br{M}$ and its edges are all triples $\{(e_i,d_i)\}_{i=1}^3$ such that
\begin{itemize}
\item
$\{e_1, e_2, e_3\}$ is the set of edges of some triangle in the complete graph on $\br{n}$,
\item
$(d_1, d_2, d_3)$ is a non-metric triple.
\end{itemize}
It is not hard to see that the elements of $\mathcal{M}_n^M$ are in a one-to-one correspondence with independent subsets of $\mathcal{H}_n^M$ that contain exactly one element of the set $\{e\} \times \br{M}$ for each $e \in \binom{\br{n}}{2}$.
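For tiny parameters, this correspondence can be checked directly. The following Python sketch (names ours) builds $\mathcal{H}_n^M$ and counts the independent sets containing exactly one element of $\{e\} \times \br{M}$ for each pair $e$:

```python
from itertools import combinations, product

def is_nonmetric(a, b, c):
    # some permutation of (a, b, c) violates the triangle inequality
    return a + b < c or a + c < b or b + c < a

def nonmetric_hypergraph(n, M):
    """Edges of H_n^M; a vertex is a pair (e, d) with e a 2-subset of [n], d a colour."""
    edges = []
    for i, j, k in combinations(range(n), 3):
        for a, b, c in product(range(1, M + 1), repeat=3):
            if is_nonmetric(a, b, c):
                edges.append(frozenset({((i, j), a), ((i, k), b), ((j, k), c)}))
    return edges

def count_independent_transversals(n, M):
    """Choices of one colour per pair of [n] whose vertex set spans no edge of H_n^M."""
    pairs = list(combinations(range(n), 2))
    edges = nonmetric_hypergraph(n, M)
    count = 0
    for vals in product(range(1, M + 1), repeat=len(pairs)):
        chosen = {(e, d) for e, d in zip(pairs, vals)}
        if not any(edge <= chosen for edge in edges):
            count += 1
    return count
```

For $n = 3$ and $M = 4$, the hypergraph has $12$ edges and $52$ such independent sets, which is exactly $|\mathcal{M}_3^4|$; for $M = 2$ there are no non-metric triples at all, so the hypergraph is empty.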
Now, given a set $A \subseteq \binom{\br{n}}{2} \times \br{M}$, define, for each $e \in \binom{\br{n}}{2}$,
\[
A_e := \{d \in \br{M} \colon (e, d) \in A\}.
\]
Viewing $A$ as a representation of the product set $\prod_e A_e$, we define its volume by
\[
\mathrm{Vol}(A) := \prod_{e \in \binom{\br{n}}{2}} |A_e|,
\]
which is precisely the number of sets $I \subseteq A$ that contain exactly one element of the set $\{e\} \times \br{M}$ for each $e \in \binom{\br{n}}{2}$.
The following supersaturation statement for $\mathcal{H}_n^M$ is the key ingredient in our application of the container method to the setting of discrete metric spaces.
\begin{prop}
\label{prop:supersaturation-global}
Let $n$ and $M$ be positive integers, with $M$ \emph{even} and $n \ge 3$. Suppose that $A \subseteq \binom{\br{n}}{2} \times \br{M}$ satisfies
\[
\mathrm{Vol}(A) \ge \left(\frac{(1+\varepsilon)M}{2}\right)^{\binom{n}{2}}
\]
for some $\varepsilon \ge 16/M$. Then there exist an $m \in \br{M}$ and a set $A' \subseteq A$ with $|A'| \le mn^2$ such that
\begin{itemize}
\item
$e(\mathcal{H}_n^M[A']) \ge \varepsilon m^2 M \binom{n}{3} / (32 \log_2 M)$,
\item
$\Delta_1(\mathcal{H}_n^M[A']) \le 4nm^2$,
\item
$\Delta_2(\mathcal{H}_n^M[A']) \le 2m$,
\end{itemize}
where $\mathcal{H}[B]$ denotes the subhypergraph of $\mathcal{H}$ induced by the subset $B$, that is, the hypergraph whose vertex set is $B$ and whose edges are all edges of $\mathcal{H}$ that are fully contained in $B$.
\end{prop}
The basic building block in the proof of Proposition~\ref{prop:supersaturation-global} is the following elementary lemma, which one can prove combining the ideas in the proofs of Lemmas~\ref{lem:independent-box-poly} and~\ref{lem:independent_max_volume}.
\begin{lem}
\label{lem:supersaturation-local}
Let $M$ and $m$ be positive integers, with $M \ge 16$ even, and suppose that $A, B, C \subseteq \br{M}$. Let $A' \subseteq A$ comprise the $m$ largest and the $m$ smallest elements of $A$ and define $B'$ and $C'$ analogously. If $|A| \cdot |B| \cdot |C| \ge (M/2+2m)^3$, then the set $A' \times B' \times C'$ contains at least $m^3$ non-metric triples.
\end{lem}
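When $m = 1$, the conclusion of the lemma depends only on the minima and maxima of $A$, $B$, and $C$, while the hypothesis is satisfiable for given extremes exactly when the full integer intervals between those extremes are large enough. The case $M = 16$, $m = 1$ therefore reduces to a finite computation; the following Python sketch (our own reduction, not taken from the text) performs it:

```python
from itertools import product

def is_nonmetric(a, b, c):
    return a + b < c or a + c < b or b + c < a

def check_lemma_m1(M=16):
    """Exhaustively verify the m = 1 case of the lemma: whenever sets with the given
    minima/maxima can satisfy |A||B||C| >= (M/2 + 2)^3, the corner triples
    {min A, max A} x {min B, max B} x {min C, max C} contain a non-metric one."""
    T = (M // 2 + 2) ** 3
    intervals = [(lo, hi) for lo in range(1, M + 1) for hi in range(lo, M + 1)]
    for (a0, a1), (b0, b1) in product(intervals, repeat=2):
        if (a1 - a0 + 1) * (b1 - b0 + 1) * M < T:
            continue  # no choice of C can make the sizes large enough
        for c0, c1 in intervals:
            if (a1 - a0 + 1) * (b1 - b0 + 1) * (c1 - c0 + 1) < T:
                continue
            corners = product({a0, a1}, {b0, b1}, {c0, c1})
            if not any(is_nonmetric(*t) for t in corners):
                return False
    return True
```

The function returns \texttt{True}: every box of extremes compatible with the hypothesis contains a non-metric corner triple, i.e., at least $m^3 = 1$ non-metric triple in $A' \times B' \times C'$.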
\begin{proof}[Proof of Proposition~\ref{prop:supersaturation-global}]
As $\mathrm{Vol}(A) \le M^{\binom{n}{2}}$, we may assume that $\varepsilon \le 1$ and hence $M \ge 16$. Let $\mathcal{T}$ be the family of edge sets of all triangles in the complete graph with vertex set $\br{n}$. Since each edge (of the complete graph) belongs to exactly $n-2$ triangles,
\begin{equation}
\label{eq:prod-T-vol-A}
\prod_{e_1e_2e_3 \in \mathcal{T}} \left( |A_{e_1}| |A_{e_2}| |A_{e_3}|\right)^{1/3} = \mathrm{Vol}(A)^{\frac{n-2}{3}} \ge \left(\frac{(1+\varepsilon) M}{2}\right)^{\binom{n}{3}}.
\end{equation}
We partition the family $\mathcal{T}$ as follows. Set $s_{\max} = \lfloor \log_2 M \rfloor - 2$ and, for each $s \in \{0, \ldots, s_{\max}\}$, define
\[
\mathcal{T}_s := \left\{e_1e_2e_3 \in \mathcal{T} \colon \left( |A_{e_1}| |A_{e_2}| |A_{e_3}|\right)^{1/3} \in \left[\frac{M}{2} + 2^{s+1}, \frac{M}{2} + 2^{s+2} \right) \right\};
\]
moreover, let $\mathcal{T}_* := \mathcal{T} \setminus \bigcup_{s=0}^{s_{\max}} \mathcal{T}_s$. Observe that $\mathcal{T}_*$ contains only $e_1e_2e_3$ with $|A_{e_1}||A_{e_2}||A_{e_3}| < (\frac{M}{2}+2)^3$, as $2^{s_{\max}+2} > M/2$, and thus
\begin{equation}
\label{eq:prod-Ts-vol-A}
\prod_{e_1e_2e_3} \left( |A_{e_1}| |A_{e_2}| |A_{e_3}|\right)^{1/3} \le \left(\frac{M}{2}+2\right)^{|\mathcal{T}_*|} \cdot \prod_{s=0}^{s_{\max}} \left(\frac{M}{2} + 2^{s+2}\right)^{|\mathcal{T}_s|}.
\end{equation}
We claim that there is an $s \in \{0, \ldots, s_{\max}\}$ satisfying
\[
|\mathcal{T}_s| \ge \frac{\varepsilon M}{2^{s+5} \log_2M} \binom{n}{3}.
\]
Indeed, if this were not true, then~\eqref{eq:prod-Ts-vol-A} would contradict~\eqref{eq:prod-T-vol-A}, as $16/M \le \varepsilon \le 1$ and $s_{\max} + 1 \le \log_2 M$ (we omit the straightforward calculation).
Finally, let $m = 2^s$ and let $A'$ be the set of all pairs $(e, d) \in A$ such that $d$ is among the $m$ largest or the $m$ smallest elements of $A_e$. This definition guarantees that $|A'| \le 2m \binom{n}{2} \le mn^2$, that $\Delta_1(\mathcal{H}_n^M[A']) \le (2m)^2n$, and that $\Delta_2(\mathcal{H}_n^M[A']) \le 2m$. For each $e_1e_2e_3 \in \mathcal{T}_s$, we may invoke Lemma~\ref{lem:supersaturation-local} with $(A,B,C) \leftarrow (A_{e_1}, A_{e_2}, A_{e_3})$ to deduce that the set $A'_{e_1} \times A'_{e_2} \times A'_{e_3}$ contains at least $m^3$ non-metric triples. In particular,
\[
e(\mathcal{H}_n^M[A']) \ge |\mathcal{T}_s| \cdot m^3 \ge \frac{\varepsilon Mm^2}{32 \log_2M} \binom{n}{3},
\]
which concludes the proof of the proposition.
\end{proof}
Fix a large integer $n$ and let $M = 2\lfloor \frac{n}{2} \rfloor$. Suppose that $A \subseteq \binom{\br{n}}{2} \times \br{M}$ satisfies $\mathrm{Vol}(A) = \left(\frac{(1+\varepsilon)M}{2}\right)^{\binom{n}{2}}$ for some $16/M \le \varepsilon \le 1$ and let $m$ and $A'$ be as in Proposition~\ref{prop:supersaturation-global}. It is straightforward to verify that the ($3$-uniform) hypergraph $\mathcal{H}_n^M[A']$ satisfies the assumption of Proposition~\ref{prop:containers-main} with
\[
b := \left\lceil n^{3/2} \right\rceil \qquad \text{and} \qquad r := \left\lfloor \frac{\varepsilon M \binom{n}{3}}{128 n \log_2 M} \right\rfloor \ge \frac{\varepsilon M n^2}{2^{10} \log_2 M}.
\]
The proposition supplies a family $\mathcal{C}'$ of at most $\exp\big(3n^{3/2} \log(n^2M)\big)$ containers for independent sets of $\mathcal{H}_n^M[A']$, each of cardinality at most $|A'| - \frac{\varepsilon M n^2}{2^{22} \log_2 M}$. Therefore, the collection
\[
\mathcal{C} := \mathcal{C}(A) := \{ (A \setminus A') \cup B' : B' \in \mathcal{C}'\}
\]
is a family of containers for independent sets of $\mathcal{H}_n^M[A]$, with the same cardinality as $\mathcal{C}'$, that satisfies, for every $B \in \mathcal{C}$,
\[
\mathrm{Vol}(B) \le \left(\frac{M-1}{M}\right)^{\frac{\varepsilon M n^2}{2^{22} \log_2M}} \cdot \mathrm{Vol}(A) = \left(\frac{(1+\varepsilon')M}{2}\right)^{\binom{n}{2}},
\]
for some $\varepsilon' \le \left(1 - \frac{1}{2^{22}\log_2M}\right)\varepsilon$.
We build a family $\mathcal{C}$ of containers for the elements of $\mathcal{M}_n^M$, viewed as independent sets of $\mathcal{H}_n^M$, recursively as follows. We start with the trivial family containing only the set $\binom{\br{n}}{2} \times \br{M}$. As long as our family contains some set $A$ with
\[
\mathrm{Vol}(A) > \left(\frac{(1 + \varepsilon_0) M}{2}\right)^{\binom{n}{2}},
\]
where $\varepsilon_0 := 1/\sqrt{n} \ge 16/M$, we replace $A$ with the elements of the family $\mathcal{C}(A)$ defined above. We claim that the depth of the recursion is bounded by $t := C \log_2(M) \log(n)$, for some large constant $C$. Indeed, if a set $B$ reached the $t$-th level of the recursion, then
\[
\mathrm{Vol}(B) \le \left(\frac{\left(1+\varepsilon_t\right)M}{2}\right)^{\binom{n}{2}},
\]
where
\[
\varepsilon_t = \max\left\{ \left(1-\frac{1}{2^{22}\log_2M}\right)^t, \frac{16}{M} \right\} \le \varepsilon_0,
\]
a contradiction. It follows that
\[
|\mathcal{C}| \le \exp\left(3n^{3/2} \log(n^2M) \cdot t\right) \le \exp\left(Cn^{3/2}(\log n)^3\right).
\]
Since each space in $\mathcal{M}_n^M$ corresponds to an independent set of $\mathcal{H}_n^M$ and is thus described by one of the containers, we obtain
\[
|\mathcal{M}_n^M| \le \sum_{B \in \mathcal{C}} \mathrm{Vol}(B) \le \exp\left(Cn^{3/2}(\log n)^3+ n^{3/2}\right) \cdot \left(\frac{M}{2}\right)^{\binom{n}{2}}.
\]
Finally, this translates to the following upper bound on the volume:
\[
\mathrm{Vol}(\mathcal{M}_n) \le \left(\frac{2}{M}\right)^{\binom{n}{2}} \cdot |\mathcal{M}_n^M| \le \exp\left(Cn^{3/2}(\log n)^3\right),
\]
see~\eqref{eq:card_MnM-vol_Mn} in Section~\ref{sec:discrete-problem} below.
\subsection{The \texorpdfstring{K\H{o}v\'ari}{K\"ov\'ari}--S\'os--Tur\'an approach}
In this section, we shall show yet another approach to the volume estimate. The estimate it gives is
\begin{equation}
\label{eq:vol-Mn-KST}
\mathrm{Vol}(\mathcal{M}_n) \le \exp \left( \frac{C n^2 (\log\log n)^2}{\log n}\right),
\end{equation}
better than what we obtained using the exchangeability or the regularity lemma approaches, but not as good as what is proved by the entropy or the hypergraph container methods. Our argument bears similarities to the classical work of Erd\H{o}s, Kleitman, and Rothschild~\cite{ErKlRo76}, which estimates the number of graphs that do not contain a clique of a given size.
Given a positive integer $t$, we shall write $K_{t,t}$ for the complete bipartite graph with $t$ vertices on each side. The \emph{Tur\'an number} for $K_{t,t}$, denoted $\mathrm{ex}(n,K_{t,t})$, is the largest number of edges in an $n$-vertex graph that does not contain $K_{t,t}$ as a (not necessarily induced) subgraph. The following well-known upper bound on $\mathrm{ex}(n,K_{t,t})$ was obtained by K\H{o}v\'ari, S\'os, and Tur\'an \cite{KoSoTu54}, see also~\cite[Section~3]{FurSim13}.
\begin{thm}[K\H{o}v\'ari--S\'os--Tur\'an~\cite{KoSoTu54}]
\label{thm:KST}
For every $t \ge 2$,
\[
\mathrm{ex}(n,K_{t,t}) \le \frac{1}{2} \left((t-1)^{1/t} n^{2-1/t} + (t-1)n\right).
\]
\end{thm}
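For $t = 2$, a graph contains $K_{2,2}$ if and only if some two vertices have at least two common neighbours, which makes tiny instances of the theorem easy to check exhaustively. The following Python sketch (function names ours) computes $\mathrm{ex}(n, K_{2,2})$ by brute force:

```python
from itertools import combinations

def has_K22(adj, n):
    """K_{2,2} subgraph exists iff some two vertices share >= 2 common neighbours."""
    return any(len(adj[u] & adj[v]) >= 2 for u, v in combinations(range(n), 2))

def brute_ex_K22(n):
    """Turan number ex(n, K_{2,2}) by exhausting all graphs on n vertices."""
    pairs = list(combinations(range(n), 2))
    best = 0
    for mask in range(1 << len(pairs)):
        adj = {v: set() for v in range(n)}
        for b, (u, v) in enumerate(pairs):
            if mask >> b & 1:
                adj[u].add(v)
                adj[v].add(u)
        if not has_K22(adj, n):
            best = max(best, bin(mask).count("1"))
    return best
```

One finds $\mathrm{ex}(5, K_{2,2}) = 6$ (attained, e.g., by two triangles sharing a vertex), comfortably below the bound $\frac{1}{2}(5^{3/2} + 5) \approx 8.09$ given by the theorem.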
Fix integers $n$ and $t \ge 2$ and a real $\delta \in (0,1)$. For a $d \in \mathcal{M}_n$, let
\[
T(d) := \left\{\{i, j\} \in \binom{\br{n}}{2} : d_{ij} < 1-\delta \right\}
\]
and partition $\mathcal{M}_n$ into $\M_n^{t,\delta}$ and $\overline{\Mnd}$, where
\[
\M_n^{t,\delta} := \{d \in \mathcal{M}_n : T(d) \nsupseteq K_{t,t}\} \quad \text{and} \quad \overline{\Mnd} := \mathcal{M}_n \setminus \M_n^{t,\delta}.
\]
Since $|T(d)| \le \mathrm{ex}(n,K_{t,t})$ for every $d \in \M_n^{t,\delta}$, we have
\[
\begin{split}
\mathrm{Vol}(\M_n^{t,\delta}) & \le \binom{\binom{n}{2}}{\mathrm{ex}(n,K_{t,t})} \cdot 2^{\mathrm{ex}(n,K_{t,t})} \cdot (1+\delta)^{\binom{n}{2} - \mathrm{ex}(n,K_{t,t})} \\
& \le \exp\left(3\mathrm{ex}(n,K_{t,t}) \log n + \delta n^2\right).
\end{split}
\]
It follows from Theorem~\ref{thm:KST} and simple calculus that, if $n \ge t^2 \ge 4$,
\begin{equation}
\label{eq:KST-vol-Mn1}
\mathrm{Vol}(\M_n^{t,\delta}) \le \exp\left( 5n^{2-1/t}\log n + \delta n^2 \right).
\end{equation}
We now derive an upper bound on the volume of $\overline{\Mnd}$.
\begin{lem}
\label{lem:KST-vol-Mn2}
If $t \ge 6$, $n \ge 4t^2$, and $\delta \ge 3 \log(4t)/t$, then
\[
\mathrm{Vol}(\overline{\Mnd}) \le e^{-n} \cdot \mathrm{Vol}(\mathcal{M}_{n-2t}).
\]
\end{lem}
\begin{proof}
Suppose that $d \in \overline{\Mnd}$. By definition, we may find two disjoint $t$-element sets $I, J \subseteq \br{n}$ such that $d_{ij} < 1-\delta$ for every pair $(i,j) \in I \times J$. Fix any such pair $(I,J)$ and suppose that $k \in \br{n} \setminus (I \cup J)$. Let
\[
a_I = \min_{i \in I} d_{ik}, \quad b_I = \max_{i \in I} d_{ik},
\quad a_J = \min_{j \in J} d_{jk}, \quad b_J = \max_{j \in J} d_{jk}.
\]
Since all distances between $I$ and $J$ are shorter than $1- \delta$, both $b_J - a_I$ and $b_I - a_J$ must be smaller than $1-\delta$ and, consequently,
\[
(b_I - a_I)(b_J - a_J) \le \left(\frac{(b_I - a_I) + (b_J - a_J)}{2}\right)^2 < (1-\delta)^2.
\]
In other words, all distances between $k$ and $I$ and between $k$ and $J$ fall into intervals $A_I$ and $A_J$, respectively, where $|A_I| \cdot |A_J| < (1-\delta)^2$. In particular, if $W$ denotes the set of all $2t$-dimensional vectors $(d_{ik}')_{i \in I \cup J}$ which may be used to complete $(d_e)_{e \in \binom{I \cup J}{2}}$ to a metric space on $I \cup J \cup \{k\}$, then
\[
\mathrm{Vol}(W) \le t^4 \cdot 2^4 \cdot (1-\delta)^{2t-4},
\]
as there are at most $t^4$ choices for the $i,i' \in I$ and $j,j' \in J$ for which $a_I = d_{ik}$, $b_I = d_{i'k}$, $a_J = d_{jk}$, and $b_J = d_{j'k}$. By our assumption on $t$ and $\delta$,
\[
\mathrm{Vol}(W) \le \left(2te^{-\delta(t-2)/2}\right)^4 \le \left(2te^{-\delta t /3}\right)^4 \le 2^{-4}.
\]
We may now bound the volume of $\overline{\Mnd}$ as follows. First, the number of choices for $I$ and $J$ is at most $\binom{n}{t}^2$ and the volume of the distances between pairs in $I \cup J$ does not exceed $2^{\binom{2t}{2}}$. Next, bounding the volume of $(d_{ik})_{i \in I \cup J, k \notin I \cup J}$ as above and the volume of $(d_{ij})_{i,j \notin I \cup J}$ by $\mathrm{Vol}(\mathcal{M}_{n-2t})$, we obtain
\[
\begin{split}
\mathrm{Vol}(\overline{\Mnd}) & \le \binom{n}{t}^2 \cdot 2^{\binom{2t}{2}} \cdot \mathrm{Vol}(W)^{n-2t} \cdot \mathrm{Vol}(\mathcal{M}_{n-2t}) \\
& \le n^{2t} \cdot 2^{2t^2} \cdot 2^{-4n + 8t} \cdot \mathrm{Vol}(\mathcal{M}_{n-2t}),
\end{split}
\]
which, with our assumption on $n$ and $t$, implies the claimed bound.
\end{proof}
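The quantitative heart of the proof is the chain of bounds on $\mathrm{Vol}(W)$, which is tight at $\delta = 3\log(4t)/t$: there $2te^{-\delta t/3} = 1/2$ exactly. A quick numerical check of the chain, in the range $t \ge 12$ where $3\log(4t)/t < 1$, can be sketched as follows (the function names are ours):

```python
import math

def vol_W_bounds(t, delta):
    """The three successive bounds on Vol(W) appearing in the proof."""
    v0 = t ** 4 * 2 ** 4 * (1 - delta) ** (2 * t - 4)
    v1 = (2 * t * math.exp(-delta * (t - 2) / 2)) ** 4
    v2 = (2 * t * math.exp(-delta * t / 3)) ** 4
    return v0, v1, v2

def check(tmax=200):
    """Verify v0 <= v1 <= v2 <= 2^{-4} at the threshold delta = 3 log(4t) / t."""
    for t in range(12, tmax + 1):
        delta = 3 * math.log(4 * t) / t
        v0, v1, v2 = vol_W_bounds(t, delta)
        assert v0 <= v1 <= v2 <= 2.0 ** -4 + 1e-9
    return True
```

Here \texttt{check()} confirms $t^4 \cdot 2^4 \cdot (1-\delta)^{2t-4} \le \left(2te^{-\delta(t-2)/2}\right)^4 \le \left(2te^{-\delta t/3}\right)^4 \le 2^{-4}$ for $12 \le t \le 200$.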
One may now derive~\eqref{eq:vol-Mn-KST} by induction on $n$ using Lemma~\ref{lem:KST-vol-Mn2} and the upper bound on $\mathrm{Vol}(\M_n^{t,\delta})$ given by~\eqref{eq:KST-vol-Mn1}. In the inductive step, one may take $t = \log n / (2 \log \log n)$ and $\delta = 3 \log(4t) / t$, say. We leave the details to the reader.
\section{The shortest distance in the metric space}
\label{sec:distance}
In this section, we prove Theorem~\ref{thm:minimal_distance}, showing that, with high probability, the minimum distance in a metric space chosen uniformly from $\mathcal{M}_n$ is only polynomially shorter than one. In order to introduce several key ideas used in the proof of the theorem, we first sketch an argument yielding the weaker result that, with high probability, all distances are larger than $2^{-8}$. This argument will not need any fine estimates on the volume and it will yield an exponential bound on the probability of having a short distance. The first step is the following simple proposition.
\begin{prop}
\label{prop:exp-bound-alpha}
For every $\alpha \in (0,1/2]$,
\[
\mathrm{Vol}\big(\{d \in \mathcal{M}_n : \min_{i,j}d_{ij}\le \alpha\}\big) \le \binom{n}{2} (2\alpha)^{n-2} \cdot \mathrm{Vol}(\mathcal{M}_{n-1}).
\]
\end{prop}
\begin{proof}
By symmetry, it suffices to show that the volume of those $d \in \mathcal{M}_n$ for which $d_{n-1,n} \le \alpha$ is at most $(2\alpha)^{n-2} \cdot \mathrm{Vol}(\mathcal{M}_{n-1})$. Assume that $d_{n-1,n} \le \alpha$ and note that, for each $i \in \br{n-2}$, the distance $d_{in}$ must belong to the interval $[d_{i,n-1}-\alpha, d_{i,n-1}+\alpha]$. In other words, the volume of the possible values for $(d_{in})_{i=1}^{n-2}$, given all the other distances, is at most $(2\alpha)^{n-2}$. This gives the desired estimate.
\end{proof}
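For $n = 3$, the proposition gives $\mathrm{Vol}(\{d \in \mathcal{M}_3 : \min_{i,j} d_{ij} \le \alpha\}) \le 3 \cdot 2\alpha \cdot \mathrm{Vol}(\mathcal{M}_2) = 12\alpha$, while $\mathrm{Vol}(\mathcal{M}_3) = 4$ can be computed exactly. Both are easy to sanity-check by rejection sampling from the cube $[0,2]^3$; the following Python sketch (names and parameters ours) does so:

```python
import random

def mc_estimates(alpha=0.1, trials=200_000, seed=0):
    """Estimate Vol(M_3) and Vol({d in M_3 : min distance <= alpha}) by
    uniform sampling from the cube [0,2]^3, which has volume 8."""
    rng = random.Random(seed)
    metric = short = 0
    for _ in range(trials):
        a, b, c = (2 * rng.random() for _ in range(3))
        if a <= b + c and b <= a + c and c <= a + b:
            metric += 1
            if min(a, b, c) <= alpha:
                short += 1
    cube = 8.0
    return cube * metric / trials, cube * short / trials
```

With $2 \times 10^5$ samples, the first estimate concentrates near the exact value $4$ and the second is far below the bound $12\alpha = 1.2$, consistent with the proposition.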
Suppose that $d$ is sampled uniformly from $\mathcal{M}_n$. We could already conclude that $\P(\min_{ij} d_{ij} \le \alpha)$ is exponentially small in $n$, for every constant $\alpha < 1/2$, if we knew that $\mathrm{Vol}(\mathcal{M}_{n-1}) \le e^{o(n)} \cdot \mathrm{Vol}(\mathcal{M}_n)$. Such an estimate does indeed hold, as will be shown in Proposition~\ref{prop:volume_comparison}. Since the proof of Proposition~\ref{prop:volume_comparison} is rather involved (even though it is quite natural to conjecture that $n \mapsto \mathrm{Vol}(\mathcal{M}_n)$ is increasing, see Section~\ref{sec:further_questions}) and it crucially relies on the volume estimate provided by Theorem~\ref{thm:volume_estimate}, let us sketch here a self-contained argument showing that
\begin{equation}
\label{eq:vol-Mn-comparison}
\mathrm{Vol}(\mathcal{M}_n) \ge 2^{-6n} \cdot \mathrm{Vol}(\mathcal{M}_{n-1}),
\end{equation}
which is enough to deduce that, for some constants $c, C > 0$,
\begin{equation}
\label{eq:exponential_probability_for_very_small_distances}
\P(\min_{i,j}d_{ij}\le 2^{-8})\le Ce^{-cn}.
\end{equation}
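To see how~\eqref{eq:vol-Mn-comparison} yields~\eqref{eq:exponential_probability_for_very_small_distances}, one may combine it with Proposition~\ref{prop:exp-bound-alpha}, applied with $\alpha = 2^{-8}$; the following routine computation is spelled out for the reader's convenience:
\begin{equation*}
\P\Big(\min_{i,j}d_{ij}\le 2^{-8}\Big) \le \binom{n}{2} \left(2^{-7}\right)^{n-2} \frac{\mathrm{Vol}(\mathcal{M}_{n-1})}{\mathrm{Vol}(\mathcal{M}_n)} \le n^2 \cdot 2^{-7(n-2)} \cdot 2^{6n} = 2^{14} n^2 \cdot 2^{-n},
\end{equation*}
which is at most $Ce^{-cn}$ with, say, $c = (\log 2)/2$ and a suitable absolute constant $C$.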
\begin{proof}[Sketch of a proof of~\eqref{eq:vol-Mn-comparison}]
Define
\[
F(d) := \min_{A \subseteq \br{n}} \prod_{i \in A} \left(2\min_{j\in\br{n}\setminus\{i\}}d_{ij}\right)
\]
(so that $F(d) = 1$ if $d_{ij} \ge 1/2$ for all $\{i, j\}$). We claim that, for all sufficiently large $n$ and all $F \in (0,1)$,
\begin{equation}
\label{eq:F-lemma-basic}
\mathrm{Vol}\big(\{d\in\mathcal{M}_n: F(d) \le F\}\big) \le F^{n/10} \cdot 2^{\binom{n}{2}}.
\end{equation}
Since a stronger estimate will be proved in Lemma~\ref{lem:F-lemma}, we only sketch the main idea here. The proof of~\eqref{eq:F-lemma-basic} is similar in spirit to the calculation done in the proof of Proposition~\ref{prop:exp-bound-alpha}. It relies on the key observation that, if $d_{ij}$ is small, the $n-2$ pairs of distances $(d_{ik}, d_{jk})$ are constrained to a strip in $[0,2]^{2}$ of width $2d_{ij}$. In particular, if $F(d)$ is small, then this significantly constrains all distances. For details, we refer the reader to the proof of Lemma~\ref{lem:F-lemma}.
Consider the set
\[
\mathcal{M}_n^1:=\{d\in\mathcal{M}_n:F(d)>2^{-5n}\}.
\]
It follows from~\eqref{eq:F-lemma-basic} that
\[
\mathrm{Vol}(\mathcal{M}_n\setminus \mathcal{M}_n^1)\le \frac{1}{2}\le \frac{1}{2}\mathrm{Vol}(\mathcal{M}_n),
\]
so that $\mathrm{Vol}(\mathcal{M}_n^1)\ge \frac 12 \mathrm{Vol}(\mathcal{M}_n)$. We claim that the volume of possible extensions of any fixed $d\in\mathcal{M}_n^1$ to a metric space in $\mathcal{M}_{n+1}$ is reasonably large. Indeed, denote $I(\rho):=[3/2-\rho/2,\,3/2+\rho/2]$ and extend $d$ to $[0,2]^{\binom{\br{n+1}}{2}}$ by requiring that, for all $i \in \br{n}$,
\[
d_{i,n+1}\in I\left(\min\left\{\min_{j\in \br{n}\setminus\{i\}}d_{ij},1\right\}\right).
\]
It is straightforward to check that any such extension is a metric space and, further, that the volume of the set of all such extensions is at least $F(d)/2^n$. (A version of this argument is presented in the proof of Proposition~\ref{prop:volume_comparison}.) Hence,
\[
\mathrm{Vol}(\mathcal{M}_{n+1})\ge 2^{-6n} \cdot \mathrm{Vol}(\mathcal{M}_n^1)\ge 2^{-6n-1} \cdot \mathrm{Vol}(\mathcal{M}_n).\qedhere
\]
\end{proof}
Let us point out here that, regardless of other losses in the argument above, using Proposition~\ref{prop:exp-bound-alpha} or examining the quantity $F$ gives absolutely no information about distances between $\frac{1}{2}$ and $1$; for these, more involved analysis is required.
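As a purely illustrative companion to this discussion (not used anywhere in the proofs), uniform samples from $\mathcal{M}_n$ for very small $n$ can be generated by rejection sampling from the cube $[0,2]^{\binom{n}{2}}$. The sketch below, with function names of our own choosing, does exactly this; it is only practical for small $n$, as the acceptance probability decays rapidly with $n$.

```python
import itertools
import random

def is_metric(d, n):
    """Check all triangle inequalities for a symmetric distance matrix d."""
    return all(
        d[i][j] <= d[i][k] + d[k][j]
        for i, j, k in itertools.permutations(range(n), 3)
    )

def sample_metric(n, rng):
    """Rejection sampling: draw uniformly from [0,2]^{n choose 2} and
    accept only if the point lies in the metric polytope M_n."""
    while True:
        d = [[0.0] * n for _ in range(n)]
        for i, j in itertools.combinations(range(n), 2):
            d[i][j] = d[j][i] = rng.uniform(0, 2)
        if is_metric(d, n):
            return d

rng = random.Random(0)
n, trials = 4, 200
samples = [sample_metric(n, rng) for _ in range(trials)]
min_dists = [
    min(d[i][j] for i, j in itertools.combinations(range(n), 2))
    for d in samples
]
# For such tiny n the polytope is far from its asymptotic behaviour, but
# very short minimum distances are already visibly suppressed.
frac_below_half = sum(m < 0.5 for m in min_dists) / trials
```

For $n$ beyond a few dozen points the rejection rate makes this approach hopeless, which is one reason the volume estimates of this paper are needed.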
\subsection{Proof of Theorem~\ref{thm:minimal_distance}}
\label{sec:minimum-distance}
Giving up optimising various estimates in favour of simplifying the presentation (and because we believe that further ideas would be needed to obtain the optimal value of $c$), we shall prove the theorem with
\[
c = 1/30.
\]
The starting point of our proof is Proposition~\ref{prop:distance-lower-tail}, which states that there exists a constant $C$ such that, when $d$ is a uniformly sampled metric space from $\mathcal{M}_n$,
\begin{equation}\label{eq:unlikely_short_edge}
\P(d_{ij}<1) \le Cn^{-1/2} \quad \text{for every $\{i,j\}\in\binom{\br{n}}{2}$}.
\end{equation}
This allows us to conclude that a typical metric space sampled from $\mathcal{M}_n$ has relatively few distances shorter than one. More precisely, letting
\[
\mathcal{G}_n := \big\{d\in \mathcal{M}_n: \text{$d_{ij} < 1$ for at most $n^{5/3}$ pairs $\{i,j\}$}\big\},
\]
we have
\begin{equation}
\label{eq:used_to_be_lem}
\mathrm{Vol}(\mathcal{G}_n) > \left(1 - Cn^{-1/6}\right)\mathrm{Vol}(\mathcal{M}_n).
\end{equation}
To see \eqref{eq:used_to_be_lem}, let $d$ be a uniformly sampled metric space in $\mathcal{M}_n$ and let $X$ be the number of pairs $\{i,j\}$ such that $d_{ij}<1$. By Markov's inequality and~\eqref{eq:unlikely_short_edge}, we have
\[
\P(X > n^{5/3}) < \frac{\mathbb{E}[X]}{n^{5/3}} \le \frac{\binom{n}{2} \cdot Cn^{-1/2}}{n^{5/3}} \le Cn^{-1/6},
\]
as needed. In particular, we may restrict our attention to spaces in $\mathcal{G}_n$. Define
\[
\mathcal{B}_n :=\{d\in\mathcal{G}_n : \min_{i,j} d_{ij} < 1-n^{-c}\}.
\]
Our argument will comprise two independent parts. First, we will show that the volume of $\mathcal{B}_n$ is extremely small when compared to the volume of $\mathcal{M}_{n-2}$.
\begin{prop}
\label{prop:volume-Bn}
For all sufficiently large $n$, we have
\[
\mathrm{Vol}(\mathcal{B}_n) \le \exp\left(-\frac{n^{1-2c}}{5}\right) \cdot \mathrm{Vol}(\mathcal{M}_{n-2}).
\]
\end{prop}
This bound would yield the desired result if we knew that $\mathrm{Vol}(\mathcal{M}_{n-2})$ is not much larger than $\mathrm{Vol}(\mathcal{M}_n)$. It seems plausible that, in fact,
\begin{equation}
\label{eq:growing_volume}
\mathrm{Vol}(\mathcal{M}_n)\ge\mathrm{Vol}(\mathcal{M}_{n-2})
\end{equation}
holds for all $n$. However, we have been unable to establish this, see Section~\ref{sec:discussion_and_open_questions}. We should point out that the volume estimates of Theorem~\ref{thm:volume_estimate} imply that \eqref{eq:growing_volume} holds for an infinite sequence of $n$ and thus Proposition~\ref{prop:volume-Bn} is sufficient to yield the assertion of Theorem~\ref{thm:minimal_distance} for that sequence. In order to establish the theorem for all sufficiently large $n$, we shall prove the following weaker bound, which still suffices for our purposes.
\begin{prop}\label{prop:volume_comparison}
For all sufficiently large $n$, we have
\[
\mathrm{Vol}(\mathcal{M}_{n+1}) \ge \exp\left(-n^{1-3c} \log(n) \right) \cdot \mathrm{Vol}(\mathcal{M}_n).
\]
\end{prop}
We postpone the proofs of Propositions~\ref{prop:volume-Bn} and~\ref{prop:volume_comparison} to the next two sections and finish the current section with a short derivation of Theorem~\ref{thm:minimal_distance}.
\begin{proof}[Proof of Theorem~\ref{thm:minimal_distance}]
Recalling the definitions of $\mathcal{G}_n$ and $\mathcal{B}_n$, we have
\[
\P(\min_{i,j} d_{ij}<1-n^{-c}) \le \frac{\mathrm{Vol}(\mathcal{M}_n\setminus \mathcal{G}_n)}{\mathrm{Vol}(\mathcal{M}_n)}+\frac{\mathrm{Vol}(\mathcal{B}_n)}{\mathrm{Vol}(\mathcal{M}_n)}.
\]
Estimate \eqref{eq:used_to_be_lem} states that the first term on the right-hand side is at most $C n^{-1/6}$, whereas Propositions~\ref{prop:volume-Bn} and~\ref{prop:volume_comparison} give
\[
\begin{split}
\frac{\mathrm{Vol}(\mathcal{B}_n)}{\mathrm{Vol}(\mathcal{M}_n)} & \le \exp\left(-\frac{n^{1 - 2c}}{5}\right) \cdot \frac{\mathrm{Vol}(\mathcal{M}_{n-2})}{\mathrm{Vol}(\mathcal{M}_n)} \\
& \le \exp\left(-\frac{n^{1-2c}}{5} + 2n^{1-3c}\log(n)\right) \le \exp\left(-\frac{n^{1-2c}}{6}\right),
\end{split}
\]
provided that $n$ is sufficiently large.
\end{proof}
\subsection{Bounding the volume of spaces with a short distance}
\label{sec:bound-volume-spac}
In this section, we prove Proposition~\ref{prop:volume-Bn}. We shall split the set $\mathcal{B}_n$ into two parts, depending on whether or not there is a point $i \in \br{n}$ at distance significantly shorter than one from many other points, and use different arguments to estimate the volume of each of these parts. More precisely, for a metric space $d\in\mathcal{M}_n$ and an $i \in \br{n}$, we define the set of close neighbours of $i$ by
\begin{equation*}
S_i(d) :=\left\{j\in \br{n}\setminus\{i\} : \dist{i}{j} < 1 - \frac{n^{-2c}}{4}\right\}
\end{equation*}
and let $m := \lfloor n^{1-3c} \rfloor$.
Our first lemma uses Theorem~\ref{thm:volume_estimate} to provide a very strong upper bound on the volume of all spaces $d \in \mathcal{G}_n$ (and not only $d \in \mathcal{B}_n$) for which $|S_i(d)| > m$ for some $i \in \br{n}$.
\begin{lem}
\label{lem:Si-large}
For all sufficiently large $n$, we have
\[
\mathrm{Vol}\big(\{d\in\mathcal{G}_n : \max_i |S_i(d)| > m\}\big) \le \exp\left(-\frac{n^{2-8c}}{16}\right).
\]
\end{lem}
Our second lemma bounds the volume of those $d \in \mathcal{B}_n$ for which $|S_i(d)| \le m$ for all $i \in \br{n}$ in terms of the volume of $\mathcal{M}_{n-2}$.
\begin{lem}
\label{lem:Si-small}
For all sufficiently large $n$, we have
\[
\mathrm{Vol}\big(\{d\in\mathcal{B}_n : \max_i |S_i(d)| \le m\}\big) \le \exp\left(-\frac{n^{1 -2c}}{4}\right) \cdot \mathrm{Vol}(\mathcal{M}_{n-2}).
\]
\end{lem}
\begin{proof}[Proof of Proposition~\ref{prop:volume-Bn}]
Using the estimates of the two lemmas, we may conclude that, for all sufficiently large $n$,
\[
\begin{split}
\mathrm{Vol}(\mathcal{B}_n) &\le \exp\left(-\frac{n^{1 - 2c}}{4}\right) \cdot \mathrm{Vol}(\mathcal{M}_{n-2}) + \exp\left(-\frac{n^{2-8c}}{16}\right) \\
& \le \exp\left(-\frac{n^{1-2c}}{5}\right) \cdot \mathrm{Vol}(\mathcal{M}_{n-2}),
\end{split}
\]
as $c < 1/6$ and $\mathrm{Vol}(\mathcal{M}_{n-2}) \ge 1$.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem:Si-large}]
For $i \in \br{n}$, $S \subseteq \br{n}$ with $|S| = m$, and $T\subseteq\binom{\br{n}}{2}$ with $|T| = \lfloor n^{5/3}\rfloor$, we let
\begin{equation*}
\mathcal{G}_n^{i,S,T}:=\big\{d\in\mathcal{G}_n : S_i(d) \supseteq S\text{ and } d_{jk} \ge 1\text{ if }\{j,k\}\notin T\big\}.
\end{equation*}
Note that if $d\in\mathcal{G}_n^{i,S,T}$ and $\{j,k\}\in\binom{S}{2}$, then necessarily $d_{jk} \le 2(1 - n^{-2c}/4)$, as follows from the triangle inequality $d_{jk} \le d_{ij} + d_{ik}$. Thus, $\mathcal{G}_n^{i,S,T}$ is contained in the product set
\begin{multline*}
\left\{\left(d_{jk} \right)_{\{j,k\}\in\binom{S}{2}}\in\left(1 -
\frac{n^{-2c}}{4}\right)\cdot\mathcal{M}_{|S|}\right\}\\
\times\prod_{\{j,k\}\in
T\setminus\binom{S}{2}}\{d_{jk}\le 2\}\prod_{\{j,k\}\in
\binom{\br{n}}{2}\setminus\left(T\cup\binom{S}{2}\right)}\{1\le d_{jk} \le
2\}.
\end{multline*}
It follows that
\begin{equation*}
\mathrm{Vol}(\mathcal{G}_n^{i,S,T})\le \left(1 - \frac{n^{-2c}}{4}\right)^{\binom{|S|}{2}}\mathrm{Vol}(\mathcal{M}_{|S|})\cdot 2^{|T|}\cdot 1.
\end{equation*}
Estimating $\mathrm{Vol}(\mathcal{M}_{|S|})$ using Theorem~\ref{thm:volume_estimate} gives
\begin{equation*}
\mathrm{Vol}(\mathcal{G}_n^{i,S,T})\le \exp\left(-\frac{n^{-2c}}{4}\binom{m}{2}+
C_1 m^{3/2} + n^{5/3}\right)\le
\exp\left(-\frac{n^{2-8c}}{10}\right),
\end{equation*}
where we have used that $m = \lfloor n^{1-3c} \rfloor$, that $c$ is sufficiently small (so that $2-8c > 5/3$) and that $n$ is sufficiently large. Summing over all possible choices for $i$, $S$, and $T$ yields
\begin{multline*}
\mathrm{Vol}\big(\{d\in\mathcal{G}_n : \max_i |S_i(d)| > m\}\big)\le n\binom{n}{m}\binom{n^2}{\lfloor
n^{5/3}\rfloor}\exp\left(-\frac{n^{2-8c}}{10}\right) \\
\le \exp\left((m+1) \log(n) + n^{5/3} \log(n^2)-\frac{n^{2-8c}}{10}\right)\le
\exp\left(-\frac{n^{2-8c}}{16}\right),
\end{multline*}
where we again used the assumption that $2-8c > 5/3$.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem:Si-small}]
For $S \subseteq \br{n}$ with $|S| = 2m$, we let
\begin{equation*}
\mathcal{B}_n^S := \big\{d \in \mathcal{G}_n : S_1(d)\cup S_2(d)\subseteq S \text{ and } d_{12} < 1-n^{-c}\big\}
\end{equation*}
and note that, by symmetry,
\begin{equation}
\label{eq:Bn-sum-BnS}
\mathrm{Vol}\big(\{d \in \mathcal{B}_n : \max_i |S_i(d)| \le m\}\big) \le \binom{n}{2} \sum_{S \subseteq \br{n}, |S|=2m} \mathrm{Vol}(\mathcal{B}_n^S).
\end{equation}
The crucial observation is that if $d_{12}<1-n^{-c}$, then $1, 2 \in S$ (indeed, $d_{12} < 1 - n^{-2c}/4$, so that $2 \in S_1(d)$ and $1 \in S_2(d)$) and, for any $j\not\in S$, we have $(d_{1j},d_{2j})\in W$, where
\begin{figure}
\begin{centering}
\input{W.pspdftex}
\end{centering}
\caption{$W$ inside $[1-n^{-2c}/4,2]^2$.\label{fig:W}}
\end{figure}
\[
W := \left\{(a,b) : 1 - \frac{n^{-2c}}{4}\le a,b\le 2,\, |a-b|\le 1-n^{-c}\right\}.
\]
Since
\begin{equation*}
\mathrm{Vol}(W) = \left(1 + \frac{n^{-2c}}{4}\right)^2 -
\left(\frac{n^{-2c}}{4} + n^{-c}\right)^2 = 1 -
\frac{n^{-2c}}{2} - \frac{n^{-3c}}{2} \le 1 -
\frac{n^{-2c}}{2},
\end{equation*}
bounding the volume of $(d_{ij}:i\in\{1,2\},j\in S)$ crudely by $2^{2|S|}$, we get
\begin{align*}
\mathrm{Vol}(\mathcal{B}_n^S) & \le 2^{2|S|} \cdot \mathrm{Vol}(W)^{n-2-|S|} \cdot \mathrm{Vol}(\mathcal{M}_{n-2}) \\
& \le 16^m \cdot \exp\left(-\frac{1}{3}n^{1 - 2c}\right) \cdot \mathrm{Vol}(\mathcal{M}_{n-2}),
\end{align*}
provided that $n$ is sufficiently large. Substituting this bound into~\eqref{eq:Bn-sum-BnS} gives the result, since
\[
\binom{n}{2} \binom{n}{2m} \le \exp\big((2m+2)\log(n)\big) \le 16^{-m} \cdot \exp\left(\frac{n^{1-2c}}{12}\right).\qedhere
\]
\end{proof}
\subsection{Comparing volumes of metric polytopes}
This section is devoted to the proof of Proposition~\ref{prop:volume_comparison}. We show that a large portion of the spaces in $\mathcal{M}_n$ admit a
significant volume of extensions to spaces in $\mathcal{M}_{n+1}$. To this end, we study certain typical properties of metric spaces in $\mathcal{G}_n$.
The first step is establishing that, in a typical space in $\mathcal{G}_n$, there are not too many vertices that are incident to a distance that is significantly
shorter than $\frac{1}{2}$. Define, for a set $A\subseteq \br{n}$ and a space $d\in \mathcal{M}_n$,
\begin{equation}
\label{eq:F_A_def}
F_A(d):=\prod_{i\in A} \left(2\min_{j\in\br{n}\setminus\{i\}}\dist{i}{j}\right).
\end{equation}
(In particular, $F_\emptyset(d) = 1$.)
\begin{lem}
\label{lem:F-lemma}
If $n$ is sufficiently large, then for any $F \in (0,1)$, we have
\begin{equation*}
\mathrm{Vol}\big(\{d \in \mathcal{G}_n : \min_{A \subseteq \br{n}} F_A(d) \le F \}\big)\le F^{n/10} \cdot \exp\left(n^{5/3}\log(n)\right).
\end{equation*}
\end{lem}
\begin{proof}
For a metric space $d\in\mathcal{M}_n$ and a set $B\subsetneq\br{n}$, define
\[
F_B^*(d):=\prod_{i\in B} \left(2\min_{j \in \br{n} \setminus B} d_{ij} \right).
\]
The difference between $F_B$ and $F_B^*$ is that, in the definition of $F_B$, the minimum is taken over all $j \ne i$, while, in the definition of $F_B^*$, it is taken only over $j$ outside of $B$. For each $i \in B$, let $j_i^B(d)$ denote an (arbitrary) index achieving the latter minimum, that is, $j_i^B(d)$ is an arbitrary $j\in \br{n}\setminus B$ for which $\dist{i}{j} = \min_{k\in\br{n}\setminus B}\dist{i}{k}$. We write $j_i(d) := j_i^{\{i\}}(d)$ for short.
Suppose that $d \in \mathcal{G}_n$ and that $F_A(d)\le F$ for some $A \subseteq \br{n}$. We first show that there exists a subset $B\subseteq A$
such that
\begin{equation}\label{eq:good_B_property}
\text{$|B|\le \frac{n}{2}$\quad and\quad$F_B^*(d)\le F^{1/4}$}.
\end{equation}
To see this, let $R$ be a uniformly chosen random subset of $\br{n}$ with $\lfloor n/2 \rfloor$ elements, let
\[
B = \{i \in A \cap R : j_i(d) \notin R\},
\]
and note that
\[
\mathbb{E}\left[\log\big(F_B^*(d)\big)\right] = \sum_{i \in A} \P(i \in B) \cdot \log\big(2d_{i,j_i(d)}\big) = p \cdot \log\big(F_A(d)\big),
\]
where
\[
p = \frac{\lfloor n/2 \rfloor \lceil n/2 \rceil}{n(n-1)} \ge \frac{1}{4}.
\]
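For completeness, note that $\P(i \in B) = \P(i \in R) \cdot \P\big(j_i(d) \notin R \mid i \in R\big) = \frac{\lfloor n/2 \rfloor}{n} \cdot \frac{\lceil n/2 \rceil}{n-1}$ for each $i \in A$, and that the final bound $p \ge \frac14$ reduces to $\lfloor n/2 \rfloor \lceil n/2 \rceil \ge \frac{n(n-1)}{4}$, which can be checked in the two parity cases:
\begin{equation*}
\left\lfloor \frac{n}{2} \right\rfloor \left\lceil \frac{n}{2} \right\rceil =
\begin{cases}
\frac{n^2}{4} & \text{if $n$ is even,}\\
\frac{n^2-1}{4} & \text{if $n$ is odd,}
\end{cases}
\end{equation*}
and both quantities are at least $\frac{n(n-1)}{4}$.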
Since $F_A(d) \le F \le 1$, there exists a choice of $R$ for which the set $B$ satisfies $F_B^*(d)\le F_{A}(d)^{1/4} \le F^{1/4}$.
For a set $B \subseteq \br{n}$ with at most $n/2$ elements, a function $J \colon \br{n} \to \br{n}$, and a set $T \subseteq \binom{\br{n}}{2}$ with $|T| = \lfloor n^{5/3} \rfloor$, define
\begin{multline*}
\mathcal{G}_n^{B,T,J} := \Big\{d \in \mathcal{M}_n : F_B^*(d)\le F^{1/4}, j_i^B(d) = J(i) \text{ for all $i\in B$}, \\
\text{ and } \dist{i}{j}\ge 1\text{ if }\{i,j\}\notin T\Big\}.
\end{multline*}
We may construct each space in $\mathcal{G}_n^{B,T,J}$ as follows. We first choose all the distances $d_{ij}$ with $\{i, j\} \in \binom{\br{n} \setminus B}{2} \cup \binom{B}{2}$ and the $|B|$ distances $d_{i,J(i)}$ with $i \in B$. Since $d_{ij} \in [1,2]$ when $\{i,j\} \notin T$ and $d_{ij} \in [0,2]$ otherwise, the volume of all such choices is at most $2^{|T|}$. Now, for every $i\in B$ and $k\notin B\cup\{J(i)\}$, the distance $d_{ik}$ must satisfy $|d_{ik}-d_{J(i),k}|\le d_{i,J(i)}$. As a result, given all the other distances, the volume of the set of valid choices for all such $d_{ik}$ is not more than
\begin{equation*}
\prod_{i\in B} \prod_{k \notin B\cup\{J(i)\}} 2\dist{i}{J(i)} = F_B^*(d)^{n - |B| - 1} \le F^{(n-|B|-1)/4}.
\end{equation*}
We thus get
\begin{equation*}
\mathrm{Vol}(\mathcal{G}_n^{B,T,J}) \le 2^{|T|} \cdot F^{(n-|B|-1)/4} \le 2^{n^{5/3}} F^{n/10},
\end{equation*}
for $n$ sufficiently large. Summing over all possible choices for $B$, $T$, and $J$, we have
\begin{align*}
\mathrm{Vol}\big(\{d \in \mathcal{G}_n : \min_{A \subseteq \br{n}} F_A(d) \le F\}\big) & \le 2^n n^n \binom{n^2}{\lfloor n^{5/3} \rfloor} \cdot 2^{n^{5/3}}F^{n/10} \\
& \le F^{n/10} \cdot \exp\big(n^{5/3}\log (n)\big),
\end{align*}
as claimed.
\end{proof}
The second step in the proof of Proposition~\ref{prop:volume_comparison} is showing that, in a typical metric space in $\mathcal{G}_n$, distances significantly shorter than one do not form large matchings. To this end, for a constant $\rho > 0$ and $d \in \mathcal{M}_n$, we define
\begin{equation}
\label{eq:T_rho_d_def}
T^\rho(d) := \left\{\{i,j\} \in \binom{\br{n}}{2} : d_{ij} < 1 - n^{-\rho} \right\}.
\end{equation}
\begin{lem}
\label{lem:short-matching}
If $\mu$ and $\rho$ are positive constants satisfying
\begin{equation}\label{eq:mu_rho_cond}
\mu + 2\rho < \frac{1}{3},
\end{equation}
then, for all sufficiently large $n$,
\begin{multline*}
\mathrm{Vol}\big(\{d\in\mathcal{G}_n : \text{$T^\rho(d)$ contains a matching}
\text{ of size at least $n^{1-\mu}$}\}\big) \\
\le \exp\left(-\frac{n^{2-2\rho-\mu}}{4}\right).
\end{multline*}
\end{lem}
\begin{proof}
Let $\mu$ and $\rho$ be positive constants satisfying~\eqref{eq:mu_rho_cond}. For disjoint $M, T \subseteq \binom{\br{n}}{2}$ such that $M$ is a matching with $|M| = \lceil n^{1-\mu} \rceil$ and $|T| = \lfloor n^{5/3}\rfloor$, let
\[
\mathcal{G}_n^{M,T} := \left\{d \in \mathcal{M}_n : T^\rho(d) \supseteq M \text{ and } \dist{i}{j}\ge 1\text{ if }\{i,j\}\notin T\right\}.
\]
Denote by $V(M)$ the set of $2|M|$ endpoints of edges of $M$ and let $\Lambda^{M,T}$ be the set of all triangles that contain an edge of $M$ and two edges that are not in $T$ and whose common endpoint is not in $V(M)$, that is,
\[
\Lambda^{M,T} := \bigg\{(\{i,j\}, k) \in M \times \br{n} : \{i,k\}, \{j,k\} \not\in T, k \not\in V(M) \bigg\}.
\]
Observe that every edge in $T$ can `prevent' no more than one triangle from belonging to $\Lambda^{M,T}$, since $M$ is a matching and since $k$ is not allowed to be in $V(M)$. Hence,
\begin{equation}
\label{eq:LMT-size}
|\Lambda^{M,T}| \ge |M|(n-2|M|) - |T| \ge \frac{n^{2-\mu}}{2},
\end{equation}
as $2-\mu > 5/3$, by~\eqref{eq:mu_rho_cond}, and $n$ is sufficiently large.
As in the proof of Lemma~\ref{lem:Si-small}, the crucial observation is that, if $(\{i, j\}, k) \in \Lambda^{M,T}$, then $(d_{ik},d_{jk})\in W'$, where
\begin{equation*}
W' := \left\{(a,b) : 1\le a,b \le 2,\, |a-b|\le 1-n^{-\rho}\right\}.
\end{equation*}
Consequently, $\mathcal{G}_n^{M,T}$ is contained in the following product set:
\begin{equation*}
\prod_{\{i,j\}\in T}\{\dist{i}{j}\le 2\}
\prod_{(\{i,j\},k) \in \Lambda^{M,T}} \{(\dist{i}{k}, \dist{j}{k})\in W'\}
\prod_{\textrm{remaining }\{i,j\}}\{1\le \dist{i}{j}\le 2\}.
\end{equation*}
Since
\begin{equation*}
\mathrm{Vol}(W') = 1 - n^{-2\rho},
\end{equation*}
we conclude, using~\eqref{eq:LMT-size}, that
\begin{align*}
\mathrm{Vol}(\mathcal{G}_n^{M,T}) & \le 2^{|T|}\cdot\mathrm{Vol}(W')^{|\Lambda^{M,T}|} \le 2^{n^{5/3}} \cdot \left(1 - n^{-2\rho}\right)^{\frac12 n^{2-\mu}} \\
& \le \exp\left(- n^{2-\mu-2\rho}/3 \right),
\end{align*}
where in the last inequality we used \eqref{eq:mu_rho_cond}.
Summing over all possible choices for $M$ and $T$, we have
\begin{multline*}
\mathrm{Vol}\big(\{d\in\mathcal{G}_n : \text{$T^\rho(d)$ contains a matching of size at least $n^{1-\mu}$}\}\big) \\
\le\binom{n^2}{\lceil n^{1-\mu} \rceil} \binom{n^2}{\lfloor n^{5/3} \rfloor} \exp\left(- n^{2-\mu-2\rho}/3 \right),
\end{multline*}
from which the lemma follows, since the two binomial coefficients contribute at most $\exp\left(4n^{5/3}\log (n)\right)$ while $2-\mu-2\rho > 5/3$, again by~\eqref{eq:mu_rho_cond}.
\end{proof}
The two lemmas enable us to compare the volumes of $\mathcal{M}_{n}$ and $\mathcal{M}_{n+1}$.
\begin{proof}[Proof of Proposition~\ref{prop:volume_comparison}]
Recall the definition of $T^\rho(d)$ from~\eqref{eq:T_rho_d_def} and the definition of $F_A(d)$ from~\eqref{eq:F_A_def}. Recall also that $c = 1/30$ and let
\begin{equation}\label{eq:varphi_def}
\varphi:=6c, \qquad \rho := 3c, \quad \text{and} \quad \mu := 3c.
\end{equation}
Define
\begin{align*}
\mathcal{M}_n^1&:=\left\{d \in \mathcal{M}_n : \min_{A\subseteq \br{n}} F_A(d) > \exp(-n^{1-\varphi}) \right\},\\
\mathcal{M}_n^2&:=\left\{d \in \mathcal{M}_n : \text{$T^\rho(d)$ contains no matching of size at least
$n^{1-\mu}$}\right\}
\end{align*}
and let
\[
\mathcal{M}_n^*:=\mathcal{G}_n \cap \mathcal{M}_n^1 \cap \mathcal{M}_n^2.
\]
Since $2 - \varphi > 5/3$ and $\mu + 2\rho = 9c < 1/3$, we may use estimate~\eqref{eq:used_to_be_lem}, Lemma~\ref{lem:F-lemma}, Lemma~\ref{lem:short-matching}, and the estimate $\mathrm{Vol}(\mathcal{M}_n) \ge 1$ to conclude that, for sufficiently large $n$,
\begin{equation}\label{eq:M_n_*_volume}
\mathrm{Vol}(\mathcal{M}_n^*)\ge \frac{1}{2}\mathrm{Vol}(\mathcal{M}_n).
\end{equation}
For $d\in\mathcal{M}_n$ define
\begin{equation*}
Q(d):=\left\{i\in \br{n} : \min_{j\in\br{n}\setminus\{i\}}
\dist{i}{j}<\frac{1}{2} - \frac{n^{-\rho}}{2}\right\}
\end{equation*}
and let
\begin{equation*}
V(d):=\left\{\text{the vertices of a largest matching in $T^\rho(d)$}\right\},
\end{equation*}
where, if there are several largest matchings, we let $V(d)$ be the vertex set of an arbitrary one of them. For the sake of brevity, from now on we shall write $Q$ and $V$ in place of $Q(d)$ and $V(d)$.
Let $d \in\mathcal{M}_n^*$. We aim to define a set of metric spaces in $\mathcal{M}_{n+1}$ which extend $d$. More precisely, we shall find a voluminous family of metric spaces $d' \in\mathcal{M}_{n+1}$ which satisfy
\begin{equation}\label{eq:d_n_plus_1_prop}
d_{ij}' = d_{ij},\quad \{i,j\}\in
\binom{\br{n}}{2}.
\end{equation}
To this end, define, for $\delta > 0$,
\begin{equation*}
I(\delta):=\left[\frac{3}{2} - \frac{\delta}{2}, \frac{3}{2} +
\frac{\delta}{2}\right]
\end{equation*}
and the following quantities
\begin{equation*}
\delta_1(i) :=
\begin{cases}
n^{-\rho},& \text{if $i\in V$}, \\
1, & \text{otherwise},
\end{cases}
\qquad
\delta_2(i) :=
\begin{cases}
\min_{j\ne i}d_{ij}, & \text{if $i\in Q$}, \\
1, & \text{otherwise}.
\end{cases}
\end{equation*}
\begin{claim}
\label{claim:d-extensions}
Every $d' \in [0,2]^{\binom{\br{n+1}}{2}}$ satisfying \eqref{eq:d_n_plus_1_prop} and having
\begin{equation*}
d_{i,n+1}' \in I\left(\min\big\{\delta_1(i),\delta_2(i), 1-2n^{-\rho}\big\}\right)
\end{equation*}
belongs to $\mathcal{M}_{n+1}$.
\end{claim}
\begin{proof}
Since $d$ is a metric space, by~\eqref{eq:d_n_plus_1_prop}, it suffices to verify the triangle inequality for triangles $\{i,j,n+1\}$ with $\{i,j\}\in\binom{\br{n}}{2}$. Note that $d_{i,n+1}',
d_{j,n+1}' \ge 1$ whereas $d_{ij}' = d_{ij} \le 2$, so that we only need to verify that
\begin{equation}
\label{eq:triangle_i_j_n_plus_1}
|d_{i,n+1}' - d_{j,n+1}'|\le d_{ij}.
\end{equation}
We consider three cases, according to the value of $d_{ij}$.
\begin{itemize}
\item
If $d_{ij}<\frac 12 - \frac 12 n^{-\rho}$, then $i, j\in Q$ and
\begin{equation*}
|d_{i,n+1}' - d_{j,n+1}'| \le \frac{1}{2}\left(\min_{k\ne i}d_{ik} + \min_{k\ne j} d_{jk} \right) \le d_{ij}.
\end{equation*}
\item
If $\frac{1}{2} - \frac{1}{2} n^{-\rho} \le d_{ij}<1-n^{-\rho}$, then at least one of $i$ and $j$ is in $V$, as $\{i, j\} \in T^{\rho}(d)$ and $V$ is the vertex set of a largest matching in $T^{\rho}(d)$, and
\begin{equation*}
|d_{i,n+1}' - d_{j,n+1}'| \le \frac{1}{2}\left(n^{-\rho} + (1-2n^{-\rho})\right)=\frac 12-\frac12 n^{-\rho} \le d_{ij}.
\end{equation*}
\item
Finally, if $d_{ij}\ge 1-n^{-\rho}$, then
\begin{equation*}
|d_{i,n+1}' - d_{j,n+1}'| \le \frac{1}{2}\left((1-2n^{-\rho}) + (1-2n^{-\rho})\right) \le d_{ij}.
\end{equation*}
\end{itemize}
The proof of the claim is now complete.
\end{proof}
Claim~\ref{claim:d-extensions} implies a lower bound on the ratio of the volumes of $\mathcal{M}_{n+1}$ and $\mathcal{M}_n^*$. More precisely, for each $d \in \mathcal{M}_n^*$, the volume of extensions of $d$ to a $d' \in \mathcal{M}_{n+1}$ is at least
\[
\prod_{i=1}^n \min\{\delta_1(i),\delta_2(i),1-2n^{-\rho}\}\ge
\left(n^{-\rho}\right)^{|V|} \cdot \prod_{i\in Q}\min_{j\ne
i}d_{ij} \cdot (1-2n^{-\rho})^n.
\]
By the definition of $F_{Q}(d)$, see~\eqref{eq:F_A_def},
\[
\prod_{i \in Q} \min_{j \neq i} d_{ij} = 2^{-|Q|} \cdot F_{Q}(d),
\]
while the definition of $Q$ ensures that
\begin{equation*}
F_{Q}(d) \le \left(1 - n^{-\rho}\right)^{|Q|} \le \exp\big(-|Q| \cdot n^{-\rho}\big) \le \left(2^{-|Q|}\right)^{n^{-\rho}}.
\end{equation*}
Since $d \in \mathcal{M}_n^1$, we have $F_Q(d) > \exp(-n^{1-\varphi})$, and we deduce that
\[
2^{-|Q|} \cdot F_{Q}(d) \ge F_{Q}(d)^{n^\rho+1} > \exp\left(-2n^{1-\varphi+\rho}\right).
\]
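Let us spell out the two inequalities in the last display. Raising the bound $F_{Q}(d) \le (2^{-|Q|})^{n^{-\rho}}$ to the power $n^{\rho}$ gives $F_{Q}(d)^{n^{\rho}} \le 2^{-|Q|}$, whence $2^{-|Q|} \cdot F_{Q}(d) \ge F_{Q}(d)^{n^{\rho}+1}$; moreover,
\begin{equation*}
F_{Q}(d)^{n^{\rho}+1} > \exp\left(-(n^{\rho}+1) \cdot n^{1-\varphi}\right) \ge \exp\left(-2n^{1-\varphi+\rho}\right),
\end{equation*}
using $F_Q(d) > \exp(-n^{1-\varphi})$ and $n^{\rho}+1 \le 2n^{\rho}$.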
Finally, as $d \in \mathcal{M}_n^2$, we have $|V| \le 2n^{1-\mu}$ and we conclude that
\begin{align*}
\mathrm{Vol}(\mathcal{M}_{n+1}) &\ge \left(n^{-\rho}\right)^{2n^{1-\mu}} \cdot \exp\big(-2n^{1 - \varphi + \rho}\big) \cdot \left(1-2n^{-\rho}\right)^n \cdot \mathrm{Vol}(\mathcal{M}_n^*)\\
&\ge \exp\left(-n^{1-3c} \log (n) \right) \cdot \mathrm{Vol}(\mathcal{M}_n),
\end{align*}
where the last inequality follows from~\eqref{eq:varphi_def}, \eqref{eq:M_n_*_volume}, and our assumption that $n$ is sufficiently large.
\end{proof}
\section{Discussion and open questions}\label{sec:discussion_and_open_questions}
\subsection{Further questions}\label{sec:further_questions}
As we remarked, we were not able to decide whether $\mathrm{Vol}(\mathcal{M}_n)$ is increasing in $n$.
If one could prove that this is indeed the case, this would greatly simplify our proof of Theorem~\ref{thm:minimal_distance} on the shortest distance in the metric space sampled uniformly from~$\mathcal{M}_n$.
Suppose that $d$ is a metric space sampled uniformly from $\mathcal{M}_n$. A key ingredient in our proof of Theorem~\ref{thm:minimal_distance} is the upper bound on $\P(d_{12} < 1)$ established in Proposition~\ref{prop:distance-lower-tail}. It would be interesting to obtain additional information about the distribution of $d_{12}$. In particular, is it true that $\P(d_{12} < 1) = \Theta(n^{-1/2})$? We believe that this is the case and our belief seems to be supported by the lower bound of Proposition~\ref{prop:distance-lower-tail}. Going even further and writing $f_n$ for the density of the random variable $d_{12}$, one may ask whether the function $[0, \infty) \ni t \mapsto f_n(1-\frac{t}{\sqrt{n}})$ has a limit as $n \to \infty$. It would also be very interesting to estimate the probability $\P(d_{12} < 1-\frac{t}{\sqrt{n}})$ for $t \gg 1$. Propositions~\ref{prop:exp-bound-alpha} and~\ref{prop:volume_comparison} imply that $\P(d_{12} < \alpha)$ is exponentially small in $n$ for every fixed $\alpha < 1/2$, see also~\eqref{eq:exponential_probability_for_very_small_distances}, but we are not ready to make any conjectures about the range $t \le \sqrt{n}/2$.
Do the empirical measures of individual distances (and tuples of distances)
satisfy a large deviation principle? If so, what is the rate
function? Is it possible to recover our result about the minimum distance from such a large deviation estimate?
\subsection{Relation with the discrete problem}
\label{sec:discrete-problem}
One may naturally consider a discrete analogue of the problem we study in this paper, where we require the distances between every pair of points to be integers. More specifically, given integers $M\ge 1$ and $n\ge 2$, one may consider the space $\mathcal{M}_n^M$ defined by
\begin{equation*}
\mathcal{M}_n^M := \left\{ (\dist{i}{j})\in\{1, \dotsc, M\}^{\binom{\br{n}}{2}} : \dist{i}{j}\le \dist{i}{k} + \dist{k}{j}\text{ for all $i,j,k$} \right\},
\end{equation*}
which is closely related to the metric polytope $\mathcal{M}_n$. Indeed, for every~$n$, $\mathcal{M}_n$ is naturally obtained as a limit of $\left(\frac{2}{M}\right)\mathcal{M}_n^M$ as $M$ tends to infinity. We proceed to discuss some of the quantitative aspects of this relation.
As with the continuous problem, observing that the cube
\begin{equation}\label{eq:discrete_problem_cube_structure}
\left\{\left\lceil\frac{M}{2}\right\rceil,\left\lceil\frac{M}{2}\right\rceil+1,\dotsc, M\right\}^{\binom{n}{2}}
\end{equation}
is fully contained in $\mathcal{M}_n^M$, one gets the following simple lower bound on the cardinality of~$\mathcal{M}_n^M$:
\begin{equation}\label{eq:discrete_problem_naive_estimate}
|\mathcal{M}_n^M|\ge \left\lceil\frac{M+1}{2}\right\rceil^{\binom{n}{2}}.
\end{equation}
In fact, one may obtain bounds on $|\mathcal{M}_n^M|$ from bounds on $\mathrm{Vol}(\mathcal{M}_n)$ and vice-versa. In one direction, consider the map $\varphi \colon (0,2]^{\binom{n}{2}} \to \{1, \dotsc, M\}^{\binom{n}{2}}$ defined by
\[
\varphi(d)_{ij} = \left\lceil \frac{M d_{ij}}{2} \right\rceil.
\]
Observe that $\varphi$ maps $\mathcal{M}_n$ to $\mathcal{M}_n^M$ (as $\lceil x \rceil + \lceil y \rceil \ge \lceil z \rceil$ whenever $x + y \ge z$) and that $\mathrm{Vol}(\varphi^{-1}(d')) = \left(\frac{2}{M}\right)^{\binom{n}{2}}$ for any $d' \in \{1, \dotsc, M\}^{\binom{n}{2}}$. Consequently,
\[
\mathrm{Vol}(\mathcal{M}_n) \le \left(\frac{2}{M}\right)^{\binom{n}{2}} |\mathcal{M}_n^M|.
\]
In the other direction, consider the map $\psi$ from $\mathcal{M}_n^M$ to the power set of $(0,2]^{\binom{n}{2}}$ defined by
\[
\psi(d) = \prod_{i,j} \left(\frac{2}{M+2}\left(d_{ij} + 1\right), \frac{2}{M+2}\left(d_{ij} + 2\right)\right].
\]
Observe that $\psi$ maps each $d \in \mathcal{M}_n^M$ to a cube that is fully contained in $\mathcal{M}_n$ (as $x+y \ge z$ implies that $(x+\Delta x)+(y+\Delta y) \ge (z+\Delta z)$ for all $\Delta x, \Delta y, \Delta z \in (1,2]$) and that the cubes corresponding to different $d$ are pairwise disjoint. It follows that
\[
|\mathcal{M}_n^M| \left(\frac{2}{M+2}\right)^{\binom{n}{2}} \le \mathrm{Vol}(\mathcal{M}_n).
\]
Putting these bounds together yields
\begin{equation}
\label{eq:card_MnM-vol_Mn}
\left(\frac{M}{2}\right)^{\binom{n}{2}}\mathrm{Vol}(\mathcal{M}_n)\le |\mathcal{M}_n^M|\le \left(\frac{M}{2}+1\right)^{\binom{n}{2}}\mathrm{Vol}(\mathcal{M}_n).
\end{equation}
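To illustrate~\eqref{eq:card_MnM-vol_Mn} in the smallest nontrivial case $n = 3$, note that $\mathrm{Vol}(\mathcal{M}_3) = 8 - 3 \cdot \frac{4}{3} = 4$, since the three pairwise disjoint violation events such as $\{a > b + c\}$ each have volume $\frac{4}{3}$ in $[0,2]^3$. The brute-force check below (an illustrative snippet of our own, not part of the paper) confirms the resulting sandwich bound for $M = 10$.

```python
import itertools

def count_M3(M):
    """|M_3^M|: triples (a, b, c) in {1, ..., M}^3 satisfying all three
    triangle inequalities."""
    return sum(
        a <= b + c and b <= a + c and c <= a + b
        for a, b, c in itertools.product(range(1, M + 1), repeat=3)
    )

VOL_M3 = 4.0  # exact volume of the metric polytope for n = 3
M = 10
count = count_M3(M)          # 640 for M = 10
lower = (M / 2) ** 3 * VOL_M3        # (M/2)^{binom(3,2)} * Vol(M_3) = 500.0
upper = (M / 2 + 1) ** 3 * VOL_M3    # (M/2 + 1)^{binom(3,2)} * Vol(M_3) = 864.0
assert lower <= count <= upper
```

The count $640$ can also be obtained by inclusion-exclusion: $M^3 - 3\sum_{s=2}^{M-1}(M-s)(s-1) = 1000 - 3 \cdot 120$, the three violation events being disjoint in the discrete setting as well.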
\smallskip
Concurrently with the writing of this paper, Mubayi and Terry~\cite{mubayi2019discrete} studied the discrete problem in the regime where \emph{$M$ is fixed} and $n$ tends to infinity, proving that
\begin{equation}
\label{eq:Mubayi-Terry}
|\mathcal{M}_n^M| =
\begin{cases}
\big(1+e^{-\Omega(n)}\big)\left(\frac{M}{2}+1\right)^{\binom{n}{2}} & \text{if $M$ is even,} \\
\left(\frac{M+1}{2}\right)^{\binom{n}{2} + o(n^2)} & \text{if $M$ is odd}
\end{cases}
\end{equation}
(with additional structural information in the odd $M$ case).
The above bound reveals that for even $M$, the structure of a uniformly chosen space from $\mathcal{M}_n^M$ is very rigid: the probability that even a single distance lies outside the discrete interval $\{M/2, \dotsc, M\}$ is exponentially small. This strong rigidity property stems from the assumption that $M$ is fixed and does not hold in the continuous setting. Indeed, the bound~\eqref{eq:min-dij-lower} shows that the minimum distance is smaller than $1 - \frac{c}{\sqrt n}$ in typical samples from $\mathcal{M}_n$. Handling such microscopic fluctuations contributes to the difficulty in controlling the volume of $\mathcal{M}_n$ and understanding the structure of typical samples from it.
\subsection{Metric preserving maps}
A map $\phi \colon [0,\infty)\to[0,\infty)$ is metric preserving if $\phi(d) = \big(\phi(d_{ij})\big)$ is a metric
on some set whenever $d = (d_{ij})$ is, e.g., the ceiling operation from the previous subsection.
There are many interesting examples of such maps, see~\cite{Corazza}. Every metric preserving map $\phi$
such that $\sup_{ x\in [0,2]} \phi(x) \le 2$ induces a self-map of the metric polytope.
We wonder how metric preserving maps can be utilized to further study the structure of the metric polytope.
\subsection{Other models for random metric spaces}
In this paper we investigated a certain model of a `random metric space', which in some sense is natural.
The conclusion of our results is that on a large scale this model essentially reduces to the `trivial' model where all distances are in the interval $[1,2]$, and the triangle inequality is trivially satisfied.
It would be interesting to find other models for a `random metric space', which are `natural' on the one hand, and `interesting' on the other hand, in the sense that they reveal new phenomena about metric spaces.
In \cite{MR2086637} Vershik considered one natural candidate for a
random metric space, and proved that it is essentially the Urysohn universal metric space. As he remarks in that paper, `An obvious drawback of our construction is that it is not invariant with respect
to the numbering of points'.
\subsection*{Acknowledgments}
We thank Itai Benjamini for asking the question and Gil Kalai for
informing us that this object is known as the metric polytope. We
thank Omer Angel, Dor Elboim, Ronen Eldan, Ehud Friedgut, Shoni Gilboa, Rob Morris, Bal\'azs
R\'ath and Johan W\"astlund for many
interesting discussions. Special thanks are due to Rob Morris, who kindly agreed to us presenting
the results of our joint work with him in Section~\ref{sec:container-method} of this paper.
\bibliographystyle{abbrv}
https://arxiv.org/abs/1405.7334 | Strong duality in Lasserre's hierarchy for polynomial optimization | A polynomial optimization problem (POP) consists of minimizing a multivariate real polynomial on a semi-algebraic set $K$ described by polynomial inequalities and equations. In its full generality it is a non-convex, multi-extremal, difficult global optimization problem. More than a decade ago, J.~B.~Lasserre proposed to solve POPs by a hierarchy of convex semidefinite programming (SDP) relaxations of increasing size. Each problem in the hierarchy has a primal SDP formulation (a relaxation of a moment problem) and a dual SDP formulation (a sum-of-squares representation of a polynomial Lagrangian of the POP). In this note, when the POP feasibility set $K$ is compact, we show that there is no duality gap between each primal and dual SDP problem in Lasserre's hierarchy, provided a redundant ball constraint is added to the description of set $K$. Our proof uses elementary results on SDP duality, and it does not assume that $K$ has an interior point.
\section{Introduction}
Consider the following polynomial optimization problem (POP)
\begin{equation}
\label{eq:pop}
\begin{array}{ll}
\inf_x & f(x) := \sum_\alpha f_{\alpha} x^\alpha \\
\mathrm{s.t.} & g_i(x) := \sum_\alpha g_{i,\alpha} x^\alpha \geq 0, \quad i=1,\ldots,m
\end{array}
\end{equation}
where we use the multi-index notation $x^\alpha := x_1^{\alpha_1} \cdots x_n^{\alpha_n}$ for $x \in {\mathbb R}^n$,
$\alpha \in {\mathbb N}^n$ and
where the data are polynomials $f, g_1, \ldots, g_m \in {\mathbb R}[x]$
so that in the above sums only a finite number of coefficients
$f_{\alpha}$ and $g_{i,\alpha}$ are nonzero. Assume that the feasibility set
\[
K:=\{x \in {\mathbb R}^n \: :\: g_i(x) \geq 0, \: i=1,\ldots,m\}
\]
is nonempty and bounded, so that the above infimum is attained. To solve POP (\ref{eq:pop}),
Lasserre \cite{lasserre-2000,lasserre-2001} proposed a semidefinite programming (SDP) relaxation hierarchy
with guarantees of global and generically finite convergence \cite{nie14} provided an algebraic assumption holds:
\begin{assumption}\label{archimedean}
There exists a polynomial $u \in \mathbb{R}[x]$ such that $\{x\in \mathbb{R}^n \: :\: u(x)\geq 0\}$ is bounded
and $u=u_0 + \sum_{i=1}^m u_i g_i $ where polynomials $u_i \in {\mathbb R}[x]$, $i=0,1,\ldots,m$
are sums of squares (SOS) of other polynomials.
\end{assumption}
Assumption \ref{archimedean} can be difficult to check computationally (as the degrees of the SOS
multipliers can be arbitrarily large), and it is often replaced
by the following slightly stronger assumption:
\begin{assumption}\label{ball}
Let $R>0$ be the radius of a Euclidean ball containing the set $K$, and add the
redundant ball constraint $g_{m+1}(x) = R^2 - \sum_{i=1}^n x_i^2 \geq 0$ to the description of $K$.
\end{assumption}
Indeed, under Assumption \ref{ball}, simply choose $u=g_{m+1}$, $u_0=u_1=\cdots=u_m=0$, and $u_{m+1}=1$
to conclude that Assumption \ref{archimedean} holds as well.
In practice, it is often easy to identify a bound $R$ on the radius of the feasibility set $K$.
Each problem in Lasserre's hierarchy consists of a primal-dual SDP pair, called SDP relaxation,
where the primal corresponds
to a convex moment relaxation of the original (typically nonconvex) POP, and the dual corresponds
to a SOS representation of a polynomial Lagrangian of the POP. The question arises of the absence of
duality gap in each SDP relaxation. This is of practical importance because numerical algorithms
to solve SDP problems are guaranteed to converge only when there is no duality gap,
and sometimes under the stronger assumption that there is a primal and/or dual SDP interior point.
In \cite[Example 4.9]{schweighofer},
Schweighofer provides a two-dimensional POP with bounded $K$ with no interior point
for which Assumption \ref{archimedean} holds, yet a duality gap exists at the first
SDP relaxation: $\inf x_1x_2 \:\mathrm{s.t.}\: x \in K=\{x \in {\mathbb R}^2 \: :\:
-1 \leq x_1 \leq 1, \: x^2_2 \leq 0\}$, with
primal SDP value equal to zero and dual SDP value equal to minus infinity.
This shows that a stronger assumption is required to ensure no SDP duality gap.
A sufficient condition for strong duality has been given in \cite{lasserre-2001}:
the interior of the POP feasibility set $K$ should be nonempty. However, this may be too restrictive:
in the proof of Lemma 1 in \cite{hl12} the authors use notationally awkward arguments
involving truncated moment matrices
to prove the absence of an SDP duality gap for a set $K$ with no interior point. This shows that the existence of an interior point
of $K$ is not necessary for the absence of an SDP duality gap, and a weaker assumption is welcome.
Motivated by these observations, in this note we prove that under the basic Assumption \ref{ball}
on the description of set $K$, there is no duality gap in the SDP hierarchy. In particular, this
covers the cases when $K$ has an empty interior. Our interpretation of this result,
and the main message of this contribution, is that in the context of Lasserre's hierarchy
for POP, a practically relevant description of a bounded semialgebraic feasibility set
must include a redundant ball constraint.
\section{Proof of strong duality}
For notational convenience, let $g_0(x)=1 \in {\mathbb R}[x]$ denote the unit polynomial.
Define the localizing moment matrix
\[
M_{d-d_i}(g_iy) := \left(\sum_\gamma g_{i,\gamma} y_{\alpha+\beta+\gamma}\right)_{|\alpha|,|\beta|\leq d-d_i}
= \sum_{|\alpha|\leq 2d} A_{i,\alpha} y_{\alpha}
\]
where $d_i$ is the smallest integer greater than or equal to half the degree of $g_i$,
for $i=0,1,\ldots,m$.
For $d \geq d_{\min}:=\max_{i=0,1,\ldots,m} d_i$, the Lasserre hierarchy for POP (\ref{eq:pop})
consists of a primal moment SDP problem
\[
(P_d) ~:~
\begin{array}{ll}
\inf_y & \sum_{\alpha} f_{\alpha} y_\alpha \\
\mathrm{s.t.} & y_0 = 1 \\
& M_{d-d_i}(g_i y) \succeq 0, \:i=0,1,\ldots,m
\end{array}
\]
and a dual SOS SDP problem
\[
(D_d) ~:~
\begin{array}{ll}
\sup_{z,Z} & z \\
\mathrm{s.t.} & f_0 - z = \sum_{i=0}^m \langle A_{i,0}, Z_i \rangle \\
& f_\alpha = \sum_{i=0}^m \langle A_{i,\alpha}, Z_i \rangle, \quad 0<|\alpha|\leq 2d \\
& Z_i \succeq 0, \:i=0,1,\ldots,m
\end{array}
\]
where $A\succeq 0$ means that the matrix $A$ is positive semidefinite, and $\langle A,B \rangle = \mathrm{trace}\:AB$ denotes
the inner product between two matrices.
The primal-dual pair $(P_d,D_d)$ is called the SDP relaxation of order $d$ for POP (\ref{eq:pop}).
Let us define the following sets:
\begin{itemize}
\item $\mathcal{P}_d$: feasible points for $P_d$;
\item $\mathcal{D}_d$: feasible points for $D_d$;
\item $\mathrm{int}\:\mathcal{P}_d$: strictly feasible points for $P_d$;
\item $\mathrm{int}\:\mathcal{D}_d$: strictly feasible points for $D_d$;
\item $\mathcal{P}^*_d$: optimal solutions for $P_d$;
\item $\mathcal{D}^*_d$: optimal solutions for $D_d$.
\end{itemize}
Finally, let us denote by $\mathrm{val}\:P_d$ the infimum in problem $P_d$
and by $\mathrm{val}\:D_d$ the supremum in problem $D_d$.
Strong duality holds whenever $\mathrm{val}\:P_d=\mathrm{val}\:D_d < \infty$.
\begin{lemma}
\label{lemma:Slater}
$\mathrm{int}\:\mathcal{P}_d$ nonempty or $\mathrm{int}\:\mathcal{D}_d$ nonempty
implies $\mathrm{val}\:P_d=\mathrm{val}\:D_d$.
\end{lemma}
Lemma \ref{lemma:Slater} is classical in convex optimization, and it is generally
called Slater's condition, see e.g. \cite[Theorem 4.1.3]{shapiro-2000}.
\begin{lemma}
\label{lemma:Trnovska}
$\mathcal{P}_d$ is nonempty and $\mathrm{int}\:\mathcal{D}_d$ is nonempty
if and only if $\mathcal{P}^*_d$ is nonempty and bounded.
\end{lemma}
A proof of Lemma \ref{lemma:Trnovska} can be found in \cite{trnovska-2005}.
According to Lemmas \ref{lemma:Slater} and \ref{lemma:Trnovska},
$\mathcal{P}_d^*$ nonempty and bounded implies strong duality.
This result is also mentioned without proof at the end of \cite[Section 4.1.2]{shapiro-2000}.
\begin{lemma}
\label{lemma:bound}
Under Assumption \ref{ball}, set $\mathcal{P}_d$ is included in the Euclidean ball of radius $\sum_{k=0}^{d} R^{2k}$.
\end{lemma}
\begin{proof}
Consider a feasible point $(y_\alpha)_{|\alpha| \leqslant 2d} \in \mathcal{P}_d$.
Let $k \in {\mathbb N}$ be such that
$1 \leq k \leq d$. In the SDP problem $P_k$,
the localizing matrix associated to the redundant ball constraint $g_{m+1}(x) = R^2 - \sum_{i=1}^n x^2_i \geq 0$ reads
\[
M_{k-1}(g_{m+1} y) = \left(\sum_\gamma g_{m+1,\gamma} ~ y_{\alpha+\beta+\gamma}\right)_{|\alpha|,|\beta|\leq k-1}
\]
with trace equal to
$$
\begin{array}{rcl}
\mathrm{trace}\:M_{k-1}(g_{m+1} y) & = & \sum_{|\alpha|\leqslant k-1} \sum_\gamma g_{m+1,\gamma} ~ y_{2\alpha+\gamma} \\\\
& = & \sum_{|\alpha|\leq k-1} \left( g_{m+1,0} ~ y_{2\alpha} + \sum_{|\gamma|=1} g_{m+1,2\gamma} ~ y_{2\alpha+2\gamma} \right) \\\\
& = & \sum_{|\alpha|\leq k-1} \left( R^2 y_{2\alpha} - \sum_{|\gamma|=1} y_{2(\alpha+\gamma)} \right) \\\\
& = & \sum_{|\alpha|\leq k-1} R^2 y_{2\alpha} - \sum_{|\alpha|\leq k-1,|\gamma|=1} y_{2(\alpha+\gamma)} \\\\
& = & R^2 (\sum_{|\alpha|\leq k-1} y_{2\alpha}) + y_0 - \sum_{|\alpha|\leq k} y_{2\alpha} \\\\
& = & R^2 \:\mathrm{trace}\:M_{k-1}(y) + 1 - \mathrm{trace}\:M_{k}(y).\\
\end{array}
$$
For $k \leq d$, the matrix $M_{k-1}(g_{m+1} y)$ is a principal submatrix of $M_{d-1}(g_{m+1} y)$, which is positive semidefinite by feasibility of $y$; hence $M_{k-1}(g_{m+1} y) \succeq 0$, so that
$\mathrm{trace}\:M_{k-1}(g_{m+1} y) \geq 0$ and
\[
\mathrm{trace}\:M_k(y) \leq 1 + R^2\:\mathrm{trace}\:M_{k-1}(y)
\]
from which we derive
\[
\mathrm{trace}\:M_d(y) \leq \sum_{k=1}^{d} R^{2(k-1)} + R^{2d}\:\mathrm{trace}\:M_0(y) = \sum_{k=0}^{d} R^{2k}
\]
since $\mathrm{trace}\:M_0(y) = y_0 = 1$.
The Frobenius norm $\|M_{d}(y)\|_F$, equal to the square root of the sum of the squared eigenvalues of $M_d(y)$, is upper bounded
by $\mathrm{trace}\:M_d(y)$, the sum of the eigenvalues of $M_d(y)$, which are all nonnegative.
Moreover
$$\begin{array}{rcl}
\|M_d(y)\|_F^2 & = & \langle ~ \sum_{|\alpha|\leq 2d} A_\alpha y_\alpha ~,~ \sum_{|\alpha|\leq 2d} A_\alpha y_\alpha ~ \rangle \\\\
& = & \sum_{|\alpha|\leq 2d} ~ \langle A_\alpha,A_\alpha \rangle ~ y_\alpha^2 ~~~ \text{by orthogonality of the matrices
$(A_\alpha)_{|\alpha|\leq 2d}$}\\\\
& \geq & \sum_{|\alpha|\leq 2d} ~ y_\alpha^2 ~~~ \text{because $\langle A_\alpha,A_\alpha \rangle \geq 1$}.
\end{array}
$$
The proof then follows from
\[
\sqrt{\sum_{|\alpha|\leq 2d} y^2_\alpha} \leq \|M_d(y)\|_F \leq \sum_{k=0}^{d} R^{2k}.
\]
\end{proof}
\begin{theorem}
\label{theorem:strong duality}
Under Assumption \ref{ball}, strong duality holds for SDP relaxations of all orders.
\end{theorem}
\begin{proof}
Let us first show that $\mathcal{P}_d$ is nonempty.
Let $v_d(x):=(x^{\alpha})_{|\alpha|\leq d}$ and
consider a feasible point $x^* \in K$ for POP (\ref{eq:pop}).
Then $y^*=v_{2d}(x^*) \in {\mathcal P}_d$ since by construction
$M_{d-d_i}(g_i y^*) = g_i(x^*) v_{d-d_i}(x^*) v_{d-d_i}(x^*)^T \succeq 0$ for all $i$.
From Lemma \ref{lemma:bound}, $\mathcal{P}_d$ is bounded and closed, and the objective function in $P_d$
is linear, so we conclude that
${\mathcal P}^*_d$ is nonempty and bounded. According to Lemma \ref{lemma:Trnovska}, $\mathrm{int}\:\mathcal{D}_d$
is nonempty, and from Lemma \ref{lemma:Slater} strong duality holds.
\end{proof}
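As a numerical sanity check (not part of the note), the following sketch instantiates the construction above for the univariate toy POP with $K=[-1,1]$ described by the single constraint $g_1(x) = 1 - x^2 \geq 0$, which is itself the ball constraint with $R=1$: for $y^* = v_{2d}(x^*)$ with $x^* = 1/2$, the moment and localizing matrices are positive semidefinite and the trace bound of Lemma \ref{lemma:bound} holds.

```python
import numpy as np

d, R, x_star = 2, 1.0, 0.5  # relaxation order, ball radius, a feasible point
# Moments of the Dirac measure at x*: y_a = (x*)^a for a = 0, ..., 2d
y = np.array([x_star ** a for a in range(2 * d + 1)])

# Moment matrix M_d(y): Hankel matrix (y_{i+j}), i, j = 0, ..., d
M = np.array([[y[i + j] for j in range(d + 1)] for i in range(d + 1)])
# Localizing matrix for g(x) = R^2 - x^2: (R^2 y_{i+j} - y_{i+j+2})
L = np.array([[R ** 2 * y[i + j] - y[i + j + 2] for j in range(d)]
              for i in range(d)])

assert np.all(np.linalg.eigvalsh(M) >= -1e-12)  # M_d(y) >= 0
assert np.all(np.linalg.eigvalsh(L) >= -1e-12)  # M_{d-1}(g y) >= 0
# Trace bound of Lemma 3: trace M_d(y) <= sum_{k=0}^d R^{2k} = 3 here
assert np.trace(M) <= sum(R ** (2 * k) for k in range(d + 1)) + 1e-12
```

Here $\mathrm{trace}\:M_d(y) = 1 + 1/4 + 1/16 = 1.3125 \leq 3$; the order $d$ and the point $x^*$ are arbitrary illustrative choices.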
\section{Conclusion}
We prove that strong duality always holds in Lasserre's SDP hierarchy for POP
on bounded semi-algebraic sets after adding a redundant ball constraint.
To preclude numerical troubles with SDP solvers, we advise
systematically adding such a ball constraint, combined with an appropriate scaling
so that all scaled variables belong to the unit ball. Without scaling,
numerical troubles can occur as well, but they are not due to the presence
of a duality gap, see \cite{hl05} and also the example of \cite[Section 6]{wnm12}.
https://arxiv.org/abs/2005.08903 | Iterative and doubling algorithms for Riccati-type matrix equations: a comparative introduction | We review a family of algorithms for Lyapunov- and Riccati-type equations which are all related to each other by the idea of \emph{doubling}: they construct the iterate $Q_k = X_{2^k}$ of another naturally-arising fixed-point iteration $(X_h)$ via a sort of repeated squaring. The equations we consider are Stein equations $X - A^*XA=Q$, Lyapunov equations $A^*X+XA+Q=0$, discrete-time algebraic Riccati equations $X=Q+A^*X(I+GX)^{-1}A$, continuous-time algebraic Riccati equations $Q+A^*X+XA-XGX=0$, palindromic quadratic matrix equations $A+QY+A^*Y^2=0$, and nonlinear matrix equations $X+A^*X^{-1}A=Q$. We draw comparisons among these algorithms, highlight the connections between them and to other algorithms such as subspace iteration, and discuss open issues in their theory.
\section{Introduction}
Riccati-type matrix equations are a family of matrix equations that appears very frequently in literature and applications, especially in systems theory. One of the reasons why they are so ubiquitous is that they are equivalent to certain invariant subspace problems; this equivalence connects them to a larger part of numerical linear algebra, and opens up avenues for many solution algorithms.
Many books (and even more articles) have been written on these equations; among them, we recall the classical monograph by Lancaster and Rodman~\cite{LancasterRodman}, a review book edited by Bittanti, Laub and Willems~\cite{BittantiLaubWillems}, various treatises which consider them from different points of view such as~\cite{AbouKandil,bart1,bart2,bimbook,BoyEFB-book,Datta,IonescuOaraWeiss,Meh91-book}, and recently also a book devoted specifically to doubling~\cite{doublingbook}.
This vast theory can be presented from different angles; in this exposition, we aim to present a selection of topics which differs from that of the other books and treatises. We focus on introducing doubling algorithms with a direct approach, explaining in particular that they arise as `doubling variants' of other more basic iterations, and detailing how they are related to the subspace iteration, to ADI, to cyclic reduction and to Schur complements. We do not treat algorithms and equations with the greatest generality possible, to reduce technicalities; we try to present the proofs only up to a level of detail that makes the results plausible and allows the interested reader to fill the gaps.
The basic idea behind doubling algorithms can be explained through the `model problem' of computing $w_h = M^{2^h}v$ for a certain matrix $M\in\mathbb{C}^{n\times n}$, $v\in\mathbb{C}^{n}$, and $h\in \mathbb{N}$. There are two possible ways to approach this computation:
\begin{enumerate}
\item[(1)] Compute $v_{k+1} = Mv_k$, for $k=0,1,\dots,2^{h}-1$ starting from $v_0 = v$; then the result is $w_h = v_{2^h}$.
\item[(2)] Compute $M_{k+1} = (M_k)^2$, for $k=0,1,\dots,h-1$, starting from $M_0 = M$; then the result is $w_h = M_h v$ (repeated squaring).
\end{enumerate}
It is easy to verify that $M_k v = v_{2^k}$ for each $k$. Hence $k$ iterations of (2) correspond to $2^k$ iterations of (1). We say that (2) is a \emph{squaring variant}, or \emph{doubling variant}, of (1). Each of the two versions has its own pros and cons, and in different contexts one or the other may be preferred. If $h$ is moderate and $M$ is large and sparse, one should favor variant (1): sparse matrix-vector products can be computed efficiently, while the matrices $M_k$ would become dense rather quickly, and one would need to compute and store all their $n^2$ entries. On the other hand, if $M$ is a dense matrix of non-trivial size (let us say $n \approx 10^3$ or $10^4$) and $h$ is reasonably large, then variant (2) wins: fewer iterations are needed, and the resulting computations are rich in matrix multiplications and BLAS level-3 operations, hence they can be performed on modern computers even more efficiently than their flop counts suggest. This problem is an oversimplified version, but it captures the spirit of doubling algorithms, and explains perfectly in which cases they work best.
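To make the model problem concrete, here is a small NumPy sketch (illustrative, not from the paper; the size, seed, and normalization are arbitrary choices) checking that $h$ squarings reproduce $2^h$ matrix-vector products:

```python
import numpy as np

rng = np.random.default_rng(0)
n, h = 5, 6
B = rng.standard_normal((n, n))
M = B / np.max(np.abs(np.linalg.eigvals(B)))  # normalize so rho(M) = 1,
v = rng.standard_normal(n)                    # avoiding over/underflow

# Variant (1): 2^h matrix-vector products
w1 = v.copy()
for _ in range(2 ** h):
    w1 = M @ w1

# Variant (2): h matrix squarings, then a single product
Mk = M.copy()
for _ in range(h):
    Mk = Mk @ Mk
w2 = Mk @ v

assert np.allclose(w1, w2)  # both compute M^(2^h) v
```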
Regarding competing methods: we mention briefly in our exposition Newton-type algorithms, ADI, and Krylov-type algorithms. We do not treat here direct methods, including Schur decomposition-based methods~\cite{Lau79-schur,PapLS80,Van81}, methods based on structured QR~\cite{Bye86a,BunM86,Meh88}, on symplectic URV decompositions~\cite{BenMX98,ChuLM07}, and linear matrix inequalities~\cite{BoyEFB-book}. Although these competitors may be among the best methods for dense problems, they do not fit the scope of our exposition and they do not lend themselves to an immediate comparison with the algorithms that we discuss.
The equations that we treat arise mostly from the study of dynamical systems, both in discrete and continuous time. In our exposition, we chose to start from the discrete-time versions: while continuous-time Riccati equations are simpler and more common in literature, it is more natural to start from discrete-time problems in this context. Indeed, when we discuss algorithms for continuous-time problems we shall see that often the first step is a reduction to a discrete-time problem (possibly implicit).
In the following, we use the notation $ A \succ B$ (resp.~$A \succeq B$) to mean that $A-B$ is positive definite (resp.~semidefinite) (Loewner order). We use $\rho(M)$ to denote the spectral radius of $M$, the symbol $\mathrm{LHP} = \{z\in \mathbb{C}: \operatorname{Re}(z) < 0\}$ to denote the (open) left half-plane, and $\mathrm{RHP}$ for the (open) right half-plane. We use the notation $\Lambda(M)$ to denote the spectrum of $M$, i.e., the set of its eigenvalues. We use $M^*$ to denote the conjugate transpose, and $M^\top$ to denote the transpose without conjugation, which appears when combining vectorizations and Kronecker products with the identity $\operatorname{vec}(MXN) = (N^\top \otimes M)\operatorname{vec}(X)$~\cite[Sections~1.3.6--1.3.7]{gvl}.
\section{Stein equations} \label{sec:stein}
The simplest matrix equation that we consider is the \emph{Stein equation} (or \emph{discrete-time Lyapunov equation}).
\begin{equation} \label{stein}
X - A^*XA = Q, \quad Q=Q^* \succeq 0,
\end{equation}
for $A,X,Q\in \mathbb{C}^{n\times n}$. This equation often arises in the study of discrete-time constant-coefficient linear systems
\begin{equation} \label{dlinearsystem}
x_{k+1} = Ax_k.
\end{equation}
A classical application of Stein equations is the following. If $X$ solves~\eqref{stein}, then by multiplying both sides on the left by $x_k^*$ and on the right by $x_k$ one sees that $V(x) := x^* X x$ is decreasing over the trajectories of~\eqref{dlinearsystem}, i.e., $V(x_{k+1}) \leq V(x_k)$. This fact can be used to prove stability of the dynamical system~\eqref{dlinearsystem}.
\subsection{Solution properties}
The Stein equation~\eqref{stein} is linear, and can be rewritten using Kronecker products as
\begin{equation} \label{stein-linearized}
(I_{n^2} - A^\top \otimes A^*) \vecop(X) = \vecop(Q).
\end{equation}
If $A=UTU^*$ is a Schur factorization of $A$, then we can factor the system matrix as
\begin{align} \label{stein-schur}
I_{n^2} - M &= I_{n^2} - A^\top \otimes A^* = (\bar{U} \otimes U) (I_{n^2} - T^\top \otimes T^*) (U^\top \otimes U^*), & M &= A^\top \otimes A^*,
\end{align}
which is a Schur-like factorization where the middle term is lower triangular. One can tell when $I-M$ is invertible by looking at the diagonal entries of that triangular middle factor: $I-M$ is invertible (and hence~\eqref{stein} is uniquely solvable) if and only if $\lambda_i \overline{\lambda_j} \neq 1$ for each pair of eigenvalues $\lambda_i,\lambda_j$ of $A$. This holds, in particular, when $\rho(A) < 1$. When the latter condition holds, we can apply the Neumann inversion formula
\begin{equation} \label{neumann}
(I-M)^{-1} = I + M + M^2 + \dots,
\end{equation}
which gives (after de-vectorization) an expression for the unique solution as an infinite series
\begin{equation} \label{steinsol}
X = \sum_{k=0}^{\infty} (A^*)^k Q A^k.
\end{equation}
It is apparent from~\eqref{steinsol} that $X \succeq 0$. A reverse result holds, but with strict inequalities: if~\eqref{stein} holds with $X\succ 0$ and $Q \succ 0$, then $\rho(A) < 1$ \cite[Exercise~7.10]{Datta}.
\subsection{Algorithms}
As discussed in the introduction, we do not describe here direct algorithms of the Bartels--Stewart family~\cite{BarS72,ChuBS,EptonBS,GardinerBS} (which, essentially, exploit the decomposition~\eqref{stein-schur} to reduce the cost of solving~\eqref{stein-linearized} from $\mathcal{O}(n^6)$ to $\mathcal{O}(n^3)$) even if they are often the best performing ones for dense linear (Stein or Lyapunov) equations. Rather, we present here two iterative algorithms, which we will use to build our way towards algorithms for nonlinear equations.
The Stein equation~\eqref{stein} takes the form of a fixed-point equation; this fact suggests the fixed-point iteration
\begin{align} \label{smith}
X_0 &= 0, & X_{k+1} &= Q + A^* X_k A,
\end{align}
known as the \emph{Smith method}~\cite{Smi68}. It is easy to see that the $k$th iterate $X_k$ is the partial sum of~\eqref{steinsol} (and~\eqref{neumann}) truncated to $k$ terms, thus convergence is monotonic, i.e., $0 = X_0 \preceq X_1 \preceq X_2 \preceq \dots \preceq X$. Moreover, some manipulations give
\begin{align*}
\vecop(X-X_k) &= (I+M+M^2+\dots) \vecop(Q) - (I+M+M^2+\dots+M^{k-1})\vecop(Q) \\&= M^{k}(I+M+M^2+\dots) \vecop(Q) = M^{k} \vecop(X),
\end{align*}
or, devectorizing,
\begin{equation} \label{stein-error}
X-X_k = (A^*)^{k} X A^{k}.
\end{equation}
This relation~\eqref{stein-error} implies $\norm{X-X_k} = \mathcal{O}(r^{k})$ for each $r > \rho(A)^2$, so convergence is linear when $\rho(A) < 1$, and it typically slows down when $\rho(A) \approx 1$.
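A minimal NumPy sketch of the Smith method (illustrative, not from the paper; the size, seed, and fixed iteration count are placeholder choices) is:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
B = rng.standard_normal((n, n))
A = 0.7 * B / np.max(np.abs(np.linalg.eigvals(B)))  # enforce rho(A) = 0.7 < 1
C = rng.standard_normal((n, n))
Q = C @ C.T                                         # Q = Q^* >= 0

X = np.zeros((n, n))
for _ in range(200):          # Smith iteration X_{k+1} = Q + A^* X_k A
    X = Q + A.T @ X @ A       # (real case: A^* = A^T)

# The limit solves the Stein equation X - A^* X A = Q
assert np.allclose(X - A.T @ X @ A, Q)
```

Since the error decays like $\rho(A)^{2k}$, a few hundred iterations are far more than enough at this spectral radius.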
A doubling variant comes from splitting the partial sums into two halves. The truncated sums of~\eqref{neumann} to $2^{k+1}$ terms can be computed iteratively using the identity
\[
I+M+M^2+\dots+M^{2^{k+1}-1} = (I+M+M^2+\dots+M^{2^k-1}) + M^{2^k}(I+M+M^2+\dots+M^{2^k-1}),
\]
without computing all the intermediate sums. Setting $\vecop Q_k := (I+M+M^2+\dots+M^{2^k-1}) \vecop{Q}$ and $A_k := A^{2^k}$, one gets the iteration
\begin{subequations} \label{squared-smith}
\begin{align}
A_0 &= A,& A_{k+1} &= A_k^2,\\
Q_0 &= Q,& Q_{k+1} &= Q_k + A_k^* Q_k A_k.
\end{align}
\end{subequations}
In view of the definitions, we have $Q_k = X_{2^k}$; so this method computes the $2^k$th iterate of the Smith method directly with $\mathcal{O}(k)$ operations, without going through all the intermediate ones. Convergence is quadratic: $\norm{X-Q_k} = \mathcal{O}(r^{2^k})$ for each $r > \rho(A)^2$. The method~\eqref{squared-smith} is known as \emph{squared Smith}. It has been used in the context of parallel and high-performance computing~\cite{BenQQ}, and it has reappeared in recent years for large and sparse equations~\cite{Pen99,Sad12,BenES} in combination with Krylov methods.
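The correspondence $Q_k = X_{2^k}$ can be checked numerically; the following sketch (not from the paper; test matrices and sizes are arbitrary) runs six doubling steps against $2^6 = 64$ plain Smith steps:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
B = rng.standard_normal((n, n))
A = 0.7 * B / np.max(np.abs(np.linalg.eigvals(B)))  # rho(A) = 0.7 < 1
C = rng.standard_normal((n, n))
Q0 = C @ C.T

# Squared Smith: after k steps, Q_k equals the 2^k-th Smith iterate
Ak, Qk = A.copy(), Q0.copy()
for _ in range(6):
    Qk = Qk + Ak.T @ Qk @ Ak  # Q_{k+1} = Q_k + A_k^* Q_k A_k
    Ak = Ak @ Ak              # A_{k+1} = A_k^2

# Plain Smith method, 2^6 = 64 steps from X_0 = 0
X = np.zeros((n, n))
for _ in range(64):
    X = Q0 + A.T @ X @ A

assert np.allclose(Qk, X)     # Q_6 = X_{64}
```

Note the update order: $Q_{k+1}$ must be formed with the current $A_k$ before $A_k$ is squared.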
\section{Lyapunov equations} \label{sec:lyap}
Lyapunov equations
\begin{equation} \label{lyap}
A^*X + XA + Q = 0, \quad Q =Q^* \succeq 0
\end{equation}
are the continuous-time counterpart of Stein equations. They arise from the study of continuous-time constant-coefficient linear systems
\begin{equation} \label{clinearsystem}
\frac{\mathrm{d}}{\mathrm{d}t} x(t) = Ax(t).
\end{equation}
A classical application is the following. If $X$ solves~\eqref{lyap}, by multiplying both sides on the left by $x(t)^*$ and on the right by $x(t)$ one sees that $V(x):= x^* X x$ is decreasing over the trajectories of~\eqref{clinearsystem}, i.e., $\frac{\mathrm{d}}{\mathrm{d}t} V(x(t))\leq 0$. This fact can be used to prove stability of the dynamical system~\eqref{clinearsystem}. Today stability is more often proved by computing eigenvalues, but Stein equations~\eqref{stein} and Lyapunov equations~\eqref{lyap} have survived in many other applications in systems and control theory, for instance in model order reduction~\cite{BenBD11,GugA04,Sim-review}, or as the inner step in Newton methods for other equations (see for instance~\eqref{newton-step} in the following).
\subsection{Solution properties}
Using Kronecker products, one can rewrite~\eqref{lyap} as
\begin{equation} \label{lyap-linearization}
(I_n \otimes A^* + A^\top \otimes I_n)\vecop(X) = -\vecop(Q),
\end{equation}
and a Schur decomposition $A=UTU^*$ produces
\begin{equation} \label{lyap-schur}
I_n \otimes A^* + A^\top \otimes I_n = (\bar{U} \otimes U) (I_n \otimes T^* + T^\top \otimes I_n) (U^\top \otimes U^*).
\end{equation}
Again, this is a Schur-like factorization, where the middle term is lower triangular. One can tell when $I_n \otimes A^* + A^\top \otimes I_n$ is invertible by looking at the diagonal entries of that triangular middle factor: the matrix is invertible (and hence~\eqref{lyap} is uniquely solvable) if and only if $\bar{\lambda_i} + \lambda_j \neq 0$ for each pair of eigenvalues $\lambda_i,\lambda_j$ of $A$. This holds, in particular, if the eigenvalues of $A$ all lie in $\mathrm{LHP} = \{z \in \mathbb{C} \colon \operatorname{Re}(z) < 0 \}$. When the latter condition holds, an analogue of~\eqref{steinsol} is
\begin{equation}
X = \int_0^{\infty} \exp(A^* t) Q \exp(At) \, \mathrm{d}t.
\end{equation}
Indeed, this integral converges for every choice of $Q$ if and only if the eigenvalues of $A$ all lie in $\mathrm{LHP}$.
Notice the pleasant symmetry with the Stein case: the (discrete) sum turns into a (continuous) integral; the stability condition for discrete-time linear time-invariant dynamical systems $\rho(A) < 1$ turns into the one $\Lambda(A) \subset \mathrm{LHP}$ for continuous-time systems. Perhaps a bit less evident is the equivalence between the condition $\bar{\lambda_i} + \lambda_j \neq 0$ (i.e., no two eigenvalues of $A$ are mapped into each other by reflection with respect to the imaginary axis) and $\lambda_i \overline{\lambda_j} \neq 1$ (i.e., no two eigenvalues of $A$ are mapped into each other by circle inversion with respect to the complex unit circle).
Lyapunov equations can be turned into Stein equations and \emph{vice versa}. Indeed, for a given $\tau \in \mathbb{C}$, \eqref{lyap} is equivalent to
\[
(A^*-\tau I)X(A-\bar{\tau}I) - (A^*+\bar{\tau} I)X(A+\tau I) - 2\operatorname{Re}(\tau)Q = 0,
\]
or, if $A-\bar{\tau} I$ is invertible,
\begin{align} \label{lyap-to-stein}
X - c(A)^* X c(A) &= 2\operatorname{Re}(\tau)(A^*-\tau I)^{-1} Q (A-\bar{\tau}I)^{-1}, & c(A) &= (A+\tau I)(A-\bar{\tau} I)^{-1}=(A-\bar{\tau} I)^{-1}(A+\tau I).
\end{align}
If $\tau \in \mathrm{RHP}$, then the right-hand side is positive semidefinite and~\eqref{lyap-to-stein} is a Stein equation. The stability properties of $c(A)$ can be explicitly related to those of $A$ via the following lemma.
\begin{lemma}[properties of Cayley transforms] \label{lem:cayley}
Let $\tau \in \mathrm{RHP}$. Then,
\begin{enumerate}
\item[(1)] for $\lambda \in \mathbb{C}$, we have $\abs{c(\lambda)} = \abs*{\frac{\lambda+\tau}{\lambda - \bar{\tau}}}<1$ if and only if $\lambda \in \mathrm{LHP}$;
\item[(2)] for a matrix $A\in\mathbb{C}^{n\times n}$, we have $\rho(c(A)) < 1$ if and only if $\Lambda(A) \subset \mathrm{LHP}$.
\end{enumerate}
\end{lemma}
A geometric argument to visualize (1) is the following. In the complex plane, $-\tau$ and $\bar{\tau}$ are symmetric with respect to the imaginary axis, with $-\tau$ lying to its left. Thus a point $\lambda \in \mathbb{C}$ is closer to $-\tau$ than to $\bar{\tau}$ if and only if it lies in $\mathrm{LHP}$. Part (2) follows from facts on the behaviour of eigenvalues of a matrix under rational functions~\cite[Proposition~1.7.3]{LancasterRodman}, which we will often use also in the following.
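As an illustration (not from the paper) of solving a Lyapunov equation through its Stein form, the following sketch applies the Smith method to the transformed equation~\eqref{lyap-to-stein} with a single constant shift $\tau = 1$, i.e., ADI with all shifts equal; the test matrix, seed, and iteration count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
B = rng.standard_normal((n, n))
# Shift B so that all eigenvalues have real part <= -1: Lambda(A) in LHP
A = B - (np.max(np.linalg.eigvals(B).real) + 1.0) * np.eye(n)
C = rng.standard_normal((n, n))
Q = C @ C.T
tau, I = 1.0, np.eye(n)                # any shift tau in RHP

Ainv = np.linalg.inv(A - tau * I)
cA = Ainv @ (A + tau * I)              # Cayley transform c(A), rho(c(A)) < 1
Qs = 2 * tau * Ainv.T @ Q @ Ainv       # right-hand side of the Stein form

X = np.zeros((n, n))
for _ in range(500):                   # Smith method on X = Qs + c(A)^* X c(A)
    X = Qs + cA.T @ X @ cA

# The limit solves the original Lyapunov equation A^* X + X A + Q = 0
assert np.allclose(A.T @ X + X @ A + Q, np.zeros((n, n)), atol=1e-8)
```

Varying the shift from step to step, as in~\eqref{adi} below, can accelerate convergence considerably.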
Another important property of the solutions $X$ of Lyapunov and Stein equations is the decay of their singular values in many practical cases. We defer its discussion to the following section, since a proof follows from the properties of certain solution algorithms.
\subsection{Algorithms}
As in the Stein case, one can implement a direct $\mathcal{O}(n^3)$ Bartels-Stewart algorithm~\cite{BarS72} by exploiting the decomposition~\eqref{lyap-schur}: the two outer factors have Kronecker product structure, and the inner factor is lower triangular, allowing for forward substitution. An interesting variant allows one to compute the Cholesky factor of $X$ directly from the one of $Q$~\cite{Ham82}.
Again, we focus our interest on iterative algorithms. We will assume $\Lambda(A) \subset \mathrm{LHP}$. Then, thanks to Lemma~\ref{lem:cayley}, we have $\rho(c(A)) < 1$, so we can apply the Smith method~\eqref{smith} to~\eqref{lyap-to-stein}. In addition, we can change the value of $\tau$ at each iteration. The resulting algorithm is known as \emph{ADI iteration}~\cite{PeaR55,Wac88}:
\begin{align} \label{adi}
X_0 &= 0, & X_{k+1} &= Q_k + c_k(A)^* X_k c_k(A),\\
Q_k &= 2\operatorname{Re}(\tau_k)(A^*-\tau_k I)^{-1} Q (A-\bar{\tau}_k I)^{-1}, & c_k(A) &= (A+\tau_k I)(A-\bar{\tau}_k I)^{-1}=(A-\bar{\tau}_k I)^{-1}(A+\tau_k I). \nonumber
\end{align}
The sequence of \emph{shifts} $\tau_k \in \mathrm{RHP}$ can be chosen arbitrarily, with the only condition that $\bar{\tau}_k \not \in \Lambda(A)$. By writing a recurrence for the error $E_k = X - X_k$, one sees that
\begin{equation} \label{adi-error}
E_k = r_{k}(A)^* E_0 r_{k}(A) = r_{k}(A)^* X r_{k}(A), \quad r_{k}(A) = c_{k-1}(A) \dotsm c_1(A)c_0(A),
\end{equation}
a formula which generalizes~\eqref{stein-error}. When $A$ is normal, the problem of assessing the convergence speed of this iteration can be reduced to a scalar approximation theory problem. Note that
\begin{align*}
\norm{r_k(A)} &= \max_{\lambda \in \Lambda(A)} \abs{r_k(\lambda)}, & \norm{r_k(A)^*} &= \norm{r_k(-A^*)^{-1}} = \frac{1}{\min_{\lambda \in \Lambda(A)} \abs{r_k(-\overline{\lambda})} }.
\end{align*}
If one knows a region $E \subset \mathrm{LHP}$ that encloses the eigenvalues of $A$, the optimal choice of $r_k$ is the degree-$k$ rational function that minimizes
\begin{equation} \label{zolotarev-objective}
\frac{\sup_{z\in E} \abs{r_k(z)}}{\inf_{z\in -E^*} \abs{r_k(z)}},
\end{equation}
i.e., a rational function that is `as small as possible' on $E$ and `as large as possible' on $-E^*$.
Finding this rational function is known as the \emph{Zolotarev approximation problem}, and it was solved by its namesake for many choices of $E$, including intervals $E=[-b,-a] \subset \mathbb{R}_-$: this choice of $E$ corresponds to a Hermitian negative definite $A$ (i.e., $-A$ symmetric positive definite) for which a lower and an upper bound on the spectrum are known. It is known that the optimal ratio~\eqref{zolotarev-objective} decays as $\rho^k$, where $\rho < 1$ is a certain value that depends on $E$, related to its so-called \emph{logarithmic capacity}. See the recent review by Beckermann and Townsend~\cite{BecT19} for more details. Optimal choices of the shifts for a normal $A$ were originally studied by Wachspress~\cite{Wac88,EllW91}. When $A$ is non-normal, a similar bound can be obtained from its eigendecomposition $A = VDV^{-1}$, but it includes the eigenvalue condition number $\kappa(V) = \norm{V} \norm{V^{-1}}$, and is thus of worse quality.
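For a concrete illustration, the following sketch runs the ADI iteration~\eqref{adi} on a Hermitian negative definite $A$ with spectrum in $[-100,-1]$. The logarithmically spaced real shifts are a crude heuristic stand-in for the optimal Wachspress shifts, and all sizes and seeds are arbitrary illustrative choices:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(1)
n = 50
A = -np.diag(rng.uniform(1.0, 100.0, n))     # Hermitian, spectrum in [-100, -1]
C = rng.standard_normal((1, n))
Q = C.T @ C
X = solve_continuous_lyapunov(A.T, -Q)       # reference solution of A^*X + XA + Q = 0

I = np.eye(n)
Xk = np.zeros((n, n))
# heuristic real shifts, logarithmically spaced over the spectral interval [1, 100]
for tau in np.geomspace(1.0, 100.0, 8):
    cA = np.linalg.solve(A - tau * I, A + tau * I)
    Qk = 2 * tau * np.linalg.inv(A.T - tau * I) @ Q @ np.linalg.inv(A - tau * I)
    Xk = Qk + cA.T @ Xk @ cA
    err = np.linalg.norm(X - Xk) / np.linalg.norm(X)
    print(err)                               # decreases at each step
```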
An important case, both in theory and in practice, is when $Q$ has low rank. One usually writes $Q = C^*C$, where $C\in\mathbb{C}^{p\times n}$ (with $p \ll n$) is a short, wide matrix, following a standard notation from control theory. A decomposition $X_k = Z_k Z_k^*$ can be derived from~\eqref{adi}, and reads
\begin{align}
Z_k &= \begin{bmatrix}
\sqrt{2\operatorname{Re}(\tau_{k-1})} (A^*-\tau_{k-1}I)^{-1}C^*, & c_{k-1}(A)^* Z_{k-1}
\end{bmatrix} \nonumber\\
& = \left[
\sqrt{2\operatorname{Re}(\tau_{k-1})} (A^*-\tau_{k-1}I)^{-1}C^*, \sqrt{2\operatorname{Re}(\tau_{k-2})}(A^*-\tau_{k-1}I)^{-1}(A^*+\bar{\tau}_{k-1}I) (A^*-\tau_{k-2}I)^{-1}C^*, \dots, \right. \nonumber \\
& \left.
\sqrt{2\operatorname{Re}(\tau_{0})}(A^*-\tau_{k-1}I)^{-1}(A^*+\bar{\tau}_{k-1}I) (A^*-\tau_{k-2}I)^{-1}(A^*+\bar{\tau}_{k-2}I) \dotsm (A^*-\tau_0 I)^{-1} C^*
\right]. \label{krylovadi}
\end{align}
Hence $Z_k$ is obtained by concatenating horizontally $k$ blocks $V_1,V_2,\dots,V_k$ of size $n\times p$ each, where each $V_j$ is a rational function of $A^*$ of increasing degree multiplied by $C^*$. All the factors in parentheses commute; hence the blocks $V_j$ can be computed with the recurrence
\begin{align}
Z_k &= \begin{bmatrix}
V_1 & V_2 & \cdots & V_{k}
\end{bmatrix}, & V_1 &=\sqrt{2\operatorname{Re}(\tau_{k-1})} (A^*-\tau_{k-1}I)^{-1}C^*, \nonumber \\&& V_{j+1} &= \frac{\sqrt{2\operatorname{Re}(\tau_{k-j-1})}}{\sqrt{2\operatorname{Re}(\tau_{k-j})}}(A^*-\tau_{k-j-1}I)^{-1}(A^*+\bar{\tau}_{k-j}I)V_j \nonumber \\
&&& =
\frac{\sqrt{2\operatorname{Re}(\tau_{k-j-1})}}{\sqrt{2\operatorname{Re}(\tau_{k-j})}} \left(V_j + (\tau_{k-j-1}+\bar{\tau}_{k-j})(A^*-\tau_{k-j-1}I)^{-1}V_j\right). \label{adi-step}
\end{align}
This version of ADI is known as \emph{low-rank ADI (LR-ADI)} \cite{BenLP08}. After $k$ steps, $X_k = Z_kZ_k^*$, but note that in the intermediate steps $j<k$ the quantity $\begin{bmatrix}
V_1 & V_2 & \cdots & V_j
\end{bmatrix}\begin{bmatrix}
V_1 & V_2 & \cdots & V_j
\end{bmatrix}^*$ differs from $X_j$ in~\eqref{adi}. Indeed, in this factorized version the shifts appear in reversed order, starting from $\tau_{k-1}$ and ending with $\tau_0$. Nevertheless, we can use LR-ADI as an iteration in its own right: since we keep adding columns to $Z_k$ at each step, $Z_kZ_k^*$ converges monotonically to $X$. This version is particularly convenient for problems in which $A$ is large and sparse, because in each step we only need to solve $p$ linear systems with a shifted matrix $A^*-\tau I$, and we store in memory only the $n \times kp$ matrix $Z_k$. In contrast, iterations such as~\eqref{squared-smith} are not going to be efficient for problems with a large and sparse $A$, since powers of sparse matrices become dense.
The formula~\eqref{krylovadi} displays the relationship between ADI and certain Krylov methods: since the LR-ADI iterates are constructed by applying rational functions of $A^*$ iteratively to $C^*$, the columns of the LR-ADI iterate $Z_k$ lie in the so-called \emph{rational Krylov subspace}~\cite{Ruh84}
\begin{equation} \label{ratksub}
K_{q,k+1}(A^*, C^*) = \operatorname{span} \{q(A^*)^{-1}p(A^*) C^* : \text{$p$ is a polynomial of degree $ \leq k$}\},
\end{equation}
constructed with \emph{pole polynomial} $q(z)=(z-\tau_0)(z-\tau_1)\dotsm (z-\tau_{k-1})$. This suggests a different view: what is important is not the form of the ADI iteration, but rather the approximation space $K_{q,k}(A^*, C^*)$ to which its iterates belong. Once one has chosen suitable shifts and computed an orthonormal basis $U_k$ of $K_{q,k+1}(A^*, C^*)$, \eqref{lyap} can be solved via \emph{Galerkin projection}: we seek an approximate solution of the form $X_k = U_k Y_k U_k^*$, and compute $Y_k$ by solving the projected equation
\begin{align*}
0 = U_k^*(A^*X_k + X_kA + Q)U_k = (U_k^*A^*U_k) Y_k + Y_k (U_k^* A U_k) + U_k^* Q U_k,
\end{align*}
which is a smaller ($kp\times kp$) Lyapunov equation.
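A minimal sketch of this Galerkin approach, under the simplifying assumptions of a Hermitian negative definite $A$ and heuristic geometric shifts: a convenient basis of $K_{q,k+1}(A^*,C^*)$ is given by the blocks $q_j(A^*)^{-1}C^*$, where $q_j$ runs over the partial products of the pole polynomial (their spans coincide because the quotients $q/q_j$ have distinct degrees).

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(3)
n = 60
A = -np.diag(rng.uniform(1.0, 100.0, n))       # Hermitian, stable
C = rng.standard_normal((1, n))
Q = C.T @ C
X = solve_continuous_lyapunov(A.T, -Q)         # reference solution

# basis blocks: C^*, q_1(A^*)^{-1}C^*, q_2(A^*)^{-1}C^*, ...
# with partial pole polynomials q_j(z) = (z - tau_0) ... (z - tau_{j-1})
taus = np.geomspace(1.0, 100.0, 10)            # heuristic shifts in the RHP
W = C.T.copy()
blocks = [W]
for tau in taus:
    W = np.linalg.solve(A.T - tau * np.eye(n), W)
    blocks.append(W)
U, _ = np.linalg.qr(np.hstack(blocks))         # orthonormal basis

# Galerkin projection: solve the small projected Lyapunov equation
Y = solve_continuous_lyapunov((U.T @ A @ U).T, -(U.T @ Q @ U))
Xg = U @ Y @ U.T
print(np.linalg.norm(X - Xg) / np.linalg.norm(X))   # small
```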
While the approximation properties of classical Krylov subspaces are related to polynomial approximation, those of rational Krylov subspaces are related to approximation with rational functions, as in the Zolotarev problem mentioned earlier. In many cases, rational approximation has better convergence properties, with an appropriate choice of the shifts. This happens also for Lyapunov equations: algorithms based on rational Krylov subspaces~\eqref{ratksub}~\cite{DruS,DruKS} (including ADI which uses them implicitly) often display better convergence properties than equivalent ones in which $U_k$ is chosen as a basis of a regular Krylov subspace or of an extended Krylov subspace
\begin{equation} \label{extKrylov}
K_{k_1,k_2}(A^*,C^*)=\operatorname{span} \{\ell(A^*)C^* : \text{$\ell$ is a Laurent polynomial of degrees $(k_1,k_2)$}\}.
\end{equation}
Computing a basis for a rational Krylov subspace~\eqref{ratksub} is more expensive than computing one for an extended Krylov subspace~\eqref{extKrylov}: indeed, the former requires solving linear systems with $A-\tau_k I$ for many different values of $\tau_k$, while the latter reuses linear systems with the same matrix $A$. However, the faster convergence of rational Krylov methods typically more than compensates for this extra cost. Another remarkable feature is the possibility of using an adaptive shift-selection procedure based on the residual~\cite{DruS}.
See also the analysis by Benner, Li and Truhar~\cite{BenLT}, which shows that Galerkin projection can also improve on the ADI solution.
An important consequence of the convergence of these algorithms is that they can be used to bound the rank of the solution $X$. Since we can find rational functions for which~\eqref{zolotarev-objective} decreases exponentially, the formula~\eqref{adi-error} shows that $X$ can be approximated well by $X_k$, which has rank at most $k \cdot \operatorname{rank}(Q)$ in view of the decomposition~\eqref{krylovadi}. This observation has practical relevance: in many applications $p$ is very small, and the exponential decay in the singular values of $X$ is clearly visible and helps reduce the computational cost.
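This decay is easy to observe numerically, for instance with a rank-one right-hand side and a diagonal stable $A$ (an arbitrary illustrative setup):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(4)
n = 60
A = -np.diag(rng.uniform(1.0, 100.0, n))   # stable, Hermitian
C = rng.standard_normal((1, n))            # p = 1: rank-one Q = C^T C
X = solve_continuous_lyapunov(A.T, -(C.T @ C))
s = np.linalg.svd(X, compute_uv=False)
print(s[:8] / s[0])                        # roughly geometric decay
```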
\subsection{Remarks}
There is already a vast literature on linear matrix equations, especially when it comes to large and sparse problems. We refer the reader to the review by Simoncini~\cite{Sim-review} for more details. The literature typically deals with continuous-time Lyapunov equations more often than with their discrete-time counterparts; however, Cayley transformations~\eqref{lyap-to-stein} can be used to convert one into the other.
In particular, it follows from our discussion that a step of ADI can be interpreted as transforming the Lyapunov equation~\eqref{lyap} into a Stein equation~\eqref{stein} via a Cayley transform~\eqref{lyap-to-stein} and then applying one step of the Smith iteration~\eqref{smith}. Hence the squared Smith method~\eqref{squared-smith} can be interpreted as a doubling algorithm to construct the ADI iterate $X_{2^k}$ in $k$ iterations only, but with the significant limitation of using \emph{only one shift} $\tau$ in ADI.
It is known that a wise choice of shifts has a major impact on the convergence speed of these algorithms; see e.g. Güttel~\cite{Gut13}. A major challenge for doubling-type algorithms seems to be incorporating multiple shifts into this framework of repeated squaring. It seems unlikely that one can introduce more than one shift per doubling iteration, but even one shift per iteration would be an improvement, allowing one to leverage the theory of rational approximation that underlies ADI and Krylov subspace methods.
\section{Discrete-time Riccati equations} \label{sec:dare}
We consider the equation
\begin{align} \label{dare}
X &= Q + A^* X(I+GX)^{-1}A & G=G^*&\succeq 0, & Q=Q^* &\succeq 0, & A,G,Q,X &\in\mathbb{C}^{n\times n},
\end{align}
to be solved for $X = X^* \succeq 0$. This equation is known as \emph{discrete-time algebraic Riccati equation} (DARE), and arises in various problems connected to discrete-time control theory~\cite[Chapter~10]{Datta}. Variants in which $G,Q$ are not necessarily positive semidefinite also exist~\cite{RanT93,Wil71}, but we will not deal with them here to keep our presentation simpler. The non-linear term can appear in various slightly different forms: for instance, if $G = BR^{-1}B^*$ for certain matrices $B\in\mathbb{C}^{n\times m},R\in\mathbb{C}^{m\times m}$, $R=R^* \succ 0$, then one sees with some algebra that
\begin{align}
X(I+GX)^{-1} &= (I+XG)^{-1}X = X - X(I+GX)^{-1}GX \nonumber
\\&= X - XBR^{-1/2}(I+R^{-1/2}B^*XBR^{-1/2})^{-1}R^{-1/2}B^*X \nonumber
\\&= X - XB(R+B^*XB)^{-1}B^*X, \label{dare-identities}
\end{align}
and all these forms can be plugged into~\eqref{dare} to obtain a slightly different (but equivalent) equation. In particular, from the versions in the last two rows one sees that $X(I+GX)^{-1}$ is Hermitian, which is not evident at first sight. These identities become clearer if one considers the special case in which $\rho(GX)<1$: in this case, one sees that the expressions in~\eqref{dare-identities} are all different ways to rewrite the sum of the convergent series $X-XGX + XGXGX - XGXGXGX + \dots$.
Note that the required inverses exist under our assumptions, because the eigenvalues of $GX$ coincide with those of $G^{1/2}XG^{1/2} \succeq 0$.
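The identities~\eqref{dare-identities} are straightforward to confirm numerically; a short sketch with randomly generated $B$, $R \succ 0$, and $X \succeq 0$ (the sizes are arbitrary; the intermediate $R^{1/2}$ form is skipped):

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 5, 2
B = rng.standard_normal((n, m))
R0 = rng.standard_normal((m, m))
R = R0 @ R0.T + np.eye(m)                  # R = R^* > 0
G = B @ np.linalg.solve(R, B.T)            # G = B R^{-1} B^*
S0 = rng.standard_normal((n, n))
X = S0 @ S0.T                              # X = X^* >= 0
I = np.eye(n)

F1 = X @ np.linalg.inv(I + G @ X)
F2 = np.linalg.inv(I + X @ G) @ X
F3 = X - X @ np.linalg.inv(I + G @ X) @ G @ X
F4 = X - X @ B @ np.linalg.inv(R + B.T @ X @ B) @ B.T @ X
for F in (F2, F3, F4):
    print(np.linalg.norm(F - F1))          # the expressions coincide
print(np.linalg.norm(F1 - F1.T))           # and the result is Hermitian
```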
\subsection{Solution properties}
For convenience, we assume in the following that $A$ is invertible. The results in this section hold also when it is singular, but to formulate them properly one must deal with matrix pencils, infinite eigenvalues, and generalized invariant subspaces (or \emph{deflating subspaces}), a technical difficulty that we would rather avoid here since it does not add much to our presentation. For a more general pencil-based presentation, see for instance Mehrmann~\cite{meh-cayley}.
For each solution $X$ of the DARE~\eqref{dare}, it holds that
\begin{align} \label{dare-subspace}
\begin{bmatrix}
A & 0\\
-Q & I
\end{bmatrix}
\begin{bmatrix}
I\\X
\end{bmatrix}
&=
\begin{bmatrix}
I & G\\
0 & A^*
\end{bmatrix}
\begin{bmatrix}
I\\X
\end{bmatrix}
K, &
K &= (I+GX)^{-1}A.
\end{align}
Equation \eqref{dare-subspace} shows that $\operatorname{Im} \begin{bsmallmatrix}I\\X\end{bsmallmatrix}$ is an \emph{invariant subspace} of
\begin{equation} \label{symplectic}
\mathcal{S} =
\begin{bmatrix}
I & G\\
0 & A^*
\end{bmatrix}^{-1}
\begin{bmatrix}
A & 0\\
-Q & I
\end{bmatrix},
\end{equation}
i.e., $\mathcal{S}$ maps this subspace into itself. In particular, the $n$ eigenvalues (counted with multiplicity) of $K$ are a subset of the $2n$ eigenvalues of $\mathcal{S}$: this can be seen by noticing that the matrix $K$ represents (in a suitable basis) the linear operator $\mathcal{S}$ when restricted to said subspace. Conversely, if one takes a basis matrix $\begin{bsmallmatrix}
U_1\\U_2
\end{bsmallmatrix}$ for an invariant subspace of $\mathcal{S}$, and if $U_1$ is invertible, then $\begin{bsmallmatrix}
I\\U_2 U_1^{-1}
\end{bsmallmatrix}$ is another basis matrix, the equality \eqref{dare-subspace} holds, and $X=U_2U_1^{-1}$ is a solution of~\eqref{dare}. Hence,~\eqref{dare} typically has multiple solutions, each associated to a different invariant subspace. However, among them there is a preferred one, which is the one typically sought in applications.
\begin{theorem}\relax\cite[Corollary~13.1.2 and Theorem~13.1.3]{LancasterRodman} \label{thm:daresol}
Assume that $Q \succeq 0$, $G\succeq 0$ and $(A,G)$ is d-stabilizable. Then, \eqref{dare} has a (unique) solution $X_+$ such that
\begin{enumerate}
\item[(1)] $X_+ = X_+^* \succeq 0$;
\item[(2)] $X_+ \succeq X$ for any other Hermitian solution $X$;
\item[(3)] $\rho\left((I+GX_+)^{-1}A\right) \leq 1$.
\end{enumerate}
If, in addition, $(Q,A)$ is d-detectable, then $\rho\left((I+GX_+)^{-1}A\right) < 1$.
\end{theorem}
The hypotheses involve two classical definitions from control theory~\cite{Datta}: \emph{d-stabilizable} (resp. \emph{d-detectable}) means that all Jordan chains of $A$ (resp. $A^*$) that are associated with eigenvalues \emph{outside} the set $\{\abs{\lambda}<1\}$ are contained in the maximal (block) Krylov subspace $\operatorname{span}(B, AB, A^2B, \dots)$ (resp. $\operatorname{span}(C^*, A^*C^*, (A^*)^2C^*, \dots)$). We do not discuss these hypotheses or the theorem further, as its proof is not obvious; we refer the reader to Lancaster and Rodman~\cite{LancasterRodman} for details, and we just mention that these hypotheses are typically satisfied in control theory applications. This solution $X_+$ is often called \emph{stabilizing} (because of property 3) or \emph{maximal} (because of property 2).
Various properties of the matrix $\mathcal{S}$ in~\eqref{symplectic} follow from the fact that it belongs to a certain class of structured matrices. Let $J = \begin{bsmallmatrix}
0 & I_n\\
-I_n & 0
\end{bsmallmatrix}\in\mathbb{C}^{2n\times 2n}$. A matrix $M\in\mathbb{C}^{2n\times 2n}$ is called \emph{symplectic} if $M^*JM = J$, i.e., if it is unitary for the non-standard scalar product associated to $J$. The following properties hold.
\begin{lemma} \label{symplemma}
\begin{enumerate}
\item[(1)] A matrix in the form~\eqref{symplectic} is symplectic if and only if $G=G^*$, $Q=Q^*$, and the two blocks denoted $A,A^*$ in~\eqref{symplectic} are conjugate transposes of each other.
\item[(2)] If $\lambda$ is an eigenvalue of a symplectic matrix with right eigenvector $v$, then $\overline{\lambda}^{-1}$ is an eigenvalue of the same matrix with left eigenvector $v^*J$.
\item[(3)] Under the hypotheses of Theorem~\ref{thm:daresol} (including the d-detectability one at the end), the $2n$ eigenvalues of $\mathcal{S}$ are (counting multiplicities) the $n$ eigenvalues $\lambda_1,\lambda_2,\dots,\lambda_n$ of $(I+GX_+)^{-1}A$ inside the unit circle, and the $n$ eigenvalues $\overline{\lambda_i}^{-1}$, $i=1,2,\dots,n$ outside the unit circle. In particular, $\begin{bsmallmatrix}
I\\X_+
\end{bsmallmatrix}$ spans the unique invariant subspace of~$\mathcal{S}$ of dimension $n$ all of whose associated eigenvalues lie inside the unit circle.
\end{enumerate}
\end{lemma}
Parts 1 and 2 are easy to verify from the form~\eqref{symplectic} and the definition of symplectic matrix, respectively. To prove Part 3, plug $X_+$ into~\eqref{dare-subspace} and notice that $K$ has $n$ eigenvalues $\lambda_1,\lambda_2,\dots,\lambda_n$ inside the unit circle; these are also eigenvalues of $\mathcal{S}$. By Part~2, all other eigenvalues lie outside the unit circle.
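This characterization also yields a direct solution method: compute an ordered Schur form of $\mathcal{S}$ that moves the eigenvalues inside the unit circle to the leading block, and read off $X_+ = U_2U_1^{-1}$. A sketch in Python (with $B=R=I$ so that $G=I$ and SciPy's \texttt{solve\_discrete\_are} solves the same DARE; the data are arbitrary illustrative choices):

```python
import numpy as np
from scipy.linalg import schur, solve_discrete_are

rng = np.random.default_rng(6)
n = 4
A = rng.standard_normal((n, n))            # generically invertible
G = np.eye(n)                              # G = B R^{-1} B^* with B = R = I
Q = np.eye(n)
Z0 = np.zeros((n, n))

# S = [I G; 0 A^*]^{-1} [A 0; -Q I]
S = np.linalg.solve(np.block([[np.eye(n), G], [Z0, A.T]]),
                    np.block([[A, Z0], [-Q, np.eye(n)]]))
# ordered Schur form: eigenvalues inside the unit circle ('iuc') come first
T, Zs, sdim = schur(S.astype(complex), output='complex', sort='iuc')
U1, U2 = Zs[:n, :n], Zs[n:, :n]            # basis of the stable invariant subspace
X = np.real(U2 @ np.linalg.inv(U1))
Xref = solve_discrete_are(A, np.eye(n), Q, np.eye(n))
print(sdim, np.linalg.norm(X - Xref) / np.linalg.norm(Xref))
```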
\subsection{Algorithms}
The shape of~\eqref{dare} suggests the iteration
\begin{align} \label{dare-iterative}
X_{k+1} &= Q + A^* X_k (I+GX_k)^{-1}A, & X_0 &= 0.
\end{align}
This iteration can be rewritten in a form analogous to~\eqref{dare-subspace}:
\begin{align} \label{dare-subspace-iteration}
\begin{bmatrix}
A & 0\\
-Q & I
\end{bmatrix}
\begin{bmatrix}
I\\X_{k+1}
\end{bmatrix}
&=
\begin{bmatrix}
I & G\\
0 & A^*
\end{bmatrix}
\begin{bmatrix}
I\\X_k
\end{bmatrix}
K_k, &
K_k &= (I+GX_k)^{-1}A.
\end{align}
Equivalently, one can write it as
\begin{align} \label{dare-subspace-iteration-form2}
\begin{bmatrix}
U_{1k}\\
U_{2k}
\end{bmatrix} &= \mathcal{S}^{-1}\begin{bmatrix}
I\\X_k
\end{bmatrix}, & \begin{bmatrix}
I\\ X_{k+1}
\end{bmatrix} = \begin{bmatrix}
U_{1k}\\
U_{2k}
\end{bmatrix}(U_{1k})^{-1}.
\end{align}
This form highlights a connection with (inverse) subspace iteration (or orthogonal iteration), a classical generalization of the (inverse) power method to find multiple eigenvalues~\cite{Wat-book}. Indeed, we start from the $2n\times n$ matrix $\begin{bsmallmatrix}
I\\X_0
\end{bsmallmatrix} = \begin{bsmallmatrix}
I\\0
\end{bsmallmatrix}$, and at each step we first multiply it by $\mathcal{S}^{-1}$, and then we normalize the result by imposing that the first block is $I$. In inverse subspace iteration, we would make the same multiplication, but then we would normalize the result by taking the $Q$ factor of its QR factorization, instead.
It follows from classical convergence results for the subspace iteration (see e.g. Watkins~\cite[Section~5.1]{Wat-book}) that \eqref{dare-subspace-iteration-form2}~converges to the invariant subspace associated to the $n$ largest eigenvalues (in modulus) of $\mathcal{S}^{-1}$, i.e., the $n$ smallest eigenvalues of $\mathcal{S}$.
In view of Part~3 of Lemma~\ref{symplemma}, this subspace is precisely $ \operatorname{Im}\begin{bsmallmatrix}
I\\X_+
\end{bsmallmatrix}$. Note that this unusual normalization is not problematic, since at each step of the iteration (and in the limit) the subspace does admit a basis in which the first $n$ rows form an identity matrix. This argument shows the convergence of~\eqref{dare-iterative} to the maximal solution, under the d-detectability condition mentioned in Theorem~\ref{thm:daresol}, which ensures that there are no eigenvalues on the unit circle.
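A minimal sketch of the fixed-point iteration~\eqref{dare-iterative}, compared against SciPy's solver (with $G=Q=I$, an arbitrary illustrative choice; the iteration count is generous since the convergence is only linear):

```python
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(7)
n = 4
A = rng.standard_normal((n, n))
G = np.eye(n)                              # G = B R^{-1} B^* with B = R = I
Q = np.eye(n)
I = np.eye(n)

Xk = np.zeros((n, n))
for _ in range(2000):
    # X_{k+1} = Q + A^* X_k (I + G X_k)^{-1} A
    Xk = Q + A.T @ Xk @ np.linalg.solve(I + G @ Xk, A)
Xref = solve_discrete_are(A, np.eye(n), Q, np.eye(n))
print(np.linalg.norm(Xk - Xref) / np.linalg.norm(Xref))
```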
How would one construct a `squaring' variant of this method? Note that $\begin{bsmallmatrix}
U_{1k}\\
U_{2k}
\end{bsmallmatrix} = \mathcal{S}^{-k} \begin{bsmallmatrix}
I\\0
\end{bsmallmatrix}$; hence one can think of computing $\mathcal{S}^{-2^k}$ by iterated squaring to obtain $X_{2^k}$ in $k$ steps. However, this idea would be problematic numerically, because it amounts to delaying the normalization in subspace iteration until the very last step. The key to solving this issue is the LU-like decomposition obtained from~\eqref{symplectic}
\[
\mathcal{S}^{-1} = \begin{bmatrix}
A & 0\\
-Q & I
\end{bmatrix}^{-1}
\begin{bmatrix}
I & G\\
0 & A^*
\end{bmatrix}.
\]
We seek an analogous decomposition for the powers of $\mathcal{S}^{-1}$, i.e.,
\begin{equation} \label{ssffact}
\mathcal{S}^{-2^k} = \begin{bmatrix}
A_k & 0\\
-Q_k & I
\end{bmatrix}^{-1}
\begin{bmatrix}
I & G_k\\
0 & A_k^*
\end{bmatrix}.
\end{equation}
The following result shows how to compute this factorization with just one matrix inversion.
\begin{lemma}\relax\cite{PolR} \label{lem:bmf}
Let $M_1,M_2,N_1,N_2\in\mathbb{C}^{2n\times n}$. The factorization
\begin{align} \label{bmf-factorization}
\begin{bmatrix}
M_1 & M_2
\end{bmatrix}^{-1} \begin{bmatrix}
N_1 & N_2
\end{bmatrix} &= \begin{bmatrix}
A_{11} & 0\\
A_{21} & I_n
\end{bmatrix}^{-1}
\begin{bmatrix}
I_n & A_{12}\\
0 & A_{22}
\end{bmatrix},
&
A_{11},A_{12},A_{21},A_{22} &\in \mathbb{C}^{n\times n}
\end{align}
exists if and only if $\begin{bmatrix}
N_1 & M_2
\end{bmatrix}$ is invertible, and in that case its blocks $A_{ij}$ are given by
\[
\begin{bmatrix}
A_{11} & A_{12}\\
A_{21} & A_{22}
\end{bmatrix}
=
\begin{bmatrix}
N_1 & M_2
\end{bmatrix}^{-1}
\begin{bmatrix}
M_1 & N_2
\end{bmatrix}.
\]
\end{lemma}
A proof follows from noticing that the factorization~\eqref{bmf-factorization} is equivalent to the existence of a matrix $K\in\mathbb{C}^{2n\times 2n}$ such that
\[
K
\begin{bmatrix}
M_1 & M_2 & N_1 & N_2
\end{bmatrix} =
\begin{bmatrix}
A_{11} & 0 & I_n & A_{12}\\
A_{21} & I_n & 0 & A_{22}
\end{bmatrix},
\]
and rearranging block columns in this expression.
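The lemma is easy to test on random data; the sketch below builds the blocks $A_{ij}$ from the stated formula and compares the two sides of~\eqref{bmf-factorization} (sizes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(8)
n = 3
M1, M2, N1, N2 = (rng.standard_normal((2 * n, n)) for _ in range(4))

# blocks [A11 A12; A21 A22] = [N1 M2]^{-1} [M1 N2]
Aij = np.linalg.solve(np.hstack([N1, M2]), np.hstack([M1, N2]))
A11, A12 = Aij[:n, :n], Aij[:n, n:]
A21, A22 = Aij[n:, :n], Aij[n:, n:]

L = np.block([[A11, np.zeros((n, n))], [A21, np.eye(n)]])
R = np.block([[np.eye(n), A12], [np.zeros((n, n)), A22]])
lhs = np.linalg.solve(np.hstack([M1, M2]), np.hstack([N1, N2]))
rhs = np.linalg.solve(L, R)
print(np.linalg.norm(lhs - rhs))           # the two factorizations agree
```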
One can apply Lemma~\ref{lem:bmf} (with $[M_1\, M_2] = I$ and $[N_1\, N_2] = \begin{bsmallmatrix}
I & G_k\\
0 & A_k^*
\end{bsmallmatrix}
\begin{bsmallmatrix}
A_k & 0\\
-Q_k & I
\end{bsmallmatrix}^{-1}$) to find a factorization of the term in parentheses in
\begin{equation} \label{sdaderiv}
\mathcal{S}^{-2^{k+1}} = \mathcal{S}^{-2^k}\mathcal{S}^{-2^k} = \begin{bmatrix}
A_k & 0\\
-Q_k & I
\end{bmatrix}^{-1}
\left(
\begin{bmatrix}
I & G_k\\
0 & A_k^*
\end{bmatrix}
\begin{bmatrix}
A_k & 0\\
-Q_k & I
\end{bmatrix}^{-1}
\right)
\begin{bmatrix}
I & G_k\\
0 & A_k^*
\end{bmatrix},
\end{equation}
and use it to construct a decomposition~\eqref{ssffact} of $\mathcal{S}^{-2^{k+1}}$ starting from that of $\mathcal{S}^{-2^k}$. The fact that the involved matrices are symplectic can be used to prove that the relations $A_{11}=A_{22}^*$, $A_{21}=A_{21}^*$, $A_{12}=A_{12}^*$ will hold for the computed coefficients. We omit the details of this computation; what matters are the resulting formulas
\begin{subequations} \label{sda-formulas}
\begin{align}
A_{k+1} &= A_k(I+G_k Q_k)^{-1}A_k,\\
G_{k+1} &= G_k + A_k G_k(I+Q_kG_k)^{-1}A_k^*,\\
Q_{k+1} &= Q_k + A_k^*(I+Q_kG_k)^{-1}Q_kA_k, \label{sda-formulasQ}
\end{align}
with $A_0 = A, Q_0 = Q, G_0 = G$.
\end{subequations}
These formulas are all we need to formulate a `squaring' version of~\eqref{dare-iterative}: for each $k$ it holds that
\[
\mathcal{S}^{-2^k} \begin{bmatrix}
I_n\\0
\end{bmatrix} = \begin{bmatrix}
I\\
Q_k
\end{bmatrix}A_k^{-1},
\]
hence $Q_k = X_{2^k}$, the $2^k$th iterate of~\eqref{dare-iterative}. It is not difficult to show by induction that $0 \preceq Q_0 \preceq Q_1 \preceq \dots \preceq Q_k \preceq \dots$, and we have already argued above that $Q_k = X_{2^k} \to X_+$. In view of the interpretation as subspace iteration, the convergence speed of~\eqref{dare-iterative} is linear and proportional to the ratio between the absolute values of the $(n+1)$st and $n$th eigenvalue of $\mathcal{S}$, i.e., to $\sigma/\sigma^{-1} = \sigma^2$, where $\sigma := \rho((I+GX_+)^{-1}A) < 1$. The convergence speed of its doubling variant~\eqref{sda-formulas} is then quadratic with the same ratio~\cite{doublingbook}.
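The resulting doubling algorithm takes only a few lines; a sketch of~\eqref{sda-formulas} (again with $B=R=I$, so that SciPy's \texttt{solve\_discrete\_are} solves the same DARE; all data are arbitrary illustrative choices):

```python
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(9)
n = 4
A = rng.standard_normal((n, n))
G = np.eye(n)                              # G = B R^{-1} B^* with B = R = I
Q = np.eye(n)
I = np.eye(n)

Ak, Gk, Qk = A.copy(), G.copy(), Q.copy()
for _ in range(30):                        # quadratic convergence: 30 is plenty
    IGQ = np.linalg.inv(I + Gk @ Qk)
    IQG = np.linalg.inv(I + Qk @ Gk)
    Ak, Gk, Qk = (Ak @ IGQ @ Ak,           # the tuple uses the *old* iterates
                  Gk + Ak @ Gk @ IQG @ Ak.T,
                  Qk + Ak.T @ IQG @ Qk @ Ak)
Xref = solve_discrete_are(A, np.eye(n), Q, np.eye(n))
print(np.linalg.norm(Qk - Xref) / np.linalg.norm(Xref))   # Q_k -> X_+
```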
The iteration~\eqref{sda-formulas}, which goes under the name of \emph{structure-preserving doubling algorithm}, has been used to solve DAREs and related equations by various authors, starting from Chu, Fan, Lin and Wang~\cite{chu-dare}, but it also appears much earlier: for instance, Anderson~\cite{And78} gave it an explicit system-theoretical meaning as constructing an equivalent system with the same DARE solution. The reader may find in the literature slightly different versions of~\eqref{sda-formulas}, which are equivalent to them thanks to the identities~\eqref{dare-identities}.
More general versions of the factorization~\eqref{ssffact} and of the iteration~\eqref{sda-formulas}, which guarantee existence and boundedness of the iterates under much weaker conditions, have been explored by Mehrmann and~Poloni~\cite{MehP12}. Kuo, Lin and Shieh~\cite{KuoLS} studied the theoretical properties of the factorization~\eqref{ssffact} for general powers $\mathcal{S}^t$, $t\in\mathbb{R}$, drawing a parallel with the so-called \emph{Toda flow} for the QR algorithm.
The limit of the monotonic sequence $0 \preceq G_0 \preceq G_1 \preceq G_2 \preceq \dots$ also has a meaning: it is the maximal solution $Y_+$ of the so-called \emph{dual equation}
\begin{equation} \label{dare-dual}
Y = G + AY(I+QY)^{-1}A^*,
\end{equation}
which is obtained swapping $Q$ with $G$ and $A$ with $A^*$ in~\eqref{dare}. Indeed, SDA for the DARE~\eqref{dare-dual} is obtained by
swapping $Q$ with $G$ and $A$ with $A^*$ in~\eqref{sda-formulas}, but this transformation leaves the formulas unchanged. The dual equation~\eqref{dare-dual} appears sometimes in applications together with~\eqref{dare}. From the point of view of linear algebra, the most interesting feature of its solution $Y_+$ is that $\begin{bsmallmatrix}
-Y_+\\
I
\end{bsmallmatrix}$ is a basis matrix for the invariant subspace associated to the other eigenvalues of $\mathcal{S}$, those outside the unit circle. Indeed, \eqref{ssffact}~gives
\[
\mathcal{S}^{2^k} \begin{bmatrix}
0\\I
\end{bmatrix} = \begin{bmatrix}
-G_k\\I_n
\end{bmatrix}A_k^{-*},
\]
so $\begin{bsmallmatrix}
-Y_+\\
I
\end{bsmallmatrix}$ is the limit of subspace iteration applied to $\mathcal{S}$ instead of $\mathcal{S}^{-1}$, with initial value $\begin{bsmallmatrix}
0\\I
\end{bsmallmatrix}$.
In particular, putting all pieces together, the following \emph{Wiener-Hopf factorization} holds
\begin{equation} \label{WHfact}
\mathcal{S} = \begin{bmatrix}
-Y_+ & I\\
I & X_+
\end{bmatrix}
\begin{bmatrix}
\left((I+QY_+)^{-1}A^*\right)^{-1}& 0\\
0& (I+GX_+)^{-1}A
\end{bmatrix}
\begin{bmatrix}
-Y_+ & I\\
I & X_+
\end{bmatrix}^{-1}.
\end{equation}
This factorization relates explicitly the solutions $X_+, Y_+$ to a block diagonalization of $\mathcal{S}$.
An interesting limit case is the one in which only the first part of Theorem~\ref{thm:daresol} holds, $(Q,A)$ is not d-detectable, and the solution $X_+$ exists but $\rho((I+GX_+)^{-1}A) = 1$. In this case, $\mathcal{S}$ has eigenvalues on the unit circle, and it can be proved that all its Jordan blocks relative to these eigenvalues have even size: one can use a result in Lancaster and Rodman~\cite[Theorem~12.2.3]{LancasterRodman}, after taking a factorization $G=BR^{-1}B^*$ with $R\succ 0$ and using another result in the same book~\cite[Theorem~12.2.1]{LancasterRodman} to show that the hypothesis $\Psi(\eta)\succ 0$ holds.
It turns out that in this case the two iterations still converge, although~\eqref{dare-iterative} becomes sublinear and~\eqref{sda-formulas} becomes linear with rate $1/2$. This is shown by Chiang, Chu, Guo, Huang, Lin and Xu~\cite{Chi09-critical-doubling}; the reader can recognize that the key step there is the study of subspace iteration in the presence of Jordan blocks of even size.
Note that the case in which the assumptions $Q\succeq 0, G \succeq 0$ do not hold is trickier, because there are examples where~\eqref{dare} does not have a stabilizing solution and $\mathcal{S}$ has Jordan blocks of odd size with eigenvalues on the unit circle: an explicit example is
\begin{align} \label{example-dare-eigenvalues}
A &= \begin{bmatrix}
1 & 3\\ 0 & 1
\end{bmatrix}, & G &= \begin{bmatrix}
1 & 1\\
1 & 1
\end{bmatrix}, &
Q &= \begin{bmatrix}
1 & 0\\
0 & -10
\end{bmatrix},
\end{align}
which produces a matrix $\mathcal{S}$ with two simple eigenvalues (Jordan blocks of size $1$) $\lambda_{\pm} \approx 0.598 \pm 0.801i$ with $\abs{\lambda_{\pm}}=1$. Surprisingly, eigenvalues on the unit circle are a generic phenomenon for symplectic matrices, preserved under perturbations: a small perturbation of the matrices in~\eqref{example-dare-eigenvalues} will produce a perturbed $\tilde{\mathcal{S}}$ with two simple eigenvalues $\tilde{\lambda}_{\pm}$ that satisfy exactly $\abs{\tilde{\lambda}_{\pm}}=1$, because otherwise Part~2 of Lemma~\ref{symplemma} would be violated.
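One can observe this numerically by forming $\mathcal{S}$ for the data in~\eqref{example-dare-eigenvalues} and inspecting its eigenvalue moduli:

```python
import numpy as np

A = np.array([[1.0, 3.0], [0.0, 1.0]])
G = np.array([[1.0, 1.0], [1.0, 1.0]])
Q = np.array([[1.0, 0.0], [0.0, -10.0]])
Z0 = np.zeros((2, 2))
# S = [I G; 0 A^*]^{-1} [A 0; -Q I]
S = np.linalg.solve(np.block([[np.eye(2), G], [Z0, A.T]]),
                    np.block([[A, Z0], [-Q, np.eye(2)]]))
lam = np.linalg.eigvals(S)
print(np.sort(np.abs(lam)))   # the eigenvalues come in pairs (lambda, 1/conj(lambda));
                              # two of them lie on the unit circle
```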
\section{Continuous-time Riccati equations} \label{sec:care}
We consider the equation
\begin{align} \label{care}
Q + A^*X + XA - XGX &= 0, & G &= G^* \succeq 0, & Q &= Q^* \succeq 0, & A,G,Q,X &\in \mathbb{C}^{n\times n},
\end{align}
to be solved for $X = X^* \succeq 0$. This equation is known as \emph{continuous-time algebraic Riccati equation} (CARE), and arises in various problems connected to continuous-time control theory~\cite[Chapter~10]{Datta}. Despite the very different form, this equation is a natural analogue of the DARE~\eqref{dare}, exactly like Stein and Lyapunov equations are related to each other.
\subsection{Solution properties}
For each solution $X$ of the CARE, it holds
\begin{align} \label{care-subspace}
\begin{bmatrix}
A & -G\\
-Q & -A^*
\end{bmatrix}
\begin{bmatrix}
I\\X
\end{bmatrix}
&=
\begin{bmatrix}
I\\X
\end{bmatrix}
M,
&
M &= A-GX.
\end{align}
Hence, $\begin{bsmallmatrix}
I\\X
\end{bsmallmatrix}$ is an invariant subspace of
\begin{equation} \label{hamiltonian}
\mathcal{H} = \begin{bmatrix}
A & -G\\
-Q & -A^*
\end{bmatrix}.
\end{equation}
Like in the discrete-time case, this relation implies that the $n$ eigenvalues of $M$ are a subset of those of $\mathcal{H}$; moreover, we can construct a solution $X= U_2 U_1^{-1}$ to~\eqref{care} from an invariant subspace $\operatorname{Im}\begin{bsmallmatrix}
U_1\\ U_2
\end{bsmallmatrix}$, whenever $U_1$ is invertible. Among all solutions, there is a preferred one.
\begin{theorem}\relax\cite[Theorems~7.9.1, 9.1.2 and~9.1.5]{LancasterRodman} \label{thm:care-maxsol}
Assume that $Q \succeq 0$, $G\succeq 0$, and $(A,G)$ is c-stabilizable. Then, \eqref{care}~has a (unique) solution $X_+$ such that
\begin{enumerate}
\item[(1)] $X_+ = X_+^* \succeq 0$;
\item[(2)] $X_+ \succeq X$ for any other Hermitian solution $X$;
\item[(3)] $\Lambda(A-GX_+) \subset \overline{\mathrm{LHP}}$.
\end{enumerate}
If, in addition, $(Q,A)$ is c-detectable, then $\Lambda(A-GX_+) \subset \mathrm{LHP}$.
\end{theorem}
\emph{C-stabilizable} and \emph{c-detectable} are defined analogously to their discrete-time counterparts, with the only difference that the domain $\{\abs{\lambda}<1\}$ is replaced by the left half-plane $\mathrm{LHP}$. Again, we do not comment on this theorem, whose proof is not obvious, and refer the reader to Lancaster and Rodman~\cite{LancasterRodman}.
Exactly as in the discrete-time case, various interesting properties of the matrix $\mathcal{H}$ in~\eqref{hamiltonian} follow from the fact that it belongs to a certain class of structured matrices. A matrix $M \in \mathbb{C}^{2n\times 2n}$ is called \emph{Hamiltonian} if $-M^*J = JM$, i.e., if it is skew-self-adjoint with respect to the non-standard scalar product induced by $J$. The following result holds.
\begin{lemma}
\begin{enumerate}
\item[(1)] A matrix in the form~\eqref{hamiltonian} is Hamiltonian if and only if $G=G^*$, $Q=Q^*$, and the two blocks denoted $A,A^*$ in~\eqref{hamiltonian} are conjugate transposes of each other.
\item[(2)] If $\lambda$ is an eigenvalue of a Hamiltonian matrix with right eigenvector $v$, then $-\overline{\lambda}$ is an eigenvalue of the same matrix with left eigenvector $v^*J$.
\item[(3)] If the hypotheses of Theorem~\ref{thm:care-maxsol} hold (including the c-detectability one), then the $2n$ eigenvalues of $\mathcal{H}$ are (counting multiplicities) the $n$ eigenvalues $\lambda_1,\dots,\lambda_n$ of $A-GX_+$ in the left half-plane, and the $n$ eigenvalues $-\overline{\lambda_i}$, $i=1,\dots,n$ in the right half-plane. In particular, $\begin{bsmallmatrix}
I\\X_+
\end{bsmallmatrix}$ spans the unique invariant subspace of~$\mathcal{H}$ of dimension $n$ all of whose associated eigenvalues lie in the left half-plane.
\end{enumerate}
\end{lemma}
Parts~1 and~2 are easy to verify from the block decomposition~\eqref{hamiltonian} and the definition of a Hamiltonian matrix. To prove Part 3, plug $X_+$ into~\eqref{care-subspace} and notice that $M$ has $n$ eigenvalues $\lambda_1,\lambda_2,\dots,\lambda_n$ in the left half-plane; these are also eigenvalues of $\mathcal{H}$. By Part~2, all other eigenvalues lie in the right half-plane.
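As a quick numerical sanity check of Parts~1 and~2 (a sketch with arbitrary illustrative data, not from the cited sources), one can verify the identity $-\mathcal{H}^*J = J\mathcal{H}$ and the eigenvalue pairing $(\lambda, -\overline{\lambda})$ directly:

```python
import numpy as np

n = 2
# illustrative data: any G = G^T and Q = Q^T give a Hamiltonian H
A = np.array([[1., 2.], [0., -1.]])
G = np.array([[1., 0.], [0., 2.]])
Q = np.array([[2., 1.], [1., 2.]])
H = np.block([[A, -G], [-Q, -A.T]])
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])

hamiltonian_check = np.linalg.norm(-H.T @ J - J @ H)   # should be zero
lam = np.linalg.eigvals(H)
# for each eigenvalue lambda, -conj(lambda) should also be an eigenvalue,
# i.e. for each lam[i] some lam[j] satisfies lam[i] + conj(lam[j]) = 0
pairing_error = np.abs(lam[:, None] + lam.conj()[None, :]).min(axis=1).max()
```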
The similarities between~\eqref{care-subspace} and~\eqref{dare-subspace} suggest that CAREs can be turned into DAREs (and \emph{vice versa}) by converting the two associated invariant subspace problems; the ingredient to turn one into the other is the Cayley transform.
\begin{lemma} \label{lem:caretodare}
Let $A,G=G^*, Q=Q^*$ be given, and take $\tau > 0$. Set
\begin{equation} \label{caretodare}
\begin{bmatrix}
A_d & G_d\\
-Q_d & A_d^*
\end{bmatrix}
=
\begin{bmatrix}
A-\tau I & -G\\
Q & A^* -\tau I
\end{bmatrix}^{-1}
\begin{bmatrix}
A+\tau I & -G\\
Q & A^* + \tau I
\end{bmatrix}
= I + 2\tau \begin{bmatrix}
A - \tau I & -G\\
Q & A^* - \tau I
\end{bmatrix}^{-1}.
\end{equation}
Assume that the inverse exists, and that $A_d$ is invertible. Then, the DARE with coefficients $A_d,G_d,Q_d$ has the same solutions as the CARE with coefficients $A,G,Q$ (and, in particular, the same maximal / stabilizing solution).
\end{lemma}
These formulas~\eqref{caretodare} follow from constructing $\mathcal{S}: = c(\mathcal{H}) = (\mathcal{H}-\tau I)^{-1}(\mathcal{H}+\tau I)$, and then applying Lemma~\ref{lem:bmf} to construct a factorization
\[
\mathcal{S} = \begin{bmatrix}
I & G_d\\
0 & A_d^*
\end{bmatrix}^{-1}
\begin{bmatrix}
A_d & 0\\
-Q_d & I
\end{bmatrix}.
\]
The matrix $\mathcal{S}$ that we have constructed has the same invariant subspaces as $\mathcal{H}$ because $c(\cdot)$ is an invertible rational function: indeed, from \eqref{care-subspace}, it follows that
\begin{align*}
\mathcal{S}\begin{bmatrix}
I\\X
\end{bmatrix} = c(\mathcal{H})
\begin{bmatrix}
I\\X
\end{bmatrix}
&=
\begin{bmatrix}
I\\X
\end{bmatrix}
c(M),
&
M &= A-GX.
\end{align*}
This relation coincides with~\eqref{dare-subspace}, and shows that a solution $X$ of the CARE is also a solution of the DARE constructed with~\eqref{caretodare}.
Thanks to Lemma~\ref{lem:cayley}, $M$ has all its eigenvalues in~$\mathrm{LHP}$ if and only if $c(M)$ has all its eigenvalues inside the unit circle, so the stabilizing property of the solution is preserved.
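The following sketch (illustrative data of our choosing; real case, so $A_d^* = A_d^T$) builds the blocks of~\eqref{caretodare} numerically and checks that the stabilizing CARE solution also satisfies the resulting DARE $X = Q_d + A_d^* X (I + G_d X)^{-1} A_d$:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

n = 2
A = np.array([[0., 1.], [0., 0.]])
G = np.array([[0., 0.], [0., 1.]])   # G = B B^T with B = [0, 1]^T
Q = np.eye(n)
X = solve_continuous_are(A, np.array([[0.], [1.]]), Q, np.eye(1))

tau = 1.0
K = np.block([[A - tau*np.eye(n), -G], [Q, A.T - tau*np.eye(n)]])
L = np.block([[A + tau*np.eye(n), -G], [Q, A.T + tau*np.eye(n)]])
W = np.linalg.solve(K, L)            # = [[Ad, Gd], [-Qd, Ad^T]]
Ad, Gd, Qd = W[:n, :n], W[:n, n:], -W[n:, :n]

structure_error = np.linalg.norm(W[n:, n:] - Ad.T)
# the CARE solution X should satisfy X = Qd + Ad^T X (I + Gd X)^{-1} Ad
dare_residual = X - Qd - Ad.T @ X @ np.linalg.solve(np.eye(n) + Gd @ X, Ad)
```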
Methods to transform DAREs into CAREs and vice versa based on the Cayley transform appear frequently in the literature starting from the 1960s; see for instance Mehrmann~\cite{meh-cayley}, a paper which explores these transformations and mentions the presence of many ``folklore results'' based on the Cayley transforms, relating the properties of the two associated equations.
Even if we restrict ourselves to the assumption that $A_d$ is invertible when treating the DARE, it is important to remark that Lemma~\ref{lem:caretodare} does not generalize completely to the case when $A_d$ is singular~\cite[Section~6]{meh-cayley}. By considering the poles of $c(\mathcal{H})$ as a function of $\tau$, one sees that $A_d$ is singular if and only if $\tau \in \Lambda(\mathcal{H})$. When this happens, even if $\mathcal{S}$ `exists' in a suitable sense as an equivalent matrix pencil, an invariant subspace of $\mathcal{H}$ for which $\tau \in \Lambda(M)$ cannot be converted to the form~\eqref{dare-subspace}, but only to the subtly weaker form
\begin{align}
\begin{bmatrix}
A_d & 0\\
-Q_d & I
\end{bmatrix}
\begin{bmatrix}
I\\X
\end{bmatrix}(M-\tau I)
&=
\begin{bmatrix}
I & G_d\\
0 & A_d^*
\end{bmatrix}
\begin{bmatrix}
I\\X
\end{bmatrix}
(M+\tau I), &
M &= A-GX,
\end{align}
with an additional singular matrix $M-\tau I$ on the left-hand side. Thus we cannot write the equality~\eqref{dare-subspace}, which identifies $X$ as a solution of the DARE: hence the DARE has fewer solutions than the CARE. The stabilizing solution is always preserved by this transformation, though, because $\Lambda(M) \subset \mathrm{LHP}$ cannot contain $\tau > 0$.
\subsection{Algorithms}
In view of the relation between DAREs and CAREs that we have just outlined, a natural algorithm is to use the formulas~\eqref{caretodare} to convert~\eqref{care} into an equivalent~\eqref{dare} and solve it using~\eqref{sda-formulas}. This algorithm was suggested by Chu, Fan and Lin~\cite{chu-care} as a doubling algorithm for CAREs; it inherits all the nice convergence properties of SDA for DAREs, among them the fact that it also works (at a reduced, linear speed) on problems in which $A-GX_+$ has eigenvalues on the imaginary axis~\cite{Chi09-critical-doubling}.
While SDA works well in general, a delicate point is the choice of the shift value $\tau$. In principle almost every choice of $\tau$ works, since $\mathcal{H}-\tau I$ is singular only for at most $2n$ values of $\tau$, but in practice choosing the wrong value of $\tau$ may affect accuracy negatively. Dangers arise not from singularity of $\mathcal{H}-\tau I$ (which is actually harmless with a matrix pencil formulation), but from singularity in~\eqref{caretodare}, and also from taking $\tau$ too large or too small by orders of magnitude. A heuristic approach based on golden section search has been suggested~\cite{chu-care}.
In practice, one would prefer to avoid the Cayley transform or at least delay it as much as possible; this observation leads to another popular algorithm for CAREs. We start from the following observation.
\begin{lemma}
If $\mathcal{S} = c(\mathcal{H})$ (with a parameter $\tau \in \mathbb{R}$), then
\begin{equation} \label{squaringsignrelation}
\mathcal{S}^2 = c\left(\frac{1}{2}\left(\mathcal{H} + \tau^2\mathcal{H}^{-1}\right)\right).
\end{equation}
\end{lemma}
This identity can be verified directly, using the fact that rational functions of the same matrix $\mathcal{H}$ all commute with each other.
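For instance, the identity can be checked numerically in a few lines (with illustrative data of our choosing and $\tau = 1$):

```python
import numpy as np

A = np.array([[0., 1.], [0., 0.]])
G = np.array([[0., 0.], [0., 1.]])
Q = np.eye(2)
H = np.block([[A, -G], [-Q, -A.T]])
I4 = np.eye(4)
tau = 1.0

def cayley(M):
    # c(M) = (M - tau I)^{-1} (M + tau I)
    return np.linalg.solve(M - tau * I4, M + tau * I4)

S = cayley(H)
H1 = 0.5 * (H + tau**2 * np.linalg.inv(H))
# S^2 should equal c( (H + tau^2 H^{-1}) / 2 )
squaring_error = np.linalg.norm(S @ S - cayley(H1))
```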
Applying this identity repeatedly, we get $\mathcal{S}^{2^k} = c(\mathcal{H}_k)$, where
\begin{align} \label{sign}
\mathcal{H}_{k+1} &= \frac12 \left(\mathcal{H}_k + \tau^2\mathcal{H}_k^{-1} \right), & \mathcal{H}_0 &= \mathcal{H}.
\end{align}
Hence one can hold off the Cayley transform and just compute the sequence $\mathcal{H}_k$ directly, starting from~\eqref{hamiltonian}. This constructs a sequence which implicitly represents $\mathcal{S}^{2^k}$.
Constructing the matrices $\mathcal{H}_k$ is numerically much less troublesome than constructing explicitly $\mathcal{S}^{2^k}$ or its inverse $\mathcal{S}^{-2^k}$. Indeed, it is instructive to consider the behaviour of these iterations in a basis in which $\mathcal{H}$ is diagonal (when such a basis exists). Let $\lambda$ be a generic diagonal entry (i.e., an eigenvalue) of $\mathcal{H}$. Then, $\mathcal{S}=c(\mathcal{H})$ has the corresponding eigenvalue $c(\lambda)$, and $\mathcal{S}^{2^k}$ has the eigenvalue $c(\lambda)^{2^k}$. If $\lambda \in \mathrm{LHP}$, then $\abs{c(\lambda)}<1$ (Lemma~\ref{lem:cayley}), and hence $c(\lambda)^{2^k} \to 0$ when $k\to\infty$. Similarly, if $\lambda$ is in the right half-plane, then $\abs{c(\lambda)}>1$ and $c(\lambda)^{2^k} \to \infty$. Thus $\mathcal{S}^{2^k}$ (as well as its inverse) has some eigenvalues that converge to zero, and some that diverge to infinity, as $k$ grows. This is one of the reasons why it is preferable to keep $\mathcal{S}$ in its factored form~\eqref{ssffact}. On the other hand, the eigenvalues of $\mathcal{H}_{k}$ converge to the finite values $c^{-1}(0) = -\tau$ and $c^{-1}(\infty) = \tau$, so this argument suggests that computing the matrices $\mathcal{H}_k$ directly is numerically feasible.
The \emph{sign function method}~\cite{Rob80,denbea,GarL} to solve CAREs consists of computing the iteration~\eqref{sign} up to convergence, obtaining a matrix $\mathcal{H}_\infty = \lim_{k\to\infty} \mathcal{H}_k$ that has numerically $n$ eigenvalues equal to $\tau\in \mathrm{RHP}$ and $n$ equal to $-\tau\in\mathrm{LHP}$, and then computing
\begin{align} \label{sign-final}
\operatorname{Im} \begin{bmatrix}
U_1\\
U_2
\end{bmatrix} &= \ker (\mathcal{H}_\infty + \tau I), & U_1,U_2& \in\mathbb{C}^{n\times n}, & X_+ &= U_2U_1^{-1}.
\end{align}
The method takes its name from the fact that the limit matrix $\mathcal{H}_{\infty}$ (for $\tau=1$) is the so-called \emph{matrix sign function} of $\mathcal{H}$. We refer the reader to its analysis in Higham~\cite[Chapter~5]{higham-functions}, in which one clearly sees that one of the main ingredients is the formula~\eqref{squaringsignrelation} relating the iteration to repeated squaring.
Scaling is an important detail that deserves a discussion. Replacing $\mathcal{H}$ with a positive multiple of itself corresponds to multiplying each term of~\eqref{care} by a positive quantity; this operation does not change the solutions of the equation, nor the maximal / stabilizing properties of $X_+$. In SDA, scaling is limited to choosing the parameter of the initial Cayley transform, but in the sign method we have more freedom: we can take a different $\tau_k$ at each step of~\eqref{sign}. We remark that scaling for the sign method is usually presented in the literature in a slightly different form: one replaces~\eqref{sign} with
\begin{equation} \label{sign2}
\mathcal{H}_{k+1} = \frac12\left((\tau_k^{-1} \mathcal{H}_k) + (\tau_k^{-1} \mathcal{H}_k)^{-1}\right).
\end{equation}
The two forms are essentially equivalent, as they return iterates $\mathcal{H}_k$ that differ only by a multiplicative factor, which is then irrelevant in the final step~\eqref{sign-final}. Irrespective of the formulation, the main point is that a judicious choice of scaling can speed up the convergence of~\eqref{sign} or~\eqref{sign2}. A cheap and effective choice, \emph{determinantal scaling} $\tau_k = \abs{\det \mathcal{H}_k}^{\frac{1}{2n}}$, has been suggested by Byers~\cite{Bye87}. Other related choices of scaling and their performance have been discussed by Higham~\cite[Chapter~5]{higham-functions} and Kenney and Laub~\cite{KenL}. The general message is that scaling has a great impact in the first steps of the iteration, where it can greatly improve convergence, but once the residual starts to decrease its effect becomes negligible.
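A compact sketch of the scaled iteration~\eqref{sign2} followed by the extraction step~\eqref{sign-final} is given below (illustrative data of our choosing; we implement a determinantal-type scaling as $\tau_k = \abs{\det \mathcal{H}_k}^{1/(2n)}$, the geometric mean of the eigenvalue moduli, so that the limit eigenvalues are normalized to $\pm 1$):

```python
import numpy as np
from scipy.linalg import null_space, solve_continuous_are

n = 2
A = np.array([[0., 1.], [0., 0.]])
G = np.array([[0., 0.], [0., 1.]])
Q = np.eye(n)
H = np.block([[A, -G], [-Q, -A.T]])

for _ in range(50):                    # scaled sign iteration
    tau = abs(np.linalg.det(H)) ** (1.0 / (2 * n))
    Hs = H / tau
    H_next = 0.5 * (Hs + np.linalg.inv(Hs))
    delta = np.linalg.norm(H_next - H)
    H = H_next
    if delta < 1e-13 * np.linalg.norm(H):
        break

# stable invariant subspace = kernel of H_inf + I (limit eigenvalues +-1)
U = null_space(H + np.eye(2 * n), rcond=1e-8)
U1, U2 = U[:n], U[n:]
X = U2 @ np.linalg.inv(U1)             # stabilizing Riccati solution
```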
Scaling also has an impact on stability; the stability of the sign iteration as a method to compute invariant subspaces (and hence ultimately Riccati solutions) has been studied by Bai and Demmel~\cite{BaiD98} and Byers, He and Mehrmann~\cite{ByeHM97}. The two interesting messages are that (expectedly) the sign function method suffers when $\mathcal{H}$ is ill-conditioned, but that (unexpectedly) the invariant subspaces extracted from $\mathcal{H}_\infty$ have better stability properties than $\mathcal{H}_\infty$ itself. A version of the sign iteration that uses matrix pencils to reduce the impact of these inversions has been suggested by Benner and Byers~\cite{BenB06}.
Another useful computational detail is that one can rewrite the sign function method~\eqref{sign} as
\begin{align*}
\mathcal{M}_{k+1} &= \frac12 (\mathcal{M}_k + \tau^2 J \mathcal{M}_k^{-1}J), & \mathcal{M}_k &= \mathcal{H}_k J,
\end{align*}
which is cheaper because one can take advantage of the fact that the matrices $\mathcal{M}_k$ are Hermitian~\cite{Bye87}. Indeed, it is a general observation that most of the matrix algebra operations needed in doubling-type algorithms can be reduced to operations on symmetric/Hermitian matrices; see for instance also~\eqref{caretodare}.
\subsection{Remarks}
The formulation~\eqref{sign2} of the sign iteration allows one to introduce a form of per-iteration scaling in the setting of a doubling-type algorithm. It would be interesting to see if this scaling can be transferred to the SDA setting, and which computational advantage it brings. Note that, in view of~\eqref{squaringsignrelation}, scaling the sign iteration is equivalent to changing the parameter $\tau$ in the Cayley transform. So SDA does incorporate a form of scaling, but only at the first iteration, when one chooses $\tau$.
In general, it is unclear if scaling after the first iteration produces major gains in convergence speed. It would be appealing to try and study this kind of scaling with the tools of polynomial and rational approximation, as has been done in more detail for non-doubling algorithms, with the aim of deriving optimal choices for the parameters $\tau$ and $\sigma_k$.
Another classical iterative algorithm to solve algebraic Riccati equations (both in discrete and continuous time) is Newton's method. For the simpler case of CAREs, Newton's method~\cite{Kle68} consists of determining $X_{k+1}$ by solving at each step the Lyapunov equation
\begin{equation} \label{newton-step}
(A-GX_k)^*(X_{k+1}-X_k) + (X_{k+1}-X_k)(A-GX_k) = -(Q+A^*X_k+X_kA-X_k G X_k)
\end{equation}
or the equivalent one
\[
(A-GX_k)^*X_{k+1} + X_{k+1}(A-GX_k) = -Q - X_k G X_k.
\]
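In code, each Newton step is one Lyapunov solve. The sketch below (ours; the data are illustrative, and the stabilizing initial guess $X_0$ is an arbitrary Hermitian choice for which $A - GX_0$ is Hurwitz) uses SciPy's Lyapunov solver for the second form above:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

A = np.array([[0., 1.], [0., 0.]])
G = np.array([[0., 0.], [0., 1.]])
Q = np.eye(2)

X = np.array([[2., 2.], [2., 2.]])   # stabilizing guess: A - G X is Hurwitz
for _ in range(30):
    Ac = A - G @ X
    # solve (A - G X_k)^T X_new + X_new (A - G X_k) = -Q - X_k G X_k
    X_new = solve_continuous_lyapunov(Ac.T, -Q - X @ G @ X)
    converged = np.linalg.norm(X_new - X) < 1e-12
    X = X_new
    if converged:
        break
```

Note that `solve_continuous_lyapunov(a, q)` solves $aX + Xa^H = q$, so passing `Ac.T` as `a` yields exactly the Newton step displayed above.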
A line search procedure, which improves convergence speed in practice, has been introduced by Benner and Byers~\cite{BenB98-linesearch}. The method can be used, in particular, for large and sparse equations in conjunction with low-rank ADI~\cite{BenLP08}.
The reader may wonder if there is an explicit relation between doubling algorithms and Newton-type algorithms, considering especially that both exhibit quadratic convergence (which, moreover, in both cases degrades to linear with rate $1/2$ if $A-GX_+$ has purely imaginary eigenvalues~\cite{GuoL98}). The answer, unfortunately, seems to be no. An argument that suggests that the two iterations are genuinely different is that the iterates produced by Newton's method approach $X_+$ from \emph{above}~\cite{Kle68} (i.e., $X_1 \succeq X_2 \succeq \dots \succeq X_{k}\succeq X_{k+1} \succeq \dots \succeq X_+$), not from \emph{below} like the iterates $Q_k$ of SDA in~\eqref{sda-formulasQ}.
Some more recent algorithms for large and sparse CAREs essentially merge the Newton step~\eqref{newton-step} and the ADI iteration~\eqref{adi-step} into a single iteration~\cite{LinSim,Sim16,BenBKS}. It is again unclear whether there is an explicit relation between these two families of methods.
An interesting question is what the `non-doubling' analogue of the sign method and of SDA is. One can convert the CARE to discrete time using~\eqref{caretodare} and formulate~\eqref{dare-iterative}, but to our knowledge this method does not have a more appealing presentation in terms of a simple iterative method for~\eqref{care}, as it does in all the other discrete-time examples.
Another `philosophical' observation is that the sign function method does not avoid a Cayley-type transformation; it merely pushes it back to the very last step~\eqref{sign-final}, where the sub-expression $\mathcal{H}+ \tau I$ appears; this operation takes the role of a discretizing transformation that maps the eigenvalue $-\tau$ into a value inside a given circle and the eigenvalue $\tau$ into one outside. A discretizing transformation of some sort seems inevitable in this family of algorithms, although delaying it until the very last step seems beneficial for accuracy, because at that point we have complete control of the location of eigenvalues.
\section{Unilateral equations and NMEs} \label{sec:nme}
We end our discussion of the family of Riccati-type equations with a pair of oft-neglected cousins, and present them with an application that shows clearly the relationship between them. Consider the matrix Laurent polynomial
\begin{align} \label{laurpol}
P(z) &= Az^{-1} + Q + A^* z, & Q &= Q^* \succ 0, & A,Q &\in \mathbb{C}^{n\times n}.
\end{align}
The problem of \emph{spectral factorization} (of quadratic matrix polynomials) consists in determining a factorization
\begin{align} \label{specfact}
P(z) &= (z Y^* - I)X(z^{-1}Y - I), & X&=X^* \succ 0, & X,Y&\in\mathbb{C}^{n\times n},
\end{align}
such that $\rho(Y) \leq 1$. In particular, the left factor is invertible for $\abs{z}<1$, and the right factor is invertible for $\abs{z}>1$.
Equating coefficients in~\eqref{laurpol} and~\eqref{specfact} gives $-XY=A$, $Q = X + Y^*XY$. We can eliminate one among $X$ and $Y$ from this system of two equations, getting two equations with a single unknown each
\begin{align}
0 &= A + QY + A^*Y^2, \label{uqme} \\
Q &= X + A^*X^{-1}A. \label{nme}
\end{align}
The first one~\eqref{uqme} is called a \emph{unilateral quadratic matrix equation}~\cite{BinILM06}, while the second one~\eqref{nme} is known by the (rather undescriptive) name of \emph{nonlinear matrix equation} (NME)~\cite{GuoL-nano1,GuoL-nano2,doublingbook}.
While~\eqref{uqme} looks more appealing at first, as it reveals direct ties with the palindromic quadratic eigenvalue problem~\cite{GuoL-nano1,GuoL-nano2,M4}, it is in fact~\eqref{nme} that reveals more structure: for instance, \eqref{nme}~has Hermitian solutions (see below), while the structure in the solutions of~\eqref{uqme} is much less apparent.
\subsection{Solution properties}
It follows from~\eqref{specfact} that $P(\lambda) \succeq 0$ for each $\lambda$ that belongs to the unit circle (hence $\lambda^{-1} = \bar{\lambda}$), so this is a necessary condition for the solvability of this problem. It can be proved that it is sufficient, too, and that a maximal / stabilizing solution exists.
\begin{theorem}\cite[Theorem~2.2]{EngRR} \label{thm:nmesol} Assume that $P(z)$ is regular and $P(\lambda) \succeq 0$ for each $\lambda$ on the unit circle. Then, \eqref{nme} has a (unique) solution $X_+$ such that
\begin{enumerate}
\item[(1)] $X_+ = X_+^* \succ 0$;
\item[(2)] $X_+ \succeq X$ for any other Hermitian solution $X$;
\item[(3)] $\rho(Y) = \rho(-X_+^{-1}A) \leq 1$.
\end{enumerate}
If, in addition, $P(\lambda) \succ 0$ for each $\lambda$ on the unit circle, then $\rho(-X_+^{-1}A) < 1$.
\end{theorem}
Once again, we can rewrite~\eqref{nme} as an invariant subspace problem:
\begin{align}
\begin{bmatrix}
A & 0\\
-Q & I
\end{bmatrix}
\begin{bmatrix}
I\\X
\end{bmatrix}
&=
\begin{bmatrix}
0 & -I\\
A^* & 0
\end{bmatrix}
\begin{bmatrix}
I\\X
\end{bmatrix}
Y, & Y &= -X^{-1}A.
\end{align}
We assume again that $A$ is invertible to avoid technicalities with matrix pencils. The matrix
\begin{equation}
\mathcal{S} = \begin{bmatrix}
0 & -I\\
A^* & 0
\end{bmatrix}^{-1}
\begin{bmatrix}
A & 0\\
-Q & I
\end{bmatrix}
\end{equation}
is symplectic, and so is the slightly more general form
\begin{equation} \label{symplectic2}
\begin{bmatrix}
G & -I\\
A^* & 0
\end{bmatrix}^{-1}
\begin{bmatrix}
A & 0\\
-Q & I
\end{bmatrix}.
\end{equation}
\begin{lemma}
\begin{enumerate}
\item[(1)] A matrix in the form~\eqref{symplectic2} is symplectic if and only if $G=G^*$, $Q=Q^*$, and the two blocks called $A,A^*$ in~\eqref{symplectic2} are conjugate transposes of each other.
\item[(2)] If $\lambda$ is an eigenvalue of a symplectic matrix with right eigenvector $v$, then $\overline{\lambda}^{-1}$ is an eigenvalue of the same matrix with left eigenvector $v^*J$.
\item[(3)] If the hypotheses of Theorem~\ref{thm:nmesol} hold (including the strict positivity one in the end), then the $2n$ eigenvalues of $\mathcal{S}$ are (counting multiplicities) the $n$ eigenvalues $\lambda_1,\lambda_2,\dots,\lambda_n$ of $-X_+^{-1}A$ inside the unit circle, and the $n$ eigenvalues $\overline{\lambda_i}^{-1}$, $i=1,2,\dots,n$ outside the unit circle.
\end{enumerate}
\end{lemma}
The symplectic structure behind this equation is the same as for the DARE, and indeed Part~2 of this lemma is identical to Part~2 of Lemma~\ref{symplemma}. Engwerda, Ran and Rijkeboer~\cite[Section~7]{EngRR} note that~\eqref{nme} can be reduced to a DARE, although one that does not fall within our framework, since it has $G \preceq 0$.
\subsection{Algorithms}
The formulation~\eqref{nme} suggests immediately the iterative algorithm
\begin{equation} \label{nme-iterative}
X_{k+1} = Q - A^* X_k^{-1}A.
\end{equation}
Clearly we cannot start this iteration from $0$, so we take $X_1 = Q$ instead. An interesting interpretation of this algorithm is as iterated Schur complements of block Toeplitz tridiagonal matrices. The Schur complement of the $(1,1)$ block of the tridiagonal matrix
\[
\underbrace{
\begin{bmatrix}
X_{k} & A^*\\
A & Q & A^*\\
& A & Q & \ddots\\
& & \ddots & \ddots & A^*\\
& & & A & Q
\end{bmatrix}}_{\text{$h$ blocks}},
\]
is
\[
\underbrace{
\begin{bmatrix}
X_{k+1} & A^*\\
A & Q & A^*\\
& A & Q & \ddots\\
& & \ddots & \ddots & A^*\\
& & & A & Q
\end{bmatrix}}_{\text{$h-1$ blocks}}.
\]
Hence the whole iteration can be interpreted as constructing successive Schur complements of the tridiagonal matrix
\begin{equation} \label{tridiag}
\mathcal{Q}_{m} :=
\underbrace{\begin{bmatrix}
Q & A^*\\
A & Q & A^*\\
& A & Q & \ddots\\
& & \ddots & \ddots & A^*\\
& & & A & Q
\end{bmatrix}}_{\text{$m$ blocks}}.
\end{equation}
It can be seen that, under the assumptions of Theorem~\ref{thm:nmesol}, $\mathcal{Q}_m$ is positive semidefinite; a quick sketch of a proof is as follows. The matrix $\mathcal{Q}_m$ is a submatrix of
\[
\begin{bmatrix}
Q & A^* & & & A\\
A & Q & A^*\\
& A & Q & \ddots\\
& & \ddots & \ddots & A^*\\
A^*& & & A & Q
\end{bmatrix}
= (\Phi \otimes I) \begin{bmatrix}
P(1)\\
& P(\zeta)\\
& & P(\zeta^2)\\
& & & \ddots\\
& & & & P(\zeta^{-1})
\end{bmatrix} (\Phi \otimes I)^{-1},
\]
which the equation shows to be similar (using the Fourier matrix $\Phi$ and standard properties of discrete Fourier transforms) to a block diagonal matrix that contains $P(z)$ from~\eqref{laurpol} evaluated at the roots of unity $1,\zeta,\zeta^2,\dots,\zeta^{-1}$.
Hence, in particular, all the $X_{k}$ are positive semidefinite. One can further show that $Q = X_1 \succeq X_2 \succeq \dots \succeq X_k \succeq \dots$. The sequence $X_k$ is monotonic and bounded from below, hence it converges, and one can show that its limit is $X_+$~\cite[Section~4]{EngRR} (to do this, verify the property in Point~(2) of Theorem~\ref{thm:nmesol} by proving that $X_k \succeq X$ at each step of the iteration).
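The basic iteration~\eqref{nme-iterative} takes only a few lines in practice; in the sketch below (ours) the data are an arbitrary illustrative choice with $Q$ dominant enough that $P(\lambda) \succ 0$ on the unit circle:

```python
import numpy as np

A = np.array([[0.5, 0.2], [0.1, 0.4]])
Q = 2.0 * np.eye(2)      # Q dominates A, so P(z) > 0 on the unit circle

X = Q.copy()             # X_1 = Q
for _ in range(200):     # X_{k+1} = Q - A^T X_k^{-1} A
    X_next = Q - A.T @ np.linalg.solve(X, A)
    if np.linalg.norm(X_next - X) < 1e-14:
        X = X_next
        break
    X = X_next

residual = X + A.T @ np.linalg.solve(X, A) - Q   # NME residual
Y = -np.linalg.solve(X, A)                       # spectral factor, rho(Y) < 1
```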
A doubling variant of~\eqref{nme-iterative} can be constructed starting from this Schur complement interpretation. The Schur complement of the submatrix formed by the odd-numbered blocks $(1,3,5,\dots,2m-1)$ of
\[
\underbrace{
\begin{bmatrix}
U_{k} & A_k^*\\
A_k & U_k & A_k^*\\
& A_k & \ddots & \ddots\\
& & \ddots & U_k & A_k^*\\
& & & A_k & Q_k
\end{bmatrix}}_{\text{$2m$ blocks}},
\]
is
\[
\underbrace{
\begin{bmatrix}
U_{k+1} & A_{k+1}^*\\
A_{k+1} & U_{k+1} & A_{k+1}^*\\
& A_{k+1} & \ddots & \ddots\\
& & \ddots & U_{k+1} & A_{k+1}^*\\
& & & A_{k+1} & Q_{k+1}
\end{bmatrix}}_{\text{$m$ blocks}},
\]
with
\begin{subequations} \label{cr}
\begin{align}
A_{k+1} &= -A_kU_k^{-1}A_k,\\
Q_{k+1} &= Q_k - A_k^* U_k^{-1} A_k,\\
U_{k+1} &= U_k - A_k^* U_k^{-1}A_k - A_k U_k^{-1}A_k^*.
\end{align}
\end{subequations}
We can construct the Schur complement of the first $2^k-1$ blocks of $\mathcal{Q}_{2^k}$ in two different ways: either we make $2^k-1$ iterations of~\eqref{nme-iterative}, resulting in $X_{2^k}$, or we make $k$ iterations of~\eqref{cr}, starting from $A_0=A, Q_0=U_0=Q$, resulting in $Q_k$. This shows that $Q_k = X_{2^k}$.
This peculiar way to take Schur complements of Toeplitz tridiagonal matrices was introduced by Buzbee, Golub and Nielson~\cite{BuzGN} to solve certain differential equations, and later applied to matrix equations similar to~\eqref{uqme} and~\eqref{nme} by Bini, Gemignani, and Meini~\cite{BM,BGM,Mei}. The iteration~\eqref{cr} is known as \emph{cyclic reduction}.
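A sketch of cyclic reduction~\eqref{cr} on illustrative data of our choosing, checking the identity $Q_k = X_{2^k}$ against the basic iteration (recall $X_1 = Q$, so $X_{2^k}$ is reached after $2^k - 1$ fixed-point steps):

```python
import numpy as np

A0 = np.array([[0.5, 0.2], [0.1, 0.4]])
Q0 = 2.0 * np.eye(2)     # illustrative data with Q dominant

# cyclic reduction, k steps, starting from A_0 = A, Q_0 = U_0 = Q
Ak, Qk, Uk = A0.copy(), Q0.copy(), Q0.copy()
k = 3
for _ in range(k):
    UinvA = np.linalg.solve(Uk, Ak)      # U_k^{-1} A_k
    UinvAT = np.linalg.solve(Uk, Ak.T)   # U_k^{-1} A_k^T
    Ak, Qk, Uk = (-Ak @ UinvA,
                  Qk - Ak.T @ UinvA,
                  Uk - Ak.T @ UinvA - Ak @ UinvAT)

# basic iteration: 2^k - 1 steps from X_1 = Q give X_{2^k}
X = Q0.copy()
for _ in range(2**k - 1):
    X = Q0 - A0.T @ np.linalg.solve(X, A0)
```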
One can derive the same iteration from repeated squaring, in the same way as we obtained SDA as a modified subspace iteration~\cite{LinX06}. We seek formulas to update a factorization of the kind
\[
\mathcal{S}^{-2^k} = \begin{bmatrix}
A_k & 0\\
-Q_k & I
\end{bmatrix}^{-1}
\begin{bmatrix}
G_k & -I\\
A_k^* & 0
\end{bmatrix}.
\]
To do this, we write (analogously to~\eqref{sdaderiv})
\[
\mathcal{S}^{-2^{k+1}} = \mathcal{S}^{-2^k} \mathcal{S}^{-2^k} =
\begin{bmatrix}
A_k & 0\\
-Q_k & I
\end{bmatrix}^{-1}
\left(
\begin{bmatrix}
G_k & -I\\
A_k^* & 0
\end{bmatrix}
\begin{bmatrix}
A_k & 0\\
-Q_k & I
\end{bmatrix}^{-1}
\right)
\begin{bmatrix}
G_k & -I\\
A_k^* & 0
\end{bmatrix}
\]
and use Lemma~\ref{lem:bmf} (with $[M_1\, M_2] = I_{2n}$) to find a factorization in the form~\eqref{bmf-factorization} of the term in parentheses, which then combines with the outer terms to produce the sought decomposition. The resulting formulas are
\begin{subequations} \label{sda2}
\begin{align}
A_{k+1} &= -A_k(Q_k-G_k)^{-1}A_k,\\
Q_{k+1} &= Q_k - A_k^* (Q_k-G_k)^{-1} A_k,\\
G_{k+1} &= G_k + A_k (Q_k-G_k)^{-1} A_k^*,
\end{align}
\end{subequations}
and one sees that they coincide with~\eqref{cr}, after setting $U_k = Q_k - G_k$. With an argument analogous to the one in Section~\ref{sec:dare}, one sees that
\[
\mathcal{S}^{-2^k}\begin{bmatrix}
0 \\ -I
\end{bmatrix} = \begin{bmatrix}
I\\
Q_k
\end{bmatrix},
\]
thus $\begin{bsmallmatrix}
I\\Q_k
\end{bsmallmatrix}$ converges to a basis of the invariant subspace associated to the eigenvalues of $\mathcal{S}$ inside the unit circle.
This formulation~\eqref{sda2} is known as \emph{SDA-II}~\cite{LinX06,Chu-trains}.
\subsection{Remarks}
Even though we have mentioned spectral factorization only here, it can also be formulated, for more complicated matrix functions, in the context of DAREs and CAREs; in fact, it is a classical topic, and another facet of the multiple connections between matrix equations and control theory~\cite{bart1,bart2,SayK}.
The interpretation as a Schur complement is a powerful trick, which reveals a greater picture in this family of methods. It may possibly be used to understand more about the stability of these methods, since Schur complementation and Gaussian elimination on symmetric positive definite matrices are well understood from the numerical point of view.
Many authors have studied variants of~\eqref{nme}. Typically, one replaces the nonlinear term with various functions of the form $A^* f(X) A$, or adds more nonlinear terms. In the modified versions, it is often possible to prove convergence of the fixed-point algorithm with monotonicity arguments, and to prove the existence of a solution under some assumptions. However, after any nontrivial modification the connection with invariant subspaces is lost. This fact, coupled with a lack of applications, makes these variants much less interesting than the original equation, in the eyes of the author.
\section{Nonsymmetric variants in applied probability}
Many of the equations treated here have nonsymmetric variants which appear naturally in queuing theory, a sub-field of applied probability. In the analysis of \emph{quasi-birth-death models}~\cite{LR-book,BLM-book}, one encounters equations of the form
\begin{align} \label{qbd}
0 &= A + QY + BY^2, & A, B, Q, Y \in \mathbb{R}^{n\times n},
\end{align}
where $A,B \geq 0$ (we use the notation $M\geq N$ to denote that a matrix $M$ is entrywise larger than $N$, i.e., $M_{ij} \geq N_{ij}$ for all $i,j$), and the matrix $-Q$ is an M-matrix, i.e., $Q_{ij} \geq 0$ for $i\neq j$ and $\Lambda(Q) \subset \overline{\mathrm{LHP}}$. These equations have a solution $Y \geq 0$ which has a natural probabilistic interpretation. The solution $X$ to $X = Q - BX^{-1}A$ and the solution of the associated dual equation $0 = Z^2A + ZQ+B$ also appear naturally and have a related probabilistic meaning~\cite[Chapter~6]{LR-book}\cite[Section~5.6]{BLM-book}.
Similarly, the equation
\begin{align} \label{nare}
Q + BX+XA - XGX &= 0, & Q,X & \in \mathbb{R}^{m\times n}, A \in \mathbb{R}^{n\times n}, B \in \mathbb{R}^{m\times m}, G\in\mathbb{R}^{n\times m},
\end{align}
appears in the study of so-called \emph{fluid queues}, or \emph{stochastic flow models}~\cite{roger94,kk95,soares05}. The matrices $A,B$ are M-matrices, while $G,-Q \geq 0$.
One can formulate nonsymmetric analogues of basic matrix iterations and doubling algorithms. Unfortunately, the theory does not translate perfectly to this setting: in the symmetric equations $G,Q\succeq 0$, while in the nonsymmetric case $G,-Q \geq 0$, so the signs in the two cases do not match and one needs different arguments. For instance, in the symmetric case one proves that the inverses that appear in~\eqref{sda-formulas} exist because $G_k \succeq 0$, $Q_k \succeq 0$; in the nonsymmetric analogue $G_k,-Q_k \geq 0$, and one instead proves that $I+G_kQ_k$ and $I+Q_kG_k$ are M-matrices to show that those inverses exist.
The equation~\eqref{dare} does not appear to have an immediate analogue in queuing theory, but this seems to be just an accident, since some of the results that involve~\eqref{nare} could equally have been formulated with an equivalent equation resembling~\eqref{dare} more than~\eqref{care}. There is a distinction between discrete-time and continuous-time models also in applied probability, but in many cases it does not affect directly the shape of the equations; for instance, \eqref{qbd}~takes the same form for discrete- and continuous-time QBDs. The role of discretizing transformations such as Cayley transforms in this context has been studied by Bini, Meini, and Poloni~\cite{BMP-transforming}.
For reasons of space, we cannot give a complete treatment of these nonsymmetric variants here. The book by Huang, Li and Lin~\cite{doublingbook} goes into more detail about doubling algorithms for these equations, but a large part of the theory (including existence results and probabilistic interpretations for the iterates of various numerical methods) is unfortunately available only in the queuing theory literature, tightly entangled with its applications.
An interesting remark is that the M-matrix structure allows one to construct stability proofs more easily. Conditioning and stability results for these equations have been studied by some authors~\cite{XXL1,XXL2,NguP15,XL17,CheLM19}, relying heavily on the sign and M-matrix structure. The forward stability proof in Nguyen and~Poloni~\cite{NguP15} is, to date, one of the very few complete stability proofs for a doubling-type algorithm.
\section{Conclusions}
In this paper, we presented from a consistent point of view doubling algorithms for symmetric Riccati-type equations, relating them to the basic iterations of which they are a `squaring' variant. We have included various algorithms that belong to the same family but have appeared independently, such as the sign iteration and cyclic reduction. We have outlined relations between doubling algorithms, the subspace iteration, ADI-type and Krylov subspace methods, and Schur complementation of tridiagonal block Toeplitz matrices. This theory, in turn, forms only a small portion of the far larger topic of numerical algorithms for Riccati-type equations and control theory. This field of research is incredibly vast, spanning at least six decades of literature and various communities between engineering and mathematics, so we have surely omitted or forgotten many relevant contributions; we apologize to the authors whose work we have missed.
We hope that the reader can benefit from our paper by both gaining theoretical insight, and having available some numerical algorithms for these equations. Indeed, with respect to many competitors, doubling-based algorithms have the advantage that they reduce to the simple coupled matrix iterations~\eqref{sda-formulas} or~\eqref{cr}, which are easy to code and fast to run in many computational environments.
Another interesting remark, suggested by a referee, is that some recent lines of research consider this family of matrix equations under types of data sparsity other than low rank: for instance, Palitta and Simoncini~\cite{PaliS} consider banded data, while Kressner, Massei and Robol~\cite{KreMR} and Massei, Palitta and Robol~\cite{MasPR} consider semiseparable (low-rank off-diagonal blocks) and hierarchically semiseparable structures. Much earlier, Grasedyck, Hackbusch and Khoromskij~\cite{GraHK} considered using hierarchical matrices to solve Riccati equations. All these structures are (at least up to a degree) preserved by the operations involved in doubling methods~\cite{HierBook,XiaCGL}. These novel techniques may open up new lines of research for doubling-type algorithms.
\bibliographystyle{abbrvurl}
% arXiv:2005.08903 (math.NA), May 2020:
% "Iterative and doubling algorithms for Riccati-type matrix equations:
% a comparative introduction", https://arxiv.org/abs/2005.08903
% https://arxiv.org/abs/math/0611614 --- Matchings in arbitrary groups
%
% Abstract: A matching in a group G is a bijection f from a subset A to a
% subset B in G such that af(a) does not belong to A for all a in A. The
% group G is said to have the matching property if, for any finite subsets
% A,B in G of the same cardinality with B avoiding 1, there is a matching
% from A to B. Using tools from additive number theory, Losonczy proved a
% few years ago that the only abelian groups satisfying the matching
% property are the torsion-free ones and those of prime order. He also
% proved that, in an abelian group, any finite subset A avoiding 1 admits a
% matching from A to A. In this paper, we show that both Losonczy's results
% hold verbatim for arbitrary groups, not only abelian ones. Our main tools
% are classical theorems of Kemperman and Olson, also pertaining to
% additive number theory, but specifically developed for possibly
% nonabelian groups.
\section{Introduction}
Let $G$ be a group, written multiplicatively. Given nonempty finite subsets $A,B$ in $G$,
a \textit{matching} from $A$ to $B$ is a map $\varphi:A \rightarrow B$
which is bijective and satisfies the condition $$a\varphi(a) \notin A$$ for all $a \in A$.
This notion was introduced in \cite{FanLos} by Fan and Losonczy, who used matchings in $\mathbb{Z}^n$
as a tool for studying an old problem of Wakeford concerning canonical forms for symmetric
tensors \cite{Wak}.
Coming back to general groups, it is plain that if there is a matching $\varphi$ from $A$ to $B$, then $|A|=|B|$ and $1 \notin B$. (For if $1 \in B$, let $a_1 = \varphi^{-1}(1)$; then $a_1 \varphi(a_1)= a_1 \in A$.) It is natural to wonder whether these necessary conditions for the existence of a matching from $A$ to $B$ are also sufficient. The answer turns out to depend on the group structure.
Following Losonczy, we say that the group $G$ has the \textit{matching property} if,
whenever the subsets $A,B$ satisfy the conditions $|A|=|B|$ and $1 \notin B$, there exists
a matching from $A$ to $B$. Losonczy proved the following result.
\begin{theorem}[\cite{LOs}]\label{thm:matching property} Let $G$ be an abelian group. Then $G$ has the matching property if and only if $G$ is torsion-free or cyclic of prime order.
\end{theorem}
A special case of interest is the one where $A=B$. Is it sufficient, in this case, to assume that $A$ does not contain $1$ in order to guarantee the existence of a matching from $A$ to $A$? Losonczy's answer for abelian groups is yes.
\begin{theorem}[\cite{LOs}]\label{thm:automatching} Let $G$ be an abelian group. Let $A$ be a nonempty finite subset of $G$. Then there is a matching from $A$ to $A$ if and only if $1 \notin A$.
\end{theorem}
The proofs in \cite{LOs} are based on methods and results from additive number theory, namely the Dyson transform, the Cauchy--Davenport theorem and Kneser's theorem. However powerful, these methods only work for abelian groups.
\medskip
In Section~\ref{secM} of this paper, we extend the above two theorems of Losonczy to arbitrary groups. This is achieved by making use of results in additive number theory which were specifically developed for possibly nonabelian groups. These results are recalled in the next section. The engine behind their proofs is the Kemperman transform, a clever nonabelian analogue of the Dyson transform. See Olson's paper \cite{Ols}.
See also Nathanson's book \cite{Nath} for general background on additive number theory.
\section{Nonabelian additive theory}
Given subsets $A,B$ of a group $G$, their \textit{product set} is defined as $$AB=\{ab \mid a \in A, b \in B\}.$$ We start with a result of Kemperman providing a conditional lower bound on the size of $AB$.
\begin{theorem}[Kemperman \cite{Kem}]
\label{th_ke}\label{th_kem}\label{THK}
Let $A,B$ be finite subsets of a group $G.$ Assume there
exists an element $c\in AB$ appearing exactly once as a product $c=ab$ with $a\in A, \, b \in B$.
Then
\begin{equation*}
\left| AB\right| \geq \left| A\right| +\left| B\right| -1.
\end{equation*}
\end{theorem}
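Kemperman's bound can be confirmed exhaustively in a small nonabelian group. The following sketch (our own illustration, not part of the paper; the encoding of $S_3$ as image tuples is an arbitrary choice) checks it for all pairs of subsets of size at most $3$ in $S_3$:

```python
from itertools import combinations, permutations
from collections import Counter

# S_3 as tuples of images of (0, 1, 2); composition is the group law.
G = list(permutations(range(3)))

def mul(p, q):
    return tuple(p[q[i]] for i in range(3))

def has_unique_product(A, B):
    # True if some c in AB has exactly one factorization c = a*b.
    counts = Counter(mul(a, b) for a in A for b in B)
    return any(v == 1 for v in counts.values())

# Whenever the uniqueness hypothesis holds, |AB| >= |A| + |B| - 1.
for r in range(1, 4):
    for s in range(1, 4):
        for A in combinations(G, r):
            for B in combinations(G, s):
                if has_unique_product(A, B):
                    AB = {mul(a, b) for a in A for b in B}
                    assert len(AB) >= len(A) + len(B) - 1
```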
The following corollary will be used in the next section for our extension of Theorem~\ref{thm:automatching}.
\begin{corollary}
\label{corKem} Let $U,V$ be nonempty finite subsets of a group $G$ such that
$U$, $V$ and $UV$ are all three contained in a subset $X$ of $G \setminus \{1\}.$ Then
\begin{equation*}
\left| X \right| \ge \left| U \right| + \left| V \right|.
\end{equation*}
\end{corollary}
\begin{proof}
Let $A = U\cup \{1\}$ and $B=V\cup \{1\}$.\ Then $1 \in AB$, and it appears
exactly once as a product $1=ab$ with $a\in A$, $b \in B$. Indeed, if $a\in U$
and $b\in V$, then $1=ab\in UV$, contrary to $UV\subseteq X\subseteq
G\setminus\{1\}$; if $b=1$, then $a=1$, which is impossible for $a\in U$ since
$1\notin U$; symmetrically, $a=1$ forces $b=1$. Hence $a=b=1$ is the only
factorization of $1$.
Therefore Theorem~\ref{THK} applies, and gives
$$
\left| AB \right| \ge \left| A \right| + \left| B \right| -1
= \left| U \right| + \left| V \right| +1.
$$
Since $AB=UV \cup U \cup V\cup\{1\}$, we have $AB\setminus\{1\} \subseteq X$ and hence
$$
\left| X \right| \ge \left| AB \right| - 1 \ge \left| U \right| + \left| V \right|,
$$
as desired.
\medskip
As for extending Theorem~\ref{thm:matching property} to arbitrary groups, we shall need
the following result of Olson.
\begin{theorem}[Olson \cite{Ols}]\label{thOl}
Let $A,B$ be nonempty finite subsets of a group $G$. There exists a finite subgroup $H$ of $G$
and a nonempty subset $T$ of $AB$ such that
\begin{equation*}
\left| AB\right| \ge \left| T\right| \geq \left| A\right| +\left| B\right| -\left| H\right|,
\end{equation*}
and either $HT=T$ or $TH=T$.
\end{theorem}
\section{Results and proofs\label{secM}}
We now present our extensions of Losonczy's theorems.
Besides the additive tools from the preceding section, we shall also need, as in \cite{FanLos, LOs},
the marriage theorem of Hall. Recall that,
given a collection $\mathcal{E}=\{E_{1},E_{2},\dots,E_{n}\}$ of subsets of a set $E$,
a \textit{system of distinct representatives} for $\mathcal{E}$ is a set
$\{x_{1},\dots,x_{n}\}$ of pairwise distinct elements of $E$ with the
property that $x_{i}\in E_{i}$ for all $i=1,\dots,n.$ Hall's theorem gives necessary
and sufficient conditions for the existence of such systems.
\begin{theorem}[Hall \cite{Hall}]
Let $E$ be a set and $\mathcal{E}=\{E_{1},E_{2},\dots,E_{n}\}$
a family of finite subsets of $E.$ Then $\mathcal{E}$ admits a system of distinct
representatives if and only if
\begin{equation*}
\left| \bigcup_{i \in S}E_{i}\right| \geq \left| S\right|
\end{equation*}
for all nonempty subsets $S \subset \{1,\dots,n\}.$\end{theorem}
We are now ready to generalize Theorem~\ref{thm:automatching}.
\begin{theorem}
\label{th-match1}Let $G$ be a group. Let $A$ be a nonempty finite subset of $G$. Then there is
a matching from $A$ to $A$ if and only if $1 \notin A$.
\end{theorem}
\begin{proof} We already know that if $A$ contains $1$, there cannot be a matching from $A$ to $A$.
Assume now $1 \notin A$. For each $a\in A$, set
\begin{equation*}
E_{a}=\{x \in A\mid ax \notin A\}.
\end{equation*}
Finding a matching from $A$ to $A$ is clearly equivalent to finding a system of
distinct representatives for the family of sets
$$\mathcal{E} = \{E_a \mid a \in A \}.$$
By the Hall marriage theorem, this is also equivalent to the inequalities
\begin{equation}
\left| \bigcup_{s\in S}E_{s}\right| \geq \left| S\right| \label{lneqHall}
\end{equation}
for all nonempty subsets $S\subset A$.
Denote $E_s' = A \setminus E_s$,
the complement of $E_s$ in $A$. Hall's conditions (\ref{lneqHall})
may be rewritten as
\begin{equation}\label{Hall2}
\left| \bigcap_{s\in S}E_s'\right| \leq \left|A\right|-\left| S\right|
\end{equation}
for all nonempty subsets $S\subset A$. Set
$$V_S=\bigcap_{s\in S}E_s' = \{x\in A\mid sx\in A \textrm{ for all } s\in S\}.$$
We have $S V_S \subset A$ by construction. Since $1\notin A$, Corollary \ref{corKem} applies
(with $U,V,X$ standing for $S,V_S, A$ respectively), and gives
$$\left| S\right| +\left| V_S\right| \le \left| A\right| .$$
This shows that conditions (\ref{Hall2}) are satisfied and finishes the proof of the theorem.
\end{proof}
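The theorem is easy to check exhaustively in the smallest nonabelian group. The following brute-force sketch (our own illustration; the encoding of $S_3$ is an arbitrary choice) searches all bijections $\varphi\colon A\to A$ for every subset of $S_3$:

```python
from itertools import combinations, permutations

# S_3 as tuples of images of (0, 1, 2).
G = list(permutations(range(3)))
identity = (0, 1, 2)

def mul(p, q):
    return tuple(p[q[i]] for i in range(3))

def has_matching(A):
    # Search all bijections phi: A -> A for one with a*phi(a) not in A.
    A = list(A)
    S = set(A)
    return any(all(mul(a, fa) not in S for a, fa in zip(A, img))
               for img in permutations(A))

non_identity = [g for g in G if g != identity]
# Every nonempty subset avoiding the identity admits a matching...
for r in range(1, len(non_identity) + 1):
    for A in combinations(non_identity, r):
        assert has_matching(A)
# ...and no subset containing the identity does.
assert not has_matching((identity, (1, 0, 2)))
```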
\bigskip
We now turn to the characterization of all groups satisfying the matching property.
The abelian case was first settled by Losonczy as Theorem~\ref{thm:matching property}.
\begin{theorem} Let $G$ be any group. Then $G$ has the matching property
if and only if $G$ is torsion-free or cyclic of prime order.
\end{theorem}
\begin{proof}
Assume first that $G$ is neither torsion-free nor cyclic of prime order.
Then there is an element $a \in G$, of finite order $n \ge 2$, which does not generate $G$.
Let $$A = \langle a \rangle = \{1,a,\dots,a^{n-1}\}$$
be the subgroup generated by $a$. Let $g \in G \setminus A$ and set
$$
B = A \cup \{g\} \setminus \{1\} = \{a,\dots,a^{n-1},g\}.
$$
Let $\varphi: A \rightarrow B$ be any bijection. Can it possibly satisfy
the condition $x\varphi(x) \notin A$ for all $x \in A$? No, it cannot.
Picking $a \in B$ and $x_0=\varphi^{-1}(a) \in A$, we have
$x_0 \varphi(x_0) = x_0 a \in A$ since $A$ is a subgroup. We conclude
that $G$ does not satisfy the matching property.
\smallskip
Conversely, assume that $G$ is either torsion-free or cyclic of prime order.
This means that the only finite subgroups of $G$ are $\{1\}$, and $G$ if $G$ is finite.
The trivial group is torsion-free and vacuously satisfies the matching property.
Assume now $G \neq \{1\}$.
Let $A,B$ be nonempty finite subsets of $G$ with $|A|=|B|$ and $1 \notin B$. For each $a\in A$, set
\begin{equation*}
E_{a}=\{x \in B\mid ax \notin A\}.
\end{equation*}
Again, finding a matching from $A$ to $B$ is equivalent to finding a system of
distinct representatives for the family of sets
$$\mathcal{E} = \{E_a \mid a \in A \}.$$
By the Hall marriage theorem, it suffices to prove the inequalities
\begin{equation}
\left| \bigcup_{s\in S}E_{s}\right| \geq \left| S\right| \label{lneqHall3}
\end{equation}
for all nonempty subsets $S\subset A$. Denote $E_s' = B \setminus E_s$,
the complement of $E_s$ in $B$. Hall's conditions (\ref{lneqHall3})
may be rewritten as
\begin{equation}\label{Hall4}
\left| \bigcap_{s\in S}E_s'\right| \leq \left|A\right|-\left| S\right|
\end{equation}
for all nonempty subsets $S\subset A$. Set
$$V_S=\bigcap_{s\in S}E_s' = \{x\in B\mid sx\in A \textrm{ for all } s\in S\},$$
and $W_S = V_S \cup \{1\}$. We have $|W_S|=|V_S|+1$ and $S W_S \subset A$ by construction.
By Theorem \ref{thOl}, there is a finite subgroup $H \subset G$ and a nonempty subset $T \subset S W_S$ such that
\begin{equation}\label{Hall5}
|S W_S| \geq |S| + |W_S| - |H|
\end{equation}
and $HT=T$ or $TH=T$. We cannot have $H=G$, for otherwise $T=G$. But as
$T \subset S W_S \subset A$, this would imply $A=G=B$, contradicting the hypothesis $1 \notin B$.
It follows that $H = \{1\}$, and inequality (\ref{Hall5}) yields
$$
|A| \geq |S| + |V_S|,
$$
since $S W_S \subset A$, $|W_S|=|V_S|+1$ and $|H|=1$.
Therefore conditions (\ref{Hall4}), which imply the existence of a matching
from $A$ to $B$, are satisfied. It follows that $G$ has the matching property.
\end{proof}
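Both halves of the theorem can be observed computationally on cyclic groups: $C_5$ (prime order) has the matching property, while $C_4$ does not, the failing pair being $A=\langle 2\rangle=\{0,2\}$, $B=\{2,3\}$, exactly as in the first half of the proof. The following brute-force sketch (our own illustration, written additively) checks this:

```python
from itertools import combinations, permutations

def has_matching(n, A, B):
    # Matching A -> B in Z/nZ (additive): a + phi(a) must avoid A.
    A = list(A)
    S = set(A)
    return any(all((a + fa) % n not in S for a, fa in zip(A, img))
               for img in permutations(B))

def matching_property(n):
    # Test all pairs |A| = |B| with B avoiding the identity 0.
    for r in range(1, n):
        for A in combinations(range(n), r):
            for B in combinations(range(1, n), r):
                if not has_matching(n, A, B):
                    return False
    return True

assert matching_property(5)        # cyclic of prime order
assert not matching_property(4)    # C_4: A = {0, 2}, B = {2, 3} fails
```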
% https://arxiv.org/abs/1501.06394 --- Chains of subsemigroups
%
% Abstract: We investigate the maximum length of a chain of subsemigroups
% in various classes of semigroups, such as the full transformation
% semigroups, the general linear semigroups, and the semigroups of
% order-preserving transformations of finite chains. In some cases, we
% give lower bounds for the total number of subsemigroups of these
% semigroups. We give general results for finite completely regular and
% finite inverse semigroups. Wherever possible, we state our results in
% the greatest generality; in particular, we include infinite semigroups
% where the result is true for these. The length of a subgroup chain in a
% group is bounded by the logarithm of the group order. This fails for
% semigroups, but it is perhaps surprising that there is a lower bound for
% the length of a subsemigroup chain in the full transformation semigroup
% which is a constant multiple of the semigroup order.
\section{The definition}
Let $S$ be a semigroup. A collection of subsemigroups of $S$ is
called a \emph{chain} if it is totally ordered with respect to inclusion. In
this paper we consider the problem of finding the longest chain of subsemigroups
in a given semigroup. From among several conflicting candidates for the
definition, we define the \textit{length} of a semigroup $S$ to be the largest
number of non-empty subsemigroups of $S$ in a chain minus 1; this is denoted
$l(S)$. There are several reasons for choosing this definition rather than
another, principally: several of the formulae and proofs we will present are
more straightforward with this definition (especially that in
Proposition~\ref{prop-ideal}, which is the basis for several of our results);
when applied to a group our definition of length coincides with the definition
in the literature (for more details see Section~\ref{section-groups}). There
are some negative aspects of this definition too. For example, if $S$ is a null
semigroup (the product of every pair of elements equals $0$), then $l(S) = |S| -
1$; or if $S$ is empty, then $l(S) = - 1$. Our definition of length also
differs from the usual order-theoretic definition.
The paper is organised as follows: we review some known results for groups in
Section~\ref{section-groups}; in Section~\ref{section-generalities} we present
some general results about the length of a semigroup and its relationship to the
lengths of its ideals; in Section~\ref{section-full-trans} we give a lower bound
for the length of the full transformation semigroup on a finite set, and
consider the asymptotic behaviour of this bound; in
Sections~\ref{section-order-preserving} and~\ref{section-general-linear} we
perform an analysis similar to that for the full transformation semigroup for
the semigroup of all order-preserving transformations, and for the
general linear monoid; in Sections~\ref{section-inverse}
and~\ref{section-completely-regular} we provide a formula for the length of an
arbitrary finite inverse or completely regular semigroup in terms of the lengths
of its maximal subgroups, and its numbers of $\L$- and $\R$-classes. In
Section~\ref{section-numbers}, as consequences of our results about the full
transformation monoid, we give some bounds on the number of subsemigroups, and
the maximum rank of a subsemigroup, of the full transformation monoid.
Note that, with the exception of Proposition~\ref{prop-ideal}, all semigroups
we consider are finite.
\section{Subgroup chains in groups}\label{section-groups}
In this section we give a brief survey of a well-understood case, that of
groups. We will use some of the results in this section later in the paper.
The question of the length $l(G)$ of the longest chain of subgroups in a
finite group has been considered for some time. The base and strong generating
set algorithm for finite permutation groups, due to Charles Sims, involves
constructing a chain of point stabilisers. L\'aszl\'o Babai~\cite{babai}
pointed out that the length of such a chain in any action of $G$ is bounded
above by $l(G)$, so this parameter is important in the complexity analysis
of the algorithm. Babai gave a bound, linear in $n$, for the length of the
symmetric group $S_n$; the precise value of $l(S_n)$ was computed by Cameron,
Solomon and Turull~\cite{cst}: values are given in sequence A007238 in the
On-Line Encyclopedia of Integer Sequences \cite{sloane0}.
\begin{theorem}\label{symmetric_theorem}
$l(S_n)=\lceil 3n/2\rceil - b(n) - 1$, where
$b(n)$ is the number of ones in the base~$2$ expansion of $n$.
\end{theorem}
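As a quick computational sanity check (our own illustration, not from \cite{cst}), the formula can be evaluated for small $n$ and compared against $l(G)=\Omega(|G|)$ for the soluble groups $S_3$ and $S_4$ (see Proposition~\ref{p:sol} below):

```python
def l_sym(n):
    # l(S_n) = ceil(3n/2) - b(n) - 1, where b(n) counts ones in binary n.
    return -(-3 * n // 2) - bin(n).count("1") - 1

def Omega(m):
    # Number of prime divisors of m, counted with multiplicity.
    count, d = 0, 2
    while d * d <= m:
        while m % d == 0:
            m //= d
            count += 1
        d += 1
    return count + (m > 1)

# S_3 and S_4 are soluble, so their length equals Omega of the group order.
assert l_sym(3) == Omega(6) == 2    # e.g. 1 < C_3 < S_3
assert l_sym(4) == Omega(24) == 4
```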
It is easy to see that, if $N$ is a normal subgroup of $G$, then
$l(G)=l(N)+l(G/N)$. (In one direction, there is a chain of length $l(N)+l(G/N)$
passing through $N$; in the other direction, if $H$ and $K$ are subgroups of $G$
with $H<K$, then either $H\cap N<K \cap N$ or $HN/N<KN/N$, so in any subgroup
chain in $G$, each step involves taking a step in either $N$ or $G/N$.) So, for
any group $G$, we obtain $l(G)$ by summing the lengths of the composition
factors of $G$, and the problem is reduced to evaluating the lengths of the
finite simple groups. The result cited in the preceding paragraph deals with the
alternating groups. The problem was further considered by Seitz, Solomon and
Turull~\cite{st,gst}. It is not completely solved for all finite simple groups,
but we can say that it is reasonably well understood. In what follows, we will
regard a formula containing $l(G)$ for some group $G$ as ``known''.
We will use a special case of the following (known) result later. The function
$\Omega(n)$ gives the number of prime divisors of $n$, counted with their
multiplicities; equivalently, the number of prime power divisors of $n$.
\begin{prop}
\label{p:sol}
For any group $G$, $l(G)\le\Omega(|G|)$. Equality holds if and only if every
non-abelian composition factor of $G$ is a $2$-transitive permutation group
of prime degree in which the point stabiliser $H$ also satisfies
$l(H)=\Omega(|H|)$. In particular, any soluble group $G$ satisfies
$l(G)=\Omega(|G|)$.
\end{prop}
\paragraph{Remark} It follows from the Classification of Finite Simple Groups
that the non-abelian simple groups with this property are
$\PSL(2,2^a)$ where $2^a+1$ is a Fermat prime,
$\PSL(2,7)$, $\PSL(2,11)$, $\PSL(3,3)$ and $\PSL(3,5)$.
\begin{proof}
It is clear from Lagrange's Theorem that $l(G)\le\Omega(|G|)$. Equality holds
if and only if it holds in every composition factor.
A non-abelian finite simple group with this property has a subgroup of prime
index, and so acts as a transitive permutation group of prime degree. By
Burnside's theorem, it is $2$-transitive. The rest of the proposition is clear.
\end{proof}
Since a subsemigroup of a finite group is a subgroup, these results solve
particular cases of our general problem.
\section{Generalities about subsemigroup chains}\label{section-generalities}
In contrast to groups, where the length of a chain of subgroups is at most the
logarithm of the group order, a semigroup may have a chain whose length is equal
one less than its order. A null semigroup of any order has this property, as does
any semigroup which is not generated by a proper subset (i.e.\ any semigroup
whose $\J$-classes are semigroups of left or right zeros, and where $S/\J$ is a
chain).
If $S$ is a semigroup and $T$ is a subsemigroup of $S$, then $l(T) \leq l(S)$.
Similarly, if $\rho$ is a congruence on $S$, then, since subsemigroups are
preserved by homomorphisms, $l(S/\rho) \leq l(S)$.
Let $I$ be an ideal of the semigroup $S$ and let $S/I$ denote the \textit{Rees
quotient} of $S$ by $I$, i.e.\ the semigroup with the elements of $S\setminus I$
together with an additional zero $0\not\in S$; the multiplication is given by
setting $xy=0$ if the product in $S$ lies in $I$, and letting it have its value
in $S$ otherwise. As noted above, in the following result we do not assume
any finiteness condition.
\begin{prop}[cf. Lemma 1 in \cite{Ganyushkin2011aa}]
\label{prop-ideal}
Let $S$ be a semigroup and let $I$ be an ideal of $S$. Then
$l(S)=l(I)+l(S/I)$.
\end{prop}
\begin{proof}
We start by showing that $l(S)\geq l(I)+l(S/I)$. Suppose that
$\set{U_{\alpha}}{\alpha\hbox{ an ordinal},\ \alpha<l(I)}$ and
$\set{V_{\alpha}}{\alpha\hbox{ an ordinal},\ \alpha<l(S/I)}$ are
chains of non-empty proper subsemigroups of $I$ and $S/I$, respectively. Then
\begin{equation*}
W_{\alpha}=
\begin{cases}
U_{\alpha} &\text{if }\alpha<l(I)\\
(V_{\beta}\setminus \{0\})\cup I & \text{if }\alpha=l(I)+\beta<l(I)+l(S/I)
\end{cases}
\end{equation*}
is a chain of $l(I)+l(S/I)$ proper subsemigroups of $S$, and so $l(S)\geq
l(I)+l(S/I)$.
Suppose that $\mathcal{C}=\set{U_{\alpha}}{\alpha\hbox{ an ordinal},\
\alpha<l(S)}$ is a chain of non-empty proper subsemigroups such that
$U_{\alpha}<U_{\alpha+1}$ for all $\alpha<l(S)$. We will show that we may
assume, without loss of generality, that for all $\alpha<l(S)$ either:
\begin{equation}\label{dichotomy}
\big(U_{\alpha+1}\setminus U_{\alpha}\big) \cap I=\emptyset\qquad\hbox{ or
}\qquad U_{\alpha+1}\setminus U_{\alpha}\subseteq I .
\end{equation}
Since the union of a subsemigroup and an ideal is a semigroup, it follows that
$U_\alpha\cup (U_{\alpha+1}\cap I)$ is a subsemigroup of $S$. Hence
$$U_{\alpha}\leq U_\alpha\cup (U_{\alpha+1}\cap I)\leq U_{\alpha+1}.$$
If $l(S)$ is finite, then either $U_\alpha\cup (U_{\alpha+1}\cap
I)=U_{\alpha}$ or $U_\alpha\cup (U_{\alpha+1}\cap I)=U_{\alpha+1}$. Therefore
$\big(U_{\alpha+1}\setminus U_{\alpha}\big) \cap I=\emptyset$ or
$U_{\alpha+1}\setminus U_{\alpha}\subseteq I $, respectively.
Suppose that $l(S)$ is infinite. Then replacing any subchain
$U_{\alpha}<U_{\alpha+1}$ of $\mathcal{C}$ which fails (\ref{dichotomy}) by
$U_{\alpha}<U_\alpha\cup (U_{\alpha+1}\cap I)<U_{\alpha+1}$ we obtain another
chain of length $l(S)$. Furthermore, $U_\alpha\cup (U_{\alpha+1}\cap
I)<U_{\alpha+1}$ and $U_{\alpha}<U_\alpha\cup (U_{\alpha+1}\cap I)$ satisfy
(\ref{dichotomy}).
Assume without loss of generality that $\mathcal{C}$ satisfies
(\ref{dichotomy}). Note that $\set{U_\alpha\cap I}{\alpha<l(S)}$ is a chain
of non-empty subsemigroups of $I$ and $\set{U_{\alpha}/I}{\alpha<l(S)}$
is a chain of non-empty proper subsemigroups of $S/I$. By (\ref{dichotomy}),
for all $\alpha<l(S)$ either
$$U_{\alpha}\cap I=U_{\alpha+1}\cap I\quad\hbox{ and }\quad
U_{\alpha}/I<U_{\alpha+1}/I$$
or
$$U_{\alpha}/I=U_{\alpha+1}/I\quad\hbox{ and }\quad U_{\alpha}\cap
I<U_{\alpha+1}\cap I$$
Therefore $l(S)\leq l(I)+l(S/I)$.
\end{proof}
If $S$ is a semigroup and $x,y\in S$, then we write $x\J y$ if the principal
(two-sided) ideal $S^1xS^1$ generated by $x$ equals the ideal $S^1yS^1$
generated by $y$. The relation $\J$ is an equivalence relation called
\emph{Green's $\J$-relation}, and the equivalence classes are called
\emph{$\J$-classes}. If $J_1$ and $J_2$ are $\J$-classes of a semigroup, then
we write $J_1\leq_{\J}J_2$ if $S^1xS^1\subseteq S^1yS^1$ for any $x\in J_1$ and
$y\in J_2$. It is straightforward to verify that $\leq_{\J}$ is a partial order
on the $\J$-classes of $S$.
If $J$ is a $\J$-class of a finite semigroup $S$, then its \emph{principal
factor} $J^*$ is the semigroup with elements $J\cup \{0\}$ ($0\not\in J$) and
the product $xy$ of $x,y\in J$ defined to be its value in $S$ if $x,y, xy\in J$
and $0$ otherwise. In other words, if $J$ is not minimal, then $J^*$ is the Rees
quotient of the principal ideal generated by any element of $J$ by the ideal
consisting of those elements in $S$ whose $\J$-classes are not greater than $J$
under $\leq_{\J}$. If $J$ is minimal, then $J$ is a subsemigroup of $S$, and
$J^*$ is not isomorphic to the quotient in the previous sentence (which is
isomorphic to $J$), since $J^*$ has one more element.
A semigroup $S$ is \emph{regular} if for every $x\in S$ there is $y\in S$ such
that $xyx=x$.
\begin{lemma} \label{lemma-regular}
Let $S$ be a finite regular semigroup and let $J_1, J_2, \ldots, J_m$ be the
$\J$-classes of $S$. Then $l(S)=l(J_1^*)+l(J_2^*)+\cdots +l(J_m^*)-1$.
\end{lemma}
\begin{proof}
Assume without loss of generality that $J_1$ is maximal in the partial order
of $\J$-classes on $S$. It follows that $I=S\setminus J_1$ is an ideal. Hence
by Proposition~\ref{prop-ideal} it follows that $l(S)=l(I)+l(S/I)$.
If $I=\emptyset$, then $S=J_1$, and so
$l(S)=l(J_1)=l(J_1^*)-1$, in which case we are finished.
Suppose that $I\not=\emptyset$. Then $S/I$ is isomorphic to $J_1^*$ and so
$l(S)=l(I)+l(J_1^*)$. Since $S$ is regular and $I$ is an ideal, it
follows by Proposition A.1.16 in \cite{Rhodes2009aa} that $I$ is regular and
the $\J$-classes of $I$ are $J_2, J_3, \ldots, J_m$. Therefore repeating the
argument in the previous paragraph a further $m-2$ times, we obtain
$$l(S)=l(J_1^*)+l(J_2^*)+\cdots +l(J_m^*)-1,$$
as required.
\end{proof}
We conclude this section with a simple application of the results in this, and
the previous, sections.
\begin{prop}
Let $S$ be a semigroup generated by a single element $s$ and let $m,n\in\N$ be
the least numbers such that $s^{m+n}=s^{m}$. Then $l(S)=m+\Omega(n)-1$,
where $\Omega(n)$ is the number of prime power divisors of $n$.
\end{prop}
\begin{proof}
The $\J$-classes of $S$ are
$$\{\{s\}, \{s^2\}, \ldots, \{s^{m-1}\}, \{s^m, s^{m+1},\ldots,
s^{m+n-1}\}\},$$
where the non-singleton class is the cyclic group
$C_n$ with $n$ elements.
By repeatedly applying Proposition~\ref{prop-ideal},
$$l(S)=m+l(C_n)-1,$$
and $l(C_n)=\Omega(n)$ by Proposition~\ref{p:sol}.
\end{proof}
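For semigroups this small, the proposition can also be verified by exhaustive search (our own illustration, not part of the paper): enumerate the subsets of the multiplication table that are closed under the product, and find the longest chain under inclusion.

```python
from itertools import combinations

def monogenic_table(m, n):
    # Multiplication table of <s | s^(m+n) = s^m>, elements encoded as the
    # exponents 1, ..., m+n-1.
    def red(e):
        return e if e < m + n else m + (e - m) % n
    elems = list(range(1, m + n))
    return elems, {(i, j): red(i + j) for i in elems for j in elems}

def length(elems, mul):
    # Longest chain of non-empty subsemigroups, minus 1 (the paper's l(S)).
    subs = [frozenset(c) for r in range(1, len(elems) + 1)
            for c in combinations(elems, r)
            if all(mul[i, j] in c for i in c for j in c)]
    subs.sort(key=len)  # proper subsets come earlier
    best = {}
    for T in subs:
        best[T] = 1 + max((best[U] for U in subs if U < T), default=0)
    return max(best.values()) - 1

def Omega(k):
    count, d = 0, 2
    while d * d <= k:
        while k % d == 0:
            k //= d
            count += 1
        d += 1
    return count + (k > 1)

for m, n in [(1, 1), (2, 2), (3, 2), (1, 4), (2, 3)]:
    elems, mul = monogenic_table(m, n)
    assert length(elems, mul) == m + Omega(n) - 1
```

For instance, for $m=n=2$ the only chain of maximal length is $\{s^2\} < \{s^2,s^3\} < S$, of length $2 = m + \Omega(n) - 1$.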
\section{The full transformation semigroup}\label{section-full-trans}
\subsection{Long chains}
The full transformation semigroup, denoted $T_n$, consists of all functions with
domain and codomain $\{1,\ldots,n\}$ under the usual composition of functions.
Clearly $|T_n|=n^n$. In this section, we will prove the following theorem.
\begin{theorem}\label{thm-full-transformation-monoid}
$$l(T_n) \ge \ee^{-2} n^n - 2\ee^{-2}(1-\ee^{-1}) n^{n-1/3} - o(n^{n-1/3}).$$
\end{theorem}
The \emph{rank} of an element of $T_n$ is the cardinality of its image. The
$\mathscr{J}$-classes of $T_n$ are the sets $J_k$ of all elements of rank $k$.
Since $T_n$ is regular, Lemma~\ref{lemma-regular} implies that $l(T_n)$ is the
sum of the lengths of the principal factors $J_k^*$ of its
$\mathscr{J}$-classes, minus $1$.
An element $f$ of rank $k$ in $T_n$ has a kernel, which is the partition of
$\{1,\ldots,n\}$ into its pre-images (hence with $k$ parts), and an image, a
$k$-subset of $\{1,\ldots,n\}$. The set of all maps with given kernel $Q$ and
given image $A$ is an $\mathscr{H}$-class in the semigroup $T_n$, and has
cardinality $|A|!$. So the number of maps of rank $k$ is
$$N(n,k)=S(n,k){n\choose k}k!,$$
where $S(n,k)$ is the Stirling number of the second kind.
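A quick computational check of this count (our own illustration, using the standard inclusion-exclusion formula for $S(n,k)$): since the sets $J_k$ partition $T_n$, the numbers $N(n,k)$ must sum to $n^n$.

```python
from math import comb, factorial

def stirling2(n, k):
    # S(n, k) = (1/k!) * sum_j (-1)^j C(k, j) (k - j)^n.
    return sum((-1) ** j * comb(k, j) * (k - j) ** n
               for j in range(k + 1)) // factorial(k)

def N(n, k):
    # Number of rank-k maps in T_n: a kernel, an image, and a bijection
    # between the k kernel classes and the k image points.
    return stirling2(n, k) * comb(n, k) * factorial(k)

# The J-classes J_1, ..., J_n partition T_n.
for n in range(1, 8):
    assert sum(N(n, k) for k in range(1, n + 1)) == n ** n
```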
If $f_1$ and $f_2$ are two maps of rank $k$ with kernels $Q_1,Q_2$ and images
$A_1,A_2$ respectively, then $f_1f_2$ has rank $k$ if $A_1$ is a transversal for
the partition $Q_2$, and smaller rank otherwise. So, if $P$ is a set of
$k$-partitions of $\{1,\ldots,n\}$ (partitions with $k$ parts), and $S$ a set of
$k$-subsets, with the property that no element of $S$ is a transversal for any
element of $P$, then the set of maps with kernel in $P$ and image in $S$ is a
null semigroup in $J_{k}^*$. We call a set $(P,S)$ with this property a
\emph{league}, and define its \emph{content} to be $|P|\cdot|S|$.
If a league $(P,S)$ of rank $k$ has content $m$, then the set of all maps $f$
with kernel in $P$ and image in $S$ has the property that the product of any two
of its elements has rank smaller than $k$; so this set, together with zero,
forms a null subsemigroup of the principal factor $J_k^*$ of order $1+m\cdot
k!$. This semigroup has a chain of subsemigroups of length equal to one less
than its order. Combining these observations with Lemma~\ref{lemma-regular}, we
obtain the following result.
\begin{prop}
Let $F(n,k)$ be the largest content of a league of rank $k$ on
$\{1,\ldots,n\}$. Then
$$l(T_n)\ge\sum_{k=1}^nF(n,k)k! - 1.$$
\end{prop}
We prove Theorem~\ref{thm-full-transformation-monoid} by a suitable choice of
leagues, as follows. Choose one element of the set $\{1,\ldots,n\}$, say $n$;
let $P$ be the set of all $k$-partitions having $n$ as a singleton part, and $S$
the set of all $k$-subsets not containing $n$. Then clearly $(P,S)$ is a league,
and its content is
$${n-1\choose k}S(n-1,k-1).$$
\begin{lemma}
The expected rank $E(n)$ of a transformation in $T_n$ chosen uniformly at random
satisfies
$$
E(n) = (1 - \ee^{-1}) n + O(1).
$$
Moreover, the standard deviation $\sigma(n)$ of the rank satisfies
\begin{equation}
\sigma(n) \le \sqrt{\ee^{-1} - 2 \ee^{-2}} \sqrt{n+1}
\end{equation}
for $n$ large enough.
\end{lemma}
\begin{proof}
The exact values of the expectation $E(n)$ and of the variance $V(n)$ are given
in \cite{hig1}, where their asymptotic estimates are also given. The expected
rank is given by
$$
E(n) = n \left[ 1- \left( 1- \frac{1}{n} \right)^n \right] = (1- \ee^{-1})n + O(1).
$$
For the variance, we have
\begin{eqnarray*}
V(n) &=& n \left[ \left( 1- \frac{1}{n} \right)^n - \left( 1- \frac{2}{n}
\right)^n \right] + n^2 \left[ \left( 1- \frac{2}{n} \right)^n - \left( 1-
\frac{1}{n} \right)^{2n} \right]\\
&=& n \left[\ee^{-1} \left(1 - \frac{1}{2n} + o(n^{-1}) \right) - \ee^{-2}
\left( 1- \frac{2}{n} + o(n^{-1}) \right) \right]\\
&&{} + n^2 \left[ \ee^{-2} \left( 1 - \frac{2}{n} - \frac{2}{3n^2} +
o(n^{-2}) \right) - \ee^{-2} \left( 1 - \frac{1}{n} - \frac{1}{6n^2} +
o(n^{-2}) \right)\right]\\
&=& n(\ee^{-1} - 2\ee^{-2}) + \frac{3 \ee^{-2} - \ee^{-1}}{2} + o(1).
\end{eqnarray*}
Since $\frac{3 \ee^{-2} - \ee^{-1}}{2} < \ee^{-1} - 2\ee^{-2}$, we have $V(n)
\le (\ee^{-1} - 2\ee^{-2})(n+1)$ for $n$ large enough.
\end{proof}
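As a quick independent check (not part of the proof), the exact expectation $E(n)=n\left[1-(1-1/n)^n\right]$ can be compared with a brute-force average of the rank over all of $T_n$ for small $n$; the following Python sketch uses exact rational arithmetic:

```python
from fractions import Fraction
from itertools import product

def expected_rank_bruteforce(n):
    """Average rank (image size) over all n^n transformations of an n-set."""
    total = sum(len(set(f)) for f in product(range(n), repeat=n))
    return Fraction(total, n ** n)

def expected_rank_formula(n):
    """E(n) = n * (1 - (1 - 1/n)^n), exact in rational arithmetic."""
    return n * (1 - (1 - Fraction(1, n)) ** n)
```

The two agree exactly, since $E(n)$ is obtained by summing, over the $n$ points, the probability that the point lies in the image.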
We now return to the proof of Theorem~\ref{thm-full-transformation-monoid}.
Let $\tau = \sqrt{\ee^{-1} - 2 \ee^{-2}}$ and $K = \{k :
|k-E(n-1)| < n^{1/6} \tau n^{1/2}\}$; we then have
$$ n - k \ge n - E(n-1) - \tau n^{2/3} = \ee^{-1} n - \tau n^{2/3} - o(n^{2/3})$$
for any $k \in K$. Also, for all $n$ large enough, $K$ contains all $k$ such
that $|k - E(n-1)| < n^{1/6} \sigma(n-1)$. Chebyshev's inequality then yields
$$ \sum_{k \in K} N(n-1,k-1) \ge (n-1)^{n-1} \left( 1 - n^{-1/3} \right) \ge
\ee^{-1} n^{n-1} (1- n^{-1/3}).
$$
Therefore, we obtain an overall chain of length at least
\begin{eqnarray*}
\sum_{k \in K} {n-1 \choose k} S(n-1, k-1) k!
&=&\sum_{k \in K} (n-k) N(n-1,k-1)\\
&\ge& (\ee^{-1} n - \tau n^{2/3} - o(n^{2/3})) \ee^{-1} n^{n-1} \left( 1 - n^{-1/3} \right)\\
&=& \ee^{-2} n^n - 2\ee^{-2}(1-\ee^{-1})(n^{n-1/3}) - o(n^{n-1/3}).
\end{eqnarray*}
\subsection{Combinatorial results}
The question of finding $F(n,k)$, the largest possible content of a league
$(P,S)$, where $P$ is a set of $k$-partitions and $S$ a set of $k$-subsets
of $\{1,\ldots,n\}$, is purely combinatorial, and may be of some interest.
We give here some general bounds and some exact values.
We showed above that
\begin{equation}\label{equation-1}
F(n,k)\ge{n-1\choose k}S(n-1,k-1).
\end{equation}
Another strategy gives a different bound, which is better for small $k$:
for $n\ge2$, we have
\begin{equation}\label{equation-2}
F(n,k)\ge{n-2\choose k-2}S(n-1,k).
\end{equation}
This is proved by letting $S$ consist of all $k$-sets containing $1$ and $2$,
and $P$ the set of all $k$-partitions not separating $1$ and $2$.
Further improvements are possible.
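Constructions like these are easy to verify mechanically for small parameters. The Python sketch below (illustrative only) builds the league behind the bound (\ref{equation-2}) — all $k$-sets containing $1$ and $2$ against all $k$-partitions keeping $1$ and $2$ together — checks the league condition, and returns the content:

```python
from itertools import combinations

def set_partitions(elements, k):
    """All partitions of `elements` into exactly k non-empty parts."""
    elements = list(elements)
    if k <= 0 or len(elements) < k:
        return
    if k == 1:
        yield [set(elements)]
        return
    first, rest = elements[0], elements[1:]
    for p in set_partitions(rest, k):       # first joins an existing part
        for i in range(len(p)):
            q = [set(x) for x in p]
            q[i].add(first)
            yield q
    for p in set_partitions(rest, k - 1):   # first forms a part on its own
        yield [set(x) for x in p] + [{first}]

def is_transversal(W, partition):
    return all(len(W & part) == 1 for part in partition)

def league_content(n, k):
    """Content of the league of bound (2); assumes k >= 2."""
    S = [set(c) | {1, 2} for c in combinations(range(3, n + 1), k - 2)]
    P = [p for p in set_partitions(range(1, n + 1), k)
         if any({1, 2} <= part for part in p)]
    assert all(not is_transversal(W, p) for W in S for p in P)  # league
    return len(S) * len(P)
```

The returned content equals ${n-2\choose k-2}S(n-1,k)$, since merging $1$ and $2$ identifies the admissible partitions with the $k$-partitions of an $(n-1)$-set.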
In the extreme cases, we can evaluate $F(n,k)$ precisely, as follows.
\begin{prop}
\begin{enumerate}
\item $F(n,1)=0$.
\item For $n>3$, $F(n,2)=3(2^{n-3}-1)$, and a pair $(P,S)$ meets the bound
if and only if $S$ is the set of edges of a triangle $T$ and $P$ is the set
of $2$-partitions with $T$ contained in a part.
\item $F(n,n-1)=s^2(2s-1)$, $s^2(2s+1)$, or $s(s+1)(2s+1)$ when
$n=3s$, $3s+1$, or $3s+2$, respectively, with $s\ge1$.
\item $F(n,n)=0$.
\end{enumerate}
\end{prop}
\begin{proof}
The first and last cases are trivial and the proofs are omitted.
\smallskip
\textbf{(b)} Consider the case $k=2$. Then $S$ is the set of edges of a graph,
and the partitions in $P$ do not cross edges, so each part of such a partition
is a union of connected components of a graph. We are going to make moves which
will all increase $|S|\cdot|P|$.
First, by including edges so that each component is a complete graph, and by
including all partitions whose parts are unions of components, we do not
decrease $|S|\cdot|P|$. So we may assume that this is the case. Thus, if the
components have sizes $a_1,\ldots,a_r$, then
$$|S|\cdot|P| = \left(\sum_{i=1}^r{a_i\choose 2}\right)(2^{r-1}-1),$$
where $\sum_{i=1}^ra_i=n$.
Next we claim that, if $a_i\ge4$, then we can increase $|S|\cdot|P|$ by
replacing the part of size $a_i$ by two parts of sizes $1$ and $a_i-1$. For we
increase $r$ by $1$, so the second factor more than doubles; so it will suffice
to show that $${a_i-1\choose2}\ge\frac{1}{2}{a_i\choose 2},$$ since then the
first factor will be at least half of its previous value. Now the displayed
inequality is equivalent to $a_i\ge4$.
Also, splitting a part of size $2$ into two parts of size $1$ more than
doubles the second factor and reduces the first factor by $1$. So this is
also an improvement (except in the case where the resulting partition has
all $a_i=1$, when the product is zero).
So we can continue the process, increasing the objective function, until all
$a_i$ are equal to $1$ or $3$.
If two $a_i$ are equal to $3$, then replacing them by $5$ and $1$ improves
the sum, since
$$2{3\choose2}<{5\choose 2}.$$
Then we can replace the $5$ by three parts of sizes $3$, $1$ and $1$, by the
preceding argument.
So we end with a part of size $3$ and $n-3$ parts of size $1$, giving the
value $3(2^{n-3}-1)$ claimed, and also the extremal configuration described.
\smallskip
\textbf{(c)} Now consider the case $k=n-1$. Let $(P,S)$ satisfy the conditions.
Identify each element of $S$ by the single point it omits, and each element of
$P$ by the pair of points (or edge) in the same class; then the condition
asserts that no point of $S$ lies on an edge of $P$. So to optimise we want $P$ to
be a complete graph on, say, $m$ points, and $S$ to consist of the remaining
$n-m$ points. Then $|S|\cdot|P|=m(m-1)(n-m)/2$.
This is maximised when $m$ is roughly $2n/3$; a detailed but elementary
calculation gives the stated result.
\end{proof}
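Both closed forms can be confirmed computationally — by exhaustive search over graphs for case (b), and by direct maximisation for case (c). A Python sketch (assumptions: vertices are $0,\ldots,n-1$, and a $2$-partition is encoded by its part containing vertex $0$):

```python
from itertools import combinations

def F_n_2(n):
    """Brute-force F(n,2): for each graph S, the best P consists of every
    2-partition crossed by no edge of S; maximise |S|*|P| over all graphs."""
    edges = list(combinations(range(n), 2))
    cross = []
    for bits in range(1 << (n - 1)):            # part A = {0} plus a subset
        A = {0} | {i + 1 for i in range(n - 1) if bits >> i & 1}
        if len(A) == n:
            continue                            # both parts must be non-empty
        cross.append(sum(1 << e for e, (a, b) in enumerate(edges)
                         if (a in A) != (b in A)))
    best = 0
    for S in range(1 << len(edges)):
        compatible = sum(1 for c in cross if c & S == 0)
        best = max(best, bin(S).count("1") * compatible)
    return best

def F_n_nminus1(n):
    """Case (c): direct maximisation of m(m-1)(n-m)/2 over m."""
    return max(m * (m - 1) * (n - m) // 2 for m in range(n + 1))
```

For $k=2$ the search is feasible up to about $n=6$; the piecewise formula in case (c) can be checked against the direct maximum for as many $n$ as desired.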
Table~\ref{t:leagues} gives some further exact values, computed with the
GAP~\cite{gap} package GRAPE~\cite{grape}, except the value of $F(7, 4)$, which
was computed by Chris Jefferson using the Minion \cite{minion} constraint
satisfaction solver. Each table entry also gives a lower bound, which is the
maximum of the values in (\ref{equation-1}) and (\ref{equation-2}). The column
headed ``Total'' multiplies $F(n,k)$ (or the lower bound) by $k!$, and sums over
$k$. The entries in columns $k=1$ and $k=n$ are zero and have been omitted.
\begin{table}[ht]
\begin{equation*}
\begin{array}{||r||r||r|r|r|r|r||}
\hline
n & \hbox{Total} & k=2 & 3 & 4 & 5 & 6 \\
\hline
2 & 0,0 &&&&&\\
3 & 2,2 & 1,1 &&&&\\
4 & 24,18 & 3,3 & 3,2 &&&\\
5 & 330,326 & 9,7 & 28,28 & 6,6 &&\\
6 & 5382,5130 & 21,15 & 150,150 & 125,125 & 12,10 &\\
7 & 98250,93782 & 45,31 &760,620 & 1350,1350 & 390,390 & 20,15 \\
\hline
\end{array}
\end{equation*}
\caption{\label{t:leagues}Values and bounds for $F(n,k)$}
\end{table}
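The lower-bound entries in Table~\ref{t:leagues} can be reproduced directly from the bounds (\ref{equation-1}) and (\ref{equation-2}); a short Python sketch:

```python
from math import comb, factorial

def stirling2(n, k, _memo={}):
    """Stirling numbers of the second kind S(n,k), via the recurrence."""
    if k == n:
        return 1
    if k <= 0 or k > n:
        return 0
    if (n, k) not in _memo:
        _memo[n, k] = k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)
    return _memo[n, k]

def lower_bound(n, k):
    """max of bound (1), C(n-1,k)S(n-1,k-1), and bound (2), C(n-2,k-2)S(n-1,k)."""
    b1 = comb(n - 1, k) * stirling2(n - 1, k - 1)
    b2 = comb(n - 2, k - 2) * stirling2(n - 1, k) if k >= 2 and n >= 2 else 0
    return max(b1, b2)

def total_lower_bound(n):
    """The 'Total' column's lower bound: sum over k of lower_bound * k!."""
    return sum(lower_bound(n, k) * factorial(k) for k in range(1, n + 1))
```

For instance, the computed totals for $n=6$ and $n=7$ match the table entries $5130$ and $93782$.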
A kind of dual problem, which is also connected to the theory of transformation
semigroups (though not to the questions considered here) is the following:
\begin{quote}
Given $n$ and $k$, what is the smallest size of a collection of $k$-subsets
of $\{1,\ldots,n\}$ which contains a transversal to every $k$-partition of
$\{1,\ldots,n\}$?
\end{quote}
For some asymptotic results about this question, see \cite{bt}; for an
application to semigroups, in the special case where there is a permutation
group $G$ such that every orbit of $G$ on $k$-sets has this property,
see \cite{ac}.
\section{Order-preserving transformations}\label{section-order-preserving}
A transformation $f\in T_n$ is \textit{order-preserving} if $(i)f \le (j)f$
whenever $i \le j$. In this section we consider $O_n$, the semigroup of all
order-preserving transformations of $\{1,\ldots,n\}$. It is shown in
\cite{hig2}, for example, that $|O_n| = {2n-1 \choose n}$.
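The count $|O_n|={2n-1\choose n}$ is easy to confirm by enumeration, since order-preserving transformations are exactly the weakly increasing $n$-tuples over $\{1,\ldots,n\}$; in Python:

```python
from itertools import combinations_with_replacement
from math import comb

def count_order_preserving(n):
    """Weakly increasing n-tuples over an n-set = order-preserving maps."""
    return sum(1 for _ in combinations_with_replacement(range(n), n))
```

Each sorted multiset of $n$ values drawn from $\{1,\ldots,n\}$ corresponds to exactly one order-preserving map, which gives the binomial coefficient.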
We will denote by $F^*(n,k)$ the maximum content of a league $(P, S)$
where $P$ consists of $k$-partitions corresponding to kernels of
order-preserving transformations, and $S$ is a set of $k$-subsets of
$\{1,2,\ldots, n\}$.
\begin{prop}
$$l(O_n) \ge {2n - 3 \choose n} - 1
= \frac{(n-1)(n-2)}{(2n-1)(2n-2)} |O_n| - 1.$$
\end{prop}
Note that this lower bound is asymptotically $|O_n|/4$.
\begin{proof}
It is well known that each $\mathscr{H}$-class in $O_n$ is a singleton. For any
given value of the rank $k$, there are ${n \choose k}$ choices for the image of
a transformation in $O_n$, and ${n-1 \choose k-1}$ choices for its kernel, which
must be a partition of $\{1,\ldots,n\}$ into $k$ intervals \cite{gm} -- this is
because such a partition is specified by the $k-1$ points at which one interval
ends and the next begins. Therefore, the number of transformations of
rank~$k$ in $O_n$ is given by
$$ N^*(n,k) = {n \choose k} {n-1 \choose k-1}. $$
We can apply the same strategy as in $T_n$ in order to obtain long chains of
subsemigroups in $O_n$. Let $S$ be the set of all $k$-subsets of
$\{1,\ldots,n\}$ not containing $n$ and let $P$ be the set of partitions of
$\{1,\ldots,n\}$ into $k$ intervals such that the last interval is $\{n\}$. We
then have that $|S| = {n-1 \choose k}$ and $|P| = {n-2 \choose k-2}={n-2\choose
n-k}$ and so
\begin{equation}\label{equation-3}
F^*(n,k) \geq {n-1 \choose k} {n-2 \choose n-k}.
\end{equation}
Hence we have a chain of length
$$ \sum_{k=1}^n {n-1 \choose k} {n-2 \choose n-k} - 1 = {2n-3 \choose n} - 1, $$
using the Vandermonde convolution: ${m+n \choose k}=\sum_{i=0}^{k}{m \choose
i}{n \choose k-i}$.
\end{proof}
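Both the count $N^*(n,k)$ and the Vandermonde step can be checked mechanically; a Python sketch (enumeration for the rank distribution, exact integer arithmetic for the identity):

```python
from itertools import combinations_with_replacement
from math import comb

def rank_distribution(n):
    """counts[k] = number of order-preserving transformations of rank k."""
    counts = [0] * (n + 1)
    for f in combinations_with_replacement(range(n), n):
        counts[len(set(f))] += 1
    return counts

def vandermonde_chain_length(n):
    """sum_k C(n-1,k) C(n-2,n-k), which should equal C(2n-3,n)."""
    return sum(comb(n - 1, k) * comb(n - 2, n - k) for k in range(1, n + 1))
```

The rank distribution matches ${n\choose k}{n-1\choose k-1}$ for every $k$, and the summed chain length matches ${2n-3\choose n}$.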
As we did for the full transformation monoid, in the extreme cases, we can
evaluate $F^*(n,k)$ precisely, as follows.
\begin{prop}
\begin{enumerate}
\item $F^*(n,1) = 0$.
\item
\begin{equation*}
F^*(n,2)=\max\left\{
\frac{1}{2} (n-\lfloor r^* \rfloor +1)(n- \lfloor r^* \rfloor)(\lfloor r^* \rfloor -1),\;
\frac{1}{2} (n-\lceil r^* \rceil +1)(n- \lceil r^* \rceil)(\lceil r^* \rceil -1)
\right\}
\end{equation*}
where $r^* = \left(2(n+1) - \sqrt{(n+1)^2 - 3n}\right)/3$.
\item
$\displaystyle{F^*(n,n-1)=
\left \lfloor \frac{n-1}{2} \right\rfloor \left \lceil
\frac{n-1}{2} \right\rceil.}$
\item $F^*(n,n) = 0$.
\end{enumerate}
\end{prop}
The bound in (b) is asymptotically $(2/27)n^3$; that in (c), $n^2/4$.
\begin{proof}
The proofs are very similar to the case of arbitrary leagues; as such, we shall
use a similar notation.
\smallskip
\textbf{(b)} Again, we can represent $S$ as a graph and each part of any
partition in $P$ is a union of connected components of that graph. We can still
assume that $S$ forms a union of $r$ cliques, of cardinalities $a_1,\ldots,a_r$.
However, for a graph with $r$ connected components, there are at most $r-1$
possible choices for a partition in $P$, with equality if and only if the vertex
set of each connected component is an interval. Therefore, the maximum content
of a league with partitions into intervals is given by
$$
\max_{1 \le r \le n} \max_{a_1,\ldots,a_r} \left\{ (r-1) \sum_{i=1}^r
\frac{a_i(a_i-1)}{2} \right\},
$$
where the inner maximum is taken over all $a_1,\ldots,a_r$ such that $a_i \ge 1$
for all $i$ and $\sum_{i=1}^r a_i = n$. This inner maximum is achieved for $a_1
= \ldots = a_{r-1} = 1$, $a_r = n-r+1$ and is equal to $\frac{1}{2}
(n-r+1)(n-r)(r-1)$. Maximising this polynomial gives the result.
\smallskip
\textbf{(c)} Again, we can represent $S$ as a set of $m$ points and $P$ as a
graph on the remaining $n-m$ points. This time, $P$ can only contain edges of
the form $\{i,i-1\}$ for any $i$ such that neither $i$ nor $i-1$ is amongst the
$m$ points of $S$. Hence $P$ is a disjoint union of paths with at most $n-m-1$
edges, which is achieved if the points of $S$ are $1$ up to $m$ and $P$ is the
path from $m+1$ to $n$. Together, we obtain a content of $m(n-m-1)$, maximised
for $m = \lfloor (n-1)/2 \rfloor$ or $m = \lceil (n-1)/2 \rceil$.
\end{proof}
Table~\ref{t:opleagues} gives some values for the function $F^*(n,k)$ giving
the maximum content of a league where the parts of the partitions are
intervals, together with the lower bound in (\ref{equation-3}).
Again, the zeros for $k=1$ and $k=n$ are omitted.
\begin{table}[ht]
$$\begin{array}{||r||r||r|r|r|r|r||}
\hline
n & \hbox{Total} & k=2 & 3 & 4 & 5 & 6 \\
\hline
2 & 0,0 &&&&&\\
3 & 1,1 & 1,1 &&&&\\
4 & 5,5 & 3,3 & 2,2 &&&\\
5 & 22,21 & 6,6 & 12,12 & 4,3 &&\\
6 & 88,84 & 12,10 & 40,40 & 30,30 & 6,4 &\\
7 & 345,330 & 20,15 & 100,100 & 150,150 & 66,60 & 9,5 \\
\hline
\end{array}$$
\caption{\label{t:opleagues}Values and bounds for $F^*(n,k)$ in the monoid of
order-preserving transformations.}
\end{table}
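The closed forms in cases (b) and (c) of the proposition can be compared with direct maximisation over the integer parameter used in each proof; a Python sketch:

```python
from math import floor, ceil, sqrt

def F_star_2(n):
    """Case (b) closed form: the cubic evaluated at floor(r*) and ceil(r*)."""
    r_star = (2 * (n + 1) - sqrt((n + 1) ** 2 - 3 * n)) / 3
    def val(r):
        return (n - r + 1) * (n - r) * (r - 1) // 2
    return max(val(floor(r_star)), val(ceil(r_star)))

def F_star_2_direct(n):
    """Direct maximisation of (n-r+1)(n-r)(r-1)/2 over integer r."""
    return max((n - r + 1) * (n - r) * (r - 1) // 2 for r in range(1, n + 1))

def F_star_nminus1_direct(n):
    """Case (c): maximise m(n-m-1) over m, as in the proof."""
    return max(m * (n - m - 1) for m in range(n))
```

The computed values reproduce the $k=2$ and $k=n-1$ columns of Table~\ref{t:opleagues}.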
\section{The general linear semigroup}\label{section-general-linear}
For $q$ a prime power and $n$ a positive integer, let $\mathrm{GLS}(n,q)$
denote the semigroup of all linear maps on the $n$-dimensional vector space
$V$ over the Galois field $\GF(q)$ of order $q$. We have
$|\GLS(n,q)|=q^{n^2}$, since the linear maps are representable as $n\times n$
matrices.
Our technique here resembles that in the case of the full transformation
semigroup. For $1\le k\le n$, the set of linear maps of rank at most $k$
forms an ideal, so we can analyse the principal factors.
One important difference is that the structure is far more top-heavy. Indeed,
the group $\GL(n,q)$ of maps of full rank $n$ makes up a non-zero proportion
of the whole semigroup.
\begin{prop}
Given $q$, there is a constant $c(q)$, with $0<c(q)<1$, so that
$$\lim_{n\to\infty}\frac{|\mathrm{GL}(n,q)|}{|\mathrm{GLS}(n,q)|}=c(q).$$
\label{p:ordergl}
\end{prop}
\begin{proof}
\begin{eqnarray*}
|\GL(n,q)| &=& \prod_{k=1}^n(q^n-q^{n-k}) \\
&=& q^{n^2}\prod_{k=1}^n (1-q^{-k}) \\
&\ge& |\GLS(n,q)|\prod_{k\ge1}(1-q^{-k}).
\end{eqnarray*}
The infinite product converges to a limit $c(q)>0$. Euler's Pentagonal
Numbers Theorem \cite[Theorem 4.1.3]{hall} gives
$$c(q)=\sum_{k\in\mathbb{Z}}(-1)^kq^{-k(3k-1)/2}=1-q^{-1}-q^{-2}+q^{-5}
+q^{-7}-q^{-12}-\cdots,$$
a form handy for calculation. For example, $c(2)=0.288788095\ldots$. In
fact, $c(q)$ is an evaluation of Jacobi's theta-function.
\end{proof}
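Both expressions for $c(q)$ — the truncated infinite product and the pentagonal-number series — are easy to evaluate numerically; a Python sketch:

```python
def c_product(q, terms=200):
    """Partial product of prod_{k>=1} (1 - q^{-k})."""
    p = 1.0
    for k in range(1, terms + 1):
        p *= 1.0 - q ** (-k)
    return p

def c_pentagonal(q, terms=20):
    """sum_{k in Z} (-1)^k q^{-k(3k-1)/2}, truncated to |k| <= terms."""
    return sum((-1) ** k * float(q) ** (-(k * (3 * k - 1)) // 2)
               for k in range(-terms, terms + 1))
```

The series converges extremely fast (the exponents are the generalised pentagonal numbers $1,2,5,7,12,\ldots$), which is why it is the handier form for calculation.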
The other main difference here is that the kernel of a linear map of rank $k$
is the partition of the vector space into cosets of a $(n-k)$-dimensional
subspace $U$ (the ``kernel'' of the map in the usual sense of linear algebra),
and a $k$-dimensional subspace $W$ is a transversal for the kernel partition
if and only if $U\cap W=\{0\}$. So the linear analogue of a league is a pair
$(P,S)$, where $P$ is a set of $(n-k)$-dimensional subspaces and $S$ a set
of $k$-dimensional subspaces such that, for all $U\in P$ and $W\in S$, we
have $U\cap W\ne\{0\}$. The simplest construction of a league is to take an
$(n-1)$-dimensional subspace $H$ of $V$, and to take $S$ and $P$ to consist of
all subspaces of the appropriate dimension contained in $H$; or dually, take
a $1$-dimensional subspace $K$ of $V$, and to take $S$ and $P$ to be all the
subspaces of the appropriate dimension containing $K$.
For $1\le k\le n$, the number of maps of rank $k$ is
$$\left(\gauss{n}{k}{q}\right)^2|\GL(k,q)|.$$
Here $\gauss{n}{k}{q}$ is the Gaussian coefficient, the number
of $k$-dimensional subspaces of an $n$-dimensional vector space over $\GF(q)$.
This coefficient is a monic polynomial in $q$ of degree $k(n-k)$ with
non-negative
integer coefficients, so is at least $q^{k(n-k)}$. Using the fact that
$|\GL(k,q)|\ge c(q)q^{k^2}$, we see that the number of maps of rank $k=n-d$
is at least $c(q)q^{n^2-d^2}$. So the largest principal factors are at the top.
The league described above, in the principal factor of rank $k$, contains
$$\gauss{n-1}{k}{q}\gauss{n-1}{k-1}{q}$$ pairs. We have
\begin{eqnarray*}
\gauss{n-1}{k}{q}\gauss{n-1}{k-1}{q}\Bigg/\left(\gauss{n}{k}{q}\right)^2
&=& \frac{(q^k-1)(q^{n-k}-1)}{(q^n-1)^2} \\
&\ge& (1-1/q)^2q^{-n}.
\end{eqnarray*}
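The simplification of this ratio of Gaussian coefficients, and the final inequality, can be checked numerically; a Python sketch using exact integer cross-multiplication:

```python
def gauss(n, k, q):
    """Gaussian binomial coefficient [n, k]_q, via the product formula."""
    if k < 0 or k > n:
        return 0
    num = den = 1
    for i in range(k):
        num *= q ** (n - i) - 1
        den *= q ** (i + 1) - 1
    return num // den

def identity_holds(n, k, q):
    """[n-1,k][n-1,k-1] / [n,k]^2 == (q^k-1)(q^{n-k}-1)/(q^n-1)^2,
    cross-multiplied to stay in exact integers."""
    lhs = gauss(n - 1, k, q) * gauss(n - 1, k - 1, q) * (q ** n - 1) ** 2
    rhs = gauss(n, k, q) ** 2 * (q ** k - 1) * (q ** (n - k) - 1)
    return lhs == rhs
```

The same sketch also confirms the bound $\gauss{n}{k}{q}\ge q^{k(n-k)}$ used above.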
Altogether, we obtain a chain of length at least
\begin{eqnarray*}
l(\GLS(n,q)) &\ge& (1- 1/q)^2 q^{-n} \sum_{k=0}^{n-1} \left( \gauss{n}{k}{q}
\right)^2 |\GL(k,q)| - 1\\
&=& (1- 1/q)^2 q^{-n}(|\GLS(n,q)| - |\GL(n,q)|) - 1.
\end{eqnarray*}
By Proposition~\ref{p:ordergl}, we have
$$
|\GLS(n,q)| - |\GL(n,q)| \ge q^{n^2}(1-c(q)-o(1)),
$$
where the $o(1)$ is for fixed $q$ as $n\to\infty$. We obtain:
\begin{theorem}
$l(\GLS(n,q)) \ge (1-c(q)-o(1))(1- 1/q)^2 q^{-n} |\GLS(n,q)|$.
\end{theorem}
\section{Inverse semigroups}\label{section-inverse}
An \emph{inverse semigroup} is a semigroup $S$ such that for all $x\in S$, there
exists a unique $x^{-1}\in S$ where $xx^{-1}x=x$ and $x^{-1}xx^{-1} = x^{-1}$. The
\emph{symmetric inverse monoid} consists of the injective functions between
subsets of a fixed set $X$. It is the analogue of the symmetric group in the
context of inverse semigroups, i.e.\ every inverse semigroup is isomorphic to an
inverse subsemigroup of some symmetric inverse monoid.
The length of the symmetric inverse monoid on any finite set was determined in
\cite{Ganyushkin2011aa}. The main theorem of \cite{Ganyushkin2011aa} in fact
holds for arbitrary finite inverse semigroups, with essentially the same proof.
We state the theorem in its full generality,
and give a slightly different proof from that in \cite{Ganyushkin2011aa}, which
makes use of the description of the maximal subsemigroups of a Rees matrix
semigroup given in \cite{Graham1968aa}.
Let $G$ be a group and let $n\in \mathbb{N}$. Then the \emph{Brandt semigroup}
$B(G, n)$ has elements $\left(\{1,\ldots, n\}\times G\times \{1,\ldots,
n\}\right)\cup \{0\}$ with multiplication defined by
\begin{equation*}
(i,g,j)(k,h,l)=
\begin{cases}
(i, gh, l) & \text{if }j = k \\
0 & \text{if }j \not= k
\end{cases}
\end{equation*}
and $0x=x0=0$ for all $x\in B(G, n)$.
It follows from the Rees Theorem \cite[Theorem 5.1.8]{Howie1995aa} that the
principal factor of a $\J$-class $J$ of a finite inverse semigroup $S$ is
isomorphic to $B(G, n)$ where $G$ is any maximal subgroup of $S$ contained in
$J$ and $n$ is the number of $\mathscr{L}$- and $\mathscr{R}$-classes of $J$.
Every inverse semigroup is regular, and so, to calculate the length of an
inverse semigroup, it suffices, by Lemma~\ref{lemma-regular}, to work out the
length of a Brandt semigroup.
\begin{prop}\label{prop-brandt}
Let $G$ be a group and let $n\in \N$. Then:
\begin{equation}\label{formula}
\begin{aligned}
l(B(G,n)) & = n(l(G)+1)+\frac{n(n-1)}{2}|G|+n-1\\
& = n(l(G)+2)+\frac{n(n-1)}{2}|G|-1.
\end{aligned}
\end{equation}
\label{thm-brandt}
\end{prop}
\begin{proof}
We proceed by induction on $n$ and $|G|$. If $n=1$, then
$l(B(G, n))=l(G)+1$ and (\ref{formula}) holds.
Let $n\in \mathbb{N}$, $n>1$, and let $G$ be a finite group. Suppose that if
either: ($m<n$ and $|H|=|G|$) or ($m=n$ and $|H|<|G|$), then
$$l(B(H, m))=m(l(H)+1)+\frac{m(m-1)}{2}|H|+m-1.$$
We will show that (\ref{formula}) holds for $n$ and $G$.
Remark 1 of \cite{Graham1968aa} implies that a maximal subsemigroup of $B(G,
n)=(I\times G\times I)\cup \{0\}$ is isomorphic to either:
\begin{enumerate}[(i)]
\item $B(H, n)$ where $H$ is a maximal subgroup of $G$; or
\item $B(G, n)\setminus (J\times G \times K)$ where $J$ and $K$ partition $I$.
\end{enumerate}
(This is also shown directly in Theorem 6 of \cite{Ganyushkin2011aa}.)
The semigroups of type (ii) are always maximal, while the ones in part (i)
may or may not be. It follows that either
$$l(B(G, n))=1+l(B(H, n))$$
for some maximal subgroup $H$ of $G$, or
$$l(B(G, n))=1+l(B(G, n)\setminus (J\times G \times K))$$
where $J$ and $K$ partition $I$.
In the latter case,
$$B(G, n)\setminus (J\times G \times K)=(J\times G\times J)\cup (K\times
G\times K)\cup (K\times G\times J)\cup \{0\}.$$
It is routine to verify that
$$(J\times G\times J)\cup\{0\}\cong B(G, |J|)\quad\hbox{ and }\quad (K\times
G\times K)\cup \{0\}\cong B(G, |K|)$$
and that
$$(K\times G\times J)\cup \{0\}$$
is a null ideal of $B(G, n)\setminus (J\times G \times K)$.
Thus, by applying Proposition~\ref{prop-ideal} to the null ideal and
Lemma~\ref{lemma-regular} to the (regular!) quotient of $B(G, n)\setminus
(J\times G \times K)$ by the ideal, it
follows that
$$l(B(G, n)\setminus (J\times G \times K))=
l(B(G, |J|))+l(B(G, |K|))+l(K\times G\times J\cup \{0\}).$$
Since every non-empty subset of $(K\times G\times J)\cup \{0\}$ is a
subsemigroup, it follows that
$$l((K\times G\times J)\cup \{0\})=|J||K||G|$$
and so by induction that
$$l(B(G, n)\setminus (J\times G \times K))=n(l(G)+1)+\frac{n(n-1)}{2}|G|+n-2.$$
By the second part of the inductive hypothesis
\begin{eqnarray*}
l(B(H, n))&=&n(l(H)+1)+\frac{n(n-1)}{2}|H|+n-1\\
&\leq&n(l(G)+1)+\frac{n(n-1)}{2}|G|+n-2\\
&=&l(B(G, n)\setminus (J\times G \times K)).
\end{eqnarray*}
Thus, when we are constructing a chain of semigroups, if we have a choice
between semigroups of types (i) or (ii), we should choose type (ii) to
obtain the longest possible chain. We conclude that
$$l(B(G, n))=1+l(B(G, n)\setminus (J\times G \times K))$$
and (\ref{formula}) holds.
\end{proof}
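For very small parameters, the formula can be confirmed by exhaustive search over all subsemigroups. A Python sketch (the zero of $B(G,n)$ is encoded as `None`; groups are given as element lists with a multiplication function):

```python
def brandt(G, mul, n):
    """Elements and multiplication of B(G, n), with zero encoded as None."""
    elts = [None] + [(i, g, j) for i in range(n) for g in G for j in range(n)]
    def times(x, y):
        if x is None or y is None or x[2] != y[0]:
            return None
        return (x[0], mul(x[1], y[1]), y[2])
    return elts, times

def length(elts, times):
    """l(S): number of subsemigroups in a longest chain, minus one."""
    idx = {e: i for i, e in enumerate(elts)}
    closed = []
    for bits in range(1, 1 << len(elts)):       # non-empty closed subsets
        sub = [e for i, e in enumerate(elts) if bits >> i & 1]
        if all(bits >> idx[times(x, y)] & 1 for x in sub for y in sub):
            closed.append(bits)
    closed.sort(key=lambda b: bin(b).count("1"))
    best = {}                                   # longest chain ending here
    for b in closed:
        best[b] = max((best[a] + 1 for a in best if a & b == a and a != b),
                      default=0)
    return max(best.values())
```

The search is only feasible for a handful of elements, but it confirms the formula for $B(G,n)$ with $G$ trivial or of order $2$ and $n\le 3$.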
The following result for inverse semigroups now follows immediately from
Lemma~\ref{lemma-regular} and Proposition~\ref{thm-brandt}.
\begin{theorem}[cf. Theorem 7 in \cite{Ganyushkin2011aa}]
Let $S$ be a finite inverse semigroup with $\J$-classes $J_1, \ldots, J_m$.
If $n_i\in \mathbb{N}$ denotes the number of $\mathscr{L}$- and
$\mathscr{R}$-classes in $J_i$, and $G_i$ is any maximal subgroup of $S$
contained in $J_i$, then
\begin{eqnarray*}
l(S) &=& -1+\sum_{i=1}^{m} l(B(G_i, n_i))\\
&=& -1+\sum_{i=1}^m n_i(l(G_i)+1)+\frac{n_i(n_i-1)}{2}|G_i|+(n_i-1).
\end{eqnarray*}
\label{thm-inverse}
\end{theorem}
Given a specific inverse semigroup $S$, Theorem~\ref{thm-inverse} gives a
formula for $l(S)$ in terms of the numbers $n_i$ of $\L$- and $\R$-classes and
the lengths of the maximal subgroups $G_i$ of the $\J$-classes of $S$. Thus to
determine the length of a particular semigroup, it suffices to determine
these values.
For example, if $I_n$ denotes the symmetric inverse monoid on an $n$-element set
and $x,y\in I_n$, then $x\J y$ if and only if the size of the domain of $x$ is
equal to the size of the domain of $y$ \cite[Exercise 5.11.2]{Howie1995aa}.
Hence the number of $\J$-classes in $I_n$ is $n+1$, corresponding to the
possible sizes of subsets of $\{1,\ldots, n\}$. If $J$ is the $\J$-class of
$I_n$ consisting of partial permutations defined on $i$ points, then the
number of $\L$- and $\R$-classes in $J$ is ${n\choose i}$ and every maximal
subgroup of $J$ is isomorphic to the symmetric group $S_i$ on $i$ points. So, in
the formula in Theorem~\ref{thm-inverse}, $m=n+1$, $n_i={n \choose i-1}$ and
$G_i=S_{i-1}$, so we have
$$l(I_n)=-1+\sum_{i=1}^{n+1}\left[{n\choose i-1}(l(S_{i-1})+2)+
{n\choose i-1}\left({n\choose i-1}-1\right)\frac{(i-1)!}{2}-1\right],$$
where the values of $l(S_{i-1})$ for $i>1$ are given by
Theorem~\ref{symmetric_theorem} and $l(S_0)=0$. The first few values of
$l(I_n)$ are given in Table~\ref{fig-inverse}, for further terms see
\cite{sloane1}.
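Assuming the Cameron--Solomon--Turull formula $l(S_m)=\lceil 3m/2\rceil-b(m)-1$, where $b(m)$ is the number of ones in the binary expansion of $m$, for the cited Theorem~\ref{symmetric_theorem}, the first row of Table~\ref{fig-inverse} can be reproduced in a few lines of Python:

```python
from math import comb, factorial

def l_symmetric(m):
    """Length of the subgroup lattice of S_m: ceil(3m/2) - b(m) - 1."""
    if m <= 1:
        return 0
    return (3 * m + 1) // 2 - bin(m).count("1") - 1

def l_symmetric_inverse(n):
    """l(I_n) from the displayed formula, with G_i = S_{i-1}."""
    total = -1
    for i in range(n + 1):       # i = rank, so n_i = C(n, i)
        c = comb(n, i)
        total += c * (l_symmetric(i) + 2) + c * (c - 1) * factorial(i) // 2 - 1
    return total
```

The computed values $1, 6, 25, 116, 722, \ldots$ match the table.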
Three further examples are: the dual symmetric inverse monoid $I_n^*$ where
$m=n$, $n_i$ is the Stirling number of the second kind $S(n,i)$, and $G_i=S_i$;
see \cite[Theorem 2.2]{Fitzgerald1998aa}, Table~\ref{fig-inverse}, and
\cite{sloane2}; the partial injective order-preserving mappings
$POI_n$ on an $n$-element chain where $m=n+1$, $n_i={n\choose i-1}$, and $G_i$
is trivial; see \cite{Fernandes2001aa}, Table~\ref{fig-inverse}, and
\cite{sloane3}; the partial injective orientation-preserving
mappings $POPI_n$ on an $n$-element chain where $m=n+1$, $n_i={n\choose i-1}$,
and $G_i$ is the cyclic group with $i$ elements when $i>0$ and the trivial group
when $i=0$; see \cite{Fernandes2000aa}, Table~\ref{fig-inverse}, and
\cite{sloane4}.
\begin{table}[ht]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}\hline
$n$ &1&2&3&4&5&6&7&8&9\\\hline
$l(I_n)$ &1& 6& 25& 116& 722& 5956& 59243& 667500& 8296060\\\hline
$l(I_n^*)$ &0& 2& 17& 180& 3298& 88431& 3064050& 130905678&
6732227475\\\hline
$l(POI_n)$& 1& 5& 17& 53& 167& 550& 1899& 6809& 25067\\\hline
$l(POPI_n)$&1& 6& 24& 92& 363& 1483& 6191& 26077& 109987\\\hline
\end{tabular}
\caption{The length of the longest chain of non-empty proper subsemigroups
of some well-known inverse semigroups.} \label{fig-inverse}
\end{center}
\end{table}
We consider the asymptotic value of $l(I_n)$ compared to $|I_n|$.
\begin{theorem}
If $S$ is any of the symmetric inverse monoid $I_n$, the dual symmetric
inverse monoid $I_n^*$, the partial order-preserving injective mappings
$POI_n$, the partial orientation-preserving injective mappings $POPI_n$, then
$$\lim_{n\to\infty}\frac{l(S)}{|S|}=\frac{1}{2}.$$
\end{theorem}
\begin{proof}
We present the proof in the case that $S=I_n$; the other proofs are
similar.
It is routine to check that
$$|I_n|=\sum_{i=0}^n {n\choose i}^2i!$$
(see also \cite[Exercise 5.11.3]{Howie1995aa}). By Theorem~\ref{thm-inverse},
\begin{eqnarray*}
l(I_n)&=&-1+\sum_{i=0}^n \left[{n \choose i}(l(S_i)+1)+{n\choose i}\left({n
\choose i}-1\right)\frac{i!}{2}+{n\choose i}-1\right]\\
&=&\frac{|I_n|}{2}-1+\sum_{i=0}^n\left[ {n \choose i}(l(S_i)+1)-{n\choose
i}\frac{i!}{2}+{n\choose i}-1\right]\\
&=&\frac{|I_n|}{2}-n-2+\sum_{i=0}^n{n\choose
i}\left[l(S_i)+2-\frac{i!}{2}\right]\\
&=& \frac{|I_n|}{2} + \frac{n-1}{2}+\sum_{i=2}^n{n\choose
i}\left[l(S_i)+2-\frac{i!}{2}\right].
\end{eqnarray*}
Note that, for $n\geq 1$,
\begin{equation}\label{InEstimate}
|I_n|\geq {n \choose n-1}^2(n-1)! = n \cdot n!
\end{equation}
and so to show that $l(I_n)$ is asymptotically $\frac{|I_n|}{2}$ it suffices
to show that the ratio of
$$\sum_{i=2}^n{n\choose i}\left[l(S_i)+2-\frac{i!}{2}\right]$$
to $|I_n|$ tends to $0$ as $n\to\infty$.
By Theorem~\ref{symmetric_theorem}
\begin{eqnarray*}
\left| \sum_{i=2}^n{n\choose i}\left[l(S_i)+2-\frac{i!}{2}\right]\right| \leq
\sum_{i=2}^n{n\choose i}\left[ \frac{3i}{2}+2+\frac{i!}{2}\right]
\leq 3\sum_{i=2}^n{n\choose i}i!.
\end{eqnarray*}
Using the inequalities (\ref{InEstimate}) and
\begin{equation*}
|I_n|=\sum_{i=0}^n {n\choose i}^2i!\geq n\sum_{i=2}^{n-1} {n\choose i} i!
\end{equation*}
it follows that
$$\frac{\sum_{i=2}^{n} {n\choose i} i!}{|I_n|}=
\frac{n!}{|I_n|} + \frac{\sum_{i=2}^{n-1} {n\choose i} i!}{|I_n|}\leq
\frac{2}{n} \rightarrow 0$$
as $n\rightarrow \infty$ and the proof is complete.
\end{proof}
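The closed form for $|I_n|$ can itself be verified by direct enumeration of partial injective maps; a Python sketch:

```python
from itertools import product
from math import comb, factorial

def In_size_bruteforce(n):
    """Each point maps to a value or is undefined (None); the defined
    values must be pairwise distinct."""
    count = 0
    for f in product(list(range(n)) + [None], repeat=n):
        vals = [v for v in f if v is not None]
        if len(vals) == len(set(vals)):
            count += 1
    return count

def In_size_formula(n):
    """|I_n| = sum_i C(n,i)^2 i! (choose domain, image, and a bijection)."""
    return sum(comb(n, i) ** 2 * factorial(i) for i in range(n + 1))
```

Enumeration is feasible up to about $n=6$ (there are $(n+1)^n$ candidate tuples).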
\subsection{Longest chains of inverse subsemigroups}
In this section we consider the question of determining the longest chains of
\emph{inverse} subsemigroups of a finite inverse semigroup. We define the
\textit{inverse subsemigroup length} of an inverse semigroup $S$ to be the
largest number of non-empty inverse subsemigroups of $S$ in a chain minus 1;
this is denoted $l^*(S)$. Since every group is an inverse semigroup, and every
subsemigroup of a finite group is a subgroup, if $G$ is a finite group, then
$l(G)=l^*(G)$.
We will prove the following theorem.
\begin{theorem}
\label{thm-inverse-inverse}
Let $S$ be a finite inverse semigroup with $\J$-classes $J_1, \ldots, J_m$.
If $n_i\in \mathbb{N}$ denotes the number of $\mathscr{L}$- and
$\mathscr{R}$-classes in $J_i$, and $G_i$ is any maximal subgroup of $S$
contained in $J_i$, then
\begin{eqnarray*}
l^*(S) &=& -1+\sum_{i=1}^{m} l^*(B(G_i, n_i))\\
&=& -1+\sum_{i=1}^m n_i(l(G_i)+1) + n_i - 1.
\end{eqnarray*}
\end{theorem}
The proof is similar to the proof of Theorem~\ref{thm-inverse}. We start by
proving analogues of Proposition~\ref{prop-ideal} and Lemma~\ref{lemma-regular}
for the inverse subsemigroup length, rather than length, of an inverse
semigroup.
To prove the analogue of Proposition~\ref{prop-ideal}, we require the following
facts about inverse semigroups. Let $S$ be an inverse semigroup, let $T$ and $U$
be inverse subsemigroups, and let $I$ be an ideal in $S$. Then the following are
inverse semigroups: the ideal $I$, the quotient $S/I$, the intersection $T\cap
U$, and the union $T\cup I$. If $V$ is an inverse subsemigroup of $S/I$, then
$V\setminus\{0\} \cup I$ is an inverse subsemigroup of $S$.
\begin{prop}
\label{prop-ideal-inverse}
Let $S$ be an inverse semigroup and let $I$ be an ideal of $S$. Then
$l^*(S)=l^*(I)+l^*(S/I)$.
\end{prop}
\begin{proof}
Using the facts noted before the proposition, the proof follows by an
argument analogous to that used to prove Proposition~\ref{prop-ideal}.
\end{proof}
The analogue of Lemma~\ref{lemma-regular} follows as a corollary of
Proposition~\ref{prop-ideal-inverse} using the analogue of the proof of
Lemma~\ref{lemma-regular}.
\begin{cor} \label{cor-inverse}
Let $S$ be a finite inverse semigroup and let $J_1, J_2, \ldots, J_m$ be the
$\J$-classes of $S$. Then $l^*(S)=l^*(J_1^*)+l^*(J_2^*)+\cdots +l^*(J_m^*)-1$.
\end{cor}
As in the previous subsection, to calculate the inverse subsemigroup length of
an inverse semigroup, it suffices, by Corollary~\ref{cor-inverse}, to find the
inverse subsemigroup length of a Brandt semigroup.
\begin{prop}
Let $G$ be a group and let $n\in \N$. Then:
\begin{equation}
\label{inverse-formula}
l ^ * (B(G,n)) = n (l(G) + 1) + n - 1 = n(l(G)+2)-1
\end{equation}
\end{prop}
\begin{proof}
We proceed by induction on $n$ and $|G|$. If $n=1$, then
$l^*(B(G, n))=l(G)+1$ and (\ref{inverse-formula}) holds.
Let $n\in \mathbb{N}$, $n > 1$, and let $G$ be a finite group. Suppose that if
either: ($m < n$ and $|H| = |G|$) or ($m = n$ and $|H| < |G|$), then
$$l^*(B(H, m))=m(l(H) + 1) + m - 1.$$
We will show that (\ref{inverse-formula}) holds for $n$ and $G$.
As in the proof of Proposition~\ref{prop-brandt}, a maximal subsemigroup of
$B(G, n)=(I\times G\times I)\cup \{0\}$ is isomorphic to either:
\begin{enumerate}[(i)]
\item $B(H, n)$ where $H$ is a maximal subgroup of $G$; or
\item $B(G, n)\setminus (J\times G \times K)$ where $J$ and $K$ partition $I$.
\end{enumerate}
The subsemigroups of type (i) are inverse subsemigroups, and hence maximal
inverse subsemigroups. The subsemigroups of type (ii) are not regular
semigroups, since
$$B(G, n)\setminus (J\times G \times K)=(J\times G\times J)\cup (K\times
G\times K)\cup (K\times G\times J)\cup \{0\},$$
and $(K\times G\times J)\cup \{0\}$ is a null subsemigroup. It follows that
$$U := (J\times G\times J)\cup (K\times G\times K)\cup \{0\}$$
is a maximal inverse subsemigroup of $B(G, n)\setminus (J\times G \times K)$,
and hence of $B(G, n)$. Since the $\mathscr{J}$-classes of $U$ are
$J\times G\times J$, $K\times G\times K$, and $\{0\}$, by
Corollary~\ref{cor-inverse},
$$l^*(U)= l^*(B(G, |J|)) + l^*(B(G, |K|)).$$
Therefore either:
$$l^*(B(G, n)) = 1 + l ^ * (B(H, n))$$
for some maximal subgroup $H$ of $G$, or
$$l^*(B(G, n)) = 1 + l ^ * (B(G, m)) + l ^ * (B(G, r))$$
where $m + r = n$. By induction, since $n > 1$ and $l(H)\le l(G)-1$ for any
maximal subgroup $H$ of $G$,
\begin{eqnarray*}
1 + l ^ * (B(G, m)) + l ^ * (B(G, r)) & = & n (l(G) + 1) + n - 1 \\
& > & n\, l(G) + n\\
& \ge & n(l(H) + 1) + n\\
& = & 1 + l ^ * (B(H, n)),
\end{eqnarray*}
and the result follows.
\end{proof}
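As with Proposition~\ref{prop-brandt}, the formula can be confirmed by exhaustive search for tiny parameters, now restricting to subsets closed under both multiplication and inversion. A Python sketch (zero encoded as `None`):

```python
def brandt(G, mul, n):
    """Elements and multiplication of B(G, n), with zero encoded as None."""
    elts = [None] + [(i, g, j) for i in range(n) for g in G for j in range(n)]
    def times(x, y):
        if x is None or y is None or x[2] != y[0]:
            return None
        return (x[0], mul(x[1], y[1]), y[2])
    return elts, times

def inverse_length(elts, times):
    """l*(S): longest chain of non-empty inverse subsemigroups, minus one."""
    idx = {e: i for i, e in enumerate(elts)}
    inv = {}
    for x in elts:  # the unique y with xyx = x and yxy = y
        (inv[x],) = [y for y in elts
                     if times(times(x, y), x) == x
                     and times(times(y, x), y) == y]
    closed = []
    for bits in range(1, 1 << len(elts)):
        sub = [e for i, e in enumerate(elts) if bits >> i & 1]
        if (all(bits >> idx[times(x, y)] & 1 for x in sub for y in sub)
                and all(bits >> idx[inv[x]] & 1 for x in sub)):
            closed.append(bits)
    closed.sort(key=lambda b: bin(b).count("1"))
    best = {}
    for b in closed:
        best[b] = max((best[a] + 1 for a in best if a & b == a and a != b),
                      default=0)
    return max(best.values())
```

The results match $n(l(G)+2)-1$ for $G$ trivial or of order $2$ and $n\le 3$.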
In particular, we see that
$$l^*(I_n)=-1+\sum_{i=1}^{n+1}\left[{n\choose i-1}(l(S_{i-1})+2)-1\right].$$
Some small values of the inverse subsemigroup lengths of the four examples of
inverse semigroups from the previous section can be seen in Table
\ref{fig-inverse-inverse}.
\begin{table}[ht]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}\hline
$n$ &1&2&3&4&5&6&7&8&9\\\hline
$l^*(I_n)$ & 1 & 5 & 15 & 39 & 96 & 229 & 533 & 1217 & 2742\\\hline
$l^*(I_n^*)$&0 & 2 & 11 & 49 & 223 & 1065 & 5337 & 28231 & 158939 \\\hline
$l^*(POI_n)$& 1 & 4 & 11 & 26 & 57 & 120 & 247 & 502 & 1013\\\hline
$l^*(POPI_n)$&1 & 6 & 17 & 44 & 97 & 208 & 429 & 884 & 1814 \\\hline
\end{tabular}
\caption{The length of the longest chain of non-empty proper inverse
subsemigroups of some well-known inverse semigroups.}
\label{fig-inverse-inverse}
\end{center}
\end{table}
\section{Completely regular semigroups}\label{section-completely-regular}
In this section, we consider a special type of semigroup, which does not have
any leagues in any of its $\J$-classes. A semigroup is
\emph{completely regular} if every element belongs to a subgroup.
It follows by the Rees Theorem \cite[Theorems 3.2.3 and 4.1.3]{Howie1995aa} that
the principal factor of a $\J$-class $J$ of a finite completely regular
semigroup $S$ is isomorphic to a Rees 0-matrix semigroup $\mathcal{M}^0[I, G, J;
P]$ where $G$ is a finite group and $P$ is a $|J|\times |I|$ matrix with entries
in $G$.
\begin{theorem}\label{thm-completely-regular}
Let $S$ be a completely regular semigroup where the numbers of $\L$- and
$\R$-classes are $m$ and $n$, and where the $\J$-classes of $S$ are $J_1,
\ldots, J_r$. If $G_i$ is a maximal subgroup of $S$ contained in $J_i$, then
$$l(S)=m+n-r-1+\sum_{i=1}^{r} l(G_i).$$
\end{theorem}
\begin{proof}
By Lemma~\ref{lemma-regular}, it suffices to show that
\begin{equation*}
l(\mathcal{M}^0[I, G, J; P])=|I|+|J|+l(G)-1
\end{equation*}
where $\mathcal{M}^0[I, G, J; P]$ is a Rees 0-matrix semigroup over the group
$G$ and $P$ is a $|J|\times |I|$ matrix with entries in $G$ (i.e.\ there are no
entries equal $0$). Furthermore, since the length of a semigroup $S$ with zero
adjoined is $1$ more than the length of $S$, it suffices to show that
\begin{equation*}
l(\mathcal{M}[I, G, J; P])=|I|+|J|+l(G)-2.
\end{equation*}
where $R=\mathcal{M}[I, G, J; P]$ is a Rees matrix semigroup without zero.
We proceed by induction on $|R|=|I|\cdot|G|\cdot|J|$. If
$|I|=|J|=|G|=1$, then $|R|=1$ and so $l(R)=0$ and
$|I|+|J|+l(G)-2=1+1+0-2=0$.
As in the proof of Proposition~\ref{thm-brandt}, the length of $R$ is the
length of one of its maximal subsemigroups plus $1$. Remark 1 of
\cite{Graham1968aa} implies that a maximal subsemigroup of $R$ is isomorphic
to one of:
\begin{enumerate}[(i)]
\item $(I\setminus\{i\})\times G\times J$ for some $i\in I$;
\item $I\times G\times (J\setminus \{j\})$ for some $j\in J$;
\item $\mathcal{M}[I, H, J; Q]$ where $H$ is a maximal subgroup of $G$ and
$Q$ is a $|J|\times |I|$ matrix with entries in $H$.
\end{enumerate}
Thus every maximal subsemigroup $T$ of $R$ is isomorphic to a completely regular
Rees matrix semigroup. In each case, induction gives $l(T)=|I|+|J|+l(G)-3$, and
the result follows.
\end{proof}
A semigroup $S$ is a \emph{band} if every element is an idempotent, i.e. $x^2=x$
for all $x\in S$. Every band is a completely regular semigroup where the maximal
subgroups are trivial, and so Theorem~\ref{thm-completely-regular} tells us that
$$l(S)=m+n-r-1$$
where $m$, $n$, and $r$ are the numbers of $\L$-, $\R$-, and $\J$-classes of
$S$, respectively.
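The band formula $l(S)=m+n-r-1$ is easy to sanity-check by brute force on very small examples. The following sketch (not from the paper; the helper names are mine) enumerates all subsemigroups of a semigroup given by a multiplication function and finds the longest chain of non-empty proper subsemigroups by dynamic programming. A left-zero band of size $4$ has $m=1$ $\L$-class, $n=4$ $\R$-classes, and $r=1$ $\J$-class, and the $2\times 2$ rectangular band has $m=n=2$, $r=1$; both match the formula.

```python
from itertools import combinations

def is_subsemigroup(S, mul):
    # closed under the (associative) multiplication
    return all(mul(a, b) in S for a in S for b in S)

def longest_chain(elems, mul):
    # length of the longest chain of non-empty proper subsemigroups,
    # computed by dynamic programming over all subsemigroups
    subs = [frozenset(c)
            for r in range(1, len(elems))
            for c in combinations(elems, r)
            if is_subsemigroup(frozenset(c), mul)]
    subs.sort(key=len)
    best = {}
    for S in subs:
        best[S] = 1 + max((best[T] for T in subs if T < S), default=0)
    return max(best.values(), default=0)

# left-zero band of size 4 (x*y = x): formula gives 1 + 4 - 1 - 1 = 3
print(longest_chain(list(range(4)), lambda x, y: x))            # 3
# 2x2 rectangular band ((i,j)(k,l) = (i,l)): formula gives 2 + 2 - 2 = 2
print(longest_chain([(i, j) for i in range(2) for j in range(2)],
                    lambda x, y: (x[0], y[1])))                  # 2
```

The exhaustive search is only feasible for semigroups of a handful of elements, but it confirms the two small cases above.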
The $n$-generated \emph{free band} $B_n$ is the free object in the category of
bands, and, as it turns out, it is finite; see \cite[Section 4.5]{Howie1995aa}
for more details. The $\J$-classes in $B_n$ are in 1-1 correspondence with the
non-empty subsets of $\{1,2,\ldots, n\}$, and the number of $\L$- and
$\R$-classes in any $\J$-class corresponding to a subset of size $k$ is:
$$k\prod_{i=1}^{k-2}(k-i)^{2^i}.$$
The following is an immediate corollary of these observations and
Theorem~\ref{thm-completely-regular}.
\begin{cor}
The length of the free band $B_n$ with $n$ generators is:
$$2\sum_{k=1}^{n}\left[{n\choose k}k\prod_{i=1}^{k-2}(k-i)^{2^i}\right]-2^n.$$
\end{cor}
Since every band with $n$ generators is a homomorphic image of the free band
$B_n$, it follows that $l(B_n)$ is an upper bound for $l(S)$ for every
$n$-generated band $S$.
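As a quick check (an illustrative sketch, not part of the paper), the corollary's formula reproduces the values in the table below for $n\leq 5$:

```python
from math import comb

def free_band_length(n):
    # 2 * sum_{k=1}^{n} [ C(n,k) * k * prod_{i=1}^{k-2} (k-i)^(2^i) ] - 2^n
    total = 0
    for k in range(1, n + 1):
        prod = 1
        for i in range(1, k - 1):          # i = 1, ..., k-2
            prod *= (k - i) ** (2 ** i)
        total += comb(n, k) * k * prod
    return 2 * total - 2 ** n

print([free_band_length(n) for n in range(1, 6)])
# [0, 4, 34, 1264, 3323778]
```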
\begin{table}[ht]\label{fig-free-band}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}\hline
$n$ &1&2&3&4&5&6\\\hline
$l(B_n)$&0& 4& 34& 1264&
3323778&33022614177128\\\hline
\end{tabular}
\caption{The length of the longest chain of non-empty proper subsemigroups
in the free band $B_n$.}
\end{center}
\end{table}
\section{Numbers of subsemigroups}\label{section-numbers}
Our technique for producing long chains also gives lower bounds for the number
of subsemigroups of certain semigroups.
We note that some results are known for groups. The number of subgroups of
$S_n$ is bounded below by roughly $2^{n^2/16}$: indeed, this group contains an
elementary abelian subgroup of order $2^{\lfloor n/2\rfloor}$ generated by
$\lfloor n/2\rfloor$ disjoint transpositions; and an elementary abelian
group of order $2^m$ has
$$\gauss{m}{k}{2}$$
subgroups of order $2^k$, this number being greater than $2^{k(m-k)}$, and so at
least $2^{\lfloor m^2/4\rfloor}$ when $k=\lfloor m/2\rfloor$. Remarkably,
Pyber~\cite{pyber} found an upper bound for the number of subgroups, also of the
form $2^{cn^2}$ for constant $c$.
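For illustration (a sketch of standard facts about Gaussian binomial coefficients, not code from the sources cited), the subgroup count and the inequality used above can be checked numerically:

```python
def gaussian_binomial(m, k, q=2):
    # number of k-dimensional subspaces of an m-dimensional space over GF(q);
    # the full numerator product is exactly divisible by the denominator
    num, den = 1, 1
    for i in range(k):
        num *= q ** (m - i) - 1
        den *= q ** (k - i) - 1
    return num // den

# an elementary abelian group of order 2^4 has 35 subgroups of order 2^2
print(gaussian_binomial(4, 2))  # 35

# the stated lower bound 2^{k(m-k)} holds for small parameters
for m in range(2, 11):
    for k in range(1, m):
        assert gaussian_binomial(m, k) > 2 ** (k * (m - k))
```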
If a null semigroup has $n$ non-zero elements, then it has $2^n$ subsemigroups,
since the zero together with any set of non-zero elements forms a subsemigroup.
So the existence of large null semigroups in principal factors of $T_n$, for
example, gives lower bounds for the number of subsemigroups, and on the number
of generators required.
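The $2^n$ count is immediate to confirm by exhaustive search; the sketch below (with my own naming, not the paper's) checks that a non-empty subset of a null semigroup is closed exactly when it contains the zero:

```python
from itertools import combinations

def count_subsemigroups_null(n):
    # brute-force count of non-empty subsemigroups of the null semigroup
    # with zero element 0 and non-zero elements 1..n (all products are 0)
    elems = range(n + 1)
    mul = lambda x, y: 0
    count = 0
    for r in range(1, n + 2):
        for c in combinations(elems, r):
            if all(mul(a, b) in c for a in c for b in c):
                count += 1
    return count

print([count_subsemigroups_null(n) for n in range(1, 6)])
# [2, 4, 8, 16, 32]
```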
\begin{theorem}
Let
$$c = \frac{ \ee^{-2}}{3\sqrt{\ee^{-1} - 2 \ee^{-2}} \sqrt{3}}.$$
Then
\begin{enumerate}
\item the number of subsemigroups of $T_n$ is at least $2^{(c - o(1))
n^{n-1/2}}$;
\item the smallest number $d(n)$ for which any subsemigroup of $T_n$ can be
generated by $d(n)$ elements is at least $(c - o(1)) n^{n-1/2}$.
\end{enumerate}
\end{theorem}
\begin{proof}
The reader is reminded of the notation used in the proof of
Theorem~\ref{thm-full-transformation-monoid}. We have exhibited then a null
subsemigroup of $T_n$ of order $(n-k) N(n-1,k-1)$ for all $k \in \{1,\ldots,
n\}$. We shall give a lower bound on the order of the largest of those
semigroups. In particular, we restrict ourselves to the set $J = \{k : |k -
E(n-1)| < d \tau n^{1/2}\}$ where $E(n-1)$ is the expected rank of a
transformation in $T_{n-1}$, $\tau = \sqrt{\ee^{-1} - 2 \ee^{-2}}$ and $d$ is a
constant which we will specify later. Using similar arguments as before, we
can then prove that for all $k \in J$,
\begin{eqnarray*}
n - k &\ge& \ee^{-1} n - o(n)\\
\sum_{k \in J} N(n-1,k-1) &\ge& \ee^{-1} \frac{d^2 - 1}{d^2} n^{n-1}\\
\sum_{k \in J} (n-k) N(n-1,k-1) &\ge& \ee^{-1} \frac{d^2 - 1}{d^2} n^n -
o(n^n),
\end{eqnarray*}
and hence the largest semigroup for $k \in J$ has order at least
$$
\frac{\ee^{-2}}{2\tau} \frac{d^2 - 1}{d^3} n^{n - 1/2} - o(n^{n - 1/2}).
$$
The fraction is maximised for $d = \sqrt{3}$.
\end{proof}
\paragraph{Remark} Part (2) answers a question put to the first
author by Brendan McKay a few years ago and gives a partial answer to Open Problem 1 in
\cite{Gray14}. The analogous number for $S_n$ (the smallest $d$ such that any
subgroup can be generated by at most $d$ elements) is only $\lfloor n/2\rfloor$
for $n>3$, as shown by McIver and Neumann~\cite{mn}. Jerrum~\cite{jerrum} gave
a weaker bound $n-1$, but with a constructive (and computationally efficient)
proof.
\section{Open problems}
\paragraph{Problem 1} Does the ratio $l(T_n)/|T_n|$ tend to a limit as
$n\to\infty$? If so, what is this limit? Is it possible to improve on the
constant $\ee^{-2}$ by either more careful analysis, or counting the extra
steps available in a principal factor?
\paragraph{Problem 2} Evaluate the function $F(k,n)$ giving the largest
content of a league of rank $k$ on $\{1,\ldots,n\}$, and the function
$F^*(k,n)$ giving the largest content involving partitions into intervals.
\paragraph{Problem 3} In most cases, our results are not strong enough to
show that the number of subsemigroups of a semigroup $S$ is at least
$c^{|S|}$ for some $c>1$. Does such a result hold in the case $S=T_n$, for
example?
\paragraph{Problem 4} What can be said about the number of inverse
subsemigroups of an inverse semigroup, for example the symmetric inverse
semigroup $I_n$?
| {
"timestamp": "2015-01-27T02:18:33",
"yymm": "1501",
"arxiv_id": "1501.06394",
"language": "en",
"url": "https://arxiv.org/abs/1501.06394",
"abstract": "We investigate the maximum length of a chain of subsemigroups in various classes of semigroups, such as the full transformation semigroups, the general linear semigroups, and the semigroups of order-preserving transformations of finite chains. In some cases, we give lower bounds for the total number of subsemigroups of these semigroups. We give general results for finite completely regular and finite inverse semigroups. Wherever possible, we state our results in the greatest generality; in particular, we include infinite semigroups where the result is true for these. The length of a subgroup chain in a group is bounded by the logarithm of the group order. This fails for semigroups, but it is perhaps surprising that there is a lower bound for the length of a subsemigroup chain in the full transformation semigroup which is a constant multiple of the semigroup order.",
"subjects": "Group Theory (math.GR)",
"title": "Chains of subsemigroups",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9852713874477228,
"lm_q2_score": 0.8175744761936438,
"lm_q1q2_score": 0.8055327385011566
} |
https://arxiv.org/abs/0708.2295 | Product-free subsets of groups, then and now | A subset of a group is product-free if it does not contain elements a, b, c such that ab = c. We review progress on the problem of determining the size of the largest product-free subset of an arbitrary finite group, including a lower bound due to the author, and a recent upper bound due to Gowers. The bound of Gowers is more general; it allows three different sets A, B, C such that one cannot solve ab = c with a in A, b in B, c in C. We exhibit a refinement of the lower bound construction which shows that for this broader question, the bound of Gowers is essentially optimal. | \section{Introduction}
Let $G$ be a group. A subset $S$ of $G$ is \emph{product-free} if
there do not exist $a,b,c \in S$ (not necessarily distinct\footnote{In some
sources, one does require $a \neq b$. For instance, as noted in
\cite{guiduci-hart}, I mistakenly assumed this in
\cite[Theorem~3]{kedlaya-amm}.})
such that $ab=c$.
One can ask about the existence of large product-free subsets for various
groups, such as the groups of integers (see next section), or compact
topological groups (as suggested in \cite{kedlaya-amm}). For the rest of this
paper, however, I will require $G$ to be a finite group of order
$n > 1$. Let $\alpha(G)$ denote the size of the largest product-free subset of
$G$; put $\beta(G) = \alpha(G)/n$, so that $\beta(G)$ is the density of
the largest product-free subset. What can one say about
$\alpha(G)$ or $\beta(G)$ as a function
of $G$, or as a function of $n$?
(Some of our answers will include an unspecified positive constant;
I will always call this constant $c$.)
The purpose of this paper is threefold. I first
review the history of this problem, up to and including
my involvement via Joe Gallian's REU (Research Experience for Undergraduates)
at the University of Minnesota, Duluth,
in 1994;
since I did this once already in \cite{kedlaya-amm}, I will be briefer
here. I then describe some very recent progress made by Gowers
\cite{gowers}. Finally, I speculate on the gap between the lower and upper
bounds, and revisit my 1994 argument to show that this gap cannot be
closed using Gowers's argument as given.
Note the usual convention that multiplication and inversion are
permitted to act on subsets of $G$, i.e., for $A,B \subseteq G$,
\[
AB = \{ab: a \in A, b \in B\}, \qquad A^{-1} = \{a^{-1}: a \in A\}.
\]
\section{Origins: the abelian case}
In the abelian case, product-free subsets are more customarily called
\emph{sum-free} subsets.
The first group in which such subsets were studied is the
group of integers $\mathbb{Z}$; the first reference I could find for
this is Abbott and Moser \cite{abbott-moser},
who expanded upon Schur's theorem that the set
$\{1, \dots, \lfloor n! e \rfloor\}$ cannot be partitioned into
$n$ sum-free sets. This led naturally to considering sum-free
subsets of finite abelian groups, for which the following is easy.
\begin{theorem}
For $G$ abelian, $\beta(G) \geq \frac{2}{7}$.
\end{theorem}
\begin{proof}
For $G = \mathbb{Z}/p\mathbb{Z}$ with $p > 2$,
we have $\alpha(G) \geq \lfloor
\frac{p+1}{3} \rfloor$ by taking
\[
S = \left\{ \left\lfloor \frac{p+1}{3}
\right\rfloor, \dots, 2 \left\lfloor \frac{p+1}{3}
\right\rfloor - 1 \right\}.
\]
Then apply the following lemma.
\end{proof}
\begin{lemma} \label{L:quotient}
For $G$ arbitrary, if $H$ is a quotient of $G$, then
\[
\beta(G) \geq \beta(H).
\]
\end{lemma}
\begin{proof}
Let $S'$ be a product-free subset of $H$ of size $\alpha(H)$.
The preimage of $S'$ in $G$
is product-free of size $\#S' \#G/\#H$, so
$\alpha(G) \geq \alpha(H)\#G/\#H$.
\end{proof}
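The middle-third construction in the proof above is easy to confirm computationally (a sketch under my own naming, not code from any source):

```python
def middle_third(p):
    # S = { floor((p+1)/3), ..., 2*floor((p+1)/3) - 1 }  inside Z/pZ
    k = (p + 1) // 3
    return set(range(k, 2 * k))

def is_sum_free(S, p):
    # no solutions a + b = c with a, b, c in S (a = b allowed)
    return all((a + b) % p not in S for a in S for b in S)

for p in [5, 7, 11, 13, 101]:
    S = middle_third(p)
    assert is_sum_free(S, p) and len(S) == (p + 1) // 3
```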
In fact, one can prove an exact formula for $\alpha(G)$ showing
that this construction is essentially optimal. Many cases were
established around 1970, but only in 2005 was the proof of the
following result finally
completed by Green and Ruzsa \cite{green-ruzsa}.
\begin{theorem}[Green-Ruzsa]
Suppose that $G$ is abelian.
\begin{enumerate}
\item[(a)] If $n$ is divisible by a prime $p \equiv 2 \pmod{3}$,
then for the least such $p$, $\alpha(G) = \frac{n}{3} + \frac{n}{3p}$.
\item[(b)] Otherwise, if $3 \mid n$, then $\alpha(G) = \frac{n}{3}$.
\item[(c)] Otherwise, $\alpha(G) = \frac{n}{3} - \frac{n}{3m}$,
for $m$ the exponent (largest order of any element) of $G$.
\end{enumerate}
\end{theorem}
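One can verify the Green--Ruzsa formula against exhaustive search for small cyclic groups; the sketch below is illustrative only (for $G = \mathbb{Z}/n\mathbb{Z}$ the exponent is $m = n$, so case (c) reads $\alpha = (n-1)/3$):

```python
from itertools import combinations

def alpha_bruteforce(n):
    # size of a largest sum-free subset of Z/nZ, by exhaustive search
    best = 0
    for r in range(1, n):
        for c in combinations(range(1, n), r):  # 0 is never in a sum-free set
            S = set(c)
            if all((a + b) % n not in S for a in S for b in S):
                best = max(best, r)
    return best

def alpha_formula(n):
    # the Green-Ruzsa formula specialised to G = Z/nZ (exponent m = n)
    primes = [p for p in range(2, n + 1)
              if n % p == 0 and all(p % d for d in range(2, p))]
    bad = [p for p in primes if p % 3 == 2]
    if bad:                                   # case (a)
        p = min(bad)
        return n * (p + 1) // (3 * p)
    if n % 3 == 0:                            # case (b)
        return n // 3
    return (n - 1) // 3                       # case (c), since m = n

assert all(alpha_bruteforce(n) == alpha_formula(n) for n in range(2, 14))
```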
One possible explanation for the delay
is that it took this long for this subject to migrate into the mathematical
mainstream, as part of the modern subject of \textit{additive combinatorics}
\cite{tao-vu}; see Section~\ref{sec:interlude}.
The first appearance of the problem of computing $\alpha(G)$ for
nonabelian $G$ seems to have been
in a 1985 paper of Babai and S\'os \cite{babai-sos}.
In fact, the problem appears there as an afterthought;
the authors were more interested in \textit{Sidon sets},
in which the equation $ab^{-1} = cd^{-1}$
has no solutions with $a,b,c,d$ taking
at least three distinct values. This construction can be related to
embeddings of graphs as induced subgraphs of Cayley graphs;
product-free subsets arise because they relate to the special case
of embedding stars in Cayley graphs.
Nonetheless, the Babai-S\'os paper is the first
to make a nontrivial assertion about $\alpha(G)$ for general $G$; see
Theorem~\ref{T:babai-sos}.
This circumstance suggests rightly
that the product-free problem is only one of a broad class of
problems about structured subsets of groups; this class can be
considered a nonabelian version of additive combinatorics,
and progress on problems in this class has been driven as much by
the development of the abelian theory as by interest from
applications in theoretical computer science. An example of the latter
is a problem of Cohn and Umans \cite{cohn-umans} (see also
\cite{cksu}): to find groups $G$
admitting large subsets $S_1, S_2, S_3$ such that the
equation $a_1 b_1^{-1} a_2 b_2^{-1} a_3 b_3^{-1} = e$, with
$a_i, b_i \in S_i$, has only solutions with $a_i = b_i$ for all $i$.
A sufficiently good construction would resolve an ancient problem
in computational algebra: to prove that two $n \times n$ matrices can be
multiplied using $O(n^{2+\epsilon})$ ring operations for any $\epsilon > 0$.
\section{Lower bounds: Duluth, 1994}
Upon my arrival at the REU in 1994, Joe gave me the paper of Babai and S\'os,
perhaps hoping I would have some new insight about Sidon sets. Instead,
I took the path less traveled and started thinking about
product-free sets.
The construction of product-free subsets given in \cite{babai-sos}
is quite simple: if $H$ is a proper subgroup of $G$, then any
nontrivial coset of $H$ is product-free. This is trivial to
prove directly, but it occurred to me to formulate it in terms of
permutation actions. Recall that
specifying a transitive permutation action of the group $G$ is the same
as simply identifying a conjugacy class of subgroups: if $H$ is one
of the subgroups, the action is left multiplication on
left cosets of $H$. (Conversely, given an action, the point stabilizers
are conjugate subgroups.) The construction of Babai and S\'os can then
be described as follows.
\begin{theorem}[Babai-S\'os] \label{T:babai-sos}
For $G$ admitting a transitive action on $\{1,\dots,m\}$ with $m>1$,
$\beta(G) \geq m^{-1}$.
\end{theorem}
\begin{proof}
The set of all $g \in G$ such that $g(1) = 2$ is product-free of size $n/m$.
\end{proof}
I next wondered: what if you allow $g$ to carry 1 into a slightly
larger set, say a set $T$ of $k$ elements?
You would still get a product-free set
if you forced each $x \in T$ to map to something not in $T$.
This led to the following argument.
\begin{theorem} \label{T:kedlaya}
For $G$ admitting a transitive action on $\{1,\dots,m\}$ with $m>1$,
$\beta(G) \geq c m^{-1/2}$.
\end{theorem}
\begin{proof}
For a given $k$, we compute a lower bound for the average size of
\[
S = \bigcup_{x \in T} \{g \in G: g(1) = x\}
\setminus \bigcup_{y \in T} \{g \in G: g(1), g(y) \in T\}
\]
for $T$ running over $k$-element subsets of $\{2, \dots, m\}$.
Each set in the first union contains $n/m$ elements, and they are all disjoint,
so the first union contains $kn/m$ elements.
To compute the average of a set in the second union, note that for fixed
$g \in G$ and $y \in \{2,\dots,m\}$,
a $k$-element subset $T$ of $\{1, \dots, m\}$
contains $g(1), y, g(y)$ with probability $\frac{k(k-1)}{m(m-1)}$
if two of the three coincide and $\frac{k(k-1)(k-2)}{m(m-1)(m-2)}$
otherwise. A bit of arithmetic then shows that the average size of $S$
is at least
\[
\frac{kn}{m} - \frac{k^3n}{(m-2)^2}.
\]
Taking $k \sim (m/3)^{1/2}$, we obtain
$\alpha(G) \geq c n/m^{1/2}$. (For any fixed $\epsilon > 0$,
the implied constant can be improved to
$e^{-1} - \epsilon$ for $m$ sufficiently large;
see the proof of Theorem~\ref{T:lower bound}. On the other hand, the
proof as given can be made constructive in case $G$ is doubly transitive,
as then there is no need to average over $T$.)
\end{proof}
This gives a lower bound depending on the parameter $m$, which we can
view as the index of the largest proper subgroup of $G$. To state a bound
depending only on $n$, one needs to know something about the dependence
of $m$ on $n$; by
Lemma~\ref{L:quotient}, it suffices to prove a lower bound on $m$ in terms
of $n$ for all \emph{simple} nonabelian
groups. I knew this could be done in
principle using the classification of finite
simple groups (CFSG); after some asking around,
I got hold of a manuscript by Liebeck and Shalev
\cite{liebeck-shalev} that included the bound I wanted, leading to
the following result from \cite{kedlaya}.
\begin{theorem}
Under CFSG, the group $G$ admits a transitive action on a set of size $1 <
m \leq c n^{3/7}$.
Consequently, Theorem~\ref{T:babai-sos} implies
$\alpha(G) \geq cn^{4/7}$, whereas
Theorem~\ref{T:kedlaya} implies
$\alpha(G) \geq c n^{11/14}$.
\end{theorem}
At this point, I was pretty excited to have discovered something interesting
and probably publishable. On the other hand,
I was completely out of ideas! I had no hope of getting any stronger
results, even for specific classes of groups, and it seemed impossible
to derive any nontrivial upper bounds at all. In fact, Babai and S\'os
suggested in their paper that maybe $\beta(G) \geq c$ for all $G$;
I was dubious about this, but I couldn't convince myself that one couldn't
have $\beta(G) \geq c n^{-\epsilon}$ for all $\epsilon > 0$.
So I decided to write this result up by itself, as my first Duluth
paper, and ask Joe for another problem (which naturally he provided).
My paper ended up
appearing as \cite{kedlaya}; I revisited the topic when I was asked to submit
a paper in connection with being named a runner-up for the Morgan Prize
for undergraduate research, the result being \cite{kedlaya-amm}.
I then put this problem in a mental deep freezer,
figuring (hoping?) that my youthful foray into combinatorics
would be ultimately forgotten, once I had made some headway with some
more serious mathematics, like algebraic number theory or algebraic
geometry. I was reassured by the
expectation that the nonabelian product-free problem was both intractable
and of no interest to anyone, certainly not to any serious mathematician.
Ten years passed.\footnote{If you do not recognize this reference, you may
not have
read the excellent novel \textit{The Grasshopper King}, by fellow
Duluth REU alumnus Jordan Ellenberg.}
\section{Interlude: back to the future}
\label{sec:interlude}
Up until several weeks before the Duluth conference, I had been planning to
speak about the latest and greatest in algebraic number
theory (the proof of Serre's conjecture linking modular forms
and mod $p$ Galois representations, recently completed by
Khare and Wintenberger). Then I got an email that suggested that
maybe I should try embracing my past instead of running from it.
A number theorist friend (Michael Schein) reported having attended
an algebra seminar at Hebrew University about product-free
subsets of finite groups, and hearing my name in this
context. My immediate reaction was to wonder what self-respecting mathematician
could possibly be interested in my work on this problem.
The answer was Tim Gowers, who had recently established a nontrivial
upper bound for $\alpha(G)$ using a remarkably simple argument.
It seems that in the
ten years since I had moved on to ostensibly more mainstream
mathematics, additive combinatorics had come into its own, thanks partly
to the efforts of no fewer than three Fields medalists (Tim Gowers, Jean
Bourgain, and Terry Tao); some sources date the start of this boom to
Ruzsa's publication in 1994 of a simplified proof \cite{ruzsa} of a theorem of
Freiman on subsets of $\mathbb{Z}/p\mathbb{Z}$ having few pairwise sums.
In the process, some interest had spilled over
to nonabelian problems.
The introduction to Gowers's paper \cite{gowers}
cites\footnote{Since Joe is fond of noting ``program firsts'', I
should point out that
this appears to be the first citation of a Duluth paper
by a Fields medalist. To my chagrin, I think it
is also the first such citation of any of my papers.}
my Duluth paper as giving the
best known lower bound on $\alpha(G)$ for general $G$.
At this point, it became
clear that I had to abandon my previous plan for the conference in favor
of a return visit to my mathematical roots.
\section{Upper bounds: bipartite Cayley graphs}
In this section, I'll proceed quickly through
Gowers's upper bound construction.
Gowers's paper
\cite{gowers} is exquisitely detailed;
I'll take that fact as license to be
slightly less meticulous here.
The strategy of Gowers is to consider three sets $A,B,C$
for which there is no true equation
$ab=c$ with $a \in A, b \in B, c \in C$, and give an upper bound on
$\#A \#B \#C$.
To do this, he studies a certain \emph{bipartite Cayley graph}
associated to $G$. Consider the bipartite graph
$\Gamma$ with vertex set $V_1 \cup V_2$, where each $V_i$
is a copy of $G$, with an edge from $x \in V_1$ to $y \in V_2$
if and only if $yx^{-1} \in A$.
We are then given that there are no edges
between $B \subseteq V_1$ and $C \subseteq V_2$.
A good reflex at this point would be to consider
the eigenvalues of the adjacency matrix of $\Gamma$. For bipartite graphs,
it is more convenient to do something slightly different using singular values;
although this variant of spectral analysis of graphs is quite natural, I am only
aware of the reference \cite{bollobas-nikiforov} from 2004 (and only
thanks to Gowers for pointing it out).
Let $N$ be the \emph{incidence matrix}, with columns indexed by $V_1$
and rows by $V_2$, with an entry in row $x$ and column $y$ if $xy$
is an edge of $\Gamma$.
\begin{theorem} \label{T:svd}
We can factor $N$ as a product
$U \Sigma V$ of $\#G \times \#G$ matrices over $\mathbb{R}$, with $U,V$ orthogonal
and $\Sigma$ diagonal with nonnegative entries.
(This is called a \emph{singular value decomposition} of $N$.)
\end{theorem}
\begin{proof}
(Compare \cite[Theorem~2.6]{gowers}, or see any textbook on numerical
linear algebra.)
By compactness of the unit ball, there is a greatest
$\lambda$ such that $\|N\mathbf{v}\| = \lambda \|\mathbf{v}\|$ for some nonzero $\mathbf{v} \in
\mathbb{R}^{V_1}$.
If $\mathbf{v} \cdot \mathbf{w} = 0$, then by the maximality of $\lambda$,
the function $f(t) = \|N(\mathbf{v} + t \mathbf{w})\|^2 - \lambda^2 \|\mathbf{v} + t \mathbf{w}\|^2$
is nonpositive for all $t$ and vanishes at $t=0$, so
\[
0 = f'(0) = 2 (N\mathbf{v}) \cdot (N\mathbf{w}).
\]
Apply the same construction to the orthogonal complement of $\mathbb{R} \mathbf{v}$
in $\mathbb{R}^{V_1}$.
Repeating, we obtain an orthonormal basis of $\mathbb{R}^{V_1}$;
the previous calculation shows that the image of this basis in $\mathbb{R}^{V_2}$
is also orthogonal. Using these to construct $V,U$ yields the claim.
\end{proof}
The matrix $M = NN^T$ is symmetric, and has
several convenient properties.
\begin{enumerate}
\item[(a)]
The trace of $M$ equals the number of edges of $\Gamma$.
\item[(b)]
The eigenvalues of $M$ are the squares of the diagonal entries of $\Sigma$.
\item[(c)]
Since $\Gamma$ is regular of degree $\#A$ and connected, the
largest eigenvalue of $M$ is $\#A$,
achieved by the all-ones eigenvector $\mathbf{1}$.
\end{enumerate}
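These properties are easy to confirm numerically. The sketch below uses the additive group $\mathbb{Z}/7\mathbb{Z}$ as a stand-in (an assumption for illustration only; Gowers's setting is a nonabelian $G$), with connection set $A$:

```python
import numpy as np

# Toy bipartite Cayley graph: N[y, x] = 1 iff y - x lies in A (additive analog
# of the condition y x^{-1} in A).  The graph is |A|-regular on each side.
n, A = 7, {1, 2, 4}
N = np.array([[1.0 if (y - x) % n in A else 0.0 for x in range(n)]
              for y in range(n)])
M = N @ N.T
sigma = np.linalg.svd(N, compute_uv=False)

assert np.isclose(np.trace(M), n * len(A))        # (a) trace = number of edges
evals = np.sort(np.linalg.eigvalsh(M))
assert np.allclose(evals, np.sort(sigma) ** 2)    # (b) eigenvalues = sigma^2
assert np.isclose(max(sigma), len(A))             # (c) top eigenvalue is #A
```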
\begin{lemma} \label{L:subspace}
Let $\lambda$ be the second largest diagonal entry of $\Sigma$.
Then the set $W$ of $\mathbf{v} \in \mathbb{R}^{V_1}$ with
$\mathbf{v} \cdot \mathbf{1} = 0$ and $\|N \mathbf{v}\| = \lambda \|\mathbf{v}\|$ is
a nonzero subspace of $\mathbb{R}^{V_1}$.
\end{lemma}
\begin{proof}
(Compare \cite[Lemma~2.7]{gowers}.)
From Theorem~\ref{T:svd}, we obtain an orthogonal basis $\mathbf{v}_1, \dots, \mathbf{v}_n$
of $\mathbb{R}^{V_1}$, with $\mathbf{v}_1 = \mathbf{1}$, such that $N\mathbf{v}_1, \dots, N\mathbf{v}_n$
are orthogonal, and $\|N\mathbf{v}_1\|/\|\mathbf{v}_1\|, \dots, \|N\mathbf{v}_n\|/\|\mathbf{v}_n\|$
are the diagonal entries of $\Sigma$; we may then identify $W$
as the span of the $\mathbf{v}_i$ with $i > 1$ and $\|N\mathbf{v}_i\| = \lambda \|\mathbf{v}_i\|$.
Alternatively, one may note that $W$ is obviously closed under scalar
multiplication, then check that $W$ is closed under addition
as follows.
If $\mathbf{v}_1, \mathbf{v}_2 \in W$, then
$\|N (\mathbf{v}_1 \pm \mathbf{v}_2)\| \leq \lambda \|\mathbf{v}_1 \pm \mathbf{v}_2\|$, but by
the parallelogram law
\begin{align*}
\|N\mathbf{v}_1 + N\mathbf{v}_2\|^2 +
\|N\mathbf{v}_1 - N\mathbf{v}_2\|^2 &= 2 \|N\mathbf{v}_1\|^2 + 2 \|N\mathbf{v}_2\|^2 \\
&= 2 \lambda^2 \|\mathbf{v}_1\|^2 + 2 \lambda^2 \|\mathbf{v}_2\|^2 \\
&= \lambda^2 \|\mathbf{v}_1 + \mathbf{v}_2\|^2 + \lambda^2 \|\mathbf{v}_1 - \mathbf{v}_2\|^2.
\end{align*}
Hence $\|N (\mathbf{v}_1 \pm \mathbf{v}_2)\| = \lambda \|\mathbf{v}_1 \pm \mathbf{v}_2\|$.
\end{proof}
Gowers's upper bound on $\alpha(G)$ involves the parameter $\delta$,
defined as the smallest dimension of a
nontrivial representation\footnote{One could just as well restrict
to real representations, which would increase $\delta$
by a factor of 2 in some cases. For instance,
if $G = \PSL_2(q)$ with $q \equiv 3 \pmod{4}$, this would give
$\delta = q-1$.}
of $G$.
For instance, if $G = \PSL_2(q)$ with $q$ odd, then $\delta = (q-1)/2$.
\begin{lemma} \label{L:gowers}
If $\mathbf{v} \in \mathbb{R}^{V_1}$ satisfies $\mathbf{v} \cdot \mathbf{1} = 0$, then
$\|N\mathbf{v}\| \leq (n\#A/\delta)^{1/2} \|\mathbf{v}\|$.
\end{lemma}
\begin{proof}
Take $\lambda, W$ as in Lemma~\ref{L:subspace}.
Let $G$ act on $V_1$ and $V_2$ by right multiplication; then $G$
also acts on $\Gamma$. In this manner, $W$ becomes a real representation of $G$
in which no nonzero vector is fixed. In particular, $\dim(W) \geq \delta$.
Now note that the number of edges of $\Gamma$, which is
$n\#A$, equals the trace of $M$, which is at least $\dim(W) \lambda^2 \geq
\delta \lambda^2$.
This gives $\lambda^2 \leq n\#A/\delta$, proving the claim.
\end{proof}
We are now ready to prove Gowers's theorem
\cite[Theorem~3.3]{gowers}.
\begin{theorem}[Gowers] \label{T:gowers}
If $A,B,C$ are subsets of $G$ such that there is no true
equation $ab=c$ with $a \in A, b \in B, c \in C$, then
$\#A \#B \#C \leq n^3/\delta$. Consequently, $\beta(G)
\leq \delta^{-1/3}$.
\end{theorem}
For example, if $G = \PSL_2(q)$ with $q$ odd, then $n \sim c q^3$, so
$\alpha(G) \leq c n^{8/9}$. On the lower bound side, $G$ admits subgroups of
index $m \sim c q$, so $\alpha(G) \geq c n^{5/6}$.
\begin{proof}
Write $\#A = rn, \#B = sn, \#C = tn$. Let $\mathbf{v}$ be the characteristic
function of $B$ viewed as an element of $\mathbb{R}^{V_1}$, and put
$\mathbf{w} = \mathbf{v} - s \mathbf{1}$. Then
\begin{align*}
\mathbf{w} \cdot \mathbf{1} &= 0 \\
\mathbf{w} \cdot \mathbf{w} &= (1-s)^2 \#B + s^2 (n - \#B) = s(1-s)n \leq sn,
\end{align*}
so by Lemma~\ref{L:gowers}, $\|N\mathbf{w}\|^2 \leq (n\#A/\delta)\|\mathbf{w}\|^2 \leq rsn^3/\delta$.
Since $ab = c$ has no solutions with $a \in A, b \in B, c \in C$,
each element of $C$ corresponds to a zero entry in
$N\mathbf{v}$. However, $N \mathbf{v} = N \mathbf{w} + r sn \mathbf{1}$, so each zero entry in
$N \mathbf{v}$ corresponds to an entry of $N \mathbf{w}$ equal to $-rsn$. Therefore,
\[
(tn)(rsn)^2 \leq \|N\mathbf{w}\|^2 \leq rsn^3/\delta,
\]
hence $rst \delta \leq 1$ as desired.
\end{proof}
As noted by Nikolov and Pyber \cite{nikolov-pyber}, the
extra strength in Gowers's theorem is useful for other applications in group
theory, largely via the following corollary.
\begin{cor}[Nikolov-Pyber]
If $A,B,C$ are subsets of $G$ such that $ABC \neq G$,
then
$\#A \#B \#C \leq n^3/\delta$.
\end{cor}
\begin{proof}
Suppose that $\#A \#B \#C > n^3/\delta$.
Put $D = G \setminus AB$,
so that $\#D = n - \#(AB)$.
By Theorem~\ref{T:gowers}, we have $\#A \#B \#D \leq n^3/\delta$,
so $\#C > \#D$. Then for any $g \in G$, the sets $AB$ and $gC^{-1}$
have total cardinality more than
$n$, so they must intersect. This yields $ABC = G$.
\end{proof}
Gowers indicates that his motivation for this argument was the
notion of a \emph{quasi-random graph} introduced by
Chung, Graham, and Wilson \cite{cgw}. They show that (in a suitable
quantitative sense) a graph
looks random in the sense of having the right number of short cycles
if and only if it also looks random from the spectral viewpoint, i.e.,
the second largest eigenvalue of its adjacency matrix is not too large.
\section{Coda}
As noted by Nikolov and Pyber \cite{nikolov-pyber},
using CFSG to get a strong quantitative version of Jordan's theorem on finite
linear groups, one can produce upper and lower bounds for $\alpha(G)$ that
look similar. (Keep in mind that the index of a proper subgroup must be
at least $\delta+1$, since any permutation representation of degree $m$
contains a linear representation of dimension $m-1$.)
\begin{theorem}
Under CFSG, the group $G$ has a proper subgroup of index
at most $c \delta^2$. Consequently,
\[
c n/\delta \leq \alpha(G) \leq cn/\delta^{1/3}.
\]
\end{theorem}
Moreover, for many natural examples (e.g., $G = A_m$ or $G = \PSL_2(q)$),
$G$ has a proper subgroup of index at most $c\delta$, in which case one has
\[
c n/\delta^{1/2} \leq \alpha(G) \leq cn/\delta^{1/3}.
\]
Since the gap now appears quite small, one might ask about closing it.
However, one can adapt the argument of \cite{kedlaya}
to show that Gowers's argument alone will not suffice, at least for
families of groups with $m \leq c \delta$.
(Gowers proves some additional results about products taken more than
two at a time \cite[\S 5]{gowers};
I have not attempted to extend this construction to that setting.)
\begin{theorem} \label{T:lower bound}
Given $\epsilon > 0$,
for $G$ admitting a transitive action on $\{1,\dots,m\}$ for $m$
sufficiently large,
there exist $A,B,C \subseteq G$ with $(\#A)(\#B)(\#C) \geq
(e^{-1}-\epsilon)n^3/m$,
such that the equation $ab=c$ has no solutions with $a \in A, b \in B,
c \in C$. Moreover, we can force $B=C$, $C=A$, or $A=B^{-1}$ if desired.
\end{theorem}
\begin{proof}
We first give a quick proof of the lower bound $cn^3/m$.
Let $U,V$ be subsets of $\{1,\dots,m\}$ of respective sizes $u,v$.
Put
\begin{align*}
A &= \{g \in G: g(U) \cap V = \emptyset\} \\
B &= \{g \in G: g(1) \in U\} \\
C &= \{g \in G: g(1) \in V\};
\end{align*}
then clearly the equation $ab=c$ has no solutions with $a \in A, b \in B,
c \in C$. On the other hand,
\[
\#A \geq n - u \frac{vn}{m}, \qquad
\#B = \frac{un}{m},
\qquad
\#C = \frac{vn}{m},
\]
and so
\[
(\#A)(\#B)(\#C) \geq \frac{n^3}{m}
\left( \frac{uv}{m} \right) \left( 1 - \frac{uv}{m} \right).
\]
By taking $u,v = \lfloor \sqrt{m/2} \rfloor$, we obtain
$(\#A)(\#B)(\#C) \geq cn^3/m$.
To optimize the constant, we must average over choices of $U,V$.
Take $u,v = \lfloor \sqrt{m} \rfloor$. By
inclusion-exclusion, for any positive integer $h$,
the average of $\#A$ is bounded below by
\[
\sum_{i=0}^{2h-1} (-1)^i n \frac{u(u-1)\cdots(u-i+1)v(v-1)\cdots(v-i+1)}{i! m(m-1)
\cdots (m-i+1)}.
\]
(The $i$-th term counts occurrences of $i$-element subsets in $g(U) \cap V$.
We find $\binom{v}{i}$ $i$-element sets inside $V$; on average,
each one occurs
inside $g(U)$ for $n \binom{u}{i} / \binom{m}{i}$ choices of $g$.)
Rewrite this as
\[
n \left( \sum_{i=0}^{2h-1} (-1)^i \frac{(m^{1/2})^i
(m^{1/2})^i}{m^i i!} +
o(1) \right),
\]
where $o(1) \to 0$ as $m \to \infty$.
For any $\epsilon > 0$, we have
\[
(\#A)(\#B)(\#C) \geq n^3 \frac{m}{m^2}
\left( e^{-1} - \epsilon \right)
\]
for $h$ sufficiently large, and
$m$ sufficiently large depending on $h$.
This gives the desired lower bound.
Finally, note that we may achieve $B=C$ by taking $U = V$.
To achieve the other equalities,
note that if the triplet $A,B,C$ has the desired property, so do
$B^{-1},A^{-1},C^{-1}$ and $C,B^{-1},A$.
\end{proof}
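The construction in the proof is easy to test numerically in a toy case. The sketch below (our illustration, not part of the original argument) takes $G = \mathbb{Z}_m$ acting on itself by translation, so $n = m$, with the base point $1$ of the proof replaced by $0$; it checks that the triple $A, B, C$ admits no solution of $ab = c$, and that the truncated inclusion-exclusion sum approaches $e^{-1}$:

```python
from math import comb, exp, isqrt

def build_sets(m, U, V):
    """G = Z_m acting on itself by translation (degree m, order n = m);
    the base point 1 of the proof is replaced by 0 here, so g(0) = g."""
    A = [g for g in range(m) if all((u + g) % m not in V for u in U)]
    B = [g for g in range(m) if g in U]   # g(0) in U
    C = [g for g in range(m) if g in V]   # g(0) in V
    return A, B, C

def product_free(m, A, B, C):
    """True iff a * b = c (here: a + b = c mod m) has no solutions."""
    Cset = set(C)
    return not any((a + b) % m in Cset for a in A for b in B)

def truncated_ie_sum(m, u, v, h):
    """Truncated inclusion-exclusion lower bound for (average of #A) / n."""
    return sum((-1) ** i * comb(v, i) * comb(u, i) / comb(m, i)
               for i in range(2 * h))

m = 400
U = V = set(range(isqrt(m // 2)))        # u = v = floor(sqrt(m/2)) = 14
A, B, C = build_sets(m, U, V)
assert product_free(m, A, B, C)
assert len(A) * len(B) * len(C) >= m ** 3 // (4 * m)
# with u = v ~ sqrt(m), the truncated sum tends to 1/e as m grows
assert abs(truncated_ie_sum(10 ** 6, 1000, 1000, 10) - exp(-1)) < 1e-2
```

With $m = 400$ this gives $\#A = 373$ and $\#B = \#C = 14$, so the product is comfortably above the $n^3/(4m)$ scale of the quick bound.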
I have no idea whether one can sharpen
Theorem~\ref{T:gowers} under the hypothesis $A = B = C$
(or even just $A = B$). It might be enlightening to collect some
numerical evidence using examples generated by Theorem~\ref{T:kedlaya};
with Xuancheng Shao, we have done this for $\PSL_2(q)$ with $q \leq 19$.
I should also mention again that, as suggested in
\cite{kedlaya-amm}, one can also study product-free
subsets of compact topological groups that are large with respect to Haar
measure. Some such study is implicit in \cite[\S 4]{gowers}, but we do not
know what explicit bounds come out.
\bigskip
\noindent
Source: \emph{Product-free subsets of groups, then and now}, arXiv:0708.2295 (math.GR; math.CO), https://arxiv.org/abs/0708.2295, 2007.
\bigskip
\noindent
\emph{Voronoi Cells in Metric Algebraic Geometry of Plane Curves} (arXiv:1906.11337, https://arxiv.org/abs/1906.11337)

\medskip
\noindent\textbf{Abstract.} Voronoi cells of varieties encode many features of their metric geometry. We prove that each Voronoi or Delaunay cell of a plane curve appears as the limit of a sequence of cells obtained from point samples of the curve. We use this result to study metric features of plane curves, including the medial axis, curvature, evolute, bottlenecks, and reach. In each case, we provide algebraic equations defining the object and, where possible, give formulas for the degrees of these algebraic varieties. We show how to identify the desired metric feature from Voronoi or Delaunay cells, and therefore how to approximate it by a finite point sample from the variety.

\section{Introduction}
\emph{Metric algebraic geometry} addresses questions about real algebraic varieties involving distances. For example, given a point $x$ on a real plane algebraic curve $X \subset \mathbb{R}^2$, we may ask for the locus of points which are closer to $x$ than to any other point of $X$. This is called the \emph{Voronoi cell of $X$ at $x$} \cite{voronoi}.
The boundary of a Voronoi cell consists of points that have more than one nearest point on $X$. So we may ask: how close must a point of $\mathbb{R}^2$ be to $X$ in order to be guaranteed a unique nearest point on $X$? This threshold is called the \emph{reach}, and was first defined by Federer \cite{Federer}.
We use Voronoi cells to study metric features of plane curves. The following theorem makes precise the idea behind Figures \ref{fig:voronoi_butterfly} and \ref{fig:delaunay_butterfly}.
\begin{maintheorem}
\label{thm:convergence}
Let $X$ be a compact algebraic curve in $\mathbb{R}^2$ and $\{A_\epsilon\}_{\epsilon \searrow 0}$ be a sequence of finite subsets of $X$ containing all singular points of $X$ such that every point of $X$ is within distance $\epsilon$ of some point in $A_\epsilon$.
\begin{enumerate}
\item Every Voronoi cell is the Wijsman limit (see Definition \ref{def:wijsman}) of a sequence of Voronoi cells of $\{A_\epsilon\}_{\epsilon \searrow 0}$.
\item If $X$ is not tangent to any circle in four or more points, then every maximal Delaunay cell is the Hausdorff limit (see Definition \ref{def:hausdorff}) of a sequence of Delaunay cells of $\{A_\epsilon\}_{\epsilon \searrow 0}$.
\end{enumerate}
\end{maintheorem}
\begin{figure}[h]
\centering
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[width=\linewidth]{vtooth1.png}
\end{subfigure}%
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[width=\linewidth]{vtooth2.png}
\end{subfigure}
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[width=\linewidth]{vtooth3.png}
\end{subfigure}
\caption{Voronoi cells of 101, 441, and 1179 points sampled from the butterfly curve \ref{eqn:butterfly}.}
\label{fig:voronoi_butterfly}
\end{figure}
\begin{figure}[h]
\centering
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[width=\linewidth]{dtooth1.png}
\end{subfigure}%
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[width=\linewidth]{dtooth2.png}
\end{subfigure}
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[width=\linewidth]{dtooth3.png}
\end{subfigure}
\caption{Delaunay cells of 101, 441, and 1179 points sampled from the butterfly curve \ref{eqn:butterfly}. The two large triangles correspond to tritangent circles of the curve.}
\label{fig:delaunay_butterfly}
\end{figure}
Voronoi diagrams of finite point sets are widely studied and have seen applications across science and technology, most notably in the natural sciences, health, engineering, informatics, civics and city planning. For example in Victoria, a state in Australia, students are typically assigned to the school to which they live closest. Thus, the catchment zones for schools are given by a Voronoi diagram \cite{VSD}.
Metric features of varieties, such as the medial axis and curvature of a point, can be detected from the Voronoi cells of points sampled densely from a variety. Computational geometers frequently use Voronoi diagrams to approximate these features and reconstruct varieties \cite{skeletons, skeleton_approx, normal_approximation}.
The reach of an algebraic variety is an invariant that is important in applications of algebraic topology to data science. For example, the reach determines the number of sample points needed for the technique of persistent homology to accurately determine the homology of a variety \cite{NSW}. For an algebraic geometric perspective on the reach, see \cite{BKSW}. The \emph{medial axis} of a variety is the locus of points which have more than one nearest point on $X$. This gives the following definition of the reach.
\begin{figure}[h!]
\centering
\includegraphics[height = 2 in]{reach_butterfly.pdf}
\caption{The reach of the butterfly curve is attained by the maximal curvature point in the lower left wing. The narrowest bottleneck is also shown. This figure is explained in Example \ref{ex:reach}.}
\label{fig:reach}
\end{figure}
\begin{definition}
The \emph{reach} $\tau(X)$ of an algebraic variety $X \subset \mathbb{R}^n$ is the minimum distance from any point on $X$ to a point on the medial axis of $X$.
\end{definition}
The paper \cite{ACKMRW17} describes how the reach is the minimum of two quantities.
We have
\begin{equation}
\label{eqn:reach_min}
\tau(X) = \min\left\{ q , \frac{\rho}{2}\right\},
\end{equation}
where $q$ is the minimum radius of curvature (Definition \ref{def:curvature}) of points in $X$ and $\rho$ is the narrowest bottleneck distance (Definition \ref{def:bottleneck_distance}). An example is depicted in Figure \ref{fig:reach}.
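As a concrete illustration (ours, not an example from the paper), take the ellipse $(x/2)^2 + y^2 = 1$ of Figure \ref{fig:ellipse_ex}: the minimum radius of curvature is $q = b^2/a = 1/2$, attained at $(\pm 2, 0)$, while the narrowest bottleneck joins $(0,1)$ and $(0,-1)$, so $\rho = 2$ and $\tau = \min\{1/2, 1\} = 1/2$. A numerical sanity check in Python:

```python
import math

a, b = 2.0, 1.0  # the ellipse (x/a)^2 + (y/b)^2 = 1

def radius_of_curvature(t):
    # parametrize the ellipse as (a cos t, b sin t)
    dx, dy = -a * math.sin(t), b * math.cos(t)
    ddx, ddy = -a * math.cos(t), -b * math.sin(t)
    return (dx * dx + dy * dy) ** 1.5 / abs(dx * ddy - dy * ddx)

# q: minimum radius of curvature, estimated by sampling the parameter
q = min(radius_of_curvature(2 * math.pi * k / 10000) for k in range(10000))
rho = 2 * b            # narrowest bottleneck: the segment from (0, b) to (0, -b)
tau = min(q, rho / 2)  # the reach formula tau = min(q, rho / 2)
```

The sampled minimum agrees with the closed form $b^2/a$, since the minimum is attained at the grid point $t = 0$.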
The paper is organized as follows. We begin with a systematic treatment in Section \ref{sec:limits} of convergence of Voronoi cells of increasingly dense point samples of a variety. This gives the proof of Theorem \ref{thm:convergence}, split among Theorems \ref{thm:delaunay_convergence} and \ref{thm:voronoi_convergence}, as well as Proposition \ref{prop:singular_convergence}, which treats the singular case separately. Theorem \ref{thm:convergence} is robust because it is not affected by the distribution of the point sample.
Theorem \ref{thm:convergence} provides the theoretical foundations for estimating metric features of a variety from a point sample. We do this for the medial axis (Section~\ref{sec:medial_axis}), curvature and evolute (Section~\ref{sec:evolute}), bottlenecks (Section~\ref{sec:bottlenecks}), and reach (Section~\ref{sec:reach}). For each of these metric features, we first give defining equations and, where possible, a formula for the degree. We then turn our attention to detecting information about a real plane curve $X$ from its Voronoi cells. For each metric feature, we state a theoretical result about how to detect the feature from the Voronoi cells of $X$ or a subset of $X$. Corollaries to Theorem~\ref{thm:convergence} provide convergence results for these features. The overall aim is to give a path to compute the metric features of a plane algebraic curve $X$ from Voronoi cells of dense point samples of $X$. We use the \emph{butterfly curve}
\begin{equation}
\label{eqn:butterfly}
b(x,y) = x^4 - x^2 y^2 + y^4 - 4 x^2 - 2 y^2 - x - 4 y + 1
\end{equation}
in our examples.
In computational geometry and data science, these problems are often considered when there is noise in the sample. In this paper we assume that our samples lie precisely on the curve $X$.
\section{Voronoi and Delaunay Cells of Varieties and Their Limits}\label{sec:limits}
Let $X \subset \mathbb{R}^n$ be a real algebraic variety, and let $d(x,y)$ denote the Euclidean distance between two points $x,y \in \mathbb{R}^n$.
\begin{definition}
The \emph{Voronoi cell of $x \in X$} is
$$
Vor_X(x) = \{y \in \mathbb{R}^n\ |\ d(y,x) \leq d(y,x') \text{ for all }x' \in X\}.
$$
\end{definition}
An example of a Voronoi cell is given in Figure \ref{fig:ellipse_ex}.
This is a convex semialgebraic set whose dimension is equal to $codim(X)$ so long as $x$ is a smooth point of $X$. It is contained in the \emph{normal space to $X$ at $x$}:
$$
N_X(x) = \{u \in \mathbb{R}^n \ |\ u-x\text{ is perpendicular to the tangent space of }X\text{ at }x\}.
$$
The topological boundary of the Voronoi cell $Vor_X(x)$ consists of the points in $\mathbb{R}^n$ that have two or more closest points in $X$, one of which is $x$. The collection of boundaries of Voronoi cells is described as follows.
\begin{definition}
\label{def:medial}
The \emph{medial axis} $M(X)$ of an algebraic variety $X \subset \mathbb{R}^n$ is the collection of points in $\mathbb{R}^n$ that have two or more closest points in $X$. An example of the medial axis is given in Figure \ref{fig:ellipse_ex}.
\end{definition}
Let $B(p,r)$ denote the open disc with center $p \in \mathbb{R}^n$ and radius $r > 0$. We say this disc is \emph{inscribed} with respect to $X$ if $X \cap B(p,r) = \emptyset$ and we say it is \emph{maximally inscribed} if no disc containing $B(p,r)$ shares this property.
Each inscribed disc gives a Delaunay cell, defined as follows.
\begin{definition}
Given an inscribed disc $B$ of an algebraic variety $X \subset \mathbb{R}^n$,
the \emph{Delaunay cell $\text{Del}_X(B)$}
is
$
conv(\overline{B} \cap X).
$
An example of a Delaunay cell and the corresponding maximally inscribed disc is given in Figure \ref{fig:ellipse_ex}.
\end{definition}
\begin{figure}[h]
\centering
\includegraphics[height=1.5 in]{ellipse_ex.pdf}
\caption{The ellipse $
(x/2)^2 + y^2 -1 = 0
$
is shown in purple. The Voronoi cell of the red point $(\sqrt{7}/2,3/4)$ is shown in pink. It is a ray starting at the point $(3\sqrt{7}/8,0)$ in the direction $(\sqrt{7}/4,3/2)$. The dark blue line segment between the points $(-1/2, \sqrt{15}/4)$ and $(-1/2, -\sqrt{15}/4)$ is a Delaunay cell defined by the light blue maximally inscribed circle with center $(-3/8,0)$ and radius $\sqrt{61}/8$. The light blue line is the medial axis, which goes from $(-3/2,0)$ to $(3/2,0)$ because the curvature at the points $(-2,0)$ and $(2,0)$ is 2.
}
\label{fig:ellipse_ex}
\end{figure}
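The metric claims in the caption can be checked by brute force. The following sketch (our illustration; the sample size and tolerance are arbitrary) verifies that a point on the asserted ray has the red point as its nearest point on the ellipse, while a point just beyond the ray's endpoint, inside the medial region, does not:

```python
import math

a, b = 2.0, 1.0
x0 = (math.sqrt(7) / 2, 0.75)         # the red point on the ellipse
start = (3 * math.sqrt(7) / 8, 0.0)   # asserted endpoint of its Voronoi ray
dirn = (math.sqrt(7) / 4, 1.5)        # asserted ray direction (outward normal)

def nearest_point(p, samples=100000):
    """Brute-force nearest point on the ellipse to p."""
    best_t = min((2 * math.pi * k / samples for k in range(samples)),
                 key=lambda t: math.dist(p, (a * math.cos(t), b * math.sin(t))))
    return (a * math.cos(best_t), b * math.sin(best_t))

# a point on the ray: its nearest point should be x0
p_on = (start[0] + 2 * dirn[0], start[1] + 2 * dirn[1])
# a point just behind the endpoint (below the x-axis, inside the ellipse):
# its nearest point has negative y-coordinate, so it is not x0
p_off = (start[0] - 0.1 * dirn[0], start[1] - 0.1 * dirn[1])
```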
\begin{remark}
For plane curves, the collection of centers of all inscribed discs which give maximal Delaunay cells (Delaunay cells which are not contained in any other Delaunay cell) is the Euclidean closure of the medial axis. Points of a plane algebraic curve $X$ which are themselves maximal Delaunay cells are points of $X$ with locally maximal curvature. In this case, the maximally inscribed circle is an \emph{osculating circle}, see Definition \ref{def:curvature}.
\end{remark}
We now describe two convex sets whose face structures encode the Delaunay and Voronoi cells of $X$.
We embed $\mathbb{R}^n$ in $\mathbb{R}^{n+1}$ by adding a coordinate. We usually imagine that this last coordinate points vertically upwards.
So, we say that $x \in \mathbb{R}^{n+1}$ is below $y \in \mathbb{R}^{n+1}$ if $x_{n+1} \leq y_{n+1}$ and all other coordinates are the same.
Let
$$
U = \{ x \in \mathbb{R}^{n+1} \ |\ x_{n+1} = x_1^2 + \cdots + x_n^2 \}
$$
be the \emph{standard paraboloid} in $\mathbb{R}^{n+1}$.
If $p \in \mathbb{R}^n,$ then let $p_U = (p,||p||^2)$ denote its lift to $U$.
Given a convex set $C \subset \mathbb{R}^{n+1}$, a convex subset $F \subset C$ is called a \emph{face} of $C$ if for every $x \in F$ and every $y,z \in C$ such that $x \in conv(y,z),$ we have that $y,z \in F$. We say that a face $F$ is \emph{exposed} if there exists an \emph{exposing hyperplane} $H$ such that $C$ is contained in one closed half space of the hyperplane and such that $F = C \cap H$. We call an exposed face $F$ a \emph{lower exposed face} of $C$ if the exposing hyperplane lies below $C$.
\begin{definition}
\label{def:lifted_delaunay}
The \emph{Delaunay lift of an algebraic variety $X \subset \mathbb{R}^n$} is the convex set
$$
P^*_X = conv(x_U\ |\ x \in X) + \{(0,\ldots,0,\lambda)\ :\ \lambda \in \mathbb{R}_{\geq 0}\} \subset \mathbb{R}^{n+1},
$$
where we recall that $x_U = (x , || x || ^2)$ and use $+$ to denote the Minkowski sum.
The Delaunay lift of the butterfly curve is shown in Figure \ref{fig:lifted_delaunay}.
\end{definition}
\begin{figure}[h]
\centering
\begin{minipage}{.55\textwidth}
\centering
\includegraphics[width=.8\linewidth]{delaunay_tooth.pdf}
\captionof{figure}{The Delaunay lift (Definition \ref{def:lifted_delaunay}) of the butterfly curve, viewed from below.}
\label{fig:lifted_delaunay}
\end{minipage}%
\begin{minipage}{.45\textwidth}
\centering
\includegraphics[width=.8\linewidth]{voronoi_tooth.pdf}
\captionof{figure}{The Voronoi lift (Definition \ref{def:lifted_voronoi}) of the butterfly curve, viewed from below.}
\label{fig:lifted_voronoi}
\end{minipage}
\end{figure}
We now study how the lower exposed faces of the Delaunay lift $P^*_X$ project to $conv(X)$, and give the Delaunay cells of $X$.
\begin{proposition}
\label{prop:delaunay_faces}
Let $X \subset \mathbb{R}^n$ be an algebraic variety. Let $\pi: \mathbb{R}^{n+1} \rightarrow \mathbb{R}^n$ be the projection onto the first $n$ coordinates. A subset $F \subset P^*_X$ is a lower exposed face if and only if $\pi(F)$ is a Delaunay cell of $X$. Furthermore, if $H_F$ is the hyperplane exposing $F$, then $\pi(U\cap H_F)$ is an inscribed sphere of $X$ and $\pi(F) = \text{Del}_X(\pi(U\cap H_F)).$
\end{proposition}
\begin{proof}
The map from $\mathbb{R}^n \rightarrow \mathbb{R}^{n+1}$ defined by $x \mapsto x_U= (x,||x||^2)$ lifts every sphere in $\mathbb{R}^n$ to the intersection of a hyperplane $H$ with $U$ \cite[Proposition 7.17]{joswig}. Moreover, the projection of the intersection of any hyperplane with
$U$ gives a sphere in $\mathbb{R}^n$ \cite[Proposition 7.17]{joswig}.
Given a Delaunay cell $\text{Del}_X(B)$ for some inscribed sphere $B$, we have that $P^*_X$ lies above the corresponding hyperplane $H$. This is because any points below $H$ would project to points in $X$ lying inside of $B$, contradicting the condition that $X \cap B = \emptyset$ for an inscribed disc $B$. So, $H$ is the exposing hyperplane of the face $(Del_X(B))_U$.
Suppose $F \subset P^*_X$ is a lower exposed face with exposing hyperplane $H_F$. The interior of the sphere $\pi(H_F \cap U)$ contains no points of $X$, because if it did contain a point $x$, then $x_U$ would lie in the lower half-space of $H_F$, which does not intersect $P^*_X$. Then $\pi(H_F\cap U)$ is the inscribed disc corresponding to a Delaunay cell.
Since $\pi(H_F \cap U)$ is a sphere, we have $\text{Del}_X(\pi(U\cap H_F)) = conv(\pi(U\cap H_F)\cap X)$. Let $X_U$ denote the lift of $X$ to $\mathbb{R}^{n+1}$. Then $\pi(U\cap H_F)\cap X = \pi(U\cap H_F\cap X_U) = \pi(H_F \cap X_U),$ and so $\text{Del}_X(\pi(U\cap H_F)) = conv(\pi(H_F \cap X_U)) = \pi(conv(H_F \cap X_U)) = \pi(F)$.
\end{proof}
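For three points and a test point in the plane, the lifting argument in this proof specializes to the classical in-circle predicate of computational geometry: $d$ lies inside the circumcircle of a counterclockwise triangle $(a,b,c)$ exactly when $d_U$ lies below the plane through $a_U$, $b_U$, $c_U$, which is a sign condition on a $3 \times 3$ determinant. A minimal sketch (ours, not code from the paper):

```python
def lift(p):
    # lift a point of R^2 to the standard paraboloid U in R^3
    return (p[0], p[1], p[0] ** 2 + p[1] ** 2)

def in_circle(a, b, c, d):
    """True iff d is strictly inside the circumcircle of the
    counterclockwise triangle (a, b, c), i.e. the lift of d lies
    strictly below the plane through the lifts of a, b, and c."""
    m = [[q[0] - d[0], q[1] - d[1], lift(q)[2] - lift(d)[2]]
         for q in (a, b, c)]
    det = (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
         - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
         + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    return det > 0
```

For the triangle $(0,0)$, $(1,0)$, $(0,1)$, the circumcircle has center $(1/2, 1/2)$; the center lies inside it, while $(2,2)$ lies outside.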
We may define a convex set whose faces project down to the Voronoi cells as follows. For any point $x \in X$, let $T(x)$ denote the plane in $\mathbb{R}^{n+1}$ through $x_U = (x,||x||^2)$ tangent to the paraboloid $U$. Let $T(x)^+$ be the closed half-space consisting of all points in $\mathbb{R}^{n+1}$ lying above the plane $T(x)$.
\begin{definition}
\label{def:lifted_voronoi}
The \emph{Voronoi lift of an algebraic variety $X \subset \mathbb{R}^n$} is the convex set
$P_X = \cap_{x \in X} T(x)^+$.
The Voronoi lift of the butterfly curve is shown in Figure \ref{fig:lifted_voronoi}.
\end{definition}
The lower exposed faces of the Voronoi lift $P_X$ project to Voronoi cells of $X$, as we now show.
\begin{proposition}
\label{prop:voronoi_faces}
Let $X \subset \mathbb{R}^n$ be an algebraic variety. Let $\pi: \mathbb{R}^{n+1} \rightarrow \mathbb{R}^n$ be the projection onto the first $n$ coordinates. A subset $F$ of the Voronoi lift $P_X$ is an exposed face of $P_X$ if and only if $\pi(F)$ is a Voronoi cell of $X$. Furthermore, if $H_F$ is the hyperplane exposing $F$ and $H_F \cap U \neq \emptyset$, then $H_F\cap U$ is a point and
$
\pi(F) = \text{Vor}_X(\pi(U \cap H_F)).
$
\end{proposition}
\begin{proof}
For some point $x \in X$, consider $P_X \cap T(x)$. Given $p \in \mathbb{R}^n$, let $p_U'$ be the unique point of $T(x)$ with $\pi(p_U') = p$. The vertical distance from $p_U$ to $p_U'$ is the squared distance $d_{\mathbb{R}^n}(p,x)^2$ \cite[Lemma 6.11]{joswig}.
Therefore, $P_X \cap T(x)$ consists of those points $p_U'$ for which the distance $d_{\mathbb{R}^n}(p,x)$ is minimal over all $x \in X$. In other words, ${\pi(P_X \cap T(x)) = Vor_X(x)}$.
Suppose $F \subset P_X$ is an exposed face with exposing hyperplane $H_F$ such that $H_F \cap U \neq \emptyset$. Let $p \in H_F \cap U$. Since $U \subset P_X$ we have that $p \in P_X$. Then, $p \in F = H_F \cap P_X$. This implies $H_F$ is the tangent hyperplane to $U$ at the point $p$, so in particular, $p=U \cap H_F$. Since $p$ is on the boundary of $P_X$, we have $\pi(p)\in X$ and $T(\pi(p)) = H_F$. We have $\pi(F)= \pi(P_X \cap T(\pi(p))) = Vor_X(\pi(p)) = Vor_X(\pi(U \cap H_F))$, where in the second equality we use the result in the preceding paragraph.
\end{proof}
There is a sense in which the Voronoi lift $P_X$ and the Delaunay lift $P^*_X$ are dual. We now describe this relationship. Suppose that $X$ is not contained in any proper linear subspace of $\mathbb{R}^n$. This implies that $P_X$ is pointed, meaning it does not contain a line. Therefore, it is projectively equivalent to a compact set \cite[Theorem 3.36]{joswig}. Embed $\mathbb{R}^{n+1}$ into $\mathbb{P}^{n+1}$ by the map
$$
\iota (x_1, \ldots, x_{n+1}) = (1:x_1 : \cdots : x_{n+1}).
$$
Let $l$ be the transformation of $\mathbb{P}^{n+1}$ defined by the following $(n+2) \times (n+2)$ matrix
$$
\begin{bmatrix}
1 & 0 & \cdots & 0 & 1 \\
0 & 2 & 0 & \cdots & 0 \\
\vdots & 0 & \ddots & 0 & \vdots \\
0 & \cdots & 0 & 2 & 0 \\
-1 & 0 & \cdots & 0 & 1
\end{bmatrix}.
$$
Then by \cite[Lemma 7.1]{joswig} the projective transformation $l$ maps $U$ to the sphere $S \subset \mathbb{R}^{n+1}$. The tangent hyperplane at the north pole $(1: 0 : \cdots : 0 : 1)$ is the image of the hyperplane at infinity. Moreover, the topological closure of $l(P_X)$ is a compact convex body so long as the origin is in the interior of $P^*_X$. In this case, we call the convex body $l(P_X)$ the \emph{Voronoi body}. The Voronoi body is full dimensional and contains the origin in its interior. Its polar dual
$$
l(P_X)^\circ := \left\{ y \in \mathbb{R}^{n+1} \ :\ \sum_{i=1}^{n+1} x_i y_i \leq 1\text{ for all } x \in l(P_X) \right\}
$$
is also full dimensional and has the origin in its interior. If we apply $l^{-1}$ to $l(P_X)^\circ$ we obtain an unbounded polyhedron, which is exactly the Delaunay lift $P^*_X$ of $X$. For more details, see \cite{joswig}.
We now study convergence of Voronoi and Delaunay cells. More precisely, given a real algebraic curve $X$ and a sequence of samplings $A_N \subset X$ with $|A_N| = N$, we show that Voronoi (or Delaunay) cells from the Voronoi (or Delaunay) cells of the $A_N$ limit to Voronoi (or Delaunay) cells of $X$. We begin by introducing two notions of convergence which describe the limits.
The \emph{Hausdorff distance} of two compact sets $B_1$ and $B_2$ in $\mathbb{R}^n$ is defined as
$$
d_h(B_1,B_2) := \sup \left \{ \adjustlimits \sup_{x \in B_1} \inf_{y \in B_2} d(x,y), \adjustlimits \sup_{y \in B_2} \inf_{x \in B_1} d(x,y) \right \}.
$$
More intuitively, we can define this distance as follows. If an adversary gets to put your ice cream on either set $B_1$ or $B_2$ with the goal of making you go as far as possible, and you get to pick your starting place in the opposite set, then $d_h(B_1,B_2)$ is the farthest the adversary could make you walk in order for you to reach your ice cream.
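For finite point sets the suprema and infima are attained, so the distance can be computed directly; a small sketch (our illustration):

```python
import math

def hausdorff(B1, B2):
    """Hausdorff distance between two finite nonempty point sets in R^n."""
    def one_way(P, Q):  # farthest the adversary can place you in P from Q
        return max(min(math.dist(p, q) for q in Q) for p in P)
    return max(one_way(B1, B2), one_way(B2, B1))
```

For example, `hausdorff([(0, 0)], [(3, 4)])` returns `5.0`, and the function is symmetric in its two arguments.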
\begin{definition}
\label{def:hausdorff}
A sequence $\{B_\nu\}_{\nu\in \mathbb{N}}$ of compact sets is \emph{Hausdorff convergent} to $B$ if $d_h(B,B_\nu) \rightarrow 0$ as $\nu \rightarrow \infty$.
\end{definition}
Given a point $x \in \mathbb{R}^n$ and a closed set $B \subset \mathbb{R}^n$, define
$$
d_w(x,B) = \inf_{b \in B} d(x,b).
$$
\begin{definition}
\label{def:wijsman}
A sequence $\{B_\nu\}_{\nu\in \mathbb{N}}$ of compact sets is \emph{Wijsman convergent} to $B$ if for every $x \in \mathbb{R}^n$, we have that
$$
d_w(x,B_\nu) \rightarrow d_w(x,B).
$$
\end{definition}
An \emph{$\epsilon$-approximation} of a real algebraic variety $X$ is a discrete subset $A_\epsilon \subset X$ such that for all $y\in X$ there exists an $x \in A_\epsilon$ so that $d(y,x) \leq \epsilon$.
By definition, when $X$ is compact, a sequence of $\epsilon$-approximations is Hausdorff convergent to $X$; for arbitrary $X$, a sequence of $\epsilon$-approximations is Wijsman convergent to $X$.
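As a concrete instance (our illustration), $N$ equally spaced points on the unit circle form an $\epsilon$-approximation with $\epsilon = \pi/N$: every point of the circle is within half an arc of a sample, and a chord is shorter than its arc. A numerical check:

```python
import math

def circle_pt(t):
    return (math.cos(t), math.sin(t))

def eps_approx(N):
    """N equally spaced points: an eps-approximation with eps = pi / N."""
    return [circle_pt(2 * math.pi * k / N) for k in range(N)]

def max_gap(A, dense=5000):
    """Largest distance from a densely sampled circle point to the set A."""
    return max(min(math.dist(circle_pt(2 * math.pi * j / dense), p) for p in A)
               for j in range(dense))
```

The gap $\max_{y} \min_{a \in A} d(y,a)$ is about $2\sin(\pi/(2N))$, just below $\pi/N$, and shrinks as $N$ grows.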
We use Wijsman convergence as a variation of Hausdorff convergence which is well suited for unbounded sets. Delaunay cells are always compact, while Voronoi cells may be unbounded.
We now study convergence of Delaunay cells of $X$, and introduce a condition on real algebraic varieties which ensures that the Delaunay cells are simplices.
\begin{definition}
\label{def:generic}
We say that an algebraic variety $X\subset \mathbb{R}^n$ is \emph{Delaunay-generic} if $X$ does not meet any $d$-dimensional inscribed sphere in more than $d+2$ points.
\end{definition}
\begin{example}
For any $n$, the standard paraboloid $U \subset \mathbb{R}^{n+2}$ is not Delaunay-generic, because it contains $n$-dimensional spheres.
\end{example}
Although the focus of this paper is on algebraic curves in $\mathbb{R}^2$, we state the following theorem for curves in $\mathbb{R}^n$ because the proof holds at this level of generality.
\begin{theorem}
\label{thm:delaunay_convergence}
Let $X \subset \mathbb{R}^n$ be a Delaunay-generic compact algebraic curve, and let $\{A_\epsilon\}_{\epsilon \searrow 0}$ be a sequence of $\epsilon$-approximations of $X$. Every maximal Delaunay cell is the Hausdorff limit of a sequence of Delaunay cells of $A_{\epsilon}$.
\end{theorem}
\begin{proof}
Consider a sequence $\{A_\epsilon\}_{\epsilon \searrow 0}$ of $\epsilon$-approximations of $X$, where $\epsilon \searrow 0$ indicates a decreasing sequence of positive real numbers $\epsilon_\nu$ for $\nu \in \mathbb{N}$. We will study the convex sets
$
P^*_{A_{\epsilon}} = conv(a_U\ |\ a \in A_{\epsilon}),
$
where $a \mapsto a_U = (a,||a||^2)$ lifts $a$ to the paraboloid $U$.
The lower faces of $P^*_{A_{\epsilon}}$ project to Delaunay cells of $A_\epsilon$ \cite[Theorem 6.12, Theorem 7.7]{joswig}.
We now apply \cite[Theorem 3.5]{nidhi} to our situation.
This result says the following.
Let $C$ be a curve and $B_\epsilon$ be a sequence of $\epsilon$-approximations of $C$.
Suppose every point on $C$ which is contained in the boundary of $conv(C)$ is an extremal point of $conv(C)$, meaning it is not contained in the open line segment joining any two points of $conv(C)$. Let $F$ be a simplicial face of $conv(C)$ which is an exposed face of $conv(C)$ with a unique exposing hyperplane.
Then $F$ is the Hausdorff limit of a sequence of facets of $conv(B_\epsilon)$. We apply this result in the case when $C= X_U = \{x_U \in \mathbb{R}^{n+1} \ |\ x \in X\}$ and $B_\epsilon = (A_{\epsilon})_U = \{a_U \in \mathbb{R}^{n+1}\ |\ a \in A_{\epsilon}\}$.
Since every point on $U$ is extremal in $conv(U)$ and $conv(X_U) \subset conv(U)$, every point on $X_U$ which is contained in the boundary of $conv(X_U)$ is also extremal in $conv(X_U)$.
A maximal Delaunay cell of $X$ is a simplex because $X$ is Delaunay-generic.
Consider a maximal Delaunay cell of $X$ which is not a vertex. It has a unique description as $Del_X(B)$ for a disc $B$.
Proposition \ref{prop:delaunay_faces} establishes a one-to-one correspondence between such Delaunay cells and lower exposed faces of $P^*_X$, which are uniquely exposed by the hyperplane containing $(\partial \overline{B})_U$.
In this case, \cite[Theorem 3.5]{nidhi} holds, so the result is proved.
If a maximal Delaunay cell is a vertex, then it is a point $x \in X$. It is then also an extremal point of $conv(X_U)$. Since $conv(B_\epsilon)$ is a sequence of compact convex sets converging in the Hausdorff sense to $conv(X_U)$, by \cite[Lemma 3.1]{nidhi} there exists a sequence of points of $B_\epsilon$ which are extremal points of $conv(B_\epsilon)$ converging to $x_U$. Their projections are then Delaunay cells of $A_\epsilon$ converging to $x$, since every point in a finite point set is a Delaunay cell of that point set.
\end{proof}
We will now study limits of Voronoi cells, using results from \cite{skeletons}, which studies convergence of Voronoi cells of \emph{$r$-nice sets} (for a definition, see \cite[p. 119]{skeletons}). In the plane, these are open sets whose boundaries satisfy certain regularity conditions; in particular, an open set whose boundary is an algebraic curve with positive reach $r$ is $r$-nice. For plane curves, having positive reach is equivalent to being smooth.
To study continuity and convergence of closed sets in the plane, Brandt uses the \emph{hit-miss topology} $\mathcal{F}$ on closed subsets of the plane \cite[Section 1-2]{matheron}.
\begin{definition}
\label{def:semicontinuous}
In the hit-miss topology, a sequence $\{F_n\}$ \emph{converges} to $F$ if and only if
\begin{enumerate}
\item \label{def:upper} for any $p \in F$, there is a sequence $p_n \in F_n$ such that $p_n \rightarrow p$; and
\item \label{def:lower}
if there exists a subsequence $p_{n_k} \in F_{n_k}$ converging to a point $p$, then $p \in F$.
\end{enumerate}
Then, to determine if a function with range in $\mathcal{F}$ is continuous, we need to examine the above conditions for sequences of sets obtained by applying the function to countable convergent sequences in $\mathbb{R}^2.$ If all such sequences satisfy (\ref{def:upper}) then the function is \emph{upper-semicontinuous}. If all such sequences satisfy (\ref{def:lower}) then it is \emph{lower-semicontinuous}. If a function satisfies both then it is continuous.
\end{definition}
\begin{lemma}
\label{lem:voronoi}
Let $X \subset \mathbb{R}^2$ be a smooth plane algebraic curve. Then the function $Vor_X: X \rightarrow \mathcal{F}$ sending $x \mapsto Vor_X(x)$ is continuous in the hit-miss topology.
\end{lemma}
\begin{proof}
By \cite[Theorem 2.2]{skeletons}, the Voronoi function $Vor_X:X \rightarrow \mathcal{F}$ is lower semicontinuous. By \cite[Theorem 3.2]{skeletons}, if the curve is $C^2$ and the \emph{skeleton} (locus of centers of maximally inscribed discs) is closed, then the Voronoi function is continuous. A smooth algebraic curve is $C^2$. The skeleton is closed because a smooth curve satisfies the $r$-nice condition, and $r$-nice curves have closed skeletons \cite{skeletons}.
\end{proof}
By \cite[page 10]{matheron}, convergence in the hit-miss topology is equivalent to Wijsman convergence. In what follows, we rephrase the results from \cite{skeletons} in the setting of Wijsman convergence of Voronoi cells of plane curves, and extend them to singular curves.
\begin{theorem}
\label{thm:voronoi_convergence}
Let $X$ be a compact smooth algebraic curve in $\mathbb{R}^2$ and $\{A_\epsilon\}_{\epsilon \searrow 0}$ be a sequence of $\epsilon$-approximations of $X$. Every Voronoi cell is the Wijsman limit of a sequence of Voronoi cells of $A_\epsilon$.
\end{theorem}
\begin{proof}
By Lemma \ref{lem:voronoi}, the function $Vor_X: X \rightarrow \mathcal{F}$ is continuous. Theorem 3.1 from \cite{skeletons} states that in this case, if $x_\epsilon$ is a sequence such that $x_\epsilon \in A_\epsilon$ and $x_\epsilon \rightarrow x$, then $Vor_{A_\epsilon}(x_\epsilon) \rightarrow Vor_X(x)$. Such a sequence must exist because for all $y \in X$, there exists a $y_\epsilon \in A_\epsilon$ such that $d(y,y_\epsilon) \leq \epsilon$.
\end{proof}
We now investigate the structure of Voronoi cells of different types of singular points. In Figure~\ref{fig:singular_varieties}, we give four examples of Proposition \ref{prop:singular_points}. First, we need a glueing lemma.
\begin{lemma}
\label{lem:glueing}
Let $C$ and $D$ be subsets of $\mathbb{R}^2$ containing a point $p \in C \cap D$. Then
$$
Vor_{C \cup D}(p) = Vor_C(p) \cap Vor_D(p).
$$
\end{lemma}
\begin{proof}
A point lies in $Vor_{C \cup D}(p)$ if and only if it is at least as close to $p$ as to any other point of $C \cup D$. This holds if and only if it is at least as close to $p$ as to any other point of $C$, and at least as close to $p$ as to any other point of $D$; that is, if and only if it lies in $Vor_C(p) \cap Vor_D(p)$.
\end{proof}
\begin{figure}[h]
\centering
\begin{subfigure}{.45\textwidth}
\centering
\includegraphics[width=.9\linewidth]{isolated.pdf}
\caption{$(x^2 + y^2 - x)^2 - (1.5)^2 (x^2 + y^2) = 0$\\ An isolated singularity and its\\
2-dimensional Voronoi cell. }
\end{subfigure}%
\begin{subfigure}{.45\textwidth}
\centering
\includegraphics[width=.9\linewidth]{cusp.pdf}
\caption{$(x^2 + y^2 - x)^2 - (x^2 + y^2)= 0$\\
A cusp and its 2-dimensional Voronoi cell. }
\end{subfigure}
\begin{subfigure}{.45\textwidth}
\centering
\includegraphics[width=.9\linewidth]{node.pdf}
\caption{$(x^2 + y^2 - x)^2 - (0.5)^2 (x^2 + y^2)= 0$\\ A node and its 0-dimensional Voronoi cell. }
\end{subfigure}%
\begin{subfigure}{.45\textwidth}
\centering
\includegraphics[width=.8\linewidth]{tacnode.pdf}
\caption{$x^4 - y^2= 0$\\
A tacnode and its 1-dimensional Voronoi cell. }
\end{subfigure}
\caption{Four singular varieties with singular point $(0,0)$. In each case the medial axis is blue, the singular point is red, and its Voronoi cell is pink.}
\label{fig:singular_varieties}
\end{figure}
\begin{proposition}
\label{prop:singular_points}
Let $X \subset \mathbb{R}^2$ be a real plane algebraic curve and $p$ be a singular point.
\begin{enumerate}
\item If $p$ is an isolated point, its Voronoi cell is 2-dimensional;
\item If $p$ is a node, then its Voronoi cell is 0-dimensional and equal to $p$;
\item If $p$ is a tacnode, then its Voronoi cell is 1-dimensional.
\end{enumerate}
\end{proposition}
\begin{proof}
\hfill
\begin{enumerate}
\item Suppose $p$ is an isolated point. Then there is a ball $B(p,r)$ centered at $p$ such that the ball contains no other points of the curve $X$. Therefore, the ball $B(p,r/2)$ is entirely contained in $Vor_X(p)$, so it is 2-dimensional.
\item If $p$ is a node, then we claim that the only point contained in $Vor_X(p)$ is $p$. At $p$, the curve meets in two branches which have distinct tangent directions at $p$. If we treat this as two separate 1-dimensional subsets $X_1$ and $X_2$ and apply Lemma \ref{lem:glueing}, we see that $Vor_X(p) = Vor_{X_1}(p) \cap Vor_{X_2}(p)$. But, since $p$ is a smooth point of $X_1$ and $X_2$, the Voronoi cells $Vor_{X_1}(p)$ and $Vor_{X_2}(p)$ are each contained in their respective normal directions, which are distinct. Therefore, $Vor_{X_1}(p) \cap Vor_{X_2}(p) = p$.
\item If $p$ is a tacnode, then two or more osculating circles (see Definition \ref{def:curvature}) are tangent at $p$. We can choose $\epsilon > 0$ so that we can separate $X \cap B(p,\epsilon)$ into subsets $X_1, \ldots, X_n$ corresponding to the osculating circles at $p$ such that $Vor_{X_i}(p)$ is a subset of the line from $p$ to the center of the corresponding osculating circle.
Then, we apply Lemma \ref{lem:glueing}. We have ${Vor_{X\cap B(p,\epsilon)}(p) = \cap_{i=1}^n Vor_{X_i}(p)}$. All of the $Vor_{X_i}(p)$ are contained in the normal line at $p$, so $Vor_{X \cap B(p,\epsilon)}(p)$ is also a subset of this normal line. Since $Vor_X(p)$ is convex, and within $B(p,\epsilon)$ the Voronoi cell is a line segment, $Vor_X(p)$ is 1-dimensional.
\end{enumerate}
\end{proof}
\begin{example}
\label{ex:cusp_converge}
In this example we illustrate why Theorem \ref{thm:voronoi_convergence} fails when the curve has a singular point. From this example it will be clear that the singular points must be included in the samples $A_\epsilon$, and it turns out that this condition is enough to extend Theorem \ref{thm:voronoi_convergence} to the singular case.
Consider the curve defined by the equation $y^2 = x^3$. In \cite[Remark 2.4]{voronoi} the authors give equations for the Voronoi cell of the cusp at the origin. This region is
$$
Vor_{y^2=x^3}((0,0)) = \{(x,y) \in \mathbb{R}^2 \ : \
27 y^4 + 128 x^3 + 72 x y^2 + 32 x^2 + y^2 + 2 x \leq 0
\}.
$$
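As a quick numerical check of this description, one can evaluate the defining quartic directly. The following minimal Python sketch (the test points are our own illustrative choices) confirms that points on the negative $x$-axis near the origin lie in the cell, while points toward the curve do not:

```python
# Numerical sanity check of the Voronoi cell of the cusp of y^2 = x^3.
# The quartic below is the defining inequality quoted above; the test
# points are chosen for illustration only.

def in_cusp_cell(x, y):
    """True if (x, y) satisfies the Voronoi-cell inequality of the cusp."""
    return 27*y**4 + 128*x**3 + 72*x*y**2 + 32*x**2 + y**2 + 2*x <= 0

# Points on the negative x-axis near the origin lie in the cell...
assert in_cusp_cell(-0.1, 0.0)
assert in_cusp_cell(-0.3, 0.05)
# ...while points to the right of the cusp, toward the curve, do not.
assert not in_cusp_cell(0.1, 0.0)
assert not in_cusp_cell(0.0, 0.5)
```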
In Figure \ref{fig:cuspidal} we give three $\epsilon$-approximations of the curve and the corresponding Voronoi decompositions. Let $\epsilon = 1/n$. The points in the $\epsilon$-approximation $A_{\epsilon}$ are given by:
$$
A_{\epsilon} = \left \{\left (\frac{j}{n}, \pm \left (\frac{j}{n}\right)^{3/2} \right )\right \}_{j = 1}^{\infty}.
$$
As we can see in Figure \ref{fig:cuspidal}, there is no sequence of cells converging to $Vor_{y^2=x^3}((0,0))$ because the $x$-axis, present due to the symmetrical nature of the sample, always divides the Voronoi cell.
\begin{figure}[h]
\centering
\includegraphics[width= \linewidth]{cuspidal.pdf}
\caption{Some $\epsilon$-approximations of the curve $y^2=x^3$ and their Voronoi diagrams. The Voronoi cell of the cusp $(0,0)$ is shown in pink. This figure is discussed in Example \ref{ex:cusp_converge}.}
\label{fig:cuspidal}
\end{figure}
\end{example}
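The obstruction in the example above can be seen directly in coordinates. In this minimal Python sketch (the truncation of the sample and the test point are our own illustrative choices), a point on the negative $x$-axis is equidistant from a symmetric pair of sample points, so it lies on a Voronoi boundary; adding the cusp to the sample gives it a unique nearest neighbor:

```python
import math

# Samples from y^2 = x^3 as in the example above (n = 10, truncated to
# finitely many j for illustration), first without the cusp and then with
# the cusp (0, 0) added to the sample.
n = 10
sample = [(j/n, s*(j/n)**1.5) for j in range(1, 41) for s in (+1, -1)]

def nearest(p, pts):
    return min(pts, key=lambda q: math.dist(p, q))

p = (-0.05, 0.0)  # a test point on the negative x-axis, inside the cusp's cell
# Without the cusp in the sample, p is equidistant from a symmetric pair,
# so the x-axis cuts through the Voronoi diagram there.
d_up = math.dist(p, nearest(p, [q for q in sample if q[1] > 0]))
d_down = math.dist(p, nearest(p, [q for q in sample if q[1] < 0]))
assert abs(d_up - d_down) < 1e-12

# With the cusp included, the unique nearest sample point is the cusp itself.
assert nearest(p, sample + [(0.0, 0.0)]) == (0.0, 0.0)
```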
We are now able to extend Theorem \ref{thm:voronoi_convergence} to singular varieties.
\begin{proposition}
\label{prop:singular_convergence}
Let $X \subset \mathbb{R}^2$ be a compact algebraic curve and $\{A_\epsilon\}_{\epsilon \searrow 0}$ a sequence of $\epsilon$-approximations such that $Sing(X) \subset A_\epsilon$ for all $\epsilon$. Then every Voronoi cell of $X$ is the Wijsman limit of a sequence of Voronoi cells of $A_\epsilon$.
\end{proposition}
\begin{proof}
By \cite[Theorem 2.2]{skeletons}, the Voronoi function is always lower-semicontinuous. So, we must show that condition (\ref{def:upper}) in Definition \ref{def:semicontinuous} holds. That is, we need that for all $p \in X$, there is a sequence $p_\epsilon \in A_\epsilon$ with $p_\epsilon \rightarrow p$ such that for any $x \in Vor_X(p)$ there is an $x_\epsilon \in Vor_{A_\epsilon}(p_\epsilon)$ with $x_\epsilon \rightarrow x$. We distinguish the cases when $p$ is smooth and singular.
If $p$ is a smooth point on $X$, and $x \in Vor_X(p)$, there exists an $\epsilon$ such that $x$ and $p$ are both in the Voronoi cell $Vor_{A_\epsilon}(p_\epsilon)$ for some $p_\epsilon$.
Suppose now that $p \in X$ is a singular point. We wish to show that there is a sequence of Voronoi cells converging to $Vor_X(p)$, and we take the sequence $Vor_{A_\epsilon}(p)$. To establish convergence, it is now enough to show that for all $x \in Vor_X(p)$, there is an $x_\epsilon \in Vor_{A_\epsilon}(p)$ with $x_\epsilon \rightarrow x$. Since $x \in Vor_X(p)$, we have that $x$ is closer to $p$ than it is to any other point in $X$. So, in particular, $x \in Vor_{A_\epsilon}(p)$.
Now we have shown that for each $p \in X$, condition (\ref{def:upper}) in Definition \ref{def:semicontinuous} holds. Therefore, for each $p \in X$, we have a sequence of Voronoi cells convergent to $Vor_X(p)$ in the hit-miss topology. Since convergence in the hit-miss topology and Wijsman convergence are equivalent, every Voronoi cell of $X$ is the Wijsman limit of a sequence of Voronoi cells of the $A_\epsilon$.
\end{proof}
This concludes the proof of Theorem \ref{thm:convergence}.
\section{Medial Axis}\label{sec:medial_axis}
Let $X = V(F)\subset \mathbb{R}^2$ be a smooth plane algebraic curve.
We now study the medial axis of $X$, as defined in Definition \ref{def:medial}.
The Zariski closure of the medial axis is an algebraic variety which has the same dimension as the medial axis. We can obtain equations in variables $x,y$ for the ideal $I$ of a variety containing the Zariski closure of the medial axis in the following way.
Let $(s,t)$ and $(z,w)$ be two points on $X$. Then, $s,t,z,$ and $ w$ satisfy the equations
\[ F(s,t)=0\ \text{and } \ F(z,w)=0 . \]
If $(x,y)$ is equidistant from $(s,t)$ and $(z,w)$ then
\[ (x-s)^2+(y-t)^2=(x-z)^2+(y-w)^2 . \]
Furthermore, $(x,y)$ must be a critical point of the distance function from both $(s,t)$ and $(z,w)$. Thus we require that the determinants of the following $2 \times 2$ augmented Jacobian matrices vanish:
\begin{equation*}
\begin{bmatrix}
x-s & y-t\\
F_s & F_t
\end{bmatrix} ,
\ \ \ \ \
\begin{bmatrix}
x-z & y-w\\
F_z & F_w
\end{bmatrix} ,
\end{equation*}
where $F_s,F_t,F_z$ and $F_w$ denote the partial derivatives of $F(s,t)$ and $F(z,w)$, respectively.
Let
\begin{align*}
I = \langle &
F(s,t), F(z,w), (x-s)^2+(y-t)^2-(x-z)^2-(y-w)^2,\\
& (x-s)F_t - (y-t)F_s , (x-z)F_w - (y-w)F_z \rangle.
\end{align*}
Then,
$
J = \left ( I : \langle s-z,\, t-w \rangle^\infty \right ) \cap \mathbb{R}[x,y]
$
is an ideal whose variety contains the Zariski closure of the medial axis.
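Note that the two determinant conditions are linear in $(x,y)$: each states that $(x,y)$ lies on the normal line at one of the two curve points. As a minimal numerical sketch (the unit circle and the sample points are our own illustrative choices), intersecting two such normal lines for the circle recovers its center, which is exactly its medial axis:

```python
import math

def normal_intersection(p, q, grad):
    """Intersect the normal lines at curve points p and q.

    Solves (x - p0)*Ft - (y - p1)*Fs = 0 at p and the analogous
    equation at q, i.e. the two determinant conditions above,
    which are linear in (x, y); solved here by Cramer's rule.
    """
    (fs, ft), (fz, fw) = grad(p), grad(q)
    # Rewrite as a1*x + b1*y = c1 and a2*x + b2*y = c2.
    a1, b1, c1 = ft, -fs, p[0]*ft - p[1]*fs
    a2, b2, c2 = fw, -fz, q[0]*fw - q[1]*fz
    det = a1*b2 - a2*b1
    return ((c1*b2 - c2*b1) / det, (a1*c2 - a2*c1) / det)

def grad_circle(p):
    # gradient of F = s^2 + t^2 - 1
    return (2*p[0], 2*p[1])

p = (math.cos(0.3), math.sin(0.3))
q = (math.cos(2.0), math.sin(2.0))
x, y = normal_intersection(p, q, grad_circle)
# All normals of the circle pass through its center, the medial axis.
assert abs(x) < 1e-12 and abs(y) < 1e-12
```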
We now study the medial axis from the perspective of Voronoi cells.
It has been observed that an approximation of the medial axis arises as a subset of the Voronoi diagram of finitely many points sampled densely from a curve \cite{medial_approximation}. We now discuss theoretical results given in \cite{skeletons} about the convergence of medial axes.
Let $X$ be a compact smooth plane algebraic curve, and let $A_\epsilon$ be an $\epsilon$-approximation of $X$. A Voronoi cell $Vor_{A_\epsilon}(a_\epsilon)$ for $a_\epsilon \in A_\epsilon$ is polyhedral, meaning it is an intersection of half-spaces.
\begin{definition}
\label{def:short_long_edges}
For sufficiently small $\epsilon$, exactly two edges of $Vor_{A_\epsilon}(a_\epsilon)$ will intersect $X$ \cite{skeletons}. We call these edges the \emph{long edges} of the Voronoi cell, and all other edges are called \emph{short edges}. An example is given in Figure \ref{fig:short_long}.
\end{definition}
\begin{figure}[h]
\centering
\includegraphics[height=2.5in]{short_long.pdf}
\caption{The long edges (blue) and short edges (red) of Voronoi cells of points sampled from the butterfly curve as in Definition \ref{def:short_long_edges}.}
\label{fig:short_long}
\end{figure}
In this case, let $\hat{S}_\epsilon(a_\epsilon)$ denote the union of the short edges and vertices of the Voronoi cell $Vor_{A_\epsilon}(a_\epsilon)$. An \emph{$\epsilon$-medial axis approximation} is the set of all short edges
$$
\hat{S}_\epsilon = \bigcup_{p \in A_\epsilon} \hat{S}_\epsilon(p).
$$
\begin{proposition}(\cite[Theorem 3.4]{skeletons})\label{prop:brandt_medial}
Let $X$ be a compact smooth plane algebraic curve. The medial axis approximations $\hat{S}_\epsilon$ converge to the Euclidean closure of the medial axis.
\end{proposition}
\begin{remark}
The medial axis is the union of all endpoints of Voronoi cells $Vor_X(p)$ for $p \in X$. The medial axis is also the union of all centers of maximally inscribed circles of $X$.
\end{remark}
The following corollary shows that the corresponding statements also hold for $\epsilon$-approximations.
\begin{corollary}
Let $\{A_\epsilon\}_{\epsilon \searrow 0}$ be a sequence of $\epsilon$-approximations of a compact smooth algebraic curve $X \subset \mathbb{R}^2$.
\label{cor:medial_approx}
\begin{enumerate}
\item The collection of vertices of the Voronoi diagrams of the $A_\epsilon$ converges to the medial axis.
\item The collection of centers of maximally inscribed discs of the $A_\epsilon$ converges to the medial axis.
\end{enumerate}
\end{corollary}
\begin{proof}
This is a consequence of Theorem \ref{thm:delaunay_convergence}, Theorem \ref{thm:voronoi_convergence}, and Proposition \ref{prop:brandt_medial}.
\end{proof}
\begin{example}
In Figure \ref{fig:trott_medial} we display the centers of maximally inscribed circles, or equivalently circumcenters of the Delaunay triangles, for an $\epsilon$-approximation of the butterfly curve where 898 points were sampled. In Figure \ref{fig:butterfly_v_medial} we show the short edges of Voronoi cells from an $\epsilon$-approximation of the butterfly curve where 101 points were sampled.
\end{example}
\begin{figure}[h]
\centering
\begin{minipage}{.47\textwidth}
\centering
\includegraphics[width=\linewidth]{medial_via_delaunay.pdf}
\captionof{figure}{A medial axis approximation of the butterfly curve obtained from circumcenters of Delaunay triangles, which are shown in red.}
\label{fig:trott_medial}
\end{minipage}%
\begin{minipage}{.47\textwidth}
\centering
\includegraphics[width=\linewidth]{butterfly_medial_v.pdf}
\captionof{figure}{A medial axis approximation of the butterfly curve obtained from short edges of Voronoi cells, which are shown in red.}
\label{fig:butterfly_v_medial}
\end{minipage}
\end{figure}
The medial axis plays an important role in applications for understanding the connected components and regions of a shape. As such, it is a very well-studied problem in computational geometry to find approximations of the medial axis from point clouds.
A survey on medial axis computation is given in \cite{medial_survey}.
\section{Curvature and the Evolute}\label{sec:evolute}
Curvature of plane curves, osculating circles and evolutes have interested mathematicians since antiquity. As early as 200 BCE, Apollonius mentioned evolutes in Book V of \textit{Conics} \cite{bce}.
We refer readers to works of Salmon in the 19th century \cite{Salmon1, Salmon2} and to modern lectures by Fuchs and Tabachnikov outlining this history \cite[Chapter 3]{Fuchs}.
We now discuss the minimal radius of curvature of a plane curve. This is one of the two quantities which determine the reach; see Equation \ref{eqn:reach_min}.
There are many ways to define the radius of curvature of a plane curve. We use the definition of Cauchy \cite[91]{cauchy}.
\begin{definition}\label{def:curvature}
Let $X\subset \mathbb{R}^2$ be an algebraic curve and $p \in X$ be a smooth point. The \emph{center of curvature} at $p$ is the intersection of the normal line to $X$ at $p$ and the normal line to $X$ at a point infinitely close to $p$. The \emph{radius of curvature} at $p$ is the distance from $p$ to its center of curvature. The (unsigned) \emph{curvature} is the reciprocal of the radius of curvature. The \emph{osculating circle} at $p$ is the circle tangent to $X$ at $p$ centered at the center of curvature with radius equal to the radius of curvature.
\end{definition}
Modern mathematicians may feel uncomfortable with the language of ``infinitely close points.'' An alternative definition of center and radius of curvature can be given using envelopes.
\begin{definition}
The \emph{envelope} of a one-parameter family of plane algebraic curves given implicitly by
${F(x,y,t)=0}$
is a curve that touches every member of the family tangentially. The envelope is the variety defined by the ideal
\[ \left \langle \frac{\partial F}{\partial t}, \ F(x,y,t) \right \rangle \cap \mathbb{R}[x,y]. \]
\end{definition}
The envelope of the family of normal lines parametrized by the points of the curve is called its \emph{evolute}.
A generalization of the evolute to all dimensions is called the \emph{ED discriminant}, and is studied in \cite{ed}.
The authors show that for a general smooth plane algebraic curve of degree $d$, the degree of the evolute is
$3d(d-1)$ \cite[Example 7.4]{ed}.
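As a classical worked example (not taken from the text above), the evolute of the parabola $y = x^2$ can be computed directly from the envelope definition: the family of normal lines is $F(x,y,t) = (x-t) + 2t(y-t^2)$, and eliminating $t$ by hand from $F = \partial F/\partial t = 0$ gives the semicubical parabola $27x^2 = 16(y - 1/2)^3$. The following sketch verifies this numerically:

```python
# Evolute of the parabola y = x^2 via the envelope definition. The family of
# normal lines is F(x, y, t) = (x - t) + 2t(y - t^2) = 0, where (t, t^2) runs
# over the parabola. Its classical implicit equation is 27 x^2 = 16 (y - 1/2)^3.

def evolute_point(t):
    # dF/dt = -1 + 2y - 6t^2 = 0  =>  y = (1 + 6t^2) / 2
    y = (1 + 6*t**2) / 2
    # back-substitute into F = 0  =>  x = -4t^3
    x = t - 2*t*(y - t**2)
    return x, y

for t in [-1.0, -0.3, 0.2, 0.7, 1.5]:
    x, y = evolute_point(t)
    assert abs(x - (-4*t**3)) < 1e-9           # the parametrization
    assert abs(27*x**2 - 16*(y - 0.5)**3) < 1e-9  # the implicit equation
```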
We now derive a formula for the center and radius of curvature of a plane curve at a point. Our derivation follows Salmon \cite[84-98]{Salmon2}. This can be taken as an equivalent definition of center and radius of curvature. The evolute is then the locus of the centers of curvature.
\begin{proposition}\cite[84-86]{Salmon2}
Let $X=V(F(x,y)) \subset \mathbb{R}^2$ be a smooth curve of degree $d$. The radius of curvature at a point $(x_0,y_0) \in V(F)$ is given by evaluating the following expression in terms of partial derivatives of $F$ at $(x_0,y_0)$:
\begin{equation}\label{urc}
R= \frac{ (F_x^2+F_y^2)^{\frac{3}{2}}}{F_{xx}F_{y}^2-2F_{xy}F_xF_y+F_{yy} F_x^2}.
\end{equation}
\end{proposition}
\begin{proof}
The equation of a normal line to $X$ at a point $(x,y) \in X$ in the variables $(\alpha, \beta)$ is
\begin{equation}\label{nl}
F_y(\alpha-x)-F_x(\beta-y)=0.
\end{equation}
The total derivative of the equation for the normal line is
\begin{equation}\label{td1}
\left(F_{xy}+F_{yy}\frac{dy}{dx}\right)(\alpha-x)-\left(F_{xx}+F_{xy}\frac{dy}{dx}\right)(\beta-y)-F_y+F_x\frac{dy}{dx}=0.
\end{equation}
The total derivative of $F(x,y)$ is
\begin{equation}\label{td2}
F_x+F_y\frac{dy}{dx} = 0.
\end{equation}
Equations \ref{nl} and \ref{td1} form a system of two linear equations in the unknowns $\alpha$ and $\beta$. We solve this system to obtain expressions for $\alpha$ and $\beta$ in terms of $x$, $y$, and $\frac{dy}{dx}$. We then substitute for $\frac{dy}{dx}$ the expression given by Equation \ref{td2}. The center of curvature of $X$ at a point $(x,y) \in X$ is given by the coordinates $(\alpha, \beta)$, which are now expressions in $x$ and $y$.
The radius of curvature $R$ at a point $(x,y)$ is its distance to its center of curvature $(\alpha, \beta)$, so we have
$ R=\sqrt{(\alpha-x)^2+(\beta-y)^2}$.
Substituting in the equations for $\alpha$ and $\beta$, we find
\begin{equation*}
R= \frac{ (F_x^2+F_y^2)^{\frac{3}{2}}}{F_{xx}F_{y}^2-2F_{xy}F_xF_y+F_{yy} F_x^2}.
\end{equation*}
\end{proof}
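A quick sanity check of Equation \ref{urc} (the circle below is our own illustrative choice): for the circle $F = x^2 + y^2 - r^2$, the formula should return $R = r$ at every point.

```python
import math

def radius_of_curvature(Fx, Fy, Fxx, Fxy, Fyy):
    """Equation (urc): R = (Fx^2+Fy^2)^(3/2) / (Fxx Fy^2 - 2 Fxy Fx Fy + Fyy Fx^2)."""
    return (Fx**2 + Fy**2)**1.5 / (Fxx*Fy**2 - 2*Fxy*Fx*Fy + Fyy*Fx**2)

r = 2.5
for theta in [0.1, 1.0, 2.3]:
    x, y = r*math.cos(theta), r*math.sin(theta)
    # partials of F = x^2 + y^2 - r^2: Fx = 2x, Fy = 2y, Fxx = Fyy = 2, Fxy = 0
    R = radius_of_curvature(2*x, 2*y, 2.0, 0.0, 2.0)
    assert abs(R - r) < 1e-9
```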
For curves in projective space, there is a modified formula for the radius of curvature.
\begin{corollary}\cite[86-87]{Salmon2}
Let $X=V(F(x,y,z)) \subset \mathbb{P}_{\mathbb{R}}^2$ be a smooth curve of degree $d$ defined by a homogeneous polynomial $F$. The radius of curvature at a point $(x_0,y_0,z_0) \in X$ is given by evaluating the following expression in terms of partial derivatives of $F$ at $(x_0,y_0,z_0)$:
\begin{equation}\label{rc}
R= \frac{(d-1)^2(F_x^2+F_y^2)^{\frac{3}{2}}}{z^2(F_{xx}F_{yy}F_{zz}-F_{xx}F_{yz}^2-F_{yy}F_{xz}^2-F_{zz}F_{xy}^2+2F_{yz}F_{xz}F_{xy})}.
\end{equation}
\end{corollary}
\begin{proof}
For any homogeneous function $F(x,y,z)$ of degree $d$ we have $xF_x+yF_y+zF_z=dF$.
We use this equation to obtain expressions for $F_x$ and $F_y$. Similarly, we find relations among the second partial derivatives. We substitute these expressions into Equation \ref{urc} to obtain the new formula.
\end{proof}
We now analyze the critical points of curvature of a smooth algebraic curve $X \subset {\mathbb{R}}^2$. If $X$ is a line, then for all $p \in X$ the radius of curvature is infinite and the curvature is $0$. If $X$ is a circle, then all points $p \in X$ have the same radius of curvature, equal to the radius of the circle. Thus the total derivative of the equation for the radius of curvature is identically $0$. We exclude such curves from our analysis by requiring that $X$ be irreducible of degree greater than or equal to $3$.
\begin{definition}
The \emph{degree of critical curvature} of a smooth algebraic curve $X \subset \mathbb{R}^2$ is the degree of the variety obtained by intersecting the Zariski closure $\overline{X} \subset \mathbb{P}_{\mathbb{C}}^2$ with the variety of the total derivative of the equation for the radius of curvature. If $X \subset {\mathbb{R}}^2$ is a smooth, irreducible algebraic curve of degree greater than or equal to $3$, then the intersection consists of finitely many points, the points of critical curvature. Thus the degree of critical curvature of $X$ gives an upper bound for the number of real points of critical curvature of $X$.
\end{definition}
\begin{theorem}
Let $X \subset {\mathbb{R}}^2$ be a smooth, irreducible algebraic curve of degree $d \ge 3$. Then the degree of critical curvature of $X$ is $6d^2-10d$.
\end{theorem}
\begin{proof}
To simplify notation, let $H=F_{xx}F_{yy}F_{zz}-F_{xx}F_{yz}^2-F_{yy}F_{xz}^2-F_{zz}F_{xy}^2+2F_{yz}F_{xz}F_{xy}.$
We have assumed that the radius of curvature is finite, so $H$ is nonzero.
Dehomogenize Equation \ref{rc} by setting $z=1$.
Then take the total derivative and set it equal to $0$.
Then divide both sides of the equation by $\frac{(d-1)^2(F_x^2+F_y^2)^{\frac{1}{2}}}{2z^2H}$. We have already shown that the denominator of this fraction is nonzero. The numerator is nonzero as well because $H$ is nonzero implies that $F_x$ and $F_y$ cannot both be zero.
We obtain
\begin{equation}\label{cc}
(F_x^2+F_y^2)(F_yH_x-F_xH_y)= 3H[(F_{xx}-F_{yy})F_xF_y+F_{xy}(F_y^2-F_x^2) ].
\end{equation}
The degree of $F$ is $d$. So the degree of Equation \ref{cc} is $6d-10$. We intersect the projective variety defined by the homogenization of Equation \ref{cc} (which has the same degree as the affine variety) with the projectivization of $X$. By B\'{e}zout's Theorem, the degree of critical curvature of the complex projectivization of $X$ is
$6d^2-10d$.
\end{proof}
We remark that the critical points of curvature of $X$ give cusps on the evolute \cite[Lemma 10.1]{Fuchs}.
That is, if a normal line is drawn through a point of critical curvature on a curve, then the normal line will pass through a cusp of the evolute.
In addition, the evolute of a curve of degree $d$ has $d$ cusps at infinity. Thus the evolute of a plane curve of degree $d$ has $6d^2-10d+d=6d^2-9d$ cusps \cite[97]{Salmon2}. In Figure \ref{fig:evolute}, we picture the evolute, the butterfly curve, and the pairs of critical curvature points on the butterfly curve with their corresponding cusp on the evolute.
\begin{figure}[h]
\centering
\includegraphics{evolute_approx_17485.pdf}
\caption{The real points of critical curvature on the butterfly curve, computed in Example \ref{ex:curvature}, joined by green line segments to their centers of curvature. These give cusps on the evolute, which is pictured here in light blue.}
\label{fig:evolute}
\end{figure}
\begin{example}
\label{ex:curvature}
Consider the butterfly curve. Using the above description, we can compute the 56 points of critical curvature using \texttt{JuliaHomotopyContinuation} \cite{julia}. Twelve of these points are real, and they are plotted in Figure \ref{fig:evolute}. The maximal curvature is approximately $9.65$. This is achieved at the lower left wing of the butterfly.
\end{example}
We now describe how to recover the curvature at a point from the Voronoi cells of a subset of a curve $X$. In applications, Voronoi-based methods are used for obtaining estimates of curvature at a point.
An overview of techniques for estimating curvature of a variety from a point cloud is given in \cite{feature_detection}. Further, there are also Delaunay-based methods for estimating curvature of a surface in three dimensions \cite{delaunay_curvature}.
\begin{theorem}
\label{thm:curvature}
Let $X$ be a smooth plane curve of degree at least $3$ and $p\in X$ a point.
Let $\delta$ be less than the distance from $p$ to the nearest critical point of curvature, and
let $B(p, \delta)$ be a ball of radius $\delta$ centered at $p$. Then
\begin{enumerate}
\item
\label{thm:1pt1}
The Voronoi cell $\text{Vor}_{X \cap B(p, \delta)}(p)$ is a ray. The distance from $p$ to the endpoint of this ray is the radius of curvature of $X$ at $p$.
\item
\label{thm:1pt2}
Consider a sequence of $\epsilon$-approximations $A_\epsilon$ of $X \cap B(p,\delta)$. Let $a_\epsilon$ be the point such that $p \in Vor_{A_\epsilon}(a_\epsilon)$, and let $d_\epsilon$ be the minimum distance from $a_\epsilon$ to a vertex of $Vor_{A_\epsilon}(a_\epsilon)$. Then, the sequence $d_\epsilon$ converges to the radius of curvature of $X$ at $p$.
\end{enumerate}
\end{theorem}
\begin{proof}
The Voronoi cell $\text{Vor}_{X \cap B(p, \delta)}(p)$ is a subset of the line normal to $X \cap B(p, \delta)$ at $p$. Its endpoint lies either at the center of curvature of $X$ at $p$ or at a point where the normal line intersects the normal line at a distinct point $p'$ in $X\cap B(p,\delta)$. In the latter case, the intersection point of the normals at $p$ and $p'$ is contained in the Voronoi cell with respect to $X \cap B(p, \delta)$ of each of them, so in particular $X \cap B(p,\delta)$ has a nonempty medial axis. This medial axis has an endpoint which corresponds to a point of critical curvature \cite{medial_endpoints}, contradicting the constraint on $\delta$. Therefore, the endpoint of the Voronoi cell $\text{Vor}_{X \cap B(p, \delta)}(p)$ is the center of curvature of $X$ at $p$. This concludes the proof of (\ref{thm:1pt1}).
For (\ref{thm:1pt2}), we know that the sequence $Vor_{A_\epsilon}(a_\epsilon)$ is Wijsman convergent to $Vor_{X \cap B(p, \delta)}(p)$ by Theorem \ref{thm:voronoi_convergence}. Denote by $V_\epsilon$ the set of vertices of $Vor_{A_\epsilon}(a_\epsilon)$. By Corollary \ref{cor:medial_approx}, we also have that the sets $V_\epsilon$ are Wijsman convergent to the endpoint of $Vor_{X \cap B(p,\delta)}(p)$, which we call $p'$. By the definition of Wijsman convergence, this means that for any $x \in \mathbb{R}^2$, $d_w(x,V_\epsilon) \rightarrow d_w(x,p')$. By the definition of $d_w$, we have $d_w(p,V_\epsilon) = d_\epsilon$ and $d_w(p,p')$ is the radius of curvature of $p$. This concludes the second part of the proof.
\end{proof}
The evolute $E$ of a plane curve is the locus of all centers of curvature of the curve. Therefore, to find the evolute using Voronoi cells we may splice the curve into sections and apply Theorem~\ref{thm:curvature}.
Let $X$ be compact. Let $C \subset X$ denote the points of locally maximal curvature. Then $X \backslash C$ consists of finitely many components, $X \backslash C = X_1\cup \cdots \cup X_n$. Let $\tau$ denote the reach of $X$, and cover each $X_i$ by balls $B_{i,j}$ of radius less than $\tau$.
Let $E_{i,j}$ denote the collection of vertices of Voronoi cells of $X_i \cap B_{i,j}$. Then by Theorem \ref{thm:curvature},
$
E = \cup_{i,j} \overline{E_{i,j}}.
$
Furthermore, for $\epsilon$-approximations $A_{\epsilon,i,j}$ of $X_i \cap B_{i,j}$, the union over $i,j$ of their Voronoi vertices will converge to $E$ by Theorem \ref{thm:voronoi_convergence}.
\section{Bottlenecks}\label{sec:bottlenecks}
As in the colloquial sense of the word, a bottleneck refers to a narrowing of a variety, or a place where it gets closer to self-intersection.
Consider a smooth algebraic variety $X \subset \mathbb{R}^n$. We define $a\perp b$ by $\sum_{i=1}^na_ib_i=0$ for
$a=(a_1,\dots,a_n),$ $b= (b_1,\dots,b_n) \in \mathbb{R}^n$.
For a point $x \in X$,
let $(T_xX)_0$ denote the embedded tangent space of $X$ translated to
the origin. Then the \emph{Euclidean normal space} of $X$ at $x$ is
defined as $N_xX=\{z \in \mathbb{R}^n:(z-x) \perp (T_xX)_0\}$.
\begin{definition}
\label{def:bottleneck}
A \emph{bottleneck} of a smooth algebraic variety $X \subset \mathbb{R}^n$ is a pair of
distinct points $(x,y) \in X \times X$ such that $\overline{xy} \subseteq N_xX \cap N_yX$, where $\overline{xy}$ is the line spanned by $x$ and $y$.
\end{definition}
We note that bottlenecks are given not only by the narrowest parts of the variety, but also by maximally wide parts, since our algebraic definition considers all critical points rather than just the minima.
The bottlenecks of the butterfly curve are shown in Figure \ref{fig:tooth_bottle}.
\begin{definition}
\label{def:bottleneck_distance}
The \emph{narrowest bottleneck distance} $\rho$ of a variety $X \subset \mathbb{R}^n$ is
\[ \rho(X)=\min_{(x,y) \text{ a bottleneck}} d(x,y) \]
where $d(x,y)$ is the Euclidean distance between $x$ and $y$.
\end{definition}
We will now describe the \emph{bottleneck locus} in
$\mathbb{R}^{2n}$ which consists of the bottlenecks of $X$ \cite{DEW}. Let $(f_1,\dots,f_k)
\subseteq \mathbb{R}[x_1,\dots,x_n]$ be the ideal of $X$. Consider the ring
isomorphism $\phi: \mathbb{R}[x_1,\dots,x_n] \to \mathbb{R}[y_1,\dots,y_n]$ defined by $x_i \mapsto
y_i$ and let $f'_i=\phi(f_i)$. Then $f_i$ and $f_i'$ have gradients
$\nabla f_i$ and $\nabla f'_i$ with respect to $\{x_1,\dots,x_n\}$ and
$\{y_1,\dots,y_n\}$, respectively. The \emph{augmented
Jacobian} $J$ is the following matrix of size $(k+1) \times n$ with
entries in $R=\mathbb{R}[x_1,\ldots,x_n, y_1,\ldots, y_n]$:
\begin{equation*} \label{eq:augjac}
J =
\begin{bmatrix}
y-x \\
\nabla f_1 \\
\vdots \\
\nabla f_k
\end{bmatrix} ,
\end{equation*}
where $y-x$ is the row vector $(y_1-x_1,\dots,y_n-x_n)$. Let $N$
denote the ideal in $R$ generated by $(f_1,\dots,f_k)$ and the
$(n-\dim(X)+1) \times (n-\dim(X)+1)$ minors of $J$. Then the points $(x,y)$ of the
variety defined by $N$ are the points $(x,y) \in X \times X \subset \mathbb{R}^{2n}$ such
that $y \in N_xX$. In the same way we define a matrix $J'$ and an
ideal $N' \subseteq R$ by replacing $f_i$ with $f'_i$ and $\nabla f_i$
with $\nabla f'_i$.
The \emph{bottleneck locus} $B$ is the variety
\begin{equation}
B = V((N + N'): \langle x-y\rangle ^{\infty}) \subset X \times X \subset \mathbb{R}^{2n}.
\end{equation}
The saturation removes the diagonal, as $(x,y)$ is not a bottleneck if $x=y$.
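For a plane curve $X = V(f)$, the construction above specializes to the system $f(x) = f(y) = 0$ together with the two $2 \times 2$ augmented-Jacobian determinants. The following minimal sketch (the ellipse is our own illustrative choice, not a curve from the text) evaluates these determinant residuals numerically:

```python
import math

# Bottleneck conditions for the ellipse f = x^2/4 + y^2 - 1: a pair (p, q)
# on the curve is a bottleneck when both augmented-Jacobian determinants
# det[[q - p], [grad f(p)]] and det[[p - q], [grad f(q)]] vanish.

def grad(p):
    # gradient of f = x^2/4 + y^2 - 1
    return (p[0]/2, 2*p[1])

def bottleneck_residuals(p, q):
    gp, gq = grad(p), grad(q)
    d1 = (q[0]-p[0])*gp[1] - (q[1]-p[1])*gp[0]   # q - p against the normal at p
    d2 = (p[0]-q[0])*gq[1] - (p[1]-q[1])*gq[0]   # p - q against the normal at q
    return d1, d2

# The horizontal axis pair (2,0), (-2,0) is a bottleneck: both residuals vanish.
assert bottleneck_residuals((2.0, 0.0), (-2.0, 0.0)) == (0.0, 0.0)
# Pairing (2,0) with a generic point of the ellipse gives nonzero residuals.
q = (2*math.cos(1.0), math.sin(1.0))
assert any(abs(d) > 1e-6 for d in bottleneck_residuals((2.0, 0.0), q))
```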
\begin{figure}[h]
\centering
\includegraphics[height = 3.3 in]{bottlenecks.pdf}
\caption{The real bottleneck pairs of the butterfly curve, computed in Example \ref{ex:bottlenecks}.}
\label{fig:tooth_bottle}
\end{figure}
Next, we give the bottleneck degree, which is
a measure of the complexity of computing all bottlenecks of an algebraic variety. Under suitable genericity assumptions described in \cite{DEW}, the bottleneck degree coincides with twice the number of bottlenecks of the complexification of $X$. The factor of $2$ arises because in the product $X \times X$, the points $(x,y)$ and $(y,x)$ are distinct, though they correspond to the same pair of points in $X$. We refer readers to \cite{Eklund} for a discussion of the numerical algebraic geometry of bottlenecks.
\begin{theorem}[\cite{DEW}]
\label{thm:bottleneck_degree}
Under certain genericity assumptions, the degree of the bottleneck locus of a smooth algebraic curve $X \subset \mathbb{R}^2$ of degree $d$ is
\[ d^4-5d^2+4d. \]
\end{theorem}
A degree formula for the bottleneck locus of varieties of any dimension is provided in \cite{DEW}. The proof applies the double point formula from intersection theory to a map taking the variety to a variety of its normals.
\begin{example}
\label{ex:bottlenecks}
We now compute the bottlenecks for the quartic butterfly curve $b(x,y) = 0$. Theorem~\ref{thm:bottleneck_degree} predicts that there are $192/2 = 96$ bottlenecks. Using the description above and \texttt{JuliaHomotopyContinuation} \cite{julia}, we obtain the 96 bottleneck pairs. Of these, 22 are real. We show them in Figure \ref{fig:tooth_bottle}.
\end{example}
We now study bottlenecks from the perspective of Voronoi cells. For a smooth point $p$ on an algebraic curve $X \subset \mathbb{R}^2$, the Voronoi cell $Vor_X(p)$ is a 1-dimensional subset of the normal line to $X$ at the point $p$. Therefore, the normal direction can be recovered from the Voronoi cell $Vor_X(p)$.
For sufficiently small $\epsilon$, an $\epsilon$-approximation $A_\epsilon$ of $X$ will have Voronoi cells whose long edges approximate the normal direction. More precisely, by Theorem \ref{thm:voronoi_convergence}, if $a_\epsilon \in A_\epsilon$ is the point such that $p \in Vor_{A_\epsilon}(a_\epsilon)$, then the directions of the long edges of $Vor_{A_\epsilon}(a_\epsilon)$ converge to the normal direction at $p$.
We remark here that the problem of estimating normal directions from Voronoi cells is well-studied, and numerous efficient, robust algorithms exist \cite{feature_detection,alliez,amenta}.
As in Definition \ref{def:bottleneck}, two points $x,y \in X$ form a bottleneck if their normal lines coincide. This implies that the line connecting them contains both $Vor_X(x)$ and $Vor_X(y)$.
\begin{definition}
\label{def:bottleneck_candidate}
Let $A_\epsilon$ be an $\epsilon$-approximation of an algebraic curve $X \subset \mathbb{R}^2$. We say a pair $x_\epsilon, y_\epsilon \in A_\epsilon$ is an \emph{approximate bottleneck reach candidate} if the line $\overline{x_\epsilon y_\epsilon}$ joining $x_\epsilon$ and $y_\epsilon$ meets each of $Vor_{A_\epsilon}(x_\epsilon)$ and $Vor_{A_\epsilon}(y_\epsilon)$ at short edges of those cells.
\end{definition}
\begin{figure}[h]
\centering
\includegraphics[height = 3.5 in]{narrowest_bottleneck.pdf}
\caption{The approximate bottleneck reach candidates (see Definition \ref{def:bottleneck_candidate}) of 568 points sampled from the butterfly curve. The narrowest width of an approximate bottleneck reach candidate is approximately $0.495$ while the true narrowest bottleneck width is approximately $0.503$.}
\label{fig:approx_bottlenecks}
\end{figure}
In Figure \ref{fig:approx_bottlenecks} we show the approximate bottleneck reach candidates for points sampled from the butterfly curve. The following result tells us that if the reach is achieved by a bottleneck pair, then this bottleneck pair is a limit of approximate bottleneck reach candidates.
\begin{theorem}
Let $\{A_\epsilon\}_{\epsilon \searrow 0}$ be a sequence of $\epsilon$-approximations of a smooth algebraic curve $X \subset \mathbb{R}^2$. If $x,y$ is a bottleneck pair of $X$ that achieves the reach, then there are sequences $x_\epsilon, y_\epsilon \in A_\epsilon$ of approximate bottleneck reach candidates converging to $x$ and $y$.
\end{theorem}
\begin{proof}
Consider the line segment $\overline{xy}$ joining $x$ and $y$. Since $x$ and $y$ are a bottleneck pair that achieves the reach, the midpoint of $\overline{xy}$ is on the medial axis of $X$. So, for sufficiently small $\epsilon$, the segment $\overline{xy}$ intersects a short edge shared by two Voronoi cells $Vor_{A_\epsilon}(x_\epsilon)$ and $Vor_{A_\epsilon}(y_\epsilon)$ in a point $v_\epsilon$. Then, $x_\epsilon$ and $y_\epsilon$ form an approximate bottleneck reach candidate by definition. It remains to show that the sequence $x_\epsilon$ converges to $x$ and the sequence $y_\epsilon$ converges to $y$.
Since $v_\epsilon$ is in the normal space of $x$,
there exists a neighborhood of $x$ such that
the nearest point of the intersection of this neighborhood and $X$ to $v_\epsilon$ is $x$. So for $\epsilon$ smaller than the radius of this neighborhood, one of the two points in $A_\epsilon$ on either side of $x$ as one moves along $X$ must be the one whose Voronoi cell contains $v_\epsilon$. Since $x_\epsilon$ is the point whose Voronoi cell contains $v_\epsilon$, we have that $x_\epsilon$ is one of the two closest points in $A_\epsilon$ to $x$, meaning that $d(x,x_\epsilon) \leq \epsilon$. Hence, $x_\epsilon$ converges to $x$. Similarly, $y_\epsilon$ converges to $y$.
\end{proof}
\section{Reach}\label{sec:reach}
\begin{example}
\label{ex:reach}
We may find the reach of the butterfly curve by taking the minimum of half the narrowest bottleneck distance and the minimum radius of curvature. This is shown in Figure \ref{fig:reach}. From the computations in Example \ref{ex:bottlenecks}, we find that half the narrowest bottleneck distance is approximately $0.251$. Meanwhile, from Example \ref{ex:curvature}, we find that the minimum radius of curvature is approximately $0.104$. Therefore, the reach of the butterfly is approximately $0.104$.
\end{example}
In the previous sections, we described how the reach is the minimum of the minimal radius of curvature and half of the narrowest bottleneck distance. We also gave equations for the ideal of the bottlenecks and for the ideal of the critical points of curvature. We now give \texttt{Macaulay2} \cite{M2} code to compute these ideals for smooth algebraic curves $X \subset \mathbb{R}^2$. Finding the points in these ideals, using for example \texttt{JuliaHomotopyContinuation} \cite{julia}, and taking the appropriate minima gives the reach of $X$.
\begin{verbatim}
R=QQ[x_1,x_2,y_1,y_2]
f= x_1^4 - x_1^2*x_2^2 + x_2^4 - 4*x_1^2 - 2*x_2^2 - x_1 - 4*x_2 + 1
g=sub(f,{x_1=>y_1,x_2=>y_2})
augjacf=det(matrix{{x_1-y_1,x_2-y_2},{diff(x_1,f),diff(x_2,f)}})
augjacg=det(matrix{{y_1-x_1,y_2-x_2},{diff(y_1,g),diff(y_2,g)}})
bottlenecks=saturate(ideal(f,g,augjacf,augjacg),ideal(x_1-y_1,x_2-y_2))
\end{verbatim}
\begin{verbatim}
R=QQ[x,y]
f=x^4 - x^2*y^2 + y^4 - 4*x^2 - 2*y^2 - x - 4*y + 1
num=(diff(x,f))^2 + (diff(y,f))^2
denom=-(diff(y,f))^2*diff(x,diff(x,f)) +
2*diff(x,f)*diff(y,f)*diff(y,diff(x,f)) -
(diff(x,f))^2*diff(y,diff(y,f))
crit=det(matrix({{num*diff(x,denom)- 3/2*denom*diff(x,num),
num*diff(y,denom)-3/2*denom*diff(y,num)},{diff(x,f),diff(y,f)}}))
criticalcurvature=ideal(f,crit)
\end{verbatim}
Alternatively, one can estimate the reach from a point sample. The paper \cite{ACKMRW17} provides a method to do so. We provide a substantially different method that relies upon computing Voronoi and Delaunay cells of points sampled from the curve. We have already discussed how to approximate bottlenecks and curvature using Voronoi cells. This gives the following Voronoi-based Algorithm~\ref{algo:voronoi_reach} for approximating the reach of a curve.
\begin{algorithm}
\caption{Voronoi-Based Reach Estimation}
\label{algo:voronoi_reach}
\begin{algorithmic}
\REQUIRE $A \subset X$ a finite set of points forming an $\epsilon$-approximation for a compact, smooth algebraic curve $X \subset \mathbb{R}^2$.
\ENSURE $\tau$, an approximation of the reach.
\FOR{$a \in A$}
\STATE Compute an estimate for the radius of curvature $\rho_a$ at $a$ using a technique from \cite{feature_detection}.
\ENDFOR
\STATE Set $\rho_{min} = \min_{a \in A}\rho_a$.
\STATE Set $q$ to be the radius of any disk containing $X$.
\FOR{$a,b \in A$}
\IF{$a,b$ form an approximate bottleneck reach candidate as in Definition \ref{def:bottleneck_candidate}}
\STATE Set $q = \min(q,d(a,b)/2)$
\ENDIF
\ENDFOR
\STATE Set $\tau = \min(q,\rho_{min})$.
\end{algorithmic}
\end{algorithm}
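As an illustration, the following Python sketch mimics Algorithm~\ref{algo:voronoi_reach} on a dense sample of the ellipse $(x/2)^2+y^2=1$, whose reach is $b^2/a=1/2$, attained by the radius of curvature at $(\pm 2,0)$. It is a simplification and not the algorithm itself: the radius of curvature at each sample is estimated by the circumradius of consecutive sample triples rather than by the technique of \cite{feature_detection}, and the Voronoi-cell test of Definition \ref{def:bottleneck_candidate} is replaced by a cruder midpoint criterion (a pair is treated as a bottleneck candidate if its chord is not short and no other sample is much closer to the chord midpoint than the two endpoints are).

```python
import math

# Sample the ellipse (x/2)^2 + y^2 = 1; its reach is b^2/a = 0.5,
# attained by the radius of curvature at the vertices (+-2, 0).
n = 150
pts = [(2 * math.cos(2 * math.pi * k / n), math.sin(2 * math.pi * k / n))
       for k in range(n)]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def circumradius(p, q, r):
    # Circumradius R = abc / (4 * area) of the triangle pqr.
    a, b, c = dist(q, r), dist(p, r), dist(p, q)
    area = abs((q[0] - p[0]) * (r[1] - p[1])
               - (r[0] - p[0]) * (q[1] - p[1])) / 2
    return a * b * c / (4 * area)

# Curvature ingredient: min circumradius over consecutive sample triples.
rho_min = min(circumradius(pts[k - 1], pts[k], pts[(k + 1) % n])
              for k in range(n))

# Bottleneck ingredient: crude midpoint test in place of the Voronoi test.
q = 2.0  # radius of a disk containing the ellipse
for i in range(n):
    for j in range(i + 1, n):
        d = dist(pts[i], pts[j])
        if d < 0.3:            # skip short chords between nearby samples
            continue
        m = ((pts[i][0] + pts[j][0]) / 2, (pts[i][1] + pts[j][1]) / 2)
        other = min(dist(m, p) for k, p in enumerate(pts) if k not in (i, j))
        if other >= 0.95 * d / 2:   # midpoint roughly on the medial axis
            q = min(q, d / 2)

tau = min(q, rho_min)
print(rho_min, q, tau)
```

Here the curvature term $\rho_{min}\approx 0.5$ achieves the minimum, while the bottleneck term (half of the minor-axis bottleneck distance $2$, slightly reduced by near-candidates) stays well above it.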
The reach is equivalently defined as the minimum distance to the medial axis, which suggests the following Delaunay-based Algorithm~\ref{algo:delaunay_reach} for estimating the reach. This algorithm is susceptible to sampling error, and making it accurate would require more sophisticated techniques.
\begin{algorithm}
\caption{Delaunay-Based Reach Estimation}
\label{algo:delaunay_reach}
\begin{algorithmic}
\REQUIRE $A \subset X$ a finite set of points forming an $\epsilon$-approximation for a compact, smooth algebraic curve $X \subset \mathbb{R}^2$.
\ENSURE $\tau$, an approximation of the reach.
\STATE Compute a Delaunay triangulation $D$ of $A$.
\STATE Set $M = \emptyset$.
\FOR{$T$ a Delaunay triangle of $D$}
\STATE Set $c_T$ to be the circumcenter of the Delaunay triangle $T$.
\STATE Set $M = M \cup \{c_T\}$.
\ENDFOR
\STATE Set $\tau = \min_{c \in M, a\in A} d(a,c)$.
\end{algorithmic}
\end{algorithm}
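The following Python sketch illustrates the idea behind Algorithm~\ref{algo:delaunay_reach} in the simplest possible case, the unit circle, whose medial axis is its center and whose reach is $1$. To stay self-contained it does not compute a full Delaunay triangulation: for points sampled from a circle, the circumcenter of any three distinct samples already lies at the center, so circumcenters of consecutive sample triples stand in for the Delaunay circumcenters.

```python
import math

# Dense sample of the unit circle; reach = 1, medial axis = {(0,0)}.
n = 100
pts = [(math.cos(2 * math.pi * k / n), math.sin(2 * math.pi * k / n))
       for k in range(n)]

def circumcenter(p, q, r):
    # Intersection of the perpendicular bisectors of pq and pr.
    ax, ay = p
    bx, by = q
    cx, cy = r
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy)

# Circumcenters of consecutive triples approximate the medial axis M.
M = [circumcenter(pts[k - 1], pts[k], pts[(k + 1) % n]) for k in range(n)]

# Estimated reach: minimum distance from a sample to an approximate
# medial-axis point.
tau = min(math.hypot(a[0] - c[0], a[1] - c[1]) for a in pts for c in M)
print(tau)   # close to the true reach 1
```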
The approximate methods can be used with curves of higher degree, while the symbolic methods become hard to compute even for curves of degree as low as $4$, though they give a more accurate estimate of the reach. This suggests that more work can be done to develop fast and accurate methods to compute the reach of a variety.
\section*{Acknowledgements}
We thank Paul Breiding, Diego Cifuentes, Yuhan Jiang, Daniel Plaumann, Kristian Ranestad, Rainer Sinn, Bernd Sturmfels, and Sascha Timme for helpful discussions. Research on this project was carried out while the authors were based at the Max Planck Institute for Mathematics in the Sciences (MPI-MiS) in Leipzig, Germany. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE 1752814. Any opinions, findings, and
conclusions or recommendations expressed in this material are those of the authors and do not
necessarily reflect the views of the National Science Foundation.
\bibliographystyle{alpha}
% Exponential growth of ponds in invasion percolation on regular trees (arXiv:0912.5205)
\section{Introduction and definitions}
\subsection{The model: invasion percolation, ponds and outlets}
Consider an infinite connected locally finite graph ${\cal{G}}$, with a distinguished vertex $o$, the root. On each edge, place an independent Uniform$[0,1]$ edge weight; we may assume that (a.s.) the weights are all distinct. Starting from the subgraph ${\cal{C}}_0=\set{o}$, inductively grow a sequence of subgraphs ${\cal{C}}_i$ according to the following deterministic rule. At step $i$, examine the edges on the boundary of ${\cal{C}}_{i-1}$, and form ${\cal{C}}_i$ by adjoining to ${\cal{C}}_{i-1}$ the boundary edge whose weight is minimal. The infinite union
\begin{equation}
{\cal{C}}=\bigcup_{i=1}^\infty {\cal{C}}_i
\end{equation}
is called the \emph{invasion cluster}.
Invasion percolation is closely related to ordinary (Bernoulli) percolation. For instance, if ${\cal{G}}$ is quasi-transitive, then for any $p>p_c$, only a finite number of edges of weight greater than $p$ are ever invaded (proved in \cite{CCN1985} for ${\cal{G}} = \mathbb{Z}^d$, and later greatly generalized in \cite{HPS1999}). On the other hand, it is elementary to show that for any $p<p_c$, infinitely many edges of weight greater than $p$ must be invaded. In other words, writing $\xi_i$ for the weight of the $i^\th$ invaded edge, we have
\begin{equation}
\label{limsupispcGeneral}
\limsup_{i\rightarrow\infty} \xi_i = p_c
\end{equation}
So invasion percolation produces an infinite cluster using only slightly more than critical edges, even though there may be no infinite cluster at criticality. The fact that invasion percolation is linked to the critical value $p_c$, even though it contains no parameter in its definition, makes it an example of \emph{self-organized criticality}.
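This self-organized criticality is easy to observe numerically. The following Python sketch (our illustration, not part of the original text) runs invasion percolation on the forward binary tree, where $p_c=1/2$: on a tree the boundary never reconnects, so each invasion step simply pops the minimum-weight boundary edge from a heap and pushes two fresh child edges. In accordance with \eqref{limsupispcGeneral}, the maximum weight invaded late in the run is only slightly above $1/2$.

```python
import heapq
import random

random.seed(7)

# Invasion percolation on the forward binary tree (sigma = 2, p_c = 1/2).
# The boundary of the cluster is a heap of edge weights; invading the
# minimal boundary edge exposes two new child edges with fresh weights.
steps = 20000
boundary = [random.random(), random.random()]  # the two edges at the root
heapq.heapify(boundary)

invaded = []
for _ in range(steps):
    w = heapq.heappop(boundary)        # invade the cheapest boundary edge
    invaded.append(w)
    heapq.heappush(boundary, random.random())
    heapq.heappush(boundary, random.random())

late_max = max(invaded[steps // 2:])   # largest weight invaded late on
print(late_max)                        # typically slightly above p_c = 1/2
```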
Under mild hypotheses (see section \ref{GeneralMarkovStructureSubsection}), the invasion cluster has a natural decomposition into \emph{ponds} and \emph{outlets}. Let $e_1\in{\cal{C}}$ be the edge whose weight $Q_1$ is the largest ever invaded. For $n>1$, $e_n$ is the edge in ${\cal{C}}$ whose weight $Q_n$ is the highest among edges invaded after $e_{n-1}$. We call $e_n$ the \emph{$n^\th$ outlet} and $Q_n$ the corresponding \emph{outlet weight}. Write $\hat{V}_n$ for the step at which $e_n$ was invaded, with $\hat{V}_0=0$. The \emph{$n^\th$ pond} is the subgraph of edges invaded at steps $i\in (\hat{V}_{n-1}, \hat{V}_n]$.
Suppose an edge $e$, with weight $p$, is first examined at step $i\in(\hat{V}_{n-1}, \hat{V}_n]$. (That is, $i$ is the first step at which $e$ is on the boundary of ${\cal{C}}_{i-1}$.) Then we have the following dichotomy: either
\begin{itemize}
\item
$e$ will be invaded as part of the $n^\th$ pond (if $p\leq Q_n$); or
\item
$e$ will never be invaded (if $p>Q_n$)
\end{itemize}
This implies that the ponds are connected subgraphs and touch each other only at the outlets. Moreover, the outlets are pivotal in the sense that any infinite non-intersecting path in ${\cal{C}}$ starting at $o$ must pass through every outlet. Consequently ${\cal{C}}$ is decomposed as an infinite chain of ponds, connected at the outlets.
In this paper we take ${\cal{G}}$ to be a regular tree and analyze the asymptotic behaviour of the ponds, the outlets and the outlet weights. This problem can be approached in two directions: by considering the ponds as a sequence and studying the growth properties of that sequence; or by considering a fixed pond and finding its asymptotics. We will see that the sequence of ponds grows exponentially, with exact exponential constants. For a fixed pond, its asymptotics correspond to those of ordinary percolation with a logarithmic correction.
These computations are based on representing ${\cal{C}}$ in terms of the outlet weights $Q_n$, as in \cite{AGdHS2008}. Conditional on $(Q_n)_{n=0}^\infty$, each pond is an independent percolation cluster with parameter related to $Q_n$. In particular, the fluctuations of the ponds are a combination of fluctuations in $Q_n$ and the additional randomness.
Surprisingly, in all but the large deviation sense, the asymptotic behaviour for the ponds is controlled by the outlet weights alone: the remaining randomness after conditioning only on $(Q_n)_{n=0}^\infty$ disappears in the limit, and the fluctuations are attributable solely to fluctuations of $Q_n$.
\subsection{Known results}
The terminology of ponds and outlets comes from the following description (see \cite{vdBJV2007}) of invasion percolation. Consider a random landscape where the edge weights represent the heights of channels between locations. Pour water into the landscape at $o$; then as more and more water is added, it will flow into neighbouring areas according to the invasion percolation mechanism. The water level at $o$, and throughout the first pond, will rise until it reaches the height of the first outlet. Once water flows over an outlet, however, it will flow into a new pond where the water will only ever rise to a lower height. Note that the water level in the $n^\th$ pond is the height (edge weight) of the $n^\th$ outlet.
The edge weights may also be interpreted as energy barriers for a random walker exploring a random energy landscape: see \cite{NewmanStein1995}. If the energy levels are highly separated, then (with high probability and until some large time horizon) the walker will visit the ponds in order, spending a long time in each pond before crossing the next outlet. In this interpretation the growth rate of the ponds determines the effect of entropy on this analysis. See the extended discussion in \cite{NewmanStein1995}.
Invasion percolation is also related to the incipient infinite cluster (IIC), at least in the cases ${\cal{G}}=\mathbb{Z}^2$ (\cite{Jarai2003}) and ${\cal{G}}$ a regular tree: see, e.g., \cite{Jarai2003}, \cite{AGdHS2008} and \cite{DS2009}. For a cylinder event $E$, the law of the IIC can be defined by
\begin{equation}
\label{IICdefinition}
\P_{\text{IIC}}(E)\overset{def}{=}\lim_{k\rightarrow\infty}\P_{p_c}(E\,\big\vert\, o\leftrightarrow\partial B(k))
\end{equation}
or by other limiting procedures, many of which can be proved to be equivalent to each other. Both the invasion cluster and the IIC consist of an infinite cluster that is ``almost critical'', in view of \eqref{limsupispcGeneral} or \eqref{IICdefinition} respectively. For ${\cal{G}}=\mathbb{Z}^2$ (\cite{Jarai2003}) and ${\cal{G}}$ a regular tree (\cite{AGdHS2008}), the IIC can be defined in terms of the invasion cluster: if $X_k$ denotes a vertex chosen uniformly from among the invaded vertices within distance $k$ of $o$, and $\tau_{X_k}E$ denotes the translation of $E$ when $o$ is sent to $X_k$, then
\begin{equation}
\P_{\text{IIC}}(E)=\lim_{k\rightarrow\infty} P(\tau_{{X_k}}E)
\end{equation}
Surprisingly, despite this local equivalence, the invasion cluster and the IIC are globally different: they are mutually singular and, at least on the regular tree, have different scaling limits, although they have the same scaling exponents.
The regular tree case, first considered in \cite{NickWilk1983}, was studied in great detail in \cite{AGdHS2008}. Any infinite non-intersecting path from $o$ must pass through every outlet; on a tree, this implies that there is a \emph{backbone}, the unique infinite non-intersecting path from $o$. In \cite{AGdHS2008} a description of the invasion cluster was given in terms of the \emph{forward maximal weight process}, the outlet weights indexed by height along the backbone (see section \ref{InvasionClusterStructureSubsection}). This parametrization in terms of the external geometry of the tree allowed the calculation of natural geometric quantities, such as the number of invaded edges within a ball. In the following, we will see that when information about the heights is discarded, the process of edge weights takes an even simpler form.
The detailed structural information in \cite{AGdHS2008} was used in \cite{AngelGMerle} to identify the scaling limit of the invasion cluster (again for the regular tree). Since the invasion cluster is a tree with a single infinite end, it can be encoded by its Lukasiewicz path or its height and contour functions. Within each pond, the scaling limit of the Lukasiewicz path is computed, and the different ponds are stitched together to provide the full scaling limit.
The two-dimensional case was also studied in a series of papers by van den Berg, Damron, J\'{a}rai, Sapozhnikov and V\'{a}gv\"{o}lgyi (\cite{vdBJV2007}, \cite{DSV2008} and \cite{DS2009}). There they study, among other things, the probability that the $n^\th$ pond extends a distance $k$ from $o$, for $n$ fixed. For $n=1$ this is asymptotically of the same order as the probability that a critical percolation cluster extends a distance $k$, and for $n>1$ there is a correction factor $(\log k)^{n-1}$. Furthermore an exponential growth bound for the ponds is given. The present work was motivated in part by the question of what the corresponding results would be for the tree. Quite remarkably, they are essentially the same, suggesting that a more general phenomenon may be involved.
In the results and proofs that follow, we shall see that the sequence of outlet weights plays a dominant role. Indeed, all of the results in Theorems \ref{SLLNtheorem}--\ref{QnLnAsymp} are proved first for $Q_n$, then extended to other pond quantities using conditional tail estimates. Consequently, all of the results can be understood as consequences of the growth mechanism for the sequence $Q_n$. On the regular tree, we are able to give an exact description of the sequence $Q_n$ in terms of a sum of independent random variables (see section \ref{RegularTreeCaseSubsection}). In more general graphs, this representation cannot be expected to hold exactly. However, the similarities between the pond behaviours, even on graphs as different as the tree and $\mathbb{Z}^2$, suggest that an approximate analogue may hold. Such a result would provide a unifying explanation for both the exponential pond growth and the asymptotics of a fixed pond, even on potentially quite general graphs.
\subsection{Summary of notation}
We will primarily consider the case where ${\cal{G}}$ is the forward regular tree of degree $\sigma$: namely, the tree in which the root $o$ has degree $\sigma$ and every other vertex has degree $\sigma+1$. The weight of the $i^\th$ invaded edge is $\xi_i$. The $n^\th$ outlet is $e_n$ and its edge weight is $Q_n$. We may naturally consider $e_n$ to be an oriented edge $e_n=(\underline{v}_n,\overline{v}_n)$, where $\underline{v}_n$ is invaded before $\overline{v}_n$. The step at which $e_n$ is invaded is denoted $\hat{V}_n$ and the (graph) distance from $o$ to $\overline{v}_n$ is $\hat{L}_n$. Setting $\hat{V}_0=\hat{L}_0=0$ for convenience, we write $V_n=\hat{V}_n-\hat{V}_{n-1}$ and $L_n=\hat{L}_n-\hat{L}_{n-1}$.
There is a natural geometric interpretation of $L_n$ as the length of the part of the backbone in the $n^\th$ pond, and $V_n$ as the volume (number of edges) of the $n^\th$ pond. In particular $\hat{V}_n$ is the volume of the union of the first $n$ ponds.
$R_n$ is the length of the longest upward-pointing path in the $n^\th$ pond, and $R'_n$ is the length of the longest upward-pointing path in the union of the first $n$ ponds.
We shall later work with the quantity $\delta_n$; for its definition, see \eqref{deltanDefinition}.
We note the following elementary relations:
\begin{gather}
\hat{L}_n=\sum_{i=1}^n L_i,\qquad
\hat{V}_n=\sum_{i=1}^n V_i,\\
Q_{n+1}<Q_n,\qquad
L_n\leq R_n\leq R'_n\leq \sum_{i=1}^n R_i.
\end{gather}
Probability laws will generically be denoted $\P$. For $p\in[0,1]$, $\P_p$ denotes the law of Bernoulli percolation with parameter $p$. For a set $A$ of vertices, the event $\set{x\leftrightarrow A}$ means that there is a path of open edges joining $x$ to some point of $A$, and $\set{x\leftrightarrow\infty}$ means that there is an infinite non-intersecting path of open edges starting at $x$. We define the percolation probability $\theta(p)=\P_p(o\leftrightarrow\infty)$ and $p_c=\inf\set{p: \theta(p)>0}$. $\partial B(k)$ denotes the vertices at distance exactly $k$ from $o$.
For non-zero functions $f(x)$ and $g(x)$, we write $f(x)\sim g(x)$ if $\lim \frac{f(x)}{g(x)}=1$; the point at which the limit is to be taken will usually be clear from the context. We write $f(x)\asymp g(x)$ if there are constants $c$ and $C$ such that $cg(x)\leq f(x)\leq Cg(x)$.
\section{Main results}
\subsection{Exponential growth of the ponds}
Let $\vec{Z}_n$ denote the 7-tuple
\begin{align}
\begin{split}
&
\vec{Z}_n=\Bigl(\log\!\left((Q_n-p_c)^{-1}\right)\! , \log L_n, \log \hat{L}_n, \\
&\qquad\qquad
\log R_n, \log R'_n, \tfrac{1}{2}\log V_n, \tfrac{1}{2}\log \hat{V}_n \Bigr)
\end{split}
\end{align}
and write $\mathbbm{1}=(1,1,1,1,1,1,1)$.
\begin{thm}
\label{SLLNtheorem}
With probability $1$,
\begin{equation}
\label{ZnSLLN}
\lim_{n\rightarrow\infty} \frac{\vec{Z}_n}{n}=\mathbbm{1}.
\end{equation}
\end{thm}
\begin{thm}
\label{CLTtheorem}
If $(B_t)_{t\geq 0}$ denotes a standard Brownian motion then
\begin{equation}
\label{ZnCLT}
\left(\frac{\vec{Z}_{\floor{Nt}}-Nt\cdot\mathbbm{1}}{\sqrt{N}}\right)_{t\geq 0} \Rightarrow (B_t\cdot\mathbbm{1})_{t\geq 0}
\end{equation}
as $N\rightarrow\infty$, with respect to the metric of uniform convergence on compact intervals of $t$.
\end{thm}
These theorems say that each component of $\vec{Z}$ satisfies a law of large numbers and functional central limit theorem, with the same limiting Brownian motion for each component.
Theorem \ref{CLTtheorem} shows that the logarithmic scaling in Theorem \ref{SLLNtheorem} cannot be replaced by a linear rescaling such as $e^n(Q_n-p_c)$. Indeed, $\log((Q_n-p_c)^{-1})$ has characteristic additive fluctuations of order $\pm\sqrt{n}$, and therefore $Q_n-p_c$ fluctuates by a multiplicative factor of the form $e^{\pm\sqrt{n}}$. As $n\rightarrow\infty$ this will be concentrated at $0$ and $\infty$, causing tightness to fail.
\begin{thm}
\label{LDtheorem}
$\frac{1}{n}\log\!\left((Q_n-p_c)^{-1}\right)$ satisfies a large deviation principle on $[0,\infty)$ with rate $n$ and rate function
\begin{equation}
\label{RateFunctionphi}
\phi(u)=u-\log u -1.
\end{equation}
$\frac{1}{n}\log L_n$, $\frac{1}{n}\log R_n$ and $\frac{1}{2n}\log V_n$ satisfy large deviation principles on $[0,\infty)$ with rate $n$ and rate function $\psi$, where
\begin{equation}
\psi(u)=
\begin{cases}
u-\log u -1&\text{if $u\geq \frac{1}{2}$,}\\
\log (2) - u&\text{if $u\leq \frac{1}{2}$.}
\end{cases}
\end{equation}
\end{thm}
It will be shown that $\psi$ arises as the solution of the variational problem
\begin{equation}
\psi(u)=
\inf_{v\geq u} \bigl(\phi(v)+v-u\bigr)
\end{equation}
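Indeed, granting the variational formula, the two cases of $\psi$ follow from elementary calculus: the objective $\phi(v)+v-u=2v-\log v-1-u$ is convex in $v$ with unconstrained minimum at $v=\tfrac{1}{2}$, so
\begin{equation*}
\inf_{v\geq u}\bigl(2v-\log v-1-u\bigr)
=
\begin{cases}
2u-\log u-1-u=u-\log u-1 &\text{if $u\geq \tfrac{1}{2}$ (the constraint binds),}\\
1-\log\tfrac{1}{2}-1-u=\log (2)-u &\text{if $u\leq \tfrac{1}{2}$ (minimum at $v=\tfrac{1}{2}$).}
\end{cases}
\end{equation*}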
\subsection{Tail behaviour of a pond}
Theorems \ref{SLLNtheorem}--\ref{LDtheorem} describe the growth of the ponds as a sequence. We now consider a fixed pond and study its tail behaviour.
\begin{thm}
\label{QnLnAsymp}
For $n$ fixed and $\epsilon\rightarrow 0^+$, $k\rightarrow\infty$,
\begin{align}
\label{QnTail}
\P\left(Q_n<p_c(1+\epsilon)\right)
&\sim
\frac{2\sigma}{\sigma-1}\frac{\epsilon \left(\log \epsilon^{-1}\right)^{n-1}}{(n-1)!}
\end{align}
and
\begin{align}
\label{LnTail}
\P\left(L_n>k\right) \sim \P\left(\hat{L}_n > k\right)
&\sim
\frac{2\sigma}{\sigma-1}\frac{(\log k)^{n-1}}{k(n-1)!}\\
\label{RnTail}
\P\left(R_n>k\right) \asymp \P\left(R'_n > k\right)
&\asymp
\frac{(\log k)^{n-1}}{k}\\
\label{VnTail}
\P\left(V_n>k\right) \asymp \P\left(\hat{V}_n > k\right)
&\asymp
\frac{(\log k)^{n-1}}{\sqrt{k}}
\end{align}
\end{thm}
Using the well-known asymptotics
\begin{align}
\theta(p)
&\sim
\frac{2\sigma^2}{\sigma-1}(p-p_c)
&&\text{as $p\rightarrow p_c^+$,}\\
\P_{p_c}(o\leftrightarrow\partial B(k))
&\sim
\frac{2\sigma}{(\sigma-1)k}
&&\text{as $k\rightarrow\infty$,}
\end{align}
we may rewrite \eqref{QnTail}--\eqref{VnTail} as
\begin{align}
\P\left(Q_n<p_c(1+\epsilon)\right)
&\sim
\frac{\left(\log \epsilon^{-1}\right)^{n-1}}{(n-1)!} \theta(p_c(1+\epsilon))\\
\P\left(L_n>k\right) \sim \P\left(\hat{L}_n > k\right)
&\sim
\frac{(\log k)^{n-1}}{(n-1)!}\P_{p_c}(o\leftrightarrow\partial B(k))\\
\P\left(R_n>k\right) \asymp \P\left(R'_n > k\right)
&\asymp
(\log k)^{n-1}\P_{p_c}(o\leftrightarrow\partial B(k))\\
\P\left(V_n>k\right) \asymp \P\left(\hat{V}_n > k\right)
&\asymp
(\log k)^{n-1}\P_{p_c}(\abs{C(o)}>k)
\end{align}
Working in the case ${\cal{G}}=\mathbb{Z}^2$, \cite{DSV2008} considers $\tilde{R}_n$, the maximum distance from $o$ to a point in the first $n$ ponds, which is essentially $R'_n$ in our notation. \cite[Theorem 1.5]{DSV2008} states that
\begin{equation}
\label{Z2Rasymp}
\P(\tilde{R}_n\geq k)\asymp(\log k)^{n-1}\P_{p_c}(o\leftrightarrow\partial B(k))
\end{equation}
and notes as a corollary
\begin{equation}
\label{Z2AsymptoPercWithDefects}
\P(\tilde{R}_n\geq k)\asymp\P_{p_c}(o\overset{n-1}{\longleftrightarrow} \partial B(k))
\end{equation}
where $\overset{i}{\leftrightarrow}$ denotes a percolation connection where up to $i$ edges are allowed to be vacant (``percolation with defects''). \eqref{Z2AsymptoPercWithDefects} suggests the somewhat plausible heuristic of approximating the union of the first $n$ ponds by the set of vertices reachable by critical percolation with at most $n-1$ defects. Indeed, the proof of \eqref{Z2Rasymp} uses in part a comparison to percolation with defects. By contrast, on the tree the following result holds:
\begin{thm}
\label{TreePercWithDefectsTheorem}
For fixed $n$ and $k\rightarrow\infty$,
\begin{equation}
\label{TreePercWithDefectsAsymp}
\P_{p_c}(o\overset{n}{\leftrightarrow}\partial B(k))\asymp k^{-2^{-n}}
\end{equation}
\end{thm}
The dramatic contrast between \eqref{Z2AsymptoPercWithDefects} and \eqref{TreePercWithDefectsAsymp} can be explained in terms of the number of large clusters in a box. In $\mathbb{Z}^2$, a box of side length $S$ has generically only one cluster of diameter of order $S$. On the tree, by contrast, there are many large clusters. Indeed, a cluster of size $N$ has of order $N$ edges on its outer boundary, any one of which might be adjacent to another large cluster, independently of every other edge. Percolation with defects allows the best boundary edge to be chosen, whereas invasion percolation is unlikely to invade the optimal edge.
\subsection{Outline of the paper}
Section \ref{GeneralMarkovStructureSubsection} states a Markov property for the outlet weights that is valid for any graph. From section \ref{InvasionClusterStructureSubsection} onwards, we specialize to the case where ${\cal{G}}$ is a regular tree. In section \ref{InvasionClusterStructureSubsection} we recall results from \cite{AGdHS2008} that describe the structure of the invasion cluster conditional on the outlet weights $Q_n$. Section \ref{RegularTreeCaseSubsection} analyzes the Markov transition mechanism of section \ref{GeneralMarkovStructureSubsection} and proves the results of Theorems \ref{SLLNtheorem}--\ref{LDtheorem} for $Q_n$.
Section \ref{PondTailBoundsSection} states conditional tail bounds for $L_n$, $R_n$ and $V_n$ given $Q_n$. The rest of sections \ref{LLNandCLTproofSection}--\ref{TailAsymptoticsSection} use these tail bounds to prove Theorems \ref{SLLNtheorem}--\ref{QnLnAsymp}. The proof of the bounds in section \ref{PondTailBoundsSection} is given in section \ref{PondTailBoundsProofSection}. Finally, section \ref{PercolationWithDefectsSection} gives the proof of Theorem \ref{TreePercWithDefectsTheorem}.
\section{Markov structure of invasion percolation}
In section \ref{GeneralMarkovStructureSubsection} we give sufficient conditions for the existence of ponds and outlets, and state a Markov property for the ponds, outlets and outlet weights. Section \ref{InvasionClusterStructureSubsection} summarizes some previous results from \cite{AGdHS2008} concerning the structure of the invasion cluster. Finally in section \ref{RegularTreeCaseSubsection} we analyze the resulting Markov chain in the special case where ${\cal{G}}$ is a regular tree and prove the results of Theorems \ref{SLLNtheorem}--\ref{LDtheorem} for $Q_n$.
\subsection{General graphs: ponds, outlets and outlet weights}
\label{GeneralMarkovStructureSubsection}
The representation of an invasion cluster in terms of ponds and outlets is guaranteed to be valid under the following two assumptions:
\begin{equation}
\label{NoCriticalPercolation}
\theta(p_c)=0
\end{equation}
and
\begin{equation}
\label{limsupispc}
\limsup_{i\rightarrow\infty} \xi_i=p_c\quad\text{a.s.}
\end{equation}
\eqref{NoCriticalPercolation} is known to hold for many graphs and is conjectured to hold for any transitive graph for which $p_c<1$ (\cite[Conjecture 4]{BenjaminiSchramm1996}; see also, for instance, \cite[section 8.3]{LyonsPeresProbOnTreesNets}). If the graph ${\cal{G}}$ is quasi-transitive, \eqref{limsupispc} follows from the general result \cite[Proposition 3.1]{HPS1999}. Both \eqref{NoCriticalPercolation} and \eqref{limsupispc} hold when ${\cal{G}}$ is a regular tree.
The assumption \eqref{NoCriticalPercolation} implies that with probability 1,
\begin{equation}
\label{xiAbovepcIO}
\sup_{i> i_0} \xi_i > p_c\quad\text{for all $i_0$}
\end{equation}
since otherwise there would exist somewhere an infinite percolation cluster at level $p_c$. We can then make the inductive definition
\begin{align}
Q_1&=\max_{i>0} \xi_i = \xi_{\hat{V}_1}\\
Q_n&=\max_{i>\hat{V}_{n-1}} \xi_i = \xi_{\hat{V}_n}\quad(n>1)
\end{align}
since \eqref{limsupispc} and \eqref{xiAbovepcIO} imply that the maxima are attained.
Condition on $Q_n$, $e_n$, and the union $\tilde{{\cal{C}}}_n$ of the first $n$ ponds. We may naturally consider $e_n$ to be an oriented edge $e_n=(\underline{v}_n,\overline{v}_n)$ where the vertex $\underline{v}_n$ was invaded before $\overline{v}_n$. The condition that $e_n$ is an outlet, with weight $Q_n$, implies that there must exist an infinite path of edges with weights at most $Q_n$, starting from $\overline{v}_n$ and remaining in ${\cal{G}}\backslash \tilde{{\cal{C}}}_n$. However, the law of the edge weights in ${\cal{G}}\backslash \tilde{{\cal{C}}}_n$ is not otherwise affected by $Q_n,e_n,\tilde{{\cal{C}}}_n$. In particular we have
\begin{equation}
\label{RecursiveLawofQn}
\condP{Q_{n+1}<q'}{Q_n,e_n,\tilde{{\cal{C}}}_n}
=\frac{\P_{q'}(\overline{v}_n\leftrightarrow\infty\text{ in ${\cal{G}}\backslash \tilde{{\cal{C}}}_n$})}{\P_{Q_n}(\overline{v}_n\leftrightarrow\infty\text{ in ${\cal{G}}\backslash \tilde{{\cal{C}}}_n$})}
\end{equation}
on the event $\set{q'\leq Q_n}$. In \eqref{RecursiveLawofQn} we can replace ${\cal{G}}\backslash \tilde{{\cal{C}}}_n$ by the connected component of ${\cal{G}}\backslash \tilde{{\cal{C}}}_n$ that contains $\overline{v}_n$.
\subsection{Geometric structure of the invasion cluster: the regular tree case}
\label{InvasionClusterStructureSubsection}
In \cite[section 3.1]{AGdHS2008}, the same outlet weights are studied, parametrized by height rather than by pond. $W_k$ is defined to be the maximum invaded edge weight above the vertex at height $k$ along the backbone.
A key point in the analysis in \cite{AGdHS2008} is the observation that $(W_k)_{k=0}^\infty$ is itself a Markov process. $W_k$ is constant for long stretches, corresponding to $k$ in the same pond, and the jumps of $W_k$ occur when an outlet is encountered. The relation between the two processes is given by
\begin{equation}
\label{WkQnRelation}
W_{k}=Q_n\quad\text{iff}\quad \hat{L}_{n-1} \leq k < \hat{L}_n
\end{equation}
From \eqref{WkQnRelation} we see that the $(Q_n)_{n=0}^\infty$ are the successive distinct values of $(W_k)_{k=0}^\infty$, and $L_n=\hat{L}_n-\hat{L}_{n-1}$ is the length of time the Markov chain $W_k$ spends in state $Q_n$ before jumping to state $Q_{n+1}$. In particular, $L_n$ is geometric conditional on $Q_n$, with some parameter depending only on $Q_n$. As we will refer to it often, we define $\delta_n$ to be that geometric parameter:
\begin{equation}
\label{deltanDefinition}
\condP{L_n>m\,}{Q_n}=(1-\delta_n)^m
\end{equation}
A further analysis (see \cite[section 2.1]{AGdHS2008}) shows that the off-backbone part of the $n^\th$ pond is a sub-critical Bernoulli percolation cluster with a parameter depending on $Q_n$, independently in each pond. We summarize these results in the following theorem.
\begin{thm}[\cite{AGdHS2008}, sections 2.1 and 3.1]
\label{InvasionStructureTheorem}
Conditional on $(Q_n)_{n=1}^\infty$, the $n^\th$ pond of the invasion cluster consists of
\begin{enumerate}
\item
$L_n$ edges from the infinite backbone, where $L_n$ is geometric with parameter $\delta_n$; and
\item
emerging along the $\sigma-1$ sibling edges of each backbone edge, independent sub-critical Bernoulli percolation clusters with parameter
\begin{equation}
\label{SubcriticalParameter}
p_c(1-\delta_n)
\end{equation}
\end{enumerate}
Given $(Q_n)_{n=0}^\infty$, the ponds are conditionally independent for different $n$. $\delta_n$ is a continuous, strictly increasing function of $Q_n$ and satisfies
\begin{equation}
\label{QndeltanRelation}
\delta_n\sim \frac{\sigma-1}{2\sigma}\theta(Q_n)\sim \sigma (Q_n-p_c)
\end{equation}
as $Q_n\rightarrow p_c^+$.
\end{thm}
The meaning of \eqref{QndeltanRelation} is that $\delta_n=f(Q_n)$ where $f(q)\sim \frac{\sigma-1}{2\sigma}\theta(q)\sim \sigma(q-p_c)$ as $q\rightarrow p_c^+$.
It is not at first apparent that the geometric parameter $\delta_n$ in \eqref{deltanDefinition} is the same quantity that appears in \eqref{SubcriticalParameter}, and indeed \cite{AGdHS2008} has two different notations for the two quantities: see \cite[equations (3.1) and (2.14)]{AGdHS2008}. Combining equations (2.3), (2.5), (2.14) and (3.1) of \cite{AGdHS2008} shows that they are equivalent to
\begin{equation}
\delta_n=1-\sigma Q_n(1-Q_n\theta(Q_n))^{\sigma-1}
\end{equation}
For $\sigma=2$ we can find explicit formulas for these parameters: $p_c=\frac{1}{2}$, $\theta(p)=p^{-2}(2p-1)$ for $p\geq p_c$, $\delta_n=2Q_n-1$ and $p_c(1-\delta_n)=1-Q_n$. However, all the information needed for our purposes is contained in the asymptotic relation \eqref{QndeltanRelation}.
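These closed forms are easy to sanity-check numerically. The following Python snippet (our illustration) verifies that $\theta(p)=p^{-2}(2p-1)$ satisfies the branching-process fixed-point equation $\theta=1-(1-p\theta)^{\sigma}$ for the forward binary tree (not displayed in the text), and that the general formula for $\delta_n$ reduces to $2Q_n-1$ when $\sigma=2$.

```python
# Sanity checks for the sigma = 2 formulas: p_c = 1/2,
# theta(p) = (2p - 1) / p**2, delta = 2Q - 1, p_c(1 - delta) = 1 - Q.

def theta(p):
    return (2 * p - 1) / p**2

for p in [0.51, 0.6, 0.75, 0.9, 1.0]:
    # Branching-process fixed point: theta = 1 - (1 - p*theta)^2.
    assert abs(theta(p) - (1 - (1 - p * theta(p)) ** 2)) < 1e-12

for Q in [0.51, 0.6, 0.75, 0.9]:
    delta = 1 - 2 * Q * (1 - Q * theta(Q))  # general formula at sigma = 2
    assert abs(delta - (2 * Q - 1)) < 1e-12          # matches 2Q - 1
    assert abs(0.5 * (1 - delta) - (1 - Q)) < 1e-12  # subcritical parameter
print("all sigma = 2 identities check out")
```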
\subsection{The outlet weight process}
\label{RegularTreeCaseSubsection}
The representation \eqref{RecursiveLawofQn} simplifies dramatically when ${\cal{G}}$ is a regular tree. Then the connected component of ${\cal{G}}\backslash \tilde{{\cal{C}}}_n$ containing $\overline{v}_n$ is isomorphic to ${\cal{G}}$, with $\overline{v}_n$ corresponding to the root. Therefore the dependence of $Q_{n+1}$ on $e_n$ and $\tilde{{\cal{C}}}_n$ is eliminated and we have the following result.
\begin{coro}
On the regular tree, $(Q_n)_{n=1}^\infty$ is a time-homogeneous Markov chain with
\begin{equation}
\label{Q1cdf}
\P(Q_1<q)=\theta(q)
\end{equation}
and
\begin{equation}
\label{QnextFromQn}
\condP{Q_{n+1}<q'}{Q_n=q}=\frac{\theta(q')}{\theta(q)}
\end{equation}
for $p_c<q'<q$.
\end{coro}
Equations \eqref{Q1cdf} and \eqref{QnextFromQn} say that, conditional on $Q_n$, $Q_{n+1}$ is chosen from the same distribution, conditioned to be smaller. In terms of $(W_k)_{k=0}^\infty$, \eqref{QnextFromQn} describes the jumps of $W_k$ when they occur, and indeed the transition mechanism \eqref{QnextFromQn} is implicit in \cite{AGdHS2008}.
Since $\theta$ is a continuous function, it is simpler to consider $\theta(Q_n)$: $\theta(Q_1)$ is uniform on $[0,1]$ and
\begin{equation}
\condP{\theta(Q_{n+1})<u'}{\theta(Q_n)=u}=\frac{u'}{u}
\end{equation}
for $0<u'<u$. But this is equivalent to multiplying $\theta(Q_n)$ by an independent Uniform$[0,1]$ variable. Noting further that the negative logarithm of a Uniform$[0,1]$ variable is an exponential variable with mean 1, we have proved the following proposition.
\begin{prop}
Let $U_i$, $i\in\mathbb{N}$, be independent Uniform$[0,1]$ random variables. Then, as processes,
\begin{equation}
\Bigl(\theta(Q_n)\Bigr)_{n=1}^\infty \overset{d}{=} \left(\prod_{i=1}^n U_i\right)_{n=1}^\infty
\end{equation}
Equivalently, with $E_i=\log U_i^{-1}$ independent exponentials of mean 1,
\begin{equation}
\label{logthetaQnRWrep}
\log\!\left(\theta(Q_n)^{-1}\right)\overset{d}{=}\sum_{i=1}^n E_i
\end{equation}
jointly for all $n$.
\end{prop}
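The representation can be illustrated by a short Monte Carlo check (a sketch, not part of the proof): simulating the multiplicative chain $\theta(Q_{n+1})=\theta(Q_n)\cdot U$ and confirming that $\log(\theta(Q_n)^{-1})$ has the Gamma$(n,1)$ moments.

```python
import math
import random

# Sketch: the chain theta(Q_{n+1}) = theta(Q_n) * U, U ~ Uniform[0,1],
# makes log(theta(Q_n)^{-1}) a sum of n mean-1 exponentials, i.e. Gamma(n,1).
# We check the first two moments by simulation.
random.seed(0)
n, trials = 5, 200_000
total = total_sq = 0.0
for _ in range(trials):
    theta = 1.0
    for _ in range(n):            # theta(Q_1) uniform, then repeated thinning
        theta *= random.random()
    x = -math.log(theta)          # = log(theta(Q_n)^{-1})
    total += x
    total_sq += x * x

mean = total / trials
var = total_sq / trials - mean ** 2
assert abs(mean - n) < 0.05       # Gamma(n,1) has mean n
assert abs(var - n) < 0.2         # Gamma(n,1) has variance n
```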
\begin{coro}
\label{QndeltanetanCorollary}
The triple
\begin{equation}
\vec{Z}'_n=\left(\log\!\left(\theta(Q_n)^{-1}\right)\! ,\log\left((Q_n-p_c)^{-1}\right)\! ,\log\delta_n^{-1}\right)
\end{equation}
satisfies the conclusions of Theorems \ref{SLLNtheorem} and \ref{CLTtheorem}, and each component of $\frac{1}{n}\vec{Z}'_n$ satisfies a large deviation principle with rate $n$ and rate function
\begin{equation}
\phi(u)=u-\log u -1.
\end{equation}
\end{coro}
\begin{proof}
The conclusions about $\log\left(\theta(Q_n)^{-1}\right)$ follow from the representation \eqref{logthetaQnRWrep} in terms of a sum of independent variables; the rate function $\phi$ is given by Cram\'{e}r's theorem. The other results then follow from the asymptotic relation \eqref{QndeltanRelation}.
\end{proof}
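The rate function $\phi(u)=u-\log u-1$ can be checked numerically against Cram\'{e}r's theorem (an illustration under the standard facts that a mean-1 exponential has cumulant generating function $\Lambda(\lambda)=-\log(1-\lambda)$ for $\lambda<1$, and $\phi$ is its Legendre transform):

```python
import math

# Sketch: verify phi(u) = u - log(u) - 1 is the Legendre transform
# sup_{lambda < 1} (lambda*u - Lambda(lambda)) with Lambda(l) = -log(1 - l),
# by maximizing over a fine grid of lambda values.
def phi(u):
    return u - math.log(u) - 1.0

for u in (0.3, 0.5, 1.0, 2.0, 5.0):
    sup = max(l * u + math.log(1.0 - l)
              for l in (i / 10000.0 for i in range(-40000, 9999)))
    # the maximizer is lambda* = 1 - 1/u, well inside the grid range
    assert abs(sup - phi(u)) < 1e-3
```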
\section{Law of large numbers and central limit theorem}
\label{LLNandCLTproofSection}
\subsection{Tail bounds for pond statistics}
\label{PondTailBoundsSection}
Theorem \ref{InvasionStructureTheorem} expressed $L_n$, $R_n$ and $V_n$ as random variables whose parameters are given in terms of $Q_n$. Their fluctuations are therefore a combination of fluctuations arising from $Q_n$ and additional randomness. The following proposition gives bounds on the additional randomness.
Recall that $\delta_n$ is a certain function of $Q_n$ with $\delta_n\sim\sigma(Q_n-p_c)$: see Theorem \ref{InvasionStructureTheorem}.
\begin{prop}
\label{LnRnVnBoundProp}
There exist positive constants $C,c,s_0,\gamma_L,\gamma_R,\gamma_V$ such that $L_n$, $R_n$ and $V_n$ satisfy the conditional bounds
\begin{align}
\label{LTailUpperBounds}
\condP{\delta_n L_n\geq S}{\delta}
&\leq Ce^{-cS}
&\condP{\delta_n L_n\leq s}{\delta}
&\leq Cs\\
\label{RTailUpperBounds}
\condP{\delta_n R_n\geq S}{\delta}
&\leq Ce^{-cS}
&\condP{\delta_n R_n\leq s}{\delta}
&\leq Cs\\
\label{VTailUpperBounds}
\condP{\delta_n^2 V_n\geq S}{\delta}
&\leq Ce^{-cS}
&\condP{\delta_n^2 V_n\leq s}{\delta}
&\leq C\sqrt{s}
\end{align}
for all $n$ and all $S,s>0$; and
\begin{align}
\label{LTailLowerBound}
\condP{\delta_n L_n\leq s}{\delta}
&\geq cs
&&\text{on $\set{\delta_n\leq \gamma_L s}$}\\
\label{RTailLowerBound}
\condP{\delta_n R_n\leq s}{\delta}
&\geq cs
&&\text{on $\set{\delta_n\leq \gamma_R s}$}\\
\label{VTailLowerBound}
\condP{\delta_n^2 V_n\leq s}{\delta}
&\geq c\sqrt{s}
&&\text{on $\set{\delta_n^2\leq \gamma_V s}$}
\end{align}
for $s\leq s_0$.
\end{prop}
The proofs of \eqref{LTailUpperBounds}--\eqref{VTailLowerBound}, which involve random walk and branching process estimates, are deferred to section \ref{PondTailBoundsProofSection}.
\subsection{A uniform convergence lemma}\label{UniformConvergenceSection}
Because Theorem \ref{CLTtheorem} involves weak convergence of several processes to the same joint limit, it will be convenient to use Skorohod's representation theorem and almost sure convergence. The following uniform convergence result will be used to extend convergence from one set of coupled random variables ($\delta_{n,N}$) to another ($X_{n,N}$): see section \ref{SLLNCLTproof}.
\begin{lemma}
\label{XnNUniformLimitLemma}
Suppose $\set{X_{n,N}}_{n,N\in\mathbb{N}}$ and $\set{\delta_{n,N}}_{n,N\in\mathbb{N}}$ are positive random variables such that $\delta_{n,N}$ is decreasing in $n$ for each fixed $N$, and for positive constants $a$, $\beta$ and $C$,
\begin{align}
\label{XnNUpperTail}
\P(\delta_{n,N}^a X_{n,N} > S) &\leq CS^{-\beta}\\
\label{XnNLowerTail}
\P(\delta_{n,N}^a X_{n,N} < s) &\leq Cs^\beta
\end{align}
for all $S$ and $s$. Define
\begin{equation}
\hat{X}_{n,N}=\sum_{i=1}^n X_{i,N}.
\end{equation}
Then for any $T>0$ and $\alpha>0$, w.p. 1,
\begin{equation}
\label{XnNUniformLimit}
\lim_{N\rightarrow\infty}\max_{1\leq n\leq N T} \frac{\log (\delta_{n,N}^a X_{n,N})}{N^\alpha}=\lim_{N\rightarrow\infty}\max_{1\leq n\leq N T} \frac{\log (\delta_{n,N}^a \hat{X}_{n,N})}{N^\alpha}=0.
\end{equation}
\end{lemma}
\begin{proof}
Let $\epsilon>0$ be given. For a fixed $N$, \eqref{XnNUpperTail} implies
\begin{align}
\P\left(\max_{1\leq n\leq NT} \frac{\log (\delta_{n,N}^a\hat{X}_{n,N})}{N^\alpha} >\epsilon\right)
&\leq
\sum_{1\leq n\leq NT}\P\left(\delta_{n,N}^a \hat{X}_{n,N} > e^{N^\alpha \epsilon}\right)\notag\\
&\leq
\sum_{1\leq n\leq NT}\sum_{i=1}^n \P\left(\delta_{n,N}^a X_{i,N}>\frac{e^{N^\alpha \epsilon}}{n}\right)\notag\\
&\leq
\sum_{1\leq n\leq NT}\sum_{i=1}^n \P\left(\delta_{i,N}^a X_{i,N}>\frac{e^{N^\alpha \epsilon}}{n}\right)\notag\\
&\leq
\sum_{1\leq n\leq NT}\sum_{i=1}^n C n^\beta e^{-\beta N^\alpha \epsilon}\notag\\
&\leq
(NT)^{2+\beta}Ce^{-\beta N^\alpha \epsilon}
\end{align}
where we used $\delta_{i,N}\geq\delta_{n,N}$ in the third inequality. But then since $\sum_{N=1}^\infty (NT)^{2+\beta}Ce^{-\beta N^\alpha \epsilon} <\infty$, the Borel-Cantelli lemma implies
\begin{equation}
\label{hatXnNLimsupBound}
\limsup_{N\rightarrow\infty} \max_{1\leq n\leq N T} \frac{\log (\delta_{n,N}^a \hat{X}_{n,N})}{N^\alpha} \leq \epsilon
\end{equation}
a.s. Similarly, \eqref{XnNLowerTail} implies
\begin{align}
\P\left(\max_{1\leq n\leq NT} \frac{\log(\delta_{n,N}^a X_{n,N})}{N^\alpha} < -\epsilon\right)
&\leq
\sum_{1\leq n\leq NT} \P\left(\delta_{n,N}^a X_{n,N} < e^{-N^\alpha \epsilon}\right)\notag\\
&\leq NTCe^{-\beta N^\alpha \epsilon}
\end{align}
so that
\begin{equation}
\label{XnNLiminfBound}
\liminf_{N\rightarrow\infty} \max_{1\leq n\leq N T} \frac{\log (\delta_{n,N}^a X_{n,N})}{N^\alpha} \geq -\epsilon
\end{equation}
a.s. Since $\epsilon>0$ was arbitrary and $X_{n,N}\leq \hat{X}_{n,N}$, \eqref{XnNUniformLimit} follows.
\end{proof}
\subsection{Proof of Theorems \ref{SLLNtheorem}--\ref{CLTtheorem}}
\label{SLLNCLTproof}
The conclusions about $Q_n$ are contained in Corollary \ref{QndeltanetanCorollary}. The other conclusions will follow from Lemma \ref{XnNUniformLimitLemma}. From Corollary \ref{QndeltanetanCorollary}, we may apply Skorohod's representation theorem to produce realizations of the ponds for each $N\in\mathbb{N}$, coupled so that
\begin{equation}
\label{deltanNCoupling}
\left(\frac{\log(\delta_{\floor{Nt},N}^{-1})-Nt}{\sqrt{N}}\right)_{0\leq t\leq T} \rightarrow (B_t)_{0\leq t\leq T}
\end{equation}
a.s. as $N\rightarrow\infty$. Then the relation
\begin{equation}
\frac{\frac{1}{a}\log X_{\floor{Nt},N}-Nt}{N^{1/2}}=\frac{\log\left(\delta_{\floor{Nt},N}^{-1}\right)-Nt}{N^{1/2}}+\frac{\log\left(\delta_{\floor{Nt},N}^a X_{\floor{Nt},N}\right)}{a N^{1/2}}
\end{equation}
shows that $\frac{1}{a}\log X_n$ will satisfy a central limit theorem as well. The same holds for $\hat{X}_n$. We will successively set
\begin{align}
X_{n,N}
&=
L_{n,N},&&\text{with $a=1$,}\\
X_{n,N}
&=
R_{n,N},&&\text{with $a=1$,}\\
X_{n,N}
&
=V_{n,N},&&\text{with $a=2$.}
\end{align}
The bounds \eqref{XnNUpperTail}--\eqref{XnNLowerTail} follow immediately from the bounds in Proposition \ref{LnRnVnBoundProp}. This proves Theorem \ref{CLTtheorem} for $L_n$ and $V_n$. For $R_n$, the quantity $\hat{R}_n$ is not the one that appears in Theorem \ref{CLTtheorem}, but the bound $R_n\leq R'_n\leq \hat{R}_n$ implies the result for $R'_n$ as well.
The lemma also implies the law of large numbers results \eqref{ZnSLLN}, by taking $T=1$ and using the same ponds for every $N$.
\section{Large deviations: proof of Theorem \ref{LDtheorem}}
In this section we present a proof of the large deviation results in Theorem \ref{LDtheorem}. As in section \ref{LLNandCLTproofSection}, we prove a generic result using a variable $X_n$ and tail estimates. Theorem \ref{LDtheorem} then follows immediately using Corollary \ref{QndeltanetanCorollary} and Proposition \ref{LnRnVnBoundProp}.
Note that Proposition \ref{LDProp} uses the full strength of the bounds in Proposition \ref{LnRnVnBoundProp}.
\begin{prop}
\label{LDProp}
Suppose that $\delta_n$ and $X_n$ are positive random variables such that, for positive constants $a,\beta,c,C,\gamma,s_0$,
\begin{align}
\label{XnStrongerUpperTail}
\condP{\delta_n^a X_n > S}{\delta_n} &\leq Ce^{-cS^\beta}\\
\label{XnLowerTailUpperBound}
\condP{\delta_n^a X_n < s}{\delta_n} &\leq Cs^{1/a}\\
\intertext{for all $S$ and $s$, and}
\label{XnLowerTailLowerBound}
\condP{\delta_n^a X_n < s}{\delta_n} &\geq cs^{1/a}
\end{align}
on the event $\set{\delta_n^a<\gamma s}$, for $s\leq s_0$. Suppose also that $\frac{1}{n}\log \delta_n^{-1}$ satisfies a large deviation principle with rate $n$ on $[0,\infty)$ with rate function $\phi$ such that $\phi(1)=0$, $\phi$ is continuous on $(0,\infty)$, and $\phi$ is decreasing on $(0,1]$ and increasing on $[1,\infty)$. Then $\frac{1}{an}\log X_n$ satisfies a large deviation principle with rate $n$ on $[0,\infty)$ with rate function
\begin{equation}
\label{ModifiedRateFunction}
\psi(u)=
\inf_{v\geq u} \bigl(\phi(v)+v-u\bigr)
\end{equation}
\end{prop}
\begin{proof}
It is easy to check that $\psi$ is continuous, decreasing on $[0,1]$ and increasing on $[1,\infty)$, $\psi(1)=0$, and $\psi(u)=\phi(u)$ for $u\geq 1$. So it suffices to show that
\begin{equation}
\label{XnLDBoundAbove}
\lim_{n\rightarrow\infty} \frac{1}{n}\log \P\left(\frac{1}{an}\log X_n>u\right) =-\inf_{v>u}\phi(v)
\end{equation}
for $u>0$ and
\begin{equation}
\label{XnLDBoundBelow}
\lim_{n\rightarrow\infty} \frac{1}{n}\log \P\left(\frac{1}{an}\log X_n<u\right) =-\psi(u)
\end{equation}
for $0<u<1$. For \eqref{XnLDBoundAbove}, let $\epsilon>0$. Then
\begin{align}
\P\left(\frac{1}{an}\log X_n>u\right)
&\leq
\P\left(\frac{1}{n}\log\delta_n^{-1}>u-\epsilon\right)
\notag\\
&\quad
+\P\left(\frac{1}{n}\log\delta_n^{-1}\leq u-\epsilon, \frac{1}{an}\log X_n>u\right)\notag\\
&\leq
\P\left(\frac{1}{n}\log\delta_n^{-1}>u-\epsilon\right)
+\P\left(\frac{1}{an}\log(\delta_n^a X_n)>\epsilon\right)\notag\\
\label{LDUpperTailBoundAbove}
&\leq
\P\left(\frac{1}{n}\log\delta_n^{-1}>u-\epsilon\right)+Ce^{-ce^{\beta an\epsilon}}
\end{align}
where we used \eqref{XnStrongerUpperTail} with $S=e^{an\epsilon}$. The last term in \eqref{LDUpperTailBoundAbove} is super-exponentially small, so \eqref{LDUpperTailBoundAbove} and the large deviation principle for $\frac{1}{n}\log\delta_n^{-1}$ imply
\begin{equation}
\limsup_{n\rightarrow\infty}\frac{1}{n}\log\P\left(\frac{1}{an}\log X_n>u\right)\leq -\inf_{v>u-\epsilon}\phi(v).
\end{equation}
On the other hand,
\begin{align}
\P\left(\frac{1}{an}\log X_n>u\right)
&\geq
\P\left(\frac{1}{n}\log\delta_n^{-1}>u+\epsilon, \frac{1}{an}\log (\delta_n^a X_n) > -\epsilon\right)\notag\\
&\geq \P\left(\frac{1}{n}\log\delta_n^{-1}>u+\epsilon\right)(1-Ce^{-n\epsilon})
\end{align}
using \eqref{XnLowerTailUpperBound} with $s=e^{-an\epsilon}$. So
\begin{equation}
\liminf_{n\rightarrow\infty}\frac{1}{n}\log\P\left(\frac{1}{an}\log X_n>u\right)\geq -\inf_{v>u+\epsilon} \phi(v).
\end{equation}
Since $\phi$ is continuous and $\epsilon>0$ was arbitrary, this proves \eqref{XnLDBoundAbove}.
For \eqref{XnLDBoundBelow}, let $u\in(0,1)$ be given and choose $v\in(u,1)$, $\epsilon\in(0,u)$. Then for $n$ sufficiently large we have
\begin{align}
\P\left(\frac{1}{an}\log X_n <u\right)
&\geq
\P\left(v-\epsilon<\frac{1}{n}\log\delta_n^{-1} <v, \frac{1}{an}\log (\delta_n^a X_n)<u-v\right)\notag\\
&\geq
\P\left(v-\epsilon<\frac{1}{n}\log\delta_n^{-1}<v\right)ce^{-n(v-u)}
\end{align}
Here we used \eqref{XnLowerTailLowerBound} with $s=e^{-an(v-u)}$. Note that if $n$ is large enough then $s\leq s_0$ and the condition $\delta_n^a<\gamma s$ follows from $v-\epsilon<\frac{1}{n}\log\delta_n^{-1}$. Therefore, since $\phi$ is decreasing on $(0,1]$,
\begin{align}
\liminf_{n\rightarrow\infty} \frac{1}{n}\log\P\left(\frac{1}{an}\log X_n <u\right)
&\geq
-\left(\inf_{v-\epsilon<w<v}\phi(w)\right)-(v-u)\notag\\
\label{LDLowerTailBoundBelow}
&=
-\phi(v)-(v-u)
\end{align}
The bound \eqref{LDLowerTailBoundBelow} was proved for $u<v<1$. However, since $\phi$ is continuous and the function $-\phi(v)-v$ is decreasing in $v$ for $v\geq 1$, \eqref{LDLowerTailBoundBelow} holds for all $v\geq u$. So take the supremum over $v\geq u$ to obtain
\begin{equation}
\liminf_{n\rightarrow\infty} \frac{1}{n}\log\P\left(\frac{1}{an}\log X_n <u\right) \geq -\psi(u).
\end{equation}
Finally
\begin{align}
&\P\left(\frac{1}{an}\log X_n <u\right)\notag\\
&\quad\leq
\P\left(\frac{1}{n}\log\delta_n^{-1}\leq u\right)
+\mathbb{E}\left(1\left(\frac{1}{n}\log\delta_n^{-1}>u\right)\condP{\frac{1}{an}\log X_n<u}{\delta_n}\right)\notag\\
&\quad=
\P\left(\frac{1}{n}\log\delta_n^{-1}\leq u\right)\notag\\
&\qquad
+\mathbb{E}\left(1\left(\frac{1}{n}\log\delta_n^{-1}>u\right) \condP{\frac{1}{an}\log (\delta_n^a X_n)<u-\frac{1}{n}\log\delta_n^{-1}}{\delta_n}\right)\notag\\
\label{LDLowerTailUpperBound}
&\quad\leq
\P\left(\frac{1}{n}\log\delta_n^{-1}\leq u\right) +\mathbb{E}\left(1\left(\frac{1}{n}\log\delta_n^{-1}>u\right)Ce^{n\left(u-\frac{1}{n}\log\delta_n^{-1}\right)}\right)
\end{align}
(using \eqref{XnLowerTailUpperBound} with $s=e^{an\left(u-\frac{1}{n}\log\delta_n^{-1}\right)}$). Apply Varadhan's lemma (see, e.g., \cite[p. 32]{dH2000}) to the second term of \eqref{LDLowerTailUpperBound}:
\begin{equation}
\begin{split}
\lim_{n\rightarrow\infty}\frac{1}{n}\log\mathbb{E}\left(1\left(\frac{1}{n}\log\delta_n^{-1}>u\right)Ce^{n\left(u-\frac{1}{n}\log\delta_n^{-1}\right)}\right)\\
\qquad=\sup_{v>u}(u-v-\phi(v))=-\psi(u)
\end{split}
\end{equation}
Therefore
\begin{equation}
\limsup_{n\rightarrow\infty}\frac{1}{n}\log\P\left(\frac{1}{an}\log X_n <u\right) \leq \max\set{-\phi(u),-\psi(u)} = -\psi(u)
\end{equation}
which completes the proof.
\end{proof}
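For the particular rate function $\phi(u)=u-\log u-1$ of Corollary \ref{QndeltanetanCorollary}, the modified rate function $\psi$ can be evaluated numerically (a sketch; the closed form $\psi(u)=\log 2-u$ for $u\leq 1/2$ is easily checked by calculus):

```python
import math

# Sketch under the assumption phi(u) = u - log(u) - 1: evaluate
# psi(u) = inf_{v >= u} (phi(v) + v - u) on a grid and check the
# properties used in the proof: psi(1) = 0, psi = phi on [1, inf),
# and psi is decreasing on (0, 1].
def phi(v):
    return v - math.log(v) - 1.0

def psi(u, vmax=10.0, step=1e-4):
    grid = (u + step * i for i in range(int((vmax - u) / step) + 1))
    return min(phi(v) + v - u for v in grid)

assert abs(psi(1.0)) < 1e-8                    # psi(1) = 0
assert abs(psi(2.0) - phi(2.0)) < 1e-8         # psi = phi for u >= 1
assert psi(0.2) > psi(0.5) > psi(0.9) > 0.0    # decreasing on (0, 1]
# closed form for this phi: psi(u) = log(2) - u when u <= 1/2
assert abs(psi(0.2) - (math.log(2.0) - 0.2)) < 1e-6
```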
\section{Tail asymptotics}\label{TailAsymptoticsSection}
In this section we prove the fixed-pond asymptotics from Theorem \ref{QnLnAsymp}.
\begin{proof}[Proof of \eqref{QnTail}]
Recall from \eqref{logthetaQnRWrep} that $\log\!\left(\theta(Q_n)^{-1}\right)$ has the same distribution as a sum of $n$ independent exponential variables with mean 1, i.e., a Gamma variable with parameters $(n,1)$. So
\begin{align}
\P(\theta(Q_n)<\epsilon)
&=
\P\left(\log\!\left(\theta(Q_n)^{-1}\right)>\log \epsilon^{-1}\right)\notag\\
&=
\int_{\log\epsilon^{-1}}^\infty \frac{x^{n-1}}{(n-1)!} e^{-x}\d{x}
\end{align}
Make the substitution $x=(1+u)\log\epsilon^{-1}$:
\begin{align}
\P(\theta(Q_n)<\epsilon)
&=
\frac{\epsilon\left(\log\epsilon^{-1}\right)^n}{(n-1)!} \int_0^\infty (1+u)^{n-1} e^{-u\log\epsilon^{-1}}\d{u}
\end{align}
Then Watson's lemma (see for instance (2.13) of \cite{MurrayAsymp1984}) implies that
\begin{equation}
\label{thetaQnAsymptotics}
\P(\theta(Q_n)<\epsilon)\sim\frac{\epsilon\left(\log\epsilon^{-1}\right)^{n-1}}{(n-1)!}
\end{equation}
and so \eqref{QndeltanRelation} gives
\begin{equation}
\P(Q_n<p_c(1+\epsilon))\sim\frac{2\sigma\epsilon\left(\log\epsilon^{-1}\right)^{n-1}}{(\sigma-1)(n-1)!}
\end{equation}
\end{proof}
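Since $n$ is an integer, the Gamma$(n,1)$ tail has the closed Erlang form $e^{-t}\sum_{k<n}t^k/k!$, so the quality of the asymptotic \eqref{thetaQnAsymptotics} can be inspected numerically (an illustration, not part of the proof):

```python
import math

# Sketch: compare the exact Erlang tail P(Gamma(n,1) > log(1/eps)),
# which equals P(theta(Q_n) < eps), with the asymptotic
# eps * (log(1/eps))**(n-1) / (n-1)! as eps -> 0.
def theta_tail_exact(n, eps):
    t = math.log(1.0 / eps)
    return math.exp(-t) * sum(t ** k / math.factorial(k) for k in range(n))

def theta_tail_asymp(n, eps):
    t = math.log(1.0 / eps)
    return eps * t ** (n - 1) / math.factorial(n - 1)

n = 4
ratios = [theta_tail_exact(n, e) / theta_tail_asymp(n, e)
          for e in (1e-3, 1e-6, 1e-9, 1e-12)]
# the ratio decreases monotonically toward 1 as eps -> 0
assert all(r1 > r2 > 1.0 for r1, r2 in zip(ratios, ratios[1:]))
```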
Combining \eqref{thetaQnAsymptotics} with \eqref{QndeltanRelation} implies at once that
\begin{equation}
\label{deltanTail}
\P\left(\delta_n<\epsilon\right)\asymp \epsilon(\log \epsilon^{-1})^{n-1}
\end{equation}
We use \eqref{deltanTail} to prove \eqref{RnTail}--\eqref{VnTail} using the following lemma.
\begin{lemma}
\label{XnTailLemma}
Let $\delta_n$ be a random variable satisfying \eqref{deltanTail}. Suppose $a,\beta$ are positive constants such that $a\beta>1$, and $X_n$ is any positive random variable satisfying
\begin{equation}
\label{XnAsympUpperTail}
\condP{\delta_n^a X_n>S}{\delta}\leq C S^{-\beta}
\end{equation}
for all $S,n>0$, and
\begin{equation}
\label{XnWeakLowerTail}
\condP{\delta_n^a X_n>s_0}{\delta}\geq p_0
\end{equation}
for some $s_0,p_0>0$. Write $\hat{X}_n=\sum_{i=1}^n X_i$. Then
\begin{equation}
\label{XnAsymp}
\P(X_n>k)\asymp\P\left(\hat{X}_n>k\right)\asymp\frac{(\log k)^{n-1}}{k^{1/a}}
\end{equation}
as $k\rightarrow\infty$.
\end{lemma}
\begin{proof}
From \eqref{XnWeakLowerTail} and \eqref{deltanTail},
\begin{align}
\P(X_n>k)
&\geq
\condP{\delta_n^a X_n>s_0 \Big.}{\delta_n<s_0 k^{-1/a}}\P(\delta_n<s_0 k^{-1/a})\notag\\
&\geq
\addtocounter{constantsubscript}{1}c_{\arabic{constantsubscript}}\frac{(\log k)^{n-1}}{k^{1/a}}
\end{align}
For the upper bound we condition on the Gamma random variable $\log\theta(Q_n)^{-1}$. From \eqref{QndeltanRelation} we have $\delta_n\geq \namednumberedconstant{deltathetaconstant}\theta(Q_n)$ for some constant $\previousconstant{deltathetaconstant}>0$, so that
\begin{align}
\P(X_n>k)
&=
\mathbb{E}\left(\condP{\delta_n^a X_n>k\delta_n^a}{\delta_n}\right)\notag\\
&\leq
\mathbb{E}\left(1\wedge C(k\delta_n^a)^{-\beta}\right)\notag\\
&\leq
\addtocounter{constantsubscript}{1}C_{\arabic{constantsubscript}}\mathbb{E}\left(1\wedge \left(k\theta(Q_n)^a\right)^{-\beta}\right)\notag\\ \label{XnTailAsympUnsubbedIntegral}
&=
\addtocounter{constantsubscript}{1}C_{\arabic{constantsubscript}}\int_0^\infty \left(1\wedge \left(ke^{-ax}\right)^{-\beta}\right)x^{n-1}e^{-x}\d{x}
\end{align}
Make the substitution $y=ke^{-ax}$ to obtain
\begin{align}
&\P(X_n>k)
\leq
\addtocounter{constantsubscript}{1}C_{\arabic{constantsubscript}}\int_0^k \left(1\wedge y^{-\beta}\right)\left(\log k-\log y\right)^{n-1} \left(\frac{y}{k}\right)^{1/a}\frac{dy}{y}\notag\\
\label{XnTailAsympIntegral}
&\quad\leq
\frac{C_{\arabic{constantsubscript}}(\log k)^{n-1}}{k^{1/a}}\int_0^\infty \left(1\wedge y^{-\beta}\right)\left(1+\frac{\abs{\log y}}{\log k}\right)^{n-1} \frac{dy}{y^{1-1/a}}
\end{align}
The last integral in \eqref{XnTailAsympIntegral} is bounded as $k\rightarrow\infty$: the singularity as $y\rightarrow 0^+$ is integrable since $1-1/a<1$, and the tail as $y\rightarrow\infty$ is integrable since the exponent in $y^{-1-\beta+1/a}$ satisfies $-1-\beta+1/a<-1$ because $a\beta>1$.
To extend \eqref{XnAsymp} to $\hat{X}_n$, assume inductively that $\P(\hat{X}_n>k)\asymp (\log k)^{n-1}/k^{1/a}$. (The case $n=1$ is already proved since $\hat{X}_1=X_1$.) The bound $\P(\hat{X}_{n+1}>k)\geq \P(X_{n+1}>k)$ is immediate, and we can estimate
\begin{align}
\P\left(\hat{X}_{n+1}>k\right)\leq \P\left(X_{n+1} >k-k'\right) +\P\left(\hat{X}_n >k'\right)
\end{align}
where we set $k'=\floor{k/(\log k)^{a/2}}$. Then $k-k'\sim k$ and $\log (k-k')\sim\log k' \sim \log k$, so that
\begin{equation}
\P\left(X_{n+1} >k-k'\right)\asymp \frac{(\log k)^n}{k^{1/a}}
\end{equation}
while
\begin{equation}
\P\left(\hat{X}_n >k'\right)\asymp \frac{(\log k')^{n-1}}{\left(k'\right)^{1/a}}\asymp \frac{(\log k)^{n-1/2}}{k^{1/a}}
\end{equation}
which is of lower order. This completes the induction.
\end{proof}
\begin{proof}[Proof of \eqref{RnTail}--\eqref{VnTail}]
These relations follow immediately from \eqref{deltanTail} and Lemma \ref{XnTailLemma}; the bounds \eqref{XnAsympUpperTail}--\eqref{XnWeakLowerTail} are immediate consequences of Proposition \ref{LnRnVnBoundProp}. As in section \ref{SLLNCLTproof}, the asymptotics for $R'_n$ follow from the asymptotics for $\hat{R}_n$ and the bound $R_n\leq R'_n\leq \hat{R}_n$.
\end{proof}
\begin{proof}[Proof of \eqref{LnTail}]
For $L_n$, we can use the exact formula $\condP{L_n>k}{\delta_n}=(1-\delta_n)^k$. Write $\delta_n=g(\theta(Q_n))$, where $g(p)$ is a certain continuous and increasing function. By \eqref{QndeltanRelation}, $g(p)\sim \frac{\sigma-1}{2\sigma} p$ as $p\rightarrow 0^+$; as above we will use the bound $g(p)\geq cp$ for some constant $c>0$. Proceeding as in \eqref{XnTailAsympUnsubbedIntegral} and \eqref{XnTailAsympIntegral} we obtain the exact formula
\begin{align}
&\P(L_n>k)
=
\frac{1}{k(n-1)!}\int_0^k \bigl(1-g(y/k)\bigr)^k (\log k -\log y)^{n-1} \d{y}\notag\\
\label{LnAbovekFormula}
&\quad=
\frac{(\log k)^{n-1}}{k(n-1)!} \int_0^\infty \mathbbm{1}(y\leq k)\bigl(1-g(y/k)\bigr)^k \left(1-\frac{\log y}{\log k}\right)^{n-1} dy
\end{align}
But the integral in \eqref{LnAbovekFormula} converges to $\int_0^\infty e^{-\frac{\sigma-1}{2\sigma}y}\d{y}=\frac{2\sigma}{\sigma-1}$ as $k\rightarrow\infty$: pointwise convergence follows from $g(p)\sim \frac{\sigma-1}{2\sigma} p$, and we can uniformly bound the integrand using
\begin{equation}
\bigl(1-g(y/k)\bigr)^k \leq e^{-kg(y/k)}\leq e^{-cy}
\end{equation}
Lastly, a simple modification of the argument for $\hat{X}_n$ extends \eqref{LnTail} to $\hat{L}_n$.
\end{proof}
\section{Pond bounds: proof of Proposition \ref{LnRnVnBoundProp}}
\label{PondTailBoundsProofSection}
In this section we prove the tail bounds \eqref{LTailUpperBounds}--\eqref{VTailLowerBound}. Since the laws of $L_n$, $R_n$ and $V_n$ do not depend on $n$ except through the value of $\delta_n$, we will omit the subscript in this section. For convenient reference we recall the structure of the bounds:
\begin{align}
\label{XStrongerUpperTail}
\condP{\delta^a X > S }{\delta} &\leq Ce^{-cS},\\
\label{XLowerTailUpperBound}
\condP{\delta^a X < s }{ \delta} &\leq Cs^{1/a}\\
\intertext{for all $S$ and $s$, and}
\label{XLowerTailLowerBound}
\condP{\delta^a X < s }{\delta} &\geq cs^{1/a}
\end{align}
on the event $\set{\delta^a<\gamma s}$, for $s\leq s_0$. We have $a=1$ for $X=L$ and $X=R$, and $a=2$ for $X=V$.
In \eqref{XLowerTailLowerBound} it is necessary to assume $\delta^a<\gamma s$. This is due only to a discretization effect: for any $\mathbb{N}$-valued random variable $X$, we have $\condP{\delta^a X<s}{\delta}=0$ whenever $\delta^a\geq s$. Indeed, the bounds \eqref{LTailLowerBound}--\eqref{VTailLowerBound} can be proved with $\gamma=1$, although this is not necessary for our purposes.
Note that, by proper choice of $C$ and $s_0$, we can assume that $S$ is large in \eqref{XStrongerUpperTail} and $s$ is small in \eqref{XLowerTailUpperBound} and \eqref{XLowerTailLowerBound}. Since we only consider $\mathbb{N}$-valued random variables $X$, we can assume $\delta$ is small in \eqref{XLowerTailUpperBound}, say $\delta<1/2$ (otherwise take $s<(1/2)^a$ without loss of generality). Moreover, Theorem \ref{InvasionStructureTheorem} shows that $L$, $R$ and $V$ are all stochastically decreasing in $\delta$. Consequently it suffices to prove \eqref{XStrongerUpperTail} for $\delta$ small, say $\delta<1/2$. Finally the constraint $\delta^a<\gamma s_0$ makes $\delta$ small in \eqref{XLowerTailLowerBound} also.
We note for subsequent use the inequalities
\begin{equation}
(1-x)^y\leq e^{-xy}
\end{equation}
for $x\in(0,1),y>0$ and
\begin{equation}
\label{1minus1minusxtotheybound}
1-(1-x)^y\leq 1-e^{-2xy}\leq 2xy
\end{equation}
for $x\in(0,1/2),y>0$, which follow from $\log(1-x)\leq -x$ for $x\in(0,1)$ and $\log(1-x)\geq -2x$ for $x\in(0,1/2)$.
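These two elementary inequalities are easy to confirm numerically (a trivial check, included only as an illustration):

```python
import math

# Quick numerical confirmation of the two elementary inequalities,
# which follow from log(1-x) <= -x on (0,1) and log(1-x) >= -2x on (0,1/2).
for x in (0.01, 0.1, 0.3, 0.49):
    for y in (0.5, 1.0, 10.0, 100.0):
        assert (1.0 - x) ** y <= math.exp(-x * y)
        assert 1.0 - (1.0 - x) ** y <= 1.0 - math.exp(-2.0 * x * y) <= 2.0 * x * y
```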
\subsection{The backbone length \texorpdfstring{$L$}{L}: proof of \texorpdfstring{\eqref{LTailUpperBounds}}{(\ref{LTailUpperBounds})} and \texorpdfstring{\eqref{LTailLowerBound}}{(\ref{LTailLowerBound})}}
From Theorem \ref{InvasionStructureTheorem}, $L$ is a geometric random variable with parameter $\delta$. So
\begin{align}
\condP{L>S/\delta}{ \delta}
&=
(1-\delta)^{\floor{S/\delta}}\notag\\
\label{LStrongerUpperTail}
&\leq
e^{-\delta\floor{S/\delta}}
\leq e^{-S+\delta}\leq e^{-S+1}
\end{align}
since $\delta\leq 1$, proving \eqref{XStrongerUpperTail}. For \eqref{XLowerTailUpperBound} and \eqref{XLowerTailLowerBound}, we have
\begin{equation}
\label{LLowerTailFormula}
\condP{L < s/\delta}{ \delta}
=
1-(1-\delta)^{\ceil{s/\delta}-1}.
\end{equation}
For $\delta\leq\frac{1}{2}$ we can use \eqref{1minus1minusxtotheybound} to get
\begin{align}
\condP{L < s/\delta}{\delta}
&\leq
2\delta(\ceil{s/\delta}-1)\leq 2s
\end{align}
which proves \eqref{XLowerTailUpperBound}. For \eqref{XLowerTailLowerBound}, take $\gamma_L=1/2$. Then on the event $\set{\delta<\gamma_L s}$ we have $\ceil{s/\delta}\geq 3$ so that expanding \eqref{LLowerTailFormula} as a binomial series gives
\begin{align}
\condP{L < s/\delta}{ \delta}
&\geq
\left(\ceil{s/\delta}-1\right)\delta-\frac{1}{2}\left(\ceil{s/\delta}-1\right)\left(\ceil{s/\delta}-2\right)\delta^2\notag\\
&\geq
(s-\delta)-\frac{1}{2}s(s-\delta)\notag\\
\label{LLowerTailLowerBound}
&\geq
\frac{s}{2}\left(1-\frac{s}{2}\right)\geq \frac{s}{4}
\end{align}
for $s\leq 1=s_0$. So \eqref{XLowerTailLowerBound} holds.
\subsection{The pond radius \texorpdfstring{$R$}{R}: proof of \texorpdfstring{\eqref{RTailUpperBounds}}{(\ref{RTailUpperBounds})} and \texorpdfstring{\eqref{RTailLowerBound}}{(\ref{RTailLowerBound})}}
Conditional on $\delta$ and $L$, $R$ is the maximum height of a percolation cluster with parameter $p_c(1-\delta)$ started from a path of length $L$. We have $R\geq L$, so \eqref{XLowerTailUpperBound} follows immediately from the corresponding bound for $L$. Moreover, $R$ is stochastically dominated by
\begin{equation}
L+\max_{1\leq i\leq L} \tilde{R}_i
\end{equation}
where $\tilde{R}_i$ is the maximum height of a branching process with offspring distribution Binomial($\sigma,p_c(1-\delta)$) started from a single vertex, independently for each $i$. Define
\begin{equation}
a_k=\condP{\tilde{R}_i>k}{\delta}
\end{equation}
for $k>0$. Thus $a_k$ is the probability that the branching process survives to generation $k+1$. By comparison with a critical branching process,
\begin{equation}
\label{CriticalBranchingProcessBound}
a_k\leq \frac{\namednumberedConstant{CritBPConst}}{k}
\end{equation}
for some constant $\previousConstant{CritBPConst}$. On the other hand, $a_k$ satisfies
\begin{equation}
a_{k+1}=1-f(1-a_k)
\end{equation}
where $f(z)$ is the generating function for the offspring distribution of the branching process. (This is a reformulation of the well-known recursion for the extinction probability.) In particular, since $f'(z)\leq f'(1)=\sigma p_c(1-\delta)=1-\delta$,
\begin{equation}
\label{SubcriticalBranchingProcessRecursion}
a_{k+1}\leq f'(1)a_k=a_k(1-\delta).
\end{equation}
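The recursion $a_{k+1}=1-f(1-a_k)$ is easy to iterate numerically; the following sketch (with illustrative parameters, not from the paper) exhibits both regimes: the $O(1/k)$ critical decay behind \eqref{CriticalBranchingProcessBound} and the extra $(1-\delta)^j$ factor of \eqref{SubcriticalBranchingProcessRecursion}.

```python
# Sketch: iterate a_{k+1} = 1 - f(1 - a_k) for Binomial(sigma, p) offspring,
# f(z) = (1 - p + p*z)**sigma.  For sigma = 2 the recursion simplifies
# algebraically to a_{k+1} = p*a_k*(2 - p*a_k), which avoids cancellation
# when a_k is tiny.
sigma = 2
p_c = 1.0 / sigma

def survival_probs(delta, kmax):
    p = p_c * (1.0 - delta)
    a, out = 1.0, []                   # a_0 = 1: start from a present root
    for _ in range(kmax):
        a = p * a * (2.0 - p * a)      # = 1 - (1 - p*a)**2
        out.append(a)
    return out

crit = survival_probs(0.0, 2000)
# critical case: k * a_k stays bounded (comparison bound a_k <= C/k)
assert 3.0 < 2000 * crit[-1] < 4.5

sub = survival_probs(0.05, 1500)
# subcritical case: a_{k+j} <= a_k * (1 - delta)**j
assert sub[1499] <= sub[499] * (1.0 - 0.05) ** 1000
```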
Combining \eqref{CriticalBranchingProcessBound} with \eqref{SubcriticalBranchingProcessRecursion},
\begin{equation}
a_{k+j}\leq a_k(1-\delta)^j\leq \frac{\previousConstant{CritBPConst}}{k}e^{-\delta j}
\end{equation}
and taking $k=\ceil{S/2\delta}\geq S/2\delta$, $j=\floor{S/\delta}-\ceil{S/2\delta}\geq S/2\delta-2$,
\begin{align}
\condP{\tilde{R}_i>\frac{S}{\delta}}{\delta}
&=
a_{\floor{S/\delta}}
\leq
\frac{2\previousConstant{CritBPConst}\delta}{S}e^{-\delta(S/2\delta-2)}\notag\\
&\leq
\frac{\addtocounter{constantsubscript}{1}C_{\arabic{constantsubscript}} \delta e^{-\addtocounter{constantsubscript}{1}c_{\arabic{constantsubscript}} S}}{S}
\end{align}
Using this estimate we can compute
\begin{align}
&\condP{R>\frac{S}{\delta}}{\delta}\notag\\
&\qquad\leq
\condP{L>\frac{S}{2\delta}}{\delta}+\condP{L\leq \frac{S}{2\delta},\tilde{R}_i>\frac{S}{2\delta}\text{ for some $i\leq S/2\delta$}}{\delta}\notag\\
&\qquad\leq
\addtocounter{constantsubscript}{1}C_{\arabic{constantsubscript}} e^{-\addtocounter{constantsubscript}{1}c_{\arabic{constantsubscript}} S}+\left(\frac{S}{2\delta}\right) \frac{\addtocounter{constantsubscript}{1}C_{\arabic{constantsubscript}} \delta e^{-\addtocounter{constantsubscript}{1}c_{\arabic{constantsubscript}} S}}{S}
\leq
\addtocounter{constantsubscript}{1}C_{\arabic{constantsubscript}} e^{-\addtocounter{constantsubscript}{1}c_{\arabic{constantsubscript}} S}
\end{align}
Similarly
\begin{align}
\condP{R<\frac{s}{\delta}}{\delta}
&\geq
\condP{L<\frac{s}{2\delta}, \tilde{R}_i<\frac{s}{2\delta}\text{ for all $i<\frac{s}{2\delta}$}}{\delta}\notag\\
&\geq
\addtocounter{constantsubscript}{1}c_{\arabic{constantsubscript}} s\left(1-\frac{\addtocounter{constantsubscript}{1}C_{\arabic{constantsubscript}}\delta e^{-\addtocounter{constantsubscript}{1}c_{\arabic{constantsubscript}} s}}{s}\right)^{s/2\delta}\geq \addtocounter{constantsubscript}{1}c_{\arabic{constantsubscript}} s
\end{align}
provided $\delta$ is sufficiently small compared to $s$, i.e., provided $\gamma_R$ is small enough.
\subsection{The pond volume \texorpdfstring{$V$}{V}: proof of \texorpdfstring{\eqref{VTailUpperBounds}}{(\ref{VTailUpperBounds})} and \texorpdfstring{\eqref{VTailLowerBound}}{(\ref{VTailLowerBound})}}
From Theorem \ref{InvasionStructureTheorem}, conditional on $\delta$ and $L$, $V$ is the number of edges in a percolation cluster with parameter $p_c(1-\delta)$, started from a path of length $L$ and with no edges emerging from the top of the path. We can express $V$ in terms of the return time of a random walk as follows.
Start with an edge configuration with $L$ backbone edges marked as occupied. Mark as unexamined the $(\sigma-1)L$ edges adjacent to the backbone, not including the edges emerging from the top. At each step, take an unexamined edge (if any remain) and either (1) with probability $1-p_c(1-\delta)$, mark it as vacant; or (2) with probability $p_c(1-\delta)$, mark it as occupied and mark its child edges as unexamined. Let $N_k$ denote the number of unexamined edges after $k$ steps. Then it is easy to see that $N_k$ is a random walk $N_k=N_0+\sum_{i=1}^k Y_i$ (at least until $N_k=0$) where $N_0=(\sigma-1)L$ and
\begin{equation}
Y_i=\begin{cases}\sigma-1&\text{w.p. $p_c(1-\delta)$,}\\ -1&\text{w.p. $1-p_c(1-\delta)$.}\end{cases}
\end{equation}
Let $T=\inf\set{k : N_k=0}$. ($T$ is finite a.s. since $\condE{Y_i}{\delta}=-\delta<0$ and $N_k$ can jump down only by 1.) $T$ counts the total number of off-backbone edges examined, namely the number of non-backbone edges in the cluster and on its boundary, not including the edges from the top of the backbone. Consequently
\begin{equation}
T=[V-L]+[(\sigma-1)V+\sigma]-\sigma=\sigma V-L
\end{equation}
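The exploration just described can be sketched as a short simulation (illustrative parameters $\sigma$, $\delta$, $L$; not part of the proof), which also checks the identity $T=\sigma V-L$ directly by counting the occupied off-backbone edges:

```python
import random

# Sketch of the exploration: the count of unexamined edges performs the
# random walk N_k started from N_0 = (sigma-1)*L, and the hitting time
# of 0 satisfies T = sigma*V - L, where V = L + (occupied edges found).
random.seed(1)
sigma, delta, L = 3, 0.1, 10
p = (1.0 / sigma) * (1.0 - delta)     # p_c(1 - delta), with p_c = 1/sigma

for _ in range(1000):
    unexamined = (sigma - 1) * L      # N_0: edges adjacent to the backbone
    occupied = 0                      # off-backbone edges found occupied
    steps = 0                         # number of edges examined so far
    while unexamined > 0:
        steps += 1
        if random.random() < p:       # occupied: remove it from the pile,
            occupied += 1             # its sigma children become unexamined
            unexamined += sigma - 1
        else:
            unexamined -= 1           # vacant: just remove it
    V = L + occupied                  # total number of edges in the pond
    assert steps == sigma * V - L     # the identity T = sigma*V - L
```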
In order to apply random walk estimates we write $X_i=Y_i+\delta$, $Z_k=\sum_{i=1}^k X_i$, so that $\condE{X_i}{\delta}=0$; $\namednumberedconstant{XiVarianceLower}\leq\condE{X_i^2}{\delta}\leq \namednumberedConstant{XiVarianceUpper}$ for universal constants $\previousconstant{XiVarianceLower},\previousConstant{XiVarianceUpper}$; and $N_k=Z_k-k\delta+(\sigma-1)L$. Note that
\begin{align}
\set{V>V_0}
&=
\set{T >\sigma V_0-L}
\notag\\
&\subseteq
\set{Z_{\floor{\sigma V_0-L}}>\delta\floor{\sigma V_0-L}-(\sigma-1)L}
\end{align}
so using, for instance, Freedman's inequality \cite[Proposition 2.1]{Freedman1975} leads after some computation to
\newsavebox{\neverusedagain}
\sbox{\neverusedagain}{$\namednumberedconstant{filler}\namednumberedconstant{SLetabound}$}
\begin{align}
&\condP{V>S_0 L/\delta}{\delta,L}
\leq
\condP{Z_{\floor{L(\sigma S_0-\delta)/\delta}}>L(\sigma S_0-2\delta-\sigma+1)}{\delta,L}\notag\\
&\qquad\leq
\exp\left(-\frac{L^2(\sigma S_0-2\delta-\sigma+1)^2}{2(\sigma L(\sigma S_0-2\delta-\sigma+1)+\previousConstant{XiVarianceUpper} L(\sigma S_0-\delta)/\delta)}\right)\notag\\
&\qquad\leq
\exp\left(-\previousconstant{filler} S_0 L\delta(1-2/S_0)^2\right)
\leq \exp(-\previousconstant{SLetabound} S_0 L\delta)
\end{align}
if $S_0\geq 3$, say. Then, setting $S_0=S/\delta L$,
\begin{align}
\condP{V>S/\delta^2}{\delta}
&\leq
\condP{L> S/3\delta}{\delta}+\condP{V>S_0 L/\delta, L\leq S/3\delta}{\delta}\notag\\
&\leq
\addtocounter{constantsubscript}{1}C_{\arabic{constantsubscript}} e^{-\addtocounter{constantsubscript}{1}c_{\arabic{constantsubscript}} S}+\exp\left(-\previousconstant{SLetabound} (S/\delta L) L\delta \right)\notag\\
\label{VStrongerUpperTail}
&\leq
\addtocounter{constantsubscript}{1}C_{\arabic{constantsubscript}} e^{-\addtocounter{constantsubscript}{1}c_{\arabic{constantsubscript}} S}
\end{align}
which proves \eqref{XStrongerUpperTail}.
For \eqref{XLowerTailUpperBound}, apply Freedman's inequality again:
\begin{align}
\condP{V\leq s/\delta^2}{\delta,L}
&=
\condP{T\leq \sigma s/\delta^2-L}{\delta,L}\notag\\
&=
\condP{\min_{k\leq\sigma s/\delta^2-L}(Z_k-k\delta) \leq -(\sigma-1)L}{\delta,L}\notag\\
&\leq
\condP{\min_{k\leq\sigma s/\delta^2} Z_k\leq -(L-\delta(\sigma s/\delta^2))}{\delta,L}\notag\\
&\leq
\exp\left(-\frac{(L-\sigma s/\delta)^2}{2(\sigma(L-\sigma s/\delta)+\addtocounter{constantsubscript}{1}C_{\arabic{constantsubscript}} s/\delta^2)}\right)\label{VLowerTailFreedman}\\
&\leq
\exp\left(-c (\delta L-\sigma s)^2 /s\right)
\end{align}
(where in the denominator of \eqref{VLowerTailFreedman} we use $L\leq s/\delta^2$ since $V\geq L$). So
\begin{align}
\label{VLowerTailUpperBoundSum}
\begin{split}
&\condP{V\leq s/\delta^2}{\delta}
\leq
\condP{L<2\sigma s/\delta}{\delta}\\
&\qquad+
\condE{\indicator{L\geq 2\sigma s/\delta} \exp\left(-c (\delta L-\sigma s)^2/s\right)}{\delta}
\end{split}
\end{align}
The first term in \eqref{VLowerTailUpperBoundSum} is at most $\addtocounter{constantsubscript}{1}C_{\arabic{constantsubscript}} s$ from \eqref{LTailUpperBounds} and will therefore be negligible compared to $\sqrt{s}$. For the second term, note that for $L\geq \sigma s/\delta$,
\begin{equation}
\exp\left(-c(\delta L-\sigma s)^2/s\right)=\int_{\sigma s/\delta}^\infty \indicator{l>L} \frac{2c\delta(\delta l-\sigma s)}{s} e^{-c(\delta l-\sigma s)^2/s} dl
\end{equation}
and so, with the substitution $x\sqrt{s}=\delta l-\sigma s$,
\begin{align}
&\condE{\indicator{L\geq 2\sigma s/\delta} \exp\left(-c (\delta L-\sigma s)^2/s\right)}{\delta}\notag\\
&\qquad=
2c \int_{\sigma s/\delta}^\infty \condP{L<l}{\delta}\frac{\delta(\delta l-\sigma s)}{s} e^{-c(\delta l-\sigma s)^2/s} dl\notag\\
&\qquad=
2c \int_0^\infty \condP{L<\frac{x\sqrt{s}+\sigma s}{\delta}}{\delta} x e^{-cx^2} dx\notag\\
&\qquad\leq
\addtocounter{constantsubscript}{1}C_{\arabic{constantsubscript}} \int_0^\infty (x\sqrt{s}+\sigma s)xe^{-cx^2}dx
\leq
\addtocounter{constantsubscript}{1}C_{\arabic{constantsubscript}} \sqrt{s}
\end{align}
which proves \eqref{XLowerTailUpperBound}.
Finally, for \eqref{XLowerTailLowerBound}, the Berry-Esseen inequality (see for instance \cite[Theorem 2.4.9]{Durrett2005}) implies
\begin{equation}
\abs{\condP{Z_k<-x\sqrt{k}\sqrt{\condE{X_i^2}{\delta}}}{\delta}-\Phi(-x)}\leq\frac{\namednumberedConstant{BEConst}}{\sqrt{k}}
\end{equation}
where $\Phi(x)=\P(G<x)$ for $G$ a standard Gaussian, and $\previousConstant{BEConst}$ is some absolute constant. In particular, using $0<\previousconstant{XiVarianceLower}\leq \condE{X_i^2}{\delta}$,
\begin{equation}
\condP{Z_k<-(\sigma-1)\sqrt{k}}{\delta}\geq \namednumberedconstant{BEconsequenceconst} >0
\end{equation}
for $k\geq\namednumberedConstant{BEThreshold}$. Choose $\gamma_V = 1\wedge \gamma_L^2\wedge (\previousConstant{BEThreshold}+1)^{-1}$. Then we have $s/\delta^2\geq 1$ (so we may bound $\sigma s/\delta^2-\sqrt{s}/\delta\geq s/\delta^2$); $\sqrt{s}/\delta\leq \gamma_L$; and $\floor{s/\delta^2}\geq \previousConstant{BEThreshold}$, so that
\begin{align}
\condP{V<s/\delta^2}{\delta}
&=
\condP{T<\sigma s/\delta^2 - L}{\delta}\notag\\
&\geq
\condP{T<\sigma s/\delta^2 - \sqrt{s}/\delta,L<\sqrt{s}/\delta}{\delta}\notag\\
&\geq
\condP{T<s/\delta^2,L<\sqrt{s}/\delta}{\delta}\notag\\
&\geq
\condP{Z_{\floor{s/\delta^2}}< \delta\floor{s/\delta^2}-(\sigma-1)L,L<\sqrt{s}/\delta}{\delta}\notag\\
&\geq
\condP{Z_{\floor{s/\delta^2}}<-(\sigma-1)\sqrt{s/\delta^2}}{\delta}\condP{L<\sqrt{s}/\delta}{\delta}\notag\\
&\geq
\addtocounter{constantsubscript}{1}c_{\arabic{constantsubscript}}\sqrt{s}
\end{align}
proving \eqref{XLowerTailLowerBound}.
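The Berry-Esseen step above admits a quick numerical sanity check: writing $B$ for the number of up-steps among the first $k$, we have $Z_k=\sigma B-k(1-\delta)$ with $B$ binomial, so the lower bound on the probability $\condP{Z_k<-(\sigma-1)\sqrt{k}}{\delta}$ can be probed by Monte Carlo. A sketch with illustrative parameter values (not from the paper):

```python
import numpy as np

sigma, delta, k, trials = 3, 0.1, 10_000, 4_000
p = (1.0 - delta) / sigma                 # p_c(1 - delta) with p_c = 1/sigma

rng = np.random.default_rng(0)
B = rng.binomial(k, p, size=trials)       # up-steps among the first k
Z_k = sigma * B - k * (1.0 - delta)       # Z_k = sum_i (Y_i + delta)

est = np.mean(Z_k < -(sigma - 1) * np.sqrt(k))
assert est > 0.02                         # bounded away from zero, as the CLT predicts
```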
\setcounter{constantsubscript}{0}
\section{Percolation with defects}\label{PercolationWithDefectsSection}
In this section we prove
\begin{equation}
\label{PercWithDefectsAsymptotics}
\P_{p_c}(o\overset{n}{\leftrightarrow}\partial B(k))\asymp k^{-2^{-n}}
\end{equation}
The case $n=0$ is a standard branching process result. For $n>0$, proceed by induction. Write $C(o)$ for the percolation cluster of the root $o$. The lower bound follows from the following well-known estimate:
\begin{equation}
\label{ClusterSizeNAsymptotics}
\P_{p_c}(\abs{C(o)}>N)\asymp N^{-1/2}
\end{equation}
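The tail \eqref{ClusterSizeNAsymptotics} is easy to probe numerically: $\abs{C(o)}$ is the total progeny of a branching process with Binomial$(\sigma,p_c)$ offspring, $p_c=1/\sigma$. A rough Monte Carlo sketch (names and tolerances are ours; the constant is checked only to order of magnitude):

```python
import random

def cluster_size(sigma, p, cap, rng):
    """Total progeny of a Binomial(sigma, p) branching process
    (= vertices in C(o) on the sigma-ary tree), truncated at `cap`."""
    alive, size = 1, 1
    while alive > 0 and size < cap:
        children = sum(rng.random() < p for _ in range(sigma))
        alive += children - 1
        size += children
    return size

rng = random.Random(1)
trials, N = 20_000, 64
hits = sum(cluster_size(2, 0.5, N + 1, rng) > N for _ in range(trials))
est = (hits / trials) * N**0.5            # roughly constant in N by the tail estimate
assert 0.5 < est < 2.5
```

Truncating at `cap = N + 1` does not bias the event $\set{\abs{C(o)}>N}$, since the simulation stops only once the cluster has provably exceeded $N$ vertices or died out.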
If $\abs{C(o)}>N$ then there are at least $N$ vertices $v_1,\dotsc,v_N$ on the outer boundary of $C(o)$, any one of which may have a connection to $\partial B(k)$ with $n-1$ defects. As a worst-case estimate we may assume that $v_1,\dotsc,v_N$ are still at distance $k$ from $\partial B(k)$, so that by independence we have
\begin{align}
\P_{p_c}(o\overset{n}{\leftrightarrow}\partial B(k))
&\geq
\P_{p_c}(\abs{C(o)}>N) \left(1-\left(1-\P_{p_c}\bigl(o\overset{n-1}{\leftrightarrow}\partial B(k)\bigr)\right)^N\right)\notag\\
&\geq
\frac{c_1}{\sqrt{N}} \left(1-(1-c_2 k^{-2^{-n+1}})^N\right)
\end{align}
for constants $c_1,c_2$. If we set $N=k^{2^{-n+1}}$ then the second factor is of order 1, and the lower bound is proved. For the upper bound, use a slightly stronger form of \eqref{ClusterSizeNAsymptotics} (see for instance \cite[p. 260]{GrimmettPerc1999}):
\begin{equation}
\label{ClusterSizeNAsymptoticsStronger}
\P_{p_c}(\abs{C(o)}=N)\asymp (N+1)^{-3/2}
\end{equation}
Now if $\abs{C(o)}=N$, with $N\leq k/2$, then there are at most $\sigma N$ vertices on the outer boundary of $C(o)$, one of which must have a connection with $n-1$ defects of length at least $k-N\geq k/2$. So
\begin{align}
&\P_{p_c}(o\overset{n}{\leftrightarrow}\partial B(k))\notag\\
&\qquad\leq
\P_{p_c}(\abs{C(o)}>k/2)\notag\\
&\qquad\qquad
+\sum_{N=0}^{\floor{k/2}} \P_{p_c}(\abs{C(o)}=N) \left(1- \left(1-\P_{p_c}\bigl(o\overset{n-1}{\leftrightarrow}\partial B(k/2)\bigr)\right)^{\sigma N}\right)\notag\\
&\qquad\leq
\frac{c_3}{\sqrt{k}}+ \sum_{N=0}^\infty \frac{c_4}{(N+1)^{3/2}}\left(1- \bigl(1-c_5 k^{-2^{-n+1}}\bigr)^{\sigma N}\right)\notag\\
&\qquad\leq
\frac{c_3}{\sqrt{k}}+ \sum_{N<k^{2^{-n+1}}}\frac{c_6 k^{-2^{-n+1}}N}{(N+1)^{3/2}} +\sum_{N\geq k^{2^{-n+1}}} c_4 N^{-3/2}\notag\\
&\qquad\leq
c_3 k^{-2^{-1}}+c_7 \left(k^{2^{-n+1}}\right)^{-1/2}
\end{align}
which proves the result (the first term is an error term if $n\geq 2$ and combines with the second if $n=1$).
\bibliographystyle{plain}
% Source: arXiv:0912.5205 (math.PR), 2009-12-28.
% Title: Exponential growth of ponds in invasion percolation on regular trees.
% Abstract: In invasion percolation, the edges of successively maximal weight (the outlets) divide the invasion cluster into a chain of ponds separated by outlets. On the regular tree, the ponds are shown to grow exponentially, with law of large numbers, central limit theorem and large deviation results. The tail asymptotics for a fixed pond are also studied and are shown to be related to the asymptotics of a critical percolation cluster, with a logarithmic correction.
% Source: arXiv:2005.04554.
% Title: A comparison study of deep Galerkin method and deep Ritz method for elliptic problems with different boundary conditions.
% Abstract: Recent years have witnessed growing interest in solving partial differential equations by deep neural networks, especially in the high-dimensional case. Unlike classical numerical methods, such as finite difference method and finite element method, the enforcement of boundary conditions in deep neural networks is highly nontrivial. One general strategy is to use the penalty method. In this work, we conduct a comparison study for elliptic problems with four different boundary conditions, i.e., Dirichlet, Neumann, Robin, and periodic boundary conditions, using two representative methods: deep Galerkin method and deep Ritz method. In the former, the PDE residual is minimized in the least-squares sense while the corresponding variational problem is minimized in the latter. Therefore, it is reasonably expected that deep Galerkin method works better for smooth solutions while deep Ritz method works better for low-regularity solutions. However, by a number of examples, we observe that deep Ritz method can outperform deep Galerkin method with a clear dependence on dimensionality even for smooth solutions and deep Galerkin method can also outperform deep Ritz method for low-regularity solutions. Besides, in some cases, when the boundary condition can be implemented in an exact manner, we find that such a strategy not only provides a better approximate solution but also facilitates the training process.
\section{Introduction}
In the past decade, deep learning has achieved great success in many subjects, like computer vision, speech recognition, and natural language processing \cite{NIPS2012_4824,hinton2012deep,Goodfellow2016} due to the strong representability of deep neural networks (DNNs). Meanwhile, DNNs have also been used to solve partial differential equations (PDEs); see for example \cite{lagaris1998artificial,raissi2018hidden,weinan2017deep, berg2018unified,E2018,long2017pde,han2018solving,deepGalerkin2018,zang2020weak}. In classical numerical methods such as finite difference method \cite{leveque2007finite} and finite element method \cite{brenner2007mathematical}, the number of degrees of freedoms (dofs) grows exponentially fast as the dimension of PDE increases. One striking advantage of DNNs over classical numerical methods is that the number of dofs only grows (at most) polynomially. Therefore, DNNs are particularly suitable for solving high-dimensional PDEs. The magic underlying this is to approximate a function using the network representation of independent variables without using mesh points. Afterwards, Monte-Carlo method is used to approximate the loss (objective) function which is defined over a high-dimensional space. Some methods are based on the PDE itself \cite{raissi2018hidden,deepGalerkin2018}
and some other methods are based on the variational or the weak formulation {\cite{E2018,kharazmi2019variational,liao2019deep,zang2020weak}. Another successful example is the multilevel Picard approximation, which provably overcomes the curse of dimensionality for a class of semilinear parabolic equations~\cite{hutzenthaler2020proof}.} In the current work, we focus on two representative methods: deep Ritz method (DRM) proposed by E and Yu \cite{E2018} and deep Galerkin method (DGM) proposed by Sirignano and Spiliopoulos \cite{deepGalerkin2018}. {It is worth mentioning that the loss function in DGM is defined as the PDE residual in the least-squares sense. Therefore, despite its name, DGM is not a Galerkin method from the perspective of numerical PDEs.}
In classical numerical methods, boundary conditions can be enforced exactly at mesh points on the boundary. Typical boundary conditions include Dirichlet, Neumann, Robin, and periodic boundary conditions \cite{evans_2010}. However, it is very difficult to impose exact boundary conditions on a DNN representation. Therefore, one often adds to the loss function a penalty term that penalizes the difference, typically in the $L^2$ norm, between the DNN representation on the boundary and the exact boundary condition. Only when the Dirichlet boundary condition is imposed can a construction with two DNN representations be used: one approximates the function on the boundary and the other approximates the function over the domain \cite{berg2018unified}. The main purpose of the current work is to provide a comprehensive study of four boundary conditions using DRM and DGM.
Since the order of the highest derivative in the loss function of DRM is lower than that of DGM, it is commonly expected that DGM works better for smooth solutions while DRM works better for low-regularity solutions. However, by a number of examples, we observe that DRM can outperform DGM, with a clear dependence on dimensionality, even for smooth solutions, and DGM can also outperform DRM for low-regularity solutions. Besides, in some cases, when the boundary condition can be implemented in an exact manner, we find that such a strategy not only provides a better approximate solution but also facilitates the training process.
The paper is organized as follows. First, a brief introduction of DGM and DRM, systematic treatment of four different boundary conditions using the penalty method, and how to use DNNs to solve PDEs are given in Section \ref{sec:method}. Numerous examples with different boundary conditions are compared in Section \ref{sec:numerics}. Conclusions are drawn in Section \ref{sec:conclusion}.
\section{Methodology}\label{sec:method}
Using a DNN to solve a PDE problem involves three ingredients: the loss function, the neural network structure, and the way the loss function is optimized over the parameter space. In what follows, we first give a brief introduction of DGM and DRM. Both methods use DNNs to approximate the PDE solution; the main difference is the choice of loss function, i.e., the objective function to be optimized. Afterwards, we discuss how different boundary conditions are treated using the penalty method. We then illustrate the network structure used to approximate the PDE solution. Finally, we describe the stochastic gradient descent method, which is often adopted to optimize the loss function.
\subsection{Deep Ritz method and deep Galerkin method}
Consider the following boundary value problem over a bounded domain $\Omega\subset\mathbb{R}^d$
\begin{equation}
\begin{cases}
\mathcal{L} u(x) = f(x), & \;\text{in} \;\Omega,\\
\Gamma u(x) = g(x), & \; \text{on} \; \partial\Omega,
\end{cases}
\label{eq}
\end{equation}
where $d$ is the dimension, $f(x)$ and $g(x)$ are given functions, $\mathcal{L}$ is a differential operator with respect to $x$, and $\Gamma$ is a boundary operator which represents Dirichlet, Neumann, Robin, or periodic boundary condition. To proceed, we assume the well-posedness of \eqref{eq}.
The basic idea of solving a PDE using DNNs is to seek an approximate solution represented by a DNN in a certain sense \cite{hornik1989multilayer}.
Denote the approximate solution by $u(x;\theta)$ with $\theta$ the set of neural network parameters. Both DRM and DGM use DNNs to approximate the solution; they differ only in the corresponding loss function. Precisely, the loss functions associated with DGM and DRM in terms of $u(x;\theta)$ read as
\begin{equation*}
\mathcal{J}_{\textrm{DGM}} [u(x;\theta)] = \int_{\Omega} {|\mathcal{L} u(x;\theta) - f(x)|}^2 \mathrm{d}x,
\end{equation*}
and
\begin{equation*}
\mathcal{J}_{\textrm{DRM}} [u(x;\theta)] = \int_{\Omega} \left(W(u(x;\theta)) - f(x)u(x;\theta)\right) \mathrm{d}x,
\end{equation*}
respectively. DGM aims to minimize the imbalance when the approximate DNN solution is substituted into $\mathcal{L} u(x) = f(x)$ of \eqref{eq} in the least-squares sense \cite{deepGalerkin2018}. DRM works in a variational sense that the variation of $\mathcal{J}_{\textrm{DRM}} [u(x;\theta)]$ with respect to $u(x;\theta)$ yields the associated Euler-Lagrange equation $\mathcal{L} u(x) = f(x)$ \cite{E2018, liao2019deep}.
The inclusion of boundary conditions is done by adding a penalty term
\begin{equation*}
\mathcal{B} [u(x;\theta)] = \int_{\partial\Omega} {| \Gamma u(x;\theta) - g(x) |}^2 \mathrm{d} s,
\end{equation*}
and respectively, the total loss functions $\mathcal{I} [u(x;\theta)]$ for DGM and DRM are
\begin{equation}\label{eqn:dgm}
\mathcal{I}_{\textrm{DGM}} [u(x;\theta)] = \mathcal{J}_{\textrm{DGM}} [u(x;\theta)] + \lambda \mathcal{B} [u(x;\theta)],
\end{equation}
and
\begin{equation}\label{eqn:drm}
\mathcal{I}_{\textrm{DRM}} [u(x;\theta)] = \mathcal{J}_{\textrm{DRM}} [u(x;\theta)] + \lambda \mathcal{B} [u(x;\theta)],
\end{equation}
where $\lambda$ is the penalty parameter.
The optimal approximation $u^*(x;\theta^*)$ is obtained by solving the following optimization problem:
\begin{equation}\label{eqn:optimization}
u^*(x;\theta^*) = \arg\min_{u(x;\theta) \in \mathcal{H}(\Omega)} \mathcal{I} [u(x;\theta)],
\end{equation}
where $\mathcal{H}(\Omega)$ is the set of admissible functions.
\subsection{Boundary conditions}
To illustrate the penalty method for boundary conditions in DGM and DRM, we start with the following explicit example over $\Omega = (0, 1)^d$ (by default)
\begin{equation}\label{eqn:poisson}
- \Delta u + \pi^2 u = f(x).
\end{equation}
The corresponding loss terms in DGM and DRM are
\begin{equation}\label{eqn:poissondgm}
\mathcal{J}_{\textrm{DGM}} [u(x;\theta)] = \int_{\Omega} {| - \Delta u(x;\theta) + \pi^2 u(x;\theta) - f(x) |}^2 \mathrm{d}x,
\end{equation}
and
\begin{equation}\label{eqn:poissondrm}
\mathcal{J}_{\textrm{DRM}} [u(x;\theta)] = \int_{\Omega} {\frac{1}{2} \left( {|\nabla u(x;\theta)|}^2 + \pi^2 {u(x;\theta)}^2 \right) - f(x) u(x;\theta)} \mathrm{d}x,
\end{equation}
respectively.
For comparison, the exact solution is set to be $u(x) = \sum_{k=1}^d \cos(\pi x_k)$, which is smooth. The functions $f(x)$ and $g(x)$, which can be calculated explicitly, will be specified later.
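For this exact solution, $f(x)=2\pi^2\sum_{k=1}^d \cos(\pi x_k)$, and the two interior losses \eqref{eqn:poissondgm}-\eqref{eqn:poissondrm} can be estimated by Monte Carlo with central finite differences (the paper approximates derivatives by finite differences). A minimal NumPy sketch in 2D, not the code used for the experiments; since $|\Omega|=1$, the sample mean directly estimates the integral:

```python
import numpy as np

d, h = 2, 1e-3                            # dimension and finite-difference step

def u_exact(x):                           # x has shape (N, d)
    return np.cos(np.pi * x).sum(axis=1)

def f_rhs(x):                             # f = -Laplace(u) + pi^2 u = 2 pi^2 u here
    return 2.0 * np.pi**2 * u_exact(x)

def laplacian(u, x):
    """Central second differences in each coordinate."""
    lap = np.zeros(len(x))
    for k in range(d):
        e = np.zeros(d); e[k] = h
        lap += (u(x + e) - 2.0 * u(x) + u(x - e)) / h**2
    return lap

def grad_sq(u, x):
    """|grad u|^2 by central first differences."""
    g2 = np.zeros(len(x))
    for k in range(d):
        e = np.zeros(d); e[k] = h
        g2 += ((u(x + e) - u(x - e)) / (2.0 * h))**2
    return g2

def j_dgm(u, x):                          # Monte Carlo estimate of the DGM loss
    return np.mean((-laplacian(u, x) + np.pi**2 * u(x) - f_rhs(x))**2)

def j_drm(u, x):                          # Monte Carlo estimate of the DRM loss
    return np.mean(0.5 * (grad_sq(u, x) + np.pi**2 * u(x)**2) - f_rhs(x) * u(x))

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=(100_000, d))
assert j_dgm(u_exact, x) < 1e-6           # residual vanishes at the exact solution
assert j_drm(u_exact, x) < j_drm(lambda z: 0.9 * u_exact(z), x)  # Ritz energy is minimal
```

On the same sample, $\mathcal{J}_{\textrm{DGM}}$ at a wrong candidate such as $0.9\,u$ is of order one, while $\mathcal{J}_{\textrm{DRM}}$ attains its smallest value (approximately $-\pi^2$ for $d=2$) at the exact solution.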
\subsubsection{Dirichlet boundary condition}
Dirichlet boundary condition reads as
\begin{equation*}
u(x) = g(x), \; x\in \partial \Omega,
\end{equation*}
and the corresponding penalty term is
\begin{equation}\label{eqn:poissonpenaltyd}
\mathcal{B}_{\textrm{D}} [u(x;\theta)] = \int_{\partial\Omega} {| u(x;\theta) - g(x) |}^2 \mathrm{d}s.
\end{equation}
Thus, the total loss functions of DGM and DRM for Dirichlet boundary condition are
\begin{equation}\label{eqn:poissondgmd}
\mathcal{I}_{\textrm{DGM}} [u(x;\theta)] = \mathcal{J}_{\textrm{DGM}} [u(x;\theta)] + \lambda \mathcal{B}_{\textrm{D}} [u(x;\theta)],
\end{equation}
and
\begin{equation}\label{eqn:poissondrmd}
\mathcal{I}_{\textrm{DRM}} [u(x;\theta)] = \mathcal{J}_{\textrm{DRM}} [u(x;\theta)] + \lambda \mathcal{B}_{\textrm{D}} [u(x;\theta)],
\end{equation}
respectively.
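In practice, the boundary integral \eqref{eqn:poissonpenaltyd} is also estimated by Monte Carlo, which requires uniform samples on $\partial\Omega$: pick one of the $2d$ faces of the cube uniformly, then sample the remaining coordinates. A small NumPy sketch (not the authors' code; the sample mean estimates the integral up to the surface measure $|\partial\Omega|=2d$):

```python
import numpy as np

def sample_boundary(n, d, rng):
    """Uniform samples on the boundary of (0,1)^d:
    pick a coordinate to pin, pin it to 0 or 1, sample the rest uniformly."""
    x = rng.uniform(0.0, 1.0, size=(n, d))
    k = rng.integers(0, d, size=n)                       # which coordinate is pinned
    x[np.arange(n), k] = rng.integers(0, 2, size=n)      # which of the two faces
    return x

def b_dirichlet(u, g, xb):                               # Monte Carlo estimate of B_D
    return np.mean((u(xb) - g(xb))**2)

u_exact = lambda x: np.cos(np.pi * x).sum(axis=1)
rng = np.random.default_rng(0)
xb = sample_boundary(4_000, 2, rng)
assert b_dirichlet(u_exact, u_exact, xb) == 0.0          # exact boundary data
assert b_dirichlet(lambda x: u_exact(x) + 0.1, u_exact, xb) > 0.0
```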
\subsubsection{Neumann boundary condition}
Neumann boundary condition reads as
\begin{equation*}
{\partial u} / {\partial { n}} = g(x), \; x\in\partial \Omega,
\end{equation*}
where $ {\partial u} / {\partial { n}} := \left({\partial u} / {\partial x_1},\cdots,{\partial u} / {\partial x_d}\right) \cdot n(x)$
and $n(x)$ is the unit outer normal vector along $\partial \Omega$. The corresponding penalty term is
\begin{equation}\label{eqn:poissonpenaltyn}
\mathcal{B}_{\textrm{N}} [u(x;\theta)] = \int_{\partial\Omega} {| {\partial u(x;\theta)} / {\partial n} - g(x) |}^2 \mathrm{d}s
\end{equation}
Thus, the total loss functions of DGM and DRM for Neumann boundary condition are
\begin{equation}\label{eqn:poissondgmn}
\mathcal{I}_{\textrm{DGM}} [u(x;\theta)] = \mathcal{J}_{\textrm{DGM}} [u(x;\theta)] + \lambda \mathcal{B}_{\textrm{N}} [u(x;\theta)],
\end{equation}
and
\begin{equation}\label{eqn:poissondrmn}
\mathcal{I}_{\textrm{DRM}} [u(x;\theta)] = \mathcal{J}_{\textrm{DRM}} [u(x;\theta)] + \lambda \mathcal{B}_{\textrm{N}} [u(x;\theta)],
\end{equation}
respectively.
\subsubsection{Robin boundary condition}
Robin boundary condition reads as
\begin{equation*}
{\partial u} / {\partial { n}} + u(x) = g(x), \; x\in \partial \Omega,
\end{equation*}
and the corresponding penalty term is
\begin{equation}\label{eqn:poissonpenaltyr}
\mathcal{B}_{\textrm{R}} [u(x;\theta)] = \int_{\partial\Omega} {| {\partial u(x;\theta)} / {\partial { n}} + u(x;\theta) - g(x) |}^2 \mathrm{d}s.
\end{equation}
Thus, the total loss functions of DGM and DRM for Robin boundary condition are
\begin{equation}\label{eqn:poissondgmr}
\mathcal{I}_{\textrm{DGM}} [u(x;\theta)] = \mathcal{J}_{\textrm{DGM}} [u(x;\theta)] + \lambda \mathcal{B}_{\textrm{R}} [u(x;\theta)],
\end{equation}
and
\begin{equation}\label{eqn:poissondrmr}
\mathcal{I}_{\textrm{DRM}} [u(x;\theta)] = \mathcal{J}_{\textrm{DRM}} [u(x;\theta)] + \lambda \mathcal{B}_{\textrm{R}} [u(x;\theta)],
\end{equation}
respectively.
\subsubsection{Periodic boundary condition}\label{sec:pbcpenalty}
Periodic boundary condition over the boundary of $\Omega = (-1, 1)^d$ reads as
\begin{equation*}
\begin{cases}
u(\tilde{x}_k,-1) = u(\tilde{x}_k,1), \\
{\partial u(\tilde{x}_k,-1)}/{\partial {x_k}} = {\partial u(\tilde{x}_k,1)}/{\partial {x_k}}, \\
\end{cases}
\end{equation*}
where $\tilde{x}_k =(x_1,\cdots,x_{k-1},x_{k+1},\cdots,x_d)$ for $k=1,\cdots,d$. The exact solution is still $u(x) = \sum_{k=1}^d \cos(\pi x_k)$.
Note that the penalty term $\mathcal{B}_{\textrm{P}} [u(x;\theta)]$ in this case consists of two terms:
\begin{align*}
\mathcal{B}_{\textrm{P}_1} [u(x;\theta)] & = \sum_{k=1}^{d} \int_{\partial\Omega} {|u(\tilde{x}_k,-1) -u(\tilde{x}_k,1)|}^2 \mathrm{d}s, \\
\mathcal{B}_{\textrm{P}_2} [u(x;\theta)] & = \sum_{k=1}^{d} \int_{\partial\Omega} {|{\partial u(\tilde{x}_k,-1)}/{\partial {x_k}} - {\partial u(\tilde{x}_k,1)}/{\partial {x_k}}|}^2 \mathrm{d}s.
\end{align*}
Thus, the corresponding loss functions of DGM and DRM for periodic boundary condition are
\begin{equation}\label{eqn:poissondgmp}
\mathcal{I}_{\textrm{DGM}} [u(x;\theta)] = \mathcal{J}_{\textrm{DGM}} [u(x;\theta)] + \lambda_1 \mathcal{B}_{\textrm{P}_1} [u(x;\theta)] + \lambda_2 \mathcal{B}_{\textrm{P}_2} [u(x;\theta)],
\end{equation}
and
\begin{equation}\label{eqn:poissondrmp}
\mathcal{I}_{\textrm{DRM}} [u(x;\theta)] = \mathcal{J}_{\textrm{DRM}} [u(x;\theta)] + \lambda_1 \mathcal{B}_{\textrm{P}_1} [u(x;\theta)] + \lambda_2 \mathcal{B}_{\textrm{P}_2} [u(x;\theta)],
\end{equation}
where $\lambda_1$ and $\lambda_2$ are prescribed penalty parameters.
\subsection{Network structure}
The deep network structure employed here is similar to ResNet \cite{DBLP:journals/corr/HeZRS15}, which is built by stacking several residual blocks. Each residual block contains one input, two weight layers, two nonlinear transformations (activation functions), a skip identity connection, and one output. In detail, consider a network with $n$ residual blocks. For the $i$-th block, let $L^{[i]}(x) \in \mathbb{R}^{m}$ be the input, $W^{[i]}_1, W^{[i]}_2 \in \mathbb{R}^{m \times m}$ and $b^{[i]}_1, b^{[i]}_2 \in \mathbb{R}^{m}$ be the weight matrices and the bias vectors, $\sigma(\cdot)$ be the activation function, and $L^{[i+1]}(x)$ be the output, which can be specified as
\begin{equation}\label{eqn:resnet}
L^{[i+1]}(x) = \sigma (W^{[i]}_2 \cdot (\sigma (W^{[i]}_1 \cdot L^{[i]}(x) + b^{[i]}_1)) + b^{[i]}_2) + L^{[i]}(x).
\end{equation}
The initial input $L^{[0]}(x) = W^{[0]} \cdot x + b^{[0]}$ and the final output $L^{[n+1]}(x) = W^{[n+1]} \cdot L^{[n]}(x) + b^{[n+1]}$ with $ W^{[0]} \in \mathbb{R}^{m \times d}, b^{[0]} \in \mathbb{R}^{m \times 1}$ and $W^{[n+1]} \in \mathbb{R}^{1 \times m}, b^{[n+1]} \in \mathbb{R}$.
The schematic picture of one residual block is given in Figure \ref{residual}.
\begin{figure}[H]
\centering
\includegraphics[scale=0.5]{residual_block.eps}
\caption{One residual block in the neural network structure.}
\label{residual}
\end{figure}
{The activation functions used in the current work are listed below:}
\begin{align*}
relu(x) & = \max(0,x), \\
sigmoid(x) & = \frac{1}{1+\exp(-x)}, \\
swish(x) & = \frac{x}{1+\exp(-x)}, \\
\sigma(x) & = {(\sin x )}^3, \\
swish(ax) & = \frac{ax}{1+\exp(-ax)}.
\end{align*}
{The last activation function is the adaptive swish function where $a$ is an additional parameter and is optimized in the training process \cite{JAGTAP2020109136,2019arXiv190912228J}.}
Overall, the DNN approximation of PDE solution can be written as
\begin{eqnarray}\label{eqn:ddnsolution}
u(x;\theta) = L^{[n+1]} \circ L^{[n]} \circ \cdots \circ L^{[1]} \circ L^{[0]}(x),
\end{eqnarray}
where $\theta$ is the full set of all weight and bias parameters in the neural network, i.e., $\theta = \{W^{[0]}, b^{[0]}, \{W_1^{[i]}, b_1^{[i]}, W_2^{[i]},b_2^{[i]} \}_{i=1}^n, W^{[n+1]}, b^{[n+1]}\}$. The total number of parameters is $m(d+1)+(2mn+1)(m+1)$.
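The parameter count above can be verified directly from a plain NumPy implementation of \eqref{eqn:resnet} (a sketch with our own names, not the training code):

```python
import numpy as np

def swish(x):
    return x / (1.0 + np.exp(-x))

def init_resnet(d, m, n, rng):
    """Random parameters for the residual network described above."""
    block = lambda: tuple(rng.standard_normal(s) for s in ((m, m), (m,), (m, m), (m,)))
    return {"W0": rng.standard_normal((m, d)), "b0": rng.standard_normal(m),
            "blocks": [block() for _ in range(n)],
            "Wout": rng.standard_normal((1, m)), "bout": rng.standard_normal(1)}

def forward(p, x):                        # x has shape (d,)
    L = p["W0"] @ x + p["b0"]
    for W1, b1, W2, b2 in p["blocks"]:
        L = swish(W2 @ swish(W1 @ L + b1) + b2) + L   # one residual block
    return (p["Wout"] @ L + p["bout"])[0]

def n_params(p):
    flat = [p["W0"], p["b0"], p["Wout"], p["bout"]] + \
           [a for blk in p["blocks"] for a in blk]
    return sum(a.size for a in flat)

d, m, n = 4, 8, 3
p = init_resnet(d, m, n, np.random.default_rng(0))
assert n_params(p) == m * (d + 1) + (2 * m * n + 1) * (m + 1)   # = 481 here
assert np.isfinite(forward(p, np.zeros(d)))
```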
\subsection{Stochastic gradient descent algorithm}
Using DNNs to solve PDEs is now transferred to solve the optimization problem \eqref{eqn:optimization} with the loss function \eqref{eqn:dgm} or \eqref{eqn:drm} over the possible DNN representations \eqref{eqn:ddnsolution}. Even if the original PDE is linear, the DNN representation \eqref{eqn:ddnsolution} can be highly nonlinear due to the successive composition of nonlinear activation functions. On the other hand, quadrature schemes for the high-dimensional integral in \eqref{eqn:dgm} and \eqref{eqn:drm} run into the curse of dimensionality and Monte-Carlo method can overcome this issue. The stochastic gradient descent (SGD) algorithm and its variants play a key role in deep learning training. It is a first-order optimization method which naturally incorporates the idea of Monte-Carlo sampling and thus avoids the curse of dimensionality. At each iteration, SGD updates neural parameters by evaluating the gradient of the loss function only at a batch of samples as
{
\begin{equation}\label{eqn:sgd}
\theta_{k+1} = \theta_{k} - \epsilon_k \frac{1}{N}\sum_{i=1}^{N}\nabla_\theta
l_i(\theta_k),
\end{equation}}
where $\theta_{k}$ denotes the neural network parameters at the $k$-th iteration, $\epsilon_k$ is the learning rate, and {$l_i(\theta_k)$ approximates the loss function using the single function value $u(x_i;\theta_k)$ multiplied by the volume or boundary measure.} The points $x_i$ are sampled from the uniform distribution over $\Omega$ and $\partial\Omega$. Though better sampling strategies, such as quasi-Monte Carlo sampling \cite{chen2019quasi}, can be used, we stick to Monte-Carlo sampling \cite{Ogata1989} in the current work for comparison purposes.
In our work, Adam optimizer is used to accelerate the training of the deep neural network \cite{ADAM}. Adam algorithm estimates first-order and second-order moments of gradient to dynamically adjust the learning rate for each parameter. The main advantage is that the learning rate at each iteration has a certain range after correction, which makes the parameter update more stable. In implementation, the global learning rate $\epsilon$ is $0.001$, the exponential decay rates of moment estimation $\rho_1, \rho_2$ are set to be $0.9$ and $0.999$, and the small constant $\delta$ used for numerical stability is set to be $10^{-8}$.
In addition, we use the finite difference method to approximate derivatives in the loss function.
\section{Numerical results}\label{sec:numerics}
We shall use the following relative $L^2$ error to measure the approximation error
\begin{equation*}
\textrm{error} = \sqrt{\frac{\int_{\Omega} {\left( u^{*}(x;\theta^*) - u(x) \right)}^2 \mathrm{d} x}{\int_{\Omega} {\left( u(x) \right)}^2 \mathrm{d} x}},
\end{equation*}
where $u^{*}(x;\theta^*)$ is the DNN approximation obtained by DGM or DRM and $u(x)$ is the exact solution.
\subsection{Training process and dimensional dependence for four boundary conditions}\label{sec:bcresult}
For four different boundary conditions, we record the training process of DGM and DRM and measure the error in terms of dimensionality. For comparison purpose, the same setup is employed for different boundary conditions, but the network structure varies as the dimensionality $d$ increases. Typically, each neural network contains three to four residual blocks with several neural units in each layer. The activation function used here is $swish(x)$.
{
For Dirichlet boundary condition, there are some strategies to avoid the penalty term; see \cite{berg2018unified,sheng2020pfnn} for example. The basic idea is to employ one DNN, denoted by $DNN(x;\theta)$, to approximate the PDE solution in the following trial form
\begin{equation}\label{eqn:twostage}
u(x;\theta) = L_D(x) DNN(x;\theta) + G(x),
\end{equation}
where $L_D(x)$ is the distance function to the boundary and $G(x)$ is a smooth extension of $g(x)$ over the whole domain $\Omega$. For periodic boundary condition, construction of a specific DNN can automatically satisfy the boundary condition \cite{han2020solving}. We shall return to this in Section \ref{sec:pbc}.
For the other two boundary conditions, in principle, the strategy in \eqref{eqn:twostage} can be applied if a natural extension function $G(x)$ satisfies the boundary condition and the distance function is available. From the practical perspective, however, it is unclear how to find such an extension function $G(x)$ that satisfies the boundary condition. Therefore, we mainly focus on the penalty method for four different boundary conditions and provide results without the penalty term for both Dirichlet and periodic boundary conditions.}
Figure \ref{DGM v.s. DRM in 2D} - Figure \ref{DGM v.s. DRM in 16D} record the training processes of DGM and DRM in 2D, 4D, 8D, and 16D, respectively. One general trend we have observed is that DGM converges faster than DRM in the low-dimensional case (see 2D for example), while the opposite holds in the high-dimensional case (see 16D for example). In lower dimensions, both DGM and DRM converge well. However, in 16D, a significant amount of effort was needed to achieve convergence in DGM. From Figure \ref{DGM v.s. DRM in 2D} - Figure \ref{DGM v.s. DRM in 16D}, a general observation is that the penalty parameter decreases from Dirichlet, Neumann, Robin, to periodic boundary conditions. Since this parameter is tuned to obtain a better approximation for a given DNN, a larger penalty parameter implies a better agreement between the DNN solution and the exact solution on the boundary.
\begin{figure}[H]
\centering
\subfigure[Dirichlet]{
\includegraphics[width=2.0in]{dirichlet_error_2d.eps}
}
\subfigure[Neumann]{
\includegraphics[width=2.0in]{neumann_error_2d.eps}
}
\quad
\subfigure[Robin]{
\includegraphics[width=2.0in]{robin_error_2d.eps}
}
\subfigure[Periodic]{
\includegraphics[width=2.0in]{periodic_error_2d.eps}
}
\caption{Training processes of DGM and DRM for four boundary conditions in 2D. Each neural network contains three residual blocks with four neural units in each layer. The mini-batch size is $2000$ in the domain and $400$ on the boundary for one epoch. The penalty parameter $\lambda = 100.0$ for Dirichlet, Neumann, and Robin boundary conditions and $\lambda_1 = 10.0, \lambda_2 = 5.0$ for periodic boundary condition.}
\label{DGM v.s. DRM in 2D}
\end{figure}
\begin{figure}[H]
\centering
\subfigure[Dirichlet]{
\includegraphics[width=2.0in]{dirichlet_error_4d.eps}
}
\subfigure[Neumann]{
\includegraphics[width=2.0in]{neumann_error_4d.eps}
}
\quad
\subfigure[Robin]{
\includegraphics[width=2.0in]{robin_error_4d.eps}
}
\subfigure[Periodic]{
\includegraphics[width=2.0in]{periodic_error_4d.eps}
}
\caption{Training processes of DGM and DRM for four boundary conditions in 4D. Each neural network contains three residual blocks with eight neural units in each layer. The mini-batch size is $2000$ in the domain, $800$ on the boundary for Dirichlet, Neumann, and Robin boundary conditions, and $8000$ on the boundary for periodic boundary condition. The penalty parameter $\lambda = 100.0$ for Dirichlet boundary condition, $\lambda = 1.0$ for Neumann boundary condition, $\lambda = 500.0$ for Robin boundary condition, and $\lambda_1 = 1.0, \lambda_2 = 0.5$ for periodic boundary condition.}
\label{DGM v.s. DRM in 4D}
\end{figure}
\begin{figure}[H]
\centering
\subfigure[Dirichlet]{
\includegraphics[width=2.0in]{dirichlet_error_8d.eps}
}
\subfigure[Neumann]{
\includegraphics[width=2.0in]{neumann_error_8d.eps}
}
\quad
\subfigure[Robin]{
\includegraphics[width=2.0in]{robin_error_8d.eps}
}
\subfigure[Periodic]{
\includegraphics[width=2.0in]{periodic_error_8d.eps}
}
\caption{Training processes of DGM and DRM for four boundary conditions in 8D. Each neural network contains three residual blocks with sixteen neural units in each layer. The mini-batch size is $2000$ in the domain for Dirichlet, Neumann, and Robin boundary conditions, and $4000$ in the domain for periodic boundary condition, $1600$ on the boundary for Dirichlet, Neumann, and Robin boundary conditions, and $16000$ on the boundary for periodic boundary condition. The penalty parameter $\lambda = 100.0$ for Dirichlet boundary condition, $\lambda = 1.0$ for Neumann boundary condition, $\lambda = 10.0$ for Robin boundary condition, and $\lambda_1 = 1.0, \lambda_2 = 0.5$ for periodic boundary condition.}
\label{DGM v.s. DRM in 8D}
\end{figure}
\begin{figure}[H]
\centering
\subfigure[Dirichlet]{
\includegraphics[width=2.0in]{dirichlet_error_16d.eps}
}
\subfigure[Neumann]{
\includegraphics[width=2.0in]{neumann_error_16d.eps}
}
\quad
\subfigure[Robin]{
\includegraphics[width=2.0in]{robin_error_16d.eps}
}
\subfigure[Periodic]{
\includegraphics[width=2.0in]{periodic_error_16d.eps}
}
\caption{Training processes of DGM and DRM for four boundary conditions in 16D. Each neural network contains three residual blocks with thirty-two neural units in each layer. The mini-batch size is $2000$ in the domain and $3200$ on the boundary. The penalty parameter $\lambda = 100.0$ for Dirichlet boundary condition, $\lambda = 1.0$ for Neumann boundary condition, $\lambda = 10.0$ for Robin boundary condition, and $\lambda_1 = 10.0, \lambda_2 = 5.0$ for periodic boundary condition.}
\label{DGM v.s. DRM in 16D}
\end{figure}
Table \ref{error} records relative $L^2$ errors of DGM and DRM for $d = 2, 4, 8, 16, 32$. The maximum number of training epochs is set to be $10000$ in 2D, $20000$ in 4D, $50000$ in 8D, $100000$ in 16D, and $400000$ in 32D. Other parameters of the neural networks are the same as those in Figures \ref{DGM v.s. DRM in 2D}--\ref{DGM v.s. DRM in 16D}. When $d=32$, the batch size is $2000$ in the domain and $3200$ on the boundary, the network width is $32$, and the depth is $4$. An entry $-$ means the training process does not converge before the maximum epoch number is reached. Generally speaking, DGM has better approximation accuracy in low-dimensional cases (see 2D and 4D, for example), while DRM outperforms it in high-dimensional cases (see 8D and 16D, for example). For \eqref{eqn:poisson}, the second-order derivative appears in the formulation of DGM while only the first-order derivative appears in DRM. To some extent, this observation is therefore unexpected, since the exact solution here is smooth and DGM would be expected to approximate it better.
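For reference, the relative $L^2$ error reported throughout can be estimated by Monte Carlo sampling. The sketch below is illustrative only: a hypothetical exact solution on $[0,1]^d$ and a synthetically perturbed function standing in for the trained DNN, neither of which is the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
# hypothetical exact solution on [0,1]^d and a perturbed stand-in for the DNN
u_exact = lambda x: np.cos(np.pi * x).sum(axis=1)
u_dnn = lambda x: u_exact(x) + 0.01 * np.sin(np.pi * x).prod(axis=1)

x = rng.random((100_000, d))  # uniform Monte Carlo samples in the domain
num = np.sqrt(np.mean((u_dnn(x) - u_exact(x)) ** 2))
den = np.sqrt(np.mean(u_exact(x) ** 2))
rel_l2 = num / den  # relative L^2 error estimate
```

The same sample average is used for all error tables below, with the trained network in place of the perturbed stand-in.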
\begin{table}[htbp]
\centering
\caption{Relative $L^2$ errors for four different boundary conditions in different dimensions. The number of training epochs is $10000$ in 2D, $20000$ in 4D, $50000$ in 8D, $100000$ in 16D, and $200000$ in 32D. Other parameters in DNNs are specified in Figure \ref{DGM v.s. DRM in 2D} - Figure \ref{DGM v.s. DRM in 16D}.}
\label{error}
\begin{tabular}{c|cc|cc|cc|cc}
\toprule[2pt]
\noalign{\smallskip}
\multirow{2}*{$d$}
&\multicolumn{2}{c}{Dirichlet}
&\multicolumn{2}{c}{Neumann}
&\multicolumn{2}{c}{Robin}
&\multicolumn{2}{c}{Periodic} \\
&\multicolumn{1}{c}{DGM} & \multicolumn{1}{c}{DRM}
&\multicolumn{1}{c}{DGM} & \multicolumn{1}{c}{DRM}
&\multicolumn{1}{c}{DGM} & \multicolumn{1}{c}{DRM}
&\multicolumn{1}{c}{DGM} & \multicolumn{1}{c}{DRM} \\
\noalign{\smallskip}
\midrule[1pt]
\noalign{\smallskip}
\multirow{1}*{2}
& 0.0071 & 0.0236 & 0.0020 & 0.0078 & 0.0006 & 0.0065 & 0.0063 & 0.0115 \\
\multirow{1}*{4}
& 0.0074 & 0.0105 & 0.0128 & 0.0336 & 0.0197 & 0.0622 & 0.0449 & 0.0514 \\
\multirow{1}*{8}
& 0.0226 & 0.0256 & 0.0674 & 0.0199 & 0.0561 & 0.0221 & 0.0672 & 0.0573 \\
\multirow{1}*{16}
& 0.0290 & 0.0224 & 0.1747 & 0.0368 & 0.0938 & 0.0379 & 0.0525 & 0.0617 \\
\multirow{1}*{{32}}
& {0.0912} & {0.0561} & {-} & {0.0399} & {0.1828} & {0.0303} & {-} & {-} \\
\noalign{\smallskip}
\bottomrule[2pt]
\end{tabular}
\end{table}
\subsection{Dependence on network structures}
The above observations hold over a wide range of factors, such as the penalty parameter, mini-batch size, activation function, neural depth, and neural width. In what follows, we show through several representative results how the approximation accuracy of DGM and DRM depends on these factors.
\subsubsection{Penalty parameter}
Consider Dirichlet boundary condition in 4D. Relative $L^2$ errors of DGM and DRM are recorded in Table \ref{4D Dirichlet penalty} for different penalty parameters. In theory, the penalty parameter $\lambda$ should be infinite if the exact solution is to be recovered; in practice, for a given DNN, $\lambda$ is always a finite number. It is observed from Table \ref{4D Dirichlet penalty} that the larger the penalty parameter $\lambda$ is, the better the approximation is. However, if $\lambda$ is set too small or too large, the penalty term becomes negligible or dominant; see Table \ref{penalty 2D}, for example, where the approximation error increases when $\lambda$ is too large. This may result in wrong DNN solutions, i.e., a DNN approximation that satisfies the PDE but not the boundary condition, or satisfies the boundary condition but not the PDE. Therefore, for a given DNN, how to choose a penalty parameter that guarantees the optimal approximation accuracy is of particular importance and deserves further consideration.
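The tradeoff can be seen in a minimal scalar analogue of the penalty method (our own illustration, unrelated to the networks above): minimizing an "interior" misfit $(c-2)^2$ plus a penalized "boundary" misfit $\lambda(c-3)^2$ gives the closed-form minimizer $c^*(\lambda) = (2+3\lambda)/(1+\lambda)$, which moves from the unconstrained optimum toward the constraint as $\lambda$ grows.

```python
def c_star(lam):
    # minimizer of (c - 2)^2 + lam * (c - 3)^2 over c (set the derivative to zero)
    return (2 + 3 * lam) / (1 + lam)

# small lambda: penalty nearly ignored; large lambda: penalty dominates
errors = [abs(c_star(lam) - 3) for lam in (0.1, 1.0, 10.0, 100.0)]
assert errors == sorted(errors, reverse=True)  # "boundary" misfit shrinks as lambda grows
```

In the PDE setting, however, a very large $\lambda$ also worsens the conditioning of the optimization, which is why intermediate values perform best in the tables.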
\begin{table}[H]
\centering
\caption{Relative $L^2$ errors of DGM and DRM in terms of penalty parameter $\lambda$ for Dirichlet boundary condition in 4D. The neural network contains three residual blocks with eight neural units in each layer. The activation function is $swish(x)$. The mini-batch size is $2000$ in the domain and $800$ on the boundary.}
\begin{tabular}{c|c|c}
\toprule[2pt]
$\lambda$ & DGM & DRM \\
\toprule[2pt]
0.1 & 0.2186 & 0.0185 \\
1.0 & 0.0366 & 0.0176 \\
10.0 & 0.0127 & 0.0196 \\
100.0 & 0.0081 & 0.0083 \\
\toprule[2pt]
\end{tabular}
\label{4D Dirichlet penalty}
\end{table}
\subsubsection{Mini-batch size}
Consider Robin boundary condition in 4D. Relative $L^2$ errors of DGM and DRM are recorded in Table \ref{4D Robin mini-batch} for different mini-batch sizes in the domain with a fixed mini-batch size on the boundary, and in Table \ref{4D Robin mini-batch boundary} for different mini-batch sizes on the boundary with a fixed mini-batch size in the domain. From the results, one can expect that a balance between sampling points in the domain and on the boundary yields the optimal approximation accuracy. Interested readers may refer to \cite{2020arXiv200206269V} for such an effort.
\begin{table}[H]
\centering
\caption{Relative $L^2$ errors of DGM and DRM in terms of mini-batch size in the domain. The neural network contains three residual blocks with eight neural units in each layer. The activation function is $swish(x)$. The mini-batch size is $800$ on the boundary. The penalty parameter $\lambda = 100$.}
\begin{tabular}{c|c|c}
\toprule[2pt]
Mini-batch size & DGM & DRM \\
\toprule[2pt]
500 & 0.0822 & 0.0230 \\
1000 & 0.1064 & 0.0266 \\
2000 & 0.0197 & 0.0622 \\
4000 & 0.1026 & 0.0321 \\
\toprule[2pt]
\end{tabular}
\label{4D Robin mini-batch}
\end{table}
\begin{table}[H]
\centering
\caption{{Relative $L^2$ errors of DGM and DRM in terms of mini-batch size on the boundary. The neural network contains three residual blocks with eight neural units in each layer. The activation function is $swish(x)$. The mini-batch size is $500$ in the domain. The penalty parameter $\lambda = 100$.}}
\begin{tabular}{c|c|c}
\toprule[2pt]
{Mini-batch size} & {DGM} & {DRM} \\
\toprule[2pt]
{400} & {0.0339} & {0.0184} \\
{800} & {0.1048} & {0.0204} \\
{1600} & {0.5105} & {0.0409} \\
\toprule[2pt]
\end{tabular}
\label{4D Robin mini-batch boundary}
\end{table}
\subsubsection{Activation function}
Consider Neumann boundary condition in 4D. Table \ref{4D Neumann acti} records relative $L^2$ errors of DGM and DRM for several activation functions. From Table \ref{4D Neumann acti}, we see that the choice of activation function is quite important. The failure of $relu(x)$ in DGM is due to the low regularity of the activation function and the use of higher derivatives in the loss function of DGM. This is why DGM is expected to perform better for smooth solutions while DRM is expected to perform better for low-regularity solutions. Based on the results in Section \ref{sec:bcresult}, however, the former is not always true; and in Section \ref{sec:nonlinearresult}, the latter turns out not to be true either.
\begin{table}[H]
\centering
\caption{Relative $L^2$ errors in terms of activation function. The neural network contains three residual blocks with eight neural units in each layer. The mini-batch size is $2000$ in the domain and $800$ on the boundary. The penalty parameter $\lambda = 500$.}
\begin{tabular}{c|c|c}
\toprule[2pt]
Activation function & DGM & DRM \\
\toprule[2pt]
$relu(x)$ & 0.9992 & 0.0783 \\
$sigmoid(x)$ & 0.0226 & 0.0136 \\
$swish(x)$ & 0.0176 & 0.0169 \\
${(\sin x)}^3$ & 0.0231 & 0.0110 \\
{$swish(ax)$} & {0.0147} & {0.0112} \\
\toprule[2pt]
\end{tabular}
\label{4D Neumann acti}
\end{table}
\subsubsection{Neural depth and neural width}
Consider Dirichlet boundary condition in 4D. Table \ref{4D Dirichlet depth} and Table \ref{4D Dirichlet width} record relative $L^2$ errors of DGM and DRM in terms of neural depth $n$ and neural width $m$, respectively. Approximation errors of DGM and DRM are expected to decrease as $n$ and $m$ increase, to some extent. Unlike for classical numerical methods, however, a systematic reduction of errors cannot be observed for DNNs.
\begin{table}[H]
\centering
\caption{Relative $L^2$ errors in terms of neural depth $n$. Each neural network contains varying residual blocks with eight neural units in each layer. The activation function is $swish(x)$. The mini-batch size is $2000$ in the domain and $800$ on the boundary. The penalty parameter $\lambda = 100$.}
\begin{tabular}{c|c|c}
\toprule[2pt]
Neural depth $n$ & DGM & DRM \\
\toprule[2pt]
2 & 0.0114 & 0.0193 \\
3 & 0.0074 & 0.0105 \\
4 & 0.0108 & 0.0057 \\
\toprule[2pt]
\end{tabular}
\label{4D Dirichlet depth}
\end{table}
\begin{table}[H]
\centering
\caption{Relative $L^2$ errors in terms of neural width $m$. Each neural network contains three residual blocks with varying neural units in each layer. The activation function is $swish(x)$. The mini-batch size is $2000$ in the domain and $800$ on the boundary. The penalty parameter $\lambda = 100$.}
\begin{tabular}{c|c|c}
\toprule[2pt]
Neural width $m$ & DGM & DRM \\
\toprule[2pt]
4 & 0.0218 & 0.1118 \\
6 & 0.0208 & 0.1124 \\
8 & 0.0074 & 0.0105 \\
10 & 0.0072 & 0.0095 \\
\toprule[2pt]
\end{tabular}
\label{4D Dirichlet width}
\end{table}
\subsubsection{Periodic boundary condition}\label{sec:pbc}
Consider equation \eqref{eqn:poisson} with periodic boundary condition of period $p_i=2, i=1,\cdots,d$. In order to construct a DNN which satisfies the periodicity, following \cite{han2020solving}, we apply a transform $\mathbb{R}^d \to \mathbb{R}^{2kd}$ to the input $x = (x_1,\cdots,x_d)$ before the first fully connected layer of the neural network. Each component $x_i$ of $x$ is transformed as
\begin{equation*}
x_i \to \{\sin(j \cdot 2\pi \frac {x_i} { p_i}) , \cos(j \cdot 2\pi \frac {x_i} { p_i})\}_{j = 1}^k
\end{equation*}
for $i = 1, \cdots, d$.
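A minimal NumPy sketch of this input transform (shapes and names are our own) shows that any network composed with these features is exactly periodic with period $p_i$ in each coordinate:

```python
import numpy as np

def periodic_features(x, p, k):
    """Map x in R^d to R^{2kd}: x_i -> {sin(2*pi*j*x_i/p_i), cos(2*pi*j*x_i/p_i)}, j = 1..k."""
    x = np.atleast_2d(x)                      # (n, d)
    j = np.arange(1, k + 1)                   # frequencies 1..k
    ang = 2.0 * np.pi * x[:, :, None] * j / np.asarray(p)[None, :, None]  # (n, d, k)
    feats = np.concatenate([np.sin(ang), np.cos(ang)], axis=-1)           # (n, d, 2k)
    return feats.reshape(x.shape[0], -1)      # (n, 2kd)

# shifting any coordinate by a multiple of its period leaves the features unchanged
z = periodic_features(np.array([[0.3, 1.7]]), p=[2.0, 2.0], k=3)
z_shift = periodic_features(np.array([[0.3 + 2.0, 1.7 - 4.0]]), p=[2.0, 2.0], k=3)
assert np.allclose(z, z_shift)
```

Since the features themselves are $p_i$-periodic, no boundary penalty is needed for the periodic case.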
The exact solution used in Section \ref{sec:pbcpenalty} can be represented exactly by the above transform, which makes the approximation error artificially small. To avoid this, we choose the exact solution $u(x) = \sum_{i = 1}^d \cos(\pi x_i) \cos(2 \pi x_i)$, which cannot be explicitly represented by the above transform. The approximation error is recorded in Table \ref{p1error}.
\begin{table}[H]
\centering
\caption{{Relative $L^2$ errors for periodic boundary condition without the penalty term in different dimensions. Here we set $k = 3$.}}
\begin{tabular}{|c|c|c|}
\hline
{d} & {DGM} & {DRM} \\
\hline
{2} & {0.0033} & {0.0281} \\
\hline
{4} & {0.0012} & {0.0656} \\
\hline
{8} & {0.0021} & {0.0657} \\
\hline
{16} & {0.0067} & {0.0490} \\
\hline
\end{tabular}
\label{p1error}
\end{table}
\subsection{A nonlinear problem with low-regularity solution}\label{sec:nonlinearresult}
Note that all the previous examples are linear PDEs whose solutions belong to $C^{\infty}(\Omega)$. Next, we study a nonlinear PDE with a low-regularity solution. The nonlinear problem over the unit ball $\Omega = \{x \in \mathbb{R}^d:|x| < 1\} $ reads as
\begin{equation}
\begin{cases}
- \Delta u + u^3 = f(x), & \; \text{in} \; \Omega,\\
u(x) = 0, & \; \text{on} \; \partial \Omega.
\end{cases}
\end{equation}
The exact solution $u(x) = \sin \left(\frac{\pi}{2} (1 - |x|)\right) \in C^1(\Omega)$ but $u(x) \notin C^2(\Omega)$, and
\begin{equation*}
f(x) = \frac{\pi^2}{4} \sin \left(\frac{\pi}{2} (1 - |x|)\right) + \frac{\pi}{2} \cos \left(\frac{\pi}{2} (1 - |x|)\right) \frac{d-1}{|x|} + \sin^3 \left(\frac{\pi}{2} (1 - |x|)\right).
\end{equation*}
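As a sanity check, the pair $(u, f)$ above can be verified with SymPy; the sketch below evaluates the residual $-\Delta u + u^3 - f$ in 2D at a few arbitrary points.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', positive=True)
d = 2
r = sp.sqrt(x1**2 + x2**2)
g = sp.pi / 2 * (1 - r)
u = sp.sin(g)
f = sp.pi**2 / 4 * sp.sin(g) + sp.pi / 2 * sp.cos(g) * (d - 1) / r + sp.sin(g)**3

# residual of the PDE: -Delta u + u^3 - f should vanish identically
residual = -(sp.diff(u, x1, 2) + sp.diff(u, x2, 2)) + u**3 - f
for pt in [{x1: 0.3, x2: 0.4}, {x1: 0.1, x2: 0.7}]:
    assert abs(float(residual.subs(pt))) < 1e-8
```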
Loss functions associated to DGM and DRM are
\begin{equation}
\mathcal{J}_{\textrm{DGM}} [u(x;\theta)] = \int_{\Omega} {| - \Delta u(x;\theta) + {u(x;\theta)}^3 - f(x) |}^2 \mathrm{d}x,
\end{equation}
\begin{equation}
\mathcal{J}_{\textrm{DRM}} [u(x;\theta)] = \int_{\Omega} {\frac{1}{2} {|\nabla u(x;\theta)|}^2 + \frac{1}{4} {u(x;\theta)}^4 - f(x) u(x;\theta)} \mathrm{d}x,
\end{equation}
and the penalty term is
\begin{equation}
\mathcal{B}_{\textrm{D}} [u(x;\theta)] = \int_{\partial\Omega} {| u(x;\theta)|}^2 \mathrm{d}s.
\end{equation}
Thus, total loss functions of DGM and DRM with penalty are
\begin{equation}
\mathcal{I}_{\textrm{DGM}} [u(x;\theta)] = \mathcal{J}_{\textrm{DGM}} [u(x;\theta)] + \lambda \mathcal{B}_{\textrm{D}} [u(x;\theta)],
\end{equation}
and
\begin{equation}
\mathcal{I}_{\textrm{DRM}} [u(x;\theta)] = \mathcal{J}_{\textrm{DRM}} [u(x;\theta)] + \lambda \mathcal{B}_{\textrm{D}} [u(x;\theta)],
\end{equation}
respectively.
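The boundary term $\mathcal{B}_{\textrm{D}}$ can be estimated by uniform sampling on the unit sphere (normalized Gaussian vectors). The sketch below plugs in the exact solution rather than a trained network, confirming that the penalty estimate vanishes when the Dirichlet condition holds exactly:

```python
import numpy as np
from math import gamma, pi

rng = np.random.default_rng(1)
d, n = 4, 20_000
s = rng.standard_normal((n, d))
s /= np.linalg.norm(s, axis=1, keepdims=True)  # uniform points on the unit sphere

u = lambda x: np.sin(pi / 2 * (1 - np.linalg.norm(x, axis=1)))  # exact solution
area = 2 * pi ** (d / 2) / gamma(d / 2)        # surface area of S^{d-1}
B_D = area * np.mean(u(s) ** 2)                # Monte Carlo estimate of the boundary penalty
assert B_D < 1e-12                             # exact solution vanishes on the boundary
```

During training, the same estimator is applied to the network $u(x;\theta)$, for which $\mathcal{B}_{\textrm{D}}$ is generally nonzero.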
\subsubsection{Dimensional dependence}
Relative $L^2$ errors of DGM and DRM are reported in Table \ref{Dirichlet Dim 3} for dimensions $d = 2, 4, 8$. Each neural network contains three residual blocks; the number of neural units in each layer is 8 for 2D and 4D, and 16 for 8D. The activation function is $swish(x)$. The mini-batch size is $2000$ in the domain and $400$ on the boundary in 2D, $1000$ in the domain and $800$ on the boundary in 4D, and $1000$ in the domain and $1600$ on the boundary in 8D. The penalty parameter is $50.0$ for 2D, $100.0$ for 4D, and $400.0$ for 8D. To our surprise, DGM outperforms DRM by over one order of magnitude. Note that the exact solution is only in $C^1(\Omega)$, and the second-order derivative appears in DGM while only the first-order derivative is needed in DRM; such an observation therefore definitely deserves further investigation. Moreover, this observation holds over a wide range of factors, such as the penalty parameter, mini-batch size, activation function, neural depth, and neural width. We show how the approximation accuracy of DGM and DRM depends on a couple of representative factors in what follows.
\begin{table}[H]
\centering
\caption{Relative $L^2$ errors of DGM and DRM in different dimensions. Each neural network contains three residual blocks with varying neural units in each layer. The number of neural units is 8 for 2D and 4D, and 16 for 8D. The activation function is $swish(x)$. The mini-batch size is $2000$ in the domain and $400$ on the boundary in 2D, $1000$ in the domain and $800$ on the boundary in 4D, and $1000$ in the domain and $1600$ on the boundary for 8D. The penalty parameter is $50.0$ for 2D, $100.0$ for 4D, and $400.0$ for 8D.}
\begin{tabular}{c|c|c}
\toprule[2pt]
$d$ & DGM & DRM \\
\toprule[2pt]
2 & 0.0003 & 0.0090 \\
4 & 0.0055 & 0.0777 \\
8 & 0.0292 & 0.1603 \\
\toprule[2pt]
\end{tabular}
\label{Dirichlet Dim 3}
\end{table}
\subsubsection{Penalty parameter}
Table \ref{penalty 2D} records relative $L^2$ errors of DGM and DRM in terms of penalty parameter $\lambda$ in 2D.
\begin{table}[H]
\centering
\caption{Relative $L^2$ errors of DGM and DRM in terms of penalty parameter $\lambda$ in 2D. Each neural network contains three residual blocks with eight neural units in each layer. The activation function is $swish(x)$. The mini-batch size is $2000$ in the domain and $400$ on the boundary.}
\begin{tabular}{c|c|c}
\toprule[2pt]
$\lambda$ & DGM & DRM \\
\toprule[2pt]
50.0 & 0.0011 & 0.0517 \\
100.0 & 0.0022 & 0.0161 \\
200.0 & 0.0015 & 0.0076 \\
400.0 & 0.0003 & 0.0090 \\
{2000.0} & {0.0006} & {0.0175} \\
{10000.0} & {0.0038} & {0.0300} \\
{100000.0} & {0.0106} & {0.3873} \\
\toprule[2pt]
\end{tabular}
\label{penalty 2D}
\end{table}
\subsubsection{Activation function}
Table \ref{4D acti} records relative $L^2$ errors of DGM and DRM with respect to the activation function in 4D. From Table \ref{4D acti}, we see that $relu(x)$ still fails for the same reason as before, and that $swish(x)$ is the best among all the tested functions.
\begin{table}[H]
\centering
\caption{Relative $L^2$ errors of DGM and DRM with respect to activation function in 4D. Each neural network contains three residual blocks with eight neural units in each layer. The mini-batch size is $1000$ in the domain and $800$ on the boundary. The penalty parameter $\lambda$ is $100$.}
\begin{tabular}{c|c|c}
\toprule[2pt]
Activation function & DGM & DRM \\
\toprule[2pt]
$relu(x)$ & 0.9990 & 0.1546 \\
$sigmoid(x)$ & 0.0262 & 0.0881 \\
$swish(x)$ & 0.0055 & 0.0777 \\
${(\sin x)}^3$ & 0.0146 & 0.0907 \\
\toprule[2pt]
\end{tabular}
\label{4D acti}
\end{table}
\subsubsection{With versus without penalty}
For Dirichlet boundary condition, as discussed earlier, we can actually avoid the penalty term \cite{berg2018unified} by constructing a trial function in the form of \eqref{eqn:twostage}. Since $\Omega$ is the unit ball, there exists a simple way to construct a trial function which automatically satisfies the exact boundary condition. Precisely, we can build the neural network solution in the form of $u(x;\theta) = (1 - |x|) DNN(x;\theta)$, where $DNN(x;\theta)$ is the DNN approximation to be trained. This construction is used for both DGM and DRM without the penalty term for comparison purposes.
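A minimal sketch of this construction (with a tiny random network standing in for the trained DNN) confirms that the trial function vanishes identically on $|x| = 1$:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4
W1, b1 = rng.standard_normal((d, 8)), rng.standard_normal(8)
W2, b2 = rng.standard_normal((8, 1)), rng.standard_normal(1)

def dnn(x):      # a tiny stand-in network with tanh activation
    return (np.tanh(x @ W1 + b1) @ W2 + b2).ravel()

def u_trial(x):  # (1 - |x|) * DNN(x): zero on |x| = 1 by construction
    return (1.0 - np.linalg.norm(x, axis=1)) * dnn(x)

s = rng.standard_normal((1000, d))
s /= np.linalg.norm(s, axis=1, keepdims=True)   # points on the unit sphere
assert np.allclose(u_trial(s), 0.0)
```

Because the boundary condition holds exactly for every $\theta$, no penalty term is needed, and training minimizes the interior loss alone.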
Figure \ref{nonlinearPDE} plots training processes of DGM and DRM with and without penalty, and Table \ref{nonlinear Table} records the corresponding relative $L^2$ errors in 4D. Each neural network contains three residual blocks with eight neural units in each layer. The mini-batch size is $1000$ in the domain and $800$ on the boundary. The penalty parameter $\lambda$ is $100.0$. From Figure \ref{nonlinearPDE}, we see that both DGM and DRM converge better without the penalty term; sometimes DGM and DRM without penalty even converge when they fail to converge in the presence of the penalty term. Besides, from Table \ref{nonlinear Table}, we see that DGM outperforms DRM by over one order of magnitude regardless of the penalty term, and that both methods improve by over one order of magnitude when the trial function automatically satisfies the boundary condition. Together, these results show the great importance of boundary conditions: a better treatment not only facilitates the training process but also provides a better approximation accuracy for the same network setup.
\begin{figure}[H]
\centering
\includegraphics[width= 12cm]{nonlinear_dirichlet_error_4d.eps}
\caption{Training processes of DGM and DRM with or without penalty. Each neural network contains three residual blocks with eight neural units in each layer. The mini-batch size is $1000$ in the domain and $800$ on the boundary. The penalty parameter $\lambda$ is $100.0$. The total number of epochs is $10000$.}
\label{nonlinearPDE}
\end{figure}
\begin{table}[H]
\centering
\caption{Relative $L^2$ errors of DGM and DRM with or without penalty.}
\begin{tabular}{c|c|c}
\toprule[2pt]
With or without penalty & DGM & DRM \\
\toprule[2pt]
With penalty & 0.0055 & 0.0777 \\
Without penalty & 0.0002 & 0.0084 \\
\toprule[2pt]
\end{tabular}
\label{nonlinear Table}
\end{table}
\section{Conclusions}\label{sec:conclusion}
In this work, we have conducted a comprehensive study of four different boundary conditions, i.e., Dirichlet, Neumann, Robin, and periodic boundary conditions, using two representative methods: DRM and DGM. It is commonly thought that DGM works better for smooth solutions while DRM works better for low-regularity solutions. However, through a number of examples, we have observed that DRM can outperform DGM, with a clear dependence on dimensionality, even for smooth solutions, and that DGM can also outperform DRM for low-regularity solutions. Besides, in some cases, when the boundary condition can be implemented in an exact manner, we have found that such a strategy not only provides a better approximate solution but also facilitates the training process.
There are several interesting issues which deserve further consideration. Since the penalty method works in general, the most important issue is the choice of penalty parameters. For a fixed neural structure, a good choice of these parameters not only facilitates the training process but also provides a better approximation. Another issue is to understand why DGM outperforms DRM for low-regularity problems.
\section*{Acknowledgements}
This work is supported in part by the grants NSFC 11971021 and National Key R\&D Program of China (No. 2018YF645B0204404) (J.~Chen), NSFC 11501399 (R.~Du). We are grateful to Liyao Lyu for helpful discussions. All codes for producing the results in this work are available at \url{https://github.com/wukekever/DGM-and-DRM}.
\newpage
\bibliographystyle{unsrt}
% arXiv:2005.04554 -- A comparison study of deep Galerkin method and deep Ritz method for elliptic problems with different boundary conditions
% arXiv:2211.01318
\title{Proving Taylor's Theorem from the Fundamental Theorem of Calculus by Fixed-point Iteration}
\begin{abstract}
Taylor's theorem (and its variants) is widely used in several areas of mathematical analysis, including numerical analysis, functional analysis, and partial differential equations. This article explains how Taylor's theorem in its most general form can be proved simply as an immediate consequence of the Fundamental Theorem of Calculus (FTOC). The proof shows the deep connection between the Taylor expansion and fixed-point iteration, which is a foundational concept in numerical and functional analysis. One elegant variant of the proof also demonstrates the use of combinatorics and symmetry in proofs in mathematical analysis. Since the proof emphasizes concepts and techniques that are widely used in current science and industry, it can be a valuable addition to the undergraduate mathematics curriculum.
\end{abstract}
\section{Introduction}
Taylor's theorem is one of the most important results taught in basic calculus classes. Its importance is both theoretical and practical. Taylor's theorem is a foundational result in the field of numerical analysis: many error estimates for numerical solutions to algebraic or differential equations are based on the Taylor expansion of the solution. The Taylor series is also fundamental to the theory and application of differential equations, in which series solutions play a large role. In fact, the Cauchy-Kowalevski theorem that establishes the existence and uniqueness of solutions to partial differential equations relies on Taylor expansion of the solution \cite{folland1995}.
It has been noted that the proofs of Taylor’s theorem given in many textbooks are not well motivated \cite{zvonimir1990}. Most often, the theorem is derived either using Rolle's theorem \cite{thomas2013}, or as a consequence of the mean value theorem \cite{leithold1996}, or by repeated integration by parts \cite{courant1999}. A less common proof uses induction \cite{brauer1987}.
It would seem that such a key theorem should have a stronger motivation, and should not be derived as an incidental result from other theorems.
The purpose of this paper is to show that Taylor’s theorem is in fact an immediate consequence of the fundamental theorem of calculus; and furthermore, the proof is a straightforward application of fixed point iteration, which is one of the most basic techniques in numerical and functional analysis. We thus both provide motivation for Taylor's theorem, and show its deep relationship with other areas of mathematical analysis.
The paper is organized as follows. Section \ref{Background} gives necessary background for the mathematical concepts and notation used in the proof. Section \ref{Method} presents the new proof of Taylor's theorem. Finally, Section \ref{Discussion} discusses the possible instructional uses for this new proof.
\section{Background}\label{Background}
This section introduces notation and concepts necessary for understanding the proof of Taylor's theorem in Section~\ref{Method}. These concepts are indispensable in modern applied mathematics.
\subsection{Theorem statements}
The fundamental theorem of calculus expresses the inverse relationship between differentiation and integration. There are many alternative statements of the theorem that differ slightly from one another. We will use the following formulation \cite{barcenas2000}:
\begin{thm}\label{thm0}{(Fundamental theorem of calculus for the Lebesgue integral)~~}
A function $f : [a, b] \rightarrow \ensuremath{\mathbb{R}}$ is absolutely continuous if and only if it is differentiable almost everywhere, its derivative $f' \in L^1[a,b]$, and, for each $x \in [a, b]$,
\begin{equation}\label{FTOC}
f(x)=f(a)+\int_{a}^{x} f^{\prime}\left(t \right) d t .
\end{equation}
\end{thm}
Several alternative proofs of this theorem are cited in \cite{barcenas2000}.
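Identity \eqref{FTOC} is easy to check symbolically for a smooth example; the sketch below takes $a = 0$ and the test function $f(x) = e^x \cos x$ (our choice, purely for illustration).

```python
import sympy as sp

x, t = sp.symbols('x t')
a = 0
f = sp.exp(x) * sp.cos(x)

# f(x) = f(a) + \int_a^x f'(t) dt
rhs = f.subs(x, a) + sp.integrate(sp.diff(f, x).subs(x, t), (t, a, x))
assert sp.simplify(rhs - f) == 0
```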
Taylor's theorem may be stated as follows:
\begin{thm}\label{thm1}{(Taylor's theorem)~~}
Let $f : [a, b] \rightarrow \ensuremath{\mathbb{R}} $ be a function whose $N$th derivative $f^{(N)}$ is absolutely continuous on $[a,b]$. Then for all $x \in [a,b]$
\begin{equation}\label{taylor}
\begin{aligned}
f(x)&=f(a)+(x-a)f'(a) + \frac{(x-a)^2}{2!}f''(a) + \ldots \\
& \qquad \ldots+ \frac{(x-a)^N}{N!}f^{(N)}(a) + R_N(x),
\end{aligned}
\end{equation}
where the remainder $R_N(x)$ satisfies the inequality
\begin{equation}\label{boundR}
|R_N(x)| \le \sup_{c \in [a,x]} |f^{(N+1)}(c)| \frac{(x-a)^{N+1}}{(N+1)!},
\end{equation}
and is given exactly by the expression
\begin{equation}\label{exactR}
R_N(x) = \int_a^x \frac{(x-t)^{N}}{N!}f^{(N+1)}(t) dt.
\end{equation}
\end{thm}
\subsection{Linear operators and some basic properties}\label{ops}
The concept of a \emph{linear operator} is not typically mentioned in basic calculus classes: however, there are good reasons for changing this practice. Modern computational mathematics is fundamentally based on the connection between linear algebra and analysis, and the idea of linear operator is a key aspect of this connection.
Linear operators play the same role in calculus that matrices play in linear algebra, as we shall now explain in more detail.
Let $V,W$ be finite-dimensional real vector spaces, and let $L:V\rightarrow W$ be a mapping (i.e. a function) with domain $V$ and codomain $W$. The mapping $L$ is \emph{linear} if for all vectors $\ensuremath{\mathbf{u}} ,\ensuremath{\mathbf{v}} \in V$ and constants $a,b \in \mathbb{R}$ we have
\begin{equation}\label{linearDef}
L(a\ensuremath{\mathbf{u}} + b\ensuremath{\mathbf{v}} ) = aL(\ensuremath{\mathbf{u}} ) + bL(\ensuremath{\mathbf{v}} )
\end{equation}
It is a well-known result in linear algebra that every linear mapping $L$ between finite-dimensional vector spaces can be associated with a unique matrix (which we denote here as $M$) such that $L(\ensuremath{\mathbf{v}} ) = M\ensuremath{\mathbf{v}} $.
In order to draw the connection between matrices and linear operators, we observe that function spaces (such as $L^1[a,b]$ or the set of absolutely continuous functions) are infinite-dimensional vector spaces. Therefore given two function spaces $A,B$, we may consider the set of mappings from $A$ to $B$. Such mappings are called \emph{operators}. An operator $\ensuremath{\mathcal{L}} :A \rightarrow B$ is \emph{linear} if for any functions $f,g \in A$ we have
\begin{equation}\label{linearOp}
\ensuremath{\mathcal{L}} (af + bg) = a\ensuremath{\mathcal{L}} (f) + b\ensuremath{\mathcal{L}} (g),
\end{equation}
which obviously corresponds exactly with \eqref{linearDef}. Because of the close connection between matrices and linear operators, the action of a linear operator is often written as if it were a multiplication: for example, $\ensuremath{\mathcal{L}} (f)$ is written instead as $\ensuremath{\mathcal{L}} f$, so that \eqref{linearOp} is written as
\begin{equation}\label{linearOp2}
\ensuremath{\mathcal{L}} (af + bg) = a\ensuremath{\mathcal{L}} f + b\ensuremath{\mathcal{L}} g.
\end{equation}
Note that $\ensuremath{\mathcal{L}} f$ is in itself a real-valued function, which is a mapping from \ensuremath{\mathbb{R}} to \ensuremath{\mathbb{R}} : thus $\ensuremath{\mathcal{L}} f(x)$ denotes the value of the function $\ensuremath{\mathcal{L}} f$ evaluated at the point $x$.
Two important examples of operators are the \emph{differentiation } (or \emph{derivative}) \emph{operator}
\begin{equation}\label{Ddef}
\ensuremath{\mathcal{D}} f(x) := \lim_{\delta \rightarrow 0} \frac{f(x+\delta)-f(x)}{\delta},
\end{equation}
where \ensuremath{\mathcal{D}} is defined on the set of absolutely continuous functions; and the \emph{integral operator}
\begin{equation}\label{Idef}
\ensuremath{\mathcal{I}_a} g(x) := \int_a^x g(t) dt,
\end{equation}
which is defined for all $g \in L^1[a,b]$.
Another operator that is defined for absolutely continuous functions is the \emph{evaluation operator}:
\begin{equation}
f \rightarrow f(a)\ensuremath{\mathbf{1}} ,
\end{equation}
where $a$ is any number in the domain of $f$, and \ensuremath{\mathbf{1}} denotes the constant function that takes the value 1 for all values of $x$ in the domain of $f$.
Operators can be composed to produce other operators. In the subsequent discussion, the \emph{associativity} of operator composition will play an important role: if $\ensuremath{\mathcal{J}} , \ensuremath{\mathcal{K}} , \ensuremath{\mathcal{L}} $ are compatible operators, then for any function $f$ in the domain of \ensuremath{\mathcal{L}} we have
\begin{equation}
\ensuremath{\mathcal{J}} (\ensuremath{\mathcal{K}} \ensuremath{\mathcal{L}} f) = (\ensuremath{\mathcal{J}} \ensuremath{\mathcal{K}} ) \ensuremath{\mathcal{L}} f.
\end{equation}
It follows that parentheses are unnecessary when writing operator compositions: for example, we may write $\ensuremath{\mathcal{I}_a} (\ensuremath{\mathcal{I}_a} (\ensuremath{\mathcal{D}} (\ensuremath{\mathcal{D}} f)))$ as $\ensuremath{\mathcal{I}_a} ^2\ensuremath{\mathcal{D}} ^2f$.
A linear combination of linear operators is also a linear operator. In Section~\ref{Method} we will make use of the following linear operator, which is defined as a linear combination:
\begin{equation}\label{ldef}
\ensuremath{\mathcal{L}} f := f(a)\ensuremath{\mathbf{1}} + \ensuremath{\mathcal{I}_a} \ensuremath{\mathcal{D}} f,
\end{equation}
where \ensuremath{\mathcal{L}} is defined on the set of absolutely continuous real-valued functions whose domain is an interval containing $a$.
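As a quick sanity check of this definition (a numerical sketch of ours, not from the text; the derivative is supplied analytically rather than via a discretized \ensuremath{\mathcal{D}} , and the test point is an arbitrary choice), applying this operator to a smooth function should simply reproduce it, by the fundamental theorem of calculus:

```python
import math

def L(f, df, a):
    """Sketch of L f = f(a)*1 + I_a(D f); df is the derivative, supplied analytically."""
    def Lf(x, n=20_000):
        # trapezoidal approximation of integral_a^x f'(t) dt
        h = (x - a) / n
        s = 0.5 * (df(a) + df(x)) + sum(df(a + i * h) for i in range(1, n))
        return f(a) + h * s
    return Lf

Lf = L(math.sin, math.cos, a=0.0)
print(Lf(1.3), math.sin(1.3))  # L f agrees with f, per the fundamental theorem of calculus
```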
In Section~\ref{Method} we will make use of the \emph{monotonicity} property of the integral operator: if $g, h \in L^1[a,x]$, then
\begin{equation}\label{Imon}
g \le h \implies \ensuremath{\mathcal{I}_a} g \le \ensuremath{\mathcal{I}_a} h,
\end{equation}
where the inequalities in \eqref{Imon} hold pointwise throughout the interval $[a,x]$. Since
\begin{equation}\label{Imon0}
\begin{aligned}
- \sup_{c\in [a,x]}|g(c)| \cdot \ensuremath{\mathbf{1}} \le g \le \sup_{c\in [a,x]}|g(c)| \cdot \ensuremath{\mathbf{1}} \\
\end{aligned}
\end{equation}
it follows from the monotonicity \eqref{Imon} and the linearity of \ensuremath{\mathcal{I}_a} that
\begin{equation}\label{Imon1}
\begin{aligned}
- \sup_{c\in [a,x]}|g(c)| \cdot \ensuremath{\mathcal{I}_a} \ensuremath{\mathbf{1}} &\le \ensuremath{\mathcal{I}_a} g \le \sup_{c\in [a,x]}|g(c)| \cdot \ensuremath{\mathcal{I}_a} \ensuremath{\mathbf{1}} \\
\implies | \ensuremath{\mathcal{I}_a} g | &\le \sup_{c\in [a,x]}|g(c)| \cdot \ensuremath{\mathcal{I}_a} \ensuremath{\mathbf{1}} .
\end{aligned}
\end{equation}
By iterating \eqref{Imon} $n$ times and applying $\ensuremath{\mathcal{I}_a} ^n$ to \eqref{Imon0}, we may generalize \eqref{Imon1} to
\begin{equation}\label{Imon2}
\begin{aligned}
| \ensuremath{\mathcal{I}_a} ^n g | \le \sup_{c\in [a,x]}|g(c)| \cdot \ensuremath{\mathcal{I}_a} ^n \ensuremath{\mathbf{1}} .
\end{aligned}
\end{equation}
\subsection{Fixed points and fixed-point iteration}\label{Fixed}
A \emph{fixed point} is any mathematical object $x$ that satisfies an equation of the form $x = g(x)$, where $g$ is a function. The concept of a fixed point plays a hugely important role in mathematical analysis, both theoretically and computationally. The online resource MathWorld lists nine prominent fixed-point theorems that appear in diverse areas of mathematics, including the famous Brouwer and Banach fixed-point theorems for topological and metric spaces respectively \cite{enwiki:1087275378}.
Fixed point ideas also figure in other important theoretical results.
The central limit theorem in probability is connected to the fact that the standard normal density function $f$ is a fixed point of the rescaled convolution operator $f \rightarrow \sqrt{2}\,(f*f)(\sqrt{2}x)$ \cite{feller1968}. For a time-dependent differential equation $\frac{df}{dt} = A(f,x)$, a steady-state solution satisfies $A(f,x)=0$, which can also be written as a fixed-point equation: $f = f + A(f,x)$. Furthermore, periodic solutions correspond to fixed points of the Poincar\'e map \cite{angenent2016}. In general, any algebraic equation can be written as a fixed-point equation: for example, we have for any real-valued function $f$
\begin{equation}
f(x) = 0 \iff x = x + f(x).
\end{equation}
On the practical side, one of the most versatile and widely applied techniques in numerical analysis is \emph{fixed-point iteration}.
One example is Newton's method for finding roots of functions. This method is based on the fact that any root $x^*$ of a differentiable function $f:\mathbb{R}\rightarrow \mathbb{R}$
is a locally stable fixed point of the function $g(x) := x - \frac{f(x)}{f^{\prime}(x)}$, as long as $f^{\prime}(x^*) \neq 0$.
As a result, $x^*$ can be estimated numerically by iteration: given an initial guess $x_0$ that is sufficiently close to $x^*$, the sequence $g(x_0), g(g(x_0)), g(g(g(x_0))), \ldots, g^{(n)}(x_0), \ldots$ converges to $x^*$.
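A minimal sketch of this iteration (our own illustration; the target function $x^2-2$ and the starting point are hypothetical choices):

```python
def newton_fixed_point(f, df, x0, iters=50):
    """Iterate g(x) = x - f(x)/f'(x); a simple root is a locally stable fixed point of g."""
    x = x0
    for _ in range(iters):
        x = x - f(x) / df(x)
    return x

# Hypothetical example: the positive root of x^2 - 2, i.e. sqrt(2)
root = newton_fixed_point(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
print(root)
```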
Another example is the power method for finding the largest eigenvector-eigenvalue pair of a matrix (or more generally, a linear operator).
Given a matrix $M$, the unit eigenvector \ensuremath{\mathbf{v}} corresponding to the eigenvalue with largest absolute value is a solution to the fixed-point equation $\ensuremath{\mathbf{v}} = F(\ensuremath{\mathbf{v}} )$, where $F(\ensuremath{\mathbf{v}} ) := \frac{M\ensuremath{\mathbf{v}} }{|M\ensuremath{\mathbf{v}} |}$. It follows that \ensuremath{\mathbf{v}} can be estimated numerically by repeatedly iterating the function $F$ from a generic starting vector $\ensuremath{\mathbf{v}} _0$: $F^{(n)}(\ensuremath{\mathbf{v}} _0) \xrightarrow[n \rightarrow \infty]{} \ensuremath{\mathbf{v}} $ \cite{panju2011}. In physics, perturbation series are often obtained through fixed-point iteration. A prominent example is the \emph{Born series} in electromagnetic and quantum scattering \cite{newton2013}.
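A minimal sketch of the power iteration (our own illustration, not from the text; the $2\times 2$ matrix is a hypothetical example with eigenvalues $3$ and $1$, so the dominant unit eigenvector is $(1,1)/\sqrt{2}$):

```python
def power_method(M, v, iters=500):
    """Iterate F(v) = Mv/|Mv|; converges (up to sign) to a dominant unit eigenvector."""
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Hypothetical symmetric example: eigenvalues 3 and 1, dominant eigenvector along (1, 1)
M = [[2.0, 1.0], [1.0, 2.0]]
v = power_method(M, [1.0, 0.0])
print(v)
```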
\section{Method}\label{Method}
\subsection{Basic proof of Taylor's theorem}
It is not commonly recognized in calculus textbooks that the fundamental theorem of calculus is simply a fixed-point equation:
\begin{equation}\label{FTOC2}
f = \ensuremath{\mathcal{L}} f = f(a)\ensuremath{\mathbf{1}} + \ensuremath{\mathcal{I}_a} \ensuremath{\mathcal{D}} f,
\end{equation}
where \ensuremath{\mathcal{L}} is defined in \eqref{ldef}.
According to Theorem~\ref{thm0}, \eqref{FTOC2} holds almost everywhere for all absolutely continuous functions $f: [a,b]\rightarrow \ensuremath{\mathbb{R}} $.
To obtain Taylor's theorem, we apply a slight modification of the technique of fixed-point iteration described in Section~\ref{Fixed}.
If the derivative $\ensuremath{\mathcal{D}} f$ is absolutely continuous,
then by \eqref{FTOC2} we have $\ensuremath{\mathcal{D}} f = \ensuremath{\mathcal{L}} \ensuremath{\mathcal{D}} f = \ensuremath{\mathcal{D}} f(a)\ensuremath{\mathbf{1}} + \ensuremath{\mathcal{I}_a} \ensuremath{\mathcal{D}} (\ensuremath{\mathcal{D}} f)$. Making this replacement in \eqref{FTOC2} gives
\begin{equation}\label{iter2}
\begin{aligned}
f &= f(a)\ensuremath{\mathbf{1}} + \ensuremath{\mathcal{I}_a} \ensuremath{\mathcal{D}} f\\
&= f(a)\ensuremath{\mathbf{1}} + \ensuremath{\mathcal{I}_a} (\, \ensuremath{\mathcal{D}} f(a)\ensuremath{\mathbf{1}} + \ensuremath{\mathcal{I}_a} \ensuremath{\mathcal{D}} (\ensuremath{\mathcal{D}} f) \,)\\
&= f(a)\ensuremath{\mathbf{1}} + \ensuremath{\mathcal{D}} f(a)\ensuremath{\mathcal{I}_a} \ensuremath{\mathbf{1}} + \ensuremath{\mathcal{I}_a} (\ensuremath{\mathcal{I}_a} \ensuremath{\mathcal{D}} (\ensuremath{\mathcal{D}} f))\\
&= f(a)\ensuremath{\mathbf{1}} + \ensuremath{\mathcal{D}} f(a)\ensuremath{\mathcal{I}_a} \ensuremath{\mathbf{1}} + \ensuremath{\mathcal{I}_a} ^2 \ensuremath{\mathcal{D}} ^2 f.
\end{aligned}
\end{equation}
The last two equalities in \eqref{iter2} follow from the linearity and associativity properties of operators described in Section~\ref{ops}.
We can now continue the iterative process. If $\ensuremath{\mathcal{D}} ^2 f$ is absolutely continuous, then by \eqref{FTOC2} we may similarly replace $\ensuremath{\mathcal{D}} ^2 f$ in \eqref{iter2} with $ \ensuremath{\mathcal{D}} ^2 f(a)\ensuremath{\mathbf{1}} + \ensuremath{\mathcal{I}_a} \ensuremath{\mathcal{D}} (\ensuremath{\mathcal{D}} ^2 f)$.
This gives
\begin{equation}\label{iter3}
\begin{aligned}
f
&= f(a)\ensuremath{\mathbf{1}} + \ensuremath{\mathcal{D}} f(a)\ensuremath{\mathcal{I}_a} \ensuremath{\mathbf{1}} + \ensuremath{\mathcal{I}_a} ^2 ( \ensuremath{\mathcal{D}} ^2 f(a)\ensuremath{\mathbf{1}} + \ensuremath{\mathcal{I}_a} \ensuremath{\mathcal{D}} (\ensuremath{\mathcal{D}} ^2 f) )\\
&= f(a)\ensuremath{\mathbf{1}} + \ensuremath{\mathcal{D}} f(a)\ensuremath{\mathcal{I}_a} \ensuremath{\mathbf{1}} + \ensuremath{\mathcal{D}} ^2 f(a)\ensuremath{\mathcal{I}_a} ^2 \ensuremath{\mathbf{1}} + \ensuremath{\mathcal{I}_a} ^3 \ensuremath{\mathcal{D}} ^3 f.
\end{aligned}
\end{equation}
Clearly we may continue the same process for $\ensuremath{\mathcal{D}} ^3 f, \ensuremath{\mathcal{D}} ^4 f, \ldots \ensuremath{\mathcal{D}} ^N f$ as long as all of these derivatives exist and are absolutely continuous (in fact, if $\ensuremath{\mathcal{D}} ^N f$ is absolutely continuous, then all of the lower-order derivatives will be absolutely continuous as well). We may summarize the result as follows:
\begin{equation}\label{TTN}
f = f(a)\ensuremath{\mathbf{1}} + \ensuremath{\mathcal{D}} f(a)\ensuremath{\mathcal{I}_a} \ensuremath{\mathbf{1}} + \ensuremath{\mathcal{D}} ^2 f(a)\ensuremath{\mathcal{I}_a} ^2 \ensuremath{\mathbf{1}} + \ldots + \ensuremath{\mathcal{D}} ^N f(a)\ensuremath{\mathcal{I}_a} ^N \ensuremath{\mathbf{1}} + \ensuremath{\mathcal{I}_a} ^{N+1} \ensuremath{\mathcal{D}} ^{N+1} f.
\end{equation}
Equation \eqref{TTN} is actually Taylor's theorem in disguise. To see this, we only need to rewrite the functions $\ensuremath{\mathcal{I}_a} ^n \ensuremath{\mathbf{1}} $ in conventional notation, and evaluate the integrals successively:
\begin{equation}\label{TTNb}
\begin{aligned}
\ensuremath{\mathcal{I}_a} \ensuremath{\mathbf{1}} (x) &= \int_{a}^{x} dt_1 = (x-a);\\
\ensuremath{\mathcal{I}_a} ^2 \ensuremath{\mathbf{1}} (x) &= \int_{a}^{x} \ensuremath{\mathcal{I}_a} \ensuremath{\mathbf{1}} (t_1) dt_1 = \int_{a}^{x} (t_1-a) dt_1 = \frac{(x-a)^2}{2};\\
\ensuremath{\mathcal{I}_a} ^3 \ensuremath{\mathbf{1}} (x) &= \int_{a}^{x} \ensuremath{\mathcal{I}_a} ^2 \ensuremath{\mathbf{1}} (t_1) dt_1 = \int_{a}^{x} \frac{(t_1-a)^2}{2} dt_1 = \frac{(x-a)^3}{3!};\\
\ldots &\ldots \ldots \\
\ensuremath{\mathcal{I}_a} ^N \ensuremath{\mathbf{1}} (x) &= \int_{a}^{x} \ensuremath{\mathcal{I}_a} ^{N-1} \ensuremath{\mathbf{1}} (t_1) dt_1 = \int_{a}^{x} \frac{(t_1-a)^{N-1}}{(N-1)!} dt_1 = \frac{(x-a)^N}{N!}.
\end{aligned}
\end{equation}
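With these evaluations, the first terms of \eqref{TTN} are recognizable as Taylor partial sums, which can be checked numerically (our own sketch, not part of the derivation; taking $f = \exp$ and $a = 0$, so that every coefficient $\ensuremath{\mathcal{D}} ^n f(a)$ equals $1$, is an arbitrary test choice):

```python
import math

def taylor_partial_sum(x, a, derivs_at_a):
    """Sum of D^n f(a) * (x - a)^n / n! over the supplied derivative values at a."""
    return sum(d * (x - a) ** n / math.factorial(n)
               for n, d in enumerate(derivs_at_a))

a, x = 0.0, 1.0
# For f = exp, all derivatives at 0 equal 1; the error should shrink rapidly with N
for N in (2, 4, 6, 8):
    print(N, abs(math.exp(x) - taylor_partial_sum(x, a, [1.0] * (N + 1))))
```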
It remains to evaluate the final term in \eqref{TTN}, which is $\ensuremath{\mathcal{I}_a} ^{N+1} \ensuremath{\mathcal{D}} ^{N+1} f$. In this case, the integrand is not the constant function $\ensuremath{\mathbf{1}} $ as in the other integrals. However, we may use \eqref{Imon2} to give an upper bound on this term:
\begin{equation}
\begin{aligned}
| \ensuremath{\mathcal{I}_a} ^{N+1} \ensuremath{\mathcal{D}} ^{N+1} f | &\le \sup_{c\in [a,x]}| \ensuremath{\mathcal{D}} ^{N+1} f(c)| \ensuremath{\mathcal{I}_a} ^{N+1} \ensuremath{\mathbf{1}} \\
& \le \sup_{c\in [a,x]}| f^{(N+1)}(c)| \frac{(x-a)^{N+1}}{(N+1)!}
\end{aligned}
\end{equation}
which is the bound on $R_N$ in \eqref{boundR}. For the exact evaluation of $R_N$,
we may repeatedly use the fact that
integral order can be exchanged:
\begin{equation}\label{exch}
\int_a^{t_k} \int_a^{t_j} g(t_i,t_j) dt_i\,dt_j = \int_a^{t_k} \int_{t_i}^{t_k} g(t_i,t_j) dt_j\,dt_i.
\end{equation}
Notice what happens to the integral limits under exchange: the limits of the outer integral do not change; the lower limit of the inner integral becomes the outer integration variable; and the upper limit of the inner integral is the same as the upper limit of the outer integral. We may apply this rule first
to exchange the $dt_N$ and $dt_{N+1}$ integrals (shown in parentheses):
\begin{equation}\label{TTNc0}
\begin{aligned}
&\ensuremath{\mathcal{I}_a} ^{N+1} \ensuremath{\mathcal{D}} ^{N+1} f \\
&\quad = \int_{a}^{x} \ldots \int_a^{t_{N-2}} \left( \int_a^{t_{N-1}} \int_a^{t_{N}} f^{(N+1)}(t_{N+1}) dt_{N+1} dt_N \right) dt_{N-1} \ldots dt_1\\
&\quad= \int_{a}^{x} \ldots \int_a^{t_{N-2}} \left(\int_a^{t_{N-1}} \int_{t_{N+1}}^{t_{N-1}} f^{(N+1)}(t_{N+1}) dt_{N} dt_{N+1} \right)dt_{N-1} \ldots dt_1.
\end{aligned}
\end{equation}
Next, we exchange the $dt_{N-1}$ and $dt_{N+1}$ integrals in similar fashion:
\begin{equation}\label{TTNc1}
\begin{aligned}
&\ensuremath{\mathcal{I}_a} ^{N+1} \ensuremath{\mathcal{D}} ^{N+1} f \\
&\quad= \int_{a}^{x} \ldots\left( \int_a^{t_{N-2}} \int_a^{t_{N-1}} \left[ \int_{t_{N+1}}^{t_{N-1}} f^{(N+1)}(t_{N+1}) dt_{N} \right] dt_{N+1} dt_{N-1} \right) \ldots dt_1\\
&\quad= \int_{a}^{x} \ldots\left( \int_a^{t_{N-2}} \int_{t_{N+1}}^{t_{N-2}} \left[ \int_{t_{N+1}}^{t_{N-1}} f^{(N+1)}(t_{N+1}) dt_{N} \right] dt_{N-1} dt_{N+1} \right) \ldots dt_1,
\end{aligned}
\end{equation}
where the integral in square brackets plays the role of $g(t_i,t_j)$ in \eqref{exch}, and the integrals in parentheses have been exchanged. In the same way we may exchange the $dt_{N+1}$ integral successively with $dt_{N-2},dt_{N-3}, \ldots dt_1$. The final result is
\begin{equation}\label{TTNc}
\begin{aligned}
&\ensuremath{\mathcal{I}_a} ^{N+1} \ensuremath{\mathcal{D}} ^{N+1} f \\
&\quad= \int_{a}^{x} \int_{t_{N+1}}^{x} \int_{t_{N+1}}^{t_{1}} \ldots \int_{t_{N+1}}^{t_{N-1}} f^{(N+1)}(t_{N+1}) dt_{N} \ldots dt_{1} dt_{N+1} \\
&\quad= \int_{a}^{x} f^{(N+1)}(t) \left(\int_t^{x} \int_t^{t_{1}} \ldots\int_{t}^{t_{N-1}} dt_{N} \ldots dt_{1}\right) dt,
\end{aligned}
\end{equation}
where the last expression in \eqref{TTNc} is obtained by replacing $t_{N+1}$ with $t$, and by moving $f^{(N+1)}(t)$ outside of the integrals over $dt_1,\ldots dt_N$. The $N$-fold integral in parentheses is identical to the final integral in \eqref{TTNb}, except that the lower limit is $t$ instead of $a$. It follows that
\begin{equation}\label{TTNd}
\begin{aligned}
\ensuremath{\mathcal{I}_a} ^{N+1} \ensuremath{\mathcal{D}} ^{N+1} f & = \int_{a}^{x} f^{(N+1)}(t) \frac{(x-t)^N}{N!} dt,
\end{aligned}
\end{equation}
which agrees with \eqref{exactR}.
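Equation \eqref{TTNd} can also be confirmed numerically (our own sketch, not part of the text; $f=\exp$, the interval, and $N$ are arbitrary test choices, and the $(N+1)$-st derivative is supplied analytically):

```python
import math

def exact_remainder(d_top, a, x, N, n=50_000):
    """Trapezoidal evaluation of the remainder integral: int_a^x f^(N+1)(t) (x-t)^N / N! dt."""
    h = (x - a) / n
    vals = [d_top(a + i * h) * (x - a - i * h) ** N / math.factorial(N)
            for i in range(n + 1)]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

# For f = exp with a = 0, the remainder must equal e^x minus the degree-N Taylor polynomial
a, x, N = 0.0, 1.0, 4
poly = sum((x - a) ** n / math.factorial(n) for n in range(N + 1))
print(exact_remainder(math.exp, a, x, N), math.exp(x) - poly)
```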
\subsection{Alternative evaluation using volume integrals}
An alternative, elegant evaluation of the integrals in \eqref{TTNc} is achieved by interpreting the integrals as volume integrals over regions in $\ensuremath{\mathbb{R}} ^n$, $1\le n \le N+1$.
The $n$-dimensional integral $\int_{a}^{x} \int_a^{t_1} \ldots \int_a^{t_n} dt_n \ldots dt_2 dt_1$ can be interpreted as the volume of the set $\AA_n$ in $\mathbb{R}^n$, where
\begin{equation}
\AA_n := \{ a \le t_n \le t_{n-1} \le \ldots \le t_1 \le x \}.
\end{equation}
But the ordering $t_n \le t_{n-1} \le \ldots \le t_1$ is simply one of $n!$ possible orderings of the $n$ variables $t_1,\ldots t_n$. Since these variables are dummy variables that are integrated over, the value of the integral does not depend on the ordering of the variables. For example, in the case where $n= 3$ we have $3!=6$ different orderings of the variables, namely:
\begin{equation}
\begin{aligned}
&a \le t_3 \le t_2 \le t_1 \le x; \quad a \le t_3 \le t_1 \le t_2 \le x; \quad a \le t_2 \le t_3 \le t_1 \le x;\\
&a \le t_2 \le t_1 \le t_3 \le x;\quad a \le t_1 \le t_3 \le t_2 \le x; \quad a \le t_1 \le t_2 \le t_3 \le x.
\end{aligned}
\end{equation}
But these $3!$ orderings correspond to $3!$ essentially disjoint sets that together make up the cube $[a,x]^3$\footnote{There is a technical issue here in that the boundaries of these sets are not disjoint, but they comprise a set of measure 0. However, it is intuitively clear that the volumes of these sets should add to the volume of the cube. In three dimensions, this can be demonstrated using a physical model.}. The volumes of these $3!$ sets are equal: therefore the volume of $\AA_3$ is $1/3!$ of the volume of the cube, giving the same result as \eqref{TTNb} for $N=3$.
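This counting argument is easy to check by simulation (a sketch of ours, not part of the text; the sample size and seed are arbitrary choices): drawing points uniformly from the unit cube, the fraction landing in one fixed ordering of the coordinates should approach $1/n!$.

```python
import math, random

def simplex_fraction(n, trials=200_000, seed=0):
    """Estimate the fraction of the unit cube where t_n <= ... <= t_1 (should be 1/n!)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        t = [rng.random() for _ in range(n)]
        if all(t[i] >= t[i + 1] for i in range(n - 1)):
            hits += 1
    return hits / trials

n = 3
print(simplex_fraction(n), 1 / math.factorial(n))
```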
The argument generalizes directly to $n$ dimensions ($1 \le n \le N$), giving the result:
\begin{equation}\label{A_int}
\int_{a}^{x} \int_a^{t_1} \ldots \int_a^{t_n} dt_n \ldots dt_2 dt_1 = (\text{Volume of }\AA_n) = \frac{(x-a)^n}{n!},
\end{equation}
which is the same as \eqref{TTNb}. To evaluate the final integral \eqref{TTNd}, we note that
\begin{equation}
\AA_{N+1} \cap \{t_{N+1} =t \} = \AA'_{N}(t),
\end{equation}
where
\begin{equation}
\AA'_{N}(t) := \{ t \le t_N \le t_{N-1} \le \ldots \le t_1 \le x \}
\end{equation}
so that the volume integral $\ensuremath{\mathcal{I}_a} ^{N+1} \ensuremath{\mathcal{D}} ^{N+1} f $ can be expressed as:
\begin{equation}\label{lastInt}
\begin{aligned}
\ensuremath{\mathcal{I}_a} ^{N+1} \ensuremath{\mathcal{D}} ^{N+1} f &= \int_a^x \left( \int_{\AA'_{N}(t)} f^{(N+1)}(t) dt_N \ldots dt_2 dt_1 \right) dt\\
&= \int_a^x f^{(N+1)}(t) \left( \int_{\AA'_{N}(t)} dt_N \ldots dt_2 dt_1\right) dt\\
& = \int_a^x f^{(N+1)}(t) \frac{(x-t)^N}{N!} dt,
\end{aligned}
\end{equation}
where we have used the volume formula \eqref{A_int} with $n \rightarrow N$ and $a \rightarrow t$. This result
agrees with \eqref{TTNd}.
\section{Discussion}\label{Discussion}
The role of mathematics within science and society is changing rapidly. Because of the digital revolution, mathematics is having an increasingly significant impact on all aspects of society, including business and government. This impact comes through the need for mathematically-based algorithms and procedures in communication, classification, modeling, prediction, and control. Many of these results are framed in terms of linear algebraic concepts such as vectors, matrices, and tensors, while calculus and functional analysis enter in cases where the systems under study can be approximated as continuous.
The idea of a linear operator (or functional) as a function of functions is an important bridge between linear algebra and mathematical analysis. It is thus one of the cornerstones of modern applied mathematics.
In order to adequately prepare students to deal with current mathematical challenges in science and technology, the teaching of mathematics should make appropriate adaptations.
The proof of Taylor's theorem in Section~\ref{Method} emphasizes the relation between numerical methods and analytical theory, and is thus suitable for this purpose.
One possible objection to the proof is that it involves long equations (such as \eqref{TTNc1}) that appear very complicated.
Many students may find such long equations difficult to deal with because they lack technical facility in algebraic manipulations. But in applied mathematics (particularly in areas related to computation, such as linear algebra and numerical analysis) these types of equations are very common. Although they appear complicated, these equations are in actuality built up from the repeated application of very simple ideas. Furthermore, such equations may be readily handled by looking at simple cases first, and then building up to the general case. The proof in Section~\ref{Method} is a good example of this approach. For this reason, we assert that the new perspective on Taylor's theorem provided in this paper can be a valuable addition to the undergraduate mathematics curriculum.
\bibliographystyle{plain}
https://arxiv.org/abs/1905.09729 | 2-factors with k cycles in Hamiltonian graphs | A well known generalisation of Dirac's theorem states that if a graph $G$ on $n\ge 4k$ vertices has minimum degree at least $n/2$ then $G$ contains a $2$-factor consisting of exactly $k$ cycles. This is easily seen to be tight in terms of the bound on the minimum degree. However, if one assumes in addition that $G$ is Hamiltonian it has been conjectured that the bound on the minimum degree may be relaxed. This was indeed shown to be true by Sárközy. In subsequent papers, the minimum degree bound has been improved, most recently to $(2/5+\varepsilon)n$ by DeBiasio, Ferrara, and Morris. On the other hand no lower bounds close to this are known, and all papers on this topic ask whether the minimum degree needs to be linear. We answer this question, by showing that the required minimum degree for large Hamiltonian graphs to have a $2$-factor consisting of a fixed number of cycles is sublinear in $n.$ |
\section{Introduction}
A celebrated theorem by Dirac \cite{Dirac52} asserts the existence of a Hamilton cycle whenever the minimum degree of a graph $G$, denoted $\delta(G)$, is at least $\frac{n}{2}$. Moreover, this is best possible as can be seen from the complete bipartite graph $K_{\floor{\frac{n-1}{2}},\ceil{\frac{n+1}{2}}}$. Dirac's theorem is one of the most influential results in the study of Hamiltonicity of graphs and has seen generalisations in many directions over the years (for some examples consider surveys \cite{lisurvey,gouldsurvey,bennysurvey} and references therein). In this paper we discuss one such direction by considering what conditions ensure that we can find various \textit{$2$-factors} in $G$. Here, a \textit{$2$-factor} is a spanning 2-regular subgraph of $G$ or equivalently, a union of vertex-disjoint cycles that contains every vertex of $G$ and hence, $2$-factors can be seen as a natural generalisation of Hamilton cycles. Brandt, Chen, Faudree, Gould and Lesniak \cite{brandt97} proved that for a large enough graph the same degree condition as in Dirac's theorem, $\delta(G)\ge n/2$, allows one to find a $2$-factor with exactly $k$ cycles.
\begin{thm}
If $k \geq 1$ is an integer and $G$ is a graph of order $n \geq 4k$ such that $\delta(G) \geq \frac{n}{2}$, then $G$ has a $2$-factor consisting of exactly $k$ cycles.
\end{thm}
Once again, this theorem gives the best possible bound on the minimum degree, using the same example as for the tightness of Dirac's theorem above. This indicates that perhaps if we restrict our attention to Hamiltonian graphs, thereby excluding this example, a smaller minimum degree might be enough. That this is in fact the case was conjectured by Faudree, Gould, Jacobson, Lesniak and Saito \cite{Faudree05}.
\begin{conj}\label{conj:main}
For any $k \in \mathbb{N}$ there are constants $c_k <1/2,$ $n_k$ and $a_k$ such that any Hamiltonian graph $G$ of order $n\ge n_k$ with $\delta(G) \ge c_k n+a_k$ contains a $2$-factor consisting of $k$ cycles.
\end{conj}
Faudree et al.\ prove their conjecture for $k=2$ with $c_2=5/12.$
The conjecture was shown to be true for all $k$ by S\'{a}rk\"{o}zy \cite{Sarkozy08} with $c_k=1/2-\varepsilon$ for an uncomputed small value of $\varepsilon>0.$ Gy\"ori and Li \cite{gyori12} announced that they can show that $c_k=5/11+\varepsilon$ suffices. The best known bound was due to DeBiasio, Ferrara and Morris \cite{DeBiasio14} who show that $c_k= \frac{2}{5}+\varepsilon$ suffices.
On the other hand, no constructions of Hamiltonian graphs of very high degree without $2$-factors of $k$ cycles are known. Faudree et al.~\cite{Faudree05} say ``we do not know whether a linear bound of minimum degree in Conjecture~\ref{conj:main} is appropriate''. S\'ark\"ozy~\cite{Sarkozy08} says ``the obtained bound on the minimum degree is probably far from
best possible; in fact, the ``right'' bound might not even be linear''. DeBiasio et al.~\cite{DeBiasio14} say ``one vexing aspect of Conjecture~\ref{conj:main} and the related work described here is that it is possible that a sublinear, or even constant, minimum degree would suffice to ensure a Hamiltonian graph has a 2-factor of the desired type''. In particular, in \cite{Faudree05,Sarkozy08,DeBiasio14} they all ask the question of whether the minimum degree needs to be linear in order to guarantee a $2$-factor consisting of $k$ cycles. We answer this question by showing that the minimum degree required to find $2$-factors consisting of $k$ cycles in Hamiltonian graphs is indeed sublinear in $n.$
\begin{thm}\label{thm:main}
For every $k\in \mathbb{N}$ and $ \varepsilon>0$, there exists $N = N(k,\varepsilon)$ such that if $G$ is a Hamiltonian graph on $n \geq N$ vertices with $\delta (G) \geq \varepsilon n$, then $G$ has a $2$-factor consisting of $k$ cycles.
\end{thm}
\subsection{An overview of the proof}
We now give an overview of the proof to help the reader navigate the rest of the paper.
In the next section we show that any $2$-edge-coloured graph $G$ on $n$ vertices whose minimum degree in each colour is linear in $n$ contains a blow-up of a short colour-alternating cycle. This is an auxiliary result which we need for our main proof. There, we also introduce ordered graphs and prove a result which, given an ordering of the vertices of $G$, allows us to find a blow-up as above that is also consistent with the ordering, meaning that, given two parts of the blow-up, the vertices of one part all come before those of the other.
The main part of the proof appears in \Cref{sec:main}. The key idea is given a graph $G$ with a Hamilton cycle $H=v_1\ldots v_nv_1$, to build an auxiliary $2$-edge-coloured graph $A$ whose vertex set is the set of edges $e_i=v_iv_{i+1}$ of $H$ and for any edge $v_iv_j\in G\setminus H$ we have a red edge between $e_i$ and $e_j$ and a blue edge between $e_{i-1}$ and $e_{j-1}$ in $A$.
The crucial property of $A$ is that given any vertex disjoint union of colour-alternating cycles $S$ in $A$ one can find a $2$-factor $F(S)$ in $G$, consisting of the edges of $H$ which are not vertices of $S$ and the edges of $G$ not in $H$ which gave rise to the edges of $S$ in $A$.
However, we cannot control the number of cycles in $F(S)$ (except knowing that $F(S)$ has at most $|S|$ cycles), since it depends on the structure of $S$ and also on how $S$ is embedded within $A$. To circumvent this issue we will find instead a large blow-up of $S$. Then within this blow-up we show how to find a modification of $S$, denoted $S^+$, which has the property that $F(S^+)$ has precisely one cycle more than $F(S)$. Similarly, we find another modification $S^-$ such that the corresponding $2$-factor $F(S^-)$ has precisely one cycle less than $F(S)$. Since the number of cycles in $F(S)$ is bounded, if our blow-up of $S$ is sufficiently large we can perform these operations multiple times and therefore obtain a $2$-factor with the target number of cycles.
\section{Preliminaries}\label{sec:prelim}
Let us first fix some notation and conventions that we use throughout the paper. For a graph $G=(V,E)$, let $\delta(G)$ denote its minimum degree, $\Delta(G)$ its maximum degree and $d(v)$ the degree of a vertex $v \in V$. For us, a \emph{$2$-edge-coloured graph} is a triple $G = (V, E_1, E_2)$ such that both $G_1 = (V, E_1)$ and $G_2 = (V,E_2)$ are simple graphs. We always think of $E_1$ as the set of \emph{red} edges and of $E_2$ as the set of \emph{blue} edges of $G$. Accordingly, we define $\delta_1(G)$ to be the minimum degree of red edges of $G$ (that is $\delta(G_1)$), and analogously $\Delta_1(G)$, $\delta_2(G)$, etc. Note that with our definition the same two vertices may be connected by two edges with different colours. In this case, we say that $G$ has a \emph{double edge}. A \textit{blow-up} $G(t)$ of a $2$-edge coloured graph $G$ (with no two vertices joined by both a red and a blue edge) is constructed by replacing each vertex $v$ with a set of $t$ independent vertices and adding a complete bipartite graph between any two such sets corresponding to adjacent vertices in the colour of their edge. When working with \emph{digraphs} we always assume they are simple, so without loops and with at most one edge from any vertex to another (but we allow edges in both directions between the same two vertices).
\subsection{Colour-alternating cycles}\label{subs:cycles}
In this subsection, our goal is to prove that any $2$-edge-coloured graph, which is dense in both colours contains a blow-up of a colour-alternating cycle. We begin with the following auxiliary lemma that will only be used in the subsequent lemma where we will apply it to a suitable auxiliary digraph to give rise to many colour-alternating cycles.
\begin{lem}
Let $k \ge 2$ be a positive integer. A directed graph on $n$ vertices with minimum out-degree at least $\frac{n\log (2k)}{k-1}$ has at least $\frac{n^\ell}{2k^{\ell+1}}$ cycles of length $\ell$ for some fixed $2\le \ell\le k.$
\end{lem}
\begin{proof}
Let us sample $k$ vertices $v_1,\ldots, v_{k}$ from $V(G),$ independently, uniformly at random, with repetition. We denote by $X_i$ the event that vertex $v_i$ has no out-neighbour in $S:=\{v_1,\ldots, v_k\}.$ We know that $\mathbb{P}(X_i)\le \left(1-\frac{\log (2k)}{k-1}\right)^{k-1}\le \frac{1}{2k}.$ If no $X_i$ occurs then the subgraph induced by $S$ has minimum out-degree at least $1$ so contains a directed cycle. The probability of this occurring is at least:
$$ \mathbb{P}\left(\overline{X_1}\cap \ldots \cap \overline{X_k}\right)=1-\mathbb{P}(X_1\cup \ldots \cup X_k)\ge 1-k\mathbb{P}(X_i) \ge 1/2,$$
where we used the union bound. This means that in at least $n^k/2$ outcomes we can find a cycle of length at most $k$ within $S.$ In particular, there is an $\ell$ with $2\le \ell \le k$ such that in at least $\frac{n^k}{2k}$ outcomes the cycle we find has length exactly $\ell$. Note that the same cycle might have been counted multiple times, but at most $k^\ell n^{k-\ell}$ times. This implies that our digraph contains at least $\frac{n^\ell}{2k^{\ell+1}}$ distinct copies of $C_\ell.$
\end{proof}
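The step of the proof asserting that minimum out-degree at least $1$ forces a directed cycle can be illustrated by a short walk-following routine (our own sketch, not from the paper; the digraph below is a hypothetical example, recording one out-neighbour per vertex):

```python
def find_directed_cycle(out_neighbour):
    """Given a map v -> some out-neighbour of v, follow the walk until a vertex repeats;
    the segment between the two visits of that vertex is a directed cycle."""
    v = next(iter(out_neighbour))
    seen = {}
    walk = []
    while v not in seen:
        seen[v] = len(walk)
        walk.append(v)
        v = out_neighbour[v]
    return walk[seen[v]:]

# Hypothetical digraph on {0,...,4}: every vertex has an out-neighbour, so a cycle exists
cycle = find_directed_cycle({0: 1, 1: 2, 2: 3, 3: 1, 4: 0})
print(cycle)  # the directed cycle 1 -> 2 -> 3 -> 1
```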
Now, we use this lemma to conclude that there are many copies of some short colour-alternating cycle in any $2$-edge-coloured graph which has big minimum degree in both colours.
\begin{lem}\label{lem:many-cycles}
For every $\gamma \in (0,1)$ there exist $c=c(\gamma), L = L(\gamma)$ and $K = K(\gamma)$ such that, if $G$ is a $2$-edge-coloured graph on $n \geq K$ vertices satisfying $\delta_1(G), \delta_2(G) \geq \gamma n$, then $G$ contains at least $cn^\ell$ copies of a colour-alternating cycle of some fixed length $4 \le \ell \le L$.
\end{lem}
\begin{proof}
Let $k=(8/\gamma^2) \log (8/\gamma^2)$ so that $\gamma^2/4 \ge \log (2k)/(k-1).$ We set $L=2k,$ $K=8k/\gamma^2$ and $c=(\gamma/2)^{2k}/(4k^{k+1}).$
We build a digraph $D$ on the same vertex set as $G$ by placing an edge from $v$ to $u$ if and only if there are at least $\gamma^2n/2$ vertices $w$ such that $vw$ is red and $wu$ is blue.
Let us first show that every vertex of $D$ has out-degree at least $\gamma^2n/4.$ Fix a vertex $v$. There are at least $\gamma n$ red neighbours of $v$, and each has at least $\gamma n$ blue neighbours, so there are at least $\gamma^2n^2$ red-blue paths of length $2$ starting at $v.$ Suppose for contradiction that there are fewer than $\gamma^2n/2$ vertices $u$ for which there are at least $\gamma^2n/2$ vertices $w$ such that $vw$ is red and $wu$ is blue.
Then there are fewer than $\gamma^2n/2 \cdot n+ n \cdot \gamma^2n/2 = \gamma^2n^2$ red-blue paths of length $2$ starting at $v,$ a contradiction. Note that we allowed $u=v$ in the above consideration, so we deduce that the minimum out-degree in $D$ is at least $\gamma^2n/2-1\ge \gamma^2n/4.$
The previous lemma implies that there is some $\ell \le k$ such that $D$ contains at least $n^{\ell}/(2k^{\ell+1})$ copies of $C_\ell.$
From any such cycle, by replacing each directed edge with a red-blue path of length $2$ in $G$ between its endpoints, never reusing a vertex, we obtain at least $(\gamma^2n/2-\ell)(\gamma^2n/2-\ell-1)\cdots(\gamma^2n/2-2\ell+1)\ge (\gamma/2)^{2\ell}n^\ell$
colour-alternating $C_{2\ell}$'s in $G$. Noticing that each such $C_{2\ell}$ may arise in at most $2$ different ways from a directed $C_{\ell}$ of $D$ we deduce that there are at least $n^{\ell}/(2k^{\ell+1})\cdot(\gamma/2)^{2\ell}n^\ell/2\ge c(\gamma)n^{2\ell}$ colour-alternating $C_{2\ell}$'s in $G$.
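Spelling out the final inequality: since $\ell \le k$, $\gamma/2<1$ and $k\ge 1$, we indeed have
$$\frac{n^{\ell}}{2k^{\ell+1}}\cdot\frac{(\gamma/2)^{2\ell}n^{\ell}}{2} \;=\; \frac{(\gamma/2)^{2\ell}}{4k^{\ell+1}}\,n^{2\ell} \;\ge\; \frac{(\gamma/2)^{2k}}{4k^{k+1}}\,n^{2\ell} \;=\; c(\gamma)\,n^{2\ell}.$$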
\end{proof}
The reason for formulating the above lemma is that we can deduce the existence of the blow-up of a cycle from the existence of many copies of this cycle using the hypergraph version of the celebrated K\H{o}v\'ari-S\'os-Tur\'an theorem proved by Erd\H{o}s in \cite{kst}:
\begin{thm}\label{thm:kst}
Let $\ell,t \in \mathbb{N}$. There exists $C=C(\ell,t)$ such that any $\ell$-graph on $n$ vertices with at least $Cn^{\ell-1/t^\ell}$ edges contains $K^{(\ell)}(t)$, the complete $\ell$-partite hypergraph with parts of size $t,$ as a subgraph.
\end{thm}
We are now ready to find our desired blow-up.
\begin{lem}\label{lem:cycle-blow-up}
For every $\gamma \in (0,1)$ and $t\in \mathbb{N}$, there exist positive integers $L = L(\gamma)$ and $K = K(\gamma,t)$ such that, if $G$ is a $2$-edge-coloured graph on $n \geq K$ vertices satisfying $\delta_1(G), \delta_2(G) \geq \gamma n$, then $G$ contains $\mathcal{C}(t6^L)$ where $\mathcal{C}$ is a colour-alternating cycle with $|V(\mathcal{C})| \leq L.$
\end{lem}
\begin{proof}
Let $L=L(\gamma)$, $c=c(\gamma)$ and $K \ge K(\gamma)$ be the parameters from \Cref{lem:many-cycles}, so that we can find $cn^\ell$ copies of a colour-alternating cycle of length $4 \le \ell \le L.$ Let $C=C(L,t6^L)\ge C(\ell,t6^L)$ be the parameter given by \Cref{thm:kst}. By assigning each vertex of $G$ to one of $\ell$ parts uniformly at random, we can find a partition of $V(G)$ into $V_1,\ldots,V_\ell$ such that there are at least $cn^\ell/\ell^\ell$ colour-alternating cycles $v_1\ldots v_\ell$ with $v_i \in V_i$.
We also know that, since each such cycle alternates colours, there are only two possible colour patterns along $V_1,\ldots,V_\ell$, so at least half of these cycles use edges of the same colour between each fixed pair $V_i,V_{i+1}.$ We now build an $\ell$-graph $H$ on the same vertex set as $G$ whose edges are the vertex sets of these colour-alternating cycles. Thus, $H$ has at least $\frac{c}{2\ell^\ell}n^\ell \ge Cn^{\ell-1/(t^\ell\cdot 6^{\ell L})}$ edges, by taking $K$ large enough depending on $t$ and $L.$ So \Cref{thm:kst} implies that $H$ contains $K^{(\ell)}(t6^L)$ as a subgraph, which corresponds to a desired $\mathcal{C}(t6^L).$
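To see that the edge count required by \Cref{thm:kst} is met, note that
$$\frac{c}{2\ell^{\ell}}\,n^{\ell} \;\ge\; C\,n^{\ell-1/(t^\ell\cdot 6^{\ell L})} \quad\Longleftrightarrow\quad n^{1/(t^\ell\cdot 6^{\ell L})} \;\ge\; \frac{2C\ell^{\ell}}{c},$$
and the right-hand side is a constant depending only on $\gamma,t$ and $L$, so the inequality holds once $K$ (and hence $n$) is large enough.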
\end{proof}
\subsection{Ordered graphs}
In our arguments it will not be enough to just find a blow-up of a colour-alternating cycle as in the previous subsection; we will also care about the ``order'' in which the cycles are embedded. In this subsection we introduce some notation for ordered graphs and state a result which we will need later.
An \emph{ordered graph} is a graph together with a total order of its vertex set.
Here, whenever $G$ is a graph on an indexed vertex set $V(G) = \{v_1, \dots, v_n\}$, we assume that $G$ is ordered by $v_i < v_j \iff i < j$. An \emph{ordered subgraph} of an ordered graph $G$ is a subgraph of $G$ that is endowed with the order that is induced by $G$ and if not stated otherwise, we assume that subgraphs of $G$ are always endowed with that order. For us, two vertices $u < v$ of an ordered graph $G$ are called \emph{neighbouring} if the set of vertices between $u$ and $v$, that is $\{x \in V(G) \mid u \leq x \leq v\}$, is either just $\{u,v\}$ or the whole vertex set $V(G)$.
Given an ordered graph $G$ we say a blow-up $H=G(k)$ of $G$ is \textit{ordered consistently} if for any $x,y \in V(H)$ which belong to parts of the blow-up coming from vertices $u,v\in G$ respectively we have $x <_H y$ iff $u <_G v.$
\begin{lem}\label{lem:ordered}
Let $t,L\in \mathbb{N}$, let $H$ be a graph on $L$ vertices and let $G$ be an ordered graph with $H(t2^L)\subseteq G$. Then there exists an ordering of $H$ for which the consistently ordered $H(t)$ is an ordered subgraph of $G.$
\end{lem}
\begin{proof}
We prove the result by induction on $L,$ where the case $L=1$ is immediate. Let $\{V_1, \dots, V_L\}$ be the clusters of vertices of $H(t2^L),$ so $|V_i|=t2^L.$ Let $w_1, \dots, w_L$ be the median vertices of the sets $V_1, \dots ,V_L$ with respect to the ordering of $H(t2^L)$ induced by $G$ and assume without loss of generality that $w_1$ is the smallest of them. We now throw away all vertices of $V_1$ that are larger than $w_1$ and all vertices of $V_i$ that are smaller than $w_i$ for $i \geq 2$. This leaves us with $L$ sets $\{W_1, \dots, W_L\}$ of size $\ceil{|V_i|/2}=t2^{L-1}$ with the property that $v_1 \in W_1, v_i \in W_i \implies v_1 \leq_G w_1<_G w_i\leq_G v_i$ for all $i \geq 2$. If $v\in H$ corresponds to $V_1$ and we denote $H'=H-v$ then $\{W_2, \dots, W_L\}$ spans $H'(t2^{L-1})\subseteq G\setminus V_1$. By the induction hypothesis we can find a consistently ordered $H'(t)$ as an ordered subgraph of $G\setminus V_1$ which together with any subset of size $t$ of $W_1$ gives the desired consistently ordered $H(t)$ in $G$.
\end{proof}
\section{Proof of \Cref{thm:main}}\label{sec:main}
\subsection{Constructing an auxiliary graph}\label{constrA}
Throughout the whole section, let $G$ be a Hamiltonian graph on $n$ vertices. First of all, let us fix a Hamilton cycle $H$ of $G$ and name the vertices of $G$ such that $H = v_1 v_2 \dots v_n v_1$. We assume that $G$ is ordered according to this labelling. Also, let us denote the edges of $H$ by $e_1, e_2, \dots , e_n$ such that $e_1 = v_1 v_2, \dots , e_n = v_n v_1$. In all our following statements, we will identify $v_{n+1}$ and $v_1$, and more generally $v_i$ and $v_j$, as well as $e_i$ and $e_j$, if $i$ and $j$ are congruent modulo $n$. Furthermore, since we can always picture $G$ as a large cycle with some edges inside it, we call all the edges that are not part of $H$, the \emph{inner edges} of $G$.
Our goal is to find a $2$-factor with a fixed number of cycles in $G$. Note that, if $G$ is dense, it is not hard to find a large collection of vertex-disjoint cycles in $G$. The difficulty lies in the fact that we want this collection to be spanning while still controlling the exact number of cycles. Naturally, we have to rely on the Hamiltonian structure of $G$ to give us such a spanning collection of cycles. Indeed, when building these cycles we will try to use large parts of the Hamilton cycle $H$ as a whole and connect them correctly using some inner edges of $G$. It is convenient for our approach to construct an auxiliary graph $A$ out of $G$, that captures the information we need about the inner edges of $G$.
\begin{defn} \label{defn:aux}
Given the setup above, we define the \emph{auxiliary graph} $A = A(G,H)$ as the following ordered, $2$-edge-coloured $n$-vertex graph:
\begin{enumerate}
\item Every vertex of $A$ corresponds to exactly one edge of $H$, thus we have $V(A) = \{e_1, \dots , e_n\}$ and we order the vertices of $A$ according to this labelling;
\item two vertices $e_i = v_i v_{i+1}$ and $e_j = v_j v_{j+1}$ of $A$ are connected with a red edge if there is an inner edge of $G$ connecting $v_{i+1}$ and $v_{j+1}$;
\item similarly, the vertices $e_i$ and $e_j$ of $A$ are connected with a blue edge if there is an inner edge of $G$ connecting $v_i$ and $v_j$.
\end{enumerate}
\end{defn}
Throughout this section, let $A = A(G,H)$ for our fixed $G$ and $H$. Note that, by the above definition, every edge $\ell \in E(A)$ corresponds to a unique inner edge $e$ of $G$. In the following, we denote this edge by $e(\ell) \in E(G)$. To be precise, if $\ell = e_i e_j$, then $e(\ell) \coloneqq v_{i+1} v_{j+1}$ if $\ell$ is a red edge and $e(\ell) \coloneqq v_i v_j$ if $\ell$ is a blue edge. Conversely, every inner edge of $G$ corresponds to exactly one red edge and to one blue edge of $A$. This leads to the following observation:
\begin{obs} \label{deltaA}
For $i \in \{1, \dots, n\}$, we have $d^{A}_1(e_i) = d^{G}(v_{i+1})-2$ and $d^{A}_2(e_i) = d^{G}(v_i)-2$. In particular, we have $\delta_1(A) = \delta_2(A) = \delta(G) - 2$.
\end{obs}
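This can be checked directly from \Cref{defn:aux}: the red edges of $A$ at $e_i$ are in bijection with the inner edges of $G$ at $v_{i+1}$, and $v_{i+1}$ is incident to exactly two edges of $H$, so
$$d^{A}_1(e_i) \;=\; \#\{\text{inner edges of } G \text{ at } v_{i+1}\} \;=\; d^{G}(v_{i+1})-2,$$
and the blue case is identical with $v_i$ in place of $v_{i+1}$.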
In \Cref{ExA} we give an example of a Hamiltonian graph and its corresponding auxiliary graph.
\begin{figure}[ht]
\caption{Let us call the left graph $G$ and fix its Hamilton cycle $H = v_1 \dots v_8 v_1$. Then the graph on the right is the auxiliary graph $A(G,H)$.}
\includegraphics[scale = 0.8]{2-factors_picture1.pdf}
\label{ExA}
\end{figure}
The motivation for defining $A$ just as above is given by the fact that 2-regular (possibly non-spanning!) subgraphs $S \subseteq A$ satisfying some extra conditions naturally correspond to $2$-factors in $G$. Recall that in our setting, two vertices $e_i$ and $e_j$ of $A$ are neighbouring if $i-j \equiv \pm 1 \pmod{n}$. Let us make the following definition:
\begin{defn} \label{defn:correspondence}
Given the same setup as above and a subgraph $S \subseteq A$ that is a union of vertex-disjoint colour-alternating cycles without neighbouring vertices (i.e.\ if $e_i \in V(S)$ then $e_{i-1},e_{i+1}\notin V(S)$), we define its corresponding subgraph $F(S) \subseteq G$ as follows:
\begin{enumerate}
\item $V(F(S)) \coloneqq V(G)$;
\item the edges of $F(S)$ are all the edges of $H$ except for those that correspond to vertices of $S$. Additionally, for every edge $\ell \in E(S)$, let the corresponding inner edge $e(\ell)$ be an edge of $F(S)$ too. That is, $E(F(S)) \coloneqq \left(\{e_1, \ldots, e_n\} \setminus V(S)\right) \cup \{e(\ell) \mid \ell \in E(S)\}$.
\end{enumerate}
\end{defn}
\begin{lem} \label{correspondence}
If $S \subseteq A$ is a union of vertex-disjoint colour-alternating cycles without neighbouring vertices, then $F(S) \subseteq G$ is a $2$-factor.
\end{lem}
In order to illustrate the above definitions, consider the Hamiltonian graph given in \Cref{ExA} and the subgraphs $S_1$ and $S_2$ of the corresponding auxiliary graph where $S_1$ is just the cycle $e_2 e_4 e_6 e_8 e_2$ and $S_2$ is the union of the cycles $e_1 e_3 e_1$ and $e_5 e_7 e_5$. Their corresponding $2$-factors $F(S_1)$ and $F(S_2)$ are shown as dashed in \Cref{ExCorr}. Note that they use the same inner edges of $G$ but still have different numbers of cycles.
\begin{figure}[ht]
\caption{$2$-factors $F(S_1)$ and $F(S_2)$ used in the illustration above.}
\includegraphics[scale = 0.8]{2-factors_picture2.pdf}
\label{ExCorr}
\end{figure}
\begin{proof}[Proof of \Cref{correspondence}]
Since $F \coloneqq F(S)$ consists of exactly $n$ edges, it suffices to show that $\delta(F)\ge 2$. Let $v_j$ be an arbitrary vertex of $F$. We distinguish two cases: If both edges $e_{j-1}, e_j \notin V(S)$, then $e_{j-1}, e_j \in E(F)$ and $v_j$ is incident to $e_{j-1}$ and $e_j$ in $F$. Else, exactly one of the edges $e_{j-1}$ and $e_j$ is a vertex of $S$ since $S$ contains no neighbouring vertices. In this case we use the fact that every vertex $e_i$ of $S$ is incident to a red edge $\ell_i$ and to a blue edge $\ell_i'$. Hence, by \Cref{defn:correspondence}, either $e_{j-1}\in S$ and $e_j \notin S$ in which case $v_j$ is incident to $e_j$ and $e(\ell_{j-1})$ in $F$ or $e_{j-1}\notin S$ and $e_j \in S$ in which case $v_j$ is incident to $e_{j-1}$ and $e(\ell_j')$ in $F$. In both cases these two edges are distinct as one of them is an inner edge of $G$ and the other one is not.
\end{proof}
We note that $F(S)$ depends not only on the structure of $S$ but also on the order in which $S$ is embedded within $A$. However, it is immediate that if $S$ is embedded in the auxiliary graphs of two Hamiltonian graphs (possibly with different numbers of vertices) in the same order, then $F(S)$ has the same number of cycles in both cases.
\begin{obs}\label{obs:samecomps}
Let $A_1=A(G_1,H_1)$ and $A_2=A(G_2,H_2)$. Let $S_1$ and $S_2$ be disjoint unions of colour-alternating cycles without neighbouring vertices, which are isomorphic as coloured subgraphs of $A_1$ and $A_2$ whose corresponding vertices appear in the same order along $H_1$ and $H_2.$ Then $F(S_1)$ and $F(S_2)$ consist of the same number of cycles.
\end{obs}
We remark that it is not always true that all $2$-factors of $G$ arise as $F(S)$ for some $S \subseteq A.$
\subsection{Controlling the number of cycles} \label{control}
It is not hard to see that the auxiliary graph $A$ (of a graph with a big enough minimum degree) must contain a colour-alternating cycle $C$, which corresponds to a $2$-factor $F(C) \subseteq G$ by \Cref{correspondence} (disregarding, for the moment, the issue of $C$ containing neighbouring vertices). However, it is not at all obvious how to generally determine the number of components of $F(C).$
We begin by giving a rough upper bound.
\begin{obs} \label{obs:no.cycles}
If $C \subseteq A$ is a non-empty colour-alternating cycle of length $L$ without neighbouring vertices, then the number of components of the corresponding $2$-factor $F(C)$ is at most $L$.
\end{obs}
\begin{proof}
Note that the $2$-factor $F(C)$ contains exactly $L$ inner edges and, since $F(C) \neq H$, each cycle of $F(C)$ must contain at least one inner edge (in fact, at least two in our setting).
\end{proof}
However, in order to prove \Cref{thm:main}, we need to be able to show the existence of a $2$-factor consisting of exactly $k$ cycles, for a fixed predetermined number $k$. This is where we are going to make use of \Cref{lem:cycle-blow-up,lem:ordered} which allow us to find a consistently ordered blow-up of $C$. This will give us the freedom to find slight modifications of $C$ with different numbers of cycles in $F(C)$.
\subsubsection{Going up}
In this subsection we give a modification of a union of colour-alternating cycles which will have precisely one more cycle in its corresponding $2$-factor.
\begin{defn}\label{defn:goingup}
Let $S$ be a disjoint union of colour-alternating cycles with $V(S)=\{s_1, \dots, s_m\}$ and let $C$ be a cycle of $S$. We construct a $2$-edge-coloured ordered graph $U(S,C)$ as follows:
\begin{enumerate}
\item Start with a copy of $S$ and for every $s_i \in V(C)$, add a vertex $s_{i+1/2}$;
\item For every red or blue edge $s_i s_j \in E(C)$, add an edge $s_{i+1/2} s_{j+1/2}$ of the same colour;
\item Order the resulting graph according to the order of the indices of its vertices.
\end{enumerate}
Given a $2$-edge-coloured ordered graph $U$, we say that $U$ is a \emph{going-up version} of $S$, if there exists a component $C$ of $S$ such that $U$ and $U(S,C)$ are isomorphic $2$-edge-coloured ordered graphs.
\end{defn}
In other words, $U(S,C)$ consists of $S$ together with an additional copy of $C$, ordered in such a way that the vertices of the new copy of $C$ immediately follow their corresponding vertices of the original copy of $C$. In particular, $U$ is also a disjoint union of colour-alternating cycles and is an ordered subgraph of a consistently ordered $S(2)$. Note that if $S$ contains no double edges, then neither does $U.$
\Cref{fig:goingup} shows what a going-up version $U$ of $S$ looks like if $S$ is just a colour-alternating $C_4$. \Cref{fig:goingupG} shows what the corresponding $2$-factors look like (assuming $S \subseteq U \subseteq A$). Note that the dashed cycles of $F(U)$ have the same structure as the dashed cycles in $F(S)$ but $F(U)$ additionally has a new bold cycle. We now show that a similar situation occurs in general.
\begin{figure}[ht]
\caption{A colour-alternating cycle $S$ and a going-up version of it $U$}
\includegraphics[scale = 0.8]{2-factors_going_up.pdf}
\label{fig:goingup}
\end{figure}
\begin{figure}[ht]
\caption{$2$-factors corresponding to $U$ and $S$ given in \Cref{fig:goingup}.}
\includegraphics[scale = 0.8]{2-factors_going_up_G.pdf}
\label{fig:goingupG}
\end{figure}
\begin{lem}[Going up]\label{lem:goingup}
Let $S \subseteq A$ be a disjoint union of colour-alternating cycles without neighbouring vertices and let $U$ be an ordered subgraph of $A$ without neighbouring vertices that is a going-up version of $S$. Then, the $2$-factor $F(U) \subseteq G$ has exactly one component more than $F(S)$.
\end{lem}
\begin{proof}[Proof of \Cref{lem:goingup}]
For an edge $e=v_k v_{k+1}\in H$ we let $v^{+}(e)=v_{k+1}$ and $v^{-}(e)=v_k.$
We denote the vertices of $S$ by $s_1, \dots, s_m$ according to their order in $A$. Let $C$ be a colour-alternating cycle $s_{j_1} \dots s_{j_k} s_{j_1}$ in $S$ for which $U=U(S,C)$. Let us denote the vertices of $U$ by $u_1, \dots, u_m$ and $u_{j_1 + 1/2}, \dots, u_{j_k+1/2}$ as they appear along $H$ such that $u_1, \dots, u_m$ make a copy of $S$ and $u_{j_1}, \dots, u_{j_k}$ correspond to $C$.
The vertices $v^+(u_{j_i})$ and $v^-(u_{j_i + 1/2})$ are connected in $F(U)$ by paths $P_i \subseteq H$ for $i \in \{1, \dots, k\}$. Furthermore, since $C$ is a colour-alternating cycle either $v^+(u_{j_i})v^+(u_{j_{i+1}})\in E(G)$ for all odd $i$ and $v^-(u_{j_i + 1/2})v^-(u_{j_{i+1} + 1/2})\in E(G)$ for all even $i$ or vice versa in terms of parity. This means that taking all $P_i$ and these edges we obtain one cycle $$Z:=v^+(u_{j_1})v^+(u_{j_2})P_2v^-(u_{j_2+1/2})v^-(u_{j_3+1/2})P_3v^{+}(u_{j_3})\dots P_kv^-(u_{j_k + 1/2})v^-(u_{j_1+1/2})P_1v^+(u_{j_1})\in F(U),$$ if $C$ starts with a red edge (which is exactly the bold cycle in the example shown in \Cref{fig:goingupG}) or
$$Z:=v^-(u_{j_1+1/2})v^-(u_{j_2+1/2})P_2v^+(u_{j_2})v^+(u_{j_3})P_3v^{-}(u_{j_3+1/2})\dots P_kv^+(u_{j_k})v^+(u_{j_1})P_1v^-(u_{j_1+1/2})\in F(U),$$
if $C$ starts with a blue edge.
Let us now consider the graph $G'$ that is obtained from $G$ by deleting $Z$ (including all edges incident to vertices of $Z$) and adding the edges $S_{j_i} = v^-(u_{j_i}) v^+(u_{j_i + 1/2})$ for $i \in \{1, \dots, k\}$. Let $H'$ be the Hamilton cycle of $G'$ made of $H$ and the edges $S_{j_i}$, ordered according to the order of $G$. We claim that sending $s_{j_i}$ to $S_{j_i}$ for $i \in \{1, \dots, k\}$ and every other vertex $s_i$ to $u_i$ gives an order-preserving isomorphism from $S$ to its image $S' \subseteq A(G', H')$. Indeed, if $s_i, s_j \notin C$, then the fact that $u_i u_j$ is a red or a blue edge whenever $s_i s_j$ is a red or a blue edge just follows from \Cref{defn:goingup}. Furthermore, if $s_{j_i} s_{j_{i+1}}$ is a red edge for $i \in \{1, \dots, k\}$, then $v^+(u_{j_i + 1/2})$ is adjacent to $v^+(u_{j_{i+1}+1/2})$, which means that $S_{j_i} S_{j_{i+1}}$ is a red edge. This works analogously for blue edges of $C$, which shows the claim. Hence, by \Cref{obs:samecomps}, the $2$-factor $F(S')$ in $G'$ has the same number of components as $F(S)$ in $G$. However, since $F(S')$ is by definition just $F(U) \setminus Z$, this completes the proof.
\end{proof}
\subsubsection{Going down}
We now turn to the remaining case, when we want to find a $2$-factor with fewer components than one that we have already found.
\begin{defn}
Let $S \subseteq A$ be a disjoint union of colour-alternating cycles without neighbouring vertices. We say that a vertex $e_k \in V(A)$ \emph{separates components of $F(S)$} if the vertices $v_k$ and $v_{k+1}$ lie in different connected components of $F(S)$.
\end{defn}
\begin{obs}\label{obs:sepcomps}
If $F(S)$ has more than one connected component, then at least one vertex of $S$ separates components.
\end{obs}
\begin{proof}
Since $F(S)$ is not connected, there must exist consecutive vertices $v_k,v_{k+1}$ of $H$ belonging to different components of $F(S).$ Let $e_k=v_k v_{k+1}$, so $e_k \notin E(F(S))$. Since the only edges of $H$ (that is, vertices of $A$) missing from $E(F(S))$ are the vertices of $S$, we have $e_k \in V(S)$, and $e_k$ is the desired separating vertex.
\end{proof}
We are now ready to construct a going-down version of $S$ giving rise to a $2$-factor with one less cycle.
\begin{defn}\label{defn:goingdown}
Let $S$ be a disjoint union of colour-alternating cycles with $V(S)=\{s_1, \dots, s_m\}$. For any $s_k \in V(S)$ we construct the $2$-edge-coloured ordered graph $D=D(S,s_k)$ as follows:
\begin{enumerate}
\item Start with a copy of $S$ and for every vertex $s_i$ in the cycle $C\subseteq S$ that contains $s_k$, add the vertices $s_{i+1/3}$ and $s_{i+2/3}$ to $D$;
\item if $i, j \neq k$ and if $s_i s_j$ is a red or a blue edge of $S$, then add the edges $s_{i+1/3}s_{j+1/3}$ and $s_{i+2/3}s_{j+2/3}$ of the same colour to $D$;
\item if $s_i s_k$ is the blue edge of $S$ incident to $s_k$, then delete it and add the blue edges $s_i s_{k+1/3}$, $s_{i+1/3} s_{k+2/3}$ and $s_{i+2/3} s_k$ to $D$;
\item if $s_i s_k$ is the red edge of $S$ incident to $s_k$, then add the red edges $s_{i+1/3} s_{k+2/3}$ and $s_{i+2/3} s_{k+1/3}$ to $D$;
\item order the resulting graph according to the order of the indices of its vertices.
\end{enumerate}
Let $S\subseteq A$ be a disjoint union of colour alternating cycles without neighbouring vertices, so that $F(S)$ exists. We say that a $2$-edge-coloured ordered graph $D$ is a \emph{going-down version} of $S$ if there exists a vertex $s_k$ that separates components of $F(S)$ such that $D$ and $D(S,s_k)$ are isomorphic $2$-edge-coloured ordered graphs.
\end{defn}
In other words, $D=D(S,s_k)$ consists of a copy of $S$ together with two added copies of the cycle containing $s_k$, where the edges incident to $s_k$ and its copies are rewired in a certain way. It is easy to see that every vertex of $D$ is still incident to exactly one edge of each colour, so $D$ is still a disjoint union of colour-alternating cycles. Note also that $D$ is an ordered subgraph of a consistently ordered $S(3).$ If $S$ contains no double edges, then neither does $D$.
\Cref{fig:goingdown} shows a going-down version $D = D(S, s_1)$ for $S$ on $\{s_1, \dots, s_4\}$ being again a colour-alternating $C_4$. Note that $F(D)$, shown in \Cref{fig:goingdownG}, contains two paths, marked as dotted and bold, that connect the two dashed parts of $F(D)$ that resemble the two disjoint cycles of $F(S),$ into a single cycle. We will show that this occurs in general.
\begin{figure}[ht]
\caption{A colour-alternating cycle $S$ and a going-down version of it $D(S,s_1).$}
\includegraphics[scale = 0.8]{2-factors_going_down.pdf}
\label{fig:goingdown}
\end{figure}
\begin{figure}[ht]
\caption{$2$-factors corresponding to $S$ and $D(S,s_1)$ given in \Cref{fig:goingdown}.}
\includegraphics[scale = 0.8]{2-factors_going_down_G.pdf}
\label{fig:goingdownG}
\end{figure}
\begin{lem}[Going down]\label{lem:goingdown}
Let $S \subseteq A$ be a disjoint union of colour-alternating cycles without neighbouring vertices and let $D$ be an ordered subgraph of $A$ without neighbouring vertices that is a going-down version of $S$. Then the $2$-factor $F(D) \subseteq G$ consists of one cycle less than $F(S)$.
\end{lem}
\begin{proof}
For an edge $e=v_kv_{k+1}\in H$ we let $v^{+}(e)=v_{k+1}$ and $v^{-}(e)=v_k.$
We denote the vertices of $S$ by $s_1, \dots, s_m$ where $D=D(S,s_1)$ and $s_1$ separates components of $F(S)$. We denote the vertices of $D$ by $d_1, \dots, d_m$ and $d_{4/3}, d_{5/3}, d_{j_1+1/3}, d_{j_1 + 2/3}, \dots, d_{j_k + 2/3}$ as they appear along $H$ such that $d_1, \dots, d_m$ make a copy of $S$ in which $d_1$ corresponds to $s_1$ and $d_1,d_{j_1}, \dots, d_{j_k}$ to the cycle $C = s_1 s_{j_1} \dots s_{j_k} s_1$ of $S.$
The vertices $v^+(d_{j_i})$ and $v^-(d_{j_i + 1/3})$ as well as the vertices $v^+(d_{j_i+1/3})$ and $v^-(d_{j_i+2/3})$ in $F(D)$ are connected by paths $P_i \subseteq H$ and $Q_i \subseteq H$ respectively for all $i \in \{1, \dots, k\}$.
If $C$ begins with a red edge then $$P:=v^+(d_1)v^+(d_{j_1})P_1v^-(d_{j_1+1/3})v^-(d_{j_2+1/3})P_2v^{+}(d_{j_2})\dots P_kv^-(d_{j_k + 1/3})v^-(d_{5/3})\in F(D),$$ where $v^+(d_1)v^+(d_{j_1})\in F(D)$ by \Cref{defn:goingdown} part 4; $v^-(d_{j_k + 1/3})v^-(d_{5/3})\in F(D)$ by part $3$ and the edges between the paths $P_i$ are in $F(D)$ by part 2, in the same way as in the going-up case. Similarly, $$Q:=v^+(d_{5/3})v^+(d_{j_1+1/3})Q_1v^-(d_{j_1+2/3})v^-(d_{j_2+2/3})Q_2v^{+}(d_{j_2+1/3})\dots Q_kv^-(d_{j_k +2/3})v^-(d_{1})\in F(D).$$
On the other hand, if $C$ begins with a blue edge, then we have
$$P:=v^-(d_{5/3})v^-(d_{j_1+1/3})P_1v^+(d_{j_1})v^+(d_{j_2})P_2\dots P_kv^+(d_{j_k})v^+(d_{1})\in F(D),$$
$$Q:=v^-(d_{1})v^-(d_{j_1+2/3})Q_1v^+(d_{j_1+1/3})v^+(d_{j_2+1/3})Q_2\dots Q_kv^+(d_{j_k+1/3})v^+(d_{5/3})\in F(D).$$
So in either case the path $P\subseteq F(D)$ contains $P_1, \dots, P_k$ and has endpoints $v^+(d_1),v^-(d_{5/3})$ while $Q \subseteq F(D)$ contains $Q_1, \dots, Q_k$ and has endpoints $v^+(d_{5/3}),v^-(d_1)$. For example in \Cref{fig:goingdownG}, the paths $P$ and $Q$ correspond to the dotted and the bold path respectively.
Our goal now is to show that $P$ and $Q$ connect two ``originally distinct'' components that are ``inherited'' from $F(S)$. Consider the graph $G'$ that is obtained from $G$ by deleting all the vertices of the paths $P_i$ and $Q_i$ (equivalently, all inner vertices of $P$ and $Q$) and adding the edges $S_{j_i} = v^-(d_{j_i}) v^+(d_{j_i + 2/3})$ for $i \in \{1, \dots, k\}$. Let $H'$ be the Hamilton cycle of $G'$ made of $H$ and the edges $S_{j_i}$, ordered according to the order of $G$. First, we claim that the map that sends $s_1$ to $d_{4/3}$, each $s_{j_i}$ to $S_{j_i}$ for $i \in \{1, \dots, k\}$, and every other vertex $s_i$ to $d_i$ is an order-preserving isomorphism from $S$ onto its image $S' \subseteq A' \coloneqq A(G', H')$. Indeed, by \Cref{defn:goingdown} parts 3 and 4, applied with $i\in\{1,k\}$: if $s_1s_{j_i}$ is red then $d_{4/3}d_{j_i+2/3}$ is a red edge of $A$, so $v^+(d_{4/3})v^+(d_{j_i+2/3})\in F(D)$, implying that $d_{4/3} S_{j_i}$ is red in $A'$; if $s_1s_{j_i}$ is blue then $d_{4/3}d_{j_i}$ is a blue edge of $A$, so $v^-(d_{4/3})v^-(d_{j_i})\in F(D)$, implying that $d_{4/3} S_{j_i}$ is blue in $A'.$
For $i\neq k$, the edge $S_{j_i} S_{j_{i+1}}$ has the same colour as $s_{j_i}s_{j_{i+1}}$ by \Cref{defn:goingdown} part 2, and for $s_i,s_j \notin C$ we know that $d_i d_j$ has the same colour as $s_is_j$ by part 1. Therefore, by \Cref{obs:samecomps}, $F(S')$ has the same number of components as $F(S)$. Since $s_1$ separates components of $F(S)$ we know that $d_{4/3}$ separates components of $F(S')$. This means in particular that $d_1$ and $d_{5/3}$ lie in two different cycles $C_1$ and $C_2$ of $F(S')$. Now, observe that we obtain $F(D)$ from $F(S')$ by deleting $d_1$ and $d_{5/3}$ and adding the paths $P$ and $Q$. However, since $P$ connects $v^+(d_1)$ and $v^-(d_{5/3})$ and $Q$ connects $v^+(d_{5/3})$ and $v^-(d_1)$, this process joins $C_1$ and $C_2$ into one big cycle and hence, $F(D)$ has exactly one component less than $F(S)$.
\end{proof}
\subsection{Completing the proof}\label{subs:complete}
We are now ready to put all the ingredients together in order to complete our proof of \Cref{thm:main} in the way that has already been outlined throughout the previous section.
\begin{proof}[Proof of \Cref{thm:main}]
Let $k$ be a positive integer and $\varepsilon$ a positive real number. Let $L=L(\varepsilon/2),K=K(\varepsilon/2,2^k)$ be the parameters coming from \Cref{lem:cycle-blow-up}. Let $N\ge \max(4/\varepsilon,K).$
Now, suppose that $G$ is a Hamiltonian graph on $n \geq N$ vertices with minimum degree $\delta(G) \geq \varepsilon n$. Let us fix a Hamilton cycle $H \subseteq G$, name the vertices of $G$ such that $H = v_1 v_2 \dots v_n v_1$ and assume that $G$ is ordered according to this labelling. Let $A = A(G,H)$ be the ordered, $2$-edge-coloured auxiliary graph corresponding to $G$ and $H$ according to \Cref{defn:aux}. We know by \Cref{deltaA} that $\delta_\nu(A) = \delta(G)-2 \ge \frac{\varepsilon}{2} n$ for $\nu \in \{1,2\}.$
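For completeness, the degree bound here uses only \Cref{deltaA} and the choice $N \ge 4/\varepsilon$: for $\nu \in \{1,2\}$,
$$\delta_\nu(A) \;=\; \delta(G)-2 \;\ge\; \varepsilon n - \frac{\varepsilon n}{2} \;=\; \frac{\varepsilon}{2}\,n,$$
since $n \ge N \ge 4/\varepsilon$ gives $2 \le \varepsilon n/2$.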
\Cref{lem:cycle-blow-up} shows that there is a $\mathcal{C}(2^k6^L)\subseteq A$ where $\mathcal{C}$ is a colour-alternating cycle of length at most $L$ without double edges. \Cref{lem:ordered} allows us to find a consistently ordered $\mathcal{C}(2^k3^L)$ as an ordered subgraph of $A.$ By removing every second vertex of $\mathcal{C}(2^k3^L)$ in $A$ we obtain a consistently ordered $\mathcal{C}'=\mathcal{C}(2^{k-1}3^L)$ that is an ordered subgraph of $A$ without neighbouring vertices. Applying \Cref{correspondence} to the copy of $\mathcal{C}$ inside $\mathcal{C}'$, we obtain a $2$-factor $F(\mathcal{C}) \subseteq G$. Let $\ell$ be the number of cycles of $F(\mathcal{C}).$ By \Cref{obs:no.cycles}, we know that $1 \leq \ell \leq L$.
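To keep track of the blow-up sizes used here and in the remainder of the proof, note that \Cref{lem:ordered} is applied with $t = 2^k3^L$, and
$$\bigl(2^k3^L\bigr)\cdot 2^{L} \;=\; 2^k6^L, \qquad \mathcal{C}\bigl(2^{k-\ell}\bigr) \subseteq \mathcal{C}\bigl(2^{k-1}3^L\bigr)=\mathcal{C}' \ \text{ for } \ell \ge 1, \qquad \mathcal{C}\bigl(3^{\ell-k}\bigr) \subseteq \mathcal{C}\bigl(3^{L}\bigr) \subseteq \mathcal{C}',$$
where the last chain uses $\ell - k \le L$, which follows from \Cref{obs:no.cycles}.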
Let us first assume that $k > \ell$. We find a sequence $S_0, S_1, \dots, S_{k-\ell}$ defined as follows: let $S_0 = \mathcal{C};$ given $S_{i-1}$ let $\mathcal{C}_{i-1}$ be an arbitrary cycle of $S_{i-1}$ and let $S_{i} = U(S_{i-1}, \mathcal{C}_{i-1})$. By construction, $S_i$ is again a disjoint union of colour-alternating cycles, without double edges, and is an ordered subgraph of $\mathcal{C}(2^i) \subseteq \mathcal{C}'$ (since by construction $S_i\subseteq S_{i-1}(2)$). Therefore, for all $i\le k-\ell$ there is an order-preserving embedding of $S_i$ into $A$ without neighbouring vertices. So, by \Cref{correspondence} and \Cref{lem:goingup} we deduce that $F(S_i)$ has one more cycle than $F(S_{i-1})$. In particular, the $2$-factor $F(S_{k-\ell}) \subseteq G$ consists of exactly $k$ components.
Let us now assume that $k < \ell$. Here, we find a sequence $S_0, S_1, \dots, S_{\ell - k}$ of disjoint unions of colour-alternating cycles that are ordered subgraphs of $A$ without neighbouring vertices such that $F(S_i)$ consists of $\ell-i$ cycles. Let $S_0 = \mathcal{C},$ and assume we are given $S_{i-1}$ for $i\le \ell-k$ with $F(S_{i-1})$ having $\ell-i+1 \ge k+1 \ge 2$ cycles. This means that $S_{i-1}$ has a vertex $v_{i-1}$ that separates components of $F(S_{i-1})$ by \Cref{obs:sepcomps}. We let $S_i = D(S_{i-1}, v_{i-1})$, which is a disjoint union of colour-alternating cycles, without double edges, and is an ordered subgraph of a consistently ordered $\mathcal{C}(3^i)$ (since by construction $S_i \subseteq S_{i-1}(3)$). Note that $\ell - k \leq L$ by \Cref{obs:no.cycles} and hence $\mathcal{C}(3^i) \subseteq \mathcal{C}(3^{\ell - k}) \subseteq \mathcal{C}'$, so we can find a copy of $S_i$ in $A$ without neighbouring vertices. By \Cref{lem:goingdown}, $F(S_i)$ has one less cycle than $F(S_{i-1})$, so exactly $\ell - i$ cycles. In particular, $F(S_{\ell - k})$ is a $2$-factor in $G$ with $k$ cycles, which concludes the proof.
\end{proof}
\section{Concluding remarks and open problems}
In this paper we showed that for a Hamiltonian graph, the minimum degree condition needed to guarantee a $2$-factor with $k$ cycles is sublinear in the number of vertices. The best known lower bound is still only a constant.
In the case of a $2$-factor with two components, the best bounds are given by Faudree et al.\ \cite{Faudree05}, who construct Hamiltonian graphs of minimum degree $4$ without a $2$-factor with $2$ components. In the case of $2$-factors with $k$ components, no constructions have been given previously, but it is easy to see that a minimum degree of at least $k+2$ is necessary:
\begin{prop}
There are arbitrarily large Hamiltonian graphs with minimum degree $k+1$ which do not have a $2$-factor with $k$ components.
\end{prop}
\begin{proof}
Let $G$ consist of a cycle $\mathcal{C}$ of length $n-k+1$ and an independent set $U$ of size $k-1$ with all the edges between $\mathcal{C}$ and $U$ added. It is easy to see that for $n\ge 2k$, $G$ is Hamiltonian and has minimum degree $k+1$. However, $G$ does not have a $2$-factor with $k$ components: a cycle avoiding $U$ could only use edges of $\mathcal{C}$ and would hence have to be all of $\mathcal{C}$, leaving $U$ uncovered, so in any $2$-factor of $G$ with more than one cycle every cycle uses a vertex of $U$, giving at most $|U|=k-1$ cycles.
\end{proof}
For fixed $k$, we do not know of any Hamiltonian graphs with non-constant minimum degree which do not have a $2$-factor with $k$ components. This indicates that the necessary minimum degree in \Cref{conj:main} may in fact be much smaller, perhaps even a constant (depending on $k$). A step in this direction was made by Pfender \cite{Pfender04} who showed that in the $k=2$ case, a Hamiltonian graph $G$ with minimum degree of $7$ contains a $2$-factor with $2$ cycles in a very special case when $G$ is claw-free.
If one takes greater care with various parameters in \Cref{sec:prelim}, one can show that a minimum degree of $\frac{Cn}{\sqrt[4]{\log \log n / (\log \log \log n)^2}}$ suffices for finding an ordered blow-up of a short cycle, so in particular this minimum degree is enough to find $2$-factors consisting of a fixed number of cycles. We believe that it would be messy but not too hard to improve this a little further, but reducing the minimum degree condition to $n^{1-\varepsilon}$ would require some new ideas. On the other hand, we do believe that our approach of finding alternating cycles in the auxiliary graph could still be useful in this case, but one needs to either find a better way of finding ordered blow-ups of short cycles or obtain a better understanding of how the number of cycles in $F(S)$ depends on the order and structure of a disjoint union of colour-alternating cycles $S.$ Another possibility is to augment the auxiliary graph in order to include edges that connect the front/back to the back/front vertex of two edges of the Hamilton cycle, which would allow us to obtain a $1$-to-$1$ correspondence between $2$-factors of $G$ and suitable structures in this new auxiliary graph.
Another way of saying that a graph is Hamiltonian is that it has a $2$-factor consisting of a single cycle. A possibly interesting further question which arises is whether knowing that $G$ contains a $2$-factor consisting of $\ell$ cycles already allows the minimum degree condition needed for having a $2$-factor with $k>\ell$ cycles to be weakened.
\textbf{Acknowledgements.} We are extremely grateful to the anonymous referees for their careful reading of the paper and many useful suggestions.
| { "arxiv_id": "1905.09729", "url": "https://arxiv.org/abs/1905.09729", "title": "2-factors with k cycles in Hamiltonian graphs", "subjects": "Combinatorics (math.CO)" } |
https://arxiv.org/abs/1306.3574 | Early stopping and non-parametric regression: An optimal data-dependent stopping rule | The strategy of early stopping is a regularization technique based on choosing a stopping time for an iterative algorithm. Focusing on non-parametric regression in a reproducing kernel Hilbert space, we analyze the early stopping strategy for a form of gradient-descent applied to the least-squares loss function. We propose a data-dependent stopping rule that does not involve hold-out or cross-validation data, and we prove upper bounds on the squared error of the resulting function estimate, measured in either the $L^2(P)$ or the $L^2(P_n)$ norm. These upper bounds lead to minimax-optimal rates for various kernel classes, including Sobolev smoothness classes and other forms of reproducing kernel Hilbert spaces. We show through simulation that our stopping rule compares favorably to two other stopping rules, one based on hold-out data and the other based on Stein's unbiased risk estimate. We also establish a tight connection between our early stopping strategy and the solution path of a kernel ridge regression estimator. |
\section{Introduction}
The phenomenon of overfitting is ubiquitous throughout statistics. It
is especially problematic in nonparametric problems, where some form
of regularization is essential to prevent overfitting. In the problem
of nonparametric regression, the most classical form of regularization
is that of Tikhonov regularization, where a quadratic smoothness
penalty is added to the least-squares loss. An alternative and
algorithmic approach to regularization is based on early stopping of
an iterative algorithm, such as gradient descent applied to the
unregularized loss function. The main advantage of early stopping for
regularization, as compared to penalized forms, is lower computational
complexity.
The idea of early stopping has a fairly lengthy history, dating back
to the 1970s in the context of the Landweber iteration. (For
instance, see the paper by Strand~\cite{Strand74}, with follow-up work
by Anderssen and Prenter~\cite{AndrerssenPrenter81} as well as
Wahba~\cite{Wahba87}.) Early stopping has also been widely used in
neural networks (e.g.,~\cite{MorganBourlard90}), in which stochastic
gradient descent is used to estimate the network parameters. Past work
provided intuitive arguments for the benefits of early stopping. It
was argued that each step of an iterative algorithm will reduce bias
but increase variance, so early stopping ensures the variance of the
estimator is not too high. However, prior to the 1990s, there had been
little theoretical justification for these claims. A more recent line
of work has provided theoretical justification for various types of
early stopping, including boosting algorithms
(e.g.,~\cite{BartlettTraskin07, BuhlmannYu03, Freund97, Jiang04,
Mason99, Yao05,ZhangYu05}), greedy methods~\cite{Barron08}, gradient
descent over reproducing kernel Hilbert spaces
(e.g.~\cite{Bauer07,Caponnetto06,CaponnettoYao06,DeVito10, Yao05}),
the conjugate gradient algorithm~\cite{BlanchKram10}, and the power
method for eigenvalue computation~\cite{Orecchia11}. Most relevant to
our work is the work of B\"{u}hlmann and Yu~\cite{BuhlmannYu03}, who
derived optimal mean-squared error bounds for $L^2$-boosting with
early stopping in the case of fixed design regression. However, these
optimal rates are based on an ``oracle'' stopping rule, one that
cannot be computed based on the data. Thus, their work left open the
following natural question: is there a data-dependent and easily
computable stopping rule that produces a minimax-optimal estimator?
The main contribution of this paper is to answer this question in the
affirmative for a certain class of non-parametric regression problems,
in which the underlying regression function belongs to a reproducing
kernel Hilbert space (RKHS). In this setting, a standard estimator is
the method of kernel ridge regression (e.g.,~\cite{Wahba}), which
minimizes a weighted sum of the least-squares loss with a squared
Hilbert norm penalty as a regularizer. Instead of a penalized form of
regression, we analyze early stopping of an iterative update that is
equivalent to gradient descent on the least-squares loss in an
appropriately chosen coordinate system. By analyzing the mean-squared
error of our iterative update, we derive a data-dependent stopping
rule that provides the optimal trade-off between the estimated bias
and variance at each iteration. In particular, our stopping rule is
based on the first time that a running sum of step-sizes after $t$
steps increases above the critical trade-off between bias and
variance. For Sobolev spaces and other types of kernel classes, we
show that the function estimate obtained by this stopping rule
achieves minimax-optimal estimation rates in both the empirical and
population norms. Importantly, our stopping rule does not require the
use of cross-validation or hold-out data.
In more detail, our first main result (Theorem~\ref{ThmMain}) provides
bounds on the squared prediction error for all iterates prior to the
stopping time, and a lower bound on the squared error for all
iterations after the stopping time. These bounds are applicable to
the case of fixed design, whereas our second main result
(Theorem~\ref{ThmRandDesign}) provides similar types of upper bounds
for randomly sampled covariates. These bounds are stated in terms of
the squared $\ensuremath{L^2(\mathbb{P})}$ norm, as opposed to the prediction error or $\ensuremath{{L^2(\mathbb{P}_n)}}$
(semi)norm defined by the data. Both of these theorems apply to any
reproducing kernel, and lead to specific predictions for different
kernel classes, depending on their eigendecay. For the case of low
rank kernel classes and Sobolev spaces, we prove that our stopping
rule yields a function estimate that achieves the minimax optimal rate
(up to a constant pre-factor), so that the bounds from our analysis
are essentially unimprovable. Our proof is based on a combination of
analytic techniques~\cite{BuhlmannYu03} with techniques from empirical
process theory~\cite{vandeGeer}. We complement these theoretical
results with simulation studies that compare its performance to other
rules, in particular a method using hold-out data to estimate the
risk, as well as a second method based on Stein's Unbiased Risk
Estimate (SURE). In our experiments for first-order Sobolev kernels,
we find that our stopping rule performs favorably compared to these
alternatives, especially as the sample size grows. Finally, in
Section~\ref{SecRidgeCompare}, we provide an explicit link between our
early stopping strategy and the kernel ridge regression estimator.
\section{Background and problem formulation}
\label{SecProbSetup}
We begin by introducing some background on non-parametric regression
and reproducing kernel Hilbert spaces, before turning to a precise
formulation of the problem studied in this paper.
\subsection{Non-parametric regression and kernel classes}
\label{SecRKHS}
Suppose that our goal is to use a covariate $X \in \ensuremath{\mathcal{X}}$ to predict
a real-valued response $Y \in \ensuremath{\mathbb{R}}$. We do so by using a function
$f: \ensuremath{\mathcal{X}} \rightarrow \ensuremath{\mathbb{R}}$, where the value $f(x)$ represents our
prediction of $Y$ based on the realization $X = x$. In terms of
mean-squared error, the optimal choice is the \emph{regression
function} defined by \mbox{$\ensuremath{f^*}(x) \ensuremath{: \, = } \ensuremath{\mathbb{E}}[Y \mid x]$.} In the
problem of non-parametric regression with random design, we observe
$\ensuremath{n}$ samples of the form \mbox{$\{ (\ensuremath{x_i}, \ensuremath{\y_i}), i = 1, \ldots,
\ensuremath{n} \}$,} each drawn independently from some joint distribution
on the Cartesian product $\ensuremath{\mathcal{X}} \times \ensuremath{\mathbb{R}}$, and our goal is to
estimate the regression function $\ensuremath{f^*}$. Equivalently, we observe
samples of the form
\begin{align}
\label{EqnLinObs}
\ensuremath{\y_i} & = \ensuremath{f^*}(\ensuremath{x_i}) + \ensuremath{w_i}, \quad \mbox{for $i = 1,2, \ldots,
\ensuremath{n}$,}
\end{align}
where $\ensuremath{w_i} \ensuremath{: \, = } \ensuremath{\y_i} - \ensuremath{f^*}(\ensuremath{x_i})$ is a zero-mean noise random variable.
Throughout this paper, we assume that the random variables $\ensuremath{w_i}$ are
\emph{sub-Gaussian} with parameter $\sigma$, meaning that
\begin{align}
\label{EqnDefnSubGaussian}
\ensuremath{\mathbb{E}}[e^{t w_i}] & \leq e^{t^2 \sigma^2/2} \quad \mbox{for all $t \in
\ensuremath{\mathbb{R}}$.}
\end{align}
For instance, this sub-Gaussian condition is satisfied for normal
variates $w_i \sim N(0, \sigma^2)$, but it also holds for various
non-Gaussian random variables. Parts of our analysis also apply to
the fixed design setting, in which we condition on a particular
realization $\{x_i\}_{i=1}^\ensuremath{n}$ of the covariates.
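As a quick numerical sanity check of the sub-Gaussian condition~\eqref{EqnDefnSubGaussian} (our illustration, not part of the paper's analysis), consider Rademacher noise $w_i = \pm 1$ with equal probability: its moment generating function is $\ensuremath{\mathbb{E}}[e^{t w_i}] = \cosh(t)$, and the classical inequality $\cosh(t) \leq e^{t^2/2}$ shows it is sub-Gaussian with $\sigma = 1$.

```python
import math

# Deterministic check of the sub-Gaussian MGF bound for Rademacher noise:
# for w = +/-1 with probability 1/2 each, E[exp(t*w)] = cosh(t), and
# cosh(t) <= exp(t**2 / 2) holds for every real t (sigma = 1).
for t in [-3.0, -1.0, -0.1, 0.0, 0.5, 2.0]:
    mgf = math.cosh(t)              # exact MGF of Rademacher noise at t
    bound = math.exp(t ** 2 / 2.0)  # sub-Gaussian bound with sigma = 1
    assert mgf <= bound + 1e-12
```

The same template can be used to probe other bounded noise distributions, which are always sub-Gaussian with $\sigma$ proportional to their range.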
In order to estimate
the regression function, we make use of the machinery of reproducing
kernel Hilbert spaces~\cite{Aron50,Wahba,Gu01}. Using $\ensuremath{\mathbb{P}}$ to
denote the marginal distribution of the covariates, we consider a
Hilbert space \mbox{$\ensuremath{\mathcal{H}} \subset \ensuremath{L^2(\mathbb{P})}$,} meaning a family of
functions $g: \ensuremath{\mathcal{X}} \rightarrow \ensuremath{\mathbb{R}}$, with $\|g\|_{\ensuremath{L^2(\mathbb{P})}} < \infty$,
and an associated inner product $\inprod{\cdot}{\cdot}_\ensuremath{\mathcal{H}}$ under
which $\ensuremath{\mathcal{H}}$ is complete. The space $\ensuremath{\mathcal{H}}$ is a reproducing kernel
Hilbert space (RKHS) if there exists a symmetric function $\ensuremath{\mathbb{K}}: \ensuremath{\mathcal{X}}
\times \ensuremath{\mathcal{X}} \rightarrow \ensuremath{\mathbb{R}}_+$ such that: (a) for each $x
\in \ensuremath{\mathcal{X}}$, the function $\ensuremath{\mathbb{K}}(\cdot, x)$ belongs to the Hilbert
space $\ensuremath{\mathcal{H}}$, and (b) we have the reproducing relation $f(x) =
\inprod{f}{\ensuremath{\mathbb{K}}(\cdot, x)}_{\ensuremath{\mathcal{H}}}$ for all $f \in \ensuremath{\mathcal{H}}$. Any such
kernel function must be positive semidefinite; under suitable
regularity conditions, Mercer's theorem~\cite{Mercer09} guarantees
that the kernel has an eigen-expansion of the form
\begin{align}
\label{EqnMercer}
\ensuremath{\mathbb{K}}(x, x') & = \sum_{k=1}^\infty \ensuremath{\lambda}_k \phi_k(x)
\phi_k(x'),
\end{align}
where $\lambda_1 \geq \lambda_2 \geq \lambda_3 \geq \ldots \geq 0$ is a non-increasing sequence of non-negative eigenvalues, and $\{\phi_k\}_{k=1}^\infty$
are the associated eigenfunctions, taken to be orthonormal in $\ensuremath{L^2(\mathbb{P})}$.
The decay rate of the eigenvalues will play a crucial role in our
analysis.
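To illustrate the role of the eigendecay (a hypothetical numerical aside, not from the paper), one can discretize a kernel on an equidistant design and inspect the eigenvalues of the matrix with entries $\ensuremath{\mathbb{K}}(x_i, x_j)/n$. For the first-order Sobolev kernel $\ensuremath{\mathbb{K}}(x,x') = \min\{x, x'\}$ on $[0,1]$ with the uniform distribution, the population eigenvalues are known to be $\ensuremath{\lambda}_k = \big(\tfrac{2}{(2k-1)\pi}\big)^2$, which decay polynomially like $k^{-2}$, and the discretized eigenvalues track them closely.

```python
import numpy as np

# Empirical eigen-decay of the first-order Sobolev kernel min(x, x') on [0, 1].
# Population eigenvalues (uniform design): lambda_k = (2 / ((2k - 1) * pi))**2.
n = 200
x = np.arange(1, n + 1) / n                 # equidistant design points
K = np.minimum.outer(x, x) / n              # matrix with entries K(x_i, x_j) / n
emp = np.sort(np.linalg.eigvalsh(K))[::-1]  # its eigenvalues, in decreasing order
pop = (2.0 / ((2 * np.arange(1, 6) - 1) * np.pi)) ** 2
print("empirical:", np.round(emp[:5], 4))
print("population:", np.round(pop, 4))
```

The printed pairs agree to a few decimal places, and both sequences drop roughly like $1/k^2$, the polynomial decay regime analyzed later in the paper.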
Since the eigenfunctions $\{\phi_k\}_{k=1}^\infty$ form an orthonormal
basis, any function $f \in \ensuremath{\mathcal{H}}$ has an expansion of the form $f(x) =
\sum_{k=1}^{\infty} \sqrt{\ensuremath{\lambda}_k} a_{k} \phi_k(x)$, where
for all $k$ such that $\ensuremath{\lambda}_k > 0$, the coefficients
\begin{align*}
a_k & \ensuremath{: \, = } \frac{1}{\sqrt{\ensuremath{\lambda}_k}} \inprod{f}{\phi_k}_{\ensuremath{L^2(\mathbb{P})}} =
\int_\ensuremath{\mathcal{X}} f(x) \phi_k(x) \, d\, \ensuremath{\mathbb{P}}(x)
\end{align*}
are rescaled versions of the generalized Fourier
coefficients. Associated with any two functions in $\ensuremath{\mathcal{H}}$---where
\mbox{$f = \sum_{k = 1}^\infty \sqrt{\ensuremath{\lambda}_k} a_k \phi_k$} and
\mbox{$g = \sum_{k =1}^\infty \sqrt{\ensuremath{\lambda}_k} b_k \phi_k$}---are two
distinct inner products. The first is the usual inner product in the
space $\ensuremath{L^2(\mathbb{P})}$---namely, \mbox{$\inprod{f}{g}_{\ensuremath{L^2(\mathbb{P})}} \ensuremath{: \, = } \int_\ensuremath{\mathcal{X}}
f(x) g(x) \, d\, \ensuremath{\mathbb{P}}(x)$.} By Parseval's theorem, it has an
equivalent representation in terms of the expansion coefficients and
kernel eigenvalues---that is,
\begin{align*}
\inprod{f}{g}_{\ensuremath{L^2(\mathbb{P})}} & = \sum_{k=1}^\infty \ensuremath{\lambda}_k a_k b_k.
\end{align*}
The second inner product, denoted $\inprod{f}{g}_{\ensuremath{\mathcal{H}}}$, is the one
that defines the Hilbert space; it can be written in terms of the
expansion coefficients as
\begin{align*}
\inprod{f}{g}_\ensuremath{\mathcal{H}} & = \sum_{k=1}^\infty {a_k
b_k}.
\end{align*}
Using this definition, the Hilbert ball of radius $1$ for the Hilbert
space $\ensuremath{\mathcal{H}}$ with eigenvalues $\{\ensuremath{\lambda}_k\}_{k=1}^\infty$ and
eigenfunctions $\{\phi_k\}_{k=1}^\infty$ takes the form
\begin{align}
\ensuremath{\mathbb{B}}_\ensuremath{\mathcal{H}}(1) & \ensuremath{: \, = } \big \{ f = \sum_{k=1}^{\infty} \sqrt{\ensuremath{\lambda}_k}
b_{k} \phi_k \quad \mbox{for some} \quad \sum_{k=1}^{\infty} {b_k^2}
\leq 1 \big\}.
\end{align}
The class of reproducing kernel Hilbert spaces contains many
interesting classes that are widely used in practice, including
polynomials of degree $d$, Sobolev spaces with smoothness $\ensuremath{\nu}$,
and Gaussian kernels. For more background and examples on reproducing
kernel Hilbert spaces, we refer the reader to various standard
references~\cite{Aron50,Saitoh88,Scholkopf02,Wahba,Weinert82}.
Throughout this paper, we assume that any function $f$ in the unit
ball of the Hilbert space is uniformly bounded, meaning that there is
some constant $\ensuremath{B} < \infty$ such that
\begin{align}
\label{EqnCond}
\|f\|_{\infty} & \ensuremath{: \, = } \sup_{x \in \ensuremath{\mathcal{X}}} |f(x)| \leq \ensuremath{B} \qquad
\mbox{for all $f \in \ensuremath{\mathbb{B}}_\ensuremath{\mathcal{H}}(1)$.}
\end{align}
This boundedness condition~\eqref{EqnCond} is satisfied for any RKHS
with a kernel such that $\sup_{x \in \ensuremath{\mathcal{X}}} \ensuremath{\mathbb{K}}(x,x) \leq \ensuremath{B}$.
Kernels of this type include the Gaussian and Laplacian kernels, the
kernels underlying Sobolev and other spline classes, as well as any
trace class kernel with trigonometric eigenfunctions. The
boundedness condition~\eqref{EqnCond} is quite standard in
non-asymptotic analysis of non-parametric regression
procedures~\cite[e.g.]{vandeGeer}.
\subsection{Gradient update equation}
\label{SecGradStep}
We now turn to the form of the gradient update that we study in this
paper. Given the samples \mbox{$\{(x_i, y_i)\}_{i=1}^\ensuremath{n}$},
consider minimizing the least-squares loss function
\begin{align}
\ensuremath{\mathcal{L}}(f) & \ensuremath{: \, = } \frac{1}{2 \ensuremath{n}} \sum_{i=1}^\ensuremath{n} {\big (\ensuremath{\y_i}
- f(x_i) \big)^2}
\end{align}
over some subset of the Hilbert space $\ensuremath{\mathcal{H}}$. By the representer
theorem~\cite{Kimeldorf71}, it suffices to restrict attention to
functions $f$ belonging to the span of the kernel functions $\{
\ensuremath{\mathbb{K}}(\cdot, x_i), i = 1, \ldots, \ensuremath{n} \}$. Accordingly, we adopt
the parameterization
\begin{align}
\label{EqnRepresenter}
f(\cdot) & = \frac{1}{\sqrt{n}} \sum_{i=1}^{n}{\ensuremath{\omega}_i
\ensuremath{\mathbb{K}}(\cdot ,x_i)},
\end{align}
for some coefficient vector $\ensuremath{\omega} \in \ensuremath{\mathbb{R}}^\ensuremath{n}$. Here the
rescaling by $1/\sqrt{\ensuremath{n}}$ is for later theoretical convenience.
Our gradient descent procedure is based on a parameterization of the
least-squares loss that involves the \emph{empirical kernel matrix}
\mbox{$\ensuremath{K} \in \mathbb{R}^{n \times n}$} with entries
\begin{equation}
\label{EqnDefnEmpiricalKernel}
[\ensuremath{K}]_{ij} = \frac{1}{n}\ensuremath{\mathbb{K}}(x_i, x_j) \qquad \mbox{for $i, j = 1,
2, \ldots, \ensuremath{n}$.}
\end{equation}
For any positive semidefinite kernel function, this matrix must be
positive semidefinite, and so has a unique symmetric square root
denoted by $\ensuremath{\sqrt{\EmpKer}}$. Introducing the convenient shorthand
\mbox{$\ensuremath{y_1^\numobs} \ensuremath{: \, = } \big(y_1 \; y_2 \; \cdots y_\ensuremath{n} \big) \in
\ensuremath{\mathbb{R}}^\ensuremath{n}$,} we can then write the least-squares loss in the
form
\begin{align*}
\ensuremath{\mathcal{L}}(\ensuremath{\omega}) & = \frac{1}{2 \ensuremath{n}} \| \ensuremath{y_1^\numobs} -
\sqrt{\ensuremath{n}} \ensuremath{K} \ensuremath{\omega} \|_2^2.
\end{align*}
A direct approach would be to perform gradient descent on this form of
the least-squares loss. For our purposes, it turns out to be more
natural to perform gradient descent in the transformed co-ordinate
system $\BVEC{} = \ensuremath{\sqrt{\EmpKer}} \, \ensuremath{\omega}$. Some straightforward
calculations (see Appendix~\ref{AppGrad} for details) yield that the
gradient descent algorithm in this new co-ordinate system generates a
sequence of vectors $\{\BVEC{\ensuremath{t}}\}_{\ensuremath{t}=0}^\infty$ via the
recursion
\begin{align}
\label{EqnGradBvec}
\BVEC{\ensuremath{t}+1} & = \BVEC{\ensuremath{t}} - \Step{\ensuremath{t}} \Big( \ensuremath{K} \,
\BVEC{\ensuremath{t}} - \frac{1}{\sqrt{\ensuremath{n}}} \ensuremath{\sqrt{\EmpKer}} \, \ensuremath{y_1^\numobs} \Big),
\end{align}
where $\{\Step{\ensuremath{t}}\}_{\ensuremath{t}=0}^\infty$ is a sequence of positive
step sizes (to be chosen by the user). We assume throughout that the
gradient descent procedure is initialized with $\BVEC{0} = 0$.
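The recursion~\eqref{EqnGradBvec} is straightforward to implement. The following is a minimal sketch (variable and function names are ours), using the symmetric square root of the empirical kernel matrix and a constant valid step size $\alpha = \min\{1, 1/\ensuremath{\widehat{\ensuremath{\lambda}}}_1\}$.

```python
import numpy as np

# Sketch of the gradient recursion in the transformed coordinates:
#   beta_{t+1} = beta_t - alpha_t * (K @ beta_t - sqrt(K) @ y / sqrt(n)),
# initialized at beta_0 = 0, with a constant valid step size.
def gradient_iterates(K, y, num_steps):
    n = K.shape[0]
    evals, evecs = np.linalg.eigh(K)
    evals = np.clip(evals, 0.0, None)            # guard tiny negative round-off
    K_half = (evecs * np.sqrt(evals)) @ evecs.T  # symmetric square root of K
    step = min(1.0, 1.0 / max(evals.max(), 1e-12))
    beta = np.zeros(n)
    iterates = [beta.copy()]
    for _ in range(num_steps):
        beta = beta - step * (K @ beta - K_half @ y / np.sqrt(n))
        iterates.append(beta.copy())
    return iterates
```

Because the step size is at most $1/\ensuremath{\widehat{\ensuremath{\lambda}}}_1$, each eigen-component of the gradient shrinks by a factor in $[0, 1]$ per iteration, so the gradient norm is non-increasing along the path.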
The parameter estimate $\BVEC{\ensuremath{t}}$ at iteration $t$ defines a
function estimate $\FUNIT{\ensuremath{t}}$ in the following way. We first
compute\footnote{If the empirical matrix $\ensuremath{K}$ is not invertible,
then we use the pseudoinverse. Note that it may appear as though a
matrix inversion is required to estimate $\ensuremath{\omega}^{\ensuremath{t}}$ for
each $\ensuremath{t}$ which is computationally intensive. However, the
weights $\ensuremath{\omega}^\ensuremath{t}$ may be computed directly via the
iteration $\ensuremath{\omega}^{\ensuremath{t}+1} = \ensuremath{\omega}^{\ensuremath{t}} -
\Step{\ensuremath{t}} \ensuremath{K} (\ensuremath{\omega}^{\ensuremath{t}} -
\frac{\ensuremath{y_1^\numobs}}{\sqrt{\ensuremath{n}}} )$. Nevertheless, the equivalent
update~\eqref{EqnGradBvec} is more convenient for our analysis.} the
weight vector $\ensuremath{\omega}^\ensuremath{t} = \sqrt{\ensuremath{K}^{-1}} \;
\BVEC{\ensuremath{t}}$, which then defines the function estimate
$\FUNIT{\ensuremath{t}}(\cdot) = \frac{1}{\sqrt{\ensuremath{n}}} \sum_{i=1}^\ensuremath{n}
\ensuremath{\omega}^\ensuremath{t}_i \ensuremath{\mathbb{K}}(\cdot, x_i)$ as before. In this paper, our
goal is to study how the sequence $\{\FUNIT{\ensuremath{t}}\}_{\ensuremath{t}=0}^\infty$
evolves as an approximation to the true regression function $\ensuremath{f^*}$.
We measure the error in two different ways: the $L^2(\ensuremath{\mathbb{P}}_\ensuremath{n})$
norm
\begin{align}
\label{EqnDefnEllTwoEmp}
\|f^\ensuremath{t} - \ensuremath{f^*}\|_\ensuremath{n}^2 & \ensuremath{: \, = } \frac{1}{\ensuremath{n}}
\sum_{i=1}^\ensuremath{n} \big( f^\ensuremath{t}(x_i) - \ensuremath{f^*}(x_i) \big)^2
\end{align}
compares the functions only at the observed design points, whereas the
$L^2(\ensuremath{\mathbb{P}})$-norm
\begin{align}
\label{EqnDefnEllTwoPop}
\|f^\ensuremath{t} - \ensuremath{f^*}\|_2^2 & \ensuremath{: \, = } \ensuremath{\mathbb{E}} \Big[ \big(f^\ensuremath{t}(X) -
\ensuremath{f^*}(X) \big)^2 \Big]
\end{align}
corresponds to the usual mean-squared error.
\subsection{Overfitting and early stopping}
In order to illustrate the phenomenon of interest in this paper, we
performed some simulations on a simple problem. In particular, we
formed $\ensuremath{n} = 100$ observations of the form $y_i =
\ensuremath{f^*}(x_i) + w_i$, where the $w_i \sim N(0, 1)$ are i.i.d., using the
fixed design $x_i = i/\ensuremath{n}$ for $i= 1, \ldots, \ensuremath{n}$. We then
implemented the gradient descent update~\eqref{EqnGradBvec} with
initialization $\BVEC{0} = 0$ and constant step sizes $\STEP{\ensuremath{t}} =
0.25$. We performed this experiment with the regression function
$\ensuremath{f^*}(x) = |x-1/2| - 1/2$, and two different choices of kernel
functions. The kernel $\ensuremath{\mathbb{K}}(x, x') = \min \{x, x'\}$ on the unit
square $[0,1] \times [0,1]$ generates an RKHS of Lipschitz functions,
whereas the Gaussian kernel $\ensuremath{\mathbb{K}}(x, x') = \exp(-\frac{1}{2} (x -
x')^2)$ generates a smoother class of infinitely differentiable
functions.
Figure~\ref{FigPath} provides plots of the squared prediction error
$\|\FUNIT{\ensuremath{t}} - \ensuremath{f^*}\|_\ensuremath{n}^2$ as a function of the iteration
number $\ensuremath{t}$. For both kernels, the prediction error decreases
fairly rapidly, reaching a minimum before or around $T \approx 20$
iterations, before then beginning to increase.
\begin{figure}[h]
\begin{center}
\begin{tabular}{ccc}
\widgraph{.45\textwidth}{fig_sobolevone_earlypath.eps} & &
\widgraph{.45\textwidth}{fig_gauss_earlypath.eps} \\
(a) & & (b)
\end{tabular}
\caption{Behavior of gradient descent update~\eqref{EqnGradBvec} with
constant step size \mbox{$\STEP{} = 0.25$} applied to least-squares
loss with $\ensuremath{n} = 100$ with equi-distant design points $x_i =
i/\ensuremath{n}$ for $i = 1, \ldots, \ensuremath{n}$, and regression function
$\ensuremath{f^*}(x) = |x-1/2| - 1/2$. Each panel plots the
$L^2(\ensuremath{\mathbb{P}}_\ensuremath{n})$ error $\|\FUNIT{\ensuremath{t}} - \ensuremath{f^*}\|_\ensuremath{n}^2$
as a function of the iteration number $\ensuremath{t} = 1, 2, \ldots, 100$.
(a) For the first-order Sobolev kernel $\ensuremath{\mathbb{K}}(x, x') = \min\{x,
x'\}$. (b) For the Gaussian kernel $\ensuremath{\mathbb{K}}(x, x') = \exp(-\frac{1}{2}
(x-x')^2)$. }
\label{FigPath}
\end{center}
\end{figure}
As the analysis of this paper will clarify, too many iterations lead
to fitting the noise in the data (i.e., the additive perturbations
$w_i$), as opposed to the underlying function $\ensuremath{f^*}$. In a
nutshell, the goal of this paper is to quantify precisely the meaning
of ``too many'' iterations, and in a data-dependent and easily
computable manner.
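The experiment for the first-order Sobolev kernel can be re-created with a short script (our sketch; the random seed and iteration horizon are our own choices, so the exact curve depends on the noise realization). The fitted values at the design points satisfy $\FUNIT{t}(x_j) = \sqrt{n}\,(\ensuremath{K} \ensuremath{\omega}^t)_j = \sqrt{n}\,(\ensuremath{\sqrt{\EmpKer}}\, \BVEC{t})_j$, so the empirical error is cheap to track along the path.

```python
import numpy as np

# Re-creation of the overfitting experiment: n = 100 equidistant points,
# f*(x) = |x - 1/2| - 1/2, N(0, 1) noise, kernel min(x, x'), step size 0.25.
# The empirical error ||f^t - f*||_n^2 falls, bottoms out, then rises again.
rng = np.random.default_rng(0)
n = 100
x = np.arange(1, n + 1) / n
fstar = np.abs(x - 0.5) - 0.5
y = fstar + rng.standard_normal(n)

K = np.minimum.outer(x, x) / n
evals, evecs = np.linalg.eigh(K)
K_half = (evecs * np.sqrt(np.clip(evals, 0.0, None))) @ evecs.T

step = 0.25
beta = np.zeros(n)
errors = []
for t in range(1, 501):
    beta = beta - step * (K @ beta - K_half @ y / np.sqrt(n))
    fitted = np.sqrt(n) * (K_half @ beta)   # f^t evaluated at the design points
    errors.append(np.mean((fitted - fstar) ** 2))
print(f"min error {min(errors):.4f} at t = {int(np.argmin(errors)) + 1}")
```

Plotting `errors` against the iteration number reproduces the qualitative U-shape of Figure~\ref{FigPath}(a): a rapid initial decrease followed by a slow increase as the iterates begin to fit the noise.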
\section{Main results and their consequences}
\label{SecMain}
In more detail, our main contribution is to formulate a data-dependent
stopping rule, meaning a mapping from the data $\{(x_i,
y_i)\}_{i=1}^\ensuremath{n}$ to a positive integer $\ensuremath{\widehat{T}}$, such that the two
forms of prediction error $\|\FUNIT{\ensuremath{\widehat{T}}} - \ensuremath{f^*}\|_\ensuremath{n}$ and
$\|\FUNIT{\ensuremath{\widehat{T}}} - \ensuremath{f^*}\|_2$ are minimal. In our formulation of
such a stopping rule, two quantities play an important role: first,
the \emph{running sum} of the step sizes
\begin{align}
\RUNSUM{t} & \ensuremath{: \, = } \sum_{\tau=0}^{t-1}{\Step{\tau}},
\end{align}
and secondly, the eigenvalues $\ensuremath{\widehat{\ensuremath{\lambda}}}_1 \geq \ensuremath{\widehat{\ensuremath{\lambda}}}_2 \geq
\cdots \geq \ensuremath{\widehat{\ensuremath{\lambda}}}_\ensuremath{n} \geq 0$ of the empirical kernel matrix
$\ensuremath{K}$ previously defined~\eqref{EqnDefnEmpiricalKernel}. The
kernel matrix and hence these eigenvalues are computable from the
data. We also note that there is a large body of work on fast
computation of kernel eigenvalues (e.g., see the
paper~\cite{DrinMah05} and references therein).
\subsection{Stopping rules and general error bounds}
Our stopping rule involves the use of a model complexity measure,
familiar from past work on uniform laws over kernel
classes~\cite{Bartlett02, Mendelson02}, known as the local empirical
Rademacher complexity. For the kernel classes studied in this paper,
it takes the form
\begin{align}
\label{EqnDefnKernCompEmp}
\ensuremath{\widehat{\Rad}}_{\ensuremath{K}}(\varepsilon) & \ensuremath{: \, = } \biggr[ \frac{1}{n}
\sum_{i=1}^n \min \big \{ \ensuremath{\widehat{\ensuremath{\lambda}}}_i, \varepsilon^2 \big \}
\biggr]^{1/2}.
\end{align}
For a given noise variance $\sigma > 0$, a closely related
quantity---one of central importance to our analysis---is the
\emph{critical empirical radius} $\ensuremath{\widehat{\varepsilon}_\numobs} > 0$, defined to be the
smallest positive solution to the inequality
\begin{align}
\label{EqnDefnEmpCrit}
\ensuremath{\widehat{\Rad}}_{\ensuremath{K}}(\varepsilon ) & \leq \varepsilon^2/(2 e \sigma).
\end{align}
The existence and uniqueness of $\ensuremath{\widehat{\varepsilon}_\numobs}$ is guaranteed for any
reproducing kernel Hilbert space; see Appendix~\ref{AppRade} for
details. As clarified in our proof, this inequality plays a key role
in trading off the bias and variance in a kernel regression estimate.
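Both quantities are cheap to compute from the eigenvalues of the empirical kernel matrix. The following sketch (function names ours) evaluates $\ensuremath{\widehat{\Rad}}_{\ensuremath{K}}(\varepsilon)$ directly from the definition~\eqref{EqnDefnKernCompEmp} and locates $\ensuremath{\widehat{\varepsilon}_\numobs}$ by bisection, exploiting the fact that $\ensuremath{\widehat{\Rad}}_{\ensuremath{K}}(\varepsilon)/\varepsilon^2$ is non-increasing, so the crossing point of inequality~\eqref{EqnDefnEmpCrit} is unique.

```python
import numpy as np

# Local empirical Rademacher complexity from the empirical kernel eigenvalues:
#   R_hat_K(eps) = sqrt( (1/n) * sum_i min(lambda_hat_i, eps**2) ).
def local_rademacher(eigs, eps):
    return np.sqrt(np.mean(np.minimum(eigs, eps ** 2)))

# Smallest eps > 0 with R_hat_K(eps) <= eps**2 / (2 * e * sigma), by bisection:
# the left side grows like eps near zero while the right side grows like eps**2,
# so the sign of the difference changes exactly once.
def critical_radius(eigs, sigma, lo=1e-8, hi=10.0, iters=200):
    crit = lambda e: local_rademacher(eigs, e) - e ** 2 / (2 * np.e * sigma)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if crit(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return hi
```

For the first-order Sobolev kernel on $100$ equidistant points with $\sigma = 1$, this returns a radius of moderate size (well below $1$), in line with the slow eigendecay of that kernel.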
Our stopping rule is defined in terms of an analogous inequality that
involves the running sum \mbox{$\RUNSUM{\ensuremath{t}} = \sum_{\tau =
0}^{t-1}{\Step{\tau}}$} of the step sizes. Throughout this paper,
we assume that the step sizes are chosen to satisfy the following
properties:
\begin{itemize}
\item Boundedness: $0 \; \leq \; \Step{\tau} \; \leq \; \min \{1,
1/\ensuremath{\widehat{\ensuremath{\lambda}}}_1 \}$ for all $\tau = 0, 1, 2, \ldots$.
\item Non-increasing: $\Step{\tau+1} \leq \Step{\tau}$ for all $\tau =
0, 1, 2, \ldots$.
\item Infinite travel: the running sum $\RUNSUM{\ensuremath{t}} =
\sum_{\tau=0}^{\ensuremath{t}-1} \STEP{\tau}$ diverges as $\ensuremath{t} \rightarrow
+\infty$.
\end{itemize}
We refer to any sequence $\{\STEP{\tau}\}_{\tau=0}^\infty$ that
satisfies these conditions as a \emph{valid stepsize sequence}. We
then define the \emph{stopping time}
\begin{align}
\label{EqnStoppingRule}
\ensuremath{\widehat{T}} & \ensuremath{: \, = } \arg \min \biggr \{ \ensuremath{t} \in \ensuremath{\mathbb{N}} \, \mid
\ensuremath{\widehat{\Rad}}_{\ensuremath{K}} \big(1/\sqrt{\RUNSUM{\ensuremath{t}}}\big) > (2 e \sigma
\RUNSUM{\ensuremath{t}})^{-1} \biggr \} - 1.
\end{align}
As discussed in Appendix~\ref{AppRade}, the integer $\ensuremath{\widehat{T}}$ belongs to
the interval $[0, \infty)$ and is unique for any valid stepsize
sequence. As will be clarified in our proof, the intuition
underlying the stopping rule~\eqref{EqnStoppingRule} is that the sum
of the step-sizes $\RUNSUM{\ensuremath{t}}$ acts as a tuning parameter that
controls the bias-variance tradeoff. The stated choice of
$\widehat{T}$ optimizes this trade-off.
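In code, the stopping rule~\eqref{EqnStoppingRule} amounts to tracking the running sum of the step sizes and checking the inequality at each iteration. A sketch (names ours, assuming a precomputed vector of empirical kernel eigenvalues):

```python
import numpy as np

# T_hat is one less than the first t at which
#   R_hat_K(1 / sqrt(eta_t)) > (2 * e * sigma * eta_t)**(-1),
# where eta_t is the running sum of the step sizes after t steps.
def stopping_time(eigs, steps, sigma, t_max=10_000):
    eta = 0.0
    for t, alpha in zip(range(1, t_max + 1), steps):
        eta += alpha                    # running sum eta_t of the step sizes
        rad = np.sqrt(np.mean(np.minimum(eigs, 1.0 / eta)))  # R_hat_K(1/sqrt(eta_t))
        if rad > 1.0 / (2 * np.e * sigma * eta):
            return t - 1
    return t_max
```

Since $\ensuremath{\widehat{\Rad}}_{\ensuremath{K}}$ only involves the eigenvalues, the per-iteration cost of the check is $O(n)$ once the eigendecomposition has been computed.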
The following result applies to any sequence $\{ \FUNIT{\ensuremath{t}}
\}_{\ensuremath{t}=0}^\infty$ of function estimates generated by the gradient
iteration~\eqref{EqnGradBvec} with a valid stepsize sequence.
\btheos
\label{ThmMain}
Given the stopping time $\ensuremath{\widehat{T}}$ defined by the
rule~\eqref{EqnStoppingRule}, there are universal positive constants
$(\ensuremath{c}_1, \ensuremath{c}_2)$ such that the following events both hold with
probability at least $1 -\ensuremath{c}_1 \exp(-\ensuremath{c}_2 \ensuremath{n}
\ensuremath{\widehat{\varepsilon}_\numobs}^2)$:
\begin{enumerate}
\item[(a)] For all iterations $\ensuremath{t} = 1, 2, \ldots, \ensuremath{\widehat{T}}$:
\begin{align}
\label{EqnGeneralBound}
\| \FUNIT{\ensuremath{t}} - f^*\|_\ensuremath{n}^2 \; \leq \;
\frac{4}{e \, \RUNSUM{\ensuremath{t}} }.
\end{align}
\item[(b)] At the iteration $\ensuremath{\widehat{T}}$ chosen according to the stopping
rule~\eqref{EqnStoppingRule}, we have
\begin{align}
\label{BoundAtOpt}
\| \FUNIT{\ensuremath{\widehat{T}}}- \ensuremath{f^*}\|_\ensuremath{n}^2 & \leq 12 \, \ensuremath{\widehat{\varepsilon}_\numobs}^2.
\end{align}
\item[(c)] Moreover, for all $t > \ensuremath{\widehat{T}}$,
\begin{align}
\label{BoundAfterOpt}
\mathbb{E}[\| \FUNIT{\ensuremath{t}}- \ensuremath{f^*}\|_\ensuremath{n}^2] & \geq
\frac{\sigma^2}{4} \RUNSUM{\ensuremath{t}}
\ensuremath{\widehat{\Rad}}_{\ensuremath{K}}(\RUNSUM{\ensuremath{t}}^{-1/2}).
\end{align}
\end{enumerate}
\etheos
\noindent \paragraph{Remarks:} Although the bounds (a) and (b) are
stated as high probability claims, a simple integration argument can
be used to show that the expected mean-squared error (over the noise
variables, with the design fixed) satisfies a bound of the form
\begin{align}
\label{EqnIterationsEbound}
\ensuremath{\mathbb{E}} \big[ \|\FUNIT{\ensuremath{t}} - \ensuremath{f^*}\|_\ensuremath{n}^2 \big] \leq \frac{4}{e
\, \RUNSUM{\ensuremath{t}} }.
\end{align}
Moreover, as will be clarified in corollaries to follow,
Theorem~\ref{ThmMain} can be used to show that our stopping rule
provides minimax-optimal rates for various function classes. The
interpretation of Theorem~\ref{ThmMain} is as follows: if the sum of
the step-sizes $\RUNSUM{\ensuremath{t}}$ remains below the threshold defined
by~\eqref{EqnStoppingRule}, applying the gradient
update~\eqref{EqnGradBvec} reduces the prediction error. Moreover,
note that for Hilbert spaces with a larger kernel complexity, the
stopping time $\ensuremath{\widehat{T}}$ is smaller, since fitting functions in a larger
class incurs a greater risk of overfitting.\\
In the case of random design $x_i \sim \ensuremath{\mathbb{P}}$, we can also provide
bounds on the $L^2(\ensuremath{\mathbb{P}})$-error $\|\FUNIT{\ensuremath{t}} - \ensuremath{f^*}\|_2$. In
this setting, for the purposes of comparing to minimax lower bounds,
it is also useful to state some results in terms of the population
analog of the local empirical Rademacher
complexity~\eqref{EqnDefnKernCompEmp}, namely the quantity
\begin{align}
\label{EqnDefnPopKernComp}
\ensuremath{\mathcal{R}}_{\ensuremath{\mathbb{K}}}(\ensuremath{\varepsilon}) & \ensuremath{: \, = } \biggr[\frac{1}{n}\sum_{j = 1}^{\infty}
\min \big \{ \ensuremath{\lambda}_j, \ensuremath{\varepsilon}^2 \big \} \biggr]^{1/2}.
\end{align}
Using this complexity measure, we define the \emph{critical population
rate} $\ensuremath{\varepsilon_\numobs}$ to be the smallest positive solution to the
inequality
\begin{align}
\label{EqnDefnPopCrit}
40 \: \ensuremath{\mathcal{R}}_{\ensuremath{\mathbb{K}}}(\ensuremath{\varepsilon}) & \leq \frac{\ensuremath{\varepsilon}^2}{\sigma}.
\end{align}
(Our choice of the pre-factor $40$ is for later theoretical
convenience.) In contrast to the critical empirical rate $\ensuremath{\widehat{\varepsilon}_\numobs}$,
this quantity is not data-dependent, since it depends on the
population eigenvalues of the RKHS $\ensuremath{\mathcal{H}}$.
\btheos[Random design]
\label{ThmRandDesign}
Suppose that the design variables $\{x_i\}_{i=1}^\ensuremath{n}$ are sampled
i.i.d. according to $\ensuremath{\mathbb{P}}$. Then under the conditions of
Theorem~\ref{ThmMain}, there are universal positive constants
$\ensuremath{c}_j, j = 1, 2, 3$ such that
\begin{align}
\|\FUNIT{\ensuremath{\widehat{T}}} - f^*\|_2^2 \leq \ensuremath{c}_3 \ensuremath{\varepsilon_\numobs}^2
\end{align}
with probability at least $1 - \ensuremath{c}_1 \exp(-\ensuremath{c}_2 \ensuremath{n}
\ensuremath{\widehat{\varepsilon}_\numobs}^2)$.
\etheos Theorems~\ref{ThmMain} and~\ref{ThmRandDesign} are general
results that apply to any reproducing kernel Hilbert space. Their
proofs combine a direct analysis of our iterative
update~\eqref{EqnGradBvec} with techniques from empirical
process theory and concentration of measure~\cite{vandeGeer,Ledoux01};
see Section~\ref{SecProofs} for the details.
It is instructive to compare with the past work of B\"{u}hlmann and
Yu~\cite{BuhlmannYu03}, who also provide a theoretical analysis of
gradient descent (referred to as $L^2$-boosting in their paper), but
focus exclusively on the fixed design case. Our theory applies to
random as well as fixed designs, and to a broader set of step-size
choices. The most significant difference between Theorem~\ref{ThmMain}
in our paper and Theorem 3 in the paper~\cite{BuhlmannYu03} is that we
provide a data-dependent stopping rule, whereas their analysis does
not lead to a computable stopping rule.
\subsection{Some consequences for specific kernel classes}
Let us now illustrate some consequences of our general theory for
special choices of kernels that are of interest in practice.
\paragraph{Kernels with polynomial eigendecay:}
We begin with the class of RKHSs whose eigenvalues satisfy a
\emph{polynomial decay condition}, meaning that
\begin{align}
\label{EqnPolyDecay}
\ensuremath{\lambda}_k & \leq C \big(\frac{1}{k}\big)^{2 \ensuremath{\nu}} \qquad \mbox{for
some $\ensuremath{\nu} > 1/2$ and constant $C$.}
\end{align}
Among other examples, this type of scaling covers various types of
Sobolev spaces, consisting of functions with $\nu$ derivatives
(e.g.,~\cite{BirSol67,Gu02}). For instance, the first-order Sobolev
kernel \mbox{$\ensuremath{\mathbb{K}}(x, x') = \min \{x, x'\}$} on the unit square
\mbox{$[0,1] \times [0,1]$} generates an RKHS of functions that are
differentiable almost \mbox{everywhere}, given by
\begin{align}
\label{EqnFirstOrderSobolev}
\ensuremath{\mathcal{H}} & \ensuremath{: \, = } \big \{ f: [0,1] \rightarrow \ensuremath{\mathbb{R}} \mid \, f(0) = 0,
\quad \int_0^1 (f'(x))^2 dx < \infty \big \}.
\end{align}
For the uniform measure on $[0,1]$, this class exhibits polynomial
eigendecay~\eqref{EqnPolyDecay} with $\ensuremath{\nu} = 1$. For any class
that satisfies the polynomial decay condition, we have the following
corollary: \\
\bcors
\label{CorAchieveSmooth}
Suppose that in addition to the assumptions of
Theorem~\ref{ThmRandDesign}, the kernel class $\ensuremath{\mathcal{H}}$ satisfies the
polynomial eigenvalue decay~\eqref{EqnPolyDecay} for some parameter
$\ensuremath{\nu} > 1/2$. Then there is a universal constant $\ensuremath{c}_5$ such
that
\begin{align}
\label{EqnSobolevUpper}
\ensuremath{\mathbb{E}} \big[\|\FUNIT{\ensuremath{\widehat{T}}}- f^*\|_2^2 \big] & \leq \ensuremath{c}_5 \,
\big(\frac{\sigma^2}{\ensuremath{n}} \big)^{\frac{2 \ensuremath{\nu}}{2 \ensuremath{\nu}+1}}.
\end{align}
Moreover, if $\ensuremath{\lambda}_k \geq c \, (1/k)^{2 \ensuremath{\nu}}$ for all $k = 1,
2, \ldots$, then
\begin{align}
\ensuremath{\mathbb{E}} \big[\|\FUNIT{\ensuremath{t}} - \ensuremath{f^*}\|_2^2 \big] & \geq \frac{1}{4} \min
\big \{ 1, \; \sigma^2 \frac{(\RUNSUM{\ensuremath{t}})^{\frac{1}{2
\ensuremath{\nu}}}}{\ensuremath{n}} \big \} \quad \mbox{for all iterations
$\ensuremath{t} = 1, 2, \ldots$.}
\end{align}
\ecors
\noindent The proof, provided in
Section~\ref{SecProofCorAchieveSmooth}, involves showing that the
population critical rate~\eqref{EqnDefnPopKernComp} is of the order
$\ensuremath{\mathcal{O}}(\ensuremath{n}^{- \frac{2 \ensuremath{\nu}}{2 \ensuremath{\nu}+1}})$. By known results
on non-parametric regression~\cite{Sto85, YanBar99}, the error
bound~\eqref{EqnSobolevUpper} is minimax-optimal.
In the special case of the first-order spline
family~\eqref{EqnFirstOrderSobolev}, Corollary~\ref{CorAchieveSmooth}
guarantees that
\begin{align}
\label{EqnOptimalSobone}
\ensuremath{\mathbb{E}}[ \|\FUNIT{\ensuremath{\widehat{T}}} - \ensuremath{f^*}\|_2^2] & \precsim \big(
\frac{\sigma^2}{\ensuremath{n}} \big)^{2/3}.
\end{align}
In order to test the accuracy of this prediction, we performed the
following set of simulations. First, we generated samples from the
observation model
\begin{align}
\label{EqnStandard}
y_i & = \ensuremath{f^*}(x_i) + w_i, \qquad \mbox{for $i = 1, 2, \ldots,
\ensuremath{n}$},
\end{align}
where $x_i = i/\ensuremath{n}$, and $w_i \sim N(0,\sigma^2)$ are i.i.d. noise
terms. We present results for the function $\ensuremath{f^*}(x) = |x - 1/2| -
1/2$, a piecewise linear function belonging to the first-order Sobolev
class. For all our experiments, the noise variance $\sigma^2$ was set
to one, but in order to keep the method fully data-dependent, this
knowledge was not provided to the estimator. There is a large body of work on
estimating the noise variance $\sigma^2$ in non-parametric regression
(see e.g. Hall and Marron~\cite{HallMarron90}). For our simulations,
we use the simple estimator based on Hall and
Marron~\cite{HallMarron90}. They proved that their estimator is ratio
consistent, which is sufficient for our purposes.
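As a rough sketch of this experimental set-up (our own illustration, not the authors' code), the snippet below simulates the observation model with the first-order Sobolev kernel $\min\{x, x'\}$ and runs the gradient update on the vector of fitted values, $f^{t+1} = f^t - \alpha K (f^t - y)$, taking $K$ to be the kernel matrix scaled by $1/n$ (our normalization assumption). Instead of the data-dependent stopping rule, it simply tracks the in-sample error $\|f^t - f^*\|_n^2$ over iterations, whose U-shape is what makes early stopping necessary; the values $n = 64$, $\sigma = 0.5$ are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma, step, T = 64, 0.5, 0.25, 20000
x = np.arange(1, n + 1) / n                  # equi-distant design points
fstar = np.abs(x - 0.5) - 0.5                # piecewise-linear f*
y = fstar + sigma * rng.standard_normal(n)   # observation model

K = np.minimum.outer(x, x) / n               # Sobolev kernel, 1/n-normalized
f = np.zeros(n)                              # initialization f^0 = 0
errs = np.empty(T)
for t in range(T):
    errs[t] = np.mean((f - fstar) ** 2)      # ||f^t - f*||_n^2
    f = f - step * (K @ (f - y))             # gradient update

t_best = int(np.argmin(errs))
print(t_best, errs[0], errs[t_best], errs[-1])
```

The error first decreases (bias reduction) and eventually increases again (overfitting to the noise), so the best iterate lies strictly inside the run.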
\begin{figure}[h]
\begin{center}
\begin{tabular}{ccc}
\widgraph{.45\textwidth}{Fig2a.eps} & &
\widgraph{.45\textwidth}{Fig2b.eps} \\
(a) & & (b)
\end{tabular}
\caption{Prediction error obtained from the stopping rule applied to a
  regression with $\ensuremath{n}$ samples of the form $y_i = \ensuremath{f^*}(x_i) + w_i$ at
  equi-distant design points $x_i = i/\ensuremath{n}$ for $i = 1, \ldots,
  \ensuremath{n}$, with i.i.d. Gaussian noise $w_i \sim N(0,1)$. For these
  simulations, the true regression function is given by $\ensuremath{f^*}(x) =
  |x - 1/2| - 1/2$. Panel (a): Mean-squared error (MSE) using the
  stopping rule~\eqref{EqnStoppingRule} versus the sample size
  $\ensuremath{n}$. Each point is based on $10,000$ independent realizations
  of the noise variables $\{w_i\}_{i=1}^\ensuremath{n}$. Panel (b): Plots of
  the quantity $\mbox{MSE}^{-3/2}$ versus sample size $\ensuremath{n}$. As
  predicted by the theory, this representation yields a straight line.}
\label{FigRates}
\end{center}
\end{figure}
For a range of sample sizes $\ensuremath{n}$ between $10$ and $300$, we
performed the updates~\eqref{EqnGradBvec} with constant stepsize
$\STEP{} = 0.25$, stopping at the specified time $\ensuremath{\widehat{T}}$. For each
sample size, we performed $10,000$ independent trials, and averaged
the resulting prediction errors. In panel (a) of
Figure~\ref{FigRates}, we plot the mean-squared error versus the
sample size, which shows consistency of the method. We also plotted
the mean-squared error raised to the power $-3/2$ versus the sample
size. After this rescaling, the bound~\eqref{EqnOptimalSobone}
predicts a linear relation, as is observed in panel (b) of
Figure~\ref{FigRates}. We also performed the same experiments for
the case of randomly drawn designs $x_i \sim \mbox{Unif}(0,1)$. In
this case, we observed similar results but with more trials required
to average out the additional randomness in the design.
\paragraph{Finite rank kernels:} We
now turn to the class of RKHSs based on finite-rank kernels, meaning
that there is some finite integer $\ensuremath{m} < \infty$ such that
$\ensuremath{\lambda}_j = 0$ for all $j \geq \ensuremath{m} + 1$. For instance, the
kernel function $\ensuremath{\mathbb{K}}(x, x') = (1 + x x')^2$ is a finite rank kernel
(with $\ensuremath{m} = 3$) that generates the RKHS of all quadratic
functions. More generally, for any integer $d \geq 2$, the kernel
$\ensuremath{\mathbb{K}}(x, x') = (1 + x x')^d$ generates the RKHS of all polynomials
with degree at most $d$. For any such kernel, we have the following
corollary:
\bcors
\label{CorAchieveFinite}
If, in addition to the conditions of Theorem~\ref{ThmRandDesign}, the
kernel has finite rank $\ensuremath{m}$, then
\begin{align}
\ensuremath{\mathbb{E}} \big [\|\widehat{f}_{\widehat{T}}- f^*\|_2^2 \big] & \leq
\ensuremath{c}_5 \, \sigma^2 \frac{\ensuremath{m}}{\ensuremath{n}}.
\end{align}
\ecors
\noindent Importantly, for a rank $\ensuremath{m}$-kernel, the rate
$\frac{\ensuremath{m}}{n}$ is minimax optimal in terms of squared $\ensuremath{L^2(\mathbb{P})}$
error~\cite[e.g.,]{RasWaiYu10b}. \\
\subsection{Comparison with other stopping rules}
\label{SecRuleComp}
In this section, we provide a comparison of our stopping rule to two
other stopping rules, as well as an oracle method (which involves
knowledge of $\ensuremath{f^*}$, and so cannot be computed in practice).
\paragraph{Hold-out method:}
First, we consider a simple hold-out method: it performs gradient
descent using $50\%$ of the data, and uses the other $50 \%$ of the
data to estimate the risk (e.g.~\cite{DevroyeWagner79}). Assuming
that the sample size is even for simplicity, we split the full data
set $\{x_i\}_{i=1}^\ensuremath{n}$ into two equally sized subsets $\ensuremath{S_{\mbox{\tiny{tr}}}}$
and $S_{te}$. The data indexed by the training set $\ensuremath{S_{\mbox{\tiny{tr}}}}$ is used
to estimate the function $\ensuremath{f_{\mbox{\tiny{tr}}}}^t$ using the gradient descent
update~\eqref{EqnGradBvec}. At each iteration $\ensuremath{t} = 0,1,2,
\ldots$, the data indexed by $S_{te}$ is used to estimate the risk via
$\ensuremath{R_{\mbox{\tiny{HO}}}}(\ensuremath{f_{\mbox{\tiny{tr}}}}^\ensuremath{t}) = \frac{1}{n} \sum_{i \in S_{te}} \big(y_i -
\ensuremath{f_{\mbox{\tiny{tr}}}}^\ensuremath{t}(x_i) \big)^2$, which defines the stopping rule
\begin{align}
\label{EqnStoppingRuleHO}
\ensuremath{\STOP_{\mbox{\tiny{HO}}}} & \ensuremath{: \, = } \arg \min \biggr \{ \ensuremath{t} \in \ensuremath{\mathbb{N}} \, \mid
\ensuremath{R_{\mbox{\tiny{HO}}}}(\ensuremath{f_{\mbox{\tiny{tr}}}}^{\ensuremath{t}+1}) > \ensuremath{R_{\mbox{\tiny{HO}}}}(\ensuremath{f_{\mbox{\tiny{tr}}}}^\ensuremath{t}) \biggr \} - 1.
\end{align}
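A minimal sketch of this hold-out procedure (our own illustration, with arbitrary constants): gradient descent is run on one half of the data in a representer form $f(\cdot) = n_{\rm tr}^{-1} \sum_i \omega_i \mathbb{K}(\cdot, x_i)$ (our parameterization assumption, chosen so that the weight update reproduces the fitted-value recursion with a $1/n_{\rm tr}$-normalized kernel matrix), the held-out half scores each iterate, and the rule returns the iteration just before the held-out risk first increases.

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma, step = 80, 0.5, 0.25
x = np.arange(1, n + 1) / n
fstar = np.abs(x - 0.5) - 0.5
y = fstar + sigma * rng.standard_normal(n)

tr, te = np.arange(0, n, 2), np.arange(1, n, 2)   # 50/50 interleaved split
ntr = tr.size
G_tr = np.minimum.outer(x[tr], x[tr])             # K(x, x') = min{x, x'}
G_te = np.minimum.outer(x[te], x[tr])             # test-by-train kernel block
w = np.zeros(ntr)                                 # representer weights omega
risks, t_stop = [], None
for t in range(2000):
    risks.append(np.mean((y[te] - G_te @ w / ntr) ** 2))  # held-out risk of f^t
    if len(risks) >= 2 and risks[-1] > risks[-2]:
        # first increase is R(f^t) > R(f^{t-1}); the rule returns (t-1) - 1
        t_stop = t - 2
        break
    w = w - step * (G_tr @ w / ntr - y[tr])       # kernel gradient update
print(t_stop, risks[0], min(risks))
```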
A line of past work~\cite{Yao05,
  Bauer07,Caponnetto06,CaponnettoYao06,CaponnettoYaoJournal,
  DeVito10} has analyzed stopping rules based on this type of
hold-out rule. For instance, Caponnetto~\cite{Caponnetto06} analyzes a
hold-out method, and shows that it yields rates that are optimal for
Sobolev spaces with $\ensuremath{\nu} \leq 1$, but not in general. The major
drawback of hold-out is that a portion of the data is lost to risk
estimation, which increases the risk.
\paragraph{SURE method:} Stein's Unbiased Risk estimate (SURE)
can be used to define an alternative stopping rule. If we define the
shrinkage matrix $\tilde{S}^\ensuremath{t} = \prod_{\tau=0}^{\ensuremath{t}-1}{(I -
\STEP{\tau}\ensuremath{K})}$, then it can be shown that the SURE
estimator~\cite{Stein81} takes the form
\begin{align}
\ensuremath{R_{\mbox{\tiny{SU}}}}(f^\ensuremath{t}) & = \frac{1}{\ensuremath{n}} \{ \ensuremath{n} \sigma^2 + Y^T
(\tilde{S}^\ensuremath{t})^2 Y - 2 \sigma^2 \ensuremath{\operatorname{trace}}(\tilde{S}^\ensuremath{t}) \},
\end{align}
which is easy to compute. This risk estimate defines the associated
stopping rule
\begin{align}
\label{EqnStoppingRuleSURE}
\ensuremath{\STOP_{\mbox{\tiny{SU}}}} & \ensuremath{: \, = } \arg \min \biggr \{ \ensuremath{t} \in \ensuremath{\mathbb{N}} \, \mid
\ensuremath{R_{\mbox{\tiny{SU}}}}(f^{\ensuremath{t}+1}) > \ensuremath{R_{\mbox{\tiny{SU}}}}(f^\ensuremath{t}) \biggr \} - 1.
\end{align}
In contrast with hold-out, this approach makes use of all the data.
However, we are not aware of any theoretical guarantees for early
stopping using the stopping rule~\eqref{EqnStoppingRuleSURE}.
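To see why the SURE criterion is a reasonable surrogate, note that with $f^0 = 0$ one can check from the gradient recursion that the iterate satisfies $f^t = (I - \tilde{S}^t) Y$, so $Y^T (\tilde{S}^t)^2 Y$ is the squared training residual and the trace term charges for the effective degrees of freedom. The sketch below (our own check, with arbitrary problem sizes) verifies by Monte Carlo that the SURE value is unbiased for the in-sample prediction error $\mathbb{E}\|f^t - f^*\|_n^2$ under zero-mean noise of variance $\sigma^2$.

```python
import numpy as np

rng = np.random.default_rng(2)
n, sigma, step, t = 64, 0.5, 0.25, 200
x = np.arange(1, n + 1) / n
fstar = np.abs(x - 0.5) - 0.5
K = np.minimum.outer(x, x) / n                 # 1/n-normalized kernel matrix

# Constant-step shrinkage matrix S_t = (I - step * K)^t.
S = np.linalg.matrix_power(np.eye(n) - step * K, t)
S2, trS = S @ S, np.trace(S)

sure_vals, true_errs = [], []
for _ in range(2000):
    y = fstar + sigma * rng.standard_normal(n)
    f_t = y - S @ y                            # iterate in closed form: (I - S_t) y
    sure_vals.append((n * sigma**2 + y @ S2 @ y - 2 * sigma**2 * trS) / n)
    true_errs.append(np.mean((f_t - fstar) ** 2))
gap = abs(np.mean(sure_vals) - np.mean(true_errs))
print(np.mean(true_errs), gap)                 # gap is near zero by unbiasedness
```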
It can be shown that for both stopping rules~\eqref{EqnStoppingRuleHO}
and~\eqref{EqnStoppingRuleSURE}, a valid sequence of step-sizes
guarantees existence and uniqueness of the stopping point. Note that
our stopping rule $\ensuremath{\widehat{T}}$ based on~\eqref{EqnStoppingRule} requires
estimation of both the empirical eigenvalues, and the noise variance
$\sigma^2$. In contrast, the SURE-based rule requires estimation of
$\sigma^2$ but not the empirical eigenvalues, whereas the hold-out
rule requires no parameters to be estimated, but a percentage of the
data is used to estimate the risk.
\paragraph{Oracle method:} As a third
point of reference, we also plot the mean-squared error for an
``oracle'' method. It is allowed to base its stopping time on the
exact prediction error $\ensuremath{R_{\mbox{\tiny{OR}}}}(f^\ensuremath{t}) = \|f^{\ensuremath{t}} -
\ensuremath{f^*}\|_\ensuremath{n}^2$, which defines the oracle stopping rule
\begin{align}
\label{EqnStoppingRuleOracle}
\ensuremath{\STOP_{\mbox{\tiny{OR}}}} & \ensuremath{: \, = } \arg \min \biggr \{ \ensuremath{t} \in \ensuremath{\mathbb{N}} \, \mid
\ensuremath{R_{\mbox{\tiny{OR}}}}(f^{\ensuremath{t}+1}) > \ensuremath{R_{\mbox{\tiny{OR}}}}(f^\ensuremath{t}) \biggr \} - 1.
\end{align}
Note that this stopping rule is not computable from the data, since it
assumes exact knowledge of the function $\ensuremath{f^*}$ that we are trying to
estimate.
In order to compare our stopping rule~\eqref{EqnStoppingRule} with
these alternatives, we generated i.i.d. samples from the previously
described model (see equation~\eqref{EqnStandard} and the following
discussion). We varied the sample size $\ensuremath{n}$ from $10$ to $300$,
and for each sample size, we performed $10,000$ independent trials
(randomizations of the noise variables $\{w_i\}_{i=1}^\ensuremath{n}$), and
computed the average of squared prediction error.
\begin{figure}[h]
\begin{center}
\begin{tabular}{ccc}
\widgraph{.45\textwidth}{Fig3a.eps} & &
\widgraph{.45\textwidth}{Fig3b.eps} \\
(a) & & (b)
\end{tabular}
\caption{The non-parametric function is $f^*(x) = |x - 1/2| - 1/2$
with kernel $\ensuremath{\mathbb{K}}(x,y) = \min(|x|,|y|)$. We apply the gradient
descent update~\eqref{EqnGradBvec} with $\Step{\ensuremath{t}}=1$ for all
$t$, and plot the average mean-squared error over $10,000$
randomizations against the sample size for $n
=10,20,30,40,50,60,70,80,90,100,200,300$. Mean-squared error is
plotted for $4$ stopping rules: (i) our stopping
rule~\eqref{EqnStoppingRule}; (ii) holding out $50 \%$ of the data
  and using~\eqref{EqnStoppingRuleHO}; (iii)
  SURE~\eqref{EqnStoppingRuleSURE}; and (iv) the oracle stopping
  rule~\eqref{EqnStoppingRuleOracle}. For panel (a) results are plotted on
a normal scale and for panel (b), curves are plotted using a log-log
scale.}
\label{FigHO}
\end{center}
\end{figure}
Figure~\ref{FigHO} plots the resulting mean-squared errors of our
stopping rule, the hold-out stopping rule~\eqref{EqnStoppingRuleHO},
the SURE-based stopping rule~\eqref{EqnStoppingRuleSURE}, and the
oracle rule~\eqref{EqnStoppingRuleOracle}. Panel (a) shows the
mean-squared error versus sample size, whereas panel (b) shows the
same curves on a log-log scale. Our proposed
rule exhibits better performance than the hold-out and SURE-based
rules for sample sizes $\ensuremath{n}$ larger than $50$. On the flip side,
since the construction of our stopping rule is based on the assumption
that $\ensuremath{f^*}$ belongs to a known RKHS, it is unclear how robust it
would be to model mis-specification. In contrast, the hold-out and
SURE-based stopping rules are generic methods, not based directly on
the RKHS structure, so might be more robust to model mis-specification.
Thus, one interesting direction is to explore the robustness of our
stopping rule. On the theoretical front, it would be interesting to
determine whether the hold-out and/or SURE-based stopping rules can be
proven to achieve minimax optimal rates for general kernels, as we have
established for our stopping rule.
\subsection{Connections to kernel ridge regression}
\label{SecRidgeCompare}
We conclude by presenting an interesting link between our early
stopping procedure and kernel ridge regression. The kernel ridge
regression (KRR) estimate is defined as
\begin{align}
\label{EqnKernelRidge}
\ensuremath{\widehat{f}}_\nu & \ensuremath{: \, = } \arg \min_{f \in \ensuremath{\mathcal{H}}} \big\{ \frac{1}{2
\ensuremath{n}} \sum_{i=1}^\ensuremath{n} (y_i - f(x_i))^2 + \frac{1}{2 \nu}
\|f\|_\ensuremath{\mathcal{H}}^2 \big \},
\end{align}
where $\nu$ is the (inverse) regularization parameter. For any
$\nu < \infty$, the objective is strongly convex, so that the KRR
solution is unique.
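For intuition, the KRR estimate admits a closed form: writing $f = \sum_i c_i \mathbb{K}(\cdot, x_i)$ by the representer theorem, the objective above is minimized by $c = (G + (n/\nu) I)^{-1} y$, where $G$ is the kernel Gram matrix, so the fitted values are $G c$. The sketch below (our own illustration, with arbitrary data) traces the prediction error along the ridge path and exhibits the same U-shape as the gradient-descent path.

```python
import numpy as np

rng = np.random.default_rng(3)
n, sigma = 64, 0.5
x = np.arange(1, n + 1) / n
fstar = np.abs(x - 0.5) - 0.5
y = fstar + sigma * rng.standard_normal(n)
G = np.minimum.outer(x, x)                     # Gram matrix of min{x, x'}

nus = np.logspace(-1, 5, 60)                   # inverse regularization path
errs = []
for nu in nus:
    c = np.linalg.solve(G + (n / nu) * np.eye(n), y)
    errs.append(np.mean((G @ c - fstar) ** 2)) # ||f_nu - f*||_n^2
errs = np.array(errs)
i_best = int(np.argmin(errs))
print(nus[i_best], errs[i_best])
```

Small $\nu$ over-shrinks (large bias), while very large $\nu$ approaches interpolation of the noisy responses, so the best $\nu$ lies in the interior of the path.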
Friedman and Popescu~\cite{Friedman04} observed through simulations
that the regularization paths for early stopping of gradient descent
and ridge regression are similar, but did not provide any theoretical
explanation of this fact. As an illustration of this empirical
phenomenon, Figure~\ref{FigRidgePath} compares the prediction error
$\|\ensuremath{\widehat{f}}_{\nu} - \ensuremath{f^*}\|_\ensuremath{n}^2$ of the kernel ridge
regression estimate over the interval $\nu \in [1, 100]$ versus
that of the gradient update~\eqref{EqnGradBvec} over the first $100$
iterations. Note that the curves, while not identical, are
qualitatively very similar.
\begin{figure}[h]
\begin{center}
\begin{tabular}{ccc}
\widgraph{0.45\textwidth}{fig_sobone_bothcurves.eps} & &
\widgraph{0.45\textwidth}{fig_sobone_bothcurves2.eps} \\
(a) & & (b)
\end{tabular}
\caption{Comparison of the prediction error of the path of kernel
ridge regression estimates~\eqref{EqnKernelRidge} obtained by
varying $\nu \in [1, 100]$ to those of the gradient
updates~\eqref{EqnGradBvec} over $100$ iterations with constant step
size. All simulations were performed with the kernel $\ensuremath{\mathbb{K}}(x, x') =
\min \{x, x' \}$ based on $\ensuremath{n} = 100$ samples at the design
points $x_i = i/\ensuremath{n}$ with $\ensuremath{f^*}(x) = |x - 1/2| - 1/2$. (a)
Noise variance $\sigma^2 = 1$. (b) Noise variance $\sigma^2 = 2$.}
\label{FigRidgePath}
\end{center}
\end{figure}
From past theoretical work~\cite[e.g.,]{vandeGeer,Mendelson02}, kernel
ridge regression, with the appropriate setting of the penalty
parameter $\nu$, is known to achieve minimax-optimal error for
various kernel classes, among them the Sobolev and finite-rank kernels
for which our stopping rule is provably optimal. In this section, we
provide a theoretical basis for these connections, in particular by
showing that if the inverse penalty parameter $\nu$ is chosen
using the same criterion as our stopping rule, then the prediction
error satisfies the same type of bounds (with $\nu$ now playing
the role of the running sum $\RUNSUM{\ensuremath{t}}$).
More precisely, suppose that we choose $\ensuremath{\widehat{\PenPar}}$ to be the smallest
positive solution to the inequality
\begin{align}
\label{EqnParChoice}
\big(4 \sigma \nu \big)^{-1} < \ensuremath{\widehat{\Rad}}_{\ensuremath{K}} \big( 1/\sqrt{\nu}
\big).
\end{align}
Note that this criterion is identical to the one underlying our
stopping rule, except that the continuous parameter $\nu$ replaces
the discrete parameter $\RUNSUM{\ensuremath{t}} =
\sum_{\tau=0}^{t-1}{\Step{\tau}}$.
\bprops
\label{PropMainRidge}
Consider the kernel ridge regression estimator~\eqref{EqnKernelRidge}
applied to $\ensuremath{n}$ i.i.d. samples $\{(x_i, y_i)\}$ with $\sigma$-sub-Gaussian
noise. Then there are universal constants $(\ensuremath{c}_1,
\ensuremath{c}_2, \ensuremath{c}_3)$ such that the following
claims hold with probability at least $1 - \ensuremath{c}_1 \exp(-\ensuremath{c}_2
\, \ensuremath{n} \, \ensuremath{\widehat{\varepsilon}_\numobs}^2 )$:
\begin{enumerate}
\item[(a)] For all $0 < \nu \leq \ensuremath{\widehat{\PenPar}}$, we have
\begin{align}
\label{EqnRidgeBoundBefore}
\|\ensuremath{\widehat{f}}_{\nu}- f^*\|_\ensuremath{n}^2 & \leq \frac{2}{\nu}.
\end{align}
\item[(b)] With $\ensuremath{\widehat{\PenPar}}$ chosen according to the
rule~\eqref{EqnParChoice}, we have
\begin{align}
\label{EqnRidgeBoundOpt}
\|\ensuremath{\widehat{f}}_{\ensuremath{\widehat{\PenPar}}} - \ensuremath{f^*}\|_\ensuremath{n}^2 & \leq \ensuremath{c}_3 \:
\ensuremath{\widehat{\varepsilon}_\numobs}^2.
\end{align}
\item[(c)] Moreover, for all $\nu > \ensuremath{\widehat{\PenPar}}$, we have
\begin{align}
\label{EqnRidgeBoundOptAfter}
\mathbb{E}[\| \ensuremath{\widehat{f}}_{\nu}- \ensuremath{f^*}\|_\ensuremath{n}^2] & \geq
\frac{\sigma^2}{4} \nu \ensuremath{\widehat{\Rad}}_{\ensuremath{K}}(\nu^{-1/2}).
\end{align}
\end{enumerate}
\eprops
Note that (apart from a slightly different leading constant) the upper
bound~\eqref{EqnRidgeBoundBefore} is \emph{identical} to the upper
bound in equation~\eqref{EqnGeneralBound} in Theorem~\ref{ThmMain}.
The only difference is that the inverse regularization parameter
$\nu$ replaces the running sum $\RUNSUM{\ensuremath{t}} = \sum_{\tau =
0}^{\ensuremath{t}-1}{\Step{\tau}}$. Similarly, part (b) of
Proposition~\ref{PropMainRidge} guarantees that the kernel ridge
regression~\eqref{EqnKernelRidge} has prediction error that is upper
bounded by the empirical critical rate $\ensuremath{\widehat{\varepsilon}_\numobs}^2$, as in part (b) of
Theorem~\ref{ThmMain}. Let us emphasize that bounds of this type on
kernel ridge regression have been derived in past
work~\cite[e.g.,]{Mendelson02,vandeGeer}. The novelty here is that
the structure of our result reveals the intimate connection to early
stopping, and in fact, the proofs follow a parallel thread.
In conjunction, Proposition~\ref{PropMainRidge} and
Theorem~\ref{ThmMain} provide a theoretical explanation for why, as
shown in Figure~\ref{FigRidgePath}, the paths of the gradient descent
update~\eqref{EqnGradBvec} and kernel ridge regression
estimate~\eqref{EqnKernelRidge} are so similar. However, it is
important to emphasize that from a computational point of view, early
stopping has certain advantages over kernel ridge regression. In
general, solving a quadratic program of the
form~\eqref{EqnKernelRidge} requires on the order of
$\ensuremath{\mathcal{O}}(\ensuremath{n}^3)$ basic operations, and this must be done repeatedly
at each new choice of $\nu$. On the other hand, by its very
construction, the iterates of the gradient algorithm correspond to the
desired path of solutions, and each gradient update involves
multiplication by the kernel matrix, incurring $\ensuremath{\mathcal{O}}(\ensuremath{n}^2)$
operations.
\section{Proofs}
\label{SecProofs}
We now turn to the proofs of our main results. The main steps in each
proof are provided in the main text, with some of the more technical
results deferred to the appendix.
\subsection{Proof of Theorem~\ref{ThmMain}}
In order to derive upper bounds on the $\ensuremath{{L^2(\mathbb{P}_n)}}$-error in
Theorem~\ref{ThmMain}, we first rewrite the gradient
update~\eqref{EqnGradBvec} in an alternative form. For each iteration
$\ensuremath{t} = 0, 1, 2, \ldots$, let us introduce the shorthand
\begin{align}
\FUN{\ensuremath{t}} & \ensuremath{: \, = } \begin{bmatrix} f^\ensuremath{t}(x_1) & f^\ensuremath{t}(x_2) &
\cdots & f^\ensuremath{t}(x_\ensuremath{n})
\end{bmatrix} \in \ensuremath{\mathbb{R}}^\ensuremath{n},
\end{align}
corresponding to the $\ensuremath{n}$-vector obtained by evaluating the
function $f^\ensuremath{t}$ at all design points, and the short-hand
\begin{align}
\ensuremath{w} & \ensuremath{: \, = } \begin{bmatrix} w_1 & w_2 & \cdots &
  w_\ensuremath{n} \end{bmatrix} \in \ensuremath{\mathbb{R}}^\ensuremath{n},
\end{align}
corresponding to the vector of zero-mean sub-Gaussian noise variables. From
equation~\eqref{EqnRepresenter}, we have the relation
\begin{align*}
\FUN{\ensuremath{t}} = \frac{1}{\sqrt{\ensuremath{n}}} \ensuremath{K} \, \ensuremath{\omega}^\ensuremath{t} \;
= \; \frac{1}{\sqrt{\ensuremath{n}}} \ensuremath{\sqrt{\EmpKer}} \, \BVEC{\ensuremath{t}}.
\end{align*}
Consequently, by multiplying both sides of the gradient
update~\eqref{EqnGradBvec} by $\ensuremath{\sqrt{\EmpKer}}$, we find that the sequence
$\{\FUN{\ensuremath{t}}\}_{\ensuremath{t}=0}^\infty$ evolves according to the recursion
\begin{align}
\label{EqnAltGrad}
\FUN{\ensuremath{t}+1} & = \FUN{\ensuremath{t}} - \Step{\ensuremath{t}} \ensuremath{K} \, ( \FUN{\ensuremath{t}}
- \ensuremath{y_1^\numobs}) \; = \; \Big( I_{\ensuremath{n} \times \ensuremath{n}} - \Step{\ensuremath{t}}
\ensuremath{K} \Big) \FUN{\ensuremath{t}} - \Step{\ensuremath{t}} \ensuremath{K} \, \ensuremath{y_1^\numobs}.
\end{align}
Since $\BVEC{0} = 0$, the sequence is initialized with $\FUN{0} = 0$.
The recursion~\eqref{EqnAltGrad} lies at the heart of our analysis.
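Unrolling the recursion from $\FUN{0} = 0$ gives the closed form $\FUN{t} = \big(I - \prod_{\tau=0}^{t-1}(I - \Step{\tau} \ensuremath{K})\big) \, \ensuremath{y_1^\numobs}$, since the factors commute as polynomials in $\ensuremath{K}$. A quick numerical check of this identity (our own sketch; any positive semidefinite matrix with step sizes bounded by $\min\{1, 1/\ensuremath{\widehat{\ensuremath{\lambda}}}_1\}$ will do):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 30
A = rng.standard_normal((n, n))
K = A @ A.T
K /= np.linalg.eigvalsh(K).max()               # PSD with top eigenvalue 1
y = rng.standard_normal(n)

steps = 1.0 / np.sqrt(np.arange(1, 51))        # non-increasing steps in (0, 1]
f = np.zeros(n)                                # initialization f^0 = 0
P = np.eye(n)                                  # running product of (I - a K)
for a in steps:
    f = f - a * K @ (f - y)                    # the recursion above
    P = (np.eye(n) - a * K) @ P
resid = np.max(np.abs(f - (y - P @ y)))        # compare with (I - P) y
print(resid)
```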
Letting $r = \mbox{rank}(\ensuremath{K})$, the empirical kernel matrix has
the eigendecomposition $\ensuremath{K} = U \Dmat U^T$, where $U \in
\mathbb{R}^{\ensuremath{n} \times \ensuremath{n}}$ is an orthonormal matrix
(satisfying $U U^T = U^T U = I_{n \times n}$) and
\begin{align*}
\Dmat \ensuremath{: \, = } \mbox{diag}(\ensuremath{\widehat{\ensuremath{\lambda}}}_1, \ensuremath{\widehat{\ensuremath{\lambda}}}_2, \ldots, \ensuremath{\widehat{\ensuremath{\lambda}}}_r,
0,0, \cdots, 0 )
\end{align*}
is the diagonal matrix of eigenvalues, augmented with $\ensuremath{n} - r$
zero eigenvalues as needed. We then define a sequence of diagonal
\emph{shrinkage matrices} $\SHRINK{\ensuremath{t}}$ as follows:
\begin{align*}
\SHRINK{\ensuremath{t}} & \ensuremath{: \, = } \prod_{\tau = 0}^{t-1} {(I_{n \times n} -
\Step{\tau} \Dmat)} \in \ensuremath{\mathbb{R}}^{n \times n}.
\end{align*}
The matrix $\SHRINK{\ensuremath{t}}$ indicates the extent of shrinkage towards
the origin; since $0 \; \leq \; \Step{\ensuremath{t}} \; \leq \; \min
\{1,1/\ensuremath{\widehat{\ensuremath{\lambda}}}_1 \}$ for all iterations $\ensuremath{t}$, in the positive
semidefinite ordering, we have the sandwich relation
\begin{align*}
0 \preceq \SHRINK{\ensuremath{t} + 1} \preceq \SHRINK{\ensuremath{t}} \preceq I_{\ensuremath{n}
\times \ensuremath{n}}.
\end{align*}
Moreover, the following lemma shows that the
$L^2(\ensuremath{\mathbb{P}}_\ensuremath{n})$-error at each iteration can be bounded in terms
of the eigendecomposition and these shrinkage matrices:
\blems[Bias/variance decomposition]
\label{LemMain}
At each iteration $\ensuremath{t} = 0, 1, 2, \ldots$,
\begin{align}
\label{EqnMainUpper}
\| \FUNIT{\ensuremath{t}} - \ensuremath{f^*}\|_\ensuremath{n}^2
& \leq \underbrace{\frac{2}{n}
\sum_{j=1}^{r}{\SHRINKSQ{\ensuremath{t}}_{jj} [U^T \ensuremath{f^*(x_1^\numobs)}]_j^2} +
\frac{2}{n} \sum_{j=r+1}^{n}{[U^T \ensuremath{f^*(x_1^\numobs)}]_j^2}}_{\mbox{Squared
Bias $\ensuremath{B_\iter^2}$}} + \underbrace{\frac{2}{n} \sum_{j = 1}^{r} {(1 -
\SHRINK{\ensuremath{t}}_{jj})^{2} [U^T \ensuremath{w}]_j^2}}_{\mbox{Variance
$\ensuremath{V_\iter}$}}.
\end{align}
Moreover, we have the lower bound $ \mathbb{E}[\| \FUNIT{\ensuremath{t}} -
\ensuremath{f^*}\|_\ensuremath{n}^2] \geq \mathbb{E}[\ensuremath{V_\iter}]$.
\elems
\noindent See Appendix~\ref{AppLemMain} for the proof of this
intermediate claim. \\
In order to complete the proof of the upper bound in
Theorem~\ref{ThmMain}, our next step is to obtain high probability
upper bounds on these two terms. We summarize our conclusions in an
additional lemma, and use it to complete the proof of
Theorem~\ref{ThmMain}(a) before returning to prove it.
\blems[Bounds on the bias and variance]
\label{LemBiasVarianceBound}
For all iterations $\ensuremath{t} = 1, 2, \ldots$, the squared bias is upper
bounded as
\begin{align}
\label{EqnBiasBound}
\ensuremath{B_\iter^2} & \leq \frac{1}{e \, \RUNSUM{\ensuremath{t}} }.
\end{align}
Moreover, there is a universal constant $\ensuremath{c}_1 > 0$ such that, for
any iteration $\ensuremath{t} = 1, 2, \ldots, \ensuremath{\widehat{T}}$,
\begin{align}
\label{EqnVarBound}
\ensuremath{V_\iter} \; \leq \; 5 \sigma^2 \, \RUNSUM{\ensuremath{t}} \ensuremath{\mathcal{R}}^2_{\ensuremath{K}}
\big(1/\sqrt{\RUNSUM{\ensuremath{t}}} \big)
\end{align}
with probability at least $1 - \exp \big(- \ensuremath{c}_1 \, \ensuremath{n}
\ensuremath{\widehat{\varepsilon}_\numobs}^2 \big)$. Moreover, for all $\ensuremath{t}$, we have $\mathbb{E}[\ensuremath{V_\iter}]
\geq \frac{\sigma^2}{4} \, \RUNSUM{\ensuremath{t}} \ensuremath{\mathcal{R}}^2_{\ensuremath{K}}\big(1/\sqrt{\RUNSUM{\ensuremath{t}}} \big)$.
\elems
We can now complete the proof of Theorem~\ref{ThmMain}(a). The
bound~\eqref{EqnGeneralBound} follows quickly: conditioned on the
event $\ensuremath{V_\iter} \leq 5 \sigma^2 \RUNSUM{\ensuremath{t}} \ensuremath{\mathcal{R}}^2_{\ensuremath{K}}
\big(1/\sqrt{\RUNSUM{\ensuremath{t}}} \big)$, we have
\begin{align*}
\|\FUNIT{\ensuremath{t}} - \ensuremath{f^*}\|_\ensuremath{n}^2 & \stackrel{(i)}{\leq} \ensuremath{B_\iter^2} +
\ensuremath{V_\iter} \; \stackrel{(ii)}{\leq} \; \frac{1}{e \, \RUNSUM{\ensuremath{t}}} + 5
\sigma^2 \, \RUNSUM{\ensuremath{t}} \ensuremath{\mathcal{R}}^2_{\ensuremath{K}}
\big(1/\sqrt{\RUNSUM{\ensuremath{t}}} \big) \stackrel{(iii)}{\leq} \frac{4}{e
  \, \RUNSUM{\ensuremath{t}}},
\end{align*}
where inequality (i) follows from~\eqref{EqnMainUpper} in
Lemma~\ref{LemMain}, inequality (ii) follows from the bounds in
Lemma~\ref{LemBiasVarianceBound}, and inequality (iii) follows since
$\ensuremath{t} \leq \ensuremath{\widehat{T}}$. The lower bound (c) in
equation~\eqref{BoundAfterOpt} follows from~\eqref{EqnVarBound}.
Turning to the proof of part (b), the upper bound from part (a) yields
\begin{align*}
\|\FUNIT{\ensuremath{\widehat{T}}} - \ensuremath{f^*}\|_\ensuremath{n}^2 & \leq \frac{1}{e \,
  \RUNSUM{\ensuremath{\widehat{T}}}} + \frac{5}{4 e^2 \, \RUNSUM{\ensuremath{\widehat{T}}}} \; \leq \;
\frac{4}{e \RUNSUM{\ensuremath{\widehat{T}}}}.
\end{align*}
Based on the definitions of $\ensuremath{\widehat{T}}$ and $\ensuremath{\widehat{\varepsilon}_\numobs}$,
we are guaranteed that $\frac{1}{\RUNSUM{\ensuremath{\widehat{T}}+1}} \leq \ensuremath{\widehat{\varepsilon}_\numobs}^2$.
Moreover, by the non-increasing nature of our step sizes, we have
$\STEP{\ensuremath{\widehat{T}}+1} \leq \STEP{\ensuremath{\widehat{T}}}$, which implies that
$\RUNSUM{\ensuremath{\widehat{T}}+1} \leq 2 \RUNSUM{\ensuremath{\widehat{T}}}$, and hence
\begin{align*}
\frac{1}{\RUNSUM{\ensuremath{\widehat{T}}}} \leq \frac{2}{\RUNSUM{\ensuremath{\widehat{T}}+1}} \; \leq \; 2
\ensuremath{\widehat{\varepsilon}_\numobs}^2.
\end{align*}
Putting together the pieces establishes the bound claimed in part (b).
It remains to establish the bias and variance bounds stated in
Lemma~\ref{LemBiasVarianceBound}, and we do so in the following
subsections. The following auxiliary lemma plays a role in both
proofs:
\blems[Properties of shrinkage matrices]
\label{LemMatrices}
For all indices $j \in \{1, 2, \ldots, r\}$, the shrinkage matrices
$\SHRINK{\ensuremath{t}}$ satisfy the bounds
\begin{subequations}
\begin{align}
\label{EqnUpperBias}
0 \leq \; \SHRINKSQ{\ensuremath{t}}_{jj} & \leq \frac{1}{2 e \RUNSUM{\ensuremath{t}}
\ensuremath{\widehat{\ensuremath{\lambda}}}_j}, \quad \mbox{and} \\
\label{EqnLowerBias}
\frac{1}{2} \min \{1, \RUNSUM{\ensuremath{t}} \ensuremath{\widehat{\ensuremath{\lambda}}}_j \} & \leq
1-\SHRINK{\ensuremath{t}}_{jj} \; \leq \min \{ 1, \RUNSUM{\ensuremath{t}} \ensuremath{\widehat{\ensuremath{\lambda}}}_j
\}.
\end{align}
\end{subequations}
\elems
\noindent See Appendix~\ref{AppLemMatrices} for the proof of this result.
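Since the matrices involved are diagonal, both claims reduce to scalar statements about the products $\pi_t(\lambda) = \prod_{\tau < t} (1 - \alpha_\tau \lambda)$ with $\alpha_\tau \lambda \in [0,1]$ and running sums $\eta_t = \sum_{\tau < t} \alpha_\tau$. The following brute-force check (our own sketch, with an arbitrary step-size sequence) verifies the bounds on a grid:

```python
import numpy as np

steps = 0.8 / np.sqrt(np.arange(1, 101))       # non-increasing steps in (0, 1]
etas = np.cumsum(steps)                        # running sums eta_t
ok_bias, ok_lower, ok_upper = True, True, True
for lam in np.linspace(1e-3, 1.0, 200):        # eigenvalue grid, step * lam <= 1
    prods = np.cumprod(1.0 - steps * lam)      # pi_t = prod_{tau<t} (1 - a_tau lam)
    # squared-bias bound: pi_t^2 <= 1 / (2 e eta_t lam)
    ok_bias &= np.all(prods ** 2 <= 1.0 / (2 * np.e * etas * lam) + 1e-12)
    # shrinkage bounds: 0.5 min{1, eta_t lam} <= 1 - pi_t <= min{1, eta_t lam}
    ok_lower &= np.all(0.5 * np.minimum(1.0, etas * lam) <= 1.0 - prods + 1e-12)
    ok_upper &= np.all(1.0 - prods <= np.minimum(1.0, etas * lam) + 1e-12)
print(ok_bias, ok_lower, ok_upper)
```

The upper bias bound follows the chain $\pi_t^2 \leq e^{-2\eta_t \lambda} \leq 1/(2e\eta_t\lambda)$, using $u e^{-u} \leq 1/e$.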
\subsubsection{Bounding the squared bias}
\label{SecBias}
Let us now prove the upper bound~\eqref{EqnBiasBound} on the squared
bias. We bound each of the two terms in the
definition~\eqref{EqnMainUpper} of $\ensuremath{B_\iter^2}$ in turn. Applying the
upper bound~\eqref{EqnUpperBias} from Lemma~\ref{LemMatrices}, we see
that
\begin{align*}
\frac{2}{n} \sum_{j=1}^{r} {\SHRINKSQ{\ensuremath{t}}_{jj} [U^T
\ensuremath{f^*}(x_1^\ensuremath{n})]_j^2} & \leq \frac{1}{e \; \ensuremath{n} \;
\RUNSUM{\ensuremath{t}}} \; \sum_{j=1}^{r} { \frac{[U^T \ensuremath{f^*}(x_1^\ensuremath{n})
]_j^2}{\ensuremath{\widehat{\ensuremath{\lambda}}}_j}}.
\end{align*}
Now consider the linear operator \mbox{$\Phi_X: \ell^2(\ensuremath{\mathbb{N}})
\rightarrow \ensuremath{\mathbb{R}}^\ensuremath{n}$} defined element-wise via $[\Phi_X]_{jk}
= \phi_j(x_k)$. Similarly, we define a (diagonal) linear operator
\mbox{$\DiagOpt: \ell^2(\ensuremath{\mathbb{N}}) \rightarrow \ell^2(\ensuremath{\mathbb{N}})$} with entries
$[\DiagOpt]_{jj} = \lambda_j$ and $[\DiagOpt]_{jk} = 0$ for $j \neq
k$. With these definitions, the vector $\FUN{} \in \ensuremath{\mathbb{R}}^\ensuremath{n}$ can
be expressed in terms of some sequence $a \in \ell^2(\ensuremath{\mathbb{N}})$ in the
form
\begin{align*}
\FUN{} & = \Phi_X \DiagOpt^{1/2} a.
\end{align*}
In terms of these quantities, we can write $\ensuremath{K} =
\frac{1}{n}\Phi_X \DiagOpt \Phi_X^T$. Moreover, as previously noted,
we also have $\ensuremath{K} = U \Dmat U^T$ where $\Dmat = \mbox{diag}
\{\ensuremath{\widehat{\ensuremath{\lambda}}}_1, \ensuremath{\widehat{\ensuremath{\lambda}}}_2, \ldots, \ensuremath{\widehat{\ensuremath{\lambda}}}_\ensuremath{n} \}$, and $U
\in \ensuremath{\mathbb{R}}^{\ensuremath{n} \times \ensuremath{n}}$ is orthonormal. Combining the two
representations, we conclude that
\begin{equation*}
\frac{\Phi_X \DiagOpt^{1/2}}{\sqrt{n}} = U \Dmat^{1/2} \Psi^*,
\end{equation*}
for some linear operator $\Psi: \ensuremath{\mathbb{R}}^n \rightarrow \ell^2(\ensuremath{\mathbb{N}})$
(with adjoint $\Psi^*$) such that $\Psi^* \Psi = I_{\ensuremath{n} \times
\ensuremath{n}}$. Using this equality, we have
\begin{align}
\frac{1}{e \, \RUNSUM{\ensuremath{t}} \, n} \sum_{j=1}^{r}{\frac{[U^T
\ensuremath{f^*}(x_1^\ensuremath{n})]_j^2}{\ensuremath{\widehat{\ensuremath{\lambda}}}_j}} & = \frac{1}{e \, \RUNSUM{\ensuremath{t}}
\, \ensuremath{n}} \; \sum_{j=1}^{r} {\frac{[U^T \Phi_X \DiagOpt^{1/2}
a]_j^2}{\ensuremath{\widehat{\ensuremath{\lambda}}}_j}} \nonumber \\
& = \frac{1}{e \, \RUNSUM{\ensuremath{t}}} \; \sum_{j = 1}^{r} { \frac{[U^T U
\Dmat^{1/2} \Psi^* a ]_j^2}{\ensuremath{\widehat{\ensuremath{\lambda}}}_j}} \nonumber \\
& = \frac{1}{e \, \RUNSUM{\ensuremath{t}}} \; \sum_{j = 1}^{r}
{\frac{\ensuremath{\widehat{\ensuremath{\lambda}}}_j \, [\Psi^* a]_j^2}{\ensuremath{\widehat{\ensuremath{\lambda}}}_j}} \nonumber \\
& \leq \frac{1}{e \, \RUNSUM{\ensuremath{t}}} \; \|\Psi^* a\|_2^2 \nonumber \\
\label{EqnBoundOne}
& \leq \frac{1}{e \, \RUNSUM{\ensuremath{t}}}.
\end{align}
Here the final step follows from the fact that $\Psi^* \Psi =
I_{\ensuremath{n} \times \ensuremath{n}}$, so that $\Psi \Psi^*$ is an orthogonal
projection and hence \mbox{$\|\Psi^* a\|_2^2 \leq \|a\|_2^2 =
\|f^*\|_{\ensuremath{\mathcal{H}}}^2 \leq 1$.}
Turning to the second term in the definition~\eqref{EqnMainUpper}, we
have
\begin{align}
\frac{2}{n} \sum_{j=r+1}^{n} {[U^T \ensuremath{f^*}(x_1^\ensuremath{n}) ]_j^2} & = \frac{2}{n}
\sum_{j=r+1}^{n}{[U^T \Phi_X \DiagOpt^{1/2} a]_j^2} \nonumber \\
& = 2 \sum_{j=r+1}^\ensuremath{n} {[U^T U \Dmat^{1/2} \Psi^* a]_j^2} \nonumber \\
& = 2 \sum_{j = r+1}^{n} [\Dmat^{1/2} \Psi^* a]_j^2 \nonumber \\
\label{EqnBoundTwo}
& = 0,
\end{align}
where the final step uses the fact that $\Dmat^{1/2}_{jj} = 0$ for all
$j \in \{r +1, \ldots, \ensuremath{n} \}$ by construction. Combining the
upper bounds~\eqref{EqnBoundOne} and~\eqref{EqnBoundTwo} with the
definition~\eqref{EqnMainUpper} of $\ensuremath{B_\iter^2}$ yields the
claim~\eqref{EqnBiasBound}.
\subsubsection{Controlling the variance}
\label{SecVar}
Let us now prove the bounds~\eqref{EqnVarBound} on the variance term
$\ensuremath{V_\iter}$. (To simplify the proof, we assume throughout that $\sigma =
1$; the general case can be recovered by a simple rescaling argument).
By the definition of $\ensuremath{V_\iter}$, we have
\begin{align*}
\ensuremath{V_\iter} & = \frac{2}{n} \sum_{j = 1}^{r}{(1-\SHRINK{\ensuremath{t}}_{jj})^{2}
[U^T w]_j^2} \; = \; \frac{2}{\ensuremath{n}} \ensuremath{\operatorname{trace}}(U Q U^T \, w w^T),
\end{align*}
where $Q = \mbox{diag} \{ (1 -\SHRINK{\ensuremath{t}}_{jj})^2, \; j = 1, \ldots,
\ensuremath{n} \}$ is a diagonal matrix. Since $\ensuremath{\mathbb{E}}[w w^T] \leq I_{\ensuremath{n}
\times \ensuremath{n}}$ by assumption, we have $\ensuremath{\mathbb{E}}[\ensuremath{V_\iter}] =
\frac{2}{\ensuremath{n}} \ensuremath{\operatorname{trace}}(Q)$. Using the upper bound in
equation~\eqref{EqnLowerBias} from Lemma~\ref{LemMatrices}, we have
\begin{align*}
\frac{1}{\ensuremath{n}} \ensuremath{\operatorname{trace}}(Q) & \leq \frac{1}{\ensuremath{n}} \sum_{j=1}^r \min
\{1, (\RUNSUM{\ensuremath{t}} \ensuremath{\widehat{\ensuremath{\lambda}}}_j)^2 \} \; \leq \; \RUNSUM{\ensuremath{t}} \;
\biggl( \ensuremath{\mathcal{R}}_{\ensuremath{K}}(1/\sqrt{\RUNSUM{\ensuremath{t}}}) \biggr)^2,
\end{align*}
where the final inequality uses the bound $\min\{1, x^2\} \leq \min\{1,
x\}$ for $x \geq 0$ together with the definition of $\ensuremath{\mathcal{R}}_\ensuremath{K}$.
Putting together the pieces, we see that
\begin{subequations}
\begin{align}
\label{EqnUpperEmyvar}
\ensuremath{\mathbb{E}}[\ensuremath{V_\iter}] & \leq 2 \; \RUNSUM{\ensuremath{t}} \biggl(
\ensuremath{\mathcal{R}}_{\ensuremath{K}}(1/\sqrt{\RUNSUM{\ensuremath{t}}}) \biggr)^2.
\end{align}
Similarly, using the lower bound in equation~\eqref{EqnLowerBias}, we
can show that
\begin{align}
\label{EqnLowerEmyvar}
\ensuremath{\mathbb{E}}[\ensuremath{V_\iter}] & \geq \frac{\sigma^2}{4} \RUNSUM{\ensuremath{t}} \biggl(
\ensuremath{\mathcal{R}}_{\ensuremath{K}}(1/\sqrt{\RUNSUM{\ensuremath{t}}}) \biggr)^2.
\end{align}
\end{subequations}
Our next step is to obtain a bound on the two-sided tail probability
$\ensuremath{\mathbb{P}}[|\ensuremath{V_\iter} - \ensuremath{\mathbb{E}}[\ensuremath{V_\iter}]| \geq \delta]$, for which we make use of
a result on two-sided deviations for quadratic forms in sub-Gaussian
variables. In particular, consider a random variable of the form
$Q_\ensuremath{n} = \sum_{i,j=1}^\ensuremath{n} a_{ij} (Z_i Z_j - \ensuremath{\mathbb{E}}[Z_i Z_j])$
where $\{Z_i\}_{i=1}^\ensuremath{n}$ are i.i.d. zero-mean and sub-Gaussian
variables (with parameter $1$). Wright~\cite{Wri73} proves that there
is a constant $c$ such that
\begin{align}
\label{EqnWrightBound}
\ensuremath{\mathbb{P}} \big[|Q_\ensuremath{n}| \geq \ensuremath{\delta} \big] \leq \exp \Big(-c \;
\min \big \{ \frac{\ensuremath{\delta}}{\opnorm{A}},
\frac{\ensuremath{\delta}^2}{\fronorm{A}^2} \big \} \Big) \quad \mbox{for all
$\ensuremath{\delta} > 0$,}
\end{align}
where $(\opnorm{A}, \fronorm{A})$ are (respectively) the operator and
Frobenius norms of the matrix \mbox{$A = \{a_{ij}\}_{i,j=1}^\ensuremath{n}$.}
If we apply this result with $A= \frac{2}{\ensuremath{n}} U Q U^T$ and $Z_i =
w_i$, then we have $Q_\ensuremath{n} = \ensuremath{V_\iter} - \ensuremath{\mathbb{E}}[\ensuremath{V_\iter}]$, and moreover
\begin{align*}
\opnorm{A} & \leq \frac{2 }{\ensuremath{n}}, \quad \mbox{since $Q \preceq
I_{\ensuremath{n} \times \ensuremath{n}}$ and $U$ is orthonormal, and} \\
\fronorm{A}^2 \; = \; \frac{4}{\ensuremath{n}^2} \ensuremath{\operatorname{trace}}(U Q U^T U Q U^T) &
= \frac{4}{\ensuremath{n}^2} \ensuremath{\operatorname{trace}}(Q^2) \leq \frac{4}{\ensuremath{n}^2} \ensuremath{\operatorname{trace}}(Q)
\leq \frac{4}{\ensuremath{n}} \RUNSUM{\ensuremath{t}} \; \biggl(
\ensuremath{\mathcal{R}}_{\ensuremath{K}}(1/\sqrt{\RUNSUM{\ensuremath{t}}}) \biggr)^2.
\end{align*}
Consequently, the bound~\eqref{EqnWrightBound} implies that
\begin{align}
\label{EqnTwoMyvar}
\ensuremath{\mathbb{P}} \big[ |\ensuremath{V_\iter} - \ensuremath{\mathbb{E}}[\ensuremath{V_\iter}] | \geq \ensuremath{\delta} \big] & \leq \exp
\Big( - \frac{c}{4} \, \ensuremath{n} \, \delta \min \big\{1, \; \delta \, \big(\RUNSUM{\ensuremath{t}} \,
\big( \ensuremath{\mathcal{R}}_{\ensuremath{K}}(1/\sqrt{\RUNSUM{\ensuremath{t}}}) \big)^2 \big)^{-1} \big\} \Big).
\end{align}
Since $\ensuremath{t} \leq \ensuremath{\widehat{T}}$, setting $\delta = 3 \sigma^2 \,
\RUNSUM{\ensuremath{t}} \, \big( \ensuremath{\mathcal{R}}_{\ensuremath{K}}(1/\sqrt{\RUNSUM{\ensuremath{t}}}) \big)^2$
yields the claim~\eqref{EqnVarBound}.
\subsection{Proof of Theorem~\ref{ThmRandDesign}}
This proof is based on the following two steps:
\begin{itemize}
\item first, proving that the error $\|\FUNIT{\ensuremath{\widehat{T}}} - \ensuremath{f^*}\|_2$ in
the $L^2(\ensuremath{\mathbb{P}})$ norm is, with high probability, close to the error
in the $L^2(\ensuremath{\mathbb{P}}_\ensuremath{n})$ norm, and
\item second, showing the empirical critical radius $\ensuremath{\widehat{\varepsilon}_\numobs}$ defined
in equation~\eqref{EqnDefnEmpCrit} is upper bounded by the
population critical radius $\ensuremath{\varepsilon_\numobs}$ defined in
equation~\eqref{EqnDefnPopCrit}.
\end{itemize}
\noindent Our proof is based on a number of more technical auxiliary
lemmas, proved in the appendices. The first lemma provides a high
probability bound on the Hilbert norm of the estimate $\FUNIT{\ensuremath{\widehat{T}}}$.
\blems
\label{LemHilbBound}
There exist universal constants $c_1$ and $c_2 > 0$ such that
$\|\FUNIT{\ensuremath{t}}\|_{\ensuremath{\mathcal{H}}} \leq 2$ for all $\ensuremath{t} \leq \ensuremath{\widehat{T}}$ with
probability greater than or equal to $1 - c_1 \exp(-c_2 \ensuremath{n}
\ensuremath{\widehat{\varepsilon}_\numobs}^2)$.
\elems
See Appendix~\ref{AppLemHilbBound} for the proof of this claim. Our
second lemma shows that in any bounded RKHS, the $\ensuremath{L^2(\mathbb{P})}$ and $\ensuremath{{L^2(\mathbb{P}_n)}}$
norms are uniformly close, up to the population critical radius
$\ensuremath{\varepsilon_\numobs}$, over a Hilbert ball of constant radius:
\blems
\label{LemGeneral}
Consider a Hilbert space such that $\|g\|_\infty \leq \ensuremath{B}$ for all
$g \in \ensuremath{\mathbb{B}}_\ensuremath{\mathcal{H}}(3)$. Then there exist universal constants
$(\ensuremath{c}_1, \ensuremath{c}_2, \ensuremath{c}_3)$ such that for any $t \geq
\ensuremath{\varepsilon_\numobs}$, we have
\begin{equation}
\label{EqnSandwich}
| \|g\|_\ensuremath{n}^2 - \|g\|_2^2 | \; \leq \; \ensuremath{c}_1 t^2,
\end{equation}
with probability at least $1 - \ensuremath{c}_2 \exp(-\ensuremath{c}_3 \ensuremath{n}
t^2)$.
\elems
\noindent This claim follows from known results on reproducing kernel
Hilbert spaces (e.g., Lemma 5.16 in the paper~\cite{vandeGeer} and
Theorem 2.1 in the paper~\cite{Bartlett05}). Our final lemma, proved
in Appendix~\ref{AppLemEmpTru}, relates the critical empirical radius
$\ensuremath{\widehat{\varepsilon}_\numobs}$ to the population radius $\ensuremath{\varepsilon_\numobs}$:
\blems
\label{LemEmpTru}
The inequality $\ensuremath{\widehat{\varepsilon}_\numobs} \leq \ensuremath{\varepsilon_\numobs}$ holds with probability at
least $1 - \ensuremath{c}_1 \exp( - \ensuremath{c}_2 \ensuremath{n} \ensuremath{\widehat{\varepsilon}_\numobs}^2)$.
\elems
With these lemmas in hand, the proof of the theorem is
straightforward. First, from Lemma~\ref{LemHilbBound}, we have
$\|\FUNIT{\ensuremath{\widehat{T}}}\|_{\ensuremath{\mathcal{H}}} \leq 2$ and hence by triangle inequality,
$\|\FUNIT{\ensuremath{\widehat{T}}} - \ensuremath{f^*}\|_{\ensuremath{\mathcal{H}}} \leq 3$ with high probability as
well. Next, applying Lemma~\ref{LemGeneral} with $t = \ensuremath{\varepsilon_\numobs}$, we
find that
\begin{align*}
\| \FUNIT{\ensuremath{\widehat{T}}} - \ensuremath{f^*} \|_2^2 & \leq \| \FUNIT{\ensuremath{\widehat{T}}} - \ensuremath{f^*}
\|_\ensuremath{n}^2 + \ensuremath{c}_1 \ensuremath{\varepsilon_\numobs}^2 \leq \ensuremath{c}_4(\ensuremath{\widehat{\varepsilon}_\numobs}^2 +
\ensuremath{\varepsilon_\numobs}^2),
\end{align*}
with probability greater than $1 -\ensuremath{c}_2 \exp(-\ensuremath{c}_3
\ensuremath{n} \ensuremath{\varepsilon_\numobs}^2)$. Finally, applying Lemma~\ref{LemEmpTru} yields
that the bound \mbox{$\|\FUNIT{\ensuremath{\widehat{T}}} - \ensuremath{f^*}\|_2^2 \leq \ensuremath{c}
\ensuremath{\varepsilon_\numobs}^2$} holds with the claimed probability.
\subsection{Proof of Corollaries}
\label{SecProofCorAchieveSmooth}
In each case, it suffices to upper bound the population critical rate
$\ensuremath{\varepsilon_\numobs}^2$ previously defined.
\paragraph{Proof of Corollary~\ref{CorAchieveFinite}:}
In this case, we have
\begin{align*}
\ensuremath{\mathcal{R}}_{\ensuremath{\mathbb{K}}}(\epsilon) & = \frac{1}{\sqrt{\ensuremath{n}}} \sqrt{\sum_{j=1}^m
\min \{ \ensuremath{\lambda}_j, \epsilon^2 \}} \; \leq \;
\sqrt{\frac{m}{\ensuremath{n}}} \, \epsilon
\end{align*}
so that $\ensuremath{\varepsilon_\numobs}^2 = c' \sigma^2 \frac{m}{\ensuremath{n}}$.
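As an illustration (not part of the original argument), the critical radius can be computed numerically by bisection. The sketch below uses the complexity $\ensuremath{\mathcal{R}}_\ensuremath{\mathbb{K}}(\epsilon) = \sqrt{\frac{1}{n}\sum_j \min\{\ensuremath{\lambda}_j, \epsilon^2\}}$ and the critical inequality $\ensuremath{\mathcal{R}}_\ensuremath{\mathbb{K}}(\epsilon) \leq 40 \epsilon^2/\sigma$ quoted in the next proof; the specific constants are taken from the text, while the solver itself is our own device.

```python
import math

# Numerical sketch (not from the paper): compute the critical radius by
# bisection.  R(eps) = sqrt((1/n) * sum_j min(lambda_j, eps^2)) is the
# kernel complexity, and eps_n is the smallest eps satisfying the
# critical inequality R(eps) <= c * eps^2 / sigma (c = 40 in the text).
def kernel_complexity(eps, lams, n):
    return math.sqrt(sum(min(lam, eps * eps) for lam in lams) / n)

def critical_radius(lams, n, sigma=1.0, c=40.0):
    # R(eps)/eps^2 is decreasing in eps, so the feasible set is an
    # interval [eps_n, infinity) and bisection applies.
    lo, hi = 1e-12, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if kernel_complexity(mid, lams, n) <= c * mid * mid / sigma:
            hi = mid
        else:
            lo = mid
    return hi

# Rank-m kernel with unit eigenvalues: eps_n = sqrt(m/n)/c, so that
# eps_n^2 is proportional to m/n, matching the corollary.
print(critical_radius([1.0] * 10, 1000))  # 0.0025 = sqrt(10/1000)/40
```

For a rank-$m$ kernel the solver recovers $\varepsilon_n \propto \sqrt{m/n}$, consistent with the rate displayed above.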
\paragraph{Proof of Corollary~\ref{CorAchieveSmooth}:}
For any $M \geq 1$, we have
\begin{align*}
\ensuremath{\mathcal{R}}_{\ensuremath{\mathbb{K}}}(\epsilon) \; = \; \frac{1}{\sqrt{\ensuremath{n}}} \,
\sqrt{\sum_{j=1}^\infty \min \{C \, j^{-2 \ensuremath{\nu}}, \epsilon^2 \}} &
\leq \sqrt{\frac{M}{\ensuremath{n}}} \epsilon + \sqrt{\frac{C}{\ensuremath{n}}} \,
\sqrt{\sum_{j=\lceil M \rceil}^\infty j^{-2 \ensuremath{\nu}}} \\ & \leq
\sqrt{\frac{M}{\ensuremath{n}}} \epsilon + \sqrt{\frac{C'}{\ensuremath{n}}}
\sqrt{\int_{M}^\infty t^{-2 \ensuremath{\nu}} dt} \\
& \leq \sqrt{\frac{M}{\ensuremath{n}}} \epsilon + C''
\frac{1}{\sqrt{\ensuremath{n}}} (1/M)^{\ensuremath{\nu} -\frac{1}{2}}.
\end{align*}
Setting $M = \epsilon^{-1/\ensuremath{\nu}}$ yields $\ensuremath{\mathcal{R}}_\ensuremath{\mathbb{K}}(\epsilon) \leq
\frac{C^*}{\sqrt{\ensuremath{n}}} \, \epsilon^{1 - \frac{1}{2 \ensuremath{\nu}}}$.
Consequently, the critical inequality $\ensuremath{\mathcal{R}}_\ensuremath{\mathbb{K}}(\epsilon) \leq 40
\epsilon^2/\sigma$ is satisfied for $\ensuremath{\varepsilon_\numobs}^2 \asymp
(\sigma^2/\ensuremath{n})^{\frac{2 \ensuremath{\nu}}{2 \ensuremath{\nu}+1}}$, as claimed.
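The exponent can also be checked numerically (an illustration of ours, not part of the proof): for the polynomial decay $\ensuremath{\lambda}_j = j^{-2\ensuremath{\nu}}$ with $\ensuremath{\nu} = 1$, the bound predicts $\ensuremath{\mathcal{R}}_\ensuremath{\mathbb{K}}(\epsilon) \propto \epsilon^{1/2}$, so shrinking $\epsilon$ by a factor of $100$ should shrink the complexity by a factor of about $10$.

```python
import math

# Numerical sketch (illustration only): for eigenvalues lambda_j = j^{-2 nu}
# with nu = 1, the bound above predicts R(eps) ~ eps^{1 - 1/(2 nu)} = eps^{1/2}.
# We truncate the infinite sum at J terms; taking n = 1 drops the 1/sqrt(n) factor.
def complexity(eps, nu=1.0, J=10**6, n=1):
    s = sum(min(j ** (-2.0 * nu), eps * eps) for j in range(1, J + 1))
    return math.sqrt(s / n)

r1, r2 = complexity(1e-2), complexity(1e-4)
# eps shrinks by a factor of 100, so R should shrink by about 100**0.5 = 10.
print(r1 / r2)  # close to 10
```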
\subsection{Proof of Proposition~\ref{PropMainRidge}}
We now turn to the proof of our results on the kernel ridge regression
estimate~\eqref{EqnKernelRidge}. The proof follows a very similar
structure to that of Theorem~\ref{ThmMain}. Recall the
eigendecomposition $\ensuremath{K} = U \Dmat U^T$ of the empirical kernel
matrix, and that we use $r$ to denote its rank. For each $\nu >
0$, we define the \emph{ridge shrinkage matrix}
\begin{align}
\label{EqnDefnKERSHRINK}
\ensuremath{R^\PenPar} & \ensuremath{: \, = } \big(I_{\ensuremath{n} \times \ensuremath{n}} + \nu \Dmat
\big)^{-1}.
\end{align}
We then have the following analog of Lemma~\ref{LemBiasVarianceBound}
from the proof of Theorem~\ref{ThmMain}:
\blems[Bias/variance decomposition for kernel ridge regression]
\label{LemRidgeOne}
For any $\nu > 0$, the prediction error for the estimate
$\ensuremath{\widehat{f}}_{\nu}$ is bounded as
\begin{align}
\| \ensuremath{\widehat{f}}_{\nu} - \ensuremath{f^*}\|_\ensuremath{n}^2 & \leq \frac{2}{\ensuremath{n}}
\sum_{j=1}^{r} [\ensuremath{R^\PenPar}]_{jj}^{2} [U^T \ensuremath{f^*}(x_1^\ensuremath{n})]_j^2 +
\frac{2}{\ensuremath{n}} \sum_{j = r+1}^{n}{[U^T \ensuremath{f^*}(x_1^\ensuremath{n})]_j^2} +
\frac{2}{\ensuremath{n}} \sum_{j = 1}^{r} \big(1- \ensuremath{R^\PenPar}_{jj}\big)^{2}
[U^T w]_j^2.
\end{align}
\elems
\noindent Note that Lemma~\ref{LemRidgeOne} is identical to
Lemma~\ref{LemBiasVarianceBound} with the shrinkage matrices
$\SHRINK{\ensuremath{t}}$ replaced by their analogues $\ensuremath{R^\PenPar}$. See
Appendix~\ref{AppRidgeOne} for the proof of this claim. \\
\noindent Our next step is to show that the diagonal elements of the
shrinkage matrices $\ensuremath{R^\PenPar}$ are bounded:
\blems[Properties of kernel ridge shrinkage]
\label{LemRidgeTwo}
For all indices $j \in
\{1, 2, \ldots, r\}$, the diagonal entries $\ensuremath{R^\PenPar}$ satisfy the
bounds
\begin{subequations}
\begin{align}
\label{EqnUpperRidgeBias}
0 \; \leq \; (\ensuremath{R^\PenPar}_{jj})^2 \leq \frac{1}{4 \nu
\ensuremath{\widehat{\ensuremath{\lambda}}}_j}, \quad \mbox{and} \\
\label{EqnLowerRidgeBias}
\frac{1}{2} \min \big \{ 1, \nu \ensuremath{\widehat{\ensuremath{\lambda}}}_j \big \} \; \leq \;
1-\ensuremath{R^\PenPar}_{jj} \; \leq \; \min \big \{1, \nu \ensuremath{\widehat{\ensuremath{\lambda}}}_j \big
\}.
\end{align}
\end{subequations}
\elems
\noindent Note that this is the analog of Lemma~\ref{LemMatrices} from
Theorem~\ref{ThmMain}, albeit with the constant $\frac{1}{4}$ in the
bound~\eqref{EqnUpperRidgeBias} instead of $\frac{1}{2 e}$. See
Appendix~\ref{AppRidgeTwo} for the proof of this claim. With these
lemmas in place, the remainder of the proof follows as in the proof of
Theorem~\ref{ThmMain}.
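Since the ridge shrinkage matrix has the explicit form $\ensuremath{R^\PenPar} = (I_{\ensuremath{n} \times \ensuremath{n}} + \nu \Dmat)^{-1}$, the bounds in Lemma~\ref{LemRidgeTwo} are elementary scalar inequalities for $r = 1/(1 + \nu \ensuremath{\widehat{\ensuremath{\lambda}}}_j)$, and can be spot-checked numerically. The following sketch is ours, for illustration only:

```python
# Spot-check (illustration only) of the kernel ridge shrinkage bounds:
# with r = 1/(1 + nu*lam) we verify r^2 <= 1/(4*nu*lam) and
# (1/2)*min(1, nu*lam) <= 1 - r <= min(1, nu*lam) over a grid of values.
def bounds_hold(nu, lam, tol=1e-12):
    r = 1.0 / (1.0 + nu * lam)
    x = nu * lam
    return (r * r <= 1.0 / (4.0 * x) + tol
            and 0.5 * min(1.0, x) <= (1.0 - r) + tol
            and (1.0 - r) <= min(1.0, x) + tol)

grid = [10.0 ** k for k in range(-4, 5)]   # nu and lambda from 1e-4 to 1e4
print(all(bounds_hold(nu, lam) for nu in grid for lam in grid))  # True
```

The inequalities hold identically: $r^2 \leq 1/(4\nu\ensuremath{\widehat{\ensuremath{\lambda}}}_j)$ is $(1+x)^2 \geq 4x$ with $x = \nu \ensuremath{\widehat{\ensuremath{\lambda}}}_j$, and $1 - r = x/(1+x)$ is sandwiched between $\frac{1}{2}\min\{1,x\}$ and $\min\{1,x\}$.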
\section{Discussion}
In this paper, we have analyzed the early stopping strategy as applied
to gradient descent on the non-parametric least squares loss. Our
main contribution was to propose an easily computable and
data-dependent stopping rule, and to provide upper bounds on the
empirical $\ensuremath{{L^2(\mathbb{P}_n)}}$ error (Theorem~\ref{ThmMain}) and population $\ensuremath{L^2(\mathbb{P})}$
error (Theorem~\ref{ThmRandDesign}). We demonstrated in
Corollaries~\ref{CorAchieveSmooth} and~\ref{CorAchieveFinite} that
our stopping rule yields minimax optimal rates for both low rank
kernel classes and Sobolev spaces. Our simulation results confirm that
our stopping rule yields theoretically optimal rates of convergence
for Lipschitz kernels, and performs favorably in comparison to
stopping rules based on hold-out data and Stein's Unbiased Risk
Estimate. We also showed that early stopping with sum of step-sizes
$\RUNSUM{\ensuremath{t}} = \sum_{k=0}^{t-1}{\Step{k}}$ has a regularization
path that satisfies almost identical mean-squared error bounds as
kernel ridge regression indexed by penalty parameter $\nu$.
Our analysis and stopping rule may be improved and extended in a
number of ways. First, it would be interesting to see how our stopping
rule can be adapted to mis-specified models. As specified, our method
relies on computation of the eigenvalues of the kernel matrix. A
stopping rule based on approximate eigenvalue computations, for
instance via some form of sub-sampling~\cite{DrinMah05}, would be
interesting to study as well.
\subsection*{Acknowledgements}
This work was partially supported by NSF grant DMS-1107000 to MJW and BY. In addition, BY was partially supported by NSF grant SES-0835531 (CDI), ARO grant W911NF-11-1-0114, and the Center for Science of
Information (CSoI), a US NSF Science and Technology Center, under grant agreement CCF-0939370; MJW was also partially supported by ONR MURI grant N00014-11-1-086. During this work, GR received partial support from a Berkeley Graduate Fellowship.
\section{Introduction and Previous Work}
Consider the collection of all $t$--multisets over the universe
$[n]=\{1,\ldots, n\}$. A universal cycle (ucycle) on multisets is
a cyclic string $X=a_1a_2...a_k$ with $a_i\in[n]$ for which the
collection $\big\{\{a_1,a_2,...,a_t\},$
$\{a_2,a_3,...,a_{t+1}\},...,$ $\{a_{k-t+1},a_{k-t+2},...,a_k\},$
$\{a_{k-t+2},a_{k-t+3},...,a_k,a_1\},...
\{a_k,a_1,a_2,...,a_{t-1}\} \big\}$ is precisely the collection of
all $t$--multisets over $[n]$, i.e. each $t$--multiset over $[n]$
occurs precisely once in the above collection. For the remainder
of this paper, the term universal cycle will refer to universal
cycles on multisets unless noted otherwise.
Universal cycles do not exist for every value of $n$ and $t$.
Indeed, simple symmetry arguments show that each of the numbers
$1,\ldots,n$ must occur an equal number of times in the ucycle.
Since the length of the ucycle is equal to the number of
$t$--multisets over $[n]$, which is $\binom{n+t-1}{t}$, we must
have that $n\!\left|\binom{n+t-1}{t}\right.$. While this
condition is necessary, it is not sufficient for the existence of
ucycles.
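For $t=3$ this necessary condition takes a simple form: since $\binom{n+2}{3} = n(n+1)(n+2)/6$ and $(n+1)(n+2)$ is always even, $n$ divides $\binom{n+2}{3}$ exactly when $3 \nmid n$. The following quick computation (ours, for illustration) confirms this:

```python
from math import comb

# Illustration (not from the paper): for t = 3 the necessary condition
# n | C(n+t-1, t) = n(n+1)(n+2)/6 holds exactly when n is not divisible
# by 3, since (n+1)(n+2) is always even and is divisible by 3 iff 3 does
# not divide n.
def divisibility_holds(n, t=3):
    return comb(n + t - 1, t) % n == 0

admissible = [n for n in range(1, 31) if divisibility_holds(n)]
print(admissible)  # every n in 1..30 with n % 3 != 0
```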
To date, the bulk of research on ucycles has been devoted to
studying ucycles over sets (as opposed to multisets). Ucycles over
sets are constructed in the same fashion as ucycles over
multisets, except that we consider the collection of all $t$--sets
over $[n]$ instead of the collection of all $t$--multisets, and
our divisibility condition becomes $n\!\left|\binom{n}{t}\right.$.
In \cite{Chung}, Chung, Diaconis and Graham conjectured that for
each value of $t$, there exists a number $n_0(t)$ such that
universal cycles exist for $n\left|\binom{n}{t}\right.$ and $n\geq
n_0(t)$. In \cite{Hurlbert}, Hurlbert consolidated and extended
previous work, verifying the conjecture for $t=2$ and $3$ and
developing partial results for $t=4$ and $6$. In \cite{Godbole},
Godbole et al.~considered universal cycles over multisets for the
case $t=2$, and verified the analogous form of the Chung--Diaconis--Graham conjecture (i.e. with the modified divisibility
criterion) for this case. This work is of particular interest
because Godbole et al.~used a new inductive technique to arrive at
their proof, and in this paper we extend this technique to the
case $t=3$. This new work suggests that the inductive method is a
promising way of addressing the Chung--Diaconis--Graham
conjecture. We also consider a second proof of the conjecture for
the $t=3$ multiset case, which builds off universal cycles on sets
and lends itself more easily to generalization.
\section{An Inductive Proof for Universal Cycles on 3--Multisets}
For $t=3$, the condition $n\left|\binom{n+t-1}{t}\right.$ implies
that $n\equiv 1$ or $2$ (mod 3). We will consider the case
$n\equiv 1$ (mod 3), as the other case can be dealt with
similarly. We will show that for $n\geq 4$, universal cycles exist
whenever $n$ satisfies $n\left|\binom{n+t-1}{t}\right.$.
Before describing the proof itself, we will define some
terminology that will be useful for describing universal cycles.
We say that a cyclic string $X=a_1a_2...a_k$ \emph{contains} the
multiset collection $\mathcal{I}$ if
$\mathcal{I}=\big\{\{a_1,a_2,a_3\},\{a_2,a_3,a_4\},...,\{a_{k-2},a_{k-1},{a_k}\},\{a_{k-1},a_k,a_1\},\{a_k,a_1,a_2\}
\big\}$, where each of these sets must be distinct. Clearly
$k=\binom{n+2}{3}$, since this is the number of $3$--multisets on
$[n]$.
For a string $X=a_1a_2...a_k$, we call the \emph{lead-in} of $X$
the substring $a_1a_2$ and the \emph{lead-out} the substring
$a_{k-1}a_k$.
Now, consider the collection of all 3--multisets over $[n]$. We
shall partition this collection into four subcollections. Let
$\mathcal{A}$ be the collection of all 3--multisets over $[n-3]$,
and let $\mathcal{B}$ be the collection of all $3$--multisets over
$\{n-2,n-1,n\}$ and $[n-6]$ which contain at least one element
from $\{n-2,n-1,n\}$. Let $\mathcal{C}$ be the collection of all
3--multisets with one or two elements from $\{n-5,n-4,n-3\}$ and
one or two elements from $\{n-2,n-1,n\}$, and let $\mathcal{D}$ be
the collection of all 3--multisets with one element from each of
$[n-6]$, $\{n-5,n-4,n-3\}$, and $\{n-2,n-1,n\}$. We can see that
$\mathcal{A,B,C},$ and $\mathcal{D}$ are disjoint, and that their
union is the collection of all 3--multisets on $[n]$, as desired.
Now, let $S$ be a universal cycle on $[n-6]$, and since $1,1,1$
must occur somewhere in $S$ and the beginning of $S$ is arbitrary,
we shall have $S$ begin with $1,1,1$. We shall also select $S$ so
that its lead-out is $n-6,n-7$. Thus $S$, when considered as a
cyclic string, contains all $3$--multisets over $[n-6]$, and when
considered as a non-cyclic string, contains all $3$--multisets
except $\{1,n-7,n-6\}$ and $\{1,1,n-7\}$. Let $T$ be a string over
$[n-3]$ such that $ST$---the concatenation of $S$ and $T$---is a
universal cycle over $[n-3]$. It is not clear that such a $T$ must
exist, but we shall find a specific example shortly. In the
example we will find, $T$ will begin with $1,1$ and will end with
$n-3,n-4$. Since $T$ begins with $1,1$, the string $ST$ contains
the multisets $\{1,n-7,n-6\},\ \{1,1,n-7\}$. We can see that the
cyclic string $ST$ contains all of the multisets in $\mathcal{A}$,
and that when $ST$ is considered as a non-cyclic string, it
contains $\mathcal{A}\backslash\big\{\{1,n-4,n-3\},\
\{1,1,n-4\}\big\}$. Now, consider the string $T^\prime$ obtained
by taking $T$ and replacing each instance of $n-5$ by $n-2$, $n-4$
by $n-1$, and $n-3$ by $n$. Since $T$ contained all multisets over
$[n-3]$ which contained at least one element from
$\{n-5,n-4,n-3\}$, we have that $T^\prime$ contains all multisets
over $\{n-2,n-1,n\}$ and $[n-6]$ which contain at least one
element from $\{n-2,n-1,n\}$, i.e. $T^\prime$ contains all the
multisets in $\mathcal{B}$. Since the lead-in of $T$ is $1,1$, the
lead-in of $T^\prime$ is also $1,1$, and since $T$ ends with
$n-3,n-4$, $T^\prime$ ends with $n,n-1$. If we consider the cyclic
string $STT^\prime$, we can see that this string contains all the
multisets in $\mathcal{A}\cup \mathcal{B}$, while the non-cyclic
version of this string is missing the multisets $\{1,n-1,n\},\
\{1,1,n-1\}$.
For notational convenience, we will use the following assignments:
$a:=n-5,\ b:=n-4,\ c:=n-3,\ d:=n-2,\ e:=n-1,$ and $f:=n$. Now,
consider the following string:
\begin{eqnarray*}
V=&&\mathrm{be}(n-6)\mathrm{af}(n-7)\mathrm{be}(n-8)\mathrm{af}(n-9)...\mathrm{af}1\mathrm{be}\\
&&\ \ \ \mathrm{ad}(n-6)\mathrm{ce}(n-7)\mathrm{ad}(n-8)\mathrm{ce}(n-9)...\mathrm{ce}1\mathrm{ad}\\
&&\ \ \ \ \ \
\mathrm{cf}(n-6)\mathrm{bd}(n-7)\mathrm{cf}(n-8)\mathrm{bd}(n-9)...\mathrm{bd}1\mathrm{cfe}.
\end{eqnarray*}
We can see that this string contains every multiset in
$\mathcal{D}$, as well as the multisets $\{a,b,e\}$, $\{a,d,e\}$,
$\{a,c,d\}$, and $\{c,d,f\}$. Now, the following string (found
with the aid of a computer) contains all of the multisets in
$\mathcal{C}\backslash \big\{\{a,b,e\}$, $\{a,d,e\}$, $\{a,c,d\}$,
$\{c,d,f\} \big\}$:
$$U=\mathrm{aaffc\phantom{1}aeebb\phantom{1}decec\phantom{1}bddcc\phantom{1}fbada\phantom{1}dfbf}.$$
Note that while the multisets $\{b,b,f\}$ and $\{b,e,f\}$ are not
present in the above string $U$, they are present in the
concatenation of $U$ with $V$. Similarly, while $U$ does not
contain $\{a,e,f\}$ and $\{a,a,f\}$, these multisets are present
in the concatenation of $T^\prime$ with $U$.
Now, we can see that the string $STT^\prime UV$ is a universal
cycle over $[n]$ because the non-cyclic string $STT^\prime$
contained all the multisets in
$\mathcal{A}\cup\mathcal{B}\backslash\big\{\{1,n-1,n\},\
\{1,1,n-1\} \big\}$, and it is precisely the multisets
$\{1,n-1,n\}$ and $\{1,1,n-1\}$ which are obtained by the
wrap-around of the lead-out of $V$ with the lead-in of $S$. The
lead-ins and lead-outs of the other strings have been engineered so
as to ensure that each multiset occurs precisely once.
This completes the induction proof, since the string $ST$ is a
universal cycle over $[n-3]$ (taking the place of $S$ in the
previous iteration of the induction), and the string $T^\prime UV$
extends this cycle to $[n]$ (taking the place of $T$ in the
previous iteration of the induction). Also note that $T^\prime UV$
begins with $1,1$ and ends with $n,n-1$, as required for the
induction hypothesis.
Thus, all that remains is to find a base case from which the
induction can proceed. A possible base case (there are many) for
$n-6=4,\ n-3=7$ is
\begin{eqnarray*}S&=&11144\phantom{1}42223\phantom{1}33121\phantom{1}24343\\
T&=&11522\phantom{1}63374\phantom{1}45166\phantom{1}27732\phantom{1}57366\phantom{1}77135\phantom{1}34641\phantom{1}71555\phantom{1}36127\phantom{1}42556\phantom{1}66477\phantom{1}75526\phantom{1}4576,\end{eqnarray*}
which would lead to
\begin{eqnarray*}
T^\prime&=&11822\phantom{1}93304\phantom{1}48199\phantom{1}20032\phantom{1}80399\phantom{1}00138\phantom{1}34941\phantom{1}01888\phantom{1}39120\phantom{1}42889\phantom{1}99400\phantom{1}08829\phantom{1}4809\\
U&=&55007\phantom{1}59966\phantom{1}89797\phantom{1}68877\phantom{1}06585\phantom{1}8060\\
V&=&69450\phantom{1}36925\phantom{1}01695\phantom{1}84793\phantom{1}58279\phantom{1}15870\phantom{1}46837\phantom{1}02681\phantom{1}709,
\end{eqnarray*}
Where ``0'' denotes 10 and the spacings have been added to
increase readability.
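These base-case strings can be checked mechanically. The sketch below (ours, for illustration) verifies that $S$ is indeed a universal cycle on 3--multisets of $[4]$, and that the lengths of $S$ and $ST$ match the required counts $\binom{6}{3}=20$ and $\binom{9}{3}=84$:

```python
from itertools import combinations_with_replacement
from math import comb

def is_multiset_ucycle(seq, n, t=3):
    """Do the cyclic t-windows of seq hit each t-multiset of [n] exactly once?"""
    k = len(seq)
    windows = sorted(tuple(sorted(seq[(i + j) % k] for j in range(t)))
                     for i in range(k))
    return windows == sorted(combinations_with_replacement(range(1, n + 1), t))

# The base-case strings S (over [4]) and T (over [7]) quoted above.
S = [1,1,1,4,4,4,2,2,2,3,3,3,1,2,1,2,4,3,4,3]
T = [int(c) for c in ("1152263374451662773257366771353464171555"
                      "361274255666477755264576")]

print(is_multiset_ucycle(S, 4))                                # True
print(len(S) == comb(6, 3) and len(S) + len(T) == comb(9, 3))  # True
```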
All of the work up to this point has dealt with $n \equiv 1 $ (mod
3). The proof for $n\equiv2$ (mod 3) is similar, so it has been
omitted for the sake of brevity.
\section{A Second Proof of the Existence of Ucycles on 3--Multisets}
In this proof, we construct a ucycle on 3--multisets of $[n]$ by modifying a ucycle on 3--subsets of $[n]$. (We know from \cite{Hurlbert} that ucycles on 3--subsets of $[n]$ exist for all $n \geq 8$ not divisible by $3$.) Before giving the proof, we introduce two terms. We call each element of $[n]$ a \emph{letter}, and each $a_i$ in the ucycle $X=a_1\ldots a_k$ a \emph{character}. To summarize, a ucycle on 3--multisets of $[n]$ is made up of $\binom{n+t-1}{t}$ characters, each of which equals one of $n$ letters.
To demonstrate the proof's technique, we will first use an argument similar to it to create ucycles on 2--multisets from ucycles on 2--subsets.
We start with this ucycle on 2--subsets of $[5]$:
$$
1234513524
$$
Then, we repeat the first instance of every letter to create the following ucycle on 2--multisets:
$$
112233445513524
$$
The technique works because repeating a character $a_i$ as above adds the multiset $\{a_i,a_i\}$ to the ucycle and has no other effect.
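The doubling step is easy to verify mechanically; the following sketch (ours, for illustration) checks that the doubled string above covers every 2--multiset of $[5]$ exactly once:

```python
from itertools import combinations_with_replacement

def cyclic_windows(seq, t):
    """All t-character windows of seq, read cyclically, as sorted tuples."""
    k = len(seq)
    return [tuple(sorted(seq[(i + j) % k] for j in range(t))) for i in range(k)]

subset_cycle = [1, 2, 3, 4, 5, 1, 3, 5, 2, 4]
# Repeat the first instance of every letter: 1234513524 -> 112233445513524.
multiset_cycle = [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 1, 3, 5, 2, 4]

target = sorted(combinations_with_replacement(range(1, 6), 2))
print(sorted(cyclic_windows(multiset_cycle, 2)) == target)  # True
```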
To use this technique on ucycles on 3--subsets, we repeat not single characters, but pairs of characters. For example, changing
$$
\ldots a_{i-1}a_ia_{i+1}a_{i+2}\ldots
$$
to
$$
\ldots a_{i-1}a_ia_{i+1}a_ia_{i+1}a_{i+2} \ldots
$$
has only the effect of adding the 3--multisets $\{a_i,a_i,a_{i+1}\}$ and $\{a_i,a_{i+1},a_{i+1}\}$ to the cycle. In order to use this technique, we will need to know which consecutive pairs of letters appear in a ucycle on 3--subsets. For instance, the following ucycle (generated using methods from \cite{Hurlbert}) on 3--subsets of $[8]$ contains every unordered pair of letters as consecutive characters except $\{1,5\}$, $\{2,6\}$, $\{3,7\}$, and $\{4,8\}$:
$$
1235783\ 6782458\ 3457125\ 8124672\ 5671347\ 2346814\ 7813561\ 4568236
$$
(The spaces in the cycle are added only for readability.)
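The consecutive-pair claim can be confirmed by a direct scan (a sketch of ours, for illustration):

```python
# Illustration (not from the paper): scan the n = 8 ucycle on 3-subsets
# above and collect every unordered pair of cyclically adjacent characters;
# exactly the pairs {1,5}, {2,6}, {3,7}, {4,8} should be absent.
cycle = [int(c) for c in
         "12357836782458345712581246725671347234681478135614568236"]
k = len(cycle)

adjacent = {frozenset((cycle[i], cycle[(i + 1) % k])) for i in range(k)}
all_pairs = {frozenset((a, b)) for a in range(1, 9) for b in range(a + 1, 9)}
missing = sorted(tuple(sorted(p)) for p in all_pairs - adjacent)
print(missing)  # [(1, 5), (2, 6), (3, 7), (4, 8)] -- pairwise disjoint
```

Note that the four missing pairs are pairwise disjoint, as the lemma below requires.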
This ucycle is missing $4$ pairs, which is exactly $n/2$. This is no coincidence: in fact, $n/2$ is the most pairs that a ucycle on 3--subsets can fail to contain.
\newtheorem*{missingpairs}{Lemma}
\begin{missingpairs}
Any two unordered pairs of letters that do not appear as consecutive characters in a ucycle on 3--subsets are disjoint. Hence a ucycle can be missing at most $n/2$ pairs of letters.
\end{missingpairs}
\begin{proof}
Suppose that we have a ucycle on 3--subsets that contains neither $a$ and $b$ as consecutive characters, nor $a$ and $c$ as consecutive characters, where $a,b,c \in [n]$. Then the ucycle does not contain the 3--subset $abc$, for all permutations of $abc$ contain either $a$ and $b$ consecutively, or $a$ and $c$ consecutively. But this is a contradiction, as a ucycle by definition contains all 3--subsets.
Hence, no two pairs of letters missing from the ucycle can have a letter in common. Since the missing pairs are pairwise disjoint, there can be at most $n/2$ of them.
\end{proof}
With this lemma, we can finish our proof, creating a ucycle on 3--multisets of $[n]$ whenever $n$ is not divisible by 3.
First, we consider the case when $n$ is even. Let $X$ be a ucycle on 3--subsets of $[n]$.
Let $x_1,\ldots,x_n$ be a permutation of $[n]$ such that
\begin{itemize}
\item $x_1$ equals the first character in $X$.
\item $x_n$ equals the last character in $X$.
\item The list $\{x_1,x_2\}, \{x_3,x_4\}, \ldots, \{x_{n-1},x_n\}$ contains all unordered pairs of letters not contained as consecutive characters in $X$, which is possible by our lemma. (If $X$ is missing exactly $n/2$ pairs of letters, these pairs will be exactly the pairs missing from $X$. If $X$ is missing fewer than $n/2$ pairs of letters, then the pairs consist of all missing pairs of letters, plus the remaining letters paired arbitrarily.)
\end{itemize}
Make $X'$ by repeating the first instance of every
unordered pair of letters in $X$ except for $\{x_1,x_2\},
\{x_2,x_3\},$ $\ldots,$ $\{x_{n-1},x_n\},\{x_n,x_1\}$. The cycle $X'$
now contains all multisets except
\setlength\arraycolsep{1pt}
$$
\{x_1,x_1,x_1\},\ldots,\{x_n,x_n,x_n\}
$$
$$
\{x_1,x_1,x_2\},\{x_1,x_2,x_2\},\{x_2,x_2,x_3\},\{x_2,x_3,x_3\},\ldots,\{x_n,x_n,x_1\},\{x_n,x_1,x_1\}
$$
Now, add the string $x_1x_1x_1x_2x_2x_2\ldots x_nx_nx_n$ to the
end of $X'$ to create $X''$. This provides exactly the missing multisets, creating a
ucycle on 3--multisets.
For example, when $n=8$, we start with the following ucycle on 3--subsets:
\begin{eqnarray*}
X & = & 1235783\ 6782458\ 3457125\ 8124672\\
&& 5671347\ 2346814\ 7813561\ 4568236
\end{eqnarray*}
The ucycle on 3--subsets $X$ does not contain the pairs $\{1,5\}$, $\{2,6\}$, $\{3,7\}$, and $\{4,8\}$. Hence, we set
\begin{eqnarray*}
x_1 & = & 1,\ x_2=5,\ x_3=3,\ x_4=7\\
x_5 & = & 4,\ x_6=8,\ x_7=2,\ x_8=6
\end{eqnarray*}
Note that $x_1$ equals the first character of $X$, and $x_8$ equals the last.
Now, we repeat the first instance of every unordered pair except for $\{1,5\}$, $\{5,3\}$, $\{3,7\}$, $\{7,4\}$, $\{4,8\}$, $\{8,2\}$, $\{2,6\}$, and $\{6,1\}$. (Note that four of these pairs do not appear in $X$ at all; if $X$ had been missing fewer than $n/2$ pairs of letters, some of the excluded pairs would appear in $X$, but this would not affect the proof.) This yields:
\begin{eqnarray*}
X' & = &12123235757878383\
63676782424545858\
3434571712525\
81812464672\\
&& 56567131347\
2723468681414\
7813561\
4568236
\end{eqnarray*}
Finally, we add the string $x_1x_1x_1\ldots x_nx_nx_n$ to complete the ucycle:
\begin{eqnarray*}
X'' & = &12123235757878383\
63676782424545858\
3434571712525\
81812464672\\
&& 56567131347\
2723468681414\
7813561\
4568236\\
&& 111555333777444888222666
\end{eqnarray*}
The proof is similar when $n$ is odd, and we omit it for
the sake of brevity.
\section{Further Directions and Remarks}
Both of the proofs given above suggest natural extensions to the
$t=4$ and larger cases, and it is simple to use the
techniques described above to create a proof sketch. In personal
correspondence, Glenn Hurlbert indicated that his technique for creating ucycles on sets in \cite{Hurlbert} can also be used to create ucycles on multisets.
Though this provides a more concise proof of the existence of ucycles on 3--multisets, the two proofs presented here may still prove useful because they introduce new techniques for approaching ucycles. The first proof is notable for its use of induction, which to our knowledge has not previously been used to construct ucycles. The second, while it starts from ucycles on sets, is not tied to any particular method for creating them; it could perhaps be extended to situations where Hurlbert's technique does not apply.
For values of $n$ and $t$ for which ucycles do exist, one
interesting question is how many of them there are. Clearly each
ucycle has $n!$ representations, since there are $n!$
permutations of $1,\ldots,n$. However, computer searches turn up
vast numbers of \emph{distinct} ucycles (i.e., ucycles not
differing merely by a permutation of $1,\ldots,n$). Currently, it
is not clear whether $N(n,t)$, the number of distinct ucycles for
given values of $n$ and $t$, has a simple description.
\bibliographystyle{amsplain}
% [End of record: arXiv:math/0608769, "Universal Cycles on 3-Multisets", 2006.]
% [Begin record: arXiv:2301.01272, "A gallery of diagonal stability conditions with structured matrices (and review papers)".]
% Abstract: This note presents a summary and review of various conditions and characterizations for matrix stability (in particular diagonal matrix stability) and matrix stabilizability.
\section{Definitions and notations}
\begin{itemize}
\item A square real matrix is a \textbf{Z-matrix} if it has nonpositive off-diagonal elements.
\item A \textbf{Metzler} matrix is a real matrix in which all the off-diagonal components are nonnegative (equal to or greater than zero).
\item A Z-matrix with positive principal minors is an \textbf{M-matrix}.
\begin{itemize}
\item Note: there are numerous equivalent characterizations of M-matrices \citep{fiedler1962matrices, plemmons1977m}. A commonly used one is the following: a matrix $A \in \mathbb{R}^{n \times n}$ is called an M-matrix if its off-diagonal entries are non-positive and its eigenvalues have positive real parts.
\end{itemize}
\item A square matrix $A$ is \textbf{(positive) stable} if all its eigenvalues have positive real parts. Equivalently, a square matrix $A$ is (positive) stable iff there exists a positive definite matrix $D$ such that $AD+DA^T$ is positive definite.
\begin{itemize}
\item Note: in control system theory a stable matrix is often defined instead as a square matrix whose eigenvalues all have negative real parts (a.k.a. a \textbf{Hurwitz} matrix). The two definitions of stable matrices will be distinguished by context.
\end{itemize}
\item A square complex matrix is a \textbf{P-matrix} if it has positive principal minors.
\item A square complex matrix is a \textbf{$P^+_0$-matrix} if it has nonnegative principal minors and at least one principal minor of each order is positive.
\item A real square matrix $A$ is \textbf{multiplicative D-stable} (in short, \textbf{D-stable}) if $DA$ is stable for every positive diagonal matrix $D$.
\item A square matrix $A$ is called \textbf{totally stable} if any principal submatrix of $A$ is D-stable.
\item A real square matrix $A$ is said to be \textbf{additive D-stable} if $A + D$ is stable for every nonnegative diagonal matrix $D$.
\item A real square matrix $A$ is said to be \textbf{Lyapunov diagonally stable} if there exists a positive diagonal matrix $D$ such that $AD + DA^T$ is positive definite.
\begin{itemize}
\item Note: Lyapunov diagonally stable matrices are often referred to as just \textbf{diagonally stable} matrices or as \textbf{Volterra–Lyapunov stable}, or as \textbf{Volterra dissipative} in the literature (see e.g., \citep{logofet2005stronger}).
\end{itemize}
\item A matrix $A = \{a_{ij}\}\in \mathbb{R}^{n \times n}$ is \textbf{generalized \textit{\textbf{row}}-diagonally dominant}, if there exists $x = (x_1, x_2, \cdots, x_n) \in \mathbb{R}^n$ with $x_i >0$, $\forall i$, such that
\begin{align}
|a_{ii}| x_i > \sum_{j=1, j \neq i}^{n} |a_{ij}|x_j, \forall i = 1, 2, \cdots, n.
\end{align}
\item
A matrix $A = \{a_{ij}\} \in \mathbb{R}^{n \times n}$ is \textbf{generalized \textit{\textbf{column}}-diagonally dominant}, if there exists $x = (x_1, x_2, \cdots, x_n) \in \mathbb{R}^n$ with $x_i >0$, $\forall i$, such that
\begin{align}
|a_{jj}| x_j > \sum_{i=1, i \neq j}^{n} |a_{ij}|x_i, \forall j = 1, 2, \cdots, n.
\end{align}
\begin{itemize}
\item Note: the set of generalized \textit{\textbf{column}}-diagonally dominant matrices is equivalent to the set of generalized \textit{\textbf{row}}-diagonally dominant matrices \citep{varga1976recurring, sun2021distributed}. They are also often referred to as \textbf{quasi-diagonally dominant} matrices \citep{kaszkurewicz2012matrix}.
\end{itemize}
\item For a real matrix $A = \{a_{ij}\} \in \mathbb{R}^{n \times n}$, we associate it with a \textbf{comparison matrix} $M_A = \{m_{ij}\} \in \mathbb{R}^{n \times n}$, defined by
\begin{align*}
m_{ij} = \left\{
\begin{array}{ll}
|a_{ij}|, & \text{if } j = i; \\
-|a_{ij}|, & \text{if } j \neq i.
\end{array}
\right.
\end{align*}
A given matrix $A$ is called an \textbf{H-matrix} if its comparison matrix $M_A$ is an M-matrix.
\begin{itemize}
\item The set of H-matrices coincides with the set of quasi-diagonally dominant matrices \citep{kaszkurewicz2012matrix, sun2021distributed}.
\end{itemize}
\item A square matrix $A$ is \textbf{diagonally stabilizable} if there exists a diagonal matrix $D$ such that $DA$ is stable.
\end{itemize}
Note: Many definitions above for real matrices also carry over to complex matrices; the distinction between real and complex matrices will be made clear in the context.
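As a small numerical illustration of the last few definitions (the matrix below is our own toy example; the M-matrix test uses the eigenvalue characterization quoted above):

```python
import numpy as np

def comparison_matrix(A):
    """M_A: |a_ii| on the diagonal, -|a_ij| off the diagonal."""
    M = -np.abs(A)
    np.fill_diagonal(M, np.abs(np.diag(A)))
    return M

def is_M_matrix(Z):
    """Z-matrix test plus all eigenvalues in the open right half-plane."""
    off_diagonal_ok = np.all(Z - np.diag(np.diag(Z)) <= 0)
    return off_diagonal_ok and np.all(np.linalg.eigvals(Z).real > 0)

A = np.array([[ 5.0, -1.0,  2.0],
              [ 1.0,  4.0, -1.0],
              [-2.0,  1.0,  6.0]])
# A is strictly diagonally dominant with positive diagonal, so its
# comparison matrix is an M-matrix; hence A is an H-matrix.
print(is_M_matrix(comparison_matrix(A)))  # → True
```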
\section{Conditions for diagonally stabilizable matrices}
\textit{\textbf{A key motivating question}: Given a square matrix $A$, can we find a diagonal matrix $D$ such that the matrix $DA$ is stable? }
\\
Fisher and Fuller \citep{fisher1958stabilization} proved the following result:
\begin{theorem} \citep{fisher1958stabilization} \label{thm:fisher-fuller}
If $P$ is a real $n \times n$ matrix fulfilling the condition:
\begin{itemize}
\item (A): $P$ has at least one sequence of non-zero principal minors $M_k$ of every order
$k = 1,2,\cdots,n$, such that $M_{k-1}$ is one of the $k$ first principal minors of $M_k$;
\end{itemize}
then there exists a real diagonal matrix $D$ such that $DP$ is stable.
\end{theorem}
The Fisher-Fuller theorem is also formulated as the following alternative version:
\begin{theorem}
Let $P$ be an $n \times n$ real matrix all of
whose leading principal minors are positive. Then there is an $n \times n$ positive
diagonal matrix $D$ such that all the eigenvalues of $DP$ are positive and
simple.
\end{theorem}
Fisher later gave a simple proof for a similar yet stronger result \citep{fisher1972simple}.
\begin{theorem} \citep{fisher1972simple}
If $P$ is an $n \times n$ real matrix that has at least one nested set of principal minors, $M_k$, such that $(-1)^k M_k >0, \forall k = 1, \cdots, n$, then there exists a real diagonal
matrix $D$ with positive diagonal elements such that the characteristic roots of
$DP$ are all real, negative, and distinct.
\end{theorem}
\begin{remark} Some remarks on the conditions of diagonally stabilizable matrices are in order.
\begin{itemize}
\item The above theorems involve determining the sign of (at least) one nested set of principal minors. In \citep{johnson1997nested}, sufficient conditions are determined for an $n$-by-$n$ zero-nonzero pattern to allow a nested sequence of nonzero principal minors. In particular, a method is given to \textit{sign} such a pattern so that it allows a nested sequence of $k$-by-$k$ principal minors with $\text{sign}(-1)^k$ for $k = 1, \cdots, n$.
\item The condition in the Fisher-Fuller theorem is a \textit{sufficient} condition for matrix diagonal stabilizability. A necessary condition for matrix diagonal stabilizability is: for each order $k = 1, \cdots, n$, at least one $k \times k$ principal minor of $P$ is non-zero. It is unclear what \textbf{the} necessary and sufficient condition would be.
\end{itemize}
\end{remark}
Ballantine \citep{ballantine1970stabilization} extended the above Fisher-Fuller theorem to the complex matrix case.
\begin{theorem} \citep{ballantine1970stabilization}
Let $A$ be an $n \times n$ \textbf{complex} matrix all of whose leading
principal minors are nonzero. Then there is an $n \times n$ \textbf{complex} diagonal
matrix $D$ such that all the eigenvalues of $DA$ are positive and simple.
\end{theorem}
\begin{remark}
It is shown in \citep{hershkowitz1992recent} that the above Ballantine theorem cannot be strengthened by replacing ``complex
diagonal matrix $D$'' by ``positive diagonal matrix $D$'': a counterexample is given there of a $2 \times 2$ complex matrix $A$ with positive leading principal minors for which no positive diagonal matrix $D$ makes the eigenvalues of $DA$
positive.
\end{remark}
A problem related to characterizing diagonally stabilizable matrices is the \textbf{Inverse Eigenvalue Problem} (IEP), for which Friedland \citep{friedland1977inverse} proved the following theorem.
\begin{theorem} \citep{friedland1977inverse}
Let $A$ be a given $n \times n$ \textbf{complex} valued matrix. Assume
that all the principal minors of $A$ are different from zero. Then for any
specified set $\lambda = \{\lambda_1, \cdots, \lambda_n \} \in \mathbb{C}^n$ there exists a diagonal \textbf{complex} valued
matrix $D$ such that the spectrum of $AD$ is the set $\lambda$. The number of such $D$
is finite and does not exceed $n!$. Moreover, for almost all $\lambda$ the number of the diagonal matrices $D$ is exactly $n!$.
\end{theorem}
\begin{remark}
The Friedland theorem for the IEP in the complex matrix case cannot be directly carried over to the real case. Further, a simple $2 \times 2$ counterexample shows that eigenvalue assignment
in the real case cannot always be guaranteed, even with nonzero principal minors.
\end{remark}
In \citep{hershkowitz1992recent} the following two theorems are proved.
\begin{theorem} \citep{hershkowitz1992recent}
Let $A$ be a \textbf{complex} square matrix with positive leading
principal minors, and let $\epsilon$ be any positive number. Then there exists a positive diagonal matrix $D$ such that the eigenvalues of $DA$ are simple, and
the argument of every such eigenvalue is less in absolute value than $\epsilon$.
\end{theorem}
\begin{theorem} \citep{hershkowitz1992recent}
Let $A$ be a complex square matrix with real principal
minors and positive leading principal minors. Then there exists a positive
diagonal matrix $D$ such that $DA$ has simple positive eigenvalues.
\end{theorem}
\begin{remark}
The above theorems all present sufficient conditions for diagonally stabilizable matrices and the IEP, and they are not necessary. A
necessary condition for the diagonal matrix $D$ to exist is that for each order $i$, at least one $i \times i$ principal minor of
$A$ is nonzero. However, a full characterization (a necessary and sufficient condition) of diagonally stabilizable matrices still remains an open problem.
\end{remark}
A variation of the diagonal matrix stabilization problem is the following:
\begin{itemize}
\item Problem (*): Given a real square matrix $G$, find a real diagonal
matrix $D$ such that the product $GD$ is Hurwitz together with all its principal submatrices.
\end{itemize}
Surprisingly, a necessary and sufficient condition exists for solving the above problem as shown in \citep{locatelli2012necessary}. Let $\mathcal{M}:= \{1, 2, \cdots, m\}$ and $\mathcal{F}:= \{f | f \subset \mathcal{M}\}$.
Further, for any $m \times m$ matrix $\Delta$, denote by $\Delta (f)$ the principal submatrix obtained from it after removing the rows and columns with indexes in $f$, $f \in \mathcal{F}$. The main result of \citep{locatelli2012necessary} proves the following:
\begin{theorem} \citep{locatelli2012necessary}
Problem (*) admits a solution if and only if
\begin{align}
\text{det} (G(f)) \text{det} (G_D(f)) > 0, \forall f \in \mathcal{F}
\end{align}
where $G_D = \text{diag}\{g_{ii}\}$. Moreover, if the above condition is satisfied, then there exists $\bar \epsilon >0$ such that, for any given $\epsilon \in (0, \bar \epsilon)$, the matrix
\begin{align}
D:= G_D Z(\epsilon), Z(\epsilon): = -\text{diag}\{\epsilon^i \}
\end{align}
solves the stabilization problem (*).
\end{theorem}
\newpage
\section{Conditions for diagonally stable matrices}
We give a short summary of available conditions for diagonally stable matrices (excerpts from \citep{barker1978positive}, \citep{cross1978three} and \citep{hershkowitz2006matrix}).
\begin{itemize}
\item \citep{barker1978positive} Lyapunov diagonally stable matrices are P-matrices.
\item \citep{barker1978positive} A matrix $A$ being Lyapunov diagonally stable is equivalent to that there exists a positive diagonal matrix $D$ such that $x^TDAx >0$ for all nonzero vectors $x$.
\item \citep{barker1978positive} A $2 \times 2$ real matrix is Lyapunov diagonally stable if and only if it is a P-matrix.
\item \citep{cross1978three} For a given Lyapunov diagonally stable matrix $P$, all principal submatrices of $P$ are Lyapunov diagonally stable.
\item \citep{barker1978positive} A real square matrix $A$ is Lyapunov diagonally stable if and only if for every nonzero real symmetric positive semidefinite matrix $H$ the matrix $HA$ has at least one positive diagonal element.
\begin{itemize}
\item Note: this result is termed the BBP theorem, and is proved again in \citep{shorten2009alternative} with a simpler proof.
\end{itemize}
\item \citep{cross1978three} The set of Lyapunov diagonally stable matrices is a strict subset of multiplicative D-stable matrices.
\item \citep{cross1978three} The set of Lyapunov diagonally stable matrices is a strict subset of additive D-stable matrices.
\begin{itemize}
\item Note: Multiplicative D-stable and additive D-stable matrices are not necessarily diagonally stable.
\end{itemize}
\item A Z-matrix is Lyapunov diagonally stable if and only if it is a P-matrix (that is, it is an M-matrix).
\item A non-singular H-matrix with nonnegative diagonal entries is Lyapunov diagonally stable.
\item A quasi-diagonally dominant matrix with nonnegative diagonal entries is Lyapunov diagonally stable. Note the equivalence of Hurwitz H-matrices and quasi-diagonally dominant matrices \citep{sun2021distributed}.
\end{itemize}
The following facts are shown in \citep{cross1978three} and \citep{kaszkurewicz2012matrix}:
\begin{itemize}
\item For normal matrices and within the set of Z-matrices, D-stability, additive D-stability, and diagonal stability are all equivalent to
matrix stability.
\item If a matrix $A$ is Hurwitz stable, D-stable, or diagonally stable, then the matrices $A^T$ and $A^{-1}$ also have the corresponding properties.
\end{itemize}
In \citep{shorten2009theorem} Shorten and Narendra showed the following necessary and sufficient condition for matrix diagonal stability (an alternative proof of the theorem of Redheffer via the KYP lemma):
\begin{theorem} \citep{shorten2009theorem} and \citep{redheffer1985volterra}
Let $A \in \mathbb{R}^{n \times n}$ be a Hurwitz matrix with negative diagonal entries. Let $A_{n-1}$ denote the
$(n-1) \times (n-1)$ leading principal submatrix of $A$, and $B_{n-1}$ denote the corresponding block of $A^{-1}$. Then, the
matrix $A$ is diagonally stable, if and only if there is a common diagonal Lyapunov function for the LTI systems $\Sigma_{A_{n-1}}$ and $\Sigma_{B_{n-1}}$.
\end{theorem}
The above theorem involves finding a common diagonal Lyapunov function for a set of LTI systems, which may be restrictive and computationally demanding in practical applications especially when the dimension of the matrix $A$ is large.
\newpage
\section{Relations of matrix stability and diagonal stability}
The paper \citep{berman1983matrix} characterizes the relations of certain special matrices for matrix diagonal stability. They define
\begin{itemize}
\item $\mathscr{A} = \{A: \text{there exists a positive definite diagonal matrix $D$ }\,\text{such that} \,\,AD +DA^T \text{
is positive definite}\}$; \\i.e., $\mathscr{A}$ denotes the set of diagonally stable matrices;
\item $\mathscr{L} = \{A: \text{there exists a positive definite matrix $D$ }\,\,\text{such that} \,\,AD +DA^T \text{
is positive definite}\}$; i.e., $\mathscr{L}$ denotes the set of (positive) stable matrices;
\item $\mathscr{P} = \{A: \text{the principal minors of $A$} \,\text{
are positive}\}$; \\i.e., $\mathscr{P}$ denotes the set of P-matrices;
\item $\mathscr{S} = \{A: \text{there exists a positive vector $x$ such that $Ax$ is positive}\}$; \\i.e., $\mathscr{S}$ denotes the set of semipositive
matrices.
\end{itemize}
The main result of \citep{berman1983matrix} is cited and shown in Fig.~\ref{fig:theorem_Ber}. In general, these different sets of structured matrices are not equivalent, and the set $\mathscr{A}$ is a subset of the other sets. However, for Z-matrices, these sets are equivalent. In particular, for the case of Z-matrices, the characterizations of these sets give equivalent conditions for M-matrices (upon a sign change). Note there are yet many more conditions to characterize M-matrices; see e.g., \citep{plemmons1977m}.
\begin{figure}
\begin{center}
\vspace{-10pt}
\fbox{\includegraphics[width=1\textwidth]{theorem_Ber.png}}
\caption{Relations of matrix stability under different matrix types: the main theorem in \citep{berman1983matrix}}
\label{fig:theorem_Ber}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\vspace{-10pt}
\fbox{\includegraphics[width=0.8\textwidth]{Relation_Her.png}}
\caption{The implication relations between matrix stability conditions, cited from \citep{hershkowitz1992recent}.}
\label{fig:Relation_Her}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\vspace{-10pt}
\fbox{\includegraphics[width=0.8\textwidth]{Relation_Her_2.png}}
\caption{For Z-matrices, all the stability types are equivalent. Cited from \citep{hershkowitz1992recent}.}
\label{fig:Relation_Her_2}
\end{center}
\end{figure}
The review paper \citep{hershkowitz1992recent} presents the implication relations between matrix stability conditions, and the equivalence of matrix stabilities for Z-matrices, as cited in Figs.~\ref{fig:Relation_Her} and~\ref{fig:Relation_Her_2}. As shown in Fig.~\ref{fig:Relation_Her_2}, for Z-matrices all the stability types are equivalent.
\newpage
The survey paper \citep{logofet2005stronger} presents some beautiful flower-shaped characterizations of the relations among matrix stabilities, as cited in Figs.~\ref{fig:Relation_Log} and \ref{fig:Relation_Log_2}.
\begin{figure}
\begin{center}
\vspace{-10pt}
\fbox{\includegraphics[width=0.8\textwidth]{Relation_Log.png}}
\caption{Relations among matrix stabilities. Cited from \citep[Fig.2]{logofet2005stronger}.}
\label{fig:Relation_Log}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\vspace{-10pt}
\fbox{\includegraphics[width=0.8\textwidth]{Relation_Log_2.png}}
\caption{Petals of sign-stable matrices within the Flower. Cited from \citep[Fig.4]{logofet2005stronger}.}
\label{fig:Relation_Log_2}
\end{center}
\end{figure}
\newpage
\section{Stability conditions with submatrices and Schur complement}
Stability conditions of `structured' matrices often involve stability properties of submatrices, which employ block submatrices and their Schur complements to determine stability.
In \citep{narendra2010hurwitz}, Narendra and Shorten presented necessary and sufficient conditions to characterize if a given Metzler matrix is Hurwitz, based on the fact that a Hurwitz Metzler matrix is diagonally stable. These conditions are generalized in \citep{souza2017note}. We recall some main stability criteria from \citep{souza2017note}.
\begin{lemma} \label{lemma:M-SC}
Let $A \in \mathbb{R}^{n \times n}$ be a
Metzler matrix partitioned in blocks of compatible dimensions as $A = [A_{11}, A_{12}; A_{21}, A_{22}]$ with $A_{11}$ and $A_{22}$ being square matrices. Then the following statements are equivalent.
\begin{itemize}
\item $A$ is Hurwitz stable.
\item $A_{11}$ and its Schur complement $A/A_{11} := A_{22} - A_{21} A_{11}^{-1} A_{12}$ are Hurwitz stable Metzler matrices.
\item $A_{22}$ and its Schur complement $A/A_{22} := A_{11} - A_{12} A_{22}^{-1} A_{21}$ are Hurwitz stable Metzler matrices.
\end{itemize}
\end{lemma}
\begin{remark} Some remarks are in order.
\begin{itemize}
\item
For a structured matrix, the property that its Schur complements also preserve the same stability and structure properties is termed \textbf{Schur complement stability property}. Other types of structured matrices that have Schur complement stability property include symmetric matrices, triangular matrices, and Schwarz matrices. See \citep{souza2017note}.
\item The result on Metzler matrices in Lemma~\ref{lemma:M-SC} can be generalized to H-matrices: let $A$ be an H-matrix partitioned in blocks of compatible dimensions as $A = [A_{11}, A_{12}; A_{21}, A_{22}]$ with $A_{11}$ and $A_{22}$ being square matrices. If $A$ is Hurwitz stable, then $A_{11}$ and its Schur complement $A/A_{11} := A_{22} - A_{21} A_{11}^{-1} A_{12}$ are Hurwitz stable H-matrices, or $A_{22}$ and its Schur complement $A/A_{22} := A_{11} - A_{12} A_{22}^{-1} A_{21}$ are Hurwitz stable H-matrices.
\item Schur complement and its \textbf{closure property} for several structured matrices (including diagonal matrices, triangular matrices, symmetric matrices, P-matrices, diagonal dominant matrices, M-matrices etc.) are discussed in \citep[Chap. 4]{zhang2006schur}.
\end{itemize}
\end{remark}
\newpage
\section{Application examples of matrix diagonal stability conditions}
The Fisher-Fuller theorem on diagonal matrix stabilizability (Theorem~\ref{thm:fisher-fuller} and its variations) has been rediscovered several times by the control system community, and has been applied in solving distributed stabilization and decentralized control problems in practice. This section reviews two application examples.
\subsection{Conditions for decentralized stabilization}
In \citep{corfmat1973stabilization} Corfmat and Morse solved the following problem:
\begin{itemize}
\item For given and fixed real matrices $A$ and $P$, find (if possible) a non-singular diagonal matrix $D$ such that $I+ADP$ is Schur stable (i.e., all eigenvalues of $I+ADP$ are located within the unit circle in the complex plane).
\end{itemize}
To solve the above problem they proved the following:
\begin{theorem}
If $A$ is an $n \times n$ \textbf{strongly non-singular} matrix, then there exists a
diagonal matrix $D$ such that $(I + DA)$ is Schur stable.
\end{theorem}
Note: in \citep{corfmat1973stabilization} a matrix is termed \textbf{strongly non-singular} if all of its $n$ leading principal
minors are nonzero.
\begin{theorem}
If $A$ is a fixed non-singular matrix, then there exists a
permutation matrix $P$ such that $PA$ is strongly non-singular.
\end{theorem}
Solution to the decentralized stabilization problem: non-singularity of $A$ is a necessary and sufficient
condition for the existence of a permutation matrix $P$ and a non-singular diagonal matrix $D$ such that $(I + ADP)$ is Schur stable.
\subsection{Distributed stabilization of persistent formations}
In \citep{yu2009control}, the problem on persistent formation stabilization involves studying the stabilizability of the following differential equation
\begin{align*}
\dot z = \Delta A z
\end{align*}
where $\Delta$ is a diagonal or possibly block diagonal matrix, and $A$ is a rigidity-like matrix on formation shapes. To solve the formation stabilization problem in \citep{yu2009control}
the following result is employed (\citep[Theorem 3.2]{yu2009control}):
\begin{theorem}
Suppose A is an $m \times m$ non-singular matrix with every leading
principal minor nonzero. Then there exists a diagonal $D$ such that the real parts of
the eigenvalues of $DA$ are all negative.
\end{theorem}
We remark that this is a reformulation of the Fisher-Fuller theorem.
\newpage
\section{A selection of key review papers and books on matrix stability and diagonal stability conditions}
\begin{itemize}
\item The survey paper \citep{hershkowitz1992recent} that presents a summary of relevant matrix stability results and their developments up until 1992.
\item The paper \citep{bhaya2003characterizations} that presents comprehensive discussions and characterizations for various classes of matrix stability conditions.
\item The paper \citep{hershkowitz2003positivity} that studies the relations between positivity of principal minors, sign symmetry and stability of matrices.
\item The review paper \citep{hershkowitz2006matrix} that presents a concise overview on matrix stability and inertia.
\item The book \citep{kaszkurewicz2012matrix} on matrix diagonal stability in systems and computation.
\item The summary paper \citep{logofet2005stronger} that presents a review and some beautiful connections/relations on different matrix stabilities.
\item The very long survey paper \citep{kushel2019unifying} that provides a unifying viewpoint on matrix stability, and its historical development.
\item The recent book \citep{johnson2020matrix} on positive matrices, P-matrices and inverse M-matrices.
\end{itemize}
% [End of record: arXiv:2301.01272, 2023.]
% [Begin record: arXiv:1906.03439, "Convergence in Density of Splitting AVF Scheme for Stochastic Langevin Equation".]
% Abstract: In this article, we study the density function of the numerical solution of the splitting averaged vector field (AVF) scheme for the stochastic Langevin equation. To deal with the non-globally monotone coefficient in the considered equation, we first present the exponential integrability properties of the exact and numerical solutions. Then we show the existence and smoothness of the density function of the numerical solution by proving its uniform non-degeneracy in Malliavin sense. Combining the approximation result of Donsker's delta function and the smoothness of the density functions, we prove that the convergence rate in density coincides with the optimal strong convergence rate of the numerical scheme.
\section{Introduction}
Convergence in density of numerical approximations through the probabilistic approach has received considerable attention for stochastic differential equations (SDEs) whose coefficients are smooth vector fields with bounded derivatives. It is well known that,
under the uniform ellipticity condition, the numerical solution given by the Euler--Maruyama scheme admits a density function (see e.g. \cite{VE02}) and converges in density of order $1$ (see e.g. \cite[Theorem 8]{JG05}). Under H\"{o}rmander's condition, the idea of perturbing the numerical solution has been used in \cite{BT96,HW96,KHA97} to approximate the density function $p_T(x,y)$ of the exact solution starting from $x$ at time $T$.
In \cite{BT96}, the authors show that the difference between $p_T(x,y)$ and the density function of the law of a small perturbation of the Euler--Maruyama method with stepsize $\frac{T}{N}$ is expanded in terms of powers of $\frac{1}{N}$.
The authors in \cite{HW96} obtain a general approximation result for Donsker's delta functions and approximate
$p_T(x,y)$ by the density function of the sum of the It\^{o}--Taylor scheme and an independent Gaussian random variable.
In \cite{KHA97} the author studies the It\^{o}--Taylor approximation by applying a slight modification of the weak approximation technique and proves that the rate of convergence in density coincides with the weak approximation rate.
For the numerical approximations of SDEs with superlinearly growing nonlinearities and degenerate additive noises,
to the best of our knowledge,
there are few results available concerning the convergence in density. Two natural questions are:
\textit{{\rm{(i)}} Does the density function of the numerical solution exist?}
\textit{{\rm{(ii)}} Once the density function of the numerical solution exists, does it provide a proper approximation for the density function of the exact solution?}
To study the above questions, the present work considers the numerical approximation of the stochastic Langevin equation
\begin{equation}\label{SDE1}
\left\{
\begin{split}
&\,\mathrm d P=-\nabla F(Q)\,\mathrm d t-vP \,\mathrm d t+\sigma\,\mathrm d W_t,\\
&\,\mathrm d Q=P\,\mathrm d t.
\end{split}
\right.
\end{equation}
Here $t\in(0,T],\,T>0,\,v>0$, $\sigma=[\sigma_1,\ldots,\sigma_d]$ with $\sigma_k,\,k=1,\ldots,d$, being $m$-dimensional constant vectors, $-\nabla F$ is a locally Lipschitz function and $W=(W^1,...,W^d)^\top$ is a $d$-dimensional standard Wiener process on a filtered complete probability space $(\Omega,\mathscr{F}, \{\mathscr{F}_t\}_{t\ge0}, \mathbb{P})$. Equation \eqref{SDE1} arises in various complex dynamical system models subject to random noise, such as chemical interactions and molecular dynamics; for more details, see \cite{DTG00,AB18} and references therein.
With the help of exponential moment estimate of $X(t)=(P(t)^\top,Q(t)^\top)^\top$,
we show that $\{X(t)\}_{t\in(0,T]}$ possesses a smooth density function $\{p_t(X(0),y)\}_{t\in(0,T]}$ for equation \eqref{SDE1} under H\"ormander's condition.
In order to inherit this property in numerical approximation, we propose the splitting AVF scheme:
\begin{equation}\label{split sol}
\left\{
\begin{split}
&\bar P_{n+1}=P_n-h\int_0^1 \nabla F\left(Q_n+\tau\left(\bar Q_{n+1}-Q_n\right)\right)\,\mathrm d \tau,\\
&\bar Q_{n+1}=Q_n+\frac{h}{2}\left(\bar P_{n+1}+P_n\right),\\
&P_{n+1}=e^{-vh}\bar P_{n+1}+\sum_{k=1}^d \int_{t_n}^{t_{n+1}} e^{-v(t_{n+1}-t)}\sigma_k\,\mathrm d W_{t}^{k},\\
&Q_{n+1}=\bar Q_{n+1},
\end{split}
\right.
\end{equation}
where $(P_0^\top,Q_0^\top)^\top=(P(0)^\top,Q(0)^\top)^\top$ is a deterministic datum, $h=T/N^h$ and $n=0,\ldots,N^h-1$.
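To make the scheme concrete, the following Python sketch implements one step of the splitting AVF scheme in the scalar case $m=d=1$. It is an illustration under our own choices of solver and quadrature (fixed-point iteration for the implicit AVF half-step, Gauss--Legendre quadrature for the AVF integral), not the authors' code; the Ornstein--Uhlenbeck part is sampled exactly.

```python
import numpy as np

# Gauss-Legendre nodes/weights mapped from [-1, 1] to [0, 1]; exact for
# polynomial integrands of degree <= 7, hence exact when F is, e.g., quartic.
_NODES, _WEIGHTS = np.polynomial.legendre.leggauss(4)
_TAUS = 0.5 * (_NODES + 1.0)
_WS = 0.5 * _WEIGHTS

def avf_grad(grad_F, q0, q1):
    """Quadrature for the AVF average  int_0^1 grad F(q0 + tau (q1 - q0)) d tau."""
    return sum(w * grad_F(q0 + t * (q1 - q0)) for w, t in zip(_WS, _TAUS))

def splitting_avf_step(p, q, h, v, sigma, grad_F, rng, n_iter=50):
    # Deterministic AVF half: solve the implicit system
    #   pbar = p - h * int_0^1 grad F(q + tau (qbar - q)) d tau,
    #   qbar = q + (h/2) (pbar + p),
    # by fixed-point iteration in qbar (assumes h small enough to contract).
    qbar = q + h * p  # explicit Euler predictor
    for _ in range(n_iter):
        pbar = p - h * avf_grad(grad_F, q, qbar)
        qbar = q + 0.5 * h * (pbar + p)
    # Exact Ornstein-Uhlenbeck flow for the damping/noise half: the
    # stochastic integral over one step is Gaussian with variance
    # sigma^2 (1 - e^{-2 v h}) / (2 v)  (limit sigma^2 h as v -> 0).
    var = sigma**2 * ((1.0 - np.exp(-2.0 * v * h)) / (2.0 * v) if v > 0 else h)
    p_new = np.exp(-v * h) * pbar + np.sqrt(var) * rng.standard_normal()
    return p_new, qbar
```

A convenient sanity check of an implementation: with $v=\sigma=0$ the step reduces to the AVF discretization of the Hamiltonian part, which conserves $\frac{p^2}{2}+F(q)$ exactly (up to quadrature and fixed-point tolerance).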
With regard to the problem \textit{{\rm{(i)}}},
we first study the regularity estimate of the numerical solution $X_n=(P_n^\top,Q_n^\top)^\top$ in the Malliavin sense.
By showing the exponential integrability property of $X_n$, we obtain its regularity estimate, in every Malliavin--Sobolev space, for equation \eqref{SDE1} with non-globally monotone coefficient.
Then
combining this estimate with the invertibility of the corresponding Malliavin covariance matrices $\gamma_{n},\,n=2,\ldots,N^h$, we
prove the existence of the density functions $p^n_T(X_0,y)$ of $X_n,\,n=2,\ldots,N^h$.
Furthermore, it is natural to ask whether $p^{N^h}_T(X_0,y)$ inherits
the smoothness of $p_T(X(0),y)$.
This is more involved than studying the smoothness of $p_T(X(0),y)$, since
H\"{o}rmander's theorem is no longer available in the discrete setting. Our solution to
this problem lies in deriving the regularity estimate of $X_{N^h}$ and proving the non-degeneracy of $\gamma_{N^h}$. By deducing a positive lower bound for the smallest eigenvalue of $\gamma_{N^h}$, we prove that
$\left(\det\gamma_{N^h}\right)^{-1}\in L^{\infty-}(\Omega)$.
By means of the criterion for the smoothness of the density function of a random variable (see e.g. \cite[Theorem 2.1.4]{DN06}), we finally prove the smoothness of $p^{N^h}_T(X_0,y)$.
Concerning the problem \textit{{\rm{(ii)}}}, our strategy includes two stages. In the first stage, we derive
the optimal strong convergence rate of scheme \eqref{split sol} for equation \eqref{SDE1}.
\begin{tho}\label{SC1}
Let Assumption \ref{F2} hold, $h_0$ be a sufficiently small positive constant and $p\ge1$. There exists some positive constant $C=C(p,T,\sigma,X(0))$ such that for any $h\in(0,h_0]$,
\begin{equation*}
\sup_{n\le N^h}\left\|X_n-X(t_n)\right\|_{L^{2p}(\Omega;\mathbb{R}^{2m})}\le Ch.
\end{equation*}
\end{tho}
Up to now, there already exist many strong convergence results of numerical approximations for SDEs with monotone coefficients, see e.g. \cite{SS16,TZ13} and references therein.
For SDEs with non-globally monotone coefficients driven by additive noises, we are only aware that the authors in \cite{HJ14} obtain the strong convergence rate of the stopped increment-tamed Euler--Maruyama scheme.
To the best of our knowledge, no optimal strong convergence rate results of the numerical schemes are known for such equations.
In Theorem \ref{SC1}, we solve the problem raised in \cite[Remark 3.1]{HJ14} and overcome the order barrier in the strong error analysis of scheme \eqref{split sol} for equation \eqref{SDE1}. The key ingredients in proving the optimal convergence rate lie in two aspects: one is to deduce an a priori strong error estimate of scheme \eqref{split sol} via the exponential integrability properties; the other is the application of the regularity estimate in the Malliavin sense and the Malliavin integration by parts formula.
In the second stage, we extend the strong convergence result to the convergence result in density for scheme \eqref{split sol}.
\begin{tho}\label{order}
Let Assumptions \ref{F2}-\ref{F4} hold, $\alpha>0, \beta\ge0$ and $1<p<\infty$. Then for $\alpha>\beta+2m/q+1,\, 1/p+1/q=1$, it holds that
\begin{equation*}
\sup_{y\in\mathbb{R}^{2m}}\left\|(1-\Delta)^{\beta/2}\delta_y\circ X_{N^h}-(1-\Delta)^{\beta/2}\delta_y\circ X(T)\right\|_{-\alpha,p}=\mathcal{O}(h),\quad\text{as } h\rightarrow0.
\end{equation*}
\end{tho}
Here $\delta_y\circ X_{N^h}$ and $\delta_y\circ X(T)$ are Donsker's delta functions, and $\|\cdot\|_{-\alpha,p}$ denotes the norm in the Banach space $\mathbb{D}^{-\alpha,p}=(\mathbb{D}^{\alpha,q})^{\prime}$.
To the best of our knowledge, Theorem \ref{order} is the first convergence rate result in density of numerical approximations for SDEs with non-globally monotone coefficients and degenerate additive noises.
The key ingredients in proving this convergence result are
the strong convergence analysis in every Malliavin--Sobolev norm and
the uniform non-degeneracy property of $X_{N^h}$.
By the regularity estimates of exact and numerical solutions and Theorem \ref{SC1}, we first obtain the strong convergence in every Malliavin--Sobolev norm.
Then combining the error estimate in the Malliavin--Sobolev space $\mathbb{D}^{1,p}$ with
$\left\|\det(\gamma_{N^h})^{-1}\right\|_{L^p(\Omega)}=\mathcal{O}\left(h^{-\nu(p)}\right)$,
we deduce the uniform non-degeneracy property of $X_{N^h}$, that is, for sufficiently small positive constant $h_0$ and for any $p\ge1$,
\begin{equation*}
\sup_{h\in(0,h_0]}\left\|\det(\gamma_{N^h})^{-1}\right\|_{L^p(\Omega)}<\infty.
\end{equation*}
Using the approximation result of Donsker's delta function, we finally show that the convergence rate in density coincides with the optimal strong convergence rate for scheme \eqref{split sol}.
We would like to mention that the approaches to deriving the optimal strong convergence rate
and to deducing the convergence in density are also applicable to a number of other numerical approximations for general SDEs.
The outline of this paper is as follows. Section \ref{S2} is devoted to an introduction of Malliavin calculus, the regularity of probability laws and main assumptions on equation \eqref{SDE1}. In Section \ref{S3}, we present the exponential integrability property of the exact solution, as well as the existence and smoothness of its density function. In Section \ref{S4}, we propose the splitting AVF scheme and show the exponential integrability property and the regularity estimate of the numerical solution in Malliavin sense. The optimal strong convergence rate of scheme \eqref{split sol} is shown in Section \ref{S5}.
In Section \ref{S6}, we show that the numerical solution is uniformly non-degenerate and admits a smooth density function. Combined with the strong convergence in every Malliavin--Sobolev norm, we derive the optimal convergence rate of the numerical scheme in density.
Finally, several numerical experiments are presented in Section \ref{S7} to support our theoretical analysis.
\section{Preliminaries}\label{S2}
In this section, we introduce some frequently used notations and some basic elements from Malliavin calculus on the Wiener space and the regularity of probability laws, as well as main assumptions on equation \eqref{SDE1}.
Given a matrix $A\in\mathbb{R}^{m\times m}$, denote by $\lambda_i(A)$ the $i$th eigenvalue, $i=1,\cdots,m$, by
$\lambda_{min}(A)$ the smallest eigenvalue, and by
$\rho(A)$ the spectral radius of $A$.
We use $\mathbb{H}$ to denote the Hilbert space $L^2([0,T];\mathbb{R}^d)$ endowed with the inner product $\langle g,h\rangle_{\mathbb{H}}=\int_0^T \langle g(t),h(t)\rangle_{\mathbb{R}^d}\,\mathrm d t,\,\forall\, g,\,h\in\mathbb{H}$.
For $\vec{l}=(l_1,\ldots,l_m)$ with $l_i\ge1,\,i=1,\ldots,m$ and $x=(x_1,\ldots,x_m)$, denote $\lfloor x\rfloor^{\vec{l}}:=\sum_{i=1}^m|x_i|^{l_i}$ and $|\,\vec{l}\,|_{\infty}:=\max_{1\le i\le m}l_i$. Throughout the paper,
we denote by $C$ a generic constant which may depend on several parameters but never on the stepsize $h$ and may change from occurrence to occurrence.
\subsection{Malliavin calculus on the Wiener space}
Some basic ingredients of Malliavin calculus are presented in this part. For further results, we refer to \cite{YZH17,IW84,DN06}. By identifying $W(t,\omega)$ with the value $\omega(t)$ at time $t$ of an element $\omega\in C_0([0,T];\mathbb{R}^d)$, we take $\Omega=C_0([0,T];\mathbb{R}^d)$ as the Wiener space and $\mathbb{P}$ as the Wiener measure.
For $h\in\mathbb{H}$, we set $W(h):=\sum_{k=1}^d\int_0^Th_k(t)\,\mathrm d W^k_t$. We denote by
$\mathcal{S}$ the class of smooth random variables such that each $F\in\mathcal{S}$ has the form
\begin{equation}\label{F}
F=f(W(h_1),\ldots,W(h_n)),
\end{equation}
where $f$ belongs to $C_p^\infty(\mathbb{R}^n)$, $h_i\in \mathbb{H},\, i=1,\ldots,n, \,n\ge 1$.
The derivative of a smooth random variable $F$ of the form \eqref{F} is an $\mathbb{H}$-valued random variable given by
$DF=\sum_{i=1}^n\frac{\partial f}{\partial x_i}(W(h_1),\ldots,W(h_n))h_i.$
For any $p\ge 1$, we denote the domain of $D$ in $L^p(\Omega)$ by $\mathbb{D}^{1,p}$, meaning that $\mathbb{D}^{1,p}$ is the closure of $\mathcal{S}$ with respect to the norm
$\|F\|_{1,p}=\left(\mathbb{E}\left[|F|^p+\|DF\|_\mathbb{H}^p\right]\right)^{\frac{1}{p}}.$
We define the iteration of the operator $D$ in such a way that for a smooth random variable $F$, the iterated derivative $D^\alpha F$ is a random variable with values in $\mathbb{H}^{\bigotimes \alpha}$.
Then for any $p\ge 1$ and integer $\alpha\ge1$, we denote by $\mathbb{D}^{\alpha,p}$ the completion of $\mathcal{S}$ with respect to the norm
\begin{equation*}
\|F\|_{\alpha,p}=\left(\mathbb{E}\left[|F|^p+\sum_{j=1}^{\alpha}\|D^jF\|_{\mathbb{H}^{\bigotimes j}}^p\right]\right)^{\frac{1}{p}}.
\end{equation*}
Define
\begin{equation}\label{LDD}
L^{\infty-}(\Omega):=\bigcap_{p\ge 1} L^p(\Omega),\qquad\mathbb{D}^{\alpha,\infty}:=\bigcap_{p\ge 1} \mathbb{D}^{\alpha,p},\qquad\mathbb{D}^\infty:=\bigcap_{p\ge 1}\bigcap_{\alpha\ge 1} \mathbb{D}^{\alpha,p}
\end{equation}
to be topological projective limits. As in the Schwartz theory of distributions, we introduce the topological dual of the Banach space $\mathbb{D}^{\alpha,p}$, by
$\mathbb{D}^{-\alpha,q}=(\mathbb{D}^{\alpha,p})^{\prime},$
where $1/p+1/q=1$, and the space of generalized Wiener functionals, by
$\mathbb{D}^{-\infty}=\bigcup_{p\ge 1}\bigcup_{\alpha\ge 1}\mathbb{D}^{-\alpha,p}.$ The natural coupling of $G\in\mathbb{D}^{\alpha,p}$ and $\Phi\in\mathbb{D}^{-\alpha,q}$ with $1/p+1/q=1$ or that of $G\in\mathbb{D}^{\infty}$ and $\Phi\in\mathbb{D}^{-\infty}$ is denoted by $\mathbb{E}[G\cdot\Phi]$.
Similarly, let $V$ be a real separable Hilbert space and we define the space $\mathbb{D}^{\alpha,p}(V)$ as the completion of $V$-valued smooth random variables with respect to the norm
\begin{equation*}
\|F\|_{\alpha,p,V}=\left(\mathbb{E}\left[\|F\|_V^p+\sum_{j=1}^{\alpha}\|D^jF\|_{\mathbb{H}^{\bigotimes j}\bigotimes V}^p\right]\right)^{\frac{1}{p}}.
\end{equation*}
When we consider $V$-valued functional, the corresponding spaces in \eqref{LDD} are denoted by $L^{\infty-}(\Omega;V)$, $\mathbb{D}^{\alpha,\infty}(V)$ and $\mathbb{D}^{\infty}(V)$, respectively.
\subsection{Regularity of probability laws}
In order to study the density function of the numerical approximation, we begin by imposing the non-degeneracy condition.
\begin{Def}\label{Def1}
A random vector $F=(F^1,F^2,\cdots,F^m)$ whose components are in $\mathbb{D}^\infty $ is non-degenerate if the Malliavin covariance matrix
$\gamma_F:=(\langle DF^i,DF^j\rangle_{\mathbb{H}})_{1\le i,j\le m}$
is invertible a.s. and
$(\det \gamma_F)^{-1}\in L^{\infty-}(\Omega).$
\end{Def}
It is well known that if $F$ is non-degenerate, then for every $T\in\mathcal{S}'(\mathbb{R}^m)$, $T\circ F$ can be defined in $\mathbb{D}^{-\infty}$ and
$T\circ F\in\bigcap_{p\ge1}\bigcup_{\alpha\ge 1}\mathbb{D}^{-\alpha,p}$ (see e.g. \cite{HW96}). Here, $\mathcal{S}'(\mathbb{R}^m)$ is the space of tempered distributions.
In the particular case that $T=(1-\Delta)^{\beta/2}\delta_y,$ $\beta\ge 0,$ $y\in\mathbb{R}^m$, if $\alpha>\beta+\frac{m}{q}, 1/p+1/q=1$, then
\begin{equation}\label{delta}
T\circ F=(1-\Delta)^{\beta/2}\delta_y\circ F\in\mathbb{D}^{-\alpha,p}.
\end{equation}
$\delta_y\circ F$ is called Donsker's delta function. Notice that
$\mathbb{E}[\delta_y\circ F]=\rho_F(y),$
where $\rho_F(y)$ is the density at $y$ of the probability law of $F$ (see \cite[Section 4]{IW84} for a detailed discussion). We close this part by introducing some results from \cite{HW96} that are useful for deriving the convergence in density of the numerical approximation in Section \ref{S6}.
\begin{lem}\label{pdf0}
Assume that $H_n,\,H\in \mathbb{D}^{1,\infty}(\mathbb{R}^{m})$ satisfy the following conditions:
(i) there exists $\kappa>0$ such that for any $1\le p<\infty$,
$\|H_n-H\|_{1,p,\mathbb{R}^{m}}=\mathcal{O}(n^{-\kappa})$ as $n\rightarrow\infty$,
(ii) $(\det\gamma_H)^{-1}\in L^{\infty-}(\Omega),$
(iii) for any $1\le p<\infty$, there exists $\nu(p)>0$ such that $\|\det(\gamma_{H_n})^{-1}\|_{L^p(\Omega)}=\mathcal{O}\left(n^{\nu(p)}\right)$ as $n\rightarrow\infty$.
Then, for any $1\le p<\infty$, we have
$\sup_n\left\|\det(\gamma_{H_n})^{-1}\right\|_{L^p(\Omega)}<\infty.$
\end{lem}
\begin{pro}\label{pdf}
Let $H_n,\, n=1,2,\cdots$, and $H$ be smooth $m$-dimensional Wiener functionals, i.e.,
$H_n,\, H\in \mathbb{D}^{\infty}(\mathbb{R}^{m})$, and let $\alpha>0,\,\beta\ge0,\,\delta>0$ and $1<p<\infty$.
Suppose that $H_n$ and $H$ satisfy the following conditions:
(i) $H_n$ approximates $H$ in $\mathbb{D}^{\infty}(\mathbb{R}^{m})$ with order $\kappa$ $(\kappa>0)$ in the sense that for every $1\le p<\infty$ and $\alpha>0$,
$\|H_n- H\|_{\alpha,p,\mathbb{R}^{m}}=\mathcal{O}(n^{-\kappa})$ as $n\rightarrow\infty$.
(ii) $H$ is non-degenerate, i.e.,
$\left(\det\gamma_H\right)^{-1}\in L^{\infty-}(\Omega).$
Then for $\alpha>\beta+m/q+1, 1/p+1/q=1$,
\begin{equation}\label{pdf11}
\sup_{y\in\mathbb{R}^{m}}\left\|\left[(1-\Delta)^{\beta/2}\phi_{n^{-\delta}}\right](H_n-y)-(1-\Delta)^{\beta/2}\delta_y\circ H\right\|_{-\alpha,p}=\mathcal{O}\left(n^{-\kappa\land\delta}\right),
\end{equation}
as $n\rightarrow\infty$, where
$\phi_\rho(x)=\left(2\pi\rho^2\right)^{-m/2}e^{-\frac{\|x\|^2}{2\rho^2}},\, x\in\mathbb{R}^{m},\,\rho>0.$
\end{pro}
\begin{rem}\label{pdf1}
If in addition $H_n$ in Proposition \ref{pdf} is uniformly non-degenerate, i.e.,
$\sup_n\|(\det\gamma_{H_n})^{-1}\|_{L^p(\Omega)}<\infty,$
then we have
\begin{equation*}\label{pdf2}
\sup_{y\in\mathbb{R}^{m}}\left\|(1-\Delta)^{\beta/2}\delta_y\circ H_n-(1-\Delta)^{\beta/2}\delta_y\circ H\right\|_{-\alpha,p}=\mathcal{O}(n^{-\kappa}).
\end{equation*}
\end{rem}
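Numerically, the mollified Donsker delta in Proposition \ref{pdf} corresponds to Gaussian-kernel density estimation: the expectation $\mathbb{E}[\phi_\rho(H_n-y)]$ is a smoothed approximation of the density of $H_n$ at $y$. A minimal Monte Carlo sketch for hypothetical scalar samples (our illustration, not part of the analysis):

```python
import numpy as np

def mollified_delta_density(samples, y, rho):
    """Monte Carlo estimate of E[phi_rho(H - y)] for scalar samples of H,
    where phi_rho is the Gaussian density with variance rho^2.  This
    approximates the density of H at y, with smoothing bias of order rho^2
    and the usual Monte Carlo sampling error."""
    x = np.asarray(samples) - y
    phi = np.exp(-x**2 / (2.0 * rho**2)) / np.sqrt(2.0 * np.pi * rho**2)
    return float(phi.mean())
```

For standard normal samples the estimate at $y=0$ should be close to $(2\pi)^{-1/2}\approx 0.3989$, up to the $\mathcal{O}(\rho^2)$ bias and sampling error.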
\subsection{Main assumptions}
In this part, we introduce main assumptions on equation \eqref{SDE1}.
To ensure the existence and uniqueness of a strong solution of equation \eqref{SDE1} (see \cite[Subsection 3.1]{HJ14}),
we assume that $F\in C^2$ is bounded below, and
$\limsup_{r\to 0}\sup_{y\in \mathbb R^m}\frac {\|y\|^{r}}{C_0+F(y)}<\infty.$
Here, $F$ is called bounded below if $F(y)+C_0 > 0$ holds for any $y\in \mathbb R^m$ and some constant $C_0$.
For the purpose of obtaining the solvability of scheme \eqref{split sol}, we further impose the assumption that $\nabla^2 F$ is uniformly bounded below in the sense that there exists a constant $K\ge0$ such that for any $y\in \mathbb R^m$, $\lambda_{min}\left(\nabla^2 F(y)\right)\ge-K$. This condition is satisfied, for example, when $F$ is convex. All the above assumptions are supposed to be fulfilled throughout this article. For convenience, further assumptions on the drift coefficient $F$ and the diffusion coefficient $\sigma$ that may be used in the ensuing sections are given as follows.
\begin{hyp}\label{F2}
Assume that $F\in C_p^\infty$ and there exist some constants $C_i>0,\,i=1,2,3,\,\epsilon>0$ and $\vec{l}=(l_1,\ldots,l_m)$ with integers $l_i\ge1,\,i=1,\ldots,m$, such that for any $y\in\mathbb{R}^m$, the following inequalities hold:
\begin{align}\label{FQ1}
&-C_3+C_1\lfloor y\rfloor^{2\vec{l}}\le F(y)\le C_2\lfloor y\rfloor^{2\vec{l}}+C_3,\\\label{FQ2}
&\|\nabla^2 F(y)\|\le C_2\lfloor y\rfloor^{2{\vec{l}-\epsilon\mathbbm{1}}}+C_3.
\end{align}
\end{hyp}
For simplicity, we suppose that for any multi-index $\alpha$ with $|\alpha|:=\sum_{i=1}^m\alpha_i\ge1$, it holds that
$\|\partial^\alpha F(y)\|\le C\left(1+\|y\|^{2|\,\vec{l}\,|_{\infty}}\right)$.
Assumption \ref{F2} is needed to deduce the optimal strong convergence rate of the splitting AVF scheme \eqref{split sol} in Section \ref{S5}. If $F$ is a polynomial satisfying \eqref{FQ1}, then $F$ satisfies \eqref{FQ2} as well.
It can be seen that when $|\,\vec{l}\,|_{\infty}>1$, equation \eqref{SDE1} under Assumption \ref{F2} satisfies neither the global Lipschitz condition nor the global monotonicity condition.
Two examples satisfying Assumption \ref{F2} are given as follows:
\begin{align*}
&(1)\,m=1,\,F(y)=\sum_{i=0}^{2\kappa}a_iy^{i},\,a_{2\kappa}>0,\,\kappa\ge1,\\
&(2)\,m=2,\,F(y)=y_1^4+y_2^6+y_1y_2+\sin y_1.
\end{align*}
For convenience, we do not consider the case that $l_i=0$ for some $i=1,\dots,m$; all the arguments in Sections \ref{S3}-\ref{S6} still hold in that case with slight modifications.
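For example (2), the two-sided bound \eqref{FQ1} with $\vec{l}=(2,3)$ can be checked numerically on a grid; the constants below are illustrative choices of $C_1,C_2,C_3$, not claimed to be sharp.

```python
import numpy as np

def F(y1, y2):
    # example (2): F(y) = y1^4 + y2^6 + y1*y2 + sin(y1)
    return y1**4 + y2**6 + y1 * y2 + np.sin(y1)

def floor_2l(y1, y2):
    # the quantity  |y1|^4 + |y2|^6,  i.e. |y|^{2 l} with l = (2, 3)
    return np.abs(y1)**4 + np.abs(y2)**6

# Check  -C3 + C1 |y|^{2l} <= F(y) <= C2 |y|^{2l} + C3  on a grid,
# with the illustrative (not sharp) constants C1 = 1/2, C2 = 2, C3 = 10.
g = np.linspace(-5.0, 5.0, 101)
Y1, Y2 = np.meshgrid(g, g)
vals = F(Y1, Y2)
bounds_hold = bool(
    np.all(0.5 * floor_2l(Y1, Y2) - 10.0 <= vals)
    and np.all(vals <= 2.0 * floor_2l(Y1, Y2) + 10.0)
)
```

Here the cross term $y_1y_2$ and $\sin y_1$ are dominated by $|y_1|^4+|y_2|^6$ for large $\|y\|$, which is what the grid check reflects.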
\begin{hyp}\label{F4}
At least $m$ of the vectors $\sigma_1,\ldots,\sigma_d$ are linearly independent.
\end{hyp}
It is easily verified that the noise in equation \eqref{SDE1} is degenerate and that Assumption \ref{F4} implies H\"{o}rmander's condition (see e.g.\ \cite{HSW17}), which indicates that the law of the exact solution $X(t)$ of equation \eqref{SDE1} is absolutely continuous with respect to the Lebesgue measure on $\mathbb{R}^{2m}$, for any $t\in(0,T]$.
\section{Stochastic Langevin equation}\label{S3}
In this section, we give the exponential integrability property and the existence and smoothness of the density function of the exact solution for equation \eqref{SDE1}. For convenience, we rewrite \eqref{SDE1} as
\begin{equation}\label{SDE2}
\,\mathrm d X(t)=A_0(X(t))\,\mathrm d t+\sum_{k=1}^d \left[\begin{array}{c}\sigma_k\\0\end{array}\right]\circ\,\mathrm d W_{t}^{k},
\end{equation}
with
\begin{equation*}
A_0(x)=\left[\begin{array}{c}-\nabla F(Q)-vP\\P\end{array}\right],\,x=(P^\top,\,Q^\top)^\top.
\end{equation*}
\subsection{Exponential integrability property of the exact solution}
\label{EI}
Let $U(x)=K_0\left(\frac{\|P\|^2}{2}+F(Q)+C_0\right),\,x=(P^\top,\,Q^\top)^\top, \,K_0\ge1$. Then $U$ is a nonnegative functional. By applying It\^o's formula to $U(X(t))$ and a standard argument, we show the following a priori estimate, where $X(t)=(P(t)^\top,Q(t)^\top)^\top$.
\begin{lem}\label{MB}
Let $p\ge1$, then there exists $C=C(T,\sigma,X(0),p)>0$ such that
\begin{equation}\label{ME}
\mathbb{E}\left[\sup_{0\le t\le T}\|X(t)\|^p\right]\le C.
\end{equation}
\end{lem}
Beyond the above a priori estimate of $X(t)$, the exponential integrability property is also shown, which plays a key role in the study of the strong convergence rate (see e.g. \cite{CHZ19,HJ14}). Let us recall the following exponential integrability lemma (see \cite[Proposition 3.1]{CHZ16} or \cite[Corollary 2.4]{HSA}). For more applications of the exponential integrability property, see \cite{BCH18,CH17,CHS18,HJW18} and the references therein.
\begin{lem}\label{exp}
Let $H$ be a separable Hilbert space, $U\in\mathcal{C}^2(H;\mathbb{R})$, $\bar{U}\in L^0([0,T]\times H;\mathbb R)$, $X$ be an $H$-valued, adapted stochastic process with continuous sample paths satisfying $\int_0^T\|\mu(X_s)\|+\|\sigma(X_s)\|^2\,\mathrm d s<\infty$ a.s., and for all $t\in[0,T]$, $X_t=X_0+\int_0^t\mu(X_s)\,\mathrm d s+\int_0^t\sigma(X_s)\,\mathrm d W_s$ a.s. Assume that there exists an $\mathbb{R}$-valued $\mathscr{F}_0$-measurable random variable $\beta$ such that a.s.
\begin{equation}\label{exp1}
DU(X)\mu(X)+\frac{tr[D^2U(X)\sigma(X)\sigma^*(X)]}{2}+\frac{\|\sigma^*(X)DU(X)\|^2}{2e^{\beta t}}+\bar{U}(X)\le\beta U(X),
\end{equation}
then
\begin{equation*}
\sup_{t\in[0,T]}\mathbb{E}\left[\exp\left(\frac{U(X_t)}{e^{\beta t}}+\int_0^t\frac{\bar{U}(X_r)}{e^{\beta r}}\,\mathrm d r\right)\right]\le\mathbb{E}\left[e^{U(X_0)}\right].
\end{equation*}
\end{lem}
Based on Lemma \ref{exp}, the authors of \cite{HJ14} prove the exponential integrability of the exact solution of equation \eqref{SDE1} when $\sigma=\sqrt{\epsilon}I$, see the formula (4.28) in \cite[Section 4.5]{HJ14}. Here, $I$ denotes the identity matrix. We now present the exponential integrability property of the exact solution of equation \eqref{SDE1}.
\begin{pro}\label{EEI}
For any $\beta\ge K_0\left(\sum\limits_{k=1}^d \|\sigma_k\|^2-2v\right),$ it holds that
\begin{equation}\label{EqEI}
\sup_{t\in[0,T]}\mathbb{E}\left[\exp\left(\frac{U(X(t))}{e^{\beta t}}\right)\right]\le C(\beta,T)e^{U(X(0))} .
\end{equation}
\end{pro}
\begin{proof}
Take $H=\mathbb{R}^{2m}$, $\mu(x)=\left[\begin{array}{c}-\nabla F(Q)-vP\\P\end{array}\right],\sigma(x)=\left[\begin{array}{ccc}\sigma_1&\ldots&\sigma_d \\0&\ldots&0\end{array}\right],\, \bar{U}\equiv-\frac{K_0}{2}\sum\limits_{k=1}^d \|\sigma_k\|^2$ in Lemma \ref{exp}. Then a straightforward calculation, similar to the formula (4.27) in \cite[Section 4.5]{HJ14}, shows that
\eqref{exp1} holds for any $\beta\ge K_0\left(\sum\limits_{k=1}^d \|\sigma_k\|^2-2v\right)$, and thereby \eqref{EqEI} follows from Lemma \ref{exp}.
\end{proof}
\subsection{Probability density function of the exact solution}
In this part, we show that the exact solution $X(t)$ of equation \eqref{SDE1} admits a smooth density function under Assumptions \ref{F2}-\ref{F4}, for any $t\in(0,T]$.
By using Malliavin calculus and the exponential integrability property,
we obtain the following result on the smoothness of the density function of $X(t)$, for any $t\in(0,T]$.
\begin{lem}\label{NS}
Let Assumptions \ref{F2}-\ref{F4} hold. Then for any fixed $t\in(0,T]$, $X(t)$ admits an infinitely differentiable density function.
\end{lem}
\begin{proof}
Fix $t\in(0,T]$.
According to \cite[Theorem 2.1.4]{DN06} and \cite[Theorem 2.3.3]{DN06},
it remains to prove that for any integer $\alpha\ge1$, $X(t)\in\mathbb{D}^{\alpha,\infty}(\mathbb R^{2m}).$ Denote $j(K):=j_{\epsilon_1},\ldots,j_{\epsilon_\eta}, r(K):=r_{\epsilon_1},\ldots,r_{\epsilon_\eta}$ with $j_{\epsilon_i}\in\{1,\ldots,d\}$ and $r_{\epsilon_i}\in[0,T],\,i\in\{1,\ldots,\eta\}$ for any subset $K=\{\epsilon_1,\cdots,\epsilon_\eta\}$ of $\{1,\ldots,\alpha\}$ with $\epsilon_1<\cdots<\epsilon_\eta$. Then by the chain rule, for $t\ge r_1 \vee\cdots\vee r_\alpha,\, i=1,\ldots,2m$,
the $\alpha$-th Malliavin derivative of $X^i(t)$ satisfies:
\begin{align}\label{DXM1}
&D_{r_1,\ldots,r_\alpha}^{j_1,\ldots,j_\alpha}\left(X^i(t)\right)\\\nonumber
&=\int_{r_1 \vee\cdots\vee r_\alpha}^t \sum_{1\le\nu\le \alpha}\left(\partial_{k_1} \cdots\partial_{k_\nu}A_0^i\right)\left(X(s)\right)\times D_{r(I_1)}^{j(I_1)}\left[X^{k_1}(s)\right] \cdots D_{r(I_\nu)}^{j(I_\nu)}\left[X^{k_\nu}(s)\right]\,\mathrm d s,
\end{align}
where $\sum\limits_{1\le\nu\le \alpha}$ denotes the sum over all sets of partitions $\{1,\ldots,\alpha\}=I_1\cup\cdots \cup I_{\nu},\, k_l\in\{1,\ldots,2m\},\, l=1,\ldots,\nu,$ and $\nu=1,\ldots,\alpha$, and for $t<r_1 \vee\cdots\vee r_\alpha,\, i=1,\ldots,2m,$
\begin{equation*}\label{DXM2}
D_{r_1,\ldots,r_\alpha}^{j_1,\ldots,j_\alpha}(X^i(t))=0.
\end{equation*}
Now we aim to show that for $p\ge1,\,\alpha\ge1$,
\begin{equation}\label{DK}
\sup_{r_1,\ldots,r_\alpha\in[0,T]}\mathbb E\left(\sup_{r_1 \vee\cdots\vee r_\alpha\le t\le T}\|D_{r_1,\ldots,r_\alpha}^{j_1,\ldots,j_\alpha}(X(t))\|^p\right)\le C(\alpha,p),
\end{equation}
for all choices of $j_1,\ldots,j_\alpha\in\{1,\ldots,d\}$. We prove it by an induction argument on the order $\alpha$ of the Malliavin derivative of $X(t)$.
For $\alpha=1$, the Malliavin derivative of $X(t)$ satisfies the following integral equation
\begin{align*}
D_rX(t)\textbf{1}_{\{r\le t\}}=\int_r^t (\nabla A_0)(X(s))D_rX(s)\,\mathrm d s+A\,\textbf{1}_{\{r\le t\}},
\end{align*}
where $A=\left[\begin{array}{ccc}\sigma_1&\ldots&\sigma_d \\0&\ldots&0\end{array}\right]$ is the constant diffusion matrix, $\textbf{1}_{\{r\le t\}}$ denotes the indicator function of the set $\{r\le t\}$ and
\begin{equation}\label{A0}
(\nabla A_0)(X(s))=\left[\begin{array}{cc} -vI&-\nabla^2F(Q(s))\\I&0\end{array}\right].
\end{equation}
By the triangle inequality and Gronwall's inequality, for any fixed $r\le t$,
\begin{align*}
\|D_rX(t)\|&\le\|A\|\exp\left(\int_r^t \|(\nabla A_0)(X(s))\|\,\mathrm d s\right).
\end{align*}
Due to the fact that
\begin{align*}
\sum\limits_{i=1}^{m}\left|x_i\right|^{2l_i-\epsilon}=\sum_{i=1}^m \left|x_i\right|^{2l_i\cdot\frac{l_i-\epsilon/2}{l_i}}\le\sum_{i=1}^m \left|x_i\right|^{2l_i\cdot\frac{|\,\vec{l}\,|_{\infty}-\epsilon/2}{|\,\vec{l}\,|_{\infty}}}+C\le C(m)\left(\sum_{i=1}^m \left|x_i\right|^{2l_i}\right)^{\frac{|\,\vec{l}\,|_{\infty}-\epsilon/2}{|\,\vec{l}\,|_{\infty}}}+C,
\end{align*}
where $x=(x_1,...,x_m)\in\mathbb{R}^m$,
and Assumption \ref{F2}, we have
\begin{equation*}\label{Q2l}
\lfloor Q(t)\rfloor^{2\vec{l}-\epsilon\mathbbm{1}}\le C\left(\lfloor Q(t)\rfloor^{2\vec{l}}\right)^{\frac{|\,\vec{l}\,|_{\infty}-\epsilon/2}{|\,\vec{l}\,|_{\infty}}}+C\le C(U(X(t)))^{\frac{|\,\vec{l}\,|_{\infty}-\epsilon/2}{|\,\vec{l}\,|_{\infty}}}+C.
\end{equation*}
From \eqref{EqEI} and the H\"{o}lder, Young and Jensen inequalities, it follows that for $\beta\ge K_0\sum\limits_{k=1}^d \|\sigma_k\|^2$,
\begin{align}\label{EEB}
\mathbb{E}\left[\exp\left(\int_0^T C\lfloor Q(t)\rfloor^{2{\vec{l}-\epsilon\mathbbm{1}}}\,\mathrm d t\right)\right]
\le&\frac{1}{T} \int_0^T\mathbb{E}\left[\exp(CT\lfloor Q(t)\rfloor^{2{\vec{l}-\epsilon\mathbbm{1}}})\right] \,\mathrm d t\\\nonumber
\le&\frac{1}{T} \int_0^T\mathbb{E}\left[\exp\left(CTU(X(t))^{\frac{|\,\vec{l}\,|_{\infty}-\epsilon/2}{|\,\vec{l}\,|_{\infty}}}+C\right)\right] \,\mathrm d t\\\nonumber
\le&\frac{C}{T} \int_0^T\mathbb{E}\left[\exp\left(\left(\frac{U(X(t))}{e^{\beta t}}\right)^{\frac{|\,\vec{l}\,|_{\infty}-\epsilon/2}{|\,\vec{l}\,|_{\infty}}}CTe^{\beta t\frac{|\,\vec{l}\,|_{\infty}-\epsilon/2}{|\,\vec{l}\,|_{\infty}}}\right)\right] \,\mathrm d t\\\nonumber
\le&\frac{C}{T} \int_0^T\mathbb{E}\left[\exp\left(\frac{U(X(t))}{e^{\beta t}}\right)\right]^{\frac{|\,\vec{l}\,|_{\infty}-\epsilon/2}{|\,\vec{l}\,|_{\infty}}}\,\mathrm d t\\\nonumber
\le&\frac{C}{T} \int_0^T\mathbb{E}\left[\exp\left(\frac{U(X(t))}{e^{\beta t}}\right)\right]\,\mathrm d t+C
\le C.
\end{align}
Since $\|(\nabla A_0)(X(s))\|\le C\lfloor Q(s)\rfloor^{2{\vec{l}-\epsilon\mathbbm{1}}}+C$, we obtain that
\begin{align}\label{M=1}
&\sup_{r\in[0,T]}\mathbb{E}\left[\sup_{t\in[r,T]}\|D_rX(t)\|^p\right]\\\nonumber
&\le\sup_{r\in[0,T]}\mathbb{E}\left[C\exp\left(\int_r^T p\left(C\lfloor Q(s)\rfloor^{2{\vec{l}-\epsilon\mathbbm{1}}}+C\right)\,\mathrm d s\right)\right]\\\nonumber
&=\mathbb{E}\left[C\exp\left(\int_0^T p\left(C\lfloor Q(s)\rfloor^{2{\vec{l}-\epsilon\mathbbm{1}}}+C\right)\,\mathrm d s\right)\right]\le C,
\end{align}
which completes the proof of \eqref{DK} for $\alpha=1$.
Assuming that \eqref{DK} holds up to the index $\alpha-1,\,\alpha\ge2$, we divide the sum in \eqref{DXM1} as
\begin{align*}
&D_{r_1,\ldots,r_\alpha}^{j_1,\ldots,j_\alpha}(X(t))\\
&=\sum_{2\le\nu\le \alpha}\int_{r_1 \vee\cdots\vee r_\alpha}^t(\partial_{k_1} \cdots\partial_{k_\nu}A_0)(X(s)) D_{r(I_1)}^{j(I_1)}\left[X^{k_1}(s)\right] \cdots D_{r(I_\nu)}^{j(I_\nu)}\left[X^{k_\nu}(s)\right]\,\mathrm d s\\
&\quad+\sum_{\kappa=1}^{2m}\int_{r_1 \vee\cdots\vee r_\alpha}^t(\partial_\kappa A_0)(X(s))D_{r_1,\ldots,r_\alpha}^{j_1,\ldots,j_\alpha}\left(X^\kappa(s)\right)\,\mathrm d s.
\end{align*}
By applying the triangle inequality and then taking the supremum over $t\in[r_1 \vee\cdots\vee r_\alpha, t_1]$ with $t_1\le T$, we obtain
\begin{align*}
&\sup_{r_1 \vee\cdots\vee r_\alpha\le t\le t_1}\left\|D_{r_1,\ldots,r_\alpha}^{j_1,\ldots,j_\alpha}(X(t))\right\|\\
&\quad\le \sum_{2\le\nu\le \alpha}\int_{r_1 \vee\cdots\vee r_\alpha}^T\|(\partial_{k_1} \cdots\partial_{k_\nu}A_0)(X(s))\|\left\|D_{r(I_1)}^{j(I_1)}\left[X^{k_1}(s)\right]\right\| \cdots \left\|D_{r(I_\nu)}^{j(I_\nu)}\left[X^{k_\nu}(s)\right]\right\|\,\mathrm d s\\
&\qquad+\int_{r_1 \vee\cdots\vee r_\alpha}^{t_1}\|(\nabla A_0)(X(s))\|\|D_{r_1,\ldots,r_\alpha}^{j_1,\ldots,j_\alpha}(X(s))\|\,\mathrm d s\\
&\quad\le B(T)+\int_{r_1 \vee\cdots\vee r_\alpha}^{t_1}\|(\nabla A_0)(X(s))\|\left(\sup_{r_1 \vee\cdots\vee r_\alpha\le t\le s}\left\|D_{r_1,\ldots,r_\alpha}^{j_1,\ldots,j_\alpha}(X(t))\right\|\right)\,\mathrm d s,
\end{align*}
where
\begin{align*}
&B(T)=\sum_{2\le\nu\le \alpha}\int_{r_1 \vee\cdots\vee r_\alpha}^T\|(\partial_{k_1} \cdots\partial_{k_\nu}A_0)(X(s))\|
\prod_{\zeta=1}^\nu\left\|D_{r(I_\zeta)}^{j(I_\zeta)}\left[X^{k_\zeta}(s)\right]\right\|\,\mathrm d s.
\end{align*}
It follows from the Gronwall lemma that
\begin{align*}
\sup_{r_1 \vee\cdots\vee r_\alpha\le t\le T}\left\|D_{r_1,\ldots,r_\alpha}^{j_1,\ldots,j_\alpha}(X(t))\right\|\le B(T)\exp\left(\int_{r_1 \vee\cdots\vee r_\alpha}^T\|(\nabla A_0)(X(s))\|\,\mathrm d s\right).
\end{align*}
Similar to \eqref{M=1}, there holds that
\begin{align}\label{M=M}
&\sup_{r_1,\ldots,r_\alpha\in[0,T]}\mathbb{E}\left[\exp\left({\int_{r_1 \vee\cdots\vee r_\alpha}^T\beta\|(\nabla A_0)(X(s))\|\,\mathrm d s}\right)\right]\\\nonumber
&=\mathbb{E}\left[\exp\left({\int_0^T\beta\|(\nabla A_0)(X(s))\|\,\mathrm d s}\right)\right]\le C,
\end{align}
for any $\beta>1$. Combining the fact that $F\in C^\infty_p$ and \eqref{A0}, for all choices of $k_i\in\{1,\ldots,2m\},\, i=1,\ldots,\nu,\,1\le\nu\le \alpha$, we deduce
\begin{equation*}
\|(\partial_{k_1} \cdots\partial_{k_\nu}A_0)(X(s))\|\le C+\|Q(s)\|^{2|\,\vec{l}\,|_{\infty}}.
\end{equation*}
By the induction assumption and the H\"{o}lder inequality, we get, for any $q\ge1$,
\begin{align}\label{M2}
&\sup_{r_1,\ldots,r_\alpha\in[0,T]}\mathbb{E}\left[B(T)^q\right]\\\nonumber
&\le C\sum_{2\le\nu\le \alpha}\int_0^T\mathbb{E}\left[\|(\partial_{k_1} \cdots\partial_{k_\nu}A_0)(X(s))\|^q\prod_{\zeta=1}^\nu\left(\sup_{\substack{r_i\in[0,T]\\i\in I_\zeta}}\left\|D_{r(I_\zeta)}^{j(I_\zeta)}[X^{k_\zeta}(s)]\right\|^q\right)\right]\mathrm d s
\le C.
\end{align}
As a result, \eqref{M=M} and \eqref{M2} imply that \eqref{DK} holds for $\alpha$ via the H\"{o}lder inequality.
It follows from \eqref{DK} that
\begin{align}\label{DKp}
\mathbb{E}&\|D^\alpha X(t)\|_{\mathbb{H}^{\bigotimes\alpha}\bigotimes\mathbb{R}^{2m}}^p=\mathbb{E}\|D^\alpha X(t)\|_{L^2\left([0,T]^\alpha;(\mathbb{R}^d)^{\bigotimes\alpha}\bigotimes\mathbb{R}^{2m}\right)}^p\\\nonumber
\le& C(T,p,\alpha)\sup_{r_1,\ldots,r_\alpha\in[0,T]}\mathbb E\left(\sup_{r_1 \vee\cdots\vee r_\alpha\le t\le T}\|D_{r_1,\ldots,r_\alpha}(X(t))\|_{(\mathbb{R}^d)^{\bigotimes\alpha}\bigotimes\mathbb{R}^{2m}}^{p}\right)
\le C,
\end{align}
which completes the proof.
\end{proof}
\section{Splitting AVF scheme} \label{S4}
The bulk of this section presents the exponential integrability property, and the existence and smoothness of the density function, of the numerical solution generated by the splitting AVF scheme \eqref{split sol}. To this end, we begin by introducing the splitting AVF scheme.
Let $0=t_0<t_1<\cdots<t_{N^h-1}<t_{N^h}=T$ be a uniform partition of interval $[0,T]$, where $t_n=nh,\,n=0,\ldots,N^h$.
The main idea of constructing the splitting AVF scheme is to split equation \eqref{SDE1} as
\begin{equation*}\label{SSDE2}
\begin{split}
&\,\mathrm d \bar P=-\nabla F(\bar Q)\,\mathrm d t,\,\mathrm d \bar Q=\bar P\,\mathrm d t;\\
&\,\mathrm d \tilde{P}=-v\tilde{P} \,\mathrm d t+\sum_{k=1}^d \sigma_k\,\mathrm d W_{t}^{k},\,\mathrm d \tilde{Q}=0.
\end{split}
\end{equation*}
Here, the first subsystem is a Hamiltonian system, and the second one can be solved exactly. In order to inherit the exponential integrability property of the exact solution $X(t)$, we discretize the first subsystem by the AVF scheme.
Combining this with the explicit expression of the exact solution of the second subsystem, we obtain the splitting AVF scheme \eqref{split sol}.
From \eqref{split sol}, it readily follows that
\begin{equation*}
Q_{n+1}=Q_n+hP_n-\frac{h^2}{2}\int_0^1\nabla F(Q_n+\tau(Q_{n+1}-Q_n))\,\mathrm d \tau.
\end{equation*}
Define
\begin{equation*}
Z(h,P,Q,z)=z-Q-hP+\frac{h^2}{2}\int_0^1\nabla F(Q+\tau (z-Q))\,\mathrm d \tau,
\end{equation*}
then
\begin{equation*}
\frac{\partial Z}{\partial z}=I+\frac{h^2}{2}\int_0^1\tau\nabla^2 F(Q+\tau (z-Q))\,\mathrm d \tau.
\end{equation*}
Under the assumption that $\nabla^2F$ is uniformly bounded below by $-K$, we have $\det\left(\frac{\partial Z}{\partial z}\right)\neq 0$ as long as $h<\frac{2}{\sqrt{K}}$, which implies that \eqref{split sol} is solvable by the implicit function theorem. In particular, if $F$ is a convex function, the proposed scheme is solvable for any stepsize $h>0$.
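To fix ideas, a minimal numerical sketch of one step of the splitting strategy may be helpful. The snippet below is an illustration, not the authors' code: it takes $m=d=1$, solves the implicit AVF relation for $Q_{n+1}$ by fixed-point iteration, approximates the averaged gradient $\int_0^1\nabla F(Q_n+\tau(Q_{n+1}-Q_n))\,\mathrm d\tau$ by a midpoint quadrature, and then applies the exact flow of the Ornstein--Uhlenbeck subsystem; the example potential $F(q)=q^4/4$ is an assumption chosen for illustration.

```python
import numpy as np

def avf_grad(grad_F, q0, q1, n_quad=64):
    # Midpoint-rule approximation of \int_0^1 grad_F(q0 + tau (q1 - q0)) d tau
    taus = (np.arange(n_quad) + 0.5) / n_quad
    return np.mean([grad_F(q0 + t * (q1 - q0)) for t in taus], axis=0)

def splitting_avf_step(p, q, h, v, sigma, grad_F, rng, fp_iters=50):
    """One step of the splitting AVF strategy (illustrative, m = d = 1):
    (1) AVF step for dP = -grad F(Q) dt, dQ = P dt;
    (2) exact flow of dP = -v P dt + sigma dW, dQ = 0."""
    # Fixed-point solve of Q_{n+1} = Q_n + h P_n - (h^2/2) * averaged gradient
    q_new = q + h * p
    for _ in range(fp_iters):
        q_new = q + h * p - 0.5 * h**2 * avf_grad(grad_F, q, q_new)
    p_bar = p - h * avf_grad(grad_F, q, q_new)  # plays the role of \bar P_{n+1}
    # Exact OU update: decay e^{-vh}, noise variance sigma^2 (1-e^{-2vh})/(2v)
    noise_std = sigma * np.sqrt((1.0 - np.exp(-2.0 * v * h)) / (2.0 * v))
    p_new = np.exp(-v * h) * p_bar + noise_std * rng.standard_normal()
    return p_new, q_new

rng = np.random.default_rng(0)
grad_F = lambda q: q**3  # F(q) = q^4/4: convex, with non-globally Lipschitz gradient
p, q = 1.0, 0.5
for _ in range(100):
    p, q = splitting_avf_step(p, q, h=0.01, v=1.0, sigma=0.2, grad_F=grad_F, rng=rng)
```

With $\sigma=0$ the step reduces to an AVF step followed by exponential damping, and the AVF part preserves the Hamiltonian $U(p,q)=\frac12 p^2+F(q)$ up to quadrature and fixed-point tolerances.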
\subsection{Exponential integrability property of the numerical approximation}
In this part, we prove the exponential integrability property of $X_n$, which is helpful for deducing the strong convergence rate in Section \ref{S5}. For simplicity, we denote $\bar X_{n}:=(\bar P_{n}^\top,\bar Q_{n}^\top)^\top$ with $\bar P_{n},\bar Q_{n}$ defined by \eqref{split sol}, for $n=1,\ldots,N^h$.
\begin{pro}\label{EqE}
For any $\beta\ge K_0\left(\sum\limits_{k=1}^d \|\sigma_k\|^2-2v\right),$
\begin{equation}\label{EqNI}
\sup_{n\le N^h}\mathbb{E}\left[\exp\left(\frac{U(X_n)}{e^{\beta t_n}}\right)\right]\le C(\beta)e^{U(X(0))}.
\end{equation}
\end{pro}
\begin{proof}
Notice that the AVF scheme preserves the Hamiltonian $U$ exactly, i.e., $U(\bar X_{n+1})=U(X_n)$ for $n=0,\ldots,N^h-1$ (see e.g. \cite[Proposition 2]{CCH16}). We define an auxiliary process $\tilde X(t)=(\tilde P(t)^\top,\tilde Q(t)^\top)^\top$ satisfying
\begin{equation*}
\left\{
\begin{split}
&\mathrm d \tilde{P}=-v\tilde{P} \,\mathrm d t+\sum_{k=1}^d \sigma_k\,\mathrm d W_{t}^{k},\,t\in(t_n,t_{n+1}],\\
&\mathrm d \tilde{Q}=0
\end{split}
\right.
\end{equation*}
with $\left(\tilde P(t_n)^\top,\tilde Q(t_n)^\top\right)^\top=\left(\bar P_{n+1}^\top,\bar Q_{n+1}^\top\right)^\top,\, \forall\,n=0,\ldots,N^h-1.$
By arguments similar to those in the proof of \eqref{EqEI}, we obtain
\begin{align*}
\mathbb{E}\left[\exp\left(\frac{U(\tilde X(t_{n+1}))}{e^{\beta t_{n+1}}}\right)\right]
\le\mathbb{E}\left[\exp\left(\frac{U(\tilde X(t_n))}{e^{\beta t_n}}\right)\right]\exp\left[\left(\frac{K_0}{2\beta}\sum_{k=1}^d\|\sigma_k\|^2\right)(e^{-\beta t_n}-e^{-\beta t_{n+1}})\right].
\end{align*}
Since $U(\tilde X(t_n))=U(\bar X_{n+1})=U(X_n)$ and $U(\tilde X(t_{n+1}))=U(X_{n+1})$, we have
\begin{align*}
&\mathbb{E}\left[\exp\left(\frac{U(X_{n+1})}{e^{\beta t_{n+1}}}\right)\right]\le\mathbb{E}\left[\exp\left(\frac{U(X_n)}{e^{\beta t_n}}\right)\right]\exp\left[\left(\frac{K_0}{2\beta}\sum_{k=1}^d\|\sigma_k\|^2\right)(e^{-\beta t_n}-e^{-\beta t_{n+1}})\right].
\end{align*}
As a consequence,
\begin{align*}
\sup_{n\le N^h}\mathbb{E}\left[\exp\left(\frac{U(X_n)}{e^{\beta t_n}}\right)\right]
&\le\prod_{i=0}^{N^h-1}\exp\left[\left(\frac{K_0}{2\beta}\sum_{k=1}^d\|\sigma_k\|^2\right)(e^{-\beta t_i}-e^{-\beta t_{i+1}})\right]e^{U(X(0))}\\
&\le\exp\left(\frac{K_0}{2\beta}\sum_{k=1}^d\|\sigma_k\|^2\right)e^{U(X(0))},
\end{align*}
which completes the proof.
\end{proof}
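The last inequality rests on the telescoping identity $\prod_{i=0}^{N^h-1}\exp\left(c(e^{-\beta t_i}-e^{-\beta t_{i+1}})\right)=\exp\left(c(1-e^{-\beta T})\right)\le e^{c}$ with $c=\frac{K_0}{2\beta}\sum_{k=1}^d\|\sigma_k\|^2$. A quick numerical sanity check of this identity, with purely illustrative parameter values, reads:

```python
import numpy as np

# Telescoping check: prod_i exp(c (e^{-beta t_i} - e^{-beta t_{i+1}}))
# equals exp(c (1 - e^{-beta T})), which is at most e^c.
beta, c, T, N = 0.7, 1.3, 2.0, 50   # illustrative values; c stands in for
t = np.linspace(0.0, T, N + 1)      # (K_0 / (2 beta)) * sum_k ||sigma_k||^2
factors = np.exp(c * (np.exp(-beta * t[:-1]) - np.exp(-beta * t[1:])))
prod = np.prod(factors)
closed_form = np.exp(c * (1.0 - np.exp(-beta * T)))
```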
Furthermore, the following moment boundedness result of the numerical solutions $X_n$ and $\bar X_n$ is established by using It\^o's formula and the Burkholder-Davis-Gundy inequality.
\begin{lem}\label{EE}
For any $p\ge1$, there exists $C=C(T,\sigma,X(0),p)>0$ such that
\begin{align*}
&\mathbb{E}\left[\sup_{n\le N^h}|U(\bar X_n)|^p\right]+ \mathbb{E}\left[\sup_{n\le N^h}|U(X_n)|^p\right]\le C.
\end{align*}
\end{lem}
\subsection{Probability Density Function}
After proving the existence and smoothness of the density function of the exact solution, it is natural to ask whether the numerical scheme inherits these properties (see e.g. \cite{BT96,HW96,KHA97}).
In particular,
for SDEs with superlinearly growing nonlinearities and degenerate additive noises, to the best of our knowledge, there exists no result on the existence of the density function of the numerical approximation.
In this part,
we give a probabilistic proof of the existence of the density function of the numerical solution of the stochastic Langevin equation with non-globally monotone coefficients under H\"ormander's condition.
Compared to the continuous case, it is more involved to establish the existence of the density function of the numerical approximation, even though the H\"{o}rmander condition holds.
We would like to mention that,
in the general case, H\"{o}rmander's condition alone does not guarantee the existence of the density function of the numerical solution.
Analogously to the proof of \cite[Theorem 2.2.1]{DN06}, the Malliavin derivative of $X_{n+1}$ exists and satisfies, for $r\in[0,t_n]$,
\begin{align*}
&D_rP_{n+1}=e^{-vh}\left(D_rP_n-h\int_0^1\nabla^2 F(Q_n+\tau(Q_{n+1}-Q_n))(D_rQ_n+\tau(D_rQ_{n+1}-D_rQ_n))\mathrm d \tau\right),\\
&D_rQ_{n+1}=D_rQ_n+hD_rP_n-\frac{h^2}{2}\int_0^1 \nabla^2 F(Q_n+\tau(Q_{n+1}-Q_n))(D_rQ_n+\tau(D_rQ_{n+1}-D_rQ_n))\,\mathrm d \tau,
\end{align*}
and for $r\in(t_n,t_{n+1}]$,
\begin{equation}\label{DX2}
\begin{split}
&D_rP_{n+1}=e^{-v(t_{n+1}-r)}\sigma,\\
&D_rQ_{n+1}=0.
\end{split}
\end{equation}
For simplicity, we introduce the following $m\times m$ symmetric matrices,
\begin{align*}
&F_1(Q_n,Q_{n+1}):=\int_0^1\nabla^2 F(Q_n+\tau(Q_{n+1}-Q_n))\tau\,\mathrm d \tau,\\
&F_2(Q_n,Q_{n+1}):=\int_0^1\nabla^2 F(Q_n+\tau(Q_{n+1}-Q_n))(1-\tau)\,\mathrm d \tau,
\end{align*}
and get
\begin{align*}
&\int_0^1\nabla^2 F(Q_n+\tau(Q_{n+1}-Q_n))(D_rQ_n+\tau(D_rQ_{n+1}-D_rQ_n))\,\mathrm d \tau\\
&=F_1(Q_n,Q_{n+1})D_rQ_{n+1}+F_2(Q_n,Q_{n+1})D_rQ_n.
\end{align*}
Therefore, for $r\in[0,t_n]$, we have
\begin{equation*}
\left[\begin{array}{cc}I & he^{-vh}F_1(Q_n,Q_{n+1})\\0 & I+\frac{h^2}{2}F_1(Q_n,Q_{n+1})\end{array}\right]\left[\begin{array}{cc}D_rP_{n+1}\\D_rQ_{n+1}\end{array}\right]=\left[\begin{array}{cc}e^{-vh}I & he^{-vh}F_2(Q_n,Q_{n+1})\\hI & I-\frac{h^2}{2}F_2(Q_n,Q_{n+1})\end{array}\right]\left[\begin{array}{cc}D_rP_n\\D_rQ_n\end{array}\right].
\end{equation*}
Since $\nabla^2F$ is bounded below by $-K$ uniformly, we have
\begin{align*}
&\lambda_{min}(F_1(Q_n,Q_{n+1}))=\inf_{\|y\|_2=1}\int_0^1 \tau y^\top\nabla^2F(Q_n+\tau(Q_{n+1}-Q_n))y\,\mathrm d \tau\ge\frac{-K}{2},\\
&\lambda_{min}(F_2(Q_n,Q_{n+1}))=\inf_{\|y\|_2=1}\int_0^1(1-\tau)y^\top\nabla^2 F(Q_n+\tau(Q_{n+1}-Q_n))y\,\mathrm d \tau\ge\frac{-K}{2},
\end{align*}
which imply that the matrix $I+\frac{h^2}{2}F_1(Q_n,Q_{n+1})$ is invertible for any $h<\frac{2}{\sqrt{K}}$ and $n=0,\ldots,N^h-1.$
In order to determine whether the Malliavin covariance matrix $\gamma_n$ is invertible,
we next derive a recursive relationship between $\gamma_{n+1}$ and $\gamma_n$. Notice that if $I+\frac{h^2}{2}F_1(Q_n,Q_{n+1})$ is invertible, then
\begin{align}\label{DX}
D_rX_{n+1}=A_nD_rX_n, \,r\in[0,t_n],
\end{align}
where $D_rX_n=\left[\begin{array}{c}D_rP_n\\D_rQ_n\end{array}\right]$ and
\begin{align*}
A_n
=\left[\begin{array}{cc}I & -he^{-vh}F_1(Q_n,Q_{n+1})\left(I+\frac{h^2}{2}F_1\left(Q_n,Q_{n+1}\right)\right)^{-1}\\0 & \left(I+\frac{h^2}{2}F_1\left(Q_n,Q_{n+1}\right)\right)^{-1}\end{array}\right]\left[\begin{array}{cc}e^{-vh}I & he^{-vh}F_2(Q_n,Q_{n+1})\\hI & I-\frac{h^2}{2}F_2(Q_n,Q_{n+1})\end{array}\right].
\end{align*}
From \eqref{DX2} and \eqref{DX}, it follows that
\begin{align}\label{MX}
\gamma_{n+1}:=&\int_0^{t_{n+1}}D_rX_{n+1}(D_rX_{n+1})^\top\,\mathrm d r\\\nonumber
=&\int_0^{t_n}D_rX_{n+1}(D_rX_{n+1})^\top\,\mathrm d r+\int_{t_n}^{t_{n+1}}D_rX_{n+1}(D_rX_{n+1})^\top\,\mathrm d r\\\nonumber
=&\int_0^{t_n}A_nD_rX_n(D_rX_n)^\top A_n^\top\,\mathrm d r+\int_{t_n}^{t_{n+1}}D_rX_{n+1}(D_rX_{n+1})^\top\,\mathrm d r\\
=&A_n\gamma_nA_n^\top+\frac{1-e^{-2vh}}{2v}\left[\begin{array}{cc}\sigma\sigma^\top&0\\0&0\end{array}\right],\,a.s.\nonumber
\end{align}
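The recursion \eqref{MX} is straightforward to iterate numerically. The sketch below is an illustration only: it takes $m=1$, so that $F_1,F_2$ reduce to scalars, and draws those scalars at random (bounded below by $-1/2$, mimicking the lower bound $-K/2$) rather than from an actual trajectory. It propagates $\gamma_n$ and confirms that $\gamma_1$ is rank one while $\gamma_n$ for $n\ge2$ is positive definite, in line with Theorem \ref{MRL} below.

```python
import numpy as np

def A_matrix(h, v, f1, f2):
    # Propagator in D_r X_{n+1} = A_n D_r X_n, written for m = 1
    # (F_1, F_2 are then the scalars f1, f2)
    inv = 1.0 / (1.0 + 0.5 * h**2 * f1)
    left = np.array([[1.0, -h * np.exp(-v * h) * f1 * inv],
                     [0.0, inv]])
    right = np.array([[np.exp(-v * h), h * np.exp(-v * h) * f2],
                      [h, 1.0 - 0.5 * h**2 * f2]])
    return left @ right

h, v, sigma = 0.01, 1.0, 0.2
noise = (1.0 - np.exp(-2.0 * v * h)) / (2.0 * v) * np.array([[sigma**2, 0.0],
                                                             [0.0, 0.0]])
rng = np.random.default_rng(0)
gamma = np.zeros((2, 2))
min_eigs = []
for n in range(10):
    f1, f2 = rng.uniform(-0.5, 2.0, size=2)  # placeholder values of F_1, F_2
    A = A_matrix(h, v, f1, f2)
    gamma = A @ gamma @ A.T + noise          # the recursion (MX)
    min_eigs.append(np.linalg.eigvalsh(gamma).min())
# min_eigs[0] vanishes (gamma_1 is rank one); min_eigs[n] > 0 for n >= 1
```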
Now we turn to showing the following regularity estimate of $X_n$ in the Malliavin sense.
\begin{lem}\label{NDI}
Let Assumption \ref{F2} hold. Then
\begin{equation}\label{eq43:1}
X_n\in \mathbb D^\infty(\mathbb{R}^{2m}),\, n=1,\ldots,N^h.
\end{equation}
More precisely, there exists a positive constant $h_0$ such that for any $h\in(0,h_0]$, $\alpha\ge1$ and $p\ge1$,
\begin{equation}\label{eq43:2}
\sup\limits_{r_1,\cdots,r_\alpha \in [0,T]}\mathbb{E}\left[\sup\limits_{r_1\lor \cdots \lor r_\alpha\le t_n\le T}\|D_{r_1,\ldots,r_\alpha}X_n\|^p\right]\le C,\, n=1,\ldots,N^h,
\end{equation}
holds for some positive constant $C=C(\alpha,p)$.
\end{lem}
\begin{proof}
Since
\eqref{eq43:1} follows from \eqref{eq43:2}, it suffices to prove \eqref{eq43:2}, which we show by induction on $\alpha$.

\textit{Step 1}: Let $r_1\in(t_{i_1},t_{{i_1}+1}]$ for some $0\le i_1\le N^h-1$. It follows from \eqref{DX} that for any $i_1<n\le N^h-1$,
\begin{align}\label{DEP}
D_{r_1}P_{n+1}&=\left(I+\frac{h^2}{2}F_1\left(Q_n,Q_{n+1}\right)\right)^{-1}e^{-vh}\left(I-\frac{h^2}{2}F_1\left(Q_n,Q_{n+1}\right)\right)D_{r_1}P_n\\\nonumber
&+\left(I+\frac{h^2}{2}F_1\left(Q_n,Q_{n+1}\right)\right)^{-1}he^{-vh}\left(F_2(Q_n,Q_{n+1})-F_1(Q_n,Q_{n+1})\right)D_{r_1}Q_n\\\nonumber
&+\left(I+\frac{h^2}{2}F_1\left(Q_n,Q_{n+1}\right)\right)^{-1}h^3e^{-vh}F_1(Q_n,Q_{n+1})F_2(Q_n,Q_{n+1})D_{r_1}Q_n,\\
D_{r_1}Q_{n+1}&=\left(I+\frac{h^2}{2}F_1\left(Q_n,Q_{n+1}\right)\right)^{-1}\left(hD_{r_1}P_n+\left(I-\frac{h^2}{2}F_2(Q_n,Q_{n+1})\right)D_{r_1}Q_n\right).\label{DEQ}
\end{align}
By the spectral mapping theorem and the symmetry of $F_1(Q_n,Q_{n+1})$, for any $n=0,\ldots,N^h-1$, we get
\begin{align}\label{term1}
&\left\|\left(I+\frac{h^2}{2}F_1(Q_n,Q_{n+1})\right)^{-1}\right\|
=\max_{1\le i\le m}\left|\frac{1}{1+\frac{h^2}{2}\lambda_i(F_1(Q_n,Q_{n+1}))}
\right|,\\\label{term2}
&\left\|\left(I+\frac{h^2}{2}F_1(Q_n,Q_{n+1})\right)^{-1}\left(I-\frac{h^2}{2}F_1(Q_n,Q_{n+1})\right)\right\|
=\max_{1\le i\le m}\left|\frac{1-\frac{h^2}{2}\lambda_i(F_1(Q_n,Q_{n+1}))}{1+\frac{h^2}{2}\lambda_i(F_1(Q_n,Q_{n+1}))}\right|,\\\label{term3}
&\left\|\left(I+\frac{h^2}{2}F_1(Q_n,Q_{n+1})\right)^{-1}\frac{h^2}{2}F_1(Q_n,Q_{n+1})\right\|
=\max_{1\le i\le m}\left|\frac{\frac{h^2}{2}\lambda_i(F_1(Q_n,Q_{n+1}))}{1+\frac{h^2}{2}\lambda_i(F_1(Q_n,Q_{n+1}))}\right|.
\end{align}
Next we estimate these three terms separately. Choosing $h_0\le\sqrt{\frac{2}{K}}$ and using the fact that $\lambda_{min}(F_1(Q_n,Q_{n+1}))\ge-\frac{K}{2}$, we obtain
$1+\frac{h^2}{2}\lambda_i(F_1(Q_n,Q_{n+1}))
\ge\frac{1}{2}$ for all $i=1,\ldots,m.$
Therefore
\begin{equation}\label{DEF1}
\left\|\left(I+\frac{h^2}{2}F_1(Q_n,Q_{n+1})\right)^{-1}\right\|\le2.
\end{equation}
Notice that if $\lambda_i(F_1(Q_n,Q_{n+1}))\ge0$, the left-hand sides of \eqref{term2} and \eqref{term3} are bounded by $1$. If
$-\frac{K}{2}\le\lambda_i(F_1(Q_n,Q_{n+1}))<0$,
the left-hand sides of \eqref{term2} and \eqref{term3} are bounded as
\begin{align*}
\left\|\left(I+\frac{h^2}{2}F_1(Q_n,Q_{n+1})\right)^{-1}\left(I-\frac{h^2}{2}F_1(Q_n,Q_{n+1})\right)\right\|&\le
\frac{1+\frac{h^2K}{4}}{1-\frac{h^2K}{4}}=1+\frac{\frac{h^2K}{2}}{1-\frac{h^2K}{4}}\le1+h^2K,
\end{align*}
and
\begin{equation*}
\left\|\left(I+\frac{h^2}{2}F_1(Q_n,Q_{n+1})\right)^{-1}\frac{h^2}{2}F_1(Q_n,Q_{n+1})\right\|
\le
\frac{\frac{h^2K}{4}}{1-\frac{h^2K}{4}}\le\frac{h^2}{2}K\le1.
\end{equation*}
Hence
\begin{align}\label{DEF2}
&\left\|\left(I+\frac{h^2}{2}F_1(Q_n,Q_{n+1})\right)^{-1}\left(I-\frac{h^2}{2}F_1(Q_n,Q_{n+1})\right)\right\|\le1+h^2K,
\end{align}
and
\begin{align}
\label{DEF3}
&\left\|\left(I+\frac{h^2}{2}F_1(Q_n,Q_{n+1})\right)^{-1}\frac{h^2}{2}F_1(Q_n,Q_{n+1})\right\|\le1.
\end{align}
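The bounds \eqref{DEF1}--\eqref{DEF3} are statements about the scalar map $\lambda\mapsto x:=\frac{h^2}{2}\lambda$ restricted to $x\ge-\frac{h^2K}{4}\ge-\frac12$. The following check (with illustrative values of $K$, including the borderline stepsize $h=\sqrt{2/K}$) verifies them numerically over a grid of admissible eigenvalues:

```python
import numpy as np

# Scalar bounds behind (DEF1)-(DEF3): with x = (h^2/2) * lambda and
# lambda >= -K/2, h <= sqrt(2/K), one has x >= -1/2 and hence
# |1/(1+x)| <= 2, |(1-x)/(1+x)| <= 1 + h^2 K, |x/(1+x)| <= 1.
K = 4.0
h = np.sqrt(2.0 / K)                       # borderline admissible stepsize
lams = np.linspace(-K / 2.0, 50.0, 10001)  # admissible eigenvalues of F_1
x = 0.5 * h**2 * lams
b1 = np.abs(1.0 / (1.0 + x))
b2 = np.abs((1.0 - x) / (1.0 + x))
b3 = np.abs(x / (1.0 + x))
```

At $\lambda=-K/2$ the second bound is attained with equality, $\frac{1+h^2K/4}{1-h^2K/4}=3=1+h^2K$, which shows the estimate is sharp at the borderline stepsize.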
Furthermore, \eqref{DEF3} leads to
\begin{align*}
&\left\|\left(I+\frac{h^2}{2}F_1\left(Q_n,Q_{n+1}\right)\right)^{-1}h^3e^{-vh}F_1(Q_n,Q_{n+1})F_2(Q_n,Q_{n+1})D_{r_1}Q_n\right\|\\\nonumber
&\le 2h\left\|\left(I+\frac{h^2}{2}F_1\left(Q_n,Q_{n+1}\right)\right)^{-1}\frac{h^2}{2}F_1(Q_n,Q_{n+1})\right\|\left\|F_2(Q_n,Q_{n+1})\right\|\left\|D_{r_1}Q_n\right\|\\\label{DE2}
&\le 2h\left\|F_2(Q_n,Q_{n+1})\right\|\left\|D_{r_1}Q_n\right\|.
\end{align*}
From \eqref{DEP}-\eqref{DEF3}, it follows that there exists some constant $C=C(K)$ such that
\begin {align*}
&\|D_{r_1}P_{n+1}\|\le(1+Ch^2)\|D_{r_1}P_n\|+Ch\|F_1(Q_n,Q_{n+1})\|\|D_{r_1}Q_n\|+Ch\|F_2(Q_n,Q_{n+1})\|\|D_{r_1}Q_n\|,\\
&\|D_{r_1}Q_{n+1}\|\le(1+Ch^2)\|D_{r_1}Q_n\|+Ch\|D_{r_1}P_n\|.
\end{align*}
Set $e_n=\|D_{r_1}P_n\|+\|D_{r_1}Q_n\|$, then
\begin{equation}\label{DEen}
e_{n+1}\le e_n+Ch\left(1+\left\|F_1(Q_n,Q_{n+1})\right\|+\|F_2(Q_n,Q_{n+1})\|\right)e_n.
\end{equation}
Due to \eqref{DX2} and $r_1\in(t_{i_1},t_{{i_1}+1}]$, there exists a positive constant $C=C(\sigma)$ such that
$\|D_{r_1}P_{{i_1}+1}\|+\|D_{r_1}Q_{{i_1}+1}\|\le C(\sigma).$
The discrete Gronwall lemma and \eqref{DEen} imply that
\begin{align*}
e_{n}\le C(\sigma)\exp\left(\sum_{j={i_1}}^{n-1}Ch\left(\|F_1(Q_j,Q_{j+1})\|+\|F_2(Q_j,Q_{j+1})\|+1\right)\right),\,\forall\,i_1<n\le N^h.
\end{align*}
From the H\"{o}lder, Jensen and Young inequalities and the fact $(N^h-{i_1})h\le T$, it
follows that
\begin{align*}\nonumber
\mathbb{E}\left[\sup_{{i_1}<n\le N^h}e_n^p\right]&\le C(\sigma,p)\mathbb{E}\left[\exp\left(\sum_{j={i_1}}^{N^h-1}Ch(\|F_1(Q_j,Q_{j+1})\|+\|F_2(Q_j,Q_{j+1})\|+1)\right)\right]\\\nonumber
&\le C(\sigma,p)\frac{1}{N^h-{i_1}}\sum_{j={i_1}}^{N^h-1}\mathbb{E}\Big[\exp\left(CT(\|F_1(Q_j,Q_{j+1})\|+\|F_2(Q_j,Q_{j+1})\|+1)\right)\Big].
\end{align*}
By Assumption \ref{F2} and the definitions of $F_i,\,i=1,2$, we arrive at
\begin{equation*}
\|F_i(Q_j,Q_{j+1})\|\le C(1+\lfloor Q_j\rfloor^{2\vec{l}-\epsilon\mathbbm{1}}+\lfloor Q_{j+1}\rfloor^{2\vec{l}-\epsilon\mathbbm{1}}),\, i=1,2.
\end{equation*}
Applying the H\"{o}lder inequality and \eqref{EqNI}, for any $i_1\le j\le N^h-1$, we obtain
\begin{align}\label{DEFE}
&\mathbb{E}\left[\exp\left(CT\left(\|F_1(Q_j,Q_{j+1})\|+\|F_2(Q_j,Q_{j+1})\|+1\right)\right)\right]\\\nonumber
&\le C\sup_{i_1\le j\le N^h-1}\mathbb{E}\left[\exp\left(C(\lfloor Q_j\rfloor^{2\vec{l}-\epsilon\mathbbm{1}}+\lfloor Q_{j+1}\rfloor^{2\vec{l}-\epsilon\mathbbm{1}})\right)\right]\\\nonumber
&\le C\sup_{i_1\le j\le N^h}\mathbb{E}\left[\exp\left(C\lfloor Q_j\rfloor^{2\vec{l}-\epsilon\mathbbm{1}}\right)\right]+C\le C.
\end{align}
The above estimates, combined with the fact
$\|D_{r_1}X_n\|^p\le C\left(p,m,d\right)e_n^p,$
yield
\begin{equation}\label{D1}
\sup\limits_{r_1\in [0,T]}\mathbb{E}\left[\sup\limits_{r_1\le t_n\le T}\|D_{r_1}X_n\|^p\right]\le C,
\end{equation}
which proves the assertion for $\alpha=1$.
\textit{Step 2}: Let $r_2\in(t_{i_2},t_{{i_2}+1}]$ for $0\le i_2\le N^h-1$. Taking the Malliavin derivatives on both sides of \eqref{DEP} and \eqref{DEQ} yields that, for any $i_1\lor i_2<n\le N^h-1$,
\begin{align}\label{DEP2}
&D_{r_2}D_{r_1}P_{n+1}\\\nonumber
&=\left(I+\frac{h^2}{2}F_1\left(Q_n,Q_{n+1}\right)\right)^{-1}e^{-vh}\left(I-\frac{h^2}{2}F_1\left(Q_n,Q_{n+1}\right)\right)D_{r_2}D_{r_1}P_n\\\nonumber
&\quad+\left(I+\frac{h^2}{2}F_1\left(Q_n,Q_{n+1}\right)\right)^{-1}he^{-vh}\left(F_2(Q_n,Q_{n+1})-F_1(Q_n,Q_{n+1})\right)D_{r_2}D_{r_1}Q_n\\\nonumber
&\quad+\left(I+\frac{h^2}{2}F_1\left(Q_n,Q_{n+1}\right)\right)^{-1}h^3e^{-vh}F_1(Q_n,Q_{n+1})F_2(Q_n,Q_{n+1})D_{r_2}D_{r_1}Q_n\\\nonumber
&\quad+D_{r_2}\left[\left(I+\frac{h^2}{2}F_1\left(Q_n,Q_{n+1}\right)\right)^{-1}\right]e^{-vh}\left(I-\frac{h^2}{2}F_1\left(Q_n,Q_{n+1}\right)\right)D_{r_1}P_n\\\nonumber
&\quad+\left(I+\frac{h^2}{2}F_1\left(Q_n,Q_{n+1}\right)\right)^{-1}e^{-vh}D_{r_2}\left[I-\frac{h^2}{2}F_1(Q_n,Q_{n+1})\right]D_{r_1}P_n\\\nonumber
&\quad+D_{r_2}\left[\left(I+\frac{h^2}{2}F_1\left(Q_n,Q_{n+1}\right)\right)^{-1}\right]he^{-vh}\left(F_2(Q_n,Q_{n+1})-F_1(Q_n,Q_{n+1})\right)D_{r_1}Q_n\\\nonumber
&\quad+\left(I+\frac{h^2}{2}F_1\left(Q_n,Q_{n+1}\right)\right)^{-1}he^{-vh}D_{r_2}\left[F_2(Q_n,Q_{n+1})-F_1(Q_n,Q_{n+1})\right]D_{r_1}Q_n\\\nonumber
&\quad+D_{r_2}\left[\left(I+\frac{h^2}{2}F_1\left(Q_n,Q_{n+1}\right)\right)^{-1}\right]h^3e^{-vh}F_1(Q_n,Q_{n+1})F_2(Q_n,Q_{n+1})D_{r_1}Q_n\\\nonumber
&\quad+\left(I+\frac{h^2}{2}F_1\left(Q_n,Q_{n+1}\right)\right)^{-1}h^3e^{-vh}D_{r_2}\left[F_1(Q_n,Q_{n+1})\right]F_2(Q_n,Q_{n+1})D_{r_1}Q_n\\\nonumber
&\quad+\left(I+\frac{h^2}{2}F_1\left(Q_n,Q_{n+1}\right)\right)^{-1}h^3e^{-vh}F_1(Q_n,Q_{n+1})D_{r_2}\left[F_2(Q_n,Q_{n+1})\right]D_{r_1}Q_n\\\nonumber
&=:J_{1n}^1+J_{2n}^1+J_{3n}^1+J_{4n}^1+J_{5n}^1+J_{6n}^1+J_{7n}^1+J_{8n}^1+J_{9n}^1+J_{10n}^1,
\end{align}
and
\begin{align}\label{DEQ2}
&D_{r_2}D_{r_1}Q_{n+1}\\\nonumber
&=\left(I+\frac{h^2}{2}F_1\left(Q_n,Q_{n+1}\right)\right)^{-1}\left[hD_{r_2}D_{r_1}P_n+(I-\frac{h^2}{2}F_2(Q_n,Q_{n+1}))D_{r_2}D_{r_1}Q_n\right]\\\nonumber
&\quad+hD_{r_2}\left[\left(I+\frac{h^2}{2}F_1\left(Q_n,Q_{n+1}\right)\right)^{-1}\right]D_{r_1}P_n\\\nonumber
&\quad+D_{r_2}\left[\left(I+\frac{h^2}{2}F_1\left(Q_n,Q_{n+1}\right)\right)^{-1}\right]\left(I-\frac{h^2}{2}F_2(Q_n,Q_{n+1})\right)D_{r_1}Q_n\\\nonumber
&\quad+\left(I+\frac{h^2}{2}F_1\left(Q_n,Q_{n+1}\right)\right)^{-1}D_{r_2}\left[I-\frac{h^2}{2}F_2(Q_n,Q_{n+1})\right]D_{r_1}Q_n\\\nonumber
&=:J_{1n}^2+J_{2n}^2+J_{3n}^2+J_{4n}^2.
\end{align}
We now claim that
for $\iota=1,\, \kappa=4,\ldots,10$ and $ \iota=2,\,\kappa=2,3,4$, it holds that
\begin{equation}\label{JKI}
\mathbb{E}[\|h^{-1}J^\iota_{\kappa n}\|^q]\le C(q),
\end{equation}
for any $q\in[1,\infty)$, $i_1\lor i_2<n\le N^h-1$.
In fact, by the chain rule, we have
\begin{align*}
&D_{r_2}\left[\left(I+\frac{h^2}{2}F_1\left(Q_n,Q_{n+1}\right)\right)^{-1}\right]\\
&=-\frac{h^2}{2}\left(I+\frac{h^2}{2}F_1\left(Q_n,Q_{n+1}\right)\right)^{-1}D_{r_2}\left[F_1(Q_n,Q_{n+1})\right]\left(I+\frac{h^2}{2}F_1\left(Q_n,Q_{n+1}\right)\right)^{-1},\\
&D_{r_2}\left[I-\frac{h^2}{2}F_1(Q_n,Q_{n+1})\right]=-\frac{h^2}{2}D_{r_2}\left[F_1(Q_n,Q_{n+1})\right],\\
&D_{r_2}\left[I-\frac{h^2}{2}F_2(Q_n,Q_{n+1})\right]=-\frac{h^2}{2}D_{r_2}\left[F_2(Q_n,Q_{n+1})\right].
\end{align*}
From \eqref{DEF1}, the estimate
\begin{align*}
\left\|I-\frac{h^2}{2}F_2(Q_n,Q_{n+1})\right\|\le C+\frac{h^2}{2}\left\|F_2(Q_n,Q_{n+1})\right\|,
\end{align*}
and the fact that $L^{\infty-}(\Omega)$ is an algebra, it suffices to show that
\begin{align}\label{DR2F}
\|D_{r_2}F_i(Q_n,Q_{n+1})\|, \,\|F_i(Q_n,Q_{n+1})\|\in L^{\infty-}(\Omega),\, i=1,2.
\end{align}
Combining
\begin{eqnarray*}
D_{r_2}F_i(Q_n,Q_{n+1})=\nabla F_i(Q_n,Q_{n+1})^\top\left[\begin{array}{c}D_{r_2}Q_n \\D_{r_2}Q_{n+1}\end{array}\right], \,i=1,2,
\end{eqnarray*}
\eqref{D1} and Lemma \ref{EE}, we get \eqref{DR2F},
which implies that \eqref{JKI} holds.
Define $\mathrm{E}_{n}:=\|D_{r_2}D_{r_1}P_n\|+\|D_{r_2}D_{r_1}Q_n\|$. From \eqref{DEP2}, \eqref{DEQ2} and \eqref{DEF1}-\eqref{DEF3}, it follows that
\begin{equation*}\label{DR2e}
\mathrm{E}_{n+1}\le \mathrm{E}_n+Ch(1+\|F_1(Q_n,Q_{n+1})\|+\|F_2(Q_n,Q_{n+1})\|)\mathrm{E}_n+hJ_n,
\end{equation*}
with $J_n=h^{-1}\sum \|J_{\kappa n}^\iota\|=\sum \|h^{-1}J_{\kappa n}^\iota\|$, where the sums are taken over the index set $\{\iota=1,\, \kappa=4,\ldots,10\}\cup\{\iota=2,\, \kappa=2,3,4\}$. It follows from \eqref{JKI} that
\begin{equation}\label{JKI1}
\mathbb{E}[\|J_n\|^q]\le C(q),\,\forall\,i_1\lor i_2<n\le N^h-1.
\end{equation}
Since $r_1\in(t_{i_1},t_{{i_1}+1}]$ and $r_2\in(t_{i_2},t_{{i_2}+1}]$, it follows from \eqref{DX2} that
$\mathrm{E}_{(i_1\lor i_2)+1}=0$.
Since $D_{r_1,r_2}$ is symmetric in $r_1$ and $r_2$, we may assume without loss of generality that $i_1\le i_2$.
By using the discrete Gronwall lemma and then taking $p$th power on both sides, we obtain that for any $i_2<n\le N^h-1$,
\begin{align*}
&\mathrm{E}_n^p\le\exp\left(\sum_{j=i_2+1}^{n-1}Ch(\|F_1(Q_j,Q_{j+1})\|+\|F_2(Q_j,Q_{j+1})\|+1)\right)\left(\sum_{j=i_2+1}^{n-1}hJ_j\right)^p\\
&\le\exp\left(\sum_{j=i_2+1}^{n-1}Ch(\|F_1(Q_j,Q_{j+1})\|+\|F_2(Q_j,Q_{j+1})\|+1)\right)h^p(n-1-i_2)^{p-1}\left(\sum_{j=i_2+1}^{n-1}J_j^p\right).
\end{align*}
The subsequent argument proceeds as in Step 1, based on \eqref{DEFE} and \eqref{JKI1}. For $\alpha\ge3$, the desired result is obtained by a recursive argument.
\end{proof}
\begin{rems}\label{Fk0}
Let $F\in C_p^k$ for some $k\ge2$, $t\in(0,T]$ and $n=1,\ldots,N^h$. From the proofs of Lemmas \ref{NS} and \ref{NDI}, for any $\alpha\le k-2$ and $p\ge1$, we have $X(t),\,X_n\in\mathbb{D}^{\alpha,p}(\mathbb{R}^{2m})$.
\end{rems}
Based on Lemma \ref{NDI}, we are now in a position to prove the existence of the density function of $X_n$ for $n=2,\ldots,N^h$. We remark that $X_1$ is degenerate in the Malliavin sense, since $\gamma_1$ is not invertible.
\begin{tho}\label{MRL}
Let Assumptions \ref{F2}-\ref{F4} hold. Then for any $n\in\{2,\ldots,N^h\}$, the law of $X_n$ is absolutely continuous with respect to the Lebesgue measure on $\mathbb R^{2m}$.
\end{tho}
\begin{proof}
In view of \cite[Theorem 2.1.2]{DN06} and Lemma \ref{NDI}, it remains to prove that for $n=2,\ldots,N^h$, the Malliavin covariance matrix $\gamma_{n}$ of $X_n$ is invertible a.s. Since $\gamma_{n}=\int_0^{t_n}D_rX_n(D_rX_n)^\top\,\mathrm d r$ is a nonnegative definite matrix, it suffices to show that $\lambda_{min}(\gamma_{n+1})>0$ a.s. for all $n=1,\ldots,N^h-1$. Notice that the symmetry of $\gamma_{n}$ yields that
\begin{equation*}\label{Gam}
\lambda_{min}(\gamma_{n+1})=\min_{\substack{y=(y_1^\top,y_2^\top)^\top\in\mathbb{R}^{2m}\\\|y\|=1}}y^\top\gamma_{n+1}y.
\end{equation*}
Since $\sigma\sigma^\top$ is invertible and $A_n\gamma_nA_n^\top$ is nonnegative definite, \eqref{MX} gives $y^\top\gamma_{n+1}y\ge\frac{1-e^{-2vh}}{2v}\|y_1^\top\sigma\|^2>0$ whenever $y_1\neq0$. It therefore suffices to show that for $y=(0^\top,y_2^\top)^\top$ with $\|y_2\|=1$, it holds that $y^\top\gamma_{n+1}y>0$ a.s. We prove this by induction on $n$.
\textit{Step 1:}
Let $n=1$. By \eqref{MX}, we have
\begin{equation*}
\begin{split}
y^\top\gamma_2y
=&\frac{1-e^{-2vh}}{2v}\left\|{y_1^\top e^{-vh}\left(I-\frac{h^2}{2}F_1(Q_1,Q_2)\right)\left(I+\frac{h^2}{2}F_1(Q_1,Q_2)\right)^{-1}\sigma}\right.\\
&\left.{\qquad\qquad\quad +hy_2^\top\left(I+\frac{h^2}{2}F_1(Q_1,Q_2)\right)^{-1}\sigma}\right\|^2+\frac{1-e^{-2vh}}{2v}\|y_1^\top\sigma\|^2.
\end{split}
\end{equation*}
Substituting $y_1=0, \|y_2\|=1$ into the above equation and using the invertibility of $\sigma\sigma^\top$ and $I+\frac{h^2}{2}F_1(Q_1,Q_2)$ lead to
\begin{align*}
y^\top\gamma_2y=\frac{1-e^{-2vh}}{2v}\left\|hy_2^\top\left(I+\frac{h^2}{2}F_1(Q_1,Q_2)\right)^{-1}\sigma\right\|^2>0.
\end{align*}
\textit{Step 2}:
Assume that $\lambda_{min}(\gamma_{n})>0$ a.s., i.e., $\gamma_n$ is invertible a.s.
Substituting $y_1=0, \|y_2\|=1$ and \eqref{MX} into the expression of $y^\top\gamma_{n+1}y$ gives
\begin{align*}
y^\top\gamma_{n+1}y=\left[\begin{array}{c}y_1^\top,y_2^\top\end{array}\right]A_n\gamma_n A_n^\top\left[\begin{array}{c}y_1\\y_2\end{array}\right]+\frac{1-e^{-2vh}}{2v}\|y_1^\top\sigma\|^2=\left[\begin{array}{c}z_1^\top,z_2^\top\end{array}\right]\gamma_n\left[\begin{array}{c}z_1\\z_2\end{array}\right],
\end{align*}
where
\begin{align*}
&z_1=hy_2^\top\left(I+\frac{h^2}{2}F_1\left(Q_n,Q_{n+1}\right)\right)^{-1},\\
&z_2=y_2^\top\left(I+\frac{h^2}{2}F_1\left(Q_n,Q_{n+1}\right)\right)^{-1}\left(I-\frac{h^2}{2}F_2(Q_n,Q_{n+1})\right).
\end{align*}
Then the desired result $y^\top\gamma_{n+1}y>0$ a.s. follows from $z_1\neq0$ and the induction assumption that $\gamma_n$ is invertible a.s., which completes the proof.
\end{proof}
\section{Strong convergence}\label{S5}
In this section, we present the optimal strong convergence rate of the splitting AVF scheme \eqref{split sol} under Assumption \ref{F2}. Before that, we recall the mild form of the exact solution of equation \eqref{SDE1}: for any $0\le s<t\le T,$
\begin{equation}\label{Exact sol}
\left\{
\begin{split}
&P(t)=e^{-v(t-s)}P(s)-\int_s^te^{-v(t-u)}\nabla F(Q(u))\,\mathrm d u+\sum_{k=1}^d\int_s^te^{-v(t-u)}\sigma_k\,\mathrm d W_{u}^{k},\\
&Q(t)=Q(s)+\int_s^t P(u)\,\mathrm d u.
\end{split}
\right.
\end{equation}
According to the exponential integrability properties of both the exact and the numerical solutions, an a priori strong error estimate between $X(t_n)$ and $X_n$ is established in the following lemma.
\begin{lem}\label{lem2}
Let Assumption \ref{F2} hold, $h_0$ be a sufficiently small positive constant and $p\ge1$. Then there exists some positive constant $C=C(p,T,\sigma,X(0))$ such that for any $h\in(0,h_0]$,
\begin{equation*}
\sup_{n\le N^h}\|X_n-X(t_n)\|_{L^{2p}(\Omega;\mathbb{R}^{2m})}\le Ch^{1/2}.
\end{equation*}
\end{lem}
\begin{proof}
From \eqref{split sol} and \eqref{Exact sol}, it follows that
\begin{align}\label{P}
P_{n+1}-P(t_{n+1})=&e^{-vh}(P_n-P(t_n))+\int_{t_n}^{t_{n+1}}[-e^{-vh}+e^{-v(t_{n+1}-t)}]\nabla F(Q(t))\,\mathrm d t\\\nonumber
&+e^{-vh}\int_{t_n}^{t_{n+1}}\int_0^1R_1\,\mathrm d \tau\,\mathrm d t,\\\label{Q}
Q_{n+1}-Q(t_{n+1})=&Q_n-Q(t_n)+h(P_n-P(t_n))+R_2-\sum_{k=1}^d \int_{t_n}^{t_{n+1}}\int_{t_n}^t e^{-v(t-s)}\sigma_k\,\mathrm d W_{s}^{k}\,\mathrm d t,
\end{align}
where
\begin{align*}
R_1:&=\nabla F(Q(t))-\nabla F(Q_n+\tau(Q_{n+1}-Q_n)),\\
R_2:&=\int_{t_n}^{t_{n+1}}\int_{t_n}^{t}e^{-v(t-s)}\nabla F(Q(s))\,\mathrm d s\,\mathrm d t+\left(h-\frac{1-e^{-vh}}{v}\right)P(t_n)\\
&-\frac{h^2}{2}\int_0^1\nabla F(Q_n+\tau(Q_{n+1}-Q_n))\,\mathrm d\tau.
\end{align*}
The fundamental theorem of calculus yields that
\begin{align*}
R_1&=\int_0^1 \nabla^2 F\left(\theta Q(t)+\left(1-\theta\right)\left(Q_n+\tau\left(Q_{n+1}-Q_n\right)\right)\right)(Q(t)-Q_n-\tau(Q_{n+1}-Q_n))\,\mathrm d \theta\\
&=\int_0^1 \nabla^2 F\left(\theta Q(t)+\left(1-\theta\right)\left(Q_n+\tau\left(Q_{n+1}-Q_n\right)\right)\right)(Q(t_n)-Q_n)\,\mathrm d \theta\\
+&\int_0^1 \nabla^2 F\left(\theta Q(t)+\left(1-\theta\right)\left(Q_n+\tau\left(Q_{n+1}-Q_n\right)\right)\right)\left(\int_{t_n}^{t} P(s)\,\mathrm d s-\frac{\tau h}{2}(P_n+\bar P_{n+1})\right)\,\mathrm d \theta,
\end{align*}
The inequalities $1-e^{-vh}\le Ch$ and $e^{-vh}-1+vh\le Ch^2$ for all $h\le1$, with $C$ independent of $h$, together with Assumption \ref{F2}, imply that for any $\theta\in(0,1)$, $\tau\in(0,1)$, $t\in[t_n,t_{n+1}]$ and $n=0,\ldots,N^h-1$,
\begin{align*}
\|\nabla F(Q(t))\|&\le C+\|Q(t)\|^{2|\,\vec{l}\,|_{\infty}},\\
\|\nabla F(Q_n+\tau(Q_{n+1}-Q_n))\|&\le C+\|Q_n\|^{2|\,\vec{l}\,|_{\infty}}+\|Q_{n+1}\|^{2|\,\vec{l}\,|_{\infty}},\\
\|\nabla^2 F(\theta Q(t)+(1-\theta)(Q_n+\tau(Q_{n+1}-Q_n)))\|&\le C(1+\lfloor Q_n\rfloor^{2{\vec{l}-\epsilon\mathbbm{1}}}+\lfloor Q_{n+1}\rfloor^{2{\vec{l}-\epsilon\mathbbm{1}}}+\lfloor Q(t)\rfloor^{2{\vec{l}-\epsilon\mathbbm{1}}}).
\end{align*}
Applying the Young inequality and the triangle inequality, we get
\begin{align}\label{PN}
&\|P_{n+1}-P(t_{n+1})\|\le \|P_n-P(t_n)\|+G_n\|Q(t_n)-Q_n\|+Ch^2K_{1n},\\\label{QN}
&\|Q_{n+1}-Q(t_{n+1})\|\le \|Q_n-Q(t_n)\|+h\|P(t_n)-P_n\|+Ch^2K_{2n}+\|\eta_n\|,
\end{align}
where
\begin{align*}
K_{1n}&=\left(1+\sup_{t\in[0,T]}\|Q(t)\|^{2|\,\vec{l}\,|_{\infty}}+\|Q_n\|^{2|\,\vec{l}\,|_{\infty}}+\|Q_{n+1}\|^{2|\,\vec{l}\,|_{\infty}}\right)\left(\sup_{t\in[0,T]}\|P(t)\|+\|P_n\|+\|\bar P_{n+1}\|+1\right),\\
K_{2n}&=1+\sup_{t\in[0,T]}\|Q(t)\|^{2|\,\vec{l}\,|_{\infty}}+\|Q_n\|^{2|\,\vec{l}\,|_{\infty}}+\|Q_{n+1}\|^{2|\,\vec{l}\,|_{\infty}}+\sup_{t\in[0,T]}\|P(t)\|,\\
\eta_n&=\sum_{k=1}^d \int_{t_n}^{t_{n+1}}\int_{t_n}^t \sigma_k\,\mathrm d W_{s}^{k}\,\mathrm d t,
\end{align*}
and
\begin{equation}\label{Gn}
G_n=\int_{t_n}^{t_{n+1}} C\left(1+\lfloor Q(t)\rfloor^{2\vec{l}-\epsilon\mathbbm{1}}+\lfloor
Q_n\rfloor^{2\vec{l}-\epsilon\mathbbm{1}}+\lfloor Q_{n+1}\rfloor^{2\vec{l}-\epsilon\mathbbm{1}}\right)\,\mathrm d t.
\end{equation}
Define $\mathcal{E}_{n+1}:=\|P_{n+1}-P(t_{n+1})\|+\|Q_{n+1}-Q(t_{n+1})\|$. The estimates \eqref{PN} and \eqref{QN} lead to
\begin{align*}\label{SE1}
&\mathcal{E}_{n+1} \le\mathcal{E}_{n}+(h+G_n)\mathcal {E}_{n}+K_n,
\end{align*}
where
$K_n=Ch^2K_{1n}+Ch^2K_{2n}+\|\eta_n\|\le Ch^2K_{1n}+\|\eta_n\|.$
Using the discrete Gronwall lemma and $\mathcal{E}_0=0$, we obtain
\begin {equation*}
\mathcal{E}_{n+1} \le \left(\sum_{j=0}^{n} K_j\right)\exp\left(\sum_{j=0}^{n} (h+G_j)\right),\,\forall\,n=0,\ldots,N^h-1.
\end {equation*}
Taking $p$th power on both sides and applying the H\"{o}lder inequality, we have
\begin {align}\label{SE2}
\mathcal{E}_{n+1}^p &\le \left(\sum_{j=0}^{n} K_j\right)^p\exp\left(\sum_{j=0}^{n} p(h+G_j)\right)\\\nonumber
&\le n^{p-1}\left(\sum_{j=0}^{n} K_j^p\right)\exp\left(pT\right)\exp\left(\sum_{j=0}^{n} pG_j\right),\,\forall\, n=0,\ldots,N^h-1.
\end {align}
The H\"{o}lder inequality, together with Lemmas \ref{MB}, \ref{EE} implies that
\begin {equation}\label{K12}
\left\|K_{1j}^{p}\right\|_{L^2(\Omega)} \le C,\,\left\|K_{2j}^{p}\right\|_{L^2(\Omega)} \le C,\, \forall \,j=0,\ldots,N^h-1.
\end {equation}
The stochastic Fubini theorem and the H\"{o}lder inequality lead to
\begin {align}\label{eta}
\left\|\|\eta_j\|^{p}\right\|_{L^2(\Omega)}^2&=\mathbb{E}\left[\left\|\sum_{k=1}^d \int_{t_j}^{t_{j+1}}\int_{t_j}^t \sigma_k\,\mathrm d W_{s}^{k}\,\mathrm d t\right\|^{2p}\right]
=\mathbb{E}\left[\left\|\sum_{k=1}^d \int_{t_j}^{t_{j+1}}(t_{j+1}-s) \sigma_k\,\mathrm d W_{s}^{k}\right\|^{2p}\right]\\\nonumber
&\le C\sum_{k=1}^d \mathbb{E}\left[\left\|\int_{t_j}^{t_{j+1}}(t_{j+1}-s) \sigma_k\,\mathrm d W_{s}^{k}\right\|^{2p}\right]\le Ch^{3p},\,\forall\,j=0,\ldots,N^h-1.
\end {align}
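The stochastic Fubini step above interchanges $\int_{t_j}^{t_{j+1}}\int_{t_j}^{t}\,\mathrm dW_s\,\mathrm dt=\int_{t_j}^{t_{j+1}}(t_{j+1}-s)\,\mathrm dW_s$. For left-point discretizations of both sides, this interchange is an exact algebraic identity path by path, which the following snippet (illustrative grid and seed) confirms:

```python
import numpy as np

# Pathwise check of the stochastic Fubini interchange used for eta_j:
# int_0^h (int_0^t dW_s) dt equals int_0^h (h - s) dW_s.
rng = np.random.default_rng(0)
M, h = 1000, 0.5
dt = h / M
dW = rng.standard_normal(M) * np.sqrt(dt)
W = np.concatenate([[0.0], np.cumsum(dW)])   # W(s_k) at grid points s_k = k dt
lhs = np.sum(W[1:]) * dt                     # Riemann sum of int_0^h W(t) dt
rhs = np.sum((h - np.arange(M) * dt) * dW)   # left-point sum of int (h-s) dW
# Ito's isometry then gives E|eta_j|^2 = |sigma|^2 int_0^h (h-s)^2 ds
# = |sigma|^2 h^3 / 3, the source of the O(h^{3p}) bound above.
```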
Combining the above estimates together, we obtain that for $n=0,\ldots,N^h-1$,
\begin {align*}\label{KP}
\left\|\sum_{j=0}^{n} K_j^p\right\|_{L^2(\Omega)}&\le\sum_{j=0}^{n} \left\|K_j^p\right\|_{L^2(\Omega)}
\le \sum_{j=0}^{n}\left(Ch^{2p}\|K_{1j}^{p}\|_{L^2(\Omega)}+C\left\|\|\eta_j\|^{p}\right\|_{L^2(\Omega)}\right)
\le Ch^{\frac{3p}{2}-1}.
\end {align*}
Further, \eqref{DEFE} and the Jensen inequality imply that $\mathbb{E}\left[\exp\left(\sum_{j=1}^{n+1} Ch\lfloor Q_j\rfloor^{2\vec{l}-\epsilon\mathbbm{1}}\right)\right]\le C.$ Consequently, according to the H\"{o}lder inequality, we have
\begin{eqnarray}\label{SE3}
&&\left\|\exp\left(\sum_{j=0}^{n}pG_j\right)\right\|_{L^2(\Omega)}\\\nonumber
&&\le\exp(CT)\left\|\exp\left(\int_0^T C\lfloor Q(t)\rfloor^{2\vec{l}-\epsilon\mathbbm{1}}\,\mathrm d t\right)\right\|_{L^4(\Omega)}\left\|\exp\left(\sum_{j=1}^{n+1} 2Ch\lfloor Q_j\rfloor^{2\vec{l}-\epsilon\mathbbm{1}}\right)\right\|_{L^4(\Omega)}\\\nonumber
&&\le C.
\end{eqnarray}
From the estimates \eqref{SE2}-\eqref{SE3}, we deduce that
$\mathbb{E}\left[\mathcal{E}_{n}^p\right]\le Ch^{\frac{p}{2}},\,\forall\,n=1,\ldots,N^h,$ which together with the fact that $\|X_n-X(t_n)\|^{p}\le C\mathcal{E}_n^p$ completes the proof.
\end{proof}
With a slightly modified procedure, we obtain the following strong convergence result.
\begin{cor}
Let Assumption \ref{F2} hold, let $h_0$ be a sufficiently small positive constant, and let $p\ge1$. Then there exists a positive constant $C=C(X(0),p,T,\sigma)$ such that for any $h\in(0,h_0]$,
\begin{equation*}
\left\|\sup_{n\le N^h}\|X_n-X(t_n)\|\right\|_{L^{2p}(\Omega)}\le Ch^{1/2}.
\end{equation*}
\end{cor}
\begin{proof}
Taking the supremum over $n\le N^h-1$ and squaring both sides of \eqref{SE2} yields
\begin {align*}
\sup_{n\le N^h-1}\mathcal{E}_{n+1}^{2p}&\le \left(\sum_{j=0}^{N^h-1} K_j\right)^{2p}\exp\left(\sum_{j=0}^{N^h-1} 2p(h+G_j)\right)\\\nonumber
&\le (N^h)^{2p-1}\left(\sum_{j=0}^{N^h-1} K_j^{2p}\right)\exp\left(2pT\right)\exp\left(\sum_{j=0}^{N^h-1} 2pG_j\right).
\end {align*}
Proceeding as in the proof of Lemma \ref{lem2} completes the proof.
\end{proof}
For SDEs with Lipschitz, sufficiently regular coefficients driven by additive noise, the optimal strong convergence order of numerical approximations that use only the increments of the Wiener process is known to be $1$ (see e.g. \cite{CC80}). However, for SDEs with non-globally monotone coefficients driven by additive noise, there appears to be an order barrier to achieving the optimal strong rate (see e.g. \cite{HJ14}).
In this part, we overcome this order barrier for the proposed scheme \eqref{split sol} by using the Malliavin integration by parts formula and Lemma \ref{lem2}. To this end, the following a priori estimate is needed for the proof of Theorem \ref{SC1}.
\begin{lem}\label{DH}
Let Assumption \ref{F2} hold, let $h_0$ be a sufficiently small positive constant, and let $p\ge1$. For any positive constant $K_1$, there exists a positive constant $C=C(p,K_1)$ such that for any $r\in[0,T]$, $k\in\{1,\ldots,d\}$, $0\le j< n\le N^h$ and $h\in(0,h_0]$,
\begin{equation*}
\mathbb{E} \left[\left(D_r^k\left(\prod_{i=j+1}^{n}\left(1+K_1(h+G_i)\right)\right)\right)^{2p}\right]<C,
\end{equation*}
where $G_i$ is defined by \eqref{Gn}.
\end{lem}
\begin{proof}
Since $X_n$ and $X(t)$ are differentiable in the Malliavin sense and $G_i$ is a functional of $Q(t)$, $Q_i$, $Q_{i+1}$, the Malliavin derivative of $G_i$ exists (see e.g. \cite[Chapter 1]{DN06}).
By the chain rule, the H\"{o}lder inequality and the estimate \eqref{SE3}, we obtain
\begin{align}
&\mathbb{E} \left[\left(D_r^k\left(\prod_{i=j+1}^{n}\left(1+K_1(h+G_i)\right)\right)\right)^{2p}\right]\nonumber\\
&=\mathbb{E}\left[\left(\sum_{i=j+1}^{n}\prod_{\substack{\kappa=j+1\\ \kappa\neq i}}^{n}\left(1+K_1(h+G_\kappa)\right)K_1D_r^kG_i\right)^{2p}\right]\nonumber\\
&\le(n-j)^{2p-1}\mathbb{E}\left[\sum_{i=j+1}^{n}\left(\prod_{\substack{\kappa=j+1\\ \kappa\neq i}}^{n}\left(1+K_1(h+G_\kappa)\right)K_1D_r^kG_i\right)^{2p}\right]\nonumber\\
&\le C(n-j)^{2p-1}\sum_{i=j+1}^{n}\mathbb{E}\left[\exp\left(2pK_1\sum_{\kappa=j+1}^{n}(h+G_\kappa)\right)\left(D_r^kG_i\right)^{2p}\right]\nonumber\\
&\le C(n-j)^{2p-1}\sum_{i=j+1}^{n}\left(\mathbb{E}\left[\left(D_r^kG_i\right)^{2q}\right]\right)^{\frac{p}{q}},\label{DH1}
\end{align}
where $q>p$. The chain rule, the H\"{o}lder inequality and the Fubini theorem yield that
\begin{align*}
&\mathbb{E}\left[\left(D_r^kG_i\right)^{2q}\right]\\
&=\mathbb{E}\left[\left(\int_{t_i}^{t_{i+1}} CD_r^k\left(1+\lfloor Q(t)\rfloor^{2\vec{l}-\epsilon\mathbbm{1}}+\lfloor Q_i\rfloor^{2\vec{l}-\epsilon\mathbbm{1}}+\lfloor Q_{i+1}\rfloor^{2\vec{l}-\epsilon\mathbbm{1}}\right)\,\mathrm d t\right)^{2q}\right]\\
&\le C h^{2q-1}\mathbb{E}\left[\int_{t_i}^{t_{i+1}} \left|D_r^k\left(1+\lfloor Q(t)\rfloor^{2\vec{l}-\epsilon\mathbbm{1}}+\lfloor Q_i\rfloor^{2\vec{l}-\epsilon\mathbbm{1}}+\lfloor Q_{i+1}\rfloor^{2\vec{l}-\epsilon\mathbbm{1}}\right)\right|^{2q}\,\mathrm d t\right]\\
&= C h^{2q-1}\int_{t_i}^{t_{i+1}}\mathbb{E}\left[ \left|D_r^k\left(1+\lfloor Q(t)\rfloor^{2\vec{l}-\epsilon\mathbbm{1}}+\lfloor Q_i\rfloor^{2\vec{l}-\epsilon\mathbbm{1}}+\lfloor Q_{i+1}\rfloor^{2\vec{l}-\epsilon\mathbbm{1}}\right)\right|^{2q}\right]\,\mathrm d t\\
&\le Ch^{2q-1}\int_{t_i}^{t_{i+1}}\mathbb{E}\left[\left|D_r^k\left(\lfloor Q(t)\rfloor^{2\vec{l}-\epsilon\mathbbm{1}}\right)\right|^{2q}+\left|D_r^k\left(\lfloor Q_i\rfloor^{2\vec{l}-\epsilon\mathbbm{1}}\right)\right|^{2q}+\left|D_r^k\left(\lfloor Q_{i+1}\rfloor^{2\vec{l}-\epsilon\mathbbm{1}}\right)\right|^{2q}\right]\,\mathrm d t.
\end{align*}
Furthermore, for any $r,\,t\in[0,T],\,k=1,\ldots,d$, by \eqref{DK} and Lemma \ref{MB}, we have
\begin{align*}
&\mathbb{E}\left[\left|D_r^k\left(\lfloor Q(t)\rfloor^{2\vec{l}-\epsilon\mathbbm{1}}\right)\right|^{2q}\right]\\
&=\mathbb{E}\left[\left|\sum_{\beta=1}^m(2l_\beta-\epsilon)Q_\beta(t)^{2l_\beta-\epsilon-1}D_r^kQ_\beta(t)\right|^{2q}\right]\\
&\le C\sum_{\beta=1}^m\mathbb{E}\left|Q_\beta(t)^{2l_\beta-\epsilon-1}D_r^kQ_\beta(t)\right|^{2q}\\
&\le C\sum_{\beta=1}^m\left(\mathbb{E}\left[\|Q(t)\|^{4q(2|\,\vec{l}\,|_{\infty}-\epsilon-1)}\right]\right)^{\frac{1}{2}}\left(\mathbb{E}\left|D_r^kQ_\beta(t)\right|^{4q}\right)^{\frac{1}{2}}\\
&\le C.
\end{align*}
Likewise, by Lemmas \ref{NDI} and \ref{EE}, for any $r\in[0,T]$, $i=1,\ldots,N^h$, $k=1,\ldots,d$, we have
$\mathbb{E}\left[\left|D_r^k\left(\lfloor Q_i\rfloor^{2\vec{l}-\epsilon\mathbbm{1}}\right)\right|^{2q}\right]\le C.$
Combining the above estimates, we get
\begin{equation}\label{DH3}
\mathbb{E}\left[(D_r^kG_i)^{2q}\right]\le Ch^{2q}.
\end{equation}
Combining \eqref{DH1} and \eqref{DH3}, we complete the proof.
\end{proof}
Based on Lemmas \ref{lem2} and \ref{DH}, we now prove the main result of this section.
\textit{Proof of Theorem \ref{SC1}.}
We begin with establishing a refined estimate of the error between $Q(t_{n+1})$ and $Q_{n+1}$.
Using \eqref{Q}, the bound $\|R_2\|\le Ch^2K_{2n}$, and the choice $h_0\le1$, we obtain from the Young inequality that
\begin{align*}
&\|Q_{n+1}-Q(t_{n+1})\|^2\\
&=\|Q_n-Q(t_n)\|^2+h^2\|P_n-P(t_n)\|^2+\|R_2\|^2+\|\eta_n\|^2+2h(Q_n-Q(t_n))^\top(P_n-P(t_n))\\
&\quad+2(Q_n-Q(t_n))^\top R_2-2(Q_n-Q(t_n))^\top\eta_n+2h(P_n-P(t_n))^\top R_2-2h(P_n-P(t_n))^\top\eta_n\\
&\quad-2R_2^\top\eta_n\\
&\le\|Q_n-Q(t_n)\|^2+h^2\|P_n-P(t_n)\|^2+Ch^4K_{2n}^2+\|\eta_n\|^2+2h(Q_n-Q(t_n))^\top(P_n-P(t_n))
\\
&\quad+h\|Q_n-Q(t_n)\|^2+Ch^3K_{2n}^2-2(Q_n-Q(t_n))^\top\eta_n+h\|P_n-P(t_n)\|^2+Ch^5K_{2n}^2\\
&\quad+h\|P_n-P(t_n)\|^2+h\|\eta_n\|^2+\|\eta_n\|^2+Ch^4K_{2n}^2\\
&\le {\|Q_n-Q(t_n)\|^2+Ch\mathcal{E}_n^2+C\|\eta_n\|^2+Ch^3K_{2n}^2}-2(Q_n-Q(t_n))^\top\eta_n.
\end{align*}
Further,
\begin{align}\label{Q2p}
&\|Q_{n+1}-Q(t_{n+1})\|^{2p}\\\nonumber
&\le\|Q_n-Q(t_n)\|^{2p}+p\|Q_n-Q(t_n)\|^{2p-2}\left({Ch\mathcal{E}_n^2+C\|\eta_n\|^2+Ch^3K_{2n}^2}\right)\\\nonumber
&-2p\|Q_n-Q(t_n)\|^{2p-2}(Q_n-Q(t_n))^\top\eta_n\\\nonumber
&+\sum_{\kappa=2}^pC\|Q_n-Q(t_n)\|^{2p-2\kappa}\left(Ch\mathcal{E}_n^2+{C\|\eta_n\|^2+Ch^3K_{2n}^2}-2(Q_n-Q(t_n))^\top\eta_n\right)^\kappa\\\nonumber
&=:\|Q_n-Q(t_n)\|^{2p}+I_1+I_2+I_3.
\end{align}
From the Young inequality and the H\"{o}lder inequality, it follows that
\begin{align*}
I_1&\le ph\|Q_n-Q\left(t_n\right)\|^{2p}+ph^{1-p}\left(Ch\mathcal{E}_n^2+C\|\eta_n\|^2+Ch^3K_{2n}^2\right)^p\\
&\le ph\|Q_n-Q\left(t_n\right)\|^{2p}+ph^{1-p}\left(Ch^p\mathcal{E}_n^{2p}+C\|\eta_n\|^{2p}+Ch^{3p}K_{2n}^{2p}\right)\\
&\le Ch\mathcal{E}_n^{2p}+Ch^{1-p}\|\eta_n\|^{2p}+Ch^{2p+1}K_{2n}^{2p}
\end{align*}
and
\begin{align*}
I_3&\le\sum_{\kappa=2}^pC\|Q_n-Q(t_n)\|^{2p-2\kappa}\left(Ch\mathcal{E}_n^2+C\|\eta_n\|^2+Ch^3K_{2n}^2\right)^\kappa\\
&\quad+\sum_{\kappa=2}^pC\|Q_n-Q(t_n)\|^{2p-2\kappa}\left(\|Q_n-Q(t_n)\|\|\eta_n\|\right)^\kappa\\
&\le C\sum_{\kappa=2}^p\|Q_n-Q(t_n)\|^{2p-2\kappa}\left(h^\kappa\mathcal{E}_n^{2\kappa}+\|\eta_n\|^{2\kappa}+h^{3\kappa}K_{2n}^{2\kappa}\right)+C\sum_{\kappa=2}^p\|Q_n-Q(t_n)\|^{2p-\kappa}\|\eta_n\|^\kappa\\
&\le Ch^2\mathcal{E}_n^{2p}+C\sum_{\kappa=2}^p\left(h\|Q_n-Q(t_n)\|^{2p}+h^{1-p/\kappa}\left(\|\eta_n\|^{2\kappa}+h^{3\kappa}K_{2n}^{2\kappa}\right)^{\frac{p}{\kappa}}\right)\\
&\quad+\sum_{\kappa=2}^p\left(h\|Q_n-Q(t_n)\|^{2p}+h^{1-p}\|\eta_n\|^{2p}\right)\\
&\le Ch\mathcal{E}_n^{2p}+Ch^{1-p}\|\eta_n\|^{2p}+Ch^{\frac{5}{2}p+1}K_{2n}^{2p}.
\end{align*}
Substituting the above two inequalities into \eqref{Q2p} gives
\begin{align}\label{Q2p1}
\|Q_{n+1}-Q(t_{n+1})\|^{2p}&\le\|Q_n-Q(t_n)\|^{2p}+Ch\mathcal{E}_n^{2p}+Ch^{2p+1}K_{2n}^{2p}+Ch^{1-p}\|\eta_n\|^{2p}\\\nonumber
&\quad-C\|Q_n-Q(t_n)\|^{2p-2}(Q_n-Q(t_n))^\top\eta_n,\nonumber
\end{align}
where we used $h\le1$. Now we turn to estimating $P_{n+1}-P(t_{n+1})$. Raising both sides of \eqref{PN} to the $2p$th power, we get
\begin{align*}
\|P_{n+1}-P(t_{n+1})\|^{2p}&\le\left(\|P_n-P(t_n)\|+G_n\|Q(t_n)-Q_n\|+Ch^2K_{1n}\right)^{2p}\\
&=\|P_n-P(t_n)\|^{2p}+2p\|P_n-P(t_n)\|^{2p-1}\left(G_n\|Q(t_n)-Q_n\|+Ch^2K_{1n}\right)\\
&\quad+\sum_{\kappa=2}^{2p}C\|P_n-P(t_n)\|^{2p-\kappa}\left(G_n\|Q_n-Q(t_n)\|+Ch^2K_{1n}\right)^\kappa.
\end{align*}
According to the Young inequality, for $2\le\kappa\le 2p$,
\begin{align*}
&\|P_n-P(t_n)\|^{2p-\kappa}\left(G_n\|Q_n-Q(t_n)\|+Ch^2K_{1n}\right)^\kappa\\
&=\|P_n-P(t_n)\|^{2p-\kappa}\left(G_n\|Q_n-Q(t_n)\|+Ch^2K_{1n}\right)^{\frac{2p-\kappa}{2p-1}}\\
&\quad\left(G_n\|Q_n-Q(t_n)\|+Ch^2K_{1n}\right)^{\kappa-\frac{2p-\kappa}{2p-1}}\\
&\le\frac{2p-\kappa}{2p-1}\|P_n-P(t_n)\|^{2p-1}\left(G_n\|Q(t_n)-Q_n\|+Ch^2K_{1n}\right)\\
&\quad+\frac{\kappa-1}{2p-1}\left(G_n\|Q_n-Q(t_n)\|+Ch^2K_{1n}\right)^{2p}.
\end{align*}
Combining the above two estimates and the H\"{o}lder inequality, we obtain
\begin{align}\label{P2p}
\|P_{n+1}-P(t_{n+1})\|^{2p}
&\le\|P_n-P(t_n)\|^{2p}+CG_n^{2p}\|Q_n-Q(t_n)\|^{2p}+Ch^{4p}K_{1n}^{2p}\\\nonumber
&\quad+C\|P_n-P(t_n)\|^{2p-1}\left(G_n\|Q_n-Q(t_n)\|+Ch^2K_{1n}\right)\\\nonumber
&\le\|P_n-P(t_n)\|^{2p}+C(h+G_n)\mathcal{E}_n^{2p}+Ch^{1+2p}K_{1n}^{2p}+CG_n^{2p}\mathcal{E}_n^{2p}.
\end{align}
Define $\mathcal{S}_{n+1}:=\left(\|P_{n+1}-P(t_{n+1})\|^{2p}+\|Q_{n+1}-Q(t_{n+1})\|^{2p}\right)^{\frac{1}{2p}}$. Note that $\mathcal{E}_n^{2p}\le C\mathcal{S}_n^{2p}$. Then it follows from \eqref{Q2p1} and \eqref{P2p} that
$\mathcal{S}_{n+1}^{2p}\le\mathcal{S}_n^{2p}+C(h+G_n)\mathcal{S}_n^{2p}+T_n,$
where $T_n=T_{1n}+T_{2n}$ with
\begin{align*}
&T_{1n}=Ch^{2p+1}K_{1n}^{2p}+Ch^{1-p}\|\eta_n\|^{2p}+Ch^{2p+1}K_{2n}^{2p}+CG_n^{2p}\mathcal{S}_n^{2p},\\
&T_{2n}=C\|Q_n-Q(t_n)\|^{2p-2}(Q(t_n)-Q_n)^\top\eta_n.
\end{align*}
Notice that $\mathcal{S}_{0}=0$.
The discrete Gronwall lemma (see e.g. \cite[Lemma 1.4.2]{QV94}) yields that
\begin{align}\label{En+1}
\mathcal{S}_{n+1}^{2p}&\le\sum_{j=0}^{n}\left(\prod_{i=j+1}^{n}(1+C(h+G_i))\right)T_{1j}+\sum_{j=0}^{n}\left(\prod_{i=j+1}^{n}(1+C(h+G_i))\right)T_{2j}\\\nonumber
&\le\sum_{j=0}^{n}\exp\left({\sum\limits_{i=j+1}^{n}C(h+G_i)}\right)T_{1j}+\sum_{j=0}^{n}\left(\prod_{i=j+1}^{n}\left(1+C(h+G_i)\right)\right)T_{2j},
\end{align}
with the conventions $\prod_{i=n+1}^{n}(1+C(h+G_i))=1$ and $\sum_{i=n+1}^{n}C(h+G_i)=0$.
Now, we estimate the above two sums separately. For the first sum, Lemma \ref{lem2} and the estimates \eqref{K12}-\eqref{SE3} yield that
\begin{align*}
\mathbb{E}\left[\exp\left({\sum\limits_{i=j+1}^{n}C(h+G_i)}\right)T_{1j}\right]\le C\left(\mathbb{E}\left[\exp\left({\sum\limits_{i=j+1}^{n}CG_i}\right)\right]\right)^{\frac{1}{2}}\left(\mathbb{E}\left[T_{1j}^2\right]\right)^{\frac{1}{2}}\le Ch^{2p+1},
\end{align*}
whence
\begin{equation}\label{T1}
\mathbb{E}\left[\sum_{j=0}^{n}\exp\left(\sum_{i=j+1}^{n}C(h+G_i)\right)T_{1j}\right]\le Ch^{2p}.
\end{equation}
Now we estimate the second sum in \eqref{En+1}.
By the definition of $T_{2j}$ and using the Malliavin integration by parts formula (see e.g. \cite[Lemma 1.2.1]{DN06}), we obtain
\begin{align*}
&\mathbb{E}\left[\left(\prod_{i=j+1}^{n}(1+C(h+G_i))\right)T_{2j}\right]\\
&=C\mathbb{E}\left[\left(\prod_{i=j+1}^{n}(1+C(h+G_i))\right)\|Q_j-Q(t_j)\|^{2p-2} (Q(t_j)-Q_j)^\top\left(\sum_{k=1}^d \int_{t_j}^{t_{j+1}}\int_{t_j}^t \sigma_k\,\mathrm d W_{s}^{k}\,\mathrm d t\right)\right]\\
&=C\sum_{k=1}^d\int_{t_j}^{t_{j+1}}\mathbb{E}\left[\left(\prod_{i=j+1}^{n}(1+C(h+G_i))\right)\|Q_j-Q(t_j)\|^{2p-2}(Q(t_j)-Q_j)^\top\sigma_k\int_{t_j}^t\,\mathrm d W_{s}^{k}\right]\mathrm d t\\
&=C\sum_{k=1}^d\int_{t_j}^{t_{j+1}}\mathbb{E} \Bigg[\int_{t_j}^t D_r^k\Bigg[\left(\prod_{i=j+1}^{n}\left(1+C(h+G_i)\right)\right)\|Q_j-Q(t_j)\|^{2p-2}(Q(t_j)-Q_j)^\top\sigma_k\Bigg]\,\mathrm d r\Bigg]\mathrm d t.
\end{align*}
The chain rule leads to
\begin{align*}
&\mathbb{E} \left[\int_{t_j}^t D_r^k\left[\left(\prod_{i=j+1}^{n}(1+C(h+G_i))\right)\|Q_j-Q(t_j)\|^{2p-2}(Q(t_j)-Q_j)^\top\sigma_k\right]\,\mathrm d r\right]\\
&=\mathbb{E} \left[\int_{t_j}^t D_r^k\left(\prod_{i=j+1}^{n}\left(1+C(h+G_i)\right)\right)\|Q_j-Q(t_j)\|^{2p-2}(Q(t_j)-Q_j)^\top\sigma_k\,\mathrm d r\right]\\
&\quad+\mathbb{E} \left[\int_{t_j}^t \left(\prod_{i=j+1}^{n}\left(1+C(h+G_i)\right)\right)D_r^k\left(\|Q_j-Q(t_j)\|^{2p-2}\left(Q(t_j)-Q_j\right)^\top\sigma_k\right)\,\mathrm d r\right]\\
&=\mathbb{E} \left[\int_{t_j}^t D_r^k\left(\prod_{i=j+1}^{n}\left(1+C(h+G_i)\right)\right)\|Q_j-Q(t_j)\|^{2p-2}\left(Q(t_j)-Q_j\right)^\top\sigma_k\,\mathrm d r\right],
\end{align*}
where we used the fact that $D_r^k\left(\|Q_j-Q(t_j)\|^{2p-2}\left(Q(t_j)-Q_j\right)^\top\sigma_k\right)$ is zero almost everywhere in $(t_j,t]\times\Omega$ since $Q_j-Q(t_j)$ is $\mathscr{F}_{t_j}$-measurable (see e.g. \cite[Corollary 1.2.1]{DN06}).
Then the H\"{o}lder inequality, Lemma \ref{DH} and the Young inequality yield that
\begin{align*}
&\mathbb{E}\left[\sum_{j=0}^{n}\left(\prod_{i=j+1}^{n}\left(1+C(h+G_i)\right)\right)T_{2j}\right]\\
&\le C\sum_{j=0}^{n}\sum_{k=1}^d\int_{t_j}^{t_{j+1}}\int_{t_j}^t \mathbb{E}\Bigg[D_r^k\left(\prod_{i=j+1}^{n}\left(1+C(h+G_i)\right)\right)\|Q_j-Q(t_j)\|^{2p-2}\left(Q(t_j)-Q_j\right)^\top\sigma_k\Bigg]\,\mathrm d r\,\mathrm d t\\
&\le C\sum_{j=0}^{n}\sum_{k=1}^d\int_{t_j}^{t_{j+1}}\int_{t_j}^t \left(\mathbb{E} \left[\left(D_r^k\left(\prod_{i=j+1}^{n}\left(1+C(h+G_i)\right)\right)\right)^{2p}\right]\right)^{\frac{1}{2p}}\left(\mathbb{E} \left[\|Q_j-Q(t_j)\|^{2p}\right]\right)^{\frac{2p-1}{2p}}\,\mathrm d r\,\mathrm d t\\
&\le \sum_{j=0}^{n}Ch^2\left(\mathbb{E} \left[\|Q_j-Q(t_j)\|^{2p}\right]\right)^{\frac{2p-1}{2p}}= \sum_{j=0}^{n}Ch^{\frac{2p+1}{2p}}\left(h\mathbb{E} \left[\|Q_j-Q(t_j)\|^{2p}\right]\right)^{\frac{2p-1}{2p}}\\
&\le\sum_{j=0}^{n}h\mathbb{E} \left[\mathcal{S}_j^{2p}\right]+\sum_{j=0}^{n}h^{2p+1}.
\end{align*}
Combining \eqref{En+1}, \eqref{T1} and the discrete Gronwall lemma, we complete the proof.
\qed
Similarly to \cite[Corollary 4.1]{BCH18}, Theorem \ref{SC1} immediately yields the following stronger error estimate.
\begin{cor}
Let Assumption \ref{F2} hold, let $h_0$ be a sufficiently small positive constant, and let $p\ge1$. Then for arbitrary $0<\delta<1$, there exists a positive constant $C=C(p,T,\sigma,\delta,X(0))$ such that for any $h\in(0,h_0]$,
\begin{equation*}
\left\|\sup_{n\le N^h}\|X_n-X(t_n)\|\right\|_{L^{2p}(\Omega)}\le Ch^{\delta}.
\end{equation*}
\end{cor}
\begin{proof}
Owing to Theorem \ref{SC1}, we deduce that
\begin{align*}
\mathbb{E}\left[\sup_{n\le N^h}\|X_n-X(t_n)\|^{2q}\right]\le\mathbb{E}\left[\sum_{n=1}^{N^h}\left\|X_n-X(t_n)\right\|^{2q}\right]\le Ch^{2q-1},\,\forall\,q\ge 1.
\end{align*}
By choosing $q\ge p$ with $1-\frac{1}{2q}\ge \delta$, we finish the proof.
\end{proof}
\section{Convergence in probability density function}\label{S6}
In Sections \ref{S3} and \ref{S4}, we have shown the existence of the density functions of $X(t)$, $t\in(0,T]$, and $X_n$, $n=2,\ldots,N^h$. It is natural to ask how these density functions are related.
In this section, we show that the density function of
$X(T)$ can be approximated by that of $X_{N^h}$.
Meanwhile, the approximation error between the density functions is analyzed.
\subsection{Convergence in $\mathbb{D}^{\alpha,p}(\mathbb{R}^{2m})$}
We consider the convergence in $\mathbb{D}^{\alpha,p}(\mathbb{R}^{2m})$ in this part, which is a natural extension of the convergence in $L^{2p}(\Omega;\mathbb{R}^{2m})$ of the proposed scheme \eqref{split sol}. We also remark that the convergence in $\mathbb{D}^{1,p}$ of the It\^{o}-Taylor approximation for general SDEs whose coefficients are smooth with bounded derivatives was shown in \cite{HW96}.
\begin{tho}\label{MDC}
Let Assumption \ref{F2} hold, let $h_0$ be a sufficiently small positive constant, and let $\alpha,\,p\ge1$ be two integers. Then there exists a positive constant $C=C(p,T,\sigma,\alpha,X(0))$ such that for any $h\in(0,h_0]$,
\begin{equation}\label{Mdc}
\sup_{n\le N^h}\left\|D^\alpha X_n-D^\alpha X(t_n)\right\|_{L^p(\Omega;\mathbb{H}^{\otimes\alpha}\bigotimes\mathbb{R}^{2m})}\le Ch.
\end{equation}
\end{tho}
\begin{proof}
We prove \eqref{Mdc} by induction on $\alpha$.
For $\alpha=1$, by the H\"{o}lder inequality, there exists $C>0$ such that
\begin{align*}
\|DX_n-DX(t_n)\|_{L^p(\Omega;\mathbb{H}\bigotimes\mathbb{R}^{2m})}^p
&=\mathbb{E}\left|\int_0^T\|D_{r_1}X_n-D_{r_1}X(t_n)\|^2\,\mathrm d r_1\right|^{\frac{p}{2}}\\
&\le C\int_0^T\mathbb{E}\|D_{r_1}X_n-D_{r_1}X(t_n)\|^p\,\mathrm d r_1.
\end{align*}
Thus, it suffices to show that for any fixed $r_1\in(0,T]$,
\begin{equation*}
\sup_{n\le N^h}\mathbb{E}\|D_{r_1}X_n-D_{r_1}X(t_n)\|^{p}\le Ch^{p}.
\end{equation*}
Let $r_1\in(t_i,t_{i+1}]$ for some integer $0\le i\le N^h-1$. Taking Malliavin derivatives on both sides of \eqref{P} and \eqref{Q}, we obtain, for $i<n\le N^h-1$,
\begin{align}\label{DDP}
D_{r_1}P_{n+1}-D_{r_1}P(t_{n+1})&=e^{-vh}(D_{r_1}P_n-D_{r_1}P(t_n))\\\nonumber
&\quad+e^{-vh}\int_{t_n}^{t_{n+1}}\int_0^1\int_0^1 \nabla^2 F\left(\theta Q(t)+(1-\theta)\left(Q_n+\tau(Q_{n+1}-Q_n)\right)\right)\\\nonumber
&\quad(D_{r_1}Q(t_n)-D_{r_1}Q_n)\,\mathrm d \theta\,\mathrm d \tau\,\mathrm d t+S_{1n},\\\label{DDQ}
D_{r_1}Q_{n+1}-D_{r_1}Q(t_{n+1})&=D_{r_1}Q_n-D_{r_1}Q(t_n)+h(D_{r_1}P_n-D_{r_1}P(t_n))+S_{2n},
\end{align}
where
\begin{align*}
S_{1n}&=S^{11}_n+S^{12}_n+S^{13}_n+S^{14}_n,\\
S_{2n}&=S^{21}_n+S^{22}_n+S^{23}_n,\\
S^{11}_n&=\int_{t_n}^{t_{n+1}}\left[-e^{-vh}+e^{-v(t_{n+1}-t)}\right]\nabla^2 F(Q(t))D_{r_1}Q(t)\,\mathrm d t,\\
S^{12}_n&=e^{-vh}\int_{t_n}^{t_{n+1}}\int_0^1\int_0^1 D_{r_1}\left[\nabla^2 F\left(\theta Q(t)+\left(1-\theta\right)\left(Q_n+\tau\left( Q_{n+1}-Q_n\right)\right)\right)\right]\\
&\quad(Q(t_n)-Q_n)\,\mathrm d \theta\,\mathrm d \tau\,\mathrm d t,\\
S^{13}_n&=e^{-vh}\int_{t_n}^{t_{n+1}}\int_0^1\int_0^1 D_{r_1}\left[\nabla^2 F\left(\theta Q(t)+\left(1-\theta\right)\left(Q_n+\tau\left(Q_{n+1}-Q_n\right)\right)\right)\right]\\
&\quad\left(\int_{t_n}^{t} P(s)\,\mathrm d s-\frac{\tau h}{2}(P_n+\bar P_{n+1})\right)\,\mathrm d \tau\,\mathrm d \theta\,\mathrm d t,\\
S^{14}_n&=e^{-vh}\int_{t_n}^{t_{n+1}}\int_0^1\int_0^1 \nabla^2 F\left(\theta Q(t)+\left(1-\theta\right)\left(Q_n+\tau\left(Q_{n+1}-Q_n\right)\right)\right)\\
&\quad D_{r_1}\left[\int_{t_n}^{t} P(s)\,\mathrm d s-\frac{\tau h}{2}(P_n+\bar P_{n+1})\right]\,\mathrm d \tau\,\mathrm d \theta\,\mathrm d t,\\
S^{21}_n&=\int_{t_n}^{t_{n+1}}\int_{t_n}^{t}e^{-v(t-s)}\nabla^2 F(Q(s))D_{r_1}Q(s)\,\mathrm d s\,\mathrm d t,\\
S^{22}_n&=\left(h-\frac{1-e^{-vh}}{v}\right)D_{r_1}P(t_n),\\
S^{23}_n&=-\frac{h^2}{2}\int_0^1D_{r_1}\left[\nabla F(Q_n+\tau(Q_{n+1}-Q_n))\right]\,\mathrm d\tau.
\end{align*}
Applying the triangle inequality yields
\begin{align*}
\|D_{r_1}P_{n+1}-D_{r_1}P(t_{n+1})\|&\le \|D_{r_1}P_n-D_{r_1}P(t_n)\|+G_n\|D_{r_1}Q(t_n)-D_{r_1}Q_n\|+\|S_{1n}\|,\\
\|D_{r_1}Q_{n+1}-D_{r_1}Q(t_{n+1})\|&\le \|D_{r_1}Q_n-D_{r_1}Q(t_n)\|+h\|D_{r_1}P(t_n)-D_{r_1}P_n\|+\|S_{2n}\|.
\end{align*}
Define $\mathcal{R}_{n+1}:=\|D_{r_1}P_{n+1}-D_{r_1}P(t_{n+1})\|+\|D_{r_1}Q_{n+1}-D_{r_1}Q(t_{n+1})\|$, then
\begin{align}\label{DE1}
\mathcal{R}_{n+1}\le \mathcal{R}_{n}+(h+G_n)\mathcal {R}_{n}+S_n,
\end{align}
where $S_n=\|S_{1n}\|+\|S_{2n}\|$. Using the H\"{o}lder inequality, the estimate \eqref{DK}, Lemmas \ref{MB}, \ref{EE} and \ref{NDI}, we obtain that for $\kappa=1,\,\iota=1,2,3,4,$
\begin{align}\label{Snp}
&\|S^{\kappa\iota}_n\|_{L^q(\Omega)}\le Ch^2,\,q \ge1, \,n=0,\ldots,N^h-1.
\end{align}
The bound \eqref{Snp} also holds for $\kappa=2,\,\iota=1,2,3$.
Therefore
\begin{align}\label{Snp1}
\mathbb{E}\left[\left(\sum_{j=i+1}^{n} S_j\right)^q\right]\le(n-i)^{q-1}\sum_{j=i+1}^{n}\mathbb{E}\left[S_j^q\right]
\le Ch^{q}, \,\forall\,q\ge1.
\end{align}
For $n=i$, since $r_1\in (t_i,t_{i+1}]$, we get $D_{r_1}X(t_i)=0,\, D_{r_1}X_i=0$. Hence
\begin{align*}
&D_{r_1}P_{i+1}-D_{r_1}P(t_{i+1})=S_{1i},\qquad D_{r_1}Q_{i+1}-D_{r_1}Q(t_{i+1})=S^{21}_i+\frac{1-e^{-v(t_{i+1}-r)}}{v}\sigma.
\end{align*}
Combining \eqref{Snp} and the fact that $t_{i+1}-r<h$, we obtain
\begin{align}\label{Enp}
\mathbb{E}\left[\mathcal{R}_{i+1}^q\right]\le Ch^q,\,\forall\,q\ge1.
\end{align}
It follows from the discrete Gronwall lemma and \eqref{DE1} that
for any $n=i,\ldots,N^h-1$,
\begin{align}\label{DE2}
\mathcal{R}_{n+1}^p \le C\left(\sum_{j=i+1}^{n} S_j\right)^p\exp\left(\sum_{j=i+1}^{n} p(h+G_j)\right)+C\exp\left(\sum_{j=i+1}^{n} p(h+G_j)\right)\mathcal{R}_{i+1}^p.
\end{align}
Then using estimates \eqref{SE3}, \eqref{Snp1}, \eqref{Enp} and the H\"{o}lder inequality, we complete the proof of the assertion for $\alpha=1$.
For $\alpha\ge2$, let $r_k\in(t_{i_k},t_{{i_k}+1}]$ for $0\le i_k\le N^h-1,\, k=1,\ldots,\alpha$. Taking the $\alpha$th Malliavin derivatives on both sides of \eqref{P} and \eqref{Q}, and using the chain rule, we have that for $\max\limits_ki_k<n\le N^h-1$,
\begin{align*}
&D_{r_1,\ldots,r_\alpha}^{j_1,\ldots,j_\alpha}P_{n+1}-D_{r_1,\ldots,r_\alpha}^{j_1,\ldots,j_\alpha}P(t_{n+1})\\
&=e^{-vh}\left(D_{r_1,\ldots,r_\alpha}^{j_1,\ldots,j_\alpha}P_n-D_{r_1,\ldots,r_\alpha}^{j_1,\ldots,j_\alpha}P(t_n)\right)\\
&\quad+e^{-vh}\int_{t_n}^{t_{n+1}}\int_0^1\int_0^1 \nabla^2 F(\theta Q(t)+(1-\theta)(Q_n+\tau(Q_{n+1}-Q_n)))\\
&\qquad\qquad\qquad\qquad\qquad\quad(D_{r_1,\ldots,r_\alpha}^{j_1,\ldots,j_\alpha}Q(t_n)-D_{r_1,\ldots,r_\alpha}^{j_1,\ldots,j_\alpha}Q_n)\,\mathrm d \theta\,\mathrm d \tau\,\mathrm d t
+S_{n\alpha}^1,\\
&D_{r_1,\ldots,r_\alpha}^{j_1,\ldots,j_\alpha}Q_{n+1}-D_{r_1,\ldots,r_\alpha}^{j_1,\ldots,j_\alpha}Q(t_{n+1})\\
&=D_{r_1,\ldots,r_\alpha}^{j_1,\ldots,j_\alpha}Q_n-D_{r_1,\ldots,r_\alpha}^{j_1,\ldots,j_\alpha}Q(t_n)+h\left(D_{r_1,\ldots,r_\alpha}^{j_1,\ldots,j_\alpha}P_n-D_{r_1,\ldots,r_\alpha}^{j_1,\ldots,j_\alpha}P(t_n)\right)+S_{n\alpha}^2,
\end{align*}
where
\begin{align*}
S_{n\alpha}^1&=S_{n\alpha}^{11}+S_{n\alpha}^{12}+S_{n\alpha}^{13}+S_{n\alpha}^{14},\\
S_{n\alpha}^2&=S_{n\alpha}^{21}+S_{n\alpha}^{22}+S_{n\alpha}^{23},\\
S_{n\alpha}^{11}&=\int_{t_n}^{t_{n+1}}\left[-e^{-vh}+e^{-v(t_{n+1}-t)}\right]D_{r_1,\ldots,r_\alpha}^{j_1,\ldots,j_\alpha}\left[\nabla F(Q(t))\right]\,\mathrm d t,\\
S_{n\alpha}^{12}&=e^{-vh}\int_{t_n}^{t_{n+1}}\int_0^1\int_0^1 D_{r_1,\ldots,r_\alpha}^{j_1,\ldots,j_\alpha}\bigg[\nabla^2 F\left(\theta Q(t)+\left(1-\theta\right)\left(Q_n+\tau\left( Q_{n+1}-Q_n\right)\right)\right)\\
&\quad(Q(t_n)-Q_n)\bigg]\,\mathrm d \theta\,\mathrm d \tau\,\mathrm d t-e^{-vh}\int_{t_n}^{t_{n+1}}\int_0^1\int_0^1\\
&\nabla^2F(\theta Q(t)+(1-\theta)(Q_n+\tau(Q_{n+1}-Q_n)))(D_{r_1,\ldots,r_\alpha}^{j_1,\ldots,j_\alpha}Q(t_n)-D_{r_1,\ldots,r_\alpha}^{j_1,\ldots,j_\alpha}Q_n)\,\mathrm d \theta\,\mathrm d \tau\,\mathrm d t,\\
S_{n\alpha}^{13}&=e^{-vh}\int_{t_n}^{t_{n+1}}\int_0^1\int_0^1 D_{r_1,\ldots,r_\alpha}^{j_1,\ldots,j_\alpha}\bigg[\nabla^2 F\left(\theta Q(t)+\left(1-\theta\right)\left(Q_n+\tau\left(Q_{n+1}-Q_n\right)\right)\right)\\
&\quad\left(\int_{t_n}^{t} P(s)\,\mathrm d s-\frac{\tau h}{2}(P_n+\bar P_{n+1})\right)\bigg]\,\mathrm d \tau\,\mathrm d \theta\,\mathrm d t,\\
S_{n\alpha}^{21}&=\int_{t_n}^{t_{n+1}}\int_{t_n}^{t}e^{-v(t-s)}D_{r_1,\ldots,r_\alpha}^{j_1,\ldots,j_\alpha}[\nabla F(Q(s))]\,\mathrm d s\,\mathrm d t,\\
S_{n\alpha}^{22}&=\left(h-\frac{1-e^{-vh}}{v}\right)D_{r_1,\ldots,r_\alpha}^{j_1,\ldots,j_\alpha}P(t_n),\\
S_{n\alpha}^{23}&=-\frac{h^2}{2}\int_0^1D_{r_1,\ldots,r_\alpha}^{j_1,\ldots,j_\alpha}[\nabla F(Q_n+\tau(Q_{n+1}-Q_n))]\,\mathrm d\tau.
\end{align*}
In view of the Wiener-It\^o chaos expansion of the Malliavin derivative (see e.g. \cite[Proposition 1.2.7]{DN06}), we have $D_{r_1,\ldots,r_\alpha}^{j_1,\ldots,j_\alpha}Q_{n+1}=D_{r_{\sigma_1},\ldots,r_{\sigma_\alpha}}^{j_{\sigma_1},\ldots,j_{\sigma_\alpha}}Q_{n+1}$ for every permutation $(\sigma_1,\ldots,\sigma_\alpha)$ of $(1,2,\ldots,\alpha)$. Thus, for $\max\limits_ki_k=n$, we may assume without loss of generality that $n=i_1$. Then it follows that
\begin{align*}
D_{r_1,\ldots,r_\alpha}^{j_1,\ldots,j_\alpha}P_{n+1}-D_{r_1,\ldots,r_\alpha}^{j_1,\ldots,j_\alpha}P(t_{n+1})=S_{n\alpha}^1;\qquad
D_{r_1,\ldots,r_\alpha}^{j_1,\ldots,j_\alpha}Q_{n+1}-D_{r_1,\ldots,r_\alpha}^{j_1,\ldots,j_\alpha}Q(t_{n+1})=S_{n\alpha}^{21}.
\end{align*}
The rest of the proof is similar to the case $\alpha=1$ and is omitted.
\end{proof}
\begin{rem}\label{Fk1}
In Theorem \ref{MDC}, if the condition $F\in C_p^\infty$ is replaced by $F\in C_p^k$ for some fixed constant $k\ge2$, then by Remark \ref{Fk0}, the conclusion \eqref{Mdc} holds for any $\alpha\le k-2$ and $p\ge1$.
\end{rem}
\subsection{Convergence in probability density function}
As is well known, the first probabilistic proof of H\"ormander's theorem was given by Malliavin; its key step is to show that, under H\"ormander's condition, the Malliavin covariance matrix of the exact solution of the SDE is non-degenerate. In our discrete case, in light of Lemma \ref{NDI}, the smoothness of the density function of the numerical solution $X_{N^h}$ likewise boils down to the boundedness of the moments of $\det(\gamma_{N^h})^{-1}$.
In this part, we show that the numerical solution $X_{N^h}$ is uniformly non-degenerate for all sufficiently small stepsizes $h>0$, and therefore admits a smooth density function.
\begin{tho}\label{UD}
Let Assumptions \ref{F2}-\ref{F4} hold. Then for any $1\le p<\infty$, there exists a positive constant $\nu(p)$ such that
\begin{equation*}
\left\|\det(\gamma_{N^h})^{-1}\right\|_{L^p(\Omega)}=\mathcal{O}\left(h^{-\nu(p)}\right)\quad\text{as }h\rightarrow0.
\end{equation*}
\end{tho}
\begin{proof}
Since
\begin{align}\label{det}
\det(\gamma_{N^h})^{-1}=\prod_{i=1}^{2m}\lambda_i(\gamma_{N^h})^{-1}\le\left(\lambda_{min}(\gamma_{N^h})\right)^{-2m},
\end{align}
it suffices to estimate the smallest eigenvalue of $\gamma_{N^h}$. It follows from \eqref{MX} that
\begin{align*}
\gamma_{N^h}&=A_{N^h-1}\gamma_{N^h-1}A_{N^h-1}^\top+\gamma_1\\
&=\frac{1-e^{-2vh}}{2v}\left\{\sum_{k=0}^{N^h-2}A_{N^h-1}\cdots A_{k+1}\left[\begin{array}{cc}\sigma\sigma^\top& 0 \\0 & 0 \end{array}\right]A_{k+1}^\top\cdots A_{N^h-1}^\top+\left[\begin{array}{cc}\sigma\sigma^\top& 0 \\0 & 0 \end{array}\right]\right\}.
\end{align*}
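Although the per-step covariance contribution $\mathrm{diag}(\sigma\sigma^\top,0)$ is rank-deficient, the coupling between the $P$- and $Q$-blocks in the matrices $A_k$ spreads the noise to all directions. The following numpy sketch illustrates this mechanism on the recursion $\gamma_{n+1}=A\gamma_nA^\top+\gamma_1$ with $m=1$ and a hypothetical constant transition matrix (not the actual $A_k$ of the scheme):

```python
import numpy as np

# Hypothetical 2x2 transition matrix coupling the P- and Q-components
# (m = 1), and a rank-deficient per-step covariance with noise only in
# the P-component.
h = 0.1
A = np.array([[0.9, -0.05],
              [h,    1.0]])
gamma_step = np.diag([1.0, 0.0])   # sigma sigma^T in the P-block only

# Iterate gamma_{n+1} = A gamma_n A^T + gamma_step with gamma_1 = gamma_step.
gamma = gamma_step.copy()
for _ in range(10):
    gamma = A @ gamma @ A.T + gamma_step

# The P-Q coupling makes gamma strictly positive definite.
lam_min = float(np.min(np.linalg.eigvalsh(gamma)))
assert lam_min > 0.0
```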
The definition of $A_{N^h-1}$ yields that
\begin{align*}
&\left[\begin{array}{c}y_1^\top,y_2^\top\end{array}\right]A_{N^h-1}\left[\begin{array}{cc}\sigma\sigma^\top& 0 \\0 & 0 \end{array}\right]A_{N^h-1}^\top\left[\begin{array}{c}y_1\\y_2\end{array}\right]\\
=&\Bigg\|e^{-vh}y_1^\top\left(I-\frac{h^2}{2}F_1(Q_{N^h-1},Q_{N^h})\right)\left(I+\frac{h^2}{2}F_1(Q_{N^h-1},Q_{N^h})\right)^{-1}\sigma\\
&\quad+hy_2^\top\left(I+\frac{h^2}{2}F_1(Q_{N^h-1},Q_{N^h})\right)^{-1}\sigma\Bigg\|^2.
\end{align*}
To simplify the notation, we introduce
\begin{align*}
&B_{N^h}:=h\left(I+\frac{h^2}{2}F_1(Q_{N^h-1},Q_{N^h})\right)^{-1},\\
&U_{N^h}:=e^{-vh}\left(I-\frac{h^2}{2}F_1(Q_{N^h-1},Q_{N^h})\right)\left(I+\frac{h^2}{2}F_1(Q_{N^h-1},Q_{N^h})\right)^{-1}.
\end{align*}
Combining the above equalities, we get
\begin{align*}
&\lambda_{min}(\gamma_{N^h})=\min_{\substack{y=(y_1^\top,y_2^\top)^\top\in\mathbb{R}^{2m}\\\|y\|=1}}y^\top\gamma_{N^h}y\\
&\ge\min_{\substack{y=(y_1^\top,y_2^\top)^\top\in\mathbb{R}^{2m}\\\|y\|=1}}\frac{1-e^{-2vh}}{2v}\left[\begin{array}{c}y_1^\top,y_2^\top\end{array}\right]\left\{A_{N^h-1}\left[\begin{array}{cc}\sigma\sigma^\top& 0 \\0 & 0 \end{array}\right]A_{N^h-1}^\top+\left[\begin{array}{cc}\sigma\sigma^\top& 0 \\0 & 0 \end{array}\right]\right\}\left[\begin{array}{c}y_1\\y_2\end{array}\right]\\
&=:\frac{1-e^{-2vh}}{2v}\min_{\substack{y=(y_1^\top,y_2^\top)^\top\in\mathbb{R}^{2m}\\\|y\|=1}}f(y),
\end{align*}
where
\begin{align*}
f(y)&=\|y_1^\top U_{N^h}\sigma+y_2^\top B_{N^h}\sigma\|^2+\|y_1^\top\sigma\|^2\\
&=y_1^\top U_{N^h}\sigma\sigma^\top U_{N^h}^\top y_1+y_2^\top B_{N^h}\sigma\sigma^\top B_{N^h}^\top y_2+2y_1^\top U_{N^h}\sigma\sigma^\top B_{N^h}^\top y_2+y_1^\top\sigma\sigma^\top y_1.
\end{align*}
By splitting the unit sphere of $\mathbb{R}^{2m}$ into $\{\|y\|=1,\,\|y_1\|\ge h^\delta\}$ and $\{\|y\|=1,\,\|y_1\|< h^\delta\}$, with $\delta>0$ to be determined later, we estimate $\lambda_{min}(\gamma_{N^h})$ as
\begin{equation}\label{gam}
\lambda_{min}(\gamma_{N^h})\ge\frac{1-e^{-2vh}}{2v}\min\left\{\min_{\substack{\|y\|=1\\\|y_1\|\ge h^\delta}}f(y),\, \min_{\substack{\|y\|=1\\\|y_1\|<h^\delta}} f(y)\right\}.
\end{equation}
Next we bound $f$ from below. The bound for $\min_{\|y_1\|\ge h^\delta}f(y)$ is immediate, since
\begin{equation}\label{gam1}
\min_{\|y_1\|\ge h^\delta}f(y)\ge\min_{\|y_1\|\ge h^\delta}y_1^\top\sigma\sigma^\top y_1\ge\lambda_{min}\left(\sigma\sigma^\top\right)h^{2\delta}.
\end{equation}
Now we turn to the lower bound of $\min_{\|y_1\|<h^\delta}f(y)$.
Let $h\le1$. The conditions $\|y_1\|^2<h^{2\delta}$ and $\|y_1\|^2+\|y_2\|^2=1$ imply that $\|y_2\|^2>1-h^{2\delta}$.
The Young inequality gives
\begin{equation*}
2y_1^\top U_{N^h}\sigma\sigma^\top B_{N^h}^\top y_2\ge-\epsilon y_2^\top B_{N^h}\sigma\sigma^\top B_{N^h}^\top y_2-\frac{1}{\epsilon}y_1^\top U_{N^h}\sigma\sigma^\top U_{N^h}^\top y_1,\,\forall\,\epsilon>0,
\end{equation*}
which implies
\begin{equation}\label{st0}
f(y)\ge(1-\epsilon)y_2^\top B_{N^h}\sigma\sigma^\top B_{N^h}^\top y_2+\left(1-\frac{1}{\epsilon}\right)y_1^\top U_{N^h}\sigma\sigma^\top U_{N^h}^\top y_1+y_1^\top\sigma\sigma^\top y_1.
\end{equation}
Since $\lambda_{min}({F_1(Q_{N^h-1},Q_{N^h})})\ge-\frac{K}{2}$, we have
$\lambda_i\left(U_{N^h}U_{N^h}^\top\right)\le e^{-2vh}\left(\frac{1+\frac{h^2}{4}K}{1-\frac{h^2}{4}K}\right)^2,\,i=1,\ldots,m.$
Then it follows that
\begin{align*}
\left(\frac{1}{\epsilon}-1\right)y_1^\top U_{N^h}\sigma\sigma^\top U_{N^h}^\top y_1&\le\left(\frac{1}{\epsilon}-1\right)\lambda_{max}\left(\sigma\sigma^\top\right)\|U_{N^h}^\top y_1\|^2\\&\le\left(\frac{1}{\epsilon}-1\right)\lambda_{max}\left(\sigma\sigma^\top\right)e^{-2vh}\left(\frac{1+\frac{h^2}{4}K}{1-\frac{h^2}{4}K}\right)^2\|y_1\|^2.
\end{align*}
For simplicity, set $a:=e^{-2vh}\left(\frac{1+\frac{h^2}{4}K}{1-\frac{h^2}{4}K}\right)^2$. Consequently,
\begin{equation}\label{st10}
\left(1-\frac{1}{\epsilon}\right)y_1^\top U_{N^h}\sigma\sigma^\top U_{N^h}^\top y_1+y_1^\top\sigma\sigma^\top y_1\ge\lambda_{min}\left(\sigma\sigma^\top\right)\|y_1\|^2+\left(1-\frac{1}{\epsilon}\right)\lambda_{max}\left(\sigma\sigma^\top\right)a\|y_1\|^2.
\end{equation}
Notice that $a\rightarrow1$ as $h\rightarrow0$. Thus there exists a sufficiently small stepsize $h(v,K)\le 1$ such that $a<2$ for all $h\le h(v,K)$. For any $\epsilon$ such that $\frac{2\lambda_{max}\left(\sigma\sigma^\top\right)}{2\lambda_{max}\left(\sigma\sigma^\top\right)+\lambda_{min}\left(\sigma\sigma^\top\right)}\le\epsilon<1$, we have $\frac{\lambda_{max}\left(\sigma\sigma^\top\right)a}{\lambda_{max}\left(\sigma\sigma^\top\right)a+\lambda_{min}\left(\sigma\sigma^\top\right)}<\epsilon<1$. By a straightforward calculation, we deduce that
\begin{equation}\label{st1}
\left(1-\frac{1}{\epsilon}\right)y_1^\top U_{N^h}\sigma\sigma^\top U_{N^h}^\top y_1+y_1^\top\sigma\sigma^\top y_1\ge0.
\end{equation}
It then suffices to bound $(1-\epsilon)y_2^\top B_{N^h}\sigma\sigma^\top B_{N^h}^\top y_2$ from below.
Choosing $\epsilon=\frac{2\lambda_{max}\left(\sigma\sigma^\top\right)}{2\lambda_{max}\left(\sigma\sigma^\top\right)+\lambda_{min}\left(\sigma\sigma^\top\right)}$ and $h_0=\min\{h(v,K),\sqrt{\frac{2}{K}},1\}$, the inequality \eqref{st1}, together with \eqref{st0}, implies that
\begin{align*}
f(y)&\ge(1-\epsilon)y_2^\top B_{N^h}\sigma\sigma^\top B_{N^h}^\top y_2\\
&\ge\frac{\lambda_{min}\left(\sigma\sigma^\top\right)}{2\lambda_{max}\left(\sigma\sigma^\top\right)+\lambda_{min}\left(\sigma\sigma^\top\right)}\lambda_{min}\left(B_{N^h}\sigma\sigma^\top B_{N^h}^\top\right)\|y_2\|^2.
\end{align*}
It remains to estimate the minimum eigenvalue of the symmetric positive definite matrix $B_{N^h}\sigma\sigma^\top B_{N^h}^\top$.
By utilizing \cite[Lemma 1]{YG97}, we obtain
\begin{align*}
\lambda_{min}\left(B_{N^h}\sigma\sigma^\top B_{N^h}^\top\right)&\ge\det\left(B_{N^h}\sigma\sigma^\top B_{N^h}^\top\right)\cdot\left(\frac{m-1}{\|B_{N^h}\sigma\sigma^\top B_{N^h}^\top\|_{\mathbb F}^2}\right)^{\frac{m-1}{2}}\\
&=C(m)\det\left(B_{N^h}\sigma\sigma^\top B_{N^h}^\top\right)\cdot\frac{1}{\|B_{N^h}\sigma\sigma^\top B_{N^h}^\top\|_{\mathbb F}^{m-1}},
\end{align*}
where $\left\|B_{N^h}\sigma\sigma^\top B_{N^h}^\top\right\|_{\mathbb F}\le\|B_{N^h}\|^2_{\mathbb F}\left\|\sigma\sigma^\top\right\|_{\mathbb F}$. Here $\|\cdot\|_{\mathbb F}$ is the Frobenius norm.
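The eigenvalue lower bound quoted from \cite[Lemma 1]{YG97}, namely $\lambda_{min}(A)\ge\det(A)\left(\frac{m-1}{\|A\|_{\mathbb F}^2}\right)^{\frac{m-1}{2}}$ for a symmetric positive definite $A\in\mathbb{R}^{m\times m}$, can be checked numerically. A minimal sketch (the helper name and the random test matrices are illustrative, not from the paper):

```python
import numpy as np

def spd_eig_lower_bound(A):
    """Lower bound lambda_min(A) >= det(A) * ((m-1)/||A||_F^2)^((m-1)/2)
    for a symmetric positive definite A (the bound quoted from [YG97, Lemma 1])."""
    m = A.shape[0]
    return np.linalg.det(A) * ((m - 1) / np.linalg.norm(A, 'fro')**2)**((m - 1) / 2)

rng = np.random.default_rng(0)
for m in (2, 3, 6):
    M = rng.standard_normal((m, m))
    A = M @ M.T + m * np.eye(m)                  # symmetric positive definite
    # eigvalsh returns eigenvalues in ascending order
    assert np.linalg.eigvalsh(A)[0] >= spd_eig_lower_bound(A) > 0
```

The bound follows from the AM--GM inequality applied to the remaining eigenvalues, which is why it holds with room to spare for well-conditioned matrices.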
Since $B_{N^h}$ is symmetric positive definite, we have
$\|B_{N^h}\|^2_{\mathbb F}=tr(B_{N^h}B_{N^h}^\top)=tr\left(B_{N^h}^2\right)\le m\lambda_{max}^2(B_{N^h}).$
The spectral mapping theorem and $h<\sqrt{\frac{2}{K}}$ lead to
\begin{equation*}
\lambda_{max}(B_{N^h})=h\left(1+\frac{h^2}{2}\lambda_{min}(F_1(Q_{N^h-1},Q_{N^h}))\right)^{-1}\le h\left(1-\frac{h^2}{4}K\right)^{-1}<2h.
\end{equation*}
Notice that
\begin{equation*}
\det\left(B_{N^h}\sigma\sigma^\top B_{N^h}^\top\right)=\det(B_{N^h})^2\det\left(\sigma\sigma^\top\right)\ge\det\left(\sigma\sigma^\top\right)\lambda_{min}^{2m}(B_{N^h}).
\end{equation*}
Combining the above inequalities together, we have
\begin{equation}\label{st3}
\lambda_{min}\left(B_{N^h}\sigma\sigma^\top B_{N^h}^\top\right)\ge C(m,\sigma)\frac{\lambda_{min}^{2m}(B_{N^h})}{\lambda_{max}^{2m-2}(B_{N^h})}\ge C(m,\sigma)h^2\left(1+\frac{h^2}{2}\lambda_{max}(F_1(Q_{N^h-1},Q_{N^h}))\right)^{-2m}.
\end{equation}
Inserting \eqref{st1} and \eqref{st3} into \eqref{st0}, we obtain that for $\|y_1\|<h^\delta$ with $h\le h_0$,
\begin{equation}\label{gam2}
f(y)\ge C(m,\sigma)h^2\left(1+\frac{h^2}{2}\lambda_{max}(F_1(Q_{N^h-1},Q_{N^h}))\right)^{-2m}\left(1-h^{2\delta}\right).
\end{equation}
Combining \eqref{gam}, \eqref{gam1} and \eqref{gam2}, we get
\begin{align*}
&\lambda_{min}(\gamma_{N^h})\ge\\
&\frac{1-e^{-2vh}}{2v}\min\left\{\lambda_{min}\left(\sigma\sigma^\top\right)h^{2\delta},\,C(m,\sigma)h^2\left(1+\frac{h^2}{2}\lambda_{max}(F_1(Q_{N^h-1},Q_{N^h}))\right)^{-2m}\left(1-h^{2\delta}\right)\right\}.
\end{align*}
Taking its reciprocal leads to
\begin{align*}
\lambda_{min}^{-1}(\gamma_{N^h})&\le\frac{2v}{1-e^{-2vh}}\max\left\{\frac{1}{\lambda_{min}\left(\sigma\sigma^\top\right)h^{2\delta}},\,\frac{\left(1+\frac{h^2}{2}\lambda_{max}(F_1(Q_{N^h-1},Q_{N^h}))\right)^{2m}}{C(m,\sigma)h^2\left(1-h^{2\delta}\right)}\right\}.
\end{align*}
It follows from Lemma \ref{EE} that $\mathbb{E}\left|\lambda_{max}(F_1(Q_{N^h-1},Q_{N^h}))\right|^p\le C(p,T)$ holds for any $p\ge1$. Since $\frac{2v}{1-e^{-2vh}}=\mathcal{O}(h^{-1})$ as $h\rightarrow0$, we get for any $p\ge1$,
$$\mathbb{E}\left|\lambda_{min}^{-1}(\gamma_{N^h})\right|^p\le C(p,m,\sigma)\max\left\{h^{-2\delta p},h^{-2p}\right\}h^{-p}.$$
Taking $\delta=1$ and using \eqref{det}, we obtain the desired result $\left\|\det(\gamma_{N^h})^{-1}\right\|_{L^p(\Omega)}=\mathcal{O}\left(h^{-\nu(p)}\right)$ with $\nu(p)\le 6m$ as $h\rightarrow 0$.
\end{proof}
\begin{cor}
Let Assumptions \ref{F2}-\ref{F4} hold. Then $X_{N^h}$ admits a smooth density function.
\end{cor}
\begin{proof}
It follows immediately from Lemma \ref{NDI} and Theorem \ref{UD}.
\end{proof}
\begin{rem}
If the coefficient $F\in C_p^k$ for some fixed constant $k\ge2$, then from Proposition 1.1 and \cite[Proposition 5.4]{SSM05},
the density functions of $X(T)$ and $X_{N^h}$ belong to $C^\alpha$ for some $\alpha=\alpha(k)$.
\end{rem}
Now we are in a position to deduce the convergence rate in density of scheme \eqref{split sol} for equation \eqref{SDE1}.
\textit{Proof of Theorem \ref{order}}\quad Let $h_0$ be a sufficiently small positive constant.
Theorem \ref{UD}, together with Lemma \ref{NS}, Theorem \ref{MDC} and Lemma \ref{pdf0} indicates that $\det(\gamma_{N^h})^{-1}$ has moments of all orders uniformly with respect to $h\in(0,h_0]$, i.e.,
\begin{equation*}
\sup_{h\in(0,h_0]}\left\|\det(\gamma_{N^h})^{-1}\right\|_{L^p(\Omega)}<\infty,
\end{equation*}
which combined with Lemma \ref{NS}, Theorem \ref{MDC}, Proposition \ref{pdf} and Remark \ref{pdf1} completes the proof.\qed
\begin{cor}
Let Assumptions \ref{F2}-\ref{F4} hold. Let $\beta\ge0, 1<q<\infty$ and $\alpha>\beta+2m/q+1$ and $G\in \mathbb{D}^{\alpha,q}, 1/p+1/q=1$. Then
\begin{align*}
\sup_{y\in\mathbb{R}^{2m}}\left|(1-\Delta)^{\beta/2}\mathbb{E}\left[G\cdot\delta_y\circ X_{N^h}\right]-(1-\Delta)^{\beta/2}\mathbb{E}\left[G\cdot\delta_y\circ X(T)\right]\right|=\mathcal{O}(h)\quad\text{as}~h\rightarrow0.
\end{align*}
In particular, setting $G=1$, we have
\begin{align*}
\sup_{y\in\mathbb{R}^{2m}}\left|(1-\Delta)^{\beta/2}p^{N^h}_T(X_0,y)-(1-\Delta)^{\beta/2}p_T(X(0),y)\right|=\mathcal{O}(h)\quad\text{as}~h\rightarrow0,
\end{align*}
where $p^{N^h}_T(X_0,y)=\mathbb{E}\left[\delta_y\circ X_{N^h}\right]$, $p_T(X(0),y)=\mathbb{E}\left[\delta_y\circ X(T)\right]$ are the density functions of $X_{N^h}$ and $X(T)$, respectively.
\end{cor}
\begin{proof}
Lemmas \ref{NS} and \ref{NDI} and Theorem \ref{UD} yield
that $X(T)$ and $X_{N^h}$ are non-degenerate functionals. Thus, from \eqref{delta}, it follows that for any $\alpha>\beta+2m/q+1$ with $1/p+1/q=1$, we have
$(1-\Delta)^{\beta/2}\delta_y\circ X(T)\in\mathbb{D}^{-\alpha,p},$ and $
(1-\Delta)^{\beta/2}\delta_y\circ X_{N^h}\in\mathbb{D}^{-\alpha,p}.$
By \cite[Theorem 4.3]{IW84}, the map $y\mapsto\mathbb{E}\left[G\cdot\delta_y\circ X_{N^h}\right]$ is $\beta$-times continuously differentiable.
From the definition of $\mathbb{D}^{-\alpha,p}$, it follows that
\begin{align*}
&(1-\Delta)^{\beta/2}\mathbb{E}\left[G\cdot\delta_y\circ X_{N^h}\right]-(1-\Delta)^{\beta/2}\mathbb{E}\left[G\cdot\delta_y\circ X(T)\right]\\
&=\mathbb{E}\left[G\cdot\left\{(1-\Delta)^{\beta/2}\left[\delta_y\circ X_{N^h}\right]-(1-\Delta)^{\beta/2}\left[\delta_y\circ X(T)\right]\right\}\right]\\
&\le\left\|(1-\Delta)^{\beta/2}\delta_y\circ X_{N^h}-(1-\Delta)^{\beta/2}\delta_y\circ X(T)\right\|_{-\alpha,p}\|G\|_{\alpha,q}.
\end{align*}
Taking the supremum over $y\in\mathbb{R}^{2m}$ completes the proof.
\end{proof}
\section{Numerical experiments}\label{S7}
In this section, we implement some numerical tests to verify our theoretical result on the strong convergence rate of scheme \eqref{split sol}. In particular, we consider the following two stochastic Langevin equations.
\textit{Example 1:} Taking $m=1$, $d=1$ and $F(Q)=Q^4$, consider the following $2$-dimensional Langevin equation
\begin{equation}\label{example1}
\begin{split}
&\,\mathrm d P=-4Q^3\,\mathrm d t-vP\,\mathrm d t+\sigma\,\mathrm d W(t),\\
&\,\mathrm d Q=P\,\mathrm d t,
\end{split}
\end{equation}
where $v>0, \,\sigma$ are fixed constants.
\textit{Example 2:} Taking $m=2$, $d=2$ and $F(Q)=Q_1^8+Q_2^2+2Q_1Q_2$, consider the following $4$-dimensional Langevin equation
\begin{equation}\label{example2}
\begin{split}
&\,\mathrm d P_1=-8Q_1^7\,\mathrm d t-2Q_2\,\mathrm d t-vP_1\,\mathrm d t+\sigma_{11}\,\mathrm d W_1(t)+\sigma_{12}\,\mathrm d W_2(t),\\
&\,\mathrm d P_2=-2Q_2\,\mathrm d t-2Q_1\,\mathrm d t-vP_2\, \mathrm d t+\sigma_{21}\,\mathrm d W_1(t)+\sigma_{22}\,\mathrm d W_2(t),\\
&\,\mathrm d Q_1=P_1\,\mathrm d t,\\
&\,\mathrm d Q_2=P_2\,\mathrm d t,
\end{split}
\end{equation}
where $v>0$ and $\sigma_{ij}$, $i,j=1,2$, are fixed constants.
In the following experiments, we choose $\sigma=1,\,P(0)=Q(0)=1$ in equation \eqref{example1} and $\sigma_{ij}=1,\,i,j=1,2,\,P_i(0)=Q_i(0)=1,\,i=1,2$ in equation \eqref{example2}.
Mean square errors of the numerical solutions against the stepsize $h$ on a log-log scale are shown in Figure \ref{fig2}.
In this experiment, we compute the mean square errors at the final time $T=1$ with stepsizes ranging from $h=2^{-7}$ to $h=2^{-11}$. The reference solution is computed by the tamed Euler scheme with stepsize $h_{ref}=2^{-14}$. The expectation is approximated by averaging over 200 and 2000 samples, represented by the green and blue solid lines, respectively.
The reference red dashed line has slope $1$. Figure \ref{fig2} illustrates that the strong convergence order of the splitting AVF scheme \eqref{split sol} is consistent with the theoretical result in Theorem \ref{SC1}.
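The experiment for Example 1 can be reproduced in outline as follows. This is a hedged sketch: it uses a tamed, semi-implicit Euler discretization as a stand-in for the splitting AVF scheme \eqref{split sol} (which is defined earlier in the paper and not reproduced here), so it only illustrates the first-order strong convergence qualitatively; all parameter values and sample sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
v, sigma, T = 1.0, 1.0, 1.0
h_ref = 2.0**-11                       # reference stepsize

def tamed_path(h, dW):
    """Tamed semi-implicit Euler for dP = (-4Q^3 - vP)dt + sigma dW, dQ = P dt."""
    P, Q = 1.0, 1.0                    # P(0) = Q(0) = 1
    for dw in dW:
        fP = -4.0*Q**3 - v*P
        P = P + h*fP/(1.0 + h*abs(fP)) + sigma*dw   # tamed drift increment
        Q = Q + h*P
    return P, Q

def ms_error(h, n_samples=100):
    """RMS error at time T against a fine-step run driven by the same path."""
    n, ratio = int(T/h), int(h/h_ref)
    errs = []
    for _ in range(n_samples):
        dW_fine = np.sqrt(h_ref)*rng.standard_normal(int(T/h_ref))
        dW = dW_fine.reshape(n, ratio).sum(axis=1)  # coarse-grained increments
        Pr, Qr = tamed_path(h_ref, dW_fine)
        Ph, Qh = tamed_path(h, dW)
        errs.append((Ph - Pr)**2 + (Qh - Qr)**2)
    return float(np.sqrt(np.mean(errs)))

e_coarse, e_fine = ms_error(2.0**-5), ms_error(2.0**-7)
order = np.log2(e_coarse/e_fine)/2.0   # h was halved twice between the two runs
assert 0.5 < order < 1.5               # consistent with strong order one
```

Since the noise is additive, Euler-type schemes already exhibit strong order one here, which is why this crude stand-in reproduces the slope of the red dashed reference line.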
\begin{figure}
\centering
\subfloat[$m=1,\,v=1$]{\includegraphics[width=0.5\columnwidth]{1d}}
\subfloat[$m=2,\,v=1$]{\includegraphics[width=0.5\columnwidth]{2d}}\\
\caption{Mean square convergence rate of splitting AVF method for stochastic Langevin equations.}\label{fig2}
\vspace{0.2in}
\end{figure}
\textbf{Acknowledgments.}
The authors are very grateful to Professor Yaozhong Hu (University of Alberta) for his helpful discussions and suggestions.
\bibliographystyle{plain}
| {
"timestamp": "2019-06-11T02:08:52",
"yymm": "1906",
"arxiv_id": "1906.03439",
"language": "en",
"url": "https://arxiv.org/abs/1906.03439",
"abstract": "In this article, we study the density function of the numerical solution of the splitting averaged vector field (AVF) scheme for the stochastic Langevin equation. To deal with the non-globally monotone coefficient in the considered equation, we first present the exponential integrability properties of the exact and numerical solutions. Then we show the existence and smoothness of the density function of the numerical solution by proving its uniform non-degeneracy in Malliavin sense. In order to analyze the approximate error between the density function of the exact solution and that of the numerical solution, we derive the optimal strong convergence rate in every Malliavin--Sobolev norm of the numerical scheme via Malliavin calculus. Combining the approximation result of Donsker's delta function and the smoothness of the density functions, we prove that the convergence rate in density coincides with the optimal strong convergence rate of the numerical scheme.",
"subjects": "Probability (math.PR); Numerical Analysis (math.NA)",
"title": "Convergence in Density of Splitting AVF Scheme for Stochastic Langevin Equation",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9877587272306606,
"lm_q2_score": 0.8152324826183822,
"lm_q1q2_score": 0.8052529994282248
} |
https://arxiv.org/abs/1608.04571 | A simple algorithm to find the L-curve corner in the regularisation of ill-posed inverse problems |
We propose a simple algorithm to locate the "corner" of an L-curve, a function often used to select the regularisation parameter for the solution of ill-posed inverse problems. The algorithm involves the Menger curvature of a circumcircle and the golden section search method. It efficiently finds the regularisation parameter value corresponding to the maximum positive curvature region of the L-curve. The algorithm is applied to some commonly available test problems and compared to the typical way of locating the L-curve corner by means of its analytical curvature. The application of the algorithm to the data processing of an electrical resistance tomography experiment on thin conductive films is also reported. |
\section{Introduction}
The solution $\bm{\hat x}$ of an ill-posed inverse problem is often formulated in terms of residuals
\begin{equation}
\begin{aligned}
\label{eqn:lsq}
\bm{\hat x} =\textup{arg } \underset{\bm{x}}{\textup{min}} \left \{||\bm{A}\bm{x}-\bm{b}||^{2}\right\}, \quad \bm{A} \in \mathds{R}^{m \times n}, \quad m>n
\end{aligned}
\end{equation}
where the quantity $\bm{Ax}-\bm{b}$ is the vector of residuals between the experimental data vector $\bm{b}$ and the reconstructed data $\bm{A}\bm{x}$ for a given $\bm{x}$. Regularization techniques are applied to make the problem less sensitive to the noise of $\bm{b}$ and find a stable solution.
One popular regularization method is to replace problem~\ref{eqn:lsq} with
\begin{equation}
\begin{aligned}
\label{eqn:funct}
\bm{\hat x}_{\lambda}= \textup{arg } \underset{\bm{x}}{\textup{min}} \left \{ ||\bm{A}\bm{x}-\bm{b}||^{2}+\lambda \bm{R}(\bm{x})\right \},\quad \lambda \in \mathds{R}, \quad \lambda \geq 0
\end{aligned}
\end{equation}
where the regularization term $\bm{R}(\bm{x})$ represents a cost function, which may include some prior information about the solution. The scalar factor $\lambda$ is the \emph{regularization parameter}, serving as a weighting factor for $\bm{R}(\bm{x})$. The choice of $\lambda$ is crucial for the solution to be meaningful.
As an example, we consider the regularization method named after Tikhonov~\cite{tikhonov2013numerical}, in which $\bm{R}(\bm{x}) = ||\bm{x}||^{2}$.
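For Tikhonov regularization, the minimizer of~\ref{eqn:funct} with $\bm{R}(\bm{x})=||\bm{x}||^{2}$ can be computed by ordinary least squares on an augmented system. A minimal numerical sketch (the matrix $\bm{A}$ and data $\bm{b}$ below are random placeholders, not from any specific problem):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, lam = 12, 6, 1e-2
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# Tikhonov: min ||Ax - b||^2 + lam*||x||^2  ==  ordinary LSQ on a stacked system
A_aug = np.vstack([A, np.sqrt(lam) * np.eye(n)])
b_aug = np.concatenate([b, np.zeros(n)])
x_lam = np.linalg.lstsq(A_aug, b_aug, rcond=None)[0]

# the same minimizer via the normal equations (A^T A + lam*I) x = A^T b
x_ne = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
assert np.allclose(x_lam, x_ne)
```

The augmented form is numerically preferable to the normal equations for ill-conditioned $\bm{A}$, since it avoids squaring the condition number.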
Several methods~(see \cite[p.~44]{almasy2009inverse} and references therein) have been developed in order to find an optimal tuning of $\lambda$ for a given problem.
Of particular interest is the L-curve method~\cite{hansen1992analysis}.
The L-curve is a two-dimensional curve, parametric in $\lambda$, defined by points with Cartesian coordinates
\begin{equation}
\label{eqn:lcurve}
P(\lambda)=(\xi(\lambda), \eta(\lambda))\rightarrow\begin{cases}
& \xi(\lambda)= \log||\bm{Ax}-\bm{b}||^2\\
&\eta(\lambda)= \log||\bm{x}||^2
\end{cases}
\end{equation}
The point of maximum positive curvature $P(\lambda_\mathrm{opt})$, the ``corner'', can be associated with the optimal regularization parameter $\lambda_\mathrm{opt}$. The underlying concept is that the ``corner'' represents a compromise between the fitting of the data and the regularization of the solution~\cite{hansen1993use}.
Numerical search algorithms have been proposed for the identification of $\lambda_\mathrm{opt}$; among them we mention the splines method~\cite{hansen1992analysis,hansen2007adaptive}, the triangle method~\cite{castellanos2002triangle} and the L-ribbon method~\cite{calvetti1999estimation}.
Here we propose an alternative method, based on an estimation of the local curvature of the L-curve from three sampled points (which define a circle) and on a sampling update rule based on the golden section search. The method attempts to reduce the computational effort by minimizing the number of L-curve points that must be explicitly computed. A detailed algorithm is described, and an application to a reconstruction problem in electrical resistance tomography is reported.
\section{Algorithm}
\label{sec:alg}
The proposed Algorithm~\ref{alg} is written in pseudo-code and calls two functions. The function $P=\texttt{l\_curve\_P}(\lambda)$ depends on the specific regularization problem being solved; it is assumed that at each call, given the regularization parameter $\lambda$ as input, it solves problem~\ref{eqn:funct} and provides as output the point $P(\lambda)$, that is, the coordinates $\xi(\lambda)$ and $\eta(\lambda)$ of the L-curve. The function $C_k=\texttt{menger}(P_j, P_k, P_\ell)$ is defined below in Sec.~\ref{sec:curv}.
The algorithm is iterative and identifies $\lambda_\mathrm{opt}$ with the golden section search, described in Sec.~\ref{sec:gss}.
\subsection{Curvature}
\label{sec:curv}
The function $C_k=\texttt{menger}(P_j, P_k, P_\ell)$ is based on the definition of the curvature of a circle through three points given by Menger~\cite{menger1930untersuchungen}. In our case, three values $\lambda_{j}<\lambda_{k}<\lambda_{\ell}$ of the regularization parameter identify three points $P(\lambda_{j})$, $P(\lambda_{k})$ and $P(\lambda_{\ell})$ on the L-curve.
Here we follow the notation of~\ref{eqn:lcurve} for the coordinates of a generic point $P(\lambda)$. For the sake of notational simplicity, the substitution~\ref{eqn:subst} is made:
%
\begin{equation}
\begin{aligned}
\label{eqn:subst}
\xi(\lambda_i)&\rightarrow \xi_i,\\
\eta(\lambda_i)&\rightarrow \eta_i,\\
P(\lambda_i)&\rightarrow P_i.
\end{aligned}
\end{equation}
So we define a signed curvature $C_{k}$ of the circumcircle as
\begin{equation}
\begin{aligned}
\label{eqn:menga}
C_k = \frac{2\cdot(\xi_{j}\eta_{k}+\xi_{k}\eta_{\ell}+\xi_{\ell}\eta_{j}-\xi_{j}\eta_{\ell}-\xi_{k}\eta_{j}-\xi_{\ell}\eta_{k})}
{\Big(\overline{P_{j}P_{k}}\cdot \overline{P_{k}P_{\ell}}\cdot \overline{P_{\ell}P_{j}}\Big)^{1/2}},
\end{aligned}
\end{equation}
where
\begin{equation}
\begin{aligned}
\label{eqn:defs}
\overline{P_{j}P_{k}}&=(\xi_{k}-\xi_{j})^{2}+(\eta_{k}-\eta_{j})^{2},\\
\overline{P_{k}P_{\ell}}&=(\xi_{\ell}-\xi_{k})^{2}+(\eta_{\ell}-\eta_{k})^{2},\\
\overline{P_{\ell}P_{j}}&=(\xi_{j}-\xi_{\ell})^{2}+(\eta_{j}-\eta_{\ell})^{2},
\end{aligned}
\end{equation}
are the squared Euclidean distances between the sampled L-curve points. Note that we choose to index the curvature with the intermediate index ($k$) of the three points.
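The curvature~\ref{eqn:menga} is straightforward to implement; the sketch below is a direct transcription, and the sanity check exploits the fact that three points taken counterclockwise on a circle of radius $r$ must yield curvature $1/r$:

```python
import numpy as np

def menger(P_j, P_k, P_l):
    """Signed Menger curvature (eq. menga) of the circumcircle through three points."""
    (xj, yj), (xk, yk), (xl, yl) = P_j, P_k, P_l
    num = 2.0 * (xj*yk + xk*yl + xl*yj - xj*yl - xk*yj - xl*yk)
    # product of the squared distances of eq. defs, then one square root
    den = (((xk - xj)**2 + (yk - yj)**2) *
           ((xl - xk)**2 + (yl - yk)**2) *
           ((xj - xl)**2 + (yj - yl)**2))**0.5
    return num / den

# three counterclockwise points on a circle of radius r -> curvature +1/r
r = 2.5
pts = [(r*np.cos(t), r*np.sin(t)) for t in (0.1, 1.0, 2.0)]
assert abs(menger(*pts) - 1.0/r) < 1e-12
# reversing the orientation flips the sign
assert abs(menger(*pts[::-1]) + 1.0/r) < 1e-12
```

The sign convention matters below: traversing the L-curve in the direction of increasing $\lambda$ makes the corner a counterclockwise turn, hence a positive $C_k$.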
\subsection{Golden Section Search}
\label{sec:gss}
The algorithm is initialized by assigning the search interval $[\lambda_1,\lambda_4]$. Two other values $\lambda_{2}$ and $\lambda_{3}$ are calculated as
\begin{equation}
\begin{aligned}
\label{eqn:goldenratio}
\lambda_{2} &= (\lambda_{4}+\varphi\cdot\lambda_{1})/(1+\varphi), \\
\lambda_{3} &= \lambda_{1}+(\lambda_{4}-\lambda_{2}),\\
\end{aligned}
\end{equation}
where $\varphi = (1+\sqrt{5})/2$ is the golden ratio~\cite{kiefer1953sequential}.
%
Four values of $\lambda$ define four points on the L-curve and allow the calculation of two curvatures: $C_{2}$ from $\{P(\lambda_{1})$, $P(\lambda_{2})$, $P(\lambda_{3})\}$ and $C_{3}$ from $\{P(\lambda_{2})$, $P(\lambda_{3})$, $P(\lambda_{4})\}$.
The curvatures $C_{2}$ and $C_{3}$ are compared; consistent reassignment and recalculation are done in order to work at each iteration with four points $P(\lambda_1)\ldots P(\lambda_4)$.
The algorithm terminates when the search interval
$[\lambda_1,\lambda_4]$ is smaller than a specified threshold $\epsilon$ and returns $\lambda_\mathrm{opt}$.
Note that the curvature $C_3$ associated with the right-hand circle may be negative at the initial stage of the search, since $C_k$ is defined with a sign in~\ref{eqn:menga}. Moreover, by definition the corner must always correspond to a positive curvature, and it lies on the left side of the plot. For these reasons the algorithm performs a check: while $C_{3} < 0$, the search extreme $\lambda_{1}$ is kept fixed, $\lambda_{4}$ is shifted toward smaller values, and $\lambda_{2}$ and $\lambda_{3}$ are recalculated. The condition on $C_3$ is strong enough that, even in the case of two negative curvatures, it guarantees convergence toward the corner.
Some considerations are in order: (a) according to the golden section search method, the algorithm needs to recalculate only one $P(\lambda)$ at each iteration (except for the first); the others can simply be reassigned, which limits the computational effort;
(b) since $P(\lambda_{1})$ and $P(\lambda_{4})$ are far apart during the first iterations, $C_2$ and $C_3$ are only rough approximations of the curvature of the L-curve in different regions, but they become more accurate as the distance between the search extremes decreases.
\begin{algorithm}[H]
\caption{L-curve corner search}
\label{alg}
\begin{algorithmic}[1]
\STATE{Initialize $\lambda_{1}$ and $\lambda_{4}$;}
\COMMENT{search extremes}
\STATE{Assign $\epsilon$;}
\COMMENT{termination threshold}
\STATE{$\varphi \leftarrow (1+\sqrt{5})/2$;}
\COMMENT{golden section}
\STATE{$\lambda_{2} \leftarrow (\lambda_{4}+\varphi\cdot\lambda_{1})/(1+\varphi)$;}
\STATE{$\lambda_{3} \leftarrow \lambda_{1}+(\lambda_{4}-\lambda_{2})$;}
\FOR{$i=1$ to $4$}
\STATE{$P_{i}\leftarrow$\texttt{l\_curve\_P}($\lambda_i$);}
\COMMENT{func. \texttt{l\_curve\_P} returns~\ref{eqn:lcurve}}
\ENDFOR
\REPEAT
\STATE{$C_2 \leftarrow$\texttt{menger}($P_1$,$P_2$,$P_3$);}
\COMMENT{func. \texttt{menger} calls~\ref{eqn:menga}}
\STATE{$C_3 \leftarrow$\texttt{menger}($P_2$,$P_3$,$P_4$);}
\REPEAT
\STATE{$\lambda_{4} \leftarrow \lambda_{3}$;\quad $P_{4} \leftarrow P_{3}$;}
\STATE{$\lambda_{3} \leftarrow \lambda_{2}$;\quad $P_{3} \leftarrow P_{2}$;}
\STATE{$\lambda_{2} \leftarrow (\lambda_{4}+\varphi\cdot\lambda_{1})/(1+\varphi)$;}
\STATE{$P_{2}\leftarrow$\texttt{l\_curve\_P}($\lambda_2$);}
\STATE{$C_3 \leftarrow$\texttt{menger}($P_2$,$P_3$,$P_4$);}
\UNTIL{$C_3 >0$}
\IF{$C_{2} > C_{3}$}
\STATE{$\lambda_\mathrm{opt}\leftarrow\lambda_{2}$;}
\COMMENT{store $\lambda_\mathrm{opt}$}
\STATE{$\lambda_{4} \leftarrow \lambda_{3}$; \quad $P_{4} \leftarrow P_{3}$;}
\STATE{$\lambda_{3} \leftarrow \lambda_{2}$; \quad $P_{3} \leftarrow P_{2}$;}
\STATE{$\lambda_{2} \leftarrow (\lambda_{4}+\varphi\cdot\lambda_{1})/(1+\varphi)$;}
\STATE{$P_{2}\leftarrow$\texttt{l\_curve\_P}($\lambda_2$);}
\COMMENT{only $P_2$ is recalculated}
\ELSE
\STATE{$\lambda_\mathrm{opt}\leftarrow\lambda_{3}$}
\STATE{$\lambda_{1} \leftarrow \lambda_{2}$; \quad $P_{1} \leftarrow P_{2}$;}
\STATE{$\lambda_{2} \leftarrow \lambda_{3}$; \quad $P_{2} \leftarrow P_{3}$;}
\STATE{$\lambda_{3} \leftarrow \lambda_{1}+(\lambda_{4}-\lambda_{2})$;}
\STATE{$P_{3}\leftarrow$\texttt{l\_curve\_P}($\lambda_3$);}
\COMMENT{only $P_3$ is recalculated}
\ENDIF
\UNTIL{ $(\lambda_{4}-\lambda_{1})/\lambda_{4}<\epsilon$}
\RETURN $\lambda_\mathrm{opt}$
\end{algorithmic}
\end{algorithm}
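Algorithm~\ref{alg} can be transcribed almost line by line. The sketch below is a Python rendition; the diagonal Tikhonov test problem driving it is an illustrative placeholder (not the ERT problem of Sec.~\ref{sec:experiments}), and \texttt{l\_curve\_P} is the user-supplied callable described in Sec.~\ref{sec:alg}:

```python
import numpy as np

PHI = (1.0 + np.sqrt(5.0)) / 2.0          # golden ratio

def menger(Pj, Pk, Pl):
    """Signed Menger curvature of the circle through three points (eq. menga)."""
    (xj, yj), (xk, yk), (xl, yl) = Pj, Pk, Pl
    num = 2.0 * (xj*yk + xk*yl + xl*yj - xj*yl - xk*yj - xl*yk)
    den = (((xk-xj)**2 + (yk-yj)**2) * ((xl-xk)**2 + (yl-yk)**2)
           * ((xj-xl)**2 + (yj-yl)**2))**0.5
    return num / den

def corner_search(l_curve_P, lam1, lam4, eps=1e-2):
    """Golden-section search for the L-curve corner (Algorithm 1)."""
    lam = [lam1, (lam4 + PHI*lam1) / (1.0 + PHI), 0.0, lam4]
    lam[2] = lam[0] + (lam[3] - lam[1])
    P = [l_curve_P(l) for l in lam]
    lam_opt = lam[1]
    while (lam[3] - lam[0]) / lam[3] >= eps:
        C2 = menger(P[0], P[1], P[2])
        C3 = menger(P[1], P[2], P[3])
        while C3 < 0.0:                   # shift lambda_4 left until C3 > 0
            lam[3], P[3] = lam[2], P[2]
            lam[2], P[2] = lam[1], P[1]
            lam[1] = (lam[3] + PHI*lam[0]) / (1.0 + PHI)
            P[1] = l_curve_P(lam[1])
            C3 = menger(P[1], P[2], P[3])
        if C2 > C3:                       # corner on the left: drop lambda_4
            lam_opt = lam[1]
            lam[3], P[3] = lam[2], P[2]
            lam[2], P[2] = lam[1], P[1]
            lam[1] = (lam[3] + PHI*lam[0]) / (1.0 + PHI)
            P[1] = l_curve_P(lam[1])      # only P_2 is recalculated
        else:                             # corner on the right: drop lambda_1
            lam_opt = lam[2]
            lam[0], P[0] = lam[1], P[1]
            lam[1], P[1] = lam[2], P[2]
            lam[2] = lam[0] + (lam[3] - lam[1])
            P[2] = l_curve_P(lam[2])      # only P_3 is recalculated
    return lam_opt

# illustrative diagonal Tikhonov problem: A = diag(s), x_lam = s*b/(s^2 + lam)
s = np.logspace(0, -6, 32)
b = s + 1e-4 * np.random.default_rng(1).standard_normal(32)

def l_curve_P(lam):
    x = s * b / (s**2 + lam)
    r = b - s * x
    return (np.log(r @ r), np.log(x @ x))

lam_opt = corner_search(l_curve_P, 1e-12, 1.0)
assert 1e-12 <= lam_opt <= 1.0
```

The golden-section invariant guarantees that each outer iteration shrinks $[\lambda_1,\lambda_4]$ by the same constant factor, so the relative termination criterion is always met in a bounded number of iterations.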
\begin{figure}[h]
\centering
\subfloat[Iteration 1]{\label{fig:lcurves1}\includegraphics[height=9cm]{L1.eps}}
\subfloat[Iteration 6]{\label{fig:lcurves2}\includegraphics[height=9cm]{L2.eps}}
\subfloat[Iteration 28]{\label{fig:lcurves3}\includegraphics[height=9cm]{L3.eps}}
\caption{The proposed algorithm at various iterations. The reference L-curve is reported as a solid line. Solid circles represent the points $P(\lambda)$ evaluated at the present iteration. Empty circles represent the points evaluated at past iterations. In~\ref{fig:lcurves1} arrows indicate the initial search extremes. In~\ref{fig:lcurves3} the four points are no more resolved, see Fig.~\ref{fig:zoom} for clarity.}
\label{fig:lcurves}
\end{figure}
\section{Example of application}
\label{sec:experiments}
In the following we show an example of application of the algorithm to electrical resistance tomography (ERT)~\cite{holder2004electrical, seo2013electrical}. This example aims to image the spatial distribution of the sheet conductance of tin-oxide films on glass substrates having circular geometry, with electrical contacts on the sample boundary as shown in Fig.~\ref{fig:reca}.
Four-terminal resistance measurements among the contacts are performed with a scanning setup; the values are the elements of the data vector $\bm{b}$. A detailed description of the experiment is given in~\cite{cultrera2016Electrical}.
The reconstructed image (shown in Fig.~\ref{fig:recb}) is obtained by solving a discretized Laplace equation. Tikhonov regularization is applied.
Two software packages of common usage in the solution of ERT problems (EIDORS~\cite{adler2005eidors} and Regularisation Tools~\cite{hansen1994regularization}) are employed here, running under the Matlab environment. EIDORS routines are used to generate a two-dimensional circular mesh (2304 elements) with 16 contact points (corresponding to a $\bm{b}$ of size 208) on the mesh boundary, to discretize the Laplace equation and thus obtain the matrix $\bm{A}$.
The Regularisation Tools library is employed to implement the function \texttt{l\_curve\_P($\lambda$)}. Other EIDORS routines are used to graphically render the solution $\bm{x}$ as in Fig.~\ref{fig:recb}.
The proposed algorithm is implemented in Matlab as well. Figures~\ref{fig:lcurves} and~\ref{fig:zoom} show the evolution of the search for $\lambda_\mathrm{opt}$ performed by the algorithm. These figures also display a full L-curve, obtained by densely sampling \texttt{l\_curve\_P($\lambda$)}, as a reference.
The algorithm is run by choosing as initial search extremes $\lambda_{1} = 1\cdot10^{-10}$ and $\lambda_{4} = 1\cdot10^{-3}$.
Fig.~\ref{fig:lcurves} reports three snapshots of the running algorithm. Empty circles represent points visited at previous iterations, while filled circles represent the four points $P_1 \ldots P_4$ of the given iteration.
Fig.~\ref{fig:zoom} shows a detail of the last iteration. In Fig.~\ref{fig:zoom1} the reader can notice the ``corner'' of the L-curve. Fig.~\ref{fig:zoom2} shows the $P(\lambda_i)$ points of the last iteration, where the magnification factor no longer allows the curvature to be appreciated.
\begin{figure}[h]
\centering
\subfloat[]{\label{fig:zoom1}\includegraphics[height=7cm]{L3Z1.eps}}
\subfloat[]{\label{fig:zoom2}\includegraphics[height=7cm]{L3Z2.eps}}
\caption{Detail of the last iteration. In~\ref{fig:zoom1} the ``corner'' of the present L-curve is visible. In~\ref{fig:zoom2} the four $P(\lambda)$ are resolved. The label indicates $\lambda_\mathrm{opt}$.}
\label{fig:zoom}
\end{figure}
Fig.~\ref{fig:conv} shows the evolution of the algorithm towards convergence. For the sake of generality, the search extremes were chosen very far from each other; when knowledge about the problem allows one to restrict the $\lambda$ search interval, the number of iterations can be reduced.
\begin{figure}[tbhp]
\centering
\includegraphics[width=8cm]{convergence.eps}
\caption{Behaviour of the algorithm versus iteration number for the present data with a threshold $\epsilon = 1\%$. The steep slope corresponds to lines 9-18 of the algorithm~\ref{alg}. The last point (iteration 28) corresponds to $\lambda_\mathrm{opt}$.}
\label{fig:conv}
\end{figure}
The $\lambda_\mathrm{opt}$ returned by the algorithm with $\epsilon = 1\%$ is $1.84\cdot10^{-6}$. This result is used to perform the final image reconstruction shown in Fig.~\ref{fig:recb}.
\begin{figure}[tbhp]
\centering
\subfloat[Scheme of the sample. Colors represent nominal values of sheet conductance.]{\label{fig:reca}\includegraphics[width=6cm]{reca.eps}}
\qquad
\subfloat[Sheet conductance map reconstructed from the data with $\lambda=1.84\cdot10^{-6}$.]{\label{fig:recb}\includegraphics[width=6cm]{recb.eps}}
\caption{Sheet conductance map obtained with the optimal regularization parameter $\lambda_\mathrm{opt}$ returned by our algorithm.}
\label{fig:rec}
\end{figure}
\section{Conclusions}
\label{sec:conclusions}
The proposed algorithm allows, for a given inverse problem of the form~\ref{eqn:lsq}, the determination of the regularization parameter $\lambda_\mathrm{opt}$ corresponding to the maximum positive curvature of the L-curve. The algorithm is designed for maximum simplicity of implementation on top of already existing solvers. In a test on a real electrical resistance tomography problem, convergence was achieved in a couple of dozen iterations, each iteration corresponding to one solution of~\ref{eqn:funct}.
\bibliographystyle{plain}
| {
"timestamp": "2016-08-17T02:08:15",
"yymm": "1608",
"arxiv_id": "1608.04571",
"language": "en",
"url": "https://arxiv.org/abs/1608.04571",
"abstract": "We propose a simple algorithm to locate the \"corner\" of an L-curve, a function often used to select the regularisation parameter for the solution of ill-posed inverse problems. The algorithm involves the Menger curvature of a circumcircle and the golden section search method. It efficiently finds the regularisation parameter value corresponding to the maximum positive curvature region of the L-curve. The algorithm is applied to some commonly available test problems and compared to the typical way of locating the l-curve corner by means of its analytical curvature. The application of the algorithm to the data processing of an electrical resistance tomography experiment on thin conductive films is also reported.",
"subjects": "Numerical Analysis (math.NA)",
"title": "A simple algorithm to find the L-curve corner in the regularisation of ill-posed inverse problems",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9740426450627305,
"lm_q2_score": 0.8267117898012105,
"lm_q1q2_score": 0.8052525384425152
} |
https://arxiv.org/abs/1102.5590 | On uniqueness of the Laplace transform on time scales |
After introducing the concept of null functions, we shall present a uniqueness result in the sense of the null functions for the Laplace transform on time scales with arbitrary graininess. The result can be regarded as a dynamic extension of the well-known Lerch's theorem. |
\section{Introduction}\label{intro}
The Laplace transform is one of the fundamental integral transformations used in mathematical analysis.
It is essentially bijective for the majority of practical uses, and it has the useful property that
many relationships and operations over the original functions correspond to
simpler relationships and operations over the image functions.
The discrete analogue of the Laplace transform is the $Z$-transform,
which also converts a sequence of real or complex numbers
into a complex frequency-domain representation and is likewise bijective.
The calculus on time scales was initiated by Hilger (see \cite{MR1066641}) in order to
unify the theories of continuous and discrete analysis.
The Laplace transform on time scales was introduced by Hilger in \cite{MR1722974},
but in a form that tries to unify the (continuous) Laplace transform
and the (discrete) $Z$-transform.
For arbitrary time scales, the Laplace transform was
introduced and investigated by Bohner and Peterson in \cite{MR1948468}
(see also \cite[Section~3.10]{MR1843232}).
Let $\sup\T=\infty$. For a locally $\Delta$-integrable function $f:[s,\infty)_{\T}\to\C$,
i.e., one that is $\Delta$-integrable over each compact interval of $[s,\infty)_{\T}$, the Laplace transform is defined to be
\begin{equation}
\lt{}\{f\}(z):=\int_{s}^{\infty}f(\eta)\ef_{\ominus{}z}(\sigma(\eta),s)\Delta\eta\quad\text{for}\ z\in\mathcal{D},\notag
\end{equation}
where $\mathcal{D}$ consists of such complex numbers for which the improper integral converges.
In order to determine an explicit region of convergence,
conditions on the class of the determining functions should be provided.
This was done by Davis et al.\ in \cite{MR2324337},
where some restrictions were imposed on the graininess $\mu$.
In a recent paper \cite{bo/gu/ka10}, Bohner et al.\ removed the restriction on the graininess
of the time scale and considered some fundamental properties of the
Laplace transform on time scales.
The readers may be referred to \cite{MR1129464,MR1716143,MR0622023} for the basic properties
of the usual Laplace transform.
For other results about the Laplace transform on time scales,
see \cite{MR2597442,MR2320804,MR2585078,MR2679122}.
The uniqueness property of the Laplace transform and of the $Z$-transform is well known
(see \cite{MR1129464,MR1716143,MR0622023}) and is a necessary tool for the inverse Laplace transform.
To the best of our knowledge, nothing has been published up to now on the uniqueness of the
Laplace transform on arbitrary time scales.
In this paper, we shall provide a uniqueness result for the Laplace transform on time scales with arbitrary
graininess, which reduces to the well-known theorem of Lerch in the continuous case.
On time scales with constant graininess ($\R$ and $\Z$),
our result gives a unified proof of the uniqueness of the Laplace transform
(the usual Laplace transform and the $Z$-transform).
The paper is organized as follows:
In Section~\ref{al}, we present some results that are required in the proof of
the main result.
In Section~\ref{ult}, we state and prove our main results together with some
necessary definitions.
Finally, in Section~\ref{pts}, which serves as an appendix, we recall a short account of
the time scale calculus.
\section{Auxiliary Lemmas}\label{al}
We define the minimal graininess function $\mu_{\ast}:\T\to\R_{0}^{+}$ by
\begin{equation}
\mu_{\ast}(s):=\inf\big\{\mu(t):\ t\in[s,\infty)_{\T}\big\}\quad\text{for}\ s\in\T\notag
\end{equation}
and the set of positively regressive constants $\creg{+}(\T,\C)$ by
\begin{equation}
\creg{+}(\T,\C):=\big\{z\in\C:\ 1+z\mu(t)>0\ \text{for all}\ t\in\T\big\}.\notag
\end{equation}
For $h\in\R_{0}^{+}$ and $\lambda\in\creg{+}(\T,\R)$, we also define the set $\C_{h}(\lambda)$ by
\begin{equation}
\C_{h}(\lambda):=\big\{z\in\C_{h}:\ \Rl_{h}(z)>\lambda\big\}.\notag
\end{equation}
We proceed with a result quoted from \cite{bo/gu/ka10}.
\begin{lemma}[See {\protect\cite[Theorem~3.4(iii)]{bo/gu/ka10}}]\label{allm1}
Let $\sup\T=\infty$, $s\in\T$, $\lambda\in\creg{+}([s,\infty)_{\T},\R)$ and $z\in\C_{\mu_{\ast}(s)}(\lambda)$. Then
\begin{equation}
\lim_{t\to\infty}\ef_{\lambda\ominus{}z}(t,s)=0.\notag
\end{equation}
\end{lemma}
The inclusion $\R^{+}\subset\C_{\mu_{\ast}(s)}(0)$ for any $s\in\T$ yields the following corollary.
\begin{corollary}\label{alcrl1}
Let $\sup\T=\infty$, $s\in\T$ and $x\in\R^{+}$. Then
\begin{equation}
\lim_{t\to\infty}\ef_{\ominus{}x}(t,s)=0\quad\text{and}\quad\lim_{t\to\infty}\ef_{x}(t,s)=\infty.\notag
\end{equation}
\end{corollary}
Next, we present a result on an asymptotic property of the time scale exponential function.
\begin{lemma}\label{allm2}
Let $s,t\in\T$ and $\lambda\in\R^{+}$. Then
\begin{equation}
\lim_{x\to\infty}\big[x^{\lambda}\ef_{\ominus{}x}(t,s)\big]=
\begin{cases}
0,&t>s\\
\infty,&t\leq s.
\end{cases}\notag
\end{equation}
\end{lemma}
\begin{proof}
As we will be considering the limit as $x\to\infty$, we may assume that $x\in\R^{+}$.
First, consider the case $s,t\in\T$ with $t>s$, and pick $n\in\N$ with $n>\lambda$.
By Taylor's formula, we have
\begin{equation}
\ef_{x}(t,s)=\sum_{\ell=0}^{n}x^{\ell}\hf_{\ell}(t,s)+x^{n+1}\int_{s}^{t}\hf_{n}(t,\sigma(\eta))\ef_{x}(\eta,s)\Delta\eta\geq{}x^{n}\hf_{n}(t,s).\notag
\end{equation}
Therefore, we see that
\begin{equation}
0\leq x^{\lambda}\ef_{\ominus{}x}(t,s)=\frac{x^{\lambda}}{\ef_{x}(t,s)}\leq\frac{x^{\lambda}}{x^{n}\hf_{n}(t,s)},\notag
\end{equation}
which proves $x^{\lambda}\ef_{\ominus{}x}(t,s)\to0$ by letting $x\to\infty$.
Next, let $s,t\in\T$ with $t\leq s$. Then $\ef_{\ominus{}x}(t,s)=\ef_{x}(s,t)\geq1$, and thus $x^{\lambda}\ef_{\ominus{}x}(t,s)\geq{}x^{\lambda}$, which shows that $x^{\lambda}\ef_{\ominus{}x}(t,s)\to\infty$ as $x\to\infty$.
This completes the proof.
\end{proof}
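The dichotomy in the lemma can be sanity-checked in the continuous case $\T=\R$, where $\ef_{\ominus x}(t,s)=e^{-x(t-s)}$; the following sketch (illustrative values, ours) evaluates $x^{\lambda}\ef_{\ominus x}(t,s)$ at a large $x$.

```python
import math

# Sanity check of the dichotomy in the continuous case T = R, where
# e_{ominus x}(t, s) = exp(-x (t - s)); all values below are illustrative.
def scaled_exp(x, lam, t, s):
    return x ** lam * math.exp(-x * (t - s))

x, lam = 200.0, 3.0
decays = scaled_exp(x, lam, t=1.5, s=1.0)   # t > s: tends to 0 as x -> infinity
grows  = scaled_exp(x, lam, t=1.0, s=1.0)   # t <= s: tends to infinity (here x^lam)
```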
Let us introduce the function $\Lambda:\creg{}(\T,\R)\times\T\times\T\to\R$ defined by
\begin{equation}
\Lambda(x;t,s):=\exp\big\{-x\ef_{\ominus{}x}(t,s)\big\}\quad\text{for}\ x\in\creg{}(\T,\R)\ \text{and}\ s,t\in\T.\label{aleq1}
\end{equation}
\begin{corollary}\label{alcrl2}
Let $s,t\in\T$. Then
\begin{equation}
\lim_{x\to\infty}\Lambda(x;t,s)=\chi_{(-\infty,t)_{\T}}(s),\notag
\end{equation}
where $\chi_{D}:\R\to\{0,1\}$ is the characteristic function of the set $D\subset\R$.
\end{corollary}
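On $\T=\R$ we have $\Lambda(x;t,s)=\exp\{-xe^{-x(t-s)}\}$, and the limit in the corollary can be observed numerically (an illustration of ours, with arbitrary values):

```python
import math

# On T = R: Lambda(x; t, s) = exp(-x * exp(-x (t - s))) tends to 1 when s < t
# and to 0 when s >= t, matching the characteristic function in the corollary.
Lambda = lambda x, t, s: math.exp(-x * math.exp(-x * (t - s)))

x = 200.0
near_one  = Lambda(x, t=1.0, s=0.9)   # s < t: chi_{(-inf, t)}(s) = 1
near_zero = Lambda(x, t=1.0, s=1.0)   # s = t: chi_{(-inf, t)}(s) = 0
```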
\section{Uniqueness of the Laplace Transform}\label{ult}
In this section, we shall always assume that $\sup\T=\infty$.
We first start with the definition of the set of null functions.
\begin{definition}\label{ultdf1}
A function $f:[s,\infty)_{\T}\to\C$ is called a \emph{null function} if
\begin{equation}
\int_{s}^{t}f(\eta)\Delta\eta=0\quad\text{for all}\ t\in[s,\infty)_{\T}.\notag
\end{equation}
The set of null functions on $[s,\infty)_{\T}$ will be denoted by $\nf([s,\infty)_{\T},\C)$.
\end{definition}
Next, we give some properties of null functions, some of
which will be required in the proof of the main result.
\begin{lemma}\label{ultlm2}
Let $f\in\nf([s,\infty)_{\T},\C)$ and $g\in\crd{1}([s,\infty)_{\T},\C)$. Then $fg^{\sigma}\in\nf([s,\infty)_{\T},\C)$.
\end{lemma}
\begin{proof}
Performing an integration by parts, for any $t\in[s,\infty)_{\T}$, we have
\begin{equation}
\int_{s}^{t}f(\eta)g^{\sigma}(\eta)\Delta\eta=\Bigg[\bigg[\int_{s}^{\eta}f(\zeta)\Delta\zeta\bigg]g(\eta)\Bigg]_{\eta=s}^{\eta=t}-\int_{s}^{t}\bigg[\int_{s}^{\eta}f(\zeta)\Delta\zeta\bigg]g^{\Delta}(\eta)\Delta\eta=0,\notag
\end{equation}
which proves the claim.
\end{proof}
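On $\T=\Z$ the integration by parts used above is Abel summation; the identity can be verified directly on sample data (arbitrary values, for illustration only):

```python
# Discrete (T = Z) instance of the integration-by-parts step used above:
#   sum_{eta=s}^{t-1} f(eta) g(eta+1)
#     = F(t) g(t) - F(s) g(s) - sum_{eta=s}^{t-1} F(eta) (g(eta+1) - g(eta)),
# where F(eta) = sum_{zeta=s}^{eta-1} f(zeta).  Sample data is arbitrary.
s, t = 0, 6
f = [0.3, -1.2, 2.0, 0.7, -0.5, 1.1]
g = [1.0, 0.5, -2.0, 3.0, 0.25, -1.0, 2.5]   # g is needed on [s, t]

F = [0.0]
for v in f:
    F.append(F[-1] + v)                       # F[k] = sum of f[0:k]

lhs = sum(f[k] * g[k + 1] for k in range(t))
rhs = F[t] * g[t] - F[s] * g[s] - sum(F[k] * (g[k + 1] - g[k]) for k in range(t))
```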
\begin{corollary}\label{ultcrl1}
Let $f\in\nf([s,\infty)_{\T},\C)$ and $g\in\reg{}([s,\infty)_{\T},\C)$. Then $f\ef_{g}(\sigma(\cdot),s)\in\nf([s,\infty)_{\T},\C)$.
\end{corollary}
\begin{corollary}\label{ultcrl1a}
Let $f\in\nf([s,\infty)_{\T},\C)$. Then
\begin{equation}
\int_{s}^{\infty}f(\eta)\ef_{\ominus z}(\sigma(\eta),s)\Delta\eta=0\quad\text{for any}\ z\in\creg{}([s,\infty)_{\T},\C).\label{ultcrl1aeq1}
\end{equation}
\end{corollary}
We have now provided the necessary background for the proof of our main result.
\begin{theorem}[Lerch's theorem]\label{lerchthm}
Assume that $f:[s,\infty)_{\T}\to\C$ and that there exist an increasing divergent sequence $\{\varsigma_{k}\}_{k\in\N_{0}}\subset\R_{0}^{+}$ and $\alpha\in\creg{+}([s,\infty)_{\T},\C)$ such that
\begin{equation}
\lt{}\{f\ef_{\ominus(n\odot\varsigma_{k})}(\sigma(\cdot),s)\}(\alpha)=0\quad\text{for all}\ n,k\in\N_{0}.\label{lerchthmeq1}
\end{equation}
Then $f\in\nf([s,\infty)_{\T},\C)$.
\end{theorem}
\begin{proof}
Define the function $g:[s,\infty)_{\T}\to\C$ by $g(t):=f(t)\ef_{\ominus\alpha}(\sigma(t),s)$ for $t\in[s,\infty)_{\T}$. Then we have
\begin{align}
\int_{s}^{\infty}g(\eta)\ef_{\ominus(n\odot\varsigma_{k})}(\sigma(\eta),s)\Delta\eta=&\int_{s}^{\infty}f(\eta)\ef_{\ominus(n\odot\varsigma_{k})}(\sigma(\eta),s)\ef_{\ominus\alpha}(\sigma(\eta),s)\Delta\eta\notag\\
=&\int_{s}^{\infty}f(\eta)\ef_{\ominus(\alpha\oplus(n\odot\varsigma_{k}))}(\sigma(\eta),s)\Delta\eta=0\label{lerchthmprfeq1}
\end{align}
for all $n,k\in\N_{0}$.
Let $r\in[s,\infty)_{\T}$, and define $h_{r}:[s,\infty)_{\T}\to\C$ by
\begin{equation}
h_{r}(t):=\int_{r}^{t}g(\eta)\Delta\eta\quad\text{for}\ t\in[s,\infty)_{\T}.\notag
\end{equation}
It follows from \eqref{lerchthmeq1} with $n=0$ that
\begin{equation}
\int_{s}^{\infty}g(\eta)\Delta\eta=0,\label{lerchthmprfeq2}
\end{equation}
which shows that $\lim_{t\to\infty}h_{r}(t)$ exists.
So we can find $M_{r}\in\R^{+}$ such that $|h_{r}(t)|\leq M_{r}$ for all $t\in[r,\infty)_{\T}$.
Since the improper integral in \eqref{lerchthmprfeq2} converges, we may (and do) choose the bounds $M_{r}$ so that $M_{r}\to0$ as $r\to\infty$.
Using \eqref{lerchthmprfeq1}, and performing integration by parts, we get
\begin{align}
\int_{s}^{r}g(\eta)\ef_{\ominus(n\odot\varsigma_{k})}(\sigma(\eta),s)\Delta\eta=&-\int_{r}^{\infty}g(\eta)\ef_{\ominus(n\odot\varsigma_{k})}(\sigma(\eta),s)\Delta\eta\notag\\
\begin{split}
=&-\bigg[\big[\ef_{\ominus(n\odot\varsigma_{k})}(\eta,s)h_{r}(\eta)\big]_{\eta=r}^{\eta\to\infty}\\
&-\int_{r}^{\infty}\ef_{\ominus(n\odot\varsigma_{k})}^{\Delta_{1}}(\eta,s)h_{r}(\eta)\Delta\eta\bigg]\\
\end{split}\notag\\
\begin{split}
=&-\bigg[\big[\big(\ef_{\ominus\varsigma_{k}}(\eta,s)\big)^{n}h_{r}(\eta)\big]_{\eta=r}^{\eta\to\infty}\\
&+\int_{r}^{\infty}(n\odot\varsigma_{k})(\eta)\ef_{\ominus(n\odot\varsigma_{k})}(\sigma(\eta),s)h_{r}(\eta)\Delta\eta\bigg]
\end{split}\notag\\
=&-\int_{r}^{\infty}(n\odot\varsigma_{k})(\eta)\ef_{\ominus(n\odot\varsigma_{k})}(\sigma(\eta),s)h_{r}(\eta)\Delta\eta\label{lerchthmprfeq3}
\end{align}
for all $n,k\in\N_{0}$.
Note that we have used Corollary~\ref{alcrl1} above in passing to the last step.
Now, multiplying both sides of \eqref{lerchthmprfeq3} by $\ef_{\ominus(n\odot\varsigma_{k})}(s,r)$, we have
\begin{equation}
\int_{s}^{r}g(\eta)\ef_{\ominus(n\odot\varsigma_{k})}(\sigma(\eta),r)\Delta\eta=-\int_{r}^{\infty}(n\odot\varsigma_{k})(\eta)\ef_{\ominus(n\odot\varsigma_{k})}(\sigma(\eta),r)h_{r}(\eta)\Delta\eta,\notag
\end{equation}
which yields
\begin{equation}
\bigg|\int_{s}^{r}g(\eta)\ef_{\ominus(n\odot\varsigma_{k})}(\sigma(\eta),r)\Delta\eta\bigg|\leq M_{r}\int_{r}^{\infty}(n\odot\varsigma_{k})(\eta)\ef_{\ominus(n\odot\varsigma_{k})}(\sigma(\eta),r)\Delta\eta=M_{r}.\notag
\end{equation}
By the series expansion of the exponential function, we know that
\begin{equation}
\sum_{\ell\in\N_{0}}\frac{(-1)^{\ell}\varsigma_{k}^{\ell}}{\ell!}\ef_{\ominus(\ell\odot\varsigma_{k})}(t,s)=\Lambda(\varsigma_{k};t,s)\quad\text{for}\ s,t\in\T\ \text{and}\ k\in\N_{0},\notag
\end{equation}
where $\Lambda$ is defined by \eqref{aleq1}.
Thus, for all $t\in[s,r)_{\T}$ and all $k\in\N_{0}$, we can estimate that
\begin{align}
\bigg|\int_{s}^{r}g(\eta)\Lambda(\varsigma_{k};\sigma(\eta),t)\Delta\eta\bigg|=&\bigg|\sum_{\ell\in\N_{0}}\frac{(-1)^{\ell-1}\varsigma_{k}^{\ell}}{\ell!}\int_{s}^{r}g(\eta)\ef_{\ominus(\ell\odot\varsigma_{k})}(\sigma(\eta),t)\Delta\eta\bigg|\notag\\
\leq&\sum_{\ell\in\N_{0}}\frac{\varsigma_{k}^{\ell}}{\ell!}\ef_{\ominus(\ell\odot\varsigma_{k})}(r,t)\bigg|\int_{s}^{r}g(\eta)\ef_{\ominus(\ell\odot\varsigma_{k})}(\sigma(\eta),r)\Delta\eta\bigg|\notag\\
\leq&M_{r}\sum_{\ell\in\N_{0}}\frac{\varsigma_{k}^{\ell}}{\ell!}\big(\ef_{\ominus\varsigma_{k}}(r,t)\big)^{\ell}=M_{r}\exp\big\{\varsigma_{k}\ef_{\ominus\varsigma_{k}}(r,t)\big\}.\notag
\end{align}
Letting $r\to\infty$, we have $M_{r}\to0$ and $\ef_{\ominus\varsigma_{k}}(r,t)\to0$ by Corollary~\ref{alcrl1}, which yields $M_{r}\exp\{\varsigma_{k}\ef_{\ominus\varsigma_{k}}(r,t)\}\to0$ as $r\to\infty$.
We can therefore write
\begin{equation}
\int_{s}^{\infty}g(\eta)\Lambda(\varsigma_{k};\sigma(\eta),t)\Delta\eta=0\quad\text{for all}\ k\in\N_{0}.\label{lerchthmprfeq4}
\end{equation}
By \eqref{lerchthmprfeq2}, the function $g$ is integrable over $[s,\infty)_{\T}$ and the characteristic function $\chi$ is piecewise constant.
Letting $k\to\infty$ in \eqref{lerchthmprfeq4}, we get
\begin{equation}
\int_{s}^{\infty}g(\eta)\chi_{(-\infty,t)_{\T}}(\sigma(\eta))\Delta\eta=0\quad\text{for all}\ t\in[s,\infty)_{\T}\label{lerchthmprfeq5}
\end{equation}
by Lebesgue's dominated convergence theorem and Corollary~\ref{alcrl2}.
Now, we are in a position to prove that
\begin{equation}
\int_{s}^{t}g(\eta)\Delta\eta=0\quad\text{for all}\ t\in[s,\infty)_{\T}.\label{lerchthmprfeq6}
\end{equation}
From \eqref{lerchthmprfeq5}, for all $t\in[s,\infty)_{\T}$, we have
\begin{align}
\int_{s}^{\infty}g(\eta)\chi_{(-\infty,t)_{\T}}(\sigma(\eta))\Delta\eta=&\int_{s}^{t}g(\eta)\chi_{[s,t)_{\T}}(\sigma(\eta))\Delta\eta\notag\\
=&\int_{s}^{t}g(\eta)\big[\chi_{[s,t)_{\T}}(\eta)+\mu(\eta)\chi_{[s,t)_{\T}}^{\Delta}(\eta)\big]\Delta\eta\notag\\
=&\int_{s}^{t}g(\eta)\chi_{[s,t)_{\T}}(\eta)\Delta\eta,\notag
\end{align}
which together with the definition of the characteristic function $\chi$ and \eqref{lerchthmprfeq5} gives \eqref{lerchthmprfeq6}.
Therefore, $g$ is a null function.
An application of Corollary~\ref{ultcrl1} shows that $f=g\ef_{\alpha}(\sigma(\cdot),s)$ is a null function too.
This completes the proof.
\end{proof}
\begin{corollary}
Assume that $f,g:[s,\infty)_{\T}\to\C$ and that there exist an increasing divergent sequence $\{\varsigma_{k}\}_{k\in\N_{0}}\subset\R^{+}$ and $\alpha\in\creg{+}([s,\infty)_{\T},\C)$ such that
\begin{equation}
\lt{}\{f\ef_{\ominus(n\odot\varsigma_{k})}(\sigma(\cdot),s)\}(\alpha)=\lt{}\{g\ef_{\ominus(n\odot\varsigma_{k})}(\sigma(\cdot),s)\}(\alpha)\quad\text{for all}\ n,k\in\N_{0}.\notag
\end{equation}
Then $f-g\in\nf([s,\infty)_{\T},\C)$.
\end{corollary}
\begin{corollary}
Assume that the graininess function $\mu$ is constant and there exists $\alpha\in\creg{+}([s,\infty)_{\T},\C)$ such that
\begin{equation}
\lt{}\{f\}(z)=0\quad\text{for all}\ z\in\C_{\mu_{\ast}(s)}(\alpha).\notag
\end{equation}
Then $f\in\nf([s,\infty)_{\T},\C)$.
\end{corollary}
\begin{proof}
In this case, for any fixed $\beta\in\R_{\mu_{\ast}(s)}(\alpha)\subset\C_{\mu_{\ast}(s)}(\alpha)$, we have
\begin{equation}
\lt{}\{f\}(\beta)=0,\notag
\end{equation}
Moreover, $\beta\oplus((nk)\odot\varsigma)\in\R_{\mu_{\ast}(s)}(\alpha)\subset\C_{\mu_{\ast}(s)}(\alpha)$ for all $n,k\in\N_{0}$, where $\varsigma\in\R^{+}$ is arbitrary but fixed, i.e.,
\begin{equation}
\lt{}\{f\}(\beta\oplus((nk)\odot\varsigma))=\lt{}\{f\ef_{\ominus((nk)\odot\varsigma)}(\sigma(\cdot),s)\}(\beta)=0\quad\text{for all}\ n,k\in\N_{0}.\notag
\end{equation}
This shows that the conditions of Theorem~\ref{lerchthm} hold with $\varsigma_{k}:=k\odot\varsigma$ for $k\in\N_{0}$.
\end{proof}
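The shift step above rests on the identity $\lt{}\{f\}(\beta\oplus w)=\lt{}\{f\ef_{\ominus w}(\sigma(\cdot),s)\}(\beta)$. On $\T=\Z$ (with $s=0$) this is transparent, since $1+(\beta\oplus w)=(1+\beta)(1+w)$; a numerical check of ours with illustrative values:

```python
# Shift identity on T = Z (s = 0, mu = 1): since 1 + (beta ⊕ w) = (1+beta)(1+w),
# term by term f(n) (1 + beta ⊕ w)^{-(n+1)} = [f(n)(1+w)^{-(n+1)}] (1+beta)^{-(n+1)}.
beta, w, N = 1.0, 0.5, 60
f = lambda n: 1.0 / (n + 1.0)                 # arbitrary sample function

oplus = beta + w + beta * w                   # beta ⊕_h w with h = 1
lhs = sum(f(n) * (1 + oplus) ** (-(n + 1)) for n in range(N))
rhs = sum(f(n) * (1 + w) ** (-(n + 1)) * (1 + beta) ** (-(n + 1)) for n in range(N))
```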
\section{Appendix: Time Scales Essentials}\label{pts}
A \emph{time scale} is a nonempty closed subset of the reals,
which inherits the standard topology on $\R$.
Throughout this paper, the time scale is assumed to be unbounded above and
will be denoted by the symbol $\T$,
and the intervals with a subscript $\T$ are used to denote
the intersection of the usual interval with $\T$.
For $t\in\T$, we define the \emph{forward jump operator}
$\sigma:\T\to\T$ by $\sigma(t):=\inf(t,\infty)_{\T}$
while the \emph{graininess function}
$\mu:\T\to\R_{0}^{+}$ is defined to be $\mu(t):=\sigma(t)-t$.
A point $t\in\T$ is called \emph{right-dense} if $\sigma(t)=t$;
otherwise, it is called \emph{right-scattered},
and similarly \emph{left-dense} and \emph{left-scattered} points
are defined in terms of the so-called backward jump operator.
A function $f:\T\to\C$ is said to be \emph{Hilger differentiable}
(or $\Delta$-differentiable) at the point $t\in\T$ if there exists $\ell\in\C$ such that for any
$\varepsilon>0$ there exists a neighborhood $U$ of $t$ such that
\begin{equation}
\big|[f(\sigma(t))-f(s)]-\ell[\sigma(t)-s]\big|\leq\varepsilon|\sigma(t)-s|\quad\text{for all}\ s\in U,\notag
\end{equation}
and in this case we denote $f^{\Delta}(t)=\ell$.
A function $f$ is called \emph{rd-continuous}
provided that it is continuous at right-dense points in $\T$,
and has finite limits at left-dense points,
and the set of rd-continuous functions is denoted by $\crd{}(\T,\C)$.
The set $\crd{1}(\T,\C)$ consists of the $\Delta$-differentiable functions whose
$\Delta$-derivative belongs to $\crd{}(\T,\C)$.
For $f\in\crd{1}(\T,\C)$, we have
\begin{equation}
f^{\sigma}=f+\mu f^{\Delta}\quad\text{on}\ \T^{\kappa},\notag
\end{equation}
where $f^{\sigma}:=f\circ\sigma$, and $\T^{\kappa}:=\T\backslash\{\max\T\}$ if $\T$ has a left-scattered maximum, i.e., if $\sup\T=\max\T$ and $\rho(\max\T)\neq\max\T$; otherwise, $\T^{\kappa}:=\T$.
For $s,t\in\T$ and a function $f\in\crd{}(\T,\C)$,
the $\Delta$-integral of $f$ is defined by
\begin{equation}
\int_{s}^{t}f(\eta)\Delta\eta=F(t)-F(s)\quad\text{for}\ s,t\in\T,\notag
\end{equation}
where $F:\T\to\C$ is an antiderivative of $f$,
i.e., $F^{\Delta}=f$ on $\T^{\kappa}$.
A function $f\in\crd{}(\T,\C)$
is called \emph{regressive} if $1+\mu f\neq0$ on $\T$,
and \emph{positively regressive} if it is real valued and $1+\mu f>0$ on $\T$.
The set of regressive functions and the set of positively regressive functions
are denoted by $\reg{}(\T,\C)$ and $\reg{+}(\T,\R)$, respectively,
and $\reg{-}(\T,\R)$ is defined similarly.
For simplicity, we denote by $\creg{}(\T,\C)$ the set of complex regressive constants,
and similarly, we define the sets $\creg{+}(\T,\R)$ and $\creg{-}(\T,\R)$.
Let $f\in\reg{}(\T,\C)$.
Then the \emph{exponential function} $\ef_{f}(\cdot,s)$ is defined
to be the unique solution of the initial value problem
\begin{equation}
\begin{cases}
x^{\Delta}=fx\quad\text{on}\ \T^{\kappa}\\
x(s)=1
\end{cases}\notag
\end{equation}
for some fixed $s\in\T$.
For $h>0$, set
\begin{equation}
\C_{h}:=\big\{z\in\C:\ z\neq-1/h\big\}\quad\text{and}\quad\Z_{h}:=\big\{z\in\C:\ -\pi/h<\Img(z)\leq\pi/h\big\},\notag
\end{equation}
and $\C_{0}:=\Z_{0}:=\C$.
For $h\in\R_{0}^{+}$, the Hilger \emph{real part} and \emph{imaginary part}
of a complex number are given by
\begin{equation}
\Rl_{h}(z):=\lim_{\nu\to h}\frac{1}{\nu}\big(|1+\nu z|-1\big)\quad\text{and}\quad\Img_{h}(z):=\lim_{\nu\to h}\frac{1}{\nu}\Arg(1+\nu z),\notag
\end{equation}
respectively, where $\Arg$ denotes the principal argument function,
i.e., $\Arg:\C\to(-\pi,\pi]_{\R}$.
For $h\in\R_{0}^{+}$ and any fixed $z\in\C_{h}$, the Hilger real part $\Rl_{h}(z)$ is a nondecreasing function of $h\in\R_{0}^{+}$, i.e., $\Rl_{h_{1}}(z)\geq\Rl_{h_{2}}(z)$ for $h_{1},h_{2}\in\R_{0}^{+}$ with $h_{1}\geq h_{2}$.
For $h\in\R_{0}^{+}$, we define the \emph{cylinder transformation}
$\xi_{h}:\C_{h}\to\Z_{h}$ by
\begin{equation}
\xi_{h}(z):=\lim_{\nu\to h}\frac{1}{\nu}\Log(1+\nu z)\quad\text{for}\ z\in\C_{h}.\notag
\end{equation}
Then the exponential function can also be written in the form
\begin{equation}
\ef_{f}(t,s):=\exp\bigg\{\int_{s}^{t}\xi_{\mu(\eta)}\big(f(\eta)\big)\Delta\eta\bigg\}\quad\text{for}\ s,t\in\T.\notag
\end{equation}
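On a purely right-scattered time scale the two characterizations of the exponential function can be compared directly: the initial value problem reduces to the recursion $x(\sigma(t))=(1+\mu(t)f(t))x(t)$, and the cylinder form reduces to $\prod(1+\mu f)$, since $\mu\xi_{\mu}(f)=\Log(1+\mu f)$ at right-scattered points. A sketch of ours over an ad-hoc finite set of points:

```python
import math

# On a purely right-scattered (discrete) time scale the IVP x^Delta = f x,
# x(s) = 1 becomes the recursion x(sigma(t)) = (1 + mu(t) f(t)) x(t), while the
# cylinder formula gives exp(sum log(1 + mu f)).  Points and f are arbitrary.
points = [0.0, 0.5, 1.5, 1.75, 3.0]          # an ad-hoc discrete "time scale"
f = lambda t: 0.4 + 0.1 * t                   # positively regressive here

x = 1.0
for a, b in zip(points, points[1:]):          # recursion from the IVP
    x *= 1.0 + (b - a) * f(a)

# Cylinder-transform form: mu * xi_mu(f) = log(1 + mu f) at scattered points.
cylinder = math.exp(sum(math.log(1.0 + (b - a) * f(a))
                        for a, b in zip(points, points[1:])))
```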
It is known that the exponential function $\ef_{f}(\cdot,s)$ is
strictly positive on $[s,\infty)_{\T}$
provided that $f\in\reg{+}([s,\infty)_{\T},\R)$,
while $\ef_{f}(\cdot,s)$ alternates in sign at
right-scattered points of the interval $[s,\infty)_{\T}$
provided that $f\in\reg{-}([s,\infty)_{\T},\R)$.
For $h\in\R_{0}^{+}$ and $w,z\in\C_{h}$, the \emph{circle plus} and the \emph{circle minus}
are defined by
\begin{equation}
z\oplus_{h}w:=z+w+hzw\quad\text{and}\quad z\ominus_{h}w:=\frac{z-w}{1+hw},\notag
\end{equation}
respectively. It is known that $(\reg{}(\T,\C),\oplus_{\mu})$ is a group,
and the inverse of $f\in\reg{}(\T,\C)$ is $\ominus_{\mu}f:=0\ominus_{\mu}f$.
Moreover, $\creg{+}(\T,\C)$ is a subgroup of $\creg{}(\T,\C)$.
For $\lambda\in\C$ and $z\in\C_{h}$, the \emph{circle dot} is defined by
\begin{equation}
\lambda\odot_{h}z:=\lim_{\nu\to h}\frac{1}{\nu}\big((1+\nu z)^{\lambda}-1\big).\notag
\end{equation}
With this multiplication, $(\reg{}(\T,\C),\oplus_{\mu},\odot_{\mu})$ becomes a complex vector space.
It should be noted that
\begin{equation}
\ef_{\lambda\odot_{\mu}f}(t,s)=\big(\ef_{f}(t,s)\big)^{\lambda}\quad\text{for}\ s,t\in\T,\notag
\end{equation}
where $\lambda\in\C$ and $f\in\reg{}(\T,\C)$.
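For constant graininess $h$ the circle dot is simply $\lambda\odot_{h}z=(1+hz)^{\lambda}-1$, and the power identity can be checked numerically on $\T=\Z$ (an illustration of ours):

```python
# Constant graininess h = 1 (T = Z): lambda ⊙ z = (1 + z)^lambda - 1, and the
# power identity e_{lambda ⊙ z}(t, 0) = (e_z(t, 0))^lambda.  Values illustrative.
h, z, lam, t = 1.0, 0.7, 2.5, 6         # s = 0
odot = (1.0 + h * z) ** lam - 1.0       # lambda ⊙_h z
e_z = (1.0 + z) ** t                    # e_z(t, 0) = (1 + z)^t on Z
e_odot = (1.0 + odot) ** t              # e_{lambda ⊙ z}(t, 0) on Z
```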
For simplicity in the notation, we shall use $\oplus,\ominus$ and $\odot$ instead of $\oplus_{\mu},\ominus_{\mu}$ and $\odot_{\mu}$, respectively.
The definition of the generalized monomials on time scales
(see \cite[\S~1.6]{MR1843232}) $\hf_{n}:\T\times\T\to\R$ is given as
\begin{equation}
\hf_{n}(t,s):=
\begin{cases}
1,&n=0,\\
\displaystyle\int_{s}^{t}\hf_{n-1}(\eta,s)\Delta\eta,&n\in\N
\end{cases}\quad\text{for}\ s,t\in\T.\notag
\end{equation}
Using induction, it is easy to see that
$\hf_{n}(t,s)\geq0$ holds for all $n\in\N_{0}$
and all $s,t\in\T$ with $t\geq s$,
and $(-1)^{n}\hf_{n}(t,s)\geq 0$ holds for all $n\in\N$
and all $s,t\in\T$ with $t\leq s$.
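On $\T=\Z$ the recursion can be carried out explicitly and recovers the standard fact $\hf_{n}(t,s)=\binom{t-s}{n}$ for $t\geq s$; a short sketch of ours:

```python
from math import comb

def h(n, t, s):
    """Generalized monomial h_n(t, s) on T = Z via the defining recursion."""
    if n == 0:
        return 1
    # On Z the Delta-integral over [s, t) is a plain sum over integer points.
    return sum(h(n - 1, eta, s) for eta in range(s, t))

val = h(3, 7, 2)    # equals the binomial coefficient C(7 - 2, 3) on T = Z
```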
The readers are referred to \cite{MR1843232}
for fundamentals of time scale theory.
% Source: https://arxiv.org/abs/1008.1368
% Title: Representation theory and homological stability
\begin{abstract}
We introduce the idea of \emph{representation stability} (and several variations) for a sequence of representations $V_n$ of groups $G_n$. A central application of the new viewpoint we introduce here is the importation of representation theory into the study of homological stability. This makes it possible to extend classical theorems of homological stability to a much broader variety of examples. Representation stability also provides a framework in which to find and to predict patterns, from classical representation theory (Littlewood--Richardson and Murnaghan rules, stability of Schur functors), to cohomology of groups (pure braid, Torelli and congruence groups), to Lie algebras and their homology, to the (equivariant) cohomology of flag and Schubert varieties, to combinatorics (the $(n+1)^{n-1}$ conjecture). The majority of this paper is devoted to exposing this phenomenon through examples. In doing this we obtain applications, theorems and conjectures. Beyond the discovery of new phenomena, the viewpoint of representation stability can be useful in solving problems outside the theory. In addition to the applications given in this paper, it is applied in \cite{CEF} to counting problems in number theory and finite group theory. Representation stability is also used in \cite{C} to give broad generalizations and new proofs of classical homological stability theorems for configuration spaces on oriented manifolds.
\end{abstract}
\section{Introduction}
In this paper we introduce the idea of \emph{representation stability} (and
several variations) for a sequence of representations $V_n$ of
groups $G_n$.
A central application of the new viewpoint we introduce here is
the importation of representation theory into the study of homological stability.
This makes it possible to extend classical theorems of homological stability to a much broader variety of examples.
Representation stability also
provides a framework in which to find and to predict patterns, from
classical representation theory (Littlewood--Richardson and Murnaghan rules,
stability of Schur functors), to cohomology of groups (pure braid,
Torelli and congruence groups), to Lie algebras and their homology,
to the (equivariant) cohomology of flag and Schubert varieties, to
combinatorics (the $(n+1)^{n-1}$ conjecture). The majority of this paper is devoted to
exposing this phenomenon through examples. In doing this we obtain
applications, theorems and conjectures.
Beyond the discovery of new phenomena, the viewpoint of representation stability can be useful in solving problems outside the theory. In addition to the applications given in this paper, it is applied in \cite{CEF} to counting problems in number theory. Representation stability is also used in \cite{C} to give broad generalizations and new proofs
of classical homological stability theorems for configuration spaces on oriented manifolds.
We begin with some context and motivation.
\para{Classical homological stability} Let $\{Y_n\}$ be a sequence of
groups, or topological spaces, equipped with maps (e.g.\ inclusions)
$\psi_n\colon Y_n\to Y_{n+1}$. The sequence $\{Y_n\}$ is
\emph{homologically stable} (over a coefficient ring $R$) if for each
$i\geq 1$ the map
\[(\psi_n)_\ast\colon H_i(Y_n,R)\to H_i(Y_{n+1},R)\] is an isomorphism
for $n$ large enough (depending on $i$). Homological stability is
known to hold for certain sequences of arithmetic groups, such as
$\{\SL_n\Z\}$ and $\{\Sp_{2n}\Z\}$. It is also known for braid groups,
mapping class groups of surfaces with boundary, and for (outer)
automorphism groups of free groups, by major results of many people
(including Borel, Arnol'd, Harer, Hatcher--Vogtmann and many others;
see, e.g.\ \cite{Coh,Vo} and the references therein). Further,
in many of these cases the stable homology groups have been computed.
In contrast, even for $\Q$ coefficients (or for $\F_p$ coefficients in the arithmetic examples),
almost nothing is known about the homology of finite
index and other natural subgroups of the above-mentioned groups, even
in the simplest examples. Indeed, homological stability is known to
fail in many cases, and it is not even clear what a closed-form description
of the homology might look like. We now consider an example to
illustrate this point.
\para{A motivating example} Consider the set $X_n$ of ordered
$n$--tuples of distinct points in the complex plane:
\[X_n\coloneq \big\{(z_1,\ldots,z_n)\in \C^n\big|z_i\neq z_j\text{ for
all }i\neq j\big\}.\] The set $X_n$ can be considered as a
hyperplane complement in $\C^n$. The fundamental group of $X_n$ is
the \emph{pure braid group} $P_n$. It is known that $X_n$ is
aspherical, and so $H_i(P_n;\Z)=H_i(X_n;\Z)$. The symmetric group $S_n$
acts freely on $\C^n$ by permuting the coordinates, and this action
clearly restricts to a free action by homeomorphisms on $X_n$. The
quotient $Y_n\coloneq X_n/S_n$ is the space of unordered $n$--tuples
of distinct points in $\C$. The space $Y_n$ is aspherical, and so is
a classifying space for its fundamental group $B_n$, the \emph{braid
group}. We have an exact sequence:
\[1\to P_n\to B_n\to S_n\to 1\]
Arnol'd \cite{Ar} and F. Cohen \cite{Co} proved that the sequence of
braid groups $\{B_n\}$ satisfies homological stability with integer
coefficients. Over the rationals, they proved for $n\geq 3$ that
\[H_i(B_n;\Q)=
\begin{cases}
\Q&\text{if }i=0,1\\
0&\text{if }i\geq 2
\end{cases}
\]
and so stability holds in a trivial way. In contrast,
\[H_1(P_n;\Q)=\Q^{n(n-1)/2}\] and so the pure braid groups $\{P_n\}$
do not satisfy homological stability, even for $i=1$.
Arnol'd also gave a presentation for the cohomology algebra
$H^\ast(P_n;\Q)$ (see \S\ref{section:braids} for the description).
But we can try to extract much finer information, using representation
theory, as follows. The action of $S_n$ on the space $X_n$ induces an
action of $S_n$ on the vector space $H^i(P_n;\Q)$, making it an
$S_n$--representation for each $i\geq 0$. Each of these
representations is finite-dimensional, and so can be decomposed as a
finite direct sum of irreducible $S_n$--representations. The question
of how many of these summands are trivial is already interesting: an
easy transfer argument gives that \[H^i(B_n;\Q)=H^i(P_n;\Q)^{S_n};\]
that is, $H^i(B_n;\Q)$ is the subspace of $S_n$--fixed vectors in
$H^i(P_n;\Q)$. Thus we see that the ``trivial piece'' of
$H^i(P_n;\Q)$ already contains the Arnol'd--Cohen computation of
$H^i(B_n;\Q)$; the other summands evidently contain even deeper
information.
Now, the irreducible representations of $S_n$ are completely
classified: they are in bijective correspondence with partitions
$\lambda$ of $n$. Which irreducibles (that is, which partitions)
occur in the $S_n$--representation $H^i(P_n;\Q)$? What are their
multiplicities? There have been a number of results in this direction
(most notably by Lehrer--Solomon \cite{LS}), but an explicit count of
the multiplicity of a fixed partition $\lambda$ is known only for a
few $\lambda$; an answer for arbitrary $\lambda$ and arbitrary $i$
seems out of reach.
On the other hand, using the notation $V(a_1,\ldots ,a_r)$ to denote
the irreducible $S_n$--representation corresponding to the partition
$((n-\sum_{i=1}^r a_i), a_1, \ldots ,a_r)$ (see \S\ref{section:reps1}
below for more details), it is not hard to check that
\begin{equation}
\label{eq:h1pn}
H^1(P_n;\Q)=V(0)\oplus V(1)\oplus V(2) \qquad \text{for } n\geq 4.
\end{equation}
Note that, with our notation, the right-hand side of \eqref{eq:h1pn}
has a uniform description, independent of $n$ as long as $n\geq 4$.
More interestingly, using work of Lehrer--Solomon and the computer program Magma, we computed the following:
\begin{equation}
\label{eq:stabilizationinaction}
\begin{array}{l}
H^2(P_4;\Q) = V(1)^{\oplus 2}\oplus V(1,1) \oplus V(2)\\ \\
H^2(P_5;\Q) = V(1)^{\oplus 2} \oplus V(1,1)^{\oplus 2}
\oplus V(2)^{\oplus 2} \oplus
V(2,1)\\ \\
H^2(P_6;\Q) = V(1)^{\oplus 2} \oplus V(1,1)^{\oplus 2}
\oplus V(2)^{\oplus 2} \oplus
V(2,1)^{\oplus 2} \oplus V(3)\\ \\
H^2(P_n;\Q) = V(1)^{\oplus 2} \oplus V(1,1)^{\oplus 2}
\oplus V(2)^{\oplus 2} \oplus
V(2,1)^{\oplus 2} \oplus V(3) \oplus V(3,1)
\end{array}
\end{equation}
where we carried out the computation in the last line for $n=7$, $8$, and $9$. We will see below that the last line of \eqref{eq:stabilizationinaction} in fact holds for all $n\geq 7$, so the irreducible
decomposition of $H^2(P_n;\Q)$ \emph{stabilizes}. These low-dimensional ($i=1$, $2$) cases are indicative of a more general
pattern. The language needed to describe this pattern is given by the
main concept in this paper, which we now describe (in a special case).
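The displayed decompositions can be checked dimensionally: by Arnol'd's computation the Poincar\'e polynomial of $X_n$ factors as $\prod_{j=1}^{n-1}(1+jt)$, so $\dim H^1(P_n;\Q)=e_1(1,\ldots,n-1)$ and $\dim H^2(P_n;\Q)=e_2(1,\ldots,n-1)$, while the dimension of each $V(\lambda)$ is given by the hook length formula. The following Python sketch (ours, not part of the paper; $n=8$ lies in the stable range $n\geq7$) performs this sanity check.

```python
from math import factorial

# Hook length formula for the dimension of the S_n-irreducible attached to a
# partition, used to check the displayed decompositions against Arnol'd's
# factorization prod_{j=1}^{n-1} (1 + j t) of the Poincare polynomial of X_n.
def dim_irrep(shape):
    prod = 1
    for i, row in enumerate(shape):
        for j in range(row):
            arm = row - j - 1
            leg = sum(1 for r in shape[i + 1:] if r > j)
            prod *= arm + leg + 1
    return factorial(sum(shape)) // prod

def V(tail, n):
    """Dimension of V(a_1,...,a_r): irreducible for ((n - sum(tail)),) + tail."""
    return dim_irrep((n - sum(tail),) + tail)

n = 8
b1 = sum(range(1, n))                                          # e_1(1,...,n-1)
b2 = sum(i * j for i in range(1, n) for j in range(i + 1, n))  # e_2(1,...,n-1)
h1 = V((), n) + V((1,), n) + V((2,), n)
h2 = (2 * V((1,), n) + 2 * V((1, 1), n) + 2 * V((2,), n)
      + 2 * V((2, 1), n) + V((3,), n) + V((3, 1), n))
```

Here `h1 == b1` and `h2 == b2`, consistent with \eqref{eq:h1pn} and the stable line of \eqref{eq:stabilizationinaction}.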
\para{Representation stability}
Let $V_n$ be a sequence of $S_n$--representations, equipped with
linear maps $\phi_n\colon V_n\to V_{n+1}$, making the following diagram
commute for each $g\in S_n$:
\[\xymatrix{
V_n\ar^{\phi_n}[r]\ar_{g}[d]&V_{n+1}\ar^{g}[d]\\
V_n\ar_{\phi_n}[r]&V_{n+1} }\]
where $g$ acts on $V_{n+1}$ by its image under the standard inclusion
$S_n\hookrightarrow S_{n+1}$. We call such a sequence of
representations \emph{consistent}.
We want to compare the representations $V_n$ as $n$ varies. However,
since $V_n$ and $V_{n+1}$ are representations of different groups, we
cannot ask for an isomorphism as representations. But we can ask for
injectivity and surjectivity, once they are properly
formulated. Moreover, by using the decomposition into irreducibles, we
can formulate what it means for $V_n$ and $V_{n+1}$ to be the ``same
representation''.
\begin{definition}[Representation stability, special case]
Let $\{V_n\}$ be a consistent sequence of $S_n$--representations. We
say that the sequence $\{V_n\}$ is \emph{representation stable} if,
for sufficiently large $n$, each of the following conditions holds:
\begin{enumerate}[I.]
\item \textbf{Injectivity:} The maps $\phi_n\colon V_n\to V_{n+1}$ are
injective.
\item \textbf{Surjectivity:} The span of the $S_{n+1}$--orbit of
$\phi_n(V_n)$ equals all of $V_{n+1}$.
\item \textbf{Multiplicities:} Decompose $V_n$ into irreducible
$S_n$--representations as
\[V_n=\bigoplus_\lambda c_{\lambda,n}V(\lambda)\] with
multiplicities $0\leq c_{\lambda,n}\leq \infty$. For each $\lambda$,
the multiplicities $c_{\lambda,n}$ are eventually independent of
$n$.
\end{enumerate}
\end{definition}
The idea of representation stability can be extended to other families
of groups whose representation theory has a ``consistent naming
system'', for example $\GL_n\Q$, $\Sp_{2n}\Q$ and the hyperoctahedral
groups; see \S\ref{section:repstab:def} for the precise definitions.
As an easy example, let $V_n=\Q^n$ denote the standard representation
of $\GL_n\Q$. Then the decomposition $V_n\otimes
V_n=\Sym^2V_n\oplus\bwedge^2V_n$ into irreducibles shows that the
sequence of $\GL_n\Q$--representations $\{V_n\otimes V_n\}$ is
representation stable; see Example~\ref{example:tensor}. A natural
non-example is the sequence of regular representations $\{\Q S_n\}$ of
$S_n$. These are not representation stable since, for any partition
$\lambda$, the multiplicity of $V(\lambda)$ in $\Q S_n$ is
$\dim(V(\lambda))$, which is not constant, and indeed tends to
infinity with $n$.
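The non-example is easy to quantify: since the multiplicity of $V(\lambda)$ in $\Q S_n$ is $\dim V(\lambda)$, the identity $\sum_{\lambda}\dim V(\lambda)^{2}=n!$ recovers the dimension of the regular representation. An illustrative computation of ours via the hook length formula:

```python
from math import factorial

# Illustrative check (not from the paper): the multiplicity of V(lambda) in
# the regular representation Q S_n is dim V(lambda), so the squared dimensions
# over all partitions of n must sum to n!.
def partitions(n, maxpart=None):
    """All partitions of n as weakly decreasing tuples."""
    maxpart = n if maxpart is None else maxpart
    if n == 0:
        yield ()
        return
    for k in range(min(n, maxpart), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def dim_irrep(shape):
    """Dimension of the S_n-irreducible for a partition (hook length formula)."""
    prod = 1
    for i, row in enumerate(shape):
        for j in range(row):
            leg = sum(1 for r in shape[i + 1:] if r > j)
            prod *= (row - j - 1) + leg + 1
    return factorial(sum(shape)) // prod

n = 6
total = sum(dim_irrep(p) ** 2 for p in partitions(n))   # equals n!
```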
In \S\ref{section:reps1} and \S\ref{section:repsGLSp} we review the representation theory of all the groups we will be considering. In \S\ref{section:repstab:def} we develop the foundations of representation stability, in particular giving a number of useful examples, variations and refinements, such as uniform stability. In particular, we introduce strong stability, used
when one wishes to more finely control the $G_{n+1}$--span of the image of $V_n$ under
$\phi_n$; this is important for applications. We also develop the idea of ``mixed tensor stability'', which is meant to capture in certain cases subtle phenomena not detected by representation stability.
With the above language in hand, we can state our first theorem.
``Forgetting the $(n+1)^{\text{st}}$ marked point'' gives a
homomorphism $P_{n+1}\to P_n$ and thus induces a homomorphism
$H^{i}(P_n;\Q)\to H^{i}(P_{n+1};\Q)$. For each fixed $i\geq 1$ the
sequence of $S_n$--representations $\{H^i(P_n;\Q)\}$ is
consistent in the sense given above. While the exact multiplicities
in the decomposition of $H^i(P_n;\Q)$ into $S_n$--irreducible
subspaces are far from known, we have discovered the following.
\bigskip
\noindent
\textbf{Theorem~\ref{thm:pure}, slightly weaker version.} {\it For each fixed
$i\geq 0$, the sequence of $S_n$--representations $\{H^i(P_n;\Q)\}$
is representation stable. Indeed the sequence stabilizes once $n\geq 4i$.}
\bigskip See \S\ref{section:braids} for the proof. Note that the example in \eqref{eq:stabilizationinaction} above shows that the ``stable range'' we give in
Theorem \ref{thm:pure} is close to being sharp.
The obvious explanation for the stability in Theorem~\ref{thm:pure}
would be that each $V(\lambda)\subseteq H^i(P_n;\Q)$ includes
into $H^i(P_{n+1};\Q)$ with $S_{n+1}$--span equal to
$V(\lambda)$, at least for $n$ large enough. But in fact this
coherence never happens, even for the trivial representation $V(0)$. Thus the mechanism
for stability of multiplicities in $\{H^i(P_n;\Q)\}$ must be more subtle, and indeed it is
perhaps surprising that this stability occurs at all.
See \S\ref{section:braids} for a discussion.
To prove Theorem~\ref{thm:pure}, we originally used work of
Lehrer--Solomon \cite{LS} to reduce the problem to a statement about
stability for certain sequences of induced representations of $S_n$.
We posed this stability as a conjecture to D. Hemmer \cite{He}, who then proved
it (and more). A. Putman has informed us that he has a different
approach to Theorem~\ref{thm:pure}. In \S\ref{section:braids} we
derive classical homological stability for $B_n$ with twisted
coefficients as a corollary of Theorem~\ref{thm:pure}. We extend these results to generalized braid groups in \S\ref{section:gbg}.\pagebreak
\para{Three applications}
In joint work \cite{CEF} with Jordan Ellenberg, we use the
Grothendieck--Lefschetz trace formula to translate results proved here
on the representation-stable cohomology of spaces into counting theorems about points on varieties
over finite fields. We then apply this to obtain statistics for
polynomials over $\F_q$ and for maximal tori in certain finite groups of Lie type such as $\GL_n(\F_q)$.
For each fixed partition $\lambda$, stability for
the multiplicity of $V(\lambda)$ in $\{H^i(P_n;\Q)\}$ is
related to a different counting problem in $\F_q[T]$. For example,
Theorem~\ref{thm:pure} for the sign representation $V(1,\ldots ,1)$
implies that the discriminant of a random monic squarefree polynomial
is equidistributed between residues and non-residues in $\F_q^{\times}$.
Theorem~\ref{thm:pure} for the standard representation $V(1)$ implies
that the expected number of linear factors of a random monic
squarefree polynomial of degree $n$ is
\[1-\frac{1}{q}+\frac{1}{q^2}-\frac{1}{q^3}+\cdots\pm\frac{1}{q^{n-2}}.\]
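Summing the geometric series shows where this expectation stabilizes: as $n\to\infty$,
\[1-\frac{1}{q}+\frac{1}{q^2}-\frac{1}{q^3}+\cdots\,\longrightarrow\,\frac{1}{1+1/q}=\frac{q}{q+1},\]
so a random monic squarefree polynomial of large degree has slightly less than one linear factor on average.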
The stability of
$\{H^i(P_n;\Q)\}$ itself, even without knowing what the stable
multiplicities are, already implies that the associated counting
problems all have limits as the degree of the polynomials tends to
infinity. One can also obtain the Prime Number Theorem, counting the number of
irreducible polynomials of degree $n$, this way. At present this approach
reproduces results already known to analytic number theorists,
but our methods should generalize to wider classes of
examples, such as sections of line bundles on curves other than $\mathbf{P}^1$.
In \cite{CEF} we also give an application of representation stability of the cohomology of
flag varieties (see Section~\ref{section:flags}), obtaining for each $V(\lambda)$ a counting theorem for maximal tori in $\GL_n\F_q$ and for Lagrangian tori in $\Sp_{2n}\F_q$. For the trivial representation
$V(0)$ we obtain Steinberg's theorem that the number of maximal tori in $\GL_n\F_q$ is $N=q^{n^2-n}$. The standard representation $V(1)$ gives a formula for the expected number of eigenvectors of a random maximal torus in $\GL_n(\F_q)$ which are defined over $\F_q$. The
sign representation gives a theorem of Srinivasan \cite[Lemma 5]{Sr}: when splitting a random maximal torus into irreducible factors, the number of factors is more likely to be even than odd, with bias exactly $\frac{1}{\sqrt{N}}$.
Another application of representation stability is given in \cite{C}. Thinking of Theorem~\ref{thm:pure} as a statement about the configuration space of points in the plane, this is generalized to prove representation stability for the cohomology of ordered configuration spaces on an arbitrary orientable manifold. Specializing to the case of stability for the trivial representation
already gives new proofs and vast generalizations of classical homological stability
theorems of McDuff and Segal for open manifolds. One reason these theorems were not known for general manifolds is that for closed manifolds there are no maps connecting the unordered configuration spaces for different numbers of points, so it is hard to compare these spaces; indeed, homological stability often fails integrally. Looking instead at representation stability for the ordered configuration spaces makes it possible to relate different configuration spaces, then push the results down to unordered configuration spaces by taking invariants.
\para{Representation stability in group homology} The example of pure
braid groups given above fits into a much more general framework.
Suppose $\Gamma$ is a group with normal subgroup $N$ and quotient
$A\coloneq \Gamma/N$. The conjugation action of $\Gamma$ on $N$
induces a $\Gamma$--action on the group homology (and cohomology) of
$N$, with any coefficients $R$. This action factors through an
$A$--action on $H_i(N,R)$, making $H_i(N,R)$ into an $A$--module.
As with pure braid groups, the structure of $H_i(N,R)$ as an
$A$--module encodes fine information. For example, the
transfer isomorphism shows that when $A$ is finite and $R=\Q$ the space
$H_i(\Gamma;\Q)$ appears precisely as the subspace of $A$--fixed
vectors in $H_i(N;\Q)$. But there are typically many other summands,
and knowing the representation theory of $A$ (over $R$) gives us a
language with which to access these.
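In symbols, for $A$ finite the transfer isomorphism mentioned above reads
\[H_i(\Gamma;\Q)\,\cong\,H_i(N;\Q)^A,\]
so the $A$--isotypic decomposition of $H_i(N;\Q)$ refines the homology of $\Gamma$ itself.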
The following table summarizes some of the examples fitting into this
framework. Each example will be explained in detail later in this
paper: the first in \S\ref{section:braids}, the second and third in \S\ref{section:torelli}, and the fourth and fifth in \S\ref{section:congruence}.
\vspace{.2in}
\begin{tabular}{c|c|c|c|c}
kernel $N$ & group $\Gamma$ & acts on & quotient $A$
& $H_1(N,R)$ for big $n$\\
\hline
& & & & \\
$P_n$ & $B_n$ & $\{1,\ldots ,n\}$ & $S_n$ & ${\rm Sym}^2V/V$\\
& & & & \\
Torelli group $\I_n$ & mapping class&$H_1(\Sigma_n,\Z)$&$\Sp_{2n}\Z$&
$\bwedge^3V/V$ \\
& group $\Mod_n$ & & & \\
& & & & \\
$\IA(F_n)$&$\Aut(F_n)$&$H_1(F_n,\Z)$&$\GL_n\Z$
&$V^\ast\otimes \bwedge^2V$\\
& & & & \\
congruence&$\SL_n\Z$&$\F_p^n$&$\SL_n\F_p$&$\fsl_n\F_p$\\
subgroup $\Gamma_n(p)$& & & &\\
& & & & \\
level $p$ subgroup&$\Mod_n$&$H_1(\Sigma_n;\F_p)$&$\Sp_{2n}\F_p$&
$\bwedge^3 V/V\oplus\ \fsp_{2n}\F_p$\\
$\Mod_n(p)$&&&&
\end{tabular}
\vspace{.3in} Here $R=\Q$ in the first three examples, $R=\F_p$ in the
fourth and fifth, and $V$ stands in each case for the standard
representation of $A$. In the last example $p$ is an odd prime.
In each of the examples given, the groups $\Gamma$ are known to
satisfy classical homological stability. In contrast, the rightmost
column shows that none of the groups $N$ satisfies homological
stability, even in dimension 1. In fact, except for the first
example, almost nothing is known about the $A$--module $H_i(N,R)$ for
$i>1$, and indeed it is not clear if there is a nice closed form
description of these homology groups. However, the appearance of some
kind of ``stability'' can already be seen in the rightmost column, as
the names of the irreducible summands of these $A$--modules are
constant for large enough $n$; this is discussed in detail for each
example in the body of the paper.
A crucial observation for us is that each of the groups $A$ in the
table above has an inherent stability in the naming of its irreducible
representations (over $R$). For example, an irreducible
representation of $\SL_n$ is determined by its highest weight, and
these weights may be described uniformly without reference to
$n$. For instance, the irreducible $\SL_n$--representation
$V(L_1+L_2+L_3)$ with highest weight $L_1+L_2+L_3$ is isomorphic to
$\bigwedge^3 V$ regardless of $n$, where $V$ is the standard
representation of $\SL_n$ (see Section~\ref{section:repsGLSp} for the
representation theory of $\SL_n$). This inherent stability can be
used, at least conjecturally, to give a closed form description for
$H_i(N,R)$ (for $n$ large enough, depending on $i$). One idea is that
the growth in $\dim_R(H_i(N,R))$ should be fully accounted for by the
fact that each element of $H_i(N,R)$ brings along with it an entire
$A$--orbit.
\para{Homology of Lie algebras}
In \S\ref{section:liealg} we develop representation stability for Lie algebras and their homology.
The main theoretical result here, Theorem~\ref{thm:equivhomLie}, proves the equivalence between stability for a family of Lie algebras and stability for its
homology. Both directions of this implication are applied to
give nontrivial results. For example, in Corollary~\ref{corollary:nilp} we deduce stability for the homology of nilpotent Lie algebras, which is quite complicated, from stability for the homology of free Lie algebras, which is trivial to compute; the proof uses both directions of Theorem~\ref{thm:equivhomLie} in an essential way. We also give applications to the adjoint homology of free nilpotent Lie algebras (Corollary \ref{cor:adjnil}) and the homology of Heisenberg Lie algebras (Examples~\ref{example:Heis} and \ref{example:adjHeis}).
Although homological stability results for lattices in semisimple Lie groups have been known for some time, we emphasize that there do not seem to have been any stability results for the homology of lattices in nilpotent Lie groups. Since Nomizu proved in the 1950s that the rational homology of a lattice in a nilpotent Lie group $N$ is isomorphic to the Lie algebra homology of the Lie algebra of $N$, such homological stability results follow from our theorems on nilpotent Lie algebras.
\para{The ubiquity of representation stability}
The phenomenon of representation stability occurs in a number of
different places in mathematics. The majority of this paper is devoted to
exposing this phenomenon through examples. In doing this we obtain applications, theorems and conjectures. The examples include:
\begin{enumerate}
\item Classical representation theory
(\S\ref{section:classicalstability}): stability of Schur functors;
Littlewood--Richardson rule; Murnaghan's theorem on stability of
Kronecker coefficients; other natural constructions. These
constructions arise in most other examples, and so their stability
underlies the whole theory of representation stability.
\item Cohomology of moduli spaces (\S\ref{section:braids}, \S\ref{section:torelli}): pure braid groups and generalized pure braid groups; conjecturally in the Torelli subgroups
of mapping class groups $\Mod(S)$ and the analogue for automorphism groups $\Aut(F_n)$ of free groups.
We prove representation stability for the homology of pure
braid groups in Theorem~\ref{thm:pure} and pure generalized braid groups in Theorem~\ref{thm:genpure}. In \S\ref{section:torelli} we give a number of conjectures about the stable homology of the Torelli groups and their analogues. Previously there had been few (if any) general suggestions in this direction.
\item Lie algebras (\S\ref{section:liealg}): graded components of free
Lie algebras; homology of various families of Lie algebras, for
example free nilpotent Lie algebras and Heisenberg Lie algebras; Malcev
completions of surface groups and (conjecturally) pure braid groups. As discussed in the introduction, the main tool proved is Theorem~\ref{thm:equivhomLie}. We apply it to prove
representation stability for various nilpotent Lie algebras.
\item (Equivariant) cohomology of flag and Schubert varieties
(\S\ref{section:flags}). As explained in \S\ref{section:flags}, the space $\cF_n$ of complete flags in $\C^n$ admits a nontrivial action of $S_n$, and the resulting representation on $H^i(\cF_n;\Q)$ is rather complicated. Similarly, the hyperoctahedral group acts on the space $\cF'_n$ of complete flags on Lagrangian subspaces of $\C^{2n}$.
For each $i\geq 1$ the natural families $\{H^i({\cF_n};\Q)\}$ and
$\{H^i({\cF'_n};\Q))\}$ do not satisfy classical (co)homological stability. However, we prove in each case (see Theorem \ref{thm:flag} and Theorem \ref{thm:flagsp}) that these sequences are representation stable.
Another class of well-studied families of varieties are the \emph{Schubert varieties}. Each permutation $w$ of any finite set determines a family $\{X_w[n]\}$ of Schubert varieties
(see \S\ref{section:schubert}). For each $i\geq 0$ the ($T$--equivariant, for a certain torus $T$) cohomology $H_T^i(X_w[n];\Q)$ admits a non-obvious action by $S_n$. While the sequence
$\{H_T^i(X_w[n];\Q)\}$ does not satisfy homological stability in the classical sense, we prove in Theorem \ref{theorem:schubert} that this sequence is representation stable.
\item Algebraic combinatorics (\S\ref{section:flags}): Lefschetz
representations associated to rank-selected posets and
cross-polytopes. Here Stanley's counts of multiplicities in
terms of various Young tableaux are shown to give representation
stability. We also conjecture representation stability for the bigraded pieces of the diagonal
coinvariant algebras. This gives an ``asymptotic refinement'' of the famous
$(n+1)^{n-1}$ conjecture in algebraic combinatorics (proved by Haiman), as well as conjectures for ``higher coinvariant algebras'', where very little is known.
\item Homology of congruence subgroups and modular representations. In
\S\ref{section:congruence} we study congruence subgroups of certain arithmetic groups and their analogues for $\Mod(S)$ and for $\Aut(F_n)$. Each of these groups $\Gamma$ admits an
action by outer automorphisms by a finite group $G$ of Lie type, such as $G=\SL_n(\F_p)$. This action makes each homology vector space $H_i(\Gamma,\F_p)$ a $G$--representation.
As $p$ divides the order of $G$, this is a modular representation. Thus, in order to obtain results and conjectures about these important representations, we need to develop a
version of our theory using modular representation theory.
Here a new phenomenon occurs: \emph{stable periodicity} of a sequence of representations
(see \S\ref{section:congruence}).
For each of the sequences of groups $\Gamma$ above we
state a ``stable periodicity conjecture'' for its homology with $\F_p$ coefficients. The
few computations that have been completely worked out are almost all in degree $1$, and these
use deep mathematics (e.g. the congruence subgroup problem, work of Johnson, etc.). These computations show that our conjectures are satisfied for $H_1$. {See \S\ref{section:congruence} for details.}
\end{enumerate}
\para{Historical notes} Various stability phenomena have been
known in representation theory at least as far back as the 1930s, when
formulas were given for the decomposition of tensor products of
irreducible representations of $\SL_n\Q$ (by Littlewood--Richardson,
see e.g.\ \cite[Appendix A]{FH}) and of the symmetric group $S_n$ (by
Murnaghan \cite{Mu}). Some aspects of representation stability can be
found in previous work on Lie algebras. Related ideas appear in terms
of mixed tensor representations in the work of Hanlon \cite{Han} and
R. Brylinski \cite{Bry} on Lie algebra cohomology of non-unital
algebras; in Tirao's description in \cite{Ti} of the homology of free
nilpotent Lie algebras; and in Hain's description in \cite{Ha} of the
associated graded Lie algebra of the fundamental group of a closed
surface.\pagebreak
\para{Acknowledgements} It is a pleasure to thank Jon Alperin, Jordan
Ellenberg, Victor Ginzburg, David Hemmer, Brian Parshall, Andy Putman, Steven Sam,
Richard Stanley, and Paulo Tirao for helpful discussions.
\section{Representation stability}
\label{section:representationstability}
In order to define representation stability and its variants, we will
need to be very precise in the labeling of the irreducible
representations of the various groups we consider. We begin by
reviewing the representation theory of the following families of
groups in order to establish uniform notation across the different
families.
In this section $G_n$ will always denote one of the following families of groups:
\begin{itemize}
\item $G_n=\SL_n\Q$, the special linear group.
\item $G_n=\GL_n\Q$, the general linear group.
\item $G_n=\Sp_{2n}\Q$, the symplectic group.
\item $G_n=S_n$, the symmetric group.
\item $G_n=W_n$, the hyperoctahedral group.
\end{itemize}
By a \emph{representation} of a group $G$ we mean a $\Q$--vector space
equipped with a linear action of $G$. With the exception of Section~\ref{section:congruence}, throughout this paper we work
over $\Q$, but the definitions and results hold over any field of
characteristic $0$, in particular over $\C$. In Section~\ref{section:congruence} we will extend the definition of representation stability to modular representations of $\SL_n\F_p$ and
$\Sp_{2n}\F_p$.
\subsection{Symmetric and hyperoctahedral groups}
\label{section:reps1}
Our basic reference for representation theory is Fulton--Harris
\cite{FH}. For hyperoctahedral groups, see Geck--Pfeiffer \cite[\S
1.4 and \S 5.5]{GP}.
\para{Symmetric groups} The irreducible representations of $S_n$ are
classified by the partitions $\lambda$ of $n$. A \emph{partition} of
$n$ is a sequence $\lambda=(\lambda_1\geq \cdots \geq \lambda_\ell\geq0)$
with $\lambda_1+\cdots+\lambda_\ell=n$; we write $|\lambda|=n$ or
$\lambda\vdash n$. These partitions are identified with
Young diagrams, where the diagram corresponding to $\lambda$ has
$\lambda_i$ boxes in the $i$th row. We identify two partitions if their nonzero entries coincide; every partition can then be written uniquely with $\lambda_\ell>0$, in which case we say that $\ell=\ell(\lambda)$ is the \emph{length} of $\lambda$. The irreducible representation
corresponding to the partition $\lambda$ is denoted $V_\lambda$. This
irreducible representation can be obtained as the image $\Q S_n \cdot
c_\lambda$ of a certain idempotent $c_\lambda$ in the group algebra
$\Q S_n$. The fact that every irreducible representation of $S_n$ is
defined over $\Q$ implies that any $S_n$--representation defined over $\Q$
decomposes over $\Q$ into irreducibles. Since every representation of $S_n$ is defined over $\Q$, or alternatively since $g$ is conjugate to $g^{-1}$ for all $g\in S_n$, every representation of $S_n$ is self-dual.
For example, the irreducible $V_{(n-1,1)}$ is the standard
representation of $S_n$ on $\Q^n/\Q$. The exterior power
$\bwedge^3 V_{(n-1,1)}$ is the irreducible representation
$V_{(n-3,1,1,1)}$. To remove the dependence of this notation on $n$,
we make the following definition. If $\lambda=(\lambda_1,\ldots,\lambda_\ell)\vdash k$ is any
partition of a fixed number $k$, then for any $n\geq
k+\lambda_1$ we may define the \emph{padded partition}
\[\lambda[n]=(n-k,\lambda_1,\ldots,\lambda_\ell).\]
The condition $n\geq k+\lambda_1$ is needed so that this sequence is
nonincreasing and defines a partition. For $n\geq k+\lambda_1$ we then
define $V(\lambda)_n$ to be the irreducible $S_n$--representation
\[V(\lambda)_n=V_{\lambda[n]}.\] Every irreducible representation of
$S_n$ is of the form $V(\lambda)_n$ for a unique partition
$\lambda$. When unambiguous, we denote this representation simply by
$V(\lambda)$. In this notation, the standard representation is $V(1)$,
and the identity $\bwedge^3 V(1)=V(1,1,1)$ holds whenever both sides
are defined.
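As a worked example of this convention, take $\lambda=(2,1)\vdash 3$, so $k=3$ and $\lambda[n]=(n-3,2,1)$ is a genuine partition exactly when $n\geq k+\lambda_1=5$. Then
\[V(2,1)_5=V_{(2,2,1)},\qquad V(2,1)_6=V_{(3,2,1)},\qquad V(2,1)_7=V_{(4,2,1)},\]
and so on: only the first row of the Young diagram grows with $n$.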
\para{Hyperoctahedral groups} The hyperoctahedral group $W_n$ is the
wreath product $\Z/2\Z\wr S_n$; that is, the semidirect product
$(\Z/2\Z)^n\rtimes S_n$ where the action is by permutations. $W_n$ can also
be thought of as the group of signed permutation matrices. General
analysis of wreath products shows that the irreducible representations
of $W_n$ are classified by \emph{double partitions}
$(\lambda^+,\lambda^-)$ of $n$, meaning that
$|\lambda^+|+|\lambda^-|=n$. Given any representation $V$ of $S_n$, we
may regard $V$ as a representation of $W_n$ by pullback. The
irreducible representation $V_\lambda$ of $S_n$ yields the irreducible
representation $V_{(\lambda,0)}$ of $W_n$. Let $\nu$ be the
one-dimensional representation of $W_n$ which is trivial on $S_n$,
while each $\Z/2\Z$ factor acts by $-1$. Then $V_{(\lambda,0)}\otimes
\nu=V_{(0,\lambda)}$. In general, if $\lambda^+\vdash k$ and
$\lambda^-\vdash n-k$, the irreducible $V_{(\lambda^+,\lambda^-)}$ is
obtained as the induced representation
\[V_{(\lambda^+,\lambda^-)}=\Ind_{W_k\times W_{n-k}}^{W_n}
V_{(\lambda^+,0)}\boxtimes V_{(0,\lambda^-)}\]
where $V_{(\lambda^+,0)}\boxtimes V_{(0,\lambda^-)}$ denotes the vector space $V_{(\lambda^+,0)}\otimes V_{(0,\lambda^-)}$ considered as a representation of $W_k\times W_{n-k}$.
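Since induction multiplies dimension by the index $[W_n:W_k\times W_{n-k}]=\binom{n}{k}$, this description yields the dimension count
\[\dim V_{(\lambda^+,\lambda^-)}=\binom{n}{k}\,\dim V_{\lambda^+}\,\dim V_{\lambda^-},\]
where $V_{\lambda^+}$ and $V_{\lambda^-}$ are the corresponding irreducibles of $S_k$ and $S_{n-k}$.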
As before, for an arbitrary double partition
$\lambda=(\lambda^+,\lambda^-)$ with $|\lambda^+|+|\lambda^-|=k$, for
$n\geq k+\lambda^+_1$ we define the padded partition
\[\lambda[n]=((n-k,\lambda^+),\lambda^-),\]
and define $V(\lambda)_n$ to be the irreducible $W_n$--representation
\[V(\lambda)_n=V_{\lambda[n]}.\] Every irreducible representation of
$W_n$ is of the form $V(\lambda)_n$ for a unique double partition
$\lambda$.
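For example, the double partition $\lambda=((1),(1))$ has $|\lambda^+|+|\lambda^-|=2$, and the padding gives
\[\lambda[n]=\big((n-2,1),(1)\big)\qquad\text{for all }n\geq 3.\]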
\subsection{The algebraic groups $\SL_n$, $\GL_n$ and $\Sp_{2n}$}
\label{section:repsGLSp}
In this subsection we recall the representation theory of the
algebraic groups $\SL_n$, $\GL_n$ and $\Sp_{2n}$.
\para{Special linear groups} We first review the representation theory
of $\SL_n\Q$ and $\GL_n\Q$. There is an interplay between two
important perspectives here, that of highest weight vectors and that
of Schur functors.
Every representation of $\SL_n\Q$ induces a representation of the Lie
algebra $\fsl_n\Q$. Fixing a basis gives a triangular decomposition
$\fsl_n\Q=\fn^-\oplus \fh\oplus \fn^+$, consisting of strictly lower
triangular, diagonal, and strictly upper triangular matrices
respectively. Given a representation $V$ of $\fsl_n\Q$, a
\emph{highest weight vector} is a vector $v\in V$ which is an
eigenvector for $\fh$ and is annihilated by $\fn^+$. Every irreducible
representation contains a unique highest weight vector and is
determined by the corresponding eigenvalue in $\fh^*$, called a
\emph{weight}.
Considering the obvious basis for the diagonal matrices, we obtain
dual functionals $L_i$; this
yields \[\fh^*=\Q[L_1,\ldots,L_n]/(L_1+\cdots+L_n).\] Every weight
lies in the \emph{weight lattice}
\[\Lambda_W=\Z[L_1,\ldots,L_n]/(L_1+\cdots+L_n).\]
The \emph{fundamental weights} are $\omega_i=L_1+\cdots+L_i$. A
\emph{dominant weight} is a weight that can be written as a
nonnegative integral combination $\sum c_i\omega_i$ of the fundamental
weights. A highest weight vector always has a dominant weight as its
eigenvalue, and every dominant weight is the highest weight of a
unique irreducible representation. If $\lambda=\sum c_i\omega_i$ is a
dominant weight, we denote by $V(\lambda)_n$ the irreducible
representation of $\SL_n\Q$ with highest weight $\lambda$. These
representations remain distinct and irreducible when restricted to
$\SL_n\Z$.
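For example, the fundamental weight $\omega_1=L_1$ is the highest weight of the standard representation $\Q^n$, while $\omega_1+\omega_{n-1}$ is the highest weight of the adjoint representation $\fsl_n\Q$: using the relation $L_1+\cdots+L_n=0$ we have
\[\omega_1+\omega_{n-1}=L_1+(L_1+\cdots+L_{n-1})=L_1-L_n.\]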
We now give another labeling of the irreducible $\SL_n\Q$--representations. Let $\lambda=(\lambda_1\geq \cdots \geq \lambda_\ell)$ be a
partition of $d$. Each such partition determines a \emph{Schur
functor} $\Schur_\lambda$ which attaches to any vector space $V$ the
vector space
\[\Schur_\lambda(V)=V^{\otimes d}\otimes_{\Q S_d}V_\lambda,\]
where $\Q S_d$ acts on $V^{\otimes d}$ by permuting the factors. If
$\dim V$ is less than $\ell(\lambda)$, the number of rows of $\lambda$, then
$\Schur_\lambda(V)$ is the zero representation. If $V$ is a representation of a
group $G$, the induced action makes $\Schur_\lambda(V)$ a
representation of $G$ as well.
Consider the standard representation of $\SL_n\Q$ on $\Q^n$. For any
partition $\lambda=(\lambda_1\geq\cdots\geq \lambda_n\geq 0)$ with at
most $n$ rows, the resulting representation $\Schur_\lambda(\Q^n)$ is
isomorphic to $V(\lambda_1L_1+\cdots+\lambda_nL_n)_n$ as
$\SL_n\Q$--representations. In particular, $\Schur_\lambda(\Q^n)$ is
irreducible, and all irreducible representations arise this way; see
\cite[\S 6 and \S 15.3]{FH}. For example, let $V=\Q^n$. When $\lambda=(d)$ is the partition with a single row,
then $\Schur_\lambda(V)=\Sym^dV$, and when $\lambda=(1,\ldots,1)$ has $d$ rows, then
$\Schur_\lambda(V)=\bwedge^dV$. Note that since $L_1+\cdots+L_n=0$,
two partitions $\lambda$ and $\mu$ determine the same
$\SL_n\Q$--representation if and only if $\lambda_i-\mu_i$ is constant
for all $1\leq i\leq n$. Thus we may always take our partitions to
have $\lambda_n=0$ (see the ``important notational convention'' remark below).
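For partitions with more than one row and column the dimensions are computed by the hook content formula; for example,
\[\dim\Schur_{(2,1)}(\Q^n)=\frac{n(n-1)(n+1)}{3},\]
which equals $8$ when $n=3$, recovering the adjoint representation of $\SL_3\Q$.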
If $\lambda=(\lambda_1\geq\cdots\geq \lambda_k\geq 0)$ is a partition
with $k$ rows, then for any $n>k$ we define
\begin{equation}
\label{eq:Vlambda}
V(\lambda)_{n}\coloneq \Schur_{\lambda}(\Q^n).
\end{equation}
With this convention, every irreducible representation of $\SL_n\Q$ is
of the form $V(\lambda)_n$ for a unique partition $\lambda$. As
before, we will sometimes refer to $V(\lambda)_n$ as $V(\lambda)$ when
the dimension is clear from context. Note that with this terminology
$V(3,1)_n$ has the same meaning as $V(3,1,0,0)_n$.
\para{Important notational convention} The right side of
\eqref{eq:Vlambda} makes sense even when $n=k$ or $n<k$. However, we
intentionally decline to define $V(\lambda)_n$ when $n\leq k$. The
reason is that as noted above, $V(\lambda_1,\ldots,\lambda_n)$
coincides with
$V(\lambda_1-\lambda_n,\ldots,\lambda_n-\lambda_n)$. This coincidence
causes confusion with intuitive expectations about
multiplicity. For example, we would expect that the multiplicity of
the irreducible $\SL_n\Q$--representation $\Schur_{(2,2,2,2)}(\Q^n)$
in the trivial representation $\Q$ is 0, and this is in fact true for all
$n>4$. However, when $n=4$ we have $\Schur_{(2,2,2,2)}(\Q^4)=\Q$, and
so the multiplicity in this case is 1. For $n<4$ the representation
$\Schur_{(2,2,2,2)}(\Q^n)$ is the zero representation, so the
multiplicity is not well-defined. Another benefit of this convention
is the important fact that every irreducible representation of
$\SL_n\Q$ is of the form $V(\lambda)_n$ for a \emph{unique}
$\lambda$. This notational convention is equivalent to requiring all
partitions to have $\lambda_n=0$, as mentioned above.
\para{General linear groups} Consider the standard representation of
$\GL_n\Q$ on $\Q^n$. If $\lambda=(\lambda_1\geq\cdots\geq
\lambda_n\geq 0)$ is a partition with at most $n$ rows, then
$V(\lambda)_n=\Schur_\lambda(\Q^n)$ is an irreducible representation
of $\GL_n\Q$. The partition $(1,\ldots,1)$ with $n$ rows yields the
representation $V(1,\ldots,1)_n=\bwedge^n\Q^n=D$, the one-dimensional
determinant representation of $\GL_n\Q$, and in general for any
positive $k$ we
have \[\Schur_{(\lambda_1+k,\ldots,\lambda_n+k)}(\Q^n)=
\Schur_{(\lambda_1,\ldots,\lambda_n)}(\Q^n) \otimes D^{\otimes k}.\] A \emph{pseudo-partition} is a sequence
$\lambda=(\lambda_1\geq \cdots\geq \lambda_\ell)$, where the integers
$\lambda_i$ are allowed to be negative. The length $\ell(\lambda)$ of a pseudo-partition is the largest $i$ for which $\lambda_i\neq 0$. We extend the definition of
$V(\lambda)$ to pseudo-partitions by the above formula. That is, for
any pseudo-partition $\lambda=(\lambda_1\geq \cdots\geq \lambda_k)$
and any $n\geq k$, we define
\[V(\lambda_1,\ldots,\lambda_k)_n\coloneq
\Schur_{(\lambda_1-\lambda_k,\ldots,\lambda_k-\lambda_k)}(\Q^n)\otimes
D^{\otimes\lambda_k}.\] Every irreducible representation of $\GL_n\Q$
is of the form $V(\lambda)_n$ for a unique pseudo-partition
$\lambda$. For example, the dual of $V(\lambda_1,\ldots,\lambda_n)$ is
$V(-\lambda_n,\ldots,-\lambda_1)$. As before, the obvious basis for
the diagonal matrices yields dual functionals $L_i$. For a
pseudo-partition $\lambda=(\lambda_1\geq\cdots\geq\lambda_k)$, the
irreducible representation $V(\lambda)_n$ has highest weight
$\lambda_1L_1+\cdots+\lambda_kL_k$. When restricted to $\GL_n\Z$, all
of these representations remain irreducible, and $D^{\otimes 2}$
becomes trivial; thus two pseudo-partitions $\lambda$ and $\mu$
determine the same representation of $\GL_n\Z$ if and only if
$\lambda_i-\mu_i$ is constant and even for all $1\leq i\leq n$.
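As an illustration with a negative entry, consider the length--$n$ pseudo-partition $(1,0,\ldots,0,-1)$. The definition gives
\[V(1,0,\ldots,0,-1)_n=\Schur_{(2,1,\ldots,1,0)}(\Q^n)\otimes D^{\otimes(-1)}\cong\fsl_n\Q,\]
the adjoint representation, with highest weight $L_1-L_n$; note that it is self-dual, as the duality formula above predicts.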
\begin{remark}
Note that for $\GL_n\Q$--representations, $V(3,1)_n$ has the same meaning as
$V(3,1,1,1)_n$, while $V(3,1,0)_n$ has the same meaning as
$V(3,1,0,0)_n$. The discrepancy between the terminology for
representations of $\SL_n\Q$ and $\GL_n\Q$ comes from the fact that
for $\SL_n\Q$ we always assume that $\lambda_n=0$.
\end{remark}
\para{Symplectic groups} We now review the representation theory of
$\Sp_{2n}\Q$. Every representation of $\Sp_{2n}\Q$ induces a
representation of the Lie algebra $\fsp_{2n}\Q$. Again we have a
decomposition $\fsp_{2n}\Q=\fn^-\oplus \fh\oplus \fn^+$, with
$\fh^*=\Q[L_1,\ldots,L_n]$. The fundamental weights are
$\omega_i=L_1+\cdots+L_i$, and so for any dominant weight
$\lambda=\sum c_i\omega_i$ there is a unique irreducible
representation $V(\lambda)_n$ of $\Sp_{2n}\Q$.
These can be identified explicitly as follows. Let $V=\Q^{2n}$ be the
standard representation of $\Sp_{2n}\Q$. For each $1\leq i<j\leq d$,
the symplectic form gives a contraction $V^{\otimes d}\to V^{\otimes
d-2}$ as $\Sp_{2n}\Q$--modules. Define $V^{\langle d\rangle}\leq
V^{\otimes d}$ to be the intersection of the kernels of these
contractions. For any partition $\lambda\vdash d$, the representation
$\Schur_\lambda V$ is realized as the image of $c_\lambda\in \Q S_d$
acting on $V^{\otimes d}$. If $k$ is the number of rows of the
partition $\lambda$, for any $n\geq k$ we define $V(\lambda)_n$ to be
the intersection
\[V(\lambda)_n\coloneq\Schur_\lambda V\cap V^{\langle d\rangle}.\] The notation
$\Schur_{\langle\lambda\rangle}V$ is also used for the intersection
$\Schur_\lambda V\cap V^{\langle d\rangle}$. We
remark that this intersection is trivial if $n$ is less than the
number of rows of $\lambda$. Every irreducible representation of
$\Sp_{2n}\Q$ is of the form $V(\lambda)_n$ for a unique partition
$\lambda$. In particular, it follows that each irreducible
representation $V(\lambda)_n$ is self-dual. These representations
remain distinct and irreducible when restricted to $\Sp_{2n}\Z$.
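The two smallest cases illustrate the construction. For $\lambda=(1)$ there are no contractions, so $V(1)_n=\Q^{2n}$ is the standard representation. For $\lambda=(1,1)$ the single contraction restricts on $\bwedge^2V$ to the symplectic form, so
\[V(1,1)_n=\ker\big(\bwedge^2\Q^{2n}\to\Q\big),\qquad\dim V(1,1)_n=\binom{2n}{2}-1.\]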
\begin{remark}\label{rem:Spweights}
There is one issue which can cause confusion when comparing weights
for $\GL_{2n}\Q$ and $\Sp_{2n}\Q$. To clarify, we work out the
comparison explicitly in terms of a basis. Let
$\{a_1,b_1,\ldots,a_n,b_n\}$ be a symplectic basis for $\Q^{2n}$,
meaning that the symplectic form satisfies $\omega(a_i,b_i)=1$ and
$\omega(a_i,b_j)=\omega(a_i,a_j)=\omega(b_i,b_j)=0$. By abuse of
notation, in this remark we also denote by
$\{a_1,b_1,\ldots,a_n,b_n\}$ the corresponding basis for $\fh_\fgl$,
the diagonal matrices in $\fgl_{2n}\Q$, with dual basis
$\{a_1^*,b_1^*,\ldots,a_n^*,b_n^*\}$ for $\fh^*_\fgl$. These
elements, in some order, will be the weights
$\{L^{\fgl}_1,\ldots,L^{\fgl}_{2n}\}$, but we defer until later the
explicit identification.
If $\fh_\fsp$ denotes the diagonal matrices in $\fsp_{2n}\Q$, the
dual $\fh^*_\fsp$ has basis $\{L^{\fsp}_i=a_i^*-b_i^*\}$. These
weights are ordered so that $L^{\fsp}_1>\cdots>L^{\fsp}_n$. The
restriction from $\fh^*_\fgl$ to $\fh^*_\fsp$ maps $a_i^*\mapsto
L^{\fsp}_i$ and $b_i^*\mapsto -L^{\fsp}_i$. To correctly compare
representations of $\GL_{2n}\Q$ with those of $\Sp_{2n}\Q$, this
restriction should preserve the ordering on weights (for example, so
that the notions of ``highest weight'' agree). This forces us to
label the weights of $\GL_{2n}\Q$ as \[L^{\fgl}_1=a^*_1,\ \ldots,\
L^{\fgl}_n=a^*_n,\ \ L^{\fgl}_{n+1}=b^*_n,\ \ldots,\
L^{\fgl}_{2n}=b^*_1.\] Thus the restriction maps $L^{\fgl}_i\mapsto
L^{\fsp}_i$ and $L^{\fgl}_{2n-i+1}\mapsto -L^{\fsp}_i$ for $1\leq
i\leq n$. With respect to the ordered basis
$\{a_1,\ldots,a_n,b_n,\ldots,b_1\}$, the subalgebra $\fn^+$ consists
of exactly those matrices in $\fsp_{2n}\Q$ that are
upper-triangular.
\end{remark}
\subsection{Definition of representation stability}
\label{section:repstab:def}
We are now ready to define the main concept of this paper. Let $G_n$ be one of the families $\GL_n\Q$, $\SL_n\Q$, $\Sp_{2n}\Q$,
$S_n$, or $W_n$. In this section $\lambda$ refers to the datum
determining the irreducible representations of the corresponding
family, namely a pseudo-partition, a partition, or a double
partition. For each family we have natural inclusions
$G_n\hookrightarrow G_{n+1}$: for $S_n$ and $W_n$ we take the standard
inclusions, and for $\GL_n\Q$, $\SL_n\Q$, and $\Sp_{2n}\Q$ we take the
upper-left inclusions.
Let $\{V_n\}$ be a sequence of $G_n$--representations, equipped with
linear maps $\phi_n\colon V_n\to V_{n+1}$, making the following diagram
commute for each $g\in G_n$:
\[\xymatrix{
V_n\ar^{\phi_n}[r]\ar_{g}[d]&V_{n+1}\ar^{g}[d]\\ V_n\ar_{\phi_n}[r]&V_{n+1}
}\]
On the right side we consider $g$ as an element of $G_{n+1}$ by the
inclusion $G_n\hookrightarrow G_{n+1}$. This condition is equivalent
to saying that $\phi_n$, thought of as a map from $V_n$ to the restriction $V_{n+1}\downarrow G_n$, is a map of $G_n$--representations.
We allow the vector spaces
$V_n$ to be infinite-dimensional, but we ask that each vector lie in
some finite-dimensional subrepresentation. This ensures that $V_n$
decomposes as a direct sum of finite-dimensional irreducibles. We call
such a sequence of representations \emph{consistent}.
We want to compare the representations $V_n$ as $n$ varies. However,
since $V_n$ and $V_{n+1}$ are representations of different groups, we
cannot ask for an isomorphism as representations. But we can ask for
injectivity and surjectivity, once they are properly
formulated. Moreover, using the uniformity of our labeling of irreducible representations, we
can formulate what it means for $V_n$ and $V_{n+1}$ to be the ``same
representation''.
\begin{definition}[Representation stability]
\label{definition:repstab1}
Let $\{V_n\}$ be a consistent sequence of $G_n$--representations.
The sequence $\{V_n\}$ is \emph{representation stable} if, for
sufficiently large $n$, each of the following conditions holds.
\begin{enumerate}[{\bf I.}]
\item \textbf{Injectivity:} The natural map $\phi_n\colon V_n\to
V_{n+1}$ is injective.
\item \textbf{Surjectivity:} The span of the $G_{n+1}$--orbit of
$\phi_n(V_n)$ equals all of $V_{n+1}$.
\item \textbf{Multiplicities:} Decompose $V_n$ into irreducible
representations as
\[V_n=\bigoplus_\lambda c_{\lambda,n}V(\lambda)_n\] with
multiplicities $0\leq c_{\lambda,n}\leq \infty$. For each $\lambda$,
the multiplicities $c_{\lambda,n}$ are eventually independent of
$n$.
\end{enumerate}
\end{definition}
It is not hard to check that, given Condition I for $\phi_n$, Condition II for $\phi_n$ is equivalent to the following when $G_n$ is finite: $\phi_n$ is a composition of the inclusion $V_n\hookrightarrow \Ind_{G_n}^{G_{n+1}}V_n$ with a surjective $G_{n+1}$--module homomorphism $ \Ind_{G_n}^{G_{n+1}}V_n\to V_{n+1}$.
By requiring Condition III just for the multiplicity of the single
irreducible representation $V(\lambda)_n$, we obtain the notion of
\emph{$\lambda$--representation stable} for a fixed $\lambda$. In the
presence of Condition IV below, $\lambda$--representation stability is
exactly equivalent to representation stability for the
$\lambda$--isotypic components $V_n^{(\lambda)}$.
\begin{remark}
Fix either $G_n=S_n$ or $W_n$ and take for each $n\geq 1$ an exact
sequence of groups
\[1\to A_n\to \Gamma_n\to G_n\to 1.\] Then an easy transfer argument
shows that $\lambda$--representation stability of $\{H_i(A_n;\Q)\}$
for the trivial representation ($\lambda=0$) is equivalent to
classical homological stability for the sequence
$\{H_i(\Gamma_n;\Q)\}$.
\end{remark}
\begin{remark}
It seems likely that many of the results in this paper can be extended to orthogonal groups and to the corresponding Weyl groups; it would be interesting to know what differences arise, if any.
\end{remark}
\para{Uniform stability} In Definition~\ref{definition:repstab1} we did not require the
multiplicities of all the irreducible representations to stabilize
simultaneously. We will see that in many cases a stronger form of
stability holds, as follows.
\begin{definition}[Uniform representation stability]
A consistent sequence $\{V_n\}$ of $G_n$--representations is
\emph{uniformly representation stable} if Conditions I and II hold for
sufficiently large $n$, and the following condition holds:
\begin{enumerate}
\item[{\bf III$'$.}] \textbf{Multiplicities (uniform):} There is some
$N$, not depending on $\lambda$, so that for $n\geq N$ the
multiplicities $c_{\lambda,n}$ are independent of $n$ for all
$\lambda$. In particular, for any $\lambda$ for which $V(\lambda)_N$
is not defined, $c_{\lambda,n}=0$ for all $n\geq N$.
\end{enumerate}
\end{definition}
For example, if $G_n=\GL_n\Q$, the latter condition applies to any
partition $\lambda$ with more than $N$ rows. We will see examples
below both of uniform and nonuniform representation stability.
\para{Multiplicity stability}
It sometimes happens that for a sequence $\{V_n\}$ of $G_n$--representations
there are no natural maps $V_n\to V_{n+1}$. For example, this is the
situation for the Torelli groups of closed surfaces (see
Section~\ref{section:torelli} below). In this case we can still ask
whether the decomposition of $V_n$ into irreducibles stabilizes in
terms of multiplicities.
\begin{definition}[(Uniform) multiplicity stability]
A sequence of $G_n$--representations $V_n$ is called
\emph{multiplicity stable} (respectively \emph{uniformly
multiplicity stable}) if Condition III (respectively Condition
III$'$) holds.
\end{definition}
\para{Reversed maps}
The definitions above capture the behavior of a sequence of representations, one including into the next. In a number of contexts (see, e.g., \S~\ref{section:flags} below) we are given sequences of representations with the maps going the other way, $\phi_n\colon V_{n+1}\to V_n$. In this case we need to alter the definition of representation stability, in particular injectivity and surjectivity.
\begin{definition}[Representation stability with maps reversed]
\label{definition:repstabrev}
A consistent sequence of $G_n$--representations $\{V_n\}$ with maps
$\phi_n\colon V_n\leftarrow V_{n+1}$ is \emph{representation stable}
if for sufficiently large $n$, Condition III holds, and the
following conditions hold:
\begin{enumerate}[I$'$.]
\item \textbf{Surjectivity:} The map $\phi_n\colon V_n\leftarrow
V_{n+1}$ is surjective.
\item \textbf{Injectivity:} There exists a subspace of $V_{n+1}$ which
maps isomorphically under $\phi_n$ to $V_n$, and whose
$G_{n+1}$--orbit spans $V_{n+1}$.
\end{enumerate}
\end{definition}
We remark that Definition~\ref{definition:repstabrev} is \emph{not} equivalent to representation stability for the dual sequence $\phi_n^\ast:V_n^\ast\to V_{n+1}^\ast$. For an explanation, and a way to handle dual sequences, see the discussion of ``mixed tensor stability'' in \S\ref{section:strong}.
\para{Examples of representation stability} We will see many examples
of representation stability below; indeed much of this paper is an
exploration of such examples. For now, we mention the following
simple examples.
\begin{xample}
\label{example:tensor}
Let $V_n=\Q^n$ be the standard representation of $\GL_n\Q$. Then
$\{V_n\otimes V_n\}$ is uniformly representation stable. This
follows easily from the decomposition \[V_n\otimes
V_n=\Sym^2V_n\oplus \bwedge^2V_n\] of $V_n\otimes V_n$ into
irreducibles. We will see in
Section~\ref{section:classicalstability} that $\{V_n\otimes V_n\}$
will be uniformly representation stable for any sequence $\{V_n\}$ of
uniformly stable representations.
\end{xample}
In the other direction, we have the following.
\begin{nonexample}
Let $V_n=\Q^n$ be the standard representation of $\SL_n\Q$, and let
$W_n=\bwedge^* V_n$. Then $\{W_n\}$ is a stable sequence of
$\SL_n\Q$--representations, but it is not a uniformly stable
sequence: the multiplicity of the irreducible
representation $V(1,\ldots ,1)_n$, with $k$ occurrences of
$1$, does not stabilize until $n>k$.
\end{nonexample}
\begin{nonexample}
\label{nonexample:regular}
Let $G_n$ be either $S_n$ or $W_n$. Then the sequence of regular representations $\{\Q G_n\}$ is not representation stable, or even
$\lambda$--representation stable for any partition or double
partition $\lambda$. This follows from the standard
fact that the multiplicity of $V(\lambda)_n$ in the regular
representation equals $\dim(V(\lambda)_n)$, which is not constant,
and indeed tends to infinity with $n$.
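For instance, for $G_n=S_n$ and $\lambda=(1)$, the representation
$V(1)_n$ is the standard $(n-1)$--dimensional irreducible
representation of $S_n$, so its multiplicity in $\Q S_n$ is $n-1$,
which grows without bound.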
\end{nonexample}
\subsection{Strong and mixed tensor stability}
\label{section:strong}
In this subsection we define two variations of
representation stability. Both variations will be used later in the
paper in the analysis of certain examples. The reader might want to
skip this subsection until encountering those examples and move to \S\ref{section:classicalstability}.
\para{Strong stability} Conditions I and II together give a kind of
``isomorphism'' between representations of different groups, but
they give no information about the subrepresentations of the
$V_n$. Condition III better captures the internal structure of the
representations $V_n$, but ignores the maps between the
representations. For example, Condition III alone does not rule out
the possibility that the maps $\phi_n\colon V_n\to V_{n+1}$ are all
zero. The following condition combines these approaches to give
careful control over the behavior of a subrepresentation under
inclusion. We require that for every irreducible $V(\lambda)_n\subset
V_n$, the $G_{n+1}$--span of the image $\phi_n(V(\lambda)_n)$ is
isomorphic to $V(\lambda)_{n+1}$.
\begin{definition}[Strong representation stability]
\label{definition:repstab2}
A consistent sequence $\{V_n\}$ of $G_n$--representations is
\emph{strongly representation stable} if for sufficiently large $n$, not
depending on $\lambda$, Conditions I, II, and III$'$ hold (that is,
$\{V_n\}$ is uniformly representation stable), and the following condition holds:
\begin{enumerate}
\item[{\bf IV.}] \textbf{Type-preserving:} For any subrepresentation
$W\subset V_n$ so that $W\approx V(\lambda)_{n}$, the span of the
$G_{n+1}$--orbit of $\phi_n(W)$ is isomorphic to $V(\lambda)_{n+1}$.
\end{enumerate}
\end{definition}
It is possible to embed the $\GL_n\Q$--module $\bwedge^i
\Q^n=V(\omega_i)_n$ into the $\GL_{n+1}\Q$--module
$\bwedge^{i+1}\Q^{n+1}=V(\omega_{i+1})_{n+1}$ by $v\mapsto v\wedge
x_{n+1}$. This embedding respects the group actions, but the
$\GL_{n+1}\Q$--span of the image is all of $\bwedge^{i+1}\Q^{n+1}$;
similar embeddings occur for other pairs of irreducible
representations. Condition IV rules out this type of phenomenon. One
example of a uniformly stable sequence of $S_n$--representations that
is not strongly stable is given by the cohomology of pure braid
groups; see \S\ref{section:braid}.\pagebreak
\begin{remark}
\label{remark:strong}
For applications, we will need the stronger statement that any
subspace isomorphic to $V(\lambda)_n^{\oplus k}$ has $G_{n+1}$--span
isomorphic to $V(\lambda)_{n+1}^{\oplus k}$, where the multiplicity
$k$ may be greater than $1$. Fortunately, this stronger statement
follows from Condition IV above. First, the maps
$V_n\to V_{n+1}$ are injective (apply Condition IV to any $W$
contained in the kernel). Furthermore, for a fixed $\lambda$,
Condition IV implies that the inclusions $V_n\hookrightarrow
V_{n+1}$ restrict to inclusions of $\lambda$--isotypic components
$V_n^{(\lambda)}\hookrightarrow V_{n+1}^{(\lambda)}$.
It is thus clear that the $G_{n+1}$--span of $V(\lambda)_{n}^{\oplus
k}$ is $V(\lambda)_{n+1}^{\oplus \ell}$ with $\ell\leq k$. The
potential problem is that two independent subrepresentations
$W,W'\approx V(\lambda)_n\subset V_n$ could both map into the same
$V(\lambda)_{n+1}\subset V_{n+1}$. This is ruled out by the
following property, shared by each of our families of groups: the
restriction $V(\lambda)_{n+1}\downarrow G_n$ contains the
irreducible $G_n$--representation $V(\lambda)_n$ with multiplicity
1. Thus the multiplicity of $V(\lambda)_n$ in
$V(\lambda)_{n+1}^{\oplus \ell}\downarrow G_n$ is $\ell$. But as
$G_n$--representations, we have an inclusion
\[V(\lambda)_n^{\oplus k}\hookrightarrow
\big(V(\lambda)_{n+1}^{\oplus \ell}\downarrow G_n\big),\] which
implies $k\leq \ell$, verifying the stronger statement as desired.
For $G_n=\SL_n\Q$ or $\GL_n\Q$, the property mentioned above can be
seen from the formula \eqref{eq:SLres} given in the proof of
Theorem~\ref{thm:classicalstability}(6) below. For $G_n=\Sp_{2n}\Q$,
it follows from \eqref{eq:Spres} below. For $G_n=S_n$, this is the
classical \emph{branching rule} \cite[Equation 4.42]{FH}:
\[V(\lambda)_{n+1}\downarrow S_n=V(\lambda)_n\oplus
\bigoplus_\mu V(\mu)_n\] where $\mu$ ranges over those partitions
obtained by removing one box from $\lambda$. For $G_n=W_n$, the
branching rule has the form \cite[Lemma 6.1.3]{GP}:
\[V(\lambda^+,\lambda^-)_{n+1}\downarrow W_n
=V(\lambda^+,\lambda^-)_n\oplus
\bigoplus_{\mu^+}V(\mu^+,\lambda^-)_n\oplus
\bigoplus_{\mu^-}V(\lambda^+,\mu^-)_n\] where $\mu^+$ is obtained
from $\lambda^+$, and $\mu^-$ is obtained from $\lambda^-$, by
removing one box.
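For example, in the case $G_n=S_n$ with $\lambda=(2,1)$, removing one
box from $\lambda$ yields $\mu=(1,1)$ or $\mu=(2)$, so the branching
rule reads \[V(2,1)_{n+1}\downarrow S_n=V(2,1)_n\oplus V(1,1)_n\oplus
V(2)_n,\] and in particular $V(2,1)_n$ occurs with multiplicity 1, as
claimed.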
It follows that assuming surjectivity, Condition IV also implies
Condition III$'$. Conversely, as long as the $V_n$ are
finite-dimensional, or even have finite multiplicities $0\leq
c_{\lambda,n}<\infty$, Conditions III$'$ and IV together imply
Condition II.
\end{remark}
\para{An equivalent formulation of Condition IV}
\label{reformulation}
When $G_n$ is $\SL_n\Q$ or $\GL_n\Q$, Condition IV can be stated in a
more familiar basis-dependent form. Let $P_{n+1}$ be the
$n$--dimensional subgroup of $\SL_{n+1}\Q$ preserving and acting
trivially on $\Q^n<\Q^{n+1}$; that is, agreeing with the identity outside the rightmost column. Then assuming uniform multiplicity
stability, Condition IV can be stated as follows.
\begin{proposition}
\label{prop:SLstrong}
For $G_n=\SL_n\Q$ or $\GL_n\Q$, let $\{V_n\}$ be a uniformly
multiplicity stable sequence of $G_n$--representations. Assume that
the maps $\phi_n\colon V_n\hookrightarrow V_{n+1}$ are injective. The
sequence $\{V_n\}$ is type-preserving (satisfies Condition IV) for
sufficiently large $n$ if and only if the following condition is
satisfied for sufficiently large $n$.
\end{proposition}
\begin{enumerate}
\item[{\bf IV$'$.}] $P_{n+1}$ acts trivially on the image
$\phi(V_n)$ of $V_n$ in $V_{n+1}$.
\end{enumerate}
Condition IV$'$ is in practice much easier to check than Condition
IV. As we will see in Theorem~\ref{thm:classicalstability}, Condition
IV$'$ is also preserved by many natural constructions. It is
equivalent to the statement that $\phi_n$ takes highest weight vectors in $V_n$ to
highest weight vectors in $V_{n+1}$.
\begin{proof}
Within this proof, let $\fp_{n+1}$ be the Lie algebra of
$P_{n+1}$. Explicitly, $\fp_{n+1}$ is the span of the elementary
matrices $E_{i,n+1}$ with $1\leq i\leq n$. The subgroup $P_{n+1}$
was chosen exactly so that $\fn^+_{n+1}$ is spanned by $\fn^+_n$
together with $\fp_{n+1}$.
\para{IV$'$ $\implies$ IV} In fact, we need only assume that $P_{n+1}$
acts trivially on the image of each highest weight vector. Consider
a highest weight vector $v\in V_n$, so $v$ is an eigenvector for
$\fh_n$ with weight $\lambda\in\fh_n^*$, and $v$ is annihilated by
$\fn^+_n$. Replacing $v$ if necessary, we may assume that $\phi(v)$
is an eigenvector for $\fh_{n+1}$ with weight
$\lambda'\in\fh_{n+1}^*$. The consistency of the map $V_n\to
V_{n+1}$ implies that under the restriction map $\fh_{n+1}^*\to
\fh_n^*$, the weight $\lambda'$ restricts to $\lambda$. The
condition that $P_{n+1}$ acts trivially on $\phi(V_n)$ implies that
$\fp_{n+1}$ annihilates $\phi(V_n)$. It follows that
$\fn^+_{n+1}=\fn^+_n\oplus \fp_{n+1}$ annihilates $\phi(v)$, so
$\phi(v)$ is a highest weight vector for $G_{n+1}$. By assumption
$\{V_n\}$ is uniformly multiplicity stable. This implies that once
$n$ is sufficiently large, the only weight $\lambda'$ occurring in
$V_{n+1}$ which restricts to $\lambda\in \fh_n^*$ is the weight
satisfying $V(\lambda)_{n+1}=V(\lambda')_{n+1}$. Thus we see that
$\phi(v)$ spans the subrepresentation $V(\lambda)_{n+1}$, as
desired. Since this holds for all highest weight vectors $v$, and
each irreducible subrepresentation is the span of a highest weight
vector, Condition IV follows.
\para{IV $\implies$ IV$'$} Conversely, if $\phi\colon V_n\hookrightarrow
V_{n+1}$ is type-preserving, let $v\in V_n$ be a highest weight
vector for $G_n$ spanning $V(\lambda)_n$, and consider its image in
$V_{n+1}$. Certainly $\phi(v)$ remains a highest weight vector for
$G_n$ with weight $\lambda$. By Condition IV, its $G_{n+1}$--span is
isomorphic to $V(\lambda)_{n+1}=V(\lambda')_{n+1}$. Let $w\in
V(\lambda)_{n+1}$ be the $G_{n+1}$--highest weight vector with
highest weight $\lambda'$. Then $w$ is evidently a highest weight vector
for $G_n$ with weight $\lambda$ as well. But as noted in
Remark~\ref{remark:strong}, the restriction
$V(\lambda)_{n+1}\downarrow G_n$ contains $V(\lambda)_n$ with
multiplicity 1, so $V(\lambda)_{n+1}$ contains a unique
$G_n$--highest weight vector with weight $\lambda$. Thus $\phi(v)$
must coincide with $w$, and in particular $\phi(v)$ is a highest
weight vector for $G_{n+1}$.
This implies that $P_{n+1}$ acts trivially on $\phi(v)$ for each
highest weight vector $v$. It remains to show that $P_{n+1}$ acts
trivially on the entire image of $V_n$, that is on the $G_n$--span
of the highest weight vectors $\phi(v)$. This is a general fact of
representation theory. Since $P_{n+1}$ is contained in
$\SL_{n+1}\Q$, we may assume that $G_n=\SL_n\Q$. For the rest of
the argument we identify
$\mathfrak{h}_{n+1}^*=\Z[L_1,\ldots,L_{n+1}]/(L_1+\cdots+L_{n+1})$
with $\Z[L_1,\ldots,L_n]$ by setting $L_{n+1}=0$. Restrict to the
inclusion of a single irreducible $V(\lambda)_n\subset
V(\lambda)_{n+1}$, with highest weight vector $v$ of weight
$\lambda=\lambda_1L_1+\cdots+\lambda_nL_n$. Let $k\coloneq \sum
\lambda_i$ be the sum of the coefficients.
The irreducible representation $V(\lambda)_{n+1}$ is the span of $v$
under $\fn^-_{n+1}$, which is spanned by the elementary matrices
$\{E_{j,i}|1\leq i<j\leq n+1\}$. If $j\leq n$ the
matrix $E_{j,i}$ has weight $L_j-L_i$, while $E_{n+1,i}$ has weight
$L_{n+1}-L_i=-L_i$. Adding the former does not change the sum of the
coefficients, while adding the latter decreases the sum, so every
weight $\mu=\mu_1L_1+\cdots+\mu_nL_n$ occurring in $V(\lambda)_{n+1}$
has $\sum \mu_i\leq k$. The subspace $V(\lambda)_n$ is the span of
$v$ under $\fn^-_n$, which is spanned by the $\{E_{j,i}|1\leq
i<j\leq n\}$ with roots $\{L_j-L_i|1\leq i<j\leq n\}$. Thus the
weights $\mu$ occurring in $V(\lambda)_n$ all have
$\sum\mu_i=k$. Applying any matrix $E_{i,n+1}\in \fp_{n+1}$ with
weight $L_i-L_{n+1}=L_i$ to such a vector would yield a vector with
weight
\[\mu_1L_1+\cdots+(\mu_i+1)L_i+\cdots+\mu_nL_n.\]
The sum of the coefficients of such a weight is $k+1$, and we have
already said that no such weight occurs in $V(\lambda)_{n+1}$. It
follows that $\fp_{n+1}$ must annihilate every element of
$V(\lambda)_{n}$, and thus $P_{n+1}$ acts trivially on
$V(\lambda)_n\subset V(\lambda)_{n+1}$, as desired. We conclude that
$P_{n+1}$ acts trivially on $\phi(V_n)\subset V_{n+1}$.
\end{proof}
\begin{remark}
There is no such nice formulation of Condition IV for
representations of $\Sp_{2n}\Q$. Indeed we will see in
Theorem~\ref{thm:classicalstability} that if $\{V_n\}$ is a strongly
stable sequence of $\SL_n\Q$--representations, then for any Schur
functor $\Schur_\lambda$, the sequence $\{\Schur_\lambda(V_n)\}$ is
strongly stable as well. The proof hinges upon the equivalence of
Conditions IV and IV$'$.
The corresponding fact for $\Sp_{2n}\Q$ is false. For example,
$\{V_n=\Q^{2n}\}$ is certainly strongly stable. However,
$\{\bwedge^2 V_n=\bwedge^2 \Q^{2n}\}$ is not strongly stable. The
unique trivial subrepresentation of $\bwedge^2 \Q^{2n}$ is spanned
by $a_1\wedge b_1+\cdots+a_n\wedge b_n$. However, this vector is not
taken to a trivial subrepresentation of $\bwedge^2 \Q^{2n+2}$. In
fact, the $\Sp_{2n+2}\Q$--span of $a_1\wedge b_1+\cdots+a_n\wedge
b_n$ is all of $\bwedge^2 \Q^{2n+2}$. This failure is related to the
fact, described in Remark~\ref{rem:Spweights}, that the upper-left
inclusion $\Sp_{2n}\Q\subset \Sp_{2n+2}\Q$ does not respect the
ordering of the roots. Of course there does exist some map
$V(0)_n\oplus V(\lambda_2)_n\to V(0)_{n+1}\oplus V(\lambda_2)_{n+1}$
which is type-preserving, but viewed as a map $\bwedge^2 \Q^{2n}\to
\bwedge^2 \Q^{2n+2}$ this map appears wholly unnatural.
\end{remark}
\para{Mixed tensor representations}
There are certain natural families of representations with an inherent
``stability'', which indeed satisfy the definition of
representation stability given above, but for trivial reasons that do
not capture the real nature of their stability. For example, the dual
of the standard representation of $\GL_n\Q$ has highest weight $-L_n$.
In terms of pseudo-partitions, the dual representation
$V(1,0,\ldots,0)_n^*$ is the representation $V(0,\ldots,0,-1)_n$,
which is given by a different pseudo-partition for each $n$. So in the
sequence of representations $\{V_n=(\Q^n)^*\}$, for each $\lambda$ the
irreducible $V(\lambda)_n$ appears in $V_n$ for at most one $n$, from
which it follows that the sequence $\{V_n\}$ does fit the definition of
representation stable given above. However, the ``stable
representation'' is trivial, since each representation $V(\lambda)$
eventually has multiplicity 0.
To accurately capture the stability of this sequence, as well as other
natural sequences such as $\{V_n^*\otimes \bwedge^2 V_n\}$ and the adjoint
representations $\{\fsl_n\Q\}$, we will use mixed tensor representations to
define a stronger condition than representation stability. Given
two partitions $\lambda=(\lambda_1,\ldots,\lambda_k)$ and
$\mu=(\mu_1,\ldots,\mu_\ell)$, for $n\geq k+\ell$ the \emph{mixed
tensor representation} $V(\lambda;\mu)_n$ is the irreducible
representation of $\GL_n\Q$ with highest
weight \[\lambda_1L_1+\cdots+\lambda_kL_k-\mu_\ell
L_{n-\ell+1}-\cdots-\mu_1L_n.\] Equivalently, $V(\lambda;\mu)_n$ is
the irreducible representation corresponding to the pseudo-partition
$(\lambda_1,\ldots,\lambda_k,0,\ldots,0,-\mu_\ell,\ldots,-\mu_1)$. Note
that when restricted to $\SL_n\Q$, this representation corresponds to
the partition \[(\mu_1+\lambda_1,\ldots,\mu_1+\lambda_k,\mu_1,
\ldots,\mu_1,\mu_1-\mu_\ell,\ldots,\mu_1-\mu_2,0).\]
\begin{definition}[Mixed tensor stable]
A consistent sequence of $\GL_n\Q$--representations or
$\SL_n\Q$--representations $\{V_n\}$ is called \emph{mixed
representation stable} if Conditions I and II are satisfied for
large enough $n$, and if in addition the following condition
is satisfied:
\begin{enumerate}
\item[{\bf MTIII.}] For all partitions $\lambda$ and
$\mu$, the multiplicity of the mixed tensor representation
$V(\lambda;\mu)_n$ in $V_n$ is eventually constant.
\end{enumerate}
\end{definition}
Note that mixed tensor stability implies representation stability. As an example of mixed tensor stability, consider the adjoint representation of $\SL_n\Q$ or
$\GL_n\Q$ on $\fsl_n\Q$. This corresponds to the partition
$(2,1,\ldots,1,0)$, or to the pseudo-partition
$(1,0,\ldots,0,-1)$. Thus $\fsl_n\Q$ is the mixed tensor
representation $V(1;1)_n$. Similarly, the dual $(\Q^n)^*$ of the
standard representation is $V(0;1)_n$. In general, the dual of
$V(\lambda;\mu)_n$ is $V(\mu;\lambda)_n$, so in particular if a
sequence $\{V_n\}$ is representation stable, the sequence of duals
$\{V_n^*\}$ is mixed tensor stable. We remark that a sequence
which is mixed representation stable is essentially never type-preserving.
Mixed representation stability is used in
Sections~\ref{section:torelli} and \ref{section:congruence}. This
notion was applied by Hanlon \cite{Han} to Lie algebra cohomology over
non-unital algebras, and further applied by R. Brylinski \cite{Bry}.
\section{Stability in classical representation theory}
\label{section:classicalstability}
In this section we discuss examples of representation stability in
classical representation theory. We remark that the definition of
representation stability itself already relies upon an inherent
stability in the classification of irreducible representations of the
groups $G_n$, in the sense that the system of names of representations
of the varying groups $G_n$ can be organized in a coherent way.
\subsection{Combining and modifying stable sequences}
The ubiquity of representation stability would be unlikely were it not
that many of the natural constructions in classical representation
theory preserve representation stability. We now formalize this.
Many of the results follow from well-known classical theorems.
\begin{theorem}
\label{thm:classicalstability}
Suppose that $G_n=\SL_n\Q$ and that $\{V_n\}$ and $\{U_n\}$ are
multiplicity stable sequences of finite-dimensional
$G_n$--representations. Fix partitions $\lambda$ and $\mu$. Then
the following sequences of $G_n$--representations are multiplicity
stable.
\begin{enumerate}
\item \textbf{Tensor products: } $\{V_n\otimes
U_n\}$.
\item \textbf{Schur functors: } $\{\Schur_\lambda
(V_n)\}$.
\item \textbf{Schur functors of direct sums: } $\{\Schur_\lambda(V_n\oplus
U_n)\}$.
\item \textbf{Schur functors of tensor products: } $\{\Schur_{\lambda}(V_n\otimes
U_n)\}$.
\item \textbf{Compositions of Schur functors: }
$\{\Schur_\lambda(\Schur_\mu(V_n))\}$.\\ For example,
$\{\Sym^r(\bwedge^s(V_n))\}$ for each fixed $r,s\geq 0$.
\end{enumerate}
If $G_n$ is $\SL_n\Q$, $\GL_n\Q$ or $\Sp_{2n}\Q$ and $\{V_n\}$ and
$\{U_n\}$ are uniformly multiplicity stable sequences of
finite-dimensional $G_n$--representations, then all the preceding
examples are uniformly multiplicity stable, as are the following two
examples.
\begin{enumerate}
\item[6.] \textbf{Shifted sequences: } The restrictions
$\{V_n\downarrow G_{n-k}\}$ for any fixed $k\geq 0$.
\item[7.] \textbf{Restrictions: } The restrictions $\{V_n\downarrow
\SL_n\Q\}$ and $\{V_{2n}\downarrow\Sp_{2n}\Q\}$.
\end{enumerate}
If $G_n$ is $\SL_n\Q$ or $\GL_n\Q$ and $\{V_n\}$ and
$\{U_n\}$ are strongly stable, then the resulting sequences in Parts
1--5 are strongly stable.
\end{theorem}
We also have a version of Theorem~\ref{thm:classicalstability} for
$S_n$--representations.
\begin{theorem}
\label{thm:classicalSn}
Let $\{V_n\}$ and $\{W_n\}$ be consistent sequences of
$S_n$--representations that are uniformly multiplicity stable. Then
the following sequences of $S_n$--representations are uniformly
multiplicity stable.
\begin{enumerate}
\item \textbf{Tensor products:} $\{V_n\otimes W_n\}$
\item \textbf{Shifted sequences:} The restrictions $\{V_n\downarrow
S_{n-k}\}$ for any fixed $k\geq 0$.
\end{enumerate}
\end{theorem}
Before proving Theorem~\ref{thm:classicalstability} and
Theorem~\ref{thm:classicalSn}, we give a number of examples in order
to illustrate the necessity of various hypotheses in the theorems.
\begin{nonexample}
In the final part of the statement of
Theorem~\ref{thm:classicalstability}, we need to assume strong
stability even to conclude standard representation stability of the
various combinations of representations. This strong assumption is
used via the ``type-preserving'' condition, and without this
assumption stability may not hold. Perhaps surprisingly, the issue
is not the stability of the multiplicities, but surjectivity. Here
is a simple example which illustrates the problem. This example is
not injective, but it can easily be made so; we do this
below. Let \[V_n=V(0)_n\oplus V(1)_n=\Q\oplus \Q^n,\] with maps
$V_n\to V_{n+1}$ defined by
\[\Q\oplus \Q^n\ni(a,v)\mapsto (a,ax_{n+1})\in\Q\oplus \Q^{n+1}\]
where $x_{n+1}$ is the basis vector $x_{n+1}=(0,\ldots,0,1)$. The tensor product
$V_n\otimes V_n$ decomposes into irreducibles as ${V(0)\oplus
V(1)^{\oplus 2}\oplus V(2)\oplus V(1,1)}$, where the last two
factors come from the decomposition $\Q^n\otimes
\Q^n=\Sym^2\Q^n\oplus \bwedge^2 \Q^n$. It is easy to check that the
$\SL_{n+1}\Q$--span of the image of $V_n\otimes V_n$ in
$V_{n+1}\otimes V_{n+1}$ is exactly $V(0)\oplus V(1)^{\oplus
2}\oplus V(2)$; the $\bwedge^2 \Q^n$ factor is inaccessible.
For an example which is actually representation stable, let
\[V_n=V(0)_n\oplus V(1)_n\oplus V(2)_n=\Q\oplus \Q^n\oplus
\Sym^2\Q^n\] with $V_n\hookrightarrow V_{n+1}$ defined by $(a,v,w)
\mapsto (a,ax_{n+1},w+v\cdot x_{n+1})$. The sequence $\{V_n\}$ is consistent and uniformly representation stable. The tensor product $V_n\otimes V_n$
contains $V(1,1)=\bwedge^2 \Q^n$ with multiplicity 1, but the
$\SL_{n+1}\Q$ image of $V_n\otimes V_n$ does not contain this
factor. Thus the sequence $\{V_n\otimes V_n\}$ is not surjective in
the sense of Definition~\ref{definition:repstab1}.
\end{nonexample}
\begin{nonexample}
Even when the sequence $\{V_n\}$ is type-preserving (and so strongly
stable), restrictions often fail to be surjective. Take
$V_n=V(1)_n=\Q^n$, which certainly is strongly stable. The
restriction $W_n=V_{n+1}\downarrow \GL_n\Q$ splits as $V(1)_n\oplus
V(0)_n=\Q^n\oplus \Q$, which is multiplicity stable. But the image
of $W_n$ in $W_{n+1}=\Q^{n+2}$ is invariant under $\GL_{n+1}\Q$, and
so the sequence $\{W_n\}$ is not stable due to the failure of
surjectivity.
\end{nonexample}
\begin{proof}[Proof of Theorem~\ref{thm:classicalstability}]
In each case, injectivity is either trivial or follows from the
functoriality of the Schur functor $\Schur_\lambda$. The proofs of
multiplicity stability generally separate into two parts. First, we
check stability when each $V_n$ is a single irreducible; the proof
in this case often corresponds to a classical fact of representation
theory. Second, we promote this to the general case when $\{V_n\}$
is an arbitrary representation stable sequence. This sometimes
requires bootstrapping off the first step of other parts.
To reduce confusion, we have labeled the two steps of the proofs
separately as (for example) Parts 1a and 1b. For simplicity, we
refer to ``stability'' and ``uniform stability'' in the course of
the proofs, but we reiterate that we are not claiming surjectivity,
so these should properly be references to ``multiplicity
stability''. We defer the discussion of strong stability until after
the claims have been verified in the stable and uniformly stable
cases. \\
\textbf{1.} In general, the problem of decomposing the tensor
product of two irreducible representations is called the
\emph{Clebsch--Gordan problem}. The quintessential example of
stability is the Littlewood--Richardson rule, which answers the
Clebsch--Gordan problem for $\SL_n$ and shows that the
multiplicities in the decomposition are independent of $n$. Given
two partitions $\lambda$ and $\mu$, and a partition $\nu\vdash
|\lambda|+|\mu|$, the \emph{Littlewood--Richardson coefficient}
$C^{\nu}_{\lambda\mu}$ is the number of ways that $\nu$ can be
obtained as a strict $\mu$--expansion of $\lambda$ (see
\cite[Appendix A]{FH} for full definitions). The tensor product then
decomposes as \cite[Equation 6.7]{FH}
\begin{equation}
\label{eq:LR}
\Schur_\lambda(V)\otimes \Schur_\mu(V)=\bigoplus_\nu
C^{\nu}_{\lambda\mu}\Schur_\nu(V).
\end{equation}
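For example, when $\lambda=\mu=(1)$ the only nonzero
Littlewood--Richardson coefficients are
$C^{(2)}_{(1)(1)}=C^{(1,1)}_{(1)(1)}=1$, so \eqref{eq:LR} recovers the
familiar decomposition \[V\otimes
V=\Schur_{(2)}(V)\oplus\Schur_{(1,1)}(V)=\Sym^2V\oplus\bwedge^2V\]
from Example~\ref{example:tensor}.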
\textbf{1a.} We first verify the claim in the case of a single
irreducible. We show that in each case, the tensor
$V(\lambda)_n\otimes V(\mu)_n$ decomposes as $\bigoplus_\nu
N_{\lambda\mu}^\nu V(\nu)_n$ for constants $N_{\lambda\mu}^\nu$
independent of $n$. For $\SL_n\Q$, by the Littlewood--Richardson
rule $V(\lambda)_n\otimes V(\mu)_n$ decomposes as $\bigoplus_\nu
C^{\nu}_{\lambda\mu}V(\nu)_n$, so we may take
$N^{\nu}_{\lambda\mu}=C^{\nu}_{\lambda\mu}$. For $\GL_n\Q$, recall
that $D$ denotes the determinant representation, and note that for fixed
$\ell$, the representation $D^\ell\otimes V_n$ is stable if and only
if $V_n$ is stable. Every irreducible $V(\lambda)_n$ can be written
as $\Schur_{\overline{\lambda}}\Q^n\otimes D^\ell$ for a unique
partition $\overline{\lambda}$ and integer $\ell$ (namely
$\ell=\lambda_n$ and
$\overline{\lambda}_i=\lambda_i-\lambda_n$). Then
\[V(\lambda)_n\otimes V(\mu)_n
=\Schur_{\overline{\lambda}}(\Q^n)\otimes D^\ell\otimes
\Schur_{\overline{\mu}}(\Q^n)\otimes D^m=D^{\ell+m}\otimes \bigoplus
C^{\overline{\nu}}_{\overline{\lambda}\overline{\mu}}\Schur_{\overline{\nu}}(\Q^n).\]
The decomposition of the right side into irreducibles
$V(\nu)_n=D^{\ell+m}\otimes \Schur_{\overline{\nu}}(\Q^n)$ is
independent of $n$. Thus we may take
$N^{\nu}_{\lambda\mu}=C^{\overline{\nu}}_{\overline{\lambda}\overline{\mu}}$
for those $\nu$ with $\nu_n=\lambda_n+\mu_n$, and
$N^{\nu}_{\lambda\mu}=0$ otherwise.
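In code, the normalization $\lambda\mapsto(\overline{\lambda},\ell)$ used above is a one-liner; this small Python sketch (ours; the function name is not from the text) records it, together with an example.

```python
def normalize(lam):
    """Split a GL_n highest weight lam = (lam_1 >= ... >= lam_n) into
    (lam_bar, ell) with lam_bar_i = lam_i - lam_n and ell = lam_n, so that
    V(lam)_n = Schur_{lam_bar}(Q^n) tensor D^ell."""
    ell = lam[-1]
    return tuple(x - ell for x in lam), ell

# (3, 2, 2) corresponds to Schur_{(1,0,0)}(Q^3) tensor D^2:
assert normalize((3, 2, 2)) == ((1, 0, 0), 2)
```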
For $\Sp_{2n}\Q$, the corresponding formula is \cite[Equation
25.27]{FH}:
\begin{equation}
\label{eq:SpLR}
\Schur_{\langle\lambda\rangle}(\Q^{2n}) \otimes
\Schur_{\langle\mu\rangle}(\Q^{2n}) =\bigoplus_\nu
\sum_{\zeta,\sigma,\tau}C^{\lambda}_{\zeta\sigma}
C^{\mu}_{\zeta\tau}
C^{\nu}_{\sigma\tau}\Schur_{\langle\nu\rangle}(\Q^{2n})
\end{equation}
where the sum is over all partitions $\zeta,\sigma,\tau$. Thus
$N^{\nu}_{\lambda\mu}=\sum_{\zeta,\sigma,\tau}
C^{\lambda}_{\zeta\sigma} C^{\mu}_{\zeta\tau}C^{\nu}_{\sigma\tau}$.
\textbf{1b.} Now consider arbitrary consistent sequences $\{V_n\}$ and
$\{U_n\}$. If these sequences are uniformly representation stable, their decompositions
$V_n=\bigoplus c_{\lambda,n}V(\lambda)_n$ and $U_n=\bigoplus
d_{\mu,n}V(\mu)_n$ are eventually independent of $n$. Thus the
decomposition of the tensor product as
\[V_n\otimes U_n =\left(\bigoplus
c_{\lambda,n}V(\lambda)_n\right)\otimes\left(\bigoplus
d_{\mu,n}V(\mu)_n\right) =\bigoplus_\nu
\sum_{\lambda,\mu}c_{\lambda,n}d_{\mu,n}N_{\lambda\mu}^\nu
V(\nu)_n\] is eventually independent of $n$.
For $\SL_n\Q$ the assumption of uniform stability is not
necessary. In this case $N_{\lambda\mu}^\nu$ is the
Littlewood--Richardson coefficient, which is nonzero only if
$|\nu|=|\lambda|+|\mu|$. Thus for fixed $\nu$, only finitely many
pairs $(\lambda,\mu)$ can contribute to the
$\sum_{\lambda,\mu}c_{\lambda,n}d_{\mu,n}N_{\lambda\mu}^\nu
V(\nu)_n$ term above. Thus we may take $n$ large enough that these
finitely many coefficients $c_{\lambda,n}$ and $d_{\mu,n}$ are all
independent of $n$, and the multiplicity
$\sum_{\lambda,\mu}c_{\lambda,n}d_{\mu,n}N_{\lambda\mu}^\nu$ of
$V(\nu)_n$ is eventually independent of $n$ as desired.\\
\textbf{2a.} The classical \emph{plethysm} problem is to decompose the
composition of two Schur functors:
\begin{equation}
\label{eq:SLpleth}
\Schur_{\lambda}(\Schur_\mu V)
=\bigoplus M_{\lambda\mu}^\nu \Schur_\nu V
\end{equation}
Computing the coefficients $M_{\lambda\mu}^\nu$ is difficult, but
it is known that such coefficients exist, and are nonzero only when
$|\nu|=|\lambda|\cdot|\mu|$ \cite[Exercise 6.17a]{FH}. It
immediately follows that the sequence $\{\Schur_\lambda(V(\mu)_n)\}$
of $\SL_n\Q$--representations $\Schur_\lambda(V(\mu)_n)=\bigoplus
M_{\lambda\mu}^\nu V(\nu)_n$ is representation stable. For
$\GL_n\Q$, write $V(\mu)_n=\Schur_{\overline{\mu}}\Q^n\otimes
D^{\ell}$ for the partition $\overline{\mu}$ and integer $\ell$ as above. Note that
in general, if $\rho$ acts on $V$ diagonally by multiplication by
$R$, then the action of $\rho$ on $\Schur_{\lambda}V$ will be
multiplication by $R^{|\lambda|}$. Since the center of $\GL_n\Q$
acts diagonally, it follows that
\[\Schur_\lambda(V(\mu)_n)
=\Schur_\lambda(\Schur_{\overline{\mu}}(\Q^n)\otimes
D^\ell)=\Schur_\lambda(\Schur_{\overline{\mu}}(\Q^n))\otimes
D^{\ell|\lambda|}=\bigoplus
M_{\lambda\overline{\mu}}^{\overline{\nu}}
\Schur_{\overline{\nu}}(\Q^n)\otimes D^{\ell|\lambda|}\] and thus
$\{\Schur_\lambda(V(\mu)_n)\}$ is representation stable. For the
symplectic group the stability of the
plethysm \[\Schur_\lambda(\Schur_{\langle\mu\rangle}(\Q^{2n}))=\bigoplus
L_{\lambda\mu}^\nu\Schur_{\langle\nu\rangle}(\Q^{2n})\] was only
proved recently by Kabanov \cite[Theorem 7]{Kab}. If $\mu$ has $\ell=\ell(\mu)$
rows, the coefficients $L_{\lambda\mu}^\nu$ are independent of $n$
once $n\geq \ell|\lambda|$ and are nonzero only for those $\nu$ with at
most $\ell|\lambda|$ rows.
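Returning to \eqref{eq:SLpleth}, the smallest nontrivial case $|\lambda|=2$, $\mu=(2)$ is classical (and easily checked by computing dimensions for $\dim V=2$):
\[\Schur_{(2)}(\Schur_{(2)}V)=\Schur_{(4)}(V)\oplus\Schur_{(2,2)}(V),
\qquad
\Schur_{(1,1)}(\Schur_{(2)}V)=\Schur_{(3,1)}(V),\]
so $M_{(2)(2)}^{(4)}=M_{(2)(2)}^{(2,2)}=M_{(1,1)(2)}^{(3,1)}=1$, and in each case $|\nu|=|\lambda|\cdot|\mu|=4$, as asserted.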
\textbf{2b and 3.} We now verify Parts 2 and 3 in parallel by
induction on total multiplicity. We do this first under the
assumption of uniform stability, so that total multiplicity is
well-defined. We then explain how to extend this to all
representation stable sequences in the case of $\SL_n\Q$. In
general, when a Schur functor is applied to a direct sum we have the
decomposition \cite[Exercise 6.11]{FH}
\begin{equation}
\label{eq:Schursum}
\Schur_\lambda(V\oplus U)
=\bigoplus C_{\mu\nu}^\lambda(\Schur_\mu V\otimes \Schur_\nu U).
\end{equation}
We have already verified Part 2 when $V_n$ has total multiplicity 1,
and Part 3 reduces to Part 2 when $V_n\oplus U_n$ has total
multiplicity 1. We now prove Part 3 when $V_n\oplus U_n$ has total
multiplicity $k$ by strong induction. Assume that $\{V_n\}$ and
$\{U_n\}$ are uniformly representation stable sequences, neither of
which is eventually zero; then each has total multiplicity less than
$k$, so Part 2 of the theorem holds for $\{V_n\}$ and for $\{U_n\}$
by induction. By Part 2 we have that
$\{\Schur_\mu V_n\}$ and $\{\Schur_\nu U_n\}$ are each uniformly
stable. By Part 1, the tensor product $\{\Schur_\mu V_n\otimes
\Schur_\nu U_n\}$ is uniformly stable. Thus the sum
\[\Schur_\lambda(V_n\oplus U_n) =\bigoplus
C_{\mu\nu}^\lambda(\Schur_\mu V_n\otimes \Schur_\nu U_n)\] is uniformly
stable, verifying Part 3. To verify Part 2 when $V_n$ has total
multiplicity $k$, write $V_n=U_n\oplus W_n$ with each factor
uniformly stable and apply Part 3. Although the splitting
$V_n=U_n\oplus W_n$ might not respect the maps $V_n\to V_{n+1}$, we
are only concerned with multiplicities at this point so this is not
a problem. When we revisit this issue later, it will be under the
assumption of strong stability, in which case $\{V_n\}$ does split
as a sum of consistent, strongly stable sequences $\{U_n\}$ and $\{W_n\}$.
We now consider the case when $G_n=\SL_n\Q$ and the sequences are
not necessarily uniformly stable. For a fixed finite-dimensional
$\SL_n\Q$--representation $V=\bigoplus c_\eta V(\eta)$, consider
decomposing $\Schur_\lambda(V)=\Schur_{\lambda}(\bigoplus c_\eta
V(\eta))$ by repeatedly applying the formula \eqref{eq:Schursum} for
$\Schur_\lambda(V\oplus W)$. We obtain a decomposition of the form
\[\Schur_\lambda(V)=\bigoplus X_\bullet
\Schur_{\mu_\bullet}V(\eta_\bullet)\otimes \cdots\otimes
\Schur_{\mu_\bullet}V(\eta_\bullet),\] where the $V(\eta_\bullet)$
range over the irreducible summands of $V$. Consider the individual
terms $\Schur_{\mu}V(\eta)$. As we noted above, the decomposition
$\Schur_{\mu}V(\eta)=\bigoplus M_{\mu\eta}^\zeta V(\zeta)$ only
contains those $V(\zeta)$ with $|\zeta|=|\mu|\cdot
|\eta|$. Furthermore, recall that the coefficients
$C_{\mu\nu}^\lambda$ are only nonzero if $|\mu|+|\nu|=|\lambda|$
(this is where we use that $G_n=\SL_n\Q$). It follows that the
irreducibles $V(\nu)$ appearing in a tensor $V(\zeta_1)\otimes
\cdots\otimes V(\zeta_k)$ all satisfy
$|\nu|=|\zeta_1|+\cdots+|\zeta_k|$. Combining this, we obtain the
key point of the argument: when considering the multiplicity of
$V(\nu)$ in $\Schur_\lambda(\bigoplus c_\eta V(\eta))$, \emph{we
need only consider those $V(\eta)$ with $|\eta|\leq |\nu|$}.
Using this observation, we reduce this case to the uniformly stable
case as follows. For fixed $\nu$, replace $V_n=\bigoplus c_{\eta,n}
V(\eta)$ with \[V_n^{\leq \nu}=\bigoplus_{|\eta|\leq |\nu|}
c_{\eta,n} V(\eta).\] If the sequence $\{V_n\}$ is representation
stable, then since only finitely many $\eta$ satisfy $|\eta|\leq
|\nu|$, the sequence $\{V_n^{\leq \nu}\}$ is uniformly stable. Thus
applying Part 2, we conclude that
$\{\Schur_\lambda(V_n^{\leq\nu})\}$ is uniformly stable; in
particular, the multiplicity of $V(\nu)$ is eventually constant. By
the observation, this is the same as the multiplicity of $V(\nu)$ in
$\Schur_\lambda(V_n)$. Thus $\{\Schur_\lambda(V_n)\}$ is
multiplicity stable, as desired.\\
\textbf{4.} In general, when a Schur functor is applied to a tensor
product, we have the decomposition \cite[Exercise 6.11b]{FH}
\[\Schur_\lambda(V\otimes W)=\bigoplus D_{\mu\nu}^\lambda(\Schur_\mu
V\otimes \Schur_\nu W),\] where the sum is over partitions with
$|\mu|=|\nu|=|\lambda|$ and the coefficients are defined as
follows. Let $d=|\lambda|$; given a partition $\eta\vdash d$, let
$C_\eta$ be the conjugacy class in $S_d$ whose cycle decomposition
is encoded by $\eta$. Let $\chi_\lambda$ be the character of the
irreducible $S_d$--representation $V(\lambda)$. Then
\[D_{\mu\nu}^\lambda=\sum_{\eta\vdash d}
\frac{\chi_\lambda(C_\eta)\chi_\mu(C_\eta)\chi_\nu(C_\eta)}
{|Z_{S_d}(C_\eta)|}\] where $Z_{S_d}(C_\eta)$ is the centralizer in
$S_d$ of a representative of $C_\eta$. Now assume that $\{V_n\}$ and
$\{U_n\}$ are uniformly stable, and consider
$\Schur_\lambda(V_n\otimes U_n)$. By Part 2, $\{\Schur_\mu V_n\}$
and $\{\Schur_\nu U_n\}$ are uniformly stable for each $\mu$ and
$\nu$. By Part 1 we have that $\{\Schur_\mu V_n\otimes \Schur_\nu
U_n\}$ is uniformly stable. Thus the sum
\[\Schur_\lambda(V_n\otimes U_n) =\bigoplus
D_{\mu\nu}^\lambda(\Schur_\mu V_n\otimes \Schur_\nu U_n)\] is
uniformly stable. The case when $G_n=\SL_n\Q$ and uniform stability
is not assumed proceeds exactly as in Part 2, since the coefficients
$D_{\mu\nu}^\lambda$ are only nonzero if $|\mu|=|\nu|=|\lambda|$.\\
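The character formula for $D_{\mu\nu}^\lambda$ is easy to evaluate in small cases. For $d=2$ it recovers the familiar decomposition $\mathrm{Sym}^2(V\otimes W)=\mathrm{Sym}^2V\otimes\mathrm{Sym}^2W\,\oplus\,\textstyle\bigwedge^2V\otimes\bigwedge^2W$; the following Python sketch (ours, purely illustrative) hard-codes the character table of $S_2$ and checks this.

```python
from fractions import Fraction

# Conjugacy classes of S_2: the identity and the transposition, each with
# centralizer of order 2.  Characters of the trivial rep V(2) and sign rep V(1,1):
classes = {'id': 2, 'swap': 2}          # class -> centralizer order |Z|
chi = {(2,): {'id': 1, 'swap': 1},      # trivial
       (1, 1): {'id': 1, 'swap': -1}}   # sign

def D(lam, mu, nu):
    """D^lambda_{mu nu} = sum over classes of chi_lam * chi_mu * chi_nu / |Z|."""
    return sum(Fraction(chi[lam][c] * chi[mu][c] * chi[nu][c], z)
               for c, z in classes.items())

# Sym^2(V (x) W) = Sym^2 V (x) Sym^2 W  (+)  Wedge^2 V (x) Wedge^2 W:
assert D((2,), (2,), (2,)) == 1
assert D((2,), (1, 1), (1, 1)) == 1
assert D((2,), (2,), (1, 1)) == 0
```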
\textbf{5.} Stability for the composition of Schur functors
$\Schur_\lambda(\Schur_\mu(V_n))$ can be deduced from the plethysm
decomposition in \eqref{eq:SLpleth}, or just by applying Part 2
twice, first to $\{\Schur_\mu(V_n)\}$ and then to
$\{\Schur_\lambda(\Schur_\mu(V_n))\}$.\\
\textbf{6.} For the restriction of
$V(\lambda)_n=\Schur_\lambda(\Q^n)$ from $\GL_n\Q$ to $\GL_{n-k}\Q$,
the restriction decomposes as \cite[Exercise 6.12]{FH}
\begin{equation}
\label{eq:SLres}
\Schur_\lambda(\Q^n)\downarrow \GL_{n-k}\Q
=\bigoplus_\nu\big(\sum_\mu C_{\mu\nu}^\lambda
\dim\Schur_\mu(\Q^k)\big)\Schur_\nu(\Q^{n-k}).
\end{equation}
Note that $\dim \Schur_\mu(\Q^k)$ does not depend on $n$. The claim
for a single irreducible representation of $\SL_n\Q$ immediately
follows: \[V(\lambda)_n\downarrow
\SL_{n-k}\Q=\bigoplus_\nu\big(\sum_\mu C_{\mu\nu}^\lambda \dim
\Schur_\mu(\Q^k)\big)V(\nu)_{n-k}.\] For $\GL_{n-k}\Q$ it follows
after noting that the determinant representation restricts to the
determinant representation: $D\downarrow \GL_{n-k}\Q=D$, so if
$V(\lambda)_n=\Schur_{\overline{\lambda}}(\Q^n)\otimes D^\ell$ we
get \[\Schur_{\overline{\lambda}}(\Q^n)\otimes D^\ell\downarrow
\GL_{n-k}\Q=\bigoplus_{\overline{\nu}}\big(\sum_\mu
C_{\mu\overline{\nu}}^{\overline{\lambda}} \dim
\Schur_\mu(\Q^k)\big)\Schur_{\overline{\nu}}(\Q^{n-k})\otimes
D^\ell.\] The claim for a uniformly stable sequence $\{V_n\}$
follows by taking $n$ large enough that the decomposition
$V_n=\bigoplus c_{\lambda,n}V(\lambda)_n$ is independent of
$n$. Note that uniform stability is necessary here even for
$\SL_n\Q$. Indeed, from \eqref{eq:SLres} we see that for every
partition $\lambda$ with $\ell(\lambda)\leq k$, the restriction
$\Schur_\lambda(\Q^n)\downarrow \SL_{n-k}\Q$ contains the trivial
representation with multiplicity $\dim \Schur_\lambda(\Q^k)$. Thus
the multiplicity of $V(0)$ in $V_n\downarrow \SL_{n-k}\Q$ is at
least the total multiplicity of subrepresentations $V(\lambda)_n$ of
$V_n$ with $\ell(\lambda)\leq k$, which need not be eventually constant
if we do not assume uniform stability.
For $\Sp_{2n}\Q$, we consider the restriction to $\Sp_{2n-2}\Q$. For
a single irreducible representation $V(\lambda)_n$ of $\Sp_{2n}\Q$,
the restriction decomposes as \cite[Equation 25.36]{FH}
\begin{equation}
\label{eq:Spres}
V(\lambda)_n\downarrow \Sp_{2n-2}\Q=\bigoplus_\nu N_\lambda^\nu
V(\nu)_{n-1},
\end{equation}
where the sum is over partitions with $\nu_n=0$. The
coefficient $N_\lambda^\nu$ is the number of sequences
$p_1,\ldots,p_n$ satisfying:
\begin{align*}
\lambda_1\geq\ &p_1\geq \lambda_2\geq p_2\geq
\cdots\geq \lambda_n\geq p_n\\
&p_1\geq\nu_1\geq p_2\geq \cdots\geq\nu_{n-1}\geq p_n\geq\nu_n= 0
\end{align*} Note that if $\lambda_k=0$, then $p_i=0$ for $i\geq k$,
and thus any $\nu$ contributing to this sum has $\nu_i=0$ for
$i>k$. It follows that for fixed $\lambda$, the collection of $\nu$
that contribute to this sum is independent of $n$ once $n\geq \ell(\lambda)$, so the collection of
$p_1,\ldots,p_n$ above and the multiplicities $N_\lambda^\nu$ are
also eventually independent of $n$. Thus $\{V(\lambda)_n\downarrow
\Sp_{2n-2}\Q\}$ is stable, and as above it follows that
$\{V_n\downarrow \Sp_{2n-2}\Q\}$ is uniformly stable if $\{V_n\}$ is
uniformly stable. Uniform stability for the restriction to
$\Sp_{2n-2k}\Q$ now follows by induction.\\
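The multiplicities $N_\lambda^\nu$ are small enough to enumerate directly. The following Python sketch (ours, not part of the argument) counts the interleaving sequences $p_1,\ldots,p_n$ by brute force and checks the standard representation $\Q^4$ of $\Sp_4$, which restricts to $\Sp_2$ as $\Q^2$ plus two trivial summands.

```python
from itertools import product

def branch_mult(lam, nu):
    """Multiplicity N_lambda^nu of V(nu)_{n-1} in V(lam)_n restricted from
    Sp(2n,Q) to Sp(2n-2,Q): brute-force count of the interleaving sequences
    p_1,...,p_n described in the text.  lam must be padded to length n."""
    n = len(lam)
    nu = tuple(nu) + (0,) * (n - len(nu))   # pad nu; the rule requires nu_n = 0
    if nu[n - 1] != 0:
        return 0
    count = 0
    for p in product(range(lam[0] + 1), repeat=n):
        if (all(lam[i] >= p[i] for i in range(n))          # lam_i >= p_i
                and all(p[i] >= lam[i + 1] for i in range(n - 1))  # p_i >= lam_{i+1}
                and all(p[i] >= nu[i] for i in range(n))           # p_i >= nu_i
                and all(nu[i] >= p[i + 1] for i in range(n - 1))): # nu_i >= p_{i+1}
            count += 1
    return count

# Q^4 restricted from Sp(4) to Sp(2) is Q^2 plus two trivial representations:
assert branch_mult((1, 0), (1,)) == 1
assert branch_mult((1, 0), ()) == 2
```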
\textbf{7.} Every irreducible $\GL_n\Q$--representation
$V(\lambda)_n$ remains irreducible when restricted to $\SL_n\Q$; the
resulting representation is $V(\overline{\lambda})_n$, where
$\overline{\lambda}$ is the partition defined by
${\overline{\lambda}_i=\lambda_i-\lambda_n}$. If
$V_n=\bigoplus_\lambda c_{\lambda,n}V(\lambda)_n$, the restriction
$V_n\downarrow \SL_n\Q$ is $\bigoplus_\mu\big(\sum_\lambda
c_{\lambda,n}\big)V(\mu)_n$ where the sum is over those $\lambda$ with
$\overline{\lambda}=\mu$. For fixed $\mu$, the collection of such
$\lambda$ is independent of $n$. Thus if $\{V_n\}$ is uniformly
stable, so that the $c_{\lambda,n}$ are eventually independent of
$n$, the same is true of the multiplicities $\sum_\lambda
c_{\lambda,n}$ of $V(\mu)_n$.
It thus suffices to consider the restriction from $\SL_{2n}\Q$ to
$\Sp_{2n}\Q$. Littlewood proved that if $\lambda$ is a partition
with at most $n$ rows, the restriction of the irreducible
$V(\lambda)_{2n}=\Schur_\lambda(\Q^{2n})$ decomposes as
\cite[Equation 25.39]{FH}
\begin{equation}
\label{eq:SLSpres}
\Schur_\lambda(\Q^{2n})\downarrow \Sp_{2n}\Q
=\bigoplus_\mu \sum_\eta C_{\eta\mu}^\lambda \Schur_{\langle\mu\rangle}(\Q^{2n}),
\end{equation}
where the sum is over all partitions $\eta=(\eta_1=\eta_2\geq
\eta_3=\eta_4\geq \cdots)$ where each number appears an even number
of times. Note that this formula is independent of $n$ once $n\geq \ell(\lambda)$ (so that the formula applies). Thus the sequence
$\{V(\lambda)_{2n}\downarrow \Sp_{2n}\Q\}$ is representation stable. If
$\{V_n=\bigoplus c_{\lambda,n}V(\lambda)_n\}$ is uniformly stable, let
$N$ be the largest number of rows of any partition $\lambda$ for
which the eventual multiplicity of $V(\lambda)_n$ is
positive. Take $n\geq N$ large enough that the decomposition of
$V_n$ has stabilized. Then we may apply Littlewood's rule to
conclude that \[V_n\downarrow \Sp_{2n}\Q =
\bigoplus_\mu\sum_{\lambda,\eta}c_{\lambda,n}C^{\lambda}_{\eta\mu}V(\mu)_n\]
is independent of $n$. We conclude that the sequence
$\{V_n\downarrow \Sp_{2n}\Q\}$ is uniformly representation stable.\\
\textbf{Strong stability.} We now consider the case when
$G_n=\SL_n\Q$ or $\GL_n\Q$ and $\{V_n\}$ and $\{U_n\}$ are strongly
stable, meaning they are not only uniformly stable but also
type-preserving. Recall that for $G_n=\SL_n\Q$ or $\GL_n\Q$, this
implies Condition IV$'$: that $P_{n+1}<G_{n+1}$ acts trivially on
$V_n\subset V_{n+1}$. This property is preserved by direct sum and
by tensor product: if $P_{n+1}$ acts trivially on $V_n\subset
V_{n+1}$ and on $U_n\subset U_{n+1}$, it acts trivially on
$V_n\oplus U_n\subset V_{n+1}\oplus U_{n+1}$ and $V_n\otimes
U_n\subset V_{n+1}\otimes U_{n+1}$. The functoriality of
$\Schur_\lambda$ implies that if $P_{n+1}$ acts trivially on
$V_n\subset V_{n+1}$, it acts trivially on
$\Schur_\lambda(V_n)\subset\Schur_\lambda(V_{n+1})$. The
constructions in Parts 1--5 are obtained by composing these
operations, so we conclude that Condition IV$'$ holds for each of
the resulting sequences.
We have already proved above that the resulting sequences in Parts
1--5 are uniformly multiplicity stable (Condition III$'$). Thus we
may apply Proposition~\ref{prop:SLstrong} to conclude that these
sequences are type preserving (Condition IV). Finally, by
Remark~\ref{remark:strong} Conditions III$'$ and IV together imply
surjectivity (Condition II). This concludes the proof of strong
stability in Parts 1--5, and thus completes the proof of
Theorem~\ref{thm:classicalstability}.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:classicalSn}]\
\textbf{1.} For irreducible representations $V(\lambda)_n$ and
$V(\mu)_n$ of $S_n$, Murnaghan proved that the decomposition of
the tensor product $V(\lambda)_n\otimes V(\mu)_n$ into irreducibles
$V(\nu)_n$ is eventually independent of $n$ (see Section 1 of
\cite{Mu}), and Briand--Orellana--Rosas have recently proved that the decomposition of $V(\lambda)_n\otimes V(\mu)_n$ stabilizes once $n\geq |\lambda|+|\mu|+\lambda_1+\mu_1$ \cite[Theorem 1.2]{BOR}. If $\{V_n=\bigoplus c_{\lambda,n}V(\lambda)_n\}$ and
$\{W_n=\bigoplus d_{\mu,n}V(\mu)_n\}$ are uniformly multiplicity
stable, taking $n$ large enough that the decomposition of
$V(\lambda)_n\otimes V(\mu)_n$ stabilizes for all $\lambda$ and
$\mu$ occurring in $V_n$ and $W_n$, it follows by distributivity that
$\{V_n\otimes W_n\}$ is uniformly multiplicity stable.\\
\textbf{2.} For $k=1$, we repeat from Remark~\ref{remark:strong} the
branching rule for restrictions from $S_n$ to $S_{n-1}$:
\[V(\lambda)_n\downarrow S_{n-1}=V(\lambda)_{n-1}\oplus
\bigoplus_\mu V(\mu)_{n-1}\] where $\mu$ ranges over those
partitions obtained by removing one box from $\lambda$. It is
immediate that uniform multiplicity stability is preserved. For
restrictions from $S_n$ to $S_{n-k}$ with $k>1$ a similar formula
can be given explicitly \cite[Exercise 4.44]{FH}, but to
conclude stability we can just inductively apply the result for
$k=1$.
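The $k=1$ branching rule is simple enough to state in code; this Python sketch (ours, purely illustrative) lists the partitions obtained by removing one box.

```python
def remove_box(lam):
    """Partitions obtained from lam by removing one box; by the branching rule,
    V(lam)_n restricted to S_{n-1} is V(lam)_{n-1} plus one copy of each V(mu)_{n-1}
    for mu in this list."""
    out = []
    for i in range(len(lam)):
        # a box is removable from row i iff the result is still a partition
        if lam[i] > (lam[i + 1] if i + 1 < len(lam) else 0):
            mu = list(lam)
            mu[i] -= 1
            out.append(tuple(x for x in mu if x > 0))
    return out

# V(2,1) restricted to S_2 contributes V(1,1) and V(2) beyond V(2,1) itself:
assert sorted(remove_box((2, 1))) == [(1, 1), (2,)]
```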
\end{proof}
\subsection{Reversing the Clebsch--Gordan problem} We conclude this
section by discussing the possibility of reversing the conclusion of
Theorem~\ref{thm:classicalstability}(1). This idea will play an
important role in Section~\ref{section:liealg}.
\begin{theorem}\label{thm:LRinvert}
Let $G_n=\SL_n\Q$, $\GL_n\Q$, or $\Sp_{2n}\Q$. If
$\{W_n\}$ and $\{V_n\otimes W_n\}$ are nonzero and multiplicity stable as
$G_n$--representations, then $\{V_n\}$ is multiplicity stable.
This remains true if ``multiplicity stable'' is replaced by ``uniformly multiplicity
stable''.
\end{theorem}
\begin{proof}
We will prove the theorem in the following form: given the irreducible decompositions of $W$ and of $V\otimes W$, the irreducible decomposition of $V$ can be determined, and without reference to $n$. This will be formalized in the course of the proof. In the remark following the proof, we sketch a constructive way to determine the decomposition of $V$. But the theorem as stated follows from more general properties, as we now explain.
First, we will use that the representation ring is a domain for any such group $G_n$. Recall that the \emph{representation ring} $R_n$ consists of formal differences $V-U$ of representations of $G_n$, with addition given by direct sum and multiplication given by tensor product. Complete reducibility implies that as a group, $R_n$ is the free abelian group on the irreducible representations. The ring structure is more complicated, but we can in fact describe $R_n$ explicitly. Indeed, let $\Lambda_n$ be
the weight lattice in $\fh^*_n$. Any representation $V$ determines a ``character'' in the group ring $\Z[\Lambda_n]$, where the coefficient of the weight $L\in \Lambda_n$ is the dimension of the eigenspace $V^{(L)}$:
\[V\mapsto \sum_{L\in \Lambda_n}\dim V^{(L)}\cdot L\]
The highest weight decomposition as described in Section~\ref{section:repsGLSp} implies that a representation is determined by its character; that is, the induced ring homomorphism $R_n\hookrightarrow \Z[\Lambda_n]$ is injective. This would suffice for our purposes, but we can say more: any such character is invariant under the Weyl group $W_n$, and in fact $R_n$ is exactly the subring $\Z[\Lambda_n]^{W_n}$ of invariants (\cite{FH}, Theorem 23.24, combined with Exercise 23.36(d) for $\GL_n\Q$).
Since $R_n$ is a domain, $V_n$ is the unique solution in $R_n$ to the equation $x\cdot [W_n]=[V_n\otimes W_n]$. It remains to see that given the decompositions of $W_n$ and $V_n\otimes W_n$, the solution to this equation does not depend on $n$. To do this, we need to relate the representation rings $R_n$ and $R_{n+1}$.
There is a natural homomorphism $R_{n+1}\to R_n$ given by restriction from $G_{n+1}$ to $G_n$, but this is \emph{not} the map we want, since restriction does not take irreducibles to irreducibles. Instead, assume first that $G_n=\SL_n\Q$; by identifying $V(\lambda)_n$ with $\lambda$, we get an identification of $R_n$ with the free abelian group $\Z[\{\lambda|\ell(\lambda)<n\}]$ on partitions with fewer than $n$ rows. The map we want is simply the projection
\[R_{n+1}=\Z[\{\lambda|\ell(\lambda)< n+1\}]\overset{\pi}{\twoheadrightarrow} \Z[\{\lambda|\ell(\lambda)< n\}]= R_n\]
which sends $\lambda\mapsto 0$ if $\ell(\lambda)=n$ and $\lambda\mapsto \lambda$ otherwise. With respect to this basis of partitions, multiplication in the ring is given by the Littlewood--Richardson coefficients; in a sense, Theorem~\ref{thm:classicalstability}(1) is based on the fact that this map is a ring homomorphism. This projection has a right inverse $i\colon R_n\to R_{n+1}$ defined by the inclusion ${\{\lambda|\ell(\lambda)< n\}}\subset \{\lambda|\ell(\lambda)< n+1\}$; this is not a ring homomorphism, however. Note that uniform
multiplicity stability is equivalent to $i([V_n]) = [V_{n+1}]$ for large enough $n$.
Assume that the sequences in question are uniformly stable, and that $n$ is large enough that the decompositions of $W_n$ and $V_n\otimes W_n$ have stabilized, meaning \[i([W_n])=[W_{n+1}]\ \ \text{ and }\ \ i([V_n\otimes W_n])=[V_{n+1}\otimes W_{n+1}].\] Then we have $\pi([W_{n+1}])=[W_n]$ and $\pi([V_{n+1}\otimes W_{n+1}])=[V_n\otimes W_n]$. Thus $[V_{n+1}]$ projects to a solution of $x\cdot [W_n]=[V_n\otimes W_n]$, so by uniqueness we have $\pi([V_{n+1}])=[V_n]$.
We want to prove that $i([V_n])=[V_{n+1}]$. Suppose not; that is, assume the difference $[V_{n+1}]-i([V_n])$ is not zero. Since $\pi([V_{n+1}])=[V_n]$, this difference consists of all those irreducibles $V(\lambda)_{n+1}$ contained in $V_{n+1}$ having $\ell(\lambda)=n$. It is easy to check from the definition of the Littlewood--Richardson coefficients that if a representation $V$ contains such a $V(\lambda)_{n+1}$, then for any nonzero representation $W$ the tensor $V\otimes W$ also contains some $V(\mu)_{n+1}$ with $\ell(\mu)=n$. Applying this to $V_{n+1}\otimes W_{n+1}$ gives that $[V_{n+1}\otimes W_{n+1}]\neq i([V_n\otimes W_n])$, contradicting the uniform stability of $V_n\otimes W_n$. We conclude that $[V_{n+1}]=i([V_n])$ and so $\{V_n\}$ is uniformly stable, as desired.
For $G_n=\Sp_{2n}\Q$, the argument proceeds identically, except that $R_n$ is identified with the free abelian group $\Z[\{\lambda|\ell(\lambda)\leq n\}]$ on partitions with at most $n$ rows. We can deduce from \eqref{eq:SpLR} the desired property that if $\ell(\lambda)={n+1}$, $V(\lambda)_{n+1}\otimes W$ contains some $V(\mu)_{n+1}$ with $\ell(\mu)={n+1}$. For $G_n=\GL_n\Q$, $R_n$ is the free abelian group $\Z[\{\lambda|\ell(\lambda)\leq n\}]$ on \emph{pseudo-}partitions with at most $n$ rows, and we also have to modify the maps between $R_n$ and $R_{n+1}$. In this case the inclusion $i\colon R_n\to R_{n+1}$ takes $\lambda=(\lambda_1\geq \cdots\geq \lambda_n)$ to $i(\lambda)=(\lambda_1\geq \cdots\geq \lambda_n\geq \lambda_n)$; the projection $\pi\colon R_{n+1}\to R_n$ sends the pseudo-partition $\lambda=(\lambda_1\geq \cdots\geq \lambda_n\geq \lambda_{n+1})$ to $0$ if $\lambda_n\neq \lambda_{n+1}$, and to $\pi(\lambda)=(\lambda_1\geq \cdots\geq \lambda_n)$ if $\lambda_n=\lambda_{n+1}$. The argument above then goes through, with the role of the partitions with $\ell(\lambda)=n$ played by the pseudo-partitions having $\lambda_n\neq \lambda_{n+1}$.
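The maps $i$ and $\pi$ for $\GL_n\Q$ act on pseudo-partitions entrywise; a two-line Python sketch (ours; the function names are not from the text) makes the definitions concrete.

```python
def i_map(lam):
    """i: R_n -> R_{n+1} for GL_n: repeat the last entry of the pseudo-partition."""
    return lam + (lam[-1],)

def pi_map(lam):
    """pi: R_{n+1} -> R_n for GL_n: drop the last entry if it repeats the one
    before it; otherwise the basis element maps to 0 (returned as None)."""
    n = len(lam) - 1
    return lam[:-1] if lam[n - 1] == lam[n] else None

# pi is a left inverse of i; pseudo-partitions with lam_n != lam_{n+1} die:
assert pi_map(i_map((3, 1, 0))) == (3, 1, 0)
assert pi_map((3, 1, 0)) is None
```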
We sketch a compactness argument to extend this to the case when the sequences are multiplicity stable but not uniformly so. Consider for example the ideal of $R_n$ spanned by partitions $\lambda$ with $|\lambda|> k$. The corresponding quotients have basis the partitions $\lambda$ with $|\lambda|\leq k$. Since this set is finite, the corresponding subset of the multiplicities converges uniformly, and we can apply the argument above. Letting $k\to \infty$, we conclude that $\{V_n\}$ is multiplicity stable.
\end{proof}
\begin{remark}
A related theorem also holds: if $\{V_n\otimes V_n\}$ is multiplicity stable, then $\{V_n\}$ is multiplicity stable, and similarly for uniform stability. Both this claim and Theorem~\ref{thm:LRinvert} can be proved constructively, by an algorithm which we now sketch. The Littlewood--Richardson coefficients have the following property with respect to the lexicographic order on partitions: given $\lambda$ and $\mu$, the largest partition occurring in $V(\lambda)\otimes V(\mu)$ is $\lambda+\mu$, with multiplicity 1. Thus the largest partition occurring in $V_n\otimes V_n$ will be $\lambda+\lambda$, where $\lambda$ is the largest partition occurring in $V_n$. The next largest must be $\lambda+\mu$, where $\mu$ is the next largest partition in $V_n$. Continue, at each stage finding the largest irreducible in $V_n\otimes V_n$ not yet accounted for by those irreducibles already found. The algorithm pivots on partitions of the form $\lambda+\mu$, so since $\ell(\lambda)\leq \ell(\lambda+\mu)$ and $\ell(\mu)\leq \ell(\lambda+\mu)$, the steps in the algorithm will not depend on $n$ (if the sequences are not uniformly stable, we also need a compactness argument as above).
\end{remark}
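For $\SL_2$, where the Clebsch--Gordan decomposition is explicit, the pivoting algorithm of the remark can be carried out in a few lines. The following Python sketch (ours; `recover` and the helper names are not from the text) recovers the summands of $V$ from the decomposition of $V\otimes V$ by repeatedly explaining the largest outstanding highest weight.

```python
from collections import Counter

def tensor(a, b):
    """Clebsch-Gordan rule for SL_2: V(a) (x) V(b) = V(a+b) (+) ... (+) V(|a-b|)."""
    return Counter(range(abs(a - b), a + b + 1, 2))

def tensor_mult(V, W):
    """Decomposition of V (x) W, where V, W are Counters {highest weight: mult}."""
    out = Counter()
    for a, ca in V.items():
        for b, cb in W.items():
            for m, c in tensor(a, b).items():
                out[m] += ca * cb * c
    return out

def recover(T):
    """Given T = decomposition of V (x) V, recover that of V by pivoting on the
    largest highest weight not yet accounted for (the algorithm of the remark)."""
    found = Counter()
    while True:
        residual = Counter(T)
        residual.subtract(tensor_mult(found, found))
        if not any(c > 0 for c in residual.values()):
            return found
        m = max(w for w, c in residual.items() if c > 0)
        if not found:
            found[m // 2] += 1        # first pivot: m = 2a for the largest V(a)
        else:
            found[m - max(found)] += 1  # afterwards: m = a_max + b for a new V(b)

V = Counter({2: 1, 1: 1})
assert recover(tensor_mult(V, V)) == V
```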
\section{Cohomology of pure braid and related groups}
\label{section:braids}
Let $P_n$ denote the pure braid group, as discussed in the
introduction. As explained there, the action of $S_n$ on the
configuration space $X_n$ makes each cohomology group
$H^i(X_n;\Q)=H^i(P_n;\Q)$ into an $S_n$--representation. Explicit
formulas for the multiplicity of an irreducible $V(\lambda)$ in
$H^i(P_n;\Q)$ are not known. However, we do have the following.
\begin{theorem}
\label{thm:pure}
For each fixed $i\geq 0$, the sequence of $S_n$--representations
$\{H^i(P_n;\Q)\}$ is uniformly representation stable, and in fact stabilizes once $n\geq 4i$.
\end{theorem}
\noindent For example, for $n\geq 4$,
\[H^1(P_n;\Q)=V(0)\oplus V(1)\oplus V(2)\] and thanks to computations
by Hemmer, for $n\geq 7$ we have:
\begin{equation}
\label{eq:braid2stab}
H^2(P_n;\Q)= V(1)^{\oplus 2}\oplus V(1,1)^{\oplus 2}\oplus
V(2)^{\oplus 2}\oplus V(2,1)^{\oplus 2}\oplus V(3)\oplus V(3,1).
\end{equation}
As mentioned in the introduction, it is tempting to guess that the
reason for the stability in \eqref{eq:braid2stab} is that each factor
$V(\lambda)\subset H^2(P_n;\Q)$ has $S_{n+1}$--span inside
$H^2(P_{n+1};\Q)$ isomorphic to $V(\lambda)$. In the terminology of
\S\ref{section:strong}, we would hope that the natural homomorphism
$H^2(P_n;\Q)\to H^2(P_{n+1};\Q)$ is type-preserving and so the
sequence $\{H^2(P_n;\Q)\}$ is strongly stable. However, this is false
for $\{H^2(P_n;\Q)\}$ and indeed is false for $\{H^i(P_n;\Q)\}$ for
every $i\geq 1$.
We can see this failure explicitly for $H^1(P_n;\Q)$ as
follows. We will see below that $H^1(P_n;\Q)$ has basis
$\{w_{ij}|1\leq i<j\leq n\}$; after identifying $w_{ji}=w_{ij}$, the group $S_n$
acts on this basis by permuting the indices. Thus the unique trivial
subrepresentation $V(0)\subset H^1(P_n;\Q)$ is spanned by the vector
\[v=\sum_{1\leq i<j\leq n}w_{ij}.\] This vector is, up to a scalar,
the sum $\sum_{\sigma\in S_n}\sigma\cdot w_{12}$, and thus is
certainly $S_n$--invariant. But after including it into $H^1(P_{n+1};\Q)$ this
vector is not invariant under $S_{n+1}$ (for example, it does not involve any basis elements of the form $w_{i,n+1}$). In fact, it is not too
hard to check that the $S_{n+1}$--span of this vector is $V(0)\oplus
V(1)\subset H^1(P_{n+1};\Q)$.\\
In trying to prove Theorem~\ref{thm:pure}, we were able to use work of
Orlik--Solomon \cite{OS} and Lehrer--Solomon \cite{LS} to reduce the
problem to a stability statement for certain induced representations
of symmetric groups. We conjectured the following theorem to D. Hemmer
in certain special cases. The theorem was then proved by Hemmer in
much greater generality than we had hoped. This result itself provides
another example of representation stability.
We begin by presenting Hemmer's result, which we will use in the proof
of Theorem~\ref{thm:pure}. Fix a subgroup $H$ of the symmetric group
$S_k$, and fix any representation $V$ of $H$. For $n\geq k$ we may
extend the action of $H$ on $V$ to the subgroup $H\times S_{n-k}<S_n$
by letting $S_{n-k}$ act trivially on $V$; this representation of
$H\times S_{n-k}$ is denoted $V\boxtimes \Q$. Finally, we may consider
the induced representation $\Ind_{H\times S_{n-k}}^{S_n}(V\boxtimes
\Q)$, which is a representation of $S_n$.
\begin{theorem}[Hemmer \cite{He}]
\label{thm:hemmer}
Fix $k\geq 1$, a subgroup $H<S_k$, and a representation $V$ of $H$.
Then the sequence of $S_n$--representations $\{\Ind_{H\times
S_{n-k}}^{S_n}(V\boxtimes \Q)\}$ is uniformly
representation stable. The decomposition of this sequence stabilizes once $n\geq 2k$.
\end{theorem}
\noindent Injectivity and surjectivity are immediate from the
definition of induced representation; indeed $\Ind_{H\times
S_{n-k}}^{S_n}(V\boxtimes \Q)$ sits inside $\Ind_{H\times
S_{n-k+1}}^{S_{n+1}}(V\boxtimes \Q)$ as the $S_n$--span of
$V\boxtimes \Q$. For the proof of uniform multiplicity stability, see
Hemmer \cite{He}.
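As a basic consistency check, the dimension of these induced representations is $[S_n:H\times S_{n-k}]\cdot\dim V=\frac{n!}{|H|\,(n-k)!}\dim V$, a polynomial in $n$ of degree $k$. A one-line Python sketch (ours):

```python
from math import factorial

def ind_dim(n, k, order_H, dim_V):
    """dim Ind_{H x S_{n-k}}^{S_n}(V boxtimes Q) = [S_n : H x S_{n-k}] * dim V."""
    return factorial(n) * dim_V // (order_H * factorial(n - k))

# H = S_2 < S_2 (k = 2), V trivial: this is the permutation module on
# 2-element subsets of {1,...,n}, of dimension n(n-1)/2:
assert ind_dim(6, 2, 2, 1) == 15
```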
\subsection{Stability of the cohomology of pure braid groups}
\label{section:braid}
With the above tool in hand, we can now prove representation stability
for $\{H^i(P_n;\Q)\}$ for each fixed $i\geq 0$.
\begin{proof}[Proof of Theorem~\ref{thm:pure}] We continue with the
notation given in the introduction. The projections of configuration
spaces $X_{n+1}\to X_n$ given by forgetting the last coordinate give
surjections $\psi_n\colon P_{n+1}\to P_n$. These surjections induce
maps \[\psi_n^\ast\colon H^*(P_n;\Q)\to H^*(P_{n+1};\Q).\]
We will prove representation stability with respect to these maps.
For each pair $j\neq k$, let $w_{jk}\in H^1(P_n;\Q)$ be the
cohomology class represented by the differential form $\frac{1}{2\pi
i}\frac{dz_j-dz_k}{z_j-z_k}$ on $X_n\subset\C^n$. Note that
$w_{jk}=w_{kj}$. The vector space $H^1(P_n;\Q)$ is spanned by
the vectors $w_{jk}$, and the map $H^1(P_n;\Q)\to H^1(P_{n+1};\Q)$
sends $w_{jk}\in H^1(P_n;\Q)$ to $w_{jk}\in
H^1(P_{n+1};\Q)$. Furthermore, Arnol'd proved that $H^*(P_n;\Q)$ is
generated as a $\Q$--algebra by $H^1(P_n;\Q)$, subject only to the
relations
\[R_{jkl}\colon \quad w_{jk}\wedge w_{kl}+w_{kl}\wedge
w_{lj}+w_{lj}\wedge w_{jk}=0.\] This implies that $H^i(P_n;\Q)$ has
basis \[\big\{w_{j_1k_1}\wedge\cdots \wedge
w_{j_ik_i}\big|k_1<\cdots<k_i, \ \mbox{and $j_m<k_m$ for all
}m\big\}.\]
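The size of this basis can be checked against Arnol'd's classical formula for the Poincar\'e polynomial of $P_n$, namely $\prod_{k=1}^{n-1}(1+kt)$. The Python sketch below (ours, purely illustrative) enumerates the basis directly and compares.

```python
from itertools import combinations, product

def basis(n, i):
    """Arnol'd basis of H^i(P_n;Q): wedges w_{j_1 k_1} ^ ... ^ w_{j_i k_i}
    with k_1 < ... < k_i and j_m < k_m for all m."""
    elems = []
    for ks in combinations(range(2, n + 1), i):         # the k's, drawn from 2..n
        for js in product(*[range(1, k) for k in ks]):  # each j_m < k_m
            elems.append(tuple(zip(js, ks)))
    return elems

def dim_from_poincare(n, i):
    """Coefficient of t^i in prod_{k=1}^{n-1} (1 + k t)."""
    poly = [1]
    for k in range(1, n):
        poly = [a + k * b for a, b in zip(poly + [0], [0] + poly)]
    return poly[i] if i < len(poly) else 0

# e.g. dim H^2(P_4;Q) = 11, the t^2 coefficient of (1+t)(1+2t)(1+3t):
assert len(basis(4, 2)) == dim_from_poincare(4, 2) == 11
```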
Injectivity of each $\psi_n^\ast$ is then immediate. To prove
surjectivity of $\psi_n^\ast$ (in the sense of the definition of
representation stability), consider an arbitrary basis element for
$H^i(P_{n+1};\Q)$. Note that for $n\geq 2i$, no basis element can
involve all the numbers from $1$ to $n+1$ as indices. It follows
that by applying some element of $S_{n+1}$, we may assume that our
basis element can be written without $n+1$ as an index. But such an
element is in the subalgebra of $H^*(P_{n+1};\Q)$ spanned by the
image of $H^1(P_n)$, and thus is contained in the image of
$H^i(P_n;\Q)$, as desired.
We now prove uniform stability of multiplicities; we will defer the computation of the stable range until afterwards. The work of
Orlik--Solomon on the cohomology of hyperplane complements implies
that $H^*(P_n;\Q)$ splits into pieces ``supported on the top
cohomology of Young subgroups'', as follows. For details of what follows,
see Lehrer--Solomon
\cite{LS}. Any subset of
$\{1,\ldots,n\}$ of cardinality $k$ determines a projection $P_n\to
P_k$ by forgetting the other $n-k$ coordinates (strands). Given a
partition $\mathcal{S}$ of $\{1,\ldots,n\}$ into subsets, the
product over all these projections gives a projection of $P_n$ onto
the group $P_{\mathcal{S}}$ defined as the product of the pure braid
groups of sizes corresponding to elements of the partition. For
concreteness we illustrate this explicitly for the partition of
$\{1,\ldots,n\}$ into $\{1,\ldots,k\}$ and $\{k+1,\ldots,n\}$, which
determines a projection $P_n\to P_{\mathcal{S}}=P_k\times
P_{n-k}$. There is always a splitting $P_{\mathcal{S}}\to P_n$,
given in this case for example by realizing $P_k$ and $P_{n-k}$
disjointly. Note that the partition may contain subsets of size
1. For example, the partition of $\{1,\ldots,n\}$ into
$\{1,\ldots,k\}, \{k+1\},\ldots,\{n\}$ determines the group
$P_k\times P_1\times\cdots\times P_1\approx P_k$.
We refer to these groups $P_\mathcal{S}$ as \emph{Young subgroups}
of $P_n$, by analogy with Young subgroups of symmetric groups, such
as $S_k\times S_{n-k}<S_n$. This is a slight abuse of notation,
since the embedding of $P_\mathcal{S}$ as a subgroup is not unique;
the important thing is the projection $P_n\to P_\mathcal{S}$. The
projection onto such a Young subgroup gives an inclusion
$H^*(P_\mathcal{S};\Q)\to H^*(P_n;\Q)$. We now consider the image in
$H^*(P_n;\Q)$ of the top cohomology of $P_{\mathcal{S}}$. For
example, the cohomological dimension of $P_k\times P_{n-k}$ is
$(k-1)+(n-k-1)=n-2$, and we consider the image of the top cohomology
$H^{n-2}(P_k\times P_{n-k};\Q)$ inside $H^{n-2}(P_n;\Q)$. For each
partition $\mathcal{S}$ of $\{1,\ldots,n\}$ into $i$ subsets, the
corresponding Young subgroup $P_{\mathcal{S}}$ is the product of $i$
pure braid groups, so the image of its top cohomology determines a
subspace $H^\mathcal{S}(P_n)$ of $H^{n-i}(P_n;\Q)$. Orlik--Solomon
\cite[Proposition 2.10]{OS} implies that $H^*(P_n;\Q)$ splits as an
$S_n$--module as a direct
sum \[H^*(P_n;\Q)=\bigoplus_{\mathcal{S}}H^\mathcal{S}(P_n)\] over
all partitions $\mathcal{S}$ of $\{1,\ldots,n\}$, and that $S_n$
permutes the summands according to its action on $\{1,\ldots,n\}$.
Every partition $\mathcal{S}$ of $\{1,\ldots,n\}$ determines a
partition $\overline{\mathcal{S}}$ of $n$, listing the sizes of the
subsets in $\mathcal{S}$. The term $H^{\mathcal{S}}(P_n)$
contributes to $H^i(P_n;\Q)$ exactly if $|\mathcal{S}|=\ell(\overline{\mathcal{S}})=n-i$. The action of $S_n$ on $\{1,\ldots,n\}$
induces an action on partitions $\mathcal{S}$ of $\{1,\ldots,n\}$,
and the summands $H^{\mathcal{S}}(P_n)$ are permuted according to
this action. In particular, for a fixed $\mu\vdash n$, the direct
sum $\bigoplus_{\overline{\mathcal{S}}=\mu}H^{\mathcal{S}}(P_n)$ is a
subrepresentation of $H^i(P_n;\Q)$. We will need explicit orbit representatives, so for any partition $\mu\vdash
n$, let $\mathcal{S}_\mu$ be the partition of $\{1,\ldots,n\}$ given
by \[\{1,\ldots,\mu_1\},\{\mu_1+1,\ldots,\mu_1+\mu_2\},
\ldots,\{\mu_1+\cdots+\mu_{\ell(\mu)-1}+1,\ldots,n\}.\] This gives for each $\mu$ an orbit representative
$\mathcal{S}_\mu$ with
$\overline{\mathcal{S}_\mu}=\mu$. For a fixed $\mu$, the
subrepresentation
$\bigoplus_{\overline{\mathcal{S}}=\mu}H^{\mathcal{S}}(P_n)$ is
generated by one summand $H^{\mathcal{S}_\mu}(P_n)$ and is the direct sum of its
translates. Thus by the definition of induced representation we have
\begin{equation}
\label{eq:mupiece}
\bigoplus_{\overline{\mathcal{S}}=\mu}H^{\mathcal{S}}(P_n)
=\Ind_{\Stab(\mathcal{S}_\mu)}^{S_n}H^{\mathcal{S}_\mu}(P_n).
\end{equation}
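To illustrate \eqref{eq:mupiece}, take $n=4$ and $\mu=(2,2)$. Then $\mathcal{S}_\mu$ is the partition $\{1,2\},\{3,4\}$, with stabilizer $\Stab(\mathcal{S}_\mu)=S_2\wr S_2$ of order 8, and there are exactly $[S_4:S_2\wr S_2]=3$ partitions $\mathcal{S}$ with $\overline{\mathcal{S}}=(2,2)$, namely $\{1,2\},\{3,4\}$ and $\{1,3\},\{2,4\}$ and $\{1,4\},\{2,3\}$. The group $S_4$ permutes the corresponding three summands $H^{\mathcal{S}}(P_4)<H^2(P_4;\Q)$ transitively, exhibiting their direct sum as the induced representation on the right side of \eqref{eq:mupiece}.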
We would like to apply Theorem~\ref{thm:hemmer} to the terms
\eqref{eq:mupiece}. Consider the projection onto the Young subgroup
$P_n\to P_k\times P_{n-k}$. Precomposing with the projection
$P_{n+1}\to P_n$, this pulls back to the projection $P_{n+1}\to
P_k\times P_{n-k}\times P_1$. In general, the Young subgroup
$P_{\mathcal{S}}<P_n$ pulls back to $P_{\mathcal{S}\langle n+1\rangle}<P_{n+1}$,
where $\mathcal{S}\langle n+1\rangle$ is the partition $\mathcal{S}\cup\{n+1\}$. Note
that if $\overline{\mathcal{S}}=\mu=(\mu_1,\ldots,\mu_{n-i})$, then
$\overline{\mathcal{S}\langle n+1\rangle}=\mu\langle n+1\rangle\coloneq (\mu_1,\ldots,\mu_{n-i},1)$. For larger $m\geq n$, we define $\mathcal{S}\langle m\rangle\coloneq \mathcal{S}\cup\{n+1\}\cup \cdots\cup \{m\}$ and $\mu\langle m\rangle\coloneq (\mu_1,\ldots,\mu_{n-i},1,\ldots,1)$ similarly.
Since
$\mathcal{S}\langle n+1\rangle$ is a partition of $\{1,\ldots,n+1\}$ into $(n+1)-i$
sets, $H^{\mathcal{S}\langle n+1\rangle}(P_{n+1})$ is contained in $H^i(P_{n+1};\Q)$,
and in fact the natural map $H^*(P_n;\Q)\to H^*(P_{n+1};\Q)$ restricts to
an isomorphism $H^{\mathcal{S}}(P_n)\to H^{\mathcal{S}\langle n+1\rangle}(P_{n+1})$.
Certainly not every partition of $\{1,\ldots,n+1\}$ contains the
singleton set $\{n+1\}$. But fixing $i$, every partition of $n+1$
with $(n+1)-i$ entries must have some entry equal to 1 once $n\geq
2i$. This means that any such partition is equal to $\mu\langle n+1\rangle$ for some
$\mu\vdash n$. Note that we chose the definition of
$\mathcal{S}_\mu$ so that
$\mathcal{S}_{\mu\langle n+1\rangle}=\mathcal{S}_\mu\langle n+1\rangle$. Thus writing the
decomposition \[H^i(P_n;\Q)= \bigoplus_{\substack{\mu\vdash n\\
\ell(\mu)=n-i}}
\bigoplus_{\overline{\mathcal{S}}=\mu}H^{\mathcal{S}}(P_n)=
\bigoplus_{\substack{\mu\vdash n\\ \ell(\mu)=n-i}}
\Ind_{\Stab(\mathcal{S}_\mu)}^{S_n}H^{\mathcal{S}_\mu}(P_n)\] we
have for $n\geq 2i$ a decomposition of $H^i(P_{n+1};\Q)$ over the
same partitions $\mu$:
\begin{align*}
H^i(P_{n+1};\Q)&=\bigoplus_{\substack{\nu\vdash n+1\\\ell(\nu)=n+1-i}}
\Ind_{\Stab(\mathcal{S}_\nu)}^{S_{n+1}}H^{\mathcal{S}_\nu}(P_{n+1})\\
&=\bigoplus_{\substack{\mu\vdash n\\\ell(\mu)=n-i}}
\Ind_{\Stab(\mathcal{S}_\mu\langle n+1\rangle)}^{S_{n+1}}H^{\mathcal{S}_\mu\langle n+1\rangle}(P_{n+1})
\end{align*}
We already mentioned above that
$H^{\mathcal{S}_\mu\langle n+1\rangle}(P_{n+1})\approx
H^{\mathcal{S}_\mu}(P_n)$. The set of partitions
$\mu\vdash n$ with $\ell(\mu)=n-i$ is finite and independent of $n$ (subtracting 1 from each entry yields a partition of $i$), so it suffices to prove for each
$\mu$ that the sequence $\{\Ind_{\Stab(\mathcal{S}_\mu\langle m\rangle)}^{S_m}H^{\mathcal{S}_\mu\langle m\rangle}(P_m)\}$
is uniformly representation stable as $m\to \infty$.
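For example, for $i=2$ the partitions $\mu\vdash n$ with $\ell(\mu)=n-2$ are exactly $\mu=(3,1,\ldots,1)$ and $\mu=(2,2,1,\ldots,1)$, corresponding to the two partitions $(2)$ and $(1,1)$ of $i=2$; the associated Young subgroups are $P_3\times P_1\times\cdots\times P_1\approx P_3$ and $P_2\times P_2\times P_1\times\cdots\times P_1\approx P_2\times P_2$.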
Note that the stabilizer $\Stab(\mathcal{S})$ of a partition
$\mathcal{S}$ of $\{1,\ldots,n\}$ need not preserve the individual
subsets making up $\mathcal{S}$, only the overall decomposition into
subsets. Thus if $\mathcal{S}$ has $m_j$ subsets of size $j$, the
stabilizer $\Stab(\mathcal{S})$ will be a product of wreath products
$S_j\wr S_{m_j}=(S_j)^{m_j}\rtimes S_{m_j}$, where the $(S_j)^{m_j}$
factor acts on the subsets of size $j$, and the $S_{m_j}$ factor
permutes them. In particular, the $S_1\wr S_{m_1}=S_{m_1}$ factor
acts by permuting the singleton sets in $\mathcal{S}_\mu$. This
corresponds to rearranging the $P_1\times\cdots\times P_1$ factors in the
Young subgroup $P_\mathcal{S}$. From this we see that the $S_{m_1}$
factor of $\Stab(\mathcal{S})$ acts trivially on
$H^\mathcal{S}(P_n)$.
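For example, if $\mu=(3,3,2,1,1)\vdash 10$, then $\Stab(\mathcal{S}_\mu)=(S_3\wr S_2)\times S_2\times S_2$: the $S_3\wr S_2$ factor acts on the two subsets of size 3 and permutes them, the middle $S_2$ factor acts on the subset of size 2, and the final factor $S_2=S_1\wr S_2$ permutes the two singletons and hence acts trivially on $H^{\mathcal{S}_\mu}(P_{10})$.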
If we write $\Stab(\mathcal{S}_\mu)=H\times
S_{m_1}$, we have $\Stab(\mathcal{S}_\mu\langle n+1\rangle)=H\times S_{m_1+1}$, and so
on. Take $k=n-m_1$ and let $\nu\vdash k$ be the partition obtained
from $\mu$ by deleting those entries equal to $1$. The subgroup $H$
is exactly $H_\nu\coloneq \Stab(\mathcal{S}_\nu)<S_k$, and
identifying $H^{\mathcal{S}_\mu}(P_n)$ with
$H^{\mathcal{S}_{\nu}}(P_k)$, the sequence in question can be
written as $\{\Ind_{H_\nu\times
S_{n-k}}^{S_n}H^{\mathcal{S}_{\nu}}(P_k)\boxtimes \Q\}$. Thus Theorem~\ref{thm:hemmer} applies and gives that this
sequence is uniformly multiplicity stable, as desired.
To compute the stable range, it suffices to bound the number $k=|\nu|$ which appears in the last paragraph of the proof. It is not hard to check that for a fixed $i$, the maximum $k$ occurs for the partition $\mu=(2,2,\ldots,2,1,\ldots,1)$, corresponding to Young subgroups isomorphic to $P_2\times \cdots\times P_2$. For such $\mu$ we have $\nu=(2,\ldots,2)$ with $\ell(\nu)=i$, and thus the maximal $k$ is $k=2i$. Since the stability range in Theorem~\ref{thm:hemmer} is $n\geq 2k$, we conclude that $\{H^i(P_n;\Q)\}$ stabilizes once $n\geq 4i$, as claimed.
\end{proof}
\begin{remark}
By a careful analysis of the individual pieces $H^\mathcal{S}(P_n)$,
Lehrer--Solomon \cite{LS} decompose $H^i(P_n;\Q)$ into a direct sum
of representations induced from 1--dimensional representations of
certain centralizers in $S_n$. Though we did not need this
description to prove that $\{H^i(P_n;\Q)\}$ is representation
stable, it is indispensable when actually computing multiplicities
of irreducibles. We revisit these multiplicities in \cite{CEF},
where we explicitly compute the multiplicities of certain
irreducible representations in $H^i(P_n;\Q)$ and give arithmetic
consequences of their stable values.
\end{remark}
\bigskip
Arnol'd \cite{Ar} (see also F. Cohen \cite{Co}) established
homological stability for the integral homology groups $H_i(B_n;\Z)$.
He also showed that $H_i(B_n;\Z)$ is finite for $i\geq 2$, so that
$H_i(B_n;\Q)$ is trivial in this range. As a corollary of
Theorem~\ref{thm:pure}, we obtain homological stability for $B_n$ with
twisted coefficients. Any representation of $S_n$ can be regarded as a
representation of $B_n$ by composing with the standard projection
$B_n\to S_n$.
\begin{corollary}
\label{corollary:twistedstability}
For any partition $\lambda$ the sequence $\{H_*(B_n;V(\lambda)_n)\}$
of twisted homology groups satisfies classical homological
stability: for each fixed $i\geq 0$, once $n\geq 4i$ there is an isomorphism
\begin{equation}
\label{eq:twisted}
H_i(B_n;V(\lambda)_n) \approx H_i(B_{n+1}; V(\lambda)_{n+1}).
\end{equation}
\end{corollary}
\begin{proof}
There are no natural maps between the homology groups in
(\ref{eq:twisted}), but we show that their dimension is eventually
constant. Since $P_n$ has finite index in $B_n$ and our
coefficients are vector spaces over $\Q$, the transfer map gives an
isomorphism
\[H_i(B_n;V(\lambda)_n)\approx H_i(P_n;V(\lambda)_n)^{S_n}\] with
the $S_n$--invariants in $H_i(P_n;V(\lambda)_n)$. Since the action of $B_n$ on $V(\lambda)_n$
factors through $S_n$, the representation $V(\lambda)_n$ is trivial
when restricted to $P_n$. Thus \[H_i(P_n;V(\lambda)_n)^{S_n}\approx
\big(H_i(P_n;\Q)\otimes V(\lambda)_n\big)^{S_n}.\] Recall from Section~\ref{section:representationstability} that every representation of $S_n$ is self-dual. Schur's lemma thus gives that
$V(\mu)\otimes V(\nu)$ contains the trivial representation if and
only if $\mu=\nu$, in which case the trivial representation appears
with multiplicity 1. It follows that the dimension of
$\big(H_i(P_n;\Q)\otimes V(\lambda)_n\big)^{S_n}$ is exactly the
multiplicity of $V(\lambda)_n$ in $H_i(P_n;\Q)$, which is the same
as the multiplicity of $V(\lambda)_n$ in $H^i(P_n;\Q)$. By
Theorem~\ref{thm:pure}, this multiplicity is constant once $n\geq 4i$, as
desired.
\end{proof}
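As a simple sanity check, take trivial coefficients $V(\lambda)_n=\Q$. The proof shows that $\dim H_i(B_n;\Q)$ equals the multiplicity of the trivial representation in $H^i(P_n;\Q)$. For $i=1$ this multiplicity is 1, since $H^1(P_n;\Q)$ is the permutation module on the set of pairs $\{j,k\}$; this recovers $H_1(B_n;\Q)=\Q$, as expected since the abelianization of $B_n$ is $\Z$.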
\begin{remark}
The space $X_n$ is the configuration space of $n$ ordered points in
$\C$, and Theorem~\ref{thm:pure} states that its
cohomology groups $\{H^i(X_n;\Q)\}$ are representation
stable. Similarly, the space $Y_n$ is the configuration space of $n$
unordered points in $\C$, and
Corollary~\ref{corollary:twistedstability} gives classical
homological stability for the sequence $\{H_i(Y_n;\Q)\}$. These results are extended to configuration spaces of arbitrary orientable manifolds in \cite{C}.
\end{remark}
\subsection{Generalized braid groups}
\label{section:gbg}
The \emph{generalized pure braid group} of type $B_n$ is the
fundamental group $WP_n\coloneq \pi_1(X'_n)$ of the configuration
space
\[X'_n\coloneq \big\{\mathbf{z}\in \C^{n}\big|z_i\neq z_j\text{ and }z_i\neq
-z_j\text{ for }i\neq j,\ z_i\neq 0\big\}.\] This configuration space is aspherical, so
$H^*(WP_n;\Q)=H^*(X'_n;\Q)$. The hyperoctahedral group $W_n$ acts on $X'_n$ by permuting and negating the
coordinates; this induces an action of $W_n$ on $H^*(WP_n;\Q)$. The
quotient $Y'_n\coloneq X'_n/W_n$ is the space of unordered $n$--tuples
of distinct sets $\{z,-z\}$ with $z\neq 0$. Identifying each set
$\{z,-z\}$ with the point $z^2$, the space $Y'_n$ is identified with
the space of unordered $n$--tuples of distinct nonzero points. Thus
the \emph{generalized braid group} $WB_n\coloneq \pi_1(Y'_n)$ can be
identified with the subgroup $B_{1,n}<B_{n+1}$ which is the preimage
of the stabilizer $\Stab(1)<S_{n+1}$. See the survey by Vershinin
\cite{Ve} for an overview of generalized braid groups.
\begin{theorem}
\label{thm:genpure}
For each fixed $i\geq 0$, the sequence $\{H^i(WP_n;\Q)\}$ of
$W_n$--representations is uniformly representation
stable.
\end{theorem}
\begin{proof}
The results of Orlik--Solomon are a bit more involved in this case,
so we cover the necessary definitions in more detail. See Douglass
\cite{Do} for an excellent exposition of these results in the case
of $H^*(X'_n;\Q)$. Let $\mathcal{H}$ be the set of hyperplanes
defined by the equations $z_i=z_j$, $z_i=-z_j$, and $z_i=0$. To each
$H\in \mathcal{H}$ defined by the linear equation $L=0$ we associate
the element $w_H\in H^1(X'_n;\Q)$ represented by $\frac{1}{2\pi
i}\frac{dL}{L}$. The action of $W_n$ on the elements $w_H$ is the
same as the action of $W_n$ on $\mathcal{H}$.
Brieskorn \cite{Bri} proved that $H^*(WP_n;\Q)=H^*(X'_n;\Q)$ is
generated by the $w_H$, subject to certain relations \cite[Theorem
5.2]{OS}. Injectivity in the definition of representation stability
follows from the naturality of these relations; this is essentially
the observation that any relation supported on the image of
$H^*(X'_n;\Q)$ in $H^*(X'_{n+1};\Q)$ already holds in
$H^*(X'_n;\Q)$. For surjectivity, simply note that any monomial of
length $i$ involves at most $2i$ coordinates, and thus up to the
$W_{n+1}$--action is contained in $H^i(X'_n;\Q)$ once $n\geq 2i$.
Let $\mathcal{J}$ be the set of intersections of hyperplanes in
$\mathcal{H}$. The \emph{support} of a monomial $w_{H_1}\cdots
w_{H_k}$ is the intersection $H_1\cap \cdots\cap H_k\in
\mathcal{J}$. Orlik--Solomon \cite[Proposition 2.10]{OS} prove that
$H^*(X'_n;\Q)$ splits as a direct sum over $J\in\mathcal{J}$
\[H^*(X'_n;\Q)=\bigoplus_{J\in\mathcal{J}}\big\langle w_{H_1}\cdots
w_{H_k} \big|H_1\cap \cdots\cap H_k=J\big\rangle\] of the subspace
spanned by monomials with support $J$. The factors are permuted according to the action of
$W_n$ on $\mathcal{J}$. Let $H^J$ be the summand spanned by all monomials $w_{H_1}\cdots
w_{H_k}$ with $H_1\cap \cdots\cap H_k=J$, so the
splitting above is just \[H^*(X'_n;\Q)=\bigoplus_{J\in\mathcal{J}}H^J.\] To write this as a sum of induced representations, we need to understand the orbits of the $W_n$--action on $\mathcal{J}$.
Consider the elements of $\mathcal{J}$, that is the subspaces which
occur as intersections of the defining hyperplanes $\mathcal{H}$. A
representative example is the subspace defined by the equations
\[z_1=-z_2=-z_3,\quad z_4=z_5=-z_6,\quad z_7=z_8=0.\] In general,
any element $J\in \mathcal{J}$ splits into disjoint sets of indices
in this way. One subset corresponds to the $\ell$ coordinates which
are equal to 0, for some $0\leq \ell\leq n$. On each other subset,
the indices are split into two parts as in $z_1=-z_2=-z_3$. Since
$W_n$ not only can permute coordinates but can also negate them, this
internal division is not preserved by $W_n$. The orbits of $W_n$
acting on $\mathcal{J}$ correspond to pairs $(\lambda,\ell)$ where
$\lambda$ is a partition not involving 1 and $|\lambda|+\ell\leq
n$. We let $m=m(J)$ denote $|\lambda|+\ell$. For example, the
subspace above corresponds to $((3,3),2)$, with representative
\[J=J_{((3,3),2)}\colon\qquad z_1=z_2=z_3,\quad z_4=z_5=z_6,\quad
z_7=z_8=0.\] For this subspace $J$, the stabilizer $\Stab_{W_n}(J)$
is $(S_3\wr W_2)\times W_2\times W_{n-8}$. All we will need is that
in general, the stabilizer $\Stab_{W_n}(J)$ splits as a product
$G_J\times W_{n-m(J)}$, where $n-m(J)$ is the number of unrestricted
coordinates.
The splitting $H^*(X'_n;\Q)=\bigoplus_{J\in\mathcal{J}}H^J$ can be
rewritten as a sum over $W_n$--orbit representatives of induced
representations \[H^*(X'_n;\Q)=\bigoplus_{J=J_{(\lambda,\ell)}}
\Ind_{\Stab_{W_n}(J)}^{W_n}H^J.\]
Unfortunately, unlike the case of $H^{\mathcal{S}}(P_n)$ in the
previous section, the subspace $H^J$ is not homogeneous: if $k$
denotes the number of entries in $\lambda$ then $H^J$ has components
in $H^i(X'_n;\Q)$ for all $m(J)-k\leq i\leq m(J)$. For such $i$,
define the \emph{homogeneous component} $H^{J|i}<H^i(X'_n;\Q)$
by \[H^{J|i}\coloneq\big\langle w_{H_1}\cdots w_{H_i}\big|H_1\cap
\cdots\cap H_i=J\big\rangle.\] We obtain the decomposition
\[H^i(X'_n;\Q)=\bigoplus_{\substack{J=J_{(\lambda,\ell)}\\
m(J)-k\leq i\leq m(J)}} \Ind_{\Stab_{W_n}(J)}^{W_n}H^{J|i}.\]
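For example, for $i=1$ there are exactly two contributing orbit representatives: $J_{((2),0)}$, defined by $z_1=z_2$, and $J_{(\emptyset,1)}$, defined by $z_1=0$. The first summand is spanned by the classes $w_H$ for the $n(n-1)$ hyperplanes $z_i=\pm z_j$, the second by the classes $w_H$ for the $n$ hyperplanes $z_i=0$, and together these give $\dim H^1(X'_n;\Q)=n^2$.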
Note that for fixed $i$, the set of pairs $(\lambda,\ell)$ with
$|\lambda|+\ell-k\leq i\leq |\lambda|+\ell$ is finite and eventually
independent of $n$; that is, the collection of orbit representatives
$J=J_{(\lambda,\ell)}$ for which $H^J$ contributes to $H^i(X'_n;\Q)$
does not depend on $n$. Thus it would suffice to prove that for each
such $J$ and $i$, the sequence of representations
$\Ind_{\Stab_{W_n}(J)}^{W_n}H^{J|i}$ is uniformly representation stable.
We now mimic Hemmer's proof of Theorem~\ref{thm:hemmer} to finish
the proof. Fix $J=J_{(\lambda,\ell)}$ and $i$ such that $m-k\leq
i\leq m$, and take $n>m$. Recall that $\Stab_{W_n}(J)$ splits as
$G_J\times W_{n-m}$, where $n-m$ is the number of unrestricted
coordinates. Since $H^{J|i}$ is spanned by monomials for which the
intersection $H_1\cap\cdots\cap H_i$ is equal to $J$, no
unrestricted coordinate appears in any element of $H^{J|i}$. It
follows that the $W_{n-m}$ factor above acts trivially on
$H^{J|i}$. Thus we may consider $H^{J|i}$ as the representation
$H^{J|i}\boxtimes \Q$ of $G_J\times W_{n-m}$. Factor the desired
induction as
\begin{align*}
\Ind_{\Stab_{W_n}(J)}^{W_n}H^{J|i}&=
\Ind_{W_m\times W_{n-m}}^{W_n}
\Ind_{G_J\times W_{n-m}}^{W_m\times W_{n-m}}
H^{J|i}\boxtimes \Q\\
&=\Ind_{W_m\times W_{n-m}}^{W_n}
\left(\big(\Ind_{G_J}^{W_m}H^{J|i}\big)
\boxtimes \Q\right)
\end{align*}
Let $V_{J|i}$ be the representation $\Ind_{G_J}^{W_m}H^{J|i}$ of
$W_m$; note that this does not depend on $n$. Consider the
decomposition of $V_{J|i}$ into irreducible representations
$V_{(\lambda^+,\lambda^-)}$ of $W_m$. Since only finitely many
irreducibles $(\lambda^+,\lambda^-)$ occur in $V_{J|i}$, it suffices
to prove uniform stability for each factor $\Ind_{W_m\times
W_{n-m}}^{W_n}\big(V_{(\lambda^+,\lambda^-)} \boxtimes
\Q\big)$. The Littlewood--Richardson rule generalizes to
hyperoctahedral groups \cite[Lemma 6.1.3]{GP}, giving:
\[\Ind_{W_m\times W_{n-m}}^{W_n}\big(V_{(\lambda^+,\lambda^-)}
\boxtimes
V_{(\mu^+,\mu^-)}\big)=\bigoplus_{\nu^+,\nu^-}
C_{\lambda^+,\mu^+}^{\nu^+}
C_{\lambda^-,\mu^-}^{\nu^-}
V_{(\nu^+,\nu^-)}.\] Applying this to the trivial
representation $\Q=V_{((n-m),(0))}$ yields
\[\Ind_{W_m\times W_{n-m}}^{W_n}\big(V_{(\lambda^+,\lambda^-)}
\boxtimes \Q\big)=\bigoplus_{\nu} C_{\lambda^+,(n-m)}^{\nu}
V_{(\nu,\lambda^-)}=\bigoplus_{\nu} V_{(\nu,\lambda^-)}\] where the
last sum is over those partitions $\nu$ obtained from $\lambda^+$ by
adding $n-m$ boxes, no two in the same column. For fixed $\lambda^+$
and large enough $n$, say $n-m>|\lambda^+|$, any such $\nu$ must
have multiple boxes added to the first row. This yields a bijection
between the partitions $\nu$ of $j\coloneq n-|\lambda^-|$ appearing
in this decomposition and their stabilizations $\nu[j+1]$ appearing
in the decomposition \[\Ind_{W_m\times
W_{n+1-m}}^{W_{n+1}}\big(V_{(\lambda^+,\lambda^-)} \boxtimes
\Q\big)=\bigoplus_{\nu} V_{(\nu[j+1],\lambda^-)},\] implying that
this sequence of induced representations is uniformly multiplicity
stable. This completes the proof that $H^i(WP_n;\Q)=H^i(X'_n;\Q)$ is
uniformly representation stable.
\end{proof}
This gives the following corollary, by the same argument as Corollary~\ref{corollary:twistedstability}. We remark that an explicit stability range for Theorem~\ref{thm:genpure} and Corollary~\ref{cor:gen} can be extracted from the proof of Theorem~\ref{thm:genpure}.
\begin{corollary}
\label{cor:gen}
For any double partition $\lambda=(\lambda^+,\lambda^-)$ the sequence $\{H_*(B_{1,n};V(\lambda)_n)=H_*(WB_n;V(\lambda)_n)\}$
of twisted homology groups satisfies classical homological
stability: for each fixed $i\geq 0$ and sufficiently large $n$ (depending on $i$), there is an isomorphism
\[H_i(B_{1,n};V(\lambda)_n) \approx H_i(B_{1,n+1}; V(\lambda)_{n+1}).\]
\end{corollary}
\subsection{Groups of string motions}
A 3--dimensional analogue of the pure braid group is $P\Sigma_n$, the
group of pure string motions. Let $X''_n$ be the space of embeddings
of $n$ disjoint unlinked loops into 3--space, and define
$P\Sigma_n\coloneq \pi_1(X''_n)$. The hyperoctahedral group $W_n$ acts
on $X''_n$ by permuting the labels and reversing the orientations,
inducing a $W_n$--action on $H^*(P\Sigma_n;\Q)$. We remark that
$P\Sigma_n$ can also be identified with McCool's \emph{pure symmetric
automorphism group}, consisting of those automorphisms of the free
group $F_n$ sending each generator to a conjugate of itself. The
quotient $Y''_n\coloneq X''_n/W_n$ is the space of $n$ unordered
unoriented unlinked loops, and its fundamental group
$B\Sigma_n\coloneq \pi_1(Y''_n)$ is the group of \emph{string
motions}.
The cohomology ring of $P\Sigma_n$ has been computed by
Jensen--McCammond--Meier \cite{JMM}, who prove that $H^*(P\Sigma_n;\Q)$
is generated by classes $\alpha_{ij}\in H^1(P\Sigma_n;\Q)$ for all $i\neq
j$, $1\leq i,j\leq n$, subject to the relations
\[\alpha_{ij}\wedge \alpha_{ji}=0\qquad\text{and}\qquad
\alpha_{ij}\wedge \alpha_{jk}+\alpha_{kj}\wedge
\alpha_{ik}+\alpha_{ik}\wedge \alpha_{ij}=0.\] The action of $W_n$ is as
follows: $S_n$ acts by permuting the indices, while negating the $j$th
coordinate negates generators of the form $\alpha_{ij}$ and fixes all
other generators. There is a natural embedding $P_n\hookrightarrow
P\Sigma_n$, and the induced surjection $H^*(P\Sigma_n;\Q)\to
H^*(P_n;\Q)$ maps $\alpha_{ij}\mapsto w_{ij}$. Based on the results of
the previous sections, it is natural to make the following conjecture.
\begin{conjecture}
For each fixed $i\geq 0$, the sequence of $W_n$--representations
$\{H^i(P\Sigma_n;\Q)\}$ is uniformly representation
stable.
\end{conjecture}
Note that some element of $W_n$ negates $\alpha_{ij}$ and preserves
$\alpha_{ji}$, but both are mapped to $w_{ij}=w_{ji}\in H^1(P_n;\Q)$. Thus the action
of $W_n$ on $H^*(P\Sigma_n;\Q)$ does not descend to the action of
$S_n$ on $H^*(P_n;\Q)$, though of course the restricted action of
$S_n$ on $H^*(P\Sigma_n;\Q)$ does.
\section{Lie algebras and their homology}
\label{section:liealg}
In this section we show how the phenomenon of representation stability
occurs in the theory of Lie algebras. Our main result,
Theorem~\ref{thm:equivhomLie} below, relates representation stability
for a sequence of Lie algebras to representation stability for their
homology groups. We then give a number of applications, some of which
were already known by other methods.
\subsection{Graded Lie algebras and Lie algebra homology}
\para{Lie algebra homology} Given a Lie algebra $\L$ over $\Q$, its
\emph{Lie algebra homology} $H_*(\L;\Q)$ is computed by the chain
complex
\begin{equation}
\label{eq:HL}
\cdots\longrightarrow \bwedge^3\L
\overset{\partial_3}{\longrightarrow} \bwedge^2\L
\overset{\partial_2}{\longrightarrow} \L
\overset{\partial_1}{\longrightarrow} \Q,
\end{equation}
where the differential is given by
\[\partial_i(x_1\wedge\cdots\wedge x_i)=
\sum_{j<k}(-1)^{j+k+1}[x_j,x_k]\wedge x_1 \wedge\cdots\wedge
\widehat{x}_j\wedge\cdots \wedge \widehat{x}_k\wedge\cdots\wedge
x_i.\] Note that if $\GL(\L)$ denotes the group of Lie algebra
automorphisms of $\L$, the induced action of $\GL(\L)$ on $\bwedge^i
\L$ commutes with $\partial$. Thus an action of any group
$G$ on $\L$ by automorphisms induces an action of $G$ on
$H_i(\L;\Q)$ for each $i$.
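For example, if $\L$ is abelian then every differential in \eqref{eq:HL} vanishes, so that $H_i(\L;\Q)=\bwedge^i\L$ for all $i$; at the other extreme, if $\L$ is a free Lie algebra on a vector space $V$, then $H_1(\L;\Q)=V$ and $H_i(\L;\Q)=0$ for all $i\geq 2$.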
\para{Homology with coefficients} If $M$ is an $\L$--module, the
\emph{homology} $H_*(\L;M)$ \emph{with coefficients in $M$} is the
homology of the complex
\[\cdots\to\bwedge^3\L\otimes M\to \bwedge^2\L\otimes M
\to\L\otimes M\to M\to 0,\] where the differential is the sum of the
previous differential on $\bwedge^*\L$, extended by the identity to
$\bwedge^*\L\otimes M$, plus $\partial'_i\colon
\bwedge^i\L\otimes M\to \bwedge^{i-1}\L\otimes M$ defined by
\[\partial'_i(x_1\wedge\cdots\wedge x_i\otimes m)=
\sum(-1)^{j+1}x_1\wedge\cdots\wedge\widehat{x}_j \wedge\cdots\wedge
x_i\otimes x_j\cdot m .\] A common example is the \emph{adjoint
homology} $H_*(\L;\L)$. If $G$ acts on $\L$ by automorphisms and
acts $\L$--equivariantly on $M$, meaning that $(g\cdot x)\cdot (g\cdot
m)=g\cdot(x\cdot m)$, then $\partial'$ commutes with the action of
$G$, inducing an action of $G$ on $H_i(\L;M)$ for each $i$.
\para{Graded Lie algebras} A Lie algebra $\L$ is called a \emph{graded
Lie algebra} if it decomposes into homogeneous components
$\L=\bigoplus_{j\geq 1} \L^j$ so that $[\L^j,\L^k]\subset
\L^{j+k}$. This induces a grading $\bwedge^i \L=\bigoplus_j
(\bwedge^i \L)^j$ under which, for example, the subspace $\bwedge^3
\L^2\subset\bwedge^3 \L$ has degree 6. From the definition above we
see that the differential $\partial$ preserves this grading. Thus it
descends to a grading $H_i(\L;\Q)=\bigoplus_j H_i(\L;\Q)^j$ of the Lie
algebra homology. If $M=\bigoplus_{j\geq 0} M^j$ is a graded
$\L$--module, meaning that $\L^j\cdot M^k\subset M^{j+k}$, then we
similarly obtain a grading $H_i(\L;M)=\bigoplus H_i(\L;M)^j$.
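For example, a free Lie algebra on a vector space $V$ is graded by bracket length: the first homogeneous component is $V$ itself, and the second is $\bwedge^2 V$, spanned by the brackets $[x,y]$ with $x,y\in V$. Any quotient by a homogeneous ideal, such as a free nilpotent Lie algebra, inherits this grading.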
\begin{definition}[Consistent sequence of Lie algebras]
Let $G_n$ be $\SL_n\Q$, $\GL_n\Q$, or $\Sp_{2n}\Q$. Consider a
sequence of Lie algebras $\{\L_n\}$ with injections
$\L_n\hookrightarrow \L_{n+1}$ and with each $\L_n$ equipped with an
action of $G_n$ by Lie algebra automorphisms. We call the sequence
$\{\L_n\}$ \emph{consistent} if each of the following holds:
\begin{enumerate}
\item The sequence $\{\L_n\}$ is consistent when considered as a sequence of
$G_n$--representations.
\item Each $\L_n$ is graded, and both the maps $\L_n\to \L_{n+1}$ and
the action of $G_n$ preserve the grading.
\item The graded components $\L_n^j$ are finite-dimensional.
\end{enumerate}
\end{definition}
\noindent
It will also be useful to allow our coefficient modules to vary with $n$.
\begin{definition}[Admissible coefficients]
A sequence $\{M_n\}$ of nonzero graded $\L_n$--modules with maps $M_n\to M_{n+1}$ and
equivariant $G_n$--actions is \emph{admissible} if the following conditions hold:
for each $j\geq 0$ the sequence $\{M_n^j\}$ is strongly stable; each $M_n^j$ is
finite-dimensional; and $M_n^j$ is eventually nonzero for at least one $j\geq 0$.
\end{definition}
Our main result in this section is the following. It proves among
other things that strong representation stability for a sequence of
Lie algebras is actually equivalent to strong stability for its
homology. Each direction of this equivalence has applications.
\begin{theorem}[Stability of Lie algebras and their homology]
\label{thm:equivhomLie}
Let $G_n=\SL_n\Q$ or $\GL_n\Q$, and let $\{\L_n\}$ be a consistent
sequence of graded Lie algebras with $G_n$--actions which is type-preserving
(satisfies Condition IV). The following are equivalent:
\begin{enumerate}
\item For each fixed $j\geq 0$ the sequence $\{\L_n^j\}$ is strongly
stable.
\item For each fixed $i,j\geq 0$ the sequence $\{H_i(\L_n;\Q)^j\}$
is strongly stable.
\item For each fixed $i,j\geq 0$ the sequence $\{H_i(\L_n;\L_n)^j\}$
of graded adjoint homology groups is strongly stable.
\item For one admissible sequence of coefficients $\{M_n\}$, the
sequence $\{H_i(\L_n;M_n)^j\}$ is strongly stable for each fixed
$i,j\geq 0$.
\item For every admissible sequence of coefficients $\{M_n\}$, the
sequence $\{H_i(\L_n;M_n)^j\}$ is strongly stable for each fixed
$i,j\geq 0$.
\end{enumerate}
\end{theorem}
\begin{proof}
We will prove the equivalence for $G_n=\SL_n\Q$ and $\GL_n\Q$
simultaneously. Note that by taking coefficients $M=\Q$ concentrated
in grading 0 with trivial $\L$--action, (2) is a special case of (4), so
(2) $\implies$ (4). We will begin by proving that
{(4) $\implies$ (1)}. We will then modify this argument slightly to
prove that (3) $\implies$ (1). Note that under the assumption of
(1), $\{\L_n\}$ is an admissible sequence of coefficients, so that
under this assumption (3) follows from (5). Thus once we have proved
that (1) $\implies$ (5), it immediately follows that (1) $\implies$
(3). Since (5) also trivially implies (2) and (4), this will
complete the proof of equivalence.
\bigskip We first describe the complex computing graded
homology. Let $\L=\bigoplus_{j\geq 1} \L^j$ be a graded Lie algebra,
and $M=\bigoplus_{j\geq 0}M^j$ a graded $\L$--module. Since the
differential preserves the grading, we can decompose the complex
\eqref{eq:HL} computing $H_*(\L;M)$ into its graded pieces. The
slice of this complex in grading $k$, which computes
$H_*(\L;M)^k$, has the form:
\begin{equation}
\label{eq:gradedLhom}
\begin{split}
0\longrightarrow \bwedge^k\L^1\otimes
M^0\longrightarrow\big(\bwedge^{k-2}\L^1\otimes \L^2\otimes
M^0\big)\oplus\big(\bwedge^{k-1}\L^1\otimes
M^1\big)\longrightarrow\cdots \\\cdots\longrightarrow
\bigoplus_{1\leq j,j'<k}\L^j\otimes \L^{j'}\otimes
M^{k-j-j'}\longrightarrow \bigoplus_{1\leq j\leq k}\L^j\otimes
M^{k-j}\longrightarrow M^k\to 0
\end{split}
\end{equation}
Here the $\L^j\otimes \L^{j'}\otimes M^{k-j-j'}$ term is actually
$\bwedge^2 \L^j\otimes M^{k-2j}$ when $j=j'$. Note the following key
property: the graded piece $\L^k$ appears only in the second-to-last
term of \eqref{eq:gradedLhom}, namely in $\L^k\otimes M^0$; all other
terms involve $\L^j$ only for $j<k$.
\bigskip \textbf{(4) $\implies$ (1).} Assume that $\{\L_n\}$ is a
consistent $G_n$--sequence of graded Lie algebras, that $\{M_n\}$ is
an admissible $G_n$--sequence of graded $\L_n$--modules, and that
$\{H_i(\L_n;M_n)^j\}$ is strongly stable for each fixed $i,j\geq
0$. Assume for now that $M_n^0$ is eventually nonzero. We prove that
$\{\L_n^j\}$ is strongly stable by induction. Since we have assumed
that $\L_n\hookrightarrow \L_{n+1}$ is type-preserving, it suffices
to prove that $\{\L_n^j\}$ is uniformly multiplicity stable. In
the next two sections of the proof only, we will
abbreviate ``uniformly multiplicity stable'' to
``stable''. Furthermore, since stability is always taken over
sequences with respect to $n$, we suppress the subscript $n$ for
readability. To sum up, within the next two sections
``$\{H_1(\L;M)^1\}$ is stable'' means ``the sequence of
$G_n$--representations $\{H_1(\L_n;M_n)^1\}$ is uniformly
multiplicity stable''.
We first prove that $\{\L^1\}$ is stable. The following sequence
computes $H_*(\L;M)^1$:
\[0\to \L^1\otimes M^0\overset{\partial}{\longrightarrow} M^1\to 0\]
By assumption $\{M^1\}$ is stable, as are $\{\ker
\partial=H_1(\L;M)^1\}$ and $\{M^1/\im \partial=H_0(\L;M)^1\}$. We see
that $\{\im \partial\}$ is also stable, and thus $\{\L^1\otimes
M^0=\ker\partial \oplus\im\partial\}$ is stable as well. We now appeal
to Theorem~\ref{thm:LRinvert}, which states that if $\{M^0\}$ and
$\{\L^1\otimes M^0\}$ are stable, then $\{\L^1\}$ is stable as well.
The argument in the inductive step is similar. Assume that
$\{\L^j\}$ is stable for each $j<k$. Consider the sequence
\eqref{eq:gradedLhom} computing $H_*(\L;M)^k$. Since $\{M^j\}$ is
stable for each fixed $j\geq 0$, by repeatedly applying
Theorems~\ref{thm:classicalstability}(1) and
\ref{thm:classicalstability}(2) we conclude that each term of
\eqref{eq:gradedLhom} is stable except possibly the term
$\{\L^k\otimes M^0\}$. We now proceed along this complex from left
to right, comparing the complex itself with its homology. Start with
$\partial_k$, whose domain is $\bwedge^k\L^1\otimes M^0$. Since
$\{\bwedge^k\L^1\otimes M^0\}$ and $\{\ker\partial_k=H_k(\L;M)^k\}$
are stable, so is $\{\im\partial_k\}$. Since $\{\im\partial_k\}$
and $\{\ker\partial_{k-1}/\im\partial_k=H_{k-1}(\L;M)^k\}$ are
stable, so is $\{\ker\partial_{k-1}\}$. Since
$\{\ker\partial_{k-1}\}$ and the domain of $\partial_{k-1}$ are
stable, so is $\{\im\partial_{k-1}\}$. Continuing along the complex, we have by induction that
$\{\im\partial_2\}$ is stable, as is
$\{\ker\partial_1/\im\partial_2=H_1(\L;M)^k\}$, so
$\{\ker\partial_1\}$ is stable. Now moving to the right side,
$\{M^k\}$ and $\{M^k/\im\partial_1=H_0(\L;M)^k\}$ are stable, so
$\{\im\partial_1\}$ is stable. Combining these claims, we see that
$\{\ker\partial_1\oplus\im\partial_1=\bigoplus_{1\leq j\leq
k}\L^j\otimes M^{k-j}\}$ is stable. Since all but one term in this
sum is stable, the remaining term $\{\L^k\otimes M^0\}$ is stable as
well. Applying Theorem~\ref{thm:LRinvert}, we conclude that
$\{\L^k\}$ is stable.
In the previous two paragraphs we assumed that $M^0_n$ was
eventually nonzero, but a similar argument applies in general. For
example, consider the case when $M^0_n$ is zero for all $n$, but
$M^1_n$ is eventually nonzero. Then every term containing $M^0$
vanishes in the complex \eqref{eq:gradedLhom} computing
$H_*(\L;M)^k$. Among the remaining terms, $\L^k$ no longer appears,
and $\L^{k-1}$ appears only in the term $\L^{k-1}\otimes M^1$. Thus
assuming that $\{\L^j\}$ is stable for $j<k-1$, an argument like the
one above shows that $\{\L^{k-1}\}$ is stable. This completes the
proof that (4) $\implies$ (1).
\bigskip \textbf{(3) $\implies$ (1).} This proof is exactly like the
proof that (4)$\implies$(1), except that we do not know at the
beginning that $\L_n$ is an admissible sequence of coefficients. To
prove this, first note that the complex computing $H_1(\L;\L)^1$ is
just $0\to \L^1\to 0$, so $\L^1$ must be stable. Since $\L$ has
positive grading, the complex computing $H_k(\L;\L)^k$ has the form
\begin{equation*}
0\longrightarrow
{\bwedge^{k-1}\L^1}\otimes\L^1\to\cdots\to
\bigoplus_{0<j<k}\L^j\otimes \L^{k-j}\longrightarrow
\L^k\longrightarrow 0
\end{equation*} In particular, $\L^k$
appears only in the last term. By induction, every term except
possibly the last is stable, and the homology in each dimension is
stable, so as above we can conclude that $\L^k$ is stable, as
desired. Note that we do not need a separate argument for the case
when $\L^1$ is trivial.
\bigskip \textbf{(1) $\implies$ (5).} Assume that $\{\L_n^j\}$ is
strongly stable for each $j\geq 0$. Let $N_n^{i,j}$ be the piece of
$\bwedge^i\L_n\otimes M_n$ in grading $j$, so that the complex
\eqref{eq:gradedLhom} computing $H_*(\L_n;M_n)^k$ has the form
\[0\to N_n^{k,k}\to N_n^{k-1,k}\to \cdots\to N_n^{2,k}\to
N_n^{1,k}\to N_n^{0,k}\to 0.\] We have already encountered these
subspaces; for example, $N_n^{k,k}=\bwedge^k\L_n^1\otimes M_n^0$ and
$N_n^{1,k}=\bigoplus_{1\leq j\leq k}\L_n^j\otimes M_n^{k-j}$. We
have already assumed that $\{\L_n\}$ and $\{M_n\}$ are strongly
stable. If both are finite-dimensional, then by
Theorems~\ref{thm:classicalstability}(1) and
\ref{thm:classicalstability}(2), the sequence
$\{\bwedge^i\L_n\otimes M_n\}$ is strongly stable for each fixed
$i\geq 0$. Even if $\L_n$ is not finite-dimensional, for fixed $i,j$
the term $N_n^{i,j}$ only involves finitely many graded pieces
$\L_n^\bullet$ and $M_n^\bullet$, as in the example
$N_n^{1,k}=\bigoplus_{1\leq j\leq k}\L_n^j\otimes M_n^{k-j}$
above. Since each graded piece is assumed finite-dimensional, we may
repeatedly apply Theorem~\ref{thm:classicalstability} to conclude
that $\{N_n^{i,j}\}$ is strongly stable, and in particular satisfies
Condition IV.
Let $\partial_i^n$ be the differential $\bwedge^i\L_n\otimes
M_n\to \bwedge^{i-1}\L_n\otimes M_n$, and let $(\partial_i^n)^j\colon N_n^{i,j}\to N_n^{i-1,j}$ be the restriction to $N_n^{i,j}$. The
commutativity of
\begin{equation}
\label{eq:wedgeLnsquare}
\xymatrix{
\bwedge^i\L_n\otimes M_n\ar^{\partial_i^n}[r]\ar[d]&\bwedge^{i-1}\L_n\otimes M_n\ar[d]\\
\bwedge^i\L_{n+1}\otimes M_{n+1}\ar_{\partial_i^{n+1}}[r]&\bwedge^{i-1}\L_{n+1}\otimes M_{n+1}
}
\end{equation}
implies that under the vertical inclusions, $\ker \partial_i^n$ maps to $\ker
\partial_i^{n+1}$, and that $\im\partial_i^n$ maps to $\im \partial_i^{n+1}$. Restricting to grading $j$, we similarly conclude that $\ker (\partial_i^n)^j$ maps to $\ker
(\partial_i^{n+1})^j$ and that $\im(\partial_i^n)^j$ maps to $\im(\partial_i^{n+1})^j$ under the inclusions $N_n^{i,j}\hookrightarrow N_{n+1}^{i,j}$.
Recall that Condition IV for $\{N_n^{i,j}\}$ says that for any
subspace isomorphic to $V(\lambda)_n^k$ in $N_n^{i,j}$, its
$G_{n+1}$--span in $N_{n+1}^{i,j}$ is isomorphic to
$V(\lambda)_{n+1}^k$. Applying this to $\ker(\partial_i^n)^j$ and $\im(\partial_i^n)^j$, the observation above implies that for fixed $i$, $j$ and
$\lambda$, the multiplicity of $V(\lambda)_n$ in
$\ker(\partial_i^n)^j$ and in $\im(\partial_i^n)^j$ is nondecreasing in $n$. The sum of these representations is $N_n^{i,j}$, whose decomposition is eventually constant by uniform multiplicity stability. Once the decomposition of $N_n^{i,j}$ has stabilized, an increase in $\ker(\partial_i^n)^j$ would necessitate a corresponding decrease in $\im(\partial_i^n)^j$, contradicting the observation for $\im(\partial_i^n)^j$, and vice versa. We conclude that $\{\ker(\partial_i^n)^j\}$ and $\{\im(\partial_i^n)^j\}$ are uniformly multiplicity stable for each $i$ and $j$, stabilizing once $N_n^{i,j}$ does. Thus for each $i$ and $j$ the
quotient $\{H_i(\L_n;M_n)^j=\ker
(\partial_{i-1}^n)^j/\im(\partial_i^n)^j\}$ is uniformly multiplicity stable, as desired.
Since $\{N_n^{i,j}\}$ is uniformly
multiplicity stable, for fixed $i,j\geq 0$ and sufficiently large
$n$ we have the following property: only finitely many partitions
$\lambda$ occur in $N_n^{i,j}$ (meaning the multiplicity of
$V(\lambda)_n$ in $N_n^{i,j}$ is nonzero). This property passes to
the subquotient $H_i(\L_n;M_n)^j$. But a sequence
$\{H_i(\L_n;M_n)^j\}$ which is multiplicity stable yet involves only
finitely many irreducibles is necessarily uniformly multiplicity
stable, and so we can promote Condition III to Condition III$'$.
By assumption $\L_n$ and $M_n$ are strongly stable. By
Proposition~\ref{prop:SLstrong}, this implies that $P_{n+1}$ acts
trivially on the image of $\L_n$ in $\L_{n+1}$ and of $M_n$ in
$M_{n+1}$. As noted above, this implies that $P_{n+1}$ acts
trivially on the image of $\bwedge^i\L_n\otimes M_n$ in
$\bwedge^i\L_{n+1}\otimes M_{n+1}$ for each $i$. But this condition
passes to subquotients, so $P_{n+1}$ acts trivially on the image of
$H_i(\L_n;M_n)$ in $H_i(\L_{n+1};M_{n+1})$, verifying Condition IV
for the sequences $\{H_i(\L_n;M_n)\}$ and
$\{H_i(\L_n;M_n)^j\}$. Since the $N_n^{i,j}$ are finite-dimensional,
the same is true of their subquotients $H_i(\L_n;M_n)^j$. By
Remark~\ref{remark:strong}, for a finite-dimensional sequence
Conditions III$'$ and IV together imply Conditions I and II.
This concludes the proof of strong stability of $\{H_i(\L_n;M_n)^j\}$.
\end{proof}
\para{Symplectic Lie algebras}
In the proof of (1) $\implies$ (5) of Theorem~\ref{thm:equivhomLie}
we used the assumption that
$\{\L_n\}$ is type-preserving. For representations of $\Sp_{2n}\Q$ we
do not have the appropriate analogue of
Proposition~\ref{prop:SLstrong}, so the argument does not work in this
case. But examining the proof above, we did not use that $\{\L_n\}$ is
type-preserving in the implications (4) $\implies$ (1) or (3)
$\implies$ (1). We needed only Theorem~\ref{thm:LRinvert} and
Theorem~\ref{thm:classicalstability} for uniformly multiplicity stable
sequences, and these theorems apply to $\Sp_{2n}\Q$--representations
as well. Thus we deduce the following from the proof of
Theorem~\ref{thm:equivhomLie}.
\begin{theorem}
\label{thm:equivSpLie}
Let $\{\L_n\}$ be a consistent $\Sp_{2n}\Q$--sequence of graded Lie
algebras. If the sequence $\{H_i(\L_n;\Q)^j\}$ is uniformly
multiplicity stable for each fixed $i,j\geq 0$, then the sequence
$\{\L_n^j\}$ is uniformly multiplicity stable for each fixed $j\geq
0$.
\end{theorem}
\subsection{Simple representation stability}
Certain classical families of representations satisfy a stronger form
of stability, which is in some sense as close to actual stability as a
sequence of $\GL_n\Q$--representations can be. Consider a partition
$\lambda$ with $\ell=\ell(\lambda)$ rows. As noted above, $\Schur_\lambda(\Q^n)$ is
trivial for $n<\ell$, and for such $n$ there is no irreducible
representation which could be called $V(\lambda)_n$. A sequence is
called simply representation stable if this is the only obstruction to
having constant multiplicities.
\begin{definition}[Simple representation stability] A consistent
sequence $\{V_n\}$ of $\GL_n\Q$--representations is called
\emph{simply representation stable} if for all $n\geq 1$ it
satisfies Conditions I and II, and if in addition it satisfies the
following:
\begin{enumerate}
\item[{\bf SIII.}] For each partition $\lambda$ with $\ell=\ell(\lambda)$ nonzero
rows, the multiplicity of the irreducible representation
$\Schur_\lambda(\Q^n)$ in $V_n$ is constant for all $n\geq \ell$. For
any pseudo-partition $\lambda=(\lambda_1\geq \cdots\geq
\lambda_\ell)$ which is not a partition (meaning $\lambda_\ell<0$), the
multiplicity of $V(\lambda)_n$ in $V_n$ is 0.
\item[{\bf SIV.}] For any subrepresentation $W\subset V_n$ so that
$W\approx \Schur_\lambda(\Q^n)$, the span of the $\GL_{n+1}\Q$--orbit of $\phi_n(W)$ is
isomorphic to $\Schur_\lambda(\Q^{n+1})$.
\end{enumerate}
\end{definition}
If we interpret $V(\lambda)_n$ as being trivial when $n$ is less than
the number of rows of $\lambda$, then simple representation stability
says there is a decomposition
\[V_n=\bigoplus c_{\lambda}\Schur_\lambda(\Q^n)=\bigoplus c_{\lambda}V(\lambda,0)_n\] over partitions $\lambda$
which is totally independent of $n$ and preserved by the maps
$V_n\hookrightarrow V_{n+1}$. Then Theorem~\ref{thm:equivhomLie} has
the following strengthening.
\begin{theorem}
\label{thm:simplehomLie}
Let $\{\L_n\}$ be a consistent $\GL_n\Q$--sequence of graded Lie
algebras which is type-preserving,
and $\{M_n\}$ an admissible sequence of coefficients which is simply stable. Then
Theorem~\ref{thm:equivhomLie} remains true if ``strongly stable'' is
replaced everywhere by ``simply stable''.
\end{theorem}
\begin{proof} We sketch the proof. The characterization of
Proposition~\ref{prop:SLstrong} still holds: given Conditions I, II,
and SIII, Condition SIV is equivalent to Condition IV$'$. Examining
the proof of Theorem~\ref{thm:classicalstability}, and in particular
that the formulas \eqref{eq:LR} and \eqref{eq:SLpleth} are
independent of $n$, we conclude that if $\{V_n\}$ and $\{U_n\}$ are
simply stable, the same is true of $\{V_n\otimes U_n\}$ and
$\{\Schur_\lambda(V_n)\}$. Similarly, from the proof of
Theorem~\ref{thm:LRinvert}, we conclude that if $\{U_n\}$ and
$\{V_n\otimes U_n\}$ satisfy Condition SIII, the same is true of
$\{V_n\}$. This has the following implications for the proof of
Theorem~\ref{thm:equivhomLie}.
In the proofs of (4) $\implies$ (1) and of (3) $\implies$ (1) we can
replace ``stable'' with ``simply stable'' everywhere, and the
argument remains valid. For (1) $\implies$ (5), if the sequence
$\{\L_n\}$ is simply stable, the same is true of
$\{\bwedge^i\L_n\otimes M_n\}$ and of $\{N_n^{i,j}\}$. As before,
the multiplicity of $\Schur_\lambda(\Q^n)$ in $\ker
(\partial_i^n)^j$ and $\im(\partial_i^n)^j$ is nondecreasing. Their sum is the multiplicity of
$\Schur_\lambda(\Q^n)$ in $N_n^{i,j}$, which by simple stability of
$\{N_n^{i,j}\}$ is finite and constant, so the same is true for
$\ker\partial_i^n$ and $\im\partial_i^n$. Since this holds for each
$i$, we conclude that the multiplicity of $\Schur_\lambda(\Q^n)$ in
$H_i(\L_n;M_n)^j$ is constant, as desired. Condition SIV follows as
before, and we conclude that $\{H_i(\L_n;M_n)^j\}$ is simply stable.
\end{proof}
\subsection{Applications and examples}
\label{section:applications}
In this subsection we give a number of applications of
Theorem~\ref{thm:equivhomLie}, Theorem~\ref{thm:equivSpLie}, and
Theorem~\ref{thm:simplehomLie}.
\para{Free Lie algebras} Let $V_n$ be a
$\Q$--vector space with basis $x_1,\ldots,x_n$. Let
$\L(V_n)=\L(x_1,\ldots,x_n)$ be the free Lie algebra on $V_n$. The
action of $\GL(V_n)\approx \GL_n\Q$ on $V_n$ induces an action of
$\GL_n\Q$ on $\L(V_n)$. The Lie algebra $\L(V_n)$ has a natural
grading
\[\L(V_n)=\bigoplus_{i\geq 1} \L_i(V_n)\] which is preserved by the
action of $\GL_n\Q$. The obvious inclusion $V_n \hookrightarrow
V_{n+1}$ induces natural maps $\L(V_n)\hookrightarrow \L(V_{n+1})$ and
$\L_i(V_n)\hookrightarrow \L_i(V_{n+1})$, and these maps are
compatible with the inclusion $\GL_n\Q \hookrightarrow \GL_{n+1}\Q$.
The free Lie algebra $\L(V_n)$ has the following homology groups for
all $n\geq 1$:
\[H_i(\L(V_n);\Q)=
\begin{cases}
\Q &i=0\\
V_n &i=1\\
0 & i\geq 2
\end{cases}\] This follows from \eqref{eq:freepn} below, and can also
be checked directly. As $\GL_n\Q$--representations, these are
$\Q=\Schur_{(0)}(\Q^n)$, with grading 0, and $V_n=\Schur_{(1)}(\Q^n)$,
with grading 1. Thus $\{H_*(\L(V_n);\Q)\}$ is simply
stable, so Theorem~\ref{thm:simplehomLie} gives the following
corollary.
\begin{corollary}\label{cor:freeLie}
For each fixed $m\geq 1$, the sequence of $\GL_n\Q$--representations
$\{\L_m(V_n)\}$ of degree $m$ components of the free Lie algebras
$\L(V_n)$ is simply representation stable.
\end{corollary}
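The direct check mentioned above can be carried out at the level of
enveloping algebras. The universal enveloping algebra of a free Lie
algebra is the tensor algebra, $U(\L(V_n))=T(V_n)$, and the trivial
module $\Q$ admits the length-one free resolution
\[0\longrightarrow T(V_n)\otimes V_n\longrightarrow
T(V_n)\longrightarrow \Q\longrightarrow 0,\]
where the first map is $t\otimes v\mapsto tv$ and the second is the
augmentation. Applying $\Q\otimes_{T(V_n)}(-)$ to the resolution
yields the complex $0\to V_n\to \Q\to 0$ with zero differential, so
$H_i(\L(V_n);\Q)=\operatorname{Tor}^{T(V_n)}_i(\Q,\Q)$ is $\Q$ for
$i=0$, is $V_n$ for $i=1$, and vanishes for $i\geq 2$.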
In fact, the multiplicities of $V(\lambda)_n$ in $\L_m(V_n)$ are
known, at least in some sense. For the irreducible representation
$V(\lambda)_n$ to appear in $\L_m(V_n)$ it is necessary that $\lambda$
be a partition of $m$. For such $\lambda$, Bakhturin \cite[Proposition
3, \S3.4.5]{Ba} gives the following formula for the multiplicity. Let
$\chi_{\lambda}$ denote the character of the irreducible
representation of $S_m$ associated to $\lambda$, and let $\tau$ be the
$m$--cycle $(1\,2\,\ldots\, m)$. Then the multiplicity of
$V(\lambda)_n$ in $\L_m(V_n)$ is
\[c_\lambda\coloneq \frac{1}{m}\sum_{d|m}\mu(d)\chi_\lambda(\tau^{m/d}).\]
Despite this formula, due to the dependence on the values of irreducible
characters $\chi_\lambda$ of the symmetric group, explicitly
calculating these multiplicities remains an active area of research.
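For small $m$ the formula is easy to evaluate by hand. For $m=2$ the
divisors are $d=1,2$, so
\[c_\lambda=\tfrac{1}{2}\left(\chi_\lambda(e)-\chi_\lambda(\tau)\right),\]
which is $1$ for $\lambda=(1,1)$ and $0$ for $\lambda=(2)$,
recovering $\L_2(V_n)=\bwedge^2V_n=V(1,1)_n$. Similarly, for $m=3$
one finds
$c_\lambda=\tfrac{1}{3}\left(\chi_\lambda(e)-\chi_\lambda(\tau)\right)$,
which is $1$ for $\lambda=(2,1)$ and $0$ for $\lambda=(3)$ and
$\lambda=(1,1,1)$, so $\L_3(V_n)=V(2,1)_n$.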
\para{Free nilpotent Lie algebras}
Let $\mathcal{N}_k(n)$ be the level $k$ truncation of the free Lie
algebra of rank $n$, meaning:
\[\mathcal{N}_k(n)=\L(V_n)/\L_{k+1}(V_n)=\bigoplus_{i\leq k} \L_i(V_n).\]
This is the free $k$--step nilpotent Lie algebra on $V_n$. Since
$\mathcal{N}_k(n)$ is a truncation of $\L(V_n)$,
Corollary~\ref{cor:freeLie} tells us that for each fixed $i$ the
sequence of $i^{\text{th}}$ graded pieces
$\{\mathcal{N}_k(n)^i=\L_i(V_n)\}$ is simply stable. Thus as a
corollary of Theorem~\ref{thm:simplehomLie}, we obtain the following
theorem of Tirao \cite[Theorem 2.9]{Ti}.
\begin{corollary}[Tirao]
\label{corollary:nilp}
Fix any $k\geq 1$. Then for each fixed $i\geq 0$ the sequence of
$\GL_n(\Q)$--representations $\{H_i(\mathcal{N}_k(n);\Q)\}$ is
simply stable.
\end{corollary}
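Concretely, for $k=2$ the free $2$--step nilpotent Lie algebra has
the simple description
\[\mathcal{N}_2(n)=V_n\oplus \bwedge^2 V_n,\qquad [u,v]=u\wedge v
\ \text{ for } u,v\in V_n,\]
with $\bwedge^2 V_n$ central.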
The novelty of our deduction of Corollary~\ref{corollary:nilp} is
that we start with the homology of the free Lie algebra, which is
quite easy to compute; we then pass to the free Lie algebra itself
using one direction of Theorem~\ref{thm:equivhomLie}, then to its
nilpotent truncation, and finally to the homology of that truncation
using the reverse implication in Theorem~\ref{thm:equivhomLie}.
We note that, while the $\mathcal{N}_k(n)$ are not themselves
complicated, their homology is quite complicated. For $k=2$, it
follows from work of Kostant (see \cite[Theorem 3.1]{CT1}) that the
multiplicity of $V(\lambda)_n$ in $H_i(\mathcal{N}_2(n);\Q)$ is 0
unless $\lambda$ is self-conjugate (i.e., its Young diagram is
symmetric under reflection across the diagonal) and has exactly $i$
boxes above the diagonal, in which case the multiplicity is 1. No
formula is known in general, but for some small values of $i$ and $k$,
Tirao \cite{Ti} computes the decomposition of this homology
explicitly. For example, he proves that (in our terminology):
\begin{align*}
H_3(\mathcal{N}_3(2);\Q)&=V(4,2)\\
H_3(\mathcal{N}_3(3);\Q)&=V(4,2)\oplus V(2,2,2)\oplus V(3,1,1)\oplus
V(3,3,1)\oplus V(4,2,1)\oplus V(5,1,1)\\
H_3(\mathcal{N}_3(4);\Q)&=V(4,2)\oplus V(2,2,2)\oplus V(3,1,1)\oplus
V(3,3,1)\oplus V(4,2,1)\oplus
V(5,1,1)\\&\quad\qquad\oplus V(3,1,1,1)\oplus V(3,2,1,1)\\
H_3(\mathcal{N}_3(n);\Q)&=V(4,2)\oplus V(2,2,2)\oplus V(3,1,1)\oplus
V(3,3,1)\oplus V(4,2,1)\oplus V(5,1,1)\\&\quad\qquad\oplus
V(3,1,1,1)\oplus V(3,2,1,1)\oplus V(2,2,1,1,1) \qquad\quad\text{for
}n\geq 5
\end{align*}
Note that, as guaranteed by simple stability, each representation
$V(\lambda)_n$ first appears in $H_3(\mathcal{N}_3(n))$ when $n$ is the
number of rows of $\lambda$, and persists with the same multiplicity
thereafter.
Also from Theorem~\ref{thm:simplehomLie}, we obtain the following
corollary on the homology of $\mathcal{N}_k(n)$ with coefficients
in the adjoint representation.
\begin{corollary}\label{cor:adjnil}
Fix $k\geq 1$. Then for each fixed $i\geq 0$ the sequence of
$\GL_n(\Q)$--representations
$\{H_i(\mathcal{N}_k(n);\mathcal{N}_k(n))\}$ is simply stable.
\end{corollary}
The adjoint homology of $\mathcal{N}_2(n)$ was studied in
Cagliero--Tirao \cite{CT1}, and in this case
Corollary~\ref{cor:adjnil} can be deduced from \cite[Theorem 4.4]{CT1}
combined with the description of $H_i(\mathcal{N}_2(n);\Q)$ above. For
$k\geq 3$, to the best of our knowledge this result was not
previously known.
\para{Continuous cohomology and pseudo-nilpotent groups} The
\emph{continuous cohomology} of a group $\Gamma$ is the direct limit
\[H^*_{\text{cts}}(\Gamma;\Q)=\lim_{\longrightarrow} H^*(\Gamma/K;\Q)\]
of the cohomology of all its finitely generated nilpotent quotients
$\Gamma/K$. The basic properties of continuous cohomology are
established in Hain \cite{Ha2}. There is an obvious comparison map
$H^*_{\text{cts}}(\Gamma;\Q)\to H^*(\Gamma;\Q)$, which is always an
isomorphism on $H^0$ and $H^1$, and is always injective on $H^2$. A
finitely generated group $\Gamma$ is called \emph{pseudo-nilpotent} if
this map is an isomorphism in every degree.
Nomizu's theorem implies that for finitely generated groups,
$H^*_{\text{cts}}(\Gamma;\Q)$ coincides with the continuous cohomology
$H^*_{\text{cts}}(\mathfrak{g};\Q)$ of the Malcev Lie algebra
$\mathfrak{g}$ of $\Gamma$. The Malcev Lie algebra is a certain
pronilpotent $\Q$--Lie algebra associated to $\Gamma$. We will only
need the following property. Recall that the \emph{lower central
series}
\[\Gamma=\Gamma_1 >\Gamma_2 > \cdots\]
of a group $\Gamma$ is defined inductively by $\Gamma_1\coloneq
\Gamma$ and $\Gamma_{n+1}\coloneq [\Gamma,\Gamma_n]$. The
\emph{associated graded (rational) Lie algebra} $\gr(\Gamma)$ is the
$\Q$--Lie algebra defined by \[\gr(\Gamma)\coloneq
\bigoplus_{n=1}^\infty (\Gamma_n/\Gamma_{n+1})\otimes\Q\] where the
Lie bracket is induced by the group commutator. The Malcev Lie
algebra $\mathfrak{g}$ of $\Gamma$ has the property that the graded
Lie algebra associated to its lower central series is isomorphic to
$\gr(\Gamma)$. For the groups we consider, we have an isomorphism
$H^*_{\text{cts}}(\mathfrak{g};\Q)\approx H^*(\gr(\Gamma);\Q)$. Note
that the automorphism group $\Aut(\Gamma)$ acts on $\Gamma$, preserves
each $\Gamma_i$, and so acts on $\gr(\Gamma)$. It is well known that this action
factors through the representation
\begin{equation}
\label{eq:autrep}
\Aut(\Gamma)\to \Aut(\Gamma/[\Gamma,\Gamma]).
\end{equation}
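For concreteness, the Lie bracket on $\gr(\Gamma)$ induced by the
group commutator is the following: for $x\in\Gamma_m/\Gamma_{m+1}$
and $y\in\Gamma_p/\Gamma_{p+1}$ with lifts $\tilde{x}\in\Gamma_m$ and
$\tilde{y}\in\Gamma_p$, set
\[[x,y]\coloneq \text{image of }
\tilde{x}\tilde{y}\tilde{x}^{-1}\tilde{y}^{-1}\text{ in }
\Gamma_{m+p}/\Gamma_{m+p+1}.\]
This is well defined because $[\Gamma_m,\Gamma_p]\subset\Gamma_{m+p}$,
and it extends $\Q$--bilinearly to all of $\gr(\Gamma)$.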
For the free group $F_n$ it is well known that
$\gr(F_n)=\L(H_1(F_n;\Q))=\L(V_n)$. Since free groups are
pseudo-nilpotent (see \cite[Corollary 5.10]{Ha2}), we conclude that
\begin{equation}
\label{eq:freepn}
H^*(\gr(F_n);\Q)=H^*(F_n;\Q)=\Q\oplus
\Q^n.
\end{equation}
In this case the representation \eqref{eq:autrep} gives the natural
action of $\GL_n\Z$ on $\L(V_n)$, which extends to a representation of
$\GL_n\Q$. In Corollary~\ref{cor:freeLie}, we noted that the
isomorphism $H^*(\gr(F_n);\Q)=H^*(F_n;\Q)$ implies that
$\{H^i(\gr(F_n);\Q)\}$ is simply stable for each $i$, and so we concluded
that the sequence of graded components $\{\gr(F_n)^j\}$ is simply
stable for each $j\geq 0$ as well.
We would like to mimic this argument for surface groups. Let
$\pi_g=\pi_1(S_g)$ be the fundamental group of the closed, connected,
orientable surface of genus
$g\geq 2$. Labute \cite{La} proved that $\gr(\pi_g)$ is the quotient of the
free Lie algebra $\L(H_1(S_g;\Q))$ by the ideal generated by the
symplectic form:
\[\gr(\pi_g)\approx \L(H_1(S_g;\Q))/([a_1,b_1]+\cdots+[a_g,b_g])\]
Further, in this case the representation \eqref{eq:autrep} is known to
factor through the integral symplectic group $\Sp_{2g}\Z$. Hain
proved \cite[Proposition 5.11]{Ha2} that $\pi_g$ is pseudo-nilpotent,
and that the continuous cohomology of its Malcev Lie algebra coincides
with $H^*(\gr(\pi_g);\Q)$. Thus $H^*(\gr(\pi_g);\Q)\approx
H^*(S_g;\Q)$, which as an $\Sp_{2g}\Q$--representation decomposes as
follows for all $g\geq 1$:
\[H^i(S_g;\Q)=
\begin{cases}
V(0) &i=0,2\\
V(1) &i=1\\
0 & i\geq 3
\end{cases}\]
We do not have maps $\pi_g\to \pi_{g+1}$, but we do have surjections
$\pi_{g+1}\to \pi_g$ inducing surjections $\gr(\pi_{g+1})\to
\gr(\pi_{g})$, which induce maps $H^*(\gr(\pi_g))\to
H^*(\gr(\pi_{g+1}))$. For each $i$, this makes the cohomology
groups $\{H^i(\gr(\pi_g);\Q)=H^i(S_g;\Q)\}$ into a uniformly stable
sequence of
$\Sp_{2g}\Q$--representations. Theorem~\ref{thm:equivSpLie}
works just as well for cohomology, so we obtain as a corollary the
following result of Hain \cite[Corollary 8.5]{Ha}.
\begin{corollary}[Hain]\label{cor:hain}
For each fixed $i\geq 0$, the sequence of
$\Sp_{2g}\Q$--representations given by the graded components
$\{\gr(\pi_g)^i\}$ is uniformly representation stable.
\end{corollary}\pagebreak
\para{Homology of symplectic Lie algebras}
Many sequences of Lie algebras $\L_n$ are naturally
$\Sp_{2n}\Q$--representations and in fact are uniformly stable, such
as the Heisenberg Lie algebras considered below. We would like to
conclude stability for the homology groups of this sequence of Lie
algebras $\{H_i(\L_n;\Q)\}$ as we did in
Corollary~\ref{corollary:nilp} above. But without a good notion of
strong stability for $\Sp_{2n}\Q$--representations, the proof of the
necessary implication in Theorem~\ref{thm:equivhomLie}, namely (1)
$\implies$ (5), does not work. However, in specific cases the argument
can be successfully modified.
We will give a concrete example of such a modification, but first we
extract from the proof of (1) $\implies$ (5) in
Theorem~\ref{thm:equivhomLie} exactly where strong stability was
used. Ignoring the grading for the moment, the homology $H_*(\L_n;\Q)$
is computed by the rows of the complex
\begin{equation}\label{eq:Heis}
\xymatrix{
\cdots\ar[r]&\bwedge^3\L_n\ar^{\partial_3}[r]\ar[d]&
\bwedge^2\L_n\ar^{\partial_2}[r]\ar[d]&
\L_n\ar^{\partial_1}[r]\ar[d]&
\Q\\
\cdots\ar[r]&\bwedge^3\L_{n+1}\ar^{\partial_3}[r]&
\bwedge^2\L_{n+1}\ar^{\partial_2}[r]&
\L_{n+1}\ar^{\partial_1}[r]&
\Q
}
\end{equation}
We know stability holds for each term $\{\bwedge^i\L_n\}$, but the
differentials between them introduce a possible source of
instability. To draw conclusions about stability for
$H_i(\L_n;\Q)=\ker\partial_i/\im\partial_{i+1}$, we need control over
how the differentials $\partial_i$ interact with the vertical maps
$\bwedge^i\L_n\hookrightarrow \bwedge^i\L_{n+1}$. For example, we
do not know that $\{\ker\partial_i\}$ is stable.
In Theorem~\ref{thm:equivhomLie}, the type-preserving assumption
guaranteed that the vertical maps preserved isotypic components. Then
since the vertical maps are injective, the commutativity of
\eqref{eq:Heis} implied that even if $\{\ker\partial_i\}$ and $\{\im\partial_i\}$ did not
stabilize immediately, their decompositions were nondecreasing in $n$. Thus once the terms, which here would be $\{\bwedge^i\L_n=\ker \partial_i\oplus \im\partial_i\}$, stabilized, the summands $\{\ker\partial_i\}$ and $\{\im\partial_i\}$ were forced to stabilize as well. However, for
$\Sp_{2n}\Q$--representations the vertical maps for $\bwedge^i\L_n$
are almost never type-preserving, as we will see in detail below. Thus
a new idea is needed.
Let $H_n\coloneq V(\lambda_1)_n=\Q^{2n}$ be the standard
representation of $\Sp_{2n}\Q$. For $i\leq n$, we have the
decomposition into irreducibles
\[\bwedge^i H_n=V(\lambda_i)_n\oplus V(\lambda_{i-2})_n\oplus \cdots
\oplus V(\lambda_\epsilon)_n\] where $\epsilon=0$ or $1$ if $i$ is even or
odd respectively. The inclusion $\bwedge^i H_n\hookrightarrow
\bwedge^i H_{n+1}$ does \emph{not} respect this decomposition. In
fact, we have the following:
\begin{lemma}
\label{lem:wedgeH}
For $n>i$ and any irreducible representation
$V(\lambda_k)_n\subset\bwedge^i H_n$ with $i>k$, the
$\Sp_{2n+2}\Q$--span of $V(\lambda_k)_n$, considered as a subspace
of $\bwedge^i H_{n+1}$, is isomorphic to
\[V(\lambda_{k+2})_{n+1}\oplus
V(\lambda_k)_{n+1}.\]
\end{lemma}
\begin{proof}
For an overview of the symplectic representation theory used here,
see \cite[\S 17]{FH}. For each $i\leq n$ there is a unique
contraction $C\colon \bwedge^iH\to \bwedge^{i-2}H$, with $\ker
C\approx V(\lambda_i)$ generated by $a_1\wedge \cdots \wedge
a_i$. This induces a filtration of $\bwedge^i H$ by \[\ker
C^j\approx V(\lambda_i)\oplus \cdots\oplus V(\lambda_{i-2j+2}).\] A
complement to $\ker C$ is given by the image of $\cdot\wedge
\omega_n\colon \bwedge^{i-2}H\to \bwedge^i H$, where
\[\omega_n=a_1\wedge b_1+\cdots+a_n\wedge b_n\] spans the trivial
subrepresentation of $\bwedge^2 H$. It follows that
$V(\lambda_k)_n\subset \bwedge^iH_n$ is the $\Sp_{2n}\Q$--span of
$v_{k,n}\coloneq a_1\wedge\cdots\wedge a_k\wedge (\omega_n)^j$,
where $j=(i-k)/2$. Let $W\subset \bwedge^i H_{n+1}$ be the desired
representation, the $\Sp_{2n+2}\Q$--span of $v_{k,n}$. The
contractions $C$ commute with the inclusion $\bwedge^i
H_n\hookrightarrow \bwedge^i H_{n+1}$. Thus since $v_{k,n}$ is
contained in $\ker C^{j+1}$ but not $\ker C^j$, we know that $W$ is
contained in $V(\lambda_i)_{n+1}\oplus \cdots\oplus
V(\lambda_k)_{n+1}$ but not in $V(\lambda_i)_{n+1}\oplus
\cdots\oplus V(\lambda_{k+2})_{n+1}$.
Under the inclusion $\bwedge^2 H_n\hookrightarrow \bwedge^2
H_{n+1}$, $\omega_n$ is not mapped to $\omega_{n+1}$. Instead we
have $\omega_n=\omega_{n+1}-a_{n+1}\wedge b_{n+1}$, and so
$(\omega_n)^j=(\omega_{n+1})^j-(\omega_{n+1})^{j-1}\wedge
a_{n+1}\wedge b_{n+1}.$ Writing \[v_{k,n}=a_1\wedge \cdots\wedge
a_k\wedge (\omega_{n+1}-a_{n+1}\wedge b_{n+1})\wedge
(\omega_{n+1})^{j-1},\] we see that $v_{k,n}$ is in the image of
$\cdot\wedge (\omega_{n+1})^{j-1}\colon \bwedge^{k+2}H_{n+1}\to
\bwedge^iH_{n+1}$. Combined with the above bound on $W$, this
implies that $W$ is contained in $V(\lambda_{k+2})_{n+1}\oplus
V(\lambda_k)_{n+1}$.
Since we know that $W$ is not contained in $V(\lambda_{k+2})_{n+1}$,
it remains to show that $W$ is not contained in
$V(\lambda_k)_{n+1}$. Note that
$C^j(v_{k,n})=\frac{(n-k)!}{(n-k-j)!}a_1\wedge \cdots\wedge
a_k$. Then if $A\in \Sp_{2n+2}\Q$ is any element fixing
$a_1\wedge\cdots\wedge a_k$ but not $v_{k,n}$ (for example, the
permutation matrix exchanging $a_n$ with $a_{n+1}$ and $b_n$ with
$b_{n+1}$), we have $C^j(A\cdot v_{k,n})=A\cdot
C^j(v_{k,n})=C^j(v_{k,n})$. In particular, the vector
$v_{k,n}-A\cdot v_{k,n}$ lies in $\ker C^j$. As explained above,
this shows that $W\approx V(\lambda_{k+2})_{n+1}\oplus
V(\lambda_k)_{n+1}$.
\end{proof}
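The smallest case of Lemma~\ref{lem:wedgeH} can be checked by hand.
Take $i=2$ and $k=0$, so that $V(\lambda_0)_n=V(0)_n$ is the trivial
summand of $\bwedge^2H_n$ spanned by $\omega_n$. Inside $\bwedge^2
H_{n+1}$ we have $\omega_n=\omega_{n+1}-a_{n+1}\wedge b_{n+1}$, which
is neither a multiple of $\omega_{n+1}$ nor contained in $\ker
C=V(\lambda_2)_{n+1}$, since $C(\omega_n)\neq 0$. Hence the
$\Sp_{2n+2}\Q$--span of $\omega_n$ is all of
\[\bwedge^2H_{n+1}=V(\lambda_2)_{n+1}\oplus V(0)_{n+1},\]
as the lemma asserts.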
\begin{remark}
\label{remark:substitute}
Lemma~\ref{lem:wedgeH} can serve as a substitute for the
type-preserving assumption in the argument outlined before the
lemma. Take any sequence of maps $f_n\colon
\bwedge^i H_n\to V_n$ commuting with the inclusions $\bwedge^i
H_n\hookrightarrow \bwedge^i H_{n+1}$ and $V_n\hookrightarrow
V_{n+1}$. If $V(\lambda_k)_n\subset \ker f_n$,
Lemma~\ref{lem:wedgeH} implies that $V(\lambda_{k+2})_{n+1}\oplus
V(\lambda_k)_{n+1}\subset \ker f_{n+1}$. Thus the multiplicity of
$V(\lambda_k)_n$ in $\ker f_n$ is nondecreasing. (In
fact, by induction we see there is some $k\leq i$ so that $\ker f_n$ is
always exactly $V(\lambda_i)_n\oplus \cdots\oplus V(\lambda_k)_n$
for sufficiently large $n$.) The same applies to $\im f_n$.
\end{remark}
\para{Heisenberg Lie algebras} As an explicit example to which Remark~\ref{remark:substitute} applies,
we consider the Heisenberg Lie algebras $\H_{2n+1}$, defined as
follows. Given a symplectic form $\omega$ on $\Q^{2n}$, this is the
central extension
\[0\to \Q\to \H_{2n+1}\to \Q^{2n}\to 0\] classified by $\omega\in
H^2(\Q^{2n};\Q)\approx \bwedge^2\Q^{2n}$. Since the natural action of
$\Sp_{2n}\Q$ on $\Q^{2n}$ preserves $\omega$, it extends to an action
of $\Sp_{2n}\Q$ on $\H_{2n+1}$. There is an obvious inclusion
$\H_{2n+1}\hookrightarrow\H_{2n+3}$ which makes $\{\H_{2n+1}\}$ into a
consistent sequence of Lie algebras. As an
$\Sp_{2n}\Q$--representation $\H_{2n+1}\approx V(0)\oplus
V(\lambda_1)$, so the sequence $\{\H_{2n+1}\}$ is uniformly representation
stable.
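Concretely, if $a_1,b_1,\ldots,a_n,b_n$ is a symplectic basis of
$\Q^{2n}$, then $\H_{2n+1}$ has basis $a_1,b_1,\ldots,a_n,b_n,z$ with
brackets
\[[a_i,b_i]=z\quad\text{for }1\leq i\leq n,\]
all other brackets of basis elements being zero. The central element
$z$ spans the trivial summand $V(0)$, while the $a_i$ and $b_i$ span
the standard representation $V(\lambda_1)$.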
Note that \[\bwedge^i \H_{2n+1}\approx \bwedge^i(H_n\oplus \Q)\approx
\bwedge^i H_n\oplus \bwedge^{i-1}H_n\] and that this decomposition is
respected by the maps
$\bwedge^i\H_{2n+1}\hookrightarrow\bwedge^i\H_{2n+3}$. So $\{\bwedge^i \H_{2n+1}\}$
is uniformly stable for each $i\geq 0$ and we can apply
Remark~\ref{remark:substitute} to the complex
\[ \cdots\longrightarrow \bwedge^3\H_{2n+1}
\overset{\partial_3}{\longrightarrow} \bwedge^2\H_{2n+1}
\overset{\partial_2}{\longrightarrow} \H_{2n+1}
\overset{\partial_1}{\longrightarrow} \Q,
\]
to conclude that $\{\ker \partial_i\}$ and $\{\im\partial_i\}$ are uniformly stable for each
$i$. This shows that the homology of $\H_{2n+1}$ is
uniformly stable. (It turns out that in the complex above at most one
differential is nonzero in each grading, so it is easy to compute by
hand that $H_i(\H_{2n+1};\Q)=V(\lambda_i)$ for $i\leq n$ and see
uniform stability directly; see, e.g., \cite[Theorem 4.2]{CT2}.) We
therefore have the following.
\begin{xample}
\label{example:Heis}
For each $i\geq 0$ the sequence of
$\Sp_{2n}\Q$--representations $\{H_i(\H_{2n+1};\Q)\}$ is uniformly
representation stable.
\end{xample}
To extend this argument to adjoint and exterior coefficients, all that
would be needed is to duplicate Lemma~\ref{lem:wedgeH} for
$\bwedge^iH_n\otimes H_n$ and $\bwedge^k H_n\otimes \bwedge^\ell
H_n$. However, we do not do this here, since Cagliero--Tirao have
already computed these homology groups. The adjoint homology
$H_i(\H_{2n+1};\H_{2n+1})$ is $V(\lambda_1+\lambda_i)\oplus
V(\lambda_{i+1})$ for $i<n$ \cite[Corollary 4.15]{CT2}. For the
exterior homology, the irreducibles appearing in
$H_i(\H_{2n+1};\bwedge^k \H_{2n+1})$ always correspond to the sum of
two fundamental weights $V(\lambda_j+\lambda_\ell)$, with multiplicity
either 1 or 0, independent of $n$ for $i\leq n$ \cite[Theorem
4.13]{CT2}. In both cases we have uniform stability.
\begin{xample}[Cagliero--Tirao]
\label{example:adjHeis}
For each fixed $i\geq 0$ and $k\geq 0$ the sequences of
$\Sp_{2n}\Q$--representations $\{H_i(\H_{2n+1};\H_{2n+1})\}$ and
$\{H_i(\H_{2n+1};\bwedge^k \H_{2n+1})\}$ are uniformly
representation stable.
\end{xample}
\subsection{The Malcev Lie algebra of the pure braid group}
\label{section:malcev:braid}
In this subsection we describe a conjecture which can be thought of as
an ``infinitesimal'' version of Theorem~\ref{thm:pure}. Let $\Gamma=P_n$,
the pure braid group on $n$ strands, and let $\fp_n\coloneq\gr(P_n)$.
The Lie algebra $\fp_n$ occurs, among other places, in the theory of
Vassiliev invariants. Drinfeld and Kohno (see \cite{Ko}) gave a
finite presentation for $\fp_n$, as follows. Let
$\L(\{X_{ij}\})$ denote the free Lie
algebra on the set of formal symbols $\{X_{ij}: 1\leq i,j\leq n, i\neq
j\}$. Then for $n\geq 4$,
\[\fp_n=\L(\{X_{ij}\})/R\]
where $R$ is the ideal generated by the quadratic relations:
\[[X_{ij},X_{kl}] \ \ \text{with $i,j,k,l$ distinct},\]
\[[X_{ij},X_{ik}+X_{jk}] \ \ \text{with $i,j,k$ distinct}.\]
Consider
the action of $S_n$ on $\fp_n$. Since the relations above are
homogeneous, the grading on $\L(\{X_{ij}\})$ descends to a grading on
$\fp_n$, which is clearly preserved by the action of $S_n$. Let
$\fp_n^i$ denote the $i^{\text{th}}$ graded component of $\fp_n$.
\begin{conjecture}[Representation stability for $\fp_n$]
\label{conjecture:malcevpn}
For each fixed $i\geq 1$, the sequence $\{\fp_n^i\}$ is a
uniformly representation stable sequence of $S_n$--representations.
\end{conjecture}
As evidence for this conjecture, we point out that $P_n$ is
pseudo-nilpotent \cite[Example 5.12]{Ha2}. Furthermore, by
Theorem~\ref{thm:pure}, for each fixed $i\geq 0$ the cohomology
$H^i(\fp_n;\Q)=H^i(P_n;\Q)$ is a uniformly stable sequence of
$S_n$--representations. Thus Conjecture~\ref{conjecture:malcevpn}
would follow as in Corollary~\ref{cor:hain} if we had a version of
Theorem~\ref{thm:equivhomLie} for representations of $S_n$. We remark
that not all aspherical hyperplane complements have pseudo-nilpotent
fundamental group (see e.g.\ Falk \cite[Proposition 5.1, Example
5.3]{Fa}), so we do not expect Conjecture~\ref{conjecture:malcevpn} to
extend to all such groups.
\section{Homology of the Torelli subgroups of $\Mod(S)$ and
$\Aut(F_n)$}
\label{section:torelli}
In this section we discuss representation stability in the context of
the homology of the Torelli groups associated with mapping class
groups and automorphism groups of free groups. Most of the picture
here is conjectural. However, before the idea of representation
stability, even a conjectural picture of these homology groups was
lacking.
\subsection{Homology of the Torelli group}
Let $S_{g,1}$ be a connected, compact, oriented surface of genus
$g\geq 2$ with one boundary component. Let $H\coloneq
H_1(S_{g,1};\Q)$ and let $H_\Z\coloneq H_1(S_{g,1};\Z)$. The
\emph{mapping class group} $\Mod_{g,1}$ is the group of homotopy
classes of homeomorphisms of $S_{g,1}$, where both the homeomorphisms
and the homotopies fix $\partial S_{g,1}$ pointwise. The action of
$\Mod_{g,1}$ on $H_\Z$ preserves algebraic intersection number, which
is a symplectic form on $H_\Z$, yielding a symplectic representation
which fits into the exact sequence
\[1\to\I_{g,1}\to\Mod_{g,1}\to\Sp_{2g}\Z\to 1,\] where $\I_{g,1}$ is
the \emph{Torelli group}, consisting of those $f\in\Mod_{g,1}$ acting
trivially on $H_\Z$. The conjugation action of $\Mod_{g,1}$ on
$\I_{g,1}$ descends to an action of $\Sp_{2g}\Z$ by outer
automorphisms, which gives each $H_i(\I_{g,1};\Q)$ the structure of an
$\Sp_{2g}\Z$--module. The natural inclusion of surfaces
$S_{g,1}\hookrightarrow S_{g+1,1}$ induces an inclusion
$\I_{g,1}\hookrightarrow \I_{g+1,1}$, by extending by the
identity. For each $i\geq 0$ the induced homomorphism
$H_i(\I_{g,1};\Q)\to H_i(\I_{g+1,1};\Q)$ respects the action of
$\Sp_{2g}\Z$.
\bigskip If $G$ is any group and $V$ is any (perhaps infinite
dimensional) $G$--representation, we define the
\emph{finite-dimensional part} of $V$, denoted $V^{\fd}$, to be the
subspace of $V$ consisting of those vectors whose $G$--orbit spans a
finite-dimensional subspace of $V$. Note that $V^{\fd}$ may itself be
infinite dimensional. Our first conjecture about $H_i(\I_{g,1};\Q)$
makes a prediction about its finite-dimensional part. It is a slight
refinement of a conjecture we first stated in \cite{CF}.
\begin{conjecture}[Homology of the Torelli group]
\label{conjecture:torelli}
For each fixed $i\geq 1$, each of the following statements holds.
\bigskip
\noindent \textbf{Preservation of finite-dimensionality: }The
natural map \[H_i(\I_{g,1};\Q)^{\fd}\to H_i(\I_{g+1,1};\Q)\] induced
by the inclusion $\I_{g,1}\hookrightarrow \I_{g+1,1}$ has image
contained in $H_i(\I_{g+1,1};\Q)^{\fd}$.
\medskip
\noindent \textbf{Rationality:} Every irreducible
$\Sp_{2g}\Z$--subrepresentation in $H_i(\I_{g,1};\Q)^{\fd}$ is the
restriction of an irreducible $\Sp_{2g}\Q$--representation.
\medskip
\noindent \textbf{Stability: }The sequence of
$\Sp_{2g}\Q$--representations $\{H_i(\I_{g,1};\Q )^{\fd}\}$ is
uniformly representation stable.
\end{conjecture}\pagebreak
\para{Remarks}
\begin{enumerate}
\item Along with Conjecture~\ref{conjecture:torelli} for
$H_i(\I_{g,1};\Q)$, we have a corresponding, equivalent conjecture
for $H^i(\I_{g,1};\Q)$, with stability in the sense of
Definition~\ref{definition:repstabrev}.
\item A form of the Margulis Superrigidity Theorem (see \cite[Theorem
VIII.B]{Ma}) gives that any finite-dimensional representation (over
$\C$) of $\Sp_{2g}\Z$ virtually extends to a rational representation
of $\Sp_{2g}\Q$.\footnote{One can also use the solution to the
congruence subgroup property for $\Sp_{2g}\Z$, $g>1$ here; see
\cite{BMS}.} Thus the Rationality part of
Conjecture~\ref{conjecture:torelli} is meant to ensure that we can extend
to $\Sp_{2g}\Q$ without passing to a finite index subgroup. We also
remark that, by a statement close to the Borel Density Theorem
(namely Proposition 3.2 of \cite{Bo2}), a representation of
$\Sp_{2g}\Q$ is irreducible if and only if its restriction to
$\Sp_{2g}\Z$ is irreducible, so we can (and will) ignore this
distinction. Similar statements apply to $\GL_n\Q$ as well.
\item It is known that the ``finite-dimensional part'' of $H_i(\I_{g,1};\Q)$ is not all
of $H_i(\I_{g,1};\Q)$; see the examples discussed after Conjecture~\ref{conjecture:finitegen} below.
\item The Torelli group is often defined for closed surfaces or for
surfaces with punctures. In this case there are no maps connecting
the Torelli groups for different $g$, so the strongest statement one
could hope for is multiplicity stability for the homology of the corresponding
Torelli groups. We conjecture this to be true.
\end{enumerate}
Some evidence for Conjecture~\ref{conjecture:torelli} in each
dimension $i\geq 1$ is given and discussed in detail in \cite{CF}.
Further, we note that Conjecture~\ref{conjecture:torelli} is true for
$i=1$, by Johnson's computation that \[H_1(\I_{g,1};\Q)\approx
H_1(\I_{g,\ast};\Q)\approx V(\omega_3)\oplus V(\omega_1) \ \ \mbox{for
each $g\geq 3$}.\]
\noindent
Finally, we note the well-known analogy of $\I_{g,1}$ and $\Sp_{2g}\Z$
with $P_n$ and $S_n$. Since representation stability holds for the
latter example (Theorem~\ref{thm:pure}), one is led to believe it
holds for the former.
\para{Malcev Lie algebra of $\I_{g,1}$} There is a kind of
``infinitesimal'' version of Conjecture~\ref{conjecture:torelli},
parallel to Conjecture~\ref{conjecture:malcevpn} for the pure braid
group. Let $\gr(\I_{g,1})$ denote the graded rational Lie algebra
associated to the lower central series of $\I_{g,1}$ (see
\S\ref{section:applications}), and let $\gr(\I_{g,1})^i$ denote its
$i^{\text{th}}$ graded piece. Hain computed $\gr(\I_g)$ in \cite{Ha}.
This was extended by Habegger--Sorger \cite{HS} to the case of
surfaces with boundary. To state their result, let
$H=H_1(S_{g,1};\Q)$ as usual, and for any vector space $V$ let $\L(V)$
denote the free Lie algebra on $V$ as in \S\ref{section:applications}.
The extension by Habegger--Sorger of Hain's theorem states that, for all
$g\geq 6$, the rational Lie algebra $\gr(\I_{g,1})$ has a
presentation:
\[\gr(\I_{g,1})=\L(\bwedge^3H)/(R_1,R_2)\]
where $(R_1,R_2)$ denotes the ideal generated by the
$\Sp_{2g}(\Q)$--span of the two elements
\begin{align*}
R_1&=(a_1\wedge a_2\wedge b_2)\wedge (a_3\wedge a_4\wedge b_4)\\
R_2&=(a_1\wedge a_2\wedge b_2)\wedge (a_g\wedge \omega)
\end{align*}
where $\omega\coloneq\sum_{i=1}^ga_i\wedge b_i$. One nontrivial consequence of Hain's theorem is that the natural $\Sp_{2g}\Z$--action on $\gr(\I_{g,1})$ extends to an
$\Sp_{2g}\Q$--action. As previously mentioned, $\gr(\I_{g,1})$ is the
associated graded Lie algebra of the Malcev Lie algebra of $\I_{g,1}$,
and $\gr(\I_{g,1})^i$ denotes the $i^{\text{th}}$ graded component of
$\gr(\I_{g,1})$.
\begin{conjecture}[Stability of the Malcev Lie algebra
of $\I_{g,1}$]
\label{conjecture:malcev:torelli}
For each fixed $i\geq 1$ the sequence
$\{\gr(\I_{g,1})^i\}$ of $\Sp_{2g}\Q$-representations is uniformly representation
stable.
\end{conjecture}
As evidence for Conjecture~\ref{conjecture:malcev:torelli}, we remark
that the conjecture is true when $i=1$ and when $i=2$, as follows.
Johnson proved \cite{Jo2} that \[\gr(\I_{g,1})^1\approx
\bwedge^3H\approx V(1,1,1)\oplus V(1)\] as $\Sp_{2g}\Z$--modules, and Habegger--Sorger
\cite[Theorem 2.2]{HS} use the work of Hain \cite{Ha} to deduce that
(in our terminology):
\[\gr(\I_{g,1})^2\approx V(2,2)\oplus V(1,1)\oplus V(0)^{\oplus 2}\]
as $\Sp_{2g}\Z$--modules.
\subsection{Homology of $\IA_n$}
The above discussion has an analogy in the case of free groups and
their automorphisms. Let $F_n$ denote the free group of rank $n$, and
let $\Aut(F_n)$ denote its automorphism group. The action of
$\Aut(F_n)$ on $H_1(F_n;\Z)$ gives the well-known exact
sequence
\[1\to \IA_n\to\Aut(F_n)\to\GL_n\Z\to1.\] The conjugation action of
$\Aut(F_n)$ on $\IA_n$ descends to an outer action of $\GL_n\Z$,
giving each $H_i(\IA_n;\Q)$ the structure of a $\GL_n\Z$--module. The
standard inclusion $F_n\hookrightarrow F_{n+1}$ induces an inclusion
$\IA_n\hookrightarrow\IA_{n+1}$ by extending by the identity. Thus
for each $i\geq 0$ we have an induced homomorphism $H_i(\IA_n;\Q)\to
H_i(\IA_{n+1};\Q)$ of $\GL_n\Q$--representations.
It is natural to conjecture the analogue of Conjecture
\ref{conjecture:torelli} for $\IA_n$, and in particular that
$\{H_i(\IA_n;\Q)\}$ is representation stable. However such a conjecture would not capture
what is going on, even in dimension 1: a computation of Andreadakis,
Farb, Kawazumi, and Cohen--Pakianathan (see e.g.\ \cite{Ka}) gives:
\begin{equation}
\label{eq:ia1}
H_1(\IA_n;\Q)\approx \bwedge^2\Q^n\otimes (\Q^n)^\ast\approx
V(L_1+L_2-L_n)\oplus V(L_1)
\end{equation}
from which we see that the decomposition of the sequence
$\{H_1(\IA_n;\Q)\}$ does not stabilize: the highest weight
$L_1+L_2-L_n$ of the first summand changes with $n$, so no fixed
irreducible occurs as the first summand for more than one $n$. However, the notion of
\emph{mixed} tensor stability, defined in \S\ref{section:strong}, suffices to capture
the stability here. Indeed, the computation in
\eqref{eq:ia1} shows that the sequence $\{H_1(\IA_n;\Q)\}$ is mixed
representation stable, since
\[H_1(\IA_n;\Q)=V(1,1;1)\oplus V(1)\]
for sufficiently
large $n$. With this alteration, we give the analogue of Conjecture
\ref{conjecture:torelli} for $\IA_n$.
\begin{conjecture}[Homology of $\IA_n$]
\label{conjecture:ian}
For each fixed $i\geq 1$, each of the
following statements holds.
\bigskip
\noindent \textbf{Preservation of finite-dimensionality: }The
natural map \[H_i(\IA_n;\Q)^{\fd}\to H_i(\IA_{n+1};\Q)\] induced by
the inclusion $\IA_n\hookrightarrow \IA_{n+1}$ has image contained
in $H_i(\IA_{n+1};\Q)^{\fd}$.
\medskip
\noindent \textbf{Rationality:} Every irreducible
$\GL_n\Z$--subrepresentation in $H_i(\IA_n;\Q)^{\fd}$ is the
restriction of an irreducible $\GL_n\Q$--representation.
\medskip
\noindent \textbf{Stability: }The sequence of
$\GL_n\Q$--representations $\{H_i(\IA_n;\Q )^{\fd}\}$ is
uniformly \emph{mixed} representation stable.
\end{conjecture}
As for the ``infinitesimal'' version of Conjecture~\ref{conjecture:ian}, we
conjecture that each of the $\GL_n\Z$--representations $\gr(\IA_n)^i$
extends to a $\GL_n\Q$--representation, and that these form a uniformly
stable sequence. However, we would like to point out
that the Lie algebra $\gr(\IA_n)$ is still not known.
\subsection{Vanishing and finiteness conjectures for the (co)homology
of $\I_{g,1}$ and $\IA_n$}
We now make a few other natural conjectures concerning the
(co)homology of $\I_{g,1}$ and $\IA_n$. Our goal is to give as much
of a conjectural picture as possible where there was none before.
\para{A Morita-type conjecture for $\IA_n$} Let $e_i\in
H^i(\I_{g,1};\Q)$ denote the $i^{\text{th}}$ Morita--Mumford--Miller
class restricted to $\I_{g,1}$. The following is Conjecture 3.4 of
\cite{Mo1}.
\begin{conjecture}[Morita's Conjecture]
The $\Sp_{2g}\Z$--invariant stable rational cohomology of $\I_{g,1}$ is
generated as a $\Q$--algebra by $\{e_2,e_4,e_6,\ldots \}$.
\end{conjecture}
Note that all the $e_i$ generate the stable rational cohomology of
$\Mod_{g,1}$, by Madsen--Weiss \cite{MW}, and the odd classes
$e_1,e_3,e_5,\ldots$ vanish when restricted to $\I_{g,1}$. Morita's
Conjecture predicts the trivial representations that can occur in
$H^i(\I_{g,1};\Q)$. However, it is not known which of the
even Morita--Mumford--Miller classes $e_i$, or combinations thereof, are
nonzero in $H^*(\I_{g,1};\Q)$. Thus even an affirmative answer to
Morita's Conjecture would not imply
Conjecture~\ref{conjecture:torelli} for the trivial representation.
Since Galatius \cite{Ga} has proven that $H^i(\Aut(F_n);\Q)=0$ for
$n\gg i$, it is natural to make the following conjecture.
\begin{conjecture}[Vanishing conjecture]
\label{conjecture:invpart}
The $\GL_n\Z$--invariant part of the stable rational cohomology of
$\IA_n$ vanishes.
\end{conjecture}
By the computation $H_1(\IA_n;\Q)\approx\bwedge^2\Q^n\otimes
(\Q^n)^\ast$ for $n\geq 3$, which has no trivial subrepresentations,
Conjecture~\ref{conjecture:invpart} is true for cohomology in
dimension $1$.
\para{Two finiteness conjectures} The infinite-dimensional spaces
$H_1(\I_{2,1};\Q)$, $H_3(\I_3;\Q)$, $H_{3n-2}(\I_{n,1};\Q)$ and
$H_{2n-3}(\IA_n;\Q)$ discussed below are not ``stable'' in $n$. One
might hope that stably such representations do not arise, and all
irreducible $\Sp_{2n}\Z$--submodules of $H_i(\I_{n,1};\Q)$ and
$\GL_n\Z$--submodules of $H_i(\IA_n;\Q)$ are finite-dimensional for
$n\gg i$. The limited evidence we have seems to point to the
following.
\begin{conjecture}[Stable finite-dimensionality]
\label{conjecture:stablyfinite}
For each $i\geq 1$ and each $n$ sufficiently large (depending on $i$), the
natural maps
\[H_i(\I_{n,1};\Q)^{\fd}\hookrightarrow
H_i(\I_{n,1};\Q)\]
and
\[H_i(\IA_n;\Q)^{\fd}\hookrightarrow
H_i(\IA_n;\Q)\]
are isomorphisms.
\end{conjecture}
One may even go so far as to give a conjectural picture of all of the
homology of $\I_{n,1}$ and $\IA_n$, including the infinite-dimensional
part.
\begin{conjecture}[Unstable finite generation]
\label{conjecture:finitegen}
For each $i\geq1$ and each $n\geq 1$:
\begin{enumerate}
\item The module $H_i(\I_{n,1};\Q)$ is a finitely-generated module
over $\Sp_{2n}\Z$.
\item The module $H_i(\IA_n;\Q)$ is a finitely-generated module over
$\GL_n\Z$.
\end{enumerate}
\end{conjecture}
Note that Conjecture~\ref{conjecture:finitegen} is consistent with all
known computations of the homology groups of $\I_{n,1}$ and $\IA_n$,
including those that are known to be infinite-dimensional over $\Q$.
Mess \cite[Corollary 1]{Me} proved that $H_1(\I_{2,1};\Q)$
contains an infinite-dimensional irreducible permutation
$\Sp_4\Z$--module, and Johnson--Millson showed that $H_3(\I_3;\Q)$
contains an infinite-dimensional irreducible permutation
$\Sp_6\Z$--module \cite[Proposition 5]{Me}. The classes
in $H_{2n-3}(\IA_n;\Q)$ found by
Bestvina--Bux--Margalit \cite{BBM1} span an infinite-dimensional
subspace, but as a $\GL_n\Z$--module this is a permutation module
generated by a single element; similarly, the classes in
$H_{3g-2}(\I_{g,1};\Q)$ found by Bestvina--Bux--Margalit
\cite{BBM2} span a cyclic
$\Sp_{2g}\Z$--module. In particular, the action of $\GL_n\Z$ or $\Sp_{2g}\Z$ on such a subspace cannot be extended to an action of the corresponding $\Q$--group $\GL_n\Q$ or $\Sp_{2g}\Q$.
\section{Flag varieties, Schubert varieties, and rank-selected posets}
\label{section:flags}
The goal of this section is to demonstrate the appearance of representation stability in
the cohomology of various natural families of algebraic
varieties, as well as in algebraic combinatorics. These results are used in \cite{CEF} to compute arithmetic statistics for maximal tori in $\GL_n(\F_q)$ and Lagrangian tori in $\Sp_{2g}(\F_q)$.
\subsection{Cohomology of flag varieties}
Let $\cF_n$ be the complete flag variety parametrizing complete flags
in $\C^n$; this can be identified with $G/B$ where $G=\GL_n\C$ and $B$
is the Borel subgroup consisting of upper triangular matrices. The
inclusion $\GL_n\C\hookrightarrow\GL_{n+1}\C$ induces an inclusion of
$\cF_n$ as a closed subvariety of $\cF_{n+1}$. In terms of flags, this
amounts to regarding a complete flag $V_1<\cdots<V_n=\C^n$ as a flag
in $\C^{n+1}$ by appending $\C^{n+1}$ itself. The unitary group $U(n)$
also acts on $\cF_n$, with stabilizer a maximal torus $T$, giving an
identification of $\cF_n$ with $U(n)/T$. The normalizer $N(T)$ acts on
$U(n)/T$ on the right, which factors through an action of the Weyl
group $W=N(T)/T$. In this case $W$ can be identified with the group
$S_n$ of permutation matrices, so we obtain an $S_n$--action on
$\cF_n$, and thus an $S_n$--action on $H^i(\cF_n;\Q)$ for each $i\geq 0$.
The inclusion $\cF_n\hookrightarrow \cF_{n+1}$ induces for each $i\geq
0$ a homomorphism $H^i(\cF_{n+1};\Q)\to H^i(\cF_n;\Q)$, and the
sequence $\{H^i(\cF_n;\Q)\}$ is easily seen to be a consistent
sequence of $S_n$--representations. We will prove that this sequence
is representation stable in the sense of
Definition~\ref{definition:repstabrev}.
\begin{theorem}[Stability for the cohomology of flag varieties]
\label{thm:flag}
For each fixed $i\geq 0$, the sequence $\{H^i(\cF_n;\Q)\}$ of
$S_n$--representations is representation stable.
\end{theorem}
\begin{proof}
The cohomology $H^*(\cF_n;\Q)$ is described as follows. The trivial
bundle $\cF_n\times \C^n$ is filtered by subbundles
$U_i$ of rank $i$ for $0\leq i\leq n$, where the fiber of $U_i$ over a given flag is the
$i^{\text{th}}$ subspace of that flag. The quotients $E_i\coloneq U_i/U_{i-1}$
are line bundles over $\cF_n$. Let $x_i\in H^2(\cF_n;\Q)$ be the
first Chern class $c_1(E_i)$. These classes $\{x_i\}$ generate
$H^*(\cF_n;\Q)$, as we will see in more detail below. Note that
$S_n$ acts on $H^2(\cF_n;\Q)$ by permuting the generators $x_i$.
We are trying to prove representation stability in the sense of
Definition~\ref{definition:repstabrev}. To prove the injectivity
and surjectivity conditions, first note that $x_i\in
H^2(\cF_{n+1};\Q)$ restricts to $x_i\in H^2(\cF_n;\Q)$. A basis for
$H^*(\cF_n;\Q)$ is given by $\mathcal{B}_n=\{x_1^{j_1}\cdots
x_n^{j_n}\mid 0\leq j_k<k\}$ (see \cite[Proposition 10.3]{Fu}). Note that
the subset of $\mathcal{B}_{n+1}$ consisting of elements with
$j_{n+1}=0$ restricts bijectively to the basis
$\mathcal{B}_n$. Furthermore, as long as $n>i$, any element of
$\mathcal{B}_{n+1}$ with degree $i$ can be rearranged by a
permutation in $S_{n+1}$ to have $j_{n+1}=0$ while still satisfying
$0\leq j_k<k$ for all $k$. This shows that for large enough $n$, the
$S_{n+1}$--orbit of the degree $i$ terms of this subset spans
$H^i(\cF_{n+1};\Q)$, as desired.
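As a quick sanity check on this basis (our illustration, not part of the proof), one can count the elements of $\mathcal{B}_n$ by degree: the generating function is $\prod_{k=1}^n(1+q+\cdots+q^{k-1})$, so in particular $|\mathcal{B}_n|=n!$, consistent with the identification of $H^*(\cF_n;\Q)$ with the regular representation discussed below.

```python
# Sanity check (not part of the proof): count the basis
# B_n = {x_1^{j_1} ... x_n^{j_n} : 0 <= j_k < k} by polynomial degree.
# The generating function is prod_{k=1}^{n} (1 + q + ... + q^{k-1}).
from math import factorial

def degree_counts(n):
    """Return coefficients c_d = #{elements of B_n of degree d}."""
    coeffs = [1]
    for k in range(1, n + 1):
        # Multiply the current polynomial by 1 + q + ... + q^{k-1}.
        new = [0] * (len(coeffs) + k - 1)
        for d, c in enumerate(coeffs):
            for j in range(k):
                new[d + j] += c
        coeffs = new
    return coeffs

# For n = 3 the degree counts are 1, 2, 2, 1, and the total is always n!:
assert degree_counts(3) == [1, 2, 2, 1]
assert all(sum(degree_counts(n)) == factorial(n) for n in range(1, 7))
```

Note that the individual coefficients grow with $n$ (for instance the degree-$2$ count is $2$ for $n=3$ but $5$ for $n=4$); it is the $S_n$--multiplicities, not the dimensions, that stabilize.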
Proving stability of multiplicities is more involved. A general
theorem of Borel \cite{Bo1} states that the cohomology
$H^*(\cF_n;\Q)$ is isomorphic to the co-invariant algebra on the
$x_i$, defined as follows. Let $\Q[x_1,\ldots,x_n]^{S_n}$ be the
ring of symmetric polynomials, and let $I_n$ be the ideal of
$\Q[x_1,\ldots,x_n]$ generated by all symmetric polynomials with
zero constant term. The \emph{co-invariant algebra}
$R[x_1,\ldots,x_n]$ is defined to be the quotient
\[R[x_1,\ldots,x_n]\coloneq\Q[x_1,\ldots,x_n]/I_n.\] Thus
$R[x_1,\ldots,x_n]$ inherits a natural grading from
$\Q[x_1,\ldots,x_n]$, and $H^*(\cF_n;\Q)$ is isomorphic to
$R[x_1,\ldots,x_n]$ as a graded $S_n$--module (see \cite[Proposition
10.3]{Fu} for a combinatorial proof). It is not hard to see that
$R[x_1,\ldots,x_n]$, and thus $H^*(\cF_n;\Q)$, is in fact isomorphic
to the regular representation $\Q S_n$, which is \emph{not} representation
stable. However, looking at each homogeneous piece individually, we
have the following theorem of Stanley, Lusztig, and
Kraskiewicz--Weyman:
\begin{theorem}[\cite{Re}, Theorem 8.8]
\label{theorem:kw}
For any partition $\lambda$, as long as $i\leq \binom{n}{2}$, the
multiplicity of $V(\lambda)_n$ in $R_i[x_1,\ldots,x_n]$ equals the
number of standard tableaux of shape $\lambda[n]$ with major index
equal to $i$.
\end{theorem}
Recall that a \emph{standard tableau of shape $\lambda$} is a
bijective labeling of the boxes of the Young diagram for $\lambda$
by the numbers $1,\ldots,n$ with the property that in each row and
in each column the labels are increasing. Given such a labeling, the
\emph{descent set} is the set of numbers $i$ so that the box labeled
$i+1$ is in a lower row than the box labeled $i$. The \emph{major
index} of a tableau is the sum of the numbers in the descent set.
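To make these two statistics concrete, here is a short computation (our illustration, not from the paper), with a tableau encoded as a list of its rows:

```python
# Illustration: descent set and major index of a standard Young tableau,
# encoded as a list of its rows.
def descent_set(tableau):
    """Labels i such that the box labeled i+1 lies in a lower row than i."""
    row_of = {entry: r for r, row in enumerate(tableau) for entry in row}
    n = len(row_of)
    return {i for i in range(1, n) if row_of[i + 1] > row_of[i]}

def major_index(tableau):
    """Sum of the labels in the descent set."""
    return sum(descent_set(tableau))

# The standard tableau with rows (1 3 4) and (2 5): 2 lies below 1 and
# 5 lies below 4, so the descent set is {1, 4} and the major index is 5.
assert descent_set([[1, 3, 4], [2, 5]]) == {1, 4}
assert major_index([[1, 3, 4], [2, 5]]) == 5
```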
Fix a partition $\lambda$ and a finite set $S\subset \N$. Let
$\mathcal{S}_n$ be the set of standard tableaux of shape
$\lambda[n]$ with descent set exactly $S$. We will show below that
for sufficiently large $n$, the size of $\mathcal{S}_n$ is equal to
the size of $\mathcal{S}_{n+1}$. Since only finitely many $S\subset
\N$ have $\sum_{j\in S}j=i$, applying Theorem~\ref{theorem:kw} once
$n$ is sufficiently large will prove that the multiplicity of
$V(\lambda)_n$ in $R_i[x_1,\ldots,x_n]$ is eventually constant, as
desired.
First we exhibit an injection from $\mathcal{S}_n$ into
$\mathcal{S}_{n+1}$. Note that the Young diagram for $\lambda[n+1]$
is obtained from that of $\lambda[n]$ by adding an additional box at
the end of the first row. Our operation on tableaux will be simply
to fill this newly-added box with $n+1$. Since neither $n$ nor $n+1$
can be a descent in the resulting tableau, and whether any other $j$ is a
descent remains unchanged, the descent set is unchanged by this
operation. Thus this operation, which is clearly injective, maps
$\mathcal{S}_n\to\mathcal{S}_{n+1}$.
It remains to show that for sufficiently large $n$, the operation is
also surjective. Equivalently, we must show that for sufficiently
large $n$, any tableau of shape $\lambda[n]$ with descent set $S$
has the label $n$ in the top row. Let $k=\max S$. If the label $n$
is not in the top row, then no label greater than $k$ can be in the
top row, for otherwise at least one number between $k$ and $n-1$
would be a descent. But exactly $|\lambda|$ boxes are not contained
in the first row of $\lambda[n]$. Thus taking $n$ greater than
$k+|\lambda|$, the pigeonhole principle implies that in every
tableau some label greater than $k$ appears in the top row. It
follows that any tableau with descent set $S$ has the label $n$ in
the top row, as desired.
Applying Theorem~\ref{theorem:kw}, we see that the multiplicity of
$V(\lambda)_n$ in $H^i(\cF_n;\Q)$ is eventually independent of $n$,
as desired.
\end{proof}
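The stabilization just proved can also be checked by brute force. The sketch below (our code; the particular $\lambda$ and $S$ are arbitrary choices) enumerates standard tableaux of shape $\lambda[n]=(n-|\lambda|,\lambda)$ and counts those with a prescribed descent set; the counts agree once $n>\max S+|\lambda|$, as the pigeonhole argument predicts.

```python
# Brute-force check of the stabilization argument: the number of standard
# tableaux of shape lambda[n] with fixed descent set S is constant once
# n > max(S) + |lambda|.  (Illustrative; lambda and S below are arbitrary.)
def count_tableaux(shape, S):
    """Count standard Young tableaux of `shape` with descent set exactly S."""
    n, target, total = sum(shape), set(S), 0

    def place(label, lens, rowof):
        nonlocal total
        if label > n:
            # rowof[i-1] is the row containing label i.
            descents = {i for i in range(1, n) if rowof[i] > rowof[i - 1]}
            total += descents == target
            return
        for r in range(len(shape)):
            # May fill the next box of row r if the row is not full and
            # the row above is strictly longer (so columns increase).
            if lens[r] < shape[r] and (r == 0 or lens[r - 1] > lens[r]):
                lens[r] += 1
                rowof.append(r)
                place(label + 1, lens, rowof)
                rowof.pop()
                lens[r] -= 1

    place(1, [0] * len(shape), [])
    return total

def padded(lam, n):
    """The padded partition lambda[n] = (n - |lambda|, lambda_1, ...)."""
    return [n - sum(lam)] + list(lam)

lam, S = [1, 1], {2, 4}          # so max(S) + |lambda| = 6
counts = [count_tableaux(padded(lam, n), S) for n in range(7, 11)]
assert len(set(counts)) == 1     # the count no longer depends on n
```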
It can be seen from the proof of Theorem~\ref{thm:flag} that $H^i(\cF_n;\Q)=R_i[x_1,\ldots,x_n]$ is
in fact uniformly representation stable.
\para{Lagrangian flags}
Let $\cF'_n$ be the flag variety parametrizing pairs of a Lagrangian
subspace $L$ of $\C^{2n}$, together with a complete flag on $L$. For
$G=\Sp_{2n}\C$ and $B$ a Borel subgroup, $\cF'_n$ is identified with
$G/B$. The Weyl group in this case is the hyperoctahedral group $W_n$.
Borel proved in \cite{Bo1} that $H^*(\cF'_n;\Q)$ is isomorphic to the
co-invariant algebra for $W_n$.
\begin{theorem}\label{thm:flagsp}
For each fixed $i\geq 0$, the sequence $\{H^i(\cF'_n;\Q)\}$ of
$W_n$--representations is representation stable (in the sense of
Definition~\ref{definition:repstabrev}).
\end{theorem}
\begin{proof}
Given a double partition $\lambda=(\lambda^+,\lambda^-)$, Stembridge
\cite[Theorem 5.3]{Ste} generalized Stanley's theorem and proved
that the multiplicity of $V(\lambda)_n$ in the $i^{\text{th}}$
graded piece of the co-invariant algebra for $W_n$ is the number of
double standard Young tableaux of shape $\lambda[n]$ whose flag
major index is $i$, as long as $n^2\geq i$. We now summarize the
necessary terminology. If $|\lambda^-|=k$, recall that
$\lambda[n]=(\lambda^+[n-k],\lambda^-)$. A \emph{double standard
Young tableau} is a bijective labeling by the labels $1,\ldots,n$
of the diagrams for $\lambda^+[n-k]$ and $\lambda^-$ together, which
within each diagram is increasing on each row and column. The
\emph{flag descent set} can be described as follows. Place the
diagram for $\lambda^-$ above the diagram for $\lambda^+[n-k]$. Then
the flag descent set consists of those $j$ for which $j+1$ appears
below $j$ in the tableau, together with $n$ if and only if $n$
appears in the diagram for $\lambda^-$. Finally, the \emph{flag
major index} is \[2\sum j+|\lambda^-|,\] where the sum is over those
$j$ in the flag descent set.
As in the proof of Theorem~\ref{thm:flag}, it will suffice to prove
that for each double partition $\lambda$ and each finite set
$S\subset \N$, the number of double standard tableaux of shape
$\lambda[n]$ with flag descent set $S$ is eventually
constant. Passing from double tableaux of shape $\lambda[n]$ to
$\lambda[n+1]$ requires adding a box to the first row of
$\lambda^+[n-k]$; we always fill that box with $n+1$. Call this the
\emph{main row} of the diagram. Note that the definition of flag descent
set is such that this operation does not change the descent
set. Thus it suffices to show that for sufficiently large $n$, every
double standard Young tableau of shape $\lambda[n]$ having flag
descent set $S$ has $n$ in the main row. When $n$ is larger than $\max
S$ it cannot appear in the diagram for $\lambda^-$ above the main
row. But there are exactly $|\lambda^+|$ boxes below the main row.
So once $n\geq |\lambda^+|+\max S$, if $n$ were below the main row,
some number larger than $\max S$ would appear in the descent
set. Thus for sufficiently large $n$, the label $n$ must appear in
the main row, as desired.
Since only finitely many descent sets $S\subset \N$ have associated
flag major index $i$, we conclude that for each double partition
$\lambda$, the multiplicity of $V(\lambda)_n$ in $H^i(\cF'_n;\Q)$ is
eventually constant. Injectivity and surjectivity follow as in the
proof of Theorem~\ref{thm:flag}, so we conclude that
$\{H^i(\cF'_n;\Q)\}$ is representation stable.
\end{proof}
\subsection{Cohomology of Schubert varieties}
\label{section:schubert}
Recall from above that $\cF_n=G/B$ is the variety of complete flags in $\C^n$, where $G=\GL_n\C$ and $B$ is a Borel subgroup; $G$ naturally acts on $\cF_n=G/B$ by left multiplication. Choosing the standard flag in $\C^n$ as a basepoint, each permutation
$w\in S_n$ determines a flag, which can be identified with $[w]\in
G/B$. The orbits of
the flags $[w]$ under the Borel subgroup $B$ are the Bruhat cells
$B[w]=BwB/B$. The \emph{Schubert variety} $X_w$ associated to $w$ is the
closure $\overline{B[w]}$ in $G/B$ of the Bruhat cell $B[w]$.
Let $T$
be a maximal torus in $G$. Then the $G$--action on $G/B$ restricts to
a $T$--action and this $T$--action preserves $X_w$. We denote by
$H^*_T(X_w;\Q)$ the equivariant cohomology with respect to $T$.
There is an action of $S_n$ on $H^*_T(X_w;\Q)$, which is somewhat involved
to describe; it is given in Tymoczko \cite{Ty}.
Given $w\in S_n$, we can view it as an element of $S_{n+1}$ by the usual inclusion; let $X_w[n+1]$ be the corresponding Schubert variety in $\cF_{n+1}$, and so on. Then the equivariant cohomology $\{H^*_T(X_w[n];\Q)\}$ is a consistent sequence of $S_n$--representations.
\begin{theorem}[Stability for the cohomology of Schubert varieties]
\label{theorem:schubert}
Let $w$ be any permutation. Then for each fixed $i\geq 0$ the
sequence $\{H^i_T(X_w;\Q)\}$ of $S_n$--representations is
multiplicity stable.
\end{theorem}
\begin{proof}[Proof of Theorem~\ref{theorem:schubert}]
For $v\in S_n$, let $\ell(v)$ denote the
length of $v$ with respect to the standard Coxeter generators. For a
graded ring $M$ let $M[n]$ denote the shift in grading by
$n$. Tymoczko proved \cite[Theorem 1.1]{Ty} that
\[H^*_T(X_w;\Q)=\bigoplus_{[v]\in X_w}\Q[t_1,\ldots,t_n][\ell(v)]\]
as graded $S_n$--modules. Here the sum is over those permutations
$v\in S_n$ whose image $[v]$ lies in $X_w$. It is standard (see, e.g.,
\cite[Proposition 10.7]{Fu}) that these are exactly the $v$ for
which $v\leq w$ in the Bruhat partial order. The Bruhat order has
the property that $v\leq w$ in $S_n$ if and only if $v\leq w$ when
considered as elements of $S_{n+1}$. Thus for fixed $w$ the
collection of $v$ in the sum is independent of $n$; similarly the
lengths $\ell(v)$ do not change. Denote by
$P_i[x_1,\ldots,x_n]$ the space of homogeneous degree-$i$ polynomials in $\Q[x_1,\ldots,x_n]$. Since
\[H^i_T(X_w;\Q)=\bigoplus_{[v]\in
X_w}P_{i-\ell(v)}[t_1,\ldots,t_n],\] it suffices to prove that the
homogeneous polynomials $\{P_i[x_1,\ldots,x_n]\}$ are representation
stable for each $i\geq 0$.
As an aside, we remark that the surjection $H^i_T(X_w;\Q)\to
H^i(X_w;\Q)$ is given by mapping each $t_i\mapsto 0$, so
\[H^i(X_w;\Q)= \bigoplus_{\substack{[v]\in X_w,\\\ell(v)=i}}\Q.\]
Combining this with the preceding discussion, we see that classical
homological stability holds for the ordinary cohomology
$\{H^i(X_w;\Q)\}$ of Schubert varieties.
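Both facts used here, that the Bruhat interval below $w$ does not change when $w$ is viewed in a larger symmetric group, and that $\dim H^{2i}(X_w;\Q)=\#\{v\leq w:\ell(v)=i\}$, can be verified directly in small cases. The following sketch (our code) tests Bruhat order via Ehresmann's tableau criterion: $v\leq w$ if and only if, for each $k$, the increasing rearrangement of $v(1),\ldots,v(k)$ is entrywise at most that of $w(1),\ldots,w(k)$.

```python
# Small computation of the Poincare polynomial of a Schubert variety:
# entry i is #{v <= w : l(v) = i} = dim H^{2i}(X_w; Q).
from itertools import permutations

def bruhat_leq(v, w):
    """Ehresmann's tableau criterion for the Bruhat order on S_n."""
    return all(a <= b
               for k in range(1, len(v))
               for a, b in zip(sorted(v[:k]), sorted(w[:k])))

def length(v):
    """Coxeter length = number of inversions."""
    return sum(1 for i in range(len(v)) for j in range(i + 1, len(v))
               if v[i] > v[j])

def poincare(w):
    """Coefficient list: entry i is #{v <= w : l(v) = i}."""
    coeffs = [0] * (length(w) + 1)
    for v in permutations(range(1, len(w) + 1)):
        if bruhat_leq(v, w):
            coeffs[length(v)] += 1
    return coeffs

# For the longest element of S_3, every v <= w, giving 1 + 2q + 2q^2 + q^3;
# including w in S_4 via the standard inclusion leaves the answer unchanged,
# illustrating the classical homological stability noted above.
assert poincare((3, 2, 1)) == [1, 2, 2, 1]
assert poincare((3, 2, 1, 4)) == [1, 2, 2, 1]
```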
Note that $P_i[x_1,\ldots,x_{n+1}]$ is spanned by monomials which
involve at most $i$ variables; thus for $n\geq i$ any such monomial
is the image under $S_{n+1}$ of a monomial in
$P_i[x_1,\ldots,x_n]$. This verifies surjectivity, and injectivity
is immediate.
Let \[\Lambda[x_1,\ldots,x_n]\coloneq \Q[x_1,\ldots,x_n]^{S_n}\] be the ring of symmetric polynomials; $\Q[x_1,\ldots,x_n]$ is a free
$\Lambda[x_1,\ldots,x_n]$--module, and in
fact \[\Q[x_1,\ldots,x_n]\approx
R[x_1,\ldots,x_n]\otimes_{\Q}\Lambda[x_1,\ldots,x_n]\] as graded
$S_n$--modules (see, e.g., the proof of \cite[Theorem 8.8]{Re}). It
follows that \[P_i[x_1,\ldots,x_n]\approx \bigoplus_{j+k=i}
R_j[x_1,\ldots,x_n]^{\oplus \dim \Lambda_k[x_1,\ldots,x_n]}\] as
$S_n$--representations. We can see that the dimension $\dim\Lambda_k[x_1,\ldots,x_n]$ is eventually
constant as follows. It is classical that the ring of symmetric
polynomials is a polynomial algebra $\Q[e_1,\ldots,e_n]$ on the elementary
symmetric polynomials $\{e_j\}$. Since the degree of $e_j$ is $j$, we
see that once $n$ is larger than $k$, the dimension of
$\Lambda_k[x_1,\ldots,x_n]$ is the number of partitions of $k$ and
thus does not depend on $n$.
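The count in this last step is easy to verify computationally (a sketch, not part of the proof): a monomial $e_1^{a_1}\cdots e_n^{a_n}$ of degree $k$ corresponds to a partition of $k$ into parts of size at most $n$, and the number of such partitions equals the number $p(k)$ of all partitions of $k$ as soon as $n\geq k$.

```python
# Verifying that the number of partitions of k into parts of size at
# most n -- i.e. dim Lambda_k[x_1,...,x_n] -- equals p(k) once n >= k.
from functools import lru_cache

@lru_cache(maxsize=None)
def num_partitions(k, max_part):
    """Number of partitions of k into parts of size at most max_part."""
    if k == 0:
        return 1
    if max_part == 0:
        return 0
    # Classify partitions by their largest part.
    return sum(num_partitions(k - part, part)
               for part in range(1, min(max_part, k) + 1))

# p(6) = 11, already attained with parts <= 6; smaller bounds give fewer:
assert num_partitions(6, 6) == num_partitions(6, 100) == 11
assert num_partitions(6, 2) == 4
```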
For any $\lambda$ the multiplicity of
$V(\lambda)_n$ in $R_j[x_1,\ldots,x_n]$ is eventually constant by
Theorem~\ref{thm:flag}. Since there are finitely many solutions to $j+k=i$ once
$i$ is fixed, we may assume all these multiplicities have stabilized
for $n$ large enough. We conclude that the multiplicity of
$V(\lambda)_n$ in $P_i[x_1,\ldots,x_n]$ is eventually constant, as
desired.
\end{proof}
\begin{remark} The results of Tymoczko quoted in the proof of
Theorem~\ref{theorem:schubert} hold more generally for other
semisimple groups $G$, replacing the polynomial algebra with the
$W$--algebra induced by the coadjoint action on the root system
\cite[Theorem 4.10]{Ty}; here $W$ is the Weyl group of $G$. We believe that it should be possible to
prove representation stability for the equivariant cohomology of the
corresponding Schubert varieties.
\end{remark}
\subsection{Rank-selected posets}
\label{section:lefschetz}
The poset $Z_n$ of subsets of the finite set $\{1,\ldots,n\}$,
ordered by inclusion, is a basic object of study in combinatorics.
The group $S_n$ acts on $\{1,\ldots, n\}$, inducing an action on
$Z_n$. One can view this action as an analogue of the $S_n$--action
on the flag variety ${\cal F}_n$. In this subsection we prove some
stability results for some refinements of these actions on the
associated cohomology groups.
Suppose $G$ is a group acting on an $n$--dimensional space $X$. The \emph{Lefschetz representation} associated to this
action is the virtual $G$--representation
\[\sum_{i=0}^n(-1)^i H_i(X;\Q),\]
meaning the formal linear combination of the representations
$H_i(X;\Q)$. The name reflects the observation that for each $g\in
G$, the associated \emph{virtual character} is the Lefschetz number
\[\sum_{i=0}^n(-1)^i \tr\big(g_*\colon H_i(X;\Q)\to H_i(X;\Q)\big).\]
For any finite set $S\subset \N$ we may consider the
\emph{rank-selected} poset $Z_n(S)$. This is the poset consisting of
$\emptyset$ and $\{1,\ldots,n\}$, together with those
subsets of $\{1,\ldots,n\}$ whose cardinality lies in $S$. Let
$|Z_n(S)|$ be the geometric realization of this poset. The natural
action of the symmetric group $S_n$ on $Z_n$ preserves the subposet
$Z_n(S)$, yielding an action of $S_n$ on the geometric realization
$|Z_n(S)|$. Let $L_n(S)$ be the associated Lefschetz
representation \[L_n(S)\coloneq \sum_i(-1)^i H_i(|Z_n(S)|;\Q).\]
\begin{theorem}[Stability for Lefschetz representations of
rank-selected posets]
Let $S\subset \N$ be any finite set. Then the sequence $\{L_n(S)\}$
of virtual $S_n$--representations is multiplicity stable.
\end{theorem}
\begin{proof}
Consider the related virtual representation
\[L'_n(S):=(-1)^{|S|-1}(L_n(S)\oplus \Q).\] Clearly $\{L'_n(S)\}$ is
multiplicity stable if and only if $\{L_n(S)\}$ is multiplicity
stable. Given a partition $\lambda$, Stanley \cite[Theorem 4.3]{Sta}
proves that the multiplicity of $V(\lambda)_n$ in $L'_n(S)$ equals
the number of standard Young tableaux with shape $\lambda[n]$ whose
descent set is exactly $S\cap \{1,\ldots,n-1\}$. As we saw in the
proof of Theorem~\ref{thm:flag}, this implies that the multiplicity
of $V(\lambda)_n$ is constant for sufficiently large $n$, as desired.
\end{proof}
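Stanley's count can be illustrated in the smallest case $\lambda=(1)$, where $\lambda[n]=(n-1,1)$: a standard Young tableau of hook shape $(n-1,1)$ is determined by the entry $j$ in its second row, and its descent set is $\{j-1\}$, so exactly one tableau has descent set $\{1\}$, independently of $n$. The following sketch (the enumerator \texttt{standard\_tableaux} is our own, not from the paper) verifies this by brute force:

```python
def standard_tableaux(shape):
    """Enumerate standard Young tableaux of the given partition shape."""
    n = sum(shape)
    rows = [[None] * r for r in shape]
    filled = [0] * len(shape)
    out = []

    def place(k):
        if k > n:
            out.append([row[:] for row in rows])
            return
        for i in range(len(shape)):
            # k may go at the end of row i if the row is not full and the
            # row above is strictly longer (column entries then increase)
            if filled[i] < shape[i] and (i == 0 or filled[i - 1] > filled[i]):
                rows[i][filled[i]] = k
                filled[i] += 1
                place(k + 1)
                filled[i] -= 1
                rows[i][filled[i]] = None

    place(1)
    return out

def descent_set(tab):
    row_of = {v: i for i, row in enumerate(tab) for v in row}
    n = sum(len(r) for r in tab)
    return {i for i in range(1, n) if row_of[i + 1] > row_of[i]}

# multiplicity of V(1)_n in L'_n({1}): tableaux of shape (n-1,1), descents {1}
counts = [sum(1 for t in standard_tableaux((n - 1, 1))
              if descent_set(t) == {1})
          for n in range(3, 9)]
```

Each entry of `counts` is $1$, matching the stabilized multiplicity.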
Let $C_n$ be the $n$--dimensional \emph{cross-polytope}, i.e.\ the
convex hull of the set of unit coordinate vectors $\{\pm e_1,\ldots,
\pm e_n\}$ in $\R^n$. Let $Q_n$ be the poset of \emph{faces} of
$C_n$, meaning convex hulls of subsets of vertices. For $S\subset \N$,
let $Q_n(S)$ be the rank-selected poset consisting of faces whose
dimension lies in $S$, together with $\emptyset$ and $C_n$. The hyperoctahedral group $W_n$ naturally acts on $C_n$,
and thus on the poset $Q_n(S)$ and its geometric realization
$|Q_n(S)|$. Let $L^C_n(S)$ be the associated Lefschetz representation
\[L^C_n(S)\coloneq\sum_i(-1)^i H_i(|Q_n(S)|;\Q).\]
\begin{theorem}[Stability for Lefschetz representations of
rank-selected cross-polytopes]
Let $S\subset \N$ be any finite set. Then the sequence
$\{L^C_n(S)\}$ of virtual $W_n$--representations is multiplicity
stable.
\end{theorem}
\begin{proof}
Given a double partition $\lambda=(\lambda^+,\lambda^-)$, Stanley
\cite[Theorem 6.4]{Sta} shows that the multiplicity of
$V(\lambda)_n$ in $(-1)^{|S|-1}(L^C_n(S)\oplus \Q)$ is the number of
double standard Young tableaux of shape $\lambda[n]$ whose flag
descent set is exactly $S\cap \{1,\ldots,n-1\}$. As we showed in the
proof of Theorem~\ref{thm:flagsp}, this implies that the
multiplicity of $V(\lambda)_n$ is constant for sufficiently large
$n$, as desired.
\end{proof}
\subsection{The $(n+1)^{n-1}$ conjecture}
\label{section:nplusone}
There is a variation of the co-invariant algebra (discussed in the
proof of Theorem~\ref{thm:flag} above) that has been intensely studied
by combinatorialists. The symmetric group $S_n$ acts on
$\Q[x_1,\ldots,x_n,y_1,\ldots ,y_n]$ diagonally, permuting the $x_\bullet$
and the $y_\bullet$ separately. The \emph{diagonal co-invariant algebra} is
the $\Q$--algebra defined by:
\[R_n:=\Q[x_1,\ldots ,x_n,y_1,\ldots ,y_n]/I_n\]
\noindent
where $I_n$ denotes the ideal generated by the $S_n$--invariant
polynomials without constant term. The bigrading of $\Q[x_1,\ldots
,x_n,y_1,\ldots ,y_n]$ by total degree in $\{x_\bullet\}$ and total
degree in $\{y_\bullet\}$ descends to a bigrading $(R_n)_{i,j}$ of the
algebra $R_n$. This bigrading is preserved by the action of $S_n$ on
$R_n$. The \emph{$(n+1)^{n-1}$ conjecture} was the conjecture
that \[\dim(R_n)=(n+1)^{n-1}.\] This conjecture was proved by Haiman
(see, e.g., the survey \cite{Hai}), using a connection between this
problem and the geometry of the Hilbert scheme of configurations of
$n$ points in $\C^2$. Just as with the classical co-invariant
algebra, the structure of $R_n$ as an $S_n$--representation has been
determined \cite[Theorem 4.24]{Hai}. However, the following seems to
be unknown. It can be viewed as an ``asymptotic refinement'' of the
$(n+1)^{n-1}$ conjecture.
\begin{question}
\label{question:coinv}
Is the sequence of $S_n$--representations
$\{(R_n)_{i,j}\}$ representation stable for each fixed $i,j\geq 1$?
\end{question}
This question has a
natural generalization to the ``$k$--diagonal co-invariant algebra''
$R_n^{(k)}$ for $k\geq 3$, by which we mean the algebra defined by the
same construction as above, with $kn$ variables partitioned into $k$
subcollections and $S_n$ acting diagonally on each subcollection
separately. In this case the dimension of $R^{(k)}_n$ is not known.
It would be especially interesting if representation
stability as in Question
\ref{question:coinv} could be proved without knowing the irreducible
decomposition, or even the dimension, of $R^{(k)}_n$.
\section{Congruence subgroups, modular representations and stable
periodicity}
\label{section:congruence}
Recall that a \emph{modular representation} of a finite group $G$ is an action of $G$ on a vector space over a field of positive characteristic dividing the order of $G$. Such representations need not decompose as a direct sum of irreducible representations and in general are very difficult to analyze. For finite groups of Lie type, for example $G=\SL_n(\F_p)$, the modular representation theory is significantly better understood in the \emph{defining characteristic} of $G$, meaning in this case over a field of characteristic $p$.
There are a number of important examples of groups $\Gamma$ whose cohomology $H^i(\Gamma;\F_p)$ is naturally a modular representation of a finite group of Lie type. Examples of such $\Gamma$ include various congruence subgroups
of arithmetic groups as well as congruence subgroups of mapping class
groups.
After explaining in detail a key motivating example, we briefly
review the modular representation theory that will be needed to
formulate representation stability in this context. One new
phenomenon here is that natural sequences of representations arise
that do not satisfy representation stability, but instead exhibit a
form of ``stable periodicity'' as representations. After defining
this precisely, we present several results and conjectures using this
concept.
\subsection{A motivating example}
Consider the following fundamental example from arithmetic. For any
prime $p$ the \emph{level $p$ congruence subgroup}
$\Gamma_n(p)<\SL_n\Z$ is the kernel
\[\Gamma_n(p)\coloneq \ker (\pi\colon\SL_n\Z\twoheadrightarrow\SL_n(\F_p))\]
where $\pi$ is the map reducing the entries of a matrix modulo
$p$. Charney proved in \cite{Ch} that over $\Q$ (indeed even over
$\Z[1/p]$) the sequence of groups $\{\Gamma_n(p)\}$ satisfies classical
homological stability. Furthermore, she proved that this is equivalent
to the claim that the natural action of $\SL_n(\F_p)$ on
$H^i(\Gamma_n(p);\Q)$ is trivial for large enough $n$, so that
\[H^i(\Gamma_n(p);\Q)^{\SL_n(\F_p)}
=H^i(\Gamma_n(p);\Q)=H^i(\SL_n\Z;\Q).\]
Replacing the coefficient field $\Q$ with $\F_p$ or its algebraic
closure $\Fpbar$, the situation becomes more interesting, and the
cohomology is much richer (see, e.g., \cite{Ad,As,CF2}). First note that Charney's result is not
true in this case: the action of $\SL_n(\F_p)$ on
$H^i(\Gamma_n(p);\Fpbar)$ is certainly not trivial. We can work this
out for $H_1(\Gamma_n(p);\F_p)$ explicitly. Each $B\in\Gamma_n(p)$ can
be written as $B=I+pA$ for some $A$. It is easy to check that the map
$B\mapsto A\pmod{p}$ gives a surjective homomorphism
\begin{equation}
\label{eq:abelianize1}
\psi\colon \Gamma_n(p)\to\fsl_n(\F_p)
\end{equation}
where $\fsl_n(\F_p)$ is the abelian group of traceless $n\times n$
matrices with entries in $\F_p$. Lee--Szczarba \cite{LSz} observed
that the proof of the Congruence Subgroup Property implies that $\psi$
yields an isomorphism
\[H_1(\Gamma_n(p);\Z)\approx H_1(\Gamma_n(p);\F_p)\approx
\fsl_n(\F_p).\] We thus see that, since the dimension of
$H_1(\Gamma_n(p);\F_p)$ increases with $n$, the sequence of groups
$\{\Gamma_n(p)\}$ does not satisfy homological stability over
$\F_p$ in the classical sense. However, it is clear from the
construction that the $\SL_n(\F_p)$--action on
$H_1(\Gamma_n(p);\F_p)\approx \fsl_n(\F_p)$ is just the usual
(modular) \emph{adjoint representation}; this is a modular
representation because $\fsl_n(\F_p)$ is a vector space over $\F_p$,
and $p$ divides the order of $\SL_n(\F_p)$. We can thus hope to use
the modular representation theory of $\SL_n(\F_p)$ to define and study
a version of representation stability for each sequence
$\{H^i(\Gamma_n(p),\F_p)\}$ of $\SL_n(\F_p)$--representations. For
example, $\fsl_n(\F_p)$ is an irreducible
$\SL_n(\F_p)$--representation, and so an appropriate form of
representation stability holds for $\{H_1(\Gamma_n(p);\F_p)\}$.
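That $\psi\colon B=I+pA\mapsto A\pmod p$ is a homomorphism follows from $(I+pA)(I+pA')=I+p(A+A')+p^2AA'\equiv I+p(A+A')\pmod{p^2}$, and tracelessness of the image follows from $\det B=1$. A small numerical sanity check (our own sketch, not from the paper) with $2\times 2$ elementary matrices in $\Gamma_2(3)$:

```python
p = 3

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def psi(B):
    # B = I + p*A, so A = (B - I)/p, reduced mod p
    I = [[1, 0], [0, 1]]
    return [[((B[i][j] - I[i][j]) // p) % p for j in range(2)]
            for i in range(2)]

E = [[1, p], [0, 1]]   # I + p*e_{12}, an element of Gamma_2(p)
F = [[1, 0], [p, 1]]   # I + p*e_{21}

lhs = psi(matmul(E, F))                     # psi of the product
rhs = [[(psi(E)[i][j] + psi(F)[i][j]) % p   # sum of the images
        for j in range(2)] for i in range(2)]
```

Here `lhs == rhs`, and the image has trace $\equiv 0 \pmod p$, as expected for $\fsl_2(\F_3)$.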
One can do all of the above for level $p$ congruence subgroups
$\Gamma^{\Sp}_{2g}(p)$ of $\Sp(2g,\Z)$. As we will explain in
\S\ref{section:periodicity}, something new happens here: the sequence
of $\Sp_{2g}\F_2$--representations $\{H_1(\Gamma^{\Sp}_{2g}(2);\F_2)\}$
is only representation stable when restricted to even $g$, or to odd
$g$. Indeed, for each $p\geq 2$ we will see below natural
examples of sequences that are ``stably periodic'' with period $p$.
\subsection{Modular representations of finite groups of Lie type}
In order to formalize the notion of representation stability in the
modular case, we need to review the pertinent representation theory.
\para{Representations of \boldmath$\SL_n(\Fpbar)$ and
\boldmath$\Sp_{2n}(\Fpbar)$ in their defining characteristic}
Before restricting to the finite group $\SL_n(\F_p)$, we consider
representations of the algebraic group $\SL_n(\Fpbar)$ in the defining
characteristic $p$. While it is not true in this context that every
representation is completely reducible, irreducible representations of
$\SL_n(\Fpbar)$ over $\Fpbar$ are still classified by highest weights,
as follows. We give the details for the case of $\SL_n$, but all
claims hold for $\Sp_{2n}$ as well. A nice reference for these
assertions is Humphreys \cite[Chapters 2 and 3]{Hu}.
Let $T<\SL_n(\Fpbar)$ be the maximal torus consisting of diagonal
matrices. Let ${U<\SL_n(\Fpbar)}$ be the subgroup of strictly
upper-triangular matrices. Any representation $V$ of $\SL_n(\Fpbar)$
decomposes into eigenspaces for $T$. A vector $v\in V$ is called a
\emph{highest weight vector} if $v$ is an eigenvector for $T$ and is
invariant under $U$, in which case its \emph{weight} is the
corresponding eigenvalue $\lambda\in T^*$. Writing $T^*$ additively,
we identify $T^*$ with $\Z[L_1,\ldots,L_n]/(L_1+\cdots+L_n)$. The same
applies to $\Sp_{2n}(\Fpbar)$, with $T^*=\Z[L_1,\ldots,L_n]$. In
either case, a weight is called \emph{dominant} if it can be written
as a nonnegative integral combination of the fundamental weights
$\omega_i=L_1+\cdots+L_i$.
The basics of the classification of irreducible
$\SL_n(\Fpbar)$--representations are the same as in the characteristic
0 case: every irreducible representation contains a unique highest
weight vector; the highest weight $\lambda$ determines the irreducible
representation; and every dominant weight occurs as the highest weight
of an irreducible representation. Thus we may unambiguously denote by
$V(\lambda)_n$ the irreducible representation of $\SL_n(\Fpbar)$ or
$\Sp_{2n}(\Fpbar)$ with highest weight $\lambda$. However, much less
is known about these irreducible representations than in the
characteristic $0$ case, and there is no known way to uniformly
construct all irreducible representations. Even the dimensions of the
irreducible representations are not known in general.\pagebreak
One approach to the construction of irreducible
$\SL_n(\Fpbar)$--representations $V(\lambda)$ is through Weyl
modules. This process starts with the irreducible representation
$V(\lambda)_\Q$ of $\SL_n\Q$ with weight $\lambda$. There is then a
special $\Z$--form $V(\lambda)_\Z\subset V(\lambda)_\Q$ so that
$\SL_n\Fpbar$ acts on the \emph{Weyl module} $W(\lambda)\coloneq
V(\lambda)_\Z\otimes \Fpbar$. The Weyl module $W(\lambda)$ is
generated by a single highest weight vector with weight $\lambda$, but
in general $W(\lambda)$ will not be irreducible. However, $W(\lambda)$
always admits a unique simple quotient, which must be the irreducible
representation $V(\lambda)$. We will see below that for fixed
$\lambda$, the question of whether $W(\lambda)$ is irreducible can
depend on the residue of $n$ modulo $p$.
\para{Restriction to finite groups of Lie type}
Given any representation of $\SL_n(\Fpbar)$, we may ``twist'' it by
precomposing with the Frobenius map $\SL_n(\Fpbar)\to
\SL_n(\Fpbar)$. This twisted representation clearly remains
irreducible; in fact for any $\lambda$ the twist of $V(\lambda)_n$ by
the Frobenius is $V(p\lambda)_n$.
A dominant weight $\lambda$ is called \emph{$p$--restricted} if it can
be written as $\lambda=\sum c_i\omega_i$ with $0\leq c_i<p$.
If $\lambda$ is $p$--restricted, then the restriction of the
irreducible representation $V(\lambda)_n$ from $\SL_n(\Fpbar)$ to
$\SL_n(\F_p)$ remains irreducible. Every irreducible
representation of $\SL_n(\F_p)$ is of this form. Thus we have found
all $p^{n-1}$ irreducible representations of $\SL_n(\F_p)$ and all
$p^n$ irreducible representations of $\Sp_{2n}(\F_p)$.
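The counts $p^{n-1}$ and $p^n$ simply enumerate the $p$-restricted tuples of coefficients $(c_1,\ldots,c_r)$, where the rank $r$ is $n-1$ for $\SL_n$ and $n$ for $\Sp_{2n}$. A trivial illustration (our own sketch):

```python
from itertools import product

def restricted_weights(p, rank):
    """All p-restricted dominant weights sum(c_i * omega_i), 0 <= c_i < p,
    for a group of the given rank (n-1 for SL_n, n for Sp_2n)."""
    return list(product(range(p), repeat=rank))

p, n = 3, 4
sl_weights = restricted_weights(p, n - 1)  # irreducibles of SL_4(F_3)
sp_weights = restricted_weights(p, n)      # irreducibles of Sp_8(F_3)
```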
\para{Uniqueness of composition factors} In the modular case we cannot
decompose a representation into a direct sum of irreducibles.
However, by the Jordan--H\"older theorem, the irreducible
representations that occur as the composition factors in any
Jordan--H\"older decomposition of any representation are indeed
unique.
\subsection{Stable periodicity and congruence subgroups}
\label{section:periodicity}
The definition of representation stability in the modular case needs
to be altered in a fundamental way in order to apply to several
natural examples. One of these examples is the \emph{level $p$
symplectic congruence subgroup} $\Gamma_{2g}^{\rm Sp}(p)<\Sp_{2g}\Z$
defined as the kernel
\[\Gamma_{2g}^{\rm Sp}(p)\coloneq \ker (\pi\colon\Sp_{2g}\Z\twoheadrightarrow\Sp_{2g}\F_p)\]
where $\pi$ is the map reducing the entries of a matrix modulo the
prime $p$. Building on work of Sato, Putman \cite{Pu} has shown, among many other things, that
for $g\geq 3$ and $p$ odd there is an $\Sp_{2g}\Z$--equivariant
isomorphism:
\[H_1(\Gamma_{2g}^{\rm Sp}(p),\Z)\approx H_1(\Gamma_{2g}^{\rm
Sp}(p),\F_p)\approx \fsp_{2g}(\F_p)\] where $\fsp_{2g}(\F_p)$ is the
adjoint representation of $\Sp_{2g}(\F_p)$ on its Lie algebra. Putman
also proved that the group $H_1(\Gamma_{2g}^{\rm Sp}(2),\F_2)$ is an
extension of $ \fsp_{2g}(\F_2)$ by $H:=H_1(S_g;\F_2)$.
Note that $\fsp_{2g}\F_p$ sits inside $\fgl_{2g}\F_p\approx H^*\otimes
H\approx H\otimes H$ as $\fsp_{2g}\F_p\approx \Sym^2 H$. When $p$ is
odd, $\fsp_{2g}\F_p\approx \Sym^2 H$ is irreducible with highest
weight vector $a_1\cdot a_1$ and highest weight $2\omega_1$ (see
Hogeweij \cite[Corollary 2.7]{Ho}). The situation is different for
$p=2$: the representation $\fsp_{2g}\F_2\approx \Sym^2 H$ is no longer
irreducible. Indeed since \[(x+y)^2=x^2+2x\cdot y+y^2=x^2+y^2\] there
is an embedding $H\hookrightarrow \Sym^2 H$ defined by $x\mapsto
x\cdot x$; this is a map of $\Sp_{2g}(\F_2)$--representations since
$a^2=a$ in $\F_2$. Recalling that $a_1\cdot a_1$ has highest weight
$2\omega_1$, over $\overline{\F}_2$ we see here the isomorphism
between $V(2\omega_1)$ and the twist of $V(\omega_1)\approx H$ by the
Frobenius map $a\mapsto a^2$. Since $x\cdot y=y\cdot x=-y\cdot x$,
the quotient $\Sym^2 H/H$ is isomorphic to $\bwedge^2 H$. This has an
invariant contraction $\bwedge^2 H\to \F_2$ (represented by the
symplectic form) and an invariant vector $\omega=a_1\cdot b_1+\cdots+
a_g\cdot b_g$ (representing the symplectic form). These are
independent when $g$ is odd, but not when $g$ is even. Thus
$\fsp_{2g}\F_2$ has composition factors $V(0),V(\omega_1),V(\omega_2)$
if $g$ is odd, and $V(0)^2,V(\omega_1),V(\omega_2)$ if $g$ is even (see
\cite[Lemma 2.10]{Ho}).
In order to take situations like this into account, we must build periodicity into the
definition of stability.
\begin{definition}[Stable periodicity]
Let $G_n=\SL_n(\F_p)$ or $\Sp_{2n}(\F_p)$. Let $\{V_n\}$ be a
consistent (c.f.\ \S\ref{section:repstab:def}) sequence of modular
$G_n$--representations, i.e.\ representations of vector spaces over
$\F_p$. The sequence $\{V_n\}$ is \emph{stably
representation periodic}, or just \emph{stably periodic}, if
Condition I (Injectivity) and Condition II (Surjectivity) of
Definition~\ref{definition:repstab1} hold, together with the
following:
\begin{enumerate}
\item[{\bf PMIII.}](Stable periodicity of multiplicities): For each
highest weight $\lambda$, the multiplicity of $V(\lambda)$
as a composition factor in the Jordan--H\"older series for $V_n$
as a $G_n$--representation is \emph{stably periodic}: there exists
$C=C_\lambda$ so that for all sufficiently large $n$, this
multiplicity is periodic in $n$ with period $C$.
\end{enumerate}
\medskip Similarly we have the corresponding notion of
\emph{uniformly stably periodic}, where we additionally require that
the eventual period $C$ does not depend on $\lambda$, and also
\emph{mixed tensor stably periodic}. We note that a representation
stable sequence is also stably periodic with period $C$ for any
$C\geq 1$.
\end{definition}
We will apply the above definition to give a conjectural picture of
the cohomology of congruence groups.
\begin{conjecture}[Modular periodic stability for congruence groups]
\label{conjecture:modular1}
Fix any $i\geq 0$ and any prime $p$. Then
\begin{enumerate}
\item The sequence of $\SL_n(\F_p)$--representations
$\{H_i(\Gamma_n(p);\F_p)\}$ is uniformly mixed tensor stably
periodic with period $p$.
\item The sequence of $\Sp_{2n}(\F_p)$--representations
$\{H_i(\Gamma^{\rm Sp}_{2n}(p);\F_p)\}$ is uniformly stably periodic with
period $p$.
\end{enumerate}
\end{conjecture}
We note that mixed tensor representations are really needed in Part 1
of Conjecture~\ref{conjecture:modular1}, since for example
\[H_1(\Gamma_n(p);\F_p)=\fsl_n\F_p=V(L_1-L_n)=V(\omega_1+\omega_{n-1})=V(1;1)_n\]
is not representation stable, but is mixed representation stable. We
remark that periodicity is also needed in the conjecture. For
example, by the discussion above, the sequence $\{H_1(\Gamma^{\rm
Sp}_{2n}(2);\F_2)\}$ is a stably periodic sequence of
$\Sp_{2n}\F_2$--representations with stable period $2$. These
examples also verify that Conjecture~\ref{conjecture:modular1} is true
for $i=1$.
\subsection{The abelianization of the Torelli group}
Dennis Johnson computed that the abelianization of the Torelli group
$\I_{g,1}$ comes from two sources. The first is the so-called Johnson
homomorphism, which is purely algebraically defined, and captures the
action of $\I_{g,1}$ on the universal two-step nilpotent quotient of
$\pi_1(S_{g,1})$ (but see \cite{CF} for a geometric perspective); its
image is $\bwedge^3 H_1(S_{g,1};\Z)$. The second is the
Birman--Craggs--Johnson homomorphism, which views the Torelli group as
gluing maps for Heegaard splittings and bundles together the Rokhlin
invariants of the resulting homology 3--spheres. Its image is
2--torsion and is isomorphic to the space $B_3$ of Boolean polynomials
on $H_1(S_{g,1};\F_2)$ of degree at most 3. Johnson showed that these
quotients exhaust the homology of the Torelli group, but with some
overlap. He concludes in \cite{Jo2} that for $g\geq 3$ there is an
isomorphism of abelian groups:
\[H_1(\I_{g,1},\Z)\approx\bwedge^3 H_1(S_{g,1};\Z)\oplus B_2,\]
where $B_2$ is the space of Boolean polynomials of degree at most 2.
The action of $\Sp_{2g}\Z$ on $H_1(\I_{g,1};\Z)$ descends to an action
of $\Sp_{2g}(\Z/2\Z)$ on the torsion subgroup
$\Torsion(H_1(\I_{g,1};\Z))\approx B_2$. Shvartsman \cite{Sh} has
recently determined the structure of $\Torsion(H_1(\I_{g,1};\Z))$ as
an $\Sp_{2g}(\F_2)$--module. From his calculation we deduce the
following.
\begin{theorem}
\label{thm:torabtor}
The torsion subgroup $\Torsion(H_1(\I_{g,1};\Z))$ of the
abelianization of $\I_{g,1}$ is uniformly stably periodic with period
2. The subsequence for even $g$ is uniformly representation
stable, and the subsequence for odd $g$ is uniformly representation
stable.
\end{theorem}
\begin{proof}
The results of Shvartsman \cite{Sh} give the following list of the
simple $\Sp_{2g}(\F_2)$--modules appearing in a composition series
for $\Torsion(H_1(\I_{g,1};\Z))$ for $g\geq 3$. We list modules by
their highest weight.
\begin{align*}
&V(0),V(\omega_1),V(0),V(\omega_2)&\text{ for $g$ odd }\\
&V(0),V(\omega_1),V(0),V(\omega_2),V(0)&\text{ for $g$ even}
\end{align*}
The discrepancy between $g$ even and $g$ odd arises from the same
source as the corresponding discrepancy for $\bwedge^2
H_1(S_{g,1};\F_2)$ discussed above.
\end{proof}
\subsection{Level $p$ mapping class groups}
The \emph{level $p$ mapping class group $\Mod_{g,1}(p)$} is the kernel
of the composition \[\Mod_{g,1}\twoheadrightarrow
\Sp_{2g}\Z\twoheadrightarrow \Sp_{2g}\F_p.\] The group $\Mod_{g,1}(p)$
is the ``mod $p$'' analogue of the Torelli group $\I_{g,1}$, since it
is the subgroup of $\Mod_{g,1}$ acting trivially on
$H_1(S_{g,1};\F_p)$. Hain \cite[Proposition 5.1]{Ha3} proved that for
$g\geq 3$ the group $H^1(\Mod_{g,1}(p);\Z)$ is trivial, so the
abelianization $H_1(\Mod_{g,1}(p);\Z)$ consists entirely of torsion
elements.
Putman \cite{Pu}, building on work of Sato, recently proved that elements of
$H_1(\Mod_{g,1}(p);\Z)$ come from three sources. The first is the
abelianization $\fsp_{2g}(\F_p)$ of the congruence subgroup, which we
discussed above. The second source is a ``mod $p$'' version of the
Johnson homomorphism, which has image $\bwedge^3 H_1(S_{g,1};\F_p)$. The
third source contributes only when $p=2$, and is a quotient $B_2/\F_2$
coming from the Birman--Craggs--Johnson homomorphism. The quotient
$\Sp_{2g}\F_p$ naturally acts on $H_1(\Mod_{g,1}(p);\Z)$, and it
follows from Putman's characterization that $H_1(\Mod_{g,1}(p);\Z)$ is
in fact an $\F_p$--representation of $\Sp_{2g}\F_p$.
\begin{theorem}
\label{thm:ModgL}
Fix a prime $p$. Then the sequence $\{H_1(\Mod_{g,1}(p);\Z)\}$ of
$\Sp_{2g}\F_p$--representations is uniformly stably periodic with
period $p$.
\end{theorem}
\begin{proof}
Let $H:=H_1(S_{g,1};\F_p)$ be the standard representation of
$\Sp_{2g}\F_p$. For any prime $p$, the representation $\bwedge^3 H$
has as composition factors the simple $\Sp_{2g}\F_p$--modules:
\begin{align*}
&V(\omega_1),V(\omega_3)&\text{for }g\equiv 1\bmod{p}\\
&V(\omega_1),V(\omega_3),V(\omega_1)&\text{for }g\not\equiv
1\bmod{p}
\end{align*}
Putman proves in \cite[Theorem 7.8]{Pu} that for $p$ odd and $g\geq
5$, the group $H_1(\Mod_{g,1}(p);\Z)$ is an extension of
$\fsp_{2g}\F_p$ by $\bwedge^3 H$. Thus $H_1(\Mod_{g,1}(p);\Z)$ has
composition factors
\begin{align*}
&V(\omega_1),V(2\omega_1),V(\omega_3)&\text{ for }g\equiv 1\bmod{p}\\
&V(\omega_1)^2,V(2\omega_1),V(\omega_3)&\text{ for }g\not\equiv
1\bmod{p}.
\end{align*}
For $p=2$, Putman proves that $H_1(\Mod_{g,1}(2);\Z)$ is an
extension of $H_1(\Gamma_{2g}^{\rm Sp}(2);\F_2)$ by
$\bwedge^3H\oplus B_2/\F_2$. The former has composition factors
$\fsp_{2g}\F_2$ and $V(\omega_1)$, and Shvartsman describes $B_2$ as
in Theorem~\ref{thm:torabtor}. We conclude that for $g\geq 5$, the
group $H_1(\Mod_{g,1}(2);\Z)$ has the following composition factors
as an $\Sp_{2g}\F_2$--module:
\begin{align*}
&V(0)^2,V(\omega_1)^4,V(\omega_2)^2,V(\omega_3)&\text{ for $g$ odd }\\
&V(0)^4,V(\omega_1)^5,V(\omega_2)^2,V(\omega_3)&\text{ for $g$
even}
\end{align*}
Thus in both cases we see that the abelianization is uniformly
stably periodic with period $p$.
\end{proof}
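As a bookkeeping check, the composition-factor multisets quoted in the $p=2$ case of the proof can be tallied directly; the labels below are our own shorthand ($\mathtt{"0"}$ for $V(0)$, $\mathtt{"w}i\mathtt{"}$ for $V(\omega_i)$), and the input lists are the multisets attributed above to $\fsp_{2g}\F_2$, $\bwedge^3 H$, and $B_2/\F_2$ (with $H$ contributing a single $V(\omega_1)$, and $B_2/\F_2$ dropping one $V(0)$ from Shvartsman's list for $B_2$):

```python
from collections import Counter

def tally(fsp, wedge3, b2_mod_f2):
    # total factors of H_1(Mod_{g,1}(2); Z): fsp + H + wedge^3 H + B_2/F_2
    return Counter(fsp + ["w1"] + wedge3 + b2_mod_f2)

odd = tally(["0", "w1", "w2"],         # fsp_2g F_2, g odd
            ["w1", "w3"],              # wedge^3 H, g odd (g = 1 mod 2)
            ["0", "w1", "w2"])         # B_2 / F_2, g odd
even = tally(["0", "0", "w1", "w2"],   # fsp_2g F_2, g even
             ["w1", "w1", "w3"],       # wedge^3 H, g even
             ["0", "0", "w1", "w2"])   # B_2 / F_2, g even
```

The totals reproduce $V(0)^2,V(\omega_1)^4,V(\omega_2)^2,V(\omega_3)$ for $g$ odd and $V(0)^4,V(\omega_1)^5,V(\omega_2)^2,V(\omega_3)$ for $g$ even, as stated.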
Given Theorem~\ref{thm:ModgL}, it is natural to make the following
conjecture.
\begin{conjecture}[Modular periodic stability for $\Mod_{g,1}(p)$]
Fix any $i\geq 0$ and a prime $p$. Then the sequence of
$\Sp_{2g}\F_p$--representations $\{H_i(\Mod_{g,1}(p);\Z)\}$ is
uniformly stably periodic with period $p$.
\end{conjecture}
We believe that all of the material in this section can be extended to
corresponding ``level $p$ congruence subgroups'' of $\IA_n$.
% Source metadata: arXiv:1008.1368, ``Representation theory and homological
% stability'' (https://arxiv.org/abs/1008.1368).
% https://arxiv.org/abs/1905.05349, ``Homogeneous surfaces admitting
% invariant connections''.
% Abstract: We compute all the simply connected homogeneous and
% infinitesimally homogeneous surfaces admitting one or more invariant
% affine connections. We find exactly six non-equivalent simply connected
% homogeneous surfaces admitting more than one invariant connection and four
% classes of simply connected homogeneous surfaces admitting exactly one
% invariant connection.

\section{Introduction}
From the XIXth century on, geometry has been understood by means of its transformation groups. A \emph{homogeneous} space $M$ is a smooth manifold endowed with a \emph{transitive smooth action} of a Lie group $G$. In particular, if $H$ is the stabilizer subgroup of a point $x\in M$, then the space $M$ can be recovered as the coset space $G/H$. Since the group $G$ acting on $M$ is seen as a transformation group, i.e., a subgroup of ${\rm Diff}(M)$, there is no loss of generality in assuming, whenever it is required, that the action of $G$ on $M$ is faithful.
The notion of homogeneous space has an infinitesimal version. An \emph{infinitesimal homogeneous space} is a manifold endowed with a \emph{Lie algebra representation} of a Lie algebra $\mathfrak g$ into the Lie algebra of smooth vector fields $\mathfrak X(M)$. As before, since the Lie algebra $\mathfrak g$ is seen as a Lie algebra of vector fields, there is no loss of generality in assuming, whenever it is required, that the representation is faithful.
The heuristics of F. Klein's celebrated \emph{Erlangen program} is that geometry should be studied through the invariants of the action of $G$ on $M$. For example, the invariants of the group of euclidean movements of euclidean space are distances, volumes, angles, etc.
An \emph{affine connection} on a manifold $M$ is a geometric construction that allows a notion of parallel transport of vectors. Affine connections are a kind of geometric structure, and are therefore transformed covariantly by diffeomorphisms. The problem of constructing invariant connections on a homogeneous space has been treated by S. Kobayashi and K. Nomizu in a series of papers summarized in \cite{kobayashi1996foundations2}, chapter X. In several examples of homogeneous spaces, there is a unique invariant connection.
We ask the following question: which homogeneous spaces of a fixed dimension admit exactly one, or strictly more than one, invariant connection? Is it possible to give a complete list? We can solve this problem for low-dimensional manifolds. We start with the local problem, for infinitesimal homogeneous spaces. The point is that the structures of infinitesimal homogeneous spaces admitted by a surface are completely classified. This classification goes back to S. Lie, who completed in \cite{Lie} the task for Lie algebras of analytic vector fields in $\mathbb C^2$ and $\mathbb C^3$. The local classification of finite dimensional Lie algebras of smooth vector fields in $\mathbb R^2$ was completed by González-López, Kamran and Olver in \cite{Olver}. The real list is larger than the complex one, as a complex Lie algebra may have several possible real forms. A completely different approach leading to the same classifications was followed by Komrakov, Churyumov and Doubrov in \cite{komrakov}.
By means of the Lie derivative of connections (Definition \ref{df:LD_connection} and Lemma \ref{lm:Lie_connection_zero}) we compute the space of connections invariant under an infinitesimal action of a Lie algebra. Then
we go through the list of all possible structures of infinitesimal homogeneous surfaces (Tables \ref{tabla:acciones primitivas}, \ref{tabla:acciones no transitivas}, \ref{tabla:imprimitive actions}), obtaining the space of invariant connections for each local class of infinitesimal actions (Theorems \ref{th:primitive} and \ref{th:imprimitive}). We also compute the space of invariant connections for non-transitive actions, for the sake of completeness (Propositions \ref{pr:no_transitive1} and \ref{pr:no_transitive2}). Finally, we integrate our results to obtain the homogeneous surfaces admitting invariant connections. We find that there is a finite list of simply connected homogeneous surfaces admitting invariant connections. In particular, there are $6$ non-equivalent simply connected homogeneous surfaces admitting more than one invariant connection. Our main results are the following:
\medskip
\noindent{\bf Theorem \ref{th:hs}} \emph{
Let $M$, endowed with an action of a connected Lie group $G$, be a simply connected homogeneous surface. Let us assume that $M$ admits at least two $G$-invariant connections. Then, $M$ is equivalent to one of the following cases:
\begin{enumerate}
\item[(a)] The affine plane $\mathbb R^2$ with one of the following transitive groups of affine transformations:
$${\rm Res}_{(2:1)}(\mathbb R^2) = \left\{ \left[ \begin{array}{c}x\\ y\end{array}\right] \mapsto A \left[ \begin{array}{c}x\\ y\end{array}\right] + \left[ \begin{array}{c}b_1\\ b_2\end{array}\right]\mid A =
\left[ \begin{array}{cc} \lambda^2 & 0 \\ 0 & \lambda \end{array} \right],\,\lambda>0
\right\},$$
$${\rm Trans}(\mathbb R^2) = \left\{ \left[ \begin{array}{c}x\\ y\end{array}\right] \mapsto \left[ \begin{array}{c}x\\ y\end{array}\right] + \left[ \begin{array}{c}b_1\\ b_2\end{array}\right] \mid b_1,b_2\in\mathbb R \right\},$$
$${\rm Res}_{(0:1)}(\mathbb R^2) = \left\{ \left[ \begin{array}{c}x\\ y\end{array}\right] \mapsto A \left[ \begin{array}{c}x\\ y\end{array}\right] + \left[ \begin{array}{c}b_1\\ b_2\end{array}\right]\mid A =
\left[ \begin{array}{cc} 1 & 0 \\ 0 & \lambda \end{array} \right],\,\lambda>0
\right\}.$$
\item[(b)] ${\rm SL}_2(\mathbb R)/U$ where $U$ is the subgroup of upper unipotent matrices
$$ U = \left\{ A\in {\rm SL}_2(\mathbb R) \mid A =
\left[ \begin{array}{cc} 1 & \lambda \\ 0 & 1 \end{array} \right],\,\lambda\in\mathbb R
\right\}.$$
\item[(c)] $\mathbb R\ltimes \mathbb R$ acting on itself by left translations.
\item[(d)] $G/H$ with $G = \mathbb R^2\ltimes \mathbb R$ and $H = (\mathbb R\times 0)\ltimes 0$.
\end{enumerate}
}
\medskip
\noindent{\bf Theorem \ref{th:hs2}} \emph{
Let $M$, with the action of $G$, be a simply connected homogeneous surface. Let us assume that $M$ admits exactly one $G$-invariant connection. Then, $M$ is equivalent to one of the following cases:
\begin{enumerate}
\item[(a)] The affine plane $\mathbb R^2$ with $G$ any connected transitive subgroup of the group ${\rm Aff}(\mathbb R^2)$ of affine transformations that contains the group of translations and is not conjugate to any of the following groups:
$${\rm Res}_{(2:1)}(\mathbb R^2) = \left\{ \left[ \begin{array}{c}x\\ y\end{array}\right] \mapsto A \left[ \begin{array}{c}x\\ y\end{array}\right] + \left[ \begin{array}{c}b_1\\ b_2\end{array}\right]\mid A =
\left[ \begin{array}{cc} \lambda^2 & 0 \\ 0 & \lambda \end{array} \right],\,\lambda>0
\right\},$$
$${\rm Trans}(\mathbb R^2) = \left\{ \left[ \begin{array}{c}x\\ y\end{array}\right] \mapsto \left[ \begin{array}{c}x\\ y\end{array}\right] + \left[ \begin{array}{c}b_1\\ b_2\end{array}\right] \mid b_1,b_2\in\mathbb R \right\},$$
$${\rm Res}_{(0:1)}(\mathbb R^2) = \left\{ \left[ \begin{array}{c}x\\ y\end{array}\right] \mapsto A \left[ \begin{array}{c}x\\ y\end{array}\right] + \left[ \begin{array}{c}b_1\\ b_2\end{array}\right]\mid A =
\left[ \begin{array}{cc} 1 & 0 \\ 0 & \lambda \end{array} \right],\,\lambda>0
\right\}.$$
(This case includes the euclidean plane.)
\item[(b)] ${\rm SL}_2(\mathbb R)/H$ where $H$ is the group of special diagonal matrices,
$$H = \left\{ A \in {\rm SL}_{2}(\mathbb R) \mid
A = \left[ \begin{array}{cc} \lambda & 0 \\ 0 & \lambda^{-1} \end{array} \right], \,\, \lambda>0 \right\}.$$
\item[(c)] The hyperbolic plane
$$\mathbb H = \{z\in\mathbb C \mid {\rm Im}(z)>0 \}$$
with the group ${\rm SL}_2(\mathbb R)$ of hyperbolic rotations.
\item[(d)] The sphere
$$S^2 = \{(x,y,z)\in \mathbb R^3\mid x^2 + y^2 + z^2 =1 \}$$
with its group ${\rm SO}_3(\mathbb R)$ of rotations.
\end{enumerate}
}
\medskip
Some related work has been presented by Kowalski, Opozda, Vl\'a\v{s}ek and Arias-Marco \cite{kowalski2008, kowalski2004, opozda2004}, who answered a dual question in the local case: what are the local canonical forms of affine connections on surfaces whose symmetries act transitively? Their methods are similar to ours in the following sense: their results and ours are obtained through a careful analysis of the local classification of Lie algebra actions given in \cite{Olver}.
\section{Invariant connections}
Here we review the construction of the space of connections, the bundle of connections, and the action of diffeomorphisms and vector fields on them, see for instance \cite{castrillon, garcia}. Let $M$ be a smooth manifold, $\mathfrak X(M)$ the Lie algebra of smooth vector fields in $M$ and ${\rm End}_M(\mathfrak X(M))$ the module of $\mathcal C^{\infty}(M)$-linear endomorphisms of $\mathfrak X(M)$. An \emph{affine connection} in $M$ is a $\mathbb R$-linear map
$$\nabla \colon \mathfrak X(M)\to {\rm End}_M(\mathfrak X(M)),$$
such that $\nabla(fX) = df\otimes X + f\nabla X$.\footnote{In order to keep the usual notation for covariant derivatives we have $\nabla(X)(Y) = \nabla_Y X$.}
Let ${\rm Cnx}(M)$ denote the space of all affine connections in $M$. It is easy to check that if $\nabla\in {\rm Cnx}(M)$ is an affine connection and $\theta\in \Omega^1(M,{\rm End}(TM))$ is a $1$-form in $M$ with values in the endomorphisms of $TM$, then $\nabla+\theta$ is an affine connection in $M$. Conversely, the difference between two connections is a $1$-form with values in the endomorphisms of $TM$. Therefore, we have a free and transitive additive action,
$$\Omega^1(M,{\rm End}(TM))\times {\rm Cnx}(M) \to {\rm Cnx}(M),\quad
(\theta,\nabla)\to \nabla + \theta,$$
that gives ${\rm Cnx}(M)$ the structure of an affine space modeled over the vector space $\Omega^1(M,{\rm End}(TM))$.
It is clear that connections are local objects: they can be restricted to open subsets of $M$, and they are determined by their restrictions to a covering family of open subsets. The following construction is local.
Given a global frame\footnote{That is, an ordered basis of sections of the tangent bundle of $M$.} $s = (X_1,\ldots,X_n)$ in $M$ there is an associated connection $\nabla^s$ such that $\nabla^s(X_i)=0$ for $i\in \{1,\ldots,n\}$.
Taking $\nabla^s$ as the origin of ${\rm Cnx}(M)$ we obtain an identification,
$$\Gamma_s \colon {\rm Cnx}(M) \xrightarrow{\sim} \Omega^1(M,{\rm End}(TM)), \quad \nabla \mapsto \Gamma_s(\nabla) = \nabla - \nabla^s.$$
Here $\Gamma_s(\nabla)$ is the so-called Christoffel tensor of $\nabla$ with respect to the frame $s$. We say that two connections $\nabla$ and $\bar\nabla$ take the same value at the point $p$ of $M$ if their Christoffel tensors with respect to any frame coincide at $p$. The set $CM$ of the values at points of $M$ of affine connections is therefore an affine bundle over $M$ modeled over $T^*M\otimes {\rm End}(TM)$. An affine connection $\nabla$ is thus seen in a canonical way as a global section of the affine bundle $CM\to M$.
Given a diffeomorphism $f\colon M\to N$ the push forward of connections is an isomorphism $f_*\colon {\rm Cnx}(M)\to {\rm Cnx}(N)$ defined in terms of the push forward of vector fields by $(f_*\nabla)_XY = f_*(\nabla_{f^{-1}_*X} f^{-1}_*Y)$. The push forward of connections is compatible with the affine structure, in the sense that $f_*(\nabla + \theta) = f_*(\nabla) + f_*(\theta)$ for a connection $\nabla$ and a $1$-form of endomorphisms $\theta$.
If we look at the connections as sections of the affine bundle $CM$, then we have that the push forward is given by a natural transformation $Cf$ of affine bundles,
\begin{equation}\label{eq:diagrama1}
\xymatrix{ CM\ar[d] \ar[r]^-{Cf} & CN \ar[d] \\
M \ar[r]^-{f} \ar@/^20pt/[u]^-{\nabla} & N \ar@/^-20pt/[u]_-{f_*\nabla}}
\end{equation}
where $(Cf)(\nabla(p)) = (f_*\nabla)(f(p))$.
Let us fix a smooth action\footnote{By convention, we consider actions on the left; we use the standard notation $gp = L_g(p)$.} of a Lie group $G$ on $M$. A tensor $\theta$ is $G$-invariant if $(L_g)_*\theta = \theta$ for all $g$ in $G$. The space of $G$-invariant $1$-forms of endomorphisms is denoted by $\Omega^1(M,{\rm End}(TM))^G$. Analogously, we say that a connection $\nabla\in{\rm Cnx}(M)$ is $G$-invariant if $(L_g)_*\nabla=\nabla$ for all $g\in G$; we denote by ${\rm Cnx}(M)^G$ the set of $G$-invariant connections in $M$. It is clear that the sum of a $G$-invariant connection and a $G$-invariant $1$-form of endomorphisms yields a $G$-invariant connection. Hence ${\rm Cnx}(M)^G$ is either empty or an affine space modeled over the space $\Omega^1(M,{\rm End}(TM))^G$.
Let us denote by $G_p$ the stabilizer\footnote{That is, the subgroup of $G$ formed by the elements $g\in G$ such that $gp = p$.} of a point $p\in M$. We have an action of $G_p$ on the fiber $(CM)_p$ by affine transformations.
There is also a natural linear representation of $G_p$ in $T_p^*M \otimes {\rm End}(T_pM)$.
\begin{proposition}\label{pr:inv1}
If $G$ acts transitively on $M$ then ${\rm Cnx}(M)^G$ has finite real dimension $\leq (\dim M)^3$. In this case:
\begin{enumerate}
\item ${\rm Cnx}(M)^G \simeq (CM)_p^{G_p}$ for any $p\in M$.
\item If ${\rm Cnx}(M)^G\neq \emptyset$ then
$\dim {\rm Cnx}(M)^G = \dim \left(T_p^*M \otimes {\rm End}(T_pM)\right)^{G_p}$.
\end{enumerate}
\end{proposition}
\begin{proof}
(1) There is a natural evaluation map ${\rm Cnx}(M)\to (CM)_p$, $\nabla\mapsto \nabla(p)$. It is clear that it maps ${\rm Cnx}(M)^G$ into $(CM)_p^{G_p}$. Using the action of $G$ on $M$ we construct an inverse map,
$$(CM)_p^{G_p} \xrightarrow{\,\sim\,} {\rm Cnx}(M)^G, \quad c_p\mapsto \nabla,$$
where $\nabla$ is defined as $\nabla(gp) = C(L_g)(c_p)$; this is well defined precisely because $c_p$ is fixed by $G_p$.
(2) It is clear that $(CM)_p$ is an affine space modeled over $T_p^*M \otimes {\rm End}(T_pM)$. The action of $G_p$ is compatible with the affine structure; therefore $(CM)_p^{G_p}$, if not empty, is an affine space modeled over $(T_p^*M \otimes {\rm End}(T_pM))^{G_p}$.
\end{proof}
The previous situation can also be considered infinitesimally. Let us recall that an action of a Lie algebra $\g$ on $M$ is a Lie algebra morphism $\phi\colon\g \to \mathfrak{X}(M)$. Without loss of generality we may assume that the action is faithful, that is, that $\phi$ is injective. In this case we regard $\mathfrak g\subset\mathfrak X(M)$ as a Lie algebra of vector fields in $M$; from now on we consider the elements of $\mathfrak g$ as vector fields in $M$.
We say that a tensor $\theta$ in $M$ is $\mathfrak g$-invariant if for any local diffeomorphism $\sigma \colon U\xrightarrow{\sim} V$ that belongs to the flow pseudogroup of a vector field $X\in \mathfrak g$ we have $\sigma_*(\theta)|_{V} = \theta|_{V}$. The analogous definition of $\mathfrak g$-invariance applies to connections. As before, the space of $\mathfrak g$-invariant connections ${\rm Cnx}(M)^{\mathfrak g}$ is either empty or an affine space modeled over $\Omega^1(M,{\rm End}(TM))^{\mathfrak g}$.
Let us recall that $\mathfrak g$ acts transitively on $M$ if for all $p\in M$ the map $\phi_p\colon \mathfrak g\to T_pM$, $X\mapsto X_p$, is surjective. This implies that, for connected $M$, the flow pseudogroup of the action is also transitive.
Diagram \eqref{eq:diagrama1} in the particular case of $f = \sigma_t$, the time $t$ flow of a vector field $X$ in $M$, gives us the prolongation $\tilde X\in \mathfrak X(CM)$,
$$\tilde X = \left.\frac{d}{dt}\right|_{t=0}C\sigma_t.$$
It is clear that $\tilde X$ projects onto $X$ and the flow of $\tilde X$ gives affine transformations between the fibers of $CM\to M$. For a given $p\in M$ the kernel $\ker(\phi_p) = \mathfrak g_p$ is the so-called \emph{stabilizer algebra} of the point $p$. There is a natural representation,
$$\tilde \phi|_{CM_p}\colon \mathfrak g_p \to \mathfrak X(CM_p)$$
the set of zeroes in $CM_p$ of these vector fields is an affine subspace denoted $CM_p^{\mathfrak g_p}$.
\begin{proposition}\label{prop:finitedim_inf}
Let us assume that $\mathfrak g$ acts transitively on a connected manifold $M$. Then the space ${\rm Cnx}(M)^{\g}$ has finite real dimension and
${\rm Cnx}(M)^{\g} \simeq CM_p^{\mathfrak g_p}$ for any $p\in M$.
\end{proposition}
\begin{proof}
It follows from the same argument as in Proposition \ref{pr:inv1}, using the flow pseudogroup of $\mathfrak g$ instead of the Lie group $G$.
\end{proof}
Let us recall that given an action of $G$ in $M$ there is an associated infinitesimal action of ${\rm Lie}(G)$ given by,
$$\phi(A)_p = \left.\frac{d}{dt}\right|_{t=0} \exp(tA)\star p.$$
There is a natural relation between the invariants of an action and that of its associated infinitesimal action. In particular, for invariant connections, we have that if $G$ is a connected Lie group acting on $M$ then ${\rm Cnx}(M)^G$ $=$ ${\rm Cnx}(M)^{{\rm Lie}(G)}$.
\subsection{Lie derivative of a connection}
In this section we develop a method for computing the invariant connections of an infinitesimal action. The well-known notion of Lie derivative for tensors can be extended to sections of natural bundles as in \cite{Kolar}. In the particular case of connections, it can be defined as follows.
\begin{definition}\label{df:LD_connection}
The Lie derivative of a connection $\nabla$ in the direction of a field $X$ is defined as:
$$
\mathcal{L}_X\nabla = \lim_{t\to 0} \frac{\sigma_{-t*}\nabla - \nabla}{t},
$$
where $\{\sigma_t\}_{t\in\mathbb R}$ is the flow of the field $X.$
\end{definition}
Note that for an affine connection $\nabla$ and a vector field $X$ the Lie derivative $\mathcal L_X\nabla\in \Omega^1(M,{\rm End}(TM))$ is not a connection but a $1$-form of endomorphisms.
\begin{proposition}
The Lie derivative of $\nabla$ has the following properties:
\begin{enumerate}\label{leibniz}
\item (Linearity) $\mathcal{L}_{fX+gY}\nabla = f(\mathcal{L}_X\nabla) + g(\mathcal{L}_Y\nabla)$.
\item (Leibniz formula) $(\mathcal{L}_X\nabla)(Y,Z) = [X,\nabla_Y Z] - \nabla_{[X,Y]}Z - \nabla_Y[X,Z]$.
\end{enumerate}
\end{proposition}
\begin{proof}
(1) can be easily checked from the definition. Let us prove (2). Without loss of generality, assume that $X$ is complete and that $\{\sigma_t\}_{t\in\mathbb R}$ is its flow.
$$(\mathcal{L}_X\nabla)(Y,Z) = \lim_{t\to 0} \frac{\sigma_{-t*}\nabla - \nabla}{t}(Y,Z)
= \lim_{t\to 0} \frac{(\sigma_{-t*}\nabla)_YZ - \nabla_YZ}{t} = $$
$$ \lim_{t\to 0} \frac{\sigma_{-t*}(\nabla_YZ) - \nabla_YZ}{t} + \lim_{t\to 0} \frac{\sigma_{-t*}(\nabla_{\sigma_{t*}Y}\sigma_{t*}Z) - \sigma_{-t*}(\nabla_YZ)}{t} =$$
$$[X,\nabla_YZ] + \lim_{t\to 0} \sigma_{-t*} \frac{\nabla_{\sigma_{t*}Y}\sigma_{t*}Z - \nabla_YZ}{t} = $$
$$ [X,\nabla_Y Z]+ \lim_{t\to 0} \frac{\nabla_{\sigma_{t*}Y}\sigma_{t*}Z - \nabla_Y\sigma_{t*}Z}{t} +
\lim_{t\to 0}\frac{\nabla_Y\sigma_{t*}Z - \nabla_YZ}{t} = $$
$$[X,\nabla_Y Z] + \lim_{t\to 0} \nabla_{\frac{\sigma_{t*}Y-Y}{t}} \sigma_{t*}Z + \lim_{t\to 0} \nabla_Y \frac{\sigma_{t*}Z-Z}{t}=$$
$$[X,\nabla_Y Z] - \nabla_{[X,Y]}Z - \nabla_Y[X,Z].$$
\end{proof}
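For the computations in coordinates that follow, it is convenient to record the component expression of the Lie derivative. Writing $X = X^m\partial_{x_m}$ and $\nabla_{\partial_{x_j}}\partial_{x_k} = \Gamma^l_{jk}\partial_{x_l}$, with summation over repeated indices, a routine expansion of the Leibniz formula gives
$$(\mathcal L_X\nabla)_{\partial_{x_j}}\partial_{x_k} = \left( X^m\partial_{x_m}\Gamma^l_{jk} - \Gamma^m_{jk}\,\partial_{x_m}X^l + \Gamma^l_{mk}\,\partial_{x_j}X^m + \Gamma^l_{jm}\,\partial_{x_k}X^m + \partial_{x_j}\partial_{x_k}X^l \right)\partial_{x_l}.$$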
\begin{lemma}\label{lm:Lie_connection_zero}
Let $X$ be a vector field in $M$
and $\nabla$ be a connection. The following are equivalent:
\begin{enumerate}
\item $\mathcal{L}_X \nabla = 0$.
\item $\nabla$ is invariant by the flow of $X$.
\end{enumerate}
\begin{proof}
$(2)\Rightarrow(1)$ is clear from the definition. Let us see $(1)\Rightarrow(2)$.
Without loss of generality let us assume that $X$ is complete, and let $\sigma_t$ be its time $t$ flow. Let us define $\Gamma(t) = \sigma_{t*}(\nabla)-\nabla$. By hypothesis we have $\Gamma(0) = 0$ and $\left.\frac{d}{dt}\right|_{t=0}\Gamma(t)=0$. Moreover, for any $t$,
$$\frac{d}{dt}\Gamma(t) =
\lim_{s\to 0} \frac{\Gamma(t+s) - \Gamma(t)}{s} =
\lim_{s\to 0}
\frac{\sigma_{(t+s)*}\nabla - \sigma_{t*}\nabla}{s} = \sigma_{t*}(- \mathcal{L}_X(\nabla)) = 0.$$
Therefore $\Gamma(t)$ is constant and, since $\Gamma(0)=0$, we conclude that $\Gamma(t) = 0$ for all $t$.
\end{proof}
\label{lm:inv}
\end{lemma}
\begin{theorem}
Let $\mathfrak g$ be a Lie algebra of smooth vector fields in a manifold $M$ and let $\nabla$ be a connection in $M$. The following are equivalent:
\begin{enumerate}
\item $\nabla$ is $\mathfrak g$-invariant.
\item $\mathcal{L}_{A}\nabla = 0$, for all $A\in\mathfrak g$.
\item $\mathcal{L}_{A_i}\nabla = 0$, for all $i=1,\ldots,n$, where $\{A_1,\ldots,A_n\}$ is a system of generators of $\mathfrak g$ as Lie algebra.
\end{enumerate}
\end{theorem}
\begin{proof}
If $\nabla$ is $\mathfrak g$-invariant, then $\sigma_{t*}(\nabla) = \nabla$ for the flow $\{\sigma_t\}$ of any vector field $A\in\mathfrak g$. By definition of the Lie derivative we have $\mathcal{L}_{A}\nabla = 0$, so $(1)\Rightarrow(2)$. The implication $(2)\Rightarrow (3)$ is immediate. Finally, to prove $(3)\Rightarrow (1)$ it is enough to note that, by Lemma \ref{lm:inv}, $\nabla$ is invariant with respect to the flow of each generator $A_i$. Thus it is invariant with respect to any composition of these flows, and hence with respect to the flow of any vector field $A\in\mathfrak g$.
\end{proof}
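Let us also note that the implication $(3)\Rightarrow(2)$ can be seen algebraically: the Lie derivative on sections of natural bundles satisfies the commutator identity $\mathcal L_{[A,B]} = \mathcal L_{A}\circ \mathcal L_{B} - \mathcal L_{B}\circ \mathcal L_{A}$ (see \cite{Kolar}), so that
$$\mathcal L_{[A_i,A_j]}\nabla = \mathcal L_{A_i}(\mathcal L_{A_j}\nabla) - \mathcal L_{A_j}(\mathcal L_{A_i}\nabla) = 0$$
whenever $\mathcal L_{A_i}\nabla = \mathcal L_{A_j}\nabla = 0$, and the vanishing propagates to iterated brackets of the generators.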
\begin{lemma}
Let us consider a Lie algebra $\mathfrak g$ of vector fields in an open subset
$U\subseteq \mathbb R^n$. If $\mathfrak g$ contains the translation vector field $\partial_{x_i}$ then, for any $\mathfrak g$-invariant connection $\nabla$, the Christoffel symbols $\Gamma_{jk}^l$ are independent of $x_i$.
\end{lemma}
\begin{proof}
Using the Leibniz formula of Proposition \ref{leibniz} we have
$$
\left(\mathcal{L}_{\partial_{x_i}} \nabla \right)_{\partial_{x_j}} {\partial_{x_k}} = \left[\partial_{x_i},\nabla_{\partial_{x_j}}\partial_{x_k}\right] - \nabla_{\left[\partial_{x_i},\partial_{x_j}\right]}\partial_{x_k} - \nabla_{\partial_{x_j}}\left[\partial_{x_i},\partial_{x_k}\right],
$$
\noindent for $i, j, k \in \{1,\ldots,n\}$. As $\left[\partial_{x_i},\partial_{x_j}\right]=0$ for any $i, j$, the previous equality reduces to
$$
\left(\mathcal{L}_{\partial_{x_i}} \nabla \right)_{\partial_{x_j}} {\partial_{x_k}} = \left[\partial_{x_i},\nabla_{\partial_{x_j}}\partial_{x_k}\right].
$$
Since $\nabla$ is $\mathfrak g$-invariant we have $\mathcal{L}_{\partial_{x_i}} \nabla = 0$, hence $\left[\partial_{x_i},\nabla_{\partial_{x_j}}\partial_{x_k}\right]=0$, and therefore
$\partial_{x_i} \Gamma_{jk}^l = 0$ for all $j,k,l \in \{1,\ldots,n\}$.
\end{proof}
\begin{remark}\label{constante}
If the Lie algebra contains all the translations $\partial_{x_i}$, then the Christoffel symbols of any invariant connection are constant.
Moreover, if the invariance equations force all of them to vanish, then the only invariant connection is the standard flat one.
\end{remark}
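To illustrate the remark, here is a minimal symbolic check of our own, using sympy (the example fields are hypothetical). For the standard connection all Christoffel symbols vanish, and expanding the Leibniz formula in coordinates leaves only the second-derivative term, $(\mathcal L_X\nabla)_{\partial_{x_j}}\partial_{x_k} = (\partial_{x_j}\partial_{x_k}X^l)\,\partial_{x_l}$; hence the standard connection is invariant precisely under vector fields with affine components:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
coords = [x, y]

def lie_flat(X):
    """Components (L_X Gamma)^l_{jk} for the standard connection (Gamma = 0):
    only the second derivatives of the components of X survive."""
    return [[[sp.diff(X[l], coords[j], coords[k]) for k in (0, 1)]
             for j in (0, 1)] for l in (0, 1)]

affine = [2*x - y + 3, x + 5]         # affine components: invariance holds
quadratic = [x**2 - y**2, 2*x*y]      # quadratic components: invariance fails

inv_affine = all(c == 0 for l in lie_flat(affine) for row in l for c in row)
inv_quadratic = all(c == 0 for l in lie_flat(quadratic) for row in l for c in row)
print(inv_affine, inv_quadratic)  # True False
```

This is consistent with the appearance, in the theorems below, of the standard affine connection as the unique invariant connection for algebras of affine vector fields.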
\subsection{Classification of finite dimensional Lie algebra actions on germs of surfaces}
The local classification of infinitesimal actions of finite dimensional Lie algebras in manifolds of complex dimension $2$ and $3$ was given by S. Lie in the 19th century. The real classification in dimension $2$ was completed in \cite{Olver} and \cite{komrakov}.
In Tables \ref{tabla:acciones primitivas}, \ref{tabla:acciones no transitivas} and \ref{tabla:imprimitive actions} we reproduce\footnote{
In Tables \ref{tabla:acciones primitivas}, \ref{tabla:acciones no transitivas} and \ref{tabla:imprimitive actions} we keep the numbering and cases of \cite{Olver}. However, since we distinguish the transitive and non-transitive cases, some cases are listed in a different order. This explains the gaps in the numbering of the tables.
}
the results of \cite{Olver}: the local classification of faithful actions of finite dimensional Lie algebras on open subsets of the real plane.
\begin{table}[ht]
\centering
\begin{tabular}{p{0.1cm} p{9cm} p{2.5cm}}
\hline
& Generators & Structure \\
\hline
1.& $\{\partial_x,\partial_y,\alpha(x\partial_x+y\partial_y)+y\partial_x-x\partial_y\}$, $\alpha \geqslant 0 $ & $\mathbb{R} \ltimes \mathbb{R}^2$ \\
2.& $\{\partial_x,x\partial_x+y\partial_y,\left(x^2-y^2\right)\partial_x+2xy\partial_y\}$ & $\mathfrak{sl}(2)$ \\
3.& $\{y\partial_x-x\partial_y, (1+x^2-y^2)\partial_x+2xy\partial_y,2xy\partial_x+(1+y^2-x^2)\partial_y\}$ & $\mathfrak{so}(3)$ \\
4.& $\{\partial_x,\partial_y,x\partial_x+y\partial_y,y\partial_x-x\partial_y\}$ & $\mathbb{R}^2 \ltimes \mathbb{R}^2 $ \\
5.& $\{\partial_x, \partial_y,x\partial_x-y\partial_y, y\partial_x, x\partial_y \}$& $\mathfrak{sl}(2)\ltimes \mathbb{R}^2$ \\
6.& $\{\partial_x, \partial_y, x\partial_x, y\partial_y, x\partial_y, y\partial_x \}$& $\mathfrak{gl}(2)\ltimes \mathbb{R}^2$ \\
7.& $\{\partial_x, \partial_y, x\partial_x+y\partial_y, y\partial_x-x\partial_y, \left(x^2-y^2\right)\partial_x+2xy\partial_y, 2xy\partial_x+(y^2-x^2)\partial_y \}$& $\mathfrak{sl}(3,1)$ \\
8.& $\{\partial_x, \partial_y, x\partial_x, x\partial_y, y\partial_x, y\partial_y, x^2\partial_x+xy\partial_y, xy\partial_x+y^2\partial_y\}$& $\mathfrak{sl}(3)$ \\
\hline
\end{tabular}
\medskip
\caption{Local classification of faithful primitive actions of finite dimensional Lie algebras on the real plane \cite{Olver}.}
\label{tabla:acciones primitivas}
\end{table}
Table \ref{tabla:acciones primitivas} contains the classification of primitive actions. Let us recall that a Lie algebra action on a surface $M$ is primitive if the induced action on the space of directions $\mathbb{P}(T_pM)$ at each point $p$ has no fixed points. This is equivalent to saying that the action is not by infinitesimal symmetries of a foliation of $M$. In physical terms, it means that we consider an \emph{isotropic} geometry on the surface. Therefore, the cases 1--8 correspond to classical $2$-dimensional geometries, namely:
\begin{enumerate}
\item It depends on the parameter $\alpha$. The case $\alpha=0$ corresponds to euclidean geometry. The case $\alpha\neq 0$ corresponds to a primitive subgroup of the affine transformations of the complex plane, spanned by the translations and multiplication by the exponential of a complex number of norm different from $1$.
\item Hyperbolic transformations of the half plane.
\item Rotations of the sphere.
\item Affine transformations of the complex plane.
\item Volume preserving affine transformations of the real plane.
\item Affine transformations of the real plane.
\item Conformal transformations of the Riemann sphere.
\item Projective transformations of the real plane.
\end{enumerate}
\begin{table}[ht]
\centering
\begin{tabular}{p{0.1cm} p{9cm} p{2.5cm}}
\hline
& Generators & Structure \\
\hline
9. & $\{\partial_x\}$ & $\mathbb{R}$\\
10.& $\{\partial_x, x\partial_x \}$ & $\mathfrak{h}_2$\\
11. & $\{\partial_x, x\partial_x, x^2\partial_x\}$& $\mathfrak{sl}(2)$\\
20. & $\{\partial_y, \xi _1(x)\partial_y,\cdots, \xi _r(x)\partial_y\}, r\geq 1$& $\mathbb{R}^{r+1}$\\
21. & $\{\partial_y, y\partial_y, \xi_1(x)\partial_y,\cdots, \xi_r(x)\partial_y\}$, with $r\geq 1$ & $\mathbb{R}\ltimes \mathbb{R}^{r+1}$\\
\hline
\end{tabular}
\medskip
\caption{Local classification of non-transitive faithful actions of finite dimensional Lie algebras on the real plane \cite{Olver}.
The functions $\xi_j$ are linearly independent. }
\label{tabla:acciones no transitivas}
\end{table}
\begin{table}[ht]
\centering
\begin{tabular}{p{0.1cm} p{9.3cm} p{2.5cm}}
\hline
& Generators & Structure \\
\hline
12. & $\{\partial_x, \partial_y,x\partial_x+\alpha y\partial_y\}$& $\mathbb{R} \ltimes \mathbb{R}^2$\\
13. & $\{\partial_x,\partial_y, x\partial_x, y\partial_y \}$& $\mathfrak{h}_2\oplus \mathfrak{h}_2$ \\
14. & $\{\partial_x, \partial_y, x\partial_x, x^2\partial_x\}$& $\mathfrak{gl}(2)$\\
15. & $\{\partial_x,\partial_y, x\partial_x, y\partial_y, x^2\partial_x, y^2\partial_y\}$& $\mathfrak{sl}(2)\oplus \mathfrak{sl}(2)$\\
16. & $\{\partial_x,\partial_y, x\partial_x, y\partial_y, x^2\partial_x\}$&$\mathfrak{sl}(2)\oplus \mathfrak{h}_2$ \\
17. & $\{\partial_x+\partial_y, x\partial_x+y\partial_y, x^2\partial_x+y^2\partial_y \}$& $\mathfrak{sl}(2)$\\
18. & $\{\partial_x, 2x\partial_x+y\partial_y, x^2\partial_x+xy\partial_y\}$& $\mathfrak{sl}(2)$ \\
19. & $\{\partial_x, x\partial_x, y\partial_y, x^2\partial_x+xy\partial_y\}$& $\mathfrak{gl}(2)$\\
22. & $\{\partial_x, \eta_1(x)\partial_y, \cdots, \eta_r(x)\partial_y \}, r\geq 1$& $\mathbb{R}\ltimes \mathbb{R}^{r} $\\
23. & $\{\partial_x, y\partial_y, \eta_1(x)\partial_y, \cdots, \eta_r(x)\partial_y \}, r\geq 1$& $\mathbb{R}^2\ltimes \mathbb{R}^r$\\
24. & $\{\partial_x, \partial_y, x\partial_x+\alpha y\partial_y, x\partial_y, \cdots, x^r\partial_y\}, r\geq 1$
& $\mathfrak{h}_2\ltimes \mathbb{R}^{r+1}$\\
25. & $\{\partial_x, \partial_y, x\partial_y, \cdots, x^{r-1}\partial_y, x\partial_x+\left(ry+x^r\right)\partial_y \}, r\geq 1$& $\mathbb{R} \ltimes(\mathbb{R}\ltimes \mathbb{R}^r)$\\
26. & $\{\partial_x, \partial_y, x\partial_x, x\partial_y, y\partial_y, x^2\partial_y, \cdots, x^r\partial_y \}, r\geq 1$ & $(\mathfrak{h}_2\oplus \mathbb{R})\ltimes \mathbb{R}^{r+1}$\\
27. & $\{\partial_x, \partial_y, 2x\partial_x+ry\partial_y, x\partial_y, x^2\partial_x+rxy\partial_y, x^2\partial_y, \cdots, x^r\partial_y\}, r\geq 1$& $\mathfrak{sl}(2)\ltimes \mathbb{R}^{r+1}$\\
28. & $\{\partial_x, \partial_y, x\partial_x, x\partial_y, y\partial_y, x^2\partial_x+rxy\partial_y, x^2\partial_y,\cdots, x^r\partial_y\}, r\geq 1$& $\mathfrak{gl}(2)\ltimes \mathbb{R}^{r+1}$\\
\hline
\end{tabular}
\medskip
\caption{Local classification of faithful transitive imprimitive actions of finite dimensional Lie algebras on the real plane \cite{Olver}. The functions $\eta_j$ form a basis of solutions of a linear differential equation of order $r$ with constant coefficients.
}
\label{tabla:imprimitive actions}
\end{table}
Table \ref{tabla:acciones no transitivas} contains the local classification of non-transitive actions. They are, by necessity, imprimitive. Here we find the classical one-dimensional geometries: euclidean (case 9), affine (case 10) and projective (case 11). In these cases Proposition \ref{prop:finitedim_inf} does not apply and we obtain infinite dimensional spaces of invariant connections. Finally, Table \ref{tabla:imprimitive actions} contains the local classification of transitive imprimitive actions.
In the next section we compute the spaces of invariant connections for Lie algebra actions on connected surfaces.
\section{Invariant connections for Lie algebra actions on connected surfaces}
\subsection{Primitive Lie algebra actions}
For these classical cases, the existence of invariant connections is well known. By application of Lemma \ref{lm:inv}, we can also recover the following result.
\begin{theorem}\label{th:primitive}
Let $M$ be a connected surface endowed with a faithful primitive action of a finite dimensional Lie algebra $\mathfrak g$. Then there is either one $\mathfrak g$-invariant connection or none. Moreover, exactly one of the following cases holds:
\begin{itemize}
\item[(a)] $\mathfrak g$ is isomorphic to a Lie subalgebra of the Lie algebra of infinitesimal affine transformations of the plane. There is exactly one $\mathfrak g$-invariant connection; it is flat and torsion free.
\item[(b)] $\mathfrak g$ is isomorphic to the Lie algebra of infinitesimal isometries of a surface of non-vanishing constant curvature. There is exactly one $\mathfrak g$-invariant connection; it is torsion free and of constant curvature.
\item[(c)] $\mathfrak g$ is isomorphic either to the Lie algebra of infinitesimal conformal transformations of the Riemann sphere or to the Lie algebra of infinitesimal projective transformations of the real plane. There are no $\mathfrak g$-invariant connections.
\end{itemize}
\end{theorem}
\begin{proof}
In case (1), since $\partial_x$ and $\partial_y$ belong to $\g$, by Remark \ref{constante} the symbols $\Gamma_{ij}^k$ are constant. Let us consider $X=\alpha(x\partial_x+y\partial_y)+y\partial_x-x\partial_y$. Since $\left(\mathcal L_X \nabla \right)_{\partial_{x_i}}\partial_{x_j}=0$ we have the following equations:
\begin{multicols}{2}
\begin{align*}
\ \alpha \Gamma_{11}^1-\Gamma_{11}^2-\Gamma_{12}^1-\Gamma_{21}^1&=0\\
\alpha \Gamma_{11}^2+\Gamma_{11}^1-\Gamma_{12}^2-\Gamma_{21}^2&=0
\end{align*}
\begin{align*}
\alpha \Gamma_{12}^1-\Gamma_{12}^2-\Gamma_{22}^1+\Gamma_{11}^1&=0 \\
\alpha\Gamma_{12}^2+\Gamma_{12}^1-\Gamma_{22}^2+\Gamma_{11}^2&=0
\end{align*}
\begin{align*}
\alpha \Gamma_{21}^1-\Gamma_{21}^2+\Gamma_{11}^1-\Gamma_{22}^1&=0 \\
\alpha \Gamma_{21}^2+\Gamma_{21}^1+\Gamma_{11}^2-\Gamma_{22}^2&=0
\end{align*}
\begin{align*}
\alpha \Gamma_{22}^1-\Gamma_{22}^2+\Gamma_{12}^1+\Gamma_{21}^1&=0\\
\alpha \Gamma_{22}^2+\Gamma_{22}^1+\Gamma_{12}^2+\Gamma_{21}^2&=0
\end{align*}
\end{multicols}
We can write this system in matrix form:
\begin{equation*}
\left[\begin{array}{rrrrrrrr}
\alpha &-1&-1&0&-1&0&0&0 \\
1 & \alpha & 0 & -1 & 0 & -1 & 0 &0\\
1&0&\alpha&-1&0&0&-1&0\\
0&1&1&\alpha&0&0&0&-1\\
1&0&0&0&\alpha&-1&-1&0 \\
0&1&0&0&1&\alpha&0&-1\\
0&0&1&0&1&0&\alpha&-1 \\
0&0&0&1&0&1&1&\alpha
\end{array}\right]
\begin{bmatrix}
\Gamma_{11}^1\\
\Gamma_{11}^2\\
\Gamma_{12}^1\\
\Gamma_{12}^2\\
\Gamma_{21}^1\\
\Gamma_{21}^2\\
\Gamma_{22}^1\\
\Gamma_{22}^2
\end{bmatrix}=
\begin{bmatrix}
0\\0\\0\\0\\0\\0\\0\\0
\end{bmatrix}
\end{equation*}
The determinant of the matrix is given by the polynomial $\alpha^8 + 12 \alpha^6 + 30\alpha^4 + 28 \alpha^2+9 = (\alpha^2+1)^3(\alpha^2+9)$, which does not vanish for any $\alpha \geq 0$. Therefore the system only admits the trivial solution $ \Gamma_{ij}^l=0 $ for all $i, j, l \in \{1,2\}$.
\end{proof}
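The linear system above can be double-checked symbolically. The following sketch is our own (it uses sympy, and the helper name lie\_christoffel is ours): it implements the coordinate expansion of the Leibniz formula of Proposition \ref{leibniz} for constant Christoffel symbols, assembles the $8\times 8$ system for the field $X=\alpha(x\partial_x+y\partial_y)+y\partial_x-x\partial_y$, and recovers the determinant, which factors as $(\alpha^2+1)^3(\alpha^2+9)$:

```python
import sympy as sp

x, y, a = sp.symbols('x y alpha', real=True)
coords = [x, y]
# Case 1 of Table 1: X = alpha*(x dx + y dy) + y dx - x dy
X = [a*x + y, a*y - x]

# Constant Christoffel symbols Gamma^l_{jk} (indices 0-based)
G = [[[sp.Symbol(f'G{l}{j}{k}') for k in (0, 1)] for j in (0, 1)] for l in (0, 1)]

def lie_christoffel(l, j, k):
    """Component (L_X Gamma)^l_{jk}; the term X(Gamma^l_{jk}) vanishes
    because the symbols are constant."""
    e = -sum(G[m][j][k] * sp.diff(X[l], coords[m]) for m in (0, 1))
    e += sum(sp.diff(X[m], coords[j]) * G[l][m][k] for m in (0, 1))
    e += sum(sp.diff(X[m], coords[k]) * G[l][j][m] for m in (0, 1))
    e += sp.diff(X[l], coords[j], coords[k])
    return sp.expand(e)

unknowns = [G[l][j][k] for l in (0, 1) for j in (0, 1) for k in (0, 1)]
eqs = [lie_christoffel(l, j, k) for l in (0, 1) for j in (0, 1) for k in (0, 1)]
M = sp.Matrix([[e.coeff(u) for u in unknowns] for e in eqs])
det = sp.factor(M.det())
print(det)  # factors as (alpha**2 + 1)**3 * (alpha**2 + 9), nonzero for real alpha
```

Since the determinant never vanishes, the homogeneous system only has the trivial solution, in agreement with the proof.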
\subsection{Non-transitive Lie algebra actions on germs of surfaces}
\begin{table}[ht]
\begin{tabular}{|p{2.5cm}|p{9.3cm}|}
\hline
Case & Christoffel symbols
\\
\hline \hline
9 & $\Gamma_{ij}^l$ are functions of $y.$
\\
\hline
10& $\Gamma_{ij}^l=0$ except $\Gamma_{22}^2, \Gamma_{21}^1, \Gamma_{12}^1$, which are functions of $y.$
\\
\hline
20, with $r=1$&
{
\begin{flushleft}
\begin{align*}
&\Gamma_{22}^1=\Gamma_{21}^1=\Gamma_{12}^1=\Gamma_{22}^2=0,\\
&\Gamma_{12}^2+\Gamma_{21}^2-\Gamma_{11}^1=-\frac{\partial_x^2 \xi_k}{\partial_x \xi_k},
\end{align*}
\end{flushleft}
}
with $\Gamma_{11}^2, \Gamma_{12}^2, \Gamma_{21}^2$ and $\Gamma_{11}^1$ functions of $x.$
\\
\hline
21, with $r=1$&
{\begin{align*}
&\Gamma_{22}^1=\Gamma_{21}^1=\Gamma_{12}^1=\Gamma_{22}^2=\Gamma_{11}^2=0,\\
&\Gamma_{12}^2+\Gamma_{21}^2-\Gamma_{11}^1=-\frac{\partial_x^2 \xi_k}{\partial_x \xi_k},
\end{align*}
}
with $\Gamma_{12}^2, \Gamma_{21}^2$ and $\Gamma_{11}^1$ functions of $x.$
\\
\hline
\end{tabular}
\caption{Christoffel symbols for invariant connections of non-transitive Lie algebra actions on surfaces in canonical coordinates.}\label{tabla:Christofell_nontransitive}
\end{table}
For non-transitive Lie algebra actions we have the following results.
\begin{proposition}\label{pr:no_transitive1}
Let us consider a non-transitive Lie algebra action of $\mathfrak g\simeq \mathbb{R}^{r+1}$ on an open subset $M\subseteq \mathbb R^2$ corresponding to case (20) in Table \ref{tabla:acciones no transitivas}. If ${\rm Cnx}(M)^{\mathfrak g}\neq \emptyset$ then the dimension of $\mathfrak g$ is equal to $2$ ($r=1$). In this case, the Christoffel symbols of $\mathfrak g$-invariant connections are characterized by the equations:
{\begin{align*}
&\Gamma_{22}^1=\Gamma_{21}^1=\Gamma_{12}^1=\Gamma_{22}^2=0\\
&\Gamma_{12}^2+\Gamma_{21}^2-\Gamma_{11}^1=-\frac{\partial_x^2 \xi}{\partial_x \xi},
\end{align*}
}
with $\Gamma_{11}^2, \Gamma_{12}^2, \Gamma_{21}^2$ and $\Gamma_{11}^1$ functions of $x.$
\end{proposition}
\begin{proof}
Let us consider $\mathfrak g = \langle \partial_y, \xi_1(x)\partial_y,\ldots, \xi_r(x)\partial_y \rangle$. By taking components of $$\na{\xi_k(x)\partial_y}{\partial_{x_i}}{\partial_{x_j}} = 0$$
we obtain the following system:
\begin{align*}
\left(\Gamma_{12}^2+\Gamma_{21}^2-\Gamma_{11}^1\right)\partial_x \xi_k(x)+\partial_x^2\xi_k(x)&=0,
\\
\left(\Gamma_{21}^1-\Gamma_{22}^2 \right)\partial_x \xi_k(x)&=0,
\\
\left(\Gamma_{12}^1+\Gamma_{21}^1\right)\partial_x \xi_k(x)&=0,
\\
\left(\Gamma_{22}^2-\Gamma_{12}^1 \right)\partial_x \xi_k(x)&=0,
\\
\Gamma_{22}^1\partial_x \xi_k(x)&=0,
\\
\Gamma_{22}^2\partial_x \xi_k(x)&=0.
\end{align*}
This system is compatible only if for $k\neq l$ we have
$$\frac{\partial_x^2 \xi_k}{\partial_x \xi_k} =
\frac{\partial_x^2 \xi_l}{\partial_x \xi_l}.$$
Hence the functions $\xi_k(x)$ and $\xi_l(x)$ are related by an affine transformation, $\xi_l(x) = a\xi_k(x) + b$. Therefore, taking $\xi = \xi_1$, we have that $\mathfrak g$ is spanned by $\partial_y$ and $\xi(x)\partial_y$, which corresponds to the case $r=1$. We obtain that the $\Gamma^k_{ij}$ are functions of $x$ satisfying the following relations:
\begin{subequations}\label{21e1}
\begin{align}
&\Gamma_{22}^1=\Gamma_{21}^1=\Gamma_{12}^1=\Gamma_{22}^2=0,\\
&\Gamma_{12}^2+\Gamma_{21}^2-\Gamma_{11}^1=-\frac{\partial_x^2 \xi}{\partial_x \xi}.
\end{align}
\end{subequations}
\end{proof}
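The system in the proof can be generated mechanically. The sketch below is our own (it uses sympy, and the helper name lie\_christoffel is ours): it computes the components of $\mathcal L_{\xi(x)\partial_y}\nabla$ via the coordinate expansion of the Leibniz formula, for Christoffel symbols depending only on $x$ (the dependence on $y$ is already removed by invariance under $\partial_y$), and prints the nonzero component equations:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
coords = [x, y]
xi = sp.Function('xi')(x)
X = [sp.Integer(0), xi]  # the field xi(x) * d/dy of case 20

# Christoffel symbols Gamma^l_{jk}(x); invariance under d/dy already
# forces independence of y
G = [[[sp.Function(f'G{l}{j}{k}')(x) for k in (0, 1)] for j in (0, 1)]
     for l in (0, 1)]

def lie_christoffel(l, j, k):
    """Component (L_X Gamma)^l_{jk} of the Lie derivative in coordinates."""
    e = sum(X[m] * sp.diff(G[l][j][k], coords[m]) for m in (0, 1))
    e -= sum(G[m][j][k] * sp.diff(X[l], coords[m]) for m in (0, 1))
    e += sum(sp.diff(X[m], coords[j]) * G[l][m][k] for m in (0, 1))
    e += sum(sp.diff(X[m], coords[k]) * G[l][j][m] for m in (0, 1))
    e += sp.diff(X[l], coords[j], coords[k])
    return sp.expand(e)

eqs = [lie_christoffel(l, j, k) for l in (0, 1) for j in (0, 1) for k in (0, 1)]
for e in eqs:
    if e != 0:
        print(sp.factor(e))
```

Setting the printed expressions to zero recovers, for $\partial_x\xi\neq 0$, the relations $\Gamma_{22}^1=\Gamma_{21}^1=\Gamma_{12}^1=\Gamma_{22}^2=0$ and $\Gamma_{12}^2+\Gamma_{21}^2-\Gamma_{11}^1=-\partial_x^2\xi/\partial_x\xi$ of the proposition.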
\begin{proposition}\label{pr:no_transitive2}
Let us consider a non-transitive Lie algebra action of $\mathfrak g \simeq \mathbb{R}\ltimes \mathbb{R}^{r+1}$ on an open subset $M\subseteq\mathbb R^2$ corresponding to case (21) in Table \ref{tabla:acciones no transitivas}. If ${\rm Cnx}(M)^{\mathfrak g}\neq \emptyset$ then the dimension of $\mathfrak g$ is equal to $3$ ($r=1$). In this case, the Christoffel symbols of $\mathfrak g$-invariant connections are characterized by the equations:
{\begin{align*}
&\Gamma_{22}^1=\Gamma_{21}^1=\Gamma_{12}^1=\Gamma_{22}^2=\Gamma_{11}^2=0,\\
&\Gamma_{12}^2+\Gamma_{21}^2-\Gamma_{11}^1=-\frac{\partial_x^2 \xi}{\partial_x \xi},
\end{align*}
}
with $\Gamma_{12}^2, \Gamma_{21}^2$ and $\Gamma_{11}^1$ functions of $x.$
\end{proposition}
\begin{proof}
This algebra contains the algebra of case 20; therefore $r=1$ and the system of equations \eqref{21e1} is satisfied. The additional generator $y\partial_y$ yields the condition $\na{y\partial_y}{\partial_{x_i}}{\partial_{x_j}}=0$, which gives
$\Gamma_{11}^2=0.$ We conclude that $\Gamma_{12}^2, \Gamma_{21}^2$ and $\Gamma_{11}^1$ are functions of $x$ and the following relations are satisfied:
\begin{subequations}
\begin{align}
&\Gamma_{22}^1=\Gamma_{21}^1=\Gamma_{12}^1=\Gamma_{22}^2=\Gamma_{11}^2=0,\\
&\Gamma_{12}^2+\Gamma_{21}^2-\Gamma_{11}^1=-\frac{\partial_x^2 \xi}{\partial_x \xi}.
\end{align}
\end{subequations}
\end{proof}
Lie algebra actions corresponding to cases 9, 10, and 11 are well-known one-dimensional geometries. Case 11 corresponds to the projective geometry of the real line and does not admit invariant connections. A direct computation yields the space of invariant connections in canonical coordinates.
Summarizing, Table \ref{tabla:Christofell_nontransitive} contains the Christoffel symbols of invariant connections in canonical coordinates.
\subsection{Transitive imprimitive Lie algebra actions}
\begin{theorem}\label{th:imprimitive}
Let us consider a faithful transitive imprimitive action of a Lie algebra $\mathfrak g$ on a connected surface $M$. If the affine space ${\rm Cnx}(M)^{\mathfrak g}$ is not empty, then the action falls in one of the following cases:
\begin{enumerate}
\item[(a)] The action corresponds to one of the following cases of Table \ref{tabla:imprimitive actions}:
\begin{enumerate}
\item[(i)] Case (12) with $\alpha\neq \frac{1}{2}$,
\item[(ii)] Case (13),
\item[(iii)] Case (24) with $r=1$,
\item[(iv)] Case (25) with $r=1$,
\item[(v)] Case (26) with $r=1$.
\end{enumerate}
In each of these cases, the only invariant connection, in canonical coordinates, is the standard affine connection.
\item[(a')] The action corresponds to case (17). There is a unique invariant connection, whose Christoffel symbols in canonical coordinates are:
\begin{align*}
\Gamma_{11}^2&=\Gamma_{12}^1=\Gamma_{12}^2=\Gamma_{21}^1=\Gamma_{21}^2=\Gamma_{22}^1=0,\\
\Gamma_{11}^1&=-\frac{2}{x-y},\\
\Gamma_{22}^2&=-\frac{2}{y-x}.
\end{align*}
\item[(b)] The action corresponds to case (12) of Table \ref{tabla:imprimitive actions} with $\alpha =\frac{1}{2}$. The space ${\rm Cnx}(M)^{\mathfrak g}$ has dimension $1$. The equations for the Christoffel symbols in canonical coordinates are:
$$\Gamma_{22}^1 \mbox{ is an arbitrary constant, and all other symbols } \Gamma_{ij}^k = 0.$$
\item[(c)] The action corresponds to case (18). The affine space ${\rm Cnx}(M)^{\mathfrak g}$ has dimension $3$. The equations for the Christoffel symbols in canonical coordinates are:
{\begin{align*}
\Gamma_{11}^1&=\frac{a+b}{y^2},\quad \Gamma_{11}^2=\frac{c}{y^3},\\
\Gamma_{12}^2&=\frac{a}{y^2},\quad \Gamma_{21}^2=\frac{b}{y^2},\\
\Gamma_{22}^2&=-\frac{2}{y},\quad\Gamma_{22}^1=0,\\
\Gamma_{12}^1&=\Gamma_{21}^1=-\frac{1}{y},
\end{align*}}
for arbitrary constants $a,b,c$.
\item[(d)] The action corresponds to case (22) with $r=1$ and $\eta_1(x) = e^{\alpha x}$. The affine space ${\rm Cnx}(M)^{\mathfrak g}$ has dimension $8$. The equations for the Christoffel symbols in canonical coordinates are:
{\begin{align*}
\Gamma_{11}^2&=c_{22}^1\alpha^3y^3+(c_{22}^2-c_{12}^1-c_{21}^1)\alpha^2 y^2+(c_{11}^1-c_{21}^2-c_{12}^2)\alpha y-\alpha^2 y+c_{11}^2
\\
\Gamma_{11}^1&=c_{22}^1\alpha^2 y^2-(c_{12}^1+c_{21}^1)\alpha y+c_{11}^1
\\
\Gamma_{12}^2&=-c_{22}^1\alpha^2 y^2-(c_{22}^2-c_{12}^1)\alpha y+c_{12}^2
\\
\Gamma_{21}^2&=-c_{22}^1\alpha^2 y^2-(c_{22}^2-c_{21}^1)\alpha y+c_{21}^2
\\
\Gamma_{12}^1&=-c_{22}^1\alpha y+c_{12}^1
\\
\Gamma_{21}^1&=-c_{22}^1\alpha y+c_{21}^1
\\
\Gamma_{22}^2&=c_{22}^1\alpha y+c_{22}^2
\\
\Gamma_{22}^1&=c_{22}^1,
\end{align*}}
for arbitrary constants $c_{ij}^k$, with $i,j,k \in \{1,2\}$.
\item[(e)] The action corresponds to case (23) with $r=1$ and $\eta_1(x) = e^{\alpha x}$. The affine space ${\rm Cnx}(M)^{\mathfrak g}$ has dimension $4$. The equations for the Christoffel symbols in canonical coordinates are:
{\begin{align*}
\Gamma_{22}^2&=\Gamma_{12}^1=\Gamma_{21}^1=\Gamma_{22}^1=0,\\
\Gamma_{11}^1&=a, \quad \Gamma_{12}^2=c, \quad \Gamma_{21}^2=d, \\
\Gamma_{11}^2&=-\alpha (d+c-a)y-\alpha^2y+b,
\end{align*}
}
for arbitrary constants $a,b,c,d.$
\end{enumerate}
\end{theorem}
\begin{proof}
The proof is based on an analysis of the Lie derivative of a general connection along the generators of the Lie algebra action, in canonical coordinates, for each case appearing in Table \ref{tabla:imprimitive actions}.
\begin{enumerate}
\item [(a)] Let $\mathfrak g_{6}$, $\mathfrak g_{12,\alpha\neq 1/2}$, $\mathfrak g_{13}$, $\mathfrak g_{24,r=1}$, $\mathfrak g_{25,r=1}$, $\mathfrak g_{26,r=1}$ be the Lie algebras corresponding to the respective cases in Tables \ref{tabla:acciones primitivas} and \ref{tabla:imprimitive actions}. By the last result, the only invariant connection for the algebra $\mathfrak g_{6}$ is the standard affine connection. Since the other algebras are contained in $\mathfrak g_6$, forming the lattice:
$$\xymatrix{
\mathfrak{g}_6 \\ \mathfrak g_{26,r=1} \ar@{^{(}->}[u] \\ \mathfrak g_{24,r=1} \ar@{^{(}->}[u] \\ \mathfrak g_{13} \ar@{^{(}->}[u] & \ \ \ \ \ \ \ \ \mathfrak g_{25,r=1} \ar@{^{(}->}[ul] \\ \mathfrak g_{12,\alpha\neq 1/2} \ar@{^{(}->}[u]
}$$
it is sufficient to prove the result for cases (12) with $\alpha\neq 1/2$ and (25) with $r=1$.
\begin{itemize}
\item Case (12), $\g=\left \{\partial_x, \partial_y,x\partial_x+\alpha y\partial_y\right \},$ $0<|\alpha|\leq 1.$
The coefficients $\Gamma_{ij}^l$ are constants by \ref{constante}. If $X=x\partial_x+\alpha y\partial_y$ with $\alpha \neq \frac{1}{2}$, then from $\na{X}{\partial_{x_i}}{\partial_{x_j}}$ it follows that $\Gamma_{ij}^l=0$.
\item Case (25), $r=1$. In this case the algebra is $\g=\{\partial_x, \partial_y, x\partial_x+\left(y+x\right)\partial_y \}$. Since $\partial_x$ and $\partial_y$ are in $\g$, the $\Gamma_{ij}^l$ are constants. Taking the field $Y=x\partial_x+\left(y+x\right)\partial_y$, from $\na{Y}{\partial_{x_i}}{\partial_{x_j}}$ we obtain the system
\begin{align*}
-\Gamma_{11}^1&+\Gamma_{11}^2+\Gamma_{12}^2+\Gamma_{21}^2=0,\\
\Gamma_{11}^1&+\Gamma_{12}^1+\Gamma_{21}^1=0,\\
\Gamma_{21}^2&-\Gamma_{21}^1+\Gamma_{22}^2=0,\\
\Gamma_{12}^2&-\Gamma_{12}^1+\Gamma_{22}^2=0,\\
\Gamma_{12}^1&+\Gamma_{22}^1=0,\\
\Gamma_{21}^1&+\Gamma_{22}^1=0,\\
\Gamma_{22}^2&=\Gamma_{22}^1=0.
\end{align*}
whose only solution is $\Gamma_{ij}^l=0$.
\end{itemize}
\item[(a')] Case (17), $\g=\left\{\partial_x+\partial_y, x\partial_x+y\partial_y, x^2\partial_x+y^2\partial_y \right\}$.
Considering $X=\partial_x+\partial_y$, $Y=x\partial_x+y\partial_y$ and $Z=x^2\partial_x+y^2\partial_y$,
from $\na{X}{\partial_{x_i}}{\partial_{x_j}}$ and $\na{Y}{\partial_{x_i}}{\partial_{x_j}}$, we obtain for $i,j,l\in \{1,2\}$ the system
\begin{align*}
\partial_x\Gamma_{ij}^l&=-\partial_y\Gamma_{ij}^l,\\
(x-y)\partial_x\Gamma_{ij}^l+\Gamma_{ij}^l&=0,\\
(y-x)\partial_y\Gamma_{ij}^l+\Gamma_{ij}^l&=0.
\end{align*}
Substituting these equations into $\na{Z}{\partial_{x_i}}{\partial_{x_j}}$, we obtain the system
\begin{align*}
\Gamma_{11}^2&=\Gamma_{12}^1=\Gamma_{12}^2=\Gamma_{21}^1=\Gamma_{21}^2=\Gamma_{22}^1=0,\\
\Gamma_{11}^1&=-\frac{2}{x-y},\\
\Gamma_{22}^2&=-\frac{2}{y-x}.
\end{align*}
\item[(b)] In this case the $\Gamma_{ij}^l$ are constants; from $\na{X}{\partial_{x_i}}{\partial_{x_j}}$ with $X=x\partial_x+\frac{1}{2} y\partial_y$ we obtain the result.
\item[(c)] Case (18), $\g=\{\partial_x, 2x\partial_x+y\partial_y, x^2\partial_x+xy\partial_y\}$. Since $\partial_x$ is in the algebra, all the symbols $\Gamma_{ij}^l$ are functions of $y$. Taking $Y=2x\partial_x+y\partial_y$ and $Z=x^2\partial_x+xy\partial_y$,
from $\na{Y}{\partial_{x_i}}{\partial_{x_j}}$ and $\na{Z}{\partial_{x_i}}{\partial_{x_j}}$, we have the system:
\begin{multicols}{2}
\begin{align*}
y\partial_y\Gamma_{11}^1+2\Gamma_{11}^1&=0,\\
y\partial_y\Gamma_{11}^2+3\Gamma_{11}^2&=0,\\
y\partial_y\Gamma_{12}^1+\Gamma_{12}^1&=0,\\
y\partial_y\Gamma_{12}^2+2\Gamma_{12}^2&=0, \\
y\partial_y\Gamma_{21}^1+\Gamma_{21}^1&=0,\\
y\partial_y\Gamma_{21}^2+2\Gamma_{21}^2&=0.
\end{align*}
\begin{align*}
\Gamma_{12}^2+\Gamma_{21}^2-\Gamma_{11}^1&=0,\\
y\Gamma_{22}^2-y\Gamma_{12}^1+1&=0,\\
y\Gamma_{22}^2-y\Gamma_{21}^1+1&=0, \\
y\Gamma_{21}^1+y\Gamma_{12}^1+2&=0,\\
\Gamma_{22}^1&=0.
\end{align*}
\end{multicols}
The general solution is expressed in terms of arbitrary constants $a$, $b$, $c\in\mathbb{R}$:
\begin{align*}
\Gamma_{11}^1&=\frac{a+b}{y^2},\quad \Gamma_{11}^2=\frac{c}{y^3},\\
\Gamma_{12}^2&=\frac{a}{y^2},\quad \Gamma_{21}^2=\frac{b}{y^2},\\
\Gamma_{22}^2&=-\frac{2}{y},\quad\Gamma_{22}^1=0,\\
\Gamma_{12}^1&=\Gamma_{21}^1=-\frac{1}{y}.
\end{align*}
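The closed-form solutions of cases (17) and (18) can be verified symbolically. The following SymPy sketch is our own verification script, not part of the original computation: the function \verb|lie_gamma| implements the standard coordinate formula for the Lie derivative of a connection, and \verb|G[k][i][j]| encodes $\Gamma_{i+1\,j+1}^{k+1}$ ($0$-based indices).

```python
import sympy as sp

x, y, a, b, c = sp.symbols('x y a b c')
coords = (x, y)

def lie_gamma(G, V):
    """(L_V Gamma)^k_{ij} = V^m d_m G^k_{ij} - (d_m V^k) G^m_{ij}
       + (d_i V^m) G^k_{mj} + (d_j V^m) G^k_{im} + d_i d_j V^k."""
    rg = range(2)
    return [sp.simplify(
        sum(V[m]*sp.diff(G[k][i][j], coords[m]) for m in rg)
        - sum(sp.diff(V[k], coords[m])*G[m][i][j] for m in rg)
        + sum(sp.diff(V[m], coords[i])*G[k][m][j] for m in rg)
        + sum(sp.diff(V[m], coords[j])*G[k][i][m] for m in rg)
        + sp.diff(V[k], coords[i], coords[j]))
        for k in rg for i in rg for j in rg]

def zero_conn():
    return [[[sp.S(0), sp.S(0)], [sp.S(0), sp.S(0)]],
            [[sp.S(0), sp.S(0)], [sp.S(0), sp.S(0)]]]

# Case (17): Gamma^1_{11} = -2/(x-y), Gamma^2_{22} = -2/(y-x), all others zero.
G17 = zero_conn()
G17[0][0][0] = -2/(x - y)
G17[1][1][1] = -2/(y - x)
fields17 = [(sp.S(1), sp.S(1)), (x, y), (x**2, y**2)]

# Case (18): the three-parameter family of the statement.
G18 = zero_conn()
G18[0][0][0] = (a + b)/y**2
G18[1][0][0] = c/y**3
G18[1][0][1] = a/y**2
G18[1][1][0] = b/y**2
G18[0][0][1] = G18[0][1][0] = -1/y
G18[1][1][1] = -2/y
fields18 = [(sp.S(1), sp.S(0)), (2*x, y), (x**2, x*y)]

ok17 = all(t == 0 for V in fields17 for t in lie_gamma(G17, V))
ok18 = all(t == 0 for V in fields18 for t in lie_gamma(G18, V))
print(ok17, ok18)
```

All eight components of the Lie derivative vanish for every generator of the respective algebras, confirming invariance.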
\item[(d)] $\g=\left\langle\partial_x, \eta_1(x)\partial_y, \ldots, \eta_r(x)\partial_y \right\rangle,$ with $r\geq 1.$
Take $k$ such that $1\leq k \leq r$.
The symbols $\Gamma_{ij}^l$ are functions of $y$ because $\partial_x$ is in the algebra. From $\na{\eta_k(x)\partial_y}{\partial_{x_i}}{\partial_{x_j}}$ we obtain the system
\begin{align*}
\eta_k(x)\partial_y\Gamma_{11}^2+\partial_x\eta_k(x)\left(\Gamma_{12}^2+\Gamma_{21}^2-\Gamma_{11}^1\right)+\partial_x^2\eta_k(x)&=0,
\\
\eta_k(x)\partial_y\Gamma_{11}^1+\partial_x\eta_k(x)\left(\Gamma_{21}^1+\Gamma_{12}^1\right)&=0,
\\
\eta_k(x)\partial_y\Gamma_{12}^1+\Gamma_{22}^1\partial_x\eta_k(x)&=0,
\\
\eta_k(x)\partial_y\Gamma_{12}^2+\partial_x\eta_k(x)\left(\Gamma_{22}^2-\Gamma_{12}^1\right)&=0,
\\
\eta_k(x)\partial_y\Gamma_{21}^1+\Gamma_{22}^1\partial_x\eta_k(x)&=0,
\\
\eta_k(x)\partial_y\Gamma_{21}^2+\partial_x\eta_k(x)\left(\Gamma_{22}^2-\Gamma_{21}^1\right)&=0,
\\
\eta_k(x)\partial_y\Gamma_{22}^2-\Gamma_{22}^1\partial_x\eta_k(x)&=0,
\\
\eta_k(x)\partial_y\Gamma_{22}^1&=0.
\end{align*}
Using these equations we obtain that $\alpha_k:=\frac{\partial_x\eta_k(x)}{\eta_k(x)}$ is constant, and hence $\frac{\partial_x^2\eta_k(x)}{\eta_k(x)}=\alpha_k^2$ is constant as well.
Therefore $\eta_k(x)$ is a multiple of $e^{\alpha_kx}$.
Finally, for fixed $k$, we can solve for the symbols in terms of $8$ arbitrary constants $c_{ij}^k$:
\begin{subequations}
\begin{align}
\Gamma_{11}^2&=c_{22}^1\alpha^3_ky^3+(c_{22}^2-c_{12}^1-c_{21}^1)\alpha^2_ky^2+(c_{11}^1-c_{21}^2-c_{12}^2)\alpha_ky-\alpha^2_ky+c_{11}^2, \label{simbolos22}
\\
\Gamma_{11}^1&=c_{22}^1\alpha^2_ky^2-(c_{12}^1+c_{21}^1)\alpha_ky+c_{11}^1,
\\
\Gamma_{12}^2&=-c_{22}^1\alpha^2_ky^2-(c_{22}^2-c_{12}^1)\alpha_ky+c_{12}^2,
\\
\Gamma_{21}^2&=-c_{22}^1\alpha^2_ky^2-(c_{22}^2-c_{21}^1)\alpha_ky+c_{21}^2,
\\
\Gamma_{12}^1&=-c_{22}^1\alpha_ky+c_{12}^1,
\\
\Gamma_{21}^1&=-c_{22}^1\alpha_ky+c_{21}^1,
\\
\Gamma_{22}^2&=c_{22}^1\alpha_ky+c_{22}^2,
\\
\Gamma_{22}^1&=c_{22}^1. \label{simbolos22b}
\end{align}
\end{subequations}
This system is compatible only if the constants $\alpha_k$ coincide for all $k$. In that case the functions $\eta_k(x)$ span a space of dimension $1$, so $r=1$. That is, if $r=1$ and $\eta_1(x) = e^{\alpha_1 x}$, there is an $8$-dimensional space of invariant connections. Otherwise, there are no invariant connections.
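The $8$-parameter family \eqref{simbolos22}--\eqref{simbolos22b} can also be checked symbolically, with $\alpha$ kept symbolic and $\eta_1(x)=e^{\alpha x}$. The sketch below is our own check, using the standard coordinate formula for the Lie derivative of a connection; \verb|G[k][i][j]| encodes $\Gamma_{i+1\,j+1}^{k+1}$ ($0$-based indices).

```python
import sympy as sp

x, y, al = sp.symbols('x y alpha')
c11_1, c11_2, c12_1, c12_2, c21_1, c21_2, c22_1, c22_2 = sp.symbols(
    'c11_1 c11_2 c12_1 c12_2 c21_1 c21_2 c22_1 c22_2')
coords = (x, y)

def lie_gamma(G, V):
    """Coordinate formula for the Lie derivative of a connection along V."""
    rg = range(2)
    return [sp.simplify(
        sum(V[m]*sp.diff(G[k][i][j], coords[m]) for m in rg)
        - sum(sp.diff(V[k], coords[m])*G[m][i][j] for m in rg)
        + sum(sp.diff(V[m], coords[i])*G[k][m][j] for m in rg)
        + sum(sp.diff(V[m], coords[j])*G[k][i][m] for m in rg)
        + sp.diff(V[k], coords[i], coords[j]))
        for k in rg for i in rg for j in rg]

G = [[[0, 0], [0, 0]], [[0, 0], [0, 0]]]
G[0][0][0] = c22_1*al**2*y**2 - (c12_1 + c21_1)*al*y + c11_1
G[1][0][0] = (c22_1*al**3*y**3 + (c22_2 - c12_1 - c21_1)*al**2*y**2
              + (c11_1 - c21_2 - c12_2)*al*y - al**2*y + c11_2)
G[1][0][1] = -c22_1*al**2*y**2 - (c22_2 - c12_1)*al*y + c12_2
G[1][1][0] = -c22_1*al**2*y**2 - (c22_2 - c21_1)*al*y + c21_2
G[0][0][1] = -c22_1*al*y + c12_1
G[0][1][0] = -c22_1*al*y + c21_1
G[1][1][1] = c22_1*al*y + c22_2
G[0][1][1] = c22_1

# generators of the algebra for r = 1: d_x and e^{alpha x} d_y
fields = [(sp.S(1), sp.S(0)), (sp.S(0), sp.exp(al*x))]
ok = all(t == 0 for V in fields for t in lie_gamma(G, V))
print(ok)
```

The Lie derivative vanishes identically in the eight constants $c_{ij}^k$ and in $\alpha$, confirming the family.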
\item[(e)]
Since this algebra contains that of case (22), we have $r=1$ and $\eta_1(x) = e^{\alpha_1 x}$. We obtain a system of equations including
\eqref{simbolos22}--\eqref{simbolos22b}; moreover, from $\na{y\partial_y}{\partial_{x_i}}{\partial_{x_j}}$ we obtain additional equations:
\begin{align}\label{e23-1}
y\partial_y\Gamma_{11}^1=y\partial_y\Gamma_{12}^2=y\partial_y\Gamma_{21}^2=0
\end{align}
and the system
\begin{subequations}\label{e23-2}
\begin{align}
y\partial_y\Gamma_{11}^2-\Gamma_{11}^2&=0\label{e23-2-1},
\\
y\partial_y\Gamma_{12}^1+\Gamma_{12}^1&=0\label{e23-2-2},
\\
y\partial_y\Gamma_{21}^1+\Gamma_{21}^1&=0\label{e23-2-3},
\\
y\partial_y\Gamma_{22}^1+2\Gamma_{22}^1&=0\label{e23-2-4},
\\
y\partial_y\Gamma_{22}^2+\Gamma_{22}^2&=0\label{e23-2-5}.
\end{align}
\end{subequations}
From \eqref{e23-1}, $\Gamma_{11}^1, \Gamma_{12}^2, \Gamma_{21}^2$ are constants. Since $\Gamma_{22}^1$ is constant, \eqref{e23-2-4} gives $\Gamma_{22}^1=0$. Combining this with the equations from the previous case we obtain $\Gamma_{22}^2=\Gamma_{21}^1=\Gamma_{12}^1=0$. So we have:
\begin{align*}
\Gamma_{22}^2&=\Gamma_{12}^1=\Gamma_{21}^1=\Gamma_{22}^1=0,\\
\Gamma_{21}^2&=c_{21}^2, \Gamma_{12}^2=c_{12}^2, \Gamma_{11}^1=c_{11}^1,\\
\Gamma_{11}^2&=-\alpha_1(c_{21}^2+c_{12}^2-c_{11}^1)y-\alpha ^2_1y+c_{11}^2,
\end{align*}
where $c_{21}^2, c_{12}^2, c_{11}^1$ and $c_{11}^2$ are arbitrary constants.
\end{enumerate}
It remains to check that the remaining cases of transitive imprimitive infinitesimal actions, namely cases (14), (15), (16), (19), (27) and (28), do not admit invariant connections.
The Lie algebra of case (14) is contained in those of cases (15) and (16), so it suffices to treat case (14). In this case, since $\partial_x$ and $\partial_y$ are in the algebra, the $\Gamma_{ij}^l$ are constants. From $\na{x\partial_x}{\partial_{x_i}}{\partial_{x_j}}$ we get $\Gamma_{ij}^l=0$, except for the symbols $\Gamma_{12}^1, \Gamma_{21}^1, \Gamma_{22}^2$. Since $\na{x^2\partial_x}{\partial_x}{\partial_x}$ implies $2\partial_x=0$, which is impossible, there is no invariant connection in this case.
For the case (19), the $\Gamma_{ij}^l$ are functions of $y$. From $\na{x\partial_x}{\partial_{x_i}}{\partial_{x_j}}$ it follows that $\Gamma_{ij}^l=0$, except for $\Gamma_{12}^1, \Gamma_{21}^1, \Gamma_{22}^2$.
From $\na{y\partial_y}{\partial_{x_i}}{\partial_{x_j}}$, we have the system
\begin{align*}
y\partial_y\Gamma_{12}^1+\Gamma_{12}^1&=0,\\
y\partial_y\Gamma_{21}^1+\Gamma_{21}^1&=0,\\
y\partial_y\Gamma_{22}^2+\Gamma_{22}^2&=0.
\end{align*}
Using this in $\na{Y}{\partial_{x_i}}{\partial_{x_j}}$, with $Y=x^2\partial_x+xy\partial_y$, we obtain the system
\begin{align*}
-y\Gamma_{12}^1+y\Gamma_{22}^2+1&=0,\\
-y\Gamma_{21}^1+y\Gamma_{22}^2+1&=0,\\
xy\partial_y\Gamma_{22}^2+y\Gamma_{22}^2+1&=0,\\
y\Gamma_{21}^1+y\Gamma_{12}^1+2&=0,\\
\Gamma_{21}^1&=0.
\end{align*}
This system is inconsistent, so there is no invariant connection in this case.
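The inconsistency is already visible in the four equations that do not involve $x$. Abbreviating $u=y\Gamma_{12}^1$, $v=y\Gamma_{21}^1$, $w=y\Gamma_{22}^2$ (our own shorthand), a quick symbolic sketch confirms that the linear subsystem has no solution:

```python
import sympy as sp

# abbreviations: u = y*Gamma^1_{12}, v = y*Gamma^1_{21}, w = y*Gamma^2_{22}
u, v, w = sp.symbols('u v w')
eqs = [
    -u + w + 1,  # -y G_{12}^1 + y G_{22}^2 + 1 = 0
    -v + w + 1,  # -y G_{21}^1 + y G_{22}^2 + 1 = 0
    v + u + 2,   #  y G_{21}^1 + y G_{12}^1 + 2 = 0
    v,           #  G_{21}^1 = 0
]
sol = sp.linsolve(eqs, (u, v, w))
print(sol)  # empty solution set: the subsystem is already inconsistent
```

Indeed $v=0$ forces $w=-1$ and $u=-2$, and the first equation then reads $2=0$.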
In the case (27), the $\Gamma_{ij}^l$ are constants. For $Y=2x\partial_x+ky\partial_y$ with $1\leq k\leq r$, from $\na{Y}{\partial_{x_i}}{\partial_{x_j}}$ we obtain the system:
\begin{align*}
2\Gamma_{11}^1\partial_x+(4-k)\Gamma_{11}^2\partial_y=0,\\
k\Gamma_{12}^1\partial_x+2\Gamma_{12}^2\partial_y=0,\\
k\Gamma_{21}^1\partial_x+2\Gamma_{21}^2\partial_y=0,\\
(2k-2)\Gamma_{22}^1\partial_x+k\Gamma_{22}^2\partial_y=0.
\end{align*}
If $k=1$, then $\Gamma_{ij}^l=0$ except $\Gamma_{22}^1$; but from $\na{x\partial_y}{\partial_y}{\partial_y}$, $\Gamma_{22}^1=0.$
If $r\geq 2$, from $\na{x^2\partial_y}{\partial_x}{\partial_x}$ we obtain a contradiction. So $r=1$; since $\Gamma_{ij}^l=0$, from $\na{Z}{\partial_x}{\partial_x}$ with $Z=x^2\partial_x+xy^2\partial_y$ we also obtain a contradiction. Therefore there are no invariant connections in this case.
Finally, the Lie algebra of case (28) contains that of case (13), hence $\Gamma_{ij}^l=0$. From $\na{Z}{\partial_x}{\partial_x}$, with $Z=x^2\partial_x+rxy\partial_y$, we obtain the incompatibility of the system for the Christoffel symbols, and there are no invariant connections.
\end{proof}
\section{Homogeneous surfaces}
By a homogeneous manifold we understand a smooth manifold $M$ endowed with a transitive smooth action $\rho\colon G\to {\rm Diff}(M)$ of a connected Lie group $G$. As is well known, the manifold can be recovered as the quotient $M\simeq G/H$, where $H$ is the stabilizer of a point in $M$. We say that two actions
$\rho\colon G\to {\rm Diff}(M)$ and $\rho'\colon G'\to {\rm Diff}(M)$ are equivalent if they induce the same transformation group in $M$, that is, $\rho(G) = \rho'(G')$. Any action is always equivalent to a faithful action: we may consider the exact sequence,
$${\rm Id}\to {\rm ker}(\rho) \to G \xrightarrow{\rho} {\rm Diff}(M)$$
and replace $G$ by $\bar G = G/\ker{\rho}$.
Given a faithful action of $G$ on $M$, we can replace $G$ by its universal cover. The action of the universal cover of $G$ on $M$ need not be faithful, but it is at least infinitesimally faithful. Therefore, there is no loss of generality in assuming that $G$ is simply connected and the action of $G$ on $M$ is infinitesimally faithful.
\begin{lemma}
Let $H$ be a Lie subgroup of a simply connected Lie group $G$. Then, the homogeneous manifold $G/H$ is simply connected if and only if $H$ is connected.
\end{lemma}
\begin{proof}
First let us see that if $G/H$ is simply connected, then $H$ is connected. Reasoning by contradiction, let us assume that $H$ is not connected, and let $H_0$ be its connected component of the identity. Then $G/H_0\to G/H$ is a non-trivial connected (since $G$ is connected) covering space. Therefore $G/H$ is not simply connected.
Let us assume now that $H$ is connected. Let us denote by $x_0$ the class of $H$ in $G/H$, and let $\gamma\colon [0,1]\to G/H$ be a loop based at $x_0$. We can lift $\gamma$ to a path $\tilde \gamma$ in $G$ such that $\tilde\gamma(0) = e$ and $\tilde\gamma(1)\in H$. Since $H$ is connected, there is a path $\tau\colon [0,1]\to H$ such that $\tau(0) = e$ and $\tau(1) = \tilde\gamma(1)$. Since $G$ is simply connected, there is a homotopy with fixed endpoints $\tilde\gamma \sim \tau$. The projection of this homotopy onto $G/H$ tells us that $\gamma$ is contractible.
\end{proof}
Therefore, simply connected homogeneous manifolds arise as quotients of simply connected Lie groups by connected Lie subgroups. On the other hand, let us consider a homogeneous manifold $M$ and its universal cover $\pi\colon \tilde M \to M$. Since each diffeomorphism of $M$ can be lifted, up to a choice of two points in the fibers of $\pi$, to a diffeomorphism of $\tilde M$, we have an exact sequence:
$${\rm Id} \to {\rm Aut}(\tilde M/M)\to {\rm Diff}(\tilde M/M)\xrightarrow{\pi_*} {\rm Diff}(M) \to {\rm Id}$$
where ${\rm Diff}(\tilde M/M)$ is the group of diffeomorphisms of $\tilde M$ that respect the fibers of $\pi$. Therefore, taking $\tilde G$ to be the connected component of $\pi_*^{-1}(G)$, we have that $\tilde M$ is a simply connected homogeneous manifold with the action of $\tilde G$. Hence, homogeneous manifolds can be seen as quotients of simply connected homogeneous manifolds.
The analysis of invariant connections for Lie algebra actions allows us to classify, up to equivalence, all the simply connected homogeneous surfaces having more than one, or exactly one, invariant connection.
\begin{theorem}\label{th:hs}
Let $M$, endowed with an action of a connected Lie group $G$, be a simply connected homogeneous surface. Let us assume that $M$ admits at least two $G$-invariant connections. Then, $M$ is equivalent to one of the following cases:
\begin{enumerate}
\item[(a)] The affine plane $\mathbb R^2$ with one of the following transitive groups of affine transformations:
$${\rm Res}_{(2:1)}(\mathbb R^2) = \left\{ \left[ \begin{array}{c}x\\ y\end{array}\right] \mapsto A \left[ \begin{array}{c}x\\ y\end{array}\right] + \left[ \begin{array}{c}b_1\\ b_2\end{array}\right]\mid A =
\left[ \begin{array}{cc} \lambda^2 & 0 \\ 0 & \lambda \end{array} \right],\,\lambda>0
\right\},$$
$${\rm Trans}(\mathbb R^2) = \left\{ \left[ \begin{array}{c}x\\ y\end{array}\right] \mapsto \left[ \begin{array}{c}x\\ y\end{array}\right] + \left[ \begin{array}{c}b_1\\ b_2\end{array}\right] \mid b_1,b_2\in\mathbb R \right\},$$
$${\rm Res}_{(0:1)}(\mathbb R^2) = \left\{ \left[ \begin{array}{c}x\\ y\end{array}\right] \mapsto A \left[ \begin{array}{c}x\\ y\end{array}\right] + \left[ \begin{array}{c}b_1\\ b_2\end{array}\right]\mid A =
\left[ \begin{array}{cc} 1 & 0 \\ 0 & \lambda \end{array} \right],\,\lambda>0
\right\}.$$
\item[(b)] ${\rm SL}_2(\mathbb R)/U$ where $U$ is the subgroup of upper unipotent matrices
$$ U = \left\{ A\in {\rm SL}_2(\mathbb R) \mid A =
\left[ \begin{array}{cc} 1 & \lambda \\ 0 & 1 \end{array} \right]
\right\}.$$
\item[(c)] $\mathbb R\ltimes \mathbb R$ acting on itself by left translations.
\item[(d)] $G/H$ with $G = \mathbb R^2\ltimes \mathbb R$ and $H = (\mathbb R\times 0)\ltimes 0$.
\end{enumerate}
\end{theorem}
\begin{proof}
Let $M$, endowed with an action of a connected Lie group $G$, be a simply connected homogeneous surface. Let us consider the infinitesimal Lie algebra action induced by the action of $G$ on $M$. Since $G$ is connected, the space of $G$-invariant connections coincides with that of ${\rm Lie}(G)$-invariant connections.
By virtue of Theorems \ref{th:primitive} and \ref{th:imprimitive}, if the infinitesimal action has more than one invariant connection, then it corresponds to cases (12) with $\alpha = \frac{1}{2}$, (18), (22) with $r=1$ or (23) with $r=1$ in Table \ref{tabla:imprimitive actions}. Without loss of generality, we assume that $G$ is simply connected. Then, by Lie's third theorem, it is completely determined by its Lie algebra. Therefore, by integrating the respective Lie algebras, we deduce the following:
\begin{itemize}
\item Any homogeneous surface corresponding to case (12) with $\alpha=\frac{1}{2}$, or to cases (22), (23) with trivial semidirect product, in Table \ref{tabla:imprimitive actions} is equivalent to one listed in case (a) of the statement;
\item any homogeneous surface corresponding to case (18) in Table \ref{tabla:imprimitive actions}, is equivalent to that of case (b);
\item any homogeneous surface corresponding to case (22) with $r=1$ in Table \ref{tabla:imprimitive actions} and non-trivial semidirect product is equivalent to that of case (c);
\item any homogeneous surface corresponding to case (23) with $r=1$ in Table \ref{tabla:imprimitive actions} and non-trivial semidirect product is equivalent to that of case (d).
\end{itemize}
This completes the proof.
\end{proof}
\begin{remark}
In cases (c) and (d) of Theorem \ref{th:hs} we assume that the semidirect product is non-trivial. Otherwise we fall into case (a).
\end{remark}
\begin{theorem}\label{th:hs2}
Let $M$, with the action of $G$, be a simply connected homogeneous surface. Let us assume that $M$ admits exactly one $G$-invariant connection. Then, $M$ is equivalent to one of the following cases:
\begin{enumerate}
\item[(a)] The affine plane $\mathbb R^2$ with $G$ any connected transitive subgroup of the group ${\rm Aff}(\mathbb R^2)$ of affine transformations containing the group of translations and not conjugate to any of the groups
$${\rm Res}_{(2:1)}(\mathbb R^2) = \left\{ \left[ \begin{array}{c}x\\ y\end{array}\right] \mapsto A \left[ \begin{array}{c}x\\ y\end{array}\right] + \left[ \begin{array}{c}b_1\\ b_2\end{array}\right]\mid A =
\left[ \begin{array}{cc} \lambda^2 & 0 \\ 0 & \lambda \end{array} \right],\,\lambda>0
\right\},$$
$${\rm Trans}(\mathbb R^2) = \left\{ \left[ \begin{array}{c}x\\ y\end{array}\right] \mapsto \left[ \begin{array}{c}x\\ y\end{array}\right] + \left[ \begin{array}{c}b_1\\ b_2\end{array}\right] \mid b_1,b_2\in\mathbb R \right\},$$
$${\rm Res}_{(0:1)}(\mathbb R^2) = \left\{ \left[ \begin{array}{c}x\\ y\end{array}\right] \mapsto A \left[ \begin{array}{c}x\\ y\end{array}\right] + \left[ \begin{array}{c}b_1\\ b_2\end{array}\right]\mid A =
\left[ \begin{array}{cc} 1 & 0 \\ 0 & \lambda \end{array} \right],\, \lambda>0
\right\}.$$
(This case includes the Euclidean plane.)
\item[(b)] ${\rm SL}_2(\mathbb R)/H$ where $H$ is the group of special diagonal matrices,
$$H = \left\{ A \in {\rm SL}_{2}(\mathbb R) \mid
A = \left[ \begin{array}{cc} \lambda & 0 \\ 0 & \lambda^{-1} \end{array} \right], \,\, \lambda>0 \right\}.$$
\item[(c)] The hyperbolic plane
$$\mathbb H = \{z\in\mathbb C \mid {\rm Im}(z)>0 \}$$
with the group ${\rm SL}_2(\mathbb R)$ of hyperbolic rotations.
\item[(d)] The sphere
$$S^2 = \{(x,y,z)\in \mathbb R^3\mid x^2 + y^2 + z^2 =1 \}$$
with its group ${\rm SO}_3(\mathbb R)$ of rotations.
\end{enumerate}
\end{theorem}
\begin{proof}
We follow the same reasoning as in the proof of Theorem \ref{th:hs}.
By virtue of Theorems \ref{th:primitive} and \ref{th:imprimitive}, if the infinitesimal action has exactly one invariant connection, then it corresponds to cases (1), (3), (5), (6), (12) with $\alpha \neq \frac{1}{2}$, (13), (24) with $r=1$, (25) with $r=1$, (26) with $r=1$, (17), (2) or (3). Then we check that cases (1), (3), (6), (5), (12) with $\alpha\neq 1/2$, (13), (24) with $r=1$, (25) with $r=1$ and (26) with $r=1$ all correspond to case (a) in the statement. Finally, case (17) corresponds to case (b), case (2) corresponds to case (c) and case (3) corresponds to case (d).
\end{proof}
\begin{remark}
Note that the hyperbolic plane, case (c) in Theorem \ref{th:hs2}, corresponds to the remaining $2$-dimensional simply connected quotient of ${\rm SL}_2(\mathbb R)$; $\mathbb H \simeq {\rm SL}_2(\mathbb R)/H$ where
$$H = \left\{ \left[ \begin{array}{cc} a & -b \\ b & a \end{array} \right] \mid a^2 + b^2 = 1 \right\}.$$
\end{remark}
\subsection*{Acknowledgements}
The authors acknowledge the support of their host institutions Universidad de Antioquia and Universidad Nacional de Colombia - Sede Medell\'in. The research of D.B.-S. has been partially funded by Colciencias project ``Estructuras lineales en geometr\'ia y topolog\'ia'' 776-2017 57708 (HERMES 38300).
\bibliographystyle{plain}
| {
"timestamp": "2019-05-15T02:06:39",
"yymm": "1905",
"arxiv_id": "1905.05349",
"language": "en",
"url": "https://arxiv.org/abs/1905.05349",
"abstract": "We compute all the simply connected homogeneous and infinitesimally homogeneous surfaces admitting one or more invariant affine connections. We find exactly six non equivalent simply connected homogeneous surfaces admitting more than one invariant connections and four classes of simply connected homogeneous surfaces admitting exactly one invariant connection.",
"subjects": "Differential Geometry (math.DG)",
"title": "Homogeneous surfaces admitting invariant connections",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.982013790564298,
"lm_q2_score": 0.8198933359135361,
"lm_q1q2_score": 0.8051465626588589
} |
https://arxiv.org/abs/1408.5728 | Sinkhorn normal form for unitary matrices | Sinkhorn proved that every entry-wise positive matrix can be made doubly stochastic by multiplying with two diagonal matrices. In this note we prove a recently conjectured analogue for unitary matrices: every unitary can be decomposed into two diagonal unitaries and one whose row- and column sums are equal to one. The proof is non-constructive and based on a reformulation in terms of symplectic topology. As a corollary, we obtain a decomposition of unitary matrices into an interlaced product of unitary diagonal matrices and discrete Fourier transformations. This provides a new decomposition of linear optics arrays into phase shifters and canonical multiports described by Fourier transformations. | \section{Introduction}
For every $n\times n$ matrix $A$ with positive entries there exist two diagonal matrices $L,~R$ such that $LAR$ is doubly stochastic, i.e. the entries of each column and row sum up to one. This result was first obtained by Sinkhorn \cite{sin64}, who also gave an algorithm to compute $L$ and $R$ by iterated left and right multiplication by diagonal matrices.
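Sinkhorn's iteration is straightforward to sketch numerically. The following is an illustrative implementation of our own (with a fixed iteration count rather than a convergence criterion): the update $l=1/(Ar)$ fixes the row sums of $\operatorname{diag}(l)\,A\,\operatorname{diag}(r)$ to one, and $r=1/(A^{T}l)$ then fixes the column sums.

```python
import numpy as np

def sinkhorn(A, iters=1000):
    """Alternately rescale rows and columns of a positive matrix A.
    Returns positive vectors l, r with diag(l) @ A @ diag(r) doubly stochastic
    (convergence for entry-wise positive A is Sinkhorn's theorem)."""
    n = A.shape[0]
    r = np.ones(n)
    for _ in range(iters):
        l = 1.0 / (A @ r)      # row sums of diag(l) A diag(r) become 1
        r = 1.0 / (A.T @ l)    # column sums become 1
    return l, r

rng = np.random.default_rng(7)
A = rng.uniform(0.1, 1.0, size=(5, 5))
l, r = sinkhorn(A)
S = np.diag(l) @ A @ np.diag(r)
print(np.allclose(S.sum(axis=0), 1.0), np.allclose(S.sum(axis=1), 1.0))
```

After a modest number of iterations the row and column sums of $S$ agree with one to numerical precision.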
Recently, De Vos and De Baerdemacker studied the same problem for unitary matrices \cite{vos14a}. They conjectured that for every $n\times n$ unitary $U$ there exist two unitary diagonal matrices $L, R$ such that $LUR$ has all row and column sums equal to one. To support their conjecture, they construct an algorithm similar to the iteration procedure for matrices with positive entries from \cite{sin64,sin67}. They also provide numerical evidence that the algorithm always converges to a unitary matrix with row and column sums equal to one.
The goal of this paper is to prove the conjecture of De Vos and De Baerdemacker that such a normal form always exists by reformulating the problem in terms of symplectic topology. It turns out that the reformulated problem is a special case of the Arnold (sometimes Arnold-Givental) conjecture on the intersection of Lagrangian submanifolds \cite{mcd98}, which was solved for this case in \cite{bir04, cho04}. More precisely, in section \ref{sec:unit} we show:
\begin{customTheorem}{2}
For every unitary matrix $U\in U(n)$ there exist two diagonal unitary matrices $L,R\in U(n)$ such that $A:=LUR$ satisfies $\sum_j A_{ji} =\sum_j A_{ij} =1$ for all $i=1,\ldots n$.
\end{customTheorem}
For a given unitary $U\in U(n)$ the triple $(L,R,A)$ is certainly not unique, since multiplying $L$ by a global phase and $R$ by its inverse does not change $LAR$. Hence, it makes sense to consider the decomposition $U=e^{i\varphi}L^{\prime}AR^{\prime}$, where $L^{\prime},R^{\prime}$ are unitary diagonal such that $L^{\prime}_{11}=R^{\prime}_{11}=1$ and $\varphi\in[0,2\pi)$. In particular, for $U(2)$, a simple complete solution was given in \cite{vos14a} from which one can see that for every non-diagonal matrix, there are only two different $A$ such that $e^{i\varphi}LAR=U$. For $n>2$ the picture is less clear and the reformulation in terms of symplectic topology appears to give further insight into the freedom of the decomposition.
In addition to the Sinkhorn-type normal form above, in section \ref{sec:derived} we give several reformulations that might be interesting for applications, for instance regarding the decomposition of general $2n-$port linear optics devices into canonical multiports and phase shifters.
\section{Sinkhorn-type normal form} \label{sec:unit}
In order to prove the decomposition theorem, we reformulate the problem of rescaling a unitary matrix into a problem in symplectic topology. For the reader's convenience, necessary results including elementary calculations and definitions are included in \ref{sec:sympprel}. We only repeat the most important definitions for our reformulation. Recall that the complex projective space $\mathbb{C}P^n$ consists of all equivalence classes of $\mathbb{C}^{n+1}\backslash\{0\}$ w.r.t. $x\sim y\Leftrightarrow x=\lambda y$ with $\lambda\in\mathbb{C}\backslash\{0\}$.
\begin{dfn} \label{dfn:clifftor}
The \emph{Clifford Torus} is the $n$-dimensional torus embedded in $\mathbb{C}P^n$, i.e. the set of points
\begin{align}
T^n:=\{[w_0,\ldots, w_n]\in\mathbb{C}P^{n}\big||w_0|=|w_1|=\ldots=|w_n|\}.
\end{align}
\end{dfn}
This torus, as shown in the appendix in proposition \ref{prop:cliff}, is a Lagrangian submanifold of the symplectic manifold $\mathbb{C}P^n$. We obtain the following connection to our normal form:
\begin{lem} \label{lem:reform}
For any unitary $U\in U(n)$, there exist diagonal unitaries $L$ and $R$ such that $A:=LUR$ has row and column sums equal to one if and only if the Clifford torus $T^{n-1}\subset \mathbb{C}P^{n-1}$ fulfills $T^{n-1}\cap UT^{n-1}\neq \emptyset$.
\end{lem}
\begin{proof}
Let $U\in U(n)$ be arbitrary but fixed. We first consider the usual torus $\mathbb{T}^{n}\subset \mathbb{C}^{n}$, i.e. the set of all vectors for which each component has modulus one:
\begin{align*}
\mathbb{T}^n:=\{(e^{i\phi_1},\ldots,e^{i\phi_n})\in \mathbb{C}^n\,|\,\phi_j\in\mathbb{R}\}
\end{align*}
Let us first show that the existence of a normal form is equivalent to $\mathbb{T}^n\cap U\mathbb{T}^n\neq \emptyset$. For one direction, let $\varphi\in\mathbb{T}^n$ such that $U\varphi\in\mathbb{T}^n$, i.e. $\varphi\in\mathbb{T}^n\cap U\mathbb{T}^n$. Define the two diagonal matrices $R^{-1}:=\operatorname{diag}(\varphi_1,\ldots,\varphi_n)\in U(n)$ and $L^{-1}:=\operatorname{diag}((U\varphi)_i^{-1})=\operatorname{diag}((\overline{U\varphi})_i)\in U(n)$. With $A:=L^{-1}UR^{-1}$ and $e:=(1,\ldots,1)^{T}$ we obtain:
\begin{align*}
Ae=L^{-1}U\varphi=e
\end{align*}
Likewise, since $\overline{A}e=Ae$ and $A$ is unitary, we obtain
\begin{align*}
A^Te&=A^T\overline{A}e=e.
\end{align*}
so that columns and rows of $A$ sum up to one.
For the other direction, suppose $U=LAR$ is a decomposition as proposed. Then $\varphi:=R^{-1}e\in\mathbb{T}^n$ and
\begin{align*}
U\varphi=LAR\varphi=LAe=Le\in\mathbb{T}^n
\end{align*}
hence $U\varphi\in\mathbb{T}^n\cap U\mathbb{T}^n$.
The next step is to reformulate the problem using the Clifford torus. Clearly, $T^{n-1}\cap UT^{n-1}\neq \emptyset$ iff $(\lambda \mathbb{T}^n)\cap U\mathbb{T}^n\neq \emptyset$ for some $\lambda \in \mathbb{C}\setminus \{0\}$. Since $U$ is norm preserving, any intersection requires $|\lambda|=1$ so that
\begin{align*}
T^{n-1}\cap UT^{n-1}\neq \emptyset \quad \Leftrightarrow \quad \mathbb{T}^n\cap U\mathbb{T}^n\neq \emptyset.\end{align*}
\end{proof}
One of the main conjectures in symplectic topology, the Arnold (or Arnold-Givental) conjecture, states that a Lagrangian submanifold and its image under a Hamiltonian isotopy intersect at least as often as the sum of the $\mathbb{Z}_2$-Betti numbers of the submanifold. For $T^n$, this sum is nonzero; thus, using proposition \ref{prop:unitham}, Arnold's conjecture would in particular imply that $T^n$ intersects $UT^n$ at least once. While the Arnold conjecture fails in full generality and most cases remain open, there is a positive answer to the weaker question of whether the torus intersects its displaced image (cf. \cite{bir04,cho04}). In order to formulate this result, we need the following:
\begin{dfn}
Let $(\mathcal{M},\omega)$ be a closed symplectic manifold with Hamiltonian symplectomorphisms $\mathrm{Ham}(\mathcal{M})$. A Lagrangian submanifold $\mathcal{L}\subset\mathcal{M}$ is called \emph{displaceable} by a Hamiltonian diffeomorphism, if there exists a $\psi\in\mathrm{Ham}(\mathcal{M})$ such that
\begin{align*}
\mathcal{L}\cap\psi\mathcal{L}=\emptyset.
\end{align*}
\end{dfn}
The definition is slightly different from the one in \cite{bir04}, where the authors only consider nonempty open sets on which the restriction of $\omega$ is exact. However, they prove that the torus $T^n$ is displaceable in the sense of the above definition if and only if there exists an open neighborhood $\mathcal{V}\supset T^n$ such that $\omega|_{\mathcal{V}}$ is exact and $\mathcal{V}$ is displaceable. With this we can state the final and crucial ingredient in the proof of the normal form:
\begin{thm}[\cite{bir04}, Theorem 1.3] \label{thm:thm13}
The Clifford torus $T^n\subset\mathbb{C}P^n$ cannot be displaced from itself by a Hamiltonian isotopy.
\end{thm}
Because every unitary matrix defines a Hamiltonian isotopy (see proposition \ref{prop:unitham} in the appendix), the theorem tells us in particular $T^n\cap UT^n\neq \emptyset$ for all unitaries $U\in U(n)$ so that together with lemma \ref{lem:reform} this proves the sought normal form:
\begin{thm} \label{thm:unitarynormal}
For every unitary matrix $U\in U(n)$ there exist two diagonal unitary matrices $L,R\in U(n)$ such that $A:=LUR$ fulfills $\sum_j A_{ji} =\sum_j A_{ij} =1$ for all $i=1,\ldots n$.
\end{thm}
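The proof above is nonconstructive, but the conjectured algorithm of \cite{vos14a} suggests a simple alternating iteration: repeatedly absorb the phases of the row sums into $L$ and the phases of the column sums into $R$. The following sketch (in Python; the test input, a Fourier matrix dressed with generic phases, and the iteration count are our own choices) illustrates the idea. Whether this iteration always converges to row and column sums one is exactly the open question recalled in the conclusion.

```python
import cmath
import math

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def dag(A):
    n = len(A)
    return [[A[j][i].conjugate() for j in range(n)] for i in range(n)]

n = 3
w = cmath.exp(2j * cmath.pi / n)
F = [[w ** (j * k) / math.sqrt(n) for k in range(n)] for j in range(n)]
# dress F with generic diagonal phases so that no row or column sum vanishes
D1 = [cmath.exp(1j * t) for t in (0.4, 1.1, 2.0)]
D2 = [cmath.exp(1j * t) for t in (0.3, 0.7, 1.9)]
U = [[D1[i] * F[i][j] * D2[j] for j in range(n)] for i in range(n)]

def total(A):
    return sum(sum(row) for row in A)

A = [row[:] for row in U]
objective = [total(A).real]
for _ in range(2000):
    for i in range(n):          # absorb the phase of each row sum into L
        s = sum(A[i])
        if abs(s) > 1e-15:
            A[i] = [a * abs(s) / s for a in A[i]]
    for j in range(n):          # absorb the phase of each column sum into R
        s = sum(A[i][j] for i in range(n))
        if abs(s) > 1e-15:
            ph = abs(s) / s
            for i in range(n):
                A[i][j] *= ph
    objective.append(total(A).real)

row_sums = [sum(row) for row in A]
col_sums = [sum(A[i][j] for i in range(n)) for j in range(n)]
```

Each sweep cannot decrease $\operatorname{Re}\sum_{ij}A_{ij}$, which is bounded by $n$ since $e^{T}Ae\leq\|Ae\|\,\|e\|=n$, with equality exactly at the normal form; at any fixed point all row and column sums are nonnegative reals with $\sum_i r_i^2=n$.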
\section{Equivalent normal forms for unitary matrices} \label{sec:derived}
To obtain equivalent normal forms, consider the complex $n\times n$ matrix $F_n$ with entries $(F_n)_{kl}:=\frac{1}{\sqrt{n}} \operatorname{exp}(\frac{2\pi i}{n} kl)$ with $k,l\in\{0,\ldots,n-1\}$, which is known as the \emph{discrete Fourier transformation}. It is easy to see that $F_n^{-1}=F_n^{\dagger}$, hence $F_n\in U(n)$. If we denote the standard basis of $\mathbb{C}^n$ by $\{e_i\}_{i=0}^{n-1}$ and $e:=(1,\ldots,1)^T$, then
\begin{align*}
F_ne_0=F_n^{\dagger}e_0=\frac{e}{\sqrt{n}}.
\end{align*}
Now let $A\in U(n)$ be such that $Ae=A^Te=e$. Then
$
F_n^{\dagger}AF_ne_0=e_0
$
and similarly, $(F_n^{\dagger}AF_n)^{T}e_0=F_nA^TF_n^{\dagger}e_0=e_0$, which shows that
\begin{align*}
F_n^{\dagger}AF_n=\begin{pmatrix} 1 & 0_{n-1}^T \\ 0_{n-1} & \tilde{U} \end{pmatrix}
\end{align*}
where $0_{n-1}:= 0\in\mathbb{C}^{n-1}$ and $\tilde{U}\in U(n-1)$. Thus, given a unitary $U\in U(n)$, we know that there exists a decomposition
\begin{align}
U=LF_n\begin{pmatrix} 1 & 0_{n-1}^T \\ 0_{n-1} & \tilde{U} \end{pmatrix}F_n^{\dagger}R
\end{align}
with $\tilde{U}\in U(n-1)$ and diagonal $L,R\in U(n)$. We can now iterate the procedure by applying it to the $(n-1)\times(n-1)$-dimensional submatrix $\tilde{U}$ and obtain the corollary:
\begin{cor} \label{cor:optics}
Let $U\in U(n)$, then there exist diagonal unitaries $D_1,\ldots,D_n$ and $\tilde{D}_1,\ldots,\tilde{D}_{n-1}$ and a $\varphi\in[0,2\pi)$ such that the first $i-1$ entries in each $D_i,\tilde{D}_i$ are equal to one and
\begin{align}
\begin{split}
U&=D_1F_nD_2(\mathbbm{1}_1\oplus F_{n-1})D_3(\mathbbm{1}_2\oplus F_{n-2})\cdots \\
&~~ D_{n-1}(\mathbbm{1}_{n-2}\oplus F_{2})D_n(\mathbbm{1}_{n-2}\oplus F_2^{\dagger})\tilde{D}_{n-1}\cdots (\mathbbm{1}_1\oplus F_{n-1}^{\dagger})\tilde{D}_2F_n^{\dagger}\tilde{D}_1 e^{i\varphi}.
\end{split}
\end{align}
\end{cor}
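The block structure entering this iteration is easy to probe numerically: take any unitary $A$ with $Ae=A^{T}e=e$ and verify that $F_{n}^{\dagger}AF_{n}=\operatorname{diag}(1,\tilde{U})$. A convenient family of such $A$ for $n=3$ are the real rotations about the axis $e=(1,1,1)^{T}$; the rotation angle below is an arbitrary choice of this sketch.

```python
import cmath
import math

n = 3
w = cmath.exp(2j * cmath.pi / n)
F = [[w ** (j * k) / math.sqrt(n) for k in range(n)] for j in range(n)]
Fd = [[F[j][i].conjugate() for j in range(n)] for i in range(n)]

# rotation by angle t about the unit axis u = e / sqrt(3); it fixes e,
# hence has all row and column sums equal to one
t = 0.7
c, s = math.cos(t), math.sin(t)
u = 1 / math.sqrt(3)
K = [[0, -u, u], [u, 0, -u], [-u, u, 0]]   # cross-product matrix of the axis
A = [[c * (i == j) + (1 - c) / 3 + s * K[i][j] for j in range(n)]
     for i in range(n)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

row_sums = [sum(row) for row in A]
col_sums = [sum(A[i][j] for i in range(n)) for j in range(n)]
B = matmul(Fd, matmul(A, F))   # should equal diag(1, U~) with U~ in U(2)
```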
\begin{figure}[!t]
\centering
\includegraphics[width=0.8\columnwidth]{multiport.pdf}
\caption{In quantum optics, passive transformations on $n$ modes are in one-to-one correspondence with $n\times n$ unitaries. Each unitary $U$ admits a decomposition into $2(n-1)$ canonical multiports (which are independent of $U$ and described by discrete Fourier transformations [hatched]) surrounded by $2n-1$ layers of single-mode phase shifters [grey]. Here, this is exemplified for $n=4$.}\label{multiport}
\end{figure}
In other words, any unitary can be decomposed into diagonal unitaries and discrete Fourier transformations in this way. From the corollary, the first $k-1$ entries of the diagonal matrices $D_k,\tilde{D}_k$ are immediately known to be one. However, one can achieve a better parameterisation by realizing that one can fix the first $k$ entries of $D_k,\tilde{D}_k$ to one for $k\leq n-1$, while absorbing all phases of the $k$-th entries of $D_k$ and $\tilde{D}_k$ into a diagonal unitary that replaces $D_n$ (this is immediately clear from a graphical representation as in Figure \ref{multiport}).
This decomposition has an immediate application in quantum optics, where any $n\times n$ unitary corresponds to a passive transformation on $n$ modes, i.e. a $2n$-multiport. In this scenario, a diagonal unitary corresponds to a set of phase shifters applied to the modes individually, and the discrete Fourier transformation is known as the canonical $2n$-multiport \cite{mat95}, which may be implemented by a symmetric fibre coupler. The structure of the corresponding decomposition is graphically depicted in Figure \ref{multiport}.
Another version of the normal form is found by using that $D$ is a diagonal matrix iff $F_nDF_n^{\dagger}$ is a \emph{circulant matrix}, i.e. $(F_nDF_n^{\dagger})_{i,j}=:\alpha_{i-j}\in\mathbb{C}$ depends only on $i-j$ modulo $n$. Since the diagonal matrices form a group, so do the circulant matrices, and we denote the group of $n\times n$ circulant matrices by $\mathrm{Circ}(n)$. Then:
\begin{cor}
Let $U\in U(n)$, then there exist $C_1,C_2\in\mathrm{Circ}(n)$ and $\tilde{U}\in U(n-1)$ such that
\begin{align}
U=C_1\operatorname{diag}(1,\tilde{U})C_2.
\end{align}
\end{cor}
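The circulant property itself follows from $(F_{n}DF_{n}^{\dagger})_{ij}=\frac{1}{n}\sum_{k}d_{k}e^{2\pi ik(i-j)/n}$, which depends only on $i-j$ modulo $n$. A quick numerical sanity check (the diagonal phases are chosen arbitrarily for this sketch):

```python
import cmath

n = 4
F = [[cmath.exp(2j * cmath.pi * j * k / n) / n ** 0.5 for k in range(n)]
     for j in range(n)]
d = [cmath.exp(1j * t) for t in (0.2, 1.3, 2.1, 0.5)]  # arbitrary diagonal unitary

# C = F D F^dagger
C = [[sum(F[i][k] * d[k] * F[j][k].conjugate() for k in range(n))
      for j in range(n)] for i in range(n)]

# circulant: the entry C[i][j] depends only on (i - j) mod n
alpha = [C[k][0] for k in range(n)]
```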
Let us finally discuss the question of uniqueness of these decompositions, and to this end come back to the original normal form
\begin{align}
U=e^{i\varphi}D_1AD_2 \label{eqn:nform},
\end{align}
where $D_1,D_2$ are unitary diagonal with $(D_i)_{11}=1$ and $A$ has row and column sums equal to one. Counting parameters, and using that the set of such matrices $A$ is isomorphic to $U(n-1)$ as proven above, we have:
\begin{align*}
1+(n-1)+(n-1)^2+(n-1)=n^2
\end{align*}
parameters (cf. \cite{vos14a}). Hence, the number of parameters matches the dimension of $U(n)$ exactly. Given a unitary $U=e^{i\varphi}D_1AD_2$ as above, it is therefore reasonable to expect only a discrete set of different decompositions, or at least a discrete set of matrices $A$ to which $U$ can be scaled. The exact number of different $A$ is easily seen to be two for the case $n=2$ (cf. \cite{vos14a}), but already for $n=3$ and $n=4$ only conjectured bounds are known (6 and 20, respectively; cf. \cite{shc13}).
In \cite{cho04} it is proven that if $T^n$ and $UT^n$ intersect transversally, the number of distinct intersection points is at least $2^n$; this follows from general results in Floer homology applied to Lagrangian intersection theory. Since transversality is a generic property for intersections, one might therefore conjecture that for a generic unitary $U\in U(n)$ the results of \cite{cho04} imply a lower bound of $2^{n-1}$ on the number of different normal forms. However, it is not true that we always have a discrete number of decompositions or (in contrast to the $2\times 2$ case) even a discrete number of $A$ with row and column sums equal to one such that $e^{i\varphi}LAR=U$. A counterexample is given by the Fourier transform in $4\times 4$ dimensions, where we have for any $\varphi\in[0,2\pi)$:\footnote{We thank the anonymous referee for providing this counterexample.}
\begin{align}
\begin{split} \frac{1}{2}
\begin{pmatrix} 1 & 1 & 1 & 1 \\
1 & i & -1 & -i \\
1 & -1 & 1 & -1 \\
1 & -i & -1 & i
\end{pmatrix}
= \begin{pmatrix} 1 & 0 & 0 & 0 \\
0 & e^{i\varphi} & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & -e^{i\varphi} \\
\end{pmatrix}\cdot \\ \frac{1}{2}
\begin{pmatrix} 1 & -ie^{i\varphi} & 1 & ie^{i\varphi} \\
e^{-i\varphi} & 1 & -e^{-i\varphi} & 1 \\
1 & ie^{i\varphi} & 1 & -ie^{i\varphi} \\
-e^{-i\varphi} & 1 & e^{-i\varphi} & 1
\end{pmatrix} \cdot
\begin{pmatrix} 1 & 0 & 0 & 0 \\
0 & ie^{-i\varphi} & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & -ie^{-i\varphi}
\end{pmatrix}
\end{split}
\end{align}
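This one-parameter family is straightforward to check numerically. The sketch below spells out the phase conventions it uses (the exact entries of the three factors are an explicit assumption of this snippet) and verifies that the product reproduces $F_4$ while the middle factor has all row and column sums equal to one for every $\varphi$:

```python
import cmath

def check(phi):
    e = cmath.exp
    # F4 with the convention (F_n)_{kl} = exp(2 pi i k l / n) / sqrt(n)
    F4 = [[(1j) ** (k * l) / 2 for l in range(4)] for k in range(4)]
    DL = [1, e(1j * phi), 1, -e(1j * phi)]
    DR = [1, 1j * e(-1j * phi), 1, -1j * e(-1j * phi)]
    A = [[0.5 * z for z in row] for row in [
        [1, -1j * e(1j * phi), 1, 1j * e(1j * phi)],
        [e(-1j * phi), 1, -e(-1j * phi), 1],
        [1, 1j * e(1j * phi), 1, -1j * e(1j * phi)],
        [-e(-1j * phi), 1, e(-1j * phi), 1],
    ]]
    prod = [[DL[i] * A[i][j] * DR[j] for j in range(4)] for i in range(4)]
    err = max(abs(prod[i][j] - F4[i][j]) for i in range(4) for j in range(4))
    rs = [sum(row) for row in A]
    cs = [sum(A[i][j] for i in range(4)) for j in range(4)]
    sum_err = max(abs(s - 1) for s in rs + cs)
    return err, sum_err

results = [check(phi) for phi in (0.0, 0.7, 2.0)]
```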
After completion of this document, we learned that parts of this section, in particular Corollary \ref{cor:optics}, were independently found in \cite{vos14b}.
\section{Conclusion}
We have studied a variant of a Sinkhorn-type normal form for unitary matrices. Its existence was conjectured in \cite{vos14a}, and we give a nonconstructive proof. In particular, the question whether the algorithm presented in \cite{vos14a} converges for every set of starting conditions remains open. It would also be nice to have an elementary proof of the fact that $T^n\cap UT^n\neq \emptyset$ for every unitary matrix $U$. The decomposition is not unique: we provided an example where, contrary to the $2\times 2$ case, there is a one-parameter family of matrices $A$, together with $L$ and $R$, such that $LAR=U$. We also suggested an argument that the number of different decompositions, when it is discrete, might grow exponentially; however, this lower bound relies on a bound on Lagrangian intersections which holds only for transversal intersections.
\subsection*{Acknowledgements}
We thank Michael Keyl for many helpful comments on the parts involving symplectic topology. M. Idel is supported by the Studienstiftung des deutschen Volkes. M. Wolf acknowledges support from the CHIST-ERA/BMBF project CQC.
\bibliographystyle{alpha}
https://arxiv.org/abs/1310.7260 | Limit Theorems for Empirical Density of Greatest Common Divisors | The law of large numbers for the empirical density for the pairs of uniformly distributed integers with a given greatest common divisor is a classic result in number theory. In this paper, we study the large deviations of the empirical density. We will also obtain a rate of convergence to the normal distribution for the central limit theorem. Some generalizations are provided.

\section{Introduction}
Let $X_{1},\ldots,X_{n}$ be independent random variables uniformly distributed on $\{1,2,\ldots,n\}$.
It is well known that
\begin{equation}\label{LLN_1}
\frac{1}{n^{2}}\sum_{1\leq i,j\leq n}1_{\text{gcd}(X_{i},X_{j})=\ell}\rightarrow\frac{6}{\pi^{2}\ell^{2}},
\quad\ell\in\mathbb{N}.
\end{equation}
The intuition is the following. If the law of large numbers holds, the limit is $\mathbb{P}(\text{gcd}(X_{1},X_{2})=\ell)$. The event $X_{1},X_{2}\in C_{\ell}:=\{\ell m:m\in\mathbb{N}\}$ happens with probability tending to $\frac{1}{\ell^{2}}$ as $n\rightarrow\infty$. Observe that
\begin{equation}
\{\text{gcd}(X_{1},X_{2})=\ell\}=\{X_{1},X_{2}\in C_{\ell},\text{gcd}(X_{1}/\ell,X_{2}/\ell)=1\},
\end{equation}
where $\{X_{1},X_{2}\in C_{\ell}\}$ and $\{\text{gcd}(X_{1}/\ell,X_{2}/\ell)=1\}$
are asymptotically independent. Therefore, we get \eqref{LLN_1} by noticing that $\sum_{\ell=1}^{\infty}\frac{1}{\ell^{2}}=\frac{\pi^{2}}{6}$.
On the other hand, two independent uniformly chosen integers are coprime if and only if they do not have a common prime factor.
For any prime number $p$, the probability that a uniformly random integer is divisible by $p$ is $\frac{1}{p}$ as $n$ goes to infinity.
Hence, we get an alternative formula,
\begin{equation}\label{LLN}
\frac{1}{n^{2}}\sum_{1\leq i,j\leq n}1_{\text{gcd}(X_{i},X_{j})=1}\rightarrow\prod_{p\in\mathcal{P}}\left(1-\frac{1}{p^{2}}\right)
=\frac{1}{\zeta(2)}=\frac{6}{\pi^{2}},
\end{equation}
where $\zeta(\cdot)$ is the Riemann zeta function and throughout this paper $\mathcal{P}$ denotes the set
of all the prime numbers in an increasing order.
The fact that $\mathbb{P}(\text{gcd}(X_{i},X_{j})=1)\rightarrow\frac{6}{\pi^{2}}$ was first proved by Ces\`{a}ro \cite{CesaroIII}.
The identity relating the product over primes to $\zeta(2)$ in \eqref{LLN}
is an example of an Euler product, and the evaluation of $\zeta(2)$ as
$\pi^{2}/6$ is the Basel problem, solved by Leonhard Euler in 1735.
For a rigorous proof of \eqref{LLN}, see e.g. Hardy and Wright \cite{Hardy}.
For further details and properties of the distributions, moments and asymptotics of the greatest common divisors,
we refer to Ces\`{a}ro \cite{CesaroI}, \cite{CesaroII}, Cohen \cite{Cohen}, Diaconis and Erd\H{o}s \cite{Diaconis}
and Fern\'{a}ndez and Fern\'{a}ndez \cite{FernandezI}, \cite{FernandezII}.
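The limits \eqref{LLN_1} and \eqref{LLN} are easy to probe by exhaustive counting; the cutoff $n=1000$ and the tolerance below are our own choices for this sketch.

```python
from math import gcd, pi

n = 1000
dens = {}
for ell in (1, 2, 3):
    # empirical density of ordered pairs with gcd equal to ell
    count = sum(1 for i in range(1, n + 1) for j in range(1, n + 1)
                if gcd(i, j) == ell)
    dens[ell] = count / n ** 2

# limiting values 6 / (pi^2 ell^2)
expected = {ell: 6 / (pi ** 2 * ell ** 2) for ell in dens}
```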
Since the law of large numbers is well known, it is natural to study the fluctuations, i.e. the central limit theorem,
and the probabilities of rare events, i.e. large deviations. The central limit theorem was recently
obtained in Fern\'{a}ndez and Fern\'{a}ndez \cite{FernandezI},
and we will provide a sharp rate of convergence to the normal distribution. The large deviations result is
the main contribution of this paper, and its proofs are considerably more involved.
For the readers who are interested in the probabilistic methods in number theory, we refer to the books
by Elliott \cite{Elliott} and Tenenbaum \cite{Tenenbaum}.
The paper is organized in the following way. In Section \ref{MainSection}, we state the main results, i.e.
the central limit theorem and the convergence rate to the Gaussian distribution and the large deviation principle
for the empirical density.
The proofs for large deviation principle are given in Section \ref{LDPProofs},
and the proofs for the central limit theorem are given in Section \ref{CLTProofs}.
\section{Main Results}\label{MainSection}
\subsection{Central Limit Theorem}
In this section, we will show a central limit theorem and obtain the sharp rate of convergence
to the normal distribution. The method we will use is based on a result by Baldi et al. \cite{Baldi}
for Stein's method for central limit theorems. Before we proceed, let us define the Kolmogorov-Smirnov distance $d_{KS}$ as
\begin{equation}
d_{KS}(X_{1},X_{2})=\sup_{x\in\mathbb{R}}{\left|F_{1}(x)-F_{2}(x)\right|},
\end{equation}
where $X_{1}$ and $X_{2}$ are two random variables with cumulative distribution function $F_{1}(x)$ and $F_2(x)$, respectively.
Then, we have the following result.
\begin{theorem}\label{CLTThm}
Let $Z$ be a standard normal random variable with mean $0$ and variance $1$, and let $\ell\in\mathbb{N}$. Then
\begin{equation}
d_{KS}\left(\frac{\sum_{1\leq i,j\leq n}1_{\text{gcd}(X_{i},X_{j})=\ell}-n^{2}\frac{6}{\ell^{2}\pi^{2}}}
{2\sigma^{2}n^{3/2}},Z\right)\leq\frac{C}{n^{1/2}},
\end{equation}
where $C>0$ is a universal constant and
\begin{equation}
\sigma^{2}:=\frac{1}{\ell^{3}}\prod_{p\in\mathcal{P}}
\left(1-\frac{2}{p^{2}}+\frac{1}{p^{3}}\right)-\frac{36}{\ell^{4}\pi^{4}}.
\end{equation}
\end{theorem}
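The Euler-type product in the variance converges quickly, so $\sigma^{2}$ is easy to evaluate numerically; for $\ell=1$ it is a small positive number. The truncation point of the product and the bracketing bounds below are our own choices.

```python
from math import pi

def primes_up_to(N):
    # simple sieve of Eratosthenes
    sieve = bytearray([1]) * (N + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(N ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(sieve[p * p::p]))
    return [p for p in range(2, N + 1) if sieve[p]]

# truncated Euler-type product over primes up to 10^5; the tail contributes
# a factor 1 - O(1e-5)
prod = 1.0
for p in primes_up_to(10 ** 5):
    prod *= 1 - 2 / p ** 2 + 1 / p ** 3

ell = 1
sigma2 = prod / ell ** 3 - 36 / (ell ** 4 * pi ** 4)
```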
\subsection{Large Deviation Principle}
In this section, we are interested in the asymptotics, as $n\rightarrow\infty$, of the probability
\begin{equation}
\mathbb{P}\left(\frac{1}{n^{2}}\sum_{1\leq i,j\leq n}1_{\text{gcd}(X_{i},X_{j})=\ell}\in A\right).
\end{equation}
Indeed, later we will see that,
\begin{equation}
\mathbb{P}\left(\frac{1}{n^{2}}\sum_{1\leq i,j\leq n}1_{\text{gcd}(X_{i},X_{j})=\ell}\in A\right)
=e^{-nH(A)+o(n)},
\quad\ell\in\mathbb{N},
\end{equation}
where $H(A)=0$ if $\frac{1}{\ell^{2}}\frac{6}{\pi^{2}}\in A$
and $H(A)>0$ if $\frac{1}{\ell^{2}}\frac{6}{\pi^{2}}\notin A$, i.e. this probability decays exponentially
fast as $n\rightarrow\infty$ if the empirical mean deviates away from the ergodic mean.
This phenomenon is called large deviations in probability theory.
Before we proceed, let us introduce the formal definition of large deviations.
A sequence $(P_{n})_{n\in\mathbb{N}}$ of probability measures on a topological space $X$
satisfies the large deviation principle with rate function $I:X\rightarrow\mathbb{R}$ if $I$ is non-negative,
lower semicontinuous and for any measurable set $A$,
\begin{equation}
-\inf_{x\in A^{o}}I(x)\leq\liminf_{n\rightarrow\infty}\frac{1}{n}\log P_{n}(A)
\leq\limsup_{n\rightarrow\infty}\frac{1}{n}\log P_{n}(A)\leq-\inf_{x\in\overline{A}}I(x).
\end{equation}
Here, $A^{o}$ is the interior of $A$ and $\overline{A}$ is its closure.
We refer to Dembo and Zeitouni \cite{Dembo} or Varadhan \cite{VaradhanII} for general background of large deviations and the applications.
For the moment, let us
concentrate on $I_{1}(x)$, the case in which we consider the number of coprime pairs.
Let $S:=(s_{i})_{i\in\mathbb{N}}$ be a sequence of numbers on $[0,1]$. We define
the probability measure $\nu_{k}^{S}$ on $[0,1]$, for $k\in\mathbb{N}$, as follows.
\begin{equation}
\nu_{k}^{S}([0,b]):=\sum_{i=1}^{k}b_{i}(1-s_{i})\prod_{j=1}^{i-1}s_{j}^{b_{j}}(1-s_{j})^{1-b_{j}},
\end{equation}
where $b=0.b_{1}b_{2}\ldots$ is the binary expansion of $b$. Whenever there is more than one binary representation, we always take the finite expansion. This has no effect on our problem, since the set of such numbers is countable and has measure zero under $\nu_{k}^{S}$, for any $k\in\mathbb{N}$.
Now, if we draw a random variable $U^{k}$
according to the measure $\nu_{k}^{S}$ and consider the first $k$ digits in the binary expansion of $U^{k}$,
they are distributed as $k$ independent Bernoulli random variables with parameters $(s_{i})_{i=1}^{k}$. It is easy
to see that the measures $\nu_{k}^{S}$ converge weakly; let $\nu^{S}$ be their weak limit.
For example, if $s_{i}=\frac{1}{2}$ for all $i\in\mathbb{N}$, then $\nu^{S}$ is simply the Lebesgue measure on $[0,1]$.
Let $(p_{i})_{i\in\mathbb{N}}$ be the members of $\mathcal {P}$ in the increasing order. From now on,
we work with $\nu_{k}$ and $\nu$, for which the $s_{i}$ is $\frac{1}{p_{i}}$, or
\begin{equation}\label{my_nu}
\nu=\nu^{P}, \text{ where } P:=\left( \frac{1}{p_{i}}\right)_{i\in\mathbb{N}}.
\end{equation}
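The measure of an interval under $\nu_{k}^{S}$ can be computed digit by digit. The helper below is an illustration of this construction (not code from the paper); it computes $\nu_{k}^{S}([0,b))$, which differs from $\nu_{k}^{S}([0,b])$ only by the mass of the single atom at $b$, and recovers the uniform distribution when all $s_{i}=\frac{1}{2}$.

```python
def nu_cdf(b_digits, s):
    """P(U < 0.b1 b2 ...) when the binary digits of U are independent
    Bernoulli(s_i) random variables."""
    total, prefix = 0.0, 1.0
    for b_i, s_i in zip(b_digits, s):
        if b_i == 1:
            # digit i of U is 0 while all earlier digits agree with b
            total += prefix * (1 - s_i)
            prefix *= s_i          # digit i of U agrees with b_i = 1
        else:
            prefix *= 1 - s_i      # digit i of U agrees with b_i = 0
    return total

# with s_i = 1/2 this is the uniform CDF: b = 0.101 in binary is 5/8
uniform = nu_cdf([1, 0, 1], [0.5, 0.5, 0.5])

# with s_i = 1/p_i (the measure nu of the paper, truncated to two digits)
primes_cdf = nu_cdf([1, 1], [1 / 2, 1 / 3])
```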
In addition, for $a\in[0,1]$ and $i\in\mathbb{N}$, we define
\begin{equation}\label{chi}
\chi_{i}(a)=\text{the $i$th digit in the binary expansion of $a$}.
\end{equation}
We also define $f:[0,1]^{2}\rightarrow\{0,1\}$ as follows:
\begin{equation}\label{my_rate}
f(x,y):=1-\max_{i\in\mathbb{N}}\chi_{i}(x)\chi_{i}(y).
\end{equation}
In other words, $f(x,y)$ is $1$ if $x$ and $y$ do not share a common $1$ at the same place in their binary expansions
and $f$ is $0$ otherwise. Now, we are ready to state our main result.
\begin{theorem}\label{LDPThm}
Recall that the random variables $X_{1},\ldots,X_{n}$ are distributed uniformly on $\{1,2,\ldots,n\}$. The probability measures
$\mathbb{P}\left(\frac{1}{n^{2}}\sum_{1\leq i,j\leq n}1_{\text{gcd}(X_{i},X_{j})=1}\in\cdot\right)$ satisfy
a large deviation principle with rate function
\begin{equation}\label{I_1}
I_{1}(x)=\inf_{\iint_{[0,1]^{2}}f(y,z)\mu(dy)\mu(dz)=x}\int_{[0,1]}\log\left(\frac{d\mu}{d\nu}\right)d\mu,
\end{equation}
where $\nu$ and $f$ are defined in \eqref{my_nu} and \eqref{my_rate}, respectively.
\end{theorem}
Let us get some intuition with \eqref{I_1}, before we see our next result. For $X\in\mathbb{N}$ and $p\in\mathcal{P}$, the indicator $\textbf{1}_{p|X}$ is $1$ if $p$ divides $X$, and $0$ otherwise. We let $a\in[0,1]$ be a number such that $\chi_{i}(a)=\textbf{1}_{p_{i}|X}$, where $p_{i}$ is the $i$th prime in $\mathcal{P}$ and $i\in\mathbb{N}$. In other words, the $i$th digit in the binary expansion of $a$ shows whether $X$ is divisible by $p_{i}$ or not. We also define
\begin{equation}
\psi : \mathbb{N}\to [0,1] \quad \text{as} \quad \psi(X):=a.
\end{equation}
Now, for integers $X,Y\in\mathbb{N}$, $\text{gcd}(X,Y)=1$ if and only if no $p\in\mathcal{P}$ divides both $X$ and $Y$. So, comparing this with the definition \eqref{my_rate} of $f$, we get
\begin{equation}
f(\psi (X),\psi(Y))=\textbf{1}_{\text{gcd}(X,Y)=1}.
\end{equation}
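This correspondence is finitary for bounded integers: for $X,Y\leq N$ it suffices to track divisibility by the primes up to $N$. A quick check (the range $N$ is our choice for this sketch):

```python
from math import gcd

N = 60
primes = [p for p in range(2, N + 1) if all(p % q for q in range(2, p))]

def psi_digits(x):
    # digit i of psi(x) records whether the i-th prime divides x
    return [1 if x % p == 0 else 0 for p in primes]

def f(xd, yd):
    # 1 iff the digit strings share no common 1, i.e. no common prime factor
    return 1 - max(a * b for a, b in zip(xd, yd))

checks = all(f(psi_digits(x), psi_digits(y)) == (1 if gcd(x, y) == 1 else 0)
             for x in range(1, N + 1) for y in range(1, N + 1))
```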
Therefore, our problem is to show large deviation principle for probability measures
\begin{equation}
\mathbb{P}\left(\frac{1}{n^{2}}\sum_{1\leq i,j\leq n}{f(\psi(X_{i}), \psi(X_{j}))}\in\cdot\right),
\end{equation}
where $X_{i}$, for $1\leq i\leq n$, are distributed uniformly on $\{1,2,\ldots,n\}$. We note that, for $p,q\in\mathcal{P}$ and as $n$ goes to infinity, the probabilities of the events $\{p|X_{1}\}$, $\{q|X_{1}\}$ and $\{pq|X_{1}\}$ approach $\frac{1}{p}$, $\frac{1}{q}$ and $\frac{1}{pq}$, respectively. Hence, as $n$ goes to infinity, the law of $\psi(X_{1})$ looks more and more like $\nu$. Although this is not precise, for large $n$ the variables $\psi(X_{1}),\ldots,\psi(X_{n})$ behave approximately like $n$ i.i.d. random variables with law $\nu$. Thus, our hope is to use Sanov's theorem to obtain a large deviation principle for the random variables $\psi(X_{i})$, and then use the
contraction principle with the map $f$ to get the rate function \eqref{I_1}.
There are a few issues that need to be addressed, e.g. the $\psi(X_{i})$, for $1\leq i\leq n$, are not distributed according to $\nu$,
and the mapping $f$ is not continuous at any point (to apply the contraction principle, the mapping is usually assumed to be
continuous). We will come back to these obstacles in the proof section, along with the statements of Sanov's theorem and the contraction principle.
We can also consider the following large deviation problem,
\begin{equation}
\mathbb{P}\left(\frac{1}{n^{2}}\sum_{1\leq i,j\leq n}1_{\text{gcd}(X_{i},X_{j})=\ell}\in\cdot\right).
\end{equation}
Write
\begin{equation}
\ell=q_{1}^{\beta_{1}}q_{2}^{\beta_{2}}\cdots q_{m}^{\beta_{m}},
\end{equation}
where $q_{i}$ are distinct primes and $\beta_{i}$ are positive integers for $1\leq i\leq m$.
For a fixed $\ell$, let $p_{1},\ldots,p_{k}$ be the smallest
$k$ primes distinct from $q_{1},\ldots,q_{m}$. Any positive integer can be written as
\begin{equation}
q_{1}^{\gamma_{1}}\cdots q_{m}^{\gamma_{m}}p_{1}^{\alpha_{1}}\cdots p_{k}^{\alpha_{k}},
\end{equation}
where $\gamma_{i}$ and $\alpha_{j}$ are non-negative integers.
Any number on $[0,1]$ can be written as
\begin{equation}
0.\gamma_{1}\gamma_{2}\cdots\gamma_{m}\alpha_{1}\alpha_{2}\cdots\alpha_{k}\cdots,
\end{equation}
where $\gamma_{1},\ldots,\gamma_{m}$ are obtained from ternary expansion and $\alpha_{1},\alpha_{2},\ldots$
are obtained from binary expansion.
The interpretation is that if an integer is not divisible by $q_{i}^{\beta_{i}}$, then $\gamma_{i}=0$.
If it is divisible by $q_{i}^{\beta_{i}}$ but not by $q_{i}^{\beta_{i}+1}$, then $\gamma_{i}=1$.
Finally, if it is divisible by $q_{i}^{\beta_{i}+1}$, then $\gamma_{i}=2$.
We also have $\alpha_{j}=0$ if an integer is not divisible by $p_{j}$ and $1$ otherwise.
Restrict to the first $m+k$ digits and define a probability measure $\nu_{k}$ that assigns to the digit string $(\gamma_{1},\ldots,\gamma_{m},\alpha_{1},\ldots,\alpha_{k})$ the probability
\begin{equation}
g(q_{1})\cdots g(q_{m})\left(\frac{1}{p_{1}}\right)^{\alpha_{1}}\left(1-\frac{1}{p_{1}}\right)^{1-\alpha_{1}}
\cdots\left(\frac{1}{p_{k}}\right)^{\alpha_{k}}\left(1-\frac{1}{p_{k}}\right)^{1-\alpha_{k}},
\end{equation}
where
\begin{equation}
g(q_{i})
=
\begin{cases}
1-\frac{1}{q_{i}^{\beta_{i}}} &\text{if $\gamma_{i}=0$}
\\
\frac{1}{q_{i}^{\beta_{i}}}-\frac{1}{q_{i}^{\beta_{i}+1}} &\text{if $\gamma_{i}=1$}
\\
\frac{1}{q_{i}^{\beta_{i}+1}} &\text{if $\gamma_{i}=2$}
\end{cases},
\quad 1\leq i\leq m.
\end{equation}
Let $\nu$ be the weak limit of $\nu_{k}$. We get the following result. The proofs are similar
to that of Theorem \ref{LDPThm} and are omitted here.
\begin{theorem} \label{LDPThm2}
For $\ell>1$, the probability measures
$\mathbb{P}\left(\frac{1}{n^{2}}\sum_{1\leq i,j\leq n}1_{\text{gcd}(X_{i},X_{j})=\ell}\in\cdot\right)$ satisfy
a large deviation principle with rate function
\begin{equation}
I_{\ell}(x)=\inf_{\iint_{[0,1]^{2}}f_{\ell}(y,z)\mu(dy)\mu(dz)=x}\int_{[0,1]}\log\left(\frac{d\mu}{d\nu}\right)d\mu,
\end{equation}
where $f_{\ell}(y,z)=1$ if $0$ never occurs among the first $m$ digits of the expansions of $y$ and $z$, $y$ and $z$ do not share a common $2$ among these first $m$ digits, and they do not share a common $1$ among the remaining digits; otherwise, $f_{\ell}(y,z)=0$.
\end{theorem}
\begin{remark}
It is interesting to observe that $\frac{6}{\pi^{2}}$ is also the density of square-free integers. That is because an
integer is square-free if and only if it is not divisible by $p^{2}$ for any prime number $p$.
Therefore, we have the law of large numbers, i.e.
\begin{equation}
\frac{1}{n}\sum_{i=1}^{n}1_{X_{i}\text{ is square-free}}\rightarrow
\prod_{p\in\mathcal{P}}\left(1-\frac{1}{p^{2}}\right)=\frac{6}{\pi^{2}}.
\end{equation}
The central limit theorem is standard,
\begin{equation}
\frac{\sum_{i=1}^{n}1_{X_{i}\text{ is square-free}}-\frac{6n}{\pi^{2}}}{\sqrt{n}}\rightarrow
N\left(0,\frac{6}{\pi^{2}}-\frac{36}{\pi^{4}}\right).
\end{equation}
The large deviation principle also holds with rate function
\begin{equation}
I(x):=x\log\left(\frac{x}{6/\pi^{2}}\right)+(1-x)\log\left(\frac{1-x}{1-6/\pi^{2}}\right).
\end{equation}
\end{remark}
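The square-free analogue is easy to check numerically as well, and the rate function vanishes exactly at the ergodic mean $6/\pi^{2}$, as it must. The cutoff and tolerance below are our own choices for this sketch.

```python
from math import pi, log

n = 10000

def squarefree(m):
    # m is square-free iff no d^2 with d >= 2 divides it
    d = 2
    while d * d <= m:
        if m % (d * d) == 0:
            return False
        d += 1
    return True

density = sum(squarefree(m) for m in range(1, n + 1)) / n

def rate(x, m=6 / pi ** 2):
    # Sanov/Bernoulli rate function relative to the mean 6/pi^2
    return x * log(x / m) + (1 - x) * log((1 - x) / (1 - m))

rate_at_mean = rate(6 / pi ** 2)
```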
\begin{remark}
One can also generalize the result and ask for the probability that $d$ numbers chosen uniformly at random
from $\{1,2,\ldots,n\}$ have greatest common divisor $1$.
It is not hard to see that
\begin{equation}
\frac{1}{n^{d}}\sum_{1\leq i_{1},\ldots,i_{d}\leq n}1_{\text{gcd}(X_{i_{1}},\ldots,X_{i_{d}})=1}
\rightarrow\prod_{p\in\mathcal{P}}\left(1-\frac{1}{p^{d}}\right)=\frac{1}{\zeta(d)},
\quad\text{as $n\rightarrow\infty$}.
\end{equation}
There are $d^{2}n(n-1)\cdots(n-(2d-2))$ pairs $(i_{1},\ldots,i_{d})$ and $(j_{1},\ldots,j_{d})$ so that
$|\{i_{1},\ldots,i_{d}\}\cap\{j_{1},\ldots,j_{d}\}|=1$. It is also easy to see that
\begin{equation}
\mathbb{P}\left(\text{gcd}(X_{1},\ldots,X_{d})=\text{gcd}(X_{d},\ldots,X_{2d-1})=1\right)
=\prod_{p\in\mathcal{P}}\left(1-\frac{2}{p^{d}}+\frac{1}{p^{2d-1}}\right).
\end{equation}
Therefore, we have the central limit theorem.
\begin{align}
&\frac{1}{d\cdot n^{\frac{2d-1}{2}}}
\left\{\sum_{1\leq i_{1},\ldots,i_{d}\leq n}1_{\text{gcd}(X_{i_{1}},\ldots,X_{i_{d}})=1}
-n^{d}\prod_{p\in\mathcal{P}}\left(1-\frac{1}{p^{d}}\right)\right\}
\\
&\rightarrow
N\left(0,\prod_{p\in\mathcal{P}}\left(1-\frac{2}{p^{d}}+\frac{1}{p^{2d-1}}\right)
-\prod_{p\in\mathcal{P}}\left(1-\frac{1}{p^{d}}\right)^{2}\right).\nonumber
\end{align}
We also have the large deviation principle for $\mathbb{P}(\frac{1}{n^{d}}\sum_{1\leq i_{1},\ldots,i_{d}\leq n}1_{\text{gcd}(X_{i_{1}},\ldots,X_{i_{d}})=1}\in\cdot)$ with the rate function
\begin{equation}
I(x)=\inf_{\idotsint_{[0,1]^{d}}f(x_{1},x_{2},\ldots,x_{d})\mu(dx_{1})\cdots\mu(dx_{d})=x}\int_{[0,1]}\log\left(\frac{d\mu}{d\nu}\right)d\mu,
\end{equation}
where $\nu$ is the same as in Theorem \ref{LDPThm} and
\begin{equation}
f(x_{1},\ldots,x_{d})=
\begin{cases}
1 &\text{if $x_{1},\ldots,x_{d}$ do not share a common $1$ in their binary expansions}
\\
0 &\text{otherwise}
\end{cases}.
\end{equation}
\end{remark}
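The $d$-dimensional law of large numbers can likewise be checked by brute force; the convergence to $1/\zeta(d)$ is already visible for modest $n$. The parameters below are our own choices for this sketch.

```python
from math import gcd

d, n = 3, 120
count = 0
for i in range(1, n + 1):
    for j in range(1, n + 1):
        g = gcd(i, j)
        for k in range(1, n + 1):
            if gcd(g, k) == 1:
                count += 1
density = count / n ** d

# zeta(3) by direct summation; the truncation error is negligible here
zeta3 = sum(1 / m ** 3 for m in range(1, 10 ** 5))
```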
\section{Proofs of Large Deviation Principle}\label{LDPProofs}
The proof formalizes the discussion following Theorem \ref{LDPThm}. In order to make that discussion precise, we need
to prove a series of lemmas and theorems giving superexponential estimates. It is also worth mentioning that the proof of Theorem \ref{LDPThm2} is very close to that of Theorem \ref{LDPThm}, and we skip it.
Let us give the definitions of $Y_{p}$, $S(k_{1},k_{2})$ and $\tilde{\mathbb{P}}$ that will be used repeatedly throughout
this section.
\begin{definition}
For any prime number $p$, we define
\begin{equation}
Y_{p}:=\#\{1\leq i\leq n: X_{i}\text{ is divisible by $p$}\}.
\end{equation}
\end{definition}
\begin{definition}
For any $k_{1},k_{2}\in\mathbb{N}$, let us define
\begin{equation}
S(k_{1},k_{2}):=\{p\in\mathcal{P}:k_{1}<p\leq k_{2}\}.
\end{equation}
\end{definition}
\begin{definition}
We define a probability measure $\tilde{\mathbb{P}}$ under which the $X_{i}$ are i.i.d., $\tilde{\mathbb{P}}(\text{$X_{i}$ is divisible
by $p$})=\frac{1}{p}$ for $p\in\mathcal{P}$ with $p\leq n$, and the events $\{\text{$X_{i}$ is divisible by $p$}\}$ and
$\{\text{$X_{i}$ is divisible by $q$}\}$
are independent for distinct $p,q\in\mathcal{P}$, $p,q\leq n$.
\end{definition}
\begin{lemma}\label{MGF}
Let $Y$ be a Binomial random variable with $n$ trials and success probability $\alpha$. For any $\lambda\in\mathbb{R}$, let $\lambda_{1}:=e^{\lambda}$.
If $2\alpha\lambda_{1}^{2}<1$ and $\alpha<\frac{1}{2}$, then, for sufficiently large $n$,
\begin{equation}
\frac{1}{n}\log\tilde{\mathbb{E}}\left[e^{\frac{\lambda}{n}Y^{2}}\right]
\leq 4\lambda\alpha^{2}\lambda_{1}^{4}+\frac{\log(4(n+1))}{n}.
\end{equation}
\end{lemma}
\begin{proof}
By the definition of Binomial distribution,
\begin{align}
\tilde{\mathbb{E}}\left[e^{\frac{\lambda}{n}Y^{2}}\right]
&=\sum_{i=0}^{n}\binom{n}{i}\alpha^{i}(1-\alpha)^{n-i}e^{\frac{\lambda i^{2}}{n}}
\\
&\leq(n+1)\max_{0\leq i\leq n}\binom{n}{i}\alpha^{i}(1-\alpha)^{n-i}e^{\frac{\lambda i^{2}}{n}}.\nonumber
\end{align}
Using Stirling's formula, for any $n\in\mathbb{N}$,
\begin{equation}
1\leq\frac{n!}{\sqrt{2\pi n}(n/e)^{n}}\leq\frac{e}{\sqrt{2\pi}}.
\end{equation}
Therefore, we have $\binom{n}{i}\leq 4e^{nH(i/n)}$, where
$H(x):=-x\log x-(1-x)\log(1-x)$, $0\leq x\leq 1$. Hence,
\begin{align}
&\frac{1}{n}\log\tilde{\mathbb{E}}\left[e^{\frac{\lambda}{n}Y^{2}}\right]
\\
&\leq\frac{\log 4(n+1)}{n}
+\max_{0\leq i\leq n}\left\{H\left(\frac{i}{n}\right)+\frac{i}{n}\log(\alpha)+\left(1-\frac{i}{n}\right)\log(1-\alpha)
+\lambda\left(\frac{i}{n}\right)^{2}\right\}.\nonumber
\end{align}
To find the maximum of
\begin{equation}
f(x):=H(x)+x\log(\alpha)+(1-x)\log(1-\alpha)+\lambda x^{2},
\end{equation}
it is sufficient to look at
\begin{equation}\label{maxI}
f'(x)=\log\left(\frac{\alpha}{1-\alpha}\right)-\log\left(\frac{x}{1-x}\right)+2\lambda x.
\end{equation}
The assumptions $2\alpha\lambda_{1}^{2}<1$ and $\alpha<\frac{1}{2}$ imply that
\begin{equation}\label{maxII}
\frac{\alpha}{1-\alpha}\lambda_{1}^{2}
\leq 2\alpha\lambda_{1}^{2}\leq\frac{2\alpha\lambda_{1}^{2}}{1-2\alpha\lambda_{1}^{2}}.
\end{equation}
Since logarithm is an increasing function, \eqref{maxI} and \eqref{maxII} imply
that $f'(x)<0$ for any $x\geq 2\alpha\lambda_{1}^{2}$. Therefore, the maximum of $f$ is attained
at some $x\leq 2\alpha\lambda_{1}^{2}$.
In addition, since $\log(\frac{x}{1-x})$ is increasing in $x$, the maximum of
\begin{equation}
g(x):=H(x)+x\log(\alpha)+(1-x)\log(1-\alpha)
\end{equation}
is achieved at $x=\alpha$, which is $g(\alpha)=0$. Hence,
\begin{equation}
\max_{0\leq x\leq 1}f(x)=\max_{0\leq x\leq 2\alpha\lambda_{1}^{2}}f(x)\leq\max_{0\leq x\leq 2\alpha\lambda_{1}^{2}}\left\{g(x)+\lambda x^{2}\right\}\leq 0+\lambda(2\alpha\lambda_{1}^{2})^{2},
\end{equation}
which concludes the proof.
\end{proof}
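As an illustrative numerical sanity check (not part of the proof), the conclusion of the lemma, $\frac{1}{n}\log\tilde{\mathbb{E}}[e^{\frac{\lambda}{n}Y^{2}}]\leq\frac{\log(4(n+1))}{n}+\lambda(2\alpha\lambda_{1}^{2})^{2}$ with $\lambda_{1}=e^{\lambda}$, can be verified by exact summation for moderate $n$; the parameter values below are arbitrary choices satisfying the hypotheses.

```python
import math

def mgf_bound_holds(n, alpha, lam):
    """Exact check that (1/n) log E[exp(lam*Y^2/n)] <= log(4(n+1))/n + lam*(2*alpha*lam1^2)^2
    for Y ~ Binomial(n, alpha), where lam1 = e^lam."""
    lam1 = math.exp(lam)
    assert 2 * alpha * lam1**2 < 1 and alpha < 0.5   # hypotheses of the lemma
    mgf = sum(math.comb(n, i) * alpha**i * (1 - alpha)**(n - i)
              * math.exp(lam * i * i / n) for i in range(n + 1))
    return math.log(mgf) / n <= math.log(4 * (n + 1)) / n + lam * (2 * alpha * lam1**2)**2

# arbitrary test parameters satisfying 2*alpha*e^(2*lam) < 1 and alpha < 1/2
print(all(mgf_bound_holds(n, a, l) for n in (20, 100)
          for a, l in ((0.1, 0.5), (0.05, 1.0), (0.3, 0.1))))
```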
\begin{theorem}\label{SuperI}
For any $\epsilon>0$ and all sufficiently large $k,n\in\mathbb{N}$,
\begin{equation}
\frac{1}{n}\log\tilde{\mathbb{P}}\left(\sum_{p\in S(k,n)}Y_{p}^{2}>n^{2}\epsilon\right)
\leq-\frac{\epsilon}{8}\log(k)+4.
\end{equation}
Therefore, we have the following superexponential estimate,
\begin{equation}
\limsup_{k\rightarrow\infty}\limsup_{n\rightarrow\infty}
\frac{1}{n}\log\tilde{\mathbb{P}}\left(\sum_{p\in S(k,n)}Y_{p}^{2}>n^{2}\epsilon\right)
=-\infty.
\end{equation}
\end{theorem}
\begin{proof}
Note that $Y_{p}=\#\{1\leq i\leq n:\tilde{X}_{i}\text{ is divisible by $p$}\}$, and
whether $\tilde{X}_{i}$ is divisible by $p$ is independent of whether it is divisible by $q$
for distinct primes $p$ and $q$. In other words, the $Y_{p}$ are independent for distinct primes $p\in\mathcal{P}$.
By the exponential Chebyshev inequality,
\begin{align}
\frac{1}{n}\log\tilde{\mathbb{P}}\left(\sum_{p\in S(k,n)}Y_{p}^{2}>n^{2}\epsilon\right)
&\leq-\lambda\epsilon+\frac{1}{n}\log\tilde{\mathbb{E}}\left[e^{\frac{\lambda}{n}\sum_{p\in S(k,n)}Y_{p}^{2}}\right]
\label{UpperZero}
\\
&=-\lambda\epsilon+\frac{1}{n}\sum_{p\in S(k,n)}\log\tilde{\mathbb{E}}\left[e^{\frac{\lambda}{n}Y_{p}^{2}}\right].
\nonumber
\end{align}
We choose $k\in\mathbb{N}$ large enough so that $\lambda_{1}=e^{\lambda}<\sqrt{k/2}$. For $k<p\leq n$, we have
$\frac{2}{p}\lambda_{1}^{2}<\frac{2}{k}\lambda_{1}^{2}<1$.
By Lemma \ref{MGF}, we have
\begin{equation}\label{UpperI}
\frac{1}{n}\sum_{p\in S(k,n)}\log\tilde{\mathbb{E}}\left[e^{\frac{\lambda}{n}Y_{p}^{2}}\right]
\leq\sum_{p\in S(k,n)}\frac{\log(4(n+1))}{n}+4\lambda\left(\frac{1}{p}\right)^{2}\lambda_{1}^{4}.
\end{equation}
The prime number theorem states that
\begin{equation}
\lim_{x\rightarrow\infty}\frac{\pi(x)}{x/\log(x)}=1,
\end{equation}
where $\pi(x)$ denotes the number of primes less than or equal to $x$. Therefore, $\#\{p\in\mathcal{P}:k<p\leq n\}\leq\frac{2n}{\log n}$
for sufficiently large $n$. Together with \eqref{UpperI}, for sufficiently large $n$, we get
\begin{align}
\frac{1}{n}\log\tilde{\mathbb{E}}\left[e^{\frac{\lambda}{n}\sum_{k<p\leq n,p\in\mathcal{P}}Y_{p}^{2}}\right]
&\leq\frac{2n}{\log n}\frac{\log 4(n+1)}{n}+4\lambda\lambda_{1}^{4}\sum_{k<p\leq n,p\in\mathcal{P}}\frac{1}{p^{2}}\label{UpperII}
\\
&\leq 3+4\lambda\lambda_{1}^{4}\sum_{\ell>k}\frac{1}{\ell^{2}}\nonumber
\\
&\leq 3+4\lambda\lambda_{1}^{4}\frac{1}{k}.\nonumber
\end{align}
Plugging \eqref{UpperII} into \eqref{UpperZero}, we get
\begin{align}
\frac{1}{n}\log\tilde{\mathbb{P}}\left(\sum_{k<p\leq n,p\in\mathcal{P}}Y_{p}^{2}>n^{2}\epsilon\right)
&\leq-\lambda\epsilon+3+\frac{4\lambda\lambda_{1}^{4}}{k}
\\
&=3-\lambda\left(\epsilon-\frac{4\lambda_{1}^{4}}{k}\right).\nonumber
\end{align}
We can choose $\lambda=\frac{1}{4}\log(\epsilon k)-3$, so that $\frac{4\lambda_{1}^{4}}{k}<\frac{\epsilon}{2}$,
and our earlier assumption that $\lambda_{1}<\sqrt{k/2}$ still holds for large $k$. Hence,
\begin{align}
\frac{1}{n}\log\tilde{\mathbb{P}}\left(\sum_{k<p\leq n,p\in\mathcal{P}}Y_{p}^{2}>\epsilon n^{2}\right)
&\leq 3-\frac{\log(k\epsilon)}{8}\epsilon+3\epsilon
\\
&\leq 4-\frac{\log(k)}{8}\epsilon,\nonumber
\end{align}
which yields the desired result.
\end{proof}
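Two auxiliary facts used above, the prime-counting bound $\pi(n)\leq\frac{2n}{\log n}$ for large $n$ and the tail estimate $\sum_{p>k}p^{-2}\leq\sum_{\ell>k}\ell^{-2}\leq\frac{1}{k}$, can be checked numerically with a simple sieve; the cutoffs $n=10^{5}$ and $k=50$ below are arbitrary test values (an illustrative sketch, not part of the proof).

```python
import math

def sieve(n):
    """Primes <= n by the sieve of Eratosthenes."""
    is_p = [True] * (n + 1)
    is_p[0] = is_p[1] = False
    for i in range(2, int(n**0.5) + 1):
        if is_p[i]:
            is_p[i * i::i] = [False] * len(is_p[i * i::i])
    return [i for i in range(n + 1) if is_p[i]]

n, k = 10**5, 50   # arbitrary test values
primes = sieve(n)
assert len(primes) <= 2 * n / math.log(n)   # pi(n) <= 2n/log n for large n
tail = sum(1.0 / (p * p) for p in primes if p > k)
# tail sum over primes bounded by the integer tail sum, which is <= 1/k
assert tail <= sum(1.0 / (l * l) for l in range(k + 1, n + 1)) <= 1.0 / k
print(len(primes), round(tail, 6))
```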
\begin{lemma}\label{K1K2}
For $k_{1},k_{2}\in\mathbb{N}$ sufficiently large,
\begin{equation}\label{UpperIII}
\frac{1}{n}\log\mathbb{P}\left(\sum_{p\in S(k_{1},k_{2})}Y_{p}^{2}>\epsilon n^{2}\right)
\leq 4\log\log k_{2}+4-\frac{\log(k_{1})}{8}\epsilon.
\end{equation}
\end{lemma}
\begin{proof}
The idea of the proof is to change the measure from $\mathbb{P}$ to $\tilde{\mathbb{P}}$ and apply Lemma \ref{MGF}.
First, observe that $F(X_{1},\ldots,X_{n})=\sum_{p\in S(k_{1},k_{2})}Y_{p}^{2}$ only depends on the events
$\{X_{i}\in E_{p_{1},\ldots,p_{\ell}}\}$, where $i,\ell\in\{1,2,\ldots,n\}$ and $\{p_{1},\ldots,p_{\ell}\}\subset S(k_{1},k_{2})$
and
\begin{equation}
E_{p_{1},\ldots,p_{\ell}}:=
\left\{i\in\{1,2,\ldots,n\}|\text{Prime}(i)\cap S(k_{1},k_{2})=\{p_{1},\ldots,p_{\ell}\}\right\},
\end{equation}
where $\text{Prime}(x):=\{q\in\mathcal{P}:\text{$x$ is divisible by $q$}\}$.
We will show that the following uniform upper bound holds,
\begin{equation}\label{UpperIV}
\frac{\mathbb{P}(X_{1}\in E_{p_{1},\ldots,p_{\ell}})}
{\tilde{\mathbb{P}}(X_{1}\in E_{p_{1},\ldots,p_{\ell}})}
\leq e^{4\log\log k_{2}}.
\end{equation}
Before we proceed, let us show that \eqref{UpperIV} and Theorem \ref{SuperI} imply \eqref{UpperIII}.
Since $X_{i}$'s are independent and $\tilde{X}_{i}$'s are independent,
\begin{equation}
\frac{\mathbb{P}\left(X_{i}\in E_{p_{1}^{i},\ldots,p_{\ell}^{i}},1\leq i\leq n\right)}
{\tilde{\mathbb{P}}\left(X_{i}\in E_{p_{1}^{i},\ldots,p_{\ell}^{i}},1\leq i\leq n\right)}
\leq\left[e^{4\log\log k_{2}}\right]^{n},
\end{equation}
where $\{p_{1}^{i},\ldots,p_{\ell}^{i}\}\subset S(k_{1},k_{2})$ for $1\leq i\leq n$.
Recall that $F$ only depends on events $\{X_{i}\in E_{p_{1},\ldots,p_{\ell}}\}$, therefore,
\begin{align}
\frac{1}{n}\log\mathbb{P}(F>n^{2}\epsilon)
&\leq 4\log\log k_{2}+\frac{1}{n}\log\tilde{\mathbb{P}}(F>n^{2}\epsilon)
\\
&\leq 4\log\log k_{2}+4-\frac{\log(k_{1})}{8}\epsilon,\nonumber
\end{align}
where we used Theorem \ref{SuperI} at the last step. Now, let us prove \eqref{UpperIV}.
First, let us give an upper bound for the numerator, that is,
\begin{equation}\label{UpperV}
\mathbb{P}\left(X_{1}\in E_{p_{1},\ldots,p_{\ell}}\right)
=\frac{1}{n}\,\#E_{p_{1},\ldots,p_{\ell}}
\leq\frac{\left[\frac{n}{p_{1}\cdots p_{\ell}}\right]}{n}\leq\frac{1}{p_{1}\cdots p_{\ell}},
\end{equation}
where $[x]$ denotes the largest integer less than or equal to $x$, and we used the simple fact that $[x]\leq x$
for any positive $x$.
As for the lower bound for the denominator, we have
\begin{align}
\tilde{\mathbb{P}}\left(X_{1}\in E_{p_{1},\ldots,p_{\ell}}\right)
&=\prod_{q\in\{p_{1},\ldots,p_{\ell}\}}\frac{1}{q}\prod_{q\in S(k_{1},k_{2})\backslash\{p_{1},\ldots,p_{\ell}\}}
\left(1-\frac{1}{q}\right)\label{UpperVI}
\\
&\geq\prod_{q\in\{p_{1},\ldots,p_{\ell}\}}\frac{1}{q}\prod_{q\in S(k_{1},k_{2})}\left(1-\frac{1}{q}\right)
\nonumber
\\
&\geq\prod_{q\in\{p_{1},\ldots,p_{\ell}\}}\frac{1}{q}e^{-2\sum_{q\in S(k_{1},k_{2})}\frac{1}{q}},
\nonumber
\end{align}
where we used the inequality that $1-x\geq e^{-2x}$ for $x\leq\frac{1}{2}$. Notice that
\begin{equation}
\lim_{n\rightarrow\infty}\left\{\sum_{q\in S(1,n)}\frac{1}{q}-\log\log n\right\}=M,
\end{equation}
where $M=0.261497\ldots$ is the Meissel-Mertens constant.
Therefore, for sufficiently large $k_{2}$,
\begin{equation}\label{UpperVII}
\sum_{q\in S(k_{1},k_{2})}\frac{1}{q}\leq\sum_{q\in S(1,k_{2})}\frac{1}{q}\leq 2\log\log k_{2}.
\end{equation}
Combining \eqref{UpperV}, \eqref{UpperVI} and \eqref{UpperVII}, we have proved the upper bound in \eqref{UpperIV}.
\end{proof}
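The bound \eqref{UpperVII} rests on Mertens' estimate $\sum_{q\leq k_{2}}\frac{1}{q}=\log\log k_{2}+M+o(1)$; a quick numeric check (illustrative only, with arbitrary cutoffs) confirms that the sum stays below $2\log\log k_{2}$ well before the asymptotic regime.

```python
import math

def sieve(n):
    """Primes <= n by the sieve of Eratosthenes."""
    is_p = [True] * (n + 1)
    is_p[0] = is_p[1] = False
    for i in range(2, int(n**0.5) + 1):
        if is_p[i]:
            is_p[i * i::i] = [False] * len(is_p[i * i::i])
    return [i for i in range(n + 1) if is_p[i]]

for k2 in (10**3, 10**4, 10**5):
    s = sum(1.0 / p for p in sieve(k2))
    # bound (UpperVII): sum_{q <= k2} 1/q <= 2 log log k2, consistent with
    # Mertens: sum ~ log log k2 + M, with M ~ 0.2615
    assert s <= 2 * math.log(math.log(k2))
    print(k2, round(s, 4), round(math.log(math.log(k2)) + 0.2615, 4))
```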
\begin{lemma}\label{CRTLemma}
Let $p_{j}$, $1\leq j\leq\ell$, $\ell\in\mathbb{N}$ be the primes such that $S(k_{1},k_{2})=\{p_{1},\ldots,p_{\ell}\}$ and
\begin{equation}
m\prod_{1\leq j\leq\ell}p_{j}\leq n<(m+1)\prod_{1\leq j\leq\ell}p_{j},
\end{equation}
where $m\in\mathbb{N}$. Then, there exists a coupling of the vectors of random variables $(X_{i})_{i=1}^{n}$ and $(\tilde{X}_{i})_{i=1}^{n}$,
i.e., a measure $\mu$ whose marginal distributions coincide with those of the $X_{i}$
and the $\tilde{X}_{i}$, such that
\begin{equation}\label{UpperVIII}
\mu\left(\sum_{q\in S(k_{1},k_{2})}Y_{q}^{2}
-\sum_{q\in S(k_{1},k_{2})}\tilde{Y}_{q}^{2}\geq n^{2}\epsilon\right)
\leq 2^{n}\left(\frac{1}{m}\right)^{\frac{n\epsilon}{2k_{2}}}.
\end{equation}
\end{lemma}
\begin{proof}
The main ingredient of the proof is the Chinese Remainder Theorem which states that the set of equations
\begin{equation}\label{CRT}
\begin{cases}
x\equiv a_{1} &\text{mod}(p_{1})
\\
\quad\vdots
\\
x\equiv a_{\ell} &\text{mod}(p_{\ell})
\end{cases}
\end{equation}
has a unique solution $1\leq x\leq p_{1}\cdots p_{\ell}$, where $0\leq a_{i}<p_{i}$, $i\in\{1,2,\ldots,\ell\}$.
Hence, for each sequence of $a_{i}$'s, the set of equations in \eqref{CRT} has exactly $m$ solutions
for $1\leq x\leq mp_{1}\cdots p_{\ell}$. We denote these solutions by $R_{i}(a_{1},\ldots,a_{\ell})$ for $i\in\{1,2,\ldots,m\}$.
Given $X_{i}$ uniformly distributed on $\{1,2,\ldots,n\}$,
we define $\tilde{X}_{i}$ as follows. We generate Bernoulli random variables $c_{j}$ for $1\leq j\leq\ell$,
with parameters $\frac{1}{p_{j}}$ and independent of each other. Now, define
\begin{equation}
\tilde{X}_{i}=
\begin{cases}
p_{1}^{c_{1}}\cdots p_{\ell}^{c_{\ell}} &\text{if $X_{i}>mp_{1}\cdots p_{\ell}$}
\\
p_{1}^{b_{1}}\cdots p_{\ell}^{b_{\ell}} &\text{otherwise}
\end{cases},
\end{equation}
where $b_{j}$ is $1$ if $X_{i}$ is divisible by $p_{j}$ and $0$ otherwise for $1\leq j\leq\ell$.
By definition, conditional on $X_{i}>mp_{1}\cdots p_{\ell}$, $\tilde{X}_{i}$ is the product
of the $p_{j}^{c_{j}}$ with independent $c_{j}$'s. Now, condition on $X_{i}\leq mp_{1}\cdots p_{\ell}$ and
let $\text{Prime}(X_{i})=\{p\in\mathcal{P}:\text{$X_{i}$ is divisible by $p$}\}$.
Thus, for a vector $\overrightarrow{b}=(b_{j})_{j=1}^{\ell}\in\{0,1\}^{\ell}$, we have
\begin{align}
\Delta &:=\mu\left(\tilde{X}_{i}=\prod_{j=1}^{\ell}p_{j}^{b_{j}}|X_{i}\leq mp_{1}\cdots p_{\ell}\right)
\\
&=\mu\left(\text{Prime}(X_{i})\cap\{p_{1},\ldots,p_{\ell}\}=S(\overrightarrow{b})\right),
\nonumber
\end{align}
where $S(\overrightarrow{b}):=\{p_{j}|b_{j}=1,1\leq j\leq\ell\}$. But that is equivalent to
\begin{align}
\Delta &=\frac{\#\{R_{i}(a_{1},\ldots,a_{\ell})\,|\,a_{j}=0 \text{ if and only if } b_{j}=1,\,1\leq i\leq m\}}{mp_{1}\cdots p_{\ell}}
\\
&=\frac{m\prod_{j:b_{j}=0}(p_{j}-1)}{mp_{1}\cdots p_{\ell}}\nonumber
\\
&=\prod_{j:b_{j}=1}\frac{1}{p_{j}}\prod_{j:b_{j}=0}\left(1-\frac{1}{p_{j}}\right).\nonumber
\end{align}
Therefore, we get
\begin{equation}
\mu\left(\tilde{X}_{i}=\prod_{j=1}^{\ell}p_{j}^{b_{j}}\right)
=\prod_{j:b_{j}=1}\frac{1}{p_{j}}\prod_{j:b_{j}=0}\left(1-\frac{1}{p_{j}}\right).
\end{equation}
Let us define
\begin{equation}
g(X_{i},\tilde{X}_{i}):=
\begin{cases}
1 &\text{if $\{\text{Prime}(X_{i})\cap S(k_{1},k_{2})\}\neq\{\text{Prime}(\tilde{X}_{i})\cap S(k_{1},k_{2})\}$}
\\
0 &\text{otherwise}
\end{cases}.
\end{equation}
By the definition of the coupling of $\overrightarrow{X}$ and $\overrightarrow{\tilde{X}}$, we have
$\mu(g(X_{i},\tilde{X}_{i})=1)\leq\frac{1}{m}$, since the event $g(X_{i},\tilde{X}_{i})=1$ implies
that $X_{i}>mp_{1}\cdots p_{\ell}$, which occurs with probability
\begin{equation}
\frac{n-mp_{1}p_{2}\cdots p_{\ell}}{n}\leq 1-\frac{mp_{1}p_{2}\cdots p_{\ell}}{(m+1)p_{1}\cdots p_{\ell}}
=\frac{1}{m+1}<\frac{1}{m}.
\end{equation}
Now, let us go back to prove the superexponential bound in \eqref{UpperVIII}.
Observe that
\begin{equation}
f(X_{1},\ldots,X_{n}):=\sum_{q\in S(k_{1},k_{2})}Y_{q}^{2}
=\sum_{q\in S(k_{1},k_{2})}Y_{q}
+\sum_{q\in S(k_{1},k_{2})}\sum_{i\neq j}1_{q|\text{gcd}(X_{i},X_{j})}.
\end{equation}
Hence,
\begin{equation}
\left|\sum_{q\in S(k_{1},k_{2})}\left(Y_{q}^{2}-\tilde{Y}_{q}^{2}\right)\right|\leq\#\{i|g(X_{i},\tilde{X}_{i})=1\}\,2k_{2}n.
\end{equation}
That is because changing one of the $X_{i}$'s changes each $Y_{q}^{2}$ by at most $2n$, and at most $k_{2}$ primes $q\in S(k_{1},k_{2})$ are affected, so $f(X_{1},\ldots,X_{n})$ changes by at most $2k_{2}n$.
Therefore,
\begin{align}
&\mu\left(\sum_{q\in S(k_{1},k_{2})}Y_{q}^{2}-\tilde{Y}_{q}^{2}\geq n^{2}\epsilon\right)
\\
&\leq\mu\left(2k_{2}n\#\{i|g(X_{i},\tilde{X}_{i})=1\}\geq n^{2}\epsilon\right)\nonumber
\\
&=\mu\left(\#\{i|g(X_{i},\tilde{X}_{i})=1\}\geq n\frac{\epsilon}{2k_{2}}\right).\nonumber
\end{align}
Notice that $\#\{i|g(X_{i},\tilde{X}_{i})=1\}=\sum_{i=1}^{n}1_{g(X_{i},\tilde{X}_{i})=1}$ is the sum of i.i.d. indicator functions and $\mu(g(X_{1},\tilde{X}_{1})=1)\leq\frac{1}{m}$.
Hence, by the exponential Chebyshev inequality with $\theta=\log m>0$, we have
\begin{align}
&\mu\left(\#\{i|g(X_{i},\tilde{X}_{i})=1\}\geq n\frac{\epsilon}{2k_{2}}\right)
\\
&\leq\mathbb{E}\left[e^{\theta 1_{g(X_{1},\tilde{X}_{1})=1}}\right]^{n}e^{-\theta n\frac{\epsilon}{2k_{2}}}
\nonumber
\\
&\leq\left(\frac{e^{\theta}}{m}+1\right)^{n}e^{-\theta n\frac{\epsilon}{2k_{2}}}
\nonumber
\\
&\leq2^{n}e^{-(\log m)n\frac{\epsilon}{2k_{2}}}
\nonumber
\end{align}
which yields the desired result.
\end{proof}
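The counting step behind the coupling, namely that each residue tuple $(a_{1},\ldots,a_{\ell})$ is attained by exactly $m$ values of $x$ in $\{1,\ldots,m\,p_{1}\cdots p_{\ell}\}$, can be verified directly; the primes and $m$ below are small illustrative choices, not the ones used in the proof.

```python
from collections import Counter

primes = [3, 5, 7]    # a small illustrative stand-in for S(k1, k2)
m, P = 4, 3 * 5 * 7   # P = p_1 * ... * p_l = 105
counts = Counter(tuple(x % p for p in primes) for x in range(1, m * P + 1))
# every residue tuple (a_1, ..., a_l) occurs for exactly m values of x
assert len(counts) == P
assert all(c == m for c in counts.values())
print(len(counts), m)
```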
\begin{theorem}\label{SuperII}
For any $\epsilon>0$, we have the following superexponential estimates,
\begin{equation}
\limsup_{k\rightarrow\infty}\limsup_{n\rightarrow\infty}\frac{1}{n}
\log\mathbb{P}\left(\sum_{q\in S(k,n)}Y_{q}^{2}>n^{2}\epsilon\right)=-\infty.
\end{equation}
\end{theorem}
\begin{proof}
Let us write
\begin{equation}\label{ThreeTerms}
\sum_{q\in S(k,n)}Y_{q}^{2}
=\sum_{q\in S(k,M_{1})}Y_{q}^{2}
+\sum_{q\in S(M_{1},M_{2})}Y_{q}^{2}
+\sum_{q\in S(M_{2},n)}Y_{q}^{2},
\end{equation}
where $M_{1}:=[\log\log n]^{\frac{120}{\epsilon}}$ and $M_{2}:=[\log n]^{\frac{120}{\epsilon}}$.
By Lemma \ref{K1K2}, for the second and third terms in \eqref{ThreeTerms}, we have
\begin{align}
&\frac{1}{n}\log\mathbb{P}\left(\sum_{q\in S(M_{1},M_{2})}Y_{q}^{2}>\frac{n^{2}\epsilon}{3}\right)\label{UpperIX}
\\
&\leq 4\log\log M_{2}+4-\frac{\log M_{1}}{8}\frac{\epsilon}{3}\nonumber
\\
&=4\log(\log([\log n]^{\frac{120}{\epsilon}}))+4-\frac{\epsilon}{24}\log([\log(\log(n))]^{\frac{120}{\epsilon}})
\nonumber
\\
&=4\log\left(\frac{120}{\epsilon}\right)+4\log\log\log n+4-5\log\log\log n
\nonumber
\\
&=4\log\left(\frac{120}{\epsilon}\right)+4-\log\log\log n,\nonumber
\end{align}
and similarly,
\begin{equation}\label{UpperX}
\frac{1}{n}\log\mathbb{P}\left(\sum_{q\in S(M_{2},n)}Y_{q}^{2}>\frac{n^{2}\epsilon}{3}\right)
\leq-\log(\log n)+4.
\end{equation}
In addition, for the first term in \eqref{ThreeTerms}, by Lemma \ref{CRTLemma}, we get
\begin{equation}\label{UpperXI}
\frac{1}{n}\log\mu\left(\left|\sum_{q\in S(k,M_{1})}Y_{q}^{2}-\sum_{q\in S(k,M_{1})}\tilde{Y}_{q}^{2}\right|>\frac{n^{2}\epsilon}{6}\right)
\leq\log 2-\frac{\epsilon}{12M_{1}}\log M_{0},
\end{equation}
where
\begin{align}
M_{0}&:=\frac{n}{\prod_{q\in S(k,M_{1})}q}\label{UpperXII}
\\
&\geq\frac{n}{M_{1}^{M_{1}}}\nonumber
\\
&=\exp\left\{\log(n)-\frac{120}{\epsilon}(\log\log n)^{\frac{120}{\epsilon}}\log\log\log n\right\}.\nonumber
\end{align}
By Theorem \ref{SuperI},
\begin{equation}\label{UpperXIII}
\frac{1}{n}\log\tilde{\mathbb{P}}\left(\sum_{p\in S(k,M_{1})}Y_{p}^{2}\geq\frac{n^{2}\epsilon}{6}\right)
\leq-\frac{\epsilon}{48}\log(k)+4.
\end{equation}
Combining \eqref{UpperIX}, \eqref{UpperX}, \eqref{UpperXI}, \eqref{UpperXII} and \eqref{UpperXIII}, we get the desired result.
\end{proof}
Finally, we are ready to prove Theorem \ref{LDPThm}.
\begin{proof}[Proof of Theorem \ref{LDPThm}]
We let $U_{i}$, for $1\leq i\leq n$, be i.i.d. random variables chosen from measure $\nu$ as in \eqref{my_nu}. In addition, we define $U_{i}^{k}$, for $k\in \mathbb{N}$, as the restriction of $U_{i}$ to its first $k$ digits, i.e.
\begin{equation}
\chi_{j}(U_{i}^{k}) =
\begin{cases}
\chi_{j}(U_{i}) &\text{if $j\leq k$}
\\
0 &\text{if $j>k$}
\end{cases},
\end{equation}
where $\chi$ is defined in \eqref{chi}.
Let $L_{n}$, $L_{n}^{k}$ be the empirical measures of $U_{i}$, $U_{i}^{k}$, i.e.
\begin{equation}
L_{n}(x):=\frac{1}{n}\sum_{i=1}^{n}\delta_{U_{i}}(x),
\end{equation}
and
\begin{equation}
L_{n}^{k}(x):=\frac{1}{n}\sum_{i=1}^{n}\delta_{U_{i}^{k}}(x).
\end{equation}
In large deviations theory, Sanov's theorem (see e.g. Dembo and Zeitouni \cite{Dembo}) says that,
for a sequence of i.i.d. random variables $X_{1},X_{2},\ldots,X_{n}$ taking values in a Polish space $\mathbb{X}$
with common distribution $\alpha\in\mathcal{M}(\mathbb{X})$, the space of probability measures on $\mathbb{X}$
equipped with the weak topology, the probability measures $\mathbb{P}(\frac{1}{n}\sum_{i=1}^{n}\delta_{X_{i}}\in\cdot)$
induced by the empirical measures $\frac{1}{n}\sum_{i=1}^{n}\delta_{X_{i}}$ satisfy
a large deviation principle with rate function $I(\beta)$ given by
\begin{equation}
I(\beta)=\int_{\mathbb{X}}\frac{d\beta}{d\alpha}\log\frac{d\beta}{d\alpha}\alpha(dx),
\end{equation}
if $\beta\ll\alpha$ and $\frac{d\beta}{d\alpha}|\log\frac{d\beta}{d\alpha}|\in L^{1}(\alpha)$ and $I(\beta)=+\infty$ otherwise.
Therefore, by Sanov's theorem,
$\mathbb{P}(L_{n}\in\cdot)$ satisfies a large deviation
principle on $\mathcal{M}[0,1]$, the space of probability measures on $[0,1]$, equipped with the weak topology
and the rate function
\begin{equation}
I(\mu)=
\begin{cases}
\int_{[0,1]}\log\left(\frac{d\mu}{d\nu}\right)d\mu &\text{if $\mu\ll\nu$ and $|\log\frac{d\mu}{d\nu}|\in L^{1}(\mu)$}
\\
+\infty &\text{otherwise}
\end{cases}.
\end{equation}
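As a concrete illustration of Sanov's theorem in the simplest (Bernoulli) setting, not used in the proof, the exponential decay rate of a binomial tail can be compared against the corresponding relative entropy; the tolerance below is a rough allowance for the $O(\log n/n)$ correction at finite $n$.

```python
import math

def kl(a, b):
    """Relative entropy of Bernoulli(a) with respect to Bernoulli(b)."""
    return a * math.log(a / b) + (1 - a) * math.log((1 - a) / (1 - b))

n, threshold = 500, 350   # threshold = 0.7 * n
# exact tail P(S_n >= 0.7 n) for S_n ~ Binomial(n, 1/2), via big-integer sums
tail = sum(math.comb(n, i) for i in range(threshold, n + 1)) / 2**n
rate = -math.log(tail) / n
# Sanov's theorem predicts rate -> KL(0.7 || 0.5) as n -> infinity
assert abs(rate - kl(0.7, 0.5)) < 0.03
print(round(rate, 4), round(kl(0.7, 0.5), 4))
```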
We define $f_{k}:[0,1]^{2}\rightarrow\{0,1\}$, for $k\in\mathbb{N}$, and redefine $f$ from \eqref{my_rate} as follows
\begin{equation}
f_{k}(x,y)=1-\max_{1\leq i\leq k}\chi_{i}(x)\chi_{i}(y),
\quad
\text{and}
\quad
f(x,y):=1-\max_{i\in\mathbb{N}}\chi_{i}(x)\chi_{i}(y).
\end{equation}
In other words, $f$ is $1$ if $x$ and $y$ do not share a common $1$ at the same place in their binary expansions
and $f$ is $0$ otherwise. A similar interpretation holds for $f_{k}$. Clearly, $f_{k}\geq f$ and
$\lim_{k\rightarrow\infty}f_{k}(x,y)=f(x,y)$. Again, let $\nu$ be the probability measure on $[0,1]$ such
that for a random variable $x$ with measure $\nu$, $\chi_{i}(x)$ are i.i.d. Bernoulli random variables
with parameters $\frac{1}{p_{i}}$, where $p_{i}$ is the $i$th smallest prime number.
Let $A^{k}:=\{\alpha\in[0,1]\,|\,\chi_{i}(\alpha)=0\text{ for }i>k\}$ be the set
of numbers in $[0,1]$ with $k$-digit binary expansion. We define
\begin{equation}
A_{\alpha}:=\{x\in[0,1]|\chi_{i}(\alpha)=\chi_{i}(x), 1\leq i\leq k\}.
\end{equation}
Let $F_{k}(\mu):=\iint_{[0,1]^{2}}f_{k}(x,y)d\mu(x)d\mu(y)$ and $F(\mu):=\iint_{[0,1]^{2}}f(x,y)d\mu(x)d\mu(y)$.
We have
\begin{equation}
F_{k}(\mu)=\sum_{\alpha,\beta\in A^{k}}f_{k}(\alpha,\beta)\mu(A_{\alpha})\mu(A_{\beta}).
\end{equation}
Hence the map $\mu\mapsto F_{k}(\mu)$ is continuous, i.e. for $\mu_{n}\rightarrow\mu$ in the weak topology,
$F_{k}(\mu_{n})\rightarrow F_{k}(\mu)$.
In large deviations theory, the contraction principle (see e.g. Dembo and Zeitouni \cite{Dembo}) says that
if $\mathbb{P}_{n}$ satisfies a large deviation principle on a Polish space $\mathbb{X}$ with rate function $I(\cdot)$
and $F$ is a continuous mapping from $\mathbb{X}$ to another Polish space $\mathbb{Y}$, then $\mathbb{P}_{n}F^{-1}$
satisfies a large deviation principle on $\mathbb{Y}$ with a rate function $J(\cdot)$ given by $J(y)=\inf_{x:F(x)=y}I(x)$.
Therefore, by the contraction principle, $\mathbb{P}(L_{n}\circ F_{k}^{-1}\in\cdot)$
satisfies a large deviation principle with good rate function
\begin{equation}
I^{(k)}(t)=\inf\left\{\int_{[0,1]}\log\left(\frac{d\mu}{d\nu}\right)d\mu\,:\,\iint_{[0,1]^{2}}f_{k}(x,y)d\mu(x)d\mu(y)=t\right\}.
\end{equation}
Moreover, in Theorem \ref{SuperI}, we proved that
\begin{equation}
\limsup_{k\rightarrow\infty}\limsup_{n\rightarrow\infty}
\frac{1}{n}\log\mathbb{P}\left(\iint_{[0,1]^{2}}(f_{k}-f)dL_{n}(x)dL_{n}(y)\geq\delta\right)=-\infty,
\end{equation}
for any $\delta>0$. In other words,
the family $\{L_{n}\circ F_{k}^{-1}\}$ forms an exponentially good approximation of $\{L_{n}\circ F^{-1}\}$;
see Definition 4.2.14 in Dembo and Zeitouni \cite{Dembo}.
Now, by Theorem 4.2.16 in Dembo and Zeitouni \cite{Dembo}, $\mathbb{P}(L_{n}\circ F^{-1}\in\cdot)$ satisfies a weak large deviation principle (for the definition of weak large deviation principle, we refer to page 7 of Dembo and Zeitouni \cite{Dembo})
with the rate function
\begin{equation}
I_{1}(x)=\sup_{\delta>0}\liminf_{k\rightarrow\infty}\inf_{|w-x|<\delta}I^{(k)}(w).
\end{equation}
Since the interval $[0,1]$ is compact, $\mathbb{P}(L_{n}\circ F^{-1}\in\cdot)$ satisfies the full large deviation principle with
good rate function $I_{1}(x)$ as above and it is easy to check that
\begin{equation}
I_{1}(t)=\inf\left\{\int_{[0,1]}\log\left(\frac{d\mu}{d\nu}\right)d\mu\,:\,\iint_{[0,1]^{2}}f(x,y)\mu(dx)\mu(dy)=t\right\}.
\end{equation}
For any $p\in\mathcal{P}$, let us recall that $Y_{p}=\sum_{i=1}^{n}1_{p|X_{i}}$ and for any $k_{1},k_{2}\in\mathbb{N}$,
$S(k_{1},k_{2})=\{p\in\mathcal{P}:k_{1}<p\leq k_{2}\}$.
By Theorem \ref{SuperI}, we have
\begin{align}
&\limsup_{k\rightarrow\infty}\limsup_{n\rightarrow\infty}\frac{1}{n}\log
\tilde{\mathbb{P}}\left(\frac{1}{n^{2}}
\sum_{1\leq i,j\leq n}1_{\text{gcd}(X_{i}^{k},X_{j}^{k})=1}-1_{\text{gcd}(X_{i},X_{j})=1}\geq\epsilon\right)
\\
&=\limsup_{k\rightarrow\infty}\limsup_{n\rightarrow\infty}\frac{1}{n}\log
\tilde{\mathbb{P}}\left(\frac{1}{n^{2}}
\sum_{1\leq i,j\leq n}\sum_{p\in S(k,n)}1_{p|X_{i},p|X_{j}}\geq\epsilon\right)
\nonumber
\\
&\leq\limsup_{k\rightarrow\infty}\limsup_{n\rightarrow\infty}\frac{1}{n}\log
\tilde{\mathbb{P}}\left(\frac{1}{n^{2}}
\sum_{p\in S(k,n)}Y_{p}^{2}\geq\epsilon\right)
\nonumber
\\
&=-\infty.\nonumber
\end{align}
Next, notice that the difference between $(\frac{1}{n^{2}}\sum_{1\leq i,j\leq n}1_{\text{gcd}(X_{i}^{k},X_{j}^{k})=1}\in\cdot)$
under $\mathbb{P}$ and $\tilde{\mathbb{P}}$ is superexponentially small by Lemma \ref{CRTLemma}.
Finally, by Theorem \ref{SuperII},
\begin{equation}
\limsup_{k\rightarrow\infty}\limsup_{n\rightarrow\infty}\frac{1}{n}\log
\mathbb{P}\left(\frac{1}{n^{2}}
\sum_{1\leq i,j\leq n}1_{\text{gcd}(X_{i}^{k},X_{j}^{k})=1}-1_{\text{gcd}(X_{i},X_{j})=1}\geq\epsilon\right)
=-\infty.
\end{equation}
This implies that, for any Borel set $A\subseteq[0,1]$,
\begin{align}
-\inf_{x\in A^{o}}I_{1}(x)&\leq\liminf_{n\rightarrow\infty}\frac{1}{n}\log
\mathbb{P}\left(\frac{1}{n^{2}}\sum_{1\leq i,j\leq n}1_{\text{gcd}(X_{i},X_{j})=1}\in A\right)
\\
&\leq\limsup_{n\rightarrow\infty}\frac{1}{n}\log
\mathbb{P}\left(\frac{1}{n^{2}}\sum_{1\leq i,j\leq n}1_{\text{gcd}(X_{i},X_{j})=1}\in A\right)
\leq-\inf_{x\in\overline{A}}I_{1}(x).
\nonumber
\end{align}
\end{proof}
\section{Proofs of Central Limit Theorem}\label{CLTProofs}
\begin{proof}[Proof of Theorem \ref{CLTThm}]
Here, we prove our result for $\ell=1$; the proof for $\ell>1$ is analogous and is omitted.
Instead of summing over $1\leq i,j\leq n$, we only need to consider $1\leq i\neq j\leq n$.
This is because if $i=j$, then $\text{gcd}(X_{i},X_{i})=1$ if and only if $X_{i}=1$,
which occurs with probability $\frac{1}{n}$, and therefore $\frac{1}{2n^{3/2}}\sum_{i=1}^{n}1_{\text{gcd}(X_{i},X_{i})=1}$
is negligible in the limit as $n\rightarrow\infty$. Moreover,
\begin{equation}
\sum_{1\leq i\neq j\leq n}1_{\text{gcd}(X_{i},X_{j})=1}
=2\sum_{1\leq i<j\leq n}1_{\text{gcd}(X_{i},X_{j})=1},
\end{equation}
and we can therefore concentrate on $1\leq i<j\leq n$.
Let us define $a_{ij}:=1_{\text{gcd}(X_{i},X_{j})=1}$ for $1\leq i<j\leq n$.
The $a_{ij}$ are identically distributed; let $\alpha_{n}$ denote the mean of $a_{12}$.
Then, we have
\begin{equation}
\alpha_{n}=\mathbb{E}[a_{12}]=\mathbb{P}(\text{gcd}(X_{1},X_{2})=1)
\rightarrow\prod_{p\in\mathcal{P}}\left(1-\frac{1}{p^{2}}\right),
\end{equation}
as $n\rightarrow\infty$.
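The limit $\prod_{p}(1-p^{-2})=6/\pi^{2}\approx 0.6079$ can be observed by exact counting for a moderate $n$; this is a quick sanity check, not a proof, and the choice $n=500$ is arbitrary.

```python
import math

n = 500
coprime = sum(1 for a in range(1, n + 1) for b in range(1, n + 1)
              if math.gcd(a, b) == 1)
alpha_n = coprime / n**2   # = P(gcd(X_1, X_2) = 1) for X_i uniform on {1,...,n}
# the known error in the coprime-pair count is O(log n / n), well within 0.01 here
assert abs(alpha_n - 6 / math.pi**2) < 0.01
print(round(alpha_n, 4), round(6 / math.pi**2, 4))
```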
Define $\tilde{a}_{ij}:=a_{ij}-\alpha_{n}$ and $W=\sum_{(i,j)\in I}\tilde{a}_{ij}$,
where the sum is taken over the set $I$ that is all the pairs of $i,j\in[n]$ and $i<j$.
Therefore,
\begin{equation}
\sigma_{n}^{2}:=\text{Var}(W)=\mathbb{E}\left[\left(\sum_{(i,j)\in I}\tilde{a}_{ij}\right)^{2}\right]
=\sum_{(i,j)\in I}\sum_{(k,\ell)\in I}\mathbb{E}[\tilde{a}_{ij}\tilde{a}_{k\ell}].
\end{equation}
Note that if the intersection of $\{i,j\}$ and $\{k,\ell\}$ is empty, then
$\tilde{a}_{ij}$ and $\tilde{a}_{k\ell}$ are independent and
$\mathbb{E}[\tilde{a}_{ij}\tilde{a}_{k\ell}]=\mathbb{E}[\tilde{a}_{ij}]
\mathbb{E}[\tilde{a}_{k\ell}]=0$. The remaining two cases are either
$\{i,j\}=\{k,\ell\}$ or $|\{i,j\}\cap\{k,\ell\}|=1$.
For the former, we have
\begin{equation}
\mathbb{E}[\tilde{a}_{ij}\tilde{a}_{ij}]=\mathbb{E}[a_{ij}^{2}]-\alpha_{n}^{2}
=\mathbb{E}[a_{ij}]-\alpha_{n}^{2}=\alpha_{n}-\alpha_{n}^{2}.
\end{equation}
For the latter, assuming without loss of generality that $i=k$ and $j\neq\ell$,
we get
\begin{align}
\mathbb{E}\left[\tilde{a}_{ij}\tilde{a}_{k\ell}\right]
&=\mathbb{E}\left[a_{ij}a_{i\ell}\right]-\alpha_{n}^{2}
\\
&=\mathbb{P}\left(\text{gcd}(X_{i},X_{j})=\text{gcd}(X_{i},X_{\ell})=1\right)-\alpha_{n}^{2}
\nonumber
\\
&=\beta_{n}-\alpha_{n}^{2},\nonumber
\end{align}
where $\beta_{n}:=\mathbb{P}\left(\text{gcd}(X_{i},X_{j})=\text{gcd}(X_{i},X_{\ell})=1\right)$.
Let $\tilde{X}_{1}$, $\tilde{X}_{2}$, $\tilde{X}_{3}$ be three i.i.d. integer-valued random
variables such that, independently over $p\in\mathcal{P}$, each $\tilde{X}_{i}$ is divisible by $p$ with probability $\frac{1}{p}$. Then, by the inclusion-exclusion principle,
\begin{equation}
\mathbb{P}(\text{gcd}(\tilde{X}_{1},\tilde{X}_{2})=\text{gcd}(\tilde{X}_{1},\tilde{X}_{3})=1)
=\prod_{p\in\mathcal{P}}\left(1-\frac{2}{p^{2}}+\frac{1}{p^{3}}\right).
\end{equation}
It is easy to see that
\begin{equation}
\beta_{n}
\rightarrow\prod_{p\in\mathcal{P}}\left(1-\frac{2}{p^{2}}+\frac{1}{p^{3}}\right),
\end{equation}
as $n\rightarrow\infty$.
We have $\binom{n}{2}$ pairs with $\{i,j\}=\{k,\ell\}$
and $3\times 2\times\binom{n}{3}$ pairs with $|\{i,j\}\cap\{k,\ell\}|=1$.
(We pick three numbers from $1$ to $n$; then we pick one of them, say $i$, to be
the shared index; finally, the two resulting pairs can appear in either order, as $(i,j)(i,k)$
and $(i,k)(i,j)$.) Thus
\begin{equation}\label{sigmasquare}
\sigma_{n}^{2}=\binom{n}{2}(\alpha_{n}-\alpha_{n}^{2})+3\cdot 2\cdot\binom{n}{3}(\beta_{n}-\alpha_{n}^{2}),
\end{equation}
and we have
\begin{equation}
\frac{\sigma_{n}^{2}}{n^{3}}\rightarrow\prod_{p\in\mathcal{P}}
\left(1-\frac{2}{p^{2}}+\frac{1}{p^{3}}\right)-\frac{36}{\pi^{4}},
\end{equation}
as $n\rightarrow\infty$.
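The variance formula \eqref{sigmasquare} and the positivity of the limiting variance can be checked numerically: below, $\alpha_{n}$ and $\beta_{n}$ are computed by exact enumeration for a small illustrative $n$, and the formula is compared against a seeded Monte Carlo estimate (a sketch with arbitrary parameters, not part of the proof).

```python
import math, random

n = 30
rng = random.Random(0)   # fixed seed for reproducibility
# exact alpha_n and beta_n by enumeration over {1,...,n}
alpha = sum(math.gcd(a, b) == 1
            for a in range(1, n + 1) for b in range(1, n + 1)) / n**2
beta = sum(math.gcd(a, b) == 1 and math.gcd(a, c) == 1
           for a in range(1, n + 1) for b in range(1, n + 1)
           for c in range(1, n + 1)) / n**3
# positivity of the limiting variance: beta > alpha^2 (prime by prime)
assert beta > alpha**2
# the variance formula (sigmasquare)
sigma2 = math.comb(n, 2) * (alpha - alpha**2) + 6 * math.comb(n, 3) * (beta - alpha**2)

def W():
    x = [rng.randint(1, n) for _ in range(n)]
    s = sum(math.gcd(x[i], x[j]) == 1
            for i in range(n) for j in range(i + 1, n))
    return s - math.comb(n, 2) * alpha   # centered, so E[W] = 0

trials = 5000
mc_var = sum(W()**2 for _ in range(trials)) / trials  # Monte Carlo Var(W)
assert abs(mc_var / sigma2 - 1) < 0.15
print(round(sigma2, 1), round(mc_var, 1))
```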
Now, our goal is to use a general theorem for random variables with a dependency graph
to prove that $W/\sigma_{n}=\frac{1}{\sigma_{n}}\sum_{(i,j)\in I}\tilde{a}_{ij}$
converges to a standard normal random variable.
We have a collection of dependent random variables $(\tilde{a}_{ij})_{(i,j)\in I}$.
We say $\tilde{a}_{ij}$ and $\tilde{a}_{k\ell}$ are neighbors if they
are dependent, i.e. $\{i,j\}\cap\{k,\ell\}\neq\emptyset$.
Let $N(i,j)=\{\text{neighbors of }(i,j)\}\cup\{(i,j)\}$.
Hence, $N(i,j)$ has $D=2n-3$ elements. In addition, let $Z$
be a standard normal random variable.
By a result of Baldi et al. \cite{Baldi}, we have
\begin{align}
d_{KS}(W/\sigma_{n},Z)&=\sup_{t\in\mathbb{R}}|\mathbb{P}(W/\sigma_{n}\leq t)-\mathbb{P}(Z\leq t)|
\\
&\leq\frac{D^{2}}{\sigma_{n}^{3}}\sum_{(i,j)\in I}\mathbb{E}|\tilde{a}_{ij}|^{3}
+\sqrt{\frac{2}{\pi}}\frac{D^{3/2}}{\sigma_{n}^{2}}
\sqrt{\sum_{(i,j)\in I}\mathbb{E}|\tilde{a}_{ij}|^{4}}.\nonumber
\end{align}
Note that $\tilde{a}_{ij}$ is bounded by $1$. Thus, using \eqref{sigmasquare}, we have
\begin{align}
d_{KS}(W/\sigma_{n},Z)&\leq\frac{D^{2}}{\sigma_{n}^{3}}\binom{n}{2}
+\sqrt{\frac{2}{\pi}}\frac{D^{3/2}}{\sigma_{n}^{2}}\sqrt{\binom{n}{2}}
\\
&\leq\frac{(2n)^{2}\cdot n^{2}}{\sigma_{n}^{3}}+5\frac{(2n)^{3/2}n}{\sigma_{n}^{2}}
\nonumber
\\
&\leq\frac{C}{n^{1/2}},\nonumber
\end{align}
where $C$ is a universal constant.
\end{proof}
\section*{Acknowledgements}
The authors are very grateful to Professor S. R. S. Varadhan for helpful discussions and generous suggestions.
| {
"timestamp": "2014-01-16T02:14:36",
"yymm": "1310",
"arxiv_id": "1310.7260",
"language": "en",
"url": "https://arxiv.org/abs/1310.7260",
"abstract": "The law of large numbers for the empirical density for the pairs of uniformly distributed integers with a given greatest common divisor is a classic result in number theory. In this paper, we study the large deviations of the empirical density. We will also obtain a rate of convergence to the normal distribution for the central limit theorem. Some generalizations are provided.",
"subjects": "Probability (math.PR); Number Theory (math.NT)",
"title": "Limit Theorems for Empirical Density of Greatest Common Divisors",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9820137926698567,
"lm_q2_score": 0.8198933293122507,
"lm_q1q2_score": 0.805146557902639
} |
% https://arxiv.org/abs/1309.0243
\title{Local fractal functions and function spaces}
\begin{abstract}
We introduce local iterated function systems and present some of their basic properties. A new class of local attractors of local iterated function systems, namely local fractal functions, is constructed. We derive formulas so that these local fractal functions become elements of various function spaces, such as the Lebesgue spaces $L^p$, the smoothness spaces $C^n$, the homogeneous H\"older spaces $\dot{C}^s$, and the Sobolev spaces $W^{m,p}$.
\end{abstract}
\section{Introduction}\label{sec1}
Iterated function systems, for short IFSs, are a powerful means for describing fractal sets and for modeling or approximating natural objects. IFSs were first introduced in \cite{BD,Hutch} and subsequently investigated by numerous authors. Within the fractal image compression community a generalization of IFSs was proposed in \cite{barnhurd} whose main purpose was to obtain efficient algorithms for image coding.
In \cite{BHM1}, this generalization of a traditional IFS, called a local IFS, was reconsidered but now from the viewpoint of approximation theory and from the standpoint of developing computationally efficient numerical methods based on fractal methodologies. In the current paper, we continue this former exploration of local IFSs and consider a special class of attractors, namely those that are the graphs of functions. We will derive conditions under which such local fractal functions are elements of certain function spaces which are important in harmonic analysis and numerical mathematics.
The structure of this paper is as follows. We present the traditional IFSs in Section \ref{sec2} in a more general and modern setting and state some of their properties. Section \ref{sec3} introduces local IFSs and discusses some characteristics of this newly rediscovered concept. Local fractal functions and their connection to local IFSs are investigated in Section \ref{sec4}. In Section \ref{sec5} we briefly consider tensor products of local fractal functions. Local fractal functions in Lebesgue spaces are presented in Section \ref{sec6}, in smoothness and H\"older spaces in Section \ref{sec7}, and in Sobolev spaces in Section \ref{sec8}.
\section{Iterated Function Systems}\label{sec2}
In this section, we introduce the traditional IFS and highlight some of its fundamental properties. For more details and proofs, we refer the reader to \cite{B,BD,bm,Hutch} and the references stated therein.
Throughout this paper, we use the following notation. The set of positive integers is denoted by $\mathbb{N} := \{1, 2, 3, \ldots\}$, the set of nonnegative integers by $\mathbb{N}_0 = \mathbb{N}\cup\{0\}$, and the ring of integers by $\mathbb{Z}$. We denote the closure of a set $S$ by $\overline{S}$ and its interior by $\overset{\circ}{S}$. In the following, $(\mathbb{X},d_X)$ always denotes a complete metric space with metric $d_{\mathbb{X}}$.
\begin{definition}
Let $N\in\mathbb{N}$. If $f_{n}:\mathbb{X}\rightarrow\mathbb{X}$,
$n=1,2,\dots,N,$ are continuous mappings, then $\mathcal{F} :=\left(
\mathbb{X};f_{1},f_{2},...,f_{N}\right) $ is called an \textbf{iterated
function system} (IFS).
\end{definition}
By a slight abuse of notation and terminology, we use the same symbol $\mathcal{F}$ for the
IFS, the set of functions in the IFS, and for the following set-valued mapping defined on the class of all subsets $2^\mathbb{X}$ of $\mathbb{X}.$ Define $\mathcal{F}:2^{\mathbb{X}}\rightarrow 2^{\mathbb{X}}$ by
\[
\mathcal{F}(B) := \bigcup_{f\in\mathcal{F}}f(B), \quad B\in 2^\mathbb{X}.
\]
Denote by $\mathbb{H=H(\mathbb{X})}$ the hyperspace of all nonempty compact subsets of $\mathbb{X}$. The hyperspace $(\mathbb{H},d_\mathbb{H})$ becomes a complete metric space when endowed with the Hausdorff metric $d_{\mathbb{H}}$ (cf. \cite{Engel})
\[
d_\mathbb{H} (A,B) := \max\{\max_{a\in A}\min_{b\in B} d_X (a,b),\max_{b\in B}\min_{a\in A} d_X (a,b)\}.
\]
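For finite sets the suprema and infima in this formula are maxima and minima, so $d_{\mathbb{H}}$ can be computed directly; a minimal illustrative sketch in Python (not part of the paper's formal development):

```python
def hausdorff(A, B, d=lambda x, y: abs(x - y)):
    """Hausdorff distance d_H(A, B) for nonempty finite sets, from the formula above."""
    return max(max(min(d(a, b) for b in B) for a in A),
               max(min(d(a, b) for a in A) for b in B))

# example on the real line: the sup-inf is attained at a = 1, b = 0.25
assert hausdorff({0.0, 1.0}, {0.25}) == 0.75
assert hausdorff({0.0, 1.0}, {0.0, 1.0}) == 0.0
```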
Since $\mathcal{F}\left( \mathbb{H}\right) \subset\mathbb{H}$, we can also treat $\mathcal{F}$ as a mapping $\mathcal{F}:\mathbb{H} \rightarrow \mathbb{H}$. When
$U\subset\mathbb{X}$ is nonempty, we may write $\mathbb{H}(U)=\mathbb{H(X)}%
\cap2^{U}$. We denote by $\left\vert \mathcal{F}\right\vert $ the number of
distinct mappings in $\mathcal{F}$.
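For finite point sets, the Hausdorff metric can be evaluated directly from its definition. The following sketch (function and set names are our own) computes $d_\mathbb{H}$ for two finite subsets of $\mathbb{R}^2$ with the Euclidean metric playing the role of $d_\mathbb{X}$:

```python
# Hausdorff distance d_H between two finite subsets of R^2, directly from
# the definition above; function and set names are our own.
import math

def hausdorff(A, B):
    """Maximum of the two directed distances, with math.dist as d_X."""
    h = lambda P, Q: max(min(math.dist(p, q) for q in Q) for p in P)
    return max(h(A, B), h(B, A))

A = [(0.0, 0.0), (1.0, 0.0)]
B = [(0.0, 0.0), (0.0, 2.0)]
# the directed distance from A to B is 1, from B to A is 2, so d_H(A, B) = 2
```

Note that both directed distances are needed: here the distance from $A$ to $B$ is $1$, while the distance from $B$ to $A$ is $2$.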
A metric space $\mathbb{X}$ is termed \textbf{locally compact} if every point of $\mathbb{X}$ has a compact neighborhood. The following information, a proof of which can be found in \cite{bm}, is foundational.
\begin{theorem}
\label{ctythm}
\begin{itemize}
\item[(i)] If $(\mathbb{X},d_{\mathbb{X}})$ is compact then $(\mathbb{H}%
,d_{\mathbb{H}})$ is compact.
\item[(ii)] If $(\mathbb{X},d_{\mathbb{X}})$ is locally compact then $(\mathbb{H}%
,d_{\mathbb{H}})$ is locally compact.
\item[(iii)] If $\mathbb{X}$ is locally compact, or if each $f\in\mathcal{F}$ is
uniformly continuous, then $\mathcal{F}:\mathbb{H\rightarrow H}$ is continuous.
\item[(iv)] If $f:\mathbb{X\rightarrow}\mathbb{X}$ is a contraction mapping for each
$f\in\mathcal{F}$, then $\mathcal{F}:\mathbb{H\rightarrow H}$ is a contraction mapping.
\end{itemize}
\end{theorem}
\noindent
For $B\subset\mathbb{X}$, let $\mathcal{F}^{k}(B)$ denote the $k$-fold
composition of $\mathcal{F}$, i.e., the union of $f_{i_{1}}\circ f_{i_{2}%
}\circ\cdots\circ f_{i_{k}}(B)$ over all finite words $i_{1}i_{2}\cdots i_{k}$
of length $k.$ Define $\mathcal{F}^{0}(B) := B.$
\begin{definition}
\label{attractdef}A nonempty compact set $A\subset\mathbb{X}$ is said to be an
\textbf{attractor} of the IFS $\mathcal{F}$ if
\begin{itemize}
\item[(i)] $\mathcal{F}(A)=A$, and if
\item[(ii)] there exists an open set $U\subset\mathbb{X}$ such that $A\subset U$ and
$\lim_{k\rightarrow\infty}\mathcal{F}^{k}(B)=A,$ for all $B\in\mathbb{H(}U)$,
where the limit is in the Hausdorff metric.
\end{itemize}
The largest open set $U$ such that $\mathrm{(ii)}$ is true is called the \textbf{basin of
attraction} (for the attractor $A$ of the IFS $\mathcal{F}$).
\end{definition}
Note that if $U_1$ and $U_2$ satisfy condition $\mathrm{(ii)}$ in Definition \ref{attractdef} for the same attractor $A$, then so does
$U_1 \cup U_2$. We also remark that the invariance condition $\mathrm{(i)}$ is not needed; it follows from $\mathrm{(ii)}$ for $B := A$.
We will use the following observation \cite[Proposition 3 (vii)]{lesniak},
\cite[p.68, Proposition 2.4.7]{edgar}.
\begin{lemma}
\label{intersectlemma}Let $\left\{ B_{k}\right\} _{k=1}^{\infty}$ be a
sequence of nonempty compact sets such that $B_{k+1}\subset B_{k}$, for all
$k\in\mathbb{N}$. Then $\cap_{k\geq1}B_{k}=\lim_{k\rightarrow\infty}B_{k}$ where
convergence is with respect to the Hausdorff metric $d_\mathbb{H}$.
\end{lemma}
The next result shows how one may obtain the attractor $A$ of an IFS. For the proof, we refer the reader to \cite{bm}. Note that we do not assume that the functions in the IFS $\mathcal{F}$ are contractive.
\begin{theorem}
\label{attractorthm}Let $\mathcal{F}$ be an IFS with attractor $A$ and basin
of attraction $U.$ If the map $\mathcal{F}:\mathbb{H(}U)\mathbb{\rightarrow H(}U)$ is
continuous then%
\[
A=\bigcap\limits_{K\geq1}\overline{\bigcup_{k\geq K}\mathcal{F}^{k}(B)},%
\quad\text{ for all }B\subset U\text{ such that }\overline{B}\in
\mathbb{H(}U)\text{.}%
\]
\end{theorem}
The quantity on the right-hand side is sometimes called the \textbf{topological upper limit} of the sequence $\left\{ \mathcal{F}^{k}(B)\,\vert\, k\in \mathbb{N}\right\}$. (See, for instance, \cite{Engel}.)
A subclass of IFSs is obtained by imposing additional conditions on the functions that comprise the IFS. The definition below introduces this subclass.
\begin{definition}
An IFS $\mathcal{F} = (\mathbb{X}; f_1, f_2, \ldots, f_N)$ is called \textbf{contractive} if there exists a metric $d^*$ on $\mathbb{X}$, equivalent to $d_\mathbb{X}$, such that each $f\in \mathcal{F}$ is a contraction with respect to the metric $d^*$, i.e., there is a constant $c \in [0, 1)$
such that
$$
d^*(f(x_1), f(x_2)) \leq c\,d^*(x_1, x_2),
$$
for all $x_1, x_2 \in \mathbb{X}$.
\end{definition}
By item $\mathrm{(iv)}$ in Theorem \ref{ctythm}, the mapping
$\mathcal{F} : \mathbb{H} \to \mathbb{H}$ is then also contractive on the complete metric space $(\mathbb{H}, d_{\mathbb{H}})$, and thus possesses a unique attractor $A$. This attractor satisfies the \textbf{self-referential equation}
\begin{equation}\label{self}
A = \mathcal{F}(A) = \bigcup_{f\in\mathcal{F}}f(A).
\end{equation}
In the case of a contractive IFS, the basin of attraction for $A$ is $\mathbb{X}$ and the attractor can be computed via the following procedure: Let $K_0$ be any set in $\mathbb{H}(\mathbb{X})$ and consider the sequence of iterates
\[
K_m := \mathcal{F}(K_{m-1}) = \mathcal{F}^m (K_0), \quad m\in \mathbb{N}.
\]
Then $K_m$ converges in the Hausdorff metric to the attractor $A$ as $m\to\infty$, i.e., $d_\mathbb{H}(K_m, A) \to 0$ as $m\to\infty$.
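The iteration $K_m = \mathcal{F}(K_{m-1})$ can be sketched numerically on finite point clouds. In the following Python sketch, the three similitudes (a Sierpinski-type IFS on $[0,1]^2$), the starting set $K_0$, and the rounding resolution used to keep the clouds finite are all our own illustrative choices:

```python
# Sketch of the iteration K_m = F(K_{m-1}) on finite point clouds; the
# three similitudes below (a Sierpinski-type IFS on [0,1]^2) and the
# rounding resolution are our own illustrative choices.
def affine(s, tx, ty):
    """Similitude p -> s*p + (tx, ty)."""
    return lambda p: (s * p[0] + tx, s * p[1] + ty)

F = [affine(0.5, 0.0, 0.0), affine(0.5, 0.5, 0.0), affine(0.5, 0.25, 0.5)]

K = {(0.2, 0.7)}                 # any nonempty compact K_0 works
for _ in range(10):              # K_m = union of f(K_{m-1}) over f in F
    K = {tuple(round(c, 3) for c in f(p)) for f in F for p in K}
# K now approximates the attractor A to within the rounding resolution
```

Because the maps are contractive, the choice of $K_0$ only affects a transient: after $m$ steps every point of $K_m$ lies within $2^{-m}\,\mathrm{diam}(K_0)$-order distance of the attractor.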
For the remainder of this paper, the emphasis will be on contractive IFSs and, correspondingly, contractive local IFSs. We will see that the self-referential equation \eqref{self} plays a fundamental role in the construction of fractal sets and in the determination of their geometric and analytic properties.
\section{From IFS to Local IFS}\label{sec3}
The concept of a \textit{local} iterated function system generalizes that of an IFS as defined above; it was first introduced in \cite{barnhurd} and reconsidered in \cite{BHM1}. In what follows, $N\in\mathbb{N}$ always denotes a positive integer and $\mathbb{N}_N := \{1, \ldots, N\}$.
\begin{definition}\label{localIFS}
Suppose that $\{\mathbb{X}_i \,\vert\, i \in \mathbb{N}_N\}$ is a family of nonempty subsets of a metric space $\mathbb{X}$. Further assume that for each $\mathbb{X}_i$ there exists a continuous mapping $f_i: \mathbb{X}_i\to\mathbb{X}$, $i\in \mathbb{N}_N$. Then $\mathcal{F}_{\mathrm{loc}} := \{\mathbb{X}; (\mathbb{X}_i, f_i)\,\vert\, i \in \mathbb{N}_N\}$ is called a \textbf{local iterated function system} (local IFS).
\end{definition}
Note that if each $\mathbb{X}_i = \mathbb{X}$, then Definition \ref{localIFS} coincides with the usual definition of a standard (global) IFS on a complete metric space. However, the possibility of choosing the domain of each continuous mapping $f_i$ to be a proper subset of the entire space $\mathbb{X}$ adds additional flexibility, as will become apparent in the sequel.
\begin{definition}
A local IFS $\mathcal{F}_{\mathrm{loc}}$ is called \textbf{contractive} if there exists a metric $d^*$ equivalent to $d$ with respect to which all functions $f\in \mathcal{F}_{\mathrm{loc}}$ are contractive (on their respective domains).
\end{definition}
\noindent
With a local IFS we associate a set-valued operator $\mathcal{F}_\mathrm{loc} : 2^\mathbb{X} \to 2^\mathbb{X}$ by setting
\begin{equation}\label{hutchop}
\mathcal{F}_\mathrm{loc}(S) := \bigcup_{i=1}^N f_i (S\cap \mathbb{X}_i).
\end{equation}
By a slight abuse of notation, we use the same symbol for a local IFS and its associated operator.
\begin{definition}
A subset $A\in 2^\mathbb{X}$ is called a \textbf{local attractor} for the local IFS $\{\mathbb{X}; (\mathbb{X}_i, f_i)\,\vert\, i \in \mathbb{N}_N\}$ if
\begin{equation}\label{attr}
A = \mathcal{F}_\mathrm{loc} (A) = \bigcup_{i=1}^N f_i (A\cap \mathbb{X}_i).
\end{equation}
\end{definition}
In \eqref{attr} we allow for $A\cap \mathbb{X}_i$ to be the empty set. Thus, every local IFS has at least one local attractor, namely $A = \emptyset$. However, it may also have many distinct ones. In the latter case, if $A_1$ and $A_2$ are distinct local attractors, then $A_1\cup A_2$ is also a local attractor. Hence, there exists a largest local attractor for $\mathcal{F}_\mathrm{loc}$, namely the union of all distinct local attractors. We refer to this largest local attractor as {\em the} local attractor of a local IFS $\mathcal{F}_\mathrm{loc}$.
\begin{remark}
There exists an alternative definition for \eqref{hutchop}. We could consider the mappings $f_i$ as defined on all of $\mathbb{X}$ in the following sense: For any $S\in 2^\mathbb{X}$, let
\[
f_i (S) := \begin{cases} f_i (S\cap \mathbb{X}_i), & S\cap \mathbb{X}_i\neq \emptyset;\\ \emptyset, & S\cap \mathbb{X}_i = \emptyset,\end{cases} \qquad i\in \mathbb{N}_N.
\]
\end{remark}
Now suppose that $\mathbb{X}$ is compact and the $\mathbb{X}_i$, $i\in \mathbb{N}_N$, are closed, i.e., compact in $\mathbb{X}$. If in addition the local IFS $\{\mathbb{X}; (\mathbb{X}_i, f_i)\,\vert\, i \in \mathbb{N}_N\}$ is contractive then the local attractor can be computed as follows. Let $K_0:= \mathbb{X}$ and set
\[
K_n := \mathcal{F}_\mathrm{loc} (K_{n-1}) = \bigcup_{i\in \mathbb{N}_N} f_i (K_{n-1}\cap \mathbb{X}_i), \quad n\in \mathbb{N}.
\]
Then $\{K_n\,\vert\, n\in\mathbb{N}_0\}$ is a decreasing nested sequence of compact sets. \textit{If} each $K_n$ is nonempty then by the Cantor Intersection Theorem,
\[
K:= \bigcap_{n\in \mathbb{N}_0} K_n \neq \emptyset.
\]
Using \cite[Proposition 3 (vii)]{lesniak}, we see that
\[
K = \lim_{n\to\infty} K_n,
\]
where the limit is taken with respect to the Hausdorff metric on $\mathbb{H}$. This implies that
\[
K = \lim_{n\to\infty} K_n = \lim_{n\to\infty} \bigcup_{i\in \mathbb{N}_N} f_i (K_{n-1}\cap \mathbb{X}_i) = \bigcup_{i\in \mathbb{N}_N} f_i (K\cap \mathbb{X}_i) = \mathcal{F}_\mathrm{loc} (K).
\]
Thus, $K = A_\mathrm{loc}$. A condition which guarantees that each $K_n$ is nonempty is that $f_i(\mathbb{X}_i) \subset\mathbb{X}_i$, $i\in \mathbb{N}_N$. (See also \cite{barnhurd}.)
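The decreasing iteration $K_n = \mathcal{F}_\mathrm{loc}(K_{n-1})$ just described can be sketched on a discretized interval. In the following Python sketch the maps, the domains $\mathbb{X}_1 = [0,1/2]$ and $\mathbb{X}_2 = [1/2,1]$, the grid, and the rounding resolution are our own illustrative choices; note that $f_i(\mathbb{X}_i)\subset\mathbb{X}_i$ here, so every $K_n$ is nonempty:

```python
# Sketch of the decreasing iteration K_n = F_loc(K_{n-1}) for a local IFS
# on X = [0,1] with X_1 = [0, 1/2], X_2 = [1/2, 1]; the maps, grid, and
# rounding resolution are our own illustrative choices. Since
# f_i(X_i) is contained in X_i, every K_n is nonempty.
f   = [lambda x: x / 2, lambda x: (x + 1) / 2]      # f_1 on X_1, f_2 on X_2
dom = [lambda x: x <= 0.5, lambda x: x >= 0.5]      # membership in X_i

K = {i / 1000 for i in range(1001)}                 # K_0 = X, discretized
for _ in range(30):
    K = {round(f[i](x), 6) for i in range(2) for x in K if dom[i](x)}
# K now approximates the local attractor, here the two-point set {0, 1}
```

For this choice the points of $K_n$ cluster at the two fixed points $0$ and $1$; compositions such as $f_2\circ f_1^n$ die out because $f_1^n(\mathbb{X})$ eventually misses $\mathbb{X}_2$.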
In the above setting, one can derive a relation between the local attractor $A_\mathrm{loc}$ of a contractive local IFS $\{\mathbb{X}; (\mathbb{X}_i, f_i)\,\vert\, i \in \mathbb{N}_N\}$ and the (global) attractor $A$ of the associated (global) IFS $\{\mathbb{X}; f_i\,\vert\, i \in \mathbb{N}_N\}$. To this end, let the sequence $\{K_n\,\vert\, n\in \mathbb{N}_0\}$ be defined as above. The unique attractor $A$ of the IFS $\mathcal{F}:= \{\mathbb{X}; f_i\,\vert\, i \in \mathbb{N}_N\}$ is obtained as the fixed point of the set-valued map $\mathcal{F}: \mathbb{H}\to \mathbb{H}$,
\begin{equation}\label{setvalued}
\mathcal{F} (B) = \bigcup_{i\in \mathbb{N}_N} f_i (B),
\end{equation}
where $B\in \mathbb{H}$. If the IFS $\mathcal{F}$ is contractive, then the set-valued mapping \eqref{setvalued} is contractive on $\mathbb{H}$ and its unique fixed point is obtained as the limit of the sequence of sets $\{A_n\,\vert\, n\in \mathbb{N}_0\}$ with $A_0 := \mathbb{X}$ and
\[
A_n := \mathcal{F}(A_{n-1}), \quad n\in \mathbb{N}.
\]
Note that $K_0 = A_0 = \mathbb{X}$ and, assuming that $K_{n-1}\subseteq A_{n-1}$, $n\in\mathbb{N}$, it follows by induction that
\begin{align*}
K_n & = \bigcup_{i\in \mathbb{N}_N} f_i (K_{n-1}\cap \mathbb{X}_i) \subseteq \bigcup_{i\in \mathbb{N}_N} f_i (K_{n-1}) \subseteq \bigcup_{i\in \mathbb{N}_N} f_i (A_{n-1}) = A_n.
\end{align*}
Hence, upon taking the limit with respect to the Hausdorff metric as $n\to\infty$, we obtain $A_\mathrm{loc} \subseteq A$. This proves the next result.
\begin{proposition}
Let $\mathbb{X}$ be a compact metric space and let $\mathbb{X}_i$, $i\in \mathbb{N}_N$, be closed, i.e., compact in $\mathbb{X}$. Suppose that the local IFS $\mathcal{F}_\mathrm{loc} := \{\mathbb{X}; (\mathbb{X}_i, f_i)\,\vert\, i \in \mathbb{N}_N\}$ and the IFS $\mathcal{F}:=\{\mathbb{X}; f_i\,\vert\, i \in \mathbb{N}_N\}$ are both contractive. Then the local attractor $A_\mathrm{loc}$ of $\mathcal{F}_\mathrm{loc}$ is a subset of the attractor $A$ of $\mathcal{F}$.
\end{proposition}
Contractive local IFSs are point-fibered provided $\mathbb{X}$ is compact and the subsets $\mathbb{X}_i$, $i\in \mathbb{N}_N$, are closed. To show this, define the code space of a local IFS by $\Omega:= \prod_{n\in\mathbb{N}}\mathbb{N}_N$ and endow it with the product topology $\mathfrak{T}$. It is known that $\Omega$ is metrizable and that $\mathfrak{T}$ is induced by the metric $d_F: \Omega\times\Omega\to \mathbb{R}$,
\[
d_F(\sigma,\tau) := \sum_{n\in \mathbb{N}} \frac{|\sigma_n - \tau_n|}{(N+1)^n},
\]
where $\sigma = (\sigma_1\ldots\sigma_n\ldots)$ and $\tau = (\tau_1\ldots\tau_n\ldots)$. (As a reference, see for instance \cite{Engel}, Theorem 4.2.2.) The elements of $\Omega$ are called codes.
Define a set-valued mapping $\gamma :\Omega \to \mathbb{K}(\mathbb{X})$, where $\mathbb{K}(\mathbb{X})$ denotes the hyperspace of all compact subsets of $\mathbb{X}$, by
\[
\gamma (\sigma) := \bigcap_{n=1}^\infty f_{\sigma_1}\circ \cdots \circ f_{\sigma_n} (\mathbb{X}),
\]
where $\sigma = (\sigma_1\ldots\sigma_n\ldots)$. Then $\gamma (\sigma)$ is a singleton, i.e., the local IFS is point-fibered. Moreover, in this case, the local attractor $A_\mathrm{loc}$ equals $\gamma(\Omega)$. (For details about point-fibered IFSs and attractors, we refer the interested reader to \cite{K}, Chapters 3--5.)
\begin{example}
Let $\mathbb{X} := [0,1]\times [0,1]$ and suppose that $0 < x_2 < x_1 < 1$ and $0 < y_2 < y_1 < 1$. Define
\[
\mathbb{X}_1 := [0,x_1]\times [0,y_1]\qquad\text{and}\qquad \mathbb{X}_2 := [x_2,1]\times [y_2,1].
\]
Furthermore, let $f_i:\mathbb{X}_i \to \mathbb{X}$, $i=1,2$, be given by
\[
f_1(x,y) := (s_1 x, s_1 y)\quad\text{and}\quad f_2(x,y) := (s_2 x + (1-s_2) x_2, s_2 y + (1-s_2)y_2),
\]
respectively, where $s_1,s_2\in [0,1)$.
If $s_1 + s_2 \geq 1$, the (global) IFS $\{\mathbb{X}; f_1, f_2\}$ has the line segment $A = \{(x, \frac{y_2}{x_2}\, x)\,\vert\, 0\leq x \leq x_2\}$ joining the fixed points of $f_1$ and $f_2$ as its unique attractor; for $s_1 + s_2 < 1$ the attractor is a Cantor subset of this segment. The local attractor of the local IFS $\{\mathbb{X}; (\mathbb{X}_1, f_1), (\mathbb{X}_2, f_2)\}$ is $A_\mathrm{loc} = \{(0,0)\}\cup\{(s_1^n x_2, s_1^n y_2)\,\vert\, n\in \mathbb{N}_0\}$, i.e., the fixed point of $f_1$ together with the $f_1$-orbit of the fixed point $(x_2,y_2)$ of $f_2$; note that $(x_2,y_2)\in \mathbb{X}_1$, so this orbit must belong to any invariant set containing $(x_2,y_2)$.
\end{example}
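A small numerical check of this example (the parameter values, grid, and rounding resolution are our own choices): iterating $\mathcal{F}_\mathrm{loc}$ from $K_0 = \mathbb{X}$ on a point cloud. Since $(x_2,y_2)\in\mathbb{X}_1$, the limit contains, besides the fixed points $(0,0)$ and $(x_2,y_2)$, the $f_1$-iterates $(s_1^n x_2, s_1^n y_2)$, and all limit points lie on the ray $y = (y_2/x_2)\,x$:

```python
# Numerical check of the example: iterate the local operator F_loc on a
# point cloud starting from K_0 = X; parameter values are our own choice.
x1, y1, x2, y2 = 0.7, 0.8, 0.3, 0.4     # 0 < x2 < x1 < 1, 0 < y2 < y1 < 1
s1, s2 = 0.5, 0.5

f = [lambda p: (s1 * p[0], s1 * p[1]),
     lambda p: (s2 * p[0] + (1 - s2) * x2, s2 * p[1] + (1 - s2) * y2)]
dom = [lambda p: p[0] <= x1 and p[1] <= y1,     # p in X_1
       lambda p: p[0] >= x2 and p[1] >= y2]     # p in X_2

K = {(i / 50, j / 50) for i in range(51) for j in range(51)}   # K_0 = X
for _ in range(40):
    K = {tuple(round(c, 6) for c in f[i](p))
         for i in range(2) for p in K if dom[i](p)}
# K contains (0,0), (x2,y2), and the f_1-iterates (x2,y2)/2^n,
# all lying on the ray y = (y2/x2) x
```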
\section{Local Fractal Functions}\label{sec4}
In this section, we introduce bounded local fractal functions as the fixed points of operators acting on the complete metric space of bounded functions. We will see that the graph of a local fractal function is the local attractor of an associated local IFS and that the set of discontinuities of a bounded local fractal function is at most countably infinite. We follow the exposition presented in \cite{BHM1}.
To this end, let $X$ be a nonempty connected set and $\{X_i \,\vert\, i \in\mathbb{N}_N\}$ a family of nonempty connected subsets of $X$. Suppose $\{u_i : X_i\to X \,\vert\, i \in \mathbb{N}_N\}$ is a family of bijective mappings with the property that
\begin{enumerate}
\item[(P)] $\{u_i(X_i)\,\vert\, i \in\mathbb{N}_N\}$ forms a (set-theoretic) partition of $X$: $X = \bigcup_{i=1}^N u_i(X_i)$ and $u_i(X_i)\cap u_j(X_j) = \emptyset$, for all $i\neq j\in \mathbb{N}_N$.
\end{enumerate}
\noindent
Now suppose that $(\mathbb{Y},d_\mathbb{Y})$ is a complete metric space with metric $d_\mathbb{Y}$. A mapping $f:X\to \mathbb{Y}$ is called \textbf{bounded} (with respect to the metric $d_\mathbb{Y}$) if there exists an $M> 0$ so that for all $x_1, x_2\in X$, $d_\mathbb{Y}(f(x_1),f(x_2)) < M$.
Denote by $B(X, \mathbb{Y})$ the set
\[
B(X, \mathbb{Y}) := \{f : X\to \mathbb{Y} \,\vert\, \text{$f$ is bounded}\}.
\]
Endowed with the metric
\[
d(f,g): = \displaystyle{\sup_{x\in X}} \,d_\mathbb{Y}(f(x), g(x)),
\]
$(B(X, \mathbb{Y}), d)$ becomes a complete metric space. In a similar fashion, we define $B(X_i, \mathbb{Y})$, $i \in\mathbb{N}_N$.
Under the usual addition and scalar multiplication of functions, the spaces $B(X_i,\mathbb{Y})$ and $B(X,\mathbb{Y})$ become metric linear spaces \cite{Rol}. Recall that a \textbf{metric linear space} is a vector space endowed with a metric under which the operations of vector addition and scalar multiplication become continuous.
For $i \in \mathbb{N}_N$, let $v_i: X_i\times \mathbb{Y} \to \mathbb{Y}$ be a mapping that is uniformly contractive in the second variable, i.e., there exists an $\ell\in [0,1)$ so that for all $y_1, y_2\in \mathbb{Y}$
\begin{equation}\label{scon}
d_\mathbb{Y} (v_i(x, y_1), v_i(x, y_2)) \leq \ell\, d_\mathbb{Y} (y_1, y_2), \quad\forall x\in X.
\end{equation}
Define a \textbf{Read-Bajraktarevi\'c (RB) operator} $\Phi: B(X,\mathbb{Y})\to \mathbb{Y}^{X}$ by
\begin{equation}\label{RB}
\Phi f (x) := \sum_{i=1}^N v_i (u_i^{-1} (x), f_i\circ u_i^{-1} (x))\,\chi_{u_i(X_i)}(x),
\end{equation}
where $f_i := f\vert_{X_i}$ and
$$
\chi_M (x) := \begin{cases} 1, & x\in M\\ 0, & x\notin M\end{cases},
$$
denotes the characteristic function of a set $M$. Note that $\Phi$ is well-defined, and since $f$ is bounded and each $v_i$ contractive in its second variable, $\Phi f\in B(X,\mathbb{Y})$.
Moreover, by \eqref{scon}, we obtain for all $f,g\in B(X, \mathbb{Y})$ the following inequality:
\begin{align}\label{estim}
d(\Phi f, \Phi g) & = \sup_{x\in X} d_\mathbb{Y} (\Phi f (x), \Phi g (x))\nonumber\\
& = \sup_{x\in X} d_\mathbb{Y} (v(u_i^{-1} (x), f_i(u_i^{-1} (x))), v(u_i^{-1} (x), g_i(u_i^{-1} (x))))\nonumber\\
& \leq \ell\sup_{x\in X} d_\mathbb{Y} (f_i\circ u_i^{-1} (x), g_i \circ u_i^{-1} (x)) \leq \ell\, d(f,g).
\end{align}
To simplify notation, we set $v(x,y):= \sum_{i=1}^N v_i (x, y)\,\chi_{X_i}(x)$ in the above computation. In other words, $\Phi$ is a contraction on the complete metric space $B(X,\mathbb{Y})$ and, by the Banach Fixed Point Theorem, therefore has a unique fixed point $\mathfrak{f}$ in $B(X,\mathbb{Y})$. This unique fixed point will be called a \textbf{local fractal function} $\mathfrak{f} = \mathfrak{f}_\Phi$ (generated by $\Phi$).
Next, we would like to consider a special choice of mappings $v_i$. To this end, we require the concept of an $F$-space. We recall that a metric $d:\mathbb{Y}\times\mathbb{Y}\to \mathbb{R}$ is called \textbf{complete} if every Cauchy sequence in $\mathbb{Y}$ converges with respect to $d$ to a point of $\mathbb{Y}$, and \textbf{translation-invariant} if $d(x+a,y+a) = d(x,y)$, for all $x,y,a\in \mathbb{Y}$.
\begin{definition}
A topological vector space $\mathbb{Y}$ is called an \textbf{$\boldsymbol{F}$-space} \cite{Rol} if its topology is induced by a complete translation-invariant metric $d$.
\end{definition}
Now suppose that $\mathbb{Y}$ is an $F$-space. Denote its metric by $d_\mathbb{Y}$. We define mappings $v_i:X_i\times\mathbb{Y}\to \mathbb{Y}$ by
\begin{equation}\label{specialv}
v_i (x,y) := \lambda_i (x) + S_i (x) \,y,\quad i \in \mathbb{N}_N,
\end{equation}
where $\lambda_i \in B(X_i,\mathbb{Y})$ and $S_i : X_i\to \mathbb{R}$ is a function.
If in addition we require that the metric $d_\mathbb{Y}$ is homogeneous, that is,
\[
d_\mathbb{Y}(\alpha y_1, \alpha y_2) = |\alpha| d_\mathbb{Y}(y_1,y_2), \quad \forall \alpha\in\mathbb{R}\;\forall y_1, y_2\in \mathbb{Y},
\]
then $v_i$ given by \eqref{specialv} satisfies condition \eqref{scon} provided that the functions $S_i$ are bounded on $X_i$ with bounds in $[0,1)$. For then
\begin{align*}
d_\mathbb{Y} (\lambda_i (x) + S_i (x) \,y_1,\lambda_i (x) + S_i (x) \,y_2) &= d_\mathbb{Y}(S_i (x) \,y_1,S_i (x) \,y_2) \\
& = |S_i(x)| d_\mathbb{Y} (y_1, y_2)\\
& \leq \|S_i\|_{\infty,X_i}\, d_\mathbb{Y} (y_1, y_2)\\
& \leq s\,d_\mathbb{Y} (y_1, y_2).
\end{align*}
Here, we denoted the supremum norm with respect to $X_i$ by $\|\bullet\|_{\infty, X_i}$ and set $s := \max\{\|S_i\|_{\infty,X_i}\,\vert\, i\in \mathbb{N}_N\}$.
Thus, for a {\em fixed} set of functions $\{\lambda_1, \ldots, \lambda_N\}$ and $\{S_1, \ldots, S_N\}$, the associated RB operator \eqref{RB} now takes the form
\[
\Phi f = \sum_{i=1}^N \lambda_i\circ u_i^{-1} \,\chi_{u_i(X_i)} + \sum_{i=1}^N (S_i\circ u_i^{-1})\cdot (f_i\circ u_i^{-1})\,\chi_{u_i(X_i)},
\]
or, equivalently,
\[
(\Phi f)\circ u_i = \lambda_i + S_i\cdot f_i, \quad \text{on $X_i$, $\forall\;i\in\mathbb{N}_N$,}
\]
with $f_i = f\vert_{X_i}$.
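The fixed point of this affine RB operator can be approximated by straightforward iteration. In the sketch below, the setup is our own choice: $X = X_i = [0,1]$, $u_1(x) = x/2$, $u_2(x) = (x+1)/2$ (a classical fractal-interpolation configuration), constant scaling functions $S_i \equiv s_i$, and simple affine $\lambda_i$:

```python
# Iterate f -> Phi f for the affine RB operator (Phi f)(u_i(x)) = lam_i(x) + s_i f(x)
# on a dyadic grid over X = [0,1], with u_1(x) = x/2, u_2(x) = (x+1)/2;
# the lam_i and s_i below are our own illustrative choices.
n = 1 << 10                                  # grid x_j = j/n, n a power of 2
lam = [lambda x: x, lambda x: 1.0 - x]       # lambda_1, lambda_2 in B(X, R)
s = [0.4, 0.4]                               # max |s_i| < 1: Phi is contractive

f = [0.0] * (n + 1)
for _ in range(60):                          # d(Phi f, Phi g) <= 0.4 d(f, g)
    g = [0.0] * (n + 1)
    for j in range(n + 1):
        if 2 * j <= n:                       # x_j = u_1(x) with x = 2j/n
            g[j] = lam[0](2 * j / n) + s[0] * f[2 * j]
        else:                                # x_j = u_2(x) with x = (2j - n)/n
            g[j] = lam[1]((2 * j - n) / n) + s[1] * f[2 * j - n]
    f = g
# f now samples the local fractal function; up to roundoff it satisfies
# f(u_i(x)) = lam_i(x) + s_i f(x) at the grid points
```

With this choice the boundary values are $\mathfrak{f}(0) = \mathfrak{f}(1) = 0$ and $\mathfrak{f}(1/2) = \lambda_1(1) + s_1\,\mathfrak{f}(1) = 1$, consistent from both branches.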
\begin{theorem}
Let $\mathbb{Y}$ be an $F$-space with homogeneous metric $d_\mathbb{Y}$. Let $X$ be a nonempty connected set and $\{X_i \,\vert\, i \in\mathbb{N}_N\}$ a collection of nonempty connected subsets of $X$. Suppose that $\{u_i : X_i\to X \,\vert\, i \in \mathbb{N}_N\}$ is a family of bijective mappings satisfying property $\mathrm{(P)}$.
Let $\blambda := (\lambda_1, \ldots, \lambda_N)\in \underset{i=1}{\overset{N}{\times}} B(X_i,\mathbb{Y})$ and $\boldsymbol{S} := (S_1, \ldots, S_N)\in \underset{i=1}{\overset{N}{\times}} B (X_i,\mathbb{R})$. Define a mapping $\Phi: \left(\underset{i=1}{\overset{N}{\times}} B(X_i,\mathbb{Y})\right)\times \left(\underset{i=1}{\overset{N}{\times}} B (X_i,\mathbb{R})\right) \times B(X,\mathbb{Y})\to B(X,\mathbb{Y})$ by
\begin{equation}\label{eq3.4}
\Phi(\blambda)(\boldsymbol{S}) f = \sum_{i=1}^N \lambda_i\circ u_i^{-1} \,\chi_{u_i(X_i)} + \sum_{i=1}^N (S_i\circ u_i^{-1})\cdot (f_i\circ u_i^{-1})\,\chi_{u_i(X_i)}.
\end{equation}
If $\max\{\|S_i\|_{\infty,X_i}\,\vert\, i\in \mathbb{N}_N\} < 1$ then the operator $\Phi(\blambda)(\boldsymbol{S})$ is contractive on the complete metric space $B(X, \mathbb{Y})$ and its unique fixed point $\mathfrak{f}$ satisfies the self-referential equation
\begin{equation}\label{3.4}
\mathfrak{f} = \sum_{i=1}^N \lambda_i\circ u_i^{-1} \,\chi_{u_i(X_i)} + \sum_{i=1}^N (S_i\circ u_i^{-1})\cdot (\mathfrak{f}_i\circ u_i^{-1})\,\chi_{u_i(X_i)},
\end{equation}
or, equivalently
\begin{equation}
\mathfrak{f}\circ u_i = \lambda_i + S_i\cdot \mathfrak{f}_i, \quad \text{on $X_i$, $\forall\;i\in\mathbb{N}_N$,}
\end{equation}
where $\mathfrak{f}_i = \mathfrak{f}\vert_{X_i}$.
\end{theorem}
\begin{proof}
The statements follow directly from the considerations preceding the theorem.
\end{proof}
The fixed point $\mathfrak{f}$ in \eqref{3.4} is called a \textbf{bounded local fractal function} or, for short, \textbf{local fractal function}.
\begin{remark}
Note that the local fractal function $\mathfrak{f}$ generated by the operator $\Phi$ defined by \eqref{eq3.4} does not only depend on the family of subsets $\{X_i \,\vert\, i \in \mathbb{N}_N\}$ but also on the two $N$-tuples of bounded functions $\blambda\in \underset{i=1}{\overset{N}{\times}} B(X_i,\mathbb{Y})$ and $\boldsymbol{S}\in \underset{i=1}{\overset{N}{\times}} B (X_i,\mathbb{R})$. The fixed point $\mathfrak{f}$ should therefore be written more precisely as $\mathfrak{f} (\blambda)(\boldsymbol{S})$. However, for the sake of notational simplicity, we usually suppress this dependence for both $\mathfrak{f}$ and $\Phi$.
\end{remark}
\begin{example}
Suppose $X:= [0,1]$ and $\mathbb{Y}:=\mathbb{R}$. In Figure \ref{fig:randfracfun}, we display the graph of a randomly generated local fractal function where the $\lambda_i$'s and the $S_i$'s were chosen to have random constant values.
\begin{figure}[h!]
\centerline{\includegraphics[width=0.5\textwidth]{ranfracfun}}
\caption{A randomly generated local fractal function\label{fig:randfracfun}}
\end{figure}
\end{example}
The following result extends a theorem found in \cite{GHM} and, in more general form, in \cite{M97}, to the setting of local fractal functions.
\begin{theorem}\label{thm3.3}
The mapping $\blambda \mapsto \mathfrak{f}(\blambda)$ defines a linear isomorphism from $\underset{i=1}{\overset{N}{\times}} B(X_i,\mathbb{Y})$ to $B(X,\mathbb{Y})$.
\end{theorem}
\begin{proof}
Let $\alpha, \beta \in\mathbb{R}$ and let $\blambda, \bmu\in \underset{i=1}{\overset{N}{\times}} B(X_i,\mathbb{Y})$. Injectivity follows immediately from the fixed point equation \eqref{3.4} and the uniqueness of the fixed point: $\blambda = \bmu$ $\Longleftrightarrow$ $\mathfrak{f}(\blambda) = \mathfrak{f}(\bmu)$.
Linearity follows from \eqref{3.4}, the uniqueness of the fixed point and injectivity:
\begin{align*}
\mathfrak{f}(\alpha\blambda + \beta \bmu) & = \sum_{i=1}^N (\alpha\lambda_i + \beta \mu_i) \circ u_i^{-1} \,\chi_{u_i(X_i)}\\
& \qquad + \sum_{i=1}^N (S_i\circ u_i^{-1})\cdot (\mathfrak{f}_i(\alpha\blambda + \beta \bmu)\circ u_i^{-1})\,\chi_{u_i(X_i)}
\end{align*}
and
\begin{align*}
\alpha \mathfrak{f}(\blambda) + \beta \mathfrak{f}(\bmu) & = \sum_{i=1}^N (\alpha\lambda_i + \beta \mu_i) \circ u_i^{-1} \,\chi_{u_i(X_i)}\\
& \qquad + \sum_{i=1}^N (S_i\circ u_i^{-1})\cdot ((\alpha \mathfrak{f}_i(\blambda) + \beta \mathfrak{f}_i(\bmu))\circ u_i^{-1})\,\chi_{u_i(X_i)}.
\end{align*}
Hence, $\mathfrak{f}(\alpha\blambda + \beta \bmu) = \alpha \mathfrak{f}(\blambda) + \beta \mathfrak{f}(\bmu)$.
For surjectivity, we define $\lambda_i := \mathfrak{f}\circ u_i - S_i \cdot \mathfrak{f}_i$, $i\in \mathbb{N}_N$. Since $\mathfrak{f}\in B(X,\mathbb{Y})$, we have $\blambda\in \underset{i=1}{\overset{N}{\times}} B(X_i,\mathbb{Y})$. Thus, $\mathfrak{f}(\blambda) = \mathfrak{f}$.
\end{proof}
The next result gives information about the set of discontinuities of a bounded local fractal function $\mathfrak{f}$. The proof can be found in \cite{BHM1}.
\begin{theorem}\label{discont}
Let $\Phi$ be given as in \eqref{eq3.4}. Assume that for all $i\in \mathbb{N}_N$ the $u_i$ are contractive and the $\lambda_i$ are continuous on $\overline{X_i}$. Further assume that
\[
\max\left\{\|S_i\|_{\infty,X_i}\,\vert\, i\in\mathbb{N}_N\right\} < 1,
\]
and that the fixed point $\mathfrak{f}$ is bounded everywhere. Then the set of discontinuities of $\mathfrak{f}$ is at most countably infinite.
\end{theorem}
Next, we exhibit the relation between the graph $G$ of the fixed point $\mathfrak{f}$ of the operator $\Phi$ given by \eqref{eq3.4} and the local attractor of an associated contractive local IFS. To this end, we need to require that $\mathbb{X}$ is a closed subset of a complete metric space, hence complete itself. Consider the complete metric space $\mathbb{X}\times\mathbb{Y}$ and define mappings $w_i:\mathbb{X}_i\times\mathbb{Y}\to \mathbb{X}\times\mathbb{Y}$ by
\[
w_i (x, y) := (u_i (x), v_i (x,y)), \quad i\in \mathbb{N}_N.
\]
Assume that the mappings $v_i: \mathbb{X}_i\times \mathbb{Y}\to \mathbb{Y}$ in addition to being uniformly contractive in the second variable are also uniformly Lipschitz continuous in the first variable, i.e., that there exists a constant $L > 0$ so that for all $y\in \mathbb{Y}$,
\[
d_\mathbb{Y}(v_i(x_1, y),v_i(x_2, y)) \leq L \, d_\mathbb{X} (x_1,x_2), \quad\forall x_1, x_2\in \mathbb{X}_i,\quad\forall i\in \mathbb{N}_N.
\]
Denote by $a:= \max\{a_i\,\vert\, i\in \mathbb{N}_N\}$ the largest of the Lipschitz constants of the mappings $u_i:\mathbb{X}_i\to \mathbb{X}$, assume $a < 1$, and let $\theta := \frac{1-a}{2L} > 0$. It is straightforward to show that the mapping $d_\theta : (\mathbb{X}\times\mathbb{Y})\times(\mathbb{X}\times\mathbb{Y}) \to \mathbb{R}$ given by
\[
d_\theta := d_\mathbb{X} + \theta\,d_\mathbb{Y}
\]
is a metric for $\mathbb{X}\times\mathbb{Y}$ compatible with the product topology on $\mathbb{X}\times\mathbb{Y}$.
\begin{theorem}
The family $\mathcal{W}_\mathrm{loc} := \{\mathbb{X}\times\mathbb{Y}; (\mathbb{X}_i\times\mathbb{Y}, w_i)\,\vert\, i\in \mathbb{N}_N\}$ is a contractive local IFS in the metric $d_\theta$ and the graph $G(\mathfrak{f})$ of the local fractal function $\mathfrak{f}$ associated with the operator $\Phi$ given by \eqref{eq3.4} is an attractor of $\mathcal{W}_\mathrm{loc}$. Moreover,
\begin{equation}\label{GW}
G(\Phi \mathfrak{f}) = \mathcal{W}_\mathrm{loc} (G(\mathfrak{f})),
\end{equation}
where $\mathcal{W}_\mathrm{loc}$ denotes the set-valued operator \eqref{hutchop} associated with the local IFS $\mathcal{W}_\mathrm{loc}$.
\end{theorem}
\begin{proof}
We first show that $\{\mathbb{X}\times\mathbb{Y}; (\mathbb{X}_i\times\mathbb{Y}, w_i)\,\vert\, i\in \mathbb{N}_N\}$ is a contractive local IFS. For this purpose, let $(x_1,y_1), (x_2,y_2)\in \mathbb{X}_i\times\mathbb{Y}$, $i\in \mathbb{N}_N$, and note that
\begin{align*}
d_\theta (w_i(x_1,y_1), w_i(x_2,y_2)) & = d_\mathbb{X} (u_i (x_1), u_i(x_2)) + \theta d_\mathbb{Y}(v_i (x_1,y_1), v_i (x_2,y_2)) \\
& \leq a\, d_\mathbb{X}(x_1, x_2) + \theta d_\mathbb{Y}(v_i (x_1,y_1), v_i (x_2,y_1))\\
& \qquad + \theta d_\mathbb{Y}(v_i (x_2,y_1), v_i (x_2,y_2))\\
& \leq (a + \theta L) d_\mathbb{X}(x_1, x_2) + \theta\,\ell \,d_\mathbb{Y}(y_1,y_2) \\
& \leq q\,d_\theta ((x_1,y_1), (x_2,y_2)).
\end{align*}
Here we used \eqref{scon} and set $q:= \max\{a + \theta L, \ell\} < 1$; indeed, $a + \theta L = \frac{1+a}{2} < 1$.
The graph $G(\mathfrak{f})$ of $\mathfrak{f}$ is an attractor for the contractive local IFS $\mathcal{W}_\mathrm{loc}$, for
\begin{align*}
\mathcal{W}_\mathrm{loc} (G(\mathfrak{f})) & = \bigcup_{i=1}^N w_i (G(\mathfrak{f})\cap (\mathbb{X}_i\times\mathbb{Y})) = \bigcup_{i=1}^N w_i (\{(x, \mathfrak{f}(x))\,\vert\, x\in \mathbb{X}_i\})\\
& = \bigcup_{i=1}^N \{(u_i (x), v_i(x, \mathfrak{f}(x)))\,\vert\, x\in \mathbb{X}_i\} = \bigcup_{i=1}^N \{(u_i(x), \mathfrak{f}(u_i(x)))\,\vert\, x\in \mathbb{X}_i\}\\
& = \bigcup_{i=1}^N \{(x, \mathfrak{f}(x)) \,\vert\, x\in u_i(\mathbb{X}_i)\} = G(\mathfrak{f}).
\end{align*}
That \eqref{GW} holds follows from the above computation and the fixed point equation for $\mathfrak{f}$ written in the form
\[
\mathfrak{f}\circ u_i (x) = v_i (x, \mathfrak{f} (x)), \quad x\in \mathbb{X}_i, \quad i\in \mathbb{N}_N.\qedhere
\]
\end{proof}
\section{Tensor Products of Local Fractal Functions}\label{sec5}
In this section, we define the tensor product of local fractal functions, thus extending the previous construction to higher dimensions.
For this purpose, we follow the notation and terminology of the previous section, and assume that $X$ and $\overline{X}$ are nonempty connected sets, and $\{X_i\,\vert\, i\in \mathbb{N}_N\}$ and $\{\overline{X}_i\,\vert\, i\in \mathbb{N}_N\}$ are families of nonempty connected subsets of $X$ and $\overline{X}$, respectively. Analogously, we define finite families of bijections $\{u_i: X_i\to X\,\vert\, i\in \mathbb{N}_N\}$ and $\{\overline{u}_i: \overline{X}_i\to \overline{X}\,\vert\, i\in \mathbb{N}_N\}$, requiring both to satisfy condition (P).
Furthermore, we assume that $(\mathbb{Y}, \|\bullet\|_\mathbb{Y})$ is a \textbf{Banach algebra}, i.e., a Banach space that is also an associative algebra in which multiplication is continuous:
$$
\|y_1y_2\|_\mathbb{Y} \leq \|y_1\|_\mathbb{Y}\,\|y_2\|_\mathbb{Y}, \quad\forall\,y_1,y_2\in \mathbb{Y}.
$$
Let $f\in B(X,\mathbb{Y})$ and $\overline{f}\in B(\overline{X},\mathbb{Y})$. The tensor product of $f$ with $\overline{f}$, written $f\otimes\overline{f}: X\times\overline{X}\to \mathbb{Y}$, with values in $\mathbb{Y}$ is defined by
\[
(f\otimes\overline{f}) (x,\overline{x}) := f(x) \overline{f}(\overline{x}),\quad\forall\,(x,\overline{x})\in X\times\overline{X}.
\]
As $f$ and $\overline{f}$ are bounded, the inequality
\[
\|(f\otimes\overline{f})(x,\overline{x})\|_{\mathbb{Y}} = \|f(x)\overline{f}(\overline{x})\|_{\mathbb{Y}} \leq \|f(x)\|_\mathbb{Y} \, \|\overline{f}(\overline{x})\|_\mathbb{Y},
\]
implies that $f\otimes\overline{f}$ is bounded. Under the usual addition and scalar multiplication of functions, the set
\[
B(X\times\overline{X}, \mathbb{Y}) := \{f\otimes\overline{f} : X\times \overline{X}\to \mathbb{Y} \,\vert\, \text{$f\otimes\overline{f}$ is bounded}\}
\]
becomes a complete metric space when endowed with the metric
\[
d(f\otimes\overline{f}, g\otimes\overline{g}) := \sup_{x\in X} \|f(x) - g(x)\|_\mathbb{Y} + \sup_{\overline{x}\in\overline{X}} \|\overline{f}(\overline{x}) - \overline{g}(\overline{x})\|_\mathbb{Y}.
\]
Now let $\Phi: B(X,\mathbb{Y})\to B(X,\mathbb{Y})$ and $\overline{\Phi}: B(\overline{X},\mathbb{Y})\to B(\overline{X},\mathbb{Y})$ be contractive RB-operators of the form \eqref{RB}. We define the tensor product of $\Phi$ with $\overline{\Phi}$ to be the RB-operator $\Phi\otimes\overline{\Phi}: B(X\times\overline{X}, \mathbb{Y})\to B(X\times\overline{X}, \mathbb{Y})$ given by
\[
(\Phi\otimes\overline{\Phi})(f\otimes\overline{f}) := (\Phi f)\otimes (\overline{\Phi} \overline{f}).
\]
It follows that $\Phi\otimes\overline{\Phi}$ maps bounded functions to bounded functions. Furthermore, $\Phi\otimes\overline{\Phi}$ is contractive on the complete metric space $(B(X\times\overline{X}, \mathbb{Y}),d)$. To see this, note that
\begin{align*}
\sup_{x\in X}\|(\Phi f)(x) &- (\Phi g)(x)\|_\mathbb{Y} + \sup_{\overline{x}\in \overline{X}}\|(\overline{\Phi}\, \overline{f})(\overline{x}) - (\overline{\Phi}\, \overline{g})(\overline{x})\|_\mathbb{Y} \\
& \leq \ell \sup_{x\in X}\|f(x) - g(x)\|_\mathbb{Y} + \overline{\ell} \sup_{\overline{x}\in \overline{X}} \|\overline{f}(\overline{x}) - \overline{g}(\overline{x})\|_\mathbb{Y}\\
& \leq \max\{\ell, \overline{\ell}\}\, d(f\otimes\overline{f}, g\otimes\overline{g}),
\end{align*}
where we used \eqref{estim} and denoted the uniform contractivity constant of $\overline{\Phi}$ by $\overline{\ell}$.
The unique fixed point of the RB-operator $\Phi\otimes\overline{\Phi}$ will be called a \textbf{tensor product local fractal function} and its graph a \textbf{tensor product local fractal surface}.
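Since $(\Phi\otimes\overline{\Phi})(f\otimes\overline{f}) = (\Phi f)\otimes(\overline{\Phi}\,\overline{f})$, the fixed point of $\Phi\otimes\overline{\Phi}$ is $\mathfrak{f}_\Phi\otimes\mathfrak{f}_{\overline{\Phi}}$, so a tensor product local fractal surface can be sampled by computing each one-dimensional factor separately. A sketch under our own assumptions: the affine setup $u_1(x) = x/2$, $u_2(x) = (x+1)/2$ on $X = \overline{X} = [0,1]$, with our own choices of $\lambda_i$, $\overline{\lambda}_i$ and scaling constants:

```python
# Sample a tensor product fractal surface as the outer product of the two
# 1-D RB fixed points; the setup (u_1(x) = x/2, u_2(x) = (x+1)/2) and all
# parameters are our own illustrative choices.
def fractal_function(lam, s, n=256, iters=60):
    """Fixed point of the affine RB operator (Phi f)(u_i(x)) = lam_i(x) + s_i f(x)."""
    f = [0.0] * (n + 1)
    for _ in range(iters):
        g = [0.0] * (n + 1)
        for j in range(n + 1):
            if 2 * j <= n:                   # grid point u_1(x), x = 2j/n
                g[j] = lam[0](2 * j / n) + s[0] * f[2 * j]
            else:                            # grid point u_2(x), x = (2j-n)/n
                g[j] = lam[1]((2 * j - n) / n) + s[1] * f[2 * j - n]
        f = g
    return f

f  = fractal_function([lambda x: x, lambda x: 1.0 - x], [0.3, 0.3])
fb = fractal_function([lambda x: x * x, lambda x: 1.0 - x], [0.5, -0.2])

# (f (x) fb)(x, xb) = f(x) * fb(xb): sample the tensor product surface
surface = [[a * b for b in fb] for a in f]
```

The contraction factor of $\Phi\otimes\overline{\Phi}$ is $\max\{\ell, \overline{\ell}\}$, as in the estimate above, so the surface samples converge at the slower of the two one-dimensional rates.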
\section{Lebesgue Spaces $L^p(\mathbb{R})$}\label{sec6}
We may construct local fractal functions on spaces other than $B(X,\mathbb{Y})$. (See also \cite{BHM1}.) In this section, we derive conditions under which local fractal functions are elements of the Lebesgue spaces $L^p$ for $p>0$. To this end, we assume again that the functions $v_i$ are given by \eqref{specialv} and that $\mathbb{X} := [0,1]$ and $\mathbb{Y} := \mathbb{R}$. We consider the metric on $\mathbb{R}$ and $\mathbb{X}=[0,1]$ as being induced by the $L^1$-norm. Note that endowed with this norm $B(\mathbb{X},\mathbb{R})$ becomes a Banach space.
Recall that the Lebesgue spaces $L^p [0,1]$, $1\leq p\leq \infty$, are obtained as the completion of the space $C[0,1]$ of real-valued continuous functions on $[0,1]$ with respect to the $L^p$-norm
\[
\|f\|_{L^p} := \left(\int_{[0,1]} |f(x)|^p \,dx\right)^{1/p}.
\]
For $0 < p <1$, the spaces $L^p[0,1]$ are defined as above, but completeness is obtained from a metric instead of a norm. More precisely, define
\[
d_p (f,g) := \|f - g\|_{L^p}^p,
\]
where $\|\bullet\|_{L^p}$ is the quantity introduced above. Then $(L^p[0,1], d_p)$ is an $F$-space. (Note that the inequality $(a+b)^p \leq a^p + b^p$ holds for all $a,b\geq 0$ when $0 < p \leq 1$.) For more details, we refer to \cite{rudin}.
We have the following result for RB-operators defined on the Lebesgue spaces $L^p[0,1]$, $0 < p \leq \infty$. The case $p\in [1, \infty]$ was already considered in \cite{BHM1}, but for the sake of completeness we reproduce the proof.
\begin{theorem}\label{thm7}
Suppose that $\{X_i \,\vert\, i \in \mathbb{N}_N\}$ is a family of half-open intervals of $[0,1]$. Further suppose that $\{x_0 := 0 < x_1 < \cdots < x_N := 1\}$ is a partition of $[0,1]$ and that $\{u_i \,\vert\, i \in\mathbb{N}_N\}$ is a family of affine mappings from $X_i$ onto $[x_{i-1}, x_i)$, $i = 1, \ldots, N-1$, and from $X_N^+ := X_N\cup u_N^{-1}(1-)$ onto $[x_{N-1},x_N]$, where $u_N$ maps $X_N$ onto $[x_{N-1}, x_N)$.
The operator $\Phi: L^p [0,1]\to \mathbb{R}^{[0,1]}$, $p\in (0,\infty]$, defined by
\begin{equation}\label{Phi}
\Phi g := \sum_{i=1}^N (\lambda_i \circ u_i^{-1})\,\chi_{u_i(X_i)} + \sum_{i=1}^N (S_i\circ u_i^{-1})\cdot (g_i\circ u_i^{-1})\,\chi_{u_i(X_i)},
\end{equation}
where $g_i = g\vert_{X_i}$, $\lambda_i\in L^p (X_i, [0,1])$ and $S_i\in L^\infty (X_i, \mathbb{R})$, $i \in\mathbb{N}_N$, maps $L^p [0,1]$ into itself. Moreover, if
\begin{equation}\label{condition}
\begin{cases}
\displaystyle{\sum_{i=1}^N}\, a_i \,\|S_i\|_{\infty, X_i}^p < 1, & p\in (0,1);\\ \\
\left(\displaystyle{\sum_{i=1}^N}\, a_i \,\|S_i\|_{\infty, X_i}^p\right)^{1/p} < 1, & p\in[1,\infty);\\ \\
\max\left\{\|S_i\|_{\infty,X_i}\,\vert\, i\in\mathbb{N}_N\right\} < 1, & p = \infty,
\end{cases}
\end{equation}
where $a_i$ denotes the Lipschitz constant of $u_i$, then $\Phi$ is contractive on $L^p [0,1]$ and its unique fixed point $\mathfrak{f}$ is an element of $L^p [0,1]$.
\end{theorem}
\begin{proof}
Note that under the hypotheses on the functions $\lambda_i$ and $S_i$ as well as the mappings $u_i$, $\Phi f$ is well-defined and an element of $L^p[0,1]$. It remains to be shown that under conditions \eqref{condition}, $\Phi$ is contractive on $L^p[0,1]$.
We start with $1\leq p<\infty$. If $g,h \in L^p [0,1]$ then
\begin{align*}
\|\Phi g - \Phi h\|^{p}_{L^p} & = \int\limits_{[0,1]} |\Phi g (x) - \Phi h (x)|^p dx\\
& = \int\limits_{[0,1]} \left|\sum_{i=1}^{N} (S_i\circ u_i^{-1})(x) [(g_i\circ u_i^{-1})(x) - (h_i\circ u_i^{-1})(x)]\,\chi_{u_i(X_i)}(x)\right|^p\, dx\\
& = \sum_{i=1}^{N}\,\int\limits_{[x_{i-1},x_i]}\left| (S_i\circ u_i^{-1})(x) [(g_i\circ u_i^{-1})(x) - (h_i\circ u_i^{-1})(x)]\right|^p\,dx\\
& = \sum_{i=1}^{N}\,a_i\,\int\limits_{X_i} \left| S_i (x) [g_i(x)- h_i(x)]\right|^p\,dx\\
& \leq \sum_{i=1}^{N}\,a_i\,\|S_i\|^p_{\infty, X_i}\,\int\limits_{X_i} \left| g_i(x) - h_i(x)\right|^p\,dx = \sum_{i=1}^{N}\,a_i\,\|S_i\|^p_{\infty, X_i}\,\|g_i - h_i\|^p_{{L^p},X_i}\\
& = \sum_{i=1}^{N}\,a_i\,\|S_i\|^p_{\infty, X_i}\,\|g_i - h_i\|^p_{{L^p}} \leq \left(\sum_{i=1}^{N}\,a_i\,\|S_i\|^p_{\infty, X_i}\right) \|g - h\|^p_{{L^p}}.
\end{align*}
The case $0<p<1$ follows in a similar fashion. After the same substitution and rearrangement we obtain
\begin{align*}
d_p(\Phi g,\Phi h) & = \sum_{i=1}^{N}\,a_i\,\int\limits_{X_i} \left| S_i (x) [g_i(x)- h_i(x)]\right|^p\,dx\\
& = \sum_{i=1}^{N}\,a_i\,\|S_i\|^p_{\infty, X_i}\,\|g_i - h_i\|^p_{{L^p}} \leq \left(\sum_{i=1}^{N}\,a_i\,\|S_i\|^p_{\infty, X_i}\right) \|g - h\|^p_{{L^p}}\\
& = \left(\sum_{i=1}^{N}\,a_i\,\|S_i\|^p_{\infty, X_i}\right) d_p(g, h).
\end{align*}
Now let $p= \infty$. Then
\begin{align*}
\|\Phi g - \Phi h\|_{\infty} & = \left\|\sum_{i=1}^{N} (S_i\circ u_i^{-1})(x) [(g_i\circ u_i^{-1})(x) - (h_i\circ u_i^{-1})(x)]\,\chi_{u_i(X_i)}(x)\right\|_\infty\\
& \leq \max_{i\in\mathbb{N}_N}\,\left\| (S_i\circ u_i^{-1})(x) [(g_i\circ u_i^{-1})(x) - (h_i\circ u_i^{-1})(x)]\right\|_{\infty,X_i}\\
& \leq \max_{i\in\mathbb{N}_N}\|S_i\|_{\infty,X_i} \left\|g_i - h_i\right\|_{\infty,X_i} = \max_{i\in\mathbb{N}_N}\|S_i\|_{\infty,X_i} \left\|g_i - h_i\right\|_{\infty}\\
& \leq \left(\max_{{i\in\mathbb{N}_N}}\,\|S_i\|_{\infty,X_i}\right) \left\|g - h\right\|_{\infty}.
\end{align*}
These calculations prove the claims.
\end{proof}
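To make condition \eqref{condition} concrete, the following snippet (illustrative only; the constants $a_i$ and sup-norms $\|S_i\|_{\infty,X_i}$ are hypothetical sample data, not taken from the paper) evaluates the three contractivity tests numerically:

```python
# Hypothetical sample data for the contractivity conditions of Theorem (thm7).
a = [0.4, 0.6, 0.5]           # Lipschitz constants a_i of the maps u_i
S = [0.3, 0.2, 0.25]          # sup-norms ||S_i||_{infty, X_i}

def contractive(p):
    """Evaluate the contractivity test of the theorem for a given p."""
    if p == float("inf"):
        return max(S) < 1
    q = sum(ai * Si ** p for ai, Si in zip(a, S))
    # For p in (0,1) the sum itself must be < 1; for p >= 1 its p-th root.
    return (q if p < 1 else q ** (1 / p)) < 1

assert contractive(0.5) and contractive(2) and contractive(float("inf"))
```

For these sample values all three regimes $p\in(0,1)$, $p\in[1,\infty)$, and $p=\infty$ yield a contraction.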
\begin{remark}
The proof of the theorem shows that the conclusions also hold under the assumption that the family of mappings $\{u_i: X_i\to \mathbb{X}\,\vert\, i\in \mathbb{N}_N\}$ is generated by the following functions.
\begin{enumerate}
\item[$\mathrm{(i)}$] Each $u_i$ is a bounded diffeomorphism of class $C^k$, $k\in \mathbb{N}\cup\{\infty\}$, from $X_i$ to $[x_{i-1}, x_i)$ (obvious modification for $i = N$). In this case, the $a_i$'s are given by $a_i = \sup\left\{\left\vert \frac{du_i}{dx} (x)\right\vert \,\vert\, x\in X_i\right\}$, $i\in\mathbb{N}_N$.
\item[$\mathrm{(ii)}$] Each $u_i$ is a bounded invertible function in $C^\omega$, the class of real-analytic functions, from $X_i$ to $[x_{i-1}, x_i)$, whose inverse is also in $C^\omega$. (Obvious modification for $i = N$.) The $a_i$'s are given as in item $\mathrm{(i)}$.
\end{enumerate}
\end{remark}
\section{Smoothness spaces $C^n$ and H\"older Spaces $\dot{C}^s$}\label{sec7}
Our next objective is to derive conditions on the partition $\{X_i\,\vert\, i\in \mathbb{N}_N\}$ of $\mathbb{X}:= [0,1]$ and the function tuples $\blambda$ and $\boldsymbol{S}$ so that we obtain a continuous or even differentiable local fractal function $\mathfrak{f}:[0,1]\to\mathbb{R}$. To this end, consider the complete metric linear space $C := C^0 (\mathbb{X}) := \{f: [0,1]\to \mathbb{R}\,\vert\, \text{$f$ continuous}\}$ endowed with the supremum norm $\|\bullet\|_\infty$.
\subsection{Binary partition of $\mathbb{X}$}
We introduce the following subsets of $\mathbb{X}=[0,1]$ which play an important role in fractal-based numerical analysis as they give discretizations for efficient computations. For more details, we refer to \cite{BHM1} and partly to \cite{BHM}.
Assume that $N\in 2\mathbb{N}$ and let
\begin{equation}\label{subsets}
\mathbb{X}_{2j-1} := \mathbb{X}_{2j} := \left[\frac{2(j-1)}{N},\frac{2j}{N}\right], \quad j = 1, \ldots, \frac{N}{2}.
\end{equation}
Define affine mappings $u_i:\mathbb{X}_i\to [0,1]$ so that
\begin{equation}\label{uis}
u_i(\mathbb{X}_i) := \left[\frac{i - 1}{N},\frac{i}{N}\right], \quad i= 1, \ldots, N.
\end{equation}
In explicit form, the $u_i$'s are given by
\[
u_{2j-1} (x) = \frac{x}{2} + \frac{j-1}{N} \quad\text{and}\quad u_{2j} (x) = \frac{x}{2} + \frac{j}{N}, \quad x \in \mathbb{X}_{2j-1} = \mathbb{X}_{2j}.
\]
Note that here $u_i(\mathbb{X}_i) \subsetneq \mathbb{X}_i$, $\forall\,i\in\mathbb{N}_N$. Clearly, $\{u_i(\mathbb{X}_i)\,\vert\, i\in \mathbb{N}_N\}$ is a partition of $[0,1]$. We denote the distinct endpoints of the partitioning intervals $\{u_i(\mathbb{X}_{i})\}$ by $\{x_0 < x_1 <\ldots < x_{N}\}$ where $x_0 = 0$ and $x_N = 1$, and refer to them as \textbf{knot points} or simply as \textbf{knots}.
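As a quick sanity check of \eqref{uis} (this snippet is not part of the paper), one can verify in exact rational arithmetic that the maps $u_{2j-1}$ and $u_{2j}$ send the endpoints of $\mathbb{X}_{2j-1} = \mathbb{X}_{2j}$ to those of $\left[\frac{i-1}{N},\frac{i}{N}\right]$:

```python
from fractions import Fraction

def u(i, x, N):
    """Affine map u_i evaluated at x (exact rational arithmetic)."""
    j = (i + 1) // 2                      # the interval index: X_{2j-1} = X_{2j}
    shift = Fraction(j - 1, N) if i % 2 == 1 else Fraction(j, N)
    return x / 2 + shift

N = 6                                     # N must be even
for i in range(1, N + 1):
    j = (i + 1) // 2
    a, b = Fraction(2 * (j - 1), N), Fraction(2 * j, N)   # endpoints of X_i
    assert (u(i, a, N), u(i, b, N)) == (Fraction(i - 1, N), Fraction(i, N))
```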
Furthermore, we assume that we are given interpolation values at the endpoints of the intervals $\mathbb{X}_{2j-1} = \mathbb{X}_{2j}$:
\begin{equation}\label{I}
\mathscr{I} := \left\{(x_{2j}, y_j)\,\vert\, j = 0, 1, \ldots, N/2\right\}.
\end{equation}
Let
$$
C_{\mathscr{I}} := \{f\in C \,\vert\, f(x_{2j}) = y_j, \,\forall\, j = 0, 1, \ldots, N/2\}.
$$
Then $C_{\mathscr{I}}$ is a closed metric subspace of $C$. We consider an RB operator $\Phi$ of the form \eqref{eq3.4} acting on $C_{\mathscr{I}}$.
In order for $\Phi$ to map $C_{\mathscr{I}}$ into itself one needs to require that $\lambda_i, S_i \in C(\mathbb{X}_i) := C(\mathbb{X}_i,\mathbb{R}) := \{f: \mathbb{X}_i\to \mathbb{R}\,\vert\, \text{$f$ continuous}\}$ and that
\begin{equation}\label{intcon}
y_{j-1} = \Phi f (x_{2(j-1)}) \quad \wedge \quad y_{j} = \Phi f (x_{2j}), \quad j = 1, \ldots, N/2,
\end{equation}
where $x_{2j} := (2j)/N$. Note that the preimages of the knots $x_{2(j-1)}$ and $x_{2j}$ are the endpoints of $\mathbb{X}_{2j-1} = \mathbb{X}_{2j}$. Substituting the expression for $\Phi$ into \eqref{intcon} and collecting terms yields
\begin{equation}\label{intcons}
\begin{split}
\lambda_{2j-1} (x_{2(j-1)}) + \left(S_{2j-1}(x_{2(j-1)}) - 1\right) y_{j-1} & = 0,\\
\lambda_{2j} (x_{2j}) + \left(S_{2j}(x_{2j}) - 1\right) y_{j} & = 0,
\end{split}
\end{equation}
for all $j=1, \ldots, N/2$.
To ensure continuity of $\Phi f$ across $[0,1]$, the following join-up conditions need to be imposed at the oddly indexed knots. (These knots are the midpoints of the intervals $\mathbb{X}_{2j-1} = \mathbb{X}_{2j}$.)
\begin{equation}\label{prejoin}
\Phi f (x_{2j-1}-) = \Phi f (x_{2j-1}+) , \quad j = 1, \ldots, N/2.
\end{equation}
A simple calculation gives
\begin{equation}\label{joinup}
\lambda_{2j} (x_{2(j-1)}) + S_{2j} (x_{2(j-1)}) y_{j-1} = \lambda_{2j-1} (x_{2j}) + S_{2j-1} (x_{2j}) y_j,
\end{equation}
for all $j = 1, \ldots, N/2$. In case all functions $\lambda_i$ and $S_i$ are constant, \eqref{joinup} reduces to the condition given in \cite[Example 2]{BHM}. Two tuples of functions $\blambda, \boldsymbol{S} \in \underset{i=1}{\overset{N}{\times}} C(\mathbb{X}_i)$ are said to have property (J) if they satisfy \eqref{intcons} and \eqref{joinup}.
We summarize these results in the next theorem.
\begin{theorem}\label{thm8}
Let $\mathbb{X}:= [0,1]$ and let $N\in 2\mathbb{N}$. Suppose that subsets of $\mathbb{X}$ are given by \eqref{subsets} and the associated mappings $u_i$ by \eqref{uis}. Further suppose that $\mathscr{I}$ is as in \eqref{I} and that $\blambda, \boldsymbol{S} \in \underset{i=1}{\overset{N}{\times}} C(\mathbb{X}_i)$ have property (J). Then the RB operator $\Phi$ as given in \eqref{eq3.4} maps $C_\mathscr{I}$ into itself and is well-defined. If, in addition, $\max\left\{\|S_i\|_{\infty,\mathbb{X}_i}\,\vert\, i\in\mathbb{N}_N\right\} < 1$, then $\Phi$ is a contraction and thus possesses a unique fixed point $\mathfrak{f}:[0,1]\to \mathbb{R}$ satisfying $\mathfrak{f}(x_{2j}) = y_j$, $\forall\, j = 0, 1, \ldots, N/2$.
\end{theorem}
We call this unique fixed point a \textbf{continuous local fractal interpolation function}.
\begin{proof}
It remains to be shown that under the condition $\max\left\{\|S_i\|_{\infty,\mathbb{X}_i}\,\vert\, i\in\mathbb{N}_N\right\} < 1$, $\Phi$ is contractive on $C_\mathscr{I}$. This, however, follows immediately from the case $p=\infty$ in the proof of Theorem \ref{thm7}.
\end{proof}
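The following minimal numerical sketch illustrates Theorem \ref{thm8} in the simplest case $N = 2$. The interpolation data, the prescribed knot value, and the constant scaling $s$ are hypothetical choices (not from the paper), and the affine $\lambda_i$ are constructed so that property (J) holds:

```python
# RB operator for N = 2: X_1 = X_2 = [0,1], u_1(x) = x/2, u_2(x) = x/2 + 1/2,
# constant S_1 = S_2 = s, affine lambda_i satisfying property (J) for the
# (hypothetical) data f(0) = y0, f(1) = y1 and knot value f(1/2) = ymid.
s = 0.3                                    # vertical scaling, |s| < 1
y0, y1, ymid = 0.0, 1.0, 0.8

def lam1(t):                               # lambda_1 on X_1 = [0, 1]
    return (1 - s) * y0 + (ymid - s * y1 - (1 - s) * y0) * t

def lam2(t):                               # lambda_2 on X_2 = [0, 1]
    return (ymid - s * y0) + ((1 - s) * y1 - ymid + s * y0) * t

M = 1 << 10                                # dyadic grid with M + 1 points
f = [0.0] * (M + 1)
for _ in range(80):                        # iterate Phi toward its fixed point
    g = [0.0] * (M + 1)
    for k in range(M + 1):
        if 2 * k <= M:                     # x in [0, 1/2]: x = u_1(t), t = 2x
            g[k] = lam1(2 * k / M) + s * f[2 * k]
        else:                              # x in (1/2, 1]: t = 2x - 1
            g[k] = lam2((2 * k - M) / M) + s * f[2 * k - M]
    f = g

assert abs(f[0] - y0) < 1e-9 and abs(f[M] - y1) < 1e-9
assert abs(f[M // 2] - ymid) < 1e-9        # join-up value at the knot 1/2
```

Varying the knot value and the scaling $s$ changes the roughness of the resulting graph; for $|s|$ close to $1$ the contraction factor, and hence the convergence of the iteration, deteriorates accordingly.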
Theorem \ref{thm8} can be adapted to the setting of H\"older spaces. For this purpose, we introduce the \textbf{homogeneous H\"older space $\dot{C}^s(\Omega)$}, $0 < s < 1$, as the family of all functions $f\in C(\Omega)$, $\Omega\subseteq\mathbb{R}$, for which
\[
|f|_{\dot{C}^s(\Omega)} := \sup_{x\neq x' \in \Omega} \frac{|f(x) - f(x')|}{|x - x'|^s} < \infty.
\]
$|\bullet|_{\dot{C}^s(\Omega)}$ is a homogeneous semi-norm making $\dot{C}^s$ into a complete locally convex topological vector space, i.e., a Fr\'echet space.
\begin{theorem}
Let $\mathbb{X}:= [0,1]$ and let $N\in 2\mathbb{N}$. Assume that subsets of $\mathbb{X}$ are given by \eqref{subsets}, associated mappings $u_i$ by \eqref{uis}, and that $\mathscr{I}$ is as in \eqref{I}. Assume further that $\blambda\in \underset{i=1}{\overset{N}{\times}} \dot{C}^s(\mathbb{X}_i)$, $\boldsymbol{S}\in \underset{i=1}{\overset{N}{\times}} C(\mathbb{X}_i)$, and that they have property (J). Then the RB operator \eqref{eq3.4} maps $\dot{C}^s := \dot{C}^s (\mathbb{X})$ into itself and is well defined. Furthermore, if
\[
2^s \max\left\{\|S_i\|_{\infty,\mathbb{X}_i}\,\vert\, i\in\mathbb{N}_N\right\} < 1
\]
then $\Phi$ is contractive on $\dot{C}^s$ and has a unique fixed point $\mathfrak{f}\in\dot{C}^s$.
\end{theorem}
In case the last conclusion of the above theorem holds, we say that the fixed point $\mathfrak{f}$ is a \textbf{local fractal function of class $\dot{C}^s$}.
\begin{proof}
First we show that $\Phi f\in \dot{C}^s$. For $x,x'\in [0,1]$, note that there exist $i,i'\in \mathbb{N}_N$ so that $x\in u_i(\mathbb{X}_i)$ and $x'\in u_{i'}(\mathbb{X}_{i'})$. Therefore,
\begin{align*}
|\Phi f(x) - \Phi f(x')| & \leq \left\vert \lambda_i(u_i^{-1}(x)) - \lambda_{i'}(u_{i'}^{-1}(x')) \right\vert \\
& \quad + \left\vert(S_i (u_i^{-1}(x))\cdot (f_i(u_i^{-1}(x)) - (S_{i'} (u_{i'}^{-1}(x'))\cdot (f_{i'}(u_{i'}^{-1}(x'))\right\vert\\
& \leq \left\vert \lambda_i(u_i^{-1}(x)) - \lambda_{i'}(u_{i'}^{-1}(x')) \right\vert\\
& \quad + \max\left\{\|S_i\|_{\infty,\mathbb{X}_i}\right\}
\left\vert f_i(u_i^{-1}(x)) - f_{i'}(u_{i'}^{-1}(x'))\right\vert.
\end{align*}
Using the fact that $|x - x'|^s = 2^{-s}\, |u_i^{-1}(x) - u_{i'}^{-1}(x')|^s$ and employing the properties of the supremum, we thus obtain
\begin{align*}
|\Phi f|_{\dot{C}^s} \leq 2^s \left(\sum_{i\in\mathbb{N}_N} |\lambda_i|_{\dot{C}^s(\mathbb{X}_i)} + \max\left\{\|S_i\|_{\infty,\mathbb{X}_i}\right\} |f|_{\dot{C}^s}\right) < \infty.
\end{align*}
To establish the contractivity of $\Phi$, note that
\begin{align*}
|(\Phi f - \,&\Phi g)(x) - (\Phi f - \Phi g)(x')| = \\
& |S_i (u_i^{-1}(x))\cdot (f_i - g_i)(u_i^{-1}(x)) - S_{i'} (u_{i'}^{-1}(x'))\cdot (f_{i'} - g_{i'})(u_{i'}^{-1}(x'))|\\
& \leq \max\left\{\|S_i\|_{\infty,\mathbb{X}_i}\right\} |(f_i - g_i)(u_i^{-1}(x)) - (f_{i'} - g_{i'})(u_{i'}^{-1}(x'))|.
\end{align*}
As above, using again $|x - x'|^s = 2^{-s}\, |u_i^{-1}(x) - u_{i'}^{-1}(x')|^s$ and the fact that $f$ is defined on all of $[0,1]$, this yields
\[
|\Phi f - \Phi g|_{\dot{C}^s} \leq 2^s \max\left\{\|S_i\|_{\infty,\mathbb{X}_i}\right\} |f - g|_{\dot{C}^s}.\qedhere
\]
\end{proof}
Just as in the case of splines, we can impose join-up conditions and choose the function tuples $\blambda$ and $\boldsymbol{S}$ so that the RB operator \eqref{eq3.4} maps the space of continuously differentiable functions into itself. More precisely, suppose that $\Omega\subseteq \mathbb{R}$. Let $C^n (\Omega):= C^n(\Omega,\mathbb{R}) := \{f: \Omega\to \mathbb{R}\,\vert\, D^k f \in C, \,\forall k = 1, \ldots,n\}$, where $D$ denotes the ordinary differential operator. The linear space $C^n(\Omega)$ is a Banach space under the norm
\[
\|f\|_{C^n(\Omega)} := \sum_{k=0}^n \|D^k f\|_{\infty, \Omega}.
\]
We write $C^n$ for $C^n (\mathbb{X})$, and will delete the $\Omega$ from the norm notation when $\Omega := \mathbb{X} = [0,1]$.
As we require $C^n$-differentiability across $\mathbb{X} =[0,1]$, we impose $C^n$-interpolation values at the endpoints of the intervals $\mathbb{X}_{2j-1} = \mathbb{X}_{2j}$:
\begin{equation}\label{In}
\mathscr{I}^{(n)} := \left\{(x_{2j}, \by_j^{(n)})\,\vert\, j = 0, 1, \ldots, N/2\right\},
\end{equation}
where $\by_j^{(n)} := (y_j^{(0)},y_j^{(1)},\ldots,y_j^{(n)})^T\in \mathbb{R}^{n+1}$ is a given interpolation vector. Let
$$
C^n_{\mathscr{I}^{(n)}} := \{f\in C^n \,\vert\, D^k f(x_{2j}) = y_j^{(k)}, \,\forall\, k = 0,1,\ldots, n; \,\forall\, j = 0, 1, \ldots, N/2\}.
$$
Then $C^n_{\mathscr{I}^{(n)}}$ is a closed metric subspace of $C^n$.
In order for $\Phi$ to map $C^n_{\mathscr{I}^{(n)}}$ into itself, choose $\lambda_i, S_i \in C^n(\mathbb{X}_i)$, $i\in\mathbb{N}_N$, so that
\begin{equation}\label{diffintcon}
y_{j-1}^{(k)} = D^k\Phi f (x_{2(j-1)}) \quad \wedge \quad y_{j}^{(k)} = D^k \Phi f (x_{2j}),
\end{equation}
for all $k = 0, 1, \ldots, n$ and for all $j = 1, \ldots, N/2$.
At the midpoints of the intervals $\mathbb{X}_{2j-1} = \mathbb{X}_{2j}$, the function tuples $\blambda$ and $\boldsymbol{S}$ need to additionally satisfy the $C^n$-join-up conditions
\begin{equation}\label{diffcon}
D^k \Phi f (x_{2j-1}-) = D^k\Phi f (x_{2j-1}+) , \quad\,\forall k = 0,1,\ldots, n;\,\forall j = 1, \ldots, N/2.
\end{equation}
\begin{theorem}
Let $\mathbb{X}:= [0,1]$ and let $N\in 2\mathbb{N}$. Assume that subsets of $\mathbb{X}$ are given by \eqref{subsets}, associated mappings $u_i$ by \eqref{uis}, and that $\mathscr{I}^{(n)}$ is as in \eqref{In}. Assume further that $\blambda, \boldsymbol{S}\in \underset{i=1}{\overset{N}{\times}} C^n(\mathbb{X}_i)$, and that they satisfy conditions \eqref{diffintcon} and \eqref{diffcon}. Then the RB operator \eqref{eq3.4} maps $C^n_{\mathscr{I}^{(n)}}$ into itself and is well defined. Furthermore, if
\begin{equation}\label{123}
2^n \max_{i\in\mathbb{N}_N}\max_{k=0,1,\ldots,n} \left\{\sum_{l=0}^k \binom{n-k+l}{l}\|D^lS_i\|_{\infty,\mathbb{X}_i}\right\} < 1
\end{equation}
then $\Phi$ is contractive on $C^n_{\mathscr{I}^{(n)}}$ and has a unique fixed point $\mathfrak{f}\in C^n_{\mathscr{I}^{(n)}}$.
\end{theorem}
We refer to this fixed point $\mathfrak{f}$ as a \textbf{local fractal function of class $C^n_{\mathscr{I}^{(n)}}$}.
\begin{proof}
That $\Phi$ is well defined and maps $C^n_{\mathscr{I}^{(n)}}$ into itself is implied by the conditions imposed on $\blambda$ and $\boldsymbol{S}$. It remains to be shown that under condition \eqref{123} the RB operator $\Phi$ is contractive. To this end, consider $f,g\in C^n_{\mathscr{I}^{(n)}}$. Then
\begin{align*}
D^k\Phi f (x) - &\, D^k\Phi g(x) = \sum_{i\in\mathbb{N}_N} D^k\left[S_i(u_i^{-1}(x)) \cdot (f_i(u_i^{-1}(x)) - g_i(u_i^{-1}(x)))\right]\chi_{u_i(\mathbb{X}_i)}\\
& = \sum_{i\in\mathbb{N}_N} \sum_{l=0}^k \binom{k}{l}\, 2^k\left[(D^{k-l}(f_i - g_i))(u_i^{-1}(x)) \cdot (D^l S_i)(u_i^{-1}(x))\right]\chi_{u_i(\mathbb{X}_i)},\\
\end{align*}
where we applied the Leibniz rule for differentiation. Therefore,
\begin{align*}
\|D^k \Phi f - D^k \Phi g\|_\infty &\leq 2^k \sum_{i\in\mathbb{N}_N} \sum_{l=0}^k \binom{k}{l} \|D^l S_i\|_{\infty,\mathbb{X}_i} \|D^{k-l} (f-g)\|_\infty.
\end{align*}
Hence,
\begin{align*}
\|\Phi f - \Phi g\|_{C^n} & = \sum_{k=0}^n \|D^k \Phi f - D^k \Phi g\|_\infty\\
& \leq 2^n \sum_{i\in\mathbb{N}_N}\sum_{k=0}^n \sum_{l=0}^k \binom{k}{l} \|D^l S_i\|_{\infty,\mathbb{X}_i} \|D^{k-l} (f-g)\|_\infty\\
& = 2^n \sum_{i\in\mathbb{N}_N}\sum_{k=0}^n \sum_{l=0}^k \binom{n-k+l}{l} \|D^l S_i\|_{\infty,\mathbb{X}_i} \|D^{n-k} (f-g)\|_\infty
\end{align*}
The last equality is proven directly by computation or mathematical induction. Thus,
\[
\|\Phi f - \Phi g\|_{C^n} \leq \left(2^n \max_{i\in\mathbb{N}_N}\max_{k=0,1,\ldots,n} \left\{\sum_{l=0}^k \binom{n-k+l}{l}\|D^lS_i\|_{\infty,\mathbb{X}_i}\right\}\right) \|f - g\|_{C^n},
\]
and the statement follows.
\end{proof}
\subsection{Vanishing endpoint conditions for $S_i$}
Here, we consider a more general set-up than in the previous subsection. We assume again that $\mathbb{X} := [0,1]$ and let $\mathbb{X}_i:=[a_i,b_i]$, for $i\in\mathbb{N}_N$, be $N$ different subintervals of positive length. We further assume that $\{0 =: x_0 < x_1 < \ldots < x_{N-1} < x_N := 1\}$ is a partition of $\mathbb{X}$ and that we have chosen an enumeration in such a way that the mappings $u_i:\mathbb{X}_i\to \mathbb{X}$ satisfy
\[
u_i ([a_i, b_i]) := [x_{i-1}, x_i], \quad\forall\,i \in \mathbb{N}_N.
\]
In particular, note that $a_1 = x_0$, $b_N = x_N$, and $u_i(b_i) = x_i = u_{i+1}(a_{i+1})$, for all interior knots $x_1, \ldots, x_{N-1}$. We assume that the $u_i$ are affine functions but that they are not necessarily contractive.
Let
\begin{equation}\label{Ia}
\mathscr{I} := \left\{(x_{j}, y_j)\,\vert\, j = 0, 1, \ldots, N\right\}
\end{equation}
be a given set of interpolation points and let
\begin{equation}\label{111}
C_{\mathscr{I}} := \{f\in C \,\vert\, f(x_{j}) = y_j, \,\forall\, j = 0, 1, \ldots, N\}.
\end{equation}
Our objective in this subsection is to construct a local fractal function that belongs to $C_{\mathscr{I}}$ and which is generated by an RB operator of the form \eqref{eq3.4}. For this purpose, we need to impose continuity conditions at the interpolation points. More precisely, we require that for an $f\in C_{\mathscr{I}}$,
\begin{equation}\label{condS}
\begin{split}
\Phi f(x_0)& = y_0, \quad \Phi f(x_N) = y_N,\\
\Phi f (x_i-) = y_i = &\,\Phi f (x_i+),\quad i = 1, \ldots, N-1.
\end{split}
\end{equation}
Substituting the expression for $\Phi$ into these equations and simplifying yields
\begin{gather*}
\lambda_1 (x_0) + S_1(x_0) y_0 = y_0, \quad \lambda_N (x_N) + S_N(x_N) y_N = y_N\\
\lambda_i(b_i) + S_i(b_i) f(b_i) = y_i = \lambda_{i+1} (a_{i+1}) + S_{i+1}(a_{i+1}) f(a_{i+1}), \quad i = 1, \ldots, N-1.
\end{gather*}
Since these equations require knowledge of $f$ at the points $a_i$ and $b_i$, which is not available, we impose the following vanishing endpoint conditions on the functions $S_i$:
\begin{equation}\label{S}
S_i (a_i) = 0 = S_i (b_i), \,\forall i = 1, \ldots, N.
\end{equation}
Thus the requirements on the functions $\lambda_i$ reduce to
\begin{gather*}
\lambda_1 (x_0) = y_0, \quad \lambda_N (x_N) = y_N\\
\lambda_i(b_i) = y_i = \lambda_{i+1} (a_{i+1}), \quad i = 1, \ldots, N-1.
\end{gather*}
Function tuples $\blambda$ and $\boldsymbol{S}$ satisfying \eqref{condS} and \eqref{S} are said to have property (S).
A class of functions $S_i$ for which conditions \eqref{S} hold is, for instance, the class of polynomial B-splines $B_n$ of order $n > 2$ centered at the midpoint of the interval $[a_i,b_i]$. Polynomial B-splines $B_n$ even have the property that all derivatives up to order $n-2$ vanish at the endpoints: $D^k B_n (a_i) = 0 = D^k B_n (b_i)$, for all $k = 0, 1, \ldots, n-2$.
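This vanishing-endpoint behavior can be checked symbolically. Since implementing B-splines is beside the point here, the snippet below uses the polynomial bump $S(x) = ((x-a)(b-x))^{n-1}$ as a hypothetical stand-in (not the B-spline itself), which shares the property $D^k S(a) = 0 = D^k S(b)$ for $k = 0, \ldots, n-2$:

```python
from fractions import Fraction

def polymul(p, q):
    """Multiply two polynomials given as coefficient lists (lowest degree first)."""
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

def deriv(p):
    """Formal derivative of a coefficient list."""
    return [i * c for i, c in enumerate(p)][1:]

def peval(p, x):
    return sum(c * x ** i for i, c in enumerate(p))

a, b, n = Fraction(0), Fraction(1), 4
quad = [-a * b, a + b, Fraction(-1)]      # (x - a)(b - x) = -x^2 + (a+b)x - ab
bump = [Fraction(1)]
for _ in range(n - 1):                    # ((x - a)(b - x))^(n-1)
    bump = polymul(bump, quad)

p = bump
for k in range(n - 1):                    # derivatives of order k = 0, ..., n-2
    assert peval(p, a) == 0 == peval(p, b)
    p = deriv(p)
```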
The above considerations now entail the next theorem.
\begin{theorem}
Let $\mathbb{X}$ and $\mathbb{X}_i$, $i\in \mathbb{N}_N$, be as defined above. Let $\mathscr{I}$ be as in \eqref{Ia}. Suppose that $\blambda, \boldsymbol{S}\in \underset{i=1}{\overset{N}{\times}} C(\mathbb{X}_i)$ and that they have property (S). The RB operator \eqref{eq3.4} maps $C_\mathscr{I}$ as given by \eqref{111} into itself and is well defined. If in addition
\[
\max\left\{\|S_i\|_{\infty,\mathbb{X}_i}\,\vert\, i\in\mathbb{N}_N\right\} < 1,
\]
then $\Phi$ is contractive on $C_\mathscr{I}$.
\end{theorem}
The fixed point $\mathfrak{f}$ of $\Phi$ is called again a \textbf{continuous local fractal interpolation function}.
\begin{proof}
The assumptions on $\blambda$ and $\boldsymbol{S}$ guarantee that $\Phi$ is well defined and maps $C_\mathscr{I}$ into itself. The contractivity of $\Phi$ under the given condition follows immediately from the proof of Theorem \ref{thm7}.
\end{proof}
For the particular setting at hand, one may, of course, also construct fractal functions of class $\dot{C}^s$ and $C^n$ by imposing the relevant conditions on the function tuples $\blambda$ and $\boldsymbol{S}$ and choose the appropriate interpolation sets. We rely on the diligent reader to provide these conditions and prove the corresponding results.
\section{Sobolev Spaces $W^{m,p}$}\label{sec8}
The final type of function space we consider is the family of Sobolev spaces $W^{m,p}$ with $m\in \mathbb{N}_0$ and $1\leq p \leq \infty$. To this end, let $\Omega\subset\mathbb{R}$ be open and
$$
C^{m,p}(\Omega) :=\{f\in C^\infty (\Omega)\,\vert\, D^k f\in L^p(\Omega),\,\forall\,k=0,1,\ldots, m\}.
$$
Define functionals $\|\bullet\|_{m,p}$, $m\in \mathbb{N}_0$ and $1\leq p \leq \infty$, as follows:
\[
\|f\|_{m,p} := \begin{cases}
\displaystyle{\left(\sum_{k=0}^m \|D^k f\|^p_{L^p}\right)^{1/p}}, & 1\leq p < \infty;\\ \\
\displaystyle{\max_{k\in \{0,1,\ldots, m\}}\{\|D^kf\|_\infty\}}, & p = \infty.
\end{cases}
\]
The closure of $C^{m,p}(\Omega)$ in the norm $\|\bullet\|_{m,p}$ produces the Sobolev space $W^{m,p}(\Omega)$. The ordinary derivatives $D^k$ on $C^{m,p}(\Omega)$ have a continuous extension to $W^{m,p}(\Omega)$; these extensions are the weak derivatives $D^{(k)}$. The Sobolev space $W^{m,p}(\Omega)$ is a Banach space when endowed with the norm $\|\bullet\|_{m,p}$. For more details, we refer the reader to \cite{A}.
Now suppose $X := (0,1)$ and $\{X_i\,\vert\, i\in \mathbb{N}_N\}$ is a collection of nonempty open intervals of $X$. Further suppose that $\{x_1 < \cdots < x_{N-1}\}$ is a partition of $X$ and that $\{u_i:X_i\to X\}$ is a family of affine mappings with the property that
$u_i(X_i) = (x_{i-1}, x_i)$, for all $i\in \mathbb{N}_N$, where we set $x_0:= 0$ and $x_N:= 1$. We write $W^{m,p}$ for $W^{m,p}(X)$.
\begin{theorem}
Under the assumptions stated above, let $\blambda\in \underset{i=1}{\overset{N}{\times}} W^{m,p}(X_i)$ and let $\boldsymbol{S} := (s_1, \ldots, s_N)\in \mathbb{R}^N$. Then the RB operator $\Phi: W^{m,p}\to \mathbb{R}^{(0,1)}$, $m\in \mathbb{N}_0$ and $1\leq p \leq \infty$, defined by
\[
\Phi g := \sum_{i=1}^N (\lambda_i \circ u_i^{-1})\,\chi_{u_i(X_i)} + \sum_{i=1}^N s_i (g_i\circ u_i^{-1})\,\chi_{u_i(X_i)},
\]
has range contained in $W^{m,p}$ and is well defined. Moreover, if
\begin{equation}\label{W}
\begin{cases}
\displaystyle{\left(\max_{k\in \{0,1,\ldots,m\}} \sum_{i\in \mathbb{N}_N} \frac{|s_i|^p}{a_i^{kp -1}}\right)^{1/p} < 1}, & 1\leq p < \infty;\\
\displaystyle{\sum_{i\in \mathbb{N}_N} \frac{|s_i|}{a_i^k} < 1}, & p = \infty,
\end{cases}
\end{equation}
then $\Phi$ is contractive on $W^{m,p}$.
\end{theorem}
The unique fixed point $\mathfrak{f}$ of $\Phi$ is called a \textbf{local fractal function of class $W^{m,p}$}.
\begin{proof}
That $\Phi$ is well defined and has range contained in $W^{m,p}$ follows from the assumption on the function tuple $\blambda$ and the fact that if the weak derivative of a function $f$ exists and $u_i$ is a diffeomorphism, then the weak derivative of $f\circ u_i^{-1}$ exists and equals $(D^{(1)}f)(u_i^{-1})\cdot D u_i^{-1}$.
To prove contractivity on $W^{m,p}$, suppose that $g,h\in W^{m,p}$, $k\in \{0,1,\ldots, m\}$. Denote the ordinary derivative of $u_i$ by $a_i$. Note that $a_i > 0$ but may be larger than one. Then, for $1\leq p < \infty$, we obtain the following estimates.
\begin{align*}
\|D^{(k)}\Phi g - D^{(k)}\Phi h\|_{L^p}^p & = \int_X \left\vert D^{(k)}\sum_{i\in \mathbb{N}_N} s_i (g_i - h_i)(u_i^{-1})(x)\right\vert^p \chi_{u_i(X_i)} dx \\
& \leq \sum_{i\in \mathbb{N}_N} |s_i|^p \int_{u_i(X_i)} \left\vert D^{(k)}(g_i - h_i)(u_i^{-1}(x))\right\vert^p\left(\frac{1}{a_i}\right)^{k p} dx\\
& \leq \sum_{i\in \mathbb{N}_N} |s_i|^p \left(\frac{1}{a_i}\right)^{k p-1} \int_{X_i} \left\vert D^{(k)}(g_i - h_i)(x)\right\vert^p dx\\
& \leq \left(\sum_{i\in \mathbb{N}_N} |s_i|^p \left(\frac{1}{a_i}\right)^{k p-1}\right) \|D^{(k)} g - D^{(k)} h\|_{L^p}^p.
\end{align*}
Summing over $k = 0,1,\ldots, m$, and factoring out the maximum value of the expression in parentheses, proves the statement.
Similarly, for $p=\infty$, we get
\begin{align*}
\left\vert D^{(k)} g (x) - D^{(k)} h (x)\right\vert & = \left\vert\sum_{i\in \mathbb{N}_N} s_i D^{(k)} (g_i - h_i) (u_i^{-1})(x) \left(\frac{1}{a_i^k}\right)\chi_{u_i(X_i)}(x)\right\vert\\
& \leq \sum_{i\in \mathbb{N}_N} \frac{|s_i|}{a_i^k} \left\vert D^{(k)} (g_i - h_i) (u_i^{-1})(x)\chi_{u_i(X_i)}(x)\right\vert\\
& \leq \sum_{i\in \mathbb{N}_N} \frac{|s_i|}{a_i^k} \left\| D^{(k)} g - D^{(k)} h \right\|_{\infty},
\end{align*}
verifying the assertion.
\end{proof}
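A small numerical illustration of condition \eqref{W} (with hypothetical values $s_i$, $a_i$ not taken from the paper): even non-contractive maps $u_i$, i.e., with $a_i > 1$, are admissible, provided the scalings $s_i$ are small enough.

```python
# Hypothetical sample data for the contractivity test (W) on W^{m,p}.
m, p = 1, 2
s = [0.05, 0.04, 0.03]                    # constant scalings s_i
a = [0.5, 2.0, 1.5]                       # slopes a_i of the affine maps u_i

# Case 1 <= p < infinity: max over k of sum |s_i|^p / a_i^(kp - 1), p-th root.
bound_p = max(
    sum(abs(si) ** p / ai ** (k * p - 1) for si, ai in zip(s, a))
    for k in range(m + 1)
) ** (1 / p)

# Case p = infinity: sum |s_i| / a_i^k for each k = 0, ..., m.
bound_inf = max(
    sum(abs(si) / ai ** k for si, ai in zip(s, a)) for k in range(m + 1)
)

assert bound_p < 1 and bound_inf < 1      # Phi is contractive in both cases
```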
\section*{Acknowledgment}
The author wishes to thank the Mathematical Sciences Institute of The Australian National University for its kind hospitality and support during his research visit in May 2013 which initiated the investigation into local IFSs.
| {
"timestamp": "2013-09-06T02:03:31",
"yymm": "1309",
"arxiv_id": "1309.0243",
"language": "en",
"url": "https://arxiv.org/abs/1309.0243",
"abstract": "We introduce local iterated function systems and present some of their basic properties. A new class of local attractors of local iterated function systems, namely local fractal functions, is constructed. We derive formulas so that these local fractal functions become elements of various function spaces, such as the Lebesgue spaces $L^p$, the smoothness spaces $C^n$, the homogeneous Hölder spaces $\\dot{C}^s$, and the Sobolev spaces $W^{m,p}$.",
"subjects": "Functional Analysis (math.FA)",
"title": "Local fractal functions and function spaces",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9875683500615298,
"lm_q2_score": 0.8152324826183822,
"lm_q1q2_score": 0.8050977977760004
} |
https://arxiv.org/abs/1911.10146 | Hypersimplices are Ehrhart Positive | We consider the Ehrhart polynomial of hypersimplices. It is proved that these polynomials have positive coefficients and we give a combinatorial formula for each of them. This settles a problem posed by Stanley and also proves that uniform matroids are Ehrhart positive, an important and yet unsolved particular case of a conjecture posed by De Loera et al. To this end, we introduce a new family of numbers that we call weighted Lah numbers and study some of their properties. | \section{Introduction}
Let us fix two positive integers $n$ and $k$ with $k\leq n$. The $(k,n)$-hypersimplex, denoted by $\Delta_{k,n}$ is defined by:
\[ \Delta_{k,n} := \left\{x\in [0,1]^n : \sum_{i=1}^n x_i = k\right\}.\]
This polytope appears naturally in several contexts within geometric and algebraic combinatorics. For example, it can be seen as a weight polytope of a fundamental representation of the general linear group $\operatorname{GL}(n)$ or as the basis polytope of the uniform matroid $U_{k,n}$.\\
The basis polytope (also known as the \textit{matroid polytope}) of a matroid is defined as the convex hull of the indicator functions of its bases \cite{GGMS}, \cite{welsh}. It encodes all the information about the matroid, hence providing a geometric point of view on matroidal notions and problems. These polytopes are relevant objects in algebraic combinatorics, toric geometry, combinatorial optimization, and as of today a matter of extensive research \cite{ardila}, \cite{fink}, \cite{feichtner}, \cite{deloera}, \cite{knauer2}.
The uniform matroid $U_{k,n}$ is in particular defined as the matroid on the set $\{1,2,\ldots,n\}$ having set of bases $\mathscr{B}:= \{ B\subseteq \{1,\ldots,n\} : |B|=k\}$ and, as we said above, its basis polytope coincides with the hypersimplex $\Delta_{k,n}$.\\
An important invariant of a polytope $\mathscr{P}$ whose vertices lie in $\mathbb{Z}^d$ is the so-called \textit{Ehrhart polynomial} \cite{ehrhart}, \cite{beck}. It is defined as the polynomial $p\in \mathbb{Q}[t]$ such that $p(t) = |\mathbb{Z}^d\cap t\mathscr{P}|$ for $t\in\mathbb{Z}_{\geq 0}$, where $t\mathscr{P}$ denotes the \textit{dilation} of $\mathscr{P}$ with respect to the origin by the factor $t$. In particular, since the basis polytope of a matroid has vertices with $0/1$ coordinates, we can consider its Ehrhart polynomial, which we will sometimes refer to as the Ehrhart polynomial of the matroid itself.\\
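For intuition (this computation is not part of the paper), the Ehrhart counting function of $\Delta_{k,n}$ can be evaluated by brute force for small parameters and compared against two classical facts:

```python
from itertools import product
from math import comb

def ehrhart_count(k, n, t):
    """|Z^n cap t*Delta_{k,n}|: integer points of [0,t]^n with coordinate sum k*t."""
    return sum(1 for x in product(range(t + 1), repeat=n) if sum(x) == k * t)

# Delta_{1,n} is the standard simplex, whose Ehrhart polynomial is C(t+n-1, n-1):
assert all(ehrhart_count(1, 4, t) == comb(t + 3, 3) for t in range(5))

# The symmetry x -> t - x gives Delta_{k,n} and Delta_{n-k,n} the same counts:
assert all(ehrhart_count(2, 5, t) == ehrhart_count(3, 5, t) for t in range(4))
```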
In the paper \cite[Conjecture 2(B)]{deloera}, the authors posed the following conjecture:
\begin{conj}[De Loera et al]\label{conje}
Let $\mathscr{P}(M)$ be the basis polytope of a matroid $M$. Then the coefficients of the Ehrhart polynomial of $\mathscr{P}(M)$ are positive.
\end{conj}
Also, in \cite[Lemma 29]{deloera} the authors proved that the hypersimplices $\Delta_{2,n}$ have an Ehrhart polynomial with positive coefficients. Their proof is based on inequalities and uses a result of Katzman \cite{katzman}, a formula for the Ehrhart polynomial of hypersimplices found in the context of algebras of Veronese type. Moreover, as a corollary of a result in \cite{ohsugi} regarding the complex roots of the Ehrhart polynomial of $\Delta_{3,n}$, one can conclude the positivity of the Ehrhart coefficients in this particular case as well.\\
We will restate Katzman's formula in the following section, prove it using generating functions and exploit it to prove the main result of this article.
\begin{teo}\label{main}
The coefficients of the Ehrhart polynomial of all hypersimplices $\Delta_{k,n}$ are positive.
\end{teo}
Moreover, we are going to introduce the notion of \textit{weighted Lah number} and to provide a combinatorial formula for each coefficient in terms of them. We thus settle an open problem posed in Richard Stanley's book \cite[Ch. 4, Problem 62e]{stanley}.
It is worth noting that according to \cite{stanleyeulerian} the calculation of the principal coefficient of these polynomials (that is, the normalized volume of the hypersimplex) dates back to Laplace, though apparently he did not carry it out explicitly. The principal coefficients are what in the literature are called Eulerian numbers, and several combinatorial interpretations exist for them \cite{stanley,knuth}. Thanks to some properties of Ehrhart polynomials and the fact that the facets of a hypersimplex are themselves hypersimplices, a combinatorial interpretation of the second highest coefficient in terms of Eulerian numbers was also available. However, the rest of the coefficients remained elusive. More recently, independent proofs of the positivity of the linear coefficient of the Ehrhart polynomial of all hypersimplices were found in \cite{liutodd} and \cite{jochemko2019generalized}.\\
In the paper \cite{fuliu}, Conjecture \ref{conje} has been strengthened and reformulated in a more general setting, asserting that indeed all \textit{generalized permutohedra} \cite{postnikov} are Ehrhart positive. Since it is known \cite{ardila} that this family contains all matroid polytopes, that conjecture is indeed stronger. Also, in \cite{liu} there is a survey on the families of polytopes that are known to be Ehrhart positive, and those that are conjectured to have this property. \\
The Ehrhart $h^*$-polynomial \cite{beck,stanleyhstar} of hypersimplices is itself a rich object of study. For example, the unimodality of its coefficients is still an open problem \cite{braununimodality}. Recently the author conjectured that the $h^*$-polynomials of all matroid polytopes (in particular, of hypersimplices) are real-rooted \cite{ferroni2}. Some combinatorial interpretations for the coefficients of the $h^*$-polynomials of hypersimplices were found in \cite{nanli}, \cite{early2017conjectures} and \cite{kim}.
\section{The Ehrhart Polynomial of Hypersimplices}
We will base our computations on a formula found by Katzman in \cite[Corollary 2.2]{katzman} for $E_{k,n}$. We will provide a generating-function based proof.
\begin{teo}\label{katz}
The Ehrhart Polynomial $E_{k,n}(t)$ of the hypersimplex $\Delta_{k,n}$ is given by:
\begin{equation}\label{formula}
E_{k,n}(t) = \sum_{j=0}^{k-1} (-1)^j \binom{n}{j} \binom{(k-j)t+n-1-j}{n-1}.
\end{equation}
\end{teo}
It is not at all apparent from this formula that the coefficients of the polynomial $E_{k,n}$ are positive. Indeed, the alternating factor $(-1)^j$, together with the fact that the variable $t$ appears inside a binomial coefficient which for $j>1$ expands to a polynomial with some negative coefficients, does not permit us to see this fact directly. Before proving Theorem \ref{katz} we establish a useful lemma.
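As a quick sanity check, formula \eqref{formula} can be compared against a direct lattice-point count for small parameters. A minimal sketch (the function names are ours, not from the paper):

```python
from itertools import product
from math import comb

def ehrhart(k, n, t):
    # Katzman's formula for E_{k,n}(t).
    return sum((-1) ** j * comb(n, j) * comb((k - j) * t + n - 1 - j, n - 1)
               for j in range(k))

def lattice_points(k, n, t):
    # Direct count of integer points in the dilation t * Delta_{k,n}:
    # vectors y in {0,...,t}^n with coordinate sum k*t.
    return sum(1 for y in product(range(t + 1), repeat=n) if sum(y) == k * t)

for n in range(2, 6):
    for k in range(1, n):
        for t in range(4):
            assert ehrhart(k, n, t) == lattice_points(k, n, t)
```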
\begin{lema}\label{formulita}
If $1\leq k\leq n-1$ and $t\geq 0$, then the coefficient of $x^{kt}$ in the polynomial $(1+x+x^2+\ldots+x^t)^n$ is exactly $E_{k,n}(t)$.
\end{lema}
\begin{proof}
By definition, the polynomial $E_{k,n}(t)$ counts the number of elements in the set $t\Delta_{k,n} \cap \mathbb{Z}^n $. This set can be rewritten as:
\[\left \{ y \in \{0,1,\ldots,t\}^n : \sum_{i=1}^n y_i = kt\right\}.\]
But notice that the coefficient of $x^{kt}$ in the product
\[(1+x+x^2+\ldots+x^t)^n= \underbrace{(1+x+x^2+\ldots+x^t) \cdot \ldots \cdot (1+x+x^2+\ldots+x^t)}_{n \text{ times}} \]
is exactly the number of ways of choosing a sequence of $n$ elements in the set $\{0,1,\ldots,t\}$ in such a way that their sum is exactly $kt$. That is exactly the cardinality of our set.
\end{proof}
Recall that if one has a (formal) power series $f(x):=\sum_{j=0}^{\infty} a_j x^j$, it is customary to use the notation $[x^\ell]f(x):=a_{\ell}$.
\begin{proof}[Proof of Theorem \ref{katz}]
We will use generating functions to compute the coefficient of $x^{kt}$ in $(1+x+\ldots+x^t)^n$ and then we will use the preceding Lemma. Notice that:
\begin{align*}
[x^{kt}] \left(1+x+\ldots+x^t\right)^n &= [x^{kt}]\left(\frac{1-x^{t+1}}{1-x}\right)^n\\
&= [x^{kt}] \left((1-x^{t+1})^n \cdot \frac{1}{(1-x)^n}\right)
\end{align*}
So writing $(1-x^{t+1})^n = \sum_{j=0}^n (-1)^j \binom{n}{j} x^{(t+1)j}$ and $\frac{1}{(1-x)^n} = \sum_{j=0}^{\infty} \binom{n-1+j}{n-1} x^j$, the coefficient of $x^{kt}$ in this product can be computed as a convolution:
\[ \sum_{j=0}^{k-1} (-1)^j \binom{n}{j} \binom{n-1+(k-j)t-j}{n-1},\]
where the sum ends in $k-1$ since in the first of our two formal series we have $x^{(t+1)j}$ and we are computing the coefficient of $x^{kt}$. Also, the second binomial coefficient in our expression comes from the fact that $(t+1)j + ((k-j)t-j)=kt$.
\end{proof}
\section{Weighted Lah Numbers}
In this section we develop some useful tools to prove Theorem \ref{main}. We recall the definition of \textit{Lah numbers} (also known as \textit{Stirling Numbers of the 3rd kind}).
\begin{defi}
The \textit{Lah number} $L(n,m)$ is defined as the number of ways of partitioning the set $\{1,2,\ldots,n\}$ in exactly $m$ linearly ordered blocks. We will denote the set of all such partitions by $\mathscr{L}(n,m)$.
\end{defi}
\begin{ej}
$L(3,2)=6$ because we have the following possible partitions:
\[ \{(1,2),(3)\}, \{(2,1),(3)\},\]
\[ \{(1,3),(2)\}, \{(3,1),(2)\},\]
\[ \{(2,3),(1)\}, \{(3,2),(1)\}.\]
\end{ej}
If $\pi$ is a partition of $\{1,\ldots,n\}$ in $m$ linearly ordered blocks, for any of these blocks $b$, we will write $b\in \pi$. So, for example $(2,3)\in \{(2,3),(1)\}$. Also, we will use the notation $|b|$ to denote the number of elements in $b$.
\begin{obs}
We have the equality $L(n,m)=\frac{n!}{m!}\binom{n-1}{m-1}$. This can be proven easily by the following combinatorial argument. Order the $n$ numbers of the set in any of the $n!$ possible ways. To get a partition into $m$ linearly ordered blocks, place $m-1$ dividers in any of the $n-1$ spaces between two consecutive numbers. Since the blocks are unordered, divide by $m!$, the number of ways of ordering the blocks.
\end{obs}
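The closed formula of the remark can be checked against a brute-force enumeration of partitions into linearly ordered blocks. A minimal sketch (all names ours):

```python
from itertools import combinations, permutations
from math import comb, factorial

def ordered_block_partitions(n, m):
    # All partitions of {1,...,n} into m linearly ordered blocks,
    # each encoded as a frozenset of tuples (brute force).
    found = set()
    for w in permutations(range(1, n + 1)):
        for cuts in combinations(range(1, n), m - 1):
            pos = (0,) + cuts + (n,)
            found.add(frozenset(w[pos[i]:pos[i + 1]] for i in range(m)))
    return found

def lah(n, m):
    # Closed formula from the remark: L(n, m) = (n!/m!) * C(n-1, m-1).
    return factorial(n) * comb(n - 1, m - 1) // factorial(m)

for n in range(1, 6):
    for m in range(1, n + 1):
        assert len(ordered_block_partitions(n, m)) == lah(n, m)
```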
There already exist some generalizations of these numbers \cite{genlah}. We will introduce a new one that we will call \textit{weighted Lah numbers}.
\begin{defi}
Let $\pi$ be a partition of the set $\{1,\ldots,n\}$ into $m$ linearly ordered blocks. We define the \textit{weight of $\pi$} by the following formula:
\[ w(\pi) := \sum_{b\in\pi} w(b),\]
where $w(b)$ is the number of elements in $b$ that are smaller (as positive integers) than the first element in $b$.
\end{defi}
\begin{ej}\label{ejemplito}
Among the $6$ partitions that we have seen that exist of $\{1,2,3\}$ into $2$ blocks, we have:
\[ w(\{(1,2),(3)\}) = 0 + 0 = 0,\;\; w(\{(2,1),(3)\}) = 1 + 0 = 1,\]
\[ w(\{(1,3),(2)\}) = 0 + 0 = 0,\;\; w(\{(3,1),(2)\}) = 1 + 0 = 1,\]
\[ w(\{(2,3),(1)\}) = 0 + 0 = 0,\;\; w(\{(3,2),(1)\}) = 1 + 0 = 1.\]
Note that there are exactly $3$ of these partitions of weight $0$ and exactly $3$ of weight $1$.
\end{ej}
\begin{defi}
We define the \textit{weighted Lah Numbers} $W(\ell,n,m)$ as the number of partitions of weight $\ell$ of $\{1,\ldots,n\}$ into exactly $m$ linearly ordered blocks.
\end{defi}
\begin{ej}
Rephrasing the conclusion of the Example \ref{ejemplito}, we have that $W(0,3,2)=3$ and $W(1,3,2)=3$.
\end{ej}
\begin{table}
\parbox{.45\linewidth}{\begin{tabular}{>{$}l<{$}|*{5}{c}}
\multicolumn{1}{l}{$m$} &&&&&\\\cline{1-1}
1 &24&24&24&24&24\\
2 &50&70&70&50\\
3 &35&50&35&&\\
4 &10&10&&&\\
5 &1&&&&\\\hline
\multicolumn{1}{l}{} &0&1&2&3&4\\\cline{2-6}
\multicolumn{1}{l}{} &\multicolumn{5}{c}{$\ell$}
\end{tabular}
\caption{$W(\ell,5,m)$}}
\quad\quad\quad
\parbox{.45\linewidth}{\begin{tabular}{>{$}l<{$}|*{6}{c}}
\multicolumn{1}{l}{$m$} &&&&&&\\\cline{1-1}
1 &120&120&120&120&120&120\\
2 &274&404&444&404&274\\
3 &225&375&375&225&&\\
4 &85&130&85&&&\\
5 &15&15&&&&\\
6 &1&&&&&\\\hline
\multicolumn{1}{l}{} &0&1&2&3&4&5\\\cline{2-7}
\multicolumn{1}{l}{} &\multicolumn{6}{c}{$\ell$}
\end{tabular}
\caption{$W(\ell,6,m)$}}
\end{table}
The set of all partitions of $\{1,\ldots,n\}$ into $m$ linearly ordered blocks and weight $\ell$ will be denoted by $\mathscr{W}(\ell,n,m)$.
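The definition can be tested by brute force; the sketch below (names ours) reproduces the example above, a row of the first table, and the symmetry $W(\ell,n,m)=W(n-m-\ell,n,m)$ stated in a remark below:

```python
from collections import Counter
from itertools import combinations, permutations

def block_weight(b):
    # Number of entries of the block b smaller than its first entry.
    return sum(1 for x in b if x < b[0])

def weighted_lah_table(n, m):
    # Counter mapping each weight l to W(l, n, m), by brute-force enumeration.
    counts, seen = Counter(), set()
    for w in permutations(range(1, n + 1)):
        for cuts in combinations(range(1, n), m - 1):
            pos = (0,) + cuts + (n,)
            pi = frozenset(w[pos[i]:pos[i + 1]] for i in range(m))
            if pi not in seen:
                seen.add(pi)
                counts[sum(block_weight(b) for b in pi)] += 1
    return counts

# The example above: W(0,3,2) = W(1,3,2) = 3.
assert weighted_lah_table(3, 2) == Counter({0: 3, 1: 3})
# Row m = 2 of the first table: W(l, 5, 2) for l = 0, 1, 2, 3.
assert [weighted_lah_table(5, 2)[l] for l in range(4)] == [50, 70, 70, 50]
# Symmetry W(l, n, m) = W(n - m - l, n, m).
for n in range(2, 7):
    for m in range(1, n + 1):
        table = weighted_lah_table(n, m)
        assert all(table[l] == table[n - m - l] for l in range(n - m + 1))
```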
\begin{obs}
Observe that $W(\ell,n,m)\neq 0$ only for $0\leq \ell \leq n-m$. This is because the maximum weight is obtained by ordering every block in such a way that its maximum element is in the first position, giving total weight $\sum_{b\in\pi}(|b|-1)=n-m$. Also, we have the following:
\[ W(0,n,m) = {n \brack m}\]
where the brackets denote the \textit{(unsigned) Stirling numbers of the first kind} \cite{knuth}. This can be proven combinatorially by noticing that every permutation with exactly $m$ cycles can be presented in a unique way as a partition of $\{1,\ldots,n\}$ into $m$ linearly ordered blocks in which every block has its minimum element in the first position.
\end{obs}
\begin{obs}
We have symmetry, namely:
\[ W(\ell,n,m) = W(n-m-\ell,n,m).\]
This equality is a consequence of the fact that for $\pi \in \mathscr{W}(\ell,n,m)$ we can associate bijectively an element $\pi'\in \mathscr{W}(n-m-\ell,n,m)$ as follows. In $\pi$ interchange the positions of $1$ and $n$, of $2$ and $n-1$, and so on. What one gets is exactly a partition of weight $n-m-\ell$.
\end{obs}
It is possible to obtain many recurrences to compute $W(\ell,n,m)$ recursively. For instance we include the following:
\begin{prop}
The following recurrence holds for $n,m\geq 2$:
\[W(\ell,n,m) = (n-1) W(\ell-1,n-1,m) + \sum_{j=0}^{n-1} \binom{n-1}{j} j! W(\ell,n-1-j,m-1).\]
\end{prop}
\begin{proof}
Every $\pi\in\mathscr{W}(\ell,n,m)$ has the number $1$ inside a block. If this number is \textit{not} the first element of its block, this means that if we remove it from $\pi$ we end up getting an element of $\mathscr{W}(\ell-1,n-1,m)$ (with every element shifted by one). Analogously, we can pick an element of $\mathscr{W}(\ell-1,n-1,m)$ (which we think of as having every element shifted by one) and reconstruct an element of $\mathscr{W}(\ell,n,m)$ by adjoining the element $1$ in such a way that it is not the first element of a block. There are $n-1$ possibilities of where to put the number $1$ to get an element of $\mathscr{W}(\ell,n,m)$. So we get the first summand.
The remaining cases to consider are those on which $1$ is the first element of its block. In this case we choose $j$ elements to be in this block, and in every possible order of these elements, the block will always have weight $0$. So the remaining $n-j-1$ elements will have to be arranged in $m-1$ blocks of total weight $\ell$.
\end{proof}
\begin{obs}
The last proposition tells us that if we form the difference $W(\ell,n,m)-(n-1)W(\ell,n-1,m)$, the sums telescope and we are left with just the recurrence:
\begin{align*}
W(\ell,n,m) &= (n-1)W(\ell-1,n-1,m) + (n-1)W(\ell,n-1,m) \\
&\;+ W(\ell,n-1,m-1) -(n-1)(n-2)W(\ell-1,n-2,m).
\end{align*}
\end{obs}
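Both recurrences can be verified numerically for small parameters; a brute-force sketch (names ours):

```python
from functools import lru_cache
from itertools import combinations, permutations
from math import comb, factorial

@lru_cache(maxsize=None)
def W(l, n, m):
    # W(l, n, m) by brute-force enumeration; 0 outside the valid range.
    if l < 0 or m < 1 or n < m:
        return 0
    seen, count = set(), 0
    for w in permutations(range(1, n + 1)):
        for cuts in combinations(range(1, n), m - 1):
            pos = (0,) + cuts + (n,)
            pi = frozenset(w[pos[i]:pos[i + 1]] for i in range(m))
            if pi not in seen:
                seen.add(pi)
                if sum(sum(1 for x in b if x < b[0]) for b in pi) == l:
                    count += 1
    return count

for n in range(2, 7):
    for m in range(2, n + 1):
        for l in range(n - m + 1):
            # Recurrence of the proposition.
            rhs = (n - 1) * W(l - 1, n - 1, m) + sum(
                comb(n - 1, j) * factorial(j) * W(l, n - 1 - j, m - 1)
                for j in range(n))
            assert W(l, n, m) == rhs
            # Four-term recurrence of the remark.
            assert W(l, n, m) == ((n - 1) * W(l - 1, n - 1, m)
                                  + (n - 1) * W(l, n - 1, m)
                                  + W(l, n - 1, m - 1)
                                  - (n - 1) * (n - 2) * W(l - 1, n - 2, m))
```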
We establish now a bivariate generating function for $W(\ell,n,m)$ for a fixed $m$.
\begin{teo}\label{genfun}
We have the equality:
\[W(\ell,n,m) = \frac{n!}{m!}[x^n s^\ell] \left(\tfrac{1}{(1-s)^m} \left(\log\left(\tfrac{1}{1-x}\right) - \log\left(\tfrac{1}{1-sx}\right)\right)^m\right) \]
\end{teo}
\begin{proof}
Notice that it suffices to prove that:
\begin{equation} \label{critica}
W(\ell,n,m) = \frac{n!}{m!} [x^n s^{\ell}] \left( \sum_{k=1}^{\infty} \frac{x^k}{k} (1+s+\ldots+s^{k-1})\right)^m.
\end{equation}
This is because using the formula for the geometric series, the sum in the parentheses can be rewritten as $\frac{1}{1-s}\left(\sum_{k=1}^{\infty} \frac{x^k}{k} - \sum_{k=1}^{\infty} \frac{(sx)^k}{k}\right)$ which in turn is just \[\frac{1}{1-s}\left(\log\left(\frac{1}{1-x}\right) - \log\left(\frac{1}{1-sx}\right) \right)\] which gives the desired result. Now, to prove \eqref{critica} we proceed as follows. First notice that:
\begin{align}
m! W(\ell,n,m) = \sum_{\widetilde{\pi}} \sum_{\substack{(j_1,\ldots,j_m)\in \mathbb{Z}^m\\ j_1+\ldots+j_m = \ell \\ 0\leq j_i < |b_i|}} \prod_{i=1}^m (|b_i|-1)!
\end{align}
where the first sum runs over all the orderings $\widetilde{\pi}=(b_1,\ldots,b_m)$ of all elements $\pi=\{b_1,\ldots,b_m\}\in \mathscr{L}(n,m)$. This comes from the fact that for every such $\widetilde{\pi}$, once we choose how much weight to assign to each of the blocks, each block has its first element determined, and the remaining elements can be reordered in any fashion. Of course, this way we count every element of $\mathscr{W}(\ell,n,m)$ exactly $m!$ times. Taking the product out of the second sum above, we get:
\begin{align*}
m! W(\ell,n,m) &= \sum_{\widetilde{\pi}} \left(\prod_{i=1}^m (|b_i|-1)! \right)\left| \left\{(j_1,\ldots,j_m)\in \mathbb{Z}^m : \sum_{i=1}^m j_i = \ell, 0\leq j_i < |b_i|\right\}\right|\\
&= \sum_{\widetilde{\pi}} \left(\prod_{i=1}^m (|b_i|-1)! \right) [s^\ell]\left( \prod_{i=1}^{m}\sum_{j=0}^{|b_i|-1} s^{j} \right)\\
&= [s^{\ell}] \sum_{\widetilde{\pi}} \left(\prod_{i=1}^m (|b_i|-1)! \sum_{j=0}^{|b_i|-1} s^{j}\right)
\end{align*}
Notice that the term inside the last sum does not depend on the whole element $\widetilde{\pi}=(b_1,\ldots,b_m)$, but only on the size $|b_i|$ of each block. Thus, if we fix the sizes $|b_1|, \ldots, |b_m|$ of the blocks, we can count exactly how many elements $\widetilde{\pi}$ have blocks of those sizes. Using multinomial coefficients, and abusing notation to write $b_i=|b_i|$:
\begin{align*}
m! W(\ell,n,m) &= [s^{\ell}] \sum_{\substack{(b_1,\ldots,b_m)\in\mathbb{Z}^m\\ b_1+\ldots+b_m = n \\ b_i \geq 1}} \binom{n}{b_1,\ldots,b_m}\left(\prod_{i=1}^m (b_i-1)! \sum_{j=0}^{b_i-1}
s^{j}\right)\\
&= [s^{\ell}] \sum_{\substack{(b_1,\ldots,b_m)\in\mathbb{Z}^m\\ b_1+\ldots+b_m = n \\ b_i \geq 1}} n! \left(\prod_{i=1}^m \frac{1}{b_i} \sum_{j=0}^{b_i-1} s^j\right)\\
&= [s^\ell x^n] n! \left(\sum_{k=1}^n \frac{x^k}{k} (1+s+\ldots+s^{k-1})\right)^m,
\end{align*}
which proves \eqref{critica}.
\end{proof}
\begin{coro}\label{clave}
For all $\ell,n,m$ one has:
\[ W(\ell,n,m) = \sum_{j=0}^\ell \sum_{i=0}^{n-m} (-1)^{i+j} \binom{n}{j} \binom{m+\ell-j-1}{m-1} {j\brack {j-i}}{{n-j}\brack {m+i-j}}.\]
\end{coro}
\begin{proof}
From the exponential generating function of the Stirling numbers of the first kind \cite[pg. 351]{knuth} one has:
\[ {\alpha \brack \beta} = \frac{\alpha!}{\beta!} [x^\alpha] \left(\log\left(\tfrac{1}{1-x}\right)\right)^\beta.\]
Now, using Theorem \ref{genfun}, we have the chain of equalities:
\begin{align}
W(\ell,n,m) &= \frac{n!}{m!}[x^n s^\ell] \left(\tfrac{1}{(1-s)^m} \left(\log\left(\tfrac{1}{1-x}\right) - \log\left(\tfrac{1}{1-sx}\right)\right)^m\right) \nonumber \\
&= \frac{n!}{m!} [x^ns^\ell] \left(\frac{1}{(1-s)^m} \sum_{k=0}^m (-1)^k \binom{m}{k} \left(\log\left(\tfrac{1}{1-x}\right)\right)^{m-k} \left(\log\left(\tfrac{1}{1-sx}\right)\right)^k \right)\nonumber\\
&= n! [x^n s^\ell] \left(\frac{1}{(1-s)^m} \sum_{k=0}^m (-1)^k \frac{\log\left(\tfrac{1}{1-x}\right)^{m-k}}{(m-k)!}\frac{\log\left(\tfrac{1}{1-sx}\right)^{k}}{k!} \right)\nonumber\\
&= n! [s^\ell] \left(\frac{1}{(1-s)^m} \sum_{k=0}^m (-1)^k \sum_{j=0}^n [x^{n-j}]\left(\frac{\log\left(\tfrac{1}{1-x}\right)^{m-k}}{(m-k)!}\right) [x^j]\left( \frac{\log\left(\tfrac{1}{1-sx}\right)^{k}}{k!}\right) \right)\nonumber\\
&= n! [s^\ell] \left(\frac{1}{(1-s)^m} \sum_{k=0}^m (-1)^k \sum_{j=k}^{n-m+k} [x^{n-j}]\left(\frac{\log\left(\tfrac{1}{1-x}\right)^{m-k}}{(m-k)!}\right) [x^j]\left( \frac{\log\left(\tfrac{1}{1-sx}\right)^{k}}{k!}\right) \right)\nonumber\\
&= n![s^\ell] \left( \frac{1}{(1-s)^m} \sum_{k=0}^m (-1)^k \sum_{j=k}^{n-m+k} \frac{1}{(n-j)!} {{n-j}\brack{m-k}} \frac{1}{j!} s^j{j\brack k}\right)\nonumber\\
&= [s^\ell]\left( \frac{1}{(1-s)^m} \sum_{k=0}^m\sum_{j=k}^{n-m+k} (-1)^k \binom{n}{j}s^j {{n-j}\brack{m-k}} {{j}\brack{k}}\right)\nonumber\\
&= [s^\ell]\left( \frac{1}{(1-s)^m} \sum_{j=0}^n\sum_{k=j-(n-m)}^{j} (-1)^k \binom{n}{j}s^j {{n-j}\brack{m-k}} {{j}\brack{k}}\right)\nonumber\\
&= [s^\ell]\left( \frac{1}{(1-s)^m} \sum_{j=0}^n\sum_{i=0}^{n-m} (-1)^{j-i} \binom{n}{j} {{n-j}\brack{m+i-j}} {{j}\brack{j-i}}s^j\right)\nonumber\\
&= \sum_{j=0}^{\ell} \sum_{i=0}^{n-m}(-1)^{j-i} \binom{n}{j} \binom{m-1+\ell-j}{m-1} {{n-j}\brack{m+i-j}} {j\brack{j-i}}\nonumber
\end{align}
where in the fifth equation we changed the limits from $0\leq j \leq n$ to $k\leq j \leq n-m+k$ given that the coefficients of the first factor inside the sum are zero for degree $n-j < m-k$ and the coefficients of the second factor are zero for $j< k$.
\end{proof}
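The double-sum formula of the corollary can be cross-checked against the example and the tables of the previous section. A sketch (names ours), computing the unsigned Stirling numbers of the first kind via the standard recurrence ${n \brack k} = {n-1 \brack k-1} + (n-1){n-1\brack k}$:

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def stirling1(n, k):
    # Unsigned Stirling numbers of the first kind.
    if n == 0:
        return 1 if k == 0 else 0
    if k < 0 or k > n:
        return 0
    return stirling1(n - 1, k - 1) + (n - 1) * stirling1(n - 1, k)

def W(l, n, m):
    # The double-sum formula of the corollary.
    return sum((-1) ** (i + j) * comb(n, j) * comb(m + l - j - 1, m - 1)
               * stirling1(j, j - i) * stirling1(n - j, m + i - j)
               for j in range(l + 1) for i in range(n - m + 1))

# Values from the example and the two tables of the previous section.
assert (W(0, 3, 2), W(1, 3, 2)) == (3, 3)
assert [W(l, 5, 2) for l in range(4)] == [50, 70, 70, 50]
assert [W(l, 6, 3) for l in range(4)] == [225, 375, 375, 225]
# W(0, n, m) is an unsigned Stirling number of the first kind.
assert all(W(0, n, m) == stirling1(n, m)
           for n in range(1, 8) for m in range(1, n + 1))
```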
\section{The Proof of Theorem \ref{main}}
For $0\leq m\leq n-1$, we will write $e_{k,n,m}$ for the coefficient of $t^m$ in the polynomial $E_{k,n}(t)$. Our aim is to show that all these $e_{k,n,m}$ are positive.\\
For integers $a,b,u$ with $u\geq 0$, we will denote by $P_{a,b}^u$ the sum of all possible products of $u$ distinct integers chosen in the interval of integers $[a,b]$. That is:
\[ P_{a,b}^u := \sum_{a\leq x_1 < \ldots < x_u \leq b} x_1\cdot\ldots\cdot x_u.\]
It is easy to see that for $a=1$ one gets:
\begin{equation}\label{prostir}
P^u_{1,b} = {{b+1}\brack b+1-u},
\end{equation}
where the brackets denote the (unsigned) Stirling numbers of the first kind \cite{stanley}.
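Identity \eqref{prostir} is easy to test numerically; a sketch (names ours):

```python
from functools import lru_cache
from itertools import combinations
from math import prod

@lru_cache(maxsize=None)
def stirling1(n, k):
    # Unsigned Stirling numbers of the first kind.
    if n == 0:
        return 1 if k == 0 else 0
    if k < 0 or k > n:
        return 0
    return stirling1(n - 1, k - 1) + (n - 1) * stirling1(n - 1, k)

def P(u, a, b):
    # Sum of all products of u distinct integers chosen from [a, b].
    return sum(prod(c) for c in combinations(range(a, b + 1), u))

for b in range(1, 8):
    for u in range(b + 1):
        assert P(u, 1, b) == stirling1(b + 1, b + 1 - u)
```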
\begin{lema}\label{coeficientes}
The following formula holds:
\[ e_{k,n,m} = \frac{1}{(n-1)!} \sum_{j=0}^{k-1} \sum_{i=0}^{n-m-1} (-1)^{i+j} \binom{n}{j} (k-j)^m {{n-j}\brack{m+1+i-j}}{j \brack {j-i}}.\]
\end{lema}
\begin{proof}
We will work with the formula \eqref{formula}. Observe that:
\begin{align*}
[t^m] \binom{(k-j)t+n-1-j}{n-1} &= \frac{1}{(n-1)!} [t^m] \left( ((k-j)t + n-1-j) \cdot \ldots \cdot ((k-j)t + 1 - j)\right)\\
&= \frac{1}{(n-1)!} (k-j)^m P^{n-1-m}_{1-j,n-1-j}.
\end{align*}
Observe that one has the following equality:
\begin{align*}
P^{n-1-m}_{1-j,n-1-j} &= \sum_{i=0}^{n-m-1} P^i_{1-j,-1} P^{n-1-m-i}_{1,n-1-j}\\
&= \sum_{i=0}^{n-m-1} (-1)^i P^i_{1,j-1} P^{n-1-m-i}_{1,n-1-j}.
\end{align*}
Therefore, using \eqref{prostir} we have that \[P_{1-j,n-1-j}^{n-m-1} = \sum_{i=0}^{n-m-1} (-1)^i {j\brack {j-i}} {{n-j}\brack {m+1+i-j}},\]
so, in particular,
\[ [t^m] \binom{(k-j)t+n-1-j}{n-1} =\frac{1}{(n-1)!}\sum_{i=0}^{n-m-1} (-1)^i (k-j)^m {j\brack {j-i}} {{n-j}\brack {m+1+i-j}}.\]
The result follows easily from \eqref{formula} and this last identity.
\end{proof}
\begin{obs}
If we use the shorter name \[f_{j,n,m}:=\sum_{i=0}^{n-m-1}(-1)^i {j\brack {j-i}} {{n-j}\brack {m+1+i-j}},\] we can rewrite the formula of Lemma \ref{coeficientes} as follows:
\begin{equation} \label{coefi}
e_{k,n,m} = \frac{1}{(n-1)!} \sum_{j=0}^{k} (-1)^j \binom{n}{j} (k-j)^m f_{j,n,m},
\end{equation}
where we changed the upper limit of the sum since, when $j=k$, we are adding $0$.
\end{obs}
Now we are ready to state and prove the result from which our main theorem follows.
\begin{teo}
For all $k,n,m$ as above, we have that:
\[ e_{k,n,m} = \frac{1}{(n-1)!}\sum_{\ell=0}^{k-1} W(\ell,n,m+1) A(m,k-\ell-1),\]
where $A(m,k-\ell-1)$ stands for the \textit{Eulerian numbers} \cite{knuth,stanley}. In particular $e_{k,n,m}$ is positive.
\end{teo}
\begin{proof}
From equation \eqref{coefi} we can see that:
\[ e_{k,n,m} = \frac{1}{(n-1)!} [x^k] F_{n,m}(x) \cdot G_m(x),\]
where $F_{n,m}(x) := \sum_{j=0}^n (-1)^j \binom{n}{j} f_{j,n,m} x^j$ and $G_m(x) := \sum_{j=0}^{\infty} j^m x^j$. It is a well known consequence of the \textit{Worpitzky Identity} \cite{knuth} that:
\[ G_m(x) = \frac{1}{(1-x)^{m+1}} \sum_{j=0}^m A(m,j) x^{j+1},\]
where $A(m,j)$ is an Eulerian number (the number of permutations of $m$ elements with exactly $j$ descents).\\
So we have that the product $F_{n,m}(x)\cdot G_m(x)$ is equal to:
\[ \frac{1}{(1-x)^{m+1}} F_{n,m}(x) \sum_{j=0}^m A(m,j) x^{j+1}.\]
We compute the product of the first two factors. Let:
\[ C_{n,m}(x) := \frac{1}{(1-x)^{m+1}} F_{n,m}(x),\]
and observe that:
\begin{align*}
[x^{\ell}] C_{n,m}(x) &= [x^\ell] \left( \frac{1}{(1-x)^{m+1}} F_{n,m}(x)\right) \\
&= \sum_{j=0}^{\ell} (-1)^j \binom{n}{j} f_{j,n,m} \binom{m+\ell-j}{m}\\
&= \sum_{j=0}^\ell \sum_{i=0}^{n-m-1} (-1)^{i+j} \binom{n}{j}\binom{m+\ell-j}{m} {j\brack{j-i}} {{n-j}\brack{m+1+i-j}}\\
&= W(\ell,n,m+1).
\end{align*}
where in the last step we used Corollary \ref{clave}. In particular $C_{n,m}(x)$ is a polynomial, and the result now follows computing the product $C_{n,m}(x) \cdot \sum_{j=0}^m A(m,j) x^{j+1}$ to get the identity of the statement.
\end{proof}
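The identity of the theorem, together with the positivity it implies, can be verified for small $k,n$ by expanding Katzman's formula with exact rational arithmetic. A sketch (names ours; the weighted Lah numbers are computed via the double-sum formula of Corollary \ref{clave}):

```python
from fractions import Fraction
from functools import lru_cache
from math import comb, factorial

@lru_cache(maxsize=None)
def stirling1(n, k):
    # Unsigned Stirling numbers of the first kind.
    if n == 0:
        return 1 if k == 0 else 0
    if k < 0 or k > n:
        return 0
    return stirling1(n - 1, k - 1) + (n - 1) * stirling1(n - 1, k)

def W(l, n, m):
    # Weighted Lah numbers, computed via the corollary's double-sum formula.
    return sum((-1) ** (i + j) * comb(n, j) * comb(m + l - j - 1, m - 1)
               * stirling1(j, j - i) * stirling1(n - j, m + i - j)
               for j in range(l + 1) for i in range(n - m + 1))

@lru_cache(maxsize=None)
def eulerian(m, j):
    # Eulerian number A(m, j): permutations of m elements with j descents.
    if j < 0 or (m > 0 and j > m - 1) or (m == 0 and j > 0):
        return 0
    if m == 0:
        return 1
    return (j + 1) * eulerian(m - 1, j) + (m - j) * eulerian(m - 1, j - 1)

def ehrhart_coeffs(k, n):
    # Coefficients [e_{k,n,0}, ..., e_{k,n,n-1}] of E_{k,n}(t), obtained by
    # expanding Katzman's formula with exact rational arithmetic.
    total = [Fraction(0)] * n
    for j in range(k):
        p = [Fraction(1)]               # running product of linear factors in t
        for c in range(1 - j, n - j):   # constants 1-j, ..., n-1-j
            q = [Fraction(0)] * (len(p) + 1)
            for i, a in enumerate(p):
                q[i] += c * a            # constant part of the factor
                q[i + 1] += (k - j) * a  # coefficient of t in the factor
            p = q
        for i in range(n):
            total[i] += (-1) ** j * comb(n, j) * p[i] / factorial(n - 1)
    return total

for n in range(2, 7):
    for k in range(1, n):
        coeffs = ehrhart_coeffs(k, n)
        assert all(c > 0 for c in coeffs)  # Ehrhart positivity of Delta_{k,n}
        for m in range(n):
            rhs = Fraction(sum(W(l, n, m + 1) * eulerian(m, k - l - 1)
                               for l in range(k)), factorial(n - 1))
            assert coeffs[m] == rhs
```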
\begin{obs}
For small values of $n$ and $m$, the author initially observed that the series $C_{n,m}$ defined in the above proof always turned out to be a polynomial with positive coefficients, and that $C_{n,m}(1) = L(n,m+1)$. He then tried to define a statistic on the set $\mathscr{L}(n,m)$ capturing all these coefficients, and numerical experimentation showed that the statistic given by the weight of a linearly ordered partition worked in all small cases. The general proof was then carried out.
\end{obs}
\section{Acknowledgments}
The author wants to thank Davide Bolognini, Luca Moci and Matthias Beck for the useful suggestions and careful reading of the first draft, and Mat\'ias Hunicken for writing a part of the program that the author used to come up with the idea of defining the numbers $W(\ell,n,m)$. Also he wants to thank the reviewers of the article for the remarks and suggestions made to improve it. The author is supported by the Marie Sk{\l}odowska-Curie PhD fellowship as part of the program INdAM-DP-COFUND-2015.
\bigskip
\noindent\textit{Source:} arXiv:1911.10146, ``Hypersimplices are Ehrhart Positive,'' Combinatorics (math.CO), https://arxiv.org/abs/1911.10146. \textit{Abstract:} We consider the Ehrhart polynomial of hypersimplices. It is proved that these polynomials have positive coefficients, and we give a combinatorial formula for each of them. This settles a problem posed by Stanley and also proves that uniform matroids are Ehrhart positive, an important and yet unsolved particular case of a conjecture posed by De Loera et al. To this end, we introduce a new family of numbers that we call weighted Lah numbers and study some of their properties.
\bigskip
\noindent\textbf{arXiv:1912.04933: On $q$-analogs of descent and peak polynomials} (https://arxiv.org/abs/1912.04933)

\noindent\textit{Abstract.} Descent polynomials and peak polynomials, which enumerate permutations with given descent and peak sets respectively, have recently received considerable attention. We give several formulas for $q$-analogs of these polynomials which refine the enumeration by the length of the permutations. In the case of $q$-descent polynomials we prove that the coefficients in one basis are strongly $q$-log concave, and conjecture this property in another basis. For peaks, we prove that the $q$-peak polynomial is palindromic in $q$, resolving a conjecture of Diaz-Lopez, Harris, and Insko.

\section{Introduction} \label{sec:intro}
For $\pi=\pi_1 \ldots \pi_n$ a permutation in the symmetric group $\mathfrak{S}_n$ written in one-line notation, the \emph{descent set} of $\pi$ is
\[
\Des(\pi)=\{i \in [n-1] \: | \: \pi_i > \pi_{i+1}\},
\]
where $[n-1]$ denotes the set $\{1,\ldots,n-1\}$; we write $\des(\pi)$ for the number $|\Des(\pi)|$ of descents in $\pi$. The \emph{length} of $\pi$ is the number of \emph{inversions}:
\[
\ell(\pi)=|\{(i,j) \: | \: 1\leq i < j \leq n, \: \pi_i>\pi_j\}|.
\]
Both statistics $\ell(\pi)$ and $\des(\pi)$ are of fundamental importance in the combinatorics of the symmetric group, as are their generalizations in other Coxeter groups.
The generating polynomial of the statistic $\des$ on $\mathfrak{S}_n$ is known as the \emph{Eulerian polynomial}:
\[
A_n(t)=\sum_{\pi \in \mathfrak{S}_n} t^{\des(\pi)}.
\]
These polynomials can be succinctly encoded in the generating function
\begin{equation} \label{eq:eulerian-gf}
\sum_{n \geq 0} \frac{x^n}{n!} A_n(t) = \frac{(1-t)e^{x(1-t)}}{1-te^{x(1-t)}}.
\end{equation}
Keeping track of the joint distribution of $\des$ and $\ell$ gives the following elegant $q$-analog of (\ref{eq:eulerian-gf}) due to Stanley \cite{Stanley-binomial-posets} (see Reiner \cite{Reiner-descents-and-length} for the generalization to Coxeter groups):
\begin{equation} \label{eq:q-eulerian-gf}
\sum_{n \geq 0} \frac{x^n}{\qfac{n}} \sum_{\pi \in \mathfrak{S}_n} t^{\des(\pi)}q^{\ell(\pi)} = \frac{(1-t)\exp(x(1-t);q)}{1-t\exp(x(1-t);q)}.
\end{equation}
Here $\qfac{n}=\qnum{1} \cdots \qnum{n}$ where $\qnum{k}=1+q+\cdots +q^{k-1}$ and $\exp(x;q)=\sum_{n \geq 0} x^n/\qfac{n}$.
Rather than considering the distribution of $\des$ on $\mathfrak{S}_n$ for each $n$ as in (\ref{eq:eulerian-gf}), MacMahon \cite{MacMahon} showed that if one fixes a finite subset $S \subset \mathbb{Z}_{>0}$ then the function
\[
D_S(n)=|\{\pi \in \mathfrak{S}_n \: | \: \Des(\pi)=S\}|
\]
is a polynomial in $n$. These \emph{descent polynomials}, although defined in 1915, have received significant attention only recently, beginning with the work of Diaz-Lopez et al. \cite{descent-polynomials} and continuing with a flurry of work (see \cite{Bencs, Jiradilok-McConville, Oguz}) on several open problems raised there.
The \emph{peak set} of $\pi$ is
\begin{align*}
\Peak(\pi)&=\{i \in \{2,3,...,n-1\} \: | \: \pi_{i-1} < \pi_i > \pi_{i+1}\}\\ &=\{i \in \{2,3,...,n-1\} \: | \: i \in \Des(\pi), i-1 \not \in \Des(\pi)\}.
\end{align*}
Similarly to the case of descent sets, we let
\[
P_S(n)=|\{\pi \in \mathfrak{S}_n \: | \: \Peak(\pi)=S\}|.
\]
Billey, Burdzy, and Sagan \cite{Billey-Burdzy-Sagan} proved the remarkable result that, after accounting for a power of two, this function is also polynomial:
\begin{equation}\label{eq:peak-is-power-times-poly}
P_S(n) = 2^{n-|S|-1} p_S(n),
\end{equation}
where $p_S(n)$ is an integer-valued polynomial called the \emph{peak polynomial}.
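Equation \eqref{eq:peak-is-power-times-poly} in particular predicts that $P_S(n)$ is divisible by $2^{n-|S|-1}$; this is easy to test by brute force for small $n$. A sketch (names ours):

```python
from itertools import permutations

def peak_set(w):
    # Peak positions (1-indexed) of the permutation w.
    return {i + 1 for i in range(1, len(w) - 1) if w[i - 1] < w[i] > w[i + 1]}

def peak_count(S, n):
    # P_S(n) = |{ w in S_n : Peak(w) = S }|, by brute force.
    S = set(S)
    return sum(1 for w in permutations(range(1, n + 1)) if peak_set(w) == S)

# P_S(n) should be divisible by 2^{n - |S| - 1} for admissible S.
for n in range(3, 8):
    for S in [set(), {2}, {3}, {2, 4}]:
        if all(2 <= i <= n - 1 for i in S):
            assert peak_count(S, n) % 2 ** (n - len(S) - 1) == 0
```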
In light of the elegant $q$-analog (\ref{eq:q-eulerian-gf}) of (\ref{eq:eulerian-gf}), and motivated by a conjecture of Diaz-Lopez, Harris, and Insko (see Corollary \ref{cor:symmetry-conjecture}), this paper studies the natural $q$-analogs of the descent and peak polynomials
\[
D_S(n,q) = \sum_{\substack{\pi \in \mathfrak{S}_n \\ \Des(\pi)=S}} q^{\ell(\pi)}
\]
and
\[
P_S(n,q) = \sum_{\substack{\pi \in \mathfrak{S}_n \\ \Peak(\pi)=S}} q^{\ell(\pi)}
\]
with the aim of understanding them uniformly in $n$ and $q$. Note that specializing $q=1$ recovers the usual functions $D_S(n)$ and $P_S(n)$.
The remainder of Section \ref{sec:intro} covers background which will be needed later; none of this is new except for Proposition \ref{prop:closed-under-convolution}. Section \ref{sec:descent-and-peak-background} covers background material on descent and peak polynomials, while Section \ref{sec:strong-q-log-concavity} recalls the notion of the \emph{strong $q$-log concavity} of a sequence of polynomials which is the subject of Theorem \ref{thm:log-concavity-in-b-basis} and Conjecture \ref{conj:log-concavity-in-a-basis}.
Section \ref{sec:q-descent} provides two formulas, in Theorems \ref{thm:q-des-formula} and \ref{thm:log-concavity-in-b-basis}, for the $q$-analog $D_S(n,q)$ of the descent polynomial in two different bases of $q$-binomial coefficients. Theorem \ref{thm:log-concavity-in-b-basis} also establishes the strong $q$-log concavity of the coefficients in one of these bases, generalizing a result of Bencs \cite{Bencs}. This property for the coefficients in the other basis is the content of Conjecture \ref{conj:log-concavity-in-a-basis}.
In Section \ref{sec:q-peak}, Theorem \ref{thm:q-peak-formula} provides a formula for the $q$-analog $P_S(n,q)$ as a weighted count of certain \emph{$S$-compatible sets}, which is new even in the specialization $q=1$. Corollary \ref{cor:symmetry-conjecture} then resolves a conjecture of Diaz-Lopez, Harris, and Insko by proving that this function is a palindromic polynomial in $q$ for fixed $n$. Section \ref{sec:alternate-proof} provides an alternate proof of this corollary, with another expression for $P_S(n,q)$ in terms of $q$-Eulerian polynomials given in Lemma \ref{lem:PIE} and Proposition \ref{prop:expression-Q}.
\subsection{Descent and peak polynomials}
\label{sec:descent-and-peak-background}
Let $A(S;n)=\{ \pi \in \mathfrak{S}_n \: | \: \Des(\pi)=S \}$. In Theorem \ref{thm:q-des-formula} we prove a $q$-analog of the following theorem.
\begin{theorem}[Diaz-Lopez et al. \cite{descent-polynomials}] \label{thm:des-poly-a-basis}
Let $S \subset \mathbb{Z}_{>0}$ be a finite set of positive integers and $m=\max(S)$, then:
\[
D_S(n)=\sum_{k=0}^m a_k(S) {n-m \choose k},
\]
where $a_0(S)=0$ and for $k \geq 1$ the constant $a_k(S)$ is the number of $\pi \in A(S;2m)$ such that $\{\pi_1,...,\pi_m\} \cap [m+1,2m]=[m+1,m+k]$.
\end{theorem}
Conjecture \ref{conj:log-concavity-in-a-basis} suggests a $q$-analog of the following result of Bencs (see Section \ref{sec:strong-q-log-concavity} for the notions of \emph{log-concavity} and \emph{strong $q$-log concavity}).
\begin{theorem}[Bencs \cite{Bencs}]
The sequence $(a_i(S))_{i=0,1,...,m}$ is log-concave.
\end{theorem}
\subsection{Strong $q$-log concavity}
\label{sec:strong-q-log-concavity}
A sequence $(a_i)_{i=0,1,...,k}$ of nonnegative real numbers is \emph{log-concave} if it has no internal zeroes (meaning that if $a_i \neq 0$ and $a_j \neq 0$ with $i<j$, then $a_l \neq 0$ for all $i<l<j$) and if $a_i^2 \geq a_{i-1}a_{i+1}$ for all $i$. This notion for combinatorially-defined sequences is extremely well-studied (see, for example, Stanley's survey \cite{Stanley-log-concave-survey}).
Following Sagan \cite{Sagan-inductive}, we say that a sequence $(a_i(q))_{i=0,...,k}$ of polynomials from $\mathbb{R}_{\geq 0}[q]$ is \emph{strongly $q$-log concave} if it has no internal zeroes and if
\begin{equation} \label{eq:strong-lc-condition}
a_i(q)a_j(q)-a_{i-1}(q)a_{j+1}(q) \in \mathbb{R}_{\geq 0}[q]
\end{equation}
for all $i\leq j$. Setting $q$ to $1$ gives a log-concave sequence of real numbers. For sequences of real numbers no generality is lost by imposing the condition (\ref{eq:strong-lc-condition}) only for $i=j$ rather than for all $i\leq j$, as the two conditions are then equivalent. It is not true, however, that the case $i=j$ implies the general case for polynomials, as Example \ref{ex:strong-q-log-concave} demonstrates.
\begin{ex} \label{ex:strong-q-log-concave}
Consider the sequence
\[
(a_0(q),a_1(q),a_2(q),a_3(q))=(2q, 1+q+q^2, 1+q+q^2,2q).
\]
We see that $a_1(q)^2-a_0(q)a_2(q)$ and $a_2(q)^2-a_1(q)a_3(q)$ both have nonnegative coefficients, but $a_1(q)a_2(q)-a_0(q)a_3(q)=1+2q-q^2+2q^3+q^4$ does not. Thus the case $i=j$ in (\ref{eq:strong-lc-condition}) does not imply the general case, unlike for sequences of real numbers.
\end{ex}
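The computations in the example can be double-checked with a few lines of polynomial arithmetic (names ours; polynomials are coefficient lists in increasing degree):

```python
def pmul(p, q):
    # Multiply two polynomials given as coefficient lists.
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def psub(p, q):
    # Subtract coefficient lists.
    out = [0] * max(len(p), len(q))
    for i, a in enumerate(p):
        out[i] += a
    for i, b in enumerate(q):
        out[i] -= b
    return out

# a_0 = 2q, a_1 = a_2 = 1 + q + q^2, a_3 = 2q
a0, a1, a2, a3 = [0, 2], [1, 1, 1], [1, 1, 1], [0, 2]
assert all(c >= 0 for c in psub(pmul(a1, a1), pmul(a0, a2)))
assert all(c >= 0 for c in psub(pmul(a2, a2), pmul(a1, a3)))
# ...but a_1 a_2 - a_0 a_3 = 1 + 2q - q^2 + 2q^3 + q^4 has a negative coefficient:
assert psub(pmul(a1, a2), pmul(a0, a3)) == [1, 2, -1, 2, 1]
```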
The notion of strong $q$-log concavity has been proven for many sequences of combinatorial interest (see, e.g. \cite{Sagan-inductive,Sagan-symm-func}). The following proposition illustrates the naturality of the definition of strong $q$-log concavity; to our knowledge it has not appeared before in the literature. The analogous statement for sequences of real numbers is well known.
\begin{prop} \label{prop:closed-under-convolution}
Let $(a_i(q))_{i=0,1,...,k}$ and $(b_i(q))_{i=0,1,...,\ell}$ be strongly $q$-log concave sequences. Define the \emph{convolution} $(c_i(q))_{i=0,1,...,k+\ell}$ of these sequences by
\[
\left(\sum_i a_i(q) t^i \right) \left(\sum_i b_i(q) t^i \right)=\sum_i c_i(q) t^i.
\]
Then $(c_i(q))_{i=0,1,...,k+\ell}$ is a strongly $q$-log concave sequence.
\end{prop}
\begin{proof}
We will adapt a proof of Stanley \cite{Stanley-log-concave-survey} for the case of real numbers to the polynomial setting.
We make the convention that both sequences are zero outside of the given indexing sets. Define the Toeplitz matrices $A=(a_{j-i}(q))_{i,j=0,1,...,k+\ell}$ and $B=(b_{j-i}(q))_{i,j=0,1,...,k+\ell}$, and notice that $AB=(c_{j-i}(q))_{i,j=0,1,...,k+\ell}$. Condition (\ref{eq:strong-lc-condition}) implies that all $2 \times 2$ minors of $A$ and $B$ lie in $\mathbb{R}_{\geq 0}[q]$: such a minor has the form $a_p(q)a_r(q)-a_{p-e}(q)a_{r+e}(q)$ with $p \leq r$ and $e \geq 1$, which telescopes into a sum of differences of the form (\ref{eq:strong-lc-condition}). The Cauchy-Binet formula expresses the $2 \times 2$ minors of $AB$ as sums of products of such minors of $A$ and $B$, thus the $2 \times 2$ minors of $AB$ also lie in $\mathbb{R}_{\geq 0}[q]$; this implies condition (\ref{eq:strong-lc-condition}) for the sequence $(c_i(q))_i$.
\end{proof}
We remark that Proposition \ref{prop:closed-under-convolution} is false if one only imposes (\ref{eq:strong-lc-condition}) for $i=j$. Take $a_0=q^2$, $a_1=q+q^2$, $a_2=1+2q+q^2$, $a_3=4+2q+q^2$, so that $a$ satisfies $a_i^2-a_{i-1}a_{i+1} \in \mathbb{R}_{\geq 0}[q]$ for all $i$ (this sequence is provided in \cite{Sagan-inductive}), and take $b_0=b_1=1$. We obtain $c_1=q+2q^2$, $c_2=1+3q+2q^2$ and $c_3=5+4q+2q^2$ with $c_2^2-c_1c_3=1+q-q^2+2q^3$.
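The arithmetic in this remark can be checked mechanically (an added illustration, not part of the paper):

```python
# Polynomials as coefficient lists; index = power of q.
def pmul(p, r):
    out = [0] * (len(p) + len(r) - 1)
    for i, pi in enumerate(p):
        for j, rj in enumerate(r):
            out[i + j] += pi * rj
    return out

def padd(p, r):
    n = max(len(p), len(r))
    return [(p[i] if i < len(p) else 0) + (r[i] if i < len(r) else 0) for i in range(n)]

a0, a1, a2, a3 = [0, 0, 1], [0, 1, 1], [1, 2, 1], [4, 2, 1]  # q^2, q+q^2, 1+2q+q^2, 4+2q+q^2
c1, c2, c3 = padd(a0, a1), padd(a1, a2), padd(a2, a3)        # convolution with b_0 = b_1 = 1
diff = [x - y for x, y in zip(pmul(c2, c2), pmul(c1, c3))]
print(diff)  # [1, 1, -1, 2, 0], i.e. c_2^2 - c_1 c_3 = 1 + q - q^2 + 2q^3
```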
\section{$q$-analogs of descent polynomials}
\label{sec:q-descent}
We write $\qbinom{n}{k}$ for the \emph{q-binomial coefficient}
\[
\qbinom{n}{k}=\frac{\qfac{n}}{\qfac{n-k}\qfac{k}}.
\]
There are many combinatorial interpretations of the $q$-binomial coefficient. We will mainly use the following:
\[
\qbinom{n}{k}=\sum_{w\in\mathfrak{S}_n,\Des(w)\subset\{k\}}q^{\ell(w)}=\sum_{w\in\mathfrak{S}_n,\Des(w)\subset\{n-k\}}q^{\ell(w)}.
\]
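This interpretation is easy to illustrate by brute force (an added sketch, not part of the paper): summing $q^{\ell(w)}$ over permutations with descent set contained in $\{k\}$ recovers the product formula, here evaluated at an integer value of $q$.

```python
from itertools import permutations

def inv(w):  # Coxeter length = number of inversions
    return sum(w[i] > w[j] for i in range(len(w)) for j in range(i + 1, len(w)))

def qbinom_by_descents(n, k, q):
    # sum of q^{l(w)} over w in S_n whose only possible descent is at position k
    total = 0
    for w in permutations(range(1, n + 1)):
        if all(w[i] < w[i + 1] for i in range(n - 1) if i + 1 != k):
            total += q ** inv(w)
    return total

def qfact(m, q):  # [m]!_q evaluated at an integer q
    out = 1
    for i in range(1, m + 1):
        out *= sum(q ** t for t in range(i))
    return out

n, k, q = 5, 2, 2
print(qbinom_by_descents(n, k, q) == qfact(n, q) // (qfact(k, q) * qfact(n - k, q)))  # True
```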
Similarly, we have the $q$-\textit{multinomial} coefficient
\[
\qbinom{n}{n_1,n_2,\ldots,n_k}=\frac{[n]!_q}{[n_1]!_q[n_2]!_q\cdots[n_k]!_q}=\qbinom{n}{n_1}\qbinom{n-n_1}{n_2}\cdots\qbinom{n_k}{n_k}
\]
for $n=n_1+\cdots+n_k$, with the combinatorial interpretation
\[
\qbinom{n}{n_1,n_2,\ldots,n_k}=\sum_{\substack{w\in\mathfrak{S}_n\\\Des(w)\subset\{n_1,\ldots,n_{k-1}\}}}q^{\ell(w)}.
\]
Theorem \ref{thm:q-des-formula} is a direct $q$-analog of Theorem \ref{thm:des-poly-a-basis}.
\begin{theorem} \label{thm:q-des-formula}
Let $S \subset \mathbb{Z}_{>0}$ be a finite set of positive integers and let $m=\max(S)$. Then:
\[
D_S(n,q)=\sum_{k=0}^m a_k(S;q) \qbinom{n-m}{k},
\]
where $a_0(S;q)=0$ and for $k \geq 1$ the polynomial $a_k(S;q) \in \mathbb{R}_{\geq 0}[q]$ is given by
\[
a_k(S;q)=\sum_{\substack{\pi \in A(S;m+k) \\ [m+1,m+k]\subset \{\pi_1,...,\pi_m\}}} q^{\ell(\pi)}.
\]
\end{theorem}
\begin{proof}
The proof is modeled on the proof of Theorem \ref{thm:des-poly-a-basis} by Diaz-Lopez et al., taking account of the distribution of the lengths of the permutations involved. For $0\leq k\leq m$, let \[A^{(k)}(S;n)=\{\pi\in A(S;n):\{\pi_1,\ldots,\pi_m\}\cap[m+1,n]=[m+1,m+k]\}.\]
Notice that for $\pi\in A(S;n)$ we have $\pi(m+1)\leq m$: the entries $\pi(m+1)<\cdots<\pi(n)$ are increasing, so if $\pi(m+1)>m$ they would have to be exactly $m+1,\ldots,n$, contradicting the descent of $\pi$ at position $m$. In particular $A^{(0)}(S;n)=\emptyset$. Also we see that for $\pi\in A^{(k)}(S;n)$, $\pi(j)=j$ for $j>m+k$. This directly implies that $\sum_{\pi\in A^{(k)}(S;n)}q^{\ell(\pi)}=a_k(S;q)$ by definition.
For $w\in A(S;n)$ such that $\{w_1,\ldots,w_m\}\cap[m+1,n]$ has cardinality $k$, we can construct a unique permutation $w'=f(w)\in A^{(k)}(S;n)$ such that $w'(i)=w(i)$ if $w(i)\leq m$ and the relative ordering of $w'(1),\ldots,w'(m)$ is the same as the relative ordering of $w(1),\ldots,w(m)$. In other words, to obtain $w'$ from $w$, we fix the entries with indices and values both not exceeding $m$, replace the entries among $w(1),\ldots,w(m)$ that exceed $m$ by $m+1,\ldots,m+k$ in their original order, and let $w'(m+1),\ldots,w'(n)$ be increasing. It is shown in \cite{descent-polynomials} that this map $f$ is well-defined, i.e., $f(w)\in A^{(k)}(S;n)$, and that it is ${n-m\choose k}$ to 1. Let $B_{\pi}(S;n)=f^{-1}(\pi)$. We see that
\[
\ell(w)-\ell(f(w))=\#\{(i,j):i\leq m<j,\,w(i)>w(j)>m\},
\]
which fits well with the combinatorial interpretation for the $q$-binomial coefficients. Now
\begin{align*}
D_S(n,q)=&\sum_{k=1}^m\sum_{\pi\in A^{(k)}(S;n)}\sum_{w\in B_{\pi}(S;n)}q^{\ell(w)}\\
=&\sum_{k=1}^m\sum_{\pi\in A^{(k)}(S;n)}q^{\ell(\pi)}\qbinom{n-m}{k}\\
=&\sum_{k=1}^m a_k(S;q)\qbinom{n-m}{k}.
\end{align*}
\end{proof}
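As a sanity check of this expansion (an added illustration, not part of the paper; the choice $S=\{1,2\}$, $n=5$, $q=2$ is arbitrary), both sides can be computed by brute force from the definitions:

```python
from itertools import permutations

def inv(w):
    return sum(w[i] > w[j] for i in range(len(w)) for j in range(i + 1, len(w)))

def descents(w):
    return {i + 1 for i in range(len(w) - 1) if w[i] > w[i + 1]}

def qfact(m, q):
    out = 1
    for i in range(1, m + 1):
        out *= sum(q ** t for t in range(i))
    return out

def qbinom(n, k, q):
    return qfact(n, q) // (qfact(k, q) * qfact(n - k, q))

S, q = {1, 2}, 2
m, n = max(S), 5
lhs = sum(q ** inv(w) for w in permutations(range(1, n + 1)) if descents(w) == S)
rhs = 0
for k in range(1, m + 1):
    a_k = sum(q ** inv(p) for p in permutations(range(1, m + k + 1))
              if descents(p) == S and set(range(m + 1, m + k + 1)) <= set(p[:m]))
    rhs += a_k * qbinom(n - m, k, q)
print(lhs == rhs)  # True (both sides equal 280 here)
```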
Theorem~\ref{thm:log-concavity-in-b-basis} below is a $q$-analog of Corollary 4.2 in \cite{Bencs}. Our proof is different from that given in \cite{Bencs} for the $q=1$ case, and in particular does not require Stanley's result \cite{Stanley-log-concave-survey} regarding log-concavity of sequences coming from linear extensions of posets, which relies on the difficult Aleksandrov-Fenchel inequalities in geometry.
\begin{theorem} \label{thm:log-concavity-in-b-basis}
Let $S\subset\mathbb{Z}_{>0}$ be a finite set of positive integers and let $m=\max(S)$. Then:
\[
D_S(n,q)=\sum_{k=0}^m b_k(S;q)\qbinom{n-k}{m-k+1},
\]
where $b_0(S;q)=0$ and for $k\geq1$, the polynomial $b_k(S;q)\in\mathbb{R}_{\geq0}[q]$ is given by
\[
b_k(S;q)=\sum_{\substack{\pi\in A(S;m+1)\\\pi(m+1)=k}}
q^{\ell(\pi)}.\]
Moreover, $(b_k(S;q))_{k=0,\ldots,m}$ is a strongly $q$-log concave sequence.
\end{theorem}
\begin{proof}
We first justify the claimed expansion of $D_S(n,q)$. As a piece of notation, for any permutation $w\in \mathfrak{S}_n$ and $k\leq n$, let $w|_k\in\mathfrak{S}_k$ be the permutation obtained from the relative ordering of $w(1),\ldots,w(k)$. Define $A_{\pi}(S;n):=\{w\in A(S;n): w|_{m+1}=\pi\}$. Fix $\pi\in A(S;m+1)$ with $\pi(m+1)=k$ and consider $w\in A_{\pi}(S;n)$. We have
\[
\ell(w)=\ell(\pi)+\#\{i<m+1<j:w(i)>w(j)\}.
\]
Such $w$ is uniquely determined by the set of values $\{w(m+2),\ldots,w(n)\}$, which is a subset of $\{k+1,\ldots,n\}$ of size $n-m-1$; indeed $w(m+1)=k$, since exactly $k-1$ of the values $w(1),\ldots,w(m+1)$ are smaller than $w(m+1)$ while all of $w(m+2),\ldots,w(n)$ exceed it. By the combinatorial interpretation of the $q$-binomial coefficient given at the beginning of this section, we see that
\[
\sum_{w\in A_{\pi}(S;n)}q^{\ell(w)}=q^{\ell(\pi)}\qbinom{n-k}{m-k+1}.
\]
As a result,
\begin{align*}
D_S(n,q)=&\sum_{\pi\in A(S;m+1)}\sum_{w\in A_{\pi}(S;n)}q^{\ell(w)}=\sum_{k=1}^m\sum_{\substack{\pi\in A(S;m+1)\\\pi(m+1)=k}}\sum_{w\in A_{\pi}(S;n)}q^{\ell(w)}\\
=&\sum_{k=1}^m\sum_{\substack{\pi\in A(S;m+1)\\\pi(m+1)=k}}q^{\ell(\pi)}\qbinom{n-k}{m-k+1}\\
=&\sum_{k=1}^mb_k(S;q)\qbinom{n-k}{m-k+1}.
\end{align*}
Next we show that $(b_k(S;q))_{k=1,\ldots,m}$ is strongly $q$-log concave. We make a slight generalization for the sake of induction. Define
\[
b_k(S,n;q):=\sum_{\substack{w\in A(S;n)\\w(n)=k}}q^{\ell(w)}
\]
so that $b_k(S;q)=b_k(S,m+1;q)$. We use induction on $n$ to show that $(b_k(S,n;q))_{k=1,\ldots,n}$ is strongly $q$-log concave. The base case $n=1$ with $S=\emptyset$ is trivial. There are two cases to be considered: $n-1\in S$ and $n-1\notin S$, which are analogous to each other.
Assume that $n-1\notin S$, meaning that $w(n-1)<w(n)$ for $w\in A(S;n)$. For $w(n)=k$, we know $\ell(w)=\ell(w|_{n-1})+n-k$, and $w|_{n-1}\in A(S;n-1)$ with $w|_{n-1}(n-1)=w(n-1)\in[1,k-1]$. Thus,
\[
b_k(S,n;q)=q^{n-k}\sum_{i=1}^{k-1}\sum_{\substack{w'\in A(S;n-1)\\w'(n-1)=i}}q^{\ell(w')}=q^{n-k}\sum_{i=1}^{k-1}b_i(S,n-1;q).
\]
By Proposition~\ref{prop:closed-under-convolution} applied with the all-ones sequence $a_i=1$ (which is strongly $q$-log concave), if $(b_i)_{i\geq1}$ is strongly $q$-log concave, then so is the sequence of partial sums $c_i=b_1+\cdots+b_i$. Moreover, multiplying the $k$-th term of a sequence by $q^{n-k}$ preserves strong $q$-log concavity, since it multiplies each difference in (\ref{eq:strong-lc-condition}) by a power of $q$. By the induction hypothesis, $(b_i(S,n-1;q))_{i\geq1}$ is strongly $q$-log concave, so $(b_k(S,n;q))_{k\geq1}$ is strongly $q$-log concave as well.
Assume that $n-1\in S$, so that $w(n-1)>w(n)=k$ and $w|_{n-1}\in A(S\setminus\{n-1\};n-1)$ with $w|_{n-1}(n-1)=w(n-1)-1\in[k,n-1]$. By the same argument, we obtain
$$b_k(S,n;q)=q^{n-k}\sum_{i=k}^{n-1}b_i(S\setminus\{n-1\},n-1;q),$$
which is also strongly $q$-log concave by induction hypothesis and convolution.
\end{proof}
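The $b$-expansion can also be tested by brute force (an added illustration, not part of the paper; the choice $S=\{2\}$, $n=5$, $q=2$ is arbitrary):

```python
from itertools import permutations

def inv(w):
    return sum(w[i] > w[j] for i in range(len(w)) for j in range(i + 1, len(w)))

def descents(w):
    return {i + 1 for i in range(len(w) - 1) if w[i] > w[i + 1]}

def qfact(m, q):
    out = 1
    for i in range(1, m + 1):
        out *= sum(q ** t for t in range(i))
    return out

def qbinom(n, k, q):
    return qfact(n, q) // (qfact(k, q) * qfact(n - k, q))

S, q = {2}, 2
m, n = max(S), 5
lhs = sum(q ** inv(w) for w in permutations(range(1, n + 1)) if descents(w) == S)
b = {k: sum(q ** inv(p) for p in permutations(range(1, m + 2))
            if descents(p) == S and p[m] == k) for k in range(1, m + 1)}
rhs = sum(b[k] * qbinom(n - k, m - k + 1, q) for k in range(1, m + 1))
print(lhs == rhs)  # True (both sides equal 154 here)
```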
\begin{conj} \label{conj:log-concavity-in-a-basis}
The coefficients $(a_k(S;q))_{k\geq1}$ (defined in Theorem~\ref{thm:q-des-formula}) form a strongly $q$-log concave sequence.
\end{conj}
\begin{remark}
The coefficients $a_k(S;q)$ and $b_k(S;q)$ from Theorems \ref{thm:q-des-formula} and \ref{thm:log-concavity-in-b-basis} are related in the following way:
\[
a_k(S;q)=q^{k(k-1)}\sum_{i=1}^{m-k+1}\qbinom{m-i}{k-1}b_i(S;q).
\]
When $q=1$, the log-concavity of $(b_i)_{i\geq1}$ implies the log-concavity of $(a_k)_{k\geq1}$ for any sequences $b$ and $a$ related as above (see Theorem 2.5.4 of \cite{Brenti-thesis}). However, this is no longer true for strong $q$-log concavity. A counterexample is given by $m=4$, $b_1=1$, $b_2=q^{10}$, $b_3=q^{20}$ and $b_4=q^{30}$, where we compute:
\begin{align*}
a_1=&q^{30}+q^{20}+q^{10}+1,\\
a_2=&q^{22} + q^{13} + q^{12} + q^4 + q^3 + q^2,\\
a_3=&q^{16} + q^{8} + q^7 + q^6,\\
a_4=&q^{12},
\end{align*}
and $a_2^2-a_1a_3$ has negative coefficients.
\end{remark}
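The final computation in the remark can be confirmed with sparse polynomial arithmetic (an added check, not part of the paper):

```python
from collections import Counter

# Sparse polynomials: Counter mapping exponent -> coefficient.
def pmul(p, r):
    out = Counter()
    for i, ci in p.items():
        for j, cj in r.items():
            out[i + j] += ci * cj
    return out

a1 = Counter({30: 1, 20: 1, 10: 1, 0: 1})
a2 = Counter({22: 1, 13: 1, 12: 1, 4: 1, 3: 1, 2: 1})
a3 = Counter({16: 1, 8: 1, 7: 1, 6: 1})
diff = pmul(a2, a2)
for e, c in pmul(a1, a3).items():
    diff[e] -= c
print(min(diff.values()) < 0)  # True: e.g. the coefficient of q^46 is -1
```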
\section{$q$-analogs of peak polynomials}
\label{sec:q-peak}
We say that a polynomial $f\in\mathbb{R}[q]$ is \textit{palindromic in degree} $d$ if $f(q)=q^df(1/q)$; note that this does not necessarily mean that $f$ has degree $d$. It is easy to see that if both $f$ and $g$ are palindromic in degree $d$, then so is $f+g$, and that if $f$ is palindromic in degree $d_1$ and $g$ is palindromic in degree $d_2$, then $fg$ is palindromic in degree $d_1+d_2$. Moreover, it is well known that the $q$-binomial coefficient $\qbinom{n}{k}$ is palindromic in degree $k(n-k)$.
We say a set $S=\{s_1 < \cdots < s_r\}$ is $n$-\textit{admissible} if $S=\Peak(\pi)$ for some $\pi \in \mathfrak{S}_n$. It is not hard to see that $S$ is $n$-admissible if and only if $s_1>1$, $s_r<n$, and $s_{i+1}>s_i+1$ for all $i$.
We use the following standard notation, a special case of the $q$-Pochhammer symbol:
\[
(-q;q)_k = \prod_{i=1}^k (1+q^i).
\]
Given $S=\{s_1 < \cdots < s_r\}$ an $n$-admissible set, we say that $T \subset \mathbb{Z}_{>0}$ is \emph{$S$-compatible} if:
\begin{itemize}
\item $T \cap S = \emptyset$,
\item $\max(T)<s_r$, and
\item at most one element of $T$ lies between any pair of consecutive elements of $S \sqcup \{0\}$ (in particular, $|T| \leq |S|)$.
\end{itemize}
When $T$ is understood, for $s \in S$ let $s^-$ be the largest element of $S \sqcup T \sqcup \{0\}$ less than $s$. We define
\[
\varepsilon(S,T)=\begin{cases} 0, & \text{if there exists } s \in S \text{ with } s-s^- \text{ odd and } s^- \in S \sqcup \{0\}, \\ (-1)^{\#\{s \in S \,:\, s-s^- \text{ even}\}}, & \text{otherwise.} \end{cases}
\]
\begin{theorem} \label{thm:q-peak-formula}
Let $S=\{s_1 < \cdots < s_r\}$ be $n$-admissible:
\[
P_S(n,q)=\sum_T \varepsilon(S,T) \prod_{i=1}^{r'+1} \qbinom{t_{i}}{t_{i-1}} (-q;q)_{t_{i}-t_{i-1}-1},
\]
where the sum is over all $S$-compatible sets $T=\{t_1 < \cdots < t_{r'}\}$, with the conventions that $t_0=0$ and $t_{r'+1}=n$.
\end{theorem}
\begin{remark}
The term in Theorem \ref{thm:q-peak-formula} corresponding to the $S$-compatible set $T$ is divisible by $n-|T|-1$ factors of the form $(1+q^j)$. Thus, since $|T| \leq |S|$, setting $q=1$ we recover the fact (\ref{eq:peak-is-power-times-poly}) that $P_S(n)$ is $2^{n-|S|-1}$ times an integer-valued polynomial $p_S(n)$. The factors of the form $(1+q^j)$ differ from term to term, however, so it is not clear whether it is possible to produce a meaningful $q$-analog of $p_S(n)$ itself.
\end{remark}
\begin{ex}
Let $S=\{4,6\}$, which is $n$-admissible for $n \geq 7$. The $S$-compatible sets are $T_1=\{1,5\}, T_2=\{2,5\}, T_3=\{3,5\}, T_4=\{5\}, T_5=\{1\}, T_6=\{2\}, T_7=\{3\},$ and $T_8=\emptyset$.
Here no $\varepsilon(S,T_i)$ vanishes, since every $s \in S$ with $s^- \in S \sqcup \{0\}$ has $s-s^-$ even. Counting the even differences $s-s^-$ gives the signs $\varepsilon(S,T_1)=\varepsilon(S,T_3)=\varepsilon(S,T_6)=\varepsilon(S,T_8)=1$ and $\varepsilon(S,T_2)=\varepsilon(S,T_4)=\varepsilon(S,T_5)=\varepsilon(S,T_7)=-1$.
Thus for $n \geq 7$:
\begin{align*}
P_{\{4,6\}}(n,q)&=\qbinom{n}{5}\qbinom{5}{1}(-q;q)_{n-6}(-q;q)_{3}-\qbinom{n}{5}\qbinom{5}{2}(-q;q)_{n-6}(-q;q)_{2}(-q;q)_{1} \\
&+\qbinom{n}{5}\qbinom{5}{3}(-q;q)_{n-6}(-q;q)_{1}(-q;q)_{2}-\qbinom{n}{5}(-q;q)_{n-6}(-q;q)_{4} \\
&-\qbinom{n}{1}(-q;q)_{n-2}+\qbinom{n}{2}(-q;q)_{n-3}(-q;q)_{1}-\qbinom{n}{3}(-q;q)_{n-4}(-q;q)_{2}\\
&+(-q;q)_{n-1}.
\end{align*}
\end{ex}
The following corollary proves a conjecture stated by Alexander Diaz-Lopez in his talk at Discrete Math Days of the Northeast. An alternative proof of Corollary \ref{cor:symmetry-conjecture} appears in Section \ref{sec:alternate-proof}.
\begin{cor} \label{cor:symmetry-conjecture}
For $n$ fixed and $S$ $n$-admissible, $P_S(n,q)$ is palindromic in degree $n\choose2$.
\end{cor}
\begin{proof}
It is well known that the $q$-binomial coefficient $\qbinom{x}{y}$ is palindromic in degree ${x \choose 2} - {y \choose 2} - {x-y \choose 2}=y(x-y)$, and that products of palindromic polynomials are palindromic with degrees adding. Since $(-q;q)_k$ is palindromic in degree ${k+1 \choose 2}$, the summand of Theorem \ref{thm:q-peak-formula} indexed by $T$ is palindromic in degree $\sum_{i=1}^{r'+1}\left({t_i \choose 2}-{t_{i-1} \choose 2}\right)={n \choose 2}$, a telescoping sum independent of $T$. A sum of polynomials palindromic in a common degree is palindromic in that degree, so the corollary follows.
\end{proof}
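The corollary is easy to test by brute force in small cases (an added illustration, not part of the paper; here $S=\{3\}$ and $n=6$):

```python
from itertools import permutations
from collections import Counter
from math import comb

def inv(w):
    return sum(w[i] > w[j] for i in range(len(w)) for j in range(i + 1, len(w)))

def peaks(w):
    return {i + 1 for i in range(1, len(w) - 1) if w[i - 1] < w[i] > w[i + 1]}

n, S = 6, {3}
coeffs = Counter(inv(w) for w in permutations(range(1, n + 1)) if peaks(w) == S)
d = comb(n, 2)
print(all(coeffs[e] == coeffs[d - e] for e in range(d + 1)))  # True
```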
\subsection{Proof of Theorem \ref{thm:q-peak-formula}}
\begin{lemma} \label{lem:emptyset-peaks}
For $n\geq1$, $P_{\emptyset}(n,q)=(-q;q)_{n-1}=(1+q)(1+q^2)\cdots(1+q^{n-1})$.
\end{lemma}
\begin{proof}
Given $\pi'\in \mathfrak{S}_{n-1}$ with no peaks, we can insert $n$ at either the beginning or the end to obtain a permutation $\pi \in\mathfrak{S}_n$ with no peaks. In the first case $\ell(\pi)=\ell(\pi')+(n-1)$, and in the second $\ell(\pi)=\ell(\pi')$, so the lemma follows by induction.
\end{proof}
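A brute-force check of the lemma at an integer value of $q$ (an added illustration, not part of the paper):

```python
from itertools import permutations

def inv(w):
    return sum(w[i] > w[j] for i in range(len(w)) for j in range(i + 1, len(w)))

def peaks(w):
    return {i + 1 for i in range(1, len(w) - 1) if w[i - 1] < w[i] > w[i + 1]}

n, q = 6, 3
lhs = sum(q ** inv(w) for w in permutations(range(1, n + 1)) if not peaks(w))
rhs = 1
for i in range(1, n):
    rhs *= 1 + q ** i   # (-q; q)_{n-1} at q = 3
print(lhs == rhs)  # True
```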
Lemma \ref{lem:q-peak-recurrence} modifies an idea of Billey, Burdzy, and Sagan \cite{Billey-Burdzy-Sagan} to take account of the lengths of the permutations involved.
\begin{lemma} \label{lem:q-peak-recurrence}
Suppose that $S=\{s_1<\cdots<s_r\}$ is $n$-admissible. Then
\[
P_S(n,q)=\qbinom{n}{k} \cdot P_{S_1}(k,q) \cdot (-q;q)_{n-k-1} - P_{S_1}(n,q) - P_{S_2}(n,q),
\]
where $S_1=S \setminus \{s_r\}$, $k=s_r-1$, and $S_2=S_1 \cup \{k\}$.
\end{lemma}
\begin{proof}
Consider the set $\Pi$ of permutations $\pi$ in $\mathfrak{S}_n$ such that $\Peak(\pi_1\cdots\pi_k)=S_1$ and $\Peak(\pi_{k+1}\cdots\pi_n)=\emptyset$. By first choosing the set of values occupying the first $k$ positions, which accounts for the factor $\qbinom{n}{k}$, we have
\[
\sum_{\pi\in\Pi}q^{\ell(\pi)}=\qbinom{n}{k} \cdot P_{S_1}(k,q) \cdot P_{\emptyset}(n-k,q).
\]
On the other hand, since $\Peak(\pi)$ is one of $S,S_1,$ or $S_2$, and all possibilities are covered, we have
\[
\sum_{\pi\in\Pi}q^{\ell(\pi)}=P_{S}(n,q)+P_{S_1}(n,q)+P_{S_2}(n,q).
\]
Rearranging terms and applying Lemma \ref{lem:emptyset-peaks} completes the proof.
\end{proof}
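The recurrence can be verified numerically for a small case (an added check, not part of the paper; the parameters $S=\{2,5\}$, $n=7$, $q=2$ are arbitrary):

```python
from itertools import permutations

def inv(w):
    return sum(w[i] > w[j] for i in range(len(w)) for j in range(i + 1, len(w)))

def peaks(w):
    return {i + 1 for i in range(1, len(w) - 1) if w[i - 1] < w[i] > w[i + 1]}

def P(S, n, q):  # brute-force P_S(n, q) at an integer q
    return sum(q ** inv(w) for w in permutations(range(1, n + 1)) if peaks(w) == set(S))

def qfact(m, q):
    out = 1
    for i in range(1, m + 1):
        out *= sum(q ** t for t in range(i))
    return out

def poch(k, q):  # (-q; q)_k at an integer q
    out = 1
    for i in range(1, k + 1):
        out *= 1 + q ** i
    return out

S, n, q = [2, 5], 7, 2
k = max(S) - 1
S1, S2 = S[:-1], S[:-1] + [k]
lhs = P(S, n, q)
rhs = (qfact(n, q) // (qfact(k, q) * qfact(n - k, q))) * P(S1, k, q) * poch(n - k - 1, q) \
      - P(S1, n, q) - P(S2, n, q)
print(lhs == rhs)  # True
```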
We are now ready to complete the proof of Theorem \ref{thm:q-peak-formula}.
\begin{proof}[Proof of Theorem \ref{thm:q-peak-formula}]
Consider repeatedly applying the recursion in Lemma \ref{lem:q-peak-recurrence} in order to compute $P_S(n,q)$, continuing until the base case of Lemma \ref{lem:emptyset-peaks} is reached in every branch of the recursion. The branches consist of a choice, each time the recursion is applied, of one of the terms $\qbinom{n}{k} P_{S_1}(k,q)(-q;q)_{n-k-1}$, or $-P_{S_1}(n,q)$, or $-P_{S_2}(n,q)$. Thus we think of the branches as sequences of sets, beginning with $S$, modified one step at a time, and ending with $\emptyset$; branches also accrue weights corresponding to the coefficients of the $P$'s in the above three terms. The information about a branch which is relevant to computing its weight will be encoded in a set $\tilde{T}$, which we will see is of the form $T \sqcup \{n\}$, with $T$ an $S$-compatible set indexing a summand in Theorem \ref{thm:q-peak-formula}.
Start with fixed $S$ and $n$ as desired in order to compute $P_S(n,q)$; let $\tilde{T}=\{n\}$. The possible modifications are:
\begin{enumerate}
\item[(i)] Replace $S=\{s_1 < ... < s_r\}$ with $S_1=\{s_1,...,s_{r-1}\}$ (that is, delete $s_r$) and multiply the weight of the branch by $\qbinom{n}{s_r-1}(-q;q)_{n-s_r}$. In this case we add $s_r-1$ to $\tilde{T}$. Replace $n$ with $s_r-1$.
\item[(ii)] Replace $S=\{s_1 < ... < s_r\}$ with $S_1=\{s_1,...,s_{r-1}\}$ (that is, delete $s_r$) and multiply the weight of the branch by $x$ ($x$ will later be specialized to $-1$ as in the second term in the recurrence) and leave $\tilde{T}$ and $n$ unchanged.
\item[(iii)] Replace $S=\{s_1 < ... < s_r\}$ with $S_2=\{s_1,...,s_{r-1},s_r-1 \}$ (that is, decrement $s_r$ by one) and multiply the weight of the branch by $x$. Again leave $\tilde{T}$ and $n$ unchanged. If $S_2$ is no longer admissible, meaning $s_r-1=s_{r-1}+1$ (or $s_r-1=1$ when $r=1$), then the weight of this branch becomes 0, since $P_{S_2}(n,q)=0$.
\end{enumerate}
We now observe several facts:
\begin{itemize}
\item[(a)] The sets $\tilde{T}$ which correspond to branches with nonzero weights are exactly those of the form $T \sqcup \{n\}$ with $T$ an $S$-compatible set.
\item[(b)] The weight of a branch depends, up to multiplying by a power of $x$, only on the corresponding set $\tilde{T}$, and this weight is the product appearing in Theorem \ref{thm:q-peak-formula}.
\item[(c)] After grouping branches according to $\tilde{T}$, the polynomial $f_{\tilde{T}}(x)$ coming from summing the powers of $x$ associated to each branch satisfies $f_{\tilde{T}}(-1)=\varepsilon(S,\tilde{T} \setminus \{n\})$.
\end{itemize}
Together, these facts imply Theorem \ref{thm:q-peak-formula}, as $P_S(n,q)$ is the weighted sum over all branches and (a), (b), and (c) imply that this weighted sum is given by the formula in the theorem.
To see that (a) holds, note that we only add an element to $\tilde{T}$ in case (i). Let $T=\tilde{T} \setminus \{n\}$, and suppose that $\tilde{T}$ corresponds to a branch with a nonzero weight. The largest possible element added to $\tilde{T}$ is $s_r-1$, so $\max(T)<s_r$. Because $S$ is admissible, the added element $s_r-1 \not \in S$, so $T \cap S = \emptyset$. Since only the largest element of $S$ is modified in each step, and since we only add to $\tilde{T}$ when we delete an element of $S$, it is not possible for two elements of $\tilde{T}$ to lie between consecutive elements of $S$. Thus $T$ is $S$-compatible. Conversely, any $S$-compatible set $T$ can clearly be realized, since we may use case (iii) to decrement the largest element of $S$ until (i) can be applied to add a desired element to $\tilde{T}$; if $s_{r-1}$ is larger than the element we wish to add to $\tilde{T}$, simply delete $s_r$ using case (i) or (ii) and continue as before.
Except for the factors of $x$ which are produced in steps (ii) and (iii), the only time the weight of the branch is modified is in step (i), and $\tilde{T}$ is modified at the same time. By design, the product $\prod \qbinom{t_i}{t_{i-1}}(-q;q)_{t_i-t_{i-1}-1}$ associated to $T$ changes in exactly the same way as the weight of the branch. This proves (b).
Given $s \in S$ and an $S$-compatible set $T$, if $s^- \in S \sqcup \{0\}$, then any branch yielding $T$ must not use case (i) while $s$ is the maximal element. Thus case (iii) is applied some number of times, moving the maximal element left, and then (ii) is applied to delete it, without modifying $\tilde{T}$. This contributes a factor of $x(1+x+\cdots+x^{s-s^--2})$ to the $x$-weight of $T$; at $x=-1$ this factor evaluates to $0$ if $s-s^-$ is odd and to $-1$ if $s-s^-$ is even, in agreement with the definition of $\varepsilon(S,T)$. Otherwise, $s^- \in T$; now case (iii) must be used to move the maximal element to $s^-+1$ and then case (i) applied, contributing a factor of $x^{s-s^--1}$, which evaluates at $x=-1$ to $1$ if $s-s^-$ is odd and to $-1$ if $s-s^-$ is even. Multiplying these factors over all $s \in S$, we see that $f_{\tilde{T}}(-1)$ vanishes exactly when some $s \in S$ has $s-s^-$ odd and $s^- \in S \sqcup \{0\}$, and otherwise equals $(-1)^{\#\{s \in S \,:\, s-s^- \text{ even}\}}=\varepsilon(S,\tilde{T} \setminus \{n\})$. This proves (c), completing the proof.
\end{proof}
\subsection{Another proof of Corollary~\ref{cor:symmetry-conjecture}} \label{sec:alternate-proof}
In this section, we present a separate, more direct proof of Corollary~\ref{cor:symmetry-conjecture} by writing $P_S(n,q)$ as an alternating sum of products of $q$-multinomial coefficients and $q$-Eulerian polynomials. Define
\[
Q_S(n,q):=\sum_{\substack{\pi\in\mathfrak{S}_n\\\Peak(\pi)\supset S}}q^{\ell(\pi)}=\sum_{S'\supset S}P_{S'}(n,q).
\]
The principle of inclusion and exclusion immediately gives us the following lemma.
\begin{lemma}\label{lem:PIE}
For $n$-admissible set $S$, \[
P_S(n,q)=\sum_{n\text{-admissible }S'\supset S}(-1)^{|S'|-|S|}Q_{S'}(n,q).\]
\end{lemma}
We now compute $Q_S(n,q)$ explicitly. For $S=\{s_1<\cdots<s_r\}$, we divide $S$ into blocks $S=S_1\sqcup S_2\sqcup\cdots \sqcup S_k$ such that $\max S_i<\min S_{i+1}$, and $s_j$ and $s_{j+1}$ belong to the same block if and only if $s_{j+1}-s_j=2$. For a block $S_i=\{s_j<\cdots<s_{j+a-1}\}$, let $\bar S_i:=\{s_j-1,s_j,\ldots,s_{j+a-1}+1\}$, an interval of cardinality $2a+1$. By construction, $\bar S_i$ and $\bar S_j$ are disjoint for distinct $i\neq j$, so we can partition $\{1,2,\ldots,n\}$ into consecutive intervals in the order $U_0\sqcup \bar S_1\sqcup U_1\sqcup \bar S_2\sqcup\cdots\sqcup \bar S_k\sqcup U_k$.
As an example, take $n=12$, $S=\{3,5,8\}$. Then $S_1=\{3,5\}$ and $S_2=\{8\}$. Moreover, $\bar S_1=\{2,3,4,5,6\}$ and $\bar S_2=\{7,8,9\}$. Further, $U_0=\{1\}$, $U_1=\emptyset$ and $U_2=\{10,11,12\}$.
\begin{prop}\label{prop:expression-Q}
Let $S$ be an $n$-admissible set and write $\{1,2,\ldots,n\}=U_0\sqcup \bar S_1\sqcup U_1\sqcup \bar S_2\sqcup\cdots\sqcup \bar S_k\sqcup U_k$ as above. Let $|U_i|=u_i$ and $|\bar S_i|=r_i$. Then
\[
Q_S(n,q)=\qbinom{n}{u_0,r_1,u_1,\ldots,r_k,u_k}\cdot\prod_{j=1}^k E_{r_j}(q)\cdot \prod_{i=0}^k[u_i]!_q
\]
where \[E_m(q)=\sum_{w\in\mathfrak{S}_m,w(1)<w(2)>w(3)<\cdots}q^{\ell(w)}\] is the length generating function for alternating permutations of size $m$.
\end{prop}
\begin{proof}
Requiring $w\in\mathfrak{S}_n$ to satisfy $\Peak(w)\supset S$ is the same as requiring that $w$ is an alternating permutation when restricted to $\bar S_i$ for each $i=1,\ldots,k$. Thus, the formula follows because the $q$-multinomial coefficient corresponds to assigning values of $\{1,2,\ldots,n\}$ to blocks $U_0,\bar S_1,U_1,\ldots,\bar S_k,U_k$ while keeping track of the number of inversions between distinct blocks, $E_{r_j}(q)$ is the length generating function for the block $\bar S_j$ and $[u_i]!_q$ is the length generating function for the block $U_i$ where no conditions are imposed.
\end{proof}
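A brute-force check of the proposition in a small case (an added illustration, not part of the paper; here $S=\{2,4\}$ forms a single block with $\bar S_1=\{1,\ldots,5\}$, $U_0=\emptyset$, $U_1=\{6\}$):

```python
from itertools import permutations

def inv(w):
    return sum(w[i] > w[j] for i in range(len(w)) for j in range(i + 1, len(w)))

def peaks(w):
    return {i + 1 for i in range(1, len(w) - 1) if w[i - 1] < w[i] > w[i + 1]}

def qfact(m, q):
    out = 1
    for i in range(1, m + 1):
        out *= sum(q ** t for t in range(i))
    return out

S, n, q = {2, 4}, 6, 2
lhs = sum(q ** inv(w) for w in permutations(range(1, n + 1)) if peaks(w) >= S)
# E_5(q): alternating permutations w(1) < w(2) > w(3) < w(4) > w(5)
E5 = sum(q ** inv(w) for w in permutations(range(1, 6))
         if all((w[i] < w[i + 1]) == (i % 2 == 0) for i in range(4)))
multinom = qfact(6, q) // (qfact(5, q) * qfact(1, q))
print(lhs == multinom * E5 * qfact(1, q))  # True
```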
Now Corollary~\ref{cor:symmetry-conjecture} follows.
\begin{proof}[Proof of Corollary~\ref{cor:symmetry-conjecture}]
We first show that $Q_S(n,q)$ is palindromic in degree $n\choose2$ by Proposition~\ref{prop:expression-Q}. The $q$-multinomial coefficient on the right hand side of Proposition~\ref{prop:expression-Q} is known to be palindromic in degree ${n\choose2}-{u_0\choose2}-{r_1\choose2}-\cdots-{u_k\choose2}$, by a straightforward calculation. Each $r_j$ is odd by construction so for an alternating permutation $w\in\mathfrak{S}_{r_j}$, its reverse $w'$ is also alternating and $\ell(w)+\ell(w')={r_j\choose2}$. This means $E_{r_j}(q)$ is palindromic in degree ${r_j\choose2}$. In addition, $[u_i]!_q$ is palindromic in degree ${u_i\choose 2}$. Multiplying terms together, we see that $Q_S(n,q)$ is palindromic in degree $n\choose2$. As an alternating sum of palindromic polynomials in degree $n\choose2$ (Lemma~\ref{lem:PIE}), we conclude that $P_S(n,q)$ is palindromic in degree $n\choose2$.
\end{proof}
\section*{Acknowledgements}
We are grateful to Alexander Diaz-Lopez for piquing our interest in descent and peak polynomials and to Alex Postnikov for his helpful comments. We also wish to thank the Northeast Combinatorics Network for organizing the Discrete Math Days conference at which this project was initiated.
\bibliographystyle{plain}
% https://arxiv.org/abs/1706.01579
\title{Progressions and Paths in Colorings of $\mathbb Z$}
\begin{abstract}
A \textit{ladder} is a set $S \subseteq \mathbb Z^+$ such that any finite coloring of $\mathbb Z$ contains arbitrarily long monochromatic progressions with common difference in $S$. Van der Waerden's theorem famously asserts that $\mathbb Z^+$ itself is a ladder. We also discuss variants of ladders, namely \textit{accessible} and \textit{walkable} sets, which are sets $S$ such that any coloring of $\mathbb Z$ contains arbitrarily long (for accessible sets) or infinite (for walkable sets) monochromatic sequences with consecutive differences in $S$. We show that sets with upper density 1 are ladders and walkable. We also show that all directed graphs with infinite chromatic number are accessible, and reduce the bound on the walkability order of sparse sets from 3 to 2, making it tight.
\end{abstract}
\section{Introduction}
In 1927, van der Waerden proved his famous theorem concerning arithmetic progressions in finite colorings of $\Z$, which asserts that any finite coloring of $\Z$ contains arbitrarily long monochromatic arithmetic progressions. Brown, Graham, and Landman study variants of this result by considering other classes of sequences and whether these classes must also appear in finite colorings of $\Z$ \cite{BGL99}. One such class consists of the arithmetic progressions whose common differences all lie in some fixed set $S \subseteq \Z^+$. When any finite coloring of $\Z$ contains arbitrarily long monochromatic arithmetic progressions whose common difference is in $S$, the set $S$ was said to be ``large,'' though such sets are now called \textbf{ladders} \cite{GRS16}. Another related class consists of walks, defined over some set $S$, which are sequences of integers whose consecutive differences are in $S$. If any finite coloring of $\Z$ contains arbitrarily long monochromatic walks over $S$, then $S$ is said to be \textbf{accessible} (see \cite{jungic2005conjecture,landman2007avoiding,landman2010avoiding}). Finally, when any finite coloring of $\Z$ contains monochromatic walks of infinite length over $S$, $S$ is said to be \textbf{infinitely walkable}.
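As a concrete illustration of van der Waerden's theorem (added by us, not part of the paper), its smallest nontrivial instance can be checked exhaustively: the van der Waerden number $W(3,2)$ equals 9.

```python
from itertools import product

# Every 2-coloring of {1,...,9} contains a monochromatic 3-term AP, while
# {1,...,8} admits a 2-coloring with none; col[i] is the color of i+1.
def has_mono_3ap(col):
    n = len(col)
    return any(col[a] == col[a + d] == col[a + 2 * d]
               for a in range(n) for d in range(1, (n - 1 - a) // 2 + 1))

print(all(has_mono_3ap(c) for c in product((0, 1), repeat=9)))  # True
print(all(has_mono_3ap(c) for c in product((0, 1), repeat=8)))  # False
```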
Instead of asking whether these properties hold for arbitrary finite colorings of $\Z$, we may ask for a set $S$ whether the above properties hold for any $k$-coloring of $\Z$ for fixed $k$. This leads to the analogous notions of \textbf{$k$-ladders}, \textbf{$k$-accessible} sets, and \textbf{$k$-walkable} sets.
This paper is largely motivated by the work of Guerreiro, Ruzsa, and Silva in \cite{GRS16}, and answers three conjectures posed in that paper as well as one conjecture from \cite{BGL99}.
\subsection{Organization of Paper}
Section \ref{Counterexamples Section} presents a selection of known conditions that are necessary or sufficient for a set to be a ladder, and counterexamples to their converses. Some of these counterexamples were either not stated or unknown in the literature, and Counterexample \ref{Countability of Polynomials} in particular answers a conjecture of \cite{GRS16}. Section \ref{Density} examines density results for ladders and outlines further possibilities for research. Section \ref{Section 3} deals with accessible and walkable sets.
\section{Ladders}
Perhaps the most natural question to ask regarding ladders is how to determine whether a given set is a ladder. However, no easily checked condition that is both necessary and sufficient for a set to be a ladder is currently known.
\begin{theorem}[Brown, Graham, Landman \cite{BGL99}]\label{Modular Restrictions}
A set $S \subseteq \Z^+$ is a ladder if and only if $S \cap n\Z$ is a ladder for every $n$.
\end{theorem}
The proof is reasonably simple and informative.
\begin{proof}
If any such intersection were a non-ladder, then there would be some finite coloring $\chi$ of $\Z$ in which every monochromatic arithmetic progression with common difference in $S \cap n\Z$ has length bounded by some constant. Consider also the coloring $\pi: x \mapsto x \bmod n$. Under the product coloring $\chi \times \pi$, every monochromatic arithmetic progression has common difference divisible by $n$, so every monochromatic progression with common difference in $S$ in fact has common difference in $S \cap n\Z$, and is therefore of bounded length. We then conclude $S$ is not a ladder.
The other direction is seen immediately by taking $n = 1$.
\end{proof}
\subsection{Known Results and New Counterexamples}\label{Counterexamples Section}
We begin with some known results that give necessary or sufficient conditions for sets to be ladders, and provide counterexamples to their converses, some of which (for example, the counterexample to the converse of Theorem \ref{Polynomial}) were not in previous literature.
\begin{theorem}[Brown, Graham, Landman \cite{BGL99}]\label{Exponential} If a set $S = \{s_1,s_2,\dots\}$ satisfies $s_{i+1} \geq (1+\epsilon) s_i$ for all $i$ and some $\epsilon > 0$, then $S$ is not a ladder.
\end{theorem}
\begin{counterexample}[Converse of Theorem \ref{Exponential}]
The set of odd integers provides a counterexample to the converse of this theorem: it fails the growth condition, yet it is a non-ladder by Theorem \ref{Modular Restrictions}, since its intersection with $2\Z$ is empty.
\end{counterexample}
It may also be tempting to hypothesize that sets with exponential growth rates are non-ladders, but this is not the case. Indeed, we will see as a consequence of Theorem \ref{combinatorial cube} that for any $f: \Z\to\Z$, there exists a ladder $S = \{s_1,s_2,\dots\}$ such that $s_i > f(i)$ for all $i$.
\begin{theorem}[Brown, Graham, Landman \cite{BGL99}]\label{Complement}
The complement of a non-ladder is a ladder.
\end{theorem}
\begin{counterexample}[Converse of Theorem \ref{Complement}]
As we will see in Theorem \ref{Polynomial}, the set $\{4n^2\}_{n \in \N}$ is a ladder, and its complement contains $\{2n^2\}_{n \in \N}$, which is also a ladder by the same theorem.
\end{counterexample}
\begin{theorem}[Brown, Graham, Landman \cite{BGL99}]\label{Polynomial}
Let $P$ be some polynomial with integer coefficients such that $P(0) = 0$. If a set $S$ contains $P(\Z) \cap \Z^+$, then $S$ is a ladder.
\end{theorem}
This theorem follows from an extension of Van der Waerden's theorem to such polynomials due to Bergelson and Leibman in \cite{bergelson1996polynomial}. Again, the converse is not always true.
\begin{counterexample}[Converse of Theorem \ref{Polynomial}]\label{Countability of Polynomials}
By the countability of $\Z[x]$, we can enumerate the nonconstant polynomials $P$ with integer coefficients for which $P(\Z) \cap \Z^+$ is infinite, and construct a set $S$ that contains at least one element and excludes at least one element of each such $P(\Z) \cap \Z^+$ (this is possible since each of these sets is infinite, so the choices can be made to avoid all earlier ones). Then neither $S$ nor its complement contains all of $P(\Z) \cap \Z^+$ for any such $P$, yet by Theorem \ref{Complement} at least one of these two sets must be a ladder.
\end{counterexample}
For the next theorem, we introduce the following notation: A \textbf{combinatorial cube} of dimension $k$ is the set of all subset sums of a multiset of cardinality $k$.
\begin{theorem}[Brown, Graham, Landman \cite{BGL99}]\label{combinatorial cube}
If a set $S \subset \Z$ contains combinatorial cubes of arbitrarily large dimension, then $S$ is a ladder.
\end{theorem}
\begin{counterexample}[Converse of Theorem \ref{combinatorial cube}]
The set of positive perfect cubes $\{n^3 : n \in \Z^+\}$ provides a counterexample to the converse of this statement. This set is a ladder by Theorem \ref{Polynomial}. If it contained a combinatorial cube of dimension at least 2, it would contain two elements $a$ and $b$ as well as their sum $a+b$. This, however, would violate Fermat's Last Theorem.
\end{counterexample}
Theorem \ref{combinatorial cube} also implies that we can construct ladders that are arbitrarily sparse by taking a set of combinatorial cubes that are sufficiently far apart from each other. As such, it seems unlikely that any simple density notion can be a necessary and sufficient condition for a set to be a ladder.
\subsection{Density 1 Sets are Ladders}\label{Density}
\begin{theorem}
Any set $S \subset \Z^+$ with upper density 1 is a ladder.
\end{theorem}
\begin{proof}
We show that any set with upper density 1 contains arbitrarily long sequences of the form $\{x,2x,3x,\dots\}$. Each such sequence is a combinatorial cube (the subset sums of several copies of $x$), so by Theorem \ref{combinatorial cube} this implies that such a set is a ladder. Recall that a set has upper density 1 in $\Z^+$ if
$$
\limsup_{n \to \infty} \frac{|S\cap [1,n]|}{n} = 1.
$$
Then for any $n$, we can find some $N$ with $|S \cap [1,N]| > N(1-\frac 1 {n^2})$. Now consider the sequences $\{x,2x,\dots,nx\}$ for $x \in [1,\frac Nn]$. Assume for the sake of contradiction that $S$ contains at most $n-1$ elements of each of these sequences; then the complement of $S$ contains at least one element from each sequence. We note that each number $t$ can appear in at most $n$ of these sequences. Otherwise, by the pigeonhole principle, $t$ would appear in the $k^\text{th}$ spot of two distinct sequences $\{x,2x,\dots,nx\}$ and $\{y,2y,\dots,ny\}$ for some $k$. This would imply $t = kx = ky$, so $x = y$ and the two sequences are precisely the same, which is a contradiction. We then conclude that the complement of $S$ must contain at least one element for every $n$ sequences constructed above, so its size within $[1,N]$ is at least $N/n^2$, contradicting the density bound above. Thus $S$ contains $\{x,2x,\dots,nx\}$ for some $x$ in this interval. Since $n$ was chosen arbitrarily, the result follows.
\end{proof}
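The search at the heart of this proof is easy to simulate on finite truncations. The following sketch (function name ours) looks for a run $\{x, 2x, \dots, nx\}$ inside a given finite set:

```python
def find_homothetic_run(S, n, N):
    """Search for x in [1, N/n] with x, 2x, ..., nx all in the set S."""
    for x in range(1, N // n + 1):
        if all(k * x in S for k in range(1, n + 1)):
            return x
    return None

# A co-sparse subset of [1, 1000] still contains runs {x, 2x, ..., 5x}:
S = set(range(1, 1001)) - {7, 500, 999}
assert find_homothetic_run(S, 5, 1000) == 1
# The odd numbers contain no run of length 2, since 2x is always even:
odds = set(range(1, 1001, 2))
assert find_homothetic_run(odds, 2, 1000) is None
```

The second example mirrors the modular obstruction discussed later: avoiding a congruence class can rule out such runs even at density close to $1/2$.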
\begin{corollary}
Let $S$ be a set of the form $P(\Z) \cap \Z^+$ for some $P \in \Z[x]$ of degree at least 2 such that $P(0) = 0$. Then $S$ and its complement are both ladders.
\end{corollary}
This negatively resolves a question in \cite{BGL99} as to whether sets of the form $S \cap P(\Z)$ are ladders for all ladders $S$ and any non-linear $P$ as above. By taking $S$ to be the complement of $P(\Z)$ for any choice of $P$, we see that this intersection is empty and thus not a ladder.
To conclude the section, we present a conjectural density condition for a set to be a ladder. From Theorem \ref{Modular Restrictions} we see that $\Z \setminus n\Z$ is a non-ladder, which means that we can construct non-ladders with density $1-\epsilon$. The following conjecture asserts that this ``modular restriction'' is the only such obstacle to a density condition. Specifically,
\begin{conjecture}
Any set $S \subset \Z$ with positive relative upper density in each subgroup $n\Z$ is a ladder, where the \textit{relative upper density} of $S$ in a subgroup $n\Z$ is defined as
$$
\limsup_{k \to \infty} \frac{\left|S \cap \{n,2n,\dots,kn\}\right|}{k}.
$$
\end{conjecture}
Any partial results or weaker variants would still be quite interesting.
\section{Accessible and Walkable sets}\label{Section 3}
We now define accessible and walkable sets, which are two commonly-studied variants of ladders \cite{GRS16,jungic2005conjecture, landman2007avoiding, landman2010avoiding}.
For a set $S \subset \Z^+$, define its \textbf{distance graph} $G(S) = (V,E)$ with $V = \Z$ and $E = \{(v_1,v_2) \in V \times V \mid |v_1-v_2| \in S\}$.
A \textbf{walk} over a set $S$ is a sequence $\{a_1,a_2,\dots\}$, of either finite or infinite length, such that for all $i$, $a_{i+1}-a_i \in S$. Equivalently, it is the set of vertices of some path in $G(S)$.
We say a set $S \subseteq \Z^+$ is \textbf{accessible} if any finite coloring of $\Z$ admits arbitrarily long monochromatic walks over $S$.
We say a set $S \subset \Z^+$ is \textbf{$k$-walkable} if for any $k$-coloring of $\Z$, there are infinitely long monochromatic walks over $S$. A set that is $k$-walkable for all $k$ is called \textbf{infinitely walkable}. (Note the slight distinction between accessible and walkable sets: accessibility asks for arbitrarily long walks, walkability for infinite ones.)
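In code, the walk condition is a simple check on consecutive differences. A trivial sketch (helper name ours):

```python
def is_walk(seq, S):
    """True if all consecutive differences of seq lie in the set S."""
    return all(b - a in S for a, b in zip(seq, seq[1:]))

# The squares 1, 4, 9, 16 form a walk over the odd numbers:
assert is_walk([1, 4, 9, 16], {3, 5, 7})
assert not is_walk([1, 4, 10], {3, 5, 7})
```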
We briefly note the connections between these types of sets and ladders. It is clear that all ladders are accessible, but Jungi\'c provides an example of an accessible sequence that is not a ladder \cite{jungic2005conjecture}. It is not immediately obvious whether a ladder must be infinitely walkable, or vice versa; however, the authors of \cite{GRS16} provide examples of ladders that are not infinitely walkable and of infinitely walkable sets that are not ladders.
The following result parallels our earlier density result regarding ladders.
\begin{theorem}\label{Walkable Density 1}
Any set $S \subset \Z^+$ with upper density 1 is infinitely walkable.
\end{theorem}
\begin{proof}
We will construct an infinite set $H$ such that $H-H \subseteq S$. This will immediately imply $S$ is infinitely walkable (see \cite{GRS16}). We proceed inductively, by constructing a sequence of sets $H_1 \subset H_2 \subset H_3 \subset \cdots$ such that $H_i - H_i \subset S$ for all $i$.
To begin, take $H_1 = \{h_1\}$ for some $h_1 \in S$. Such an element must exist by the density of $S$. Now, say we have $H_k = \{h_1,\dots,h_k\}$ such that $H_k-H_k \subseteq S$. Then fix $n > h_k$ and consider the sets $n-H_k, 2n-H_k,3n-H_k,\dots$. Each of these sets $tn-H_k$ lies in the interval $((t-1)n,tn)$ and so they are mutually disjoint. Assume for the sake of contradiction that none of these sets is contained entirely in $S$. Then in each interval $((t-1)n,tn)$, $S$ is missing at least one element, and so for all $N > 2n(n-1)$ the density of $S$ on an interval $[1,N]$ is bounded by $$\frac {n-1}N \left\lceil{\frac Nn}\right\rceil < \frac {n-1}{n}+\frac{n-1}{N} < \frac{2n-1}{2n},$$ contradicting the upper density assumption on $S$.
Then, by contradiction, we have some $tn$ such that all of its differences with elements of $H_k$ are in $S$. This gives $(H_k \cup \{tn\}) - (H_k \cup \{tn\}) \subseteq S$, so we let $H_{k+1} = H_k \cup \{tn\}$.
Finally, take $H = \bigcup_{i = 1}^\infty H_i$, which is an infinite set. Any two of its elements lie in some $H_k$ for sufficiently large $k$, and so their difference lies in $S$. Thus we conclude that $H-H \subseteq S$, completing the proof.
\end{proof}
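The inductive construction in this proof can be imitated greedily on finite sets. A sketch (function name ours, with a finite set standing in for a density-1 set):

```python
def grow_difference_set(S, size, search_limit):
    """Greedily build H with all pairwise differences in S, imitating the
    inductive construction (S is a finite set standing in for a dense set)."""
    H = []
    for candidate in sorted(S):
        if candidate > search_limit:
            break
        if all(candidate - h in S for h in H):
            H.append(candidate)
            if len(H) == size:
                break
    return H

# For S = the positive multiples of 3, pairwise differences stay inside S:
S = set(range(3, 3000, 3))
H = grow_difference_set(S, 5, 3000)
assert len(H) == 5
assert all(b - a in S for a in H for b in H if b > a)
```

Unlike the proof, this greedy loop can get stuck on an adversarial finite set; the density argument is what guarantees the construction never stalls in the infinite setting.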
\subsection{Walkability Order}
For sets that are not infinitely walkable, we define the \textbf{order} of a set as follows: $$\ord(S) := \sup\{k \mid S \text{ is $k$-walkable}\}.$$
We prove the following theorem concerning the order of sets whose elements grow quickly, improving on a similar result of Guerreiro, Ruzsa, and Silva in \cite{GRS16} by nearly a factor of two.
\begin{theorem}\label{Walkability Order}
Let $S = \{s_1,s_2,\dots\} \subset \Z^+$ with $s_1 < s_2 < \cdots$, and suppose $\liminf_{i\to\infty} (s_{i+k}-s_i) = \infty$. Then $\ord(S) \le k+1$.
\end{theorem}
For comparison, see the proof of Theorem 9 in \cite{GRS16}.
\begin{proof}
We construct a $(k+2)$-coloring of $\Z^+$ such that every monochromatic walk over $S$ is of finite length. First, partition $\Z^+$ into intervals in the following manner:
\begin{enumerate}
\item Set $I_1 = \{1\}$.
\item For all $t > 1$, $I_t$ begins immediately after $I_{t-1}$ ends, and $|I_t| = s_N$, where $N$ is chosen such that for all $n > N$, $s_{n+k}-s_{n} > \sum_{i = 1}^{t-1} |I_i|$. Such an $N$ must exist by the assumptions on $S$.
\end{enumerate}
With this partition, we have the following fact: an element $x$ in interval $I_t$ is adjacent to at most $k$ elements of the set $I_1 \cup \dots \cup I_{t-2}$. Indeed, for $x$ to be adjacent to an element $y$ of this set, the two must differ by some $s_n \in S$ with $s_n > |I_{t-1}| = s_N$, and hence $n > N$. Then $x-s_{n+k} < x-s_n-\sum_{i = 1}^{t-2} |I_i| \le 0$, and so there are at most $k$ values of $s_i$ such that $x-s_i \in I_1 \cup \dots \cup I_{t-2}$.
We now color the integers using the elements of $[k+2]$. In each interval $I_t$, we restrict ourselves to the $k+1$ elements of $[k+2]$ not congruent to $t \bmod (k+2)$. The coloring proceeds as follows: each element $x \in I_t$ is adjacent to at most $k$ elements in $I_1 \cup \dots \cup I_{t-2}$ by the fact above, and we have $k+1$ color choices in this interval, so we choose a color for $x$ that differs from the colors of all such neighbors, if any exist. In this coloring, no element is adjacent to a same-colored element more than one interval before it, so any monochromatic walk over $S$ is contained in a set of consecutive intervals. If this set contained $k+2$ consecutive intervals, it would contain an interval in which its color is not used, which is impossible. Thus each monochromatic walk over $S$ in this coloring is contained in at most $k+1$ finite intervals, and is therefore finite, which completes the proof.
\end{proof}
As a specific useful case of this theorem, we present the following corollary.
\begin{corollary}\label{Walkability Order 2}
Let $S = \{s_i\}$ with $\liminf\{s_{i+1}-s_i\} = \infty$. Then $\ord(S) \leq 2$.
\end{corollary}
This answers a problem of \cite{GRS16} that asks for the order of the set of squares, which those authors showed to be at least 2. Corollary \ref{Walkability Order 2} shows that the order of this set must therefore equal 2. Moreover, this example shows that, given only the above assumptions on $S$, the bound of Theorem \ref{Walkability Order} on the walkability order of $S$ is tight.
\subsection{Accessibility of General Directed Graphs}
We can also study accessibility and walkability through the distance graph $G(S)$. Guerreiro, Ruzsa, and Silva show that a set $S$ is accessible if and only if $G(S)$ has infinite chromatic number, that is, if any finite coloring of $G(S)$ contains a pair of adjacent vertices of the same color \cite{GRS16}. Their proof extends readily to all acyclic directed graphs, but the case of general directed graphs was left open. We state their result here and then extend it to arbitrary directed graphs.
\begin{theorem}[Guerreiro, Ruzsa, Silva \cite{GRS16}]\label{Accessible Graph}
Let $G$ be an acyclic directed graph. Then $G$ has infinite chromatic number if and only if $G$ is accessible.
\end{theorem}
We prove the following extension of this theorem.
\begin{theorem}\label{General Directed Graphs}
Let $G$ be a directed graph with no loops. Then $G$ has infinite chromatic number if and only if $G$ is accessible.
\end{theorem}
\begin{proof}
Recall that $G$ having infinite chromatic number means that any finite coloring admits a monochromatic path consisting of a single edge, whereas accessibility means that any finite coloring admits arbitrarily long monochromatic paths. Accessibility therefore implies infinite chromatic number.
For the other direction, impose an arbitrary ordering on the vertices $V$ and then partition the edges of $G$ into two sets $E_1$ and $E_2$, where $E_1 = \{(v_1,v_2) \mid v_1 < v_2\}$ and $E_2$ its complement. Then it is clear that the two graphs $(V,E_1)$ and $(V,E_2)$ are acyclic, and both are subgraphs of $G$. Now we show that at least one of these graphs has infinite chromatic number. Assuming otherwise, there would exist some finite colorings $\chi_1$ and $\chi_2$ such that for all $(x,y) \in E_1$, $\chi_1(x) \neq \chi_1(y)$, and similarly for $(x,y) \in E_2$, $\chi_2(x) \neq \chi_2(y)$. Then consider the Cartesian product $\chi_1 \times \chi_2$, which is again a finite coloring. Any pair of adjacent vertices in $G$ must be connected by an edge in either $E_1$ or $E_2$, and therefore differ in at least one coordinate of their color in this coloring. Thus in this finite coloring of $G$, no pair of adjacent vertices share the same color, which contradicts the assumption that $G$ has infinite chromatic number. Then, by contradiction, one of these two acyclic subgraphs of $G$ also has infinite chromatic number. Then any coloring of the vertices of $G$ is a coloring of this subgraph, which by Theorem \ref{Accessible Graph} contains arbitrarily long monochromatic paths.
\end{proof}
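The product-coloring step of this proof is constructive and easy to illustrate. A sketch (function names ours):

```python
def product_coloring(chi1, chi2):
    """Combine proper colorings of two edge-subgraphs into a proper coloring
    of their union by taking ordered pairs of colors."""
    return {v: (chi1[v], chi2[v]) for v in chi1}

def is_proper(coloring, edges):
    """Check that no edge joins two vertices of the same color."""
    return all(coloring[u] != coloring[v] for u, v in edges)

# E1 and E2 partition the edges; chi1 is proper on E1, chi2 on E2:
E1, E2 = [(0, 1), (2, 3)], [(0, 2), (1, 3)]
chi1 = {0: 'a', 1: 'b', 2: 'a', 3: 'b'}
chi2 = {0: 'x', 1: 'x', 2: 'y', 3: 'y'}
assert is_proper(product_coloring(chi1, chi2), E1 + E2)
```

The number of colors multiplies, but stays finite, which is all the contradiction in the proof requires.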
\section{Further Work}
This paper answers a number of questions from \cite{GRS16}, but not all of them. One of the very interesting open problems is the question of whether a $2$-ladder is necessarily a ladder. We offer a density variant as well: Say $S \subseteq \Z$ is $\alpha$-Szemer\'edi if any $X \subseteq \Z^+$ with upper density $\alpha$ contains arbitrarily long arithmetic progressions with common difference in $S$. The conjecture states that for any $S$, $\inf\{\alpha: S \text{ is $\alpha$-Szemer\'edi}\} \in \{0,1\}$.
\section{Acknowledgements}
This research was carried out at the Duluth REU under the supervision of Joe Gallian. Duluth REU is supported by the University of Minnesota Duluth and by grants NSF-1358659 and NSA H98230-16-1-0026. Special thanks to Eric Riedl and Joe Gallian for editing help. Thanks as well to the UMD car rental program, because I was surprisingly productive during the long walks back to my apartment after returning their cars.
\bibliographystyle{acm}
\section*{Acknowledgments}
The authors would like to thank Paul Constantine for helpful
discussions and for sharing a preprint of his paper, which inspired
this work. We would also like to thank Omar Knio and Justin Winokur
for many helpful discussions, and Tom Coles for help with the chemical
kinetics example. P.\ Conrad was supported during this work by a
Department of Defense NDSEG Fellowship and an NSF graduate
fellowship. P.\ Conrad and Y.\ Marzouk acknowledge additional support
from the Scientific Discovery through Advanced Computing (SciDAC)
program funded by the US Department of Energy, Office of Science,
Advanced Scientific Computing Research under award number
DE-SC0007099.
\bibliographystyle{siam}
\section{Adaptive polynomial approximations}
\label{sec:adaptive}
When constructing a polynomial approximation of a black-box computational model, there are two essential questions: first, which basis terms should be included in the expansion; and second, what are the coefficients of those basis terms? The Smolyak construction allows detailed control over the truncation of the polynomial expansion and the work required to compute it. Since we typically do not have \textit{a priori} information about the functional relationship generated by a black-box model, we develop an adaptive approach to tailor the Smolyak approximation to this function, following the dimension-adaptive quadrature approaches of Gerstner \& Griebel~\cite{Gerstner2003} and Hegland~\cite{Hegland2003}.
The Smolyak algorithm is well suited to an adaptive approach. The telescoping sum converges in the same sense as the constituent one-dimensional operators as the index set grows to include $\mathbb{N}^d_0$, so we can simply add more terms to improve the approximation until we are satisfied. We separate our adaptive approach into two components: local improvements to the Smolyak multi-index set and a global stopping criterion.
\subsection{Dimension adaptivity}
Dimension adaptivity is responsible for identifying multi-indices to add to a Smolyak multi-index set in order to improve its accuracy. A standard refinement approach is simply to grow an isotropic simplex of side length $n$; Gerstner \& Griebel \cite{Gerstner2003} and Hegland \cite{Hegland2003} instead suggest a series of greedy refinements that customize the Smolyak algorithm to a particular problem.
The refinement used in \cite{Gerstner2003} is to select a multi-index $\mathbf{k} \in \mathcal{K}$ and to add the forward neighbors of $\mathbf{k}$ that are admissible. The multi-index $\mathbf{k}$ is selected via an error indicator $\epsilon(\mathbf{k})$. We follow \cite{Gerstner2003} and assume that whenever $\mathbf{k}$ contributes strongly to the result of the algorithm, it represents a subspace that likely needs further refinement.
Let $\mathbf{k}$ be a multi-index such that $\mathcal{K}^\prime := \mathcal{K} \cup \{\mathbf{k}\}$, where $\mathcal{K}$ and $\mathcal{K}^\prime$ are admissible multi-index sets. The triangle inequality (for some appropriate norm, see Section \ref{s:adaptivecomments}) bounds the change in the Smolyak approximation produced by adding $\mathbf{k}$ to $\mathcal{K}$, yielding a useful error indicator:
\begin{equation}
\label{eq:localError}
\|A(\mathcal{K}^\prime,d,\mathcal{L}) - A(\mathcal{K},d,\mathcal{L}) \| \leq \| \Delta_{{k}_1}^1 \otimes \cdots \otimes \Delta_{{k}_d}^d \| =: \epsilon(\mathbf{k}) .
\end{equation}
Conveniently, this error indicator does not change as $\mathcal{K}$ evolves, so we need only compute it once. At each adaptation step, we find the $\mathbf{k}$ that maximizes $\epsilon(\mathbf{k})$ and that has at least one admissible forward neighbor. Then we add those forward neighbors.
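A minimal Python sketch of this selection-and-refinement step (function names are ours; \texttt{eps} stands in for a user-supplied indicator $\epsilon(\mathbf{k})$, and multi-indices are tuples):

```python
def _forward(k):
    """Forward neighbors k + e_i."""
    return [tuple(v + (i == j) for j, v in enumerate(k)) for i in range(len(k))]

def _backward(k):
    """Backward neighbors k - e_i that stay in N_0^d."""
    return [tuple(v - (i == j) for j, v in enumerate(k))
            for i in range(len(k)) if k[i] > 0]

def refine_step(K, eps):
    """One greedy step: among indices in K with an admissible forward
    neighbor not yet in K, pick k maximizing eps(k); add those neighbors."""
    best, best_new = None, []
    for k in K:
        new = [f for f in _forward(k)
               if f not in K and all(b in K for b in _backward(f))]
        if new and (best is None or eps(k) > eps(best)):
            best, best_new = k, new
    K |= set(best_new)
    return best

# Starting from the root index, the first step refines (0, 0):
K = {(0, 0)}
refine_step(K, lambda k: 1.0)
assert K == {(0, 0), (1, 0), (0, 1)}
```

Repeating this step until a stopping criterion fires yields the adaptive loop; maintaining a priority queue over $\epsilon(\mathbf{k})$ would avoid the linear scan.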
\subsection{Termination criterion}
Now that we have a strategy to locally improve a multi-index set, it is useful to have a global estimate of the error of the approximation, $\epsilon_g$. We cannot expect to compute the exact error, but even a rough estimate is useful. We follow Gerstner \& Griebel's choice of global error indicator
\begin{equation}
\label{eq:globalError}
\epsilon_g := \sum \epsilon(\mathbf{k}) ,
\end{equation}
where the sum is taken over all the multi-indices that are eligible for local adaptation at any particular step (i.e., that have admissible forward neighbors) \cite{Gerstner2003}. The algorithm may be terminated when a particular threshold of the global indicator is reached, or when it falls by a specified amount.
\subsection{Error indicators and work-considering algorithms}
\label{s:adaptivecomments}
Thus far we have presented the adaptation strategy without reference to the problem of polynomial approximation. In this specific context, we use the $L^2(\mathbf{X}, w)$ norm in (\ref{eq:localError}), because it corresponds to the convergence properties of pseudospectral approximation and thus seems an appropriate target for greedy refinements. This choice is a heuristic to accelerate performance---albeit one that is simple and natural, and has enjoyed success in numerical experiments (see Section~\ref{sec:experiments}). Moreover, the analysis of external aliasing in Theorem \ref{thm:smolyakExternal} suggests that, in the case of pseudospectral approximation, significant missing polynomial terms alias onto some of the \textit{included} lower-order coefficients, giving the algorithm a useful indication of which direction to refine. This behavior helps reduce the need for smoothness in the coefficient pattern. Section \ref{sec:quadRules} provides a small fix that further helps with even or odd functions.
One is not required to use this norm to define $\epsilon(\mathbf{k})$, however, and it is possible that other choices could serve as better heuristics for some problems. Unfortunately, making definitive statements about the properties or general utility of these heuristic refinement schemes is challenging. The approach described above is intended to be broadly useful, but specific applications may require experimentation to find better choices.
Beyond the choice of norm, a commonly considered modification to $\epsilon (\mathbf{k})$ is to incorporate a notion of the computational effort required to refine $\mathbf{k}$. Define $n(\mathbf{k})$ as the amount of work to refine the admissible forward neighbors of $\mathbf{k}$, e.g., the number of new function evaluation points. \cite{Gerstner2003} discusses an error indicator that provides a parameterized sliding scale between selecting the term with highest $\epsilon (\mathbf{k})$ and the lowest $n(\mathbf{k})$:
\begin{equation}
\epsilon_{w,1}(\mathbf{k}) = \max \left\{ w \frac{\epsilon (\mathbf{k})}{\epsilon (\mathbf{1})}, (1 - w)\frac{n (\mathbf{1})}{n (\mathbf{k})}\right\}.
\end{equation}
Here $w \in [0,1]$ is the tuning parameter, and $\epsilon (\mathbf{1})$ and $n (\mathbf{1})$ are the indicator and cost of the first term. Setting $w=1$ considers only the standard error indicator, while $w=0$ considers only the cost. A different indicator with a similar intent is
\begin{equation}
\epsilon_{w,2}(\mathbf{k}) = \epsilon (\mathbf{k}) - \tilde{w} n(\mathbf{k}),
\end{equation}
where $\tilde{w}>0$ describes a conversion between error and work. Both of these methods will sometimes select terms of low cost even if they do not appear to provide immediate benefit to the approximation. Yet we find both methods to be challenging to use in practice, because of the difficulty of selecting the tuning parameter. One can remove this particular tuning requirement by taking a ratio:
\begin{equation}
\epsilon_{w,3}(\mathbf{k}) = \frac{\epsilon (\mathbf{k}) }{n(\mathbf{k})}.
\end{equation}
This indicator looks for ``efficient'' terms to refine---ones that are expected to yield greater error reduction at less cost---rather than simply the highest-error directions. We performed some numerical experiments with these methods, but none of the examples demonstrated significant improvement. Furthermore, poor choices of tuning parameters can be harmful because they can essentially make the algorithm revert to a non-adaptive form. We do not give detailed results here for those experiments because they are not particularly conclusive; some types of coefficient patterns may benefit from work-considering approaches, but this remains an open problem.
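For concreteness, the three indicators above transcribe directly into code (function names ours; $\epsilon(\mathbf{k})$ and $n(\mathbf{k})$ are passed in as numbers):

```python
def eps_w1(eps_k, n_k, eps_1, n_1, w):
    """Sliding scale between error (w = 1) and cost (w = 0)."""
    return max(w * eps_k / eps_1, (1 - w) * n_1 / n_k)

def eps_w2(eps_k, n_k, w_tilde):
    """Error penalized by work at exchange rate w_tilde > 0."""
    return eps_k - w_tilde * n_k

def eps_w3(eps_k, n_k):
    """'Efficiency' indicator: error reduction per unit of new work."""
    return eps_k / n_k

# With w = 1, only the normalized error indicator matters:
assert eps_w1(0.5, 20, 1.0, 4, 1.0) == 0.5
# With w = 0, only the relative cost matters:
assert eps_w1(0.5, 20, 1.0, 4, 0.0) == 0.2
```

Note how $\epsilon_{w,1}$ requires the reference values $\epsilon(\mathbf{1})$ and $n(\mathbf{1})$ for normalization, while $\epsilon_{w,3}$ is tuning-free.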
On a similar note, using $\epsilon_g$ as a termination criterion is also a heuristic. As our experiments in Section \ref{sec:experiments} will show, for most smooth functions $\epsilon_g$ is an excellent estimate of the approximation accuracy. In other cases, the indicator can be quite poor; hence one should not rely on it exclusively. In practice, we typically terminate the algorithm based on a combination of elapsed wall clock time, the global error indicator, and an error estimate computed from limited \textit{ad hoc} sampling.
\section{Full tensor approximations}
\label{sec:approximations}
Tensorization is a common approach for lifting one-dimensional operators to higher dimensions. Not only are tensor products computationally convenient, but they provide much useful structure for analysis. In this section, we develop some essential background for computable tensor approximations, then apply it to problems of (i) approximating integrals with numerical quadrature; and (ii) approximating projection onto polynomial spaces with pseudospectral methods. In particular, we are interested in analyzing the errors associated with these approximations and in establishing conditions under which the approximations are \textit{exact}.
\subsection{General setting}
Consider a collection of one-dimensional linear operators $\mathcal{L}^{(i)}$, where $(i)$ indexes the operators used in different dimensions. In this work, $\mathcal{L}^{(i)}$ will be either an integral operator or an orthogonal projector onto some polynomial space. We can extend a collection of these operators into higher dimensions by constructing the tensor product operator
\begin{equation}
\label{e:tpoperator}
\mathcal{L}^{(\mathbf{d})} := \mathcal{L}^{(1)} \otimes \cdots \otimes \mathcal{L}^{(d)} .
\end{equation}
The one-dimensional operators need not be identical; the resulting tensor operator inherits its properties dimension by dimension. The bold parenthetical superscript refers to the tensor operator, as opposed to the constituent one-dimensional operators.
As the true operators are not available computationally, we work with a convergent sequence of computable approximations, $\mathcal{L}^{(i)}_m$, such that
\begin{equation}
\| \mathcal{L}^{(i)} - \mathcal{L}^{(i)}_m \| \to 0 \ \mathrm{ as } \ m \to \infty
\end{equation}
in some appropriate norm. Taking the tensor product of these approximations provides an approximation to the full tensor operator, $\mathcal{L}^{(\mathbf{d})}_\mathbf{m}$, where the level of the approximation may be individually selected in each dimension, so the tensor approximation is identified by a multi-index $\mathbf{m}$. Typically, and in the cases of interest in this work, the tensor approximation will converge in the same sense as the one-dimensional approximation as all components of $\mathbf{m} \to \infty$.
An important property of approximation algorithms is whether they are \textit{exact} for some inputs; characterizing this set of inputs allows us to make useful statements at finite order.
\begin{Definition}[Exact Sets]
For an operator $\mathcal{L}$ and a corresponding approximation $\mathcal{L}_m$, define the exact set as $\mathcal{E}(\mathcal{L}_m) := \{f: \mathcal{L}(f) = \mathcal{L}_m(f)\}$ and the half exact set $\mathcal{E}_2(\mathcal{L}_m) := \{f : \mathcal{L}(f^2) = \mathcal{L}_m(f^2)\}$.
\end{Definition}
The half exact set will help connect the exactness of a quadrature rule to that of the closely related pseudospectral operators. This notation is useful in proving the following lemma, which relates the exact sets of one-dimensional approximations and tensor approximations.
\begin{lemma}
\label{thm:tensorAccuracy}
If a tensor approximation $\mathcal{L}^{(\mathbf{d})}_\mathbf{m}$ is constructed from one-dimensional approximations $\mathcal{L}^{(i)}_m$ with known exact sets, then
\begin{equation}
\mathcal{E}(\mathcal{L}^{(1)}_{{m}_1}) \otimes \cdots \otimes \mathcal{E}(\mathcal{L}^{(d)}_{{m}_d}) \, \subseteq \, \mathcal{E}(\mathcal{L}^{(\mathbf{d})}_\mathbf{m})
\end{equation}
\end{lemma}
\proof{It is sufficient to show that the approximation is exact for an arbitrary monomial input $f (\mathbf{x} ) = f^{(1)}(x^{(1)}) f^{(2)}(x^{(2)}) \cdots f^{(d)}(x^{(d)})$ where $f^{(i)}(x^{(i)}) \in \mathcal{E}(\mathcal{L}^{(i)}_{{m}_i})$, because we may extend to sums by linearity:
\begin{eqnarray*}
\mathcal{L}^{(\mathbf{d})}_{\mathbf{m}}(f^{(1)} \cdots f^{(d)}) & = & \mathcal{L}^{(1)}_{{m}_1}(f^{(1)}) \otimes \cdots \otimes \mathcal{L}^{(d)}_{{m}_d}(f^{(d)}) \\
& = & \mathcal{L}^{(1)}(f^{(1)}) \otimes \cdots \otimes \mathcal{L}^{(d)}(f^{(d)}) = \mathcal{L}^{(\mathbf{d})} ( f ) .
\end{eqnarray*}
The first step uses the tensor product structure of the operator and the second uses the definition of exact sets.
}\endproof
\subsection{Multi-indices}
Before continuing, we must make a short diversion to multi-indices, which provide helpful notation when dealing with tensor problems. A multi-index is a vector $\mathbf{i} \in \mathbb{N}^d_0$. An important notion for multi-indices is that of a \textit{neighborhood}.
\begin{Definition}[Neighborhoods of multi-indices]
A \emph{forward neighborhood} of a multi-index $\mathbf{k}$ is the multi-index set $n_f(\mathbf{k}) := \{\mathbf{k}+\mathbf{e}_i : i \in \{1, \ldots, d\} \}$, where the $\mathbf{e}_i$ are the canonical unit vectors. The \emph{backward neighborhood} of a multi-index $\mathbf{k}$ is the multi-index set $n_b(\mathbf{k}) := \{\mathbf{k}-\mathbf{e}_i : i \in \{1, \ldots, d\},\ \mathbf{k}-\mathbf{e}_i \in \mathbb{N}^d_0 \}$.
\end{Definition}
Smolyak algorithms rely on multi-index sets that are \emph{admissible}.
\begin{Definition}[Admissible multi-indices and multi-index sets]
A multi-index $\mathbf{k}$ is admissible with respect to a multi-index set $\mathcal{K}$ if $n_b(\mathbf{k}) \subseteq \mathcal{K}$. A multi-index set $\mathcal{K}$ is admissible if every $\mathbf{k} \in \mathcal{K}$ is admissible with respect to $\mathcal{K}$.
\end{Definition}
Two common admissible multi-index sets with simple geometric structure are \emph{total order} multi-index sets and \emph{full tensor} multi-index sets. One often encounters total order sets in the sparse grids literature and full tensor sets when dealing with tensor grids of points. The total order multi-index set $\mathcal{K}^{t}_n$ comprises those multi-indices that lie within a $d$-dimensional simplex of side length $n$:
\begin{equation}
\mathcal{K}^{t}_n := \{\mathbf{k} \in \mathbb{N}_0^d: \|\mathbf{k}\|_1 \leq n\}
\end{equation}
The full tensor multi-index set $\mathcal{K}^{f}_\mathbf{n}$ is the complete grid of indices bounded term-wise by a multi-index $\mathbf{n}$:
\begin{equation}
\mathcal{K}^{f}_\mathbf{n} := \{\mathbf{k} \in \mathbb{N}_0^d: \forall i \in \{1 \ldots d\}, \, {k}_i < {n}_i\}
\end{equation}
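These definitions translate directly into code. The following sketch (function names ours; multi-indices represented as tuples) builds both families of sets and checks admissibility:

```python
from itertools import product

def forward_neighbors(k):
    """n_f(k): all k + e_i."""
    return [tuple(v + (i == j) for j, v in enumerate(k)) for i in range(len(k))]

def backward_neighbors(k):
    """n_b(k): all k - e_i that remain in N_0^d."""
    return [tuple(v - (i == j) for j, v in enumerate(k))
            for i in range(len(k)) if k[i] > 0]

def is_admissible(K):
    """K is admissible if it contains every backward neighbor of its members."""
    return all(all(b in K for b in backward_neighbors(k)) for k in K)

def total_order_set(d, n):
    """Multi-indices in the d-dimensional simplex of side length n."""
    return {k for k in product(range(n + 1), repeat=d) if sum(k) <= n}

def full_tensor_set(n):
    """Multi-indices bounded term-wise (strictly) by the multi-index n."""
    return set(product(*(range(ni) for ni in n)))

# Both standard families are admissible:
assert is_admissible(total_order_set(2, 3))
assert is_admissible(full_tensor_set((2, 3)))
assert forward_neighbors((1, 0)) == [(2, 0), (1, 1)]
```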
\subsection{Integrals and quadrature}
Let $X^{(i)}$ be an open or closed interval of the real line $\mathbb{R}$. Then we define the weighted integral operator in one dimension as follows:
\begin{equation}
\mathcal{I}^{(i)}(f) := \int_{X^{(i)}} f(x) w^{(i)}(x)\, dx
\end{equation}
where $f:X^{(i)} \to \mathbb{R}$ is some real-valued function and $w^{(i)}: X^{(i)} \to \mathbb{R}^+$ is an integrable weight function. We may extend to higher dimensions by forming the tensor product integral $\mathcal{I}^{(\mathbf{d})}$, which uses separable weight functions and Cartesian product domains.
Numerical quadrature approximates the action of an integral operator $\mathcal{I}^{(i)}$ with a weighted sum of point evaluations. For some family of quadrature rules, we write the ``level $m$'' quadrature rule, comprising $p^{(i)}(m)$ points for a growth function $p^{(i)}: \mathbb{N} \rightarrow \mathbb{N}$, as
\begin{equation}
\mathcal{I}^{(i)}(f) \approx \mathcal{Q}^{(i)}_m(f) := \sum_{j=1}^{p^{(i)}(m)} w_j^{(i)} f(x_j^{(i)}) .
\end{equation}
We call $p^{(i)}(m)$ the growth rate of the quadrature rule, and its form depends on the quadrature family; some rules only exist for certain numbers of points and others may be tailored, for example, to produce linear or exponential growth in the number of quadrature points with respect to the level.
Many quadrature families are exact if $f$ is a polynomial of a degree $a^{(i)}(m)$ or less, which allows us to specify a well-structured portion of the exact set for these quadrature rules:
\begin{align}
\mathbb{P}_{a^{(i)}(m)} & \subseteq \mathcal{E}(\mathcal{Q}^{(i)}_m)\\
\mathbb{P}_{\mathrm{floor}(a^{(i)}(m)/2)} & \subseteq \mathcal{E}_2(\mathcal{Q}^{(i)}_m),
\end{align}
where $\mathbb{P}_a$ is the space of polynomials of degree $a$ or less. It is intuitive and useful to draw the exact set as in Figure \ref{fig:Quadrature1D}. For this work, we rely on quadrature rules that exhibit polynomial accuracy of increasing order, which is sufficient to demonstrate convergence for functions in $L^2$ \cite{Canuto2006}.
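The polynomial-exactness property is easy to check numerically. The sketch below (hedged to the Gauss-Legendre case) verifies that a three-point rule, with $a = 2 \cdot 3 - 1 = 5$, integrates monomials exactly through degree five but not degree six:

```python
import numpy as np

x, w = np.polynomial.legendre.leggauss(3)   # 3-point Gauss-Legendre rule

def monomial_error(d):
    """Quadrature error for x^d against the exact integral over [-1, 1]."""
    exact = 2.0 / (d + 1) if d % 2 == 0 else 0.0
    return abs(np.dot(w, x**d) - exact)

errors = [monomial_error(d) for d in range(8)]
# degrees 0..5 lie in the exact set; degree 6 does not
```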
\begin{figure}
\centering
\includegraphics[scale=.7]{figures/PCE1D.pdf}
\caption{Consider a one-dimensional Gaussian quadrature rule with three points, $\mathcal{Q}^{(1)}_3$, which is exact for fifth-degree polynomials. This diagram depicts the exact set, $\mathcal{E}(\mathcal{Q}^{(1)}_3)$, and half exact set, $\mathcal{E}_2(\mathcal{Q}^{(1)}_3)$, of this quadrature rule.}
\label{fig:Quadrature1D}
\end{figure}
Tensor product quadrature rules are straightforward approximations of tensor product integrals that inherit convergence properties from the one-dimensional case. The exact set of a tensor product quadrature rule includes the tensor product of the constituent approximations' exact sets, as guaranteed by Lemma \ref{thm:tensorAccuracy} and depicted in Figure \ref{fig:SingleTensorQuadrature}.
\begin{figure}
\centering
\includegraphics[scale=.6]{figures/SingleTensorQuadrature.pdf}
\caption{Consider a two-dimensional quadrature rule constructed from three-point Gaussian quadrature rules, $\mathcal{Q}^{(\mathbf{2})}_{(3,3)}$. This diagram depicts the exact set, $\mathcal{E}(\mathcal{Q}^{(\mathbf{2})}_{(3,3)})$, and half exact set, $\mathcal{E}_2(\mathcal{Q}^{(\mathbf{2})}_{(3,3)})$.}
\label{fig:SingleTensorQuadrature}
\end{figure}
\subsection{Polynomial projection}
A polynomial chaos expansion approximates a function with a weighted sum of orthonormal polynomials \cite{Wiener1938,Xiu2002}. Let $\mathcal{H}^{(i)} := L^2 \left (X^{(i)}, w^{(i)} \right )$ be the separable Hilbert space of square-integrable functions $f: X^{(i)} \rightarrow \mathbb{R}$, with inner product defined by the weighted integral $\langle f,g \rangle = \mathcal{I}^{(i)}(fg)$, and $w^{(i)}(x)$ normalized so that it may represent a probability density. Let $\mathbb{P}^{(i)}_n$ be the space of univariate polynomials of degree $n$ or less. Now let $\mathcal{P}_n^{(i)}:\mathcal{H}^{(i)} \rightarrow \mathbb{P}^{(i)}_n$ be an orthogonal projector onto this subspace, written in terms of polynomials $\{ \psi_j^{(i)}(x): j \in \mathbb{N}_0 \}$ orthonormal with respect to the inner product of $\mathcal{H}^{(i)}$:
\begin{equation}
\label{eq:truncated1DPCE}
\mathcal{P}^{(i)}_n(f) := \sum_{j=0}^n \left \langle f(x), \psi^{(i)}_j(x) \right \rangle \psi^{(i)}_j(x) = \sum_{j=0}^n f_j \psi^{(i)}_j(x).
\end{equation}
The polynomial space $\mathbb{P}^{(i)}_n$ is of course the \textit{range} of the projection operator.
These polynomials are dense in $\mathcal{H}^{(i)}$, so the polynomial approximation of any $f \in \mathcal{H}^{(i)}$ converges in the $L^2$ sense as $n \to \infty$ \cite{Canuto2006, Xiu2002}.
If $f \in \mathcal{H}^{(i)}$, the coefficients $f_j := \langle f, \psi^{(i)}_j \rangle$ must satisfy $\sum_{j=0}^\infty f_j^2 < \infty$.
Projections with finite degree $n$ omit terms of the infinite series, thus incurring \emph{truncation error}. We can write this error as
\begin{equation}
\label{eq:truncation}
\left \| f - \mathcal{P}^{(i)}_n(f) \right \|_2^2 = \left \| \sum_{j=n+1}^\infty f_j \psi_j^{(i)} \right \|_2^2 = \sum_{j=n+1}^\infty f_j^2 < \infty.
\end{equation}
Hence, we may reduce the truncation error to any desired level by increasing $n$, removing terms from the sum in (\ref{eq:truncation}) \cite{Canuto2006,Hesthaven2007}.
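To make the coefficient decay concrete, the following sketch computes the leading orthonormal-Legendre coefficients of $f(x) = e^x$ under the uniform probability weight $w = 1/2$ on $[-1,1]$, using a high-order quadrature so that quadrature error is negligible next to truncation error, and evaluates the truncation error via Parseval's identity. This is our own illustration, not a construction from the text:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

x, w = leggauss(50)
w = w / 2.0                       # normalize: uniform probability weight w(x) = 1/2

def psi(j):
    """Orthonormal Legendre polynomial values: <psi_j, psi_k> = delta_jk."""
    return np.sqrt(2 * j + 1) * Legendre.basis(j)(x)

f = np.exp(x)
coeffs = np.array([np.dot(w, f * psi(j)) for j in range(12)])

norm2 = np.dot(w, f**2)           # ||f||^2 = sinh(2)/2
# Truncation error of P_n(f) as the Parseval tail sum of squared coefficients
trunc_err = [norm2 - np.sum(coeffs[:n + 1]**2) for n in range(12)]
```

The zeroth coefficient equals $\sinh(1)$, and the truncation error falls rapidly and monotonically as $n$ grows, as (\ref{eq:truncation}) predicts for this analytic function.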
The $d$-dimensional version of this problem requires approximating functions in the Hilbert space $\mathcal{H}^{(\mathbf{d})} := \mathcal{H}^{(1)} \otimes \cdots \otimes \mathcal{H}^{(d)}$ via a tensor product basis of the univariate polynomials defined above:
\begin{equation}
\mathcal{P}_\mathbf{n}^{(\mathbf{d})}(f) = \sum_{{i}_1=0}^{{n}_1} \cdots \sum_{{i}_d=0}^{{n}_d} \left \langle f, \Psi_\mathbf{i} \right \rangle \Psi_\mathbf{i}
\end{equation}
where $\Psi_\mathbf{i}(\mathbf{x}) := \prod_{j=1}^d \psi_{i_j}^{(j)} \left ( x^{(j)} \right )$. The multi-index $\mathbf{n}$ tailors the range of the projection to include a rectangular subset of polynomials.
As in the one-dimensional case, truncation induces error equal to the sum of the squares of the omitted coefficients, which we may similarly reduce to zero as ${n}_i \to \infty$, $\forall i$. The multivariate polynomial expansion also converges in an $L^2$ sense for any $f \in \mathcal{H}^{(\mathbf{d})}$ \cite{Canuto2006}.
\subsection{Aliasing errors in pseudospectral approximation}
\label{sec:aliasing}
The inner products defining the expansion coefficients above are not directly computable. Pseudospectral approximation provides a practical non-intrusive algorithm by approximating these inner products with quadrature. Define the pseudospectral approximation in one dimension as
\begin{eqnarray}
\mathcal{S}_m^{(i)} (f) &:= &\sum_{j=0}^{q^{(i)}(m)} \mathcal{Q}^{(i)}_m \left ( f\psi_j^{(i)} \right ) \psi_j^{(i)}(x) \nonumber \\
&= & \sum_{j=0}^{q^{(i)}(m)} \tilde{f}_j \psi^{(i)}_j(x)
\end{eqnarray}
where $q^{(i)}(m)$ is the polynomial truncation at level $m$, to be specified shortly \cite{Canuto2006,Hesthaven2007}. Pseudospectral approximations are constructed around a level $m$ quadrature rule, and are designed to include as many terms in the sum as possible while maintaining accuracy. Assuming that $f \in L^2$, we can compute the $L^2$ error between the pseudospectral approximation and an exact projection onto the same polynomial space:
\begin{align}
\left \| \mathcal{P}_{q^{(i)}(m)}^{(i)}(f) - \mathcal{S}^{(i)}_m(f) \right \|_2^2 &= \left \| \sum_{j=0}^{q^{(i)}(m)} f_j \psi_j^{(i)} - \sum_{k=0}^{q^{(i)}(m)} \tilde{f}_k \psi_k^{(i)} \right \|^2_2 = \sum_{j=0}^{q^{(i)}(m)} (f_j - \tilde{f}_j )^2 .
\end{align}
This quantity is the \emph{aliasing error} \cite{Canuto2006,Hesthaven2007}. The error is non-zero because quadrature in general only approximates integrals; hence each $\tilde{f}_i$ is an approximation of $f_i$. The pseudospectral operator also incurs truncation error, as before, which is orthogonal to the aliasing error. We can expand each approximate coefficient as
\begin{eqnarray}
\tilde{f}_j & = & \mathcal{Q}^{(i)}_m \left (f \psi^{(i)}_j \right ) \nonumber \\
&= & \sum_{k=0}^\infty f_k \mathcal{Q}^{(i)}_m \left (\psi^{(i)}_j \psi^{(i)}_k \right) \nonumber \\
&= &\sum_{k=0}^{q^{(i)}(m)} f_k \mathcal{Q}^{(i)}_m \left (\psi^{(i)}_j \psi^{(i)}_k \right ) + \sum_{l=q^{(i)}(m)+1}^\infty f_l \mathcal{Q}^{(i)}_m \left (\psi^{(i)}_j \psi^{(i)}_l \right ) .
\label{e:coeff}
\end{eqnarray}
The first step substitutes in the polynomial expansion of $f$, which we assume is convergent, and rearranges using linearity. The second step partitions the sum around the truncation of the pseudospectral expansion. Although the basis functions are orthonormal, $\langle \psi^{(i)}_j, \psi^{(i)}_k \rangle = \delta_{jk}$, we cannot assume in general that the approximation $\mathcal{Q}^{(i)}_m \left (\psi^{(i)}_j \psi^{(i)}_k \right ) = \delta_{jk}$. Now substitute (\ref{e:coeff}) back into the aliasing error expression:
\begin{equation}
\label{eqn:errorDecomp1D}
\sum_{j=0}^{q^{(i)}(m)} (f_j - \tilde{f}_j )^2 = \sum_{j=0}^{q^{(i)}(m)} \left( f_j - \sum_{k=0}^{q^{(i)}(m)} f_k \, \mathcal{Q}^{(i)}_m \left (\psi^{(i)}_j \psi^{(i)}_k \right ) - \sum_{l=q^{(i)}(m)+1}^\infty f_l \, \mathcal{Q}^{(i)}_m \left (\psi^{(i)}_j \psi^{(i)}_l \right ) \right) ^2
\end{equation}
This form reveals the intimate link between the accuracy of pseudospectral approximations and the polynomial accuracy of quadrature rules. All aliasing is attributed to the inability of the quadrature rule to determine the orthonormality of the basis polynomials, causing one coefficient to corrupt another. The contribution of the first two parenthetical terms on the right of (\ref{eqn:errorDecomp1D}) is called \emph{internal aliasing}, while the third term is called \emph{external aliasing}. Internal aliasing is due to inaccuracies in $\mathcal{Q}^{(i)}_m(\psi^{(i)}_j \psi^{(i)}_k)$ when \textit{both} $\psi^{(i)}_j$ and $\psi^{(i)}_k$ are included in the expansion, while external aliasing occurs when \textit{only one} of these polynomials is included in the expansion. For many practical quadrature rules (and for all those used in this work), if $j\neq k$ and $\psi^{(i)}_j \psi^{(i)}_k \notin \mathcal{E}(\mathcal{Q}^{(i)}_m)$, then the discrete inner product $\mathcal{Q}^{(i)}_m (\psi^{(i)}_j \psi^{(i)}_k )$ is not merely nonzero but \OO{1} \cite{Trefethen2008}. As a result, the magnitude of an aliasing error typically corresponds to the magnitude of the aliased coefficients.
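This failure of discrete orthonormality is easy to observe. In the sketch below (our illustration, using a three-point Gauss-Legendre rule under the uniform probability weight, exact through degree five), the discrete Gram matrix over the half exact set $\{0,1,2\}$ is exactly the identity, while the discrete inner product of $\psi_2$ and $\psi_4$, whose product has degree six, is order one rather than zero:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

x, w = leggauss(3)                 # 3 points: exact through degree 5
w = w / 2.0                        # uniform probability weight on [-1, 1]

def psi(j):
    """Orthonormal Legendre polynomial values at the quadrature nodes."""
    return np.sqrt(2 * j + 1) * Legendre.basis(j)(x)

# Discrete Gram matrix over the half exact set {0, 1, 2}: exactly the identity,
# since each product psi_j * psi_k has degree <= 4 <= 5.
G = np.array([[np.dot(w, psi(j) * psi(k)) for k in range(3)]
              for j in range(3)])

# Outside the exact set the rule cannot "see" orthogonality: psi_2 * psi_4 has
# degree 6, and its discrete inner product is far from zero.
aliased = np.dot(w, psi(2) * psi(4))
```

In our runs the off-set inner product is of magnitude near one, so a coefficient $f_4$ would corrupt $\tilde{f}_2$ at full strength.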
In principle, both types of aliasing error are driven to zero by sufficiently powerful quadrature, but we are left to select $q^{(i)}(m)$ for a particular quadrature level $m$. External aliasing is \OO{1} in the magnitude of the truncation error, and thus it is driven to zero as long as $q^{(i)}(m)$ increases with $m$. Internal aliasing could be \OO{1} with respect to the function of interest, meaning that the procedure neither converges nor provides a useful approximation. Therefore, the obvious option is to include as many terms as possible while setting the internal aliasing to zero.
For quadrature rules with polynomial exactness, we may accomplish this by setting $q^{(i)}(m) = \mathrm{floor}( a^{(i)}(m)/2)$. This ensures that the internal aliasing of $\mathcal{S}^{(i)}_m$ is zero, because $\forall j,k \leq q^{(i)}(m)$, $\psi^{(i)}_j \psi^{(i)}_k \in \mathcal{E}(\mathcal{Q}^{(i)}_m)$. Equivalently, a pseudospectral operator $\mathcal{S}^{(i)}_m$ using quadrature $\mathcal{Q}^{(i)}_m$ has a range corresponding to the half exact set $\mathcal{E}_2(\mathcal{Q}^{(i)}_m)$. Alternatively, we may justify this choice by noting that it makes the pseudospectral approximation exact on its range, $\mathbb{P}_{q^{(i)}(m)}^{(i)} \subseteq \mathcal{E}(\mathcal{S}^{(i)}_m)$.
Given this choice of $q^{(i)}(m)$, we wish to show that the pseudospectral approximation converges to the true function, where the magnitude of the error is as follows:
\begin{equation}
\left \| f - \mathcal{S}^{(i)}_m(f) \right \|^2_2 = \sum_{j=0}^{q^{(i)}(m)} \left( \sum_{k=q^{(i)}(m)+1}^\infty f_k \mathcal{Q}^{(i)}_m \left (\psi^{(i)}_j \psi^{(i)}_k \right ) \right)^2 + \sum_{l={q^{(i)}(m)} +1}^\infty f_l^2 .
\end{equation}
The two terms on the right-hand side are the external aliasing and the truncation error, respectively. We already know that the truncation error goes to zero as $q^{(i)}(m) \to \infty$. The external aliasing also vanishes for functions $f \in L^2$, as the truncated portion of $f$ likewise decreases \cite{Trefethen2008}. In the case of Gaussian quadrature rules, a link to interpolation provides precise rates for the convergence of the pseudospectral operator based on the regularity of $f$ \cite{Canuto2006}.
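The following sketch illustrates this convergence in one dimension: it builds $\mathcal{S}_m$ for $f = e^x$ from $m$-point Gauss-Legendre rules with linear growth, so that $a(m) = 2m-1$ and $q(m) = m-1$, and evaluates the $L^2$ error against a fine reference quadrature. This is an illustration under those stated assumptions, not the general algorithm:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

def pseudospectral_coeffs(f, m):
    """Level-m pseudospectral coefficients, linear growth p(m) = m.
    An m-point Gauss rule is exact through degree a(m) = 2m - 1, so we
    truncate at q(m) = floor(a(m)/2) = m - 1 to avoid internal aliasing."""
    x, w = leggauss(m)
    w = w / 2.0                    # uniform probability weight on [-1, 1]
    q = m - 1
    return np.array([np.dot(w, f(x) * np.sqrt(2*j + 1) * Legendre.basis(j)(x))
                     for j in range(q + 1)])

def l2_error(f, m):
    """||f - S_m(f)||_2, evaluated with a fine reference quadrature."""
    xr, wr = leggauss(100)
    wr = wr / 2.0
    c = pseudospectral_coeffs(f, m)
    approx = sum(c[j] * np.sqrt(2*j + 1) * Legendre.basis(j)(xr)
                 for j in range(len(c)))
    return np.sqrt(np.dot(wr, (f(xr) - approx)**2))

errors = [l2_error(np.exp, m) for m in (2, 4, 6, 8)]
```

For this analytic function the error decays rapidly with the level, consistent with the spectral rates cited above.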
As with quadrature algorithms, our analysis of pseudospectral approximation in one dimension is directly extensible to multiple dimensions via full tensor products. We may thus conclude that $\mathcal{S}^{(\mathbf{d})}_\mathbf{m}$ converges to the projection onto the tensor product polynomial space in the same sense. The exact set follows Lemma~\ref{thm:tensorAccuracy}, and hence the tensor product approximation inherits zero internal aliasing if suitable one-dimensional operators are used.
\section{Conclusions}
This paper gives a rigorous development of Smolyak pseudospectral algorithms, a practical approach for constructing polynomial chaos expansions from point evaluations of a function. A common alternative approach, direct quadrature, has previously been shown to suffer from large errors. We explain these errors as a consequence of internal aliasing and delineate the exact circumstances, derived from properties of the chosen polynomial basis and quadrature rules, under which internal aliasing will occur. Internal aliasing is a problem inherent to direct quadrature approaches, which use a single (sparse) quadrature rule to compute a set of spectral coefficients.
These approaches fail because they substitute a numerical approximation for only a portion of the algorithm, i.e., the evaluation of integrals, without considering the impact of this approximation on the entire construction. For almost all sparse quadrature rules, internal aliasing errors may be overcome only through an inefficient use of function evaluations.
In contrast, the Smolyak pseudospectral algorithm computes spectral coefficients by assembling tensor-product pseudospectral approximations in a coherent fashion that avoids internal aliasing by construction; moreover, it has smaller external aliasing errors. To establish these properties, we extend the known result that the exact set of a Smolyak pseudospectral approximation contains a union of the exact sets of all its constituent tensor-product approximation operators to the case of arbitrary admissible Smolyak multi-index sets. These results are applicable to any choice of quadrature rule and generalized sparse grid, and are verified through numerical demonstrations; hence, we suggest that the Smolyak pseudospectral algorithm is a superior approach in almost all contexts.
A key strength of Smolyak algorithms is that they are highly customizable through the choice of admissible multi-index sets. To this end, we describe a simple alteration to the adaptive sparse quadrature approaches of \cite{Gerstner2003, Hegland2003}, creating a corresponding method for adaptive pseudospectral approximation. Numerical experiments then evaluate the performance of different quadrature rules and of adaptive versus non-adaptive pseudospectral approximation. Tests of the adaptive method on a realistic chemical kinetics problem show multiple order-of-magnitude gains in accuracy over a non-adaptive approach. Although the adaptive strategy will not improve approximation performance for every function, we have little evidence that it is ever harmful and hence widely recommend its use.
While the adaptive approach illustrated here is deliberately simple, many extensions are possible. For instance, as described in Section~\ref{s:adaptivecomments}, measures of computational cost may be added to the dimension refinement criterion. One could also use the gradient of the $L^2$ error indicator to identify optimal directions in the space of multi-indices along which to continue refinement, or to avoid adding all the forward neighbors of the multi-index selected for refinement. These and other developments will be pursued in future work.
A flexible open-source \CC \ code implementing the adaptive approximation method discussed in this paper is available at \url{https://bitbucket.org/mituq/muq/}.
\section{Comparing direct quadrature to Smolyak pseudospectral approximation}
\label{sec:comparison}
The current UQ literature often suggests a \emph{direct quadrature} approach for constructing polynomial chaos expansions \cite{Xiu2009,LeMaitre2010,Eldred2009:local,Huan2012}. In this section, we describe this approach and show that, in comparison to a true Smolyak algorithm, it is inaccurate or inefficient in almost all cases. Our comparisons will contrast the theoretical error performance of the algorithms and provide simple numerical examples that illustrate typical errors and why they arise.
\subsection{Direct quadrature polynomial expansions}
At first glance, direct quadrature is quite simple. First, choose a multi-index set $\mathcal{J}$ to define a truncated polynomial expansion:
\begin{equation}
f \approx \sum_{\mathbf{j \in \mathcal{J}}} \tilde{f}_\mathbf{j} \Psi_\mathbf{j} .
\end{equation}
The index set $\mathcal{J}$ is typically admissible, but need not be. Second, select any $d$-dimensional quadrature rule $\mathcal{Q}^{(\mathbf{d})}$, and estimate every coefficient as:
\begin{equation}
\tilde{f}_\mathbf{j} = \mathcal{Q}^{(\mathbf{d})}(f\Psi_\mathbf{j}) .
\end{equation}
Unlike the Smolyak approach, we are left to choose $\mathcal{J}$ and $\mathcal{Q}^{(\mathbf{d})}$ independently, giving the appearance of flexibility. In practice, this produces a more complex and far more subtle version of the truncation trade-off discussed in Section \ref{sec:approximations}. Below, we will be interested in selecting $\mathcal{Q}$ and $\mathcal{J}$ to replicate the quadrature points and output range of the Smolyak approach, as it provides a benchmark for achievable performance.
Direct quadrature does not converge for every choice of $\mathcal{J}$ and $\mathcal{Q}^{(\mathbf{d})}$; consider the trivial case where $\mathcal{J}$ is held fixed while the quadrature is refined. Including far too many terms in $\mathcal{J}$ relative to the polynomial exactness of $\mathcal{Q}^{(\mathbf{d})}$ could likewise lead to a non-convergent algorithm. Although this behavior contrasts with the straightforward convergence properties of Smolyak algorithms, most reasonable choices for direct quadrature do converge, and hence this is not our primary argument against the approach.
Instead, our primary concern is aliasing in direct quadrature and how it reduces performance at finite order. Both internal and external aliasing are governed by the same basic rule, which is just a restatement of how we defined aliasing in Section \ref{sec:aliasing}.
\begin{Remark}
\label{rem:dqAliasing}
For a multi-index set $\mathcal{J}$ and a quadrature rule $\mathcal{Q}^{(\mathbf{d})}$, the corresponding direct quadrature polynomial expansion has no aliasing between two polynomial terms $\Psi_\mathbf{j}$ and $\Psi_{\mathbf{j}^\prime}$ if $\Psi_\mathbf{j} \Psi_{\mathbf{j}^\prime} \in \mathcal{E}(\mathcal{Q}^{(\mathbf{d})})$.
\end{Remark}
The next two sections compare the internal and external aliasing of the two approaches, using both theory and simple numerical examples.
\subsection{Internal aliasing in direct quadrature}
As an extension of Remark \ref{rem:dqAliasing}, direct quadrature has no internal aliasing whenever every pair $\mathbf{j},\mathbf{j^\prime} \in \mathcal{J}$ has no aliasing. We can immediately conclude that for any basis set $\mathcal{J}$, there is some quadrature rule sufficiently powerful to avoid internal aliasing errors. In practice, however, this rule may not be a desirable one.
\begin{Example}
Assume that for some function with two inputs, we wish to include the polynomial basis terms $(a,0)$ and $(0,b)$. By Remark \ref{rem:dqAliasing}, the product of these two terms must be in the exact set; hence, the quadrature must include at least a full tensor rule of accuracy $(a,b)$. Although we have not asked for any coupling, direct quadrature must assume full coupling of the problem in order to avoid internal aliasing.
\end{Example}
Therefore we reach the surprising conclusion that direct quadrature inserts significant coupling into the problem, whereas we selected a Smolyak quadrature rule in hopes of leveraging the absence of that very coupling---making the choice inconsistent. For most sparse quadrature rules, we cannot include as many polynomial terms as the Smolyak pseudospectral approach without incurring internal aliasing, because the quadrature is not powerful enough in the mixed dimensions.
\begin{Example}
\label{ex:directAliasing}
Let $\mathbf{X}$ be the two-dimensional domain $[-1,1]^2$. Select a uniform weight function, which corresponds to a Legendre polynomial basis. Let $f(x,y) = \psi_0(x)\psi_4(y)$. Use Gauss-Legendre quadrature and an exponential growth rule, such that $p^{(i)}(m) = 2^{m-1}$. Select a sparse quadrature rule based on a total order multi-index set $\mathcal{Q}^2_{\mathcal{K}^{t}_5}$. Figure \ref{fig:ExpDirectQuadrature} shows the exact set of this Smolyak quadrature rule (solid line) along with its half-exact set (dashed line), which encompasses all the terms in the direct quadrature polynomial expansion.
Now consider the $\mathbf{j} = (8,0)$ polynomial, which is in the half-exact set. The product of the $(0,4)$ and $(8,0)$ polynomial terms is $(8,4)$, which is not within the exact set of the sparse rule. Hence, $(0,4)$ aliases onto $(8,0)$ because this quadrature rule has limited accuracy in the mixed dimensions.
Using both the Smolyak pseudospectral and direct quadrature methods, we numerically compute the polynomial expansion for this example. The resulting coefficients are shown in Figure \ref{fig:ExpInternalAliasing}. Even though the two methods use the same information and project $f$ onto the same basis, the Smolyak result has no internal aliasing while direct quadrature shows significant internal aliasing. Although both methods correctly compute the $(0,4)$ coefficient, direct quadrature shows aliasing on $(8,0)$ as predicted, and also on $(10,0)$, $(12,0)$, and $(14,0)$. In this case, direct quadrature is unable to determine the order of the input function, or even whether the input is a function of $x$ or $y$. Alternating terms are computed correctly because of the parity of the functions.
\end{Example}
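The aliasing predicted in this example can be reproduced with a small combination-technique implementation of the sparse rule. The sketch below (our own illustrative code, assuming Gauss-Legendre constituent rules with the exponential growth $p^{(i)}(m) = 2^{m-1}$ stated above) evaluates the direct quadrature estimate of the $(8,0)$ coefficient, $\mathcal{Q}(f\,\Psi_{(8,0)}) = \mathcal{Q}(\psi_8(x)\psi_4(y))$, whose true value is zero:

```python
import numpy as np
from math import comb
from numpy.polynomial.legendre import Legendre, leggauss

def rule_1d(level):
    """Gauss-Legendre rule with exponential growth p(m) = 2^(m-1) points,
    normalized for the uniform probability weight on [-1, 1]."""
    x, w = leggauss(2 ** (level - 1))
    return x, w / 2.0

def smolyak_2d(f, q):
    """Combination-technique Smolyak quadrature on [-1,1]^2 over the
    total-order set |i|_1 <= q, i >= (1,1); for d = 2 the combination
    coefficients are (-1)^(q-|i|) * C(1, q-|i|) on the two outer shells."""
    total = 0.0
    for i1 in range(1, q):
        for i2 in range(1, q):
            s = i1 + i2
            if q - 1 <= s <= q:
                c = (-1) ** (q - s) * comb(1, q - s)
                x1, w1 = rule_1d(i1)
                x2, w2 = rule_1d(i2)
                X1, X2 = np.meshgrid(x1, x2, indexing="ij")
                W = np.outer(w1, w2)
                total += c * np.sum(W * f(X1, X2))
    return total

def psi(j, t):
    """Orthonormal Legendre polynomial evaluated at t."""
    return np.sqrt(2 * j + 1) * Legendre.basis(j)(t)

# f = psi_0(x) psi_4(y); the direct quadrature estimate of the (8,0)
# coefficient is Q(f * Psi_(8,0)) = Q(psi_8(x) psi_4(y)), which should be 0.
g = lambda x, y: psi(8, x) * psi(4, y)
aliased = smolyak_2d(g, q=5)
```

In our runs the estimate is far from zero, consistent with the order-one internal aliasing onto $(8,0)$ shown in Figure \ref{fig:ExpInternalAliasing}; as a sanity check, the same sparse rule integrates the constant function exactly.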
The \OO{1} errors observed in this simple example demonstrate why it is crucial to eliminate internal aliasing in the construction of one-dimensional pseudospectral approximations and to ensure that the full tensor and Smolyak algorithms inherit that property. More complex functions demonstrate the same type of error, except that the errors resulting from multiple source terms are superimposed. Examples of the latter are given by Constantine \textit{et al.} \cite{Constantine2012}.
\begin{figure}[htb]
\centering
\includegraphics[scale=.5]{figures/GaussExpDirectQuadrature.pdf}
\caption{The exact set and polynomials included in the direct quadrature construction from Example \ref{ex:directAliasing}.}
\label{fig:ExpDirectQuadrature}
\end{figure}
\begin{figure}
\centering
\subfloat[Smolyak Pseudospectral]
{
\includegraphics[scale=.7]{figures/InternalSmol.pdf}
}
\qquad
\subfloat[Direct Quadrature]
{
\includegraphics[scale=.7]{figures/InternalDQ.pdf}
}
\caption{Numerical results for Example \ref{ex:directAliasing}; each colored square indicates the log of the coefficient magnitude for the basis function at that position. The circle identifies the correct non-zero coefficient.
}
\label{fig:ExpInternalAliasing}
\end{figure}
There are some choices for which direct quadrature has no internal aliasing: full tensor quadrature rules and, notably, Smolyak quadrature constructed from one-dimensional Gaussian quadrature rules with $p^{(i)}(m) = m$, truncated according to an isotropic total-order multi-index set. However, many useful sparser or more tailored Smolyak quadrature rules, e.g., based on exponential growth quadrature rules or adaptive anisotropic Smolyak index sets, will incur internal aliasing if the basis selection matches the range of the Smolyak algorithm. This makes them a poor choice when a comparable Smolyak pseudospectral algorithm uses the same evaluation points and produces an approximation with the same polynomial terms, but is guaranteed by construction to have zero internal aliasing. Alternately, it is possible to select a sufficiently small polynomial basis to avoid internal aliasing, but this approach requires unnecessary conservatism that could easily be avoided with a Smolyak pseudospectral approximation.
\subsection{External aliasing}
The difference in external aliasing between direct quadrature and Smolyak pseudospectral approximation is much less severe. Both methods exhibit external aliasing from terms far outside the range of the approximation, as such errors are a necessary consequence of using finite order quadrature. Since the methods are constructed from similar constituent one-dimensional quadrature rules, aliasing is of similar magnitude when it occurs.
Comparing Theorem \ref{thm:smolyakExternal}, condition \textit{(b)}, and Remark \ref{rem:dqAliasing}, we observe that if the direct quadrature method has no external aliasing between two basis terms, the equivalent Smolyak pseudospectral algorithm will not either. Yet the two methods perform differently because of their behavior on separable functions. Condition \textit{(a)} of Theorem \ref{thm:smolyakExternal} provides an additional condition under which external aliasing will not occur under a Smolyak pseudospectral algorithm, and thus it has strictly less external aliasing in general.
\begin{Example}
\label{ex:externalAliasing}
If we repeat Example \ref{ex:directAliasing} but choose $f$ to be a polynomial outside the approximation space, $f = \psi_6(x)\psi_6(y)$, we obtain the results in Figure \ref{fig:externalAliasing}. Now every non-zero coefficient is the result of external aliasing. Direct quadrature correctly computes some terms because of either parity or the few cases where Remark \ref{rem:dqAliasing} is satisfied. However, the Smolyak approach has fewer errors because the terms not between $(0,0)$ and $(6,6)$ are governed by condition \textit{(a)} of Theorem \ref{thm:smolyakExternal}, and hence have no external aliasing.
\end{Example}
This example is representative of the general case. Direct quadrature always incurs at least as much external aliasing as the Smolyak approach, and the methods become equivalent if the external term causing aliasing is of very high order. Although both methods will always exhibit external aliasing onto coefficients of the approximation for non-polynomial inputs, the truncation can in principle be chosen to include all the important terms, so that the remaining external aliasing is acceptably small.
\begin{figure}
\centering
\subfloat[Smolyak Pseudospectral]
{
\includegraphics[scale=.7]{figures/ExternalSmol.pdf}
}
\qquad
\subfloat[Direct Quadrature]
{
\includegraphics[scale=.7]{figures/ExternalDQ.pdf}
}
\caption{Numerical results for Example \ref{ex:externalAliasing}; each colored square indicates the log of the coefficient magnitude for the basis function at its position. The circle indicates the correct non-zero coefficient. The Smolyak pseudospectral approach has fewer terms corrupted by external aliasing in this case.
}
\label{fig:externalAliasing}
\end{figure}
\subsection{Summary of comparison}
Compared to the Smolyak pseudospectral approach, direct quadrature yields larger internal \textit{and} external aliasing errors. Because of these aliasing errors, direct quadrature is essentially unable to make efficient use of most sparse quadrature rules. The Smolyak pseudospectral approach, on the other hand, is guaranteed never to have internal aliasing if the one-dimensional pseudospectral operators are chosen according to simple guidelines. We therefore recommend against using direct quadrature. The remainder of the paper will focus on extensions of the basic Smolyak pseudospectral approach.
\section{Introduction}
This file is documentation for the SIAM \LaTeX\ macros, and
provides instruction for submission of your files.
To accommodate authors who electronically typeset their manuscripts,
SIAM supports the use of \LaTeX. To ensure quality typesetting according
to SIAM style standards, SIAM provides a \LaTeX\ macro style file.
Using \LaTeX\ to format a manuscript should simplify the editorial process
and lessen the author's proofreading burden. However,
it is still necessary to proofread the galley proofs with care.
Electronic files should not be submitted until the paper has been
accepted, and then not until requested to do so by someone in the SIAM
office. Once an article is slated for an issue,
someone from the SIAM office will contact the author about any or all
of the following: editorial and stylistic queries,
supplying the source files (and any supplementary macros)
for the properly formatted article, and handling figures.
When submitting electronic files (electronic submissions)
(to {\tt tex@siam.org}) include the journal, issue, and author's
name in the subject line of the message.
Authors are responsible for ensuring that the paper generated
from the source files exactly matches the paper that
was accepted for publication by the review editor. If it does not,
information on how it differs should be indicated in the transmission
of the file. When submitting a file, please be sure to include any
additional macros (other than those provided by SIAM) that will be
needed to run the paper.
SIAM uses MS-DOS-based computers for \LaTeX\ processing. Therefore
all filenames should be restricted to eight characters or less,
plus a three character extension.
Once the files are corrected here at SIAM, we will mail the revised
proofs to be read against the original edited hardcopy
manuscript. We are not
set up to shuttle back and forth varying electronic versions of each
paper, so we must rely on hard copy of the galleys. The author's proofreading
is an important but easily overlooked step. Even if SIAM were not
to introduce a single editorial change into your manuscript, there
would still be a need to check, because electronic transmission
can introduce errors.
The distribution contains the following items: {\tt
siamltex.cls}, the main macro package based on {\tt
article.cls}; {\tt siam10.clo}, for the ten-point size option;\linebreak
{\tt subeqn.clo}, a style option for equation numbering (see \S3 for
an explanation); and {\tt siam.bst},
the style file for use with {\sc Bib}\TeX. Also included are this
file {\tt docultex.tex} and a sample file {\tt lexample.tex}.
The sample file represents a standard application of
the macros. The rest of this paper will highlight
some keys to effective macro use, as well as point out options and
special cases, and describe SIAM style standards to which
authors should conform.
\section{Headings}
The top matter of a journal paper falls into a standard
format. It begins of course with the \verb|\documentclass| command
\begin{verbatim}
\documentclass{siamltex}
\end{verbatim}
Other class options can be included
in the bracketed argument of the command, separated by commas.
Optional arguments include:
\begin{description}
\item[final] Without this option, lines which extend past the
margin will have black boxes next to them to help authors identify
lines that they need to fix, by re-writing or inserting breaks. \verb|final|
turns these boxes off, so that very small margin breaks which are not
noticeable will not cause boxes to be generated.
\item[oneeqnum] Normally \verb|siamltex.cls| numbers equations,
tables, figures, and theorem environments with a decimal number,
composed of the section of the paper, a period, and the number of
the enumerated object (example: 1.2). The sequence of numbering
is also restarted with each new section, so that, for example, the last
equation of section 3 may be 3.10, but the first equation of section
4 would be 4.1. Using \verb|oneeqnum| numbers all equations consecutively
throughout a paper with a single digit.
\item[onethmnum] Using \verb|onethmnum| numbers all theorem-like
environments consecutively throughout a paper with a single digit.
\item[onefignum] Using \verb|onefignum| numbers all figures
consecutively throughout a paper with a single digit.
\item[onetabnum] Using \verb|onetabnum| numbers all tables
consecutively throughout a paper with a single digit.
\end{description}
The title and author parts are formatted using the
\verb|\title| and \verb|\author| commands as described in Lamport
\cite{Lamport}. The \verb|\date|
command is not used. \verb|\maketitle| produces the actual
output of the commands.
The addresses and support acknowledgments are put into the
\verb|\author| commands via \verb|\thanks|. If support is
overall for the authors, the support acknowledgment should
be put in a \verb|\thanks| command in the \verb|\title|.
Specific support should go following the addresses of the
individual authors in the same \verb|\thanks| command.
Sometimes authors have support or addresses in common which
necessitates having multiple \verb|\thanks| commands for
each author. Unfortunately \LaTeX\ does not normally allow this,
so a special procedure must be used. An example of this procedure
follows. Grant information can also be run into both authors'
footnotes.
\begin{verbatim}
\title{TITLE OF PAPER}
\author{A.~U. Thorone\footnotemark[2]\ \footnotemark[5]
\and A.~U. Thortwo\footnotemark[3]\ \footnotemark[5]
\and A.~U. Thorthree\footnotemark[4]}
\begin{document}
\maketitle
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
\footnotetext[2]{Address of A.~U. Thorone}
\footnotetext[3]{Address of A.~U. Thortwo}
\footnotetext[4]{Address of A.~U. Thorthree}
\footnotetext[5]{Support in common for the first and second
authors.}
\renewcommand{\thefootnote}{\arabic{footnote}}
\end{verbatim}
Notice that the footnote marks begin with {\tt [2]}
because the first mark (the asterisk) will be used in the
title for date-received information by SIAM, even if not
already used for support data. This is just one example;
other situations follow a similar pattern.
Following the author and title is the abstract, key words
listing, and AMS subject classification number(s),
designated using the \verb|{abstract}|, \verb|{keywords}|,
and \verb|{AMS}| environments. If
there is only one AMS number, the commands
\verb|\begin{AM}| and \verb|\end{AM}| are used
instead of \verb|{AMS}|. This causes the heading to be
in the singular. Authors are responsible for providing AMS numbers.
They can be found in the Annual Index of Math Reviews, or
through {\tt e-Math} ({\tt telnet e-math.ams.com}; login
and password are both {\tt e-math}).
Left and right running heads should be provided in the
following way.
\begin{verbatim}
\pagestyle{myheadings}
\thispagestyle{plain}
\markboth{A.~U. THORONE AND A.~U. THORTWO}{SHORTER PAPER
TITLE}
\end{verbatim}
\section{Equations and mathematics}
One advantage of \LaTeX\ is that it can automatically number
equations and refer to these equation numbers in text. While plain \TeX's
method of equation numbering (explicit numbering using
\verb|\leqno|) works in the SIAM macro, it is not preferred
except in certain cases. SIAM style guidelines call for
aligned equations in many circumstances, and \LaTeX's
\verb|{eqnarray}| environment is not compatible with
\verb|\leqno|; nor is \LaTeX\ compatible with the plain
\TeX\ commands \verb|\eqalign| and \verb|\leqalignno|. Since
SIAM may have to alter or realign certain groups of
equations, it is necessary to use the \LaTeX\ system of
automatic numbering.
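A minimal sketch of automatic numbering and cross-referencing (the
label name is arbitrary):
\begin{verbatim}
\begin{equation}
Ax = b \label{eq:linsys}
\end{equation}
As shown in (\ref{eq:linsys}), ...
\end{verbatim}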
Sometimes it is desirable to designate subequations of a larger
equation number. The subequations are designated with
(roman font) letters appended after the number. SIAM has
supplemented its macros with the {\tt subeqn.clo} option which
defines the environment \verb|{subequations}|.
\begin{verbatim}
\begin{subequations}\label{EKx}
\begin{equation}
y_k = B y_{k-1} + f, \qquad k=1,2,3,\ldots
\end{equation}
for any initial vector $ y_0$. Then
\begin{equation}
y_k\rightarrow u \mbox{\quad iff\quad} \rho( B)<1.
\end{equation}
\end{subequations}
\end{verbatim}
All equations within the \verb|{subequations}| environment
will keep the same overall number, but the letter
designation will increase.
Clear equation formatting using \TeX\ can be challenging. Aside from
the regular \TeX\ documentation, authors will find Nicholas
J. Higham's book {\em Handbook of Writing for the Mathematical
Sciences\/} \cite{Higham} useful for guidelines and tips on
formatting with \TeX. The book covers many other topics related
to article writing as well.
Authors commonly make mistakes by using
\verb|<|, \verb|>|, \verb|\mid|, and
\verb|\parallel| as delimiters, instead of
\verb|\langle|, \verb|\rangle|, \verb:|:,
and \verb:\|:. The incorrect symbols have particular
meanings distinct from the correct ones and should not be confused.
\begin{table}[htbp]
\caption{Illustration of incorrect delimiter use.}
\begin{center}\footnotesize
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{|ll|ll|}\hline
\multicolumn{2}{|c|}{{\bf Wrong}} & \multicolumn{2}{c|}{{\bf Right}}\\ \hline
\verb|<x, y>| & $<x, y>$ & \verb|\langle x, y\rangle| & $\langle x, y\rangle$\\
\verb|5 < \mid A \mid| & $5 < \mid A \mid$ & \verb:5 < |A|: & $5 < |A|$\\
\verb|6x = \parallel x|&&&\\
\verb| - 1\parallel_{i}| & $6x = \parallel x - 1\parallel_{i}$ &
\verb:6x = \|x - 1\|_{i}: & $6x = \| x - 1\|_{i}$\\ \hline
\end{tabular}
\end{center}
\end{table}
Another common author error is to put large (and even medium sized)
matrices in-line with the text, rather than displaying them. This
creates unattractive line spacing problems, and should be assiduously
avoided. Text-sized matrices (like $({a \atop b} {b \atop c})$) might
be used but anything much more complex than the example cited will
not be easy to read and should be displayed.
More information on the formatting of equations and aligned
equations is found in Lamport \cite{Lamport}. Authors bear
primary responsibility for formatting their equations within
margins and in an aesthetically pleasing and informative manner.
The SIAM macros include additional roman math words, or ``log-like"
functions, to those provided in standard \TeX. The following
commands are added: \verb:\const:, \verb:\diag:, \verb:\grad:,
\verb:\Range:, \verb:\rank:, and \verb:\supp:.
These commands produce the same word as the command name
in math mode, in upright type.
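These commands are used in math mode just like the standard
\verb|\log| or \verb|\sin|; the following fragment is only an
illustration:
\begin{verbatim}
$P = \diag(p_1,\ldots,p_n)$, so that
$\rank(P) = \dim\Range(P)$.
\end{verbatim}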
\section{Special fonts}
SIAM supports the use of the AMS-\TeX\ fonts (version 2.0
and later). The package \verb|amsfonts| can be included with
the command\linebreak \verb|\usepackage{amsfonts}|. This package
is part of the AMS-\LaTeX\ distribution, available
from the AMS or from the Comprehensive TeX Archive
Network (anonymous ftp to ftp.shsu.edu). The blackboard bold font in this
font package can be used for designating number sets.
This is preferable to other methods of combining letters
(such as I and R for the real numbers) to produce pseudo-bold
letters, though such combinations are tolerable as well.
Typographically speaking, number sets may simply be designated
using regular bold letters; the blackboard bold typeface was
originally designed to simulate the limitations of a chalkboard
in printed type.
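A sketch of the preferred usage, assuming the \verb|amsfonts| package
has been loaded as described above:
\begin{verbatim}
\usepackage{amsfonts}
...
Let $x \in \mathbb{R}^n$ and $k \in \mathbb{Z}$.
\end{verbatim}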
\subsection{Punctuation}
All standard punctuation and all numerals should be set in roman type
(upright) even within italic text. The only exceptions are periods and
commas. They may be set to match the surrounding text.
References to sections should use the symbol \S, generated by
\verb|\S|. (If the reference begins a sentence, the term ``Section''
should be spelled out in full.) Authors should not redefine
\verb|\S|, say, to be a calligraphic S, because \verb|\S|
must be reserved for use as the section symbol.
Authors sometimes confuse the use of various types of dashes.
Hyphens (\verb|-|, -) are used for some compound words (many
such words should have no hyphen but must be run together,
like ``nonzero,'' or split apart, like ``well defined'').
Minus signs (\verb|$-$|, $-$)
should be used in math to represent subtraction or negative numbers.
En dashes (\verb|--|, --) are used for ranges (like 3--5,
June--August), or for joined names (like Runge--Kutta). Em dashes
(\verb|---|, ---) are used to set off a clause---such as this
one---from the rest of the sentence.
\subsection{Text formatting}
SIAM style preferences do not make regular use of the \verb|{enumerate}|
and \verb|{itemize}| environments. Instead,
{\tt siamltex.cls} includes definitions of two alternate list
environments, \verb|{remunerate}| and \verb|{romannum}|.
Unlike the standard itemized lists, these environments do
not indent the secondary lines of text. The labels, whether
defaults or the optional user-defined, are always aligned
flush right.
The \verb|{remunerate}| environment consecutively numbers
each item with an arabic numeral followed by a period. This
number is always upright, even in slanted
environments. (For those wondering about the unusual
naming of this environment, it comes from Seroul and Levy's
\cite{SerLev} definition of a similar macro for plain \TeX:
\verb|\meti|, which is
\protect\verb|\item| spelled backwards. Thus
\verb|{remunerate}| is
a portion of
\verb|{enumerate}|
spelled backwards.)
The \verb|{romannum}| environment consecutively numbers
each item with a lower-case roman numeral enclosed in
parentheses. This number will always be upright within
slanted environments (as in theorems).
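For example (the item text is arbitrary):
\begin{verbatim}
\begin{romannum}
\item the first condition;
\item the second condition.
\end{romannum}
\end{verbatim}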
\section{Theorems and Lemmas}
Theorems, lemmas, corollaries, definitions, and propositions are covered
in the SIAM macros by the theorem-environments
\verb|{theorem}|, \verb|{lemma}|, \verb|{corollary}|,
\verb|{definition}| and \verb|{proposition}|. These are all
numbered in the same sequence and produce labels in small
caps with an italic body. Other environments may be specified by the
\verb|\newtheorem| command. SIAM's style is for Remarks and Examples
to appear with italic labels and an upright roman body.
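A sketch of defining such an additional environment with
\verb|\newtheorem| (the environment name and numbering scheme here are
only illustrative; the styling of Remarks is handled by the class or
by the author):
\begin{verbatim}
\newtheorem{remark}{Remark}[section]
\end{verbatim}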
\begin{verbatim}
\begin{theorem}
Sample theorem included for illustration.
Numbers and parentheses, like equation $(3.2)$, should be set
in roman type. Note that words (as opposed to ``log-like''
functions) in displayed equations, such as
$$ x^2 = Y^2 \sin z^2 \mbox{ for all } x $$
will appear in italic type in a theorem, though normally
they should appear in roman.\end{theorem}
\end{verbatim}
This sample produces Theorem 4.1 below.
\begin{theorem}
Sample theorem included for illustration.
Numbers and parentheses, like equation $(3.2)$, should be set
in roman type. Note that words (as opposed to ``log-like''
functions) in displayed equations, such as
$$ x^2 = Y^2 \sin z^2 \mbox{ for all } x $$
will appear in italic type in a theorem, though normally
they should appear in roman.
\end{theorem}
Proofs are handled with the \verb|\begin{proof}|
\verb|\end{proof}| environment. A ``QED'' box \endproof\ is created
automatically by \verb|\end{proof}|, but it should be
preceded by a \verb|\qquad|.
Named proofs, if used, must be done independently by the
authors. SIAM style specifies that proofs which end with
displayed equations should have the QED box two ems (\verb|\qquad|)
from the end of the equation on line with it horizontally.
Below is an example of how this can be done:
\begin{verbatim}
{\em Proof}. Proof of the previous theorem
.
.
.
thus,
$$
a^2 + b^2 = c^2 \qquad\endproof
$$
\end{verbatim}
\section{Figures and tables}
Figures and tables sometimes require special consideration.
Tables in SIAM style need to be set in eight point size
by using the \verb|\footnotesize| command inside the
\verb|\begin{table}| environment. Also, they should be designed
so that they do not extend beyond the text margins.
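A minimal table skeleton following these guidelines (the contents are
placeholders):
\begin{verbatim}
\begin{table}[htbp]
\caption{Sample table caption.}
\begin{center}\footnotesize
\begin{tabular}{|c|c|}\hline
$n$ & error \\ \hline
1   & 0.5   \\ \hline
\end{tabular}
\end{center}
\end{table}
\end{verbatim}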
SIAM style requires that no figures or tables appear in the
references section of the paper. \LaTeX\ is notorious for
making figure placement difficult, so it is important to
pay particular attention to figure placement near the
references in the text. All figures and tables should
be referred to in the text.
SIAM supports the use of {\tt epsfig} for including {\sc PostScript}
figures. All {\sc Post\-Script} figures should be sent in separate
files. See the {\tt epsfig} documentation (available via
anonymous ftp from CTAN: ftp.shsu.edu) for more details on the use
of this style option. It is a good idea to submit high-quality
hardcopy of all {\sc Post\-Script} figures just in case there
is difficulty in the reproduction of the figure. Figures produced
by other non-\TeX\ methods should be included as high-quality
hardcopy when the manuscript is submitted.
{\sc PostScript} figures that are sent should be generated with
sufficient line thickness. In some figures authors have sent in
the past, the line widths became very faint when SIAM set the papers
using a high-quality 1200dpi printer.
Hardcopy for non-{\sc PostScript} figures should be included in
the submission of the hardcopy of the manuscript. Space
should be left in the \verb|{figure}| command for the
hardcopy to be inserted in production.
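One way to leave such space is with \verb|\vspace| inside the
\verb|{figure}| environment (the amount of space shown is, of course,
up to the author):
\begin{verbatim}
\begin{figure}[htbp]
\vspace{2.5in}
\caption{Caption for the pasted-in figure.}
\end{figure}
\end{verbatim}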
\section{Bibliography and Bib\TeX}
If using {\sc Bib}\TeX, authors need not submit the {\tt .bib} file for
their papers. Merely submit the completed {\tt .bbl} file, having used
{\tt siam.bst} as their bibliographic style file. {\tt siam.bst}
only works with Bib\TeX\ version 99i and later. The use of
Bib\TeX\ and the preparation of a {\tt .bib} file is
described in greater detail in \cite{Lamport}.
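With Bib\TeX\ and {\tt siam.bst}, the standard pair of commands at the
end of the document would then be (the database file name is
illustrative):
\begin{verbatim}
\bibliographystyle{siam}
\bibliography{mypaper}
\end{verbatim}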
If not using Bib\TeX, SIAM bibliographic references follow
the format of the following examples:
\begin{verbatim}
\bibitem{Ri} {\sc W. Riter},
{\em Title of a paper appearing in a book}, in The Book
Title, E.~D. One, E.~D. Two, and A.~N. Othereditor, eds.,
Publisher, Location, 1992, pp.~000--000.
\bibitem{AuTh1} {\sc A.~U. Thorone}, {\em Title of paper
with lower case letters}, SIAM J. Abbrev. Correctly, 2
(1992), pp.~000--000.
\bibitem{A1A2} {\sc A.~U. Thorone and A.~U. Thortwo}, {\em
Title of paper appearing in book}, in Book Title: With All
Initial Caps, Publisher, Location, 1992.
\bibitem{A1A22} \sameauthor,
{\em Title of Book{\rm :} Note Initial Caps and {\rm ROMAN
TYPE} for Punctuation and Acronyms}, Publisher,
Location, pp.~000--000, 1992.
\bibitem{AuTh3} {\sc A.~U. Thorthree}, {\em Title of paper
that's not published yet}, SIAM. J. Abbrev. Correctly, to appear.
\end{verbatim}
Other types of references fall into the same general
pattern. See the sample file or any SIAM journal for other
examples. Authors must correctly format their bibliography to
be considered as having used the macros correctly. An incorrectly
formatted bibliography is not only time-consuming for SIAM to
process but also risks having errors introduced
by keyboarders/copy editors.
As an alternative to the above style of reference, an alphanumeric
code may be used in place of the number (e.g., [AUTh90]). The same
commands are used, but \verb|\bibitem| takes an optional argument
containing the desired alphanumeric code.
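For example, using the optional argument of \verb|\bibitem| (the code
and key shown are illustrative):
\begin{verbatim}
\bibitem[AUTh90]{AuTh1} {\sc A.~U. Thorone}, {\em Title of
paper with lower case letters}, SIAM J. Abbrev. Correctly,
2 (1990), pp.~000--000.
\end{verbatim}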
Another alternative is no number, simply the authors' names and
the year of publication following in parentheses. The rest of the
format is identical. The macros do not support this alternative
directly, but modifications to the macro definition are possible
if this reference style is preferred.
\section{Conclusion} Many other style suggestions and tips
could be given to help authors but are beyond the scope of this
document. Simple mistakes can be avoided by increasing your familiarity
with how \LaTeX\ functions. The books referred to throughout this document
are also useful to the author who wants clear, beautiful typography
with minimal mistakes.
\Appendix
\section{The use of appendices}
The \verb|\Appendix| command, used above, designates the section that
follows as an appendix.
\section{Introduction}
A central issue in the field of uncertainty quantification is understanding the
response of a model to random inputs. When model evaluations are
computationally intensive, techniques for \textit{approximating} the
model response in an efficient manner are essential. Approximations
may be used to evaluate moments or the probability distribution of a
model's outputs, or to evaluate sensitivities of model outputs with
respect to the inputs~\cite{LeMaitre2010, Xiu2010,
Sudret2008}. Approximations may also be viewed as \textit{surrogate
models} to be used in optimization~\cite{march:cmo:2012} or
inference~\cite{Marzouk2007}, replacing the full model entirely.
Often one is faced with black box models that can only be evaluated at
designated input points. We will focus on constructing multivariate
polynomial approximations of the input-output relationship generated
by such a model; these approximations offer fast convergence for
smooth functions and are widely used. One common strategy for
constructing a polynomial approximation is interpolation, where
interpolants are conveniently represented in Lagrange form
\cite{Babuska2007,Xiu2005}. Another strategy is
projection, particularly orthogonal projection with respect to some
inner product. The results of such a projection are conveniently
represented with the corresponding family of orthogonal polynomials
\cite{Canuto2006,LeMaitre2010,Xiu2002}. When the inner product is
chosen according to the input probability measure, this construction
is known as the (finite dimensional) polynomial chaos expansion (PCE)
\cite{Ghanem1991,Soize2004,ernst:ocg:2012}. Interpolation
and projection are closely linked, particularly when projection is
computed via discrete model evaluations. Moreover, one can always
realize a change of basis \cite{gander:cbp:2005} for the polynomial
resulting from either operation. Here we will favor orthogonal
polynomial representations, as they are easy to manipulate and their
coefficients have a useful interpretation in probabilistic settings.
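To fix notation, a polynomial chaos expansion of a model output $f$
with random input $x$ takes the generic form (the truncation set
$\mathcal{I}$ and the orthogonal polynomial family
$\{\Psi_{\mathbf{i}}\}$ depend on the input probability measure; this
display is only a schematic):
\[
f(x) \approx \sum_{\mathbf{i} \in \mathcal{I}}
c_{\mathbf{i}}\, \Psi_{\mathbf{i}}(x),
\qquad
c_{\mathbf{i}} =
\frac{\langle f, \Psi_{\mathbf{i}} \rangle}
     {\langle \Psi_{\mathbf{i}}, \Psi_{\mathbf{i}} \rangle},
\]
where the inner product is taken with respect to the input
probability measure.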
This paper discusses \emph{adaptive Smolyak pseudospectral
approximation}, an accurate and computationally efficient
approach to constructing multivariate polynomial chaos expansions.
Pseudospectral methods allow the construction of polynomial
approximations from point evaluations of a function \cite{Canuto2006,
Boyd2001}. We combine these methods with \emph{Smolyak's algorithm},
a general strategy for sparse approximation of linear operators on
tensor product spaces, which saves computational effort by weakening
the assumed coupling between the input dimensions. Gerstner \&
Griebel~\cite{Gerstner2003} and Hegland~\cite{Hegland2003} developed adaptive variants of Smolyak's
algorithm for numerical integration and illustrated the effectiveness
of on-the-fly heuristic adaptation. We extend their approach to
the pseudospectral approximation of functions. Adaptivity is expected
to yield substantial efficiency gains in high
dimensions---particularly for functions with anisotropic dependence on
input parameters and functions whose inputs might not be strongly
coupled at high order.
Previous attempts to extend pseudospectral methods to multivariate
polynomial approximation with sparse model evaluations employed
\emph{ad hoc} approaches that are not always accurate. A common
procedure has been to use sparse quadrature, or even
dimension-adaptive sparse quadrature, to evaluate polynomial
coefficients directly \cite{Xiu2007,LeMaitre2010}.
This leads to at least two difficulties. First, the truncation of the
polynomial expansion must be specified independently of the quadrature
grid, yet it is unclear how to do this, particularly for anisotropic
and generalized sparse grids. Second, unless one uses excessively
high-order quadrature, significant aliasing errors may
result. Constantine \etal \cite{Constantine2012} provided the first
clear demonstration of these aliasing errors and proposed a Smolyak
algorithm that does not share them. That work also demonstrated a link
between Smolyak pseudospectral approximation and an extension to
Lagrange interpolation called \emph{sparse interpolation}, which uses
function evaluations on a sparse grid and has well characterized
convergence properties \cite{Nobile2007, Barthelmann2000}.
The first half of this work performs a theoretical analysis, placing
the solution from \cite{Constantine2012} in the broader context of
Smolyak constructions, and explaining the origin of the observed
aliasing errors for general (e.g., anisotropic) choices of sparse grid
and quadrature rule. We do so by using the notion of polynomial
exactness, without appealing to interpolation properties of particular
quadrature rules. We establish conditions under which tensorized
approximation operators are exact for particular polynomial inputs,
then apply this analysis to the specific cases of quadrature and
pseudospectral approximation; these cases are closely related and
facilitate comparisons between Smolyak pseudospectral
algorithms and direct quadrature.
Section \ref{sec:approximations} develops \textit{computable}
one-dimensional and tensorized approximations for these
settings. Section \ref{sec:smolyak} describes general Smolyak
algorithms and their properties, yielding our principal theorem about
the polynomial exactness of Smolyak approximations, and then applies
these results to quadrature and pseudospectral approximation. Section
\ref{sec:comparison} compares the Smolyak approach to conventional
direct quadrature. Our error analysis of direct quadrature shows why
the approach goes wrong and allows us to draw an important conclusion:
in almost all cases, direct quadrature is not an appropriate method
for constructing polynomial expansions and should be superseded by
Smolyak pseudospectral methods.
These results provide a rigorous foundation for \textit{adaptivity},
which is the second focus of this paper. Adaptivity makes it possible
to harness the full flexibility of Smolyak algorithms in practical
settings. Section \ref{sec:adaptive} introduces a fully adaptive
algorithm for Smolyak pseudospectral approximation, which uses a
single tolerance parameter to drive iterative refinement of both the
polynomial approximation space and the corresponding collection of
model evaluation points. As the adaptive method is largely heuristic,
Section \ref{sec:experiments} demonstrates the benefits of this
approach with numerical examples.
\section{Introduction and examples}
This paper presents a sample file for the use of SIAM's
\LaTeX\ macro package. It illustrates the features of the
macro package, using actual examples culled from various
papers published in SIAM's journals. It is to be expected
that this sample will provide examples of how to use the
macros to generate standard elements of journal papers,
e.g., theorems, definitions, or figures. This paper also
serves as an example of SIAM's stylistic preferences for
the formatting of such elements as bibliographic references,
displayed equations, and equation arrays, among others.
Some special circumstances are not dealt with in this
sample file; for such information one should see the
included documentation file.
{\em Note:} This paper is not to be read in any form for content.
The conglomeration of equations, lemmas, and other text elements was
put together solely for typographic illustrative purposes and doesn't
make any sense as lemmas, equations, etc.
\subsection{Sample text}
Let $S=[s_{ij}]$ ($1\leq i,j\leq n$) be a $(0,1,-1)$-matrix
of order $n$. Then $S$ is a {\em sign-nonsingular matrix}
(SNS-matrix) provided that each real matrix with the same
sign pattern as $S$ is nonsingular. There has been
considerable recent interest in constructing and
characterizing SNS-matrices \cite{bs}, \cite{klm}. There
has also been interest in strong forms of
sign-nonsingularity \cite{djd}. In this paper we give a new
generalization of SNS-matrices and investigate some of
their basic properties.
Let $S=[s_{ij}]$ be a $(0,1,-1)$-matrix of order $n$ and
let $C=[c_{ij}]$ be a real matrix of order $n$. The pair
$(S,C)$ is called a {\em matrix pair of order} $n$.
Throughout, $X=[x_{ij}]$ denotes a matrix of order $n$
whose entries are algebraically independent indeterminates
over the real field. Let $S\circ X$ denote the Hadamard
product (entrywise product) of $S$ and $X$. We say that the
pair $(S,C)$ is a {\em sign-nonsingular matrix pair of
order} $n$, abbreviated SNS-{\em matrix pair of order} $n$,
provided that the matrix \[A=S\circ X+C\] is nonsingular
for all positive real values of the $x_{ij}$. If $C=O$
then the pair $(S,O)$ is a SNS-matrix pair if and only if
$S$ is a SNS-matrix. If $S=O$ then the pair $(O,C)$ is a
SNS-matrix pair if and only if $C$ is nonsingular. Thus
SNS-matrix pairs include both nonsingular matrices and
sign-nonsingular matrices as special cases.
The pairs $(S,C)$ with
\[S=\left[\begin{array}{cc}1&0\\0&0\end{array}\right],\qquad
C=\left[\begin{array}{cc}1&1\\1&1\end{array}\right]\] and
\[S=\left[\begin{array}{ccc}1&1&0\\1&1&0\\0&0&0\end{array}\right],\qquad
C=\left[\begin{array}{ccc}0&0&1\\0&2&0\\
3&0&0\end{array}\right]\] are examples of SNS-matrix pairs.
\subsection{A remuneration list}
In this paper we consider the evaluation of integrals of the
following forms:
\begin{equation}
\int_a^b \left( \sum_i E_i B_{i,k,x}(t) \right)
\left( \sum_j F_j B_{j,l,y}(t) \right) dt,\label{problem}
\end{equation}
\begin{equation}
\int_a^b f(t) \left( \sum_i E_i B_{i,k,x}(t) \right) dt,\label{problem2}
\end{equation}
where $B_{i,k,x}$ is the $i$th B-spline of order $k$ defined over the
knots $x_i, x_{i+1}, \ldots, x_{i+k}$.
We will consider B-splines normalized so that their integral is one.
The splines may be of different orders and
defined on different knot sequences $x$ and $y$.
Often the limits of integration will be the entire real line, $-\infty$
to $+\infty$. Note that (\ref{problem}) is a special case of (\ref{problem2})
where $f(t)$ is a spline.
There are five different methods for calculating (\ref{problem})
that will be considered:
\begin{remunerate}
\item Use Gauss quadrature on each interval.
\item Convert the integral to a linear combination of
integrals of products of B-splines and provide a recurrence for
integrating the product of a pair of B-splines.
\item Convert the sums of B-splines to piecewise
B\'{e}zier format and integrate segment
by segment using the properties of the Bernstein polynomials.
\item Express the product of a pair of B-splines as a linear combination
of B-splines.
Use this to reformulate the integrand as a linear combination
of B-splines, and integrate term by term.
\item Integrate by parts.
\end{remunerate}
Of these five, only methods 1 and 5 are suitable for calculating
(\ref{problem2}). The first four methods will be touched on and the
last will be discussed at length.
\subsection{Some displayed equations and \{{\tt eqnarray}\}s}
By introducing the product topology on $R^{m \times m} \times
R^{n \times n}$ with the induced inner product
\begin{equation}
\langle (A_{1},B_{1}), (A_{2},B_{2})\rangle := \langle A_{1},A_{2}\rangle
+ \langle B_{1},B_{2}\rangle,\label{eq2.10}
\end{equation}
we calculate the Fr\'{e}chet derivative of $F$ as follows:
\begin{eqnarray}
F'(U,V)(H,K) &=& \langle R(U,V),H\Sigma V^{T} + U\Sigma K^{T} -
P(H\Sigma V^{T} + U\Sigma K^{T})\rangle \nonumber \\
&=& \langle R(U,V),H\Sigma V^{T} + U\Sigma K^{T}\rangle \label{eq2.11} \\
&=& \langle R(U,V)V\Sigma^{T},H\rangle + \langle \Sigma^{T}U^{T}R(U,V),K^{T}\rangle. \nonumber
\end{eqnarray}
In the middle line of (\ref{eq2.11}) we have used the fact that the range of
$R$ is always perpendicular to the range of $P$. The gradient $\nabla F$ of
$F$, therefore, may be interpreted as the
pair of matrices:
\begin{equation}
\nabla F(U,V) = (R(U,V)V\Sigma^{T},R(U,V)^{T}U\Sigma ) \in
R^{m \times m} \times R^{n \times n}. \label{eq2.12}
\end{equation}
Because of the product topology, we know
\begin{equation}
{\cal T}_{(U,V)}({\cal O} (m) \times {\cal O} (n)) =
{\cal T}_{U}{\cal O} (m) \times {\cal T}_{V}{\cal O} (n), \label{eq2.13}
\end{equation}
where ${\cal T}_{(U,V)}({\cal O} (m) \times {\cal O} (n))$ stands for the
tangent space to the manifold ${\cal O} (m) \times {\cal O} (n)$ at $(U,V)
\in {\cal O} (m) \times {\cal O} (n)$ and so on. The projection of
$\nabla F(U,V)$ onto ${\cal T}_{(U,V)}({\cal O} (m) \times {\cal O} (n))$,
therefore, is the product of the projection of the first component of
$\nabla F(U,V)$ onto ${\cal T}_{U}{\cal O} (m)$ and the projection of the
second component of $\nabla F(U,V)$ onto ${\cal T}_{V}{\cal O} (n)$.
In particular, we claim that the
projection $ g(U,V)$ of the gradient $\nabla F(U,V)$ onto
${\cal T}_{(U,V)}({\cal O} (m) \times {\cal O} (n))$ is given by the pair of
matrices:
\begin{eqnarray}
g(U,V) = && \left( \frac{R(U,V)V\Sigma^{T}U^{T}-U\Sigma V^{T}R(U,V)^{T}}{2}U,
\right. \nonumber \\[-1.5ex]
\label{eq2.14}\\[-1.5ex]
&&\quad \left. \frac{R(U,V)^{T}U\Sigma V^{T}-V
\Sigma^{T}U^{T}R(U,V)}{2}V \right).\nonumber
\end{eqnarray}
Thus, the vector field
\begin{equation}
\frac{d(U,V)}{dt} = -g(U,V) \label{eq2.15}
\end{equation}
defines a steepest descent flow on the manifold ${\cal O} (m) \times
{\cal O} (n)$ for the objective function $F(U,V)$.
\section{Main results}
Let $(S,C)$ be a matrix pair of order $n$. The determinant
\[\det (S\circ X+C)\]
is a polynomial in the indeterminates of $X$ of degree at
most $n$ over the real field. We call this polynomial the
{\em indicator polynomial} of the matrix pair $(S,C)$
because of the following proposition.
\begin{theorem}
\label{th:prop}
The matrix pair $(S,C)$ is a {\rm SNS}-matrix pair if and
only if all the nonzero coefficients in its indicator
polynomial have the same sign and there is at least one
nonzero coefficient.
\end{theorem}
\begin{proof}
Assume that $(S,C)$ is a SNS-matrix pair. Clearly the
indicator polynomial has a nonzero coefficient. Consider a
monomial
\begin{equation}
\label{eq:mono}
b_{i_{1},\ldots,i_{k};j_{1},\ldots,j_{k}}x_{i_{1}j_{1}}\cdots
x_{i_{k}j_{k}}
\end{equation}
occurring in the indicator polynomial with a nonzero
coefficient. By taking the $x_{ij}$ that occur in
(\ref{eq:mono}) large and all others small, we see that any
monomial that occurs in the indicator polynomial with a
nonzero coefficient can be made to dominate all others.
Hence all the nonzero coefficients have the same sign. The
converse is immediate. \qquad\end{proof}
For SNS-matrix pairs $(S,C)$ with $C=O$ the indicator
polynomial is a homogeneous polynomial of degree $n$. In
this case Theorem \ref{th:prop} is a standard fact about
SNS-matrices.
\begin{lemma}[{\rm Stability}]
\label{stability}
Given $T>0$, suppose that $\| \epsilon (t) \|_{1,2} \leq h^{q-2}$
for $0 \leq t \leq T$ and $q \geq 6$.
Then there exists a positive number $B$ that depends on
$T$ and the exact solution $\psi$ only such that for all $0 \leq t \leq T$,
\begin{equation}
\label{Gron}
\frac {d}{dt} \| \epsilon (t) \| _{1,2} \leq B
( h^{q-3/2} + \| \epsilon (t) \|_{1,2})\;.
\end{equation}
The function $B(T)$ can be chosen to be nondecreasing in time.
\end{lemma}
\begin{theorem}
\label{th:gibson}
The maximum number of nonzero entries in a {\rm SNS}-matrix
$S$ of order $n$ equals \[\frac{n^{2}+3n-2}{2}\] with
equality if and only if there exist permutation matrices
such that $P|S|Q=T_{n}$ where
\begin{equation}
\label{eq:gibson}
T_{n}=\left[\begin{array}{cccccc} 1&1&\cdots&1&1&1\\
1&1&\cdots&1&1&1\\ 0&1&\cdots&1&1&1\\
\vdots&\vdots&\ddots&\vdots&\vdots&\vdots\\
0&0&\cdots&1&1&1\\ 0&0&\cdots&0&1&1\end{array}\right].
\end{equation}
\end{theorem}
We note for later use that each submatrix of $T_{n}$ of
order $n-1$ has all 1s on its main diagonal.
We now obtain a bound on the number of nonzero entries of
$S$ in a SNS-matrix pair $(S,C)$ in terms of the degree of
the indicator polynomial. We denote the strictly upper
triangular (0,1)-matrix of order $m$ with all 1s above the
main diagonal by $U_{m}$. The all 1s matrix of size $m$ by
$p$ is denoted by $J_{m,p}$.
\begin{proposition}[{\rm Convolution theorem}]
\label{pro:2.1} Let
\begin{eqnarray*}
a\ast u(t) = \int_0^t a(t- \tau) u(\tau) d\tau, \hspace{.2in} t \in
(0, \infty).
\end{eqnarray*}
Then
\begin{eqnarray*}
\widehat{a\ast u}(s) = \widehat{a}(s)\widehat{u}(s).
\end{eqnarray*}
\end{proposition}
\begin{lemma}
\label{lem:3.1}
For $s_0 >0$, if
$$
\int_0^{\infty} e^{-2s_0 t}v^{(1)}(t) v(t) dt \; \leq 0 \;,
$$
then
\begin{eqnarray*}
\int_0^{\infty} e^{-2s_0 t} v^2(t) dt \; \leq \; \frac{1}{2s_0} v^2(0).
\end{eqnarray*}
\end{lemma}
{\em Proof}. Applying integration by parts, we obtain
\begin{eqnarray*}
\int_0^{\infty} e^{-2s_0 t} [v^2(t)-v^2(0)] dt
&=&\lim_{t\rightarrow \infty}\left (
-\frac{1}{2s_0}e^{-2s_0 t}v^2(t) \right ) +\frac{1}{s_0}
\int_0^{\infty} e^{-2s_0 t}v^{(1)}(t)v(t)dt\\
&\leq& \frac{1}{s_0} \int_0^{\infty} e^{-2s_0 t} v^{(1)}(t)v(t) dt \;\;
\leq \;\; 0.
\end{eqnarray*}
Thus
$$
\int_0^{\infty} e^{-2s_0 t} v^2(t) dt \;\;\leq v^2(0) \int_0^{\infty}
\;\;e^{-2s_0 t} dt\;\;=\;\;\frac{1}{2s_0} v^2(0).\eqno\endproof
$$
\begin{corollary}\label{c4.1}
Let $ \mbox{\boldmath$E$} $ satisfy $(5)$--$(6)$ and
suppose $ \mbox{\boldmath$E$}^h $ satisfies $(7)$ and $(8)$
with a general $ \mbox{\boldmath$G$} $. Let $ \mbox{\boldmath$G$}= \nabla \times {\bf \Phi} + \nabla p,$
$p \in H_0^1 (\Omega) $. Suppose that $\nabla p$ and $ \nabla \times
{\bf \Phi} $ satisfy all the assumptions of Theorems $4.1$ and
$4.2$, respectively. In addition suppose all the regularity
assumptions of Theorems $4.1$--$4.2$ are satisfied. Then
for $ 0 \le t \le T $ and $ 0 < \epsilon \le \epsilon_0 $ there exists a
constant $ C = C(\epsilon, T) $ such that
$$
\Vert (\mbox{\boldmath$E$} - \mbox{\boldmath$E$}^h)(t) \Vert_0 \le C h^{k+1- \epsilon},
$$
where $ C $ also depends on the constants given in Theorems
$4.1$ and $4.2$.
\end{corollary}
\begin{definition}
Let $S$ be an isolated invariant set with isolating neighborhood $N$.
An {\em index pair} for $S$ is a pair of compact sets $(N_{1},N_{0})$
with $N_{0} \subset N_{1} \subset N$ such that:
\begin{romannum}
\item $cl(N_{1} \backslash N_{0})$
is an isolating neighborhood for $S$.
\item $N_{i}$ is positively invariant relative to $N$ for $i=0,1$;
i.e., if $x \in N_{i}$ and $x \cdot [0,t] \subset N$, then $x \cdot [0,t] \subset
N_{i}$.
\item $N_{0}$ is an exit set for $N_{1}$; i.e., if $x \in N_{1}$ and
$x \cdot [0, \infty ) \not\subset N_{1}$, then there is a $T \geq 0$ such
that $x \cdot [0,T] \subset N_{1}$ and $x \cdot T \in N_{0}$.
\end{romannum}
\end{definition}
\subsection{Numerical experiments} We conducted numerical experiments
in computing inexact Newton steps for discretizations of a
{\em modified Bratu problem}, given by
\begin{eqnarray}
{\ds \Delta w + c e^w + d{ {\partial w}\over{\partial x} } }
&=&{\ds f \quad {\rm in}\ D, }\nonumber\\[-1.5ex]
\label{bratu} \\[-1.5ex]
{\ds w }&=&{\ds 0 \quad {\rm on}\ \partial D , } \nonumber
\end{eqnarray}
where $c$ and $d$ are constants. The actual Bratu problem has $d=0$ and
$f \equiv0$. It provides a simplified model of nonlinear diffusion
phenomena, e.g., in combustion and semiconductors, and has been
considered by Glowinski, Keller, and Rheinhardt \cite{GloKR85},
as well as by a number of other investigators; see \cite{GloKR85}
and the references therein. See also problem 3 by Glowinski and Keller
and problem 7 by Mittelmann in the collection of nonlinear model
problems assembled by Mor\'e \cite{More}. The modified problem
(\ref{bratu}) has been used as a test problem for inexact Newton
methods by Brown and Saad \cite{Brown-Saad1}.
In our experiments, we took $D = [0,1]\times[0,1]$, $f \equiv0$,
$c=d=10$, and discretized (\ref{bratu}) using the usual second-order
centered differences over a $100\times100$ mesh of equally
spaced points in $D$. In \gmres($m$), we took $m=10$ and used fast
Poisson right preconditioning as in the experiments in \S2. The computing
environment was as described in \S2. All computing was done
in double precision.
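The discretization described above can be sketched as follows. This is our own illustration (the function name, the interior-only storage, and the choice of the first array axis as the $x$ direction are assumptions), not the code used in the experiments:

```python
import numpy as np

def bratu_residual(w, c=10.0, d=10.0, f=0.0):
    """Residual of the modified Bratu problem
        Delta w + c*exp(w) + d*dw/dx - f = 0
    on the unit square with homogeneous Dirichlet data, using
    second-order centered differences; w holds the N x N interior
    values, and axis 0 is taken as the x direction (an assumption)."""
    N = w.shape[0]
    h = 1.0 / (N + 1)
    W = np.zeros((N + 2, N + 2))          # pad with the zero boundary
    W[1:-1, 1:-1] = w
    lap = (W[2:, 1:-1] + W[:-2, 1:-1] + W[1:-1, 2:]
           + W[1:-1, :-2] - 4.0 * W[1:-1, 1:-1]) / h**2
    wx = (W[2:, 1:-1] - W[:-2, 1:-1]) / (2.0 * h)   # centered d/dx
    return lap + c * np.exp(W[1:-1, 1:-1]) + d * wx - f
```

At the zero initial guess used in the experiments, the residual reduces to the constant $c$, since the Laplacian and first-derivative terms vanish and $e^0 = 1$.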
\begin{figure}[ht]
\vspace{2.5in}
\caption{{\rm Log}$_{10}$ of the residual norm versus the number of
{\rm GMRES$(m)$} iterations for the finite difference methods.}
\label{diff}
\end{figure}
In the first set of experiments, we allowed each method to
run for $40$ {\gmresm} iterations, starting with zero as the initial
approximate solution, after which the limit of residual norm
reduction had been reached. The results are shown in Fig.~\ref{diff}.
In Fig.~\ref{diff}, the top curve was produced by method FD1.
The second curve from the top is actually a superposition of
the curves produced by methods EHA2 and FD2; the two curves are
visually indistinguishable. Similarly, the third curve from
the top is a superposition of the curves produced by methods EHA4
and FD4, and the fourth curve from the top, which lies barely above
the bottom curve, is a superposition of the curves produced by
methods EHA6 and FD6. The bottom curve was produced by method A.
In the second set of experiments, our purpose was to assess the
relative amount of computational work required by the methods
which use higher-order differencing to reach comparable levels
of residual norm reduction. We compared pairs of methods EHA2
and FD2, EHA4 and FD4, and EHA6 and FD6 by observing in each of
20 trials the number of {\gmresm} iterations, number of $F$-evaluations,
and run time required by each method to reduce the residual norm
by a factor of $\e$, where for each pair of methods $\e$ was chosen
to be somewhat greater than the limiting ratio of final to
initial residual norms obtainable by the methods. In these trials,
the initial approximate solutions were obtained by generating random
components as in the similar experiments in \S2. We note that for every
method, the numbers of {\gmresm} iterations and $F$-evaluations required
before termination did not vary at all over the 20 trials. The {\gmresm}
iteration counts, numbers of $F$-evaluations, and means and standard
deviations of the run times are given in Table \ref{diffstats}.
\begin{table}
\caption{Statistics over $20$ trials of {\rm GMRES$(m)$} iteration numbers,
$F$-evaluations, and run times required to reduce the residual norm by
a factor of $\e$. For each method, the number of {\rm GMRES$(m)$} iterations
and $F$-evaluations was the same in every trial.}
\begin{center} \footnotesize
\begin{tabular}{|c|c|c|c|c|c|} \hline
&& Number of & Number of & Mean Run Time & Standard \\
Method & $\e$ & Iterations & $F$-Evaluations& (Seconds) & Deviation \\ \hline
\lower.3ex\hbox{EHA2} & \lower.3ex\hbox{$10^{-10}$} & \lower.3ex\hbox{26} &
\lower.3ex\hbox{32} & \lower.3ex\hbox{47.12} & \lower.3ex\hbox{.1048} \\
FD2 & $10^{-10}$ & 26 & 58 & 53.79 & .1829 \\ \hline
\lower.3ex\hbox{EHA4} & \lower.3ex\hbox{$10^{-12}$} & \lower.3ex\hbox{30} &
\lower.3ex\hbox{42} & \lower.3ex\hbox{56.76} & \lower.3ex\hbox{.1855} \\
FD4 & $10^{-12}$ & 30 & 132 & 81.35 & .3730 \\ \hline
\lower.3ex\hbox{EHA6} & \lower.3ex\hbox{$10^{-12}$} & \lower.3ex\hbox{30} &
\lower.3ex\hbox{48} & \lower.3ex\hbox{58.56} & \lower.3ex\hbox{.1952} \\
FD6 & $10^{-12}$ & 30 & 198 & 100.6 & .3278 \\ \hline
\end{tabular}
\end{center}
\label{diffstats}
\end{table}
In our first set of experiments, we took $c=d=10$ and used right
preconditioning with a fast Poisson solver from {\fishpack}
\cite{Swarztrauber-Sweet}, which is very effective for these
fairly small values of $c$ and $d$. We first started each method
with zero as the initial approximate solution and allowed it
to run for 40 {\gmresm} iterations, after which the limit of residual
norm reduction had been reached. Figure \ref{pdep} shows plots
of the logarithm of the Euclidean norm of the residual versus
the number of {\gmresm} iterations for the three methods. We note
that in Fig.~\ref{pdep} and in all other figures below, the plotted
residual norms were not the values maintained by {\gmresm}, but rather
were computed as accurately as possible ``from scratch.'' That is,
at each {\gmresm} iteration, the current approximate solution was
formed and its product with the coefficient matrix was subtracted
from the right-hand side, all in double precision.
It was important to compute the residual norms in this way because
the values maintained by {\gmresm} become increasingly untrustworthy
as the limits of residual norm reduction are neared; see \cite{Walker88}.
It is seen in Fig.~\ref{pdep} that Algorithm EHA achieved
the same ultimate level of residual norm reduction as the FDP
method and required only a few more {\gmresm} iterations to do
so.
\begin{figure}[t]
\vspace{3in}
\caption{{\rm Log}$_{10}$ of the residual norm versus the number of
{\rm GMRES}$(m)$ iterations for $c=d=10$ with fast Poisson
preconditioning. Solid curve: Algorithm {\rm EHA}; dotted
curve: {\rm FDP} method; dashed curve: {\rm FSP} method.}
\label{pdep}
\end{figure}
In our second set of experiments, we took $c=d=100$ and carried out
trials analogous to those in the first set above. No preconditioning
was used in these experiments, both because we wanted to compare
the methods without preconditioning and because the fast
Poisson preconditioning used in the first set of experiments is
not cost effective for these large values of $c$ and $d$. We first
allowed each method to run for 600 {\gmresm} iterations,
starting with zero as the initial approximate solution, after which
the limit of residual norm reduction had been reached.
\section*{Acknowledgments}
The author thanks the anonymous authors whose work largely
constitutes this sample file. He also thanks the INFO-TeX mailing
list for the valuable indirect assistance he received.
\section{One Dimensional and Full Tensor Approximations}
\label{sec:approximations}
\subsection{General setting}
\subsection{One-dimensional quadrature}
\section{Outline}
\begin{easylist}
\ListProperties(Progressive*=3ex)
# Introduction
## Introducing PCEs
### Approximate functions in distribution, useful for a wide variety of problems.
### Intrusive, create systems of equations for coefficients, giving PCEs that tell you the outputs for all inputs, in distribution.
### Non-intrusive has a different purpose, good when you need to sample at smaller spacings than its ``characteristic length''
### MCMC is a good prototype - often calls over and over on points near each other, if $f$ is smooth, this is not necessarily needed
### The idea of a PCE is to approximate a smooth function up front by systematically exploring the likely parameter space, then make it very easy to evaluate.
## Introducing where things went wrong (I need more references for this part)
### Initially, everything was hand tuned and conservative, but when we turn to numeric methods to approximate complex forward models, we need to be more careful
### As we push forward, want to ensure we have the ``right'' strategy
### Forward models are expensive, so we want to make the best possible use of evaluations
### We note that forward models may have low coupling, so quadrature is replaced by sparse quadrature, but it's unclear how to synchronize quad/pce
### Some researchers noticed it didn't work as well as interpolation (i.e., pre Paul's paper)
### Paul showed that significant errors can arise, which are avoided through a Smolyak approach (we need to call out his work and contribution in the intro)
## This work - presents a clear and principled way to make PCEs
### We present a Smolyak based approximation scheme for creating PCEs (how do we say it's the same as Paul's, but explained correctly? Say it's a re-interpretation?)
### Smolyak is a principled way to turn 1D algorithms into multivariate algorithms, so we have evidence that it works, and in some cases is optimal or near-optimal
### Allows us to introduce adaptivity derived from DASQ
### Our goal is to present this method, discuss practical considerations, and present enough theory to make our algorithm well founded, although our goal is not especially to provide new proofs of convergence. Maybe we'll adapt some proofs.
### Furthermore, show in detail why the traditional (direct quadrature) strategy doesn't work, and why our method doesn't fall into the same traps
### Finally, make the discussion more practical by discussing matters like quadrature, polynomials, growth rules, and convergence
# 1D Polychaos
## Expansion in ortho poly basis
## Converges in distribution for functions in $L_2$.
## show how to compute coeff
## in practice, often substitute integrals for quadrature
## if you use gaussian quadrature, same as interpolation on the same points
## truncation errors in each coeff, discuss their magnitude
## present pce as an operator, maps from function space to function space
## if we can figure it out, show operator error bounds for 1D case
# Extension to Smolyak
## If you're in a tensor product space, Smolyak converts 1D algorithms to sparse tensor algorithms, typically if you have some sort of bounded mixed derivatives, then more efficient than the full tensor algorithms
## Give smolyak formulation
## Useful properties
### Nested gives optimal use
### Nested interpolator gives interpolator, hence equivalence to lagrange interpolation on the same points
### Converges as you get enough terms, clearly state any assumptions needed for this
## This is enough to make this a good idea and usable
## Leaves us with a fair bit of room to alter the 1D rule and thus change the resulting full rule.
## Explain growth rules and quadrature types.
# Comparison to traditional
## Although in some sense redundant, we feel it's necessary to be very clear about why the traditional strategy is a bad idea.
## It's tough to be completely general about this, but you don't have general criteria that it converges, as we do for the new method. I don't know how to do an operator bounds proof for all cases.
## Instead, we're going to settle for the Gaussian quadrature case, show two ways it's worse, internal and truncation
## Summary of traditional
### Directly compute coefficients independently via quadrature
### Select a fixed index set for quadrature and a separate index set for your pce, sometimes they're the same
### We don't know of any widespread guidance about how to do this, so it's rather ad hoc
### Adaptive strategies lead us to codify what the right strategy is, ad hoc or not
### This gets you into trouble when you try to use the same index set, which intuitively *should* work, the clear candidate for adaptive algorithms
## Contrast methods
### Show which coeff are computed with which quadrature terms
## Internal errors
### Direct sparse quadrature won't work if there is a product of included basis terms that gets you outside the coverage of your quadrature surface
### Smolyak PCE never does that
### Show how even one dimension included will be safe because of telescoping effect
## Truncation Errors
### Truncation errors are the same in the worst case, and have the same source, ie polys outside your quadrature power
### Since Smolyak PCE is more careful than direct quadrature, nearby truncation terms (those that are most likely to be large) have more limited ranges of influence
## All in all, traditional is never better, and often much less accurate than Smolyak PCE. If you fix that, it's going to be much less efficient, since you can't estimate as many pce terms.
# Adaptation
## Back to a method we like.
## The whole point of sparsity is to take advantage of weak coupling, but where is it? Often don't know in advance.
## DASQ provides a perfect model, add terms to refine terms that alter your estimate the most, it's trivial to adjust to another Smolyak algorithm
## The combined method has one place for adaptivity, it's much clearer than the direct quad strategy.
## Simple, probably effective, but heuristic.
## Can't help you for pathological cases with arbitrary blocks of zeros and non-zeros.
## We know coeff will eventually decay exponentially, just need to initialize with enough terms for this to kick in (rightfully hard to determine). Furthermore, extra terms should induce errors in useful places to suggest where to go. Although, I'm not going to swear to that.
## Biggest question is which norm-ish value to use. An upper bound on the variance is a decent way to go.
## We need to settle the question about which terms are expandable - I differ from DASQ. Not sure if I'm right, he's right, or there's no clear reason for either one.
## Should you consider how much work the terms are? Probably. Haven't done that yet.
# Other practical matters - quadrature selection
## We know nestedness is not necessary, but is really nice. Gauss-Patterson is nice, guaranteed poly order, but there are only a few levels.
## Although not perfect, Clenshaw-Curtis is good, transplanted is awesome. Possibly more stable than Gauss for truncated terms (some evidence in the paper for this). Hard to prove some of the previous things for, but maybe a good idea anyway.
## We have to handle different measures - then what? Since measures are often analytic, transplanted Clenshaw-Curtis may still work? Maybe you need higher order for each rule?
## Linear growth should always skip by 2 to avoid even/oddness.
## Maybe we could construct Patterson for other measures? Possible but super annoying because they're almost unstable to generate? (yay Mathematica?)
## Exp growth gauss will work, pretty simple, even though it's not optimal
# Experiments
## Don't bother to compare to traditional, since we will have shown just how badly it does; with theory behind that, it is no longer a fair reference
## Instead, compare against different attempts to use isotropic shapes
## Hopefully, on moderately sized examples, try to show that against the ``true'' solution, adaptation can pick the right terms, and compute them almost right. Compare the different growth strategies we want to recommend. Possibly L2 convergence or capturing the variance correctly is a good measure
## Make sure we do both Legendre and Hermite examples.
# Conclusions
## Smolyak PCE is principled and efficient, so it's definitely the way to go.
## Still allows a bit of freedom in index sets and quadrature, which we can exploit to make it go faster.
## In lieu of rigorous theory about the exact limited space our function lives in, just use adaptation to figure it out
## Make suggestions about the right quadrature rules/sequences to use, although this part isn't definite
\end{easylist}
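The Smolyak formulation mentioned in the outline above can be illustrated with the standard combination technique. The following sketch is our own (using linear-growth Gauss-Legendre as the one-dimensional family), not the implementation described later:

```python
import itertools
import math
import numpy as np

def smolyak_quadrature(f, d, q):
    """Isotropic Smolyak quadrature on [-1,1]^d via the combination
    technique:
        A(q,d) f = sum over q-d+1 <= |i| <= q of
                   (-1)^(q-|i|) * C(d-1, q-|i|) * (Q_{i_1} x...x Q_{i_d}) f,
    where Q_m is the m-point Gauss-Legendre rule (linear growth)."""
    rules = {m: np.polynomial.legendre.leggauss(m) for m in range(1, q + 1)}
    total = 0.0
    for i in itertools.product(range(1, q + 1), repeat=d):
        k = sum(i)
        if not (q - d + 1 <= k <= q):
            continue
        coeff = (-1) ** (q - k) * math.comb(d - 1, q - k)
        # tensor the 1D (node, weight) pairs of the selected rules
        for pt in itertools.product(*(list(zip(*rules[m])) for m in i)):
            x = np.array([p[0] for p in pt])
            w = np.prod([p[1] for p in pt])
            total += coeff * w * f(x)
    return total
```

Because the telescoping coefficients sum correctly, the rule reproduces constants exactly and, with Gaussian one-dimensional rules, integrates low-order polynomials exactly at modest levels.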
\section{One Dimensional and Tensor Problems}
\label{sec:problems}
\subsection{General setting}
\subsection{One-dimensional integration}
\subsection{Multi-dimensional integration}
One-dimensional integration extends easily to higher-dimensional integrals over product domains with separable weight functions. Let $X^{(1)}, \ldots, X^{(d)}$ be a collection of intervals on the real line, as in the one dimensional case. Then let
\begin{equation}
\mathbf{X} := X^{(1)} \times \cdots \times X^{(d)} \subseteq \mathbb{R}^d
\end{equation}
be the domain of integration, defined by the Cartesian product of one-dimensional intervals. Consider a real-valued function defined on this product space, $f:\mathbf{X} \to \mathbb{R}$. Then the multi-dimensional integral is given by the tensor product of one-dimensional integral operators:
\begin{eqnarray}
\label{eq:multi-int}
\mathcal{I}^{(\mathbf{d})}(f) & = & \mathcal{I}^{(1)} \otimes \cdots \otimes \mathcal{I}^{(d)} (f) \nonumber \\
&= &\int_{X^{(1)}} \ldots \int_{X^{(d)}} w^{(1)}(x^{(1)}) \ldots w^{(d)}(x^{(d)}) f(x^{(1)}, \ldots, x^{(d)}) \, dx^{(d)} \ldots dx^{(1)} \\
&=& \int_\mathbf{X} w(\mathbf{x})f(\mathbf{x}) \, d \mathbf{x}, \nonumber \\
\mbox{where } w(\mathbf{x}) &:=& \prod_{i=1}^d w^{(i)}({x}^{(i)}) . \nonumber
\end{eqnarray}
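The tensor-product operator in (\ref{eq:multi-int}) maps directly onto a quadrature rule assembled from one-dimensional rules. A minimal sketch for the unit weight on $[-1,1]^d$ (our own illustration, with Gauss-Legendre as the one-dimensional rule):

```python
import itertools
import numpy as np

def tensor_quadrature(f, orders):
    """Approximate the d-dimensional integral of f over [-1,1]^d
    (unit weight) by a tensor product of 1D Gauss-Legendre rules;
    orders[i] is the number of points used in dimension i."""
    rules = [np.polynomial.legendre.leggauss(n) for n in orders]
    total = 0.0
    # iterate over the full grid of node/weight combinations
    for combo in itertools.product(*(list(zip(x, w)) for x, w in rules)):
        x = np.array([c[0] for c in combo])
        w = np.prod([c[1] for c in combo])
        total += w * f(x)
    return total

# integral of x^2 * y^2 over [-1,1]^2 is (2/3)^2 = 4/9,
# recovered exactly since 3-point Gauss is exact to degree 5
val = tensor_quadrature(lambda x: x[0]**2 * x[1]**2, [3, 3])
```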
\section{Numerical experiments}
\label{sec:experiments}
Our numerical experiments focus on evaluating the performance of
different quadrature rules embedded within the Smolyak pseudospectral
scheme, and on evaluating performance of the adaptive Smolyak
approximation strategy. Aside from the numerical examples of
Section~\ref{sec:comparison}, we do not investigate the performance of
direct quadrature any further. Given our theoretical analysis of
aliasing errors and the numerical demonstrations in
\cite{Constantine2012}, one can conclude without further demonstration
that destructive internal aliasing indeed appears in practice.
This section begins by discussing practical considerations in the
selection of quadrature rules. Then we evaluate convergence of Smolyak
pseudospectral approximation schemes (non-adaptive and adaptive) on
the Genz test functions. Next, we approximate a larger chemical
kinetic system, illustrating the efficiency and accuracy of the
adaptive method. Finally, we evaluate the quality of the global error
indicator on all of these examples.
\subsection{Selection of quadrature rules}
\label{sec:quadRules}
Thus far we have sidestepped practical questions about which quadrature rules exist or are most efficient. Our analysis has relied only on polynomial accuracy of quadrature rules; all quadrature rules with a given polynomial accuracy allow the same truncation of a pseudospectral approximation. In practice, however, we care about the cumulative cost of the adaptive algorithm, which must step through successive levels of refinement.
Integration over a bounded interval with uniform weighting offers the widest variety of quadrature choices, and thus allows a thorough comparison. Table \ref{tab:quadCost} summarizes the costs of several common quadrature schemes. First, we see that linear-growth Gaussian quadrature is asymptotically much less efficient than exponential-growth in reaching any particular degree of exactness. However, for rules with fewer than about ten points, this difference is not yet significant. Second, Clenshaw-Curtis shows efficiency equivalent to exponential-growth Gaussian: both use $n$ points to reach $n$th order polynomial exactness \cite{Clenshaw1960}. However, their performance with respect to external aliasing differs: Clenshaw-Curtis slowly loses accuracy if the integrand is of order greater than $n$, while Gaussian quadrature gives \OO{1} error even on $(n+1)$-order functions \cite{Trefethen2008}. This may make Clenshaw-Curtis Smolyak pseudospectral estimates more efficient. Finally, we consider Gauss-Patterson quadrature, which is nested and has significantly higher polynomial exactness---for a given cumulative cost---than the other types \cite{Patterson1968}. Computing the quadrature points and weights in finite precision (even extended-precision) arithmetic has practically limited Gauss-Patterson rules to 255 points, but we recommend them whenever this is sufficient.
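The polynomial-exactness claims underlying this comparison are easy to probe numerically; for instance, an $n$-point Gauss-Legendre rule integrates degree $2n-1$ exactly but not degree $2n$ (a small check of our own, not from the paper):

```python
import numpy as np

n = 4
x, w = np.polynomial.legendre.leggauss(n)   # 4-point Gauss-Legendre

# exact for degree 2n-1 = 7; by symmetry x^6 suffices:
# int_{-1}^{1} x^6 dx = 2/7
approx6 = np.sum(w * x**6)
# not exact at degree 2n = 8: int_{-1}^{1} x^8 dx = 2/9
approx8 = np.sum(w * x**8)
print(abs(approx6 - 2.0/7.0), abs(approx8 - 2.0/9.0))
```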
\begin{table}
\centering
\footnotesize
\begin{tabular}{r || c | c | c || c | c | c || c |c || c |c}
& \multicolumn{3}{c||}{Lin.\ G} & \multicolumn{3}{c||}{Exp.\ G} & \multicolumn{2}{c||}{C-C} & \multicolumn{2}{c}{G-P} \\
Order & $p$ & $a$ & $t$ & $p$ & $a$ & $t$ & $p$ & $a$ & $p$ & $a$ \\ \hline
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
2 & 2 & 3 & 3 & 2 & 3 & 3 & 3 & 3 & 3 & 5 \\
3 & 3 & 5 & 6 & 4 & 7 & 7 & 5 & 5 & 7 & 10\\
4 & 4 & 7 & 10 & 8 & 15 & 15 & 9 & 9 & 15 & 22\\
5 & 5 & 9 & 15 & 16 & 31 & 31 & 17 & 17 & 31 & 46\\
6 & 6 & 11 & 21 & 32 & 63 & 63 & 33 & 33 & 63 & 94\\
$m$ & $m$ & $2m-1$ & $(m^2+m)/2$ & $2^{m-1}$ & $2^{m}-1 $ & $2^{m}-1 $ & $2^{m-1}+1$ & $2^{m-1}+1$ & $2^{m}-1$ & $3\cdot 2^{m-1}-2$
\end{tabular}
\caption{The cost of four quadrature strategies as their order increases: linear growth Gauss-Legendre quadrature (Lin.\ G), exponential growth Gauss-Legendre quadrature (Exp.\ G), Clenshaw-Curtis quadrature (C-C), and Gauss-Patterson quadrature (G-P). We list the number of points used to compute the given rule (p), the polynomial exactness (a), and the total number of points used so far (t). For nested rules, (p) = (t), so the total column is omitted.}
\label{tab:quadCost}
\end{table}
For most other weights and intervals, there are fewer choices that provide polynomial exactness, so exponential-growth Gaussian quadrature is our default choice. In the specific case of Gaussian weight, Genz has provided a family of Kronrod extensions, similar to Gauss-Patterson quadrature, which may be a useful option \cite{Genz1996}.
If a linear growth rule is chosen and the domain is symmetric, we suggest that each new level include at least two points, so that the corresponding basis grows by at least one even and one odd basis function. This removes the possibility for unexpected effects on the adaptive strategy if the target function is actually even or odd.
\subsection{Basic convergence: Genz functions}
The Genz family \cite{Genz1984,Genz1987} comprises six parameterized functions, defined from
$[-1,1]^d \to \mathbb{R}$. They are commonly
used to investigate the accuracy of quadrature rules and interpolation
schemes \cite{Barthelmann2000,Klimke2005}. The purpose of this example
is to show that different Smolyak pseudospectral strategies behave
roughly as expected, as evidenced by decreasing $L^2$ approximation
errors as more function evaluations are employed. These functions are as follows:
\allowdisplaybreaks
\begin{eqnarray*}
\mbox{oscillatory: } f_1(x) &=& \cos\left(2\pi w_1+\sum_{i=1}^d c_ix_i\right)\\
\mbox{product peak: } f_2(x) &=& \prod_{i=1}^d\left(c_i^{-2}+(x_i-w_i)^2\right)^{-1}\\
\mbox{corner peak: } f_3(x) &=& \left(1+\sum_{i=1}^d c_ix_i\right)^{-(d+1)}\\
\mbox{Gaussian: } f_4(x) &=& \exp\left(-\sum_{i=1}^d c_i^2 (x_i-w_i)^2\right)\\
\mbox{continuous: } f_5(x) &=& \exp\left(-\sum_{i=1}^d c_i |x_i-w_i|\right)\\
\mbox{discontinuous: } f_6(x) &=& \begin{cases}
0 & \mbox{if } x_1>w_1 \mbox{ or } x_2> w_2\\
\exp{\left( \sum_{i=1}^d c_i x_i \right)} & \mbox{otherwise}\\
\end{cases}
\end{eqnarray*}
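For reference, the four smooth Genz functions used in the experiments below can be coded directly; a minimal sketch (the function names and parameter handling are our own):

```python
import numpy as np

# The four C-infinity Genz test functions on [-1,1]^d;
# c and w are the parameter vectors drawn as described in the text.
def oscillatory(x, c, w):
    return np.cos(2.0 * np.pi * w[0] + np.dot(c, x))

def product_peak(x, c, w):
    return np.prod(1.0 / (c**-2.0 + (x - w) ** 2))

def corner_peak(x, c, w):
    d = len(x)
    return (1.0 + np.dot(c, x)) ** (-(d + 1))

def gaussian(x, c, w):
    return np.exp(-np.sum(c**2 * (x - w) ** 2))
```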
Our first test uses five isotropic and \textit{non-adaptive} pseudospectral
approximation strategies. The initial strategy is the isotropic full
tensor pseudospectral algorithm, based on Gauss-Legendre quadrature,
with order growing exponentially with level. The other four strategies
are total-order expansions of increasing order based on the following
quadrature rules: linear growth Gauss-Legendre, exponential growth
Gauss-Legendre, Clenshaw-Curtis, and Gauss-Patterson. All the rules
were selected so that the final rule would have around $10^4$ points.
We consider 30 random realizations of each Genz function in $d=5$
dimensions; random parameters for the Genz functions are drawn
uniformly from $[0,1]$, then normalized so that $\|\mathbf{w}\|_1
= 1 $ and $\|\mathbf{c}\|_1 = b_j$, where $j$ indexes the Genz
function type and the constants $b_j$ are as chosen
in~\cite{Barthelmann2000,Klimke2005}.
This experiment only uses the first four Genz functions, which are in
$C^\infty$, as pseudospectral methods have well known difficulties on
functions with discontinuities or discontinuous derivatives
\cite{Canuto2006}. Each estimate of $L^2$ approximation error is
computed by Monte Carlo sampling with 10$^4$ samples. Figure
\ref{fig:GenzResults} plots $L^2$ error at each stage, where each
point represents the mean error over the 30 random functions.
Relatively simple conclusions can be drawn from this data. All the
methods show fast convergence, indicating that the internal aliasing issues have
indeed been resolved. In contrast, one would expect direct quadrature
to suffer from large aliasing errors for the three super-linear growth
rules. Otherwise, judging the efficiency of the different rules is not
prudent, because differences in truncation and the structure of the test functions
themselves obscure differences in efficiency. In deference to our
adaptive strategy, we ultimately do not recommend this style of
isotropic and function-independent truncation anyway.
To test our \textit{adaptive} approach, Figure \ref{fig:GenzScatter}
shows results from a similar experiment, now comparing the convergence
of an adaptive Smolyak pseudospectral algorithm with that of a
non-adaptive algorithm. To make the functions less isotropic, we
introduce an exponential decay, replacing each $c_i$ with $c_i
e^{i/5}$, where the $c_i$ are generated and normalized as above. For
consistency, both algorithms are based on Gauss-Patterson
quadrature. As we cannot synchronize the number of evaluations used by
the adaptive algorithm for different functions, we plot individual
errors for the 30 random functions instead of the mean error. This
reveals the variability in difficulty of the functions, which was
hidden in the previous plot. We conclude that the adaptive algorithm
also converges as expected, with performance comparable to or better
than the non-adaptive algorithm. Even though we have included some
anisotropy, these functions include relatively high degrees of
coupling; hence, in this case the non-adaptive strategy is a fairly
suitable choice. For example, the ``product peak'' function shows
little benefit from the adaptive strategy. Although omitted here for
brevity, other quadrature rules produce similar results when comparing
adaptive and non-adaptive algorithms.
\begin{figure}[htb]
\centering
\includegraphics{figures/GenzImprovements.pdf}
\caption{Mean $L^2$ convergence of the non-adaptive isotropic total-order Smolyak pseudospectral algorithm with various quadrature rules, compared to the full tensor pseudospectral algorithm, on the Genz test functions.}
\label{fig:GenzResults}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics{figures/GenzAdaptiveScatter.pdf}
\caption{$L^2$ convergence of the adaptive and non-adaptive Gauss-Patterson Smolyak pseudospectral algorithm. Individual results for 30 random instances of the Genz functions are shown.}
\label{fig:GenzScatter}
\end{figure}
\subsection{Adaptivity: chemical kinetics}
To further illustrate the benefits of an adaptive Smolyak approach, we build a
surrogate for a realistic simulation of a combustion kinetics problem.
Specifically, we consider the auto-ignition of a methane-air mixture
given 14 uncertain rate parameters. Governing equations for this
process are a set of stiff nonlinear ODEs expressing conservation of
energy and of chemical species \cite{Kee2003}. The uncertain rate
parameters represent activation energies of reactions governing the conversion of methane to methyl, each endowed with a uniform distribution over $[0.8, 1.25]$ times its nominal value. These parameters appear in Arrhenius
expressions for the species production rates, with the reaction
pathways and their nominal rate parameters given by the GRIMech 3.0
mechanism \cite{grimech3:local}. The output
of interest is the logarithm of the ignition time, which is a
functional of the trajectory of the ODE system, and is continuous over the selected parameter ranges. Simulations were
performed with the help of the TChem software library \cite{tchem:local},
which provides convenient evaluations of thermodynamic properties and
species production rates, along with Jacobians for implicit time
integration.
Chemical kinetics are an excellent testbed for adaptive
approximation because, by the nature of detailed kinetic systems, we
expect strong coupling between some inputs and weak coupling between
others, but we cannot predict these couplings \emph{a
priori}.
We test the effectiveness of adaptive Smolyak pseudospectral methods
based on the four quadrature rules discussed earlier. As our earlier
analysis suggested that Gauss-Patterson quadrature should be most
efficient, our reference solution is a non-adaptive Gauss-Patterson
total-order Smolyak pseudospectral expansion. We ran the non-adaptive
algorithm with a total order index set truncated at $n=5$ (which includes monomial basis terms up through
$\psi_{23}^{(i)}$), using around 40,000 point evaluations and taking over
an hour to run. We tuned the four adaptive algorithms to terminate
with approximately the same number of evaluations.
Figure \ref{fig:combustionConvergence} compares convergence of the
five algorithms. The $L^2$ errors reported on the vertical axis are
Monte Carlo estimates using $10^4$ points. Except for a small deviation
at fewer than 200 model evaluations, all of the adaptive methods
significantly outperform the non-adaptive method. The performance of
the different quadrature rules is essentially as predicted in Section
\ref{sec:quadRules}: Gauss-Patterson is the most efficient,
exponential growth Gauss-Legendre and Clenshaw-Curtis are nearly
equivalent, and linear growth Gauss-Legendre performs worse as the
order of the polynomial approximation increases. Compared to the
non-adaptive algorithm, adaptive Gauss-Patterson yields more than two
orders of magnitude reduction in the error at the same number of model
evaluations. Linear growth Gaussian quadrature is initially comparable
to exponential growth Gaussian quadrature, because the asymptotic
benefits of exponential growth do not appear while the algorithm is
principally using very small one-dimensional quadrature rules. At the
end of these experiments, a reasonable number of higher order
quadrature rules are used and the difference becomes visible.
\begin{figure}[htb] \centering
\includegraphics[scale=.6]{figures/CombustionConvergence.pdf}
\caption{$L^2$ convergence of ignition delay in a 14-dimensional chemical kinetic system; comparing a
non-adaptive isotropic total-order Gauss-Patterson-based Smolyak
pseudospectral algorithm to the adaptive algorithm with various
quadrature rules.}
\label{fig:combustionConvergence}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics{figures/SimpleCombustionCoeffDiff.pdf}
\caption{The plot depicts the difference between the
\emph{number} of coefficients of a particular magnitude and
order in the final \textit{adaptive} and
\textit{non-adaptive} Gauss-Patterson based expansions. The
horizontal axis is the order of the term and the vertical
axis specifies $\log_{10}$ of the coefficient value. The
color represents $\log_{10}$ of the difference between
the two methods, where positive values indicate more terms
in the non-adaptive expansion. Hence, the dark blue at
$(6,-10)$ indicates that the non-adaptive expansion includes
around 3,000 extra terms of magnitude $10^{-10}$ and the
dark red at $(10,-8)$ indicates that the adaptive expansion
includes about 1,000 extra terms of magnitude $10^{-8}$.
Grey squares are the same for both expansions and white
squares are not present in either.}
\label{fig:combustionCoeffs}
\end{figure}
We conclude by illustrating that the adaptive algorithm is effective
because it successfully focuses its efforts on high-magnitude
coefficients---that is, coefficients that make the most significant
contributions to the function. Even though the non-adaptive expansion
has around 37,000 terms and the final adaptive Gauss-Patterson
expansion only has about 32,000 terms, the adaptive expansion exhibits
much lower error because most of the additional terms in the
non-adaptive expansion are nearly zero. By skipping many near-zero
coefficients, the adaptive approach is able to locate and estimate a
number of higher-order terms with large magnitudes. Figure
\ref{fig:combustionCoeffs} depicts this pattern by plotting the
difference between the numbers of included terms in the final adaptive
Gauss-Patterson and non-adaptive expansions. The adaptive algorithm
does not actually add any higher-order monomials; neither expansion uses
one-dimensional basis terms of order higher than $\psi^{(i)}_{23}$.
Instead, the adaptive algorithm adds mixed terms of higher total
order, thus capturing the coupling of certain variables in more detail
than the non-adaptive algorithm. The figure shows that terms through
30\textsuperscript{th} order are included in the adaptive expansion,
all of which are products of non-constant polynomials in more than one
dimension.
\subsection{Performance of the global error indicator}
To evaluate the termination criterion, we collected the global error
indicator during runs of the adaptive algorithm for all of the test functions
described above, including the slowly converging non-smooth Genz
functions omitted before. The discontinuous Genz function does not
include the exponential coefficient decay because the discontinuity
already creates strong anisotropy. Results are shown for
Gauss-Patterson quadrature. The relationship between the estimated
$L^2$ error and the global error indicator $\epsilon_g$ is shown in
Figure \ref{fig:terminationPlot}. For the smooth test functions,
$\epsilon_g$ is actually an excellent indicator, as it is largely
within an order of magnitude of the correct value and essentially linearly related to it. However, the non-smooth Genz functions illustrate the hazard of relying too heavily on this indicator: although the adaptive algorithm does decrease both the errors and the indicator, the relationship between the two appears far less direct.
\begin{figure}[htb] \centering
\includegraphics[scale=.65]{figures/GlobalErrorIndicator.pdf}
\caption{Relationship between the termination criterion
(\ref{eq:globalError}) and the estimated $L^2$ error for every
function tested.}
\label{fig:terminationPlot}
\end{figure}
\section{Smolyak algorithms}
\label{sec:smolyak}
Thus far, we have developed polynomial approximations of multivariate functions by taking tensor products of one-dimensional pseudospectral operators. Smolyak algorithms avoid the exponential cost of full tensor products when the input dimensions are not fully coupled, by using a telescoping sum to blend different lower-order full tensor approximations.
\begin{Example}
Suppose that $f(x,y) = x^{7} + y^7+x^3 y$. To construct a polynomial expansion with both the $x^7$ and $y^7$ terms, a full tensor pseudospectral algorithm would estimate all the polynomial terms up to $x^7y^7$, because tensor algorithms fully couple the dimensions. This mixed term is costly, requiring, in this case, an $8\times 8$ point grid for Gaussian quadratures. The individual terms can be had much more cheaply, using $8\times 1$, $1\times 8$, and $4 \times 2$ grids, respectively. Smolyak algorithms help realize such savings in practice.
\end{Example}
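To make the grid-size claim concrete, the following minimal sketch (our illustration, not part of the algorithm implementation; assumes NumPy) computes the Legendre coefficient of the mixed term $x^3 y$ exactly from only a $4\times 2$ Gauss--Legendre grid. The 4-point rule is exact through degree 7 in $x$ and the 2-point rule through degree 3 in $y$; the contributions of the $x^7$ and $y^7$ terms to this coefficient vanish because the quadrature resolves the relevant orthogonality in the other dimension.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

f = lambda x, y: x**7 + y**7 + x**3 * y

# Legendre polynomial P_n evaluated on an array
P = lambda n, x: Legendre.basis(n)(x)

# 4 Gauss points in x (exact through degree 7), 2 in y (degree 3)
xg, wx = leggauss(4)
yg, wy = leggauss(2)
X, Y = np.meshgrid(xg, yg, indexing="ij")
W = np.outer(wx, wy)

# coefficient of P_3(x) P_1(y); divide by ||P_3||^2 ||P_1||^2 = (2/7)(2/3)
num = np.sum(W * f(X, Y) * P(3, X) * P(1, Y))
coeff = num / ((2 / 7) * (2 / 3))
print(coeff)  # ~0.4, since x^3 y = ((2/5) P_3(x) + (3/5) P_1(x)) P_1(y)
```

The analytic value is $2/5$, and the $4\times 2$ grid reproduces it to machine precision, which is the kind of savings a Smolyak construction assembles systematically.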
This section reviews the construction of Smolyak algorithms and presents a new theorem about the exactness of Smolyak algorithms built around arbitrary admissible index sets. We apply these results to quadrature and pseudospectral approximation, allowing a precise characterization of their errors.
\subsection{General Smolyak algorithms}
\label{s:generalsmolyak}
As in Section~\ref{sec:approximations}, assume that we have for every dimension $i=1\ldots d$ a convergent sequence $\mathcal{L}^{(i)}_k$ of approximations. Let $\mathcal{L}$ denote the collection of these sequences over all the dimensions. Define the difference operators
\begin{eqnarray}
\Delta^{(i)}_0 & := & \mathcal{L}^{(i)}_0 = 0, \\
\Delta^{(i)}_n & := & \mathcal{L}^{(i)}_n - \mathcal{L}^{(i)}_{n-1} .
\end{eqnarray}
For any $i$, we may write the exact or ``true'' operator as the telescoping series
\begin{equation}
\mathcal{L}^{(i)} = \sum_{k=1}^\infty \left( \mathcal{L}^{(i)}_k - \mathcal{L}^{(i)}_{k-1} \right) = \sum_{k=0}^\infty \Delta^{(i)}_k.
\end{equation}
Now we may write the tensor product of the exact operators as the tensor product of the telescoping sums, and interchange the product and sum:
\begin{eqnarray}
\mathcal{L}^{(1)} \otimes \cdots \otimes \mathcal{L}^{(d)} &= & \sum_{k_1=0}^\infty \Delta^{(1)}_{k_1} \otimes \cdots \otimes \sum_{k_d=0}^\infty \Delta^{(d)}_{k_d} \nonumber \\
&= & \sum_{\mathbf{k} = 0}^\infty \Delta_{k_1}^{(1)} \otimes \cdots \otimes \Delta_{k_d}^{(d)}
\end{eqnarray}
\noindent Smolyak's idea is to approximate the tensor product operator with truncations of this sum \cite{Smolyak1963}:
\begin{equation}
\label{eq:generalSmolyakDiff}
A(\mathcal{K},d,\mathcal{L}) := \sum_{\mathbf{k} \in \mathcal{K}} \Delta_{k_1}^{(1)} \otimes \cdots \otimes \Delta_{k_d}^{(d)} .
\end{equation}
We refer to the multi-index set $\mathcal{K}$ as the \emph{Smolyak multi-index set}, and it must be admissible for the sum to telescope correctly. Smolyak specifically suggested truncating with a total order multi-index set, which is the most widely studied choice. However, we can compute the approximation with any admissible multi-index set. Although the expression above is especially clean, it is not the most useful form for computation. We can reorganize the terms of (\ref{eq:generalSmolyakDiff}) to construct a weighted sum of the tensor operators:
\begin{equation}
\label{eq:generalSmolyakWeighted}
A(\mathcal{K}, d, \mathcal{L}) = \sum_{\mathbf{k} \in \mathcal{K}} c_\mathbf{k} \, \mathcal{L}^{(1)}_{k_1} \otimes \cdots \otimes \mathcal{L}^{(d)}_{k_d} ,
\end{equation}
where $c_\mathbf{k}$ are integer \emph{Smolyak coefficients} computed from the combinatorics of the difference formulation. One can compute the coefficients through a simple iteration over the index set and use (\ref{eq:generalSmolyakDiff}) to determine which full tensor rules are incremented or decremented. In general, these coefficients are non-zero near the leading surface of the Smolyak multi-index set, reflecting the mixing of the most accurate constituent full tensor approximations.
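The iteration just described can be sketched directly (our illustration; the helper name \texttt{smolyak\_coefficients} is ours). Expanding each difference operator in the truncated telescoping sum gives the closed form $c_\mathbf{k} = \sum_{\mathbf{z} \in \{0,1\}^d,\, \mathbf{k}+\mathbf{z} \in \mathcal{K}} (-1)^{|\mathbf{z}|}$, valid for any admissible (downward-closed) $\mathcal{K}$:

```python
import itertools
from math import comb

def smolyak_coefficients(K):
    """c_k = sum over z in {0,1}^d with k + z in K of (-1)^{|z|},
    obtained by expanding the difference operators over an admissible
    (downward-closed) multi-index set K."""
    K = set(map(tuple, K))
    d = len(next(iter(K)))
    coeffs = {}
    for k in K:
        c = sum((-1) ** sum(z)
                for z in itertools.product((0, 1), repeat=d)
                if tuple(ki + zi for ki, zi in zip(k, z)) in K)
        if c != 0:
            coeffs[k] = c
    return coeffs

# Sanity check on a total-order simplex (0-based levels), where the classical
# closed form is c_k = (-1)^(L-|k|) * C(d-1, L-|k|) for L-d+1 <= |k| <= L:
d, L = 3, 4
K = [k for k in itertools.product(range(L + 1), repeat=d) if sum(k) <= L]
coeffs = smolyak_coefficients(K)
```

Only multi-indices within $d-1$ of the leading surface of the simplex receive non-zero coefficients, matching the observation above that the coefficients concentrate near the leading surface of $\mathcal{K}$.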
If each sequence of one-dimensional operators converges, then the Smolyak approximation converges
{to the tensor product of exact operators}
as $\mathcal{K} \to \mathbb{N}^d_0$. For the isotropic simplex index set, some precise rates of convergence are known with respect to the side length of the simplex \cite{Wasilkowski1999,Wasilkowski2005a,Wasilkowski1995,Sickel2007a,Sickel2009}. Although general admissible Smolyak multi-index sets are difficult to study theoretically, they allow detailed customization to the anisotropy of a particular function.
\subsection{Exactness of Smolyak algorithms}
In the one-dimensional and full tensor settings, we have characterized approximation algorithms through their exact sets---those inputs for which the algorithm is precise. This section shows that if the constituent one-dimensional approximations have nested exact sets, Smolyak algorithms are the ideal blending of different full tensor approximations from the perspective of exact sets; that is, the exact set of the Smolyak algorithm contains the union of the exact sets of the component full tensor approximations. This result will facilitate subsequent analysis of sparse quadrature and pseudospectral approximation algorithms. This theorem and our proof closely follow the framework provided by Novak and Ritter \cite{Novak1996,Novak1999a,Barthelmann2000}, but include a generalization to arbitrary Smolyak multi-index sets.
\begin{theorem}
\label{thm:SmolyakAccuracy}
Let $A(\mathcal{K}, d, \mathcal{L})$ be a Smolyak algorithm composed of linear operators with nested exact sets, i.e., with $m \leq m^\prime$ implying that $\mathcal{E} (\mathcal{L}^{(i)}_m ) \subseteq \mathcal{E} ( \mathcal{L}^{(i)}_{m^\prime})$ for $i=1 \ldots d$, where $\mathcal{K}$ is admissible. Then the exact set of $A(\mathcal{K}, d, \mathcal{L})$ contains
\begin{eqnarray}
\mathcal{E}\left (A(\mathcal{K}, d,\mathcal{L})\right ) &\supseteq &\bigcup_{\mathbf{k} \in \mathcal{K}} \mathcal{E} \left ( \mathcal{L}^{(1)}_{k_1} \otimes \cdots \otimes \mathcal{L}^{(d)}_{k_d} \right ) \nonumber \\
&\supseteq & \bigcup_{\mathbf{k} \in \mathcal{K}} \mathcal{E}(\mathcal{L}^{(1)}_{k_1}) \otimes \cdots \otimes \mathcal{E}(\mathcal{L}^{(d)}_{k_d}) .
\end{eqnarray}
\end{theorem}
\proof{
We begin by introducing notation to incrementally build a multi-index set dimension by dimension. For a multi-index set $\mathcal{K}$ of dimension $d$, let the restriction of the multi-indices to the first $i$ dimensions be $\mathcal{K}^{(i)} := \{\mathbf{k}_{1:i} = (k_1, \ldots, k_i) : \mathbf{k} \in \mathcal{K}\}$. Furthermore, define subsets of these restrictions based on the $(i+1)$\textsuperscript{th} element of the multi-indices, $\mathcal{K}^{(i)}_j := \{\mathbf{k}_{1:i} : \mathbf{k} \in \mathcal{K} \ \mathit{and} \ {k}_{i+1} = j\}$. These sets are nested, $\mathcal{K}^{(i)}_j \supseteq \mathcal{K}^{(i)}_{j+1}$, because $\mathcal{K}$ is admissible.
Also let ${k}^\mathrm{max}_i$ denote the maximum value of the $i$\textsuperscript{th} component of the multi-indices in the set $\mathcal{K}$.
Using this notation, one can construct $\mathcal{K}$ inductively,
\begin{eqnarray}
\mathcal{K}^{(1)} &= & \{1, \ldots, {k}^\mathrm{max}_{1}\}\\
\mathcal{K}^{(i)} &= & \bigcup_{j = 1}^{{k}^\mathrm{max}_{i}} \mathcal{K}^{(i-1)}_j \otimes j, \ \ i = 2 \ldots d. \label{e:inductK}
\end{eqnarray}
It is sufficient to prove that the Smolyak operator is exact for an arbitrary $f$ with tensor structure, $f = f_1 \times \cdots \times f_{d}$. Suppose there exists a $\mathbf{k}^\ast$ such that $f \in \mathcal{E}(\mathcal{L}^{(\mathbf{d})}_{\mathbf{k}^\ast})$. We will show that if $\mathcal{K}$ is an admissible multi-index set containing $\mathbf{k}^\ast$, then $A (\mathcal{K}, d,\mathcal{L})$ is exact on $f$. We do so by induction on the dimension $i$ of the Smolyak operator and the function.
First, consider the $i=1$ case. $A(\mathcal{K}^{(1)}, 1,\mathcal{L}) = \mathcal{L}^{(1)}_{{k}^\mathrm{max}_1}$, where ${k}^\mathrm{max}_1 \geq {k}^\ast_1$. Hence $\mathcal{E}(A(\mathcal{K}^{(1)}, 1,\mathcal{L})) = \mathcal{E}(\mathcal{L}^{(1)}_{{k}^\mathrm{max}_1}).$
For the induction step, we construct the $(i+1)$-dimensional Smolyak operator in terms of the $i$-dimensional operator:
\begin{equation}
A(\mathcal{K}^{(i+1)}, i+1, \mathcal{L}) = \sum_{j = 1}^{{k}^\mathrm{max}_{i+1}} A(\mathcal{K}^{(i)}_j, i, \mathcal{L}) \otimes (\mathcal{L}^{(i+1)}_{j}-\mathcal{L}^{(i+1)}_{j-1}) .
\label{e:fullsum}
\end{equation}
This sum is over increasing levels of accuracy in the $i+1$ dimension. We know the level required for the approximate operator to be exact in this dimension; this may be expressed as
\begin{equation}
\mathcal{L}^{(i+1)}_{j}(f_{i+1}) = \mathcal{L}^{(i+1)}_{j-1}(f_{i+1}) = \mathcal{L}^{(i+1)}(f_{i+1}) \ \mathrm{when} \ j-1 \geq {k}^\ast_{i+1} .
\end{equation}
Therefore the sum (\ref{e:fullsum}) can be truncated at the ${k}^\ast_{i+1}$ term, as the differences of higher terms are zero when applied to $f$:
\begin{equation}
A(\mathcal{K}^{(i+1)}, i+1, \mathcal{L}) = \sum_{j = 1}^{{k}^\ast_{i+1}} A(\mathcal{K}^{(i)}_j, i,\mathcal{L}) \otimes (\mathcal{L}^{(i+1)}_{j}-\mathcal{L}^{(i+1)}_{j-1}).
\end{equation}
Naturally, $\mathbf{k}^\ast_{1:i} \in \mathcal{K}^{(i)}_{{k}^\ast_{i+1}}$. By nestedness, $\mathbf{k}^\ast_{1:i}$ is also contained in $\mathcal{K}^{(i)}_{j}$ for $j \leq {k}^\ast_{i+1}$. The induction hypothesis then guarantees
\begin{equation}
f_1 \otimes \cdots \otimes f_i \in \mathcal{E}(A(\mathcal{K}^{(i)}_j, i, \mathcal{L})), \ \forall j \leq {k}^\ast_{i+1} .
\end{equation}
Applying the $(i+1)$-dimensional Smolyak operator to the truncated version of $f$ yields
\begin{eqnarray}
& & A(\mathcal{K}^{(i+1)}, i+1, \mathcal{L})(f_1 \otimes \cdots \otimes f_{i+1}) \nonumber \\
& = & \sum_{j = 1}^{{k}^\ast_{i+1}} A(\mathcal{K}^{(i)}_j, i, \mathcal{L})(f_1 \otimes \cdots \otimes f_i) \otimes (\mathcal{L}^{(i+1)}_{j}-\mathcal{L}^{(i+1)}_{j-1})(f_{i+1}) .
\end{eqnarray}
Since each of the $i$-dimensional Smolyak algorithms is exact, by the induction hypothesis, we replace them with the true operators and rearrange by linearity to obtain
\begin{eqnarray}
A(\mathcal{K}^{(i+1)}, i+1, \mathcal{L})(f_1 \otimes \cdots \otimes f_{i+1}) &= & \mathcal{L}^{(\mathbf{i})}(f_1 \otimes \cdots \otimes f_i)\otimes \sum_{j = 1}^{{k}^\ast_{i+1}} (\mathcal{L}^{(i+1)}_{j}-\mathcal{L}^{(i+1)}_{j-1})(f_{i+1}) \nonumber \\
&= & \mathcal{L}^{(\mathbf{i})}(f_1 \otimes \cdots \otimes f_i)\otimes \mathcal{L}^{(i+1)}_{{k}^\ast_{i+1}}(f_{i+1}). \label{e:lastdim}
\end{eqnarray}
The approximation in the $i+1$ dimension is exactly of the level needed to be exact on the $(i+1)$\textsuperscript{th} component of $f$. Then (\ref{e:lastdim}) becomes
\begin{equation}
\mathcal{L}^{(\mathbf{i})}(f_1 \otimes \cdots \otimes f_i)\otimes \mathcal{L}^{(i+1)}(f_{i+1}) = \mathcal{L}^{(\mathbf{i+1})}(f_1 \otimes \cdots \otimes f_{i+1})
\end{equation}
Thus the Smolyak operator is precise for $f$, and the claim is proven.} \endproof
\subsection{Smolyak quadrature}
We recall the most familiar use of Smolyak algorithms, sparse quadrature. Consider a family of one-dimensional quadrature rules $ \mathcal{Q}_k^{(i)} $ in each dimension $i=1 \ldots d$; denote these rules by ${\mathcal{Q}}$. The resulting Smolyak algorithm is written as:
\begin{equation}
A(\mathcal{K}, d, {\mathcal{Q}}) = \sum_{\mathbf{k} \in \mathcal{K}} c_\mathbf{k} \mathcal{Q}^{(\mathbf{d})}_\mathbf{k}.
\end{equation}
This approximation inherits its convergence from the one-dimensional operators. The set of functions that are exactly integrated by a Smolyak quadrature algorithm is described as a corollary of Theorem \ref{thm:SmolyakAccuracy}.
\begin{corollary}
\label{thm:sparseQuadAccuracy}
For a sparse quadrature rule satisfying the hypotheses of Theorem \ref{thm:SmolyakAccuracy},
\begin{equation}
\mathcal{E} \left (A (\mathcal{K},d,{\mathcal{Q}}) \right ) \supseteq
\bigcup_{\mathbf{k} \in \mathcal{K}} \mathcal{E}(\mathcal{Q}^{(\mathbf{d})}_\mathbf{k})
\end{equation}
\end{corollary}
Quadrature rules with polynomial accuracy do have nested exact sets, as required by the theorem. An example of Smolyak quadrature exact sets is shown in Figure \ref{fig:smolyakQuadratureAccuracies}.
\begin{figure}
\centering
\subfloat[The exact set for a level-four Smolyak quadrature in two dimensions, based on \emph{linear} growth Gaussian quadrature rules.]
{
\label{fig:smolyakQuadratureAccuraciesA}
\includegraphics[scale=.7]{figures/GaussLinearQuadrature.pdf}
}
\qquad
\subfloat[The exact set for a level-three Smolyak quadrature in two dimensions, based on \emph{exponential} growth Gaussian quadrature rules.]
{
\label{fig:smolyakQuadratureAccuraciesB}
\includegraphics[scale=.7]{figures/GaussExpQuadrature.pdf}
}
\caption{The exact set diagram for two Smolyak quadrature rules, and the corresponding basis for a Smolyak pseudospectral approximation. $\mathcal{E}(\mathcal{Q})$ is shown as a solid line and $\mathcal{E}_2(\mathcal{Q})$ as a dashed line. The staircase appearance results from the superposition of rectangular full tensor exact sets.}
\label{fig:smolyakQuadratureAccuracies}
\end{figure}
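The construction can be exercised numerically. The sketch below (our illustration, assuming NumPy; we use linear growth with 0-based levels, so level $k$ maps to a $(k+1)$-point Gauss--Legendre rule, a slight shift from the indexing convention $\mathcal{L}_0 = 0$ used above) builds $A(\mathcal{K}, d, \mathcal{Q})$ for a total-order index set and integrates a polynomial lying in the union of the constituent exact sets:

```python
import itertools
import numpy as np
from numpy.polynomial.legendre import leggauss

def smolyak_quadrature(f, d, level):
    """Sparse Gauss-Legendre quadrature on [-1, 1]^d over the total-order
    set K = {k in N_0^d : |k| <= level}, with level k -> (k+1)-point rule."""
    total, n_evals = 0.0, 0
    for k in itertools.product(range(level + 1), repeat=d):
        if sum(k) > level:
            continue
        # combination coefficient from expanding the telescoping sum
        c = sum((-1) ** sum(z)
                for z in itertools.product((0, 1), repeat=d)
                if sum(k) + sum(z) <= level)
        if c == 0:
            continue
        rules = [leggauss(ki + 1) for ki in k]
        for nodes in itertools.product(*(zip(r[0], r[1]) for r in rules)):
            x = [nw[0] for nw in nodes]
            w = np.prod([nw[1] for nw in nodes])
            total += c * w * f(*x)
            n_evals += 1
    return total, n_evals

# x^6, y^6, and x^2 y^2 all lie in the union of the tensor exact sets at
# level 3, so the sparse rule integrates this polynomial exactly.
g = lambda x, y: x**6 + y**6 + x**2 * y**2
val, n = smolyak_quadrature(g, d=2, level=3)
print(val, n)  # exact integral is 4/7 + 4/7 + 4/9 = 100/63
```

None of the constituent rules integrates $x^6$ or $y^6$ exactly except those with four points in the corresponding dimension, yet the telescoping combination cancels the individual rule errors, exactly as Corollary \ref{thm:sparseQuadAccuracy} predicts.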
\subsection{Smolyak pseudospectral approximation}
Applying Smolyak's algorithm to pseudospectral approximation operators yields a sparse algorithm that converges under similar conditions as the one-dimensional operators from which it is constructed. This algorithm is written as
\begin{equation}
A(\mathcal{K}, d, {\mathcal{S}}) = \sum_{\mathbf{k} \in \mathcal{K}} c_\mathbf{k} \mathcal{S}^{(\mathbf{d})}_\mathbf{k} .
\end{equation}
The Smolyak algorithm is therefore a sum of different full tensor pseudospectral approximations, where each approximation is built around the polynomial accuracy of a single full tensor quadrature rule. It is not naturally expressed as a set of formulas for the polynomial coefficients, because different approximations include different polynomials. The term $\Psi_\mathbf{j}$ is included in the Smolyak approximation if and only if $\exists \mathbf{k} \in \mathcal{K}: \Psi_\mathbf{j} \in \mathcal{E}_2(\mathcal{Q}^{(\mathbf{d})}_\mathbf{k})$. Here, $\mathcal{Q}^{(\mathbf{d})}_\mathbf{k}$ is the full tensor quadrature rule used by the full tensor pseudospectral approximation $\mathcal{S}^{(\mathbf{d})}_\mathbf{k}$. As in the full tensor case, the half exact set of a Smolyak quadrature rule defines the range of the Smolyak pseudospectral approximation.
Once again, the Smolyak construction guarantees that the convergence of this approximation is inherited from its constituent one-dimensional approximations. Our choices for the pseudospectral operators ensure nestedness of the constituent exact sets, so we may use Theorem~\ref{thm:SmolyakAccuracy} to ensure that Smolyak pseudospectral algorithms are exact on their range.
\begin{corollary}
If the constituent one-dimensional pseudospectral rules have no internal aliasing and satisfy the conditions of Theorem \ref{thm:SmolyakAccuracy}, then the resulting Smolyak pseudospectral algorithm has no internal aliasing.
\end{corollary}
We additionally provide a theorem that characterizes the external aliasing properties of Smolyak pseudospectral approximation, which the next section will contrast with direct quadrature.
\begin{theorem}
\label{thm:smolyakExternal}
Let $\Psi_\mathbf{j}$ be a polynomial term included in the expansion provided by the Smolyak algorithm $A(\mathcal{K}, d, {\mathcal{S}})$, and let $\Psi_{\mathbf{j}^{\prime}}$ be a polynomial term not included in the expansion. There is no external aliasing of $\Psi_{\mathbf{j}^{\prime}}$ onto $\Psi_\mathbf{j}$ if any of the following conditions is satisfied: (a) there exists a dimension $i$ for which ${j}^{\prime}_i < {j}_i$; or (b) there exists a multi-index $\mathbf{k} \in \mathcal{K}$ such that $\Psi_\mathbf{j}$ is included in the range of $\mathcal{S}^{(\mathbf{d})}_\mathbf{k}$ and $\Psi_{\mathbf{j}^{\prime}} \Psi_\mathbf{j} \in \mathcal{E}(\mathcal{Q}^{(\mathbf{d})}_\mathbf{k})$, where $\mathcal{Q}^{(\mathbf{d})}_\mathbf{k}$ is the quadrature rule used in $\mathcal{S}^{(\mathbf{d})}_\mathbf{k}$.
\end{theorem}
\proof{If condition (a) is satisfied, then $\Psi_\mathbf{j}$ and $\Psi_{\mathbf{j}^{\prime}}$ are orthogonal in dimension $i$, and hence that inner product is zero. Every quadrature rule that computes the coefficient $f_\mathbf{j}$ corresponding to basis term $\Psi_\mathbf{j}$ is accurate for polynomials of at least order $2\mathbf{j}$. Since ${j}^{\prime}_i + {j}_i < 2{j}_i$, every rule that computes the coefficient can numerically resolve the orthogonality, and therefore there is no aliasing. If condition (b) is satisfied, then the result follows from the cancellations exploited by the Smolyak algorithm, as seen in the proof of Theorem \ref{thm:SmolyakAccuracy}.} \endproof
These two statements yield extremely useful properties. First, any Smolyak pseudospectral algorithm, regardless of the admissible Smolyak multi-index set used, has no internal aliasing; this feature is important in practice and not obviously true. Second, while there is external aliasing as expected, the algorithm uses basis orthogonality to limit which external coefficients can alias onto an included coefficient. The Smolyak pseudospectral algorithm is thus a practically ``useful'' approximation, in that one can tailor it to perform a desired amount of work while guaranteeing reliable approximations of the selected coefficients. Computing an accurate approximation of the function only requires including sufficient terms so that the truncation and external aliasing errors are small.
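These properties can also be checked numerically. The sketch below (our illustration, assuming NumPy; the helper names \texttt{downward\_closure} and \texttt{tensor\_pseudospectral} are ours) assembles a two-dimensional Smolyak pseudospectral approximation of $f(x,y) = x^7 + y^7 + x^3 y$ over the admissible index set generated by the cheap grids of the earlier example, and recovers selected coefficients exactly, consistent with the absence of internal aliasing:

```python
import itertools
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

f = lambda x, y: x**7 + y**7 + x**3 * y

def downward_closure(seeds):
    """Smallest admissible multi-index set containing the seed indices."""
    K = set()
    for s in seeds:
        K.update(itertools.product(*(range(si + 1) for si in s)))
    return K

def tensor_pseudospectral(k):
    """Legendre coefficients a_j, j <= k, from a (k1+1) x (k2+1) Gauss grid."""
    (x, wx), (y, wy) = leggauss(k[0] + 1), leggauss(k[1] + 1)
    X, Y = np.meshgrid(x, y, indexing="ij")
    W = np.outer(wx, wy)
    F = f(X, Y)
    out = {}
    for j1 in range(k[0] + 1):
        for j2 in range(k[1] + 1):
            Pj = Legendre.basis(j1)(X) * Legendre.basis(j2)(Y)
            norm = (2 / (2 * j1 + 1)) * (2 / (2 * j2 + 1))
            out[(j1, j2)] = np.sum(W * F * Pj) / norm
    return out

# admissible set built from exactly the cheap grids of the earlier example
K = downward_closure([(7, 0), (0, 7), (3, 1)])
coeffs = {}
for k in K:
    # combination coefficient of the full tensor operator S_k
    c = sum((-1) ** sum(z)
            for z in itertools.product((0, 1), repeat=2)
            if tuple(ki + zi for ki, zi in zip(k, z)) in K)
    if c == 0:
        continue
    for j, a in tensor_pseudospectral(k).items():
        coeffs[j] = coeffs.get(j, 0.0) + c * a
```

For instance, $x^7 = \tfrac{16}{429} P_7(x) + \text{lower-order terms}$, so the recovered coefficient of $P_7(x)P_0(y)$ is $16/429$, while $P_3(x)P_1(y)$ and $P_1(x)P_1(y)$ receive $2/5$ and $3/5$ from the $x^3 y$ term; all are reproduced to machine precision despite no single tensor grid resolving the whole function.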
% arXiv:1706.09649
\title{Counting chambers in restricted Coxeter arrangements}
\begin{abstract}
Solomon showed that the Poincar\'e polynomial of a Coxeter group $W$ satisfies a product decomposition depending on the exponents of $W$. This polynomial coincides with the rank-generating function of the poset of regions of the underlying Coxeter arrangement. In this note we determine all instances when the analogous factorization property of the rank-generating function of the poset of regions holds for a restriction of a Coxeter arrangement. It turns out that this is always the case with the exception of some instances in type $E_8$.
\end{abstract}
\section{Introduction}
Much of the motivation
for the study of arrangements
of hyperplanes comes
from Coxeter arrangements.
They consist of the reflecting hyperplanes
associated with the
reflections of the underlying Coxeter group.
Solomon showed that
the Poincar\'e polynomial $W(t)$
of a Coxeter group $W$ satisfies a product
decomposition depending on the exponents of $W$, see \eqref{eq:solomon}.
This polynomial coincides with the
rank-generating function of the poset of regions of
the underlying Coxeter arrangement, see \S \ref{s:rankgenerating}.
The aim of this note is to
classify all cases when
the analogous factorization property of
the rank-generating function of the poset of
regions holds for an arbitrary restriction of a
Coxeter arrangement.
It turns out that this is always the case
with the exception of some instances in type $E_8$,
see Theorem \ref{thm:main}.
The analogous factorization property for
a localization of a Coxeter arrangement
is an immediate consequence of Solomon's theorem
and a theorem of Steinberg \cite[Thm.~1.5]{steinberg:invariants},
see Remark \ref{rems:thmmain}(iv).
\subsection{The Poincar\'e polynomial of a Coxeter group}
\label{ssec:coxeter}
Let $(W,S)$ be a Coxeter group with a distinguished set of generators, $S$,
see \cite{bourbaki:groupes}. Let $\ell$ be the length function
of $W$ with respect to $S$.
The \emph{Poincar\'e polynomial} $W(t)$ of
the Coxeter group $W$ is the polynomial in $\BBZ[t]$ defined by
\begin{equation}
\label{eq:poncarecoxeter}
W(t) := \sum_{w \in W} t^{\ell(w)}.
\end{equation}
The following factorization of
$W(t)$ is due to Solomon \cite{solomon:chevalley}:
\begin{equation}
\label{eq:solomon}
W(t) = \prod_{i=1}^n(1 + t + \ldots + t^{e_i}),
\end{equation}
where $\{e_1, \ldots, e_n\}$ is the
set of exponents of $W$.
See also Macdonald \cite{macdonald:coxeter}.
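A quick illustration of \eqref{eq:solomon}, included here for orientation: let $W$ be the Coxeter group of type $A_2$, i.e.~the symmetric group on three letters with $S = \{s_1, s_2\}$. Its six elements $e, s_1, s_2, s_1s_2, s_2s_1, s_1s_2s_1$ have lengths $0, 1, 1, 2, 2, 3$, so
\[
W(t) = 1 + 2t + 2t^2 + t^3 = (1+t)(1 + t + t^2),
\]
in agreement with \eqref{eq:solomon}, as the exponents of $W$ are $e_1 = 1$ and $e_2 = 2$.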
\subsection{The rank-generating function of the posets of regions}
\label{s:rankgenerating}
Let $\CA = (\CA,V)$ be a
hyperplane arrangement in the real vector space $V=\BBR^n$.
A \emph{region} of $\CA$ is a connected component of the
complement $V \setminus \cup_{H \in \CA}H$ of $\CA$.
Let $\RR := \RR(\CA)$ be the set of regions of $\CA$.
For $R, R' \in \RR$, we let $\CS(R,R')$ denote the
set of hyperplanes in $\CA$ separating $R$ and $R'$.
Then with respect to a choice of a fixed
base region $B$ in $\RR$, we can partially order
$\RR$ as follows:
\[
R \le R' \quad \text{ if } \quad \CS(B,R) \subseteq \CS(B,R').
\]
Endowed with this partial order, we call $\RR$ the
\emph{poset of regions of $\CA$ (with respect to $B$)} and denote it by
$P(\CA, B)$. This is a ranked poset of finite rank,
where $\rk(R) := |\CS(B,R)|$, for $R$ a region of $\CA$,
\cite[Prop.~1.1]{edelman:regions}.
The \emph{rank-generating function} of $P(\CA, B)$ is
defined to be the following polynomial in
$\BBZ[t]$
\begin{equation*}
\label{eq:rankgen}
\zeta(P(\CA,B), t) := \sum_{R \in \RR}t^{\rk(R)}.
\end{equation*}
\bigskip
Let $W = (W,S)$ be a Coxeter group with associated reflection arrangement
$\CA = \CA(W)$ which consists of the reflecting hyperplanes of
the reflections in $W$ in the real space $V=\BBR^n$, where $|S| = n$.
Note that the Poincar\'e polynomial $W(t)$
associated with $W$
given in \eqref{eq:poncarecoxeter}
coincides with the rank-generating function of the poset of regions of
the underlying reflection arrangement
$\CA(W)$ with respect to $B$ being the dominant Weyl chamber of $W$
in $V$;
see \cite{bjoerneredelmanziegler} or \cite{jambuparis:factored}.
\bigskip
Thanks to work of Bj\"orner, Edelman, and Ziegler
\cite[Thm.~4.4]{bjoerneredelmanziegler}
(see also Paris \cite{paris:counting}), respectively
Jambu and Paris \cite[Prop.~3.4, Thm.~6.1]{jambuparis:factored},
in case of a real arrangement $\CA$
which is supersolvable (see \S \ref{ssect:supersolv}),
respectively inductively factored (see \S \ref{ssect:factored}),
there always exists a suitable base region $B$ so that
$\zeta(P(\CA,B), t)$
admits a multiplicative decomposition which
is equivalent to \eqref{eq:solomon}
determined by the exponents of $\CA$,
see Theorem \ref{thm:mult-zeta}.
\subsection{Restricted Coxeter arrangements}
\label{s:restrictedCoxeter}
Let $W$ be a Coxeter group with reflection arrangement
$\CA = \CA(W)$ in $V=\BBR^n$.
We consider the following generalization of the
Poincar\'e polynomial $W(t)$ of $W$.
Let $X$ be in the intersection lattice $L(\CA)$ of $\CA$,
i.e.~$X$ is the subspace in $V$ given by the intersection
of some hyperplanes in $\CA$.
Then we can consider the restricted arrangement
$\CA^X$ which is the induced arrangement in $X$ from $\CA$,
see \S \ref{ssect:arrangements}.
In a case-by-case study,
Orlik and Terao showed in \cite{orlikterao:free} that the restricted
arrangement
$\CA^X$ is always free, so we can speak of the exponents of $\CA^X$,
see \cite[\S 4]{orlikterao:arrangements}.
In case $W$ is a Weyl group, Douglass \cite[Cor.~6.1]{douglass:adjoint}
gave a uniform proof
of this fact by means of an elegant, conceptual Lie theoretic argument.
It follows from the discussion above that in the special instances
when either $\CA^X$ is supersolvable
(which is for instance always the case for $X$ of dimension at most $2$)
or inductively factored, or else if $X$ is just the ambient space $V$
(so that $\CA^V = \CA$), then
$\zeta(P(\CA^X,B), t)$ is known to factor analogous to
\eqref{eq:solomon} involving the exponents of $\CA^X$.
Fadell and Neuwirth \cite{fadellneuwirth}
showed that the braid arrangement
is fiber type and Brieskorn \cite{brieskorn:tresses}
proved this for the reflection arrangement of the
hyperoctahedral group. This property is equivalent to
being supersolvable, see \cite{terao:modular}.
Therefore, since any restriction of a supersolvable
arrangement is again supersolvable, \cite{stanley:super}, in
case of the symmetric or hyperoctahedral group $W$,
we see that $\CA(W)^X$ is supersolvable for any $X$.
Thus in each of these cases the rank-generating
function of the poset of regions of
$\CA(W)^X$ factors as in \eqref{eq:solomon},
thanks to Theorem \ref{thm:mult-zeta}.
Therefore,
it is natural to study the rank-generating function
of the poset of regions of an arbitrary restriction of a
Coxeter arrangement.
The following gives a complete classification of all instances
when $\zeta(P(\CA^X,B), t)$ factors analogous to
\eqref{eq:solomon}.
\begin{theorem}
\label{thm:main}
Let $W$ be a finite, irreducible Coxeter group with
reflection arrangement $\CA = \CA(W)$.
Let $\CA^X$ be the restricted arrangement
associated with $X \in L(\CA)\setminus\{V\}$.
Then there is a suitable choice of a base region $B$ so that
the rank-generating function of the poset of regions of $\CA^X$
satisfies the multiplicative formula
\begin{equation}
\label{eq:poinprod}
\zeta(P(\CA^X,B), t) = \prod_{i=1}^n (1 + t + \ldots + t^{e_i}),
\end{equation}
where $\{e_1, \ldots, e_n\}$ is the
set of exponents of $\CA^X$
if and only if one of the following holds:
\begin{itemize}
\item[(i)] $W$ is not of type $E_8$;
\item[(ii)] $W$ is of type $E_8$ and either the rank of $X$ is at most $3$,
but $\CA^X \not \cong (E_8,A_2A_3)$ and $\CA^X \not \cong (E_8,A_1A_4)$,
or else $\CA^X \cong (E_8,D_4)$.
\end{itemize}
\end{theorem}
We prove Theorem \ref{thm:main} in Section \ref{sec:proof}.
For classical $W$, either $\CA(W)^X$ is supersolvable and
the result follows from Theorem \ref{thm:mult-zeta},
or else $W$ is of type $D$ and
$\CA(W)^X$ belongs to a particular family of arrangements
$\CD_p^k$ for $0 \le k \le p$ studied by Jambu and Terao,
\cite[Ex.~2.6]{jambuterao:free}.
We prove Theorem \ref{thm:main} for the family
$\CD_p^k$ in Lemma \ref{lem:dn}.
For $W$ of exceptional type,
there are $31$ restrictions $\CA(W)^X$ of rank at least $3$
(up to isomorphism)
that need to be considered. These are handled by
computational means, see Remark \ref{rem:exc}.
\begin{remarks}
\label{rems:thmmain}
(i).
In the statement of the theorem and later on
we use the convention to label the $W$-orbit
of $X \in L(\CA)$ by the Dynkin type $T$
of the stabilizer $W_X$ of $X$ in $W$ which
is itself a Coxeter group,
by Steinberg's theorem \cite[Thm.~1.5]{steinberg:invariants}.
So we denote the restriction $\CA^X$ just by the pair
$(W,T)$; see also \cite[App.~C, D]{orlikterao:arrangements}.
(ii).
Among the restrictions $\CA(W)^X$ all
supersolvable and all inductively factored instances are known,
see Theorems \ref{thm:super-restriction} and
\ref{thm:nice-restriction} below.
Thus, by Theorem \ref{thm:mult-zeta}, in each of these cases
$\zeta(P(\CA^X,B), t)$
factors as in \eqref{eq:poinprod}.
(iii).
Hoge checked that the exceptional case
$(E_8,A_2A_3)$ from Theorem \ref{thm:main}
is isomorphic to
the real simplicial arrangement
``$A_4(17)$'' from Gr\"unbaum's list \cite{gruenbaum}.
It was observed by Terao that the latter does
not satisfy the product rule \eqref{eq:poinprod},
\cite[p.~277]{bjoerneredelmanziegler}.
It is rather remarkable that this arrangement
makes an appearance as a restricted Coxeter arrangement.
In contrast, according to Theorem \ref{thm:main},
the rank-generating function of the poset of regions
of $(E_8,A_1^2A_3)$ does factor according to
\eqref{eq:poinprod}. In particular,
these two arrangements are not isomorphic, as
claimed erroneously in \cite[App.~D]{orlikterao:arrangements}.
(iv).
For $X$ in $L(\CA(W))$ consider
the localization $\CA(W)_X$ of $\CA(W)$ at $X$,
which consists of all members of $\CA(W)$ containing $X$,
see \S \ref{ssect:arrangements}.
Then, since
the stabilizer $W_X$ in $W$ of $X$
is itself a Coxeter group,
by Steinberg's theorem \cite[Thm.~1.5]{steinberg:invariants},
and since $\CA(W)_X = \CA(W_X)$,
by \cite[Cor.~6.28(2)]{orlikterao:arrangements},
it follows from Solomon's factorization \eqref{eq:solomon}
that the rank generating function of the poset of regions of
$\CA(W)_X$
(with respect to the base chamber being the unique chamber of
$\CA(W)_X$ containing the dominant Weyl chamber of $W$) factors
analogously to \eqref{eq:solomon}, involving
the exponents of $W_X$.
(v).
In Lie theoretic terms,
for $W$ a Weyl group,
$W(t^2)$ is the Poincar\'e polynomial
of the flag variety of a semisimple
linear algebraic group with Weyl group $W$.
The formula \eqref{eq:solomon} then gives a well-known
factorization of the Poincar\'e polynomial of the flag variety.
If $W$ is of type $A$ or $B$, then
each restriction $\CA(W)^X$ is
the Coxeter arrangement of the same Dynkin type of
smaller rank,
cf.~\cite[Props.~6.73, 6.77]{orlikterao:arrangements}.
Thus, by the previous paragraph, in these instances,
$\zeta(P(\CA^X,B), t^2)$
is just the Poincar\'e polynomial
of the flag variety of a semisimple
linear algebraic group of the same Dynkin type as $W$ but of smaller rank.
In view of these examples, it is natural to wonder whether in general
there is a suitable projective variety
associated with a fixed semisimple group $G$ with Weyl group $W$
whose Poincar\'e polynomial is related to
the rank-generating function of the poset of
regions for any restriction of $\CA(W)$ in the same manner
as in these special instances above, relating to
and generalizing the flag variety of $G$.
\end{remarks}
For general information about arrangements and Coxeter groups,
we refer the reader to \cite{bourbaki:groupes} and
\cite{orlikterao:arrangements}.
\section{Recollections and Preliminaries}
\label{sect:prelims}
\subsection{Hyperplane arrangements}
\label{ssect:arrangements}
Let $V = \BBR^n$ be an $n$-dimensional real vector space.
A \emph{(real) hyperplane arrangement} $\CA = (\CA, V)$ in $V$
is a finite collection of hyperplanes in $V$ each
containing the origin of $V$.
We denote the empty arrangement in $V$ by $\Phi_n$.
The \emph{lattice} $L(\CA)$ of $\CA$ is the set of subspaces of $V$ of
the form $H_1\cap \ldots \cap H_i$ where $\{ H_1, \ldots, H_i\}$ is a subset
of $\CA$.
For $X \in L(\CA)$, we have two associated arrangements,
firstly
$\CA_X :=\{H \in \CA \mid X \subseteq H\} \subseteq \CA$,
the \emph{localization of $\CA$ at $X$},
and secondly,
the \emph{restriction of $\CA$ to $X$}, $\CA^X = (\CA^X,X)$, where
$\CA^X := \{ X \cap H \mid H \in \CA \setminus \CA_X\}$.
Note that $V$ belongs to $L(\CA)$
as the intersection of the empty
collection of hyperplanes and $\CA^V = \CA$.
The lattice $L(\CA)$ is a partially ordered set by reverse inclusion:
$X \le Y$ provided $Y \subseteq X$ for $X,Y \in L(\CA)$.
Throughout, we only consider arrangements $\CA$
such that $0 \in H$ for each $H$ in $\CA$.
These are called \emph{central}.
In that case the \emph{center}
$T(\CA) := \cap_{H \in \CA} H$ of $\CA$ is the unique
maximal element in $L(\CA)$ with respect
to the partial order.
A \emph{rank} function on $L(\CA)$
is given by $r(X) := \codim_V(X)$.
The \emph{rank} of $\CA$
is defined as $r(\CA) := r(T(\CA))$.
\subsection{Free arrangements}
\label{ssect:free}
Free arrangements play a fundamental role in the
theory of hyperplane arrangements,
see \cite[\S 4]{orlikterao:arrangements} for the definition and
properties of this notion. Crucial for our purpose is the fact that
associated with a free arrangement is a set of important invariants, its
(multi)set of \emph{exponents}, denoted by $\exp \CA$.
\subsection{Supersolvable arrangements}
\label{ssect:supersolv}
We say that $X \in L(\CA)$ is \emph{modular}
provided $X + Y \in L(\CA)$ for every $Y \in L(\CA)$,
\cite[Cor.~2.26]{orlikterao:arrangements}.
\begin{defn}
[{\cite{stanley:super}}]
\label{def:super}
Let $\CA$ be a central arrangement of rank $r$.
We say that $\CA$ is
\emph{supersolvable}
provided there is a maximal chain
\[
V = X_0 < X_1 < \ldots < X_{r-1} < X_r = T(\CA)
\]
of modular elements $X_i$ in $L(\CA)$,
cf.~\cite[Def.~2.32]{orlikterao:arrangements}.
\end{defn}
\noindent Note that arrangements of rank at most $2$ are always supersolvable,
e.g.~see \cite[Prop.~4.29(iv)]{orlikterao:arrangements} and
supersolvable arrangements are always free, e.g.~see
\cite[Thm.~4.58]{orlikterao:arrangements}.
Also, restrictions of a supersolvable
arrangement are again supersolvable, \cite[Prop.~3.2]{stanley:super}.
\subsection{Nice and inductively factored arrangements}
\label{ssect:factored}
The notion of a \emph{nice} or \emph{factored}
arrangement is due to Terao \cite{terao:factored}.
It generalizes the concept of a supersolvable arrangement, e.g.~see
\cite[Prop.~2.67, Thm.~3.81]{orlikterao:arrangements}.
Terao's main motivation was to give a
general combinatorial framework to
deduce tensor factorizations of the underlying Orlik-Solomon algebra,
see also \cite[\S 3.3]{orlikterao:arrangements}.
We refer to \cite{terao:factored} for the relevant
notions and properties
(cf.~ \cite[\S 2.3]{orlikterao:arrangements}).
There is an analogue of Terao's
Addition Deletion Theorem for
free arrangements
(\cite[Thm.~4.51]{orlikterao:arrangements})
for the class of
nice arrangements, see \cite[Thm.~3.5]{hogeroehrle:factored}.
In analogy to the case of free arrangements, this motivates
the notion of an
\emph{inductively factored} arrangement,
see \cite{jambuparis:factored}, \cite[Def.~3.8]{hogeroehrle:factored}
for further details on this concept.
The connection with the previous notions is as follows.
Supersolvable arrangements are always inductively factored
(\cite[Prop.~3.11]{hogeroehrle:factored})
and inductively factored arrangements are always free
(\cite[Prop.~2.2]{jambuparis:factored}) so
that we can talk about the exponents of such arrangements.
The following theorem due to Jambu and Paris,
\cite[Prop.~3.4, Thm.~6.1]{jambuparis:factored},
was first shown by Bj\"orner, Edelman and Ziegler
for $\CA$ supersolvable in \cite[Thm.~4.4]{bjoerneredelmanziegler}
(see also Paris \cite{paris:counting}).
\begin{theorem}
\label{thm:mult-zeta}
If $\CA$ is inductively factored, then
there is a suitable choice of a base region $B$ so that
$\zeta(P(\CA,B), t)$ satisfies the multiplicative formula
\begin{equation}
\label{eq:poinprod2}
\zeta(P(\CA,B), t) = \prod_{i=1}^n (1 + t + \ldots + t^{e_i}),
\end{equation}
where $\{e_1, \ldots, e_n\} = \exp \CA$ is the
set of exponents of $\CA$.
\end{theorem}
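Theorem \ref{thm:mult-zeta} can be checked directly in the simplest supersolvable family, the braid arrangement $\CA(S_n)$: chambers correspond to permutations, the rank of a chamber with respect to the dominant base chamber is the inversion number, and the exponents are $1, \dots, n-1$. The following Python sketch (illustrative only; the helper names are ours, not from the paper) verifies the factorization for small $n$:

```python
from itertools import permutations

def zeta_braid(n):
    # Chamber x_{w(1)} > ... > x_{w(n)} of the braid arrangement has rank
    # equal to the inversion number of w (separating hyperplanes x_i = x_j).
    coeffs = [0] * (n * (n - 1) // 2 + 1)
    for w in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if w[i] > w[j])
        coeffs[inv] += 1
    return coeffs

def poly_prod(exponents):
    # coefficient list of prod_e (1 + t + ... + t^e)
    poly = [1]
    for e in exponents:
        out = [0] * (len(poly) + e)
        for i, a in enumerate(poly):
            for j in range(e + 1):
                out[i + j] += a
        poly = out
    return poly

# exponents of S_n are 1, 2, ..., n-1
for n in range(2, 6):
    assert zeta_braid(n) == poly_prod(range(1, n))
```

For $n=3$ this recovers the Mahonian distribution $[1,2,2,1]$, i.e.\ $(1+t)(1+t+t^2)$.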
\subsection{Restricted root systems}
\label{ssect:restrictedroots}
Given a root system for $W$,
associated with a member $X$ from $L(\CA(W))$ we have a
\emph{restricted root system} which consists of the restrictions
of the roots of $W$ to $X$, see \cite[\S 2]{brundangoodwin:grading}.
As in the absolute case, bases of the restricted root system correspond
bijectively to chambers of the arrangement $\CA(W)^X$,
\cite[Cor.~7]{brundangoodwin:grading}.
More specifically, let $\Phi$ be a root system for $W$
and let $\Delta\subset \Phi$
be a set of simple roots.
In view of Remark \ref{rems:thmmain}(i),
choosing $X\in L(\CA(W))$ amounts to specifying the Dynkin type $T$
of the parabolic subgroup $W_X$, so that the pair
$(W,T)$ characterizes $\CA(W)^X$.
Let $\mathcal{B}_T$ be the set of all subsets of $\Delta$
that generate a root system of Dynkin type $T$.
Fixing an element $\Delta_J\in \mathcal{B}_T$,
the bases for $\Phi$ containing $\Delta_J$ are
in bijective correspondence with
the bases for the restricted root system,
\cite[Thm.~10]{brundangoodwin:grading}.
Furthermore, the set $\mathcal{B}_T$ characterizes a set of
representatives for the action of the \emph{restricted Weyl group}
on the set of chambers of the arrangement $\CA(W)^X$,
\cite[Lem.~11]{brundangoodwin:grading}.
Thus there is a suitable choice of a base region $B$
such that $\zeta(P(\CA(W)^X, B),t)$ factors according to \eqref{eq:poinprod},
if and only if there is
such a choice among regions that arise from elements in $\mathcal{B}_T$.
\section{Proof of Theorem \ref{thm:main}}
\label{sec:proof}
It is well known that
if $W$ is of type $A$ or $B$, then
the Coxeter arrangement $\CA(W)$ is supersolvable and
so is every restriction thereof.
So Theorem \ref{thm:main} follows in this
case from Theorem \ref{thm:mult-zeta}.
Therefore, for $W$ of classical type, we only need to consider
restrictions for $W$ of type $D$.
The restrictions
$\CD_p^k$ for $0 \le k \le p$
of Coxeter arrangements
of type $D$ are given by
the defining polynomial
\[
Q(\CD_p^k) := x_{p-k+1} \cdots x_p \prod_{1 \le i < j \le p}(x_i^2 - x_j^2),
\]
see \cite[Ex.~2.6]{jambuterao:free} (\cite[Cor.~6.86]{orlikterao:arrangements}).
In view of Theorem \ref{thm:mult-zeta}, we next recall
the relevant parts of the classifications of the
supersolvable and inductively factored restrictions of reflection
arrangements from \cite{amendhogeroehrle:super} and
\cite{moellerroehrle:nice}, respectively.
Here we focus on such $X$ in $L(\CA)$ of dimension at least $3$,
as a restriction to a smaller dimensional member of $L(\CA)$
is already supersolvable.
\begin{theorem}
[{\cite[Thm.~1.3]{amendhogeroehrle:super}}]
\label{thm:super-restriction}
Let $W$ be a finite, irreducible Coxeter group
with reflection arrangement
$\CA = \CA(W)$ and let $X \in L(\CA)\setminus\{V\}$
with $\dim X \ge 3$.
Then the restricted arrangement
$\CA^X$ is supersolvable
if and only if
one of the following holds:
\begin{itemize}
\item[(i)] $\CA$ is of type $A$ or of type $B$, or
\item[(ii)] $W$ is of type $D_n$ for $n \ge 4$ and
$\CA^X \cong \CD^k_p$, where $p = \dim X$ and $p - 1 \leq k \leq p$;
\item[(iii)] $\CA^X$ is $(E_6,A_3)$, $(E_7, D_4)$, $(E_7, A_2^2)$, or $(E_8, A_5)$.
\end{itemize}
\end{theorem}
As noted above, every supersolvable restriction from
Theorem \ref{thm:super-restriction}
is inductively factored.
\begin{theorem}
[{\cite[Thms.~1.5, 1.6]{moellerroehrle:nice}}]
\label{thm:nice-restriction}
Let $W$ be a finite, irreducible Coxeter group
with reflection arrangement
$\CA = \CA(W)$ and let $X \in L(\CA)\setminus\{V\}$
with $\dim X \ge 3$.
Then the restricted arrangement
$\CA^X$ is inductively factored
if and only if
one of the following holds:
\begin{itemize}
\item[(i)] $\CA^X$ is supersolvable, or
\item[(ii)] $W$ is of type $D_n$ for $n \ge 4$ and
$\CA^X \cong \CD^{p-2}_p$, where $p = \dim X$;
\item[(iii)]
$\CA^X$ is one of $(E_6, A_1A_2), (E_7, A_4)$, or $(E_7, (A_1A_3)'')$.
\end{itemize}
\end{theorem}
It follows from Theorem \ref{thm:mult-zeta} that in all instances
covered in Theorem \ref{thm:nice-restriction},
$\zeta(P(\CA,B), t)$ satisfies the factorization property of
\eqref{eq:poinprod2} with respect to a suitable choice of
base region $B$.
In particular, Theorem \ref{thm:main} holds in all these instances.
It is not apparent that the rank-generating function
of the poset of regions of $\CD_p^k$ factors
according to \eqref{eq:poinprod} for $1\leq k \leq p-3$.
Indeed, by the results above, these arrangements are neither
reflection arrangements nor inductively factored.
To show that
the factorization property from \eqref{eq:poinprod}
also holds in these instances,
we first parameterize the regions $\RR(\CD_p^k)$ suitably
and then prove a recursive formula for $\zeta(P(\CD_p^k,B),t)$.
\begin{remark}
\label{rem:regionsDlk}
Since the inequalities given by the hyperplanes
do not change within a region,
the set of regions is uniquely determined
by specifying one interior point for each region. Let
$$M_p^k:= \left\{(x_1,\dots,x_p)\in \{\pm 1,\dots,\pm p\}^p
\mid x_1,\dots,x_{p-k}\neq -1,\hspace{0.4em} |x_i| \neq |x_j|
\hspace{0.4em} \forall i\neq j \right\}.$$
It is easy to verify that each region in
$\RR := \RR(\CD_p^k)$ contains
exactly one element of $M_p^k$.
So this gives a parametrization for the regions in $\RR$.
Without further comment, we frequently identify points in $M_p^k$ with their
respective regions in $\RR$.
For $x\in M_p^k$, write $R_x\in \RR$ for the unique region containing $x$.
Once a base region $B$ in $\RR$
is chosen so that $\RR$ becomes a ranked poset,
we may write
$$\zeta(P(\CD_p^k,B),t) = \sum\limits_{x\in M_p^k} t^{\rk(R_x)}. $$
Using this notation it is easy to see which regions are
adjacent and which hyperplanes are walls of a given region.
Let $x=(x_1,\dots,x_p)\in M_p^k$.
If $x_j = x_i \pm 1$,
then $\ker(x_i-x_j)$ is a wall of $R_x$
and the corresponding adjacent region
is obtained from $x$ by exchanging
$x_i$ and $x_j$ in $x$. If $x_j = -(x_i\pm 1)$,
then $\ker(x_i+x_j)$ is a wall of $R_x$
and the adjacent region again originates
from $x$ by exchanging
$x_i$ and $x_j$ \emph{but maintaining their respective signs}.
Finally, if $x_i=\pm 1$ and $p-k<i\leq p$, then $\ker(x_i)$
is a wall of $R_x$ and the adjacent region
is obtained by exchanging $x_i$ with $-x_i$.
\end{remark}
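As a sanity check on this parametrization (a Python sketch, not part of the paper; the helper names are ours): since $\CD_p^k$ is free with exponents $\{1, 3, \dots, 2p-3\} \cup \{p+k-1\}$ by Jambu and Terao, Zaslavsky's theorem predicts $\prod_i (e_i + 1) = (p+k)\, 2^{p-1} (p-1)!$ regions, and this matches a direct enumeration of $M_p^k$:

```python
from itertools import permutations, product
from math import factorial

def M(p, k):
    # the set M_p^k: tuples with pairwise distinct absolute values in
    # {1, ..., p} and no -1 among the first p - k coordinates
    pts = []
    for absvals in permutations(range(1, p + 1)):
        for signs in product((1, -1), repeat=p):
            x = tuple(s * a for s, a in zip(signs, absvals))
            if -1 not in x[:p - k]:
                pts.append(x)
    return pts

# Zaslavsky + freeness: number of regions = prod (e_i + 1) over the
# exponents {1, 3, ..., 2p-3, p+k-1}, i.e. (p + k) * 2^(p-1) * (p-1)!.
for p in range(2, 6):
    for k in range(p + 1):
        assert len(M(p, k)) == (p + k) * 2 ** (p - 1) * factorial(p - 1)
```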
For our subsequent results,
we choose $B_p := R_y\in \RR$ for $y=(p,p-1,\dots,1)$
as our base chamber independent of $k$.
\begin{lemma}
\label{lem:dnhilf}
Let $p\geq 3$, $k\in\{0,\dots,p\}$ and $B_p\in \RR$ as above.
For an arbitrary $i\in\{1,\dots,p\}$, we have
\begin{equation}
\label{eq:p}
\sum_{\substack{x\in M_p^k \\ x_i = p }} t^{\rk(R_x)} =
\begin{cases}
t^{i-1} \cdot \zeta(P(\CD_{p-1}^{k },B_{p-1}),t) &\text{ if } i\leq p-k, \\
t^{i-1} \cdot \zeta(P(\CD_{p-1}^{k-1},B_{p-1}),t) &\text{ if } i > p-k,
\end{cases}\\
\end{equation}
and
\begin{equation}
\label{eq:-p}
\sum\limits_{\substack{x\in M_p^k \\ x_i = -p }} t^{\rk(R_x)} =
\begin{cases}
t^{2p-i-1} \cdot \zeta(P(\CD_{p-1}^{k},B_{p-1}),t) &\text{ if } i\leq p-k, \\
t^{2p - i} \cdot \zeta(P(\CD_{p-1}^{k-1},B_{p-1}),t) &\text{ if } i > p-k.
\end{cases}
\end{equation}
\end{lemma}
\begin{proof}
Set $N^- := \{x\in M_p^k\mid x_i = -p\}$.
Thanks to Remark \ref{rem:regionsDlk},
no hyperplane involving the coordinate $x_i$
lies between any two regions of $N^-$.
Setting
\[
z = (z_1,\dots,z_i,\dots,z_p) :=
(p-1,p-2, \ldots,p-i+1, -p,p-i-1, \ldots,2, 1)\in N^-,
\]
there are only hyperplanes involving $x_i$ between $B_p$ and $R_z$.
More precisely, we have
\[
\CS(B_p,R_z)=
\begin{cases}
\{\ker(x_i + x_j)\mid j < i\} \cup \{\ker( x_i \pm x_j )\mid i<j \leq p \} &\text{ for } i\leq p-k, \\
\{\ker(x_i + x_j)\mid j < i\} \cup \{\ker( x_i \pm x_j )\mid i<j \leq p \} \cup \{\ker(x_i)\} &\text{ for } i>p-k.
\end{cases}
\]
So if we choose an arbitrary $x\in N^-$, we have
$$\CS(B_p,R_x) = \CS(B_p,R_z) \mathbin{\dot\cup} \CS(R_z,R_x).$$
Consequently, we obtain
\begin{equation}
\label{eq:rk}
\rk(R_x) = |\CS(B_p,R_z)| + |\CS(R_z,R_x)| =
\begin{cases}
(2p-i-1) + |\CS(R_z,R_x)| &\text{ for } i\leq p-k, \\
(2p - i) + |\CS(R_z,R_x)| &\text{ for } i>p-k.
\end{cases}
\end{equation}
Now set
\begin{equation}
\label{eq:A}
\CA :=
\begin{cases}
\CD_{p-1}^{k} &\text{ if } i\leq p-k, \\
\CD_{p-1}^{k-1} &\text{ if } i > p-k,
\end{cases}
\end{equation}
and identify the set of regions $\RR(\CA)$
of $\CA$ with the corresponding set of
$(p-1)$-tuples as in Remark \ref{rem:regionsDlk}.
Then simply omitting the $i$-th coordinate defines a map
\[
h:N^- \longrightarrow \RR(\CA)
\]
which is bijective, $h(R_z)=B_{p-1}$
and if $\widetilde\rk$ denotes the rank function on $P(\CA,B_{p-1})$,
then we get $|\CS(R_z,R_x)| = \widetilde\rk(h(R_x))$.
Therefore, by \eqref{eq:rk}, \eqref{eq:A}
and the bijectivity of $h$, we get
\begin{align*}
\sum\limits_{\substack{x\in M_p^k \\ x_i = -p }} t^{\rk(R_x)}
&= t^{|\CS(B_p,R_z)|} \sum\limits_{x\in N^-} t^{|\CS(R_z,R_x)|} \\
&= t^{|\CS(B_p,R_z)|} \sum\limits_{x\in N^-} t^{\widetilde\rk(h(R_x))} \\
&= t^{|\CS(B_p,R_z)|} \sum\limits_{x\in \RR(\CA)} t^{\widetilde\rk(R_x)} \\
&= t^{|\CS(B_p,R_z)|} \zeta(P(\CA,B_{p-1}),t) \\
& =
\begin{cases}
t^{2p-i-1} \cdot \zeta(P(\CD_{p-1}^{k},B_{p-1}),t) &\text{ if } i\leq p-k, \\
t^{2p - i} \cdot \zeta(P(\CD_{p-1}^{k-1},B_{p-1}),t) &\text{ if } i > p-k.
\end{cases}
\end{align*}
So \eqref{eq:-p} follows.
Next let $N^+:=\{x\in M_p^k\mid x_i = p\}$ and set
\[
z = (z_1,\dots,z_i,\dots,z_p) :=
(p-1,p-2, \ldots,p-i+1,p,p-i-1, \ldots,2, 1)\in N^+.
\]
Then $\CS(B_p,R_z)=\{\ker(x_i - x_j) \mid 1\leq j < i\}$ has cardinality $i-1$. The proof of this case is similar to the one above,
and is left to the reader.
So \eqref{eq:p} follows.
\end{proof}
The next technical lemma is needed in the proof of Lemma \ref{lem:dn}.
For ease of notation, we set
\[
F(e_1,\dots,e_m) := \prod_{i=1}^{m} (1+t+\dots+t^{e_i}) \in \BBZ[t]
\]
for any $m\geq 1$ and integers $e_1,\dots,e_m \geq 1$.
In particular, $F(e) = 1+t+\dots+t^e$.
Also note that for $j>0$, we have
\begin{equation}
\label{eq:F}
F(j-1)(1+t^j) = F(2j-1).
\end{equation}
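The polynomials $F(e_1,\dots,e_m)$ and the identity \eqref{eq:F} are easy to check by machine. The following Python sketch (helper names are ours) represents polynomials in $t$ as coefficient lists:

```python
def F(*exps):
    # coefficient list of F(e_1, ..., e_m) = prod_i (1 + t + ... + t^{e_i})
    poly = [1]
    for e in exps:
        out = [0] * (len(poly) + e)
        for i, a in enumerate(poly):
            for j in range(e + 1):
                out[i + j] += a
        poly = out
    return poly

def times_one_plus_tj(poly, j):
    # coefficient list of poly * (1 + t^j)
    out = [0] * (len(poly) + j)
    for i, a in enumerate(poly):
        out[i] += a
        out[i + j] += a
    return out

# the identity F(j-1) * (1 + t^j) = F(2j - 1)
for j in range(1, 8):
    assert times_one_plus_tj(F(j - 1), j) == F(2 * j - 1)
```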
\begin{lemma}
\label{lem:delta}
Let $p\geq 3$ and $0\leq k\leq p$. Define
\[
\Delta_p^k :=
\sum_{i=1}^{p-k} (t^{i-1}+t^{2p-i-1}) F(p+k-2)
+ \sum_{i=p-k+1}^{p} (t^{i-1}+t^{2p-i}) F(p+k-3).
\]
Then
\[
\Delta_p^k
= F(p+k-1,2p-3).
\]
\end{lemma}
\begin{proof}
We argue by induction on $k$. First let $k=0$. Then, using \eqref{eq:F}, we have
\begin{align*}
\Delta_p^0 &= \sum_{i=1}^{p} (t^{i-1}+t^{2p-i-1}) F(p-2) \\
&= (1+\dots+t^{p-1})F(p-2) + (t^{p-1}+\dots+t^{2p-2}) F(p-2) \\
&= F(p-1,p-2) + t^{p-1}F(p-1,p-2) \\
&= F(p-1,p-2)\left(1+t^{p-1}\right) \\
&= F(p-1,2p-3).
\end{align*}
Now let $k>0$ and assume that the statement is true for $k'<k$. Then using
the inductive hypothesis, we get
\begin{align*}
\Delta_p^k &= \Delta_p^{k-1} + t^{p+k-2}\sum_{i=1}^{p-k}\left(t^{i-1}+t^{2p-i-1}\right)
+ t^{p+k-3}\sum_{i=p-k+1}^{p}\left(t^{i-1}+t^{2p-i}\right) \\
&\hspace{1em} - F(p+k-3)\left(t^{p-k}+t^{p+k-2}\right)
+ F(p+k-4)\left(t^{p-k}+t^{p+k-1}\right) \\
&= F(2p-3,p+k-2)+(t^{p+k-2}+\dots+t^{2p-3}+t^{2p+2k-3}+\dots+t^{3p+k-4}) \\
&\hspace{1em} + (t^{2p-3}+\dots+t^{2p+k-4}+t^{2p+k-3}+\dots+t^{2p+2k-4}) - (t^{2p-3}+t^{p+k-2}) \\
&= F(2p-3,p+k-2) + t^{p+k-1}(1+\dots+t^{2p-3}) \\
&= F(2p-3)(F(p+k-2)+t^{p+k-1}) \\
& = F(2p-3,p+k-1),
\end{align*}
as claimed.
\end{proof}
Finally, armed with Lemmas \ref{lem:dnhilf} and \ref{lem:delta}, we are able to
prove the desired result for the arrangements $\CD_p^k$.
\begin{lemma}
\label{lem:dn}
The rank-generating function
of the poset of regions of $\CD_p^k$ factors according to
\eqref{eq:poinprod} for all $1 \le k \le p-3$ and $p \ge 4$.
\end{lemma}
\begin{proof}
We argue by induction on $n=p+k$. For $n \le 4$ there is no admissible pair $(p,k)$, so the claim holds vacuously.
So let $1 \le k \le p-3$ and $p \ge 4$ and assume that for all $p'$, $k'$,
with $1 \le k' \le p'-3$, $p' \ge 4$ and $n > p'+k'$,
the arrangement $\CD_{p'}^{k'}$ satisfies \eqref{eq:poinprod}.
Note that
\begin{equation}
\label{eq:expD}
\exp(\CD_p^k) = \exp(\CD_{p-1}^{p-1}) \cup \{p+k-1\},
\end{equation}
see \cite[Ex.~2.6]{jambuterao:free}.
Then the inductive hypothesis together
with Lemmas \ref{lem:dnhilf} and \ref{lem:delta}
and \eqref{eq:expD} imply
\begin{align*}
&\zeta(P(\CD_p^k,B_p),t)
=\sum_{x\in M_p^k} t^{\rk(R_x)}
=\sum_{i=1}^{p} \sum_{\substack{x\in M_p^k \\ x_i = \pm p}} t^{\rk(R_x)} \\
&=\sum_{i=1}^{p-k} (t^{i-1}+t^{2p-i-1}) \zeta(P(\CD_{p-1}^{k},B_{p-1}),t)
+ \sum_{i=p-k+1}^{p} (t^{i-1}+t^{2p-i}) \zeta(P(\CD_{p-1}^{k-1},B_{p-1}),t) \\
&= \sum_{i=1}^{p-k} (t^{i-1}+t^{2p-i-1}) F\left(\exp(\CD_{p-1}^{k}) \right)
+ \sum_{i=p-k+1}^{p} (t^{i-1}+t^{2p-i}) F\left( \exp(\CD_{p-1}^{k-1}) \right) \\
&= F\left(\exp(\CD_{p-2}^{p-2})\right) \left( \sum_{i=1}^{p-k} (t^{i-1}+t^{2p-i-1}) F(p+k-2)
+ \sum_{i=p-k+1}^{p} (t^{i-1}+t^{2p-i}) F(p+k-3) \right) \\
&= F\left(\exp(\CD_{p-2}^{p-2})\right) \Delta_p^k \\
& = F\left(\exp(\CD_{p-2}^{p-2})\right) F(2p-3,p+k-1) \\
&= F\left(\exp(\CD_{p}^{k})\right).
\end{align*}
This completes the proof of the lemma.
\end{proof}
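Lemma \ref{lem:dn} can also be verified by brute force for small parameters (a Python sketch; the function names are ours, and the rank of a region is computed, as in the paper, as the number of hyperplanes of $\CD_p^k$ separating it from the base region $B_p$ containing $(p, p-1, \dots, 1)$):

```python
from itertools import permutations, product

def zeta_Dpk(p, k):
    # coefficient list of zeta(P(D_p^k, B_p), t); regions are parameterized
    # by the integer points of M_p^k, and the rank of a region is the number
    # of hyperplanes of D_p^k whose defining form changes sign against B_p
    y = tuple(range(p, 0, -1))          # representative point of B_p
    def rank(x):
        cnt = 0
        for i in range(p):
            for j in range(i + 1, p):
                if (x[i] - x[j] > 0) != (y[i] - y[j] > 0):
                    cnt += 1
                if (x[i] + x[j] > 0) != (y[i] + y[j] > 0):
                    cnt += 1
        for i in range(p - k, p):       # coordinate hyperplanes ker(x_i)
            if (x[i] > 0) != (y[i] > 0):
                cnt += 1
        return cnt
    coeffs = [0] * (p * (p - 1) + k + 1)
    for absvals in permutations(range(1, p + 1)):
        for signs in product((1, -1), repeat=p):
            x = tuple(s * a for s, a in zip(signs, absvals))
            if -1 not in x[:p - k]:     # membership in M_p^k
                coeffs[rank(x)] += 1
    return coeffs

def F(*exps):
    # coefficient list of prod_i (1 + t + ... + t^{e_i})
    poly = [1]
    for e in exps:
        out = [0] * (len(poly) + e)
        for i, a in enumerate(poly):
            for j in range(e + 1):
                out[i + j] += a
        poly = out
    return poly

# exp(D_4^1) = {1, 3, 5, 4}; the lemma predicts the factorization
p, k = 4, 1
assert zeta_Dpk(p, k) == F(1, 3, 5, p + k - 1)
```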
\begin{remark}
\label{rem:exc}
In view of Theorems \ref{thm:mult-zeta},
\ref{thm:super-restriction} and
\ref{thm:nice-restriction},
Lemma \ref{lem:dn} settles all the remaining
classical instances of Theorem \ref{thm:main}.
It follows from
Theorems \ref{thm:super-restriction} and
\ref{thm:nice-restriction}
that there are $31$ instances for $W$ of exceptional type to
be checked (here we take the isomorphisms of rank $3$
restrictions $\CA(W)^X$ into account,
cf.~\cite[App.~D]{orlikterao:arrangements}).
We have verified that
$\zeta(P(\CA(W)^X,B), t)$ satisfies
the factorization property \eqref{eq:poinprod}
precisely in all the instances when $W$ is of exceptional type, as
specified in Theorem \ref{thm:main}.
In the listed exceptions, $\zeta(P(\CA(W)^X,B), t)$ does not factor
according to this rule with respect to any choice of base region.
This was checked using the computer algebra package \Sage, \cite{sage}.
We used the \Sage-package
\emph{HyperplaneArrangements} which provides
methods to compute
$\zeta(P(\CA, B), t)$ for given $\CA$ and $B$.
More specifically, the algorithm is initiated
with a list containing the vector space $V$ as a polytope and
for each hyperplane in $\CA$ splits each polytope in the
current list into two polytopes,
one for the positive and one for the negative side of the hyperplane,
while discarding all empty polytopes. This results in a list of chambers
implemented as polytopes.
After specifying a base region $B$
the algorithm checks for each region $R$
and each hyperplane $H$ whether $H$ separates $B$ from $R$.
In addition, we used the results from \cite[\S 2]{brundangoodwin:grading}, as
detailed in Section \ref{ssect:restrictedroots}
to greatly reduce the number of chambers that have to be tested.
This method worked for all exceptional restrictions other than
$(E_8,A_1)$, as the latter is simply too big for \Sage\ to compute
all its chambers at once.
For this case we instead used the
bijective correspondences recalled in
\ref{ssect:restrictedroots} to compute
the chambers directly from the elements of
the Weyl group $W(E_8)$.
By ordering the group elements by length using
a depth-first search algorithm implemented
in the \Sage-package \emph{ReflectionGroup}, we were able to compute
the chambers of the restricted arrangement ordered by rank,
so we could conclude that the rank-generating polynomial
of the poset of regions for the restriction
$\CA^X = (E_8,A_1)$ does not factor
according to \eqref{eq:poinprod}
after computing only a small portion of the entire polynomial
$\zeta(P(\CA^X, B), t)$.
\end{remark}
\bigskip
{\bf Acknowledgments}:
We are grateful to T.~Hoge for checking that
the simplicial arrangement
``$A_4(17)$'' from Gr\"unbaum's list
coincides with the restriction $(E_8,A_2 A_3)$.
We would also like to thank C.~Stump for
helpful discussions concerning computations in \Sage.
The research of this work was supported by
DFG-grant RO 1072/16-1.
\bibliographystyle{amsalpha}
\newcommand{\etalchar}[1]{$^{#1}$}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{%
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} }
\providecommand{\href}[2]{#2}
| {
"timestamp": "2017-06-30T02:04:19",
"yymm": "1706",
"arxiv_id": "1706.09649",
"language": "en",
"url": "https://arxiv.org/abs/1706.09649",
"abstract": "Solomon showed that the Poincaré polynomial of a Coxeter group $W$ satisfies a product decomposition depending on the exponents of $W$. This polynomial coincides with the rank-generating function of the poset of regions of the underlying Coxeter arrangement. In this note we determine all instances when the analogous factorization property of the rank-generating function of the poset of regions holds for a restriction of a Coxeter arrangement. It turns out that this is always the case with the exception of some instances in type $E_8$.",
"subjects": "Combinatorics (math.CO); Group Theory (math.GR)",
"title": "Counting chambers in restricted Coxeter arrangements"
} |
https://arxiv.org/abs/1911.07426 | What is the Perfect Shuffle? | When shuffling a deck of cards, one probably wants to make sure it is thoroughly shuffled. A way to do this is by sifting through the cards to ensure that no adjacent cards are the same number, because surely this is a poorly shuffled deck. Unfortunately, human intuition for probability tends to lead us astray. For a standard 52-card deck of playing cards, the event is actually extremely likely. This report will attempt to elucidate how to answer this surprisingly difficult combinatorial question directly using rook polynomials. | \section{Introduction}
We will say that a shuffle of a standard 52-card deck is a \emph{perfect shuffle} if no two adjacent cards in the deck have the same value.
\begin{figure}[h!]
\centering
\includegraphics[scale=.08]{imperfect_example.jpg}
\caption{An imperfect shuffle}
\label{fig:universe}
\end{figure}
Formally, we can see this as a permutation on 52 elements where the first four elements are of a first color, the second four elements are of a second color, and on until the last four elements are of the thirteenth color.
This problem was studied in a 2013 IJPAM article by Yutaka Nishiyama \citep{nishiyama13}.
Unfortunately, the approach he took becomes very computationally intensive for a deck of 52 cards, so he used Monte Carlo methods to approximate the answer for the 52-card deck \citep{nishiyama13}.
The only other places I have found the exact answer to this question are a Swedish forum post from 2009 \citep{swedish} and a French blog from 2014 \citep{french}.
As far as I can tell, the two who answered these questions computed the answer with brute force using more efficient code and more resources than Nishiyama's attempt.
Here, we address the problem using Ira Gessel's 1988 generalization of rook polynomials to achieve a solution which is much less computationally restrictive \citep{gessel88}.
\section{Introduction to Rook Polynomials}
Rook polynomials were developed studying the number of ways to place rooks on a chessboard.
Our study fairly closely follows a combination of Gessel's work in \citep{gessel88} and \citep{gessel13}.
For a given size of chessboard, the rook polynomial counts the number of ways to place differing amounts of non-attacking rooks on that chessboard.
Let $n \in \mathbb{N}$ and let $[n]$ denote $\{1,...,n\}$.
We consider our chessboard to be $[n] \times [n]$. Let a \emph{board}, B, be a subset of the chessboard $B \subseteq [n] \times [n]$.
For this board, we define the rook number $r_k(B)$ to be the number of ways to put $k$ rooks onto this board such that none of them are `attacking' the other (none are in the same row or column). We will take $B' = \{(2,2), (3,2), (3,3)\}$ for our illustrative examples:
\begin{figure}[h!]
\centering
\includegraphics[scale=.23]{rook_number.png}
\caption{We see: $r_0(B') = 1$, $r_1(B') = 3$, and $r_2(B') = 1$.}
\label{fig:rooks}
\end{figure}
Let $S_n$ denote the set of permutations of $[n]$.
We can associate each $\pi \in S_n$ to another subset of our chessboard.
We do this with the set $\{ (i,\pi(i)) : i \in [n] \}$.
We can now see how many `hits' each permutation has with a given board.
Formally, we can define a `hit function'
\[ h_B : S_n \rightarrow \mathbb{N}_0 = \{0,1,2,\dots\}\]
\[ h_B(\pi) := | \{ (i,\pi(i)) : i \in [n] \} \cap B | \]
The associated hit numbers count how many of the $n!$ permutations hit that many times.
\[ h_k(B) := | \{ \pi \in S_n : h_B(\pi) = k \} | \hspace{17pt} \textrm{for} \hspace{3pt} k \in \mathbb{N}_0\]
\begin{figure}[h!]
\centering
\includegraphics[scale=.42]{permutations_to_hit_numbers.png}
\caption{We see: $h_0(B') = 1$, $h_1(B') = 4$, $h_2(B') = 1$.}
\label{fig:hits}
\end{figure}
\newpage
We will now see the identity which relates the rook numbers and the hit numbers by considering arrangements of rooks as partial permutations of $[n]$.
\[ \mathlarger{{\sum_i}} h_i(B) \binom{i}{j} = r_j(B)\cdot(n-j)! \hspace{17pt} \forall j\in \mathbb{N}_0 \]
\begin{proof}
This equality is obtained by double counting the number of pairs $(\pi, H)$ where $\pi$ is a permutation and $H$ is a $j$-subset of the set of $\pi$'s hits, $\{ (i,\pi(i)) : i \in [n] \} \cap B$.
The left-hand side picks $\pi$ first.
Call $i$ the number of hits of $\pi$, $i=h_B(\pi)$, and take all $\binom{i}{j}$ subsets of size $j$ of these $i$ hits.
Since there are $h_i(B)$ permutations with hit number $i$, each contributing $\binom{i}{j}$ pairs to our sum, we have ${\sum_i} h_i(B) \binom{i}{j}$ pairs in total.
\begin{figure}[h!]
\centering
\includegraphics[scale=.36]{lhs.png}
\caption{We first pick $\pi = e$ and then choose the $\binom{2}{1}$ different subsets (of size 1) of its 2 hits.}
\label{fig:LHS}
\end{figure}
The right-hand side picks $H$ first.
There are $r_j(B)$ ways to place $j$ non-attacking rooks on the board, i.e.\ $r_j(B)$ choices of $H$.
Each such partial permutation can be extended to a full permutation by filling in the empty rows and columns in $(n-j)!$ ways.
\begin{figure}[h!]
\centering
\includegraphics[scale=.36]{rhs.png}
\caption{We first pick $H=\{(2,2)\}$ and extend to the $(3-1)!$ different permutations}
\label{fig:RHS}
\end{figure}
\end{proof}
\subsection{Defining the Rook Polynomial}
Given this identity for all j, we can multiply each of these identities by $t^j$, yielding:
\[ \mathlarger{{\sum_i}} h_i(B) \binom{i}{j} t^j = r_j(B)\cdot(n-j)!\cdot t^j \hspace{17pt} \forall j\in\mathbb{N}_0\]
Summing over j yields
\[ \mathlarger{{\sum_i}} h_i(B) \mathlarger{{\sum_j}} \binom{i}{j} t^j = \mathlarger{{\sum_j}} r_j(B)\cdot(n-j)!\cdot t^j \]
\[ \mathlarger{{\sum_i}} h_i(B) (1+t)^i = \mathlarger{{\sum_j}} r_j(B)\cdot(n-j)!\cdot t^j\]
Plugging in $t=-1$ yields
\[ h_0(B) = \mathlarger{{\sum_j}} (-1)^j\cdot r_j(B)\cdot(n-j)! \]
This number $h_0(B)$ counts how many permutations totally avoid our board $B$.
This is the critical identity which fuels the study of rook polynomials, but this identity can equally be derived through the principle of inclusion and exclusion.
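Both the rook numbers and the hit numbers of a small board are easy to compute by brute force, which lets one verify the identity above together with the formula for $h_0(B)$ on the running example $B'$. A Python sketch (helper names ours):

```python
from itertools import combinations, permutations
from math import comb, factorial

n = 3
B = {(2, 2), (3, 2), (3, 3)}   # the example board B'

def rook_number(B, k):
    # number of k-subsets of B with no two cells in the same row or column
    return sum(
        1 for cells in combinations(B, k)
        if len({r for r, c in cells}) == k and len({c for r, c in cells}) == k
    )

def hit_numbers(B, n):
    h = {}
    for pi in permutations(range(1, n + 1)):
        hits = sum((i, pi[i - 1]) in B for i in range(1, n + 1))
        h[hits] = h.get(hits, 0) + 1
    return h

h = hit_numbers(B, n)
# the double-counting identity, for every j
for j in range(n + 1):
    lhs = sum(h.get(i, 0) * comb(i, j) for i in range(n + 1))
    assert lhs == rook_number(B, j) * factorial(n - j)
# and the inclusion-exclusion consequence for h_0
assert h.get(0, 0) == sum((-1) ** j * rook_number(B, j) * factorial(n - j)
                          for j in range(n + 1))
```

Running this reproduces $r_0(B')=1$, $r_1(B')=3$, $r_2(B')=1$ and $h_0(B')=1$, $h_1(B')=4$, $h_2(B')=1$ from the figures.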
Regardless, corresponding to this equation, let us define the rook polynomial:
\[ r_B(x) := \mathlarger{{\sum_k}} (-1)^k\cdot r_k(B)\cdot x^{n-k} \]
Let $\phi$ be a linear functional on polynomials in x with the effect:
\[ \phi(x^k) = k! \]
Thus $h_0(B) = \phi(r_B(x))$, which may seem a little convoluted at first, but it results in the following fantastic property:
$r_{B_1}(x) \cdot r_{B_2}(x) = r_{B_1\oplus B_2}(x)$, where $B_1\oplus B_2$ is the direct sum of the two boards, as depicted below.
\begin{figure}[h!]
\centering
\includegraphics[scale=.36]{composition.png}
\caption{Direct sum of two boards}
\label{fig:comp}
\end{figure}
\subsection{Implications of the Product Formula}
Let $l_n(x)$ denote the rook polynomial of the complete board,
$l_n(x) = r_{[n]\times[n]}(x)$.
We need to count the number of ways to put $k$ non-attacking rooks on the $n\times n$ board.
The first rook has $n^2$ squares available, the second then has $(n-1)^2$, then $(n-2)^2$, etc. Since we picked these with respect to order, we divide by the $k!$ different orders and get:
\[
\frac{(n)^2\cdot(n-1)^2\cdot ... \cdot(n-(k-1))^2 }{k!} = k! \cdot \frac{((n)\cdot(n-1)\cdot ... \cdot(n-(k-1)))^2}{(k!)^2} = k! \cdot\binom{n}{k}^2
\]
So, $l_n(x) = \sum_{k=0}^n (-1)^k \binom{n}{k}^2 k! x^{n-k}$. This allows us to write the solution for the number of `generalized derangements.'
The number of permutations $\pi$ of $n = n_1 + ... + n_r$ objects, where $n_i$ objects have color $i$, such that $i$ and $\pi(i)$ have different colors for every $i$, is:
\[
\phi\bigg{(}\mathlarger{\prod_{i=1}^r} l_{n_i}(x) \bigg{)}
\]
This fact simply follows from our product of boards identity by using the full boards $[n_i]\times[n_i]$ and full rook polynomials $l_{n_i}(x)$.
(The case of derangements is $n_i = 1$ for all $i\in[r]$.)
This result was proved by Even and Gillis in 1976, without using the connection to rook theory.
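The coefficient formula $r_k([n]\times[n]) = k!\binom{n}{k}^2$ derived above can be confirmed by brute-force enumeration of non-attacking rook placements; this is our own sketch with an arbitrary small $n$.

```python
from itertools import combinations
from math import comb, factorial

n = 4
full_board = [(i, j) for i in range(1, n + 1) for j in range(1, n + 1)]
for k in range(n + 1):
    # count k-subsets of cells with pairwise distinct rows and columns
    brute = sum(
        1 for cells in combinations(full_board, k)
        if len({i for i, _ in cells}) == k and len({j for _, j in cells}) == k
    )
    # the coefficient appearing in l_n(x)
    assert brute == factorial(k) * comb(n, k) ** 2
print("verified for n =", n)
```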
\begin{figure}[h!]
\centering
\includegraphics[scale=.33]{gen_derr.png}
\caption{An example of a generalized derangement with $n_1=3,n_2=4,n_3=4$}
\label{fig:der}
\end{figure}
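The generalized-derangement count $\phi\big(\prod_i l_{n_i}(x)\big)$ can be cross-checked against direct enumeration. This is our own verification sketch; the helper names and the small color-class sizes are arbitrary.

```python
from itertools import permutations
from math import comb, factorial

def l(n):
    """Coefficients of l_n(x), index = power of x."""
    c = [0] * (n + 1)
    for k in range(n + 1):
        c[n - k] = (-1) ** k * comb(n, k) ** 2 * factorial(k)
    return c

def mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def phi(p):
    # the linear functional phi(x^k) = k!
    return sum(a * factorial(i) for i, a in enumerate(p))

def gen_derangements(sizes):
    poly = [1]
    for n in sizes:
        poly = mul(poly, l(n))
    return phi(poly)

def brute(sizes):
    colors = [c for c, n in enumerate(sizes) for _ in range(n)]
    # permutations where position i never receives a value of its own color
    return sum(
        1 for p in permutations(range(len(colors)))
        if all(colors[p[i]] != colors[i] for i in range(len(colors)))
    )

for sizes in [(2, 2), (1, 1, 2), (2, 3), (1, 1, 1, 1)]:
    assert gen_derangements(sizes) == brute(sizes)
print(gen_derangements((1, 1, 1, 1)))
```

Note that taking all $n_i = 1$ recovers the ordinary derangement numbers, e.g. $D_4 = 9$.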
\section{Generalized Rook Polynomials}
To generalize this rook polynomial beyond permutations of $[n]$ and the conditions $\pi(i) = j$, we can use Ira Gessel's \citep{gessel88} definition of a generalized rook polynomial.
Let us have sets $T_0,T_1,T_2,\dots$ with cardinalities $M_0,M_1,M_2,\dots$.
For each $n\in\mathbb{N}_0$ we additionally have a ``set of conditions'' $C_n$, and these sets satisfy $C_0 \subseteq C_1 \subseteq C_2 \subseteq \dots$.
To each condition $c\in C_n$ we will have a set $T_n^c \subseteq T_n$ which are the elements of $T_n$ which satisfy the condition.
For a set of multiple conditions $A\subseteq C_n$ we will say the set satisfying all of these properties is $T_n^A = \cap_{a\in A} T_n^a$.
The following property is what we need to stay in the rook polynomial structure:
If $A\subseteq C_n$, one of the following occurs:
\vspace{-5mm}
\begin{itemize}
\item $T_m^A = \emptyset$ for all $m\geq n$. ``A is incompatible''
\item There is a $\rho(A)\in\mathbb{N}_0$ such that for every $m\geq n$ there is a bijection $T_m^A\rightarrow T_{m-\rho(A)}$
\end{itemize}
Another technical condition we need is that the sequence $(M_n)_{n\ge 0}$ of nonnegative integers does not satisfy any linear homogeneous recurrence equation.
This condition is needed so that $\rho(A)$ will be uniquely determined when it exists.
For a set of conditions in $C_n$, take $B\subseteq C_n$.
This means $B\subseteq C_m$ for all $m\geq n$.
We want to count the elements of $T_m$ for $m\geq n$ which satisfy \textit{none} of the conditions in B.
We will denote this set $T_m/B$.
By inclusion-exclusion, we have that
\[ |T_m/B| = \sum_{\substack{A\subseteq B \\ \text{compatible}}} (-1)^{|A|}\cdot M_{m-\rho(A)} \]
Correspondingly, we will finally define our ``generalized rook polynomial'' to be:
\[ r_B(x) = \sum_{\substack{A\subseteq B \\ \text{compatible}}} (-1)^{|A|}\cdot x^{n-\rho(A)} \]
If we define $\phi(x^n) = |T_n| = M_n$, then for all $m\geq n$, $\hspace{17pt} |T_m/B| = \phi( r_B(x) \cdot x^{m-n} )$.
Because of our linear recurrence restriction on $(M_n)$, this equation is actually able to uniquely determine $r_B(x)$.
This new definition still has the property that the product of the polynomials of two disjoint `boards' equals the polynomial of their disjoint attachment.
This will again be useful to us.
\subsection{Connection to the Original Rook Polynomial}
We will first tie this more general definition back to our original setting.
First, our sets $T_n$ were the sets of permutations of $[n]$, so $T_n = S_n$ for all $n\in\mathbb{N}_0$, hence $M_n =n!$ for all $n\in\mathbb{N}_0$.
The conditions are slightly more tricky.
These conditions are associated to the `boards' which were subsets of $[n] \times [n]$ and prescribed which spots on the board the permutations were not allowed to `hit.'
So, $C_n = \{ ``\pi(i) = j''$ for $(i,j)\in [n]\times[n] \}$, and a set of conditions $B\subseteq C_n$ is the same as a board $B\subseteq [n]\times [n]$.
We can now check if our original setting indeed satisfies the property of compatibility claimed to be essential.
Let $A \subseteq C_n$.
Suppose A contains two distinct conditions $(i,j),(i',j')$ with $i=i'$ or $j=j'$. Then $T_m^A = \emptyset$ and A is incompatible, because no permutation sends one element to two different values, and no permutation sends two elements to the same value.
Otherwise, there is a simple bijection: each condition in A fixes exactly one input and one output of our permutation.
Each time we fix a value of a permutation of $n$ elements, we are left with a permutation of $n-1$ elements.
This gives the bijection $S_m^A \rightarrow S_{m-|A|}$ for every compatible A.
This means that $\rho(A) = |A|$.
Now we can see that the two definitions align, because the compatible subsets of size k correspond exactly to placements of k non-attacking rooks on the board:
\[ \mathlarger{{\sum_k}} (-1)^k\cdot r_k(B)\cdot x^{n-k} = r_B(x) = \sum_{\substack{A\subseteq B \\ \text{compatible}}} (-1)^{|A|}\cdot x^{n-\rho(A)} \]
\subsection{Linear Permutations}
In this setting, we will also take $T_n = S_n$ and $M_n = n!$, but our conditions will be very different.
Our elementary conditions will be ``i is immediately followed by j'' where we see a permutation as the one-line notation.
Equivalently, this condition is ``for some k, $\pi(k) = i$ and $\pi(k+1) = j$.''
So our set of conditions is $C_n = \{$``$i$ is immediately followed by $j$'' $: i\neq j\in [n] \}$, and the chain $C_0\subseteq C_1\subseteq C_2\subseteq \dots$ begins $\emptyset\subseteq \emptyset\subseteq \{$``1 is immediately followed by 2'', ``2 is immediately followed by 1''$\}\subseteq \dots$
We will now show that the compatibility property holds.
Let $A\subseteq C_n$.
As before, if A contains two distinct conditions $(i,j),(i',j')$ with $i=i'$ or $j=j'$, then A must be incompatible: $i=i'$ cannot be immediately followed by two different numbers $j,j'$, and $j=j'$ cannot be immediately preceded by two different numbers $i,i'$.
Otherwise, the bijection to a permutation of fewer elements comes from viewing each required adjacent pair $i,j$ as a single element.
Each time we glue together two elements of a permutation of $n$ elements, we are left with a permutation of $n-1$ elements (one of which consists of two).
For longer chains $i,j,k,l,\dots$ of conditions forcing several elements to be adjacent, we can view the whole chain as one element, or equivalently glue the pairs one at a time.
Either way, we ultimately obtain a bijection $S_m^A \rightarrow S_{m-|A|}$ for every compatible A.
This means that $\rho(A) = |A|$.
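The claim $|S_m^A| = (m-|A|)!$ for a compatible set of succession conditions is easy to test by brute force; the particular conditions below (a chain $1\to2\to3$ and a pair $5\to4$) are our own illustrative choice.

```python
from itertools import permutations
from math import factorial

m = 6
# compatible conditions "i is immediately followed by j"
A = [(1, 2), (2, 3), (5, 4)]

def satisfies(p, i, j):
    k = p.index(i)
    return k + 1 < len(p) and p[k + 1] == j

count = sum(
    1 for p in permutations(range(1, m + 1))
    if all(satisfies(p, i, j) for (i, j) in A)
)
# rho(A) = |A|: gluing 1-2-3 and 5-4 leaves 3 blocks, hence 3! permutations
assert count == factorial(m - len(A))
print(count)
```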
\subsection{Product Formula Application}
We will again take the full set of conditions in order to yield a useful formula.
Take $l_n^*(x) = r_{C_n}(x)$ where $C_n$ is defined as above.
We only need to look at compatible subsets, so we only need to pick elements in different rows and columns if we see $C_n$ as the chessboard without the diagonal.
We will now choose a compatible set of conditions of size k.
The first condition we choose has $n^2-n = n(n-1)$ places to go, the second will then have $(n-1)^2-(n-1) = (n-1)(n-2)$ places, then $(n-2)(n-3)$, etc.
Since we picked these in a particular order, we need to divide by the $k!$ to correctly count the number of subsets of size k.
\[
\frac{1}{k!}\cdot(n)(n-1)\cdot(n-1)(n-2)\cdot ... \cdot(n-(k-1))(n-k) = k! \cdot \frac{(n)\cdot ... \cdot(n-(k-1))}{k!} \cdot \frac{(n-1)\cdot ... \cdot(n-(k))}{k!}\]
\[
= k! \cdot\binom{n}{k}\cdot\binom{n-1}{k}
\]
So, $l_n^*(x) = \sum_{k=0}^n (-1)^k \binom{n}{k}\cdot\binom{n-1}{k} k! x^{n-k}$.
This allows us to write the solution for the number of a specific type of linear permutation.
The number of linear arrangements of $n = n_1 + ... + n_r$ objects, where $n_i$ objects have color $i$, such that adjacent objects always have different colors, is:
\[
\phi\bigg{(}\mathlarger{\prod_{i=1}^r} l_{n_i}^*(x) \bigg{)}
\]
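As a check of this product formula (our own sketch; the helper names and small color-class sizes are arbitrary), we can compare $\phi\big(\prod_i l^*_{n_i}(x)\big)$ against a direct enumeration of color-separated arrangements. For instance, two objects of each of two colors admit the patterns $ABAB$ and $BABA$, giving $2\cdot 2!\cdot 2! = 8$ arrangements.

```python
from itertools import permutations
from math import comb, factorial

def l_star(n):
    """Coefficients of l_n^*(x), index = power of x."""
    c = [0] * (n + 1)
    for k in range(n + 1):
        c[n - k] = (-1) ** k * comb(n, k) * comb(n - 1, k) * factorial(k)
    return c

def mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def phi(p):
    return sum(a * factorial(i) for i, a in enumerate(p))

def no_equal_adjacent(sizes):
    poly = [1]
    for n in sizes:
        poly = mul(poly, l_star(n))
    return phi(poly)

def brute(sizes):
    colors = [c for c, n in enumerate(sizes) for _ in range(n)]
    # linear arrangements with no two adjacent objects of the same color
    return sum(
        1 for p in permutations(range(len(colors)))
        if all(colors[p[i]] != colors[p[i + 1]] for i in range(len(colors) - 1))
    )

for sizes in [(2, 2), (1, 2), (2, 3), (2, 2, 2)]:
    assert no_equal_adjacent(sizes) == brute(sizes)
print(no_equal_adjacent((2, 2)))
```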
This work on generalized rook polynomials has many more applications: one can compute weighted sums instead of simple counts, and the polynomials can be applied to sets other than permutations.
If you are interested in this style of combinatorics, I highly encourage you to check out Gessel's paper \citep{gessel88}.
If you are not so interested, however, you are in luck because the linear permutation case is the solution to our original problem.
\section{Solution}
Our original problem asked about a set of 52 objects (cards).
We required that the permutation place no two adjacent cards of the same value.
There are 13 different values, which now play the role of colors, so we have 13 colors, each with 4 associated cards.
So, we can take $r = 13$ and $n_i = 4$ for each $i \in [r]$.
We first want to calculate
\[
\mathlarger{\prod_{i=1}^r} l_{n_i}^*(x) = (l_4^*(x))^{13}
\]
\begin{multline}
(l_4^*(x))^{13} = \\
-876488338465357824 x^{13} + 17091522600074477568 x^{14} - 159520877600695123968 x^{15} \\
+ 949054268820802240512 x^{16} - 4044281535242623254528 x^{17} + 13151570567369808936960 x^{18} \\
- 33954920849889627734016 x^{19} + 71502295779064701517824 x^{20} - 125212657768448227540992 x^{21} \\
+ 185006084370341623234560 x^{22} - 233228682051017005596672 x^{23} + 253073982060156904538112 x^{24} \\
- 238025750670961148952576 x^{25} + 195147037097635696607232 x^{26} - 140102373840493649854464 x^{27} \\
+ 88405409991914856382464 x^{28} - 49175456453520166748160 x^{29} + 24169421980306186960896 x^{30} \\
- 10514786687648809353216 x^{31} + 4054104097647470051328 x^{32} - 1386375667685767249920 x^{33} \\
+ 420612294417061773312 x^{34} - 113190888701156917248 x^{35} + 27000049659200077824 x^{36} \\
- 5701677221962874880 x^{37} + 1063971192922619904 x^{38} - 175008802134196224 x^{39} \\
+ 25291193280417792 x^{40} - 3197671558907904 x^{41} + 351835440473088 x^{42} \\
- 33462483664896 x^{43} + 2727515172096 x^{44} - 188444475648 x^{45} \\
+ 10878057216 x^{46} - 514605312 x^{47} + 19420128 x^{48} \\
- 561912 x^{49} + 11700 x^{50} - 156 x^{51} + x^{52} \\
\end{multline}
So,
\[ \phi(\hspace{1pt} (l_4^*(x))^{13} ) =\]\[ 3,668,033,946,384,704,437,729,512,814,619,767,610,579,526,911,188,666,362,431,432,294,400 \]
Hence, the probability of a perfect shuffle is:
\[ \frac{\phi(\hspace{1pt} (l_4^*(x))^{13} ) }{52!} =
\frac{672,058,204,939,482,014,438,623,912,695,190,927,357}{14,778,213,400,262,135,041,705,388,361,938,994,140,625} \approx 0.045476282331.\]
This means the chance of two adjacent cards being the same value is about $95.45\%$.
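The whole computation above fits in a few lines of exact integer arithmetic; this is our own reproduction sketch, starting from $l_4^*(x) = x^4 - 12x^3 + 36x^2 - 24x$.

```python
from fractions import Fraction
from math import factorial

# l_4*(x) = x^4 - 12x^3 + 36x^2 - 24x; coefficients indexed by power of x
l4 = [0, -24, 36, -12, 1]

def mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

poly = [1]
for _ in range(13):
    poly = mul(poly, l4)  # (l_4*(x))^13

# phi(x^n) = n!, applied coefficient-wise; exact probability via Fraction
phi = sum(a * factorial(i) for i, a in enumerate(poly))
prob = Fraction(phi, factorial(52))
print(float(prob))
```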
Interestingly, in the above probability, the numerator is prime and the denominator is $3^{5}\cdot 5^{10}\cdot 7^7\cdot 11^3\cdot 13^3\cdot 17^3\cdot 19^2\cdot 23^2\cdot29\cdot31\cdot37\cdot41\cdot43\cdot47$ which is always $p^{\lfloor{\frac{51}{p}}\rfloor}$ except for the lower prime factors of 2,3 which points to some small degree of symmetry in the space of ``perfect shuffles''.
\section{Conclusion}
The chance of a perfect shuffle is $\approx 4.5476282331\%$, and the chance of an imperfect one is $\approx 95.4523717669\%$.
This easy-to-formulate question had a surprisingly sophisticated but rather elegant solution.
There are some questions which are obvious future directions of this problem.
The first is to consider how our probability changes when we consider the first and last cards of the deck to be `adjacent' to one another so that our deck of cards becomes a cyclic object.
The second is, instead of computing only $h_0(B)$, the number of permutations satisfying none of the properties, to count the permutations satisfying exactly $k$ of the properties (corresponding to $h_k(B)$).
This will then give a distribution over the $52!$ permutations which count how many pairs of adjacent cards have the same value for a given shuffle.
Additionally, both of these questions can be asked simultaneously to give a distribution over the cyclic shuffles.
Hopefully this exposition was sufficient to understand the proof behind the coveted `probability of a perfect shuffle' and hopefully these future questions find their own answers as well.
\setstretch{0.8}
\bibliographystyle{plain}
% Source metadata: arXiv:1911.07426, ``What is the Perfect Shuffle?'' (math.CO), https://arxiv.org/abs/1911.07426, 2019-11-19.
% Source: https://arxiv.org/abs/1711.03615
% Title: Roots of random functions: A framework for local universality
\begin{abstract}
We investigate the local distribution of roots of random functions of the form $F_n(z)= \sum_{i=1}^n \xi_i \phi_i(z)$, where $\xi_i$ are independent random variables and $\phi_i(z)$ are arbitrary analytic functions. Starting with the fundamental works of Kac and Littlewood-Offord in the 1940s, random functions of this type have been studied extensively in many fields of mathematics. We develop a robust framework to solve the problem by reducing, via universality theorems, the calculation of the distribution of the roots and the interaction between them to the case where $\xi_i$ are gaussian. In this special case, one can use the Kac-Rice formula and various other tools to obtain precise answers. Our framework has a wide range of applications, which include the most popular models of random functions, such as random trigonometric polynomials and all basic classes of random algebraic polynomials (Kac, Weyl, and elliptic). Each of these ensembles has been studied heavily by deep and diverse methods. Our method, for the first time, provides a unified treatment of all of them. Among the applications, we derive the first local universality result for random trigonometric polynomials with arbitrary coefficients. When restricted to the study of real roots, this result extends several recent results, proved for less general ensembles. For random algebraic polynomials, we strengthen several recent results of Tao and the second author, with significantly simpler proofs. As a corollary, we sharpen a classical result of Erd\H{o}s and Offord on real roots of Kac polynomials, providing an optimal error estimate. Another application is a refinement of a recent result of Flasche and Kabluchko on the roots of random Taylor series.
\end{abstract}
\section{Introduction}
Let $n$ be a positive integer or $\infty$.
Let $\phi_1, \dots, \phi_n $ be deterministic functions and $\xi_1, \dots, \xi_n $ be independent random variables. Consider the random function/series
\begin{equation}\label{F}
F_n = \sum_{i = 1}^{n} \xi_i\phi_i.
\end{equation}
A fundamental task is to understand the distribution of and the interaction between
the roots (both real and complex) of $F_n$. For several decades, this task has been carried out in many different areas of mathematics such as analysis, numerical analysis, probability, mathematical physics; see \cite{EK, HKPV, kahane1985, Far, BS, sodin2005zeroes, forrester1999exact, zelditch2001random}, for example.
The most studied subcases are when $\phi_i = c_i x^i$ (in which case $F_n$ is a random algebraic polynomial) and $\phi_i =c_i \cos ix $ (in which case $F_n$ is a random trigonometric polynomial); here and later, the $c_i$ are deterministic coefficients that may depend on $i$ and $n$. In fact, these classes split further, according to the values of $c_i$. For instance, three important classes of random algebraic polynomials are: Kac polynomials $\left (c_i=1\right )$, Weyl polynomials $\left (c_i= \frac{1}{ \sqrt {i!}}\right )$ and elliptic polynomials
$\left (c_i= \sqrt { {n \choose i }}\right )$. For random trigonometric polynomials, most papers seem to focus on the case $c_i=1$.
A very significant part of the literature on random functions focuses on these special classes.
Even for these classical cases, the problem is already hard; see \cite{iksanov2016local, TVpoly, DHV, angstpoly, azais2015local, kabluchko2014asymptotic, flasche, pritsker2014zero, soze1, soze2} for a partial list of recent developments. It requires a full book to
discuss the results and methods concerning random polynomials, but one feature stands out.
The distributions of the roots in different classes are quite different, and the methods to study them are often specialized.
In this paper, we aim to develop a robust framework to solve the general problem. The leading idea is to utilize universality theorems to reduce the problem of calculating the distribution of the roots
and the interaction between them to the case where the $\xi_i$ are gaussian.
In the gaussian case,
the answers can be (or, for most ensembles, have already been) computed in a precise form, using the Kac-Rice formula and various other tools which make use of special properties of gaussian random variables and gaussian processes; see, for instance \cite{HKPV, EK, MP, prosen1996, sodin2005zeroes, TVpoly, GKZ}.
In particular, when the $\xi_i$ are complex gaussian variables, $F_n$ is called a gaussian analytic function, and
we refer to Sodin's paper \cite{sodin2005zeroes} for an in-depth survey.
Universality theorems of this type have
recently been proved for many classes of random algebraic polynomials \cite{TVpoly, DOV} of various types, using complex machinery (see also \cite{KZ3, pritsker1, pritsker2} for related works concerning
global universality). The method developed in these papers is delicate; it does not apply to random trigonometric polynomials and many other ensembles.
In this paper, we are going to establish a new and general condition which guarantees universality for a wide class of random functions.
This class contains all popular random functions. Among others, it covers all classical random algebraic polynomials (such as those considered in \cite{TVpoly, DOV} and many others).
Quite remarkably, it also covers random trigonometric polynomials with
general coefficients, whose behavior is totally different. (For readers not familiar with the theory of random functions, let us point out that
random trigonometric polynomials typically have
$\Theta (n)$ real roots while Kac polynomials have only $\Theta (\log n) $.)
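This contrast is easy to see in simulation. The following Monte Carlo sketch (ours, not from the paper; the degrees, grids, and trial counts are ad hoc choices) estimates the mean number of real roots of a Kac polynomial on $[-3,3]$ and of a random trigonometric polynomial on one period, by counting sign changes on a fine grid (double roots, a probability-zero event, are missed).

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 80, 200
xs = np.linspace(-3.0, 3.0, 4001)        # Kac real roots cluster near +-1
ts = np.linspace(0.0, 2 * np.pi, 4001)   # one period of the trig polynomial
cos_table = np.cos(np.outer(np.arange(1, n + 1), ts))  # rows: cos(i*t)

def sign_changes(vals):
    s = np.sign(vals)
    s = s[s != 0]
    return int(np.sum(s[:-1] != s[1:]))

kac_counts, trig_counts = [], []
for _ in range(trials):
    # Kac: sum xi_i x^i with iid gaussian coefficients
    kac_counts.append(sign_changes(np.polyval(rng.standard_normal(n + 1), xs)))
    # trigonometric: sum xi_i cos(i t)
    trig_counts.append(sign_changes(rng.standard_normal(n) @ cos_table))

kac_mean = float(np.mean(kac_counts))    # ~ (2/pi) log n: a handful of roots
trig_mean = float(np.mean(trig_counts))  # ~ 2n/sqrt(3): linear in n
print(kac_mean, trig_mean)
```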
We would like to emphasize the simplicity and robustness of our approach.
Proofs of local universality results have been, so far, considerably complex and long. Furthermore, different ensembles require proofs which are different in at least a few key technical aspects. Our proofs, based on new observations, are quite simple and robust. The proof for the general theorem is only a few pages long.
Next, and more importantly, we can deduce universality results for completely different ensembles of random functions from this general theorem in an identical way using (essentially) one simply stated lemma.
In each ensemble considered, we either obtain completely new results or a short, new proof of the most current result, many times with a quantitative improvement. The length of the paper is due
to the number of applications. The reader is invited to read Section \ref{mainideas} for a
discussion of our method and a comparison with the previous ones.
Let us now briefly discuss the applications.
Consider two random functions
$F_n = \sum_{i = 1}^{n} \xi_i\phi_i$ and $\tilde F_n = \sum_{i = 1}^{n} \tilde \xi_i\phi_i$, where $\xi_i$ and $\tilde \xi_i$ can have different distributions. We show (under some mild assumptions) that
the local statistics of the roots of the two functions are asymptotically the same. In practice, we can set $\tilde \xi_i$ to be gaussian, and thus reduce the study to this case. The local information can be used to derive
certain global properties; for instance, the number of roots in a large region (which has been partitioned into many local cells) is simply the sum of the numbers of roots in each cell.
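A minimal numerical illustration of this reduction (ours, with ad hoc parameters): compare the average number of real roots of Kac polynomials with gaussian coefficients against Rademacher $\pm 1$ coefficients. Universality predicts that the two answers nearly agree, even though the coefficient distributions are very different.

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 100, 400
xs = np.linspace(-3.0, 3.0, 4001)  # real roots of Kac polynomials cluster near +-1

def mean_real_roots(sample_coeffs):
    # estimate real roots per sample by counting sign changes on the grid
    total = 0
    for _ in range(trials):
        s = np.sign(np.polyval(sample_coeffs(), xs))
        s = s[s != 0]
        total += int(np.sum(s[:-1] != s[1:]))
    return total / trials

gauss_mean = mean_real_roots(lambda: rng.standard_normal(n + 1))
rademacher_mean = mean_real_roots(lambda: rng.choice([-1.0, 1.0], n + 1))
print(gauss_mean, rademacher_mean)
```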
As mentioned earlier, the main strength of our result is its applicability, as it covers a large collection of random functions.
\begin{itemize}
\item We study random trigonometric polynomials in Section \ref{app1}. We derive (to the best of our knowledge) the first local universality of correlation
for this class. Our setting is more flexible than most
previous works on this topic, as we allow a large degree of freedom in choosing the deterministic coefficients $c_i$.
While we do not find comparable previous local universality results for random trigonometric polynomials, we can still make some comparisons to previous works by restricting to the popular sub-problem of estimating the density of the real roots. For this problem, our universality result yields new estimates which extend several existing results, some of which are quite recent and have been proved by totally different methods; see
Section \ref{app1} for details.
\item In Section \ref{app2}, we discuss Kac polynomials. We derive a short proof for a
strengthening of a recent result of Tao and the second author \cite{TVpoly}. By almost the same argument, one could also recover the main result of Yen Do and the authors \cite{DOV} which applies for generalized Kac polynomials.
As a corollary, we obtain a more precise version of the classical result of Erd\H{o}s and Offord \cite{EO} on the number of real roots.
\item In Section \ref{app3}, we study Weyl series. Our universality result here provides
an exact estimate for the expectation of the number of roots in any fixed domain $B$. Previous to our result, such
an estimate was only known for sets of the form
$rB$, where $r$ is a parameter tending to infinity, thanks to a very recent work of
Kabluchko and Zaporozhets \cite{kabluchko2014asymptotic}.
\item In Section \ref{app4}, we apply our results to random elliptic polynomials.
We give a short proof of a recent result from \cite{TVpoly}, which generalizes an earlier result of Bleher and Di \cite{BD}.
\item The above applications already cover all traditional classes of random functions in the literature. To illustrate the generality of our result, in Section \ref{app5}, we
present one more application, concerning random series with regularly varying coefficients, a class
defined and studied by Flasche and Kabluchko very recently \cite{FK}.
\end{itemize}
In most applications, we will work out corollaries concerning the problem of counting real roots. While our results yield much more than just the density
function of real roots, we focus on this subproblem since it is, traditionally, one of the most natural and appealing problems in the field. (Technically speaking, the study of random functions started with the papers of Littlewood-Offord and Kac in the 1940s on the number of real roots of Kac polynomials.) Our corollaries provide many new contributions to the vast existing literature on this subject. As a matter of fact, our results allow us to study
any level set $L_a:= \{z\in \C: F_n (z)=a \} $ for any fixed $a$ (the roots form the level set $L_0$) at no extra cost.
The rest of the paper is organized as follows. In the next section, we first describe our goal, namely, what we mean by universality.
We then establish the general condition that guarantees
universality, and comment on its strength. We next state the general universality theorems along with a discussion of the main ideas
in the proof.
The next 5 sections (Sections \ref{app1} - \ref{app5}) are devoted to the applications mentioned above. We state universality theorems for various classes of random functions, and derive corollaries
concerning the density of both real and complex roots. In Section \ref{app1_proof_1}, we prove the general universality theorems stated in Section \ref{framework}. The rest of the paper is devoted to the verification of the
applications in Sections \ref{app1} - \ref{app5}. We also include a short appendix at the end of the paper, which contains the proofs of a few lemmas (some of which were proved elsewhere), for the sake of completeness.
\section {Universality theorems}\label{framework}
In the first subsection, we describe the traditional way to compare local statistics of the roots.
Next, we provide the assumptions under which our theorems hold, and comment on their strength. The precise statements
come in the final subsection.
As customary, we assume that $n$ is sufficiently large, whenever needed. All asymptotic notation is used under the
assumption that $n \rightarrow \infty$. The notation ${\bf 1}_E$ denotes the indicator of an event $E$; it takes value 1 if $E$ holds and $0$ otherwise.
\subsection{Comparing local statistics}
For simplicity, let us
first focus on the complex roots of $F_n$.
These roots form a random point set on the plane.
The first interesting local statistic is the density. In order to understand the density
around a point $z$, we consider the unit disk $ B(z, 1)$ centered at $z$. By normalization, we can assume that
the number of roots in this disk is typically of order $\Theta (1)$.
The number of roots in the disk can be written as
$$\sum_{i } \E f(\zeta_i ) $$ where $\zeta_1, \zeta_2, \dots$ are the roots of $F_n$, and $f$ is the indicator
function of $B(z,1)$; in other words, $f(x)=1 $ if $x \in B(z,1)$ and zero otherwise.
If one is interested in the pairwise correlation between the roots near $z$, then it is natural to look at
$$\sum_{i, j} \E f(\zeta_i, \zeta_j ) $$ where $f(x,y)$ is the indicator
function of $B(z,1)^2 :=B(z,1) \times B(z,1)$; in other words, $f(x,y)=1 $ if both $x, y \in B(z,1)$ and zero otherwise.
In general, the $k$-wise correlation can be computed from
$$\sum_{ i_1, \dots, i_k } \E f(\zeta_{i_1}, \dots, \zeta_{i_k} ) $$ where $f(x_1, \dots, x_k)$ is the indicator
function of $B(z,1)^k$. A good estimate for these quantities tells us how the nearby roots repel or attract each other.
Even more generally, one can study the interaction of roots near different centers by looking at
$$\sum_{ i_1, \dots, i_k } \E f(\zeta_{i_1}, \dots, \zeta_{i_k} ) $$ where $f(x_1, \dots, x_k)$ is the indicator function of $B(z_1,1) \times B(z_2,1) \dots \times B(z_k,1)$ with $B(z_i, 1)$ being the unit disk centered at $z_i$.
Now, consider another random function
$$\tilde F_n = \sum_{i = 1}^{n} \tilde \xi_i\phi_i$$
where the $\tilde \xi_i$ are independent random variables distributed differently from the $\xi_i$. We end up with two sets of quantities $$\sum_{ i_1, \dots, i_k} \E f\left (\zeta_{i_1}, \dots, \zeta_{i_k} \right ) $$ and $$ \sum_{ i_1, \dots, i_k } \E f(\tilde\zeta_{i_1}, \dots, \tilde \zeta_{i_k} )$$
where the $\tilde \zeta_i$ are the roots of $\tilde F_n$.
We would like to show (under certain assumptions) that these two quantities are asymptotically the same, namely
\begin{equation} \label{asymp} \left | \sum_{ i_1, \dots, i_k} \E f(\zeta_{i_1}, \dots, \zeta_{i_k} ) - \sum_{ i_1, \dots, i_k } \E f\left (\tilde\zeta_{i_1}, \dots, \tilde \zeta_{i_k} \right ) \right | \le \delta_n \end{equation} for some $\delta_n$ tending to zero as $n$ goes to infinity.
For technical convenience, we will replace the indicator function $f$ by a smoothed approximation. This, in applications, makes no difference. On the other hand, our results hold for any smoothed test function $f$, which may have nothing to do with the indicator function.
If one cares about the real roots, one replaces the disk $B(z,1)$ by the interval of length 1 centered at a real number $z$. In general, instead of the product $B(z_1,1) \times B(z_2,1) \dots \times B(z_k,1)$, one can consider a mixed product of disks and intervals. This enables one to understand the interaction between nearby roots of both types (complex and real).
One, of course, could have made the previous discussion using the notion of correlation functions. However, we find the current format direct and intuitive. We refer to \cite{HKPV} or \cite{TVpoly} for more detailed discussions concerning local statistics using
correlation functions.
\subsection{Assumptions} \label{condi}
Before stating the result, let us discuss the assumptions. There are two assumptions. The first is for the random variables $\xi_i$ and $\tilde \xi_i$. The second concerns the deterministic functions $\phi_i$.
For the random variables, our assumption is close to minimal. In the case that both $\xi_i$ and $\tilde \xi_i$ are real, our simplest assumption is
{\bf Condition C0.} The random variables $\xi_1, \dots, \xi_n, \tilde \xi_1, \dots, \tilde \xi_n$ are independent real random variables
with the same mean $\E \xi_i = \E \tilde \xi_i$ for each $i$, variance one, and (uniformly) bounded $(2+\ep)$ central moments, for some constant $0<\ep <1$.
In fact, we can relax the assumption of matching means and variances, allowing a finite number of exceptions. If the $\xi_i$ and $\tilde \xi_i$ are complex, the matching mean and variance need to be adjusted to address both real and imaginary parts.
{\bf Condition C1.} Two sequences of random variables $(\xi_1, \dots, \xi_n)$ and $(\tilde \xi_1, \dots, \tilde \xi_n)$ are said to satisfy this condition if the following hold, for some constants $N_0, \tau >0$ and $0<\ep<1$.
\begin{enumerate} [(i)]
\item {\it Bounded $(2+\ep)$ central moments:} \label{cond-moment} The random variables $\xi_i$ (and similarly $\tilde \xi_i$), $1\le i \le n$, are independent (real or complex, not necessarily identically distributed) random variables with unit
variance\footnote{For a complex random variable $Z$, the variance of $Z$ is defined to be $\E|Z - \E Z|^{2}$.} and bounded $(2+\ep)$ central moments, namely $\E\left |\xi_i - \E\xi_i\right |^{2+\ep} \le \tau$.
\item {\it Matching moments to second order with finite exceptions:}\label{cond-matching} For any $i\ge N_0$, for all $a, b\in \{0, 1, 2\}$ with $a+b\le 2$, $$\E \Re\left (\xi_i\right )^{a}\Im \left (\xi_i\right )^{b} = \E \Re\left (\tilde \xi_i\right )^{a}\Im \left (\tilde \xi_i\right )^{b},$$ and for $0\le i< N_0$, $\left | \E \xi_i -\E \tilde \xi_i\right |\le \tau$.
\end{enumerate}
It is trivial that Condition {\bf C1} contains Condition {\bf C0} as a special case.
We find it rewarding to go with the more general, but slightly technical, assumption \eqref{cond-matching}, which allows non-matching means, as it
leads to an interesting phenomenon that changing a finite number of terms in $F_n(z)$ does not influence the distribution of the roots.
Among other benefits, this allows us to generalize all results to level sets $\{ z\in \C: F_n (z) =a \} $ for any fixed $a$; see Remark \ref{rmk1} for more details.
\vskip2mm
We now turn to the assumption on the deterministic functions $\phi_i$.
The statement of our theorems will involve two parameters, an error term $0<\delta_n<1$ (see \eqref{asymp})
and a region $D_n\subset \C$, from which the base points $z_1, \dots, z_k$ are chosen.
As their subscripts indicate, both $\delta_n$ and $D_n$ can depend on $n$. In most of our applications,
$\delta_n$ tends to zero with $n$.
The assumption below is tailored to these two parameters.
For two sets $A, B\subset \C$, define $A+B: = \{a+b: a\in A, b\in B\}$.
Let $k, C_1, \alpha_1, A, c_1, C$ be positive constants. We say that $F_n$ satisfies Condition {\bf C2} with parameters
$(k, C_1, \alpha_1, A,c_1, C) $ if the following holds.
{\bf Condition C2.}
\begin{enumerate}
\item \label{cond-poly} For any $z\in D_n$, $F_n$ is analytic on the disk $B(z, 2)$ with probability 1 and
$$\E N^{k+2}\textbf{1}_{N\ge \delta_n^{-C_1}} \le C$$
where $N$ is the number of zeros of $F_n$ in the disk $B(z, 1)$.
\item {\it Anti-concentration:}\label{cond-smallball} For every $z\in D_n$, with probability at least $1 - C\delta_n^{A}$, there exists $z'\in B (z, 1/100)$ for which $|F_n(z')|\ge \exp(-\delta_n^{-c_1})$.
\item {\it Boundedness:} \label{cond-bddn} For any $z\in D_n$, with probability at least $1 - C\delta_n ^{A}$, $|F_n(w)|\le \exp(\delta_n^{-c_1})$ for all $w\in B (z, 2)$.
\item {\it Delocalization:}\label{cond-delocal} For every $z\in D_n+B (0, 1)$, for every $i = 1, \dots, n$,
$$\frac{|\phi_i(z)|}{\sqrt{\sum _{j = 1}^{n}|\phi_j(z)|^{2}}}\le C\delta_n^{\alpha_1}.$$
\item {\it Derivative growth:}\label{cond-repulsion} For any real number $x\in D_n + B(0, 1)$,
\begin{equation}
\sum_{j=1}^{n} |\phi_j'(x)|^{2}\le C\delta_n ^{-c_1}\sum_{j=1}^{n} |\phi_j(x)|^{2},\nonumber
\end{equation}
\begin{equation}
\sum_{j=1}^{n} \sup_{z\in B(x, 1)}|\phi_j''(z)|^{2}\le C\delta_n ^{-c_1}\sum_{j=1}^{n} |\phi_j(x)|^{2},\nonumber
\end{equation}
and
\begin{equation}
\sum_{j=1}^{n} |\E \xi_j|\sup_{z\in B(x, 1)}|\phi_j''(z)|\le C\delta_n ^{-c_1}\sqrt{\sum_{j=1}^{n} |\phi_j(x)|^{2}}.\nonumber
\end{equation}
\end{enumerate}
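To make the delocalization condition \eqref{cond-delocal} concrete, consider the trigonometric system $\phi_j(x) \in \{\cos(jx), \sin(jx)\}$ on the real line: there $\sum_j |\phi_j(x)|^2 = n+1$ identically, so no coordinate can exceed a $1/\sqrt{n+1}$ fraction of the norm. The following sketch (illustrative; real $x$ only, with the hypothetical choice $\delta_n = 1/n$ in mind) verifies this numerically.

```python
import numpy as np

n = 1000
x = np.linspace(0, 2 * np.pi, 501)
# the trigonometric system phi_j: cos(jx) for j = 0..n and sin(jx) for j = 1..n
phis = np.vstack([np.cos(np.outer(np.arange(0, n + 1), x)),
                  np.sin(np.outer(np.arange(1, n + 1), x))])
norms2 = np.sum(phis**2, axis=0)   # = n+1 at every x, since cos^2 + sin^2 = 1
ratio = np.max(np.abs(phis) / np.sqrt(norms2))
print(norms2[0], ratio)            # ratio is exactly 1/sqrt(n+1), attained at x = 0
```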
\begin{remark}
While Condition {\bf C2} still involves the random variables $\xi_i$, in the verification of these conditions, we only need to use basic information about the mean of these variables. On the other hand, the type of arguments one needs to use in the verification depends strongly on the functions $\phi_i$.
\end{remark}
\begin{remark}
The last condition {\bf C2} \eqref{cond-repulsion} is important only in the study of real roots; in particular, it is used to prove the repulsion of the real roots (Lemma \ref{lmrepulsion}). It can be ignored in the study of complex roots.
\end{remark}
Let us now comment on the verification of {\bf C2} in practice.
\begin{remark} \label{anticond}
Typically, we assume $\delta_n$ tends to zero with $n$. We normalize so that the expected value of $N$, the number of roots of $F_n$ in a disk $B(z, 1)$ with $z\in D_n$, is of order $1$. With this in mind, the first condition is a large deviation estimate on $N$ and can be proved using standard
large deviation tools combined with classical complex analytic estimates such as Jensen's inequality. The third condition (boundedness) is also a large deviation statement and can be dealt with using standard tools, since for any fixed $w$, $F_n (w) $ is a sum of independent random variables.
The last two conditions (\eqref{cond-delocal} and \eqref{cond-repulsion}) are deterministic properties of the functions $\phi_i$ and hold for many natural classes of functions. The fourth condition (delocalization) simply says that
in the vector $(\phi_i (z))_1^n $, no coordinate dominates. The fifth condition asserts that the first and second derivatives of $\phi_i$ do not exceed the value of the function itself by a large multiplicative factor, in an average sense. Checking these conditions is usually a routine task.
Furthermore, the proof allows us to easily modify these conditions, if necessary.
The second (anti-concentration) condition is the one that may require some work. However, this condition is trivial if (some of) the random variables $\xi_i$ have continuous distributions with bounded density. For instance, if $\phi_1=1$ (constant function) and $\xi_1$ has a continuous distribution with bounded density, then the required anti-concentration property holds trivially by conditioning on the rest of the random variables (which can have arbitrary distributions). There is a sizable literature
focusing on continuous ensembles, and our results allow us to recover, in a straightforward manner, a number of
existing results, whose original proofs were quite technical; see Sections \ref{app2} and \ref{app4} for examples.
\end{remark}
\subsection{Results}
Given the assumptions discussed in the previous section, we are now ready to state our universality theorems.
\begin{definition} \label{defnorm} For any function $G:\R^{k}\to \R$ and any natural number $a$, we define $\norm{\triangledown^aG}_{\infty}$ to be the supremum over $x\in \R^{k}$ of the absolute value of all partial derivatives of total order $a$ of $G$ at $x$. For a function $G:\R^{k}\times \C^{l}\to \C$, we define $\norm{\triangledown^aG}_{\infty}$ to be the maximum of $\norm{\triangledown^aG_1}_{\infty}$ and $\norm{\triangledown^aG_2}_{\infty}$, where $G_1, G_2:\R^{k+2l}\to \R$ are the real and imaginary parts of $G$:
$$G_1(x_1, \dots, x_k, u_1, \dots, u_l, v_1, \dots, v_l)=\Re (G(x_1, \dots, x_k, u_1+iv_1, \dots, u_l+iv_l)),$$
$$G_2(x_1, \dots, x_k, u_1, \dots, u_l, v_1, \dots, v_l)=\Im (G(x_1, \dots, x_k, u_1+iv_1, \dots, u_l+iv_l)).$$
\end{definition}
\begin{theorem}[General Complex universality]\label{gcomplex}
Assume that the coefficients $\xi_i$ and $\tilde \xi_i$ satisfy Condition {\bf C1} for some constants $N_0, \tau, \ep$. Let $\alpha_1, C_1$ be positive constants and $k$ be a positive integer. Set $A := 2kC_1 + \frac{\alpha_1\ep }{60}$ and
$c_1:= \frac{\alpha_1 \ep }{10^{5} k^{2}}$. Assume that there exists a constant $C>0$ such that the random functions $F_n$ and $\tilde F_n$ satisfy Conditions {\bf C2} \eqref{cond-poly}-{\bf C2} \eqref{cond-delocal} with parameters $(k, C_1, \alpha_1, A,c_1, C)$. Then there exist positive constants $C', c$ depending only on the constants in Conditions {\bf C1} and {\bf C2} (but not on $\delta_n$, $D_n$ and $n$) such that the following holds.
For any complex numbers $z_1, \dots, z_k$ in $D_n$ and any function $G: \mathbb{C}^{k}\to \mathbb{C}$ supported on \newline$\prod_{i=1}^{k} B (z_i, 1/100) $ with continuous derivatives up to order $2k+4$ and $\norm{\triangledown^aG}_{\infty}\le 1$ for all $0\le a\le 2k+4$, we have
\begin{eqnarray}
\left |\E\sum G\left (\zeta_{i_1}, \dots, \zeta_{i_k}\right) -\E\sum G\left (\tilde \zeta_{i_1}, \dots, \tilde \zeta_{i_k}\right) \right |\le C'\delta_n^{c},\label{gcomplexb}
\end{eqnarray}
where the first sum runs over all $k$-tuples\footnote{For example, if $k=2$ and $F_n$ only has two roots $\zeta_1$ and $\zeta_2$, then the first sum is $G(\zeta_1, \zeta_1) + G(\zeta_1, \zeta_2)+G(\zeta_2, \zeta_1)+G(\zeta_2, \zeta_2)$.} $(\zeta_{i_1}, \dots, \zeta_{i_k})$ of the roots $\zeta_1, \zeta_2, \dots$ of $F_n$, and
the second sum runs over all $k$-tuples $(\tilde \zeta_{i_1}, \dots, \tilde \zeta_{i_k})$ of the roots $\tilde \zeta_1, \tilde \zeta_2, \dots$ of $ \tilde F_n$.
\end{theorem}
\begin{theorem}[General Real universality]\label{greal} Assume that $\phi_i(\R)\subset \R$ and $\xi_i$ and $\tilde \xi_i$ are real random variables that satisfy Condition {\bf C1} for some constants $N_0, \tau, \ep$. Let $\alpha_1, C_1$ be positive constants and $k, l$ be nonnegative integers with $k+l\ge 1$. Set $A = 2(k+l+2)(C_1+2) + \frac{\alpha_1\ep }{60}$ and $c_1 = \frac{\alpha_1\ep }{10^9(k+l)^{4}}$.
Assume that there exists a constant $C>0$ such that the random functions $F_n$ and $\tilde F_n$ satisfy Condition {\bf C2} with parameters $(k+l, C_1, \alpha_1, A,c_1, C)$. Then there exist positive constants $C', c$ depending only on $k, l$ and the constants in Conditions {\bf C1} and {\bf C2} (but not on $\delta_n$, $D_n$ and $n$) such that the following holds.
For any real numbers $x_1,\dots, x_k$, complex numbers $z_1, \dots, z_l$, all of which are in $D_n$, and any function $G: \mathbb{R}^{k}\times\mathbb{C}^{l}\to \mathbb{C}$ supported on $\prod_{i=1}^{k}[x_i-1/100, x_i+1/100] \times \prod_{j=1}^{l}B (z_j, 1/100)$ with continuous derivatives up to order $2(k+l)+4$ and $\norm{\triangledown^aG}_{\infty}\le 1$ for all $0\le a\le 2(k+l)+4$, we have
\begin{eqnarray}
\left |\E\sum G\left (\zeta_{i_1}, \dots, \zeta_{i_k}, \zeta_{j_1} , \dots, \zeta_{j_l}\right)
-\E\sum G\left (\tilde \zeta_{i_1}, \dots, \tilde \zeta_{i_k}, \tilde \zeta_{j_1}, \dots, \tilde \zeta_{j_l}\right) \right |\le C' \delta_n^{c},\nonumber
\end{eqnarray}
where the first sum runs over all $(k+l)$-tuples $(\zeta_{i_1}, \dots, \zeta_{i_k}, \zeta_{j_1}, \dots, \zeta_{j_l}) \in \R^{k}\times \C_{+}^{l}$ of the roots $\zeta_1, \zeta_2, \dots$ of $F_n$, and the second sum runs over all $(k+l)$-tuples $(\tilde \zeta_{i_1}, \dots, \tilde \zeta_{i_k}, \tilde \zeta_{j_1}, \dots, \tilde \zeta_{j_l}) \in \R^{k}\times \C_{+}^{l}$ of the roots $\tilde \zeta_1, \tilde \zeta_2, \dots$ of $ \tilde F_n$.
\end{theorem}
\begin{remark}\label{rmkconstants} The particular values of $A$ and $c_1$ in both theorems are chosen for the sake of explicitness. The theorems hold for any larger $A$ and any smaller $c_1$.
The constant $c$ in both theorems can be chosen to be $c_1$, namely $\frac{\alpha_1 \ep }{10^{5} k^{2}}$ and $\frac{\alpha_1\ep }{10^9(k+l)^{4}}$, respectively. We make no attempt to optimize these constants.
\end{remark}
\subsection{Main ideas and technical novelties} \label{mainideas}
Let us consider the simplest setting where $k=1, l=0$ and we need to show
\begin{equation} \nonumber
\sum_{i=1}^n \E G ( \zeta_i )= \sum_{i=1}^n \E G (\tilde \zeta_i ) +O\left (\delta_n^{c}\right ) ,\end{equation}
where the $\zeta_i$ (and the $\tilde \zeta_i $) are the roots of $F_{n}$ (and $\tilde F_{n}$, respectively) and $G$ is a (smooth) test function supported on a disk $B(z_0, 1/100)$.
\noindent Our starting point is Green's formula, which asserts that
$$ G(0) = \frac{1}{2\pi}\int_{\C} \log |z| \Delta G(z) dz. $$
\noindent By a change of variables, this implies that for all $i$,
$$G(\zeta_i) = \frac{1}{2\pi}\int_{\C} \log |z -\zeta_i | \Delta G(z) dz, $$ which, in turn, yields
\begin{equation} \nonumber
\sum_i \E G ( \zeta_i ) = \frac{1}{2\pi}\E \int_{\C} \log \Big| \prod_{i=1}^n (z -\zeta_i ) \Big| \Delta G(z) dz = \frac{1}{2\pi}\E \int_{B(z_0, 1/100)} \log | F_{n} (z) | \Delta G(z) dz. \end{equation}
(A multiplicative constant relating $F_n$ to the product is harmless, since $\int_{\C} \Delta G(z) dz = 0$.)
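As a numerical sanity check of the identity $G(\zeta) = \frac{1}{2\pi}\int_{\C} \log|z-\zeta|\,\Delta G(z)\, dz$ (illustrative only; a single root, the standard smooth bump as $G$, and ad hoc grid parameters), one can discretize the Laplacian by finite differences:

```python
import numpy as np

h = 0.005
xs = np.arange(-1.5, 1.5 + h / 2, h)
X, Y = np.meshgrid(xs, xs)

def bump(x, y):
    # the standard smooth bump function supported on the unit disk
    r2 = x**2 + y**2
    out = np.zeros_like(r2, dtype=float)
    inside = r2 < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - r2[inside]))
    return out

G = bump(X, Y)
# 5-point finite-difference Laplacian on interior grid points
lap = (G[1:-1, 2:] + G[1:-1, :-2] + G[2:, 1:-1] + G[:-2, 1:-1]
       - 4.0 * G[1:-1, 1:-1]) / h**2
zeta = 0.2013 + 0.1007j                      # a "root", placed off the grid
Z = X[1:-1, 1:-1] + 1j * Y[1:-1, 1:-1]
rhs = np.sum(np.log(np.abs(Z - zeta)) * lap) * h**2 / (2.0 * np.pi)
lhs = bump(np.array([zeta.real]), np.array([zeta.imag]))[0]
print(lhs, rhs)   # the two sides agree up to discretization error
```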
An obvious, and major, technical difficulty here is that the logarithmic function has a pole at 0. This naturally leads to the anti-concentration issue discussed earlier: we need to bound the probability that
$|F_n(z)|$ is close to zero. Condition {\bf C2} \eqref{cond-smallball} was introduced to address this issue.
Let us assume, for a moment, that the pole problem has been handled properly (we will discuss the anti-concentration property a few paragraphs later). Then, by using Conditions {\bf C2} \eqref{cond-poly}-\eqref{cond-bddn}, we can show that the function $F_n$ is nice enough that we can replace $\log|F_n|$ by $K(F_n)$ where $K$ is a bounded smooth function. The key argument of this part is to bound the error term, which turns out to be relatively simple.
The task is now reduced to showing that
$$ \E \int_{B(z_0, 1/100)} K\left (F_n(z)\right )\Delta G(z)dz - \E \int_{B(z_0, 1/100)} K\left (\tilde F_n(z)\right )\Delta G(z)dz= O(\delta_n^{c}).$$
Since $\Delta G$ is bounded, it suffices to show that for each $z\in B(z_0, 1/100)$,
$$ \E K\left (F_n(z)\right ) - \E K\left (\tilde F_n(z)\right )= O(\delta_n^{c}).$$
Since for each fixed $z$, $F_n(z)$ is a sum of independent random variables, the desired bound can be viewed, in some sense, as a quantitative version of the Central Limit Theorem. We will actually prove it by the Lindeberg swapping method, which, by now, is a standard tool for proving local universality.
Generalizing the whole scheme to the general case of $k$ and $l$ requires several additional technical steps, but the spirit of the method remains the same.
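The Lindeberg-type comparison in its simplest form can be illustrated numerically: for a smooth bounded test function $K$ (the choice $K(t) = 1/(1+t^2)$ below is arbitrary) and a normalized sum of iid Rademacher variables, $\E K$ is already very close to its value under the Gaussian comparison ensemble. This sketch is illustrative and not part of the proofs.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 200, 40000
K = lambda t: 1.0 / (1.0 + t**2)     # an arbitrary smooth bounded test function

# normalized sums of n iid Rademacher signs, m Monte Carlo samples each
S_rad = (2.0 * rng.integers(0, 2, size=(m, n)) - 1.0).sum(axis=1) / np.sqrt(n)
S_gau = rng.standard_normal(m)       # the Gaussian comparison ensemble
gap = abs(np.mean(K(S_rad)) - np.mean(K(S_gau)))
print(gap)   # small: the first three moments of the two ensembles match
```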
Our method differs from that of \cite{TVpoly} at essential steps. The first key idea in \cite{TVpoly} is to handle the integral
$$ \frac{1}{2\pi}\E \int_{B(z_0, 1/100)} \log | F_{n} (z) | \Delta G(z) dz $$ by
a random Riemann sum. One tries to approximate this integral by $\frac{c}{m} (f(z_1) + \dots + f(z_m)) $, where the $z_i$ are iid random points sampled from the disk, $m$ is a properly chosen parameter which tends to infinity with $n$, $c$ is a normalizing constant, and $f:= \log |F_n| \Delta G $.
With this approach, one faces two major technical tasks. The first (and harder) one is to control the error term in the approximation. This leads to the
problem of estimating the variance in the sampling process. The other task is to prove a comparison estimate for the
random vector $(f(z_1), \dots, f(z_m))$, where we now view
the points $z_1, \dots, z_m$ as fixed, with the randomness coming from $F_n$. This, again, can be done using a Lindeberg-type argument (applied in a high-dimensional setting).
Our new proof avoids this sampling step completely, making the argument much shorter and more direct. For instance, the proof of Theorem \ref{gcomplex}, barring some lemmas in the appendix, is now only 3 pages.
Let us now discuss the critical anti-concentration property. In practice, it has been a major issue to prove that a random function satisfies the anti-concentration phenomenon in some way. (As pointed out earlier, this is needed in order to address the pole problem concerning the logarithmic function.)
In earlier papers \cite{TVpoly} and \cite{DOV}, every class of random (algebraic) polynomials requires a different proof.
In \cite{TVpoly}, for Weyl and elliptic polynomials, the authors used
Littlewood-Offord arguments for lacunary sequences. In the same paper, the proof for Kac polynomials required a much more sophisticated argument, based
on the Inverse Littlewood-Offord theory and a weak version of
the quantitative (Gromov) rigidity theorem. However, this proof does not hold for
the derivatives of Kac polynomials and random polynomials with
slowly growing coefficients. In order to handle these classes, in \cite{DOV},
the authors needed to use the Nishry-Nazarov-Sodin log-integrability theorem, a very recent development.
However, none of these tools works for random trigonometric polynomials, whose roots behave quite differently.
An important new point in our proof is that we require a much weaker
anti-concentration property than in previous papers.
We only require that $F_n$, as a random function, satisfy anti-concentration at a single point $z'$ of the relevant neighborhood, while in \cite{TVpoly} anti-concentration is required to hold at most points of that neighborhood. (Notice that since we integrate with respect to $z$, this earlier requirement from \cite{TVpoly} looks natural.) The key to this observation is our Lemma \ref{2norm}, which asserts that, under favorable conditions, a lower bound
on $|F_n(w)|$ guarantees a weaker, but still useful, lower bound on $|F_n (z)|$ for any $z$ in a neighborhood of
$w$.
Building upon this new observation, we have developed a novel method (based on old results of Tur\'an and Hal\'asz)
to verify the anti-concentration property in a simple and robust manner. This effort leads to Lemma \ref{lmanti_concentration}, which we can use, in a rather straightforward way, to prove the desired anti-concentration property for all
ensembles of random functions discussed in this paper (including all the algebraic polynomials discussed above, random trigonometric polynomials with general coefficients, and a very recent ensemble studied by Flasche and Kabluchko).
\section{Application: Universality for random trigonometric polynomials} \label{app1}
In this section, we apply our theorems to study
{\it random trigonometric polynomials} of the following form
\begin{equation}
P_n(x) = \sum_{j=0}^{n} c_j\xi_j\cos(jx) + \sum_{j=1}^{n} d_j\eta_j\sin(jx)\nonumber
\end{equation}
where $c_j$ and $d_j$ are deterministic coefficients, which may depend on both $j$ and $n$,
and $\xi_0, \xi_1, \dots, \xi_n$ and $\eta_1, \dots, \eta_n$ are independent random variables with unit variance.
Most of the existing literature deals with the special case $c_i =d_i=1$ or $c_i= 1, d_i=0 $ for every $i$.
The generality of our study enables us to consider more general coefficients. All we need to assume about the coefficients $c_i, d_i$ is the following
{\bf Condition C3}.
There exist positive constants $\tau_1, c$ and an interval $AP_0 \subset \{1, \dots, n\}$ of size at least $cn$ such that
\begin{equation}
|c_i|\ge \tau_1 \max_{0\le j\le n} \{|c_j|, |d_j|\} \qquad\text{for all } i\in AP_0.
\end{equation}
With regard to the random variables, we assume
{\bf Condition C4}. There is a constant $N_0 >0$ such that for $i\ge N_0$, $\E\xi_i = \E\eta_i = 0$ and for $0\le i< N_0$, $|\E \xi_i|\le n^{\tau_0}$, and $|\E \eta_i|\le n^{\tau_0}$, where $\tau_0 := 1/2+ 10^{-11}\ep$.
The $\ep$ in this condition is the $\ep$ in Condition
{\bf C1}. The constant $\tau_0$ is not optimal but we make no attempt to improve it.
We use the same notation $N_0$ in both Condition {\bf C4} and Condition {\bf C1}, as we can always replace two different $N_0$ by their maximum. The assumption that $AP_0$ is an interval is only used in the following simple lemma.
\begin{lemma}\label{J}
Let $AP_0$ be an interval in $\{1, \dots, n \} $ of length $\beta n$, for some constant $\beta >0$. Then there is a
constant $\beta' >0$ such that for any real number $a$,
$AP_0$ contains a subset $J_a$ of size at least $\beta'n$ such that
$|2aj (\hbox{\rm mod} \,\, 2\pi)-\pi|\ge \beta'$ for all $j\in J_{a}$.
\end{lemma}
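The lemma is a simple pigeonhole statement; a quick empirical check (illustrative; the constants $\beta' = 0.1$ and the lower bound $0.3$ on the good fraction are ad hoc choices, and $a$ is drawn at random rather than quantified over) is as follows.

```python
import numpy as np

rng = np.random.default_rng(3)
n, beta_prime = 1000, 0.1
j = np.arange(1, n + 1)
fracs = []
for a in rng.uniform(-10, 10, size=50):
    # indices j for which 2aj (mod 2pi) stays beta' away from pi
    good = np.abs((2 * a * j) % (2 * np.pi) - np.pi) >= beta_prime
    fracs.append(good.mean())
min_frac = min(fracs)
print(min_frac)   # a constant fraction of {1,...,n} qualifies for every sampled a
```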
Let
\begin{equation}
\tilde P_n(x) = \sum_{j=0}^{n} c_j\tilde \xi_j\cos(jx) + \sum_{j=1}^{n} d_j\tilde \eta_j\sin(jx)\nonumber
\end{equation}
where $\tilde \xi_0, \tilde \xi_1, \dots, \tilde \xi_n$ and $\tilde \eta_1, \dots, \tilde \eta_n$ are some other independent random variables.
\begin{theorem}[Complex universality for trigonometric polynomials] \label{complex} Let $k$ be a positive integer.
Assume that the two sequences $(\xi_0, \dots, \xi_n, \eta_1, \dots, \eta_n)$ and $(\tilde \xi_0, \dots, \tilde \xi_n, \tilde \eta_1, \dots, \tilde \eta_n)$ satisfy Condition {\bf C1}
and the coefficients $c_i, d_i$ satisfy Condition {\bf C3}. Then for any positive constant $C$, there exist positive constants $C', c$ depending only on $C, k$ and the constants in Conditions {\bf C1, C3} such that the following holds.
For any complex numbers $z_1, \dots, z_k$ with $|\Im(z_j)|\le C/n$ for all $1\le j\le k$, and for any function $G: \mathbb{C}^{k}\to \mathbb{C}$ supported on $\prod_{i=1}^{k} B (z_i, 1/n)$ with continuous derivatives up to order $2k+4$ and $\norm{\triangledown^aG}_\infty\le n^{a}$ for all $0\le a\le 2k+4$, we have
\begin{eqnarray}\nonumber
\left |\E\sum G\left (\zeta_{i_1}, \dots, \zeta_{i_k}\right) -\E\sum G\left (\tilde \zeta_{i_1}, \dots, \tilde \zeta_{i_k}\right) \right |\le C'n^{-c},
\end{eqnarray}
where the first sum runs over all $k$-tuples $(\zeta_{i_1}, \dots, \zeta_{i_k})$ of the roots $\zeta_1, \zeta_2, \dots$ of $P_n$, and
the second sum runs over all $k$-tuples $(\tilde \zeta_{i_1}, \dots, \tilde \zeta_{i_k})$ of the roots $\tilde \zeta_1, \tilde \zeta_2, \dots$ of $ \tilde P_n$.
\end{theorem}
\begin{theorem}[Real universality for trigonometric polynomials] \label{real} Let $k, l$ be nonnegative integers.
Assume that the real coefficients $c_i$ and $d_i$ satisfy Condition {\bf C3} and the two sequences of real random variables $(\xi_0, \dots, \xi_n, \eta_1, \dots, \eta_n)$ and $(\tilde \xi_0, \dots, \tilde \xi_n, \tilde \eta_1, \dots, \tilde \eta_n)$ satisfy Conditions {\bf C1} and {\bf C4}. Then for any positive constant C, there exist positive constants $C', c$ depending only on $C, k, l$ and the constants in Conditions {\bf C1, C3, C4} such that the following holds.
For any real numbers $x_1,\dots, x_k$, and complex numbers $z_1, \dots, z_l$ such that $|\Im(z_j)|\le C/n$ for all $1\le j\le l$, and for any function $G: \mathbb{R}^{k}\times\mathbb{C}^{l}\to \mathbb{C}$ supported on $\prod_{i=1}^{k}[x_i-1/n, x_i+1/n] \times \prod_{j=1}^{l}B (z_j, 1/n)$ with continuous derivatives up to order $2(k+l)+4$ and $\norm{\triangledown^aG}_\infty\le n^{a}$ for all $0\le a\le 2(k+l)+4$, we have
\begin{eqnarray}\nonumber
\left |\E\sum G\left (\zeta_{i_1}, \dots, \zeta_{i_k}, \zeta_{j_1} , \dots, \zeta_{j_l}\right)
-\E\sum G\left (\tilde \zeta_{i_1}, \dots, \tilde \zeta_{i_k}, \tilde \zeta_{j_1}, \dots, \tilde \zeta_{j_l}\right) \right |\le C'n^{-c},
\end{eqnarray}
where the first sum runs over all $(k+l)$-tuples $(\zeta_{i_1}, \dots, \zeta_{i_k}, \zeta_{j_1}, \dots, \zeta_{j_l}) \in \R^{k}\times \C_{+}^{l}$ of the roots $\zeta_1, \zeta_2, \dots$ of $P_n$, and the second sum runs over all $(k+l)$-tuples $(\tilde \zeta_{i_1}, \dots, \tilde \zeta_{i_k}, \tilde \zeta_{j_1}, \dots, \tilde \zeta_{j_l}) \in \R^{k}\times \C_{+}^{l}$ of the roots $\tilde \zeta_1, \tilde \zeta_2, \dots$ of $ \tilde P_n$.
\end{theorem}
To the best of our knowledge, the above theorems are the first universality results concerning local statistics of the roots of
random trigonometric polynomials. To make a comparison to existing literature, let us focus on
the distribution of real roots, which is the case $k=1, l=0$ in Theorem \ref{real}.
The number of real roots has been a main focus of the study of random trigonometric polynomials.
The gaussian setting has been investigated by a number of researchers, including Dunnage \cite{Dunnage1966number}, Sambandham \cite{sambandham1978number}, Das \cite{das1968trig}, Wilkins \cite{wilkins1991trig}, Edelman and Kostlan
\cite{EK}, and many others. One can compute an exact
answer for the expectation using either the Kac-Rice formula or the Edelman-Kostlan formula \cite{EK}.
For the non-gaussian case, little was known until very recently. Angst and Poly \cite{angstpoly}, in a recent preprint, proved the asymptotics of the mean number of roots of $P_n$ in a fixed interval $[a, b]$ under the assumptions of a finite fifth moment and a Cramer-type condition. Their approach introduced a novel way to work with the Kac-Rice formula, which had been considered difficult to use in discrete settings. Using an approach originating with Erd{\"o}s and Offord \cite{EO} and later developed by Ibragimov and Maslova \cite{Ibragimov1968average, Ibragimov1971expected1}, Flasche \cite{flasche} extended the result of \cite{angstpoly} under assumptions on the first two moments only. Let $N_{P_n}(a, b)$ denote the number of real roots of $P_n$ in an interval $[a, b]$.
\begin{theorem} [Flasche \cite{flasche}]\label{flasche}
Let $u\in \R$ and $0\le a< b\le 2\pi$ be fixed numbers. Let $P_n(x) = u\sqrt n + \sum_{j=0}^{n} \xi_j\cos(jx) + \sum_{j=1}^{n} \eta_j\sin(jx)$ where $\xi_j$ and $\eta_j$, $j\in \N$, are iid random variables with mean 0 and variance 1. Then
$$\lim_{n\to \infty} \dfrac{\E N_{P_n}(a, b)}{n} = \frac{b-a}{\pi\sqrt 3} \exp\left (-\frac{u^{2}}{2}\right ).$$
\end{theorem}
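In the Gaussian case with $u=0$ and $c_i = d_i = 1$, this predicts about $2n/\sqrt{3}$ real roots on $[0, 2\pi]$. A crude Monte Carlo experiment, counting sign changes of $P_n$ on a fine grid, reproduces this (illustrative sketch; the grid size, the value of $n$, and the number of trials are ad hoc).

```python
import numpy as np

rng = np.random.default_rng(4)
n, trials = 150, 40
x = np.linspace(0.0, 2.0 * np.pi, 15001)
C = np.cos(np.outer(np.arange(0, n + 1), x))   # cos(jx), j = 0..n
S = np.sin(np.outer(np.arange(1, n + 1), x))   # sin(jx), j = 1..n

counts = []
for _ in range(trials):
    P = rng.standard_normal(n + 1) @ C + rng.standard_normal(n) @ S
    counts.append(np.count_nonzero(np.diff(np.sign(P))))  # sign changes on the grid
mean_count = np.mean(counts)
print(mean_count, 2 * n / np.sqrt(3))  # prediction: (b-a) n / (pi sqrt(3)), b-a = 2 pi
```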
Notice that in this theorem, the interval $[a,b]$ contains a linear number of roots.
For smaller intervals, a few years ago, Aza{\"\i}s and coauthors \cite{azais2015local} showed that if $\xi_i$ and $\eta_i$ are iid with a smooth density function, then in an interval of size $\Theta(1/n)$, the number of real zeros converges in distribution to that of a suitable Gaussian process (and is thus universal). In an even more recent paper \cite{iksanov2016local}, Iksanov-Kabluchko-Marynych removed the assumption of smooth density, using a different method.
\begin{theorem} [Iksanov-Kabluchko-Marynych \cite{iksanov2016local}]\label{IKK}
Let $P_n(x) = \sum_{j=0}^{n} \xi_j\cos(jx) + \sum_{j=1}^{n} \eta_j\sin(jx)$ where $(\xi_j,\eta_j)$, $j\in \N$, are iid real random vectors with mean 0 and unit covariance matrix. Let $(s_n)$ be any sequence of real numbers and $[a, b]\subset \R$ a fixed interval. Then
$$N_{P_n} \left (s_n + \frac{a}{n}, s_n + \frac{b}{n}\right ) \underset{n\to \infty}{\overset{d}{\longrightarrow}} N_{Z}(a, b)$$
where $(Z(t))_{t\in \R}$ is the stationary gaussian process with mean 0 and covariance function
$$\Cov(Z(t), Z(s))=\begin{cases}
\frac{\sin(t-s)}{t-s} \quad\mbox{if } t\neq s\\
1 \quad\mbox{if } t = s.
\end{cases}$$
\end{theorem}
In all of these previous works, the coefficients $c_i, d_i$ are restricted: $c_i=d_i=1$ or $c_i=1, d_i=0$.
Our setting is much more general, as we only require a linear fraction of the $c_i$ to be sufficiently large and
allow the rest of the (smaller)
coefficients to be arbitrary.
Our result implies the following corollary concerning the number of real roots.
\begin{theorem}\label{comparison}
Under the assumptions of Theorem \ref{real}, there exist positive constants $C$ and $c$ such that for any $n$ and for any numbers $a_n< b_n$, we have
$$\frac{|\E N_{P_n}(a_n, b_n) - \E N_{\tilde P_n}(a_n, b_n)|}{(b_n-a_n)n}\le Cn^{-c}\left (1 + \frac{1}{(b_n-a_n)n}\right ).$$
\end{theorem}
By using the Kac-Rice formula (Proposition \ref{KacRice}) for the gaussian case, we obtain the following
precise estimate.
\begin{cor}\label{maincor} Let $C, \ep$ and $\tau_1$ be positive constants. Let
$-C\le u_n\le C$ be a deterministic number. Let
$$P_n(x) = u_n\sqrt{\sum_{i=0}^{n}c_i ^{2}} + \sum_{j=0}^{n}c_j \xi_j\cos(jx) + \sum_{j=1}^{n} c_j\eta_j\sin(jx)$$
where $\xi_j$ and $\eta_j$, $j\le n$, are independent (not necessarily identically distributed) real random variables with mean 0, variance 1 and $(2+\ep)$-moments bounded by $C$, and the real coefficients $c_j$ satisfy condition {\bf C3}.
Then for any numbers $a_n< b_n$, we have
$$\E N_{P_n}(a_n, b_n) = \frac{b_n-a_n}{\pi} \sqrt{\frac{\sum_{j=0}^{n} c_j^{2}j^{2}}{\sum_{j=0}^{n} c_j^{2}}}\exp\left (-\frac{u_n^{2}}{2}\right ) + O\left (n^{-c} \right ) ((b_n-a_n)n + 1) $$
where the positive constant $c$ and the implicit constant depend only on $C, \ep$ and $\tau_1$.
\end{cor}
This corollary extends both Theorems \ref{flasche} and \ref{IKK} in the sense that it holds for general coefficients $c_i, d_i$ and intervals of all scales. It does not seem that the methods used in these papers can cover
the same range. On the other hand, our atom variables are required to have bounded $(2+\ep)$-moments. It is an interesting open problem to determine
to what extent this assumption is necessary.
\begin{remark} \label{rmk1}
In the proof, we will show that Corollary \ref{maincor} holds for a more general case in which
\begin{eqnarray}
P_n(x) &=& u_n\sqrt{\sum_{i=0}^{n}c_i ^{2}} + \sum _{j=0}^{N_0} u_j n^{-\alpha} \sqrt{\sum_{i=0}^{n}c_i ^{2}} \cos(jx)+ \sum_{j=1}^{N_0} v_j n^{ -\alpha}\sqrt{\sum_{i=0}^{n}c_i ^{2}} \sin(jx) \nonumber\\
&&+ \sum_{j=0}^{n}c_j \xi_j\cos(jx) + \sum_{j=1}^{n} c_j\eta_j\sin(jx)\label{newP}
\end{eqnarray}
where $N_0, \alpha>0$ are any constants and $-C\le u_j, v_j\le C$ are deterministic numbers that may depend on $n$. This means that the result applies not only to the number of zeros of $P_n$ but also to the number of intersections between $P_n$ and a deterministic trigonometric polynomial
$$Q(x) := u_n'\sqrt{\sum_{i=0}^{n}c_i ^{2}} + \sum _{j=0}^{N_0} u_j n^{-\alpha} \sqrt{\sum_{i=0}^{n}c_i ^{2}} \cos(jx)+ \sum_{j=1}^{N_0} v_j n^{ -\alpha}\sqrt{\sum_{i=0}^{n}c_i ^{2}} \sin(jx) $$ where $u_n'$, $u_j$ and $v_j$ are bounded deterministic numbers. To see this, one only needs to apply the result to the random polynomial $P_n - Q$.
\end{remark}
Now let us go back to the special case with $c_i=d_i =1$:
\begin{equation}
P_n(x) = \sum_{i=0}^{n}\xi_i \cos(ix) + \sum_{i=1}^{n}\eta_i \sin(ix). \nonumber
\end{equation}
By applying Corollary \ref{maincor} directly to the derivatives of $P_n$, we get the following result.
\begin{cor}
Let $k$ be a nonnegative integer and $C$ be a positive constant. Assume that the random variables $\xi_i$ and $\eta_i$, $i\le n$, are independent (not necessarily identically distributed) real random variables with mean 0, variance 1 and $(2+\ep)$-moments bounded by $C$. For any numbers $a_n< b_n$, the expected number of real zeros of the $k$-th derivative of $P_n$ in an interval $[a_n, b_n]$ is
$$\E N_{P_n^{(k)}}(a_n, b_n) = \sqrt{\frac{2k+1}{2k+3}} \frac{(b_n-a_n)n}{\pi}+ O\left (n^{-c} \right ) ((b_n-a_n)n + 1)$$
where the positive constant $c$ and the implicit constant depend only on $k, C$ and $\ep$.
\end{cor}
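The factor $\sqrt{(2k+1)/(2k+3)}$ arises from applying Corollary \ref{maincor} with coefficients $c_j \propto j^{k}$ (those of $P_n^{(k)}$), using the elementary asymptotics $\sum_{j\le n} j^{2k+2} / \sum_{j\le n} j^{2k} \approx n^{2}\, (2k+1)/(2k+3)$. A quick numerical check of this coefficient asymptotics:

```python
import numpy as np

n = 200000
j = np.arange(1, n + 1, dtype=float)
max_err = 0.0
for k in range(4):
    # sqrt( sum j^{2k+2} / sum j^{2k} ) / n should approach sqrt((2k+1)/(2k+3))
    lhs = np.sqrt(np.sum(j**(2 * k + 2)) / np.sum(j**(2 * k))) / n
    rhs = np.sqrt((2 * k + 1) / (2 * k + 3))
    max_err = max(max_err, abs(lhs - rhs))
print(max_err)   # the discrepancy is of order 1/n
```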
The key to our proof is the new technique to verify anti-concentration, which we discussed at the end of the Introduction (see also Remark \ref{anticond}) and at the end of the previous section. For details, see Section \ref{proof-main}.
\section{Application: Universality for Kac polynomials}\label{app2}
In this section, we apply our result to Kac polynomials,
$$P_n(x) = \sum_{i=0}^{n} \xi_i x^{i}$$
where $\xi_0, \xi_1, \dots, \xi_n$ are iid copies of a real random variable $\xi$ with mean zero and unit variance.
This is perhaps the most studied model of random polynomials. Indeed, the starting point of the theory of random functions was a series of papers in the first half of the twentieth century examining the number of real roots of Kac polynomials.
The first rigorous work on random polynomials was due to Bloch and Polya in 1932 \cite{BP},
who considered the Kac polynomial with $\xi$ being
Rademacher, namely $\P(\xi=1)=\P(\xi=-1)=1/2$. In what follows, we denote by $N_{n, \xi}$ the number of real roots of $P_n (x)$.
Next came the groundbreaking series of papers by Littlewood and Offord \cite{LO1, LO2, LO3} in the early 1940s,
which, to the surprise of many mathematicians at the time, showed that $N_{n, \xi}$ is typically poly-logarithmic in $n$.
\begin{theorem} [Littlewood-Offord]
For $\xi$ being Rademacher, Gaussian, or uniform on $[-1,1]$,
$$ \frac{\log n} {\log \log n} \le N_{n, \xi} \le \log^2 n$$ with probability $1-o(1)$.
\end{theorem}
Around the same time, Kac \cite{Kac1943average} discovered his famous formula for the density function $\rho(t)$ of the real roots of $P_n$
\begin{equation} \nonumber \rho(t) = \int_{- \infty} ^{\infty} |y| p(t,0,y) dy, \end{equation} where
$p(t,x,y)$ is the joint probability density of $P_{n} (t) =x$ and the derivative $P'_{n} (t) =y$.
Consequently,
\begin{equation} \label{Kacformula} \E N_{n ,\xi} = \int_{-\infty}^{\infty} dt \int_{- \infty} ^{\infty} |y| p(t,0,y) dy.
\end{equation}
In the Gaussian case ($\xi$ is Gaussian), one can compute the joint distribution of $P_{n} (t)$ and $P'_{n}(t)$ rather easily.
Kac showed in \cite{Kac1943average} that
\begin{equation} \nonumber \E N_{n, Gauss} = \frac{1}{\pi} \int_{-\infty} ^{\infty} \sqrt { \frac{1}{(t^2-1) ^2} - \frac{(n+1)^2 t^{2n}}{ (t^{2n+2} -1)^2} } dt = \left (\frac{2}{\pi} +o(1)\right ) \log n. \end{equation}
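One can confirm numerically that Kac's integral differs from $(2/\pi)\log n$ by only a bounded amount (in fact a constant $\approx 0.63$). The sketch below is illustrative: it folds the integral onto $(0,1)$ using the symmetry $\rho(1/t) = t^{2}\rho(t)$ of the root density, and the grid (refined geometrically near $t=1$) is an ad hoc choice.

```python
import numpy as np

def kac_expected_real_roots(n):
    # Kac's formula folded onto (0,1): E N = 4/pi * int_0^1 sqrt(...) dt,
    # with a grid refined geometrically near the boundary layer at t = 1
    t = np.concatenate([np.linspace(0.0, 0.99, 4000),
                        1.0 - np.geomspace(1e-2, 1e-10, 4000)])
    a = 1.0 / (t**2 - 1.0)**2
    b = (n + 1.0)**2 * t**(2 * n) / (t**(2 * n + 2) - 1.0)**2
    rho = np.sqrt(np.maximum(a - b, 0.0)) / np.pi
    return 4.0 * np.sum(0.5 * (rho[1:] + rho[:-1]) * np.diff(t))  # trapezoid rule

diffs = [kac_expected_real_roots(n) - (2.0 / np.pi) * np.log(n)
         for n in (10**2, 10**4, 10**6)]
print(diffs)   # the o(1)-corrected constant term, roughly 0.63 for each n
```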
In his original paper \cite{Kac1943average}, Kac thought that his formula would lead to the same estimate for $\E N_{n, \xi}$ for all other random variables $\xi$. This has turned out not to be the case, as the right-hand side of
\eqref{Kacformula} is often hard to compute, especially when $\xi$ is discrete (Rademacher, for instance).
Technically, the computation of the joint distribution of $P_{n} (t)$ and $P'_{n}(t)$ is easy in the Gaussian case, thanks to special properties of the Gaussian distribution,
but can pose a great challenge
in general. Kac admitted this in a later paper \cite{Kac2}, in which he managed to push his method to
treat the case of $\xi$ uniform in $[-1,1]$, using analytic tools. A further extension was made by Stevens \cite{Stev}, who evaluated Kac's formula for a large class of $\xi$ having continuous and smooth distributions
with certain regularity properties (see \cite[page 457]{Stev} for details). Since the distributions are smooth, these two latter results
follow rather easily from our universality results; see the discussion at the end of the last section and Remark
\ref{anticond}; we leave the routine verification as an exercise for the interested reader.
The computation of $\E N_{n, \xi}$ for discrete random variables $\xi$ required a considerable effort. It took more than 10 years until Erd{\"o}s and Offord \cite{EO} found a completely new approach to handle the Rademacher case, proving the following.
\begin{theorem} \cite{EO} \label{e.erdos-offord} Let $\xi_i$ be iid Rademacher random variables. Then
\begin{equation}
N_{n, \xi} = \frac{2}{\pi} \log n + o\left ((\log n)^{2/3} \log \log n\right )\nonumber
\end{equation}
with probability at least $1 - o\left (\frac{1}{\sqrt{\log \log n}}\right )$.
\end{theorem}
The argument of Erd\"os and Offord is combinatorial and very delicate, even by today's standards. Their main idea is to approximate the number of roots by the number of sign changes
in $P_{n} (x_1) , \dots, P_{n}(x_k)$ where $(x_1, \dots, x_k)$ is a carefully chosen deterministic sequence of points of length $k = (\frac{2}{\pi} +o(1)) \log n$. The authors showed that with high probability, almost every interval
$(x_i, x_{i+1} )$ contains exactly one root, and used this fact to prove Theorem \ref{e.erdos-offord}.
Our main result in this section is the following universality statement.
\begin{theorem}[Universality for Kac polynomials]\label{kacreal}
Let $k, l$ be nonnegative integers with $k+l\ge 1$.
Assume that $\xi_0, \dots, \xi_n$ and $\tilde \xi_0, \dots, \tilde \xi_n$ are real random variables with mean 0, satisfying Condition {\bf C1} and the polynomials $P_n$, $\tilde P_n$
are Kac polynomials with respect to these variables. Then there exist positive constants $C', c$ depending only on $k, l$ and the constants in Condition {\bf C1} such that the following holds.
For every $0 < \theta_n < 1$, for any real numbers $x_1,\dots, x_k$, and complex numbers $z_1, \dots, z_l$ with $1-2\theta_n \le |x_i|, |z_j|\le 1-\theta_n +1/n$ for all $i, j$, and for any function $G: \mathbb{R}^{k}\times\mathbb{C}^{l}\to \mathbb{C}$ supported on $\prod_{i=1}^{k}[x_i-10^{-3}\theta_n, x_i+10^{-3}\theta_n] \times \prod_{j=1}^{l}B (z_j, 10^{-3}\theta_n)$ with continuous derivatives up to order $2(k+l)+4$ and $\norm{\triangledown^aG}_\infty\le (\theta_n+1/n)^{-a}$ for all $0\le a\le 2(k+l)+4$, we have
\begin{eqnarray}\nonumber
\left |\E\sum G\left (\zeta_{i_1}, \dots, \zeta_{i_k}, \zeta_{j_1} , \dots, \zeta_{j_l}\right)
-\E\sum G\left (\tilde \zeta_{i_1}, \dots, \tilde \zeta_{i_k}, \tilde \zeta_{j_1}, \dots, \tilde \zeta_{j_l}\right) \right |\le C'\theta_n^{c} + C'n^{-c},
\end{eqnarray}
where the first sum runs over all $(k+l)$-tuples $(\zeta_{i_1}, \dots, \zeta_{i_k}, \zeta_{j_1}, \dots, \zeta_{j_l}) \in \R^{k}\times \C_{+}^{l}$ of the roots $\zeta_1, \zeta_2, \dots$ of $P_n$, and the second sum runs over all $(k+l)$-tuples $(\tilde \zeta_{i_1}, \dots, \tilde \zeta_{i_k}, \tilde \zeta_{j_1}, \dots, \tilde \zeta_{j_l}) \in \R^{k}\times \C_{+}^{l}$ of the roots $\tilde \zeta_1, \tilde \zeta_2, \dots$ of $ \tilde P_n$.
\end{theorem}
\begin{remark} \label{Qrmk}
Theorem \ref{kacreal} provides a universality result for the polynomial $P_n$ on the disk $B(0, 1+1/n)$. For the complement of this disk, consider $Q_n(z): = z^{n} P_n(z^{-1})$, which is another Kac polynomial. Since the roots of $Q_n$ are the reciprocals of the roots of $P_n$, the universality of $Q_n$ in $B(0, 1)$ implies the universality of $P_n$ outside the disk $B(0, 1)$.
\end{remark}
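The elementary identity behind this remark is worth recording: for any $\zeta\neq 0$,
$$Q_n(\zeta^{-1}) = \zeta^{-n}\sum_{i=0}^{n}\xi_i \zeta^{i} = \zeta^{-n} P_n(\zeta),$$
so the nonzero roots of $Q_n$ are precisely the reciprocals of the nonzero roots of $P_n$.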
As a corollary, we get the following result on the number of real roots of these polynomials.
\begin{cor}\label{kacmean}
Let $C$ be a positive constant. Assume that the random variables $\xi_i$ are independent (not necessarily identically distributed) real random variables with mean 0, variance 1 and $(2+\ep)$-moments bounded by $C$. Then
$$\E N_{P_n}(\R) =\frac{2}{\pi} \log n +O(1)$$
where the implicit constant depends only on $C$ and $\ep$.
\end{cor}
Theorem \ref{kacreal}
strengthens an earlier result of Tao and the second author \cite{TVpoly}. The result in \cite{TVpoly} only covers
the bulk of the spectrum, namely the region $1-n^{-\ep}\le |x|\le 1+n^{-\ep}$. Restricting to the number of real roots, it yields
$$\E N_{P_n}(\R) = O\left (\log n\right )$$ instead of the more precise (and optimal) estimate in Corollary \ref{kacmean}.
Another new feature is that our result also yields sharp estimates for the size of level sets $\{ z\in \C: P_n (z) =a \}$ for any fixed $a$, since in Theorem \ref{kacreal} and Corollary \ref{kacmean} we can
allow $\xi_0 $ (and in fact any finite number of the $\xi_i$) to have non-zero, bounded mean. Our proofs work without modification under this extension.
The proof in \cite{TVpoly} made use of a deep anti-concentration lemma \cite[Lemma 14.1]{TVpoly} whose proof
relies on the Inverse Littlewood-Offord theory and a weak quantitative version of Gromov's theorem. The proof we will provide here
is simple and almost identical to the one used to treat random trigonometric polynomials in the last section. For random variables having continuous distributions (such as the cases treated by Kac and Stevens mentioned above), the anti-concentration
property (see Remark \ref{anticond}) is immediate.
\begin{remark}
One can routinely modify the proofs of Theorem \ref{kacreal} and Corollary \ref{kacmean} to show that these results hold for more general settings. For example, the proofs can be used to show that these results apply for
$$P_n(x) = \sum_{i=0}^{n} c_i \xi_i x^{i}$$
where the $\xi_i$ are independent (not necessarily identically distributed) random variables satisfying Condition {\bf{C1}} with zero mean and the deterministic coefficients $c_i$ grow polynomially. In particular, these results hold for derivatives of the Kac polynomials of any given order. We leave the details to the interested reader. These results, in this general form, were proven in the previous work \cite{DOV} using much more involved tools and arguments.
\end{remark}
We defer the proofs of Theorem \ref{kacreal} and Corollary \ref{kacmean} to Section \ref{kacproof}.
\section{Application: Universality for Weyl series}\label{app3}
In this section, we discuss an application of our main theorems to Weyl series
$$P(z) = \sum_{j=0}^{\infty} \frac{\xi_j z^{j}}{\sqrt{j!}}$$
where $\xi_j$ are independent complex random variables satisfying the matching condition {\bf C1} with the $\tilde \xi_j$ being standard complex gaussian random variables with density $\frac{1}{\pi} e^{-|z|^{2}}$. In the literature, Weyl series are also referred to as
flat series.
The flat series $\tilde P(z) = \sum_{j=0}^{\infty} \frac{\tilde \xi_j z^{j}}{\sqrt{j!}}$ is also known as the flat Gaussian analytic function and has been studied intensively over the past few decades. See, for example, \cite{HKPV}, \cite{sodin2005zeroes}, \cite{sodin2004random}, and the references therein. Using the Edelman-Kostlan formula \cite{EK}, one can show that for any Borel set $B\subset \C$, the expected number of roots of $\tilde P$ in $B$ is
\begin{equation}\label{flat1} \E N_{\tilde P}(B) = \frac{1}{\pi} m(B) \end{equation}
where $m(B)$ is the Lebesgue measure of $B$.
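For the reader's convenience, here is a brief sketch of how \eqref{flat1} follows; this computation is standard (see, for example, \cite{HKPV}). For a Gaussian analytic function $f$ with covariance kernel $K(z, w) = \E f(z)\overline{f(w)}$, the Edelman-Kostlan formula gives the density of zeros with respect to Lebesgue measure as $\frac{1}{4\pi}\triangle_z \log K(z, z)$. For the flat Gaussian analytic function,
$$K(z, z) = \sum_{j=0}^{\infty} \frac{|z|^{2j}}{j!} = e^{|z|^{2}},$$
so the density is $\frac{1}{4\pi}\triangle |z|^{2} = \frac{1}{\pi}$, and integrating over $B$ gives \eqref{flat1}.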
For general random variables, to compare the distribution of the roots of $P$ with that of $\tilde P$, Kabluchko and Zaporozhets (2014) \cite{kabluchko2014asymptotic} showed that with probability $1$, the rescaled empirical measure $\mu_{r}$ defined by
$$\mu_r(A) = \frac{1}{r}\sum_{\zeta: P(\zeta) = 0} \textbf{1}_{\zeta\in \sqrt r A}$$
converges vaguely\footnote{A sequence of measures $\mu_r$ is said to converge vaguely to a measure $\mu$ if $\lim _{r\to\infty}\int fd\mu_r = \int fd\mu$ for every continuous, compactly supported function $f$.} as $r\to\infty$ to the measure $\frac{1}{\pi}m(\cdot)$, which is, as mentioned above, the corresponding measure for $\tilde P$.
The above-mentioned result of \cite{kabluchko2014asymptotic} concerns the rescaled measures $\mu_r$. Thus, it provides an asymptotically sharp estimate on the number of roots of $P$ in large domains of the form $\sqrt{r} B$, where $r\to \infty$ and $B$ is a fixed ``nice" measurable domain, but it does not give estimates for the number of roots in
domains with fixed area, as in \eqref{flat1}.
Using our framework, we obtain the following result at the local scale.
\begin{theorem}[Universality for random flat series]\label{uni_flat}
Assume that the complex random variables $\xi_j$ satisfy the matching condition {\bf C1} with the $\tilde \xi_j$ being standard complex gaussian random variables and the random variables $\Re(\xi_0), \Im(\xi_0), \Re(\xi_1), \Im(\xi_1), \dots$ are independent. Then there exist positive constants $C, c$ depending only on the constants in Condition {\bf C1} such that the following holds.
For any complex number $z_0$ and for any function $G: \mathbb{C} \to \mathbb{C}$ supported on $ B (z_0, 1)$ with continuous derivatives up to order $6$ and $\norm{\triangledown^aG}_\infty\le 1$ for all $0\le a\le 6$, we have
\begin{eqnarray}
\left |\E\sum G\left (\zeta \right) -\E\sum G\left (\tilde \zeta\right) \right |\le C |z_0|^{-c},\nonumber
\end{eqnarray}
where the first sum runs over all the roots $\zeta_1, \zeta_2, \dots$ of $P$, and the second sum runs over all the roots $\tilde \zeta_1, \tilde \zeta_2, \dots$ of $ \tilde P$.
\end{theorem}
As a corollary, we obtain a sharp estimate on the number of roots in regions with a fixed area.
\begin{cor}\label{mean_flat} Under the assumptions of Theorem \ref{uni_flat}, the following holds for any constant $C>0$. Let $B$ be an angular square $B = \{Re^{i\theta}: R\in [r, r+1],\ \theta\in [\theta_0, \theta_0 + C/r]\}$ for some numbers $r>0$ and $\theta_0$. Then
$$\E N_{P}(B) = \frac{1}{\pi} m (B)+ O(r^{-c})$$
where $c$ and the implicit constant only depend on $C$ and the constants in Condition {\bf C1}.
\end{cor}
The angular square $B$ can be replaced by a disk, a square, or any other nice domain whose indicator function can be well approximated by smooth functions, with only a nominal modification of the proof. Thus, we have a generalization of \eqref{flat1} for flat series with general
random coefficients.
To the best of our knowledge, Theorem \ref{uni_flat} and Corollary \ref{mean_flat} are new. We present a short proof of these results in Section \ref{proof_flat}.
\section{Application: Universality for elliptic polynomials}\label{app4}
In this section, we briefly illustrate how to apply our framework to the elliptic polynomial
$$P_n(z) = \sum_{i=0}^{n}\sqrt{n\choose i} \xi_i z^{i},$$
where the $\xi_j$ are independent real random variables satisfying the matching condition {\bf C1} with the $\tilde \xi_j$ being standard real gaussian random variables.
For the gaussian case, the polynomial $\tilde P_n(z) = \sum_{i=0}^{n}\sqrt{n\choose i} \tilde \xi_i z^{i}$ has exactly $\sqrt n$ real roots in expectation (see, for example, \cite{bleher1997correlations}, \cite{EK}).
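For the reader's convenience, let us indicate the standard computation behind this fact. By the Edelman-Kostlan formula \cite{EK}, the density of real roots of $\tilde P_n$ at $x\in \R$ is $\frac{1}{\pi}\left (\partial_x\partial_y \log K(x, y)\big|_{y=x}\right )^{1/2}$, where $K(x, y) = \sum_{i=0}^{n}{n\choose i} x^{i}y^{i} = (1+xy)^{n}$. Since
$$\partial_x\partial_y \log (1+xy)^{n} = \frac{n}{(1+xy)^{2}},$$
the density equals $\frac{\sqrt{n}}{\pi(1+x^{2})}$, and hence
$$\E N_{\tilde P_n}(\R) = \int_{-\infty}^{\infty} \frac{\sqrt{n}}{\pi (1+x^{2})}\, dx = \sqrt{n}.$$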
In their paper \cite{BD}, among other results, Bleher and Di extended this result to the non-gaussian setting.
\begin{theorem}\cite[Theorem 5.3]{BD}\label{BDthm}
Let $\xi_j$ be iid random variables with mean 0 and variance 1. Assume furthermore that they are continuously distributed with sufficiently smooth density. Then
\begin{equation}
\lim _{n\to \infty }\frac{\E N_{P_n}(\R)}{\sqrt{n}}=1\nonumber.
\end{equation}
\end{theorem}
We refer the reader to the original paper \cite{BD} for the precise description of ``sufficiently smooth".
Later, Tao and the second author in \cite[Theorem 5.6]{TVpoly} showed that the same result holds when the random variables $\xi_j$ are only required to be independent with mean 0, variance 1, and finite $(2+\ep)$-moments. Here we apply our framework to recover these results assuming the more flexible Condition {\bf C1}, which allows a constant number of $\xi_j$ to have non-zero means. Let us first start with a local universality result.
\begin{theorem}[Universality for random elliptic polynomials]\label{uni_elliptic}
Assume that the real random variables $\xi_j$ are independent and satisfy the matching condition {\bf C1} with the $\tilde \xi_j$ being standard real gaussian random variables. Then there exist positive constants $C, c$ depending only on the constants in Condition {\bf C1} such that the following holds.
For any real number $x_0$ with $n^{-1/2+\ep} \le |x_0| \le 1$ and for any function $G: \mathbb{C} \to \mathbb{C}$ supported on $[x_0- 1/\sqrt{n}, x_0+1/\sqrt{n}]$ with continuous derivatives up to order $6$ and $\norm{\triangledown^aG}_\infty\le n^{a/2}$ for all $0\le a\le 6$, we have
\begin{eqnarray}
\left |\E\sum G\left (\zeta \right) -\E\sum G\left (\tilde \zeta\right) \right |\le C n^{-c},\nonumber
\end{eqnarray}
where the first sum runs over all roots $\zeta_1, \zeta_2, \dots$ of $P_n$, and the second sum runs over all the roots $\tilde \zeta_1, \tilde \zeta_2, \dots$ of $ \tilde P_n$.
\end{theorem}
\begin{remark}\label{remark_elliptic}
If $P_n$ satisfies the assumptions of Theorem \ref{uni_elliptic}, then so does the polynomial $Q_n(z) = z^{n}P_n\left (\frac{1}{z}\right ) = \sum_{i=0}^{n} \sqrt{n\choose i} \xi_{n-i} z^{i}$. Since the roots of $Q_n$ are the reciprocals of the roots of $P_n$, the conclusion of Theorem \ref{uni_elliptic} for $Q_n$ yields the corresponding universality result for $P_n$ on the domain $1\le |x_0| \le n^{1/2-\ep}$.
\end{remark}
Thanks to this remark, our result proves universality on the domain $ n^{-1/2+\ep}\le |x_0| \le n^{1/2-\ep}$. By showing that the contribution outside of this domain is negligible, we obtain the following more quantitative version of Theorem \ref{BDthm}.
\begin{cor}\label{mean_elliptic} Under the assumption of Theorem \ref{uni_elliptic}, we have
$$\E N_{P_n}(\R)=\sqrt{n} + O(n^{1/2-c})$$
where $c$ and the implicit constant only depend on the constants in Condition {\bf C1}.
\end{cor}
We give a short proof of these results in Section \ref{proof_elliptic}.
\section{Application: Universality for Random Taylor series} \label{app5}
Let $\Gamma$ denote the Gamma function. In a recent preprint \cite{FK}, Flasche and Kabluchko considered the following random series
$$P(x) = \sum_{k=0}^{\infty} \xi_k c_k x^{k}$$
where the $c_k$ are real deterministic coefficients such that
$$c_k^{2}=\frac{k^{\gamma-1}}{\Gamma(\gamma)}L(k)$$
for some constant $\gamma>0$ and some function $L: (0, \infty)\to \R$ satisfying $L(t)>0$ for sufficiently large $t$ and $\lim _{t\to\infty} \frac{L(\lambda t)}{L(t)}=1$ for all $\lambda>0$. For example, $L(x)$ can be any fixed power of $\log x$.
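To see that powers of the logarithm qualify, note that for $L(t) = (\log t)^{\beta}$ with a fixed $\beta\in\R$ and any $\lambda > 0$,
$$\frac{L(\lambda t)}{L(t)} = \left (\frac{\log \lambda + \log t}{\log t}\right )^{\beta} \longrightarrow 1 \qquad \text{as } t\to\infty,$$
so such $L$ is indeed slowly varying.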
We follow the terminology in \cite{FK} and call such a function $L$ a {\it slowly varying} function and the function $P$ a {\it random series with regularly varying coefficients}. The following is the main result of \cite{FK}.
\begin{theorem}\cite[Theorem 1.1]{FK}\label{kacseries_thm}
Assume that the random variables $\xi_k$ are iid real random variables with zero mean and unit variance. Then
$$\lim _{r\uparrow 1} \frac{\E N_{P}[0, r]}{-\log(1-r)}=\frac{\sqrt\gamma}{2\pi}.$$
\end{theorem}
We reprove Theorem \ref{kacseries_thm} under the (slightly different) assumption that the random variables $\xi_k$ are independent (not necessarily identically distributed) real random variables with zero mean, unit variance, and uniformly bounded $(2+\ep)$-moments. As usual, we allow a few of the random variables to have nonzero, bounded means, so our result also applies to level sets. Our method also yields a polynomial rate of convergence.
As before, we obtain this as a corollary of a stronger theorem establishing the
local universality of the roots. Let
$$\tilde P(x) = \sum_{k=0}^{\infty} \tilde \xi_k c_k x^{k}$$
where the $\tilde \xi_k$ are independent standard gaussian.
\begin{theorem}[Universality for random series with regularly varying coefficients]\label{kacseries_uni}
Let $k, l$ be nonnegative integers with $k+l\ge 1$. Assume that the real random variables $\xi_j$ are independent and satisfy the matching condition {\bf C1} with the $\tilde \xi_j$ being standard real gaussian random variables. There exist positive constants $C', c$ depending only on the constants in Condition {\bf C1} such that the following holds.
Let $0 < \delta < 1$, and let $x_1,\dots, x_k$ be real numbers and $z_1, \dots, z_l$ be complex numbers satisfying $1-2\delta \le |x_i|, |z_j|\le 1-\delta$ for all relevant $i,j$. Let $G: \mathbb{R}^{k}\times\mathbb{C}^{l}\to \mathbb{C}$ be a function supported on $\prod_{i=1}^{k}[x_i-10^{-3}\delta, x_i+10^{-3}\delta] \times \prod_{j=1}^{l}B (z_j, 10^{-3}\delta)$ with continuous derivatives up to order $2(k+l)+4$ and $\norm{\triangledown^aG}_\infty\le \delta^{-a}$ for all $0\le a\le 2(k+l)+4$. Then
\begin{eqnarray}
\left |\E\sum G\left (\zeta_{i_1}, \dots, \zeta_{i_k}, \zeta_{j_1} , \dots, \zeta_{j_l}\right)
-\E\sum G\left (\tilde \zeta_{i_1}, \dots, \tilde \zeta_{i_k}, \tilde \zeta_{j_1}, \dots, \tilde \zeta_{j_l}\right) \right |\le C'\delta^{c},\nonumber
\end{eqnarray}
where the first sum runs over all $(k+l)$-tuples $(\zeta_{i_1}, \dots, \zeta_{i_k}, \zeta_{j_1}, \dots, \zeta_{j_l}) \in \R^{k}\times \C_{+}^{l}$ of the roots $\zeta_1, \zeta_2, \dots$ of $P$, and the second sum runs over all $(k+l)$-tuples $(\tilde \zeta_{i_1}, \dots, \tilde \zeta_{i_k}, \tilde \zeta_{j_1}, \dots, \tilde \zeta_{j_l}) \in \R^{k}\times \C_{+}^{l}$ of the roots $\tilde \zeta_1, \tilde \zeta_2, \dots$ of $ \tilde P$.
\end{theorem}
\begin{cor}\label{kacseries_cor}
Under the assumption of Theorem \ref{kacseries_uni}, there exist positive constants $C'$ and $c$ such that the following hold.
\begin{enumerate}
\item For any $r\in (0, 1)$,
$$\left|\E N_{P}[0, r] - \E N_{\tilde P}[0, r] \right |\le C'$$
where $N_{P}[0, r]$ and $N_{\tilde P}[0, r]$ are the number of real roots of $P$ and $\tilde P$ in $[0, r]$, respectively.
\item We have
$$\lim _{r\uparrow 1} \frac{\E N_{P}[0, r]}{-\log(1-r)}=\frac{\sqrt\gamma}{2\pi}.$$
\end{enumerate}
\end{cor}
We prove Theorem \ref{kacseries_uni} and Corollary \ref{kacseries_cor} in Section \ref{kacseries_proof}.
\section{Proof of Theorems \ref{gcomplex} and \ref{greal}}\label{app1_proof_1}
Before starting the proofs, let us mention two versions of Jensen's inequality that we use several times in this manuscript. It will be clear from the context which one is being used.
The first, and perhaps more popular, Jensen's inequality relates the value of a convex function of an integral to the integral of the convex function. In particular, for any convex function $\phi: \R \to \R$ and any real integrable random variable $X$, we have
$$\phi\left (\E(X)\right )\le \E \phi(X).$$
The second Jensen's inequality provides an upper bound on the number of roots of an analytic function. Assume that $f$ is an analytic function on an open domain that contains the closed disk $\bar B(z, R)$. Then for any $r<R$, we have
\begin{equation}
N(B(z, r))\le \frac{\log \frac{M}{m}}{\log\frac{R^{2}+r^{2}}{2Rr}}\label{jensenbound}
\end{equation}
where $N(B(z, r))$ is the number of roots of $f$ in the open disk $B(z, r)$ and $M = \max_{w\in \bar B(z, R)} |f(w)|$, $m = \max_{w\in \bar B(z, r)} |f(w)|$. For completeness, we include a short proof of this inequality in Appendix \ref{proof_jensen}.
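As an illustration of how \eqref{jensenbound} is typically applied below, take, say, $r = 1/5$ and $R = 2$: then $\log \frac{R^{2}+r^{2}}{2Rr} = \log \frac{101}{20} > 1$, so
$$N(B(z, 1/5)) \le \log \frac{M}{m},$$
the logarithmic oscillation of $|f|$ between the two disks.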
\subsection{Proof of Theorem \ref{gcomplex}}\label{pgcomplex}
We first state a few lemmas. The first lemma reduces the theorem to the case when the function $G$ {\it splits}, namely $G$
is a product of functions of a single variable. In many applications, $G$ automatically takes this form.
This lemma was proved in \cite{TVpoly}. We include a short proof in Appendix \ref{fourier_proof}.
\begin{lemma}\label{fourier}
If Theorem \ref{gcomplex} holds for every function $G$ of the form
\begin{equation}\label{h2}
G(w_1,\dots, w_m) = G_1(w_1)\dots G_k(w_k)
\end{equation} where for each $1\le i\le k$, $G_i:\mathbb{C}\to \mathbb{C}$ is a function supported in $B(z_i, 1/50)$ with continuous derivatives up to order $3$ and $\norm{\triangledown^aG_{i}}_\infty\le 1$ for all $0\le a\le 3$, then it holds for any function $G$ satisfying the hypothesis of Theorem \ref{gcomplex}. Similarly for Theorem \ref{greal}.
\end{lemma}
The next lemma plays a critical role in our approach, as it shows that the zero pole problem (see the discussion in the last subsection of Section \ref{framework}) can be dealt with assuming anti-concentration at a single point.
\begin{lemma}\label{2norm}
Let $0<\delta_n, c_2<1$ and let $F_n$ be an entire function with $|F_n(w)|\ge \exp(-\delta_n^{-c_2})$ for some complex number $w$ and $|F_n(z)|\le \exp(\delta_n^{-c_2})$ for all $z\in B(w, 3/2)$. Then
\begin{equation}
\int_{B(w, 1/2)} \left |\log\left |F_n(z)\right |\right |^{2} dz \le 720^2 \times\delta_n^{-6c_2}.\nonumber
\end{equation}
\end{lemma}
The constant $720^2= 518400$ is for explicitness and plays no specific role. Both this and the constant $6$ in the exponent can be reduced but we make no attempt to optimize these constants. We include the proof in Appendix \ref{2norm_proof}.
The following lemma shows that the logarithm of $|F_n|$ satisfies a universality property. It is a variant
of a lemma in \cite{TVpoly}, and we include the proof in Appendix \ref{logcomp_proof}.
\begin{lemma}\textbf{(Log-comparability)}\label{logcomp}
Assume that the coefficients $\xi_i$ and $\tilde \xi_i$ satisfy Condition {\bf C1} for some constants $N_0, \ep, \tau$. Let $\alpha_1$ be a positive constant and $k$ be a positive integer. Assume that there exists a constant $C>0$ such that the random functions $F_n$ and $\tilde F_n$ satisfy Condition {\bf C2} \eqref{cond-delocal} with parameters $\alpha_1$ and $C$. There exist positive constants $\alpha_0$ and $C'$ such that for any ${z_1}, \dots, {z_k}\in D_n + B(0, 1/10)$, and function $K:\mathbb{C}^k\to \mathbb{C}$ with continuous derivatives up to order $3$ and $\norm{\triangledown^a K}_\infty\le \delta_n^{-\alpha_0}$ for all $0\le a\le 3$, we have
\[\big|\E K\big(\log|F_n(z_1)|, \dots, \log|F_n(z_{k})|\big)-\E K\big(\log|\tilde {F_n}(z_1)|, \dots, \log|\tilde {F_n}(z_{k})|\big) \big|\le {C'}\delta_n^{\alpha_0}.
\]
\end{lemma}
\begin{remark} \label{alpha0}
Following the proof, one can set $\alpha_0 = \frac{3\alpha_1\ep }{10^{3}}$.
\end{remark}
\begin{proof}[Proof of Theorem \ref{gcomplex}]
By Lemma \ref{fourier}, we can assume that the function $G$ has the form \eqref{h2}. We need to show that
\begin{eqnarray}
\ab{\E \prod_{j=1}^{k} \left (\sum_{i} G_j (\zeta_i)\right )-\E \prod_{j=1}^{k} \left (\sum_{i} G_j (\tilde \zeta_i)\right )}\le C'\delta_n^{c},\label{du5}
\end{eqnarray}
for some constant $c >0$. By Green's formula, we have
\begin{equation}
\sum_{i} G_j({\zeta}_i)= \int_{\mathbb C}\log |F_n(z)|H_j(z)dz = \int_{B( z_j, 1/10)}\log |F_n(u_j)|H_j(u_j)du_j,\label{sat1}
\end{equation}
where $H_j(z) = -\frac{1}{2\pi}\triangle G_j(z)$. Note that $\mathrm{supp}(H_j)\subset B( z_j, 1/10)$ and $\norm{H_j}_\infty\le 1$, thanks to the assumption on $G$ in Theorem \ref{gcomplex}. (As usual, $\| f\|_\infty = \sup_{z \in \C } | f(z)| $.) When $F_n$ is identically 0, we adopt the convention that both sides of \eqref{sat1} are 0.
Let $A$ be a sufficiently large constant and $c_1$ be a sufficiently small positive constant. For this proof,
it suffices to set $c_1 := \frac{\alpha_0}{300 k^{2}}$ and $A := 2kC_1 + \frac{\alpha_1\ep}{60}$. This choice, together with the value of $\alpha_0$ in Remark \ref{alpha0}, yields the explicit values of $A$ and $c_1$ in the theorem.
Let $\bar c_1 := 100k c_1$. The power $c$ in \eqref{du5} can be chosen (quite generously) to be $c_1$.
Let $K:\R \to \R$ be a smooth function with the following properties
\begin{itemize}
\item $K$ is supported on the interval $[-2\delta_n^{-\bar c_1}, 2\delta_n^{-\bar c_1}]$
\item $K(x) = x$ for all $x\in [-\delta_n^{-\bar c_1}, \delta_n^{-\bar c_1}]$
\item $||K^{(a)}||_\infty = O\left (\delta_n^{-\bar c_1}\right )$ for all $0\le a\le 3$ (where $K^{(a)}$ denotes the $a$-th derivative of $K$).
\item $|K(x)|\le |x|$ for all $x\in \R$. \end{itemize}
\begin{remark} \label{K} It is not hard to show that such a function $K$ exists. In fact, one can construct $K$ explicitly, but we do not need an explicit formula for our proof; the same applies to many similar arguments later on. \end{remark}
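For concreteness, one admissible (and standard) choice is the following: fix a smooth function $\chi: \R\to [0,1]$ with $\chi \equiv 1$ on $[-1, 1]$ and $\mathrm{supp}(\chi)\subset [-2, 2]$, and set
$$K(x) := x\, \chi\left (\delta_n^{\bar c_1} x\right ).$$
Then $K$ is supported on $[-2\delta_n^{-\bar c_1}, 2\delta_n^{-\bar c_1}]$, agrees with $x$ on $[-\delta_n^{-\bar c_1}, \delta_n^{-\bar c_1}]$, and satisfies $|K(x)|\le |x|$; moreover, by the Leibniz rule, $\|K^{(a)}\|_\infty = O(\delta_n^{-\bar c_1})$ for all $0\le a\le 3$, since $\|\chi^{(a)}\|_\infty = O(1)$ and $\delta_n^{\bar c_1}\le 1$.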
Let $\Gamma := \prod_{j=1}^{k} B (z_j, 1/10)$ and $H(u) := \prod_{j=1}^{k} H_j(u_j)$ for $u :=(u_1, \dots, u_k)$.
By \eqref{sat1}, we have
\begin{eqnarray}
\E \prod_{j=1}^{k} \left (\sum_{i} G_j (\zeta_i)\right ) = \E \int_{\Gamma} H(u)\prod_{j=1}^{k}\log |F_n(u_j)| du &=& A_1+A_2\nonumber
\end{eqnarray}
where
$$A_1 := \E\int_{\Gamma} H(u)\prod_{j=1}^{k}K(\log|F_n(u_j)|) du,$$
$$A_2 := \E\int_{\Gamma} H(u)\left [\prod_{j=1}^{k}\log |F_n(u_j)| -\prod_{j=1}^{k} K(\log|F_n(u_j)|) \right ] du.$$
Let $\tilde A_1$ and $\tilde A_2$ be the corresponding terms for $\tilde F_n$. Our goal is to show that
\begin{equation} \label{goal1} A_1 + A_2 - \tilde A_1 - \tilde A_2 = O\left (\delta_n^{c}\right ).\end{equation}
By Lemma \ref{logcomp}, we have $A_1 - \tilde A_1 = O\left (\delta_n^{\bar c_1}\right )$.
We next show that both $A_2$ and $\tilde A_2$ are of order $O\left (\delta_n^{ c_1}\right )$. It suffices to consider $A_2$, as the treatment of $\tilde A_2$ is similar.
Let $\mathcal A_0$ be the event on which the following two properties hold
\begin{itemize}
\item For all $1\le j\le k$, $|F_n(z'_j)|\ge \exp(-\delta_n^{-c_1})$ for some $z_j'\in B(z_j, 1/100)$
\item $|F_n(z)|\le \exp(\delta_n^{-c_1})$ for all $z\in B(z_j, 2)$. \end{itemize}
By conditions {\bf C2} \eqref{cond-smallball} and {\bf C2} \eqref{cond-bddn}, $\P(\mathcal A_0^{c}) \le C\delta_n^{A}$, where $\mathcal A_0^{c}$ is the complement of $\mathcal A_0$.
We next break up $A_2$ as follows
\begin{eqnarray}
A_2 &=& \E\int_{\Gamma} H(u)\left [\prod_{j=1}^{k}\log |F_n(u_j)| -\prod_{j=1}^{k} K(\log|F_n(u_j)|) \right ] du \textbf{1}_{\mathcal A_0}+\E\int_{\Gamma} H(u) \prod_{j=1}^{k}\log |F_n(u_j)| du\textbf{1}_{\mathcal A_0^{c}} \nonumber\\
&& - \E\int_{\Gamma} H(u) \prod_{j=1}^{k} K(\log|F_n(u_j)|) du\textbf{1}_{\mathcal A_0^{c}} =: A_3 + A_4 - A_5\nonumber.
\end{eqnarray}
For $A_5$, since $\| K\|_\infty \le 2 \delta_n^{-\bar c_1}$ by construction and $A\ge 2k\bar c_1$, we have
$$|A_5| = O\left (\delta_n^{-k\bar c_1}\P(\mathcal A_0^{c})\right )= O\left (\delta_n^{A -k\bar c_1}\right ) =O(\delta_n^{\bar c_1})
= O(\delta_n^{c_1}).$$
To bound $A_4$, notice that from \eqref{sat1}
$$\left |\int_{B(z_j, 1/100)} \log|F_n(u_j)|H_j(u_j)du_j\right |\le N_{F_n}(B(z_j, 1/100))=: N_j.$$
By H{\"o}lder's inequality,
$$|A_4|\le \prod_{j=1}^{k} \left (\E N_j^{k}\textbf{1}_{\mathcal A_0^{c}}\right )^{1/k}.$$
We bound each term on the right using H{\"o}lder's inequality as follows
$$\E N_j^{k}\textbf{1}_{\mathcal A_0^{c}} \le \delta_n^{- kC_1}\P(\mathcal A_0^{c}) + \left (\E N_j^{k+1}\textbf{1}_{N_j\ge \delta_n^{-C_1}}\right )^{k/(k+1)}\left (\P\left (\mathcal A_0^{c}\right )\right )^{1/(k+1)} . $$
By our setting $A\ge kC_1 + (k+1)\bar c_1$, the first term on the right-hand side is $O(\delta_n^{c_1})$.
Moreover, condition {\bf C2} \eqref{cond-poly} implies that the second term is $O(\P\left (\mathcal A_0^{c}\right )^{1/(k+1)} ) = O( \delta_n ^{c_1}) $. Thus, $A_4 =O (\delta_n^{c_1})$.
Finally, to bound $A_3$, we let $B$ be the (random) set of all $u\in \Gamma$ on which $\left |\log|F_n(u_j)|\right |\ge \delta_n^{-\bar c_1}$ for some $j$. Notice that if $u = (u_1, \dots, u_k ) \notin B$, then
$K (\log |F_n (u_j)| ) = \log |F_n (u_j)|$ by the properties of $K$ and the definition of $B$.
Moreover, for $u \in B$, $| K (\log |F_n (u_j)| ) | \le| \log |F_n (u_j)||$ as $|K(x)| \le |x| $ for all $x$.
It follows that
\begin{eqnarray}
|A_3|&\le& 2 \E\int_{\Gamma} \left |\prod _{j=1}^{k}\log |F_n(u_j)| \right | \textbf{1}_{B}(u)du \textbf{1}_{\mathcal A_0}.
\end{eqnarray}
By H\"older's inequality, the right-hand side is at most
$$ 2 \left [\E\int_{\Gamma} \left |\prod _{j=1}^{k}\log |F_n(u_j)| \right | ^{2} du \textbf{1}_{\mathcal A_0} \right ]^{1/2} \left [\E\int_{\Gamma} \textbf{1}_{B}(u) du \textbf{1}_{\mathcal A_0} \right ]^{1/2}. $$
By Lemma \ref{2norm}, on the event $\mathcal A_0$, we have
\begin{equation} \label{onA0} \int_{B(z_j, 1/100)} \left |\log |F_n(u_j)| \right | ^{2} du_j =O(\delta_n^{-6c_1}). \end{equation} It follows that
$$\int_{\Gamma} \left |\prod_{j=1}^{k}\log |F_n(u_j)| \right | ^{2} du =O(\delta_n^{-6kc_1}).$$
On the other hand, by the definition of $B$
$$\int_{\Gamma} \textbf{1}_{B}(u) du = O \left (\sum_{j=1}^{k}\int_{B(z_j, 1/100)} \textbf{1}_{|\log|F_n(u_j)||\ge \delta_n^{-\bar c_1} } du_j \right ). $$
Furthermore,
$$ \int_{B(z_j, 1/100)} \textbf{1}_{|\log|F_n(u_j)||\ge \delta_n^{-\bar c_1} } du_j \le \delta_n^{2\bar c_1}\int_{B(z_j, 1/100)}\left |\log |F_n(z)| \right |^{2} dz. $$
Using \eqref{onA0}, we obtain
$$ \int_{\Gamma} \textbf{1}_{B}(u) du \textbf{1}_{\mathcal A_0} =O( \delta_n^{2\bar c_1 } \delta_n^{-6k c_1}). $$
It follows that
$$|A_3| = O\left ( \left ( \delta_n^{-6k c_1} \times \delta_n^{2\bar c_1 } \delta_n^{-6k c_1} \right )^{1/2}\right )
=O(\delta_n^{\bar c_1 - 6kc_1}) = O(\delta_n^{c_1}) $$ as we set $\bar c_1 > 7k c_1 $. The bounds on
$|A_3|, |A_4|$ and $ |A_5| $ together imply $|A_2|= O(\delta_n ^{c_1}) $, concluding the proof.
\end{proof}
\subsection{Proof of Theorem \ref{greal}} \label{pgreal}
For $1\le i\le k, 1\le j\le l$, let $H_i:\mathbb{R}\to\mathbb{C}$ and $G_j:\mathbb{C}\to \mathbb{C}$ be smooth functions supported on $[x_i-1/50, x_i+1/50]$ and $B(z_j, 1/50)$ (respectively)
satisfying
\[|{\triangledown^{a}H_i}(x)|, |{\triangledown^{a}G_j}(z)|\le 1
\]
for any $x\in \R$, $z\in \C$ and $0\le a\le 3$.
By Lemma \ref{fourier}, we reduce the problem to showing that
\begin{eqnarray}
\ab{\E \left(\prod_{i=1}^{k}X_{i}\right)\left(\prod_{j=1}^{l}Y_{j}\right)-\E \left(\prod_{i=1}^{k}\tilde X_{i}\right)\left(\prod_{j=1}^{l}\tilde Y_{j}\right)}\le C'\delta_n^{\bar c},\label{du6}
\end{eqnarray} for some constants $C', \bar c >0$, where $X_{i} = \sum_{\zeta_s\in\mathbb{R}}H_i(\zeta_s), \tilde X_{i} = \sum_{\tilde \zeta_s\in\mathbb{R}}H_i(\tilde \zeta_s)$, $Y_{j}= \sum_{\zeta_s\in\mathbb{C}_+}G_j(\zeta_s), \tilde Y_{j}= \sum_{\tilde \zeta_s\in\mathbb{C}_+}G_j(\tilde \zeta_s)$. (We use $\bar c$ instead of $c$ to denote the exponent
on the right hand side, since we reserve $c$ for the exponent in Theorem \ref{gcomplex}, which we will use in the proof.)
The proof follows the ideas in \cite{TVpoly}.
The first step is to show that the number of complex zeros near the real axis is small with high probability. Let $c$ be the constant exponent in Theorem \ref{gcomplex} corresponding to $k+l$. Following Remark \ref{rmkconstants}, we can set $c= \frac{\alpha_1\ep }{10^{5}(k+l)^{2}}$.
With this choice of $c$, we set $c_2 := \frac{c}{100}= \frac{\alpha_1\ep }{10^{7}(k+l)^{2}}$ and $\gamma := \delta_n^{c_2}$. Let us also recall that in the statement of this theorem (Theorem \ref{greal}), $c_1= \frac{\alpha_1 \ep}{10^9 (k+l)^4}$, which is much smaller than $c_2$: $c_1= \frac{c_2}{100(k+l)^{2}}$.
\begin{lemma}\label{lmrepulsion} Under the assumptions of Theorem \ref{greal}, we have
$$\P \left(N_{ F_n}(B( x,\gamma))\ge 2\right) = O(\gamma^{3/2}),
\qquad\text{for all } x\in \R \cap \left (D_n + B(0, 1/50)\right )$$
where the implicit constant depends only on the constants in Conditions {\bf C1} and {\bf C2} (but not on $n, \delta_n, D_n$, and $x$).
\end{lemma}
The power $3/2$ in the above lemma is not critical; we only need something strictly greater than 1.
Assuming this lemma, the rest of the proof is relatively simple. For every $1\le i\le k$, consider the strip $S_i := [ x_i - 1/50, x_i + 1/50]\times [-\gamma/4, \gamma/4]$. We can cover $S_i$ by $O(\gamma^{-1})$ disks of the form $B( x, \gamma)$, where $x \in [ x_i-1/50, x_i+1/50]$. Since $F_n$ has real coefficients, if $z$ is a root of $F_n$ in $S_i\backslash \R$, then so is its conjugate $\bar z$. Using Lemma \ref{lmrepulsion} and the union bound, we obtain
\begin{eqnarray}
\P (\text{there is at least one root (equivalently, at least two roots) in } S_i\backslash \mathbb R )
&=& O(\gamma^{-1}\gamma^{3/2}) = O(\gamma^{1/2})\label{du4}.
\end{eqnarray}
Define
$\mathfrak H_i(z) := H_i(\Re (z))\phi \left (\frac{4\Im (z)}{\gamma}\right )$,
where $\phi$ is a smooth function on $\mathbb R$ that is supported on $[-1,1]$, with $\phi(0)=1$ and $\norm{\phi^{(a)}}_{\infty}=O(1)$ for all $0\le a\le 3$. (See Remark \ref{K}.) It is easy to see that $\mathfrak H_i$ is a smooth function supported on $S_i$ with $\| \mathfrak H_i \| _{\infty}\le 1$, and $\big\| \triangledown ^a \mathfrak H_i\big\| _{\infty} = O(\gamma^{-a})$ for $0\le a\le 3$.
Set
$\mathfrak X_i := \sum_{s} \mathfrak H_i(\zeta_s)$ and $D_i := \mathfrak X_{i} - X_{i}$. By the definitions of $\mathfrak X_{i}$ and $X_{i}$, $D_i = \sum_{\zeta_s\notin \R} \mathfrak H_i(\zeta_s)$. Our general strategy is to use $\mathfrak X_i$ to approximate $X_i$, then apply Theorem \ref{gcomplex} to $\mathfrak X_i$ and finish the proof using a triangle inequality.
From \eqref{du4}, $D_{i} = 0$ with probability at least $1 - O(\gamma^{1/2})$. Notice that by definition of $D_i$ and the fact that $\| \mathfrak H_i \| _{\infty}\le 1$,
\begin{equation} \label{Di} |D_{i}| \le N_{ F_n}(B( x_i, 1/5)). \end{equation}
By \eqref{Di} and Jensen's inequality \eqref{jensenbound},
$$|D_{i}|\le N_{ F_n}(B( x_i, 1/5)) =O\left (\log \max_{w\in B(x_i, 2)} |F_n(w)|-\max_{z\in B(x_i, 1/5)}\log|F_n(z)|\right ) .$$
By Conditions {\bf C2} \eqref{cond-smallball} and {\bf C2} \eqref{cond-bddn}, with probability at least $1-O(\delta_n^{A})$, there exists $z\in B(x_i, 1/100)$ with $\log|F_n(z)|\ge -\delta_n^{-c_1}$, while $\log \max_{w\in B(x_i, 2)} |F_n(w)|\le \delta_n^{-c_1}$; thus both terms on the right-hand side are of order $O\left (\delta_n^{-c_1}\right )$. Therefore, with probability at least $1-O(\delta_n^{A})$, we have $|D_i|\le N_{ F_n}(B( x_i, 1/5))\le C'\delta_n^{-c_1}$ for some constant $C'$. For the rest of this proof, we denote $N_i:= N_{ F_n}(B( x_i, 1/5))$.
Our next step is to bound $ \E \ab{D_{i}}^{k+l}$. To start, we have
\begin{equation} \label{real1}
\E \ab{D_{i}}^{k+l}\le \E \left (|D_{i}|^{k+l}\textbf{1}_{N_i\le C'\delta_n^{-c_1}}\right ) + \E \left (N_i^{k+l}\textbf{1}_{ N_i > C'\delta_n^{-c_1}}\right ). \end{equation}
Since $D_{i} = 0$ with probability at least $1 - O(\gamma^{1/2})$,
\begin{equation} \E \left (|D_{i}|^{k+l}\textbf{1}_{N_i\le C'\delta_n^{-c_1}}\right ) =O\left (\delta_n^{-c_1(k+l)}\gamma^{1/2}\right )=O\left (\delta_n^{-c_1(k+l)+c_2/2}\right ) =O\left (\delta_n^{c_1(k+l)^{2}}\right ) \nonumber
\end{equation}
because $c_2\ge 4c_1(k+l)^{2}$.
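Indeed, the exponent comparison is immediate: since $c_2\ge 4c_1(k+l)^{2}$ and $(k+l)^{2}\ge k+l\ge 1$,
\begin{equation*}
-c_1(k+l)+\frac{c_2}{2}\ \ge\ -c_1(k+l)+2c_1(k+l)^{2}\ \ge\ c_1(k+l)^{2}.
\end{equation*}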
For the second term in \eqref{real1}, we further break up the event $ N_i > C'\delta_n^{-c_1}$ into two events
$$\Omega_1:= \left \{\delta_n^{-C_1}\ge N_i > C'\delta_n^{-c_1}\right \} \,\, {\rm and}\,\, \Omega_2:= \left \{ N_i \ge \delta_n^{-C_1}\right \}$$ where $C_1$ is the constant in the statement of Theorem \ref{greal}. We have
$$\E N_i^{k+l}\textbf{1}_{\Omega_1}\le \delta_n^{-C_1(k+l)}\P(\Omega_1) = O\left (\delta_n^{A-C_1(k+l)}\right )=O\left (\delta_n^{c_1(k+l)^{2}}\right ).$$
Moreover, by H{\"o}lder's inequality,
$$\E N_i^{k+l}\textbf{1}_{\Omega_2}\le \P\left(\Omega_2\right )^{\frac{2}{k+l+2}} \left (\E N_i^{k+l+2}\textbf{1}_{\Omega_2}\right )^{\frac{k+l}{k+l+2}} =O\left (\delta_n^{A/(k+l+2)}\right )\left (\E N_i^{k+l+2}\textbf{1}_{\Omega_2}\right )^{\frac{k+l}{k+l+2}}.$$
Under the assumption of Theorem \ref{greal}, Condition {\bf C2} \eqref{cond-poly} holds for the parameter $k+l$, which provides $\E N_i^{k+l+2}\textbf{1}_{\Omega_2} = O(1)$. As we set $A$ much larger than $c_1$, it is easy to check that
$$\E N_i^{k+l}\textbf{1}_{\Omega_2} =O\left (\delta_n^{A/(k+l+2)}\right ) = O\left (\delta_n^{c_1(k+l)^{2}}\right ).$$
Thus,
\begin{equation}
\E \left (N_{ F_n}{B( x_i, 1/5)}\right )^{k+l}\textbf{1}_{N_{ F_n}{B( x_i, 1/5)}\ge C'\delta_n^{-c_1}}=O\left (\delta_n^{A/(k+l+2)}\right )=O\left (\delta_n^{c_1(k+l)^{2}}\right ). \label{boundN}
\end{equation}
Combining all these bounds with \eqref{real1}, we obtain
$$\E |D_i|^{k+l}= O\left (\delta_n^{c_1(k+l)^{2}}\right ).$$
Moreover, from the above bounds, we get
\begin{equation}
\E |{\mathfrak X}_{i}|^{k+l} \le \E N_i^{k+l}= \E N_i^{k+l}\textbf{1}_{N_i\le C'\delta_n^{-c_1}}+\E N_i^{k+l}\textbf{1}_{\Omega_1} + \E N_i^{k+l}\textbf{1}_{\Omega_2} =O\left (\delta_n^{-c_1(k+l)}\right ),\nonumber
\end{equation} where the main contribution comes from the first term. Similarly, $\E |{X}_{i}|^{k+l}=O\left (\delta_n^{-c_1(k+l)}\right )$.
Next, for each $1\le j\le l$, let $\mathfrak G_j(z) := G_j(z)\varphi(\text{Im}(z)/\gamma)$ where $\varphi$ is a smooth function on $\R$ supported on $[1/2, \infty)$ with $\varphi=1$ on $[1, \infty)$ and $\norm{\varphi^{(a)}}_{\infty}=O(1)$ for all $0\le a\le 3$; see Remark \ref{K}.
Set $\mathfrak Y_j := \sum_{s}\mathfrak G_j(\zeta_s)$. By similar reasoning, we have $\E|\mathfrak Y_j - Y_j|^{k+l}=O\left (\delta_n^{c_1(k+l)^{2}}\right )$ and
$$\max\left \{\E |\mathfrak Y_j|^{k+l}, \E |Y_j|^{k+l}\right \} = O\left (\delta_n^{-c_1(k+l)}\right ).$$
Now, we show that the difference $\E\left |(\prod_{i=1}^{k}X_{i})(\prod_{j=1}^{l}Y_{j}) - (\prod_{i=1}^{k}\mathfrak X_{i})(\prod_{j=1}^{l}\mathfrak Y_{j} )\right |$ is small. Using a ``telescoping sum'' argument, we decompose the difference inside the absolute value sign into a sum of $k+l$ differences, in each of which exactly one of $X_1, \dots, X_k, Y_1, \dots, Y_l$ is replaced by its counterpart, and then use the triangle inequality to finish.
Let us bound the first difference; the argument for the rest is identical. By H{\"o}lder's inequality and the previous bounds on $D_i$, $X_i$, and $Y_j$, we have
\begin{eqnarray}
\E\left |X_1(\prod_{i=2}^{k}X_{i})(\prod_{j=1}^{l}Y_{j}) - \mathfrak X_{1}(\prod_{i=2}^{k} X_{i})(\prod_{j=1}^{l} Y_{j} )\right |&\le& \left (\E|D_1|^{k+l}\right )^{\frac{1}{k+l}}\prod_{i=2}^{k} \left (\E|X_i|^{k+l}\right )^{\frac{1}{k+l}}\prod_{j=1}^{l}\left (\E|Y_{j}|^{k+l}\right )^{\frac{1}{k+l}}\nonumber\\
&=&O\left (\delta_n^{c_1(k+l)}\prod_{k+l-1 \mbox{ terms}} \delta_n^{-c_1}\right )= O\left (\delta_n^{c_1}\right ).\nonumber
\end{eqnarray}
Thus,
\begin{equation}
\E\left |(\prod_{i=1}^{k}X_{i})(\prod_{j=1}^{l}Y_{j}) - (\prod_{i=1}^{k}\mathfrak X_{i})(\prod_{j=1}^{l}\mathfrak Y_{j} )\right |=O\left (\delta_n^{c_1}\right ).\nonumber
\end{equation}
We can obtain the same bound for the corresponding terms of $\tilde F_n$.
Finally, from Theorem \ref{gcomplex}, we have
\begin{equation}
\left |\E(\prod_{i=1}^{k}\mathfrak X_{i})(\prod_{j=1}^{l}\mathfrak Y_{j} )-\E(\prod_{i=1}^{k}\tilde {\mathfrak X_{i}})(\prod_{j=1}^{l}\tilde{ \mathfrak Y_{j}} )\right |= O\left (\delta_n^{c_1}\right )\nonumber.
\end{equation}
The desired estimate now follows from the triangle inequality.
\begin{proof}[Proof of Lemma \ref{lmrepulsion}]
The first step is to use
Theorem \ref{gcomplex} to reduce to the gaussian case. We handle the gaussian case using Rouch\'e's theorem and various probabilistic
estimates based on some properties of the gaussian distribution.
For this proof, we let $\tilde \xi_1, \dots, \tilde \xi_n$ be gaussian random variables with unit variance and satisfying $\E \tilde \xi_i = \E \xi_i$ for each $1\le i\le n$.
Let $H:\C\to [0, 1]$ be a non-negative smooth function supported on $B(x, 2\gamma)$, such that $H=1$ on $B(x, \gamma)$ and $|\triangledown ^a H|\le C\gamma^{-a}$ for all $0\le a\le 8$; see Remark \ref{K}.
Applying Theorem \ref{gcomplex} to $H$, we obtain
\begin{equation} \label{roots1}
\P (N_{ F_n}{B( x, \gamma)}\ge 2)\le
\E \sum_{i\neq j} H( \zeta_i)H( \zeta_j) \le\E \sum_{i\neq j} H( {\tilde \zeta_i})H( \tilde {\zeta}_j)+ O(\delta_n^c\gamma^{-8}). \end{equation}
The definition of $\gamma $ guarantees (via a trivial calculation) that $O( \delta_n^c\gamma^{-8})
=O(\gamma^{3/2})$, with room to spare. Thus, it remains to show
\begin{equation} \label{roots2} \E \sum_{i\neq j} H( {\tilde \zeta_i})H( \tilde {\zeta}_j) =O(\gamma^{3/2}). \end{equation}
Set $N := N_{ {\tilde F_n}}{B( x, 2\gamma)}$;
we bound the LHS of \eqref{roots2} from above by
\begin{equation} \label{roots3} \E N^{2}\textbf{1}_{ N\ge C'\delta_n^{-c_1}} + \E N(N-1) \textbf{1}_{ N < C'\delta_n^{-c_1}}. \end{equation}
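Here we used the fact that $0\le H\le 1$ and that $H$ is supported on $B(x, 2\gamma)$, so that pointwise
\begin{equation*}
\sum_{i\neq j} H( \tilde \zeta_i)H( \tilde \zeta_j)\ \le\ \#\left \{(i,j):\ i\neq j,\ \tilde \zeta_i, \tilde \zeta_j\in B(x, 2\gamma)\right \}\ =\ N(N-1)\ \le\ N^{2}.
\end{equation*}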
Using the same argument as in the proof of \eqref{boundN}, we can show that $$\E N^{2}\textbf{1}_{ N\ge C'\delta_n^{-c_1}}=O\left (\delta_n^{A/(k+l+2)}\right)=O(\gamma^{3/2}).$$
Thus, it remains to show that $\E N(N-1) \textbf{1}_{ N < C'\delta_n^{-c_1}} = O(\gamma^{3/2}) $. Since
$$ \E N(N-1) \textbf{1}_{ N < C'\delta_n^{-c_1}} \le C'^{2}\delta_n^{-2 c_1} \P( N \ge 2), $$ it suffices to
prove
\begin{equation}
\P (N \ge 2) = \P(N_{\tilde{F_n}}{B(x,2\gamma)}\ge 2) =O(\delta_n^{2c_1}\gamma^{3/2}). \label{repulsion_gau}
\end{equation}
Thus, we have reduced the problem to the gaussian setting.
Let $g(z) := \tilde F_n(x) + \tilde F_n'(x)(z-x)$ and $p(z) := \tilde F_n(z) - g(z)$.
Since for any fixed $x$, $\tilde F_n(x) \tilde F_n '(x) \neq 0$ with probability 1, $g(z)$ has exactly one
root. Thus, by Rouch\'{e}'s theorem,
$$\P(N_{\tilde{F_n}}{ B(x,2\gamma)}\ge 2) \le \P\left (\min_{z\in \partial B(x, 2\gamma)}|g(z)|\le \max_{z\in\partial B(x, 2\gamma)}|p(z)|\right ).$$
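To see this, note that if $\max_{z\in\partial B(x, 2\gamma)}|p(z)| < \min_{z\in \partial B(x, 2\gamma)}|g(z)|$, then Rouch\'e's theorem applies to the pair $(g, p)$ on $B(x, 2\gamma)$ and gives
\begin{equation*}
N_{\tilde{F_n}}{B(x, 2\gamma)} = N_{g+p}{B(x, 2\gamma)} = N_{g}{B(x, 2\gamma)}\le 1 \quad \text{almost surely},
\end{equation*}
so the event $N_{\tilde{F_n}}{B(x, 2\gamma)}\ge 2$ is contained in the event on the right-hand side.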
In the rest of the proof, we bound the right-hand side. We are going to show that with (appropriately) high probability, $\min_{z\in \partial B(x, 2\gamma)}|g(z)|$ is not too small and $ \max_{z\in\partial B(x, 2\gamma)}|p(z)|$ is not too large.
For every $z\in B(x, 4\gamma)$, we have $p(z) = \sum_{j=1}^{n} \tilde \xi_j v_j(z)$ where
\begin{eqnarray}
|v_j(z)| &\le& |z-x|^{2}\sup _{w\in B(x, 2\gamma)} |\phi_j''(w)| = O\left (\gamma^{2} \sup _{w\in B(x, 2\gamma)} |\phi_j''(w)|\right ).\nonumber
\end{eqnarray}
By Condition {\bf C2} \eqref{cond-repulsion},
\begin{equation} \label{exp1} |\E p(z)|=O\left (\gamma^{2} \sum_{j=1}^{n} |\E \tilde \xi_j|\sup_{w\in B(x, 1)}|\phi_j''(w)|\right )=O\left (\delta_n ^{2c_2-c_1}\sqrt{\sum_{j=1}^{n} |\phi_j(x)|^{2}}\right ), \end{equation} and
\begin{equation}
\Var (p(z))=O\left ( \gamma^{4} \sum_{j=1}^{n}\sup _{w\in B(x, 2\gamma)} |\phi_j''(w)|^{2}\right) = O\left (\delta_n^{4c_2-c_1} \sum_{j=1}^{n} |\phi_j(x)|^{2}\right) = O\left (\delta_n^{4c_2-c_1} \Var (\tilde F_n (x))\right)\label{varp}.
\end{equation}
Set $t := \delta_n^{2c_2-c_1} \sqrt{\Var (\tilde F_n (x))}$. The previous estimates show that $|\E p(z)| = O(t)$ and $\Var(p(z)) = O(t^{2}\delta_n^{c_1})$ for all $z\in B(x, 4\gamma)$. We will show the following concentration inequality:
\begin{equation}
\P\left (\max_{z\in\partial B(x, 2\gamma)} |p(z)-\E p(z)|\ge \frac{1}{2}t\right )=O(1)\exp\left (-\frac{t^{2}}{100\max_{z\in B(x, 4\gamma)}\Var (p(z))}\right )=O\left (\gamma^{16/10}\delta_n^{2c_1}\right).\label{concenp1}
\end{equation}
Set $\bar p(z) := p(z)-\E p(z)$. For any $z\in \partial B(x, 2\gamma)$, by Cauchy's integral formula,
\begin{eqnarray}
|\bar p(z)|&\le& \int_0^{2\pi}\frac{|\bar p(x + 4\gamma e^{i\theta})|}{|z - x - 4\gamma e^{i\theta}|}4\gamma\frac{d\theta}{2\pi} \le 2\int_0^{2\pi}|\bar p(x + 4\gamma e^{i\theta})|\frac{d\theta}{2\pi}\nonumber\\
&\le&\max_{w\in B(x, 4\gamma)}\sqrt{\Var (p(w))}\int_0^{2\pi}\frac{|\bar p(x + 4\gamma e^{i\theta})|}{\sqrt{\Var (\bar p(x + 4\gamma e^{i\theta}))}}\frac{d\theta}{2\pi}\nonumber.
\end{eqnarray}
Hence, by Markov's inequality,
\begin{equation}
\P(\max_{z\in \partial B(x, 2\gamma)} |\bar p(z)|\ge t)\le \E\left (\exp\left (\int_0^{2\pi}\frac{|\bar p(x + 4\gamma e^{i\theta})|}{10\sqrt{\Var (\bar p(x + 4\gamma e^{i\theta}))}}\frac{d\theta}{2\pi}\right )^{2}\right )e^{-t^{2}/100\max_{z\in B(x, 4\gamma)}\Var (p(z))}.\nonumber
\end{equation}
Using Jensen's inequality for convex functions and Fubini's theorem, we obtain
\begin{eqnarray}
&&\E\left (\exp\left (\int_0^{2\pi}\frac{|\bar p(x + 4\gamma e^{i\theta})|}{10\sqrt{\Var (\bar p(x + 4\gamma e^{i\theta}))}}\frac{d\theta}{2\pi}\right )^{2}\right )\le \int_0^{2\pi}\E\exp\left (\frac{|\bar p(x + 4\gamma e^{i\theta})|^{2}}{100 {\Var (\bar p(x + 4\gamma e^{i\theta}))}}\right )\frac{d\theta}{2\pi}.\nonumber
\end{eqnarray}
The right-hand side is $O(1)$ by basic properties of the gaussian distribution. (Notice that
$p(z)$, for any fixed $z$, is a gaussian random variable.) This proves \eqref{concenp1}. Using the bound $|\E p(z)| = O(t)$ for all $z\in B(x, 2\gamma)$, one concludes that with probability at least $1 - O\left (\gamma^{16/10}\delta_n^{2c_1}\right)$,
\begin{equation} \label{maxx1} \max_{z\in \partial B(x, 2\gamma)} |p(z)| \le Kt, \end{equation}
for some constant $K>0$.
Now, we address $g(z)$; since $g$ is a linear function with real coefficients, we have
$$\min_{z\in \partial B(x, 2\gamma)} |g(z)| = \min \{|g(x - 2\gamma)|, |g(x + 2\gamma)| \}, $$ which reduces the task to obtaining lower bounds for the two end points only.
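To verify this identity, write $g(z) = a + b(z-x)$ with $a := \tilde F_n(x)$ and $b := \tilde F_n'(x)$, both real. For $z = x + 2\gamma e^{i\theta}$,
\begin{equation*}
|g(z)|^{2} = \left (a + 2\gamma b\cos\theta\right )^{2} + \left (2\gamma b \sin \theta\right )^{2} = a^{2} + 4\gamma a b \cos\theta + 4\gamma^{2}b^{2},
\end{equation*}
which is linear in $\cos\theta$ and hence minimized at $\theta = 0$ or $\theta = \pi$, that is, at one of the endpoints $x\pm 2\gamma$.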
Note that $g(x+ 2\gamma)$ is normally distributed with standard deviation
\begin{equation}
\sqrt{\Var(g(x + 2\gamma))} = \sqrt{\sum_{j=1}^{n} (\phi_j (x) + 2\gamma \phi_j'(x))^{2}} \ge \sqrt{\sum_{j=1}^{n} \phi_j ^{2}(x) } - 2\gamma \sqrt{\sum_{j=1}^{n} \phi_j'^{2}(x)}\ge 1/2 \sqrt{\sum_{j=1}^{n} \phi_j ^{2}(x) }\nonumber
\end{equation}
where in the last two inequalities, we used the triangle inequality and then Condition {\bf C2} \eqref{cond-repulsion}. Note that by definition of $t$,
$$ \sqrt{\sum_{j=1}^{n} \phi_j ^{2}(x)} = \sqrt{ \Var (\tilde F_n(x))} = t\delta_n^{-2c_2+c_1}.$$
Since $g(x+2\gamma)$, as a random variable, is a real gaussian with density bounded by $\frac{1}{2\sqrt{\Var g(x+2\gamma)}}\le\frac{\delta_n^{2c_2-c_1}}{t}$, we have for any constant $K>0$,
\begin{equation}
\P(|g(x+ 2\gamma)| \le K t)=O\left ( \delta_n^{2c_2-c_1}\right ) =O\left ( \delta_n^{2c_1}\gamma ^{3/2}\right )\nonumber.
\end{equation}
In the last inequality we used the fact that $c_2$ is set to be much larger than $c_1$; see the paragraph following \eqref{du6}.
We can prove a similar statement for $g(x-2\gamma) $. Thus we can conclude that for any constant $K>0$,
\begin{equation} \label{minn1}
\P\left (\min_{z\in \partial B(x, 2\gamma)} |g(z)| \le Kt \right )= O\left (\delta_n^{2c_1}\gamma ^{3/2}\right ).
\end{equation}
Combining \eqref{minn1} and \eqref{maxx1}, we conclude the proof of Lemma \ref{lmrepulsion}.
\end{proof}
\section{ Proof of Theorems \ref{complex} and \ref{real}}\label{proof-main}
In this section, we prove Theorems \ref{complex} and \ref{real} by applying Theorems \ref{gcomplex} and \ref{greal}. By dividing the coefficients $c_i$ and $d_i$ by their maximum modulus, it suffices to assume that $\max_{0\le j\le n}\{|c_j|, |d_j|\}= 1$. For the sake of simplicity, we assume all random variables have mean 0; the more general setting in Condition {\bf C4} can be dealt with via a routine modification.
Our crucial new ingredient is the following lemma, which is a generalization of
a classical result of Tur\'an \cite{turan1953}.
\begin{lemma}\cite[Chapter I]{Nazarov94} \label{turan111}
Let
$$p(t) = \sum_{k=0}^{h}a_k e^{i\lambda_k t}, \quad a_k \in \C, \quad\lambda_0<\lambda_1<\dots< \lambda_h \in \R.$$
Then for any interval $J\subset \R$ and any measurable subset $E\subset J$ of positive measure, we have
$$\max_{t\in J} |p(t) |\le \left (\frac{C|J|}{|E|}\right )^{h}\sup _{t\in E} |p(t) |$$
where $C$ is an absolute constant.
\end{lemma}
\subsection{Proof of Theorem \ref{complex}}
To apply Theorem \ref{gcomplex}, we set ${n_0} := 2n+1$ and $F_{n_0}(z) := P_n(10^{4}Cz/n)$ (normalization), $\delta_{n_0} := 1/n$, and $D_{n_0} := \{z: |\Im(z)| \le 1/10^{4}\}$. The functions $\phi_i$ in \eqref{F} are
$$\phi_1 = c_0, \phi_2 = c_1 \cos(x), \dots, \phi_{n+1} = c_n \cos(n x), \phi_{n+2} = d_1\sin(x), \dots, \phi_{2n+1} = d_n \sin(nx)$$
and the random variables $\xi_1, \dots, \xi_{{n_0}}$ in \eqref{F} will be $\xi_0, \dots, \xi_n, \eta_1, \dots, \eta_n$, respectively. The constant $10^{4}$ is chosen rather arbitrarily; any sufficiently large constant would work.
To deduce Theorem \ref{complex} from Theorem \ref{gcomplex}, we only need to show that there exist positive constants $C_1, \alpha_1$ such that for any positive constants $A, c_1$, there exists a constant $C$ for which Conditions {\bf C2} \eqref{cond-poly}-{\bf C2} \eqref{cond-delocal} hold with parameters $(k, C_1, \alpha_1, A, c_1, C)$. For this model, one can choose $\alpha_1=1/2$ and $C_1$ to be any constant larger than $1$.
For Condition {\bf C2} \eqref{cond-poly}, notice that the periodic function $P_n$ has at most $2n$ complex zeros in the region $[a, a+2\pi)\times \R \subset \C$ for any $a\in \R$. Indeed, let $w = e^{iz}$ then
$$w^{n}P(z) = \frac{1}{2} \left (\sum_{k = 0}^{n} \xi_k (w^{n+k}+w^{n-k}) -i \sum_{k = 1}^{n} \eta_k (w^{n+k}-w^{n-k})\right )$$
which is a polynomial of degree $2n$ in $w$ and has at most $2n$ zeros. For each $w$ there is only one $z$ in the above region that corresponds to $w$. Thus this condition holds trivially for any constant
$C_1 >1$, as the left-hand side of {\bf C2} \eqref{cond-poly} becomes zero.
Now we address (the critical) Condition {\bf C2} \eqref{cond-smallball}.
We will prove the following stronger statement: for all positive constants $c_1, A$, there exists a constant $C'$ such that for every complex number $z_0$, there exists a real number $x$ with $|x-z_0|\le |\Im (z_0)| + \frac{1}{n}$ and
$$\P\left ( |P(x)|\le \exp(-n^{c_1})\right )\le C'n^{-A}.$$
Let $x_ 0 = \Re(z_0)$ and $I = [x_0-\frac{1}{n}, x_0+\frac{1}{n}]$. By conditioning on the random variables $\eta_i$ and replacing $A$ by $2A$, it suffices to show that there exists $x\in I$ for which
\begin{equation}
\sup_{Z\in \R}\P\left ( \left |\sum_{j=0}^{n} c_j\xi_j \cos(jx)-Z \right |\le e^{-n^{c_1}}\right )\le C'n^{-A/2}.\label{anti_trig}
\end{equation}
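Indeed, writing $P(x) = \sum_{j=0}^{n} c_j\xi_j \cos(jx) + \sum_{j=1}^{n} d_j\eta_j \sin(jx)$ and conditioning on the $\eta_j$, the sum $\sum_{j=1}^{n} d_j\eta_j \sin(jx)$ becomes a deterministic quantity, so
\begin{equation*}
\P\left ( |P(x)|\le e^{-n^{c_1}}\right )\le \E_{(\eta_j)_j} \sup_{Z\in \R}\P_{(\xi_j)_j}\left ( \left |\sum_{j=0}^{n} c_j\xi_j \cos(jx)-Z \right |\le e^{-n^{c_1}}\right ),
\end{equation*}
where $Z$ plays the role of $-\sum_{j=1}^{n} d_j\eta_j \sin(jx)$.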
Now let us recall the definition of $AP_0$ in Condition {\bf C3}. We would like to point out that in this part of the proof, we only use the fact that
the size of $AP_0$ is of order $\Theta (n)$.
We shall prove a more general version which will be useful for all of the remaining models in this manuscript.
\begin{lemma}\label{lmanti_concentration}
Let $\CE$ be an index set of size $N\in \N$, and let $(\xi_j)_{j\in \CE}$ be independent random variables satisfying the moment Condition {\bf C1} \eqref{cond-moment}. Let $(e_j)_{j\in \CE}$ be deterministic (real or complex) coefficients with $|e_j|\ge \bar e$ for all $j$ and for some number $\bar e\in \R_+$. Then for any $A\ge 1$, any interval $I\subset\R$ of length at least $N^{-A}$, there exists an $x\in I$ such that
$$\sup_{Z\in \R}\P\left (\left| \sum_{j\in \CE} e_j \xi_j \cos(jx)-Z\right |\le \bar e N^{-16A^{2}}\right )= O_A\left (N^{-A/2}\right )$$
where the implicit constant depends only on $A$ and the constants in Condition {\bf C1} \eqref{cond-moment}.
\end{lemma}
Assuming this lemma, we condition on the random variables $(\xi_j)_{j\notin AP_0}$ and apply the lemma with $\CE := AP_0$, $e_j :=c_j$, $N:=|AP_0|=\Theta(n)$, and $\bar e = \Theta(1)$ to obtain \eqref{anti_trig} directly.
\begin{proof}[Proof of Lemma \ref{lmanti_concentration}]
We will prove Lemma \ref{lmanti_concentration} in three steps. In the first (and most important) step, we handle the case where $\xi_i$ are iid
Rademacher. In the second step, we handle the case where the $\xi_i$ have symmetric distributions. In the final step, we
address the most general setting.
{\it Step 1.} $\xi_i$ are iid Rademacher (that is, $\P(\xi_i=1)=\P(\xi_i=-1)=1/2$). The key
ingredient in this step is the following inequality, which is a variant of a result of Hal\'asz \cite{halasz1977estimates}; see also \cite{halasz1977}, \cite[Cor 7.16]{taovubook}, and \cite[Cor 6.3]{nguyenvusurvey} for relevant estimates.
\begin{lemma}\label{halasz-inequality}
Let $\ep_1, \dots, \ep_n$ be independent Rademacher random variables. Let $a_1, \dots, a_n$ be real numbers and $l$ be a fixed integer. Assume that there is a constant $a>0$ such that for any
two different sets $\{i_1, \dots, i_{l'}\}$ and $\{j_1, \dots, j_{l''}\}$ where $l'+ l''\le 2l$, $|a_{i_1}+\dots + a_{i_{l'}} - a_{j_1}-\dots - a_{j_{l''}}|\ge a$. Then
$$\sup_{Z\in \R} \P\left (\left |\sum_{j=1}^{n}a_j\ep_j- Z\right |\le a n^{-l} \right ) = O_{l}(n^{-l}).$$
\end{lemma}
For the sake of completeness, we present a short proof of this lemma in Appendix \ref{proof_Halasz}.
\vskip2mm
There exists a subset $\CE'\subset \CE$ of size at least half the size of $\CE$ such that either $|\Re(e_i)|\ge \bar e/2$ for all $i\in \CE'$, or $|\Im(e_i)|\ge \bar e/2$ for all $i\in \CE'$. Since
\begin{equation}
\P\left ( \left |\sum_{j\in \CE} e_j\xi_j \cos(jx)-Z \right |\le \bar e N^{-16A^{2}}\right )\le \P\left ( \left |\sum_{j\in \CE} \Re(e_j) \xi_j\cos(jx)-\Re(Z) \right |\le \bar e N^{-16A^{2}}\right )\nonumber
\end{equation}
and
\begin{equation}
\P\left ( \left |\sum_{j\in \CE} e_j\xi_j \cos(jx)-Z \right |\le \bar e N^{-16A^{2}}\right )\le \P\left ( \left |\sum_{j\in \CE} \Im(e_j) \xi_j\cos(jx)-\Im(Z) \right |\le \bar e N^{-16A^{2}}\right )\nonumber,
\end{equation}
we can, by conditioning on the $(\xi_j)_{j\notin \CE'}$ and replacing $\CE$ by $\CE'$, assume that the $e_i$ are real and $Z$ is real. This allows us to apply Lemma \ref{halasz-inequality}.
In order to apply Lemma \ref{halasz-inequality},
we first show that there exists an $x\in I$ such that for every two distinct index sets $\{i_1, \dots, i_{A'}\}$ and $\{j_1, \dots, j_{A''}\}$ in $\CE$ with $A' + A''\le 2A$, we have
\begin{equation} \label{H1} \left |\sum_{t =1}^{A'} e_{i_t}\cos(i_t x) - \sum_{t =1}^{A''} e_{j_t}\cos(j_t x)\right |> \bar e N^{-16A^{2}}N^{A}.
\end{equation}
Let us fix two such index sets and let $$h(x) := \sum_{t =1}^{A'} e_{i_t}\cos(i_t x) - \sum_{t =1}^{A''} e_{j_t}\cos(j_t x). $$ Let $E := \{x\in I: |h(x)|\le \bar e N^{-16A^{2}}N^{A}\}$. Since $h$ can be written as an exponential polynomial with at most $4A$ frequencies, we can apply Lemma \ref{turan111} to obtain
\begin{equation}
\max_{[0, 2\pi]} |h| \le \left (\frac{C'}{|E|}\right )^{4A}\sup _{E}|h|.\label{turan}
\end{equation}
By the definition of $E$, the right-hand side is bounded from above by $\left (\frac{C'}{|E|}\right )^{4A} \bar e N^{-16A^{2}}N^{A}$. To bound the left-hand side from below, observe from orthogonality of the functions $\cos kx$ that
\begin{eqnarray}
2\pi \max_{[0, 2\pi]}|h|^{2}&\ge& \int_{0}^{2\pi} |h|^{2}dx = \pi \sum_{t = 1}^{A'} |e_{i_t}|^{2} + \pi \sum_{t = 1}^{A''}|e_{j_t}|^{2} =\Omega_A\left (\bar e^{2}\right ),\label{H3}
\end{eqnarray} since $|e_i|\ge \bar e$ for all $i\in \CE$.
Therefore, from \eqref{turan}, we get
$|E|= O_A(N^{-4A+1/4})$. Since there are only $O(N^{2A})$ choices for the pair of index sets, we conclude that every $x$ in $I$, except for a set of Lebesgue measure at most $O_A(N^{-2A+1/4}) =o_A(|I|)$, satisfies \eqref{H1}.
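Let us spell out the measure bound: combining \eqref{turan} with \eqref{H3} and the definition of $E$ yields
\begin{equation*}
\Omega_A(\bar e)\le \max_{[0, 2\pi]} |h| \le \left (\frac{C'}{|E|}\right )^{4A} \bar e N^{-16A^{2}+A},
\end{equation*}
so that $|E|^{4A} = O_A\left (N^{-16A^{2}+A}\right )$ and hence $|E| = O_A\left (N^{-(16A^{2}-A)/4A}\right ) = O_A\left (N^{-4A+1/4}\right )$.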
To conclude the proof, we use \eqref{H1} with Lemma \ref{halasz-inequality}. By setting $a :=\bar e N^{-16A^{2}}N^{A}$ and $l:=A$, Lemma \ref{halasz-inequality} gives
\begin{equation} \label{H2}
\sup_{Z\in \C}\P\left ( \left |\sum_{j\in \CE} e_j\xi_j \cos(jx)-Z \right |\le \bar e N^{-16A^{2}}\right )
=O_A (N^{-A}) .
\end{equation}
This proves Lemma \ref{lmanti_concentration} for the Rademacher case.
\vskip2mm
{\it Step 2. } In this step, we consider the case where the random variables $\xi_j$ have symmetric distributions. In this case,
$(\xi_j)_j$ and $(\xi_j\ep_j)_j$ have the same distribution, where the $\ep_j$ are independent Rademacher random variables that are independent of the $\xi_j$. Thus, the claimed statement is equivalent to
\begin{equation}
\sup _{Z\in \R} \P\left ( \left |\sum_{j\in \CE} e_j\xi_j \ep_j\cos(jx)-Z \right |\le \bar e N^{-16A^{2}}\right ) = O_A\left (N^{-A/2}\right )\label{reduce_rademacher1}
\end{equation}
for some $x\in I$.
The natural way to prove this is to use the standard conditioning argument: one fixes all the $\xi_j$ and uses
the Rademacher variables as the only source of randomness, going back to Step 1.
However, the situation here is more delicate, as
$x$ may not be the same in each evaluation of $\xi_j$. We handle this extra complication by
proving the stronger statement that
\begin{equation} \label{H5} \Xint-_{I}\sup _{Z\in \R} \P\left ( \left |\sum_{j\in \CE} e_j\xi_j \ep_j\cos(jx)-Z \right |\le \bar e N^{-16A^{2}}\right )dx= O_A(N^{-A/2})\end{equation}
where $\Xint-_I fdx:= \frac{1}{|I|}\int_{I}fdx$.
The left-hand side is at most $ \Xint-_{I}\E_{(\xi_j)_j}\sup _{Z\in \R} \P_{(\ep_j)_j}\left ( \left |\sum_{j\in \CE} e_j\xi_j \ep_j\cos(jx)-Z \right |\le \bar e N^{-16A^{2}}\right )dx$.
By Fubini's theorem, it suffices to show that
\begin{equation} \label{H6}
\E_{(\xi_j)_{j}}\Xint-_{I}\sup _{Z\in \R} \P_{(\ep_j)_j}\left ( \left |\sum_{j\in \CE} e_j\xi_j \ep_j\cos(jx) -Z\right |\le \bar e N^{-16A^{2}}\right )dx= O _A(N^{-A/2}).
\end{equation}
We first show that
with high probability, there are $\Theta (N)$ indices $j \in \CE$ such that $|\xi_j | = \Theta (1) $, which is needed
to guarantee \eqref{H3}. Assume, for a moment, that $\P (| \xi_j| < d )\ge 1-d$ for some small positive constant $d$.
Since the random variables $\xi_j$ are symmetric, they have mean 0. Using the boundedness of the $(2+\ep)$ central moment of $\xi_j$ (Condition {\bf C1}),
and the fact that $\xi_j$ has variance 1, we have
\begin{equation}
\E |\xi_j|^2= 1 = \E|\xi_j|^{2} \textbf{1}_{|\xi_j|< d} + \E|\xi_j|^{2} \textbf{1}_{|\xi_j|\ge d}\le d^{2} + d^{\ep/(2+\ep)}(\E|\xi_j|^{2+\ep})^{2/(2+\ep)}\le d^{2} + d^{\ep/(2+\ep)}\tau^{2/(2+\ep)}.\nonumber
\end{equation}
Thus, if $d$ is small enough (depending on $\tau$ and $\ep$), we have a contradiction.
Hence, there is a constant $d>0$ such that $\P(|\xi_j|< d)\le 1-d$. Now, by Chernoff's inequality, with probability at least $1 - e^{-\Theta(N)}$, there are at least $\Theta(N)$ indices $j\in \CE$ for which $|\xi_j|\ge d$.
On the event that this happens, we condition on the $\ep_j$ where $|\xi_j|<d$ and use Step 1 to conclude that outside a subset of $I$ of measure at most $O_A(N^{-2A+1/4})$, we have
$$\sup_{Z\in \C}\P_{(\ep_j)_j}\left ( \left |\sum_{j\in \CE} e_j\xi_j \ep_j\cos(jx)-Z \right |\le \bar e N^{-16A^{2}}\right ) =O_A(N^{-A}).$$
Therefore, the left-hand side of \eqref{H6} is at most
$$ e^{-\Theta(N)}+O_A(N^{-2A+1/4})+ O_A(N^{-A}) = O_A(N^{-A}),$$ completing the proof for this case.
\vskip2mm {\it Step 3.} Finally, we address the general case. Let $\xi_j'$ be independent copies of $\xi_j$, $j\in \CE$. Then the variables $\xi_j'' := \xi_j - \xi_j'$ are symmetric and have uniformly bounded $(2+\ep)$-moments.
By Step 2, we have
\begin{eqnarray}
&&\left [\P\left ( \left |\sum_{j\in \CE} e_j\xi_j \cos(jx)-Z\right |\le \bar e N^{-16A^{2}}\right )\right ]^{2}
\le \P\left ( \left |\sum_{j\in \CE} e_j\xi_j'' \cos(jx) \right |\le 2 \bar e N^{-16A^{2}}\right )= O_A(N^{-A})\nonumber
\end{eqnarray}
where in the last inequality, we decompose the disk $B\left (0, 2 \bar e N^{-16A^{2}}\right )$ into $O(1)$ disks of radius $\bar e N^{-16A^{2}}$ (not necessarily centered at 0) before applying Step 2. Taking square roots of both sides, we obtain Lemma \ref{lmanti_concentration}.
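For completeness, the first inequality above is the standard symmetrization bound: if $X'$ is an independent copy of a random variable $X$, then for any fixed $Z$ and $\ep>0$, independence and the triangle inequality give
\begin{equation*}
\P\left (|X-Z|\le \ep\right )^{2} = \P\left (|X-Z|\le \ep\right )\P\left (|X'-Z|\le \ep\right )\le \P\left (|X-X'|\le 2\ep\right ),
\end{equation*}
applied here with $X := \sum_{j\in \CE} e_j\xi_j \cos(jx)$, so that $X - X'$ has the same distribution as $\sum_{j\in \CE} e_j\xi_j'' \cos(jx)$.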
\end{proof}
The remaining conditions are easy to check.
Condition {\bf C2} \eqref{cond-bddn} follows from the following lemma.
\begin{lemma}
For any positive constants $A$, $c_1$ and $C$, we have, with probability at least $1 - O(n^{-A})$, $\log M \le n^{c_1}$, where $M := \max\{|P(z)|: |\Im(z)|\le C/n\}$.
\end{lemma}
\begin{proof}
For every $1\le j\le n$, we have $|e^{ijz}| = e^{-j\Im (z)}\le e^{C}$. And so,
\begin{equation}
\max _{|\Im(z)|\le C/n, 1\le j\le n} \{ |\cos jz|, |\sin jz|\}\le e^{C}.\label{trigbound}
\end{equation}
Let $B$ be the event on which $|\xi_j|\le n^{A/2 +1 }$ for all $0\le j\le n$ and $|\eta_j|\le n^{A/2+1}$ for all $1\le j\le n$.
Notice that on $B$, \eqref{trigbound} gives $M\le (2n+1)n^{A/2+1}e^{C}$, and hence $\log M= O(\log n) = o\left (n^{c_1}\right ) $ for any constant $c_1>0$. By Chebyshev's inequality
(exploiting the fact that $\E |\xi_i|^2 =1$) and the union bound, we have
$$\P(B^{c})\le \frac{2n+1}{n^{A +2} } = o(n^{-A}), $$ completing the proof. \end{proof}
Finally, Condition {\bf C2} \eqref{cond-delocal} follows from the following lemma.
\begin{lemma}
\label{log-comp-11}
For any constant $C$, there exists a constant $C'>0$ such that for every $z$ with $|\Im(z)|\le C/n$,
\begin{equation}
\frac{|c_j||\cos(jz)|}{\sqrt{S}}\le C'n^{-1/2}, \qquad \forall 0\le j\le n.\label{log-compa1-P}
\end{equation}
and
\begin{equation}
\frac{|d_j||\sin(jz)|}{\sqrt{S}}\le C'n^{-1/2}, \qquad \forall 0\le j\le n,\label{log-compa2-P}
\end{equation} where $S:= \sum_{j=0}^n |c_j|^2 |\cos jz| ^2 + \sum_{j=1}^n |d_j|^2 |\sin jz| ^2$.
\end{lemma}
\begin{proof}
By \eqref{trigbound}, $|\cos(jz)|\le e^{C}$ and $|\sin(jz)|\le e^{C}$ for all $0\le j\le n$, so it suffices to show that $S = \Omega(n)$. To achieve this bound on $S$, it suffices to show that $AP_0$ contains a subset $ J$ of size $\Theta (n)$
such that
\begin{equation}
|\cos(jz)|\ge c^{\ast} \quad\mbox{for all $j\in J$, for some positive constant $c^{\ast}$}.\label{large_c}
\end{equation}
Write $z =: a + ib$ and notice that $$2|\cos(jz)| = |e^{jb}||e^{-2jb+2ija}+1| \ge |w^{j} + 1|$$ where $w := e^{-2b + 2ia}$. By Condition {\bf C3} and Lemma \ref{J}, we can find a subset $J$ of $AP_0$ of size $\Theta (n)$ such that $$|2aj \,(\mbox{mod}\, 2\pi)-\pi|\ge c$$ for some constant $c>0$ and all $j\in J$. We can assume, without loss of generality, that $c \le 1/10$, and
this guarantees $|\cos (2aj) +1 | \ge c^2/4$.
Consider $j \in J$. If $|e^{-2jb}-1|\ge c^{2}/10$, then by the triangle inequality, $|w^{j} + 1| = |e^{-2jb}e^{2iaj}+1|\ge |e^{-2jb}-1|\ge c^{2}/10$. In the opposite case, $e^{-2jb} \ge 1-c^2/10 > .99$.
Keeping in mind that $c \le 1/10$, we have
\begin{equation}
|w^{j} + 1| \ge e^{-2jb}|e^{2iaj}+1| - |e^{-2jb}-1|\ge .99 c^2/4 -c^{2}/10 \ge c^{2}/10.
\end{equation} Thus, in either case $2|\cos(jz)|\ge |w^{j}+1|\ge c^{2}/10$, and we have achieved \eqref{large_c} with $c^{\ast} = c^2/20$.
\end{proof}
\subsection{Proof of Theorem \ref{real}}
We only need to check that the repulsion Condition {\bf C2} \eqref{cond-repulsion} holds for the random function $F_n(z) = P_n(10^{4}C z/n)$. It is routine to prove this using Conditions {\bf C3, C4} and \eqref{large_c}.
\section{Proof of Theorem \ref{comparison} and Corollary \ref{maincor}}\label{proof-comparison}
As before, by rescaling the coefficients, we can assume that $\max_{0\le j\le n} \{ |c_j|, |d_j|\} = 1$.
Before going to the proofs, let us state a version of the Kac-Rice formula.
\begin{proposition}\cite[Theorem 2.5]{Far} \label{KacRice}
Let $P(t), t\in (a_0, b_0)$ be a real Gaussian process \footnote{ A Gaussian process $P(t), t\in (a_0, b_0)$ is a random variable $P:\Omega\times (a_0, b_0) \to \R$ with $\Omega$ being a probability space such that for each $\omega\in \Omega$, $P(\omega, .)$ is a continuous function on $(a_0, b_0)$ and for each $t\in (a_0, b_0)$, $P(., t)$ is a gaussian random variable.}. Let $\mathcal P (t)= \Var (P(t))$, $\mathcal Q(t) = \Var (P'(t))$, $\mathcal R(t) = \Cov (P(t), P'(t))$, $\rho(t)=\frac{\mathcal R(t)}{\sqrt{\mathcal P(t)\mathcal Q(t)}}$, $m(t) = \E P(t)$, and $\eta(t) = \frac{m'(t)-\rho(t)m(t)\sqrt{\CQ(t)/\CP(t)}}{\sqrt{\CQ(t)(1-\rho^{2}(t))}}$. Assume that $m'(t)$ is continuous and the joint normal distribution for $P(t)$ and $P'(t)$ has non-singular covariance matrix for each $t$, then for any interval $[a, b]\subset (a_0, b_0)$, we have
$$\E N_{P}(a, b) = \int_{a}^{b}\sqrt{\frac{\CQ(t)(1-\rho^{2}(t))}{\CP(t)}}\phi\left (\frac{m(t)}{\sqrt{\CP(t)}}\right )\big (2\phi(\eta(t))+\eta(t)\left (2\Phi(\eta(t))-1\right )\big )dt$$
where $\phi(t)$ and $\Phi(t)$ are the standard normal density and distribution functions, respectively.
\end{proposition}
\begin{proof}[Proof of Theorem \ref{comparison}]
By the triangle inequality, we can assume that $\tilde \xi_j$ and $\tilde \eta_j$ are gaussian random variables. Let $c$ be the constant in Theorem \ref{real} with $\alpha_1 = 1/2, k=1, l=0$. As in Remark \ref{rmkconstants}, we can set $c=\frac{\ep}{2\cdot 10^{9}}$. Let $\alpha = c/7$. It suffices to show that for every interval $(a_n, b_n)$ of length at most $1/n$, we have
\begin{equation}
\left |\E N_{P_n} (a_n, b_n) - \E N_{\tilde P_n} (a_n, b_n) \right | = O(n^{-\alpha/2}).\label{dd1}
\end{equation}
If $b_n-a_n\ge 1/n$, we simply divide the interval $(a_n,b_n)$ into $\lfloor (b_n-a_n)n \rfloor + 1$ intervals of length at most $1/n$ each, apply \eqref{dd1} to each of them, and sum up the bounds.
Let $l := (b_n-a_n)/2$. Let $G$ be a smooth function on $\R$ with support in \newline $\left [\frac{a_n+b_n}{2}-l-n^{-1-\alpha},\frac{a_n+b_n}{2}+ l+n^{-1-\alpha}\right] $ such that $0\le G\le 1$, $G = 1$ on $\left [\frac{a_n+b_n}{2}-l, \frac{a_n+b_n}{2}+l\right ]$, and $\norm{G^{(a)}} _\infty\le Cn^{6\alpha+a}$ for all $0\le a \le 6$; see Remark \ref{K}.
By the definition of $G$, we have
\begin{equation}
\E N_{{P_n}}{(a_n, b_n)}\le \E \sum G(\zeta_i) \le \E N_{{P_n}}{(a_n-n^{-1-\alpha}, b_n+n^{-1-\alpha})}\nonumber
\end{equation}
where $\zeta_i$ are the real roots of $P_n$.
Similarly,
\begin{equation}
\E N_{{\tilde P_n}}{(a_n, b_n)}\le \E \sum G(\tilde \zeta_i) \le \E N_{{\tilde P_n}}{(a_n-n^{-1-\alpha}, b_n+n^{-1-\alpha})}\nonumber.
\end{equation}
Applying Theorem \ref{real} (with $k = 1, l=0$) to the function $G/n^{6\alpha}$, we get
\begin{eqnarray}
\E \sum G(\zeta_i)&=& \E \sum G(\tilde \zeta_i)+ O\left (n^{-c+6\alpha}\right )= \E \sum G(\tilde \zeta_i)+ O\left (n^{-\alpha}\right ) \nonumber.
\end{eqnarray}
Since $\alpha =c/7$, we obtain
\begin{eqnarray}
\E N_{{P_n}}{(a_n, b_n)}&\le& \E N_{{\tilde P_n}}{(a_n-n^{-1-\alpha}, b_n+n^{-1-\alpha})}+ O(n^{-\alpha} ) \le \E N_{\tilde P_n}{(a_n, b_n)}+ 2\mathcal I_{\tilde{P}_n} + O(n^{-\alpha} ) \nonumber,
\end{eqnarray}
where
$\mathcal I_{\tilde{P}_n} := \sup_{x\in \R}\E N_{\tilde P_n} (x - n^{-1-\alpha}, x)$. We will show later that $\mathcal I_{\tilde P_n} = O(n^{-\alpha/2})$, which gives the upper bound $\E N_{{P_n}}{(a_n, b_n)}\le \E N_{\tilde P_n}{(a_n, b_n)}+ O(n^{-\alpha/2})$.
Let us quickly address the lower bound $\E N_{{P_n}}{(a_n, b_n)}\ge \E N_{\tilde P_n}{(a_n, b_n)}-O(n^{-\alpha/2})$.
If $l > n^{-1-\alpha}$, we can argue as for the upper bound. In the case
$l \le n^{-1-\alpha}$, the desired bound follows from the observation that $\E N_{{P_n}}{(a_n, b_n)}\ge 0\ge \mathcal I_{\tilde P_n} - O(n^{-\alpha/2})\ge \E N_{\tilde P_n}{(a_n, b_n)}- O(n^{-\alpha/2})$. The upper and lower bounds together give \eqref{dd1}.
To prove the stated bound on $\mathcal I_{\tilde{P}_n}$, we use Proposition \ref{KacRice}, which asserts that for every $x\in \R$,
\begin{eqnarray}\label{nh1}
\E N_{\tilde P_n}[x - n^{-\alpha-1},x] &\le& \int_{x - n^{-\alpha-1}}^{x}\sqrt{\frac{\mathcal S}{\mathcal P^{2}}}dt +\int_{x - n^{-\alpha-1}}^{x}\frac{|m'|\mathcal P + |m|\mathcal R}{\mathcal P^{3/2}} dt,
\end{eqnarray}
where
\begin{itemize}
\item $m(t) := \E\tilde{P}_n(t)$
\item $ \mathcal P(t) := \Var (\tilde P_n)=\sum_{k=0}^{n} \left(c_k^{2}\cos^{2}(kt) + d_k^{2}\sin^{2}(kt)\right)$
\item $\mathcal Q(t) :=\Var(\tilde P_n')=\sum_{k=0}^{n} k^{2}\left(c_k^{2}\sin^{2}(kt) + d_k^{2}\cos^{2}(kt)\right)$
\item $\mathcal R(t) := \textbf{Cov}(\tilde P_n, \tilde P_n')=\sum_{k=0}^{n} k\cos(kt)\sin(kt)\left(d_k^{2}-c_k^{2}\right)$
\item $\mathcal S(t) = \mathcal P(t) \mathcal Q (t) - \mathcal R^{2}(t) $.
\end{itemize}
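As a quick sanity check outside the proof, the formulas for $\mathcal P$, $\mathcal Q$, and $\mathcal R$ can be verified by Monte Carlo simulation in the Gaussian case; the coefficient sequences, evaluation point, and sample size below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
n, trials, t0 = 8, 200_000, 0.7
k = np.arange(n + 1)
c = np.linspace(1.0, 2.0, n + 1)  # illustrative coefficient sequences
d = np.linspace(2.0, 1.0, n + 1)

# Sample (P_n(t0), P_n'(t0)) for P_n(t) = sum_k c_k xi_k cos(kt) + d_k eta_k sin(kt)
xi = rng.standard_normal((trials, n + 1))
eta = rng.standard_normal((trials, n + 1))
P = xi @ (c * np.cos(k * t0)) + eta @ (d * np.sin(k * t0))
dP = xi @ (-c * k * np.sin(k * t0)) + eta @ (d * k * np.cos(k * t0))

# Closed-form quantities from the itemized list
calP = np.sum(c**2 * np.cos(k * t0)**2 + d**2 * np.sin(k * t0)**2)
calQ = np.sum(k**2 * (c**2 * np.sin(k * t0)**2 + d**2 * np.cos(k * t0)**2))
calR = np.sum(k * np.cos(k * t0) * np.sin(k * t0) * (d**2 - c**2))

print(round(P.var() / calP, 3), round(dP.var() / calQ, 3))
```

The empirical variances and covariance agree with $\mathcal P$, $\mathcal Q$, $\mathcal R$ up to sampling error.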
Since $\CS(t)$ only has finitely many zeroes in $ [x - n^{-\alpha-1},x]$, we can decompose this interval into intervals whose interiors do not contain any zero of $\CS$, and use linearity of expectation if necessary. This way, we can technically assume that the joint distribution of $\tilde P_n$ and $\tilde P_n'$ is non-singular, as required in Proposition \ref{KacRice}.
From \eqref{trigbound} and \eqref{large_c}, there is a constant $K >0$ such that for every $t\in \R$,
\begin{equation}
\mathcal P\ge \frac{n}{K}, \quad \mathcal Q\le K n^{3},\mbox{ and }
\mathcal R \le Kn^{2}\le K n\mathcal P.\nonumber
\end{equation}
From here, we obtain (for all $t$) that $\frac{\mathcal S}{\mathcal P^{2}}\le \frac{\mathcal Q}{\mathcal P}\le Kn^{2}$.
Moreover, from Condition {\bf C4}, we have $|m(t)|\le K n^{\tau_0}$ and $|m'(t)|\le K n^{1/2 + \tau_0}$ (notice that
$m(t)=0$ if all atom random variables have zero mean; the upper bounds here come from
the bound on the expectations). It follows that
\begin{eqnarray}
\frac{|m'|\mathcal P + |m|\mathcal R}{\mathcal P^{3/2}} \le K n^{1/2+\tau_0}.\nonumber
\end{eqnarray}
Using the above estimates, we conclude that the integrand on the right-hand side of \eqref{nh1} is bounded (in absolute value) by $O(n^{1/2+\tau_0})$. Since the length of the interval in the integration is $n^{-\alpha-1}$, the integral is of order $O(n^{\tau_0-\alpha-1/2}) = O(n^{-\alpha/2})$, as $\tau_0-1/2 = \frac{\ep}{10^{11}}\le \alpha/2$.
\end{proof}
\begin{proof}[Proof of Corollary \ref{maincor}] As promised in Remark \ref{rmk1}, we will prove the desired statement for $P_n$ as in \eqref{newP}. Applying Theorem \ref{comparison} with
$$\tilde P_n(x) :=u_n\sqrt{\sum_{i=0}^{n}c_i ^{2}} + \sum _{j=0}^{N_0} u_j n^{1/2-\alpha} \cos(jx) + \sum_{j=1}^{N_0} v_j n^{1/2-\alpha} \sin(jx) + \sum_{j=0}^{n}c_j \tilde \xi_j\cos(jx) + \sum_{j=1}^{n} c_j\tilde \eta_j\sin(jx)$$
where $\tilde \xi_j$ and $\tilde \eta_j$ are i.i.d. standard Gaussian random variables, it suffices to prove that the desired estimate holds for $\tilde P_n$. Applying Proposition \ref{KacRice} to $\tilde P_n$, we obtain
$$\E N_{\tilde P_n}(a_n, b_n) = \int_{a_n}^{b_n}\sqrt{\frac{\sum_{i=0}^{n}c_i ^{2}i^{2}}{\sum_{i=0}^{n}c_i ^{2}}} \phi\left (\frac{m(x)}{\sqrt{\sum_{i=0}^{n}c_i ^{2}}}\right )\big [2\phi(q(x)) + q(x)\left (2\Phi(q(x))-1\right )\big ]dx$$
where $m(x) := u_n\sqrt{\sum_{i=0}^{n}c_i ^{2}} + \sum _{j=0}^{N_0} u_j n^{1/2-\alpha} \cos(jx) + \sum_{j=1}^{N_0} v_j n^{1/2-\alpha} \sin(jx)$ and $q(x) := \frac{m'(x)}{\sqrt{\sum_{i=0}^{n}c_i ^{2}i^{2}}}$.
By our setting, ${\sum_{i=0}^{n}c_i ^{2}} = \Theta(n)$, ${\sum_{i=0}^{n}c_i ^{2}i^{2}} = \Theta(n^{3})$, and so $\frac{m(x)}{\sqrt{\sum_{i=0}^{n}c_i ^{2}}} = u_n+O(n^{-\alpha})$ and $q(x) = O(n^{-1})$.
Therefore, by the boundedness of the functions $\Phi$, $\phi$ and $\phi'$, we get
$$\phi\left (\frac{m(x)}{\sqrt{\sum_{i=0}^{n}c_i ^{2}}}\right ) = \phi(u_n) + O(n^{-\alpha}), \mbox{ and } 2\phi(q(x))+q(x)\left (2\Phi(q(x))-1\right )=2\phi(0)+O(n^{-1}).$$
It follows that
\begin{eqnarray}
\E N_{\tilde P_n}(a_n, b_n) &=& 2\sqrt{\frac{\sum_{i=0}^{n}c_i ^{2}i^{2}}{\sum_{i=0}^{n}c_i ^{2}}} (b_n-a_n) \phi\left (u_n\right )\phi(0) + O\left (n^{-\alpha}\sqrt{\frac{\sum_{i=0}^{n}c_i ^{2}i^{2}}{\sum_{i=0}^{n}c_i ^{2}}} (b_n-a_n)\right )\nonumber\\
&=& 2\sqrt{\frac{\sum_{i=0}^{n}c_i ^{2}i^{2}}{\sum_{i=0}^{n}c_i ^{2}}} (b_n-a_n) \phi\left (u_n\right )\phi(0) + O\left (n^{-\alpha} (b_n-a_n)n\right )\nonumber.
\end{eqnarray}
Plugging in $\phi(x)=\frac{1}{\sqrt{2\pi}} e^{-x^{2}/2}$, we obtain
$$\E N_{P_n}(a_n, b_n) = \frac{b_n-a_n}{\pi} \sqrt{\frac{\sum_{j=0}^{n} c_j^{2}j^{2}}{\sum_{j=0}^{n} c_j^{2}}}\exp\left (-\frac{u_n^{2}}{2}\right ) + O\left (n^{-c}((b_n-a_n)n + 1) \right ) $$
where the positive constant $c$ and the implicit constant depend only on $\alpha, N_0, K, \tau_1$, $\ep$,
completing the proof.
\end{proof}
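For intuition, the leading term of this formula can be tested numerically in the simplest Gaussian setting ($c_j = 1$, $u_n = 0$), where the predicted number of real roots in $[0, 2\pi]$ is $2\sqrt{\sum_j j^2/(n+1)}$. The sketch below counts sign changes on a fine grid; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 20, 400
ks = np.arange(n + 1)
t = np.linspace(0.0, 2 * np.pi, 4001)
cos_kt, sin_kt = np.cos(np.outer(ks, t)), np.sin(np.outer(ks, t))

# Kac-Rice prediction for c_j = 1, u_n = 0 over [0, 2*pi]
predicted = 2.0 * np.sqrt((ks**2).sum() / (n + 1))

counts = []
for _ in range(trials):
    xi, eta = rng.standard_normal(n + 1), rng.standard_normal(n + 1)
    vals = xi @ cos_kt + eta @ sin_kt  # P_n evaluated on the grid
    counts.append(np.count_nonzero(np.diff(np.sign(vals)) != 0))

empirical = float(np.mean(counts))
print(round(predicted, 2), round(empirical, 2))
```

The empirical mean root count matches the Kac--Rice prediction to within sampling error.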
\section{Proof of Theorem \ref{kacreal} and Corollary \ref{kacmean}}\label{kacproof}
\begin{proof}[Proof of Theorem \ref{kacreal}]
Let us first consider the case $0<\theta_n<\frac{1}{K}$ for some sufficiently large constant $K>0$. Let $\delta_n =\theta_n+1/n$.
We apply Theorem \ref{greal} to the random function
$F_n(z) := P_n(z\theta_n/10)$
and the domain $D_n := \{z: 1-2\theta_n\le |z\theta_n/10|\le 1-\theta_n+1/n\}$.
The main task is to show that there exist positive constants $C_1, \alpha_1$ such that for any positive constants $A, c_1$, there exists a constant $C$ for which Conditions {\bf C2} \eqref{cond-poly}-{\bf C2} \eqref{cond-repulsion} hold with parameters $(k+l, C_1, \alpha_1, A, c_1, C)$. For this model, one can choose $\alpha_1=1/2$ and $C_1=1$. Conditions {\bf C2} \eqref{cond-delocal} and {\bf C2} \eqref{cond-repulsion} can be checked by a simple algebraic manipulation, which we leave as an exercise.
To verify Condition {\bf C2} \eqref{cond-bddn}, notice that for any $M>2$, if we condition on the event $\Omega'$ on which $|\xi_i|\le M\left (1+\delta_n/2\right )^{i}$ for all $i$, then for all $z\in D_n + B(0, 2)$,
\begin{equation}
|F_n(z)| = O(M)\sum_{i=0}^{n} \left (1+\delta_n/2\right )^{i}(1-\delta_n+2/n)^{i} = O(M\delta_n^{-1}).\label{interm1}
\end{equation}
Thus, for every $M>2$, we have
\begin{equation}
\P\left (|F_n(z)| = O(M\delta_n^{-1})\right )= 1-O\left (\sum _{i=0}^{n} \frac{1}{M\left (1+\delta_n/2\right )^{i}}\right )= 1- O\left (\frac{1}{M\delta_n}\right ).\label{bound_kac}
\end{equation}
Setting $M = \delta_n^{-A-1}$, we obtain Condition {\bf C2} \eqref{cond-bddn}.
To prove Condition {\bf C2} \eqref{cond-smallball}, we show that for any constants $A$ and $c_1>0$, there exists a constant $B>0$ such that the following holds. For every $z_0$ with $1-2\theta_n\le |z_0|\le 1-\theta_n+1/n$, there exists $z= z_0e^{i\theta}$ where $\theta\in [-\delta_n/100, \delta_n/100]$ such that for every $1\le M\le n\delta_n$,
\begin{equation}
\P\left (|P_n(z)|\le e^{-\delta_n^{-c_1}}e^{-BM}\right )\le \frac{B\delta_n^{A}}{M^{A}}.\label{smallball_kac}
\end{equation}
Setting $M = 1$, we obtain Condition {\bf C2} \eqref{cond-smallball}.
By writing $z_0 = re^{i\theta_0}$, the bound \eqref{smallball_kac} follows from a more general anti-concentration bound: there exists $\theta\in I := [\theta_0 - \delta_n/100, \theta_0 + \delta_n/100]$ such that
\begin{equation}
\sup _{Z\in \C}\P\left (|P_n(re^{i\theta})-Z|\le e^{-\delta_n^{-c_1}}e^{-BM}\right )\le \frac{B\delta_n^{A}}{M^{A}}.\nonumber
\end{equation}
Since the probability of being confined in a complex ball is bounded from above by the probability of its real part being confined in the corresponding interval on the real line, it suffices to show that
\begin{equation}
\sup _{Z\in \R}\P\left (\left |\sum_{j=0}^{M\delta_n^{-1}/2} \xi_j r^{j} \cos{j\theta}-Z\right |\le e^{-\delta_n^{-c_1}}e^{-BM}\right )\le \frac{B\delta_n^{A}}{M^{A}}.\nonumber
\end{equation}
This is, in turn, a direct application of Lemma \ref{lmanti_concentration} with $N := M\delta_n^{-1}/2$ and $\bar e := e^{-2M}\le r^{j}$ for all $0\le j\le M\delta_n^{-1}/2$.
Finally, to prove Condition {\bf C2} \eqref{cond-poly}, from \eqref{bound_kac}, \eqref{smallball_kac}, and Jensen's inequality, we get for every $1\le M\le n\delta_n$
$$\P(N\ge \delta_n^{-c_1} + BM) = O\left (\frac{\delta_n^{A}}{M^{A}}\right )$$
where $N = N_{F_n}B(w, 2)$, $w\in D_n$.
Let $A = k+l+2$, $c_1=1$ and $M = 1, 2, 2^{2}, \dots, 2^{m}$ where $m$ is the largest number such that $2^{m}\le n\delta_n$. Combining the above inequality with the fact that $N\le n$ a.s., we get
$$\E N^{k+l+2}\textbf{1}_{N\ge \delta_n^{-1}} \le C\sum_{i=1}^{m} \left (\delta_n^{-1} + B2^{i+1}\right )^{k+l+2}\frac{\delta_n^{A}}{2^{iA}} + C n^{k+l+2} \frac{\delta_n^{A}}{2^{mA}} \le C\delta_n^{A-k-l-2} = O(1).$$
This proves Condition {\bf C2} \eqref{cond-poly} and completes the proof for $\theta_n\le 1/K$. For $\theta_n\ge 1/K$, note that Jensen's inequality implies that
$$N_{P_n}B(0, 1-1/K) = O_K(1)\log\frac{\max_{w\in B(0, 1-1/2K) }|P_n(w)|}{\max_{w\in B(1-1/K, 1/3K)} |P_n(w)|}.$$
Thus, using the bounds \eqref{interm1}, \eqref{bound_kac}, \eqref{smallball_kac} for $\theta_n = 1-1/K$, we get for every $1\le M\le n/K$,
$$\P(N_{P_n}B(0, 1-1/2K) \ge BM) = O\left (\frac{1}{M^{A}}\right ).$$
And so, $\E N_{P_n}B(0, 1-1/2K) = O(1)$. The same holds for $\tilde P_n$ and therefore the desired result follows.
\end{proof}
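The Jensen-type bound used above, in its standard quantitative form, states that an analytic function $F$ on $B(0, R)$ with $F(0)\neq 0$ has at most $\log\left(\max_{|z|=R}|F|/|F(0)|\right)/\log(R/r)$ zeros in $B(0, r)$. A minimal deterministic check on a concrete polynomial (our own example, not from the text):

```python
import numpy as np

# F(z) = z^5 - 1 has five zeros, all on the unit circle
coeffs = [1, 0, 0, 0, 0, -1]
r, R = 1.5, 4.0

theta = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
max_on_R = np.abs(np.polyval(coeffs, R * np.exp(1j * theta))).max()
at_center = abs(np.polyval(coeffs, 0.0))

# Jensen bound on the number of zeros in B(0, r)
jensen_bound = float(np.log(max_on_R / at_center) / np.log(R / r))
zeros_in_r = int(np.count_nonzero(np.abs(np.roots(coeffs)) < r))
print(zeros_in_r, round(jensen_bound, 2))
```

Here the bound evaluates to roughly $7$, compared with the true count of $5$: lossy, but of the right order, which is all the proof needs.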
\begin{proof}[Proof of Corollary \ref{kacmean}]
Without loss of generality, we can assume that $\tilde \xi_0, \dots, \tilde \xi_n$ are standard Gaussian random variables. As in Remark \ref{Qrmk}, it suffices to restrict to the roots in the interval $[-1, 1]$. Divide this interval into $I_0 = \{x: |x|\le 1-1/C\}$ and $I_1 = [-1, 1]\setminus I_0$, and denote by $N(0)$ and $N(1)$ the number of real roots of $P_n$ in these sets, respectively. We have seen in the proof of Theorem \ref{kacreal} that $\E N(0) = O(1)$; the same holds for $\tilde N(0)$, the corresponding quantity for $\tilde P_n$.
To get $\E N(1) - \E \tilde N(1) = O(1)$, we decompose the interval $I_1$ into dyadic intervals $\pm [1-1/C, 1-1/2C), \pm[1-1/2C, 1-1/4C), \dots, \pm [1-2/n, 1-1/n)$, and finally $\pm [1-1/n, 1]$. On each of these intervals, say $[x, y)$, we show that $\E N_{P_n}[x, y) - \E N_{\tilde P_n} [x, y) = O((1-y+1/n)^{c})$ for some positive constant $c$. This can be routinely done by approximating the indicator function on the interval $[x, y)$ by a smooth function and applying Theorem \ref{kacreal}. We omit the details as the argument is similar to the proof of Theorem \ref{comparison}.
\end{proof}
\section{Proof of Theorems \ref{uni_flat} and Corollary \ref{mean_flat}}\label{proof_flat}
\begin{proof}[Proof of Theorem \ref{uni_flat}]
Notice that by the Borel-Cantelli lemma, with probability 1, there are only a finite number of $i$ such that $|\xi_i|\ge 2^{i}$. Thus with probability 1, the radius of convergence of the series $P$ is infinity and so $P$ is an entire function.
A natural idea is to apply Theorem \ref{gcomplex} with $n=\infty$ to the function $F_{n}(z) := P(z)$, with $\delta_n := |z_0|^{-1}$ and $D_n := \{z_0\}$. (We will skip the redundant subscript $n$ in the rest of the proof.)
However, since $\Var P(z) = e^{|z|^{2}}$, $|P(z)|$ is likely of order $\Theta (e^{|z|^{2}/2})$ in which case Condition {\bf C2} \eqref{cond-bddn} fails. The idea here is to find a proper scaling, which, at the same time, preserves the analyticity of $F$. We set
\begin{equation}
F(z) := \frac{P(z)}{e^{|z_0|^{2}/2} e^{(z-z_0)\bar{z_0}}}.\label{res11}
\end{equation}
A routine calculation shows that $\Var F(z) = \Theta(1)$.
Furthermore, $F$ is analytic and has
the same roots as $P$. The main task is to show that there exist positive constants $C_1, \alpha_1$ such that for any positive constants $A, c_1$, there exists a constant $C$ for which Conditions {\bf C2} \eqref{cond-poly}-{\bf C2} \eqref{cond-delocal} hold with parameters $(k, C_1, \alpha_1, A, c_1, C)$. For this model, one can choose $\alpha_1=1/2$ and $C_1=2$. We can, without loss of generality, assume that $|z_0|$ is sufficiently large because by Jensen's inequality, one can show that the expected number of roots of both $P$ and $\tilde P$ in $B(0,K)$, for any constant $K$, is $O_K(1)$.
Condition {\bf C2} \eqref{cond-bddn} is a direct consequence of the following lemma.
\begin{lemma} For any constant $A>0$, there is a constant $K>0$ such that for any $M \ge 2$,
\begin{equation}
\P\left (\max _{z\in B(z_0, 2)}|F(z)| \ge K M^{A}\delta^{-A-2}\right )\le \frac{K \delta^{A}}{M^{A}}. \label{bdd1}
\end{equation}
\end{lemma}
\begin{proof}
Let $L = |z_0|+1=\Theta(\delta^{-1})$. Let $\Omega'$ be the event that $|\xi_i|\le M^{A}L^{A}\left (1+\frac{1}{(L+2M)^{2}}\right )^{i}$ for all $i\ge 0$. Consider its complement $\Omega'^{c}$,
\begin{equation} \label{probbound} \P\left (\Omega'^{c}\right )= O \left ( \sum_{i=0}^{\infty} \frac{1}{M^{2A}L^{2A}\left (1+(L+2M)^{-2}\right )^{2i}}\right ) = O\left (\frac{\delta^{A}}{M^{A}}\right ).\end{equation}
On the other hand, once $\Omega'$ holds, then for every $z\in B(z_0, 2)$,
\begin{equation}
|P(z)|\le \sum_{i =0}^{\infty}\frac{|\xi_i||z|^{i}}{\sqrt{i!}}\le M^{A}L^{A}\sum_{i =0}^{\infty}\frac{(|z|+|z|^{-1})^{i}}{\sqrt{i!}}\label{bound-f}.
\end{equation}
Let $S(z) := \sum_{i =0}^{\infty}\frac{|z|^{i}}{\sqrt{i!}}$, and $x := x(z) = \lfloor|z|^{2}-1\rfloor$. We split $S$ into the sum of $S_1 := \sum_{i =0}^{5x-1}\frac{|z|^{i}}{\sqrt{i!}}$ and $S_2 := \sum_{i=5x}^{\infty}\frac{|z|^{i}}{\sqrt{i!}}$. Since the terms $\frac{|z|^{i}}{\sqrt{i!}}$ increase for $0\le i\le x$ and decrease for $i\ge x$, we have $S_1\le 5x\frac{|z|^{x}}{\sqrt{x!}}$. Moreover,
\begin{equation}
|S_2|\le \frac{|z|^{5x}}{\sqrt{(5x)!}}\sum_{i =0}^{\infty}\frac{|z|^{i}\sqrt{(5x)!}}{\sqrt{(i+5x)!}}\le \frac{|z|^{5x}}{\sqrt{(5x)!}}S.\nonumber
\end{equation}
By Stirling's formula (and the fact that $x$ is sufficiently large)
\begin{equation}
\frac{|z|^{5x}}{\sqrt{(5x)!}}\le \sqrt{\frac{(x+2)^{5x}e^{5x}}{(5x)^{5x+1/2}}}\le \frac{1}{2}.\nonumber
\end{equation}
Hence, $S_2\le \frac{1}{2}S$, which implies
$$S\le 2S_1\le 10x\frac{|z|^{x}}{\sqrt{x!}}\le 100|z|^{2}e^{|z|^{2}/2}. $$ Thus, on $\Omega'$,
$$|P(z)|=O(M^{A}L^{A+2}e^{|z|^{2}/2}).$$
By the definition of $F$,
$$|F(z)|= O \left (\frac{M^{A}L^{A+2}e^{|z|^{2}/2}}{e^{|z_0|^{2}/2}e^{\Re((z-z_0)\bar{z_0})}}\right ) = O (M^{A} L^{A+2} ),$$ which, together with \eqref{probbound}, yields the desired claim.
\end{proof}
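The Stirling estimate in the proof above ($|z|^{5x}/\sqrt{(5x)!}\le 1/2$ with $x=\lfloor |z|^2-1\rfloor$) is easy to check numerically in log scale; the radii below are illustrative.

```python
import math

def log_ratio(abs_z: float) -> float:
    # log of |z|^(5x) / sqrt((5x)!) with x = floor(|z|^2 - 1),
    # computed via lgamma to avoid overflow
    x = math.floor(abs_z**2 - 1)
    return 5 * x * math.log(abs_z) - 0.5 * math.lgamma(5 * x + 1)

ratios = {z: log_ratio(z) for z in (3.0, 5.0, 10.0, 20.0)}
print({z: round(v, 1) for z, v in ratios.items()})
```

The log-ratios are strongly negative and decrease as $|z|$ grows, consistent with the factor $\frac{1}{2}$ being far from tight.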
Write $z_0 = re^{i\theta_0}$. To verify Condition {\bf C2} \eqref{cond-smallball}, the idea is to apply Lemma \ref{lmanti_concentration} to the polynomial
$$P(z_0e^{i\theta}) = \sum_{j=0}^{\infty} \frac{r^{j}}{\sqrt{j!}} \xi_je^{ij(\theta+\theta_0)}.$$
Note that when $|\theta|\le 1/100$, $z_0 e^{i\theta}\in B(z_0, 1/100)$.
Let $x_0=\lfloor|z_0|^{2}-1\rfloor$. For any $M\ge r$, we apply Lemma \ref{lmanti_concentration} to the set $\CE = \{x_0, x_0+1, \dots, x_0+M\}$, the random variables $(\xi_j)_{j\in \CE}$, the coefficients $e_j = \frac{r^{j}}{\sqrt{j!}} $ and obtain that for any positive constant $A\ge 3$, for the interval $I=[-M^{-A}, M^{-A}]\subset [-1/100, 1/100]$, there exists $\theta\in I$ such that
$$\sup_{Z\in \C}\P\left (\left |\sum_{j\in \CE} e_j \xi_j\cos(j\theta+j\theta_0)-Z\right |\le e_{x_0+M}M^{-16A^{2}}\right )=O\left (M^{-A/2}\right )$$
where we use the fact that $e_{x_0}\ge e_{x_0+1}\ge\dots \ge e_{x_0+M}$.
This, together with the assumption that $\Re(\xi_0), \Im(\xi_0), \Re(\xi_1), \Im(\xi_1), \dots $ are independent, implies that
$$\sup_{Z\in \C}\P\left (\left |\sum_{j\in \CE} e_j \xi_j\exp(ij(\theta+\theta_0))-Z\right |\le e_{x_0+M}M^{-16A^{2}}\right )=O\left (M^{-A/2}\right )$$
because the distance between two complex numbers is at least the distance between their real parts.
Conditioning on the random variables outside $\CE$, we obtain some $\theta\in I$ such that with probability at least $1-O\left (M^{-A/2}\right )$,
$$|P(z_0e^{i\theta})|\ge e_{x_0+M}M^{-16A^{2}},$$
which implies
$$|F(z_0e^{i\theta})|\ge \frac{e_{x_0+M}M^{-16A^{2}}}{\exp(r^{2}/2)\left |\exp(r^{2}(e^{i\theta}-1))\right |}=\frac{r^{x_0+M}M^{-16A^{2}}}{\sqrt{(x_0+M)!}\exp(r^{2}/2)\left |\exp(r^{2}(e^{i\theta}-1)) \right |}.$$
For $\theta\in I$, $|r^{2}(e^{i\theta}-1)|=O(r^{2}M^{-A}) = O(1)$.
Thus, by Stirling's formula,
$$|F(z_0e^{i\theta})|=\Omega\left ( \frac{1}{r}\frac{r^{M}M^{-16A^{2}}}{\sqrt{(x_0+1)\dots(x_0+M)}}\right )=\Omega\left ( \frac{M^{-16A^{2}}}{r}\left (\frac{r^{2}}{r^2+M}\right )^{M/2}\right )
.$$
In other words, we have proved that for every constant $A\ge 3$, for every $M\ge r=|z_0|$, there exists $z\in B(z_0, 1/100)$ for which
\begin{equation}
\P\left (|F(z)|=O_A\left ( \frac{M^{-16A^{2}}}{r}\left (\frac{r^{2}}{r^2+M}\right )^{M/2}\right ) \right )=O_A\left (M^{-A/2}\right ).\label{smb1}
\end{equation}
Setting $M=r$, we obtain Condition {\bf C2} \eqref{cond-smallball} (note that $r =\delta^{-1}$).
Combining \eqref{bdd1}, \eqref{smb1}, and Jensen's inequality, we get that there exists a constant $K$ depending only on $A$ such that for any $M\ge r$,
$$\P\left (N_{F}(B(z_0, 1)) \ge M^{2}\right )\le \frac{K}{M^{A}}.$$
Thus,
$$\E N^{k+2}_{F}(B(z_0, 1))\textbf{1}_{N_{F}(B(z_0, 1))\ge r^{2}}\le \sum_{M=r}^{\infty} \E N^{k+2}_{F}(B(z_0, 1))\textbf{1}_{M^{2}\le N_{F}(B(z_0, 1))\le (M+1)^{2}}.$$
As the right-hand side is at most $O(1) \sum _{M=r}^{\infty} \frac{K(M+1)^{2k+4}}{M^{A}}=O(1)$ by setting $A=2k+6$, Condition {\bf C2} \eqref{cond-poly} follows.
Finally for Condition {\bf C2} \eqref{cond-delocal}, note that $|z|^i/\sqrt{i!}$ is maximized at $i = \lfloor|z|^{2}-1\rfloor$. By Stirling's formula, at this $i$, $|z|^i/\sqrt{i!}=O\left ( \frac{\sqrt{\sum_{j} |z|^{2j}/j!}}{|z|^{1/2}}\right )$.
\end{proof}
\begin{proof}[Proof of Corollary \ref{mean_flat}]
As before, we simply approximate the indicator function $\textbf{1}_{B}$ from above and below by smooth test functions $f$ and $g$ whose derivatives up to order $6$ are bounded by $O(r^{6a})$ for a sufficiently small constant $a$ and $\int_{\C} (f-g)dm = O( r^{-a})$. Applying Theorem \ref{uni_flat} to the function $f$, we obtain
$$\E N_{P}(B)\le \E \sum_{\zeta: P(\zeta) = 0} f(\zeta) = \E \sum_{\tilde \zeta: \tilde P(\tilde\zeta) = 0} f(\tilde \zeta) + O(r^{-c+6a}) = \E N_{\tilde P}(B) + O(r^{-a} + r^{-c+6a})$$
where $c$ is the constant in Theorem \ref{uni_flat}. By choosing $a = c/12$, we get $\E N_{P}(B) \le \E N_{\tilde P}(B) + O(r^{-c/12})$. Similarly, applying Theorem \ref{uni_flat} to the function $g$, we get the corresponding lower bound. This completes the proof.
\end{proof}
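As a numerical illustration in the Gaussian case: $\E |\tilde P(z)|^2 = e^{|z|^2}$, so by the Edelman--Kostlan formula the zero density of the flat series is $\frac{1}{4\pi}\Delta |z|^2 = 1/\pi$ per unit area, giving $\E N_{\tilde P}(B(0,r)) = r^2$. The Monte Carlo sketch below truncates the series; the degree, radius, and trial count are illustrative (the truncation error at this radius is negligible).

```python
import math
import numpy as np

rng = np.random.default_rng(1)
deg, trials, r = 60, 200, 3.0
inv_sqrt_fact = np.array([1.0 / math.sqrt(math.factorial(i)) for i in range(deg + 1)])

counts = []
for _ in range(trials):
    # truncated flat series: sum_i xi_i z^i / sqrt(i!), xi_i standard complex Gaussian
    xi = (rng.standard_normal(deg + 1) + 1j * rng.standard_normal(deg + 1)) / math.sqrt(2)
    roots = np.roots((xi * inv_sqrt_fact)[::-1])  # highest-degree coefficient first
    counts.append(int(np.count_nonzero(np.abs(roots) < r)))

mean_count = float(np.mean(counts))
print(round(mean_count, 2))  # expected to be near r**2 = 9
```
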
\section{Proof of Theorem \ref{uni_elliptic} and Corollary \ref{mean_elliptic}}\label{proof_elliptic}
\begin{proof}[Proof of Theorem \ref{uni_elliptic}]
We have $\Var P_n(z) = (|z|^{2}+1)^{n}$. As in the proof of Theorem \ref{uni_flat}, we will apply the framework in Section \ref{framework} to the function
$$F_{n}(z) = \frac{P_n(z/\sqrt n)}{(|x_0|^{2}+1)^{n/2}\exp\left (\frac{n(z/\sqrt{n}-x_0)\bar{x_0}}{(|x_0|^{2}+1)}\right) },$$
$\delta_n = n^{-1}$, and $D_n = \{\sqrt n x_0\}$. The denominator of $F_n$ is chosen so that $\Var F(z) = \Theta(1)$, $F$ is analytic, and $F(z) = 0$ if and only if $P(z/\sqrt{n})=0$. We will first show that the conclusion of Theorem \ref{gcomplex} holds, and then that the conclusion of Theorem \ref{greal} also holds. For Theorem \ref{gcomplex}, it suffices to show that there exist positive constants $C_1, \alpha_1$ such that for any positive constants $A, c_1$, there exists a constant $C$ for which Conditions {\bf C2} \eqref{cond-poly}-{\bf C2} \eqref{cond-delocal} hold with parameters $(C_1, \alpha_1, A, c_1, C)$. For this model, one can choose $\alpha_1=\ep/4$ and $C_1=1$. Condition {\bf C2} \eqref{cond-bddn} follows from the following estimate: for any constants $A, c_1>0$, we have
\begin{equation}
\P\left (\max _{z\in B(\sqrt n x_0, 2)}|F(z)| \ge C e^{n^{c_1}}\sqrt{n}\right )\le \frac{Cn}{e^{n^{c_1}}} \label{bdd2}
\end{equation}
for some constant $C$ depending only on $A$ and $c_1$.
Indeed, let $\Omega'$ be the event on which $|\xi_i|\le e^{n^{c_1}}$ for all $0\le i\le n$. The probability of its complement is bounded from above by
$$\P\left (\Omega'^{c}\right )\le \frac{Cn}{e^{n^{c_1}}}.$$
On $\Omega'$, for every $z\in B(x_0, 2/\sqrt n )$, we have
\begin{equation}
|P(z)|\le \sum_{i =0}^{n} \sqrt{n\choose i}|\xi_i||z|^{i} \le e^{n^{c_1}}\sqrt{n}\sqrt{\sum_{i =0}^{n}{n\choose i} |z|^{2i}} = e^{n^{c_1}}\sqrt{n} \sqrt{\Var P(z)}.\label{ell1}
\end{equation}
Thus,
$$|F(z)|\le C e^{n^{c_1}}\sqrt{n}.$$
For Condition {\bf C2} \eqref{cond-smallball}, note that the sequence $\sqrt{n\choose i} |x_0|^{i}$ increases from $i=1$ to $i_0 = \lfloor 1+\frac{(n-1)x_0^{2}}{1+x_0^{2}}\rfloor$ and then decreases. For $n^{-1/2+\ep}\le |x_0|\le 1$, we have $\frac{n^{2\ep}}{4}\le i_0\le \frac{n+1}{2}$. Condition {\bf C2} \eqref{cond-smallball} follows by showing that for any constants $A, c_1>0$, there exists a constant $C$ and an angle $\theta\in [-1/(100\sqrt{n}), 1/(100\sqrt{n})]$ such that
\begin{equation}
\P\left (|F(\sqrt{n}x_0e^{i\theta})|\le Ce^{-n^{c_1}}\right )\le Cn^{-A}.\label{smallball_elliptic}
\end{equation}
We apply Lemma \ref{lmanti_concentration} to the set $\CE = \{i_0, i_0+1, \dots, i_0+m\}$ where $m=\frac{n^{c_1/2}}{\log n}$, the random variables $(\xi_j)_{j\in \CE}$, the coefficients $e_j = \sqrt{n\choose j}r^{j}$ where $r =|x_0|$, and the interval $I=[-m^{-A'}, m^{-A'}]$ where $A'=5A/c_1$. We have
\begin{eqnarray}
1\le \frac{e_j}{e_{j+1}}\le \frac{\sqrt{{j+1}}}{r\sqrt {n-j}}\le n^{1/2},\nonumber
\end{eqnarray}
for all $j\in \CE$, which implies
\begin{eqnarray}
e_{i_0+m}\ge e_{i_0} n^{-m/2}.\nonumber
\end{eqnarray}
Moreover, since $e_{i_0}$ is the largest term, we have $\Var P(x_0)\le n e_{i_0}^{2}$, and so
$$e_{i_0+m}\ge \frac{\sqrt{\Var P(x_0)}}{\sqrt{n}\, n^{m/2}} \ge \frac{\sqrt{\Var P(x_0)}}{\sqrt n e^{n^{c_1/2}}}.$$
Hence, there exists $\theta\in I$ such that for all $Z\in \C$,
$$\P\left (\left|\sum_{j\in \CE} e_j \xi_j \cos(j\theta)-Z\right | \le \sqrt{\Var P(x_0)}e^{-n^{c_1/2}}m^{-16A'^{2}}/\sqrt n\right ) =O\left (m^{-A'/2}\right )=O\left (n^{-A}\right ).$$
By conditioning on the random variables not in $\CE$, we obtain
\begin{equation}
\P\left (|P_n(x_0e^{i\theta})|\le \sqrt{\Var P(x_0)}e^{-n^{c_1/2}}m^{-16A'^{2}}/\sqrt n\right ) =O\left (n^{-A}\right ).\nonumber
\end{equation}
Since $e^{-n^{c_1/2}}m^{-16A'^{2}}/\sqrt n=\Omega\left (e^{-n^{c_1}}\right )$, we obtain
\begin{equation}
\P\left (|P_n(x_0e^{i\theta})|\le \sqrt{\Var P(x_0)}e^{-n^{c_1}}\right )= O\left (n^{-A}\right ).\label{ell2}
\end{equation}
This implies \eqref{smallball_elliptic}, and therefore Condition {\bf C2} \eqref{cond-smallball} follows.
Combining \eqref{bdd2} and \eqref{smallball_elliptic} and Jensen's inequality, we get that
$$\P\left (N_{F}(B(\sqrt n x_0, 1)) \ge n^{c_1}\right )\le Cn^{-A}.$$
From this and the fact that $N_{F}(B(\sqrt n x_0, 1))$ is always at most $n$, Condition {\bf C2} \eqref{cond-poly} follows.
For Condition {\bf C2} \eqref{cond-delocal}, as we have seen above, $E_i:=\sqrt{n\choose i}|x_0|^i$ is largest at $i=i_0= \lfloor 1+\frac{(n-1)x_0^{2}}{1+x_0^{2}}\rfloor\in [\frac{n^{2\ep}}{4}, \frac{n+1}{2}]$. It suffices to show that $E_{i_0}=O( n^{-\ep/4})\sqrt{\sum_{i} E_{i}^{2}}$, which can be deduced from showing that the consecutive terms $(E_{i})_{i=i_0-n^{\ep/2}}^{i_0+n^{\ep/2}}$ are of the same order, i.e., $E_{i}/E_{j} = \Theta(1)$. For $i$ in this window, we have
$$\frac{E_{i+1}^{2}}{E_{i}^{2}} = \frac{|x_0|^{2}(n-i)}{i+1} = \Theta\left (\frac{n-i+1}{n-i_0+1}\frac{i_0+1}{i+1}\right )=\Theta\left (1+\frac{1}{n^{\ep}}\right ).$$
Thus for all $i, j$ in the above window,
$$\frac{E_{i}}{E_{j}} = \Theta\left (1+\frac{1}{n^{\ep}}\right )^{n^{\ep/2}} = \Theta(1)$$
as needed. So the conclusion of Theorem \ref{gcomplex} holds for $F_n$. It remains to show that the conclusion of Theorem \ref{greal} also holds.
Unfortunately, Condition {\bf C2} \eqref{cond-repulsion} does not hold for $F_n$. Note that this condition is used in the proof of Theorem \ref{greal} only to establish \eqref{repulsion_gau}, which says that for any $x\in [n^{-1/2+\ep}, 1+n^{-1/2}]$ and any sufficiently small constant $c$,
\begin{equation}
\P(N_{\tilde{F_n}}{B(\sqrt n x,2 n^{-c})}\ge 2)\le C n^{-16c/10}\label{repulsion_gau2}
\end{equation}
where $\tilde F_n$ is the corresponding function with standard gaussian coefficients.
To prove \eqref{repulsion_gau2}, we can instead use the fact that
\begin{eqnarray}
&&\P(N_{\tilde{F}}{B(\sqrt n x,2 n^{-c})}\ge 2)\le \P(N_{\tilde{F}}{B(\sqrt n x,2 n^{-c})\cap \C_{+}}\ge 1) +\nonumber\\
&& \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \P(N_{\tilde{F}}{[\sqrt n x - 2 n^{-c}, \sqrt n x + 2 n^{-c}]}\ge 2)\nonumber\\
&\le& \int\int_{B(x, 2 n^{-c-1/2})\cap \C_{+}} \rho^{(0, 1)}(z)dz + \int_{x - 2 n^{-c-1/2}}^{x + 2 n^{-c-1/2}}\int_{x - 2 n^{-c-1/2}}^{x + 2 n^{-c-1/2}} \rho^{(2, 0)}(s, t)dsdt\nonumber
\end{eqnarray}
where $\rho^{(0, 1)}$ and $\rho^{(2, 0)}$ are the $(0, 1)$- and $(2, 0)$-correlation functions of $\tilde P_n$ respectively. By \cite[Proposition 13.3]{TVpoly}, these functions are bounded for all $z\in B(x, 2 n^{-c-1/2})\cap \C_{+}$ and $s, t \in [x - 2 n^{-c-1/2}, x + 2 n^{-c-1/2}]$ as follows
$$\rho^{(2, 0)}(s, t)= O(n^{3/2})|s-t| = O(n^{1-c})$$
and
$$\rho^{(0, 1)}(z) = O(n).$$
Thus,
\begin{eqnarray}
\P(N_{\tilde{F}}{B(\sqrt n x,2 n^{-c})}\ge 2)=O(n^{-2c})\nonumber
\end{eqnarray}
giving the desired estimate.
\end{proof}
\begin{proof}[Proof of Corollary \ref{mean_elliptic}]
As mentioned in Remark \ref{remark_elliptic}, it suffices to show that
$$\E N_{P_n}[0, 1]=\frac{1}{4}\sqrt{n} + O(n^{1/2-c}).$$
We partition the interval $[0, 1]$ into two intervals $I_1:=[0, n^{-1/2+\ep}]$ and $I_2:=[n^{-1/2+\ep}, 1]$. We further partition $I_2$, where Theorem \ref{uni_elliptic} applies, into equal intervals $J_i$ of length $n^{-1/2}$. On each of these small intervals $J_{i}$, we routinely approximate its indicator function from above and below by smooth test functions and apply Theorem \ref{uni_elliptic} to these functions to obtain
$$\E N_{P_n}(J_i) - \E N_{\tilde P_n} (J_{i}) = O(n^{-c}).$$
Thus,
$$\E N_{P_n}(I_2) - \E N_{\tilde P_n} (I_2) = O(n^{1/2-c}).$$
It remains to show that the interval $I_1$ is insignificant. Note that $N_{P_n}(I_1)\le N_{P_n}B(x, 3x)$ where $x = n^{-1/2+\ep}$. By Jensen's inequality,
$$N_{P_n}B(x, 3x) \le C\log \frac{M}{|P_n(x)|}$$
where $M = \max_{|z|\le 4 x} |P_n(z)|$. By \eqref{ell1} (applied with $c_1=\ep$), on the event $\Omega'$,
$$M\le e^{n^{\ep}}\sqrt{n}\sqrt{\sum_{i =0}^{n}{n\choose i} |4x|^{2i}} = e^{n^{\ep}}\sqrt{n} (16x^{2}+1)^{n/2}\le \sqrt{n}e^{n^{3\ep}}.$$
Thus, $\P\left (\log M\ge n^{3\ep}\right )\le \frac{n}{e^{n^{\ep}}}$. Moreover, by \eqref{ell2}, we have $ \P\left (|P_n(x)|\le e^{-n^{\ep}}\right )\le n^{-A}$. Combining these bounds, we get
$$\P\left (N _{P_n}B(x, 3x)\ge Cn^{3\ep}\right )\le Cn^{-2}.$$
Hence,
$$\E N_{P_n}B(x, 3x) \le Cn^{3\ep} + n\cdot n^{-2} \le (C+1)n^{3\ep}.$$
This completes the proof.
\end{proof}
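For orientation, in the Gaussian case the elliptic (Kostlan) polynomial has exactly $\sqrt n$ expected real roots in total, and under the substitution $x = \tan\theta$ they are distributed uniformly in $\theta$; the quarter of them landing in $[0,1]$ is the source of the $\frac{1}{4}\sqrt n$ above. A Monte Carlo sketch with grid-based sign-change counting (all parameters illustrative):

```python
import math
import numpy as np

rng = np.random.default_rng(2)
n, trials = 50, 500
sqrt_binom = np.array([math.sqrt(math.comb(n, i)) for i in range(n + 1)])

# Evaluate P(x) = sum_i sqrt(C(n,i)) xi_i x^i on a grid x = tan(theta);
# for Gaussian xi the real roots are uniform in theta
theta = np.linspace(-1.55, 1.55, 3001)
x = np.tan(theta)
vander = sqrt_binom[:, None] * x[None, :] ** np.arange(n + 1)[:, None]

counts = []
for _ in range(trials):
    vals = rng.standard_normal(n + 1) @ vander
    counts.append(np.count_nonzero(np.diff(np.sign(vals)) != 0))

mean_count = float(np.mean(counts))
print(round(mean_count, 2), round(math.sqrt(n), 2))
```

The empirical mean is close to $\sqrt{50}\approx 7.07$, with a small deficit from the roots beyond the grid's $\theta$-range.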
\section{Proof of Theorem \ref{kacseries_uni} and Corollary \ref{kacseries_cor}}\label{kacseries_proof}
\begin{proof}[Proof of Theorem \ref{kacseries_uni}]
The reader may notice that this proof is quite similar to the proof of Theorem \ref{kacreal}. We nonetheless present it here for the reader's convenience.
Let us first consider the case $0<\delta<\frac{1}{K}$ for some sufficiently large constant $K>0$.
We apply Theorem \ref{greal} to the random function
$F(z) := P(z\delta/10)$
and the domain $D := \{z: 1-2\delta\le |z\delta/10|\le 1-\delta\}$.
The main task is to show that there exist positive constants $C_1, \alpha_1$ such that for any positive constants $A, c_1$, there exists a constant $C$ for which Conditions {\bf C2} \eqref{cond-poly}-{\bf C2} \eqref{cond-delocal} hold with parameters $(k+l, C_1, \alpha_1, A, c_1, C)$. For this random series, one can choose $\alpha_1=\min\{1/4, \gamma/2\}$ and $C_1=1$.
We use the following crucial property of regularly varying coefficients
\begin{lemma}\cite[Theorem 5, page 423]{feller1966introduction}\label{lmmregular_varying}
If $c_k^{2} = \frac{k^{\gamma-1}L(k)}{\Gamma(\gamma)}$, where $L$ is a slowly varying function, then
$$\lim_{a\downarrow 0} \sum_{k=0}^{\infty} c_k^{2} (1-at)^{2k} (2at)^{\gamma}/L\left (\frac{1}{a}\right )=1$$
uniformly as long as $t$ stays in a compact subset of $(0, \infty)$.
\end{lemma}
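For a quick sanity check of the lemma, take $L\equiv 1$: for $\gamma=1$ ($c_k^2=1$) and $\gamma=2$ ($c_k^2=k$) the series have closed forms, and the normalized sums can be evaluated directly; the values of $a$ and $t$ below are illustrative.

```python
# gamma = 1: sum_k x^(2k)   = 1/(1 - x^2)
# gamma = 2: sum_k k x^(2k) = x^2/(1 - x^2)^2
# With x = 1 - a*t and L = 1, multiplying by (2at)^gamma should give ~1 as a -> 0
a, t = 1e-5, 1.3
x = 1.0 - a * t
g1 = (2 * a * t) ** 1 / (1.0 - x**2)
g2 = (2 * a * t) ** 2 * x**2 / (1.0 - x**2) ** 2
print(g1, g2)
```

Both normalized sums are within $O(a)$ of $1$, as the lemma predicts.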
Moreover, for any positive constant $c'>0$, there exists a constant $C>0$ (depending on the function $L$) such that $\frac{1}{Ct^{c'}}\le L(t)\le Ct^{c'}$ for all $t>0$. This simple observation can be proven using, for example, Karamata's representation theorem \cite[Proposition 1.3.8, page 26]{bingham1989regular}.
To verify Condition {\bf C2} \eqref{cond-delocal}, we use Lemma \ref{lmmregular_varying} to get for every $w\in B(0, 1-\delta/2)$,
$$\sum_{k=0}^{\infty} c_k^{2} |w|^{2k} = \Omega\left (\delta^{-\gamma} L(\delta^{-1})\right )=\Omega\left (\delta^{-\gamma+c'}\right )$$
while
$$c_k^{2}|w|^{2k}\le Ck^{\gamma-1+c'} (1-\delta)^{2k}=O\left (\delta^{-\gamma+1-2c'}+1\right ).$$
Taking $c'$ sufficiently small, we obtain Condition {\bf C2} \eqref{cond-delocal}.
Condition {\bf C2} \eqref{cond-repulsion} follows immediately from Lemma \ref{lmmregular_varying}.
To verify Condition {\bf C2} \eqref{cond-bddn}, notice that for any $M>2$, if we condition on the event $\Omega'$ on which $|\xi_i|\le M\left (1+\delta/2\right )^{i}$ for all $i$, then for all $z\in D + B(0, 3)$, by Lemma \ref{lmmregular_varying},
\begin{equation}
|F(z)| = O(M)\sum_{i=0}^{\infty} (1+|c_i|^{2})\left (1+\delta/2\right )^{i}(1-\delta)^{i} = O(M \delta^{-\gamma-1}).\label{interm11}
\end{equation}
Thus, for every $M>2$, we have
\begin{equation}
\P\left (|F(z)| = O(M\delta^{-\gamma-1})\right )= 1-O\left (\sum _{i=0}^{\infty} \frac{1}{M\left (1+\delta/2\right )^{i}}\right )= 1- O\left (\frac{1}{M\delta}\right ).\label{bound_kacseries}
\end{equation}
Setting $M = \delta^{-A-1}$, we obtain Condition {\bf C2} \eqref{cond-bddn}.
To prove Condition {\bf C2} \eqref{cond-smallball}, we show that for any constants $A$ and $c_1>0$, there exists a constant $B>0$ such that the following holds. For every $z_0$ with $1-2\delta\le |z_0|\le 1-\delta$, there exists $z= z_0e^{i\theta}$ where $\theta\in [-\delta, \delta]$ such that for every $M\ge 1$,
\begin{equation}
\P\left (|P(z)|\le e^{-\delta^{-c_1}}e^{-BM}\right )\le \frac{B\delta^{A}}{M^{A}}.\label{smallball_kacseries}
\end{equation}
Setting $M = 1$, we obtain Condition {\bf C2} \eqref{cond-smallball}.
By writing $z_0 = re^{i\theta_0}$, the bound \eqref{smallball_kacseries} follows from a more general anti-concentration bound: there exists $\theta\in I := [\theta_0 - \delta, \theta_0 + \delta]$ such that
\begin{equation}
\sup _{Z\in \C}\P\left (|P(re^{i\theta})-Z|\le e^{-\delta^{-c_1}}e^{-BM}\right )\le \frac{B\delta^{A}}{M^{A}}.\nonumber
\end{equation}
Since the probability of being confined in a complex ball is bounded from above by the probability of its real part being confined in the corresponding interval on the real line, it suffices to show that
\begin{equation}
\sup _{Z\in \R}\P\left (\left |\sum_{j=0}^{M\delta^{-1}/2} c_j\xi_j r^{j} \cos{j\theta}-Z\right |\le e^{-\delta^{-c_1}}e^{-BM}\right )\le \frac{B\delta^{A}}{M^{A}}.\nonumber
\end{equation}
This is a direct application of Lemma \ref{lmanti_concentration}.
Finally, to prove Condition {\bf C2} \eqref{cond-poly}, from \eqref{bound_kacseries}, \eqref{smallball_kacseries}, and Jensen's inequality, we get for every $M\ge 1$
$$\P(N\ge \delta^{-c_1} + BM) = O\left (\frac{\delta^{A}}{M^{A}}\right )$$
where $N = N_{F}B(w, 2)$, $w\in D$.
Setting $c_1=1$ and $M = 1, 2, 2^{2}, \dots$, we get
$$\E N^{k+l+2}\textbf{1}_{N\ge \delta^{-1}} \le C\sum_{i=1}^{\infty} \left (\delta^{-1} + B2^{i+1}\right )^{k+l+2}\frac{\delta^{A}}{2^{iA}} \le C\delta^{A-k-l-2},$$
which is $O(1)$ once $A\ge k+l+2$.
This proves Condition {\bf C2} \eqref{cond-poly} and completes the proof for $\delta\le 1/K$. For $\delta\ge 1/K$, note that Jensen's inequality implies that
$$N_{P}B(0, 1-1/K) = O_K(1)\log\frac{\max_{w\in B(0, 1-1/2K) }|P(w)|}{\max_{w\in B(1-1/K, 1/3K)} |P(w)|}.$$
Thus, using the bounds \eqref{interm11}, \eqref{bound_kacseries}, \eqref{smallball_kacseries} for $\delta = 1-1/K$, we get for every $M\ge 1$,
$$\P(N_{P}B(0, 1-1/2K) \ge BM) = O\left (\frac{C'}{M^{A}}\right ).$$
And so, $\E N_{P}B(0, 1-1/2K) = O(1)$. The same holds for $\tilde P$, and therefore the desired result follows.
\end{proof}
\begin{proof}[Proof of Corollary \ref{kacseries_cor}]
To prove the first part of Corollary \ref{kacseries_cor}, we decompose the interval $[0, r]$ into dyadic intervals $[0, 1/2], [1-1/2, 1-1/4), \dots$, and finally $\pm [1-\delta, r]$. On each of these intervals, say $[x, y)$, we show that $\E N_{P}[x, y) - \E N_{\tilde P} [x, y) = O((1-y)^{c})$ for some positive constant $c$. This can be done routinely by approximating the indicator function of the interval $[x, y)$ by a smooth function and applying Theorem \ref{kacreal}. We omit the details as the argument is similar to the proof of Theorem \ref{comparison}.
Thanks to the first part, to prove the second part of Corollary \ref{kacseries_cor}, it suffices to prove the corresponding statement for $\tilde P$, whose coefficients are Gaussian. We adapt a strategy from \cite{FK}. For any interval $[a, b]\subset \R$, by the Kac-Rice formula (Proposition \ref{KacRice}), we have
$$\E N_{\tilde P}[a, b] = \frac{1}{\pi}\int_{a}^{b}\sqrt{f(x)}dx$$
where
$$f(x) = \frac{\left (\sum_{k=0}^{\infty} c_k^{2}x^{2k}\right )\left (\sum_{k=0}^{\infty} c_k^{2}k^{2}x^{2k-2}\right )-\left (\sum_{k=0}^{\infty} c_k^{2}kx^{2k-1}\right )^{2}}{\left (\sum_{k=0}^{\infty} c_k^{2}x^{2k}\right )^{2}}.$$
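For the classical Kac case $c_k = 1$, the three series above have closed forms and $f(x)$ simplifies to $(1-x^2)^{-2}$, so the Kac-Rice root density is $1/(\pi(1-x^2))$. The following sketch (Python; the function name and truncation level are our own illustrative choices, not part of the argument) evaluates $f$ from truncated series and checks it against this closed form:

```python
import math

def kac_rice_density(x, n_terms=400):
    """Kac-Rice root density sqrt(f(x))/pi for the Kac series (c_k = 1),
    with the three series truncated at n_terms."""
    s0 = sum(x**(2 * k) for k in range(n_terms))               # sum c_k^2 x^{2k}
    s1 = sum(k * x**(2 * k - 1) for k in range(1, n_terms))    # sum c_k^2 k x^{2k-1}
    s2 = sum(k**2 * x**(2 * k - 2) for k in range(1, n_terms)) # sum c_k^2 k^2 x^{2k-2}
    f = (s0 * s2 - s1**2) / s0**2
    return math.sqrt(f) / math.pi

# Compare against the known closed form 1/(pi (1 - x^2)) for the Gaussian Kac series.
x = 0.5
assert abs(kac_rice_density(x) - 1.0 / (math.pi * (1 - x**2))) < 1e-9
```

The agreement at a sample point illustrates the formula; the truncation at $400$ terms is far more than needed for $|x|\le 0.9$.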
Lemma \ref{lmmregular_varying} suggests that we make the transformation
$$f_n(t) := f(1-2^{-n}t).$$
Applying Lemma \ref{lmmregular_varying} with $a = 2^{-n}$ and $t\in [1, 2]$, we obtain that, uniformly in $x=1-at\in [1-2^{1-n}, 1-2^{-n}]$, as $n\to \infty$
$$\sum_{k=0}^{\infty} c_k^{2}x^{2k} \sim 2^{-\gamma} (1-x)^{-\gamma}L\left (2^{n}\right ), \sum_{k=0}^{\infty} c_k^{2}kx^{2k-1} \sim x^{-1}2^{-\gamma-1} (1-x)^{-\gamma-1}L\left (2^{n}\right )\frac{\Gamma(\gamma+1)}{\Gamma(\gamma)}$$
and
$$ \sum_{k=0}^{\infty} c_k^{2}k^{2}x^{2k-2} \sim x^{-2}2^{-\gamma-2} (1-x)^{-\gamma-2}L\left (2^{n}\right )\frac{\Gamma(\gamma+2)}{\Gamma(\gamma)}$$
where $p_n\sim q_n$ means $\lim_{n\to\infty} \frac{p_n}{q_n}=1$.
Since $\Gamma(\gamma+2) = (\gamma+1)\Gamma(\gamma+1) = \gamma(\gamma+1)\Gamma(\gamma)$, we obtain that uniformly on $t\in [1, 2]$,
$$f_n(t)\sim \gamma (2^{-n}t)^{-2}/4.$$
We have
$$\E N_{\tilde P}[1-2^{1-n}, 1-2^{-n}] = \frac{1}{\pi}\int_{1}^{2}2^{-n}\sqrt{f_n(t)}dt.$$
By uniform convergence, we obtain
$$ \E N_{\tilde P}[1-2^{1-n}, 1-2^{-n}] \sim \frac{\sqrt\gamma\ln 2}{2\pi}.$$
By Ces\`aro summation, we obtain
$$\frac{1}{n}\E N_{\tilde P}[0, 1-2^{-n}] = \frac{1}{n}\sum_{k=1}^{n}\E N_{\tilde P}[1-2^{1-k}, 1-2^{-k}] \sim \frac{\sqrt\gamma\ln 2}{2\pi}.$$
For each $r\in (0, 1)$, sandwiching $\E N_{\tilde P}[0, r]$ between $\E N_{\tilde P}[0, 1-2^{1-n}]$ and $\E N_{\tilde P}[0, 1-2^{-n}]$ (i.e., $n-1 = \lfloor -\log_{2}(1-r)\rfloor$), we get
$$\frac{1}{-\log (1-r)}\E N_{\tilde P}[0, r]\sim \frac{\sqrt\gamma}{2\pi},$$
as desired.
\end{proof}
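For the Kac case ($c_k = 1$, hence $\gamma = 1$ and $L\equiv 1$) the dyadic computation above can be checked directly: the Kac-Rice density $1/(\pi(1-x^2))$ integrates to $\E N_{\tilde P}[0, r] = \frac{1}{2\pi}\ln\frac{1+r}{1-r}$, and the dyadic increments indeed approach $\sqrt\gamma \ln 2/(2\pi)$. A quick numerical sketch (Python; the closed form is standard for the Gaussian Kac series, the tolerance is our own choice):

```python
import math

def expected_roots(r):
    """E N[0, r] for the Gaussian Kac series, from the Kac-Rice closed form."""
    return math.log((1 + r) / (1 - r)) / (2 * math.pi)

limit = math.log(2) / (2 * math.pi)  # sqrt(gamma) * ln 2 / (2 pi) with gamma = 1
for n in range(5, 30):
    # Expected number of roots in the dyadic interval [1 - 2^{1-n}, 1 - 2^{-n}].
    increment = expected_roots(1 - 2.0**(-n)) - expected_roots(1 - 2.0**(1 - n))
    assert abs(increment - limit) < 2.0**(-n)
```

The error of the $n$-th increment is of order $2^{-n}$, consistent with the uniform convergence used in the proof.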
\emph{Acknowledgements.} The authors would like to thank Asaf Ferber and Yuval Peres for helpful remarks that led to some simplifications of our proofs.
\bibliographystyle{plain}
% https://arxiv.org/abs/2006.02429
\title{Infinite co-minimal pairs involving lacunary sequences and generalisations to higher dimensions}
\begin{abstract}
The study of minimal complements in a group or a semigroup was initiated by Nathanson. The notion of minimal complements and of being a minimal complement leads to the notion of co-minimal pairs, which was considered in a prior work of the authors. In this article, we study which types of subsets of the integers and of free abelian groups of higher rank can be part of a co-minimal pair. We show that a majority of lacunary sequences have this property. From the conditions established, one can show that any infinite subset of any finitely generated abelian group has uncountably many subsets, each of which is part of a co-minimal pair. Further, the uncountable collection of sets can be chosen so that they satisfy certain algebraic properties.
\end{abstract}
\section{Introduction and Motivation}
Let $(G,+)$ be an abelian group and $W\subseteq G$ be a nonempty subset. A nonempty set $W'\subseteq G$ is said to be an \textit{additive complement} to $W$ if $W + W' = G.$ Additive complements have long been studied in the context of representations of the integers; e.g., they appear in the works of Erd\H{o}s, Hanani, Lorentz and others. See \cite{Lorentz54, Erdos54, ErdosSomeUnsolved57} etc. The notion of minimal additive complements for subsets of groups was introduced by Nathanson in \cite{NathansonAddNT4}. An additive complement $W'$ to $W$ is said to be minimal if no proper subset of $W'$ is an additive complement to $W$, i.e.,
$$W + W' = G \,\text{ and }\, W + (W'\setminus \lbrace w'\rbrace)\subsetneq G \,\,\, \forall w'\in W'.$$
Minimal complements are intimately connected with the existence of minimal nets in groups. See \cite[Section 2]{NathansonAddNT4} and \cite[Section 2.1]{MinComp1}. Further, in the case of the additive group $\mathbb{Z}$, they are related to the study of minimal representations. See \cite[Section 3]{NathansonAddNT4}.
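The notions above can be made concrete in a finite model. A sketch (Python, in $\mathbb{Z}/n\mathbb{Z}$; the helper names are ours) that checks whether a subset is an additive complement and whether it is minimal:

```python
def is_complement(W, Wp, n):
    """Check W + W' = Z/nZ."""
    sums = {(w + wp) % n for w in W for wp in Wp}
    return len(sums) == n

def is_minimal_complement(W, Wp, n):
    """W' is a minimal complement to W if it is a complement and
    no proper subset of it is a complement."""
    return is_complement(W, Wp, n) and all(
        not is_complement(W, Wp - {x}, n) for x in Wp)

# In Z/6Z: W = {0, 1} has complement {0, 2, 4}, and that complement is minimal.
W, Wp = {0, 1}, {0, 2, 4}
assert is_minimal_complement(W, Wp, 6)
# Adding a redundant element destroys minimality.
assert not is_minimal_complement(W, Wp | {3}, 6)
```

Since it suffices to test removal of single elements (removing more only shrinks the sumset), this check is exactly the displayed condition.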
Given two nonempty subsets $A, B$ of a group $G$, they are said to form a co-minimal pair if $A \cdot B = G$, and $A' \cdot B \subsetneq G$ for any $\emptyset \neq A' \subsetneq A$ and $A\cdot B' \subsetneq G$ for any $\emptyset \neq B' \subsetneq B$. Thus, they are pairs $(A, B)$ of subsets of $G$ such that each element of the pair is a minimal complement to the other. Co-minimal pairs in essence capture the tightness of a set and its complement. Further, they are a strict strengthening of the notion of minimal complements, in the sense that a non-empty set $A$ might admit a minimal complement but might not be a part of a co-minimal pair. See \cite[Lemma 2.1]{CoMin1}.
Which sort of sets can be a part of a co-minimal pair is an interesting question. It was shown in \cite[Theorem B]{CoMin1} that if $G$ is a free abelian group (of any rank $\geq 1$), then, given any non-empty finite set $A$, there exists another set $B$ such that $(A,B)$ forms a co-minimal pair.
Moreover, in a very recent work of Alon, Kravitz and Larson, they establish that any nonempty finite subset of an infinite abelian group is a minimal complement to some subset \cite[Theorem 2]{AlonKravitzLarson}, and this implies that the statement of \cite[Theorem B]{CoMin1} holds for nonempty finite subsets of any infinite abelian group.
However, the existence of non-trivial co-minimal pairs involving infinite subsets $A$ and $B$ was unknown until recently. It was shown in \cite{CoMin2} that such pairs exist, and explicit constructions of two such pairs in the integers $\mathbb{Z}$ were provided. The aim of this article is to establish that infinite co-minimal pairs are abundant along a majority of lacunary sequences in the integers and, from there, to draw conclusions on which sorts of infinite subsets can be a part of a co-minimal pair. We consider the underlying set associated with a sequence, which implies some sparseness between successive elements of the underlying set. These subsets also satisfy a number of algebraic and combinatorial properties. Moreover, the constructions generalise to any free abelian group of finite rank.
\subsection{Statement of results}
First, let us recall the notion of lacunary sequences.
\begin{definition}[Lacunary sequence]
A lacunary sequence is a sequence of numbers $\lbrace x_{n}\rbrace_{n\in \mathbb{N}}$ such that $\frac{x_{n+1}}{x_{n}}\geqslant \lambda >1$ for all $n\in \mathbb{N}$.
\end{definition}
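As a sanity check, the growth condition can be verified on an initial segment of a sequence. A one-line sketch (Python; illustrative only):

```python
def is_lacunary(xs, lam):
    """Check x_{n+1}/x_n >= lam > 1 along a finite initial segment."""
    return lam > 1 and all(b >= lam * a for a, b in zip(xs, xs[1:]))

powers_of_two = [2**n for n in range(20)]
assert is_lacunary(powers_of_two, 2)             # ratio exactly 2
assert not is_lacunary(list(range(1, 20)), 1.5)  # polynomial growth fails
```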
Additive complements of lacunary sequences also have a long history. For a brief account, see \cite[Section 1]{RuzsaLacunary}. However, the study of minimal additive complements and co-minimal pairs involving lacunary sequences is new. Our first result concerns co-minimal pairs involving lacunary sequences. To avoid introducing cumbersome notation from the beginning, we state a simplified version of Theorem \ref{Thm} below.
\begin{theoremIntro}
\label{ThmA}
In the additive group of the integers, a ``majority'' of lacunary sequences have the property that they belong to a co-minimal pair.
\end{theoremIntro}
By a ``majority'', we mean the following: a lacunary sequence $\lbrace x_{n}\rbrace_{n\in \mathbb{N}}$ is defined by the growth condition $\frac{x_{n+1}}{x_{n}}\geqslant \lambda$ with $\lambda \in (1,+\infty)$. The lacunary sequences which satisfy Theorem \ref{ThmA} have $\lambda \in[6,+\infty)$, and in some cases even $\lambda \in (3,+\infty)$. Further, the results generalise to $\mathbb{Z}^{d}$. See Theorem \ref{Thm} for the complete statement. It is worth mentioning that in \cite{CoMin2} it was established that the set consisting of the terms of the lacunary sequence $1, 2, 2^2, 2^3, \cdots$ is a part of a co-minimal pair.
Next, we consider the subsets of $\ensuremath{\mathbb{Z}}$ of the following types.
\begin{enumerate}[(Type 1)]
\item Symmetric subsets of $\ensuremath{\mathbb{Z}}$ containing the origin,
\item Symmetric subsets of $\ensuremath{\mathbb{Z}}$ not containing the origin,
\item Subsets of $\ensuremath{\mathbb{Z}}$ which are bounded from above or from below.
\end{enumerate}
We investigate whether a subset of $\ensuremath{\mathbb{Z}}$ of the above types can be a part of a co-minimal pair with a subset of $\ensuremath{\mathbb{Z}}$ of the above types. We establish certain sufficient conditions for this to be true. In fact, we have the following result.
\begin{theoremIntro}
For (I, II) equal to any one of
$$(1, 1), (1, 2), (1, 3), (2, 1), (2, 2), (2, 3), (3, 3),$$
there are uncountably many subsets of $\ensuremath{\mathbb{Z}}$ of Type I which form a co-minimal pair with a subset of $\ensuremath{\mathbb{Z}}$ of Type II.
\end{theoremIntro}
Again, the above is a specialised version of a more general theorem which holds for $\mathbb{Z}^{d}$. See Theorem \ref{Thm:Uncountable}.
\section{Co-minimal pairs involving lacunary sequences and generalisations}
Let $d\geq 1$ be an integer. Denote the lexicographic order on $\ensuremath{\mathbb{Z}}^d$ by the symbol $<$. The element $(1, \cdots, 1)$ of $\ensuremath{\mathbb{Z}}^d$ is denoted by $\ensuremath{\mathbf{1}}$.
For $?\in \{<, \leq, > , \geq\}$ and $x\in \ensuremath{\mathbb{Z}}^d$, the set $\{n\in \ensuremath{\mathbb{Z}}^d\,|\, n?x\}$ is denoted by $\ensuremath{\mathbb{Z}}^d_{?x}$.
Let $t_0 < t_1 < t_2 < \cdots$ be elements of $\ensuremath{\mathbb{Z}}^d$. Assume that $t_0 > 0$,
$$t_n \geq 2 t_{n-1}
\quad
\text{ for all }
n \geq 1,$$
and for some $n\geq 0$, all the coordinates of $t_n$ are positive. Let $\ensuremath{\mathcal{T}}, \ensuremath{\mathcal{V}}, \ensuremath{\mathcal {W}}$ be subsets of $\ensuremath{\mathbb{Z}}^d$ defined by
\begin{align*}
\ensuremath{\mathcal{T}}
& = \{t_0 , t_1, t_2, \cdots \},\\
\ensuremath{\mathcal{V}}
& = \ensuremath{\mathcal{T}} \cup (-\ensuremath{\mathcal{T}}),\\
\ensuremath{\mathcal {W}}
& = \ensuremath{\mathcal{T}} \cup \{0\} \cup (-\ensuremath{\mathcal{T}}).
\end{align*}
Then, the following holds,
\begin{theorem}
\label{Thm}
\quad
\begin{enumerate}
\item
If
$$t_n > 3 t_{n-1}
\quad
\text{ for all }
n \geq 1,$$
then the set
$\ensuremath{\mathcal {W}}$
and a symmetric subset $\ensuremath{\mathcal{E}}$ of $\ensuremath{\mathbb{Z}}^d$ containing the origin form a co-minimal pair in $\ensuremath{\mathbb{Z}}^d$.
\item
If
$$t_n > 3 t_{n-1}
\quad
\text{ for all }
n \geq 1,$$
then the set
$\ensuremath{\mathcal {W}}$
and a symmetric subset $\ensuremath{\mathcal{F}}$ of $\ensuremath{\mathbb{Z}}^d$ not containing the origin form a co-minimal pair in $\ensuremath{\mathbb{Z}}^d$.
\item
If
$$t_n \geq 6 t_{n-1}
\quad
\text{ for all }
n \geq 1,$$
then the set
$\ensuremath{\mathcal {W}}$
and a subset $\ensuremath{\mathcal{G}}$ of $\ensuremath{\mathbb{Z}}^d\setminus \ensuremath{\mathbb{Z}}^d_{\geq 0}$ form a co-minimal pair in $\ensuremath{\mathbb{Z}}^d$.
\item
If
$$t_n > 3 t_{n-1}
\quad
\text{ for all }
n \geq 1,$$
then the set
$\ensuremath{\mathcal{V}}$
and a symmetric subset $\ensuremath{\mathcal{P}}$ of $\ensuremath{\mathbb{Z}}^d$ containing the origin form a co-minimal pair in $\ensuremath{\mathbb{Z}}^d$.
\item
If
$$t_n > 3 t_{n-1}
\quad
\text{ for all }
n \geq 1,$$
then the set
$\ensuremath{\mathcal{V}}$
and a symmetric subset $\ensuremath{\mathcal{Q}}$ of $\ensuremath{\mathbb{Z}}^d$ not containing the origin form a co-minimal pair in $\ensuremath{\mathbb{Z}}^d$.
\item
If
$$t_n \geq 6 t_{n-1}
\quad
\text{ for all }
n \geq 1,$$
then the set
$\ensuremath{\mathcal{V}}$
and a subset $\ensuremath{\mathcal{R}}$ of $\ensuremath{\mathbb{Z}}^d\setminus \ensuremath{\mathbb{Z}}^d_{\geq 0}$ form a co-minimal pair in $\ensuremath{\mathbb{Z}}^d$.
\item
If
$$t_n \geq 2 t_{n-1}
\quad
\text{ for all }
n \geq 1,$$
then the set
$\ensuremath{\mathcal{T}}$
is a minimal complement to some subset of $\ensuremath{\mathbb{Z}}^d\setminus \ensuremath{\mathbb{Z}}^d_{\geq 0}$.
In addition, if at least one coordinate of the sequence of points $\{t_n - 2t_{n-1}\}_{n\geq 1}$ goes to $\infty$,
then the set
$\ensuremath{\mathcal{T}}$
and a subset $\ensuremath{\mathcal{S}}$ of $\ensuremath{\mathbb{Z}}^d\setminus \ensuremath{\mathbb{Z}}^d_{\geq 0}$ form a co-minimal pair in $\ensuremath{\mathbb{Z}}^d$.
\end{enumerate}
\end{theorem}
\begin{theorem}
\label{Thm:UncountableSubsets}
Any infinite subset of any finitely generated abelian group has uncountably many subsets, each of which is a part of a co-minimal pair. In particular, any infinite subset of $\ensuremath{\mathbb{Z}}^d$ has uncountably many subsets which admit minimal complements.
\end{theorem}
It turns out that there are plenty of subsets of $\ensuremath{\mathbb{Z}}^d$ each of which is a part of a co-minimal pair.
\begin{theorem}
\label{Thm:Uncountable}
The group $\ensuremath{\mathbb{Z}}^d$ has uncountably many
\begin{enumerate}
\item
symmetric subsets containing the origin, each of which forms a co-minimal pair together with a symmetric subset of $\ensuremath{\mathbb{Z}}^d$ containing the origin,
\item
symmetric subsets containing the origin, each of which forms a co-minimal pair together with a symmetric subset of $\ensuremath{\mathbb{Z}}^d$ not containing the origin,
\item
symmetric subsets containing the origin, each of which forms a co-minimal pair together with a subset of $\ensuremath{\mathbb{Z}}^d$ which avoids $\ensuremath{\mathbb{Z}}^d_{\geq 0}$,
\item
symmetric subsets not containing the origin, each of which forms a co-minimal pair together with a symmetric subset of $\ensuremath{\mathbb{Z}}^d$ containing the origin,
\item
symmetric subsets not containing the origin, each of which forms a co-minimal pair together with a symmetric subset of $\ensuremath{\mathbb{Z}}^d$ not containing the origin,
\item
symmetric subsets not containing the origin, each of which forms a co-minimal pair together with a subset of $\ensuremath{\mathbb{Z}}^d$ which avoids $\ensuremath{\mathbb{Z}}^d_{\geq 0}$,
\item
subsets contained in $\ensuremath{\mathbb{Z}}^d_{\geq 0}$, each of which forms a co-minimal pair together with a subset of $\ensuremath{\mathbb{Z}}^d$ which avoids $\ensuremath{\mathbb{Z}}^d_{\geq 0}$.
\end{enumerate}
\end{theorem}
\section{Proofs}
For two points $P, Q\in \ensuremath{\mathbb{Z}}^d$ satisfying $P\leq Q$, let $\ensuremath{\mathcal {X}}_{P, Q}$ denote the subset of $\ensuremath{\mathbb{Z}}^d$ defined by
$$
\ensuremath{\mathcal {X}}_{P, Q}
=
\ensuremath{\mathbb{Z}}^d_{\geq P}
\setminus
\ensuremath{\mathbb{Z}}^d_{\geq Q}
.
$$
\begin{lemma}
\label{Lemma:SumBound}
Let $P, Q$ be points in $\ensuremath{\mathbb{Z}}^d$ satisfying $P \leq Q$, and $A$ be a nonempty subset of $\ensuremath{\mathbb{Z}}^d$. For any $v\in \ensuremath{\mathbb{Z}}^d$, the inclusion
$$\ensuremath{\mathcal {X}}_{P, Q} + A_{\leq v} \subseteq
\ensuremath{\mathbb{Z}}^d
\setminus
\ensuremath{\mathbb{Z}}^d_{\geq Q+v} $$
holds.
\end{lemma}
\begin{proof}
Note that
\begin{align*}
\ensuremath{\mathcal {X}}_{P, Q} + A_{\leq v}
& \subseteq
\cup _{a\in A , a \leq v} (\ensuremath{\mathcal {X}}_{P, Q} +a)\\
& \subseteq
\cup _{a\in A , a \leq v} \ensuremath{\mathcal {X}}_{P+a, Q+a}\\
& \subseteq
(\cup _{a\in A , a \leq v} \ensuremath{\mathbb{Z}}^d_{\geq P+a} )
\setminus
(\cup _{a\in A , a \leq v} \ensuremath{\mathbb{Z}}^d_{\geq Q+a} )\\
& \subseteq
(\cup _{a\in A , a \leq v} \ensuremath{\mathbb{Z}}^d_{\geq P+a} )
\setminus
\ensuremath{\mathbb{Z}}^d_{\geq Q+v} \\
& \subseteq
\ensuremath{\mathbb{Z}}^d
\setminus
\ensuremath{\mathbb{Z}}^d_{\geq Q+v} .
\end{align*}
\end{proof}
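In rank one, the lexicographic order is the usual order on $\ensuremath{\mathbb{Z}}$ and $\ensuremath{\mathcal {X}}_{P,Q} = [P, Q)\cap \ensuremath{\mathbb{Z}}$, so the lemma can be checked on finite windows. A sketch (Python, assuming $d = 1$; the sample sets are arbitrary):

```python
def X(P, Q):
    """X_{P,Q} = Z_{>=P} minus Z_{>=Q}, i.e. the interval [P, Q), in rank one."""
    return set(range(P, Q))

def check_sum_bound(P, Q, A, v):
    """Lemma: X_{P,Q} + A_{<=v} contains no element >= Q + v."""
    A_le_v = {a for a in A if a <= v}
    sums = {x + a for x in X(P, Q) for a in A_le_v}
    return all(s < Q + v for s in sums)

assert check_sum_bound(-10, -3, {-7, -1, 0, 4, 9}, 4)
assert check_sum_bound(0, 5, set(range(-20, 20)), 10)
```

In rank one the lemma is immediate (every element of $\ensuremath{\mathcal {X}}_{P,Q}$ is at most $Q-1$ and every summand is at most $v$); the point of the lemma is that the same translation argument works for the lexicographic order on $\ensuremath{\mathbb{Z}}^d$.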
Set
$$t_{-2} = t_{-1} = 0.$$
Define the subsets $\{\ensuremath{\mathcal I}_n\}_{n\geq 0}$ of $\ensuremath{\mathbb{Z}}^d$ as follows.
$$\ensuremath{\mathcal I}_n
=
\ensuremath{\mathcal {X}}_{-t_n, -t_{n-1} }.
$$
Consider the subsets $\{\ensuremath{\mathcal J}_n\}_{n\geq 0}$ of $\ensuremath{\mathbb{Z}}^d$ defined by
$$\ensuremath{\mathcal J}_n
=
\ensuremath{\mathcal {X}}_{-t_n, -t_{n-1} -t_{n-2}}.
$$
These subsets make sense since $t_n \geq 2 t_{n-1}$ holds for $n\geq 0$.
Note that for $n\geq 0$,
$$
-t_{n-1} - t_{n-2}
\leq
-t_{n-1}
$$
holds, which implies
$$
\ensuremath{\mathbb{Z}}^d_{\geq -t_{n-1} - t_{n-2} }
\supseteq
\ensuremath{\mathbb{Z}}^d_{\geq -t_{n-1}} ,
$$
which yields
$$\ensuremath{\mathcal J}_n \subseteq \ensuremath{\mathcal I}_n.$$
Since all the coordinates of $t_n$ are positive for some $n\geq 0$, we obtain $-t_n \to (-\infty, \cdots, -\infty)$. Thus it follows that
$$\cup _{n\geq 0} \ensuremath{\mathcal I}_n = \ensuremath{\mathbb{Z}}^d \setminus \ensuremath{\mathbb{Z}}^d_{\geq 0}.$$
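For $d = 1$ and, say, $t_n = 6^n$ (an illustrative choice satisfying $t_n \geq 6t_{n-1}$), the containment $\ensuremath{\mathcal J}_n \subseteq \ensuremath{\mathcal I}_n$ and the covering of the negative integers can be verified directly. A sketch (Python):

```python
# t_{-2} = t_{-1} = 0 and t_n = 6^n for n >= 0 (hypothetical sample sequence).
t = {-2: 0, -1: 0}
for n in range(8):
    t[n] = 6**n

def interval(P, Q):
    """X_{P,Q} = [P, Q) in rank one."""
    return set(range(P, Q))

I = {n: interval(-t[n], -t[n - 1]) for n in range(8)}            # I_n
J = {n: interval(-t[n], -t[n - 1] - t[n - 2]) for n in range(8)} # J_n

assert all(J[n] <= I[n] for n in range(8))       # J_n is a subset of I_n
covered = set().union(*I.values())
assert covered == set(range(-t[7], 0))           # union of I_n covers the negatives
```

The intervals $\ensuremath{\mathcal I}_n = [-t_n, -t_{n-1})$ abut, so their union is $[-t_7, 0)$ in this window, matching the displayed identity.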
\begin{proposition}
\label{Prop:M}
\quad
\begin{enumerate}
\item
Each of the sets
$$
\cup _{m\geq n+2} \ensuremath{\mathcal J}_m + \ensuremath{\mathcal {W}}
,
\ensuremath{\mathcal J}_{n+1} + (\ensuremath{\mathcal {W}}\setminus \{t_n\})$$
contains no point of $\ensuremath{\mathcal I}_n$ for all $n\geq 0$.
\item
For any $n\geq 1$, the inclusion
$$
(\ensuremath{\mathcal J}_0 \cup \cdots \cup (\ensuremath{\mathcal J}_n\cap \ensuremath{\mathbb{Z}}^d_{\geq -2t_{n-1}})) + \ensuremath{\mathcal {W}}
\subseteq
(\ensuremath{\mathbb{Z}}^d\setminus \ensuremath{\mathbb{Z}}^d_{\geq -t_n})
\cup
\ensuremath{\mathbb{Z}}^d_{\geq -3t_{n-1}}
$$
holds.
\item
If
$$t_n > 3 t_{n-1}
\quad
\text{ for all }
n \geq 1,$$
then the set
$$
\cup _{m\geq n+1}(- (\ensuremath{\mathcal J}_m\cap \ensuremath{\mathbb{Z}}^d_{\geq -2t_{m-1} })) + \ensuremath{\mathcal {W}}
$$
contains no point of $\ensuremath{\mathcal I}_n$ for all $n\geq 0$.
\item
For any $n\geq 1$, the inclusion
$$
(-(\ensuremath{\mathcal J}_1 \cup \cdots \cup \ensuremath{\mathcal J}_n)) + (\ensuremath{\mathcal {W}} \setminus \{-t_n\})
\subseteq
\ensuremath{\mathbb{Z}}^d_{\leq -2t_n}
\cup
(\ensuremath{\mathbb{Z}}^d \setminus \ensuremath{\mathbb{Z}}^d_{\leq -t_{n-1}})
$$
holds if
$$t_n \geq 3 t_{n-1}
\quad
\text{ for all }
n \geq 1.$$
\end{enumerate}
\end{proposition}
\begin{proof}
For $n\geq 0$, the inclusions
\begin{align*}
\ensuremath{\mathcal J}_{m} + \ensuremath{\mathcal {W}}_{\leq t_{m-2}}
& \subseteq
\ensuremath{\mathcal {X}}_{-t_m, - t_{m-1} - t_{m-2} } + \ensuremath{\mathcal {W}}_{\leq t_{m-2}}\\
& \subseteq
\ensuremath{\mathbb{Z}}^d \setminus \ensuremath{\mathbb{Z}}^d_{\geq - t_{m-1} - t_{m-2} + t_{m-2} }\\
& \subseteq
\ensuremath{\mathbb{Z}}^d \setminus \ensuremath{\mathbb{Z}}^d_{\geq - t_{m-1}}\\
& \subseteq
\ensuremath{\mathbb{Z}}^d \setminus \ensuremath{\mathbb{Z}}^d_{\geq - t_n}\\
\end{align*}
hold for $m\geq n+1$ (the second inclusion follows from Lemma \ref{Lemma:SumBound}),
the inclusions
\begin{align*}
\ensuremath{\mathcal J}_{m} + t_{m-1}
& \subseteq
\ensuremath{\mathcal {X}}_{-t_m, - t_{m-1} - t_{m-2} } + t_{m-1}\\
& =
\ensuremath{\mathcal {X}}_{-t_m+t_{m-1}, - t_{m-1} - t_{m-2} + t_{m-1}}\\
& =
\ensuremath{\mathcal {X}}_{-t_m+t_{m-1}, - t_{m-2} }\\
& \subseteq
\ensuremath{\mathcal {X}}_{-t_m+t_{m-1}, - t_n }
\end{align*}
hold for $m\geq n+2$,
the inclusions
\begin{align*}
\ensuremath{\mathcal J}_{m} + \ensuremath{\mathcal {W}}_{\geq t_m}
& \subseteq
\ensuremath{\mathbb{Z}}^d_{\geq - t_{m}} + \ensuremath{\mathcal {W}}_{\geq t_m}\\
& =
\cup _{r\geq m} \ensuremath{\mathbb{Z}}^d_{\geq - t_{m} + t_r}\\
& \subseteq
\ensuremath{\mathbb{Z}}^d_{\geq - t_{m} + t_{m}}\\
& \subseteq
\ensuremath{\mathbb{Z}}^d_{\geq 0}
\end{align*}
hold for $m\geq n+1$. This proves part (1).
For $n\geq 1$, the inclusions
\begin{align*}
(\ensuremath{\mathcal J}_0 \cup \cdots \cup \ensuremath{\mathcal J}_n) + \ensuremath{\mathcal {W}}_{\leq -t_n}
& \subseteq
\ensuremath{\mathcal {X}}_{-t_n, 0} +\ensuremath{\mathcal {W}}_{\leq -t_n}\\
& \subseteq
\ensuremath{\mathbb{Z}}^d\setminus \ensuremath{\mathbb{Z}}^d_{\geq - t_n}
\end{align*}
hold (the second inclusion follows from Lemma \ref{Lemma:SumBound}),
the inclusions
\begin{align*}
(\ensuremath{\mathcal J}_0 \cup \cdots \cup (\ensuremath{\mathcal J}_n\cap \ensuremath{\mathbb{Z}}^d_{\geq -2t_{n-1}})) + \ensuremath{\mathcal {W}}_{\geq -t_{n-1}}
& \subseteq
\ensuremath{\mathbb{Z}}^d_{\geq -2t_{n-1} } + \ensuremath{\mathcal {W}}_{\geq -t_{n-1}}\\
& \subseteq
\ensuremath{\mathbb{Z}}^d_{\geq -3t_{n-1}}
\end{align*}
hold for $r\leq n-1$. This proves part (2).
For $n\geq 0$, $m\geq n+1$, the inclusions
\begin{align*}
(- (\ensuremath{\mathcal J}_m\cap \ensuremath{\mathbb{Z}}^d_{\geq -2t_{m-1} })) - t_r
& \subseteq
\ensuremath{\mathbb{Z}}^d_{\leq 2t_{m-1} } - t_r \\
& \subseteq
\ensuremath{\mathbb{Z}}^d_{\leq 2t_{m-1} -t_m }\\
& \subseteq
\ensuremath{\mathbb{Z}}^d_{< - t_{m-1}} \\
& \subseteq
\ensuremath{\mathbb{Z}}^d_{< - t_n }
\end{align*}
hold for $r\geq m$,
the inclusions
\begin{align*}
(-\ensuremath{\mathcal J}_m) - t_r
& \subseteq
(
\ensuremath{\mathbb{Z}}^d_{\leq t_m}
\setminus
\ensuremath{\mathbb{Z}}^d_{\leq t_{m-1}}
)- t_r \\
& \subseteq
(
\ensuremath{\mathbb{Z}}^d
\setminus
\ensuremath{\mathbb{Z}}^d_{\leq t_{m-1}}
)- t_r \\
& =
\ensuremath{\mathbb{Z}}^d
\setminus
\ensuremath{\mathbb{Z}}^d_{\leq t_{m-1}-t_r}\\
& \subseteq
\ensuremath{\mathbb{Z}}^d
\setminus
\ensuremath{\mathbb{Z}}^d_{\leq t_{m-1}-t_{m-1}}\\
& \subseteq
\ensuremath{\mathbb{Z}}^d
\setminus
\ensuremath{\mathbb{Z}}^d_{\leq 0}
\end{align*}
hold for $r\leq m-1$. This proves part (3).
For $n\geq 1$, the inclusions
\begin{align*}
-(\ensuremath{\mathcal J}_1 \cup \cdots \cup \ensuremath{\mathcal J}_n) - t_r
& \subseteq
\ensuremath{\mathbb{Z}}^d \setminus \ensuremath{\mathbb{Z}}^d_{\leq t_0} -t_r\\
& =
\ensuremath{\mathbb{Z}}^d \setminus \ensuremath{\mathbb{Z}}^d_{\leq t_0 -t_r}\\
& \subseteq
\ensuremath{\mathbb{Z}}^d \setminus \ensuremath{\mathbb{Z}}^d_{\leq t_0 -t_{n-1}}\\
& \subseteq
\ensuremath{\mathbb{Z}}^d \setminus \ensuremath{\mathbb{Z}}^d_{\leq -t_{n-1}}
\end{align*}
hold for $r\leq n-1$,
the inclusions
\begin{align*}
-(\ensuremath{\mathcal J}_1 \cup \cdots \cup \ensuremath{\mathcal J}_n) - t_r
& \subseteq
\ensuremath{\mathbb{Z}}^d_{\leq t_n } - t_r\\
& \subseteq
\ensuremath{\mathbb{Z}}^d_{\leq t_n - t_{n+1}}\\
& \subseteq
\ensuremath{\mathbb{Z}}^d_{\leq -2t_n }\\
\end{align*}
hold for $r\geq n+1$. This proves part (4).
\end{proof}
\begin{lemma}
\label{Lemma:Finiteness}
Let $S$ and $T$ be nonempty subsets of an abelian group $G$ such that $S + T = G$. If the set $S$ is countable, and each element of $G$ can be expressed as a sum of an element of $S$ and an element of $T$ only in finitely many ways, then some nonempty subset of $S$ is a minimal complement to $T$.
\end{lemma}
\begin{proof}
The lemma follows when $S$ is finite.
Let us consider the case when $S$ is infinite. Let $s_1, s_2, \cdots$ be elements of $G$ such that $S = \{s_1, s_2, \cdots\}$. Define $S_1 = S$ and for each positive integer $i\geq 1$, define
$$S_{i+1}
: =
\begin{cases}
S_i \setminus \{s_i\} & \text{ if $S_i \setminus \{s_i\}$ is a complement to $T$,}\\
S_i & \text{ otherwise.}
\end{cases}
$$
Let $\ensuremath{\mathcal{S}}$ denote the subset $\cap _{i \geq 1} S_i$ of $G$. We claim that the set $\ensuremath{\mathcal{S}}$ is a minimal complement to $T$.
For each $y\in G$ and for each $i\geq 1$, there exist elements $s_{y, i}\in S_i$, $t_{y, i}\in T$ such that $$y = s_{y, i} + t_{y, i}.$$ Since each element of $G$ admits only finitely many such representations, there is an element $s_y \in S$ with $s_y = s_{y, i}$ for infinitely many $i$. Further, for such integers $i$, we have $t_y = t_{y, i}$ where $t_y: = y - s_y\in T$. This implies that $y - t_y = s_{y, i}$ holds for infinitely many $i$. Hence, for each integer $i\geq 1$, there exists an integer $\ell_i \geq i$ such that $y - t_y = s_{y, \ell_i}$, which yields $$y\in t_y + S_{\ell_i} \subseteq t_y + S_i.$$ As a consequence, $y - t_y$ lies in $\cap_{i\geq 1} S_i = \ensuremath{\mathcal{S}}$, i.e., $y\in \ensuremath{\mathcal{S}} + T$. Hence $\ensuremath{\mathcal{S}}$ is an additive complement to $T$. It follows from the construction of the sets $S_i$ that $\ensuremath{\mathcal{S}}$ is a minimal complement to $T$.
\end{proof}
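The pruning $S_1 \supseteq S_2 \supseteq \cdots$ in the proof can be imitated in a finite group, where the finiteness hypothesis is automatic. A sketch (Python, in $\mathbb{Z}/n\mathbb{Z}$; the sample sets are ours): start from a possibly redundant complement and remove elements one at a time whenever the complement property survives.

```python
def is_complement(S, T, n):
    """Check S + T = Z/nZ."""
    return {(s + t) % n for s in S for t in T} == set(range(n))

def prune_to_minimal(S, T, n):
    """Greedy analogue of the nested sets S_1, S_2, ... in the proof:
    remove s whenever S minus {s} is still a complement to T."""
    S = set(S)
    for s in sorted(S):
        if is_complement(S - {s}, T, n):
            S = S - {s}
    return S

T = {0, 1}                       # in Z/10Z
S = set(range(10))               # a wasteful complement to T
M = prune_to_minimal(S, T, 10)
assert is_complement(M, T, 10)
assert all(not is_complement(M - {s}, T, 10) for s in M)  # minimality
```

Each kept element stays non-removable later, since shrinking a set only shrinks its sumset; this is the finite shadow of the intersection argument above.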
\begin{proof}[Proof of Theorem \ref{Thm}(1)]
Define the subsets $\ensuremath{\mathcal{E}}_n$ of $\ensuremath{\mathbb{Z}}^d$ for $n\geq 0$ as follows.
$$\ensuremath{\mathcal{E}}_n
=
\begin{cases}
\{0\} & \text{ if } n = -1,\\
\emptyset & \text{ if } n = 0,\\
\{
- t_{n-1}
\}
+
(\ensuremath{\mathcal I}_{n-1} \setminus ( (\cup_{-1\leq m \leq n-1} (\ensuremath{\mathcal{E}}_m \cup (-\ensuremath{\mathcal{E}}_m))) + \ensuremath{\mathcal {W}}))
& \text{ if } n \geq 1.
\end{cases}
$$
Define the subset $\ensuremath{\mathcal{E}}$ of $\ensuremath{\mathbb{Z}}^d$ by
$$\ensuremath{\mathcal{E}} : = \cup _{n\geq -1} (\ensuremath{\mathcal{E}}_n \cup (-\ensuremath{\mathcal{E}}_n)).$$
For $n\geq 0$,
the inclusions
\begin{align*}
\ensuremath{\mathcal{E}} + \ensuremath{\mathcal {W}}
& \supseteq
(\ensuremath{\mathcal{E}}_{n+1} + t_n) \cup
( (\cup_{-1\leq m \leq n} (\ensuremath{\mathcal{E}}_m \cup (-\ensuremath{\mathcal{E}}_m))) + \ensuremath{\mathcal {W}})\\
& \supseteq
(\ensuremath{\mathcal I}_{n} \setminus ( (\cup_{-1\leq m \leq n} (\ensuremath{\mathcal{E}}_m \cup (-\ensuremath{\mathcal{E}}_m))) + \ensuremath{\mathcal {W}}))\cup
( (\cup_{-1\leq m \leq n} (\ensuremath{\mathcal{E}}_m \cup (-\ensuremath{\mathcal{E}}_m))) + \ensuremath{\mathcal {W}})\\
& \supseteq
\ensuremath{\mathcal I}_n
\end{align*}
hold and hence $-\ensuremath{\mathcal I}_n \subseteq (-\ensuremath{\mathcal{E}}) + (-\ensuremath{\mathcal {W}}) = \ensuremath{\mathcal{E}} + \ensuremath{\mathcal {W}}$.
Moreover, the inclusions
$$
\ensuremath{\mathcal{E}} + \ensuremath{\mathcal {W}}
\supseteq
\ensuremath{\mathcal{E}}_{-1} + \ensuremath{\mathcal {W}}
\supseteq \{0\}$$
hold. It follows that $\ensuremath{\mathcal {W}}$ is an additive complement to $\ensuremath{\mathcal{E}}$.
We claim that $\ensuremath{\mathcal {W}}$ is a minimal complement of $\ensuremath{\mathcal{E}}$.
Since $t_n > 3t_{n-1}$ for $n\geq 1$, from Proposition \ref{Prop:M}, it follows that for $n\geq 0$, no point of $\ensuremath{\mathcal{E}} \times \ensuremath{\mathcal {W}}$ other than $(0, -t_n)$ goes to $-t_n$ under the addition map $\ensuremath{\mathcal{E}} \times \ensuremath{\mathcal {W}} \to \ensuremath{\mathbb{Z}}^d$, and hence no point of $\ensuremath{\mathcal{E}} \times \ensuremath{\mathcal {W}}$ other than $(0, t_n)$ goes to $t_n$ under the addition map $\ensuremath{\mathcal{E}} \times \ensuremath{\mathcal {W}} \to \ensuremath{\mathbb{Z}}^d$. Thus $\ensuremath{\mathcal {W}}$ is a minimal complement of $\ensuremath{\mathcal{E}}$.
We claim that $\ensuremath{\mathcal{E}}$ is a minimal complement to $\ensuremath{\mathcal {W}}$.
On the contrary, let us assume that $\ensuremath{\mathcal{E}}$ is not a minimal complement to $\ensuremath{\mathcal {W}}$. Hence $\ensuremath{\mathcal{E}}\setminus \{e\}$ is an additive complement to $\ensuremath{\mathcal {W}}$ for some $e\in \ensuremath{\mathcal{E}}$.
Note that $e \neq 0$.
Since $\ensuremath{\mathcal{E}}$ is symmetric, we may assume that $e$ lies in $\ensuremath{\mathcal{E}}_{n+1}$ for some $n\geq 0$. Thus $t_n +e$ lies in $\ensuremath{\mathcal I}_n$.
It follows from Proposition \ref{Prop:M} that no element of $\ensuremath{\mathcal I}_n$ lies in
$$
((\cup _{m\geq n+2} \ensuremath{\mathcal{E}}_m) + \ensuremath{\mathcal {W}})
\cup
((\cup _{m\geq n+1} (-\ensuremath{\mathcal{E}}_m)) + \ensuremath{\mathcal {W}})
\cup
(\ensuremath{\mathcal{E}}_{n+1} + (\ensuremath{\mathcal {W}} \setminus \{t_n\})).$$
So
$t_n+e$ belongs to
$(
(\cup_{0 \leq m \leq n} (\ensuremath{\mathcal{E}}_m \cup (-\ensuremath{\mathcal{E}}_m)))
+ \ensuremath{\mathcal {W}}
)
\cup
((\ensuremath{\mathcal{E}}_{n+1} \setminus\{e\}) + \{t_n\})$.
Since $e\in \ensuremath{\mathcal I}_{n+1}$, it follows that
$t_n + e$ lies in $(\ensuremath{\mathcal{E}}_{n+1} \setminus\{e\}) + \{t_n\}$, which yields $e\in \ensuremath{\mathcal{E}}_{n+1} \setminus\{e\}$, a contradiction. Hence $\ensuremath{\mathcal{E}}$ is a minimal complement to $\ensuremath{\mathcal {W}}$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{Thm}(2)]
Define the subsets $\ensuremath{\mathcal{F}}_n$ of $\ensuremath{\mathbb{Z}}^d$ for $n\geq 0$ as follows.
$$\ensuremath{\mathcal{F}}_n
=
\begin{cases}
\{-t_0\} & \text{ if } n = 0,\\
\{
- t_{n-1}
\}
+
(\ensuremath{\mathcal I}_{n-1} \setminus ( (\cup_{0\leq m \leq n-1} (\ensuremath{\mathcal{F}}_m \cup (-\ensuremath{\mathcal{F}}_m))) + \ensuremath{\mathcal {W}}))
& \text{ if } n \geq 1.
\end{cases}
$$
Define the subset $\ensuremath{\mathcal{F}}$ of $\ensuremath{\mathbb{Z}}^d$ by
$$\ensuremath{\mathcal{F}} : = \cup _{n\geq 0} (\ensuremath{\mathcal{F}}_n \cup (-\ensuremath{\mathcal{F}}_n)).$$
For $n\geq 0$,
the inclusions
\begin{align*}
\ensuremath{\mathcal{F}} + \ensuremath{\mathcal {W}}
& \supseteq
(\ensuremath{\mathcal{F}}_{n+1} + t_n) \cup
( (\cup_{0\leq m \leq n} (\ensuremath{\mathcal{F}}_m \cup (-\ensuremath{\mathcal{F}}_m))) + \ensuremath{\mathcal {W}})\\
& \supseteq
(\ensuremath{\mathcal I}_{n} \setminus ( (\cup_{0\leq m \leq n} (\ensuremath{\mathcal{F}}_m \cup (-\ensuremath{\mathcal{F}}_m))) + \ensuremath{\mathcal {W}}))\cup
( (\cup_{0\leq m \leq n} (\ensuremath{\mathcal{F}}_m \cup (-\ensuremath{\mathcal{F}}_m))) + \ensuremath{\mathcal {W}})\\
& \supseteq
\ensuremath{\mathcal I}_n
\end{align*}
hold and hence $-\ensuremath{\mathcal I}_n \subseteq (-\ensuremath{\mathcal{F}}) + (-\ensuremath{\mathcal {W}}) = \ensuremath{\mathcal{F}} + \ensuremath{\mathcal {W}}$.
Moreover, the inclusions
$$
\ensuremath{\mathcal{F}} + \ensuremath{\mathcal {W}}
\supseteq
\ensuremath{\mathcal{F}}_{0} + \ensuremath{\mathcal {W}}
\supseteq \{0\}$$
hold. It follows that $\ensuremath{\mathcal {W}}$ is an additive complement to $\ensuremath{\mathcal{F}}$.
We claim that $\ensuremath{\mathcal {W}}$ is a minimal complement of $\ensuremath{\mathcal{F}}$.
Since $t_n > 3t_{n-1}$ for $n\geq 1$, from Proposition \ref{Prop:M}, it follows that for $n\geq 0$, no point of $\ensuremath{\mathcal{F}} \times \ensuremath{\mathcal {W}}$ other than $(-2t_n, t_n)$ goes to $-t_n$ under the addition map $\ensuremath{\mathcal{F}} \times \ensuremath{\mathcal {W}} \to \ensuremath{\mathbb{Z}}^d$, and hence no point of $\ensuremath{\mathcal{F}} \times \ensuremath{\mathcal {W}}$ other than $(2t_n, -t_n)$ goes to $t_n$ under the addition map $\ensuremath{\mathcal{F}} \times \ensuremath{\mathcal {W}} \to \ensuremath{\mathbb{Z}}^d$. Thus $\ensuremath{\mathcal {W}}$ is a minimal complement of $\ensuremath{\mathcal{F}}$.
We claim that $\ensuremath{\mathcal{F}}$ is a minimal complement to $\ensuremath{\mathcal {W}}$.
On the contrary, let us assume that $\ensuremath{\mathcal{F}}$ is not a minimal complement to $\ensuremath{\mathcal {W}}$. Hence $\ensuremath{\mathcal{F}}\setminus \{f\}$ is an additive complement to $\ensuremath{\mathcal {W}}$ for some $f\in \ensuremath{\mathcal{F}}$.
Since $\ensuremath{\mathcal{F}}$ is symmetric, we may assume that $f$ lies in $\ensuremath{\mathcal{F}}_{n+1}$ for some $n\geq -1$.
Since no point of $\ensuremath{\mathcal{F}} \times \ensuremath{\mathcal {W}}$ other than $(-t_0, -t_0)$ goes to $-2t_0$ under the addition map $\ensuremath{\mathcal{F}} \times \ensuremath{\mathcal {W}} \to \ensuremath{\mathbb{Z}}^d$, it follows that $f \neq -t_0$, i.e., $f\notin \ensuremath{\mathcal{F}}_0$.
So $f$ lies in $\ensuremath{\mathcal{F}}_{n+1}$ for some $n\geq 0$. Thus $t_n +f$ lies in $\ensuremath{\mathcal I}_n$.
It follows from Proposition \ref{Prop:M} that no element of $\ensuremath{\mathcal I}_n$ lies in
$$
((\cup _{m\geq n+2} \ensuremath{\mathcal{F}}_m) + \ensuremath{\mathcal {W}})
\cup
((\cup _{m\geq n+1} (-\ensuremath{\mathcal{F}}_m)) + \ensuremath{\mathcal {W}})
\cup
(\ensuremath{\mathcal{F}}_{n+1} + (\ensuremath{\mathcal {W}} \setminus \{t_n\})).$$
So
$t_n+f$ belongs to
$(
(\cup_{0 \leq m \leq n} (\ensuremath{\mathcal{F}}_m \cup (-\ensuremath{\mathcal{F}}_m)))
+ \ensuremath{\mathcal {W}}
)
\cup
((\ensuremath{\mathcal{F}}_{n+1} \setminus\{f\}) + \{t_n\})$.
Since $f\in \ensuremath{\mathcal I}_{n+1}$, it follows that
$t_n + f$ lies in $(\ensuremath{\mathcal{F}}_{n+1} \setminus\{f\}) + \{t_n\}$, which yields $f\in \ensuremath{\mathcal{F}}_{n+1} \setminus\{f\}$, which is absurd. Hence $\ensuremath{\mathcal{F}}$ is a minimal complement of $\ensuremath{\mathcal {W}}$.
\end{proof}
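The recursion defining the sets $\ensuremath{\mathcal{F}}_n$ in the proof above can be carried out numerically on a finite window. The following sketch (in dimension $d = 1$) is illustrative only: the choices $t_n = 6^n$, $\ensuremath{\mathcal I}_0 = \{x \,|\, -t_0 \leq x < 0\}$, $\ensuremath{\mathcal I}_n = \{x \,|\, -t_n \leq x < -t_{n-1}\}$ for $n\geq 1$, and a truncated $\ensuremath{\mathcal {W}} = \{\pm t_n\}$ are assumptions consistent with this excerpt; the actual definitions are fixed earlier in the paper.

```python
# Illustrative sketch (NOT the paper's setup verbatim): dimension d = 1,
# t_n = 6^n, I_0 = [-t_0, 0), I_n = [-t_n, -t_{n-1}) for n >= 1, and
# W = {+-t_n} truncated to finitely many levels.  All of these are
# assumptions made for the sake of the experiment.

N = 4
t = [6**n for n in range(N + 2)]            # t_0, ..., t_{N+1}
W = {s * tn for tn in t for s in (1, -1)}   # truncated symmetric set {+-6^k}

F_parts = [{-t[0]}]                         # F_0 = {-t_0}
for n in range(1, N + 1):
    # Elements of I_{n-1} already covered by (F_0 u ... u F_{n-1}, symmetrised) + W.
    sym = {s * f for part in F_parts for f in part for s in (1, -1)}
    covered = {f + w for f in sym for w in W}
    I_prev = set(range(-t[n - 1], -t[n - 2] if n >= 2 else 0))
    # F_n = {-t_{n-1}} + (I_{n-1} \ covered), exactly as in the recursion above.
    F_parts.append({-t[n - 1] + x for x in I_prev - covered})

F = {s * f for part in F_parts for f in part for s in (1, -1)}
sumset = {f + w for f in F for w in W}

# By construction, F + W covers the symmetric window [-t_{N-1}, t_{N-1}].
window = range(-t[N - 1], t[N - 1] + 1)
print(all(x in sumset for x in window))   # True
```

The coverage check mirrors the inclusion chain displayed in the proof: each $\ensuremath{\mathcal I}_n$ splits into a part already covered and a part covered by $\ensuremath{\mathcal{F}}_{n+1} + t_n$.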
\begin{proof}[Proof of Theorem \ref{Thm}(3)]
Let $\{m_k\}_{k\geq 0}$ be an increasing sequence such that $m_0 \geq 2$ and $t_{m_k -2}\geq (k+1)\ensuremath{\mathbf{1}}$ for all $k\geq 0$. We define the subsets $G_n$ of $\ensuremath{\mathbb{Z}}^d$ for $n\geq 0$ as follows.
$$G_n
=
\begin{cases}
\ensuremath{\mathcal I}_0 & \text{ if } n = 0, \\
\ensuremath{\mathcal {X}}_{-2t_{n-1}, -t_{n-1} - t_{n-2}}
& \text{ if } n \geq 1 \text{ and } n \neq m_k \text{ for all } k \geq 0,\\
\ensuremath{\mathcal {X}}_{- t_n + k\ensuremath{\mathbf{1}}, - t_n + (k+1)\ensuremath{\mathbf{1}}}
\cup
\ensuremath{\mathcal {X}}_{-2t_{n-1}, -t_{n-1} - t_{n-2}}
& \text{ if } n \geq 1 \text{ and } n = m_k \text{ for some } k \geq 0.\\
\end{cases}
$$
Let $G$ denote the union $\cup_{n\geq 0 } G_n$. Note that $\ensuremath{\mathcal{T}}$ is an additive complement of $G$. Indeed, the inclusions
\begin{align*}
G + \ensuremath{\mathcal{T}}
& \supseteq
\left(\cup_{n\geq 1} (\ensuremath{\mathcal {X}}_{-2t_{n-1}, -t_{n-1} - t_{n-2}}+ \ensuremath{\mathcal{T}}) \right)
\bigcup
\left(\cup_{k\geq 0} (\ensuremath{\mathcal {X}}_{- t_{m_k} + k\ensuremath{\mathbf{1}}, - t_{m_k} + (k+1)\ensuremath{\mathbf{1}}}+ \ensuremath{\mathcal{T}})\right) \\
& \supseteq
(\cup_{n\geq 1} \ensuremath{\mathcal I}_{n-1} )
\cup
\ensuremath{\mathbb{Z}}^d_{\geq 0} \\
& = \ensuremath{\mathbb{Z}}^d
\end{align*}
hold, which shows that $G + \ensuremath{\mathcal{T}} = \ensuremath{\mathbb{Z}}^d$. So $\ensuremath{\mathcal {W}}$ is an additive complement to $G$.
We claim that $\ensuremath{\mathcal {W}}$ is a minimal complement of $G \setminus (\{-2t_0\}\cup \{- t_n - 3t_{n-1} \,|\,n \geq 1\})$. For $k\geq 0$, the inequalities
\begin{align*}
t_{m_k} - t_{m_k-1} - t_{m_k-2}
& =
(t_{m_k} - 2t_{m_k-1}) + (t_{m_k-1} - 2t_{m_k-2}) + t_{m_k-2} \\
& \geq t_{m_k-2} \\
& \geq (k+1)\ensuremath{\mathbf{1}}
\end{align*}
hold, which implies
$$-t_{m_k} + (k+1)\ensuremath{\mathbf{1}}
\leq
- t_{m_k-1} - t_{m_k-2} ,$$
which in turn yields
$$
\ensuremath{\mathbb{Z}}^d_{\geq -t_{m_k} + (k+1)\ensuremath{\mathbf{1}}}
\supseteq
\ensuremath{\mathbb{Z}}^d_{\geq - t_{m_k-1} - t_{m_k-2} },$$
and hence
$$
\ensuremath{\mathcal {X}}_{-t_{m_k}, -t_{m_k} + (k+1)\ensuremath{\mathbf{1}}}
\subseteq
\ensuremath{\mathcal {X}}_{-t_{m_k}, - t_{m_k-1} - t_{m_k-2} }.$$
Thus $G_n \subseteq \ensuremath{\mathcal J}_n$ for all $n\geq 0$. Let $n$ be a positive integer. Note that $-3t_{n-1}, -4t_{n-1}$ lie in $\ensuremath{\mathcal I}_n$. The inclusions
\begin{align*}
\cup_{0 \leq m < n} G_m + \ensuremath{\mathcal {W}}_{\leq -t_n}
& \subseteq
(\ensuremath{\mathbb{Z}}^d\setminus \ensuremath{\mathbb{Z}}^d_{\geq 0}) + \ensuremath{\mathbb{Z}}^d_{\leq - t_n} \\
& \subseteq
\ensuremath{\mathbb{Z}}^d\setminus \ensuremath{\mathbb{Z}}^d_{\geq -t_n}\\
& \subseteq
\ensuremath{\mathbb{Z}}^d\setminus \ensuremath{\mathbb{Z}}^d_{\geq - 4t_{n-1} }
\end{align*}
hold, and the inclusions
\begin{align*}
\cup_{0 \leq m < n} G_m + \ensuremath{\mathcal {W}}_{\geq -t_{n-1}}
& \subseteq
\ensuremath{\mathbb{Z}}^d_{\geq - t_{n-1} } + \ensuremath{\mathbb{Z}}^d_{\geq - t_{n-1}}\\
& \subseteq
\ensuremath{\mathbb{Z}}^d_{\geq - 2t_{n-1} } \\
\end{align*}
hold. Using Proposition \ref{Prop:M}, it follows that the set
$$
\cup _{m \neq n, n+1} G_m + \ensuremath{\mathcal {W}}$$
contains none of $-3t_{n-1}, -4t_{n-1}$.
Since $-3t_{n-1}, -4 t_{n-1}\in \ensuremath{\mathcal I}_n$ and $G_{n+1} + t_n$ contains $\ensuremath{\mathcal I}_n$, it follows that $- t_n - 3t_{n-1}, -t_n -4t_{n-1} \in G_{n+1}$. By Proposition \ref{Prop:M}, $G_{n+1} + (\ensuremath{\mathcal {W}} \setminus \{t_n\})$ contains no element of $\ensuremath{\mathcal I}_n$. So no point of $(\cup _{m \neq n} G_m) \times \ensuremath{\mathcal {W}}$ other than $(- t_n - 3t_{n-1}, t_n)$ goes to $-3t_{n-1}$, and no point of $(\cup _{m \neq n} G_m) \times \ensuremath{\mathcal {W}}$ other than $(- t_n - 4t_{n-1}, t_n)$ goes to $-4t_{n-1}$. Note that the inclusions
\begin{align*}
(\{-t_{n-1}\} + \ensuremath{\mathcal I}_{n-1} )+ \ensuremath{\mathcal {W}}_{> -t_{n-1}}
& \subseteq
\ensuremath{\mathbb{Z}}^d_{\geq -2t_{n-1} } + \ensuremath{\mathbb{Z}}^d_{> -t_{n-1}}\\
& \subseteq
\ensuremath{\mathbb{Z}}^d_{> -2t_{n-1} -t_{n-1}}\\
& =
\ensuremath{\mathbb{Z}}^d_{> -3t_{n-1} }
\end{align*}
hold, and the inclusions
\begin{align*}
(\{-t_{n-1}\} + \ensuremath{\mathcal I}_{n-1} )+ \ensuremath{\mathcal {W}}_{\leq -t_{n}}
& \subseteq
\ensuremath{\mathcal {X}}_{-t_{n-1}, -t_{n-2}} + \ensuremath{\mathcal {W}}_{\leq -t_{n}}\\
& \subseteq
\ensuremath{\mathcal {X}}_{-t_{n-1}, 0} + \ensuremath{\mathcal {W}}_{\leq -t_{n}}\\
& \subseteq
\ensuremath{\mathbb{Z}}^d \setminus \ensuremath{\mathbb{Z}}^d_{\geq -t_n}
\end{align*}
hold. Moreover, the inclusion
$$
\ensuremath{\mathcal {X}}_{- t_n + k\ensuremath{\mathbf{1}}, - t_n + (k+1)\ensuremath{\mathbf{1}}} + \ensuremath{\mathcal {W}}_{\geq t_n} \subseteq \ensuremath{\mathbb{Z}}^d_{\geq 0}
$$
holds when $n = m_k$ for some $k\geq 0$, and the inclusions
\begin{align*}
\ensuremath{\mathcal {X}}_{- t_n + k\ensuremath{\mathbf{1}}, - t_n + (k+1)\ensuremath{\mathbf{1}}} + \ensuremath{\mathcal {W}}_{\leq t_{n-1}}
& \subseteq
\ensuremath{\mathbb{Z}}^d\setminus \ensuremath{\mathbb{Z}}^d_{\geq - t_n + (k+1)\ensuremath{\mathbf{1}} + t_{n-1}} \\
& \subseteq
\ensuremath{\mathbb{Z}}^d\setminus \ensuremath{\mathbb{Z}}^d_{\geq - 6t_{n-1} + (k+1)\ensuremath{\mathbf{1}} + t_{n-1}} \\
& =
\ensuremath{\mathbb{Z}}^d\setminus \ensuremath{\mathbb{Z}}^d_{\geq - 5t_{n-1} + (k+1)\ensuremath{\mathbf{1}}} \\
& \subseteq
\ensuremath{\mathbb{Z}}^d\setminus \ensuremath{\mathbb{Z}}^d_{\geq - 4t_{n-1} - t_{n-2} + (k+1)\ensuremath{\mathbf{1}}} \\
& \subseteq
\ensuremath{\mathbb{Z}}^d\setminus \ensuremath{\mathbb{Z}}^d_{\geq - 4t_{n-1} }
\end{align*}
hold when $n = m_k$ for some $k\geq 0$. Also note that $G_n$ contains $- 2t_{n-1}$. It follows that no element of $G_n \times \ensuremath{\mathcal {W}}$ other than $(-2t_{n-1}, -t_{n-1})$ goes to $-3t_{n-1}$ under the addition map $G_n\times \ensuremath{\mathcal {W}} \to \ensuremath{\mathbb{Z}}^d$. Hence no element of $G \times \ensuremath{\mathcal {W}}$ other than $(-2t_{n-1}, -t_{n-1}), (- t_n - 3t_{n-1}, t_n)$ goes to $-3t_{n-1}$ under the addition map $G\times \ensuremath{\mathcal {W}} \to \ensuremath{\mathbb{Z}}^d$. Moreover, $G_n + \ensuremath{\mathcal {W}}$ does not contain $-4t_{n-1}$. Hence no element of $G \times \ensuremath{\mathcal {W}}$ other than $(- t_n - 4t_{n-1}, t_n)$ goes to $-4t_{n-1}$ under the addition map $G\times \ensuremath{\mathcal {W}} \to \ensuremath{\mathbb{Z}}^d$. Proposition \ref{Prop:M} implies that
$$ - t_0
\notin
(\cup_{m \geq 2} G_m + \ensuremath{\mathcal {W}})
\cup
(G_{1} + (\ensuremath{\mathcal {W}} \setminus \{t_{0}\})).
$$
Also note that the inclusions
\begin{align*}
G_0 + \ensuremath{\mathcal {W}}_{\geq t_0}
& \subseteq
\ensuremath{\mathcal {X}}_{-t_0, 0} + \ensuremath{\mathcal {W}}_{\geq t_0}\\
& \subseteq
\ensuremath{\mathbb{Z}}^d_{\geq -t_0} + \ensuremath{\mathcal {W}}_{\geq t_0}\\
& \subseteq
\ensuremath{\mathbb{Z}}^d_{\geq 0}
\end{align*}
hold and the inclusions
\begin{align*}
G_0 + \ensuremath{\mathcal {W}}_{\leq -t_0}
& \subseteq
\ensuremath{\mathcal {X}}_{-t_0, 0} + \ensuremath{\mathcal {W}}_{\leq -t_0}\\
& \subseteq
\ensuremath{\mathbb{Z}}^d \setminus \ensuremath{\mathbb{Z}}^d_{\geq -t_0}
\end{align*}
hold, and hence no element of $G\times \ensuremath{\mathcal {W}}$ other than $(-2t_0, t_0), (-t_0, 0)$ goes to $-t_0$ under the addition map $G\times \ensuremath{\mathcal {W}} \to \ensuremath{\mathbb{Z}}^d$. It follows that $\ensuremath{\mathcal {W}}$ is a minimal complement to $G \setminus (\{-2t_0\}\cup \{- t_n - 3t_{n-1} \,|\,n \geq 1\})$.
We claim that $\ensuremath{\mathcal {W}}$ and some subset of $G \setminus (\{-2t_0\}\cup \{- t_n - 3t_{n-1} \,|\,n \geq 1\})$ form a co-minimal pair. By Proposition \ref{Prop:M}, each element of $\ensuremath{\mathbb{Z}}^d\setminus \ensuremath{\mathbb{Z}}^d_{\geq 0} = \cup_{n\geq 0} \ensuremath{\mathcal I}_n$ can be expressed as a sum of an element of $G$ and an element of $\ensuremath{\mathcal {W}}$ only in finitely many ways. Note that the inclusions
\begin{align*}
G_m + \ensuremath{\mathcal {W}}_{\leq t_{m-1}}
& \subseteq
\ensuremath{\mathcal {X}}_{-t_m, - t_{m-1} } + \ensuremath{\mathcal {W}}_{\leq t_{m-1}}\\
& \subseteq
\ensuremath{\mathbb{Z}}^d\setminus \ensuremath{\mathbb{Z}}^d_{\geq 0}
\end{align*}
hold for $m\geq 1$, the inclusions
\begin{align*}
(\{-t_{m-1}\} + \ensuremath{\mathcal I}_{m-1} ) + \ensuremath{\mathcal {W}}_{\geq t_m}
& \subseteq
\ensuremath{\mathbb{Z}}^d_{\geq - 2t_{m-1} } + \ensuremath{\mathbb{Z}}^d_{\geq t_m}\\
& \subseteq
\ensuremath{\mathbb{Z}}^d_{\geq t_m- 2t_{m-1} }
\end{align*}
hold for $m\geq 1$,
the inclusions
$$
\ensuremath{\mathcal {X}}_{-t_{m_k} + k\ensuremath{\mathbf{1}}, -t_{m_k} + (k+1)\ensuremath{\mathbf{1}} } + t_r
\subseteq
\ensuremath{\mathbb{Z}}^d_{\geq - t_{m_k} + k\ensuremath{\mathbf{1}} + t_r}
\subseteq
\ensuremath{\mathbb{Z}}^d_{\geq k\ensuremath{\mathbf{1}}}
$$
hold for $k\geq 0$ and for $r\geq m_k$ (the second inclusion holds since $t_r \geq t_{m_k}$), and hence
$$\cup_{m\geq M} G_m + \ensuremath{\mathcal {W}}$$
does not contain any given point of $\ensuremath{\mathbb{Z}}^d_{\geq 0}$, provided $M$ is chosen large enough (depending on the point).
Hence any given element of $\ensuremath{\mathbb{Z}}^d_{\geq 0}$ can be expressed as a sum of an element of $G$ and an element of $\ensuremath{\mathcal {W}}$ only in finitely many ways. So each element of $\ensuremath{\mathbb{Z}}^d$ can be expressed as a sum of an element of $G$ and an element of $\ensuremath{\mathcal {W}}$ only in finitely many ways. In particular, each element of $\ensuremath{\mathbb{Z}}^d$ can be expressed as a sum of an element of $G \setminus (\{-2t_0\}\cup \{- t_n - 3t_{n-1} \,|\,n \geq 1\})$ and an element of $\ensuremath{\mathcal {W}}$ only in finitely many ways. By Lemma \ref{Lemma:Finiteness}, it follows that some nonempty subset $\ensuremath{\mathcal{G}}$ of $G \setminus (\{-2t_0\}\cup \{- t_n - 3t_{n-1} \,|\,n \geq 1\})$ is a minimal complement to $\ensuremath{\mathcal {W}}$. Since $\ensuremath{\mathcal {W}}$ is a minimal complement to $G \setminus (\{-2t_0\}\cup \{- t_n - 3t_{n-1} \,|\,n \geq 1\})$, it follows that $\ensuremath{\mathcal {W}}$ is a minimal complement to $\ensuremath{\mathcal{G}}$. Hence $(\ensuremath{\mathcal{G}}, \ensuremath{\mathcal {W}})$ is a co-minimal pair.
\end{proof}
\begin{proof}[Proof of Theorem \ref{Thm}(4)]
Define the subsets $\ensuremath{\mathcal{P}}_n$ of $\ensuremath{\mathbb{Z}}^d$ for $n\geq -1$ as follows.
$$\ensuremath{\mathcal{P}}_n
=
\begin{cases}
\{0\} & \text{ if } n = -1,\\
\{-t_0\} & \text{ if } n = 0,\\
\{
- t_{n-1}
\}
+
(\ensuremath{\mathcal I}_{n-1} \setminus ( (\cup_{-1\leq m \leq n-1} (\ensuremath{\mathcal{P}}_m \cup (-\ensuremath{\mathcal{P}}_m))) + \ensuremath{\mathcal{V}}))
& \text{ if } n \geq 1.
\end{cases}
$$
Define the subset $\ensuremath{\mathcal{P}}$ of $\ensuremath{\mathbb{Z}}^d$ by
$$\ensuremath{\mathcal{P}} : = \cup _{n\geq -1} (\ensuremath{\mathcal{P}}_n \cup (-\ensuremath{\mathcal{P}}_n)).$$
For $n\geq 1$,
the inclusions
\begin{align*}
\ensuremath{\mathcal{P}} + \ensuremath{\mathcal{V}}
& \supseteq
(\ensuremath{\mathcal{P}}_{n+1} + t_n) \cup
( (\cup_{-1\leq m \leq n} (\ensuremath{\mathcal{P}}_m \cup (-\ensuremath{\mathcal{P}}_m))) + \ensuremath{\mathcal{V}})\\
& \supseteq
(\ensuremath{\mathcal I}_{n} \setminus ( (\cup_{-1\leq m \leq n} (\ensuremath{\mathcal{P}}_m \cup (-\ensuremath{\mathcal{P}}_m))) + \ensuremath{\mathcal{V}}))\cup
( (\cup_{-1\leq m \leq n} (\ensuremath{\mathcal{P}}_m \cup (-\ensuremath{\mathcal{P}}_m))) + \ensuremath{\mathcal{V}})\\
& \supseteq
\ensuremath{\mathcal I}_n
\end{align*}
hold and hence $-\ensuremath{\mathcal I}_n \subseteq (-\ensuremath{\mathcal{P}}) + (-\ensuremath{\mathcal{V}}) = \ensuremath{\mathcal{P}} + \ensuremath{\mathcal{V}}$.
Moreover, the inclusions
$$\ensuremath{\mathcal I}_0\subseteq \ensuremath{\mathcal{P}}_1 + t_0, $$
$$-\ensuremath{\mathcal I}_0\subseteq (-\ensuremath{\mathcal{P}}_1) + (- t_0), $$
$$
\ensuremath{\mathcal{P}} + \ensuremath{\mathcal{V}}
\supseteq
\ensuremath{\mathcal{P}}_{0} + \ensuremath{\mathcal{V}}
\supseteq \{0\}$$
hold. It follows that $\ensuremath{\mathcal{V}}$ is an additive complement to $\ensuremath{\mathcal{P}}$.
We claim that $\ensuremath{\mathcal{V}}$ is a minimal complement of $\ensuremath{\mathcal{P}}$.
Since $t_n > 3t_{n-1}$ for $n\geq 1$, from Proposition \ref{Prop:M}, it follows that for $n\geq 0$, no point of $\ensuremath{\mathcal{P}} \times \ensuremath{\mathcal{V}}$ other than $(0, -t_n)$ goes to $-t_n$ under the addition map $\ensuremath{\mathcal{P}} \times \ensuremath{\mathcal{V}} \to \ensuremath{\mathbb{Z}}^d$, and hence no point of $\ensuremath{\mathcal{P}} \times \ensuremath{\mathcal{V}}$ other than $(0, t_n)$ goes to $t_n$ under the addition map $\ensuremath{\mathcal{P}} \times \ensuremath{\mathcal{V}} \to \ensuremath{\mathbb{Z}}^d$. Thus $\ensuremath{\mathcal{V}}$ is a minimal complement of $\ensuremath{\mathcal{P}}$.
We claim that $\ensuremath{\mathcal{P}}$ is a minimal complement to $\ensuremath{\mathcal{V}}$.
On the contrary, let us assume that $\ensuremath{\mathcal{P}}$ is not a minimal complement to $\ensuremath{\mathcal{V}}$. Hence $\ensuremath{\mathcal{P}}\setminus \{p\}$ is an additive complement to $\ensuremath{\mathcal{V}}$ for some $p\in \ensuremath{\mathcal{P}}$.
Note that $p \neq 0$.
Since $\ensuremath{\mathcal{P}}$ is symmetric, we may assume that $p\in \ensuremath{\mathcal{P}}_{n+1}$ for some $n\geq -1$.
Since no point of $\ensuremath{\mathcal{P}} \times \ensuremath{\mathcal{V}}$ other than $(-t_0, -t_0)$ goes to $-2t_0$ under the addition map $\ensuremath{\mathcal{P}} \times \ensuremath{\mathcal{V}} \to \ensuremath{\mathbb{Z}}^d$, it follows that $p \neq -t_0$, i.e., $p\notin \ensuremath{\mathcal{P}}_0$.
So $p$ lies in $\ensuremath{\mathcal{P}}_{n+1}$ for some $n\geq 0$. Thus $t_n +p$ lies in $\ensuremath{\mathcal I}_n$.
It follows from Proposition \ref{Prop:M} that no element of $\ensuremath{\mathcal I}_n$ lies in
$$
((\cup _{m\geq n+2} \ensuremath{\mathcal{P}}_m) + \ensuremath{\mathcal{V}})
\cup
((\cup _{m\geq n+1} (-\ensuremath{\mathcal{P}}_m)) + \ensuremath{\mathcal{V}})
\cup
(\ensuremath{\mathcal{P}}_{n+1} + (\ensuremath{\mathcal{V}} \setminus \{t_n\})).$$
So
$t_n+p$ belongs to
$(
(\cup_{-1 \leq m \leq n} (\ensuremath{\mathcal{P}}_m \cup (-\ensuremath{\mathcal{P}}_m)))
+ \ensuremath{\mathcal{V}}
)
\cup
((\ensuremath{\mathcal{P}}_{n+1} \setminus\{p\}) + \{t_n\})$.
Since $p\in \ensuremath{\mathcal I}_{n+1}$, it follows that
$t_n + p$ lies in $(\ensuremath{\mathcal{P}}_{n+1} \setminus\{p\}) + \{t_n\}$, which yields $p\in \ensuremath{\mathcal{P}}_{n+1} \setminus\{p\}$, which is absurd. Hence $\ensuremath{\mathcal{P}}$ is a minimal complement of $\ensuremath{\mathcal{V}}$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{Thm}(5)]
Define the subsets $\ensuremath{\mathcal{Q}}_n$ of $\ensuremath{\mathbb{Z}}^d$ for $n\geq 0$ as follows.
$$\ensuremath{\mathcal{Q}}_n
=
\begin{cases}
\{-t_0\} & \text{ if } n = 0,\\
\{
- t_{n-1}
\}
+
(\ensuremath{\mathcal I}_{n-1} \setminus ( (\cup_{0\leq m \leq n-1} (\ensuremath{\mathcal{Q}}_m \cup (-\ensuremath{\mathcal{Q}}_m))) + \ensuremath{\mathcal{V}}))
& \text{ if } n \geq 1.
\end{cases}
$$
Define the subset $\ensuremath{\mathcal{Q}}$ of $\ensuremath{\mathbb{Z}}^d$ by
$$\ensuremath{\mathcal{Q}} : = \cup _{n\geq 0} (\ensuremath{\mathcal{Q}}_n \cup (-\ensuremath{\mathcal{Q}}_n)).$$
For $n\geq 0$,
the inclusions
\begin{align*}
\ensuremath{\mathcal{Q}} + \ensuremath{\mathcal{V}}
& \supseteq
(\ensuremath{\mathcal{Q}}_{n+1} + t_n) \cup
( (\cup_{0\leq m \leq n} (\ensuremath{\mathcal{Q}}_m \cup (-\ensuremath{\mathcal{Q}}_m))) + \ensuremath{\mathcal{V}})\\
& \supseteq
(\ensuremath{\mathcal I}_{n} \setminus ( (\cup_{0\leq m \leq n} (\ensuremath{\mathcal{Q}}_m \cup (-\ensuremath{\mathcal{Q}}_m))) + \ensuremath{\mathcal{V}}))\cup
( (\cup_{0\leq m \leq n} (\ensuremath{\mathcal{Q}}_m \cup (-\ensuremath{\mathcal{Q}}_m))) + \ensuremath{\mathcal{V}})\\
& \supseteq
\ensuremath{\mathcal I}_n
\end{align*}
hold and hence $-\ensuremath{\mathcal I}_n \subseteq (-\ensuremath{\mathcal{Q}}) + (-\ensuremath{\mathcal{V}}) = \ensuremath{\mathcal{Q}} + \ensuremath{\mathcal{V}}$.
Moreover, the inclusions
$$
\ensuremath{\mathcal{Q}} + \ensuremath{\mathcal{V}}
\supseteq
\ensuremath{\mathcal{Q}}_{0} + \ensuremath{\mathcal{V}}
\supseteq \{0\}$$
hold. It follows that $\ensuremath{\mathcal{V}}$ is an additive complement to $\ensuremath{\mathcal{Q}}$.
We claim that $\ensuremath{\mathcal{V}}$ is a minimal complement of $\ensuremath{\mathcal{Q}}$.
Since $t_n > 3t_{n-1}$ for $n\geq 1$, from Proposition \ref{Prop:M}, it follows that for $n\geq 0$, no point of $\ensuremath{\mathcal{Q}} \times \ensuremath{\mathcal{V}}$ other than $(-2t_n, t_n)$ goes to $-t_n$ under the addition map $\ensuremath{\mathcal{Q}} \times \ensuremath{\mathcal{V}} \to \ensuremath{\mathbb{Z}}^d$, and hence no point of $\ensuremath{\mathcal{Q}} \times \ensuremath{\mathcal{V}}$ other than $(2t_n, -t_n)$ goes to $t_n$ under the addition map $\ensuremath{\mathcal{Q}} \times \ensuremath{\mathcal{V}} \to \ensuremath{\mathbb{Z}}^d$. Thus $\ensuremath{\mathcal{V}}$ is a minimal complement of $\ensuremath{\mathcal{Q}}$.
We claim that $\ensuremath{\mathcal{Q}}$ is a minimal complement to $\ensuremath{\mathcal{V}}$.
On the contrary, let us assume that $\ensuremath{\mathcal{Q}}$ is not a minimal complement to $\ensuremath{\mathcal{V}}$. Hence $\ensuremath{\mathcal{Q}}\setminus \{q\}$ is an additive complement to $\ensuremath{\mathcal{V}}$ for some $q\in \ensuremath{\mathcal{Q}}$.
Since $\ensuremath{\mathcal{Q}}$ is symmetric, we may assume that $q\in \ensuremath{\mathcal{Q}}_{n+1}$ for some $n\geq -1$.
Since no point of $\ensuremath{\mathcal{Q}} \times \ensuremath{\mathcal{V}}$ other than $(-t_0, -t_0)$ goes to $-2t_0$ under the addition map $\ensuremath{\mathcal{Q}} \times \ensuremath{\mathcal{V}} \to \ensuremath{\mathbb{Z}}^d$, it follows that $q \neq -t_0$, i.e., $q\notin \ensuremath{\mathcal{Q}}_0$.
So $q$ lies in $\ensuremath{\mathcal{Q}}_{n+1}$ for some $n\geq 0$. Thus $t_n +q$ lies in $\ensuremath{\mathcal I}_n$.
It follows from Proposition \ref{Prop:M} that no element of $\ensuremath{\mathcal I}_n$ lies in
$$
((\cup _{m\geq n+2} \ensuremath{\mathcal{Q}}_m) + \ensuremath{\mathcal{V}})
\cup
((\cup _{m\geq n+1} (-\ensuremath{\mathcal{Q}}_m)) + \ensuremath{\mathcal{V}})
\cup
(\ensuremath{\mathcal{Q}}_{n+1} + (\ensuremath{\mathcal{V}} \setminus \{t_n\})).$$
So
$t_n+q$ belongs to
$(
(\cup_{0 \leq m \leq n} (\ensuremath{\mathcal{Q}}_m \cup (-\ensuremath{\mathcal{Q}}_m)))
+ \ensuremath{\mathcal{V}}
)
\cup
((\ensuremath{\mathcal{Q}}_{n+1} \setminus\{q\}) + \{t_n\})$.
Since $q\in \ensuremath{\mathcal I}_{n+1}$, it follows that
$t_n + q$ lies in $(\ensuremath{\mathcal{Q}}_{n+1} \setminus\{q\}) + \{t_n\}$, which yields $q\in \ensuremath{\mathcal{Q}}_{n+1} \setminus\{q\}$, which is absurd. Hence $\ensuremath{\mathcal{Q}}$ is a minimal complement of $\ensuremath{\mathcal{V}}$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{Thm}(6)]
Let $G$ be as in the proof of Theorem \ref{Thm}(3).
Since $G + \ensuremath{\mathcal{T}} = \ensuremath{\mathbb{Z}}^d$, it follows that $\ensuremath{\mathcal{V}}$ is an additive complement to $G$.
We claim that $\ensuremath{\mathcal{V}}$ is a minimal complement of $G \setminus \{- t_n - 3t_{n-1} \,|\,n \geq 1\}$. For $n\geq 1$, no element of $G \times \ensuremath{\mathcal {W}}$ other than $(-2t_{n-1}, -t_{n-1}), (- t_n - 3t_{n-1}, t_n)$ goes to $-3t_{n-1}$ under the addition map $G\times \ensuremath{\mathcal {W}} \to \ensuremath{\mathbb{Z}}^d$, and no element of $G \times \ensuremath{\mathcal {W}}$ other than $(- t_n - 4t_{n-1}, t_n)$ goes to $-4t_{n-1}$ under the addition map $G\times \ensuremath{\mathcal {W}} \to \ensuremath{\mathbb{Z}}^d$. Moreover, no element of $G\times \ensuremath{\mathcal {W}}$ other than $(-2t_0, t_0), (-t_0, 0)$ goes to $-t_0$ under the addition map $G\times \ensuremath{\mathcal {W}} \to \ensuremath{\mathbb{Z}}^d$. It follows that $\ensuremath{\mathcal{V}}$ is a minimal complement to $G \setminus \{- t_n - 3t_{n-1} \,|\,n \geq 1\}$.
Each element of $\ensuremath{\mathbb{Z}}^d$ can be expressed as a sum of an element of $G$ and an element of $\ensuremath{\mathcal {W}}$ only in finitely many ways. In particular, each element of $\ensuremath{\mathbb{Z}}^d$ can be expressed as a sum of an element of $G \setminus \{- t_n - 3t_{n-1} \,|\,n \geq 1\}$ and an element of $\ensuremath{\mathcal{V}}$ only in finitely many ways. By Lemma \ref{Lemma:Finiteness}, it follows that some nonempty subset $\ensuremath{\mathcal{R}}$ of $G \setminus \{- t_n - 3t_{n-1} \,|\,n \geq 1\}$ is a minimal complement to $\ensuremath{\mathcal{V}}$. Since $\ensuremath{\mathcal{V}}$ is a minimal complement to $G \setminus \{- t_n - 3t_{n-1} \,|\,n \geq 1\}$, it follows that $\ensuremath{\mathcal{V}}$ is a minimal complement to $\ensuremath{\mathcal{R}}$. Hence $(\ensuremath{\mathcal{R}}, \ensuremath{\mathcal{V}})$ is a co-minimal pair.
\end{proof}
\begin{proof}[Proof of Theorem \ref{Thm}(7)]
Let $G$ be the set as in the proof of Theorem \ref{Thm}(3).
Note that $\ensuremath{\mathcal{T}}$ is an additive complement of $G$.
Let $n\geq 0$ be an integer.
Proposition \ref{Prop:M} implies that
$$ - t_n
\notin
(\cup_{m \geq n+2} G_m + \ensuremath{\mathcal{T}})
\cup
(G_{n+1} + (\ensuremath{\mathcal{T}} \setminus \{t_{n}\})).
$$
Note that the inclusions
\begin{align*}
G_m + \ensuremath{\mathcal{T}}
& \subseteq
\ensuremath{\mathbb{Z}}^d_{\geq - t_m } + \ensuremath{\mathbb{Z}}^d_{\geq t_0} \\
& \subseteq
\ensuremath{\mathbb{Z}}^d_{\geq - t_m + t_0}\\
& \subseteq
\ensuremath{\mathbb{Z}}^d_{\geq -t_{n} + t_0}
\end{align*}
hold for any $m\leq n$.
So
$$ - t_n
\notin
(\cup_{m \neq n+1} G_m + \ensuremath{\mathcal{T}})
\cup
(G_{n+1} + (\ensuremath{\mathcal{T}} \setminus \{t_{n}\})).
$$
Hence $\ensuremath{\mathcal{T}}$ is a minimal complement of $G$.
Since each element of $\ensuremath{\mathbb{Z}}^d$ can be expressed as a sum of an element of $G$ and an element of $\ensuremath{\mathcal{V}}$ only in finitely many ways, it follows that each element of $\ensuremath{\mathbb{Z}}^d$ can be expressed as a sum of an element of $G$ and an element of $\ensuremath{\mathcal{T}}$ only in finitely many ways.
By Lemma \ref{Lemma:Finiteness}, some nonempty subset $\ensuremath{\mathcal{S}}$ of $G$ is a minimal complement to $\ensuremath{\mathcal{T}}$. Since $\ensuremath{\mathcal{T}}$ is a minimal complement to $G$, it follows that $(\ensuremath{\mathcal{S}}, \ensuremath{\mathcal{T}})$ is a co-minimal pair.
\end{proof}
\begin{proof}
[Proof of Theorem \ref{Thm:UncountableSubsets}]
Let $G$ be a finitely generated abelian group. Then $G$ is isomorphic to the direct product of a finite group $G_\ensuremath{\mathrm{tors}}$ and a free abelian group $\ensuremath{\mathbb{Z}}^d$. Any infinite subset $X$ of $G$ contains an infinite subset $Y$ such that all the elements of $Y$ have the same projection to $G_\ensuremath{\mathrm{tors}}$.
It suffices to show that any infinite subset of $\ensuremath{\mathbb{Z}}^d$ has uncountably many subsets which admit minimal complements. Let $X$ be an infinite subset of $\ensuremath{\mathbb{Z}}^d$. Let $S$ be the subset consisting of the integers $1\leq i\leq d$ such that the absolute values of the $i$-th coordinate of the elements of $X$ form an unbounded set. Note that $S$ is nonempty. Thus $X$ has an infinite subset $Y$ such that the $i$-th coordinate of all the elements of $Y$ are equal for any $i\in \{1, 2, \cdots, d\}\setminus S$, and the absolute values of the $i$-th coordinate of the elements of $Y$ form an unbounded set for any $i\in S$.
Replacing $X$ by one of its translates, we may assume that the $i$-th coordinates of the elements of $Y$ are equal to $0$ for any $i\in \{1, 2, \cdots, d\}\setminus S$.
Replacing $X$ by its image under an automorphism of $\ensuremath{\mathbb{Z}}^d$, we may assume that $Y$ has a subset $Z$ such that the $i$-th coordinates of the elements of $Z$ form an infinite subset of $\ensuremath{\mathbb{Z}}_{\geq 1}$ for any $i\in S$.
Let $d'$ denote the cardinality of $S$. It suffices to show that if $A$ is an infinite subset of $\ensuremath{\mathbb{Z}}^{d'}$ such that for any $1\leq i \leq d'$, the $i$-th coordinates of the points of $A$ form an infinite subset of $\ensuremath{\mathbb{Z}}_{\geq 1}$, then $A$ has uncountably many subsets which admit minimal complements.
Note that there is a sequence $\{x_n\}_{n\geq 0}$ contained in $A$ such that $x_n\geq 6x_{n-1}$ for all $n\geq 1$. For any subsequence $\{x_{n_k}\}$ of this sequence, the inequality $x_{n_k} \geq 6 x_{n_{k-1}}$ holds for any $k\geq 1$ and moreover, $x_{n_0} > 0$ holds and all the coordinates of any term of this subsequence are positive. By Theorem \ref{Thm}, the subset $\{x_{n_k} \,|\, k \geq 0\}$ of $A$ admits a minimal complement in $\ensuremath{\mathbb{Z}}^{d'}$.
Since the finite subsets of $\ensuremath{\mathbb{N}}$ are precisely the bounded subsets of $\ensuremath{\mathbb{N}}$, it follows that the number of finite subsets of $\ensuremath{\mathbb{N}}$ is countable. Thus the number of infinite subsets of $\ensuremath{\mathbb{N}}$ is uncountable. So $\{x_n\}_{n\geq 0}$ has uncountably many subsequences. It follows that $A$ has uncountably many subsets which admit minimal complements.
\end{proof}
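The first step of the argument above, extracting from $A$ a sequence with $x_n \geq 6 x_{n-1}$, can be done greedily. The sketch below is a hypothetical illustration: the sample set and the function name are assumptions, not taken from the paper.

```python
# Hedged sketch of the greedy extraction used in the proof above: from a set
# of points with positive, unbounded coordinates, select a subsequence that
# grows by a factor of at least 6 in every coordinate at each step.

def greedy_lacunary(points, ratio=6):
    """Greedily pick points so that each chosen point dominates
    ratio * (previously chosen point) componentwise."""
    chosen = []
    for p in sorted(points):
        if all(c > 0 for c in p) and (
            not chosen or all(a >= ratio * b for a, b in zip(p, chosen[-1]))
        ):
            chosen.append(p)
    return chosen

# Hypothetical sample set in Z^2 with coordinates from Z_{>= 1}.
A = [(2**k + k, 3**k) for k in range(1, 30)]
seq = greedy_lacunary(A)

print(len(seq) >= 5)   # True: the extraction succeeds on this sample
```

Every subsequence of the extracted sequence again satisfies the growth condition, which is the source of the uncountably many qualifying subsets in the theorem.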
\begin{proof}
[Proof of Theorem \ref{Thm:Uncountable}]
Since the finite subsets of $\ensuremath{\mathbb{N}}$ are precisely the bounded subsets of $\ensuremath{\mathbb{N}}$, it follows that the number of finite subsets of $\ensuremath{\mathbb{N}}$ is countable. Thus the number of infinite subsets of $\ensuremath{\mathbb{N}}$ is uncountable. So any sequence has uncountably many subsequences. Note that if $\{x_n\}_{n\geq 0}$ is a sequence in $\ensuremath{\mathbb{Z}}^d$ such that $x_0 = (1, \cdots, 1)$ and
$$x_n \geq 6 x_{n-1}
\quad
\text{ for all }
n \geq 1,$$
then any subsequence of $\{x_n\}_{n\geq 0}$ satisfies the hypothesis of each of the seven parts of Theorem \ref{Thm}. Thus Theorem \ref{Thm:Uncountable} follows from the existence of a sequence $\{x_n\}_{n\geq 0}$ in $\ensuremath{\mathbb{Z}}^d$ such that $x_0 = (1, \cdots, 1)$ and
$$x_n \geq 6 x_{n-1}
\quad
\text{ for all }
n \geq 1,$$
which exists: for instance, one can take $x_n = (6^n, \cdots, 6^n)$.
\end{proof}
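The closing example can be checked mechanically. In the sketch below, the dimension $d = 3$ and the subsequence indices are arbitrary illustrative choices.

```python
# Check the closing example: x_n = (6^n, ..., 6^n) satisfies x_0 = (1, ..., 1)
# and x_n >= 6 x_{n-1} componentwise; d = 3 is an arbitrary choice.

d = 3
x = [(6**n,) * d for n in range(12)]

assert x[0] == (1,) * d
assert all(all(a >= 6 * b for a, b in zip(x[n], x[n - 1]))
           for n in range(1, len(x)))

# Any subsequence inherits the growth condition, since the sequence is
# strictly increasing in every coordinate.
sub = [x[i] for i in (0, 2, 3, 7, 11)]
print(all(all(a >= 6 * b for a, b in zip(sub[k], sub[k - 1]))
          for k in range(1, len(sub))))   # True
```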
As an application of Theorem \ref{Thm}, we provide several examples of subsets of $\ensuremath{\mathbb{Z}}$ each of which is a part of a co-minimal pair.
\begin{corollary}
For each of the following subsets $S_{1},S_{2}$, there exist subsets $S'_{1},S'_{2}$ such that $(S_{1},S'_{1})$ and $(S_{2},S'_{2})$ form co-minimal pairs.
\begin{enumerate}
\item $S_{1} := \{n^k \,|\, k \geq 0\}$ for any $n\geq 3$.
\item $S_{2}:= \{2^k + k\,|\, k \geq 0\}$.
\end{enumerate}
\end{corollary}
\begin{proof}
The subsets in this corollary, viewed as increasing sequences, clearly satisfy the growth condition of Theorem \ref{Thm}.
\end{proof}
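As a numerical sanity check (the precise growth condition of Theorem \ref{Thm} is stated earlier in the paper, outside this excerpt), one can verify that consecutive ratios of both sequences stay bounded away from $1$:

```python
# Illustrative check: consecutive ratios of both sequences are bounded away
# from 1.  n = 3 is the slowest-growing case of S_1.

S1 = [3**k for k in range(20)]
S2 = [2**k + k for k in range(20)]

# S_1 grows by a factor of exactly n = 3 at each step.
assert all(S1[k + 1] == 3 * S1[k] for k in range(19))

# S_2 grows by a factor of at least 3/2 at each step (the ratio tends to 2).
print(all(2 * S2[k + 1] >= 3 * S2[k] for k in range(19)))   # True
```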
\section{Acknowledgements}
The first author would like to thank the Department of Mathematics at the Technion where a part of the work was carried out.
The second author would like to acknowledge the Initiation Grant from the Indian Institute of Science Education and Research Bhopal, and the INSPIRE Faculty Award from the Department of Science and Technology, Government of India.
| {
"timestamp": "2020-06-04T02:20:28",
"yymm": "2006",
"arxiv_id": "2006.02429",
"language": "en",
"url": "https://arxiv.org/abs/2006.02429",
"abstract": "The study of minimal complements in a group or a semigroup was initiated by Nathanson. The notion of minimal complements and being a minimal complement leads to the notion of co-minimal pairs which was considered in a prior work of the authors. In this article, we study which type of subsets in the integers and free abelian groups of higher rank can be a part of a co-minimal pair. We show that a majority of lacunary sequences have this property. From the conditions established, one can show that any infinite subset of any finitely generated abelian group has uncountably many subsets which is a part of a co-minimal pair. Further, the uncountable collection of sets can be chosen so that they satisfy certain algebraic properties.",
"subjects": "Number Theory (math.NT)",
"title": "Infinite co-minimal pairs involving lacunary sequences and generalisations to higher dimensions"
} |
https://arxiv.org/abs/1506.07004 | Local density of Caputo-stationary functions in the space of smooth functions | We consider the Caputo fractional derivative and say that a function is Caputo-stationary if its Caputo derivative is zero. We then prove that any $C^k\big([0,1]\big)$ function can be approximated in $[0,1]$ by a a function that is Caputo-stationary in $[0,1]$, with initial point $a<0$. Otherwise said, Caputo-stationary functions are dense in $C^k_{loc}(\mathbb{R})$. | \section*{Introduction}
The interest in fractional calculus has increased in the last decades, given its numerous applications in viscoelasticity, signal processing, anomalous diffusion, biology, geomorphology, materials science, fractals and so on. Nevertheless, fractional calculus is a classical subject, studied since the end of the seventeenth century by many great mathematicians such as Leibniz (perhaps the first to mention it, in a letter to L'H\^{o}pital), Euler, Lagrange, Laplace, Lacroix, Fourier, Abel, Liouville, Heaviside, Weyl, Hadamard and Riemann (see \cite{MR93} for an interesting time-line history).
One can find several definitions of fractional derivatives in the literature, just to name a few, the Riemann-Liouville, the Caputo, the Riesz, the Hadamard fractional derivative, or the generalization given by the Erdélyi-Kober operator (see \cite{KST06}, \cite{MR93} and \cite{SK93} for more details on fractional integrals, derivatives and applications). The spotlight in this paper is the Caputo derivative, introduced by Michele Caputo in \cite{C67} in the late sixties.
The Caputo fractional derivative is a so-called nonlocal operator, that models long-range interactions. For instance, if we think of a function depending on time, the Caputo fractional derivative would represent a memory effect, pointing out that the state of a system at a given time depends on past events. In other words, the Caputo derivative describes a causal system (also known as a non-anticipative system).
This nonlocal character of the Caputo derivative gives rise to a peculiar behavior: on a bounded interval, say $[0,1]$, one can find a Caputo-stationary function ``close enough'' to any smooth function, without any geometrical constraints. This is a surprising result when one thinks of the rigidity of the classical derivatives. For instance, the functions with null first derivative are the constant functions, and the functions with null second derivative are the affine functions. Such functions cannot locally approximate an arbitrary $C^k$ function, for any fixed $k\in \ensuremath{\mathbb{N}}_0$.
Let $a \in \ensuremath{\mathbb{R}}$ and $s \in (0,1)$ be two arbitrary parameters. We define the functional space \textcolor{black}{
\eqlab{ \label{ca1s} C_a^{1,s} := \Big\{ f \colon \ensuremath{\mathbb{R}} \to \ensuremath{\mathbb{R}} \mbox{ s.t. for any } x>a, \; f \in AC\big([a,x]\big)
\mbox{ and } \displaystyle & f'(\cdot){(x-\cdot)^{-s}} \in L^1\big( (a, x)\big) \Big\} .}}
We denote here by $AC(I)$ the space of absolutely continuous functions on $I$. \textcolor{black}{ Moreover, we recall the Gamma function (see Chapter 6.1 in \cite{AS64} for other details), defined for $z>0$ as
\[ \Gamma(z) := \int_0^{+\infty} t^{z-1}e^{-t}\, dt.\] }
We define now the Caputo derivative.
\begin{defn}
The Caputo derivative of $u\in C_a^{1,s}$ with initial point $a\in \ensuremath{\mathbb{R}}$ at the point $x>a$ is given by
\begin{equation} \label{caputo}
D^s_a u(x):= \displaystyle \frac{1}{\Gamma(1-s)}\int_a^x u'(t)(x-t)^{-s}\, dt .
\end{equation}
\end{defn}
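The definition \eqref{caputo} can be checked numerically. The following sketch is our own illustration (the function name \texttt{caputo} and the product quadrature are choices of this sketch, not part of the paper): the weakly singular kernel $(x-t)^{-s}$ is integrated exactly on each cell while $u'$ is frozen at cell midpoints, so that $u(t)=t$ recovers $D_0^s u(x)=x^{1-s}/\Gamma(2-s)$ and constants are Caputo-stationary.

```python
import math

def caputo(du, a, x, s, n=2000):
    """Approximate D_a^s u(x) = (1/Gamma(1-s)) * int_a^x u'(t) (x-t)^{-s} dt.

    du is the classical derivative u'.  On each cell the weakly singular
    kernel (x-t)^{-s} is integrated exactly, while u' is frozen at the
    cell midpoint (a first-order product quadrature).
    """
    h = (x - a) / n
    total = 0.0
    for k in range(n):
        t0, t1 = a + k * h, a + (k + 1) * h
        # exact integral of (x-t)^{-s} over [t0, t1]
        w = ((x - t0) ** (1 - s) - (x - t1) ** (1 - s)) / (1 - s)
        total += du(0.5 * (t0 + t1)) * w
    return total / math.gamma(1 - s)

s, x = 0.5, 1.0
# u(t) = t has u' = 1, hence D_0^s u(x) = x^{1-s} / Gamma(2-s)
approx = caputo(lambda t: 1.0, 0.0, x, s)
exact = x ** (1 - s) / math.gamma(2 - s)
# a constant u has u' = 0, hence it is Caputo-stationary
zero = caputo(lambda t: 0.0, 0.0, x, s)
```

Since $u'$ is constant in the first test, the cell weights telescope and the quadrature reproduces the closed form up to rounding.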
\noindent We define a Caputo-stationary function as follows.
\begin{defn}
We say \textcolor{black}{that} $u\in C_a^{1,s}$ is Caputo-stationary with initial point $a\in \ensuremath{\mathbb{R}}$ \textcolor{black}{at the point} $x>a$ if
\bgs{ \label{caph}
D^s_a u(x)=0.}
\textcolor{black}{Let $I$ be an interval such that $a\leq \inf I$.} We say \textcolor{black}{that} $u$ is Caputo-stationary with initial point $a$ in $I$ if $D_a^su(x)=0$ holds for any $x \in I$.
\end{defn}
\textcolor{black}{For $k\in \ensuremath{\mathbb{N}}_{0}$}, we consider $C^k\lr{[0,1]}$ to be the space of $k$-times continuously differentiable functions on $[0,1]$, endowed with the $C^k$-norm
\[ \|f\|_{C^k\lr{[0,1]}} =\sum_{i=0}^k \sup_{x\in [0,1]}|f^{(i)}(x)|.\] The main result that we prove here is that for any fixed $k \in \ensuremath{\mathbb{N}}_0$, given any $C^k\big([0,1]\big)$ function, there exists an initial point $a<0$ and a Caputo-stationary function with initial point $a$, that in $[0,1]$ is arbitrarily close (in the $C^k$ norm) to the given function. More precisely:
\begin{thm}\label{thm:thm1}
\textcolor{black}{Let $k\in \ensuremath{\mathbb{N}}_0$ and $s\in (0,1)$ be two arbitrary parameters.} Then for any $f \in C^k\big([0,1]\big)$ and any $\ensuremath{\varepsilon}>0$ there exist an initial point $a<0$ and a function $u\in C^{1,s}_a $ such that
\[ D_a^s u(x)=0 \text{ in } [0,\infty) \]and
\[\| u-f\|_{C^k\big([0,1]\big)} < \ensuremath{\varepsilon}.\]
\end{thm}
\bigskip
\textcolor{black}{In the next lines we recall some notions and make some preliminary remarks on the Caputo derivative. }
\textcolor{black}{The reader can see Chapter 7.5 in \cite{zygmund} for the definition of absolutely continuous functions. In particular, we use the following characterization, given in Theorem 7.29 in \cite{zygmund}.
\begin{thm}\label{acrep} A function $f$ is absolutely continuous in $ [a,b]$ if and only if $f'$ exists almost everywhere in $[a,b]$, $f'$ is integrable on $[a,b]$ and
\bgs{ f(x)-f(a)=\int_a^x f'(t)\, dt, \quad a\leq x\leq b.}
\end{thm} }
\textcolor{black}{By convention, when we take the Caputo derivative $D_a^s$ of a function, we assume that the function is ``causal'', i.e. that it is constant on $(-\infty,a)$. In particular, we take $u(x)=u(a)$ for any $x<a$ and this, by definition \eqref{caputo}, implies that $D_a^s u(x) =0$ for $x<a$. }
\textcolor{black}{Lastly, we recall the Beta function (see Chapter 6.2 in the book \cite{AS64} for other details) defined for $x,y >0$ as
\eqlab {\beta(x,y) :=\int_0^1 t^{x-1} (1-t)^{y-1} \, dt \label{b01}.}
We also have that
\[ \beta(x,y)=\frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)}.\] In particular, the next explicit result holds
\eqlab { \label{b02} \beta(s,1-s)=\Gamma(s)\Gamma(1-s)=\frac{\pi}{\sin\pi s}.} }
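The identity \eqref{b02} can be verified numerically. The sketch below is our own illustration (not part of the paper): it evaluates $\beta(s,1-s)$ by splitting the integral at $1/2$ and, on each half, integrating the singular factor exactly per cell while freezing the smooth factor at midpoints, then compares the result with $\Gamma(s)\Gamma(1-s)$ and $\pi/\sin \pi s$.

```python
import math

def beta_integral(s, n=4000):
    """int_0^1 t^{s-1} (1-t)^{-s} dt, split at 1/2; on each half the singular
    factor is integrated exactly per cell, the smooth factor frozen at midpoints."""
    h = 0.5 / n
    acc = 0.0
    for k in range(n):
        # left half: t^{s-1} is singular at 0
        t0, t1 = k * h, (k + 1) * h
        acc += (1.0 - 0.5 * (t0 + t1)) ** (-s) * (t1 ** s - t0 ** s) / s
        # right half: (1-t)^{-s} is singular at 1
        u0, u1 = 0.5 + k * h, 0.5 + (k + 1) * h
        acc += (0.5 * (u0 + u1)) ** (s - 1) \
            * ((1.0 - u0) ** (1 - s) - (1.0 - u1) ** (1 - s)) / (1 - s)
    return acc

s = 0.3
lhs = beta_integral(s)                        # beta(s, 1-s) by quadrature
gammas = math.gamma(s) * math.gamma(1 - s)    # Gamma(s) Gamma(1-s)
reflection = math.pi / math.sin(math.pi * s)  # pi / sin(pi s)
```

The same split-quadrature idea is what makes the repeated Beta-type integrals in the proofs below numerically tractable, since the singularities sit at opposite endpoints.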
\section{Strategy of the proof}\label{sbssc}
The proof is inspired by \cite{DSV14}, where a similar result is proved for the fractional Laplacian (see \cite{DNPV12} for details about this operator). Here, we have to take into account the structure of the Caputo derivative and study its behavior in detail.
The main idea of the proof is that one can build a Caputo-stationary function in, say, $I = [0,1]$ by choosing a ``good'' given function as ``boundary'' datum. For nonlocal operators, the ``boundary'' is the complement of the given interval; for example, the fractional Laplacian takes into account the entire space and the ``boundary'' is $\ensuremath{\mathbb{R}} \setminus I$. On the other hand, the Caputo derivative considers only the left-side complement, and this is reflected in the lack of symmetry of the boundary conditions. Namely, the ``boundary'' in equations with the Caputo derivative is $(-\infty, 0]$, with the added convention that events start at a given point, say $t_0<0$, and $f$ is constant before time $t_0$.
In order to prove Theorem \ref{thm:thm1}, we first use the Stone-Weierstrass Theorem, that we recall here. Let $k\in \ensuremath{\mathbb{N}}_{0}$ be a fixed arbitrary number.
\begin{thm}\label{thm:SW}
For any $f \in C^k\big([0,1]\big)$ and any positive $\ensuremath{\varepsilon}$ there exists a polynomial $P$ such that \[ \|f-P\|_{C^k\big([0,1]\big)} < \ensuremath{\varepsilon}.\]
\end{thm}
Then, if we prove that for any polynomial $P$ there exists a Caputo-stationary function $u$ arbitrarily close to it, by using Theorem \ref{thm:SW} we would have that
\[ \begin{split}\|u -f \|_{C^k \big([0,1]\big)} \leq \ensuremath{ &\;} \|u-P\|_{C^k \big([0,1]\big)} +\|f-P\|_{C^k \big([0,1]\big)} < 2\ensuremath{\varepsilon}.
\end{split}\]
This would conclude the proof of Theorem \ref{thm:thm1}.
In order to have this, we claim that it suffices to prove that for any monomial
\[ q_m(x)= x^m \mbox{, } m\in \ensuremath{\mathbb{N}}\]
and for any $\ensuremath{\varepsilon}_m >0$ there exists a function $u_m$ that is Caputo-stationary in $[0,1]$, such that
\begin{equation}\label{mapp1} \|u_m-q_m\|_{C^k \big([0,1]\big)} < \ensuremath{\varepsilon}_m.\end{equation}
Indeed, consider an arbitrary $n \in \ensuremath{\mathbb{N}}$ and the polynomial $\displaystyle P(x)= \sum_{m=0}^n c_m q_m(x)$. Then the function
$\displaystyle u(x):=\sum_{m=0}^n c_m u_m(x)$ would satisfy
\[ \|u - P \|_{C^k \big([0,1]\big)} \leq \sum_{m=0}^n |c_m|\, \|u_m- q_m\|_{C^k \big([0,1]\big)} < \sum_{m=0}^n |c_m| \ensuremath{\varepsilon}_m =\ensuremath{\varepsilon},\]
where, for any $m$ with $c_m \neq 0$, one considers the small quantity $\ensuremath{\varepsilon}_m=\displaystyle \frac{\ensuremath{\varepsilon}}{|c_m|(n+1)}$ (the terms with $c_m=0$ give no contribution).
Also, the function $u$ is Caputo-stationary, since the Caputo derivative is linear. Hence, the function $u$ is Caputo-stationary and is ``close'' to any polynomial. This proves the claim.
\bigskip
In the rest of the paper, we prove that we can find a Caputo-stationary function close to any given monomial. To do this, we proceed as follows:
\begin{itemize}
\item \textcolor{black}{ In Section \ref{sectrfcp}, we obtain a representation formula for $u$, when $D_a^s u(x)= 0 $ in $(b,\infty)$ for a given $b>a$ and having prescribed $u$ on $(-\infty,b]$. To do this, we prove that having $D_a^s u(x)= 0 $ is equivalent to having a particular integro-differential equation. We then obtain a representation formula for the integro-differential equation, hence for our initial equation. } \\
\item In Section \ref{sectaps}, we prove that there exists a sequence $(v_j)_{j\in \ensuremath{\mathbb{N}}}$ of Caputo-stationary functions in $(0,\infty)$ such that, uniformly on bounded subintervals of $(0,\infty)$, we have that $\lim_{j\to \infty} v_j(x) = \kappa x^s$, for a \textcolor{black}{suitable} constant $\kappa>0$.\\
\item In Section \ref{sectma} we prove that there exists a Caputo-stationary function with an arbitrarily large number of derivatives prescribed. We do this by taking advantage of the particular structure of the function $x^s$. If we take any derivative of such a function, say $(x^s)^{(i)}= s(s-1)\dots(s-i+1) x^{s-i} ,$ for \textcolor{black}{$x> 0$} this derivative never vanishes.\\
\item \textcolor{black}{Section \ref{sectthm1} deals with the proof of Theorem \ref{thm:thm1}. Prescribing the derivatives of $u$ such that, for $m\in \ensuremath{\mathbb{N}}$, they vanish at $0$ until the order $m-1$, and are equal to $1$ at order $m$, using a Taylor expansion and performing a blow-up argument, we can conclude the proof of the main theorem.}
\end{itemize}
\section{A representation formula for a Caputo-stationary function}\label{sectrfcp}
The purpose of this section is to deduce a Poisson-like representation formula for a function $u\in C_a^{1,s}$ that is Caputo-stationary with initial point $a$ in the interval $(b,\infty)$ for $b>a$, and fixed outside, i.e.
\bgs{ &D_a^s u(x) = 0 &\text{ in } & (b,\infty),\\
&\mbox{ prescribed data } &\text{ in } & (-\infty,b]. }
To do this, we prove that this problem is equivalent to the integro-differential equation
\bgs{&\int_b^x u'(t)(x-t)^{-s}\, dt = g(x) &\text{ in } & (b,\infty),\\
&\mbox{ prescribed data } &\text{ in } & (-\infty,b], }
for a given function $g$ (that depends on the prescribed data of the initial problem). Then, we
introduce in Theorem \ref{thm:probc} a representation formula for this integro-differential equation. With these two results in hand, we obtain a representation for the solution of the initial problem.
Moreover, we present here an interior regularity result.
\begin{center}
\begin{figure}[htpb]
\hspace{0.6cm}
\begin{minipage}[b]{0.85\linewidth}
\centering
\includegraphics[width=0.90\textwidth]{Lem31.png}
\caption{A Caputo-stationary function in $(b,\infty)$ prescribed on $(-\infty,b]$}
\label{fign:Lem31}
\end{minipage}
\end{figure}
\end{center}
\textcolor{black}{In this section, we fix the arbitrary parameters $a,b \in \ensuremath{\mathbb{R}}$ with $b>a$ and $s\in(0,1)$.}
We state in the next Lemma the equivalence between the two problems above.
\begin{lem}\label{lem:int11}
Let $\varphi \in C\big((-\infty,b] \big)\cap C^1\big([a,b]\big)$ such that $\varphi(x) = \varphi(a)$ in $(-\infty,a]$. Then $u\in C_a^{1,s}$ satisfies the equation
\bgs{ \label{intr1}
D_a^s u(x)& =0 &\text{ in } & (b,\infty),\\
u(x)&=\varphi(x) &\text{ in } & (-\infty,b]}
if and only if it satisfies
\bgs{ \label{intr2}
\int_b^x u'(t)(x-t)^{-s} \, dt &=- \int_{a}^b \varphi'(t)(x-t)^{-s}\, dt & \text{ in }& (b,\infty),\\
u(x)&=\varphi(x)&\text{ in } & (-\infty,b]. }
\end{lem}
The reader can see a qualitative graphic of a function described by Lemma \ref{lem:int11} in Figure \ref{fign:Lem31}. \textcolor{black}{An explicit example of such a function is built in the Appendix, in Figure \ref{fign:es1}.}
\begin{proof} Since $\varphi \in C^1\big([a,b]\big)$ we have for any $x\geq b$
\bgs{\bigg |\int_a^{b} \varphi'(t)(x-t)^{-s} \, dt \bigg |\leq \sup_{t\in [a,b]} |\varphi'(t)| \frac{ (x-a)^{1-s}-(x-b)^{1-s}}{1-s}<\infty.}
Hence the map $x\mapsto\displaystyle \int_a^{b} \varphi'(t)(x-t)^{-s} \, dt $ is well defined \textcolor{black}{in $[b,\infty)$}. Using the definition \eqref{caputo} for $x >b$ we have that
\begin{equation*}
\begin{split}
\Gamma(1-s) D_a^s u(x)
= \ensuremath{ &\;} \int_b^x u'(t)(x-t)^{-s} \, dt + \int_a^{b} u'(t)(x-t)^{-s} \, dt \\
=\ensuremath{ &\;} \int_b^x u'(t)(x-t)^{-s} \, dt + \int_a^{b} \varphi'(t)(x-t)^{-s} \, dt .
\end{split}
\end{equation*}
It follows that $D_a^s u(x)=0$ on $(b,\infty)$ is equivalent to
\bgs{
\int_b^x u'(t)(x-t)^{-s} \, dt =- \int_a^{b} \varphi'(t)(x-t)^{-s} \, dt \quad \text{ in } (b,\infty). }
This concludes the proof of the Lemma.
\end{proof}
In the following Theorem we introduce a representation formula for an integro-differential equation.
\begin{thm}
\label{thm:probc}
Let $g \in C_b^{1,1-s}$. The problem
\eqlab{ \label{probc1}
\int_b^x u'(t)(x-t)^{-s} \,dt & = g(x) \quad \mbox{ in } (b,\infty),\\
u(b) &= 0 } admits on $[b,\infty)$ a unique solution $u\in C_b^{1,s}$. Moreover, for any $x>b$,
\begin{equation} \label{solc1}
u(x)= \textcolor{black}{\ensuremath{ \frac{\sin \pi s}{ \pi}}}\int_b^x g(t)(x-t)^{s-1} \, dt .
\end{equation}
\end{thm}
\begin{proof}
\textcolor{black}{We prove this theorem by showing that $u$ given in \eqref{solc1} is well defined, belongs to the space $C_b^{1,s}$ and is the unique solution of the problem \eqref{probc1}.}
Since $g$ belongs to $C_b^{1,1-s}$ (recall \eqref{ca1s}), for any $x> b$ we have that
\bgs{|u(x)| \leq\ensuremath{ \frac{\sin \pi s}{ \pi}} \int_b^x |g(t)| (x-t)^{s-1} \, dt \leq c_s\sup_{ t \in [b,x]} |g(t)| (x-b)^{s} <\infty,} where $c_s$ is a positive constant. Hence the definition \eqref{solc1} is well posed.
\bigskip
We prove that $u$ belongs to $ C_b^{1,s}$.
We claim that
\textcolor{black}{ \eqlab{ \label{cbsu1} g\in C_b^{1,1-s} & \mbox{ and } u \mbox{ as in } \eqref{solc1} \implies \\
& u \in AC\big([b,\infty)\big) \mbox{ and} \\
& u'(y)=\frac{\sin \pi s} {\pi} \lr{ \int_b^y g'(\tau)(y-\tau)^{s-1}\, d\tau + g(b)(y-b)^{s-1}} \quad \mbox{ a.e. in } [b,\infty). }}
We fix an arbitrary $x>b$. According to definition \eqref{ca1s}, $g \in AC\big( [b,x]\big) $ and \textcolor{black}{thanks to Theorem \ref{acrep} we have that}
for any $t\in [b,x]$
\[ g(t)=\int_b^t g'(\tau)\, d\tau + g(b).\]
And so in \eqref{solc1} we have that
\eqlab{ \label{bla2} \frac{\pi}{\sin \pi s} \, u(x)
= \ensuremath{ &\;} \int_b^x \lr{ \int_b^t g'(\tau)\, d\tau } (x-t)^{s-1}\, dt + g(b) \int_b^x (x-t)^{s-1}\,dt .}
We compute
\eqlab {\label{bla1} \int_b^x (x-t)^{s-1} \, dt = \frac{(x-b)^s}{s} = \int_b^x (y-b)^{s-1} \, dy.}
Tonelli theorem applied to the positive measurable function $|g'(\tau)|(x-t)^{s-1}$ on the domain
\eqlab{ \label{rev2}D_{b,x}:=\{ (t,\tau) \text{ s.t. } b\leq t\leq x, b\leq \tau\leq t\}}
with the product measure $d(t,\tau)$ gives
\eqlab{\label{rev1} \iint_{D_{b,x}} |g'(\tau)|\, (x-t)^{s-1} \, d (t, \tau) = \ensuremath{ &\;} \int_b^x |g'(\tau)| \lr{ \int_\tau^x (x-t)^{s-1} \, dt }\,d\tau \\
=\ensuremath{ &\;}\frac{1}{s}\int_b^x |g'(\tau)| (x-\tau)^s \, d\tau \\
\leq\ensuremath{ &\;}\frac{(x-b)^s}{s} \|g'\|_{L^1\big((b,x)\big)}, }
which is a finite quantity. Hence $|g'(\tau)| (x-\tau)^{s-1} \in L^1\lr{D_{b,x}, d(t,\tau)}$ and by Fubini theorem \textcolor{black}{ and using \eqref{bla1}} it follows that
\bgs{ \int_b^x \lr{\int_b^t g'(\tau)\, d\tau } (x-t)^{s-1}\, dt = \ensuremath{ &\;} \int_b^x g'(\tau) \lr{ \int_{\tau}^x (x-t)^{s-1} \, dt} \, d\tau\\
= \ensuremath{ &\;} \int_b^x g'(\tau) \lr{\int_\tau^x (y-\tau)^{s-1} \, dy} \, d\tau\\
=\ensuremath{ &\;} \int_b^x \lr{ \int_b^y g'(\tau) (y-\tau)^{s-1} \, d\tau }\, dy.}
Inserting this and identity \eqref{bla1} into \eqref{bla2}, we obtain that
\[ \frac{\pi}{\sin \pi s}\, u(x) = \int_b^x \lr{ \int_b^y g'(\tau) (y-\tau)^{s-1} \, d\tau + g(b) (y-b)^{s-1} } \, dy.\]
\textcolor{black}{Hence $u$ is the integral function of a $L^1\big((b,x)\big)$ function (thanks to \eqref{rev1}) and recalling that $u(b)=0$, according to Theorem \ref{acrep} we have that $u\in AC\big([b,x]\big)$.}
Moreover, almost everywhere in $[b,x]$
\bgs {\label{u1d}
\frac{\pi} {\sin \pi s} \,u'(y)=\int_b^y g'(\tau)(y-\tau)^{s-1}\, d\tau + g(b)(y-b)^{s-1}. }
\textcolor{black}{With this, given the arbitrary choice of $x$, we have proved the claim \eqref{cbsu1}.}
We claim now that $ u'(\cdot) (x-\cdot)^{-s} \in L^1\big( (b,x)\big)$. Using the second identity in \eqref{cbsu1}, we obtain that
\eqlab{ \label{fbca} &\frac{\pi}{\sin \pi s} \int_b^x | u'(y)| (x-y)^{-s} \, dy \\
\leq
\ensuremath{ &\;} \int_b^x \lr{\int_b^y |g'(\tau)| (y-\tau)^{s-1} \, d\tau} (x-y)^{-s} \, dy + | g(b) | \int_b^x (y-b)^{s-1} (x-y)^{-s}dy . }
Tonelli theorem applied to the positive function $|g'(\tau)| (y-\tau)^{s-1} (x-y)^{-s} $ on the domain $D_{b,x}$ \textcolor{black}{given in \eqref{rev2}} with the product measure $d(y,\tau)$ gives
\bgs { \label{tt31} \iint_{D_{b,x}} |g'(\tau)| (y-\tau)^{s-1} (x-y)^{-s} \, d(y, \tau) = \ensuremath{ &\;} \int_b^x |g'(\tau)| \lr{ \int_\tau^x (y-\tau)^{s-1} (x-y)^{-s} \, dy } \, d\tau.}
By using the change of variables $\displaystyle t = \frac{y-\tau}{x-\tau}$, thanks to definition \eqref{b01} \textcolor{black}{and identity \eqref{b02}} we have that
\eqlab{ \label{bfcomp}
\int_\tau^x (y-\tau)^{s-1} (x-y)^{-s} \, dy = \int_0^1 t^{s-1}(1-t)^{-s} \, dt = \frac{\pi}{\sin\pi s}.}
Hence we obtain that
\eqlab{ \label{Fubsto} \iint_{D_{b,x}} |g'(\tau)| (y-\tau)^{s-1} (x-y)^{-s} \, d(y ,\tau)
= \ensuremath{ &\;} \frac{\pi}{\sin\pi s} \| g'\|_{L^1\big((b,x)\big)}.}
\textcolor{black}{From this and using again \eqref{bfcomp} with $b=\tau$,} we obtain in \eqref{fbca} that
\bgs{ \ensuremath{ &\;} \int_b^x | u'(y)| (x-y)^{-s} \, dy \leq \|g'\|_{ L^1 \big((b,x)\big)} + |g(b)|.}
Hence $ u'(\cdot) (x-\cdot)^{-s} \in L^1\big( (b,x)\big)$, as claimed. \textcolor{black}{From this and \eqref{cbsu1}, recalling definition \eqref{ca1s}, it follows that} $u$
belongs to the space $C_b^{1,s}$.
\bigskip
We prove now that $u$ is a solution of the problem \eqref{probc1}. Using the second identity in \eqref{cbsu1} we have that
\eqlab{ \label{blaq1} \frac{\pi}{\sin \pi s} \int_b^x u' (y)(x-y)^{-s}\, dy
=\ensuremath{ &\;} \int_b^x \lr { \int_b^y g'(\tau)(y-\tau)^{s-1} \, d\tau } (x-y)^{-s} \, dy \\ \ensuremath{ &\;} + g(b) \int_b^x (y-b)^{s-1} (x-y)^{-s}\, dy .}
Thanks to \eqref{Fubsto}, we have that $|g'(\tau)| (y-\tau)^{s-1} (x-y)^{-s} \in L^1 (D_{b,x}, d(y,\tau)) $. We apply Fubini theorem and using \eqref{bfcomp} we get that
\bgs{ \int_b^x \lr{\int_b^y g'(\tau)(y-\tau)^{s-1} (x-y)^{-s}\, d\tau }\, dy =\ensuremath{ &\;} \int_b^x g'(\tau) \lr{ \int_\tau^x (y-\tau)^{s-1} (x-y)^{-s} \, dy } \, d\tau, \\
=\ensuremath{ &\;} \frac{\pi}{\sin \pi s} \lr{ g(x)-g(b) } .}
Thanks again to \eqref{bfcomp}, in \eqref{blaq1} it follows that
\bgs{ \int_b^x \ensuremath{ &\;} u'(y)(x-y)^{-s}\, dy = g(x),}
therefore $u$ is a solution of the problem \eqref{probc1}.
\bigskip
The solution is unique. To prove this, we take two solutions $u_1,u_2\in C_b^{1,s}$ of the problem \eqref{probc1}. Let $u:=u_1-u_2$; then $u$ satisfies
\bgs{ \int_b^x u'(t)(x-t)^{-s}\, dt &=0 &\text{in} \quad &(b,\infty), \\
u(b)&=0 .&&}
We fix any $y>b$, multiply both sides by the quantity $(y-x)^{s-1}$ (which is positive for $x<y$) and integrate in $x$ from $b$ to $y$, obtaining
\eqlab {\label{bla3} \int_b^y \lr{ \int_b^x u'(t) (x-t)^{-s}\, dt } (y-x)^{s-1} \, dx =0.}
Since $u \in C_b^{1,s}$, \textcolor{black}{we use Tonelli theorem on $D_{b,y}$ (we recall definition \eqref{rev2}) and by \eqref{bfcomp} we obtain that}
\bgs{ \iint_{D_{b,y}} |u'(t)| (x-t)^{-s}(y-x)^{s-1} \, d(x,t)
=\ensuremath{ &\;} \int_b^y |u'(t)| \lr{ \int_t^y (x-t)^{-s}(y-x)^{s-1} \, dx} \, dt\\
=\ensuremath{ &\;} \frac{\pi}{\sin \pi s} \|u'\|_{L^1\big((b,y)\big)}, } which is a finite quantity. Fubini theorem then allows us to compute
\bgs{ \int_b^y \lr{ \int_b^x u'(t) (x-t)^{-s}\, dt } (y-x)^{s-1} \, dx =&\; \int_b^y u'(t) \lr{ \int_t^y (x-t)^{-s}(y-x)^{s-1} \, dx }\, dt \\
= \ensuremath{ &\;} \frac{\pi}{\sin \pi s}u(y) .}
It follows from \eqref{bla3} and from the initial condition $u(b)=0$ that $u_1(x)= u_2(x)$ on $[b,\infty)$. Therefore $u$ given in \eqref{solc1} is the unique solution of the problem \eqref{probc1} and this concludes the proof of the Theorem.
\end{proof}
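As a sanity check of the representation formula \eqref{solc1}, one can test it on data with a closed-form solution: for $g(t)=t-b$, a direct computation with the Beta function gives $u(x)=\frac{\sin \pi s}{\pi}\,\frac{(x-b)^{s+1}}{s(s+1)}$. The sketch below is our own illustration (the quadrature, with the singular kernel integrated exactly on each cell, is an assumption of this sketch, not part of the paper).

```python
import math

def abel_solution(g, b, x, s, n=4000):
    """u(x) = (sin(pi s)/pi) * int_b^x g(t) (x-t)^{s-1} dt  (the representation
    formula), with the weakly singular kernel integrated exactly on each
    cell and g frozen at cell midpoints."""
    h = (x - b) / n
    acc = 0.0
    for k in range(n):
        t0, t1 = b + k * h, b + (k + 1) * h
        w = ((x - t0) ** s - (x - t1) ** s) / s  # exact integral of (x-t)^{s-1}
        acc += g(0.5 * (t0 + t1)) * w
    return math.sin(math.pi * s) / math.pi * acc

s, b, x = 0.6, 0.0, 2.0
# for g(t) = t - b the formula gives u(x) = (sin(pi s)/pi) (x-b)^{s+1} / (s(s+1))
u_num = abel_solution(lambda t: t - b, b, x, s)
u_exact = math.sin(math.pi * s) / math.pi * (x - b) ** (s + 1) / (s * (s + 1))
```

Here the closed form follows from $\int_b^x (t-b)(x-t)^{s-1}\,dt=(x-b)^{s+1}\beta(2,s)$ and $\beta(2,s)=1/(s(s+1))$.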
We introduce an interior regularity result.
\begin{lem} \label{intreg}
Let $g \in C^{\infty}\big([b,\infty)\big)$ and $u$ be defined as in \eqref{solc1}. Then $u\in C^{\infty}\big((b,\infty)\big)$.
\end{lem}
\begin{proof}
We prove by induction that the next statement, which we call $P(n)$, holds for any $n\in \ensuremath{\mathbb{N}}$:
\textcolor{black}{ \[ u\in C^n\big((b,\infty)\big) \]}
and
\eqlab{ \label{undiff1} u^{(n)}(y) = \ensuremath{ \frac{\sin \pi s}{ \pi}}\lr{ \int_b^y g^{(n)}(\tau) (y-\tau)^{s-1} \, d\tau + \sum_{i=0}^{n-1} \tilde c_{s,i} g^{(i)}(b) (y-b)^{s-n+i} } \\\mbox{for any } y\in (b,\infty) ,}
where
\begin{equation} \label{ctcsi1} \tilde c_{s,i} = \begin{cases} (s-1)\dots (s-n+i+2) (s-n+i+1) \quad &\text{ for } i\neq n-1\\
1 \quad \quad\quad\quad &\text{ for } i= n-1. \end{cases}
\end{equation}
We denote by
\[ v(y):=\int_b^y g'(\tau)(y-\tau)^{s-1}\, d\tau. \]
\textcolor{black}{From \eqref{cbsu1} we then have that almost everywhere in $[b,\infty)$
\eqlab{\label{uprim} u'(y) = \ensuremath{ \frac{\sin \pi s}{ \pi}} \lr{ v(y) + g(b)(y-b)^{s-1}}.} Since $g\in C^{\infty}\big([b,\infty)\big)$, we have in particular that $g'\in C_b^{1,1-s}$ hence from the definition of $v$ and \eqref{cbsu1} we get that $v\in AC\big([b,\infty)\big)$. It follows that $u'\in C\big((b,\infty)\big)$, since it is a sum of continuous functions. Therefore $u\in C^1\big((b,\infty)\big)$ and \eqref{uprim} holds pointwise in $(b,\infty)$}.
And so $P(1)$ is true.
In order to prove the inductive step, we suppose that $P(n)$ holds and prove $P(n+1)$.
Let now
\[ v(y):=\int_b^y g^{(n)}(\tau)(y-\tau)^{s-1}\, d\tau.\] From \eqref{undiff1} we have that for any $y\in (b,\infty)$
\eqlab{\label{unv1} u^{(n)}(y) =\ensuremath{ \frac{\sin \pi s}{ \pi}}\lr{ v(y)+ \sum_{i=0}^{n-1} \tilde c_{s,i} g^{(i)}(b) (y-b)^{s-n+i} } .}
\textcolor{black}{Since $g\in C^{\infty}\big([b,\infty)\big)$, in particular we have that $g^{(n)}\in C_b^{1,1-s}$ hence from the definition of $v$ and thanks to \eqref{cbsu1} we get that $v\in AC\big([b,\infty)\big)$ and
almost everywhere on $[b,\infty) $
\[ v'(y) = \int_b^y g^{(n+1)}(\tau) (y-\tau)^{s-1}\, d\tau + g^{(n)}(b) (y-b)^{s-1}.\] Now, also $g^{(n+1)}\in C_b^{1,1-s}$ and so, thanks to \eqref{cbsu1}, the map
\eqlab{\label{yg1} y \mapsto \displaystyle \int_b^y g^{(n+1)}(\tau) (y-\tau)^{s-1}\, d\tau \quad \in AC\big([b,\infty)\big) .} It yields that $v\in C^1\lr{(b,\infty)}$ and so from \eqref{unv1} we get that $u^{(n+1)}\in C\lr{(b,\infty)}$. Taking the derivative of \eqref{unv1} we have that pointwise in $(b,\infty)$}
\bgs{ \frac{\pi}{\sin\pi s} u^{(n+1)} (y)= \ensuremath{ &\;} \int_b^y g^{(n+1)}(\tau) (y-\tau)^{s-1}\, d\tau +
g^{(n)}(b)(y-b)^{s-1}+
\sum_{i=0}^{n-1} \tilde c_{s,i} g^{(i)}(b) (s-n+i) (y-b)^{s-n+i-1} \\
=\ensuremath{ &\;} \int_b^y g^{(n+1)}(\tau)(y-\tau)^{s-1}\, d\tau + \sum_{i=0}^{n} \tilde c_{s,i} g^{(i)}(b) (y-b)^{s-n+i},}
where we have used \eqref{ctcsi1} in the last line.
Therefore the statement $P(n+1)$ is true and the proof by induction is concluded.
It finally yields that $u\in C^{\infty}\lr{(b,\infty)}$ and this concludes the proof of the Lemma.
\end{proof}
\smallskip
\section{Existence of a sequence of Caputo-stationary functions that tends \\to the function $x^s$}\label{sectaps}
In this Section we introduce some preliminary results, on which we will base the proof of Theorem \ref{thm:thm1}. The purpose of this section is to build a sequence of functions that are Caputo-stationary in $(0,\infty)$ and that tends, uniformly on bounded subintervals of $(0,\infty)$, to the function $x^s$. We do this by building a Caputo-stationary function in $(1,\infty)$ which at the point $1+\ensuremath{\varepsilon}$ behaves like $\ensuremath{\varepsilon}^s$, and then we use a blow-up argument.
\bigskip
We fix the arbitrary parameter $s\in (0,1)$. We introduce the first Lemma of this Section.
\begin{lem}\label{lem1}
Let $\psi_0 \in C^1\big( [0,1]\big) \cap C\big((-\infty,1]\big)$ be such that
\eqlab { \label{fifi41} &\psi_0(x)=\psi_0(0) & \text{ for any } &x\in (-\infty, 0],\\
&\psi_0(x) =0 &\text{ for any } &x\in \bigg[\frac{3}{4}, 1\bigg],\\
&\psi'_0 (x)<0 &\text{ for any } &x\in \bigg[0,\frac{3}{4}\bigg). }
Let $\psi\in C_0^{1,s}$ be the solution of the problem
\eqlab {\label{prob1}
D_0^s\psi(x) &=0 &\text{ in }& (1,\infty),\\
\psi(x)&=\psi_0(x)&\text{ in } &(-\infty,1].} \textcolor{black}{Then $\psi \in C^{\infty}\big((1,\infty)\big)$ and} if $x=1+\ensuremath{\varepsilon}$, we have that
\eqlab{\label{psieee} \psi(1+\ensuremath{\varepsilon}) =\kappa \ensuremath{\varepsilon}^s + \mathcal O(\ensuremath{\varepsilon}^{s+1}) }
as $\ensuremath{\varepsilon} \to 0$, for some $\kappa>0$.
\end{lem}
\textcolor{black}{An explicit example of a function described in Lemma \ref{lem1} is depicted in Figure \ref{fign:es2} in the Appendix.}
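The expansion \eqref{psieee} can also be observed numerically. The sketch below is our own illustration, not part of the paper: the choice $\psi_0'(t)=4t/3-1$ (negative on $[0,3/4)$, vanishing at $3/4$) is a hypothetical datum compatible with \eqref{fifi41} (only $\psi_0'$ enters the formulas), and we take for granted the leading coefficient $\kappa=\frac{\sin \pi s}{\pi s}\,g(1)$ that the expansion in the proof yields, with $g$ as in \eqref{gbla1}.

```python
import math

s = 0.5
# hypothetical psi0' compatible with the lemma: negative on [0, 3/4), zero at 3/4
dpsi0 = lambda t: 4.0 * t / 3.0 - 1.0

def g(x, n=2000):
    """g(x) = -int_0^{3/4} psi0'(t) (x-t)^{-s} dt, midpoint rule."""
    h = 0.75 / n
    return -h * sum(dpsi0((k + 0.5) * h) * (x - (k + 0.5) * h) ** (-s)
                    for k in range(n))

def psi(eps, n=400):
    """psi(1+eps) = (sin(pi s)/pi) int_1^{1+eps} g(t) (1+eps-t)^{s-1} dt;
    the singular kernel is integrated exactly per cell, g frozen at midpoints."""
    x, h = 1.0 + eps, eps / n
    acc = 0.0
    for k in range(n):
        t0, t1 = 1.0 + k * h, 1.0 + (k + 1) * h
        acc += g(0.5 * (t0 + t1)) * ((x - t0) ** s - (x - t1) ** s) / s
    return math.sin(math.pi * s) / math.pi * acc

eps = 1e-3
kappa = math.sin(math.pi * s) / (math.pi * s) * g(1.0)  # predicted leading coefficient
ratio = psi(eps) / eps ** s                             # should be close to kappa
```

For small $\ensuremath{\varepsilon}$ the ratio $\psi(1+\ensuremath{\varepsilon})/\ensuremath{\varepsilon}^s$ agrees with $\kappa$ up to an $\mathcal O(\ensuremath{\varepsilon})$ relative correction, consistently with \eqref{psieee}.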
\begin{proof}[Proof of Lemma \ref{lem1}]
Thanks to Lemma \ref{lem:int11} we have that $\psi \in C_0^{1,s}$ is a solution of the problem \eqref{prob1} if and only if
\bgs{ \int_1^x \psi'(t)(x-t)^{-s}\, dt &= -\int_0^{3/4} \psi_0'(t) (x-t)^{-s} \, dt &\mbox{in } &(1,\infty), \\
\psi(x) &=\psi_0(x) &\mbox{in } &(-\infty,1].}
On $[1,\infty)$ we define the function
\eqlab{\label{gbla1} g(x):=- \int_0^{3/4} \psi_0'(t) (x-t)^{-s} \, dt, }
\textcolor{black}{hence our problem is now
\eqlab{ \label{psil4} \int_1^x \psi'(t)(x-t)^{-s}\, dt &=g(x) &\mbox{in } &(1,\infty), \\
\psi(x) &=\psi_0(x) &\mbox{in } &(-\infty,1].}
We claim that $g\in C^{\infty}\big([1,\infty)\big)$.}
For that, let $F\colon [1,\infty)\times [0,3/4]\to \ensuremath{\mathbb{R}}$ be defined as $F(x,t):=\psi_0'(t)(x-t)^{-s}$.
\textcolor{black}{Now, for any $h>0$ arbitrarily small we have that
\[ \bigg| \frac{F(x+h,t)-F(x,t)}h \bigg| \leq \sup_{t\in [0,3/4]} |\psi_0'(t)| \bigg|\frac{(x+h-t)^{-s}-(x-t)^{-s}}h\bigg|. \]
Since the map $[1,\infty)\ni x \mapsto (x-t)^{-s}$ is differentiable for any $t\in [0,3/4]$, by the mean value theorem we have that for $\theta \in (0,h)$
\[ \bigg|\frac{(x+h-t)^{-s}-(x-t)^{-s}}h\bigg| \leq s (x+\theta-t)^{-s-1} \leq s(x-t)^{-s-1}.\]
Then
\[\bigg|\frac{F(x+h,t)-F(x,t)}h\bigg|\leq s \sup_{t\in [0,3/4]} |\psi_0'(t)| (x-t)^{-s-1} \in L^1\big([0,3/4],dt\big),\]
hence by the dominated convergence theorem, we can pass the limit inside the integral and obtain that
\[ g'(x)= -\int_0^{3/4} \partial_x F (x,t)\, dt = s\int_0^{3/4} \psi'_0(t) (x-t)^{-s-1}\, dt.\]}
We can now take, for any $n \in \ensuremath{\mathbb{N}}$, the function $F_n\colon [1,\infty)\times [0,3/4]\to \ensuremath{\mathbb{R}}$ defined by $F_n(x,t):=\psi_0'(t)(x-t)^{-s-n}$ and repeat the above argument. We obtain that $g$ is $C^\infty\big([1,\infty)\big)$, as claimed, and
moreover for any $n\in \ensuremath{\mathbb{N}}_0$ we have that
\eqlab{ \label{gn1} g^{(n)}(x) = -\bar c_{s,n}\int_0^{3/4} \psi_0'(t) (x-t)^{-s-n}\,dt, }
where
\begin{equation} \label{ctcns2} \bar c_{s,n} = \begin{cases} (-s)(-s-1)\dots (-s-n+1) &\mbox{ for } n\neq 0\\
1 &\mbox{ for } n=0.\end{cases}\end{equation}
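The differentiation under the integral sign used above can be cross-checked numerically. The sketch below is our own illustration (the choice of $\psi_0'$ is a hypothetical datum satisfying the sign conditions of the lemma): it compares a central finite difference of $g$ with the claimed formula for $g'$.

```python
import math

s = 0.4
# hypothetical psi0': negative on [0, 3/4), vanishing at 3/4
dpsi0 = lambda t: 4.0 * t / 3.0 - 1.0

def quad(f, a, b, n=4000):
    """Plain midpoint rule; the integrands below are smooth for x >= 1."""
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

def g(x):
    """g(x) = -int_0^{3/4} psi0'(t) (x-t)^{-s} dt."""
    return -quad(lambda t: dpsi0(t) * (x - t) ** (-s), 0.0, 0.75)

def gprime(x):
    """The claimed derivative: s * int_0^{3/4} psi0'(t) (x-t)^{-s-1} dt."""
    return s * quad(lambda t: dpsi0(t) * (x - t) ** (-s - 1), 0.0, 0.75)

x, h = 1.5, 1e-5
fd = (g(x + h) - g(x - h)) / (2.0 * h)  # central finite difference of g
gp = gprime(x)
```

Since $x\geq 1$ keeps $x-t\geq 1/4$ on the domain of integration, all integrands here are smooth and a plain midpoint rule suffices.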
Since $\psi(1)=0$ and $g\in C^\infty\big([1,\infty)\big)$ (hence in particular $g\in C_1^{1,1-s}$), thanks to Theorem \ref{thm:probc} we get that the problem \eqref{psil4} admits a unique solution $\psi \in C_1^{1,s}$ given by
\eqlab{ \label{solll} &\psi(x)=\ensuremath{ \frac{\sin \pi s}{ \pi}} \int_1^x g(t) (x-t)^{s-1} \, dt &\mbox{ in } &(1,\infty),\\
&\psi(x)=\psi_0(x) &\mbox{ in } & (-\infty,1].}
\textcolor{black}{Moreover, we claim that $\psi \in C_0^{1,s}$. Indeed, from Lemma \ref{intreg} we get that $\psi\in C^{\infty}\big( (1,\infty)\big)$. Also
$ \lim_{x\to 1^+} \psi(x) =0 =\psi(1)$ and so from this and the hypothesis we have that $\psi \in C^{\infty}\big( (1,\infty)\big) \cap C^1\lr{[0,1]} \cap C(\ensuremath{\mathbb{R}})$, hence $\psi \in AC\lr{[0,\infty)}$. Also for any $x>0$
\[ \int_0^x |\psi'(t)(x-t)^{-s}| \, dt \leq c_s \|\psi'\|_{L^{\infty}\lr{(0,x)}} x^{1-s} <\infty, \]
and so the claim follows from definition \eqref{ca1s}.}
Therefore, $\psi \in C_0^{1,s}$ is the unique solution of problem \eqref{psil4} and from Lemma \ref{lem:int11} it follows that \eqref{solll} is also the unique solution of the problem \eqref{prob1}.
\bigskip
We prove now the claim \eqref{psieee}. Let $x=1+\ensuremath{\varepsilon}$. Then from \eqref{solll} we have that
\bgs{ \frac{\pi}{\sin \pi s} \psi(1+\ensuremath{\varepsilon}) = \int_1^{1+\ensuremath{\varepsilon}} g(\tau) (1+\ensuremath{\varepsilon}-\tau)^{s-1} \, d\tau.}
The change of variables $z= (\tau-1)/\ensuremath{\varepsilon}$ gives
\bgs{ \frac{\pi}{\sin \pi s} \psi(1+\ensuremath{\varepsilon}) = \ensuremath{\varepsilon}^s \int_0^1 g(\ensuremath{\varepsilon} z+ 1) (1-z)^{s-1}\, dz.}
Using definition \eqref{gbla1} we have that
\[ g(\ensuremath{\varepsilon} z+ 1) = -\int_0^{3/4} \psi_0'(t)(\ensuremath{\varepsilon} z+1-t)^{-s} \, dt ,\]
hence
\bgs{ \frac{\pi}{\sin \pi s} \psi(1+\ensuremath{\varepsilon}) = -\ensuremath{\varepsilon}^s \int_0^1 \lr{ \int_0^{3/4} \psi_0'(t)(\ensuremath{\varepsilon} z+1-t)^{-s}\, dt} (1-z)^{s-1}\,dz.}
Tonelli theorem on $[0,1]\times [0,3/4]$ applied to the function $|\psi_0'(t)|(\ensuremath{\varepsilon} z+1-t)^{-s} (1-z)^{s-1}$ yields
\bgs{ \iint_{ [0,1]\times [0,3/4]} \ensuremath{ &\;} |\psi_0'(t)|(\ensuremath{\varepsilon} z+1-t)^{-s} (1-z)^{s-1} d(t,z)\\
=\ensuremath{ &\;} \int_0^{3/4} |\psi_0'(t)| \lr{ \int_0^1 (1-z)^{s-1}(\ensuremath{\varepsilon} z+1-t)^{-s}\, dz}\, dt.}
We have that $(\ensuremath{\varepsilon} z+1-t)^{-s}\leq (1-t)^{-s}\leq 4^s$, hence
\bgs{ \int_0^{3/4} |\psi_0'(t)| \lr{ \int_0^1 (1-z)^{s-1}(\ensuremath{\varepsilon} z+1-t)^{-s}\, dz}\, dt \leq \ensuremath{ &\;} 4^s \int_0^{3/4} |\psi_0'(t)| \lr{\int_0^1 (1-z)^{s-1}\, dz}\, dt \\
\leq \ensuremath{ &\;} \frac{3 \cdot 4^{s-1}}s \sup_{t\in[0,3/4]}|\psi_0'(t)| ,}
which is finite. Therefore $|\psi_0'(t)|(\ensuremath{\varepsilon} z+1-t)^{-s} (1-z)^{s-1}\in L^1\big([0,1]\times [0,3/4], d(t,z)\big)$ and by Fubini theorem we have that
\eqlab{ \label{psibla2}
\frac{\pi}{\sin \pi s} \psi(1+\ensuremath{\varepsilon}) =\ensuremath{ &\;} -\ensuremath{\varepsilon}^s \int_0^{3/4}\psi_0'(t) \lr{ \int_0^1 (\ensuremath{\varepsilon} z+1-t)^{-s} (1-z)^{s-1} \, dz}\, dt \\
=\ensuremath{ &\;} -\ensuremath{\varepsilon}^s \int_0^{3/4} \psi_0'(t) I_s(\ensuremath{\varepsilon},t)\, dt.}
We consider the function $f(z)=(\ensuremath{\varepsilon} z+1-t)^{-s}$ and expand it in a Taylor series around $0$ with Lagrange remainder. Namely, there exists $c \in (0,z)$ such that
\[ f(z)=\sum_{i=0}^n f^{(i)}(0) \frac{z^i}{i!} + \frac{f^{(n+1)}(c)}{(n+1)!}z^{n+1}.\]
We have that for some $c\in (0,z)$
\[ (\ensuremath{\varepsilon} z+ 1-t)^{-s} = \sum_{i=0}^n \frac{\bar c_{s,i}} {i!}\ensuremath{\varepsilon}^i (1-t)^{-s-i} z^i + \frac{\bar c_{s,n+1}}{(n+1)!} \ensuremath{\varepsilon}^{n+1} (\ensuremath{\varepsilon} c+1-t)^{-s-n-1} z^{n+1},\]
where $\bar c_{s,i}$ is given in \eqref{ctcns2}. Using this, we have that
\bgs{ I_s(\ensuremath{\varepsilon},t)= \ensuremath{ &\;} \sum_{i=0}^n \frac{\bar c_{s,i}} {i!}\ensuremath{\varepsilon}^i (1-t)^{-s-i}\int_0^1 (1-z)^{s-1} z^i\, dz \\ \ensuremath{ &\;} + \frac{\bar c_{s,n+1}}{(n+1)!} \ensuremath{\varepsilon}^{n+1} (\ensuremath{\varepsilon} c+1-t)^{-s-n-1} \int_0^1 (1-z)^{s-1} z^{n+1}\, dz.}
We use the definition \eqref{b01} of the Beta function and continue
\bgs{ I_s(\ensuremath{\varepsilon},t) = \sum_{i=0}^n \frac{\bar c_{s,i}\beta(i+1,s) } {i!}\ensuremath{\varepsilon}^i (1-t)^{-s-i} + \frac{\bar c_{s,n+1}\beta(n+2,s)}{(n+1)!} \ensuremath{\varepsilon}^{n+1} (\ensuremath{\varepsilon} c+1-t)^{-s-n-1}.}
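As a sanity check on the expansion of $I_s(\ensuremath{\varepsilon},t)$, one can compare a direct quadrature of the integral with the truncated series. The minimal sketch below assumes $\bar c_{s,i}=(-1)^i s(s+1)\cdots(s+i-1)$, i.e., the pure $s$-dependent factor in $f^{(i)}(0)$ obtained by differentiating $z\mapsto(\ensuremath{\varepsilon} z+1-t)^{-s}$ (the exact constant is the one fixed in \eqref{ctcns2}):

```python
import math

def beta(a, b):
    # Euler Beta function via the Gamma function
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def I_quad(s, eps, t, n=20000):
    # I_s(eps, t) = ∫_0^1 (eps*z + 1 - t)^(-s) (1-z)^(s-1) dz;
    # the substitution w = (1-z)^s removes the endpoint singularity:
    # I_s = (1/s) ∫_0^1 (eps*(1 - w^(1/s)) + 1 - t)^(-s) dw  (midpoint rule)
    h = 1.0 / n
    tot = sum((eps * (1.0 - ((i + 0.5) * h) ** (1.0 / s)) + 1.0 - t) ** (-s)
              for i in range(n))
    return h * tot / s

def I_series(s, eps, t, n=3):
    # truncated expansion: sum_i cbar_i * beta(i+1, s)/i! * eps^i * (1-t)^(-s-i)
    total, cbar = 0.0, 1.0                  # assumed cbar_{s,0} = 1
    for i in range(n + 1):
        total += cbar * beta(i + 1, s) / math.factorial(i) \
                 * eps ** i * (1.0 - t) ** (-s - i)
        cbar *= -(s + i)                    # cbar_{s,i+1} = -(s + i) * cbar_{s,i}
    return total

s, eps, t = 0.4, 0.05, 0.5
assert abs(I_quad(s, eps, t) - I_series(s, eps, t)) < 1e-3
```

The discrepancy is of the size of the Lagrange remainder, $\mathcal O(\ensuremath{\varepsilon}^{n+1})$.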
In \eqref{psibla2} we obtain that
\eqlab { \label{psibla3} \frac{\pi}{\sin \pi s} \psi(1+\ensuremath{\varepsilon}) =\ensuremath{ &\;} -\ensuremath{\varepsilon}^{s} \sum_{i=0}^n \frac{\bar c_{s,i}\beta(i+1,s) } {i!} \ensuremath{\varepsilon}^i \int_0^{3/4}\psi_0'(t) (1-t)^{-s-i}\, dt \\ \ensuremath{ &\;}- \ensuremath{\varepsilon}^{s+n+1} \frac{\bar c_{s,n+1}\beta(n+2,s)}{(n+1)!} \int_0^{3/4} \psi_0'(t) (\ensuremath{\varepsilon} c+1-t)^{-s-n-1} \, dt.}
We notice that $(\ensuremath{\varepsilon} c+1-t)^{-s-n-1}\leq 4^{s+n+1}$ and it follows that
\[ \bigg| \int_0^{3/4} \psi_0'(t) (\ensuremath{\varepsilon} c+1-t)^{-s-n-1} \, dt\bigg| \leq 3\cdot 4^{s+n} \sup_{t \in [0,3/4]} |\psi_0'(t)|,\] which is finite.
We define then the finite quantities
\bgs{ C_{s,\psi_0,i}:=\ensuremath{ &\;} -\frac{\bar c_{s,i} \beta(i+1,s) }{i!} \int_0^{3/4} \psi_0'(t)(1-t)^{-s-i}\, dt\\ =\ensuremath{ &\;} \frac{\beta(i+1,s) } {i!} g^{(i)}(1) \quad \mbox{for} \quad i=0,\dots,n } and
\bgs{ C_{s,\psi_0,n+1}: = \ensuremath{ &\;} -\frac{\bar c_{s,n+1} \beta(n+2,s) }{(n+1)!} \int_0^{3/4} \psi_0'(t)(\ensuremath{\varepsilon} c +1-t)^{-s-n-1}\, dt \\
=\ensuremath{ &\;} \frac{\beta(n+2,s) } {(n+1)!} g^{(n+1)}(\ensuremath{\varepsilon} c +1), }
where we have used \eqref{gn1}.
It follows in \eqref{psibla3} that
\[ \frac{\pi}{\sin \pi s}\psi(1+\ensuremath{\varepsilon}) = \sum_{i=0}^{n+1} C_{s,\psi_0,i} \textcolor{black}{\ensuremath{\varepsilon}^{s+i}} . \]
This gives for $\ensuremath{\varepsilon} \to 0$ that
\[\psi(1+\ensuremath{\varepsilon})= \kappa\ensuremath{\varepsilon}^s + \mathcal O (\ensuremath{\varepsilon}^{s+1}),\]
where
\bgs{ \kappa =\frac{\sin \pi s}{\pi}\, C_{s,\psi_0,0}= \frac{\sin \pi s}{\pi}\,\beta(1,s)\, g(1) = - \frac{\sin \pi s}{\pi}\,\beta(1,s) \int_0^{3/4} \psi_0'(t)(1-t)^{-s} \, dt. }
Since $-\psi_0'(x)>0$ in $[0,3/4)$ by hypothesis (see \eqref{fifi41}), we have that
\[-\int_0^{3/4} \psi_0'(t)(1-t)^{-s} \, dt>0.\]
This implies that $\kappa$ is strictly positive, which concludes the proof of the Lemma.
\end{proof}
\bigskip
Blowing up the function built in Lemma \ref{lem1}, we obtain a sequence of functions, Caputo-stationary in $(0,\infty)$, that converges on $(0,\infty)$ to a positive multiple of the function $x^s$.
\begin{lem}\label{ls1}
There exists a sequence $(v_j)_ {j \in \ensuremath{\mathbb{N}}}$ of functions $v_j \in C^{1,s}_{-j}\cap C^{\infty}\big((0,\infty)\big)$ such that for any $j \in \ensuremath{\mathbb{N}}$
\begin{equation} \label{pbvj1}
\begin{aligned}
D^s_{-j} v_j(x)&= 0 &\text{ in } &(0,\infty), \\
v_j(x) &= 0 &\text{ in } & \Big[-\frac{j}4,0\Big]
\end{aligned}
\end{equation}
and for any $x>0$
\eqlab{ \label{limvj} \lim_{j\to \infty} v_j(x)=\textcolor{black}{\kappa} x^s, } for some $\textcolor{black}{\kappa}>0$.
Moreover, on any bounded subinterval $I\subseteq (0,\infty)$ the convergence is uniform.
\end{lem}
A qualitative example of a sequence described in Lemma \ref{ls1} is depicted in Figure \ref{fign:Lem42}.
\begin{center}
\begin{figure}[htpb]
\hspace{0.6cm}
\begin{minipage}[b]{0.85\linewidth}
\centering
\includegraphics[width=0.95\textwidth]{Lem42.png}
\caption{A sequence of Caputo-stationary functions in $(0,\infty)$}
\label{fign:Lem42}
\end{minipage}
\end{figure}
\end{center}
\begin{proof}
We consider the function $\psi$ solution of the problem \eqref{prob1} as introduced in Lemma \ref{lem1}, and define for any $j\in \ensuremath{\mathbb{N}}$
\[ v_j(x) := j^s \psi\bigg(\frac{x}{j} +1\bigg).\] We prove that for any $j\in \ensuremath{\mathbb{N}}$ the function $v_j$ is a solution of problem \eqref{pbvj1}.
Recalling Lemma \ref{lem1}, we have that $\psi(x)=\psi_0(x)$ in $(-\infty,1]$, hence $v_j(x)=j^s \psi_0 \displaystyle \bigg(\frac{x}{j} +1\bigg)$ when $\displaystyle \frac{x}{j} +1 \leq 1$, i.e. when $x\leq 0$. Moreover, from conditions \eqref{fifi41} we have that $v_j(x)=j^s \psi_0(0)$ when $\displaystyle \frac{x}{j} +1 \leq 0$, hence when $x\leq -j$ and $v_j(x)=0$ when $\displaystyle\frac{3}{4}\leq \displaystyle \frac{x}{j} +1 \leq 1$, hence for $x\in \displaystyle \left[-\frac{j}4,0\right]$.
Since $\psi \in C_0^{1,s}\cap C^{\infty}\big((1,\infty)\big)$, we have that $v_j\in C_{-j}^{1,s} \cap C^{\infty}\big((0,\infty)\big)$. Furthermore, since $\psi$ is a solution of problem \eqref{prob1}, we have by the definition \eqref{caputo} that
\bgs{ D_{-j}^s v_j(x) =\ensuremath{ &\;} \ensuremath{ \frac{1}{\Gamma(1-s)}} \int^x_{-j} v_j'(t)(x-t)^{-s} \, dt\\
= \ensuremath{ &\;} \frac{ j^{s-1}}{\Gamma(1-s)} \int^x_{-j}\psi'\Big(\frac{t}{j}+1\Big) (x-t)^{-s} \, dt.}
We use the change of variables $y= t/j +1$ and obtain
\bgs{ D_{-j}^s v_j(x) =\ensuremath{ &\;} \ensuremath{ \frac{1}{\Gamma(1-s)}} \int_0^{x/j+1} \psi'(y)\Big(\frac{x}{j}+1-y\Big)^{-s}\, dy \\
=\ensuremath{ &\;} D_0^s \psi \Big(\frac{x}{j} +1\Big).}
This implies that $D_{-j}^s v_j(x) =0 $ whenever $D_0^s \psi \displaystyle \left(\frac{x}{j} +1\right) =0$. From \eqref{prob1}, this happens when $\displaystyle \frac{x}j +1 >1$, hence for $x>0$.
In conclusion, we have that for any $j\in \ensuremath{\mathbb{N}}$ the functions $v_j \in C_{-j}^{1,s} \cap C^{\infty}\big((0,\infty)\big)$ satisfy
\bgs{ D_{-j}^s v_j(x) &= 0 &\mbox{ in } & (0,\infty), \\
v_j(x) &= 0 &\mbox{ in } & \left[-\frac{j}4,0\right] }
and
\bgs{ v_j(x)&=j^s\psi_0\lr{\frac{x}j +1} &\mbox{ in } & (-\infty,0],\\
v_j(x)&=j^s\psi_0(0) &\mbox{ in } & (-\infty,-j].}
In particular, $v_j$ is a solution of problem \eqref{pbvj1} for any $j \geq 1$.
\bigskip
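The scaling identity $D^s_{-j} v_j(x) = D^s_0\, \psi(x/j+1)$ obtained above relies only on a change of variables, so it holds for any $C^1$ function $\psi$. A minimal numerical sketch (with the arbitrary test function $\psi(y)=y^2$, which is not the $\psi$ of Lemma \ref{lem1}) is:

```python
import math

def caputo(phi_prime, a, x, s, n=20000):
    # D_a^s phi(x) = (1/Gamma(1-s)) ∫_a^x phi'(t) (x-t)^(-s) dt;
    # the substitution w = (x-t)^(1-s) removes the endpoint singularity.
    p = 1.0 - s
    h = (x - a) ** p / n
    tot = sum(phi_prime(x - ((i + 0.5) * h) ** (1.0 / p)) for i in range(n))
    return h * tot / (p * math.gamma(1.0 - s))

s, j, x = 0.5, 4.0, 2.0
psi_prime = lambda y: 2.0 * y                 # test function psi(y) = y^2
vj_prime = lambda t: j ** (s - 1.0) * psi_prime(t / j + 1.0)  # (j^s psi(./j + 1))'
lhs = caputo(vj_prime, -j, x, s)              # D_{-j}^s v_j (x)
rhs = caputo(psi_prime, 0.0, x / j + 1.0, s)  # D_0^s psi (x/j + 1)
assert abs(lhs - rhs) < 1e-4
```

Both quadratures approximate the same value, confirming the rescaling identity.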
We prove now that, as $j \to \infty$, the sequence $v_j(x)$ tends on $(0,\infty)$ to the function $\kappa x^s$, for a suitable constant $\kappa>0$. Using \eqref{psieee}, for $x>0$ and $j$ large we have that
\[ v_j(x) = j^s \psi \left(\frac{x}{j}+1 \right) = j^s \left( \kappa \frac{x^s}{j^s} + \mathcal O \left(\frac{x^{s+1}}{j^{s+1}}\right)\right) = \kappa x^s + \mathcal O \left(\frac{x^{s+1}}{j}\right).\]
By sending $j$ to infinity we obtain that
\[ \lim_{j\to \infty} v_j(x)=\kappa x^s.\]
On any bounded subinterval $I\subseteq (0,\infty)$ the remainder $\mathcal O\big(x^{s+1}/j\big)$ is uniform in $x$, hence
\bgs{ \lim_{j \to \infty} \sup_{x\in I} |v_j(x) - \kappa x^s| = 0. } It follows also that on any bounded subinterval $I\subseteq (0,\infty)$ the sequence $v_j$ is uniformly bounded.
This concludes the proof of the Lemma.
\end{proof}
\section{Existence of a Caputo-stationary function with an arbitrarily \\ large number of prescribed derivatives}\label{sectma}
Using Lemma \ref{ls1}, we prove that there exists a Caputo-stationary function with an arbitrarily large number of prescribed derivatives. Namely, for any $m\in \ensuremath{\mathbb{N}}$ we prove that we can find a Caputo-stationary function $v$ and a point $p$ such that the derivatives of
$v$ at $p$ vanish up to order $m-1$, while the $m$-th derivative equals $1$. More precisely:
\begin{thm}\label{thm4}
For any $m \in \ensuremath{\mathbb{N}}$ there exist a point $p>0$, a constant $R>0$ and a function $v \in C_{-R}^{1,s} \cap C^{\infty} \big((0,\infty)\big)$ such that
\eqlab{ \label{cc1}
D^s_{-R} v(x)&=0 &\text{ in } &(0, \infty), \\
v(x)&=0 &\text{ in } & \Big [-\frac{R}4,0\Big]}
and
\eqlab{ \label{cc2}
&v^{(l)}(p)=0 & & \text{ for any } \quad l< m\\
&v^{(m)}(p)=1.&&}
\end{thm}
\begin{proof}
We consider $\mathcal Z$ to be the set of the pairs $(v,x)$ of all functions $v\in C_{-R}^{1,s}\cap C^{\infty} \big((0,\infty)\big)$ satisfying conditions \eqref{cc1} for some $R>0$, and $x\in (0,\infty)$. More precisely
\textcolor{black}{\bgs{\mathcal Z = \Big\{ (v,x) \text{ s.t. } x\in (0,\infty) \mbox{ and } \exists\, R>0 \text{ s.t. } & v\in C_{-R}^{1,s} \cap C^{\infty} \big((0,\infty)\big), D^s_{-R} v=0 \text{ in } (0, \infty), v =0 \text{ in } \Big [-\frac{R}4,0\Big] \Big\}. }}
We fix $m\in \ensuremath{\mathbb{N}}$. To each pair $(v,x)\in \mathcal Z$ we associate the vector $ \big(v(x), v'(x), \dots, v^{(m)}(x)\big) \in \ensuremath{\mathbb{R}}^{m+1}$ and consider $\mathcal V$ to be the vector space spanned by all such vectors. We claim that this vector space exhausts $\ensuremath{\mathbb{R}}^{m+1}$.
Suppose by contradiction that this is not so and $\mathcal V$ lies in a hyperplane. Then there exists a vector $(c_0,c_1,\dots,c_m)\in \ensuremath{\mathbb{R}}^{m+1}\setminus \{0\}$ orthogonal to any vector $ \big(v(x), v'(x), \dots, v^{(m)}(x)\big) $ with $(v,x) \in \mathcal Z$, hence
\[ \sum_{i=0}^m c_i v^{(i)}(x) = 0.\]
We notice that for any $j\geq 1$ the pairs $(v_j,x)$ with $v_j$ satisfying problem \eqref{pbvj1} and $x\in (0,\infty)$ belong to the set $\mathcal Z$. It follows that for any $j\geq 1$ we have that
\eqlab{ \label{bla10} \sum_{i=0}^m c_i v_j^{(i)} (x) =0.}
Let $\varphi \in C^{\infty}_c\big((0,\infty)\big)$. Integrating by parts we have that for any $i\in \ensuremath{\mathbb{N}}$
\bgs{ \int_\ensuremath{\mathbb{R}} v_j^{(i)}(x)\varphi(x)\, dx= (-1)^i \int_\ensuremath{\mathbb{R}} v_j(x) \varphi^{(i)}(x)\, dx.}
Thanks to Lemma \ref{ls1}, the sequence $v_j$ is uniformly convergent to $\kappa x^s$ on any bounded subinterval $I\subseteq (0,\infty)$, for some $\kappa>0$. By the dominated convergence theorem we have that
\[ \lim_{j \to \infty} \int_\ensuremath{\mathbb{R}} v_j^{(i)}(x)\varphi(x)\, dx = (-1)^i\lim_{j \to \infty} \int_{\ensuremath{\mathbb{R}}} v_j(x) \varphi^{(i)}(x)\, dx = (-1)^i \int_{\ensuremath{\mathbb{R}}} \kappa x^s \varphi^{(i)}(x) \, dx.\] We integrate by parts one more time and obtain that
\[ (-1)^i \int_\ensuremath{\mathbb{R}} \kappa x^s \varphi^{(i)}(x)\, dx = \int_\ensuremath{\mathbb{R}} \kappa (x^s)^{(i)} \varphi(x)\, dx.\]
It follows that
\[ \lim_{j \to \infty} \int_\ensuremath{\mathbb{R}} v_j^{(i)}(x)\varphi(x)\, dx = \int_\ensuremath{\mathbb{R}} \kappa ( x^s)^{(i)} \varphi(x)\, dx .\] Multiplying by $c_i$ and summing up, we obtain that
\bgs{ \lim_{j\to \infty} \int_\ensuremath{\mathbb{R}} \sum_{i=0}^m c_i v_j^{(i)}(x) \varphi(x)\, dx = \int_\ensuremath{\mathbb{R}} \sum_{i=0}^m c_i \kappa ( x^s)^{(i)} \varphi(x)\, dx .}
From this and equality \eqref{bla10} we finally obtain that
\[ 0 = \int_{\ensuremath{\mathbb{R}}}\sum_{i=0}^m c_i \kappa (x^s)^{(i)} \varphi(x)\, dx \] for any $\varphi \in C_c^{\infty}\big((0,\infty)\big)$.
This implies that on $(0,\infty)$
\[ 0= \kappa \sum_{i=0}^m c_i(x^s)^{(i)} =\kappa \sum_{i=0}^m c_i s(s-1)\dots(s-i+1)x^{s-i}. \]
We divide this relation by $\kappa$ (which is strictly positive) and multiply by $x^{m-s}$, obtaining that for any $x \in (0,\infty) $
\[ \sum_{i=0}^m c_i s(s-1)\dots(s-i+1) x^{m-i} =0.\]
We have here a polynomial that vanishes for any positive $x$, hence all its coefficients must vanish. Thanks to the fact that $s \in(0,1)$, the product $s(s-1) \dots (s-i+1)$ is never zero, therefore one must have $c_i= 0 $ for every $i=0,\dots,m$. This is a contradiction, since the vector $(c_0,\dots,c_m)$ was assumed to be nonzero. Hence the vector space $\mathcal V$ exhausts $\ensuremath{\mathbb{R}}^{m+1}$ and there exists $(v,p) \in \mathcal Z $ such that $\big( v(p), v'(p),\dots, v^{(m)}(p)\big)=(0,\dots,0,1)$. This concludes the proof of Theorem \ref{thm4}.
\end{proof}
\section{Proof of Theorem \ref{thm:thm1}}\label{sectthm1}
This section is dedicated to the proof of Theorem \ref{thm:thm1}. We translate and rescale the function $v$ given by Theorem \ref{thm4}. The derivatives of the rescaled function vanish at $0$ up to order $m-1$, and the $m$-th derivative equals $1$. Using a Taylor expansion, we obtain that this rescaled function approximates well the monomial $q_m(x)=x^m/m!$.
\begin{proof}[Proof of Theorem \ref{thm:thm1}]
In Section \ref{sbssc} we explained why it suffices to prove that for any $m\in \ensuremath{\mathbb{N}}$ and any monomial $q_m(x)=x^m$ there exists a Caputo-stationary function $u$ such that
\[ \|u-q_m\|_{C^k\big([0,1]\big)} < \ensuremath{\varepsilon}.\]
For an arbitrary $m\in \ensuremath{\mathbb{N}}$, we take for convenience the normalized monomial \[ q_m(x)=\frac{x^{m} }{m!} \] (the factor $1/m!$ is harmless, since the approximation holds up to an arbitrary $\ensuremath{\varepsilon}$). \textcolor{black}{Also, we consider $p,R>0$ and the function $v$ as introduced in Theorem \ref{thm4}} and
we translate and rescale $v$. Let $\delta $ be a positive quantity (to be taken conveniently small in the sequel) and let $u$ be the function
\[ u(x):= \frac{v(\delta x +p)}{\delta^m} .\]
Since $v\in C_{-R}^{1,s} \cap C^{\infty}\big((0,\infty)\big)$ we have that $u \in C_{ \frac{-p-R}{\delta}}^{1,s} \cap C^{\infty} \lr{\Big(-\displaystyle \frac{p}{\delta}, \infty\Big)}$ and
\bgs{ \Gamma(1-s) D_{ \frac{-p-R}{\delta} }^s u(x) =\ensuremath{ &\;} \int_{ \frac{-p-R}{\delta} }^x u'(t)(x-t)^{-s}\, dt \\
= \ensuremath{ &\;} \delta^{1-m} \int_{ \frac{-p-R}{\delta} }^x v'(\delta t+ p) (x-t)^{-s}\, dt.}
We change the variable $y=\delta t +p$ and obtain that
\bgs{ \Gamma(1-s) D^s_{\frac{-p-R}{\delta} } u(x) = \ensuremath{ &\;} \delta^{s-m} \int_{-R}^{\delta x +p} v'(y) (\delta x+ p-y)^{-s}\, dy\\=\ensuremath{ &\;}
\delta^{s-m}\,\Gamma(1-s) D^s_{-R} v(\delta x +p).}
Let $a:=\displaystyle \frac{-p-R}{\delta}$. Using the properties \eqref{cc1} of $v$ we obtain that
\bgs{ D_a ^su(x) =0 \text{ in } \Big(-\frac{p}{\delta}, \infty\Big). }
With this notation, we have that $u\in C_a^{1,s}$ and since $\displaystyle -\frac{p}{\delta}<0$, that $D_a ^su(x) =0 \text{ in } [0, \infty). $
Furthermore, from the conditions \eqref{cc2} and the definition of $u$ we get that
\bgs{ u^{(l)}(0)& = \delta^{l-m}v^{(l)}(p)= 0 & & \text{ for any } \quad l< m\\
u^{(m)}(0)& = v^{(m)}(p)= 1.&&}
For any $x>-p/\delta$, let \[ g(x):= u(x) -q_m(x).\] We have that
\eqlab{ \label{gr1} g^{(l)}(0)&= 0 &\mbox{ for any } &l\leq m\quad \mbox{and}\\
g^{(m+l)} (x)&= u^{(m+l)}(x) & \mbox{ for any } &l\geq 1 .}
Moreover for $l\geq 1 $ we have that $u^{(m+l)}(x)= \delta^l v^{(m+l)} (\delta x +p)$ and it follows that
\[| g^{(m+l)}(x) | = \delta^{l} |v^{(m+l)}(\delta x+p) | .\]
Hence for $x \in [0,1]$ we have the bound
\eqlab{ \label{bg1} | g^{(m+l)}(x) |\leq \delta^l \sup_{y \in [p, p+\delta] } |v^{(m+l)} (y)| = \tilde C \delta^l,} where $\tilde C$ is a positive constant.
We consider the derivative of order $k$ of $g$ and take its Taylor expansion with Lagrange remainder. \textcolor{black}{Thanks to \eqref{gr1}, for some $c\in (0,x)$ we have that
\[ g^{(k)}(x)= \sum_{i=\max\{ k,m+1\}} ^{k+m+1} g^{(i)}(0) \frac{x^{i-k}}{(i-k)!} +g^{(m+k+2)} (c) \frac{x^{m+2}}{(m+2)!} .\] }
Using \eqref{bg1}, for any $x\in [0,1]$ and possibly renaming the constants, we have that
\textcolor{black}{\[ | g^{(k)}(x)| \leq C \sum_{i={\max\{1,k-m\}}}^{k+2} \delta^i, \]}
therefore for $k\in \ensuremath{\mathbb{N}}_{0}$
\[ | g^{(k)}(x)| = |q_m^{(k)}(x) -u^{(k)}(x)|= \mathcal O(\delta) . \]
Letting $\delta\to 0$, the derivatives $u^{(k)}$ approximate $q_m^{(k)}$ uniformly on $[0,1]$. Hence, for any $\ensuremath{\varepsilon}>0$, taking $\delta$ small enough we obtain
\[ \|u-q_m\|_{C^k\big([0,1]\big)} < \ensuremath{\varepsilon},\] and this concludes the proof of Theorem \ref{thm:thm1}.
\end{proof}
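The role of the scaling parameter $\delta$ in the proof above can be illustrated with a toy example. The sketch below uses a hypothetical smooth $v$ satisfying the derivative conditions \eqref{cc2} at $p$; it is not the Caputo-stationary $v$ of Theorem \ref{thm4}, only a stand-in for the rescaling argument:

```python
import math

# hypothetical v with v^(l)(p) = 0 for l < m and v^(m)(p) = 1 (a smooth
# stand-in, NOT the function built in the paper)
m, p = 3, 1.0
v = lambda y: (y - p) ** m / math.factorial(m) + (y - p) ** (m + 1)
q = lambda x: x ** m / math.factorial(m)        # target monomial q_m

def sup_error(delta):
    # u(x) = v(delta*x + p) / delta^m; here u(x) - q(x) = delta * x^(m+1) exactly
    u = lambda x: v(delta * x + p) / delta ** m
    return max(abs(u(i / 100.0) - q(i / 100.0)) for i in range(101))

assert abs(sup_error(0.1) - 0.1) < 1e-9     # error = delta * max_{[0,1]} x^(m+1)
assert sup_error(0.01) < 0.2 * sup_error(0.1)  # error shrinks linearly in delta
```

For this toy $v$ the approximation error is exactly $\delta x^{m+1}$, matching the $\mathcal O(\delta)$ bound of the proof.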
\section*{Appendix}
In this Appendix, we give some explicit examples related to some of the lemmas introduced in this paper.
For this purpose, as an example for Lemma \ref{lem:int11}, we take $a=0, b=1, s=1/2$ and the function $\varphi(x)=x$ in $[0,1]$ and $\varphi(x)=0$ in $(-\infty,0)$. We build the function $u\in C_0^{1,1/2}$ that satisfies
\eqlab{\label{es1} D_0^{\frac{1}2} u(x)& =0 &\text{ in } & (1,\infty),\\
u(x)&=x &\text{ in } & [0,1],\\
u(x)&=0&\text{ in } & (-\infty,0) .}
Let \[ g(x):= -\int_0^1 \frac{\varphi'(t)}{\sqrt{x-t}} \, dt=-\int_0^1 (x-t)^{-\frac{1}2}\, dt = 2\sqrt{x-1}-2\sqrt{x}.\]
According to Lemma \ref{lem:int11} and to Theorem \ref{thm:probc}, the unique solution of the problem \eqref{es1} is given by
\[ u(x) =u(1)+ \frac{1}{\pi}\int_1^x \frac{g(t)}{\sqrt{x-t}}\, dt,\]
which, after a computation, gives
\[ u(x)=\frac{2}{\pi} \lr{x\arcsin \frac{1}{\sqrt x}-\sqrt{x-1}}. \]
We depict this function in the following Figure \ref{fign:es1}.
\begin{center}
\begin{figure}[htpb]
\hspace{0.6cm}
\begin{minipage}[b]{0.85\linewidth}
\centering
\includegraphics[width=0.90\textwidth]{lem21ref.png}
\caption{A Caputo-stationary function in $(1,\infty)$ prescribed on $(-\infty,1]$}
\label{fign:es1}
\end{minipage}
\end{figure}
\end{center}
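The closed-form expression for $u$ can be checked numerically against the representation formula above. A minimal pure-Python sketch (the substitution $t = x - w^2$ removes the integrable singularity at $t=x$):

```python
import math

def g(t):
    # g(t) = -∫_0^1 (t - tau)^(-1/2) d tau in closed form (phi(x) = x)
    return 2.0 * math.sqrt(t - 1.0) - 2.0 * math.sqrt(t)

def u_closed(x):
    # closed-form solution reported above
    return (2.0 / math.pi) * (x * math.asin(1.0 / math.sqrt(x)) - math.sqrt(x - 1.0))

def u_integral(x, n=20000):
    # u(x) = u(1) + (1/pi) ∫_1^x g(t) (x-t)^(-1/2) dt, with u(1) = 1;
    # substituting t = x - w^2 gives 2 ∫_0^sqrt(x-1) g(x - w^2) dw (midpoint rule)
    a = math.sqrt(x - 1.0)
    h = a / n
    total = sum(g(x - ((i + 0.5) * h) ** 2) for i in range(n))
    return 1.0 + (2.0 / math.pi) * h * total

for x in (1.5, 2.0, 3.0):
    assert abs(u_closed(x) - u_integral(x)) < 1e-4
```

For instance, at $x=2$ both evaluations give $u(2)=1-2/\pi$.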
In Lemma \ref{lem1}, we take $a=0, b=1, s=1/2$ and the quadratic function
\sys [\psi_0(x) =]{ &\frac{16}9 \lr{x-\frac{3}4}^2 &\mbox{ in } &\lrq{0,\frac{3}4},\\
& 0 & \mbox{ in } &\lrq{\frac{3}4,1}.}
So we are looking for a function $\psi \in C_0^{1,1/2}$ that satisfies
\eqlab{\label{es2} D_0^{\frac{1}2} \psi(x)& =0 &\text{ in } & (1,\infty),\\
\psi(x)&=\psi_0(x) &\text{ in } & (-\infty,1] .}
The solution, according again to Lemma \ref{lem:int11} and to Theorem \ref{thm:probc}, is given by
\[\psi(x) = \frac{1}{\pi}\int_1^{x}g(t)(x-t)^{-\frac{1}2} \, dt, \quad \mbox{ where } \quad g(t)= -\int_0^{\frac{3}4} \psi_0'(\tau) (t-\tau)^{-\frac{1}2}\, d\tau.\]
Computing this, we have that
\[ g(t)= -\frac{16}{27} \lr{ 8t^{\frac{3}2}-9t^{\frac{1}2} -(4t-3)^{\frac{3}2}}\]
and
\[ \psi(x) =\frac{1}{27\pi} \lrq{ 27 \pi + \sqrt{x-1} (-48x+52) +\arcsin \frac{1}{\sqrt{x}} (96x^2-144x) -\arcsin \frac{1}{\sqrt{4x-3}} (96x^2-144x+54) }.\]
We depict this function in the following Figure \ref{fign:es2}.
\begin{center}
\begin{figure}[htpb]
\hspace{0.6cm}
\begin{minipage}[b]{0.85\linewidth}
\centering
\includegraphics[width=0.90\textwidth]{lem31ref.png}
\caption{A Caputo-stationary function in $(1,\infty)$ prescribed on $(-\infty,1]$}
\label{fign:es2}
\end{minipage}
\end{figure}
\end{center}
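On this explicit example one can also verify numerically the boundary growth \eqref{psieee} of Lemma \ref{lem1}: near $x=1$ the solution behaves like a positive multiple of $\ensuremath{\varepsilon}^s=\sqrt{\ensuremath{\varepsilon}}$. Expanding the closed-form expression for $\psi$ above around $x=1$ gives the coefficient $64/(27\pi)$; a minimal sketch is:

```python
import math

def psi_example(x):
    # closed-form solution of the second Appendix example (s = 1/2)
    return (1.0 / (27.0 * math.pi)) * (
        27.0 * math.pi
        + math.sqrt(x - 1.0) * (-48.0 * x + 52.0)
        + math.asin(1.0 / math.sqrt(x)) * (96.0 * x * x - 144.0 * x)
        - math.asin(1.0 / math.sqrt(4.0 * x - 3.0)) * (96.0 * x * x - 144.0 * x + 54.0)
    )

kappa = 64.0 / (27.0 * math.pi)   # coefficient of sqrt(eps) from the expansion
for eps in (1e-3, 1e-5):
    # psi(1 + eps) / sqrt(eps) approaches kappa as eps -> 0
    assert abs(psi_example(1.0 + eps) / math.sqrt(eps) - kappa) < 1e-2
```

Note also that $\psi(1)=0$, consistently with $\psi_0(1)=0$.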
\section{Properties and limitations of random 2D slice visualizations}\label{sec:theory}
So far, we have provided many examples of random 2D slice visualizations of high-dimensional objective functions. We have also demonstrated that such visualizations have predictive power regarding the difficulty of optimization. However, as information is obviously lost in only evaluating the objective along a 2D slice, this section provides further analysis of the limitations of the approach. To allow investigating how the mathematical properties of objective functions manifest in the visualized slices, this section focuses on simple test functions with closed-form expressions. Such simple expressions are not available for real-life movement optimization objectives that depend on complex simulated dynamics, typically implemented using a black-box physics simulator.
\subsection{2D Visualizations are Optimistic About Ill-conditioning}
Consider the following cost function
\begin{eqnarray}
f(\mathbf{x})&=&\sum_{i=1}^{k} x_i^2 + \epsilon \sum_{i=k+1}^{d} x_i^2\\
&=& ||\mathbf{x}_{:k}||^2 + \epsilon ||\mathbf{x}_{k:}||^2, \label{eq:kd}
\end{eqnarray}
where $\epsilon$ is a small constant, i.e., $f(\mathbf{x})$ mostly depends only on the first $k$ optimized variables. $\mathbf{x}_{:k}$ denotes the projection of $\mathbf{x}$ into the subspace of the first $k$ dimensions. $\mathbf{x}_{k:}$ denotes the projection into the remaining dimensions. The Hessian of $f(\mathbf{x})$ is diagonal, containing the curvatures along the unit vectors as the diagonal elements. Curvature along the first $k$ unit vectors equals $2$ and curvature along the rest of the dimensions is $2\epsilon$. Thus, if $k \neq 0$ and $k \neq d$, the condition number $\kappa = 1/\epsilon$.
Geometrically, the isosurfaces of $f(\mathbf{x})$ are $(d-1)$-spheres elongated by a factor of $1/\sqrt{\epsilon}$ along the last $d-k$ dimensions. The visualized 2D isocontours correspond to the intersections of the isosurfaces with the visualization plane. This is illustrated in Fig. \ref{fig:isosurfaces} and in the supplemental video for $d=3$.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{images/isosurfaces_and_isocontours.png}
\caption{The isosurfaces, random visualization planes, and 2D visualization isocontours for Equation \ref{eq:kd} in the case of $d=3$.}\label{fig:isosurfaces}
\end{figure}
Investigating Fig. \ref{fig:isosurfaces} reveals a basic property of the 2D visualizations: with the convex quadratic objective of Equation \ref{eq:kd}, the elongation of the 2D isocontours is less than or equal to the true elongation of the isosurfaces. \emph{In other words, $\kappa_{2D} \le \kappa$.}
This property follows from the isocontours corresponding to planar intersections of the isosurfaces. First, as illustrated in the middle of Fig. \ref{fig:isosurfaces}, it is possible to rotate the visualization plane such that the isocontours display less elongation. Second, the isocontours cannot display more than the real elongation; the visualized elongation is at maximum when one plane basis vector aligns with a direction of high elongation (vertical axis in the middle of Fig. \ref{fig:isosurfaces}), and the other basis vector aligns with a direction of low elongation, in which case the isocontours display the correct $\kappa_{2D} = \kappa = 1/\epsilon$.
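The bound $\kappa_{2D} \le \kappa$ can be checked empirically. The following minimal sketch (pure Python, no dependencies) samples a random orthonormal 2D basis, restricts the quadratic of Equation \ref{eq:kd} to the slice, and compares the resulting 2D condition number to the true $\kappa = 1/\epsilon$:

```python
import math, random

def random_slice_basis(d, rng):
    # Two random orthonormal basis vectors of a 2D slice (Gram-Schmidt).
    u = [rng.gauss(0, 1) for _ in range(d)]
    v = [rng.gauss(0, 1) for _ in range(d)]
    nu = math.sqrt(sum(x * x for x in u))
    u = [x / nu for x in u]
    t = sum(a * b for a, b in zip(u, v))
    v = [b - t * a for a, b in zip(u, v)]
    nv = math.sqrt(sum(x * x for x in v))
    return u, [x / nv for x in v]

def kappa_2d(d, k, eps, seed=0):
    # Condition number of f(x) = ||x_:k||^2 + eps*||x_k:||^2 restricted to a
    # random 2D slice through the minimum: eigenvalue ratio of [u v]^T D [u v].
    u, v = random_slice_basis(d, random.Random(seed))
    w = [1.0] * k + [eps] * (d - k)          # diagonal of D (Hessian up to a factor 2)
    a = sum(wi * ui * ui for wi, ui in zip(w, u))
    b = sum(wi * ui * vi for wi, ui, vi in zip(w, u, v))
    c = sum(wi * vi * vi for wi, vi in zip(w, v))
    disc = math.sqrt((a - c) ** 2 + 4.0 * b * b)
    return (a + c + disc) / (a + c - disc)   # largest / smallest eigenvalue

eps = 1e-3
for k in (1, 5, 20):
    kc = kappa_2d(40, k, eps)
    assert 1.0 <= kc <= (1.0 / eps) * 1.000001   # kappa_2D never exceeds kappa
```

Since the restricted Hessian is a projection of the full one onto orthonormal directions, its eigenvalues interlace between $2\epsilon$ and $2$, which is exactly the bound the assertion checks.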
\subsection{2D Visualizations Show Ill-conditioning More Accurately With Low Intrinsic Dimensionality}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{images/sweep_dk.png}
\caption{Contour plots of random slices of $f(\mathbf{x})$ in Equation \ref{eq:kdapprox} with different $k$ and $d$. Visualized ill-conditioning is more accurate with small $k$. }\label{fig:kd}
\end{figure}
It turns out that \textit{visualization accuracy depends on the intrinsic dimensionality $k$}. To analyze this, let us consider the extreme case of $\epsilon = 0$, i.e.,
\begin{equation}
f(\mathbf{x}) = f(\mathbf{x}_{:k}) = ||\mathbf{x}_{:k}||^2. \label{eq:kdapprox}
\end{equation}
Fig. \ref{fig:kd} shows the contour plots of random 2D slices with different $k$ and $d$. With low $k$, the visualized ill-conditioning is more accurate, independent of full problem dimensionality $d$. Deriving a closed-form expression of $\kappa_{2D}$ as a function of $d$ and $k$ is beyond the scope of this paper. However, as shown below, $k=1$ and $k=d$ result in the correct $\kappa_{2D}=\infty$ and $\kappa_{2D}=1$, respectively.
Let $\mathbf{u}, \mathbf{v} \in \mathbb{R}^d$ denote the slice basis vectors, with orthogonality $\mathbf{u}^T\mathbf{v}=0$. On the visualization plane, $\mathbf{x}=p_1 \mathbf{u} + p_2 \mathbf{v} = [\mathbf{u}\ \mathbf{v}]\mathbf{p}$, where $\mathbf{p}$ denotes the 2D position on the plane. Similarly, $\mathbf{x}_{:k}=[\mathbf{u}_{:k} \ \mathbf{v}_{:k}]\mathbf{p}$, and the objective can be expressed as:
\begin{eqnarray}
f(\mathbf{x}_{:k}) &=& ||[\mathbf{u}_{:k} \mathbf{v}_{:k}]\mathbf{p}||^2\\
&=& \mathbf{p}^T \begin{bmatrix}
\mathbf{u}^T_{:k}\mathbf{u}_{:k} & \mathbf{u}^T_{:k}\mathbf{v}_{:k} \\
\mathbf{v}^T_{:k}\mathbf{u}_{:k} & \mathbf{v}^T_{:k}\mathbf{v}_{:k}
\end{bmatrix} \mathbf{p} \\
&=& \mathbf{p}^T \mathbf{A} \mathbf{p}.
\end{eqnarray}
Because $\mathbf{A}$ is symmetric, the Hessian of the quadratic form w.r.t. $\mathbf{p}$ is:
\begin{eqnarray}
H(f(\mathbf{x}_{:k}))=\mathbf{A}+\mathbf{A}^T=2\mathbf{A}.
\end{eqnarray}
The condition number $\kappa_{2D}=\kappa\big(H(f(\mathbf{x}_{:k}))\big)=\kappa(\mathbf{A})$, as the condition number is invariant to scaling the Hessian by a constant.
With $k=1$, the vectors $\mathbf{u}_{:k}=[u_1], \mathbf{v}_{:k}=[v_1]$, and the determinant becomes zero:
\begin{eqnarray}
\det(\mathbf{A})&=& \mathbf{u}^T_{:k}\mathbf{u}_{:k}\mathbf{v}^T_{:k}\mathbf{v}_{:k} - \mathbf{u}^T_{:k}\mathbf{v}_{:k}\mathbf{v}^T_{:k}\mathbf{u}_{:k} \\
&=&u_1 u_1 v_1 v_1 - u_1 v_1 v_1 u_1 = 0.
\end{eqnarray}
This indicates at least one zero eigenvalue and, since $\mathbf{A}$ is not a null matrix, some eigenvalue must also be nonzero, i.e., $\kappa(\mathbf{A})=\max(eig(\mathbf{A}))/\min(eig(\mathbf{A}))=\infty$.
When $k$ grows from 1 to $d$, the vectors $\mathbf{u}_{:k}, \mathbf{v}_{:k}$ gradually become closer to $\mathbf{u}, \mathbf{v}$, i.e., unit-length and orthogonal. Thus, the off-diagonal elements become zero and $\mathbf{A}$ becomes the identity matrix, with $\kappa(\mathbf{A})=1$.
Although the quadratic $f(\mathbf{x})$ was chosen to be separable for easier mathematical analysis, the result generalizes to the arbitrarily rotated case.
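The two limiting cases analyzed above can likewise be verified numerically: with $k=1$ the restricted Hessian $\mathbf{A}$ is singular ($\kappa_{2D}=\infty$), while with $k=d$ it is the identity ($\kappa_{2D}=1$). A small self-contained sketch:

```python
import math, random

def restricted_eigs(d, k, seed=1):
    # Eigenvalues of A = [u_:k v_:k]^T [u_:k v_:k] for a random orthonormal
    # slice basis (u, v): the Hessian of ||x_:k||^2 on the slice, up to a factor 2.
    rng = random.Random(seed)
    u = [rng.gauss(0, 1) for _ in range(d)]
    v = [rng.gauss(0, 1) for _ in range(d)]
    nu = math.sqrt(sum(x * x for x in u))
    u = [x / nu for x in u]
    t = sum(a * b for a, b in zip(u, v))
    v = [b - t * a for a, b in zip(u, v)]
    nv = math.sqrt(sum(x * x for x in v))
    v = [x / nv for x in v]
    a = sum(x * x for x in u[:k])
    b = sum(x * y for x, y in zip(u[:k], v[:k]))
    c = sum(x * x for x in v[:k])
    disc = math.sqrt((a - c) ** 2 + 4.0 * b * b)
    return (a + c + disc) / 2.0, (a + c - disc) / 2.0

hi, lo = restricted_eigs(30, 1)
assert lo < 1e-12 * hi                     # k = 1: A singular, kappa_2D = infinity
hi, lo = restricted_eigs(30, 30)
assert abs(hi - 1.0) < 1e-9 and abs(lo - 1.0) < 1e-9   # k = d: kappa_2D = 1
```

Intermediate values of $k$ interpolate between these extremes, matching Fig. \ref{fig:kd}.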
\subsection{2D Visualizations Can Be Optimistic About Multimodality}
Fig. \ref{fig:nonconvex} shows contour plots of random visualization slices with different dimensionality $d$, using two multimodal test functions. Rastrigin's function is a standard multimodal optimization test function with the global minimum at the origin and infinitely many local minima:
\begin{equation}
f_{Rastrigin}(\mathbf{x})=10d+\sum_{i=1}^d [x_i^2-10\cos(2\pi x_i)]
\end{equation}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{images/nonconvex.png}
\caption{Contour plots of random slices of multimodal test functions with different $d$. Large $d$ can make multimodality less apparent, although it can be compensated by using unnormalized visualization basis vectors (bottom row). On the first row, the visualization plane is centered at the origin, between the optima. On the second row, it is centered at the optimum.}\label{fig:nonconvex}
\end{figure}
Additionally, we use the following bimodal function:
\begin{equation}
f_{Bimodal}(\mathbf{x})=e^{-\frac{1}{2}||\mathbf{x}-\mathbf{1}||^2} + 0.8e^{-\frac{1}{2}||\mathbf{x}+\mathbf{1}||^2},
\end{equation}
where $\mathbf{1}$ denotes a vector of ones. We visualize this function both around the origin and around the dominant mode at $\mathbf{1}$.
Fig. \ref{fig:nonconvex} reveals two key insights:
\begin{itemize}
\item \textit{Multimodality becomes less apparent with increasing dimensionality $d$.} The exception is the first row, where visualized multimodality only depends on plane rotation independent of $d$. This is because the visualization plane intersects the origin; in this case, the optima are always equally far from the plane and thus have similar influence on the visualized $f_{Bimodal}(\mathbf{x})$. However, when the plane intersects the optimum at $\mathbf{1}$, the other optimum tends to lie increasingly far from the plane with increasing $d$, having a negligible effect on the visualization.
\item The visualization of Rastrigin's function illustrates how \textit{landscape features may scale differently with dimensionality}. Rastrigin's central mode becomes more dominant with large $d$. At the bottom of Fig. \ref{fig:nonconvex}, we demonstrate how this can be compensated by omitting the unit-length normalization of the slice basis vectors, and instead normalizing them to the mean of their sampled lengths. Each basis vector element is sampled uniformly in the range $[-1,1]$.
\end{itemize}
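The first insight above can be quantified: center a random slice at the dominant mode of $f_{Bimodal}$ and measure the largest contribution of the other mode anywhere on the visualized $[-3,3]^2$ window. A minimal pure-Python sketch (using the orthonormality of the basis to evaluate squared distances in closed form):

```python
import math, random

def second_mode_peak(d, seed=2):
    # Random orthonormal basis (u, v) of a slice centered at the mode x = 1.
    rng = random.Random(seed)
    u = [rng.gauss(0, 1) for _ in range(d)]
    v = [rng.gauss(0, 1) for _ in range(d)]
    nu = math.sqrt(sum(x * x for x in u))
    u = [x / nu for x in u]
    t = sum(a * b for a, b in zip(u, v))
    v = [b - t * a for a, b in zip(u, v)]
    nv = math.sqrt(sum(x * x for x in v))
    v = [x / nv for x in v]
    su, sv = sum(u), sum(v)
    best = 0.0
    for i in range(-30, 31):
        for j in range(-30, 31):
            p1, p2 = i / 10.0, j / 10.0
            # ||(1 + p1*u + p2*v) + 1||^2, expanded via orthonormality of (u, v)
            sq = 4.0 * d + 4.0 * (p1 * su + p2 * sv) + p1 * p1 + p2 * p2
            best = max(best, 0.8 * math.exp(-0.5 * sq))
    return best

assert second_mode_peak(2) > 0.1     # d = 2: the slice is all of R^2, mode visible
assert second_mode_peak(50) < 1e-6   # d = 50: the second mode has vanished
```

With $d=2$ the slice is the whole plane and the second mode is fully visible, while for large $d$ its distance to the slice grows like $2\sqrt{d}$ and its contribution decays as $e^{-\mathcal O(d)}$.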
\subsection{If 2D Visualizations Show Problems, There Really Are Problems}
\textbf{\textit{Visualized ill-conditioning is real}} The results above indicate that 2D visualizations have limited sensitivity as a diagnostic tool for detecting ill-conditioning and multimodality. Fortunately, $\kappa_{2D} \le \kappa$ also means that the visualizations have high specificity, i.e. if the 2D isocontours are elongated, the problem is indeed ill-conditioned.
\textbf{\textit{Visualized non-convexity indicates real non-convexity}} For a convex unimodal objective, the 2D visualization is likewise convex and unimodal. This follows from the intersection of two convex sets being convex. Each 2D isocontour encloses a set that is the intersection of two convex subsets of $\mathbb{R}^d$, i.e., the visualization plane and the volume enclosed by the corresponding isosurface. However, other non-convexity can be confused with multimodality. Consider a curved, banana-shaped 3D isosurface. It is possible to intersect this with a plane such that the resulting isocontours comprise two ellipses.
\subsection{Summary of Limitations}
In summary, random 2D visualization slices of high-dimensional objectives tend to be optimistic about both ill-conditioning and multimodality. However, this limitation is mitigated by the visualizations not showing illusory non-convexity or ill-conditioning. In other words, \textit{as a diagnostic tool for detecting problems, random 2D visualizations have low sensitivity compensated by high specificity}.
Mitigating the low sensitivity is a potential topic for future work. For instance, if computing eigenvectors and eigenvalues of the Hessian or its low-rank approximation (e.g., \cite{li1992principal}) is not too expensive, one could visualize using a 2D basis formed by the eigenvectors with lowest and highest eigenvalues. This way, the visualization would be in line with the condition number and show ill-conditioning with higher sensitivity. In this paper, however, we have focused on random visualization slices due to their simplicity and prior success in visualizing neural network loss landscapes \cite{li2018visualizing}. Furthermore, at least for large policy networks with millions of parameters, computing eigenvector approximations is quite expensive.
\section{Properties and limitations of random 2D slice visualizations}\label{sec:theory}
In the preceding sections, we have provided many examples of random 2D slice visualizations of high-dimensional objective functions. We have also demonstrated that such visualizations can predict the difficulty of optimization. However, since information is inevitably lost when the objective is evaluated only along a 2D slice, one may pose the following research question:
\textit{How reliable are random 2D visualizations as a diagnostic tool for indicating problems like ill-conditioning and multimodality?}
In this section, we investigate this question using test functions with known properties and closed-form expressions. This yields the following main results:
\begin{itemize}
\item There is no guarantee that ill-conditioning and multimodality will show up in a random visualization slice.
\item On the other hand, if ill-conditioning or multimodality do show up, they are real. This mitigates the limitation above and makes random 2D slice visualizations a useful tool for analyzing optimization problems.
\end{itemize}
In other words, \textit{as a diagnostic tool for identifying problems, random 2D visualizations have low sensitivity compensated by high specificity}.
\subsection{Visualizing ill-conditioned optima}
As elaborated below, the number of high and low eigenvalues of the Hessian greatly affects visualization in addition to the condition number, i.e., ratio of highest and lowest eigenvalues.
Consider the following cost function with \textit{intrinsic dimensionality $k$}:
\begin{eqnarray}
f(\mathbf{x})&=&\sum_{i=1}^{k} x_i^2 + \sum_{i=k+1}^{d} \epsilon x_i^2\\
&=& ||\mathbf{x}_{:k}||^2 + \epsilon ||\mathbf{x}_{k:}||^2 \label{eq:kd}
\end{eqnarray}
where $\epsilon$ is a small constant, i.e., $f(\mathbf{x})$ mostly depends only on the first $k$ optimized variables. $\mathbf{x}_{:k}$ denotes the projection of $\mathbf{x}$ into the subspace of the first $k$ dimensions. $\mathbf{x}_{k:}$ denotes the projection into the remaining dimensions. The Hessian of $f(\mathbf{x})$ is diagonal, containing the curvatures along the unit vectors as the diagonal elements. Curvature along the first $k$ unit vectors equals $2$ and curvature along the rest of the dimensions is $2\epsilon$. Thus, if $k \neq 0$ and $k \neq d$, the condition number $\kappa = 1/\epsilon$.
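The Hessian structure described above can be verified numerically (a minimal sketch, not code from the paper; the values of $d$, $k$, and $\epsilon$ are arbitrary):

```python
import numpy as np

d, k, eps = 10, 3, 0.01

def f(x):
    """The quadratic of Eq. (kd): unit curvature on the first k dims,
    epsilon-scaled curvature on the rest."""
    return np.sum(x[:k]**2) + eps * np.sum(x[k:]**2)

# The Hessian is diagonal: k entries equal to 2, d-k entries equal to 2*eps
H = np.diag([2.0] * k + [2.0 * eps] * (d - k))
eigs = np.linalg.eigvalsh(H)
kappa = eigs.max() / eigs.min()  # condition number = 1/eps
```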
Geometrically, the isosurfaces of $f(\mathbf{x})$ are $(d-1)$-spheres elongated by a factor of $1/\sqrt{\epsilon}$ along the last $d-k$ dimensions. The visualized 2D isocontours correspond to the intersections of the isosurfaces with the visualization plane. This is illustrated in Figure \ref{fig:isosurfaces} and in the supplemental video for $d=3$.
\begin{figure}[h]
\centering
\includegraphics[width=3.4in]{images/isosurfaces_and_isocontours.png}
\caption{The isosurfaces, random visualization planes, and 2D visualization isocontours for Equation \ref{eq:kd} in the case of $d=3$.}\label{fig:isosurfaces}
\end{figure}
Investigating Figure \ref{fig:isosurfaces} reveals a basic property of the 2D visualizations:
\textbf{Proposition 1} \textit{With an objective function of the form of Equation \ref{eq:kd}, the elongation of the 2D isocontours is less than or equal to the true elongation of the isosurfaces. In other words, $\kappa_{2D} \le \kappa$.}
\textit{Proof.} The proof follows from the isocontours corresponding to planar intersections of the isosurfaces. First, as illustrated in the middle of Figure \ref{fig:isosurfaces}, it is possible to rotate the visualization plane such that the isocontours display less elongation. Second, the isocontours cannot display more than the real elongation; the visualized elongation is at its maximum when one plane basis vector aligns with a direction of high elongation (vertical axis in the middle of Figure \ref{fig:isosurfaces}) and the other basis vector aligns with a direction of low elongation, in which case the isocontours display the correct $\kappa = 1/\epsilon$. $\qed$
A crucial observation from Figure \ref{fig:isosurfaces} is that the number of possible visualization plane orientations that display ill-conditioning depends on $k$. When $k=1$, the isocontours are mostly highly elongated, except when the visualization plane aligns with the lens-shaped isosurface. In contrast, with $k=d-1$, the 2D isocontours mostly do not show the true elongation of the isosurface, except when the visualization plane aligns with the axis of elongation.
For a more formal analysis of how the visualization depends on $k$, let us consider the extreme case of $\epsilon = 0$, i.e.,
\begin{equation}
f(\mathbf{x}) = f(\mathbf{x}_{:k}) = ||\mathbf{x}_{:k}||^2. \label{eq:kdapprox}
\end{equation}
Figure \ref{fig:kd} shows the contour plots of random 2D slices with different $k$ and $d$, motivating the following proposition:
\textbf{Proposition 2} \textit{As $k$ moves from 1 to $d$, the condition number of a random 2D slice visualization of $f(\mathbf{x})$ moves from $\infty$ to 1, with random variation between the extremes.}
\textit{Proof.} Let $\mathbf{u}, \mathbf{v} \in \mathbb{R}^d$ denote the slice basis vectors, with orthogonality $\mathbf{u}^T\mathbf{v}=0$. On the visualization plane, $\mathbf{x}=p_1 \mathbf{u} + p_2 \mathbf{v} = [\mathbf{u} \mathbf{v}]\mathbf{p}$, where $\mathbf{p}$ denotes the 2D position on the plane. Similarly, $\mathbf{x}_{:k}=[\mathbf{u}_{:k} \mathbf{v}_{:k}]\mathbf{p}$,
\begin{eqnarray}
f(\mathbf{x}_{:k}) &=& ||[\mathbf{u}_{:k} \mathbf{v}_{:k}]\mathbf{p}||^2\\
&=& \mathbf{p}^T \begin{bmatrix}
\mathbf{u}^T_{:k}\mathbf{u}_{:k} & \mathbf{u}^T_{:k}\mathbf{v}_{:k} \\
\mathbf{v}^T_{:k}\mathbf{u}_{:k} & \mathbf{v}^T_{:k}\mathbf{v}_{:k}
\end{bmatrix} \mathbf{p} \\
&=& \mathbf{p}^T \mathbf{A} \mathbf{p}
\end{eqnarray}
The Hessian w.r.t. $\mathbf{p}$ of the quadratic form is
\begin{eqnarray}
H(f(\mathbf{x}_{:k}))=\mathbf{A}+\mathbf{A}^T=2\mathbf{A}
\end{eqnarray}
because $\mathbf{A}$ is symmetric. The condition number $\kappa(H(f(\mathbf{x}_{:k})))=\kappa(\mathbf{A})$, as the condition number is invariant to scaling the Hessian by a constant. Thus, it is sufficient to analyze how $\mathbf{A}$ changes when $k$ moves from 1 to $d$.
With $k=1$, $\mathbf{u}_{:k}=u_1, \mathbf{v}_{:k}=v_1$ are scalars, and the determinant becomes zero:
\begin{eqnarray}
\det(\mathbf{A})&=& \mathbf{u}^T_{:k}\mathbf{u}_{:k}\mathbf{v}^T_{:k}\mathbf{v}_{:k} - \mathbf{u}^T_{:k}\mathbf{v}_{:k}\mathbf{v}^T_{:k}\mathbf{u}_{:k} \\
&=&u_1 u_1 v_1 v_1 - u_1 v_1 v_1 u_1 = 0.
\end{eqnarray}
This indicates at least one zero eigenvalue and thus $\kappa(\mathbf{A})=\lambda_{\max}(\mathbf{A})/\lambda_{\min}(\mathbf{A})=\infty$.
When $k$ grows from 1 to $d$, $\mathbf{u}_{:k}, \mathbf{v}_{:k}$ gradually become closer to $\mathbf{u}, \mathbf{v}$, i.e., unit-length and orthogonal. Thus, the off-diagonal elements become zero and $\mathbf{A}$ becomes the identity matrix, with $\kappa(\mathbf{A})=1$. $\qed$
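Proposition 2 can also be checked numerically. The sketch below (our construction; a QR factorization of a Gaussian matrix draws the random orthonormal slice basis) computes $\kappa(\mathbf{A})$ for the extreme values of $k$:

```python
import numpy as np

def slice_condition_number(d, k, rng):
    """Condition number of A = [u v]_{:k}^T [u v]_{:k} for a random
    orthonormal slice basis u, v in R^d."""
    q, _ = np.linalg.qr(rng.standard_normal((d, 2)))  # orthonormal columns
    u, v = q[:, 0], q[:, 1]
    B = np.column_stack([u[:k], v[:k]])  # basis vectors truncated to first k dims
    eigs = np.linalg.eigvalsh(B.T @ B)
    return np.inf if eigs[0] <= 1e-12 else eigs[1] / eigs[0]

rng = np.random.default_rng(0)
d = 50
kappa_low = slice_condition_number(d, 1, rng)   # k=1: A is rank-1, kappa infinite
kappa_full = slice_condition_number(d, d, rng)  # k=d: A is the identity, kappa = 1
```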
Although the quadratic $f(\mathbf{x})$ was chosen to be separable for easier mathematical analysis, the propositions above generalize to the arbitrarily rotated case.
Considering Figure \ref{fig:kd} and the propositions above, one may articulate the following conclusions:
\begin{itemize}
\item A random 2D slice visualization may not display ill-conditioning if the optimization problem has high intrinsic dimensionality $k$.
\item On the other hand, the visualizations will not show non-existent problems; spherical functions will appear spherical and the intersection of the plane and an isosurface cannot show more elongation than the isosurface has. Thus, the visualization can still be useful in highlighting problems, which is also supported by the empirical data in this paper. Furthermore, while higher intrinsic dimensionality makes ill-conditioning harder to visualize, it also decreases the severity of the problem, as one can get closer to the true optimum even if the optimizer only focuses on the intrinsic dimensions.
\end{itemize}
\begin{figure}[h!]
\centering
\includegraphics[width=3.4in]{images/sweep_dk.png}
\caption{Contour plots of random slices of $f(\mathbf{x})$ in Equation \ref{eq:kd} with different $k$ and $d$. As $k$ grows towards $d$, ill-conditioning becomes less apparent in the visualization. }\label{fig:kd}
\end{figure}
\paragraph{Relation to random projection theory.} Our visualized 2D isocontours are determined by intersecting a randomly selected plane (a 2D subspace) with high-dimensional isosurfaces. Interestingly, random projection theory deals with a related operation of projecting high-dimensional data on random subspaces. A central result, in line with our findings, is that if data clusters are highly eccentric (non-spherical), projecting the data to a random low-dimensional subspace will make them appear more spherical \cite{dasgupta1999learning, dasgupta2013experiments}.
\subsection{Visualizing multimodality}
Figure \ref{fig:nonconvex} shows contour plots of random visualization slices with different dimensionality $d$, using two multimodal test functions. Rastrigin's function is a standard multimodal optimization test function with the global minimum at the origin and infinitely many local minima:
\begin{equation}
f_{Rastrigin}(\mathbf{x})=10d+\sum_{i=1}^d [x_i^2-10\cos(2\pi x_i)]
\end{equation}
Additionally, we use the following bimodal function:
\begin{equation}
f_{Bimodal}(\mathbf{x})=e^{-\frac{1}{2}||\mathbf{x}-\mathbf{1}||^2} + 0.8e^{-\frac{1}{2}||\mathbf{x}+\mathbf{1}||^2},
\end{equation}
where the boldface $\mathbf{1}$ denotes a vector of ones. We visualize this function both around the origin and around the dominant mode at $\mathbf{1}$.
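Both test functions follow directly from the definitions above (a minimal sketch; the vectorization details are our own):

```python
import numpy as np

def rastrigin(x):
    """Rastrigin's function; global minimum 0 at the origin."""
    d = x.shape[-1]
    return 10 * d + np.sum(x**2 - 10 * np.cos(2 * np.pi * x), axis=-1)

def bimodal(x):
    """Bimodal test function: dominant mode at x = 1, secondary mode
    of height ~0.8 at x = -1."""
    ones = np.ones(x.shape[-1])
    return (np.exp(-0.5 * np.sum((x - ones)**2, axis=-1))
            + 0.8 * np.exp(-0.5 * np.sum((x + ones)**2, axis=-1)))
```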
From Figure \ref{fig:nonconvex}, one may draw the following conclusions:
\begin{itemize}
\item Multimodality may become less apparent with increasing dimensionality. Additionally, if there are only a few modes, it may easily happen that modes collapse together in the visualization.
\item With the bimodal function, the visualization appears more unimodal when computed around the optimum at $\mathbf{1}$. This can be explained by the other mode having less influence near the optimum, as the Euclidean distance between the modes grows with dimensionality.
\item With functions like Rastrigin, landscape features may scale differently. Rastrigin's central mode becomes larger and more dominant with large $d$. In Figure \ref{fig:nonconvex}, we demonstrate how this can be compensated by omitting the unit-length normalization of the slice basis vectors, and instead normalizing them to the mean of their original lengths. We also demonstrate that high-pass filtering the 2D-array of function values before plotting can bring back detail that would otherwise become invisible with large $d$.
\end{itemize}
\begin{figure}[h]
\centering
\includegraphics[width=3.4in]{images/nonconvex.png}
\caption{Contour plots of random slices of multimodal test functions with different $d$. Large $d$ can make multimodality less apparent, although it can be compensated by using unnormalized visualization basis vectors or high-pass filtering. }\label{fig:nonconvex}
\end{figure}
In summary, considering both Figure \ref{fig:kd} and Figure \ref{fig:nonconvex}, random 2D slice visualizations can help in characterizing optimization landscapes and identifying problems. However, for some function types like an elongated quadratic with high intrinsic dimensionality, the visualizations may be overly optimistic.
\section{Introduction}\label{sec:introduction}
\IEEEPARstart{M}{uch} of computer animation research formulates animation as an optimization problem. It has been shown that complex movements can emerge from minimizing a cost function that measures the divergence from movement goals, such as moving to a specific pose or location while minimizing effort. In principle, this holds the promise of elevating an animator to the role of a choreographer, directing virtual actors and stuntmen through the definition of movement goals. However, solving the optimization problems can be hard in practice. It can require hours or even days of computing time, which is highly undesirable for interactive applications and the rapid iteration of movement goals; defining the goals can be a non-trivial design problem in itself, requiring multiple attempts to produce a desired aesthetic result.
Optimization problems can be divided into the four classes of increasing difficulty illustrated in Fig. \ref{fig:introfigure}:
\begin{itemize}
\item \textit{Convex and well-conditioned} Convexity refers to the shape of the isocontours of the objective function, or level sets in a general $d$-dimensional case. In the ideal well-conditioned case, the isocontours are spherical, and simple gradient descent recovers a direct path to the optimum.
\item \textit{Convex and ill-conditioned} In ill-conditioned optimization, the isocontours are elongated instead of spherical. The gradient---coinciding with isocontour normals---no longer points towards the optimum, and numerical optimization may require more iterations.
\item \textit{Non-convex and unimodal} Non-convexity tends to make optimization even harder, but numerical optimization usually still converges if there are no local optima to distract it.
\item \textit{Non-convex and multimodal} In this problem class, the landscape has local optima which can attract optimization. Gradient-free, sampling-based approaches like CMA-ES---common in animation research---may still find the global optimum \cite{hansen2004evaluating}, but this can be computationally expensive. Unfortunately, movement optimization can easily fall into this class, e.g. due to the discontinuities caused by colliding objects, and multiple options for going around obstacles \cite{Hamalainen2014,Hamalainen2015}.
\end{itemize}
\begin{figure}[h!]
\centering
\includegraphics[width=2.9in]{images/introfigure.png}
\caption{Common types of optimization landscapes. The surfaces denote the values of 2-dimensional (bivariate) objective functions, with the isocontours displayed below the surface. The black curves show the progress of gradient descent optimization from an initial point.} \label{fig:introfigure}
\end{figure}
Landscape visualizations like those in Fig. \ref{fig:introfigure} provide useful intuitions of optimization problems, and can help in reformulating a problem into a more tractable form. In general, one would like the objective function to be \textit{more convex, well-conditioned, and unimodal}. We know that problem modifications such as the choice of action space can greatly affect movement optimization efficiency \cite{peng2017learning}, but visualizing the effects on the optimization landscape is challenging because of high problem dimensionality. High dimensionality and/or non-differentiable physics simulators can also prevent analyzing problem conditioning through the eigenvalues of the Hessian.
The primary inspiration of this paper comes from recent work on visualizing neural network loss function landscapes by Li et al. \cite{li2018visualizing}. Strikingly, the paper shows that \textit{visualization of random 2D slices of a high-dimensional objective function can convey useful intuitions and predict the difficulty of optimization}, even with highly complex networks with millions of parameters. More specifically, the approach generates 3D landscape plots of the objective function $f(\mathbf{x}): \mathbb{R}^d \rightarrow \mathbb{R}$ by evaluating it along a plane (a 2D subspace of $\mathbb{R}^d$) that passes through the optimum and is spanned by two random orthogonal basis vectors. Li et al. \cite{li2018visualizing} use the approach to illustrate how deeper networks have more local optima, and how adding skip-connections greatly helps in making the landscape more convex and unimodal.
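The slice-generation procedure is straightforward to sketch (our reconstruction of the approach, not Li et al.'s code; function name and parameters are hypothetical):

```python
import numpy as np

def random_slice_grid(f, x_opt, extent=1.0, resolution=64, rng=None):
    """Evaluate f on a plane through x_opt spanned by two random
    orthonormal directions; returns a grid of values for contour or
    surface plots."""
    if rng is None:
        rng = np.random.default_rng(0)
    d = x_opt.shape[0]
    q, _ = np.linalg.qr(rng.standard_normal((d, 2)))  # orthonormal basis
    u, v = q[:, 0], q[:, 1]
    ticks = np.linspace(-extent, extent, resolution)
    return np.array([[f(x_opt + a * u + b * v) for a in ticks] for b in ticks])

# A spherical quadratic should yield circular isocontours around the center
grid = random_slice_grid(lambda x: np.sum(x**2), np.zeros(10))
```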
\textbf{Contribution:} We contribute by showing that the random slice visualization approach of Li et al. \cite{li2018visualizing} can be applied in the domain of movement optimization. Fig. \ref{fig:randomslices} shows examples of this, visualizing the same objective function with different random basis vectors. Furthermore, we use the visualizations to investigate the following research questions:
\begin{itemize}
\item What is the effect of the number of timesteps -- i.e., the planning horizon -- on the optimization landscape? (Section \ref{sec:timesteps})
\item What is the effect of the choice of action space on the optimization landscape? (Section \ref{sec:actionspace})
\item What is the effect of converting instantaneous costs into rewards through exponentiation? (Section \ref{sec:rewards})
\item What is the effect of early termination of movement trajectories or episodes, e.g., when deviating from a target state? (Section \ref{sec:termination})
\item Do the visualizations predict actual optimization performance, and generalize from simple to complex problems? (Sections \ref{sec:optcompare} and \ref{sec:biped}).
\end{itemize}
Additionally, Section \ref{sec:theory} provides a more theoretical investigation of the reliability and limitations of random 2D slice visualizations. We conclude that such visualization is a useful tool for diagnosing problems; the somewhat low sensitivity is compensated by high specificity. Our visualizations also explain why movement optimization best practices such as early termination and parameterizing actions as target angles work so well, which should make our work useful in teaching computer animation and movement optimization.
\begin{figure*}[!t]
\centering
\includegraphics[width=7.0in]{images/randomslices.png}
\caption{Visualizing the inverted pendulum trajectory optimization objective of Equation \ref{eq:trajcost} with different random basis vectors and $T=100$. Although the plots are not exactly similar, they all exhibit the same overall structure, i.e., multimodality and an elongated, ill-conditioned optimum.}\label{fig:randomslices}
\end{figure*}
\section{Related work}
\noindent\textbf{Spacetime optimization} Much of the earlier work on animation as optimization focused on extensions of the seminal work on spacetime optimization by Witkin and Kass \cite{witkin_spacetime_1988,cohen_interactive_1992,fang_efficient_2003,safonova_synthesizing_2004,wampler2009optimal}, where the optimized variables included the root position and rotation as well as joint rotations for each animation frame. However, the synthesized motions were limited by the need for prior knowledge of contact information, such as when and which body parts should make contact with the ground. This limitation was overcome by \cite{mordatch_discovery_2012}, who introduced auxiliary optimized variables that specify the contact information. However, the number of colliding body parts was still limited.
\par\smallskip
\noindent\textbf{Animation as simulation control} In recent years, the focus of research has shifted towards animation as a simulation control problem. Typically, one optimizes simulation control parameters such as time-varying actuation torques of character joints, and an off-the-shelf physics simulator is used to realize the movement. While spacetime optimization can be performed with gradient-based optimization methods like Sequential Quadratic Programming \cite{witkin_spacetime_1988} or L-BFGS \cite{mordatch_discovery_2012}, simulation control is commonly approached with sampling-based, gradient-free optimization due to non-differentiable dynamics and/or multimodality \cite{Liu2010,Hamalainen2014,Hamalainen2015}. This is also what our work focuses on; extending our visualizations to the analysis of spacetime optimization remains future work.
\par\smallskip
\noindent\textbf{Trajectory and policy optimization} The two main classes of approaches are trajectory optimization and policy optimization. In trajectory optimization, one optimizes the time-varying control parameters directly, either offline \cite{ngo_spacetime_1993,al_borno_trajectory_2013,naderi2017discovering} or online, while the character moves and acts \cite{tassa2012synthesis,Hamalainen2014,Hamalainen2015}. In policy optimization, one optimizes the parameters of a policy function such as a neural network that maps character state to (approximately) optimal control, typically independent of the current simulation time. This can be done using either neuroevolution \cite{Geijtenbeek2013,such2017deep} or Reinforcement Learning (RL), which has recently proven powerful even with complex humanoid movements \cite{schulman2017proximal,peng2018deepmimic,lee2019scalable,bergamin2019drecon,park2019learning}. Unfortunately, policy optimization/learning can be computationally expensive with large neural networks, and may require careful curriculum design \cite{yu2018learning}. On the other hand, it can produce controllers that require orders of magnitude less computing resources after training, compared to using trajectory optimization to solve each required movement in an interactive application such as a video game. Trajectory and policy optimization approaches can also be combined \cite{levine2013guided,mordatch2014combining,rajamaki2018continuous}, which allows one to adjust the trade-off between training time and runtime expenses. In this paper, we provide analyses of both trajectory and policy optimization landscapes.
\par\smallskip
\noindent\textbf{Visualizing optimization} Many optimization visualizations are problem-specific, utilizing the semantics of optimized parameters \cite{jones1994visualization}. Visualization is also used for letting a user interact and inform optimization \cite{meignan2015review}; this, however, falls outside the scope of this paper. Considering non-interactive, generic methods applicable to continuous-valued optimization, landscape visualizations like the ones in Fig. \ref{fig:introfigure} are a standard textbook method. Although it is technically trivial to extend this to higher-dimensional problems by visualizing the objective function on a random plane (a 2D subspace), Li et al. \cite{li2018visualizing} only recently demonstrated that such random slices can provide meaningful insights and, likewise, have some predictive power on the difficulty of very high-dimensional optimization. Inspired by \cite{li2018visualizing}, we test the random slice visualization approach in a new domain, and also provide additional analyses of the method's reliability and limitations. Other common methods for visualizing high-dimensional optimization include graphing the objective function along a straight line from the initial point to the found optimum \cite{goodfellow2014qualitatively,keskar2016large,dinh2017sharp,smith2017exploring}, or visualizing in a plane determined from the path taken during optimization \cite{goodfellow2014qualitatively}. There are also examples of visualizing movement optimization through a conversion to an interactive game or puzzle; in this case, players perform the optimization aided by predictive visualizations of how different actions affect the simulation state \cite{hamalainen2017predictive}.
In computer animation and movement control research, objective functions are visualized occasionally, using various approaches to reduce the objectives to 2D. For example, Hämäläinen et al. \cite{Hamalainen2014} visualize contact discontinuities and multimodality in a 2D toy problem, and Sok et al. \cite{sok2007simulating} visualize a high-dimensional multimodal objective with respect to two manually selected parameters. However, we know of no previous paper that focuses on visualizing movement optimization, or applies the 2D random slice approach of Li et al. \cite{li2018visualizing} to movement optimization.
\section{Test Problem: Inverted Pendulum Balancing}\label{sec:problems}
This section describes the inverted pendulum balancing problem that is used throughout Sections \ref{sec:timesteps}-\ref{sec:optcompare}, before testing the visualization approach on the more complex simulated humanoid of Section \ref{sec:biped}. The pendulum is depicted in Fig. \ref{fig:pendulum}. Although it is simple, it offers the following benefits for analyzing movement optimization:
\begin{itemize}
\item In the trajectory optimization case, we know the true optimum. As the pendulum dynamics are differentiable, we can also compute the Hessian and its eigenvalues for further analysis. We implement this using Autograd \cite{maclaurin2015autograd}.
\item In the policy optimization case, we can use a simple P-controller as the parametric policy, which allows visualizing the full optimization landscape instead of only low-dimensional slices.
\end{itemize}
\begin{figure}[h]
\centering
\includegraphics[width=1.0in]{images/pendulum.png}
\caption{The inverted pendulum model. The force exerted by gravity is denoted by $g$, and $\alpha$ denotes the angular deviation from an upright position.}\label{fig:pendulum}
\end{figure}
The dynamics governing the angle $\alpha_t$, angular velocity $\omega_t$, and control torque $\tau_t$ of the pendulum at timestep $t$ are implemented as:
\begin{eqnarray}
\omega_t &=& \omega_{t-1} + \delta (\tau_t + 0.5\ l\ g \sin(\alpha_{t-1})),\label{eq:avelUpdate}\\
\alpha_t &=& \alpha_{t-1} + \delta \omega_t,
\end{eqnarray}
where $\delta$, $l$, and $g$ are the simulation timestep, pendulum length, and force induced by gravity, respectively. We use $\delta=0.1$, $l=0.2$, and $g=0.981$.
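The update equations translate directly into code (a minimal sketch with the constants above; `L` stands for the pendulum length $l$):

```python
import numpy as np

# Constants from the paper: timestep, pendulum length, gravity
DELTA, L, G = 0.1, 0.2, 0.981

def pendulum_step(alpha, omega, tau):
    """One semi-implicit Euler step of the pendulum update equations."""
    omega = omega + DELTA * (tau + 0.5 * L * G * np.sin(alpha))
    alpha = alpha + DELTA * omega
    return alpha, omega

# The upright state alpha = 0 is an equilibrium under zero torque:
a, w = pendulum_step(0.0, 0.0, 0.0)
```

Note that the angular velocity is updated before the angle, which makes the integrator semi-implicit (symplectic) rather than plain explicit Euler.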
\subsection{Trajectory Optimization}\label{sec:trajopt}
Most of our trajectory optimization visualizations are generated from the problem of balancing a simple inverted pendulum, starting from an upward position such that the optimal torque sequence $\tau_1, ..., \tau_T$ is all zeros. The subscripts denote timestep indices and $T$ is the planning horizon, i.e., the length of the simulated trajectory. The optimization objective is to minimize the trajectory cost $\mathcal{C}$ computed as the sum of instantaneous costs:
\begin{equation}
\mathcal{C}=\sum_{t=1}^T (\alpha_t^2 + w \tau_t^2). \label{eq:trajcost}
\end{equation}
The cost is minimized when the pendulum stays upright at $\alpha=0$ with zero torques. The relative importance of state cost $\alpha_t^2$ and action cost $\tau_t^2$ is adjusted by the multiplier $w$. We use $w=1$ unless specified otherwise. The cost landscape is visualized in case of $T=100$ in Fig. \ref{fig:randomslices}.
Some of our experiments convert the cost minimization problem into a reward maximization problem, computing trajectory reward $\mathcal{R}$ using exponentiated costs as
\begin{equation}
\mathcal{R}=\sum_{t=1}^T ( e^{-\alpha_t^2} + w e^{-\tau_t^2} ).\label{eq:trajreward}
\end{equation}
The reward formulation has been recently used with stellar results in the policy optimization of complex humanoid movements \cite{peng2018deepmimic}. Exponentiation is also used in framing optimal control as estimation \cite{todorov2008general,todorov2009efficient}.
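Both objectives can be computed in a single rollout (a sketch combining the pendulum dynamics with Equations \ref{eq:trajcost} and \ref{eq:trajreward}; the initial state is assumed upright and at rest):

```python
import numpy as np

DELTA, L, G = 0.1, 0.2, 0.981  # timestep, pendulum length, gravity

def rollout_cost_and_reward(taus, w=1.0):
    """Simulate the pendulum from the upright rest state under a torque
    sequence; return the trajectory cost C and exponentiated reward R."""
    alpha, omega, C, R = 0.0, 0.0, 0.0, 0.0
    for tau in taus:
        omega += DELTA * (tau + 0.5 * L * G * np.sin(alpha))
        alpha += DELTA * omega
        C += alpha**2 + w * tau**2
        R += np.exp(-alpha**2) + w * np.exp(-tau**2)
    return C, R

# The all-zero torque sequence is optimal: zero cost, maximal reward 2T
C, R = rollout_cost_and_reward(np.zeros(100))
```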
\subsection{Policy Optimization}
In the case of policy optimization, we use the same pendulum simulation, with a minor adjustment. Instead of directly optimizing control torques, we use a policy $\tau_t = \pi_\theta (\mathbf{s}_t)$, parametrized by $\theta$, where $\mathbf{s}$ denotes pendulum state. The optimization objective is to either minimize the expected trajectory cost, or maximize the expected reward, assuming that each trajectory or ``episode'' is started from a random initial state. As closed-form expressions for the expectations are not available, we replace them by averages. These are computed from 10 episodes, each started from a different initial pendulum angle.
In all the pendulum policy optimization visualizations, we use a simple P-controller as the policy:
\begin{equation}
\pi_\theta (\mathbf{s}_t) = \theta \alpha_t.
\end{equation}
The benefit of this formulation is that we only have a single policy parameter to optimize; thus, we can visualize the full objective function, shown in Fig. \ref{fig:policy}. Despite its simplicity, this provides a multimodal optimization problem with properties similar to more complex problems. As shown in Fig. \ref{fig:policy}, there is a global optimum at approximately $\theta=-0.1$, and also a false optimum near $\theta=0$. At the false optimum, the action cost is minimized simply by letting the pendulum hang downwards. In a real-world case like controlling a simulated humanoid, the false optimum corresponds to resting on the ground with zero effort; readers familiar with humanoid control probably know that such a behavior is easy to elicit by having too large an effort cost.
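The full evaluation loop can be sketched as follows (the spread of initial angles is our assumption; the paper states only that the 10 episodes start from different initial angles):

```python
import numpy as np

DELTA, L, G = 0.1, 0.2, 0.981  # timestep, pendulum length, gravity

def average_episode_cost(theta, T=200, w=1.0, n_episodes=10, init_scale=0.5):
    """Mean trajectory cost of the P-controller tau = theta * alpha,
    averaged over episodes with different initial angles."""
    init_angles = np.linspace(-init_scale, init_scale, n_episodes)
    total = 0.0
    for alpha in init_angles:
        omega = 0.0
        for _ in range(T):
            tau = theta * alpha          # P-controller policy
            omega += DELTA * (tau + 0.5 * L * G * np.sin(alpha))
            alpha += DELTA * omega
            total += alpha**2 + w * tau**2
    return total / n_episodes
```

With this setup, a stabilizing gain near $\theta=-0.1$ yields a lower average cost than the do-nothing policy $\theta=0$, under which the pendulum falls.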
\begin{figure}[th]
\centering
\includegraphics[width=\linewidth]{images/policy_w.png}
\caption{Average episode cost (Equation \ref{eq:trajcost}) with $T=200$, as a function of the P-controller policy parameter $\theta$, and action cost weight $w$. A large $w$ makes the local and true optima more equally good compared to the surrounding regions. This makes it more likely that a global Monte Carlo optimization method like CMA-ES will get attracted to the false optimum. The success of local gradient-based optimization depends on which cost basin the optimization is initialized in.} \label{fig:policy}
\end{figure}
\section{Effect of trajectory length} \label{sec:timesteps}
Figures \ref{fig:sweepT} and \ref{fig:sweepT_policy} visualize the inverted pendulum trajectory and policy optimization landscapes, with different trajectory and episode lengths $T$. The figures yield two main insights:
\begin{itemize}
\item{Trajectory optimization can become increasingly non-separable and ill-conditioned with large $T$.}
\item{In both trajectory and policy optimization, the landscapes become more multimodal with large $T$.}
\end{itemize}
\begin{figure}[th]
\centering
\includegraphics[width=\linewidth]{images/hessians_and_surfaces_torques.png}
\caption{Effect of trajectory length $T$ on inverted pendulum trajectory optimization. The optimization problem becomes increasingly ill-conditioned and non-separable with longer action sequences. The bottom row shows the Hessian matrices at the optimal points with increasing $T$. $\kappa$ denotes the condition number of the Hessian. }\label{fig:sweepT}
\end{figure}
\begin{figure}[th]
\centering
\includegraphics[width=\linewidth]{images/sweep_policy_T.png}
\caption{Effect of trajectory length $T$ on inverted pendulum trajectory policy optimization. Local optima become pronounced with large $T$, as the agent has more time to diverge and accumulate cost far from the desired states.}\label{fig:sweepT_policy}
\end{figure}
\subsection{Trajectory Optimization}
\label{sec:horizon_trajectory}
The trajectory optimization landscapes of Fig. \ref{fig:sweepT} are augmented with visualizations of the Hessian matrices of the cost function at the optimum. This allows further analysis of some important properties. A diagonal Hessian means that the optimization problem is separable, and the variables can be optimized independently of each other. Strong off-diagonal elements imply that if one changes a variable, then one must also change some other variable to remain at the bottom of a valley in the landscape.
On the other hand, the eigenvalues of the Hessian measure curvature along the eigenvectors; the condition number $\kappa$, which denotes the ratio of the largest to smallest eigenvalues of the Hessian, is generally considered as a predictor of optimization difficulty. In the ideal case, the Hessian is a (scaled) identity matrix, i.e., a diagonal matrix where all the eigenvalues are the same; this indicates both separability and no ill-conditioning with $\kappa=1$.
Intuitively, the ill-conditioning with a large $T$ can be explained by the fact that perturbing an action of the optimal trajectory will lead to state divergence that accumulates over time. Thus, the total state cost is more sensitive to earlier actions, which leads to large differences in the Hessian eigenvalues. Non-separability stems from a need to adjust later actions to correct the state divergence. On the other hand, with a small $T$, the state has less time to diverge, and the $\tau^2$ action cost dominates; this gives rise to the constant diagonal structure of the Hessian, as actions of all timesteps contribute equally and independently to the cost.
More formally, we may rewrite the cost function (Equation \ref{eq:trajcost}) of the trajectory optimization as:
\begin{equation}
\mathcal{C}=\sum_{t=1}^T \alpha_t(\tau_1, \ldots, \tau_t)^2 + \sum_{t=1}^T w \tau_t^2,
\end{equation}
emphasizing that the state $\alpha_t$ depends on the past actions $\tau_1$ to $\tau_t$. The Hessian of the cost function can then be expressed as:
\begin{equation}
\frac{\partial^2 \mathcal{C}}{\partial \boldsymbol{\tau}^2}=2 \sum_{t=1}^T \big(\nabla_{\boldsymbol{\tau}} \alpha_t (\nabla_{\boldsymbol{\tau}} \alpha_t)^T + \alpha_t \frac{\partial^2 \alpha_t }{\partial \boldsymbol{\tau}^2}\big) + 2w \mathbf{I}.
\label{eq:cost_hessian}
\end{equation}
The last term of Equation \ref{eq:cost_hessian} is a diagonal matrix that does not depend on $T$, while the first term results in off-diagonal elements accumulated over time. As $T$ grows larger, the first term begins dominating the second one, hence the decrease in separability (i.e., the fading diagonal shown in Fig. \ref{fig:sweepT}).
The increase of the condition number follows from the causality of the dynamics---future actions do not affect past states. This means that the $t=1$ summand of the first term of Equation \ref{eq:cost_hessian} is a $T \times T$ matrix with only one nonzero element, at the upper-left corner, while the $t=T$ summand can have nonzero elements over the entire matrix. Summing the matrices from $t=1$ to $t=T$, the first term of the Hessian has larger values toward the upper-left corner, as evidenced by the bright areas in Fig. \ref{fig:sweepT}. The longer $T$ is, the more disparate the upper-left and bottom-right corners become, leading to larger condition numbers.
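This conditioning argument can be checked numerically. The sketch below is only an illustration: it assumes Euler-integrated pendulum dynamics and arbitrary constants ($\delta=0.1$, $l=1$, $g=9.81$, $w=0.01$, a small initial deviation), not the exact setup of our experiments. It estimates the Hessian of the trajectory cost by finite differences and compares condition numbers across horizon lengths.

```python
import numpy as np

# Assumed constants for illustration (timestep, pendulum length, gravity,
# action cost weight); not the exact values used in the experiments.
DELTA, L, G, W = 0.1, 1.0, 9.81, 0.01

def cost(tau, alpha0=0.01, omega0=0.0):
    """Trajectory cost C = sum_t alpha_t^2 + w sum_t tau_t^2 under Euler dynamics."""
    a, om, c = alpha0, omega0, 0.0
    for t in range(len(tau)):
        acc = tau[t] + 0.5 * L * G * np.sin(a)  # angular acceleration
        a, om = a + DELTA * om + DELTA**2 * acc, om + DELTA * acc
        c += a**2 + W * tau[t]**2
    return c

def hessian(f, x, eps=1e-4):
    """Central finite-difference estimate of the Hessian of f at x."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            v = 0.0
            for si in (1.0, -1.0):
                for sj in (1.0, -1.0):
                    xp = np.array(x, dtype=float)
                    xp[i] += si * eps
                    xp[j] += sj * eps
                    v += si * sj * f(xp)
            H[i, j] = v / (4 * eps**2)
    return H

def condition_number(T):
    """Condition number of the cost Hessian around the zero-torque trajectory."""
    H = hessian(cost, np.zeros(T))
    eig = np.linalg.eigvalsh(0.5 * (H + H.T))  # symmetrize numerical noise
    return eig[-1] / eig[0]
```

In this toy setting, `condition_number(T)` grows with the horizon, consistent with the fading diagonal and growing $\kappa$ in the visualizations.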
Although it is well known that the difficulty of solving a trajectory optimization problem increases with the length of the planning horizon, our visualization shows that the difficulty is not solely caused by the increase in the size of the problem, but also by the increasingly ill-conditioned and non-separable objective function. Section \ref{sec:actionspace} explains how the choice of action parameters can act as an efficient preconditioner.
\subsection{Policy Optimization}
In policy optimization, the optimized parameters have no similar dependency on the horizon $T$, as the effect of each policy parameter on the objective is averaged over all states. The separability of the policy parameters depends on the function representation of the policy. For example, if the policy is represented as a multi-layer nonlinear neural network, the Hessian of the objective function is bound to be non-separable, as the neuron weights across layers are multiplied with each other when passing data through the network.
However, a longer horizon in policy optimization can induce a multimodal landscape in the objective function (Fig. \ref{fig:sweepT_policy}). An explanation is that a longer time budget enables strategies that are not possible when the horizon is shorter. The ``mollification'' of the landscape at shorter horizons explains why policy optimization may benefit from a curriculum in which the planning horizon is gradually increased. The OpenAI Five Dota 2 bots provide a recent impressive example of this, gradually increasing the $\gamma$ parameter of Proximal Policy Optimization \cite{schulman2017proximal} during training\footnote{\url{https://blog.openai.com/openai-five/}}.
\section{Effect of the Choice of Action Space} \label{sec:actionspace}
In the previous section, we saw that minimizing effort as the sum of squared actions gives rise to a strong constant diagonal in the Hessian, which in principle leads to better-conditioned optimization. Parameterizing actions as torques, and having a squared torque cost term, is also common in continuous control benchmark problems such as OpenAI Gym MuJoCo \cite{brockman2016openai}. However, \cite{peng2017learning} showed that parameterizing actions as target joint angles can make policy optimization more effective. The target poses that the policy outputs are converted to joint torques, typically using a P- or PD-controller. Such pose-based control is also common in earlier work on trajectory optimization \cite{al2013trajectory,Hamalainen2014,Hamalainen2015,Rajamaki:2017:ASB:3099564.3099579}, and is often claimed to give better results than optimizing raw torques; despite this, comprehensive comparisons of control parameterizations are rare.
Fig. \ref{fig:sweepT_angles} shows that parameterizing actions as target angles $\bar{\alpha}$ does indeed scale better to long horizons. The strong constant diagonal of the Hessian does not fade out as much, and the condition number $\kappa$ grows much slower with larger $T$. We compute the torques using a PD-controller: $\tau = k_p (\bar{\alpha}-\alpha) + k_d \omega$, with $k_p=1$ and $k_d=-1$. The optimal $\bar{\alpha}$ sequence is all zeros, as the pendulum starts from an upright position with zero velocity.
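As a concrete illustration, the minimal sketch below (assuming Euler-integrated pendulum dynamics and arbitrary constants, not our exact experiment code) converts target-angle actions to torques with the stated PD rule; starting upright at rest, the all-zero target sequence keeps the pendulum exactly at $\alpha=0$:

```python
import numpy as np

DELTA, L, G = 0.1, 1.0, 9.81  # assumed timestep, pendulum length, gravity
KP, KD = 1.0, -1.0            # PD gains from the text

def rollout(targets, alpha0=0.0, omega0=0.0):
    """Roll out the pendulum, converting each target angle to a torque via the PD rule."""
    a, om, angles = alpha0, omega0, []
    for a_bar in targets:
        tau = KP * (a_bar - a) + KD * om        # tau = k_p(alpha_bar - alpha) + k_d * omega
        acc = tau + 0.5 * L * G * np.sin(a)     # angular acceleration
        a, om = a + DELTA * om + DELTA**2 * acc, om + DELTA * acc
        angles.append(a)
    return np.array(angles)

# From the upright rest state, zero targets give zero torques: the pendulum stays put.
```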
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{images/hessians_and_surfaces_pd.png}
\caption{Effect of episode length $T$ on inverted pendulum trajectory optimization, when parameterizing actions as target angles. Remarkably, as opposed to the torque parameterization of Fig. \ref{fig:sweepT}, the landscape becomes much less ill-conditioned with large $T$.}\label{fig:sweepT_angles}
\end{figure}
An explanation for the effectiveness of the target angle parameterization is that \textit{actions represent (partial) target states}. This makes the optimization of state cost or reward terms more separable and well-conditioned, and the Hessian closer to a scaled identity matrix. The optimal action at each timestep is more independent of the preceding actions; a reasonable strategy is to always drive the pendulum towards the desired $\alpha=0$. This explains why the Hessian is closer to diagonal. Furthermore, with state-based control, perturbing early actions leads to less cumulative state divergence; hence, all actions contribute more equally to the cost. This explains why the spread of the diagonal values is low.
It should be noted that when using target joint angles with a character with an unactuated root, the parameterization cannot fully remove the dependencies between the actions of different timesteps. Deviations in initial actions can still lead to divergence, e.g. falling out of balance, which needs to be corrected by later actions.
We can also see the effect of action choice from the dynamics of the pendulum. Using torques as actions directly, the dynamic equation for the angle can be expressed as:
\begin{equation}
\alpha_t = \alpha_{t-1} + \delta \omega_{t-1} + \delta^2 (\tau_t + 0.5\ l\ g \sin(\alpha_{t-1})),
\label{eq:torque_action}
\end{equation}
where the dependency on past states and actions is primarily due to the first two terms. When using target angles as actions, we replace $\tau_t$ with $\bar{\alpha}_{t-1} - \alpha_{t-1} - \omega_{t-1}$ and arrive at a slightly different dynamic equation:
\begin{equation}
\alpha_t = (1 - \delta^2) \alpha_{t-1} + (\delta - \delta^2)\omega_{t-1} + \delta^2 (\bar{\alpha}_{t-1} + 0.5\ l\ g \sin(\alpha_{t-1})).
\label{eq:angle_action}
\end{equation}
Now, the previous angle $\alpha_{t-1}$ and velocity $\omega_{t-1}$ have smaller multipliers $(1-\delta^2)$ and $(\delta - \delta^2)$. This discount is applied recursively over time, such that the angle and velocity $n$ time steps ago are discounted by approximately $(1-\delta^2)^n$ and $(\delta-\delta^2)^n$. Using the same reasoning of causality as in Section \ref{sec:horizon_trajectory}, the exponential reduction in the dependency on previous states makes the Hessian of the cost function better conditioned (Equation \ref{eq:cost_hessian}).
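The substitution can be verified numerically. The snippet below, using arbitrary constants and a random previous state, checks that Equations \ref{eq:torque_action} and \ref{eq:angle_action} agree once $\tau_t = \bar{\alpha}_{t-1} - \alpha_{t-1} - \omega_{t-1}$ is substituted in:

```python
import numpy as np

delta, l, g = 0.1, 1.0, 9.81        # assumed constants for illustration
rng = np.random.default_rng(0)
a, om, a_bar = rng.normal(size=3)   # arbitrary previous angle, velocity, and target

# Torque form (Equation eq:torque_action) with the PD torque substituted in
tau = a_bar - a - om
alpha_torque = a + delta * om + delta**2 * (tau + 0.5 * l * g * np.sin(a))

# Target-angle form (Equation eq:angle_action)
alpha_angle = ((1 - delta**2) * a + (delta - delta**2) * om
               + delta**2 * (a_bar + 0.5 * l * g * np.sin(a)))

# Both forms produce the same next angle.
```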
\subsection{Pose splines}\label{sec:splines}
In many papers, long action sequences are parameterized as ``pose splines'', i.e., parametric curves that define the target pose of the controlled character over time, which are then implemented using P- or PD-controllers \cite{al2013trajectory,Hamalainen2014,naderi2017discovering}. The discussion above provides a strong motivation for this, as such a parameterization achieves two things at once:
\begin{itemize}
\item Better separability and conditioning of the problem due to the choice of action space.
\item Improved avoidance of ill-conditioning with long action sequences; in effect, the control points of a spline can be thought of as a shorter sequence of higher-level actions, each of which defines the instantaneous actions for multiple timesteps.
\end{itemize}
Naturally, using splines has aesthetic motivations as well, as they result in smoother motion with less frame-by-frame noise.
Experimental results comparing spline-based optimization with direct optimization of action sequences are provided in Section \ref{sec:biped}.
\section{Effect of using rewards instead of costs} \label{sec:rewards}
Fig. \ref{fig:sweepT_rewards} shows the inverted pendulum trajectory optimization landscape using the reward function of Equation \ref{eq:trajreward}. Comparing this to Fig. \ref{fig:sweepT}, one notices the following:
\begin{itemize}
\item With large $T$, the landscape structures are essentially the same for both the cost function (Equation \ref{eq:trajcost}) and the reward function (Equation \ref{eq:trajreward}), with an elongated optimum and some local optima. The Hessian and $\kappa(T)$ at the optimal point are the same in both cases. Thus, in principle, the reward function should be as easy or hard to optimize as the cost function.
\item On the other hand, at $T=10$, the reward landscape shows some additional non-convexity. This suggests that in practice, the reward function may cause a performance hit in optimization. Section \ref{sec:optcompare} provides evidence of this.
\end{itemize}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{images/hessians_and_surfaces_torques_rewards.png}
\caption{Effect of trajectory length $T$ on inverted pendulum trajectory optimization using the reward formulation of Equation \ref{eq:trajreward}. The landscape behaves similarly to the cost minimization in Fig. \ref{fig:sweepT}, except for the additional non-convexity at $T=10$, and overall sharper ridges.} \label{fig:sweepT_rewards}
\end{figure}
The main difference between the functions is that the sum of exponentiated costs is more tolerant of temporary deviations. A sum-of-squares cost function heavily penalizes a large cost in even a single timestep, whereas the exponentiation clamps the rewards to the range $[0,1]$. In principle, an agent could exploit the reward formulation by only focusing on some reward terms. We interpret the ridges in Fig. \ref{fig:sweepT_rewards} ($T=10$) as manifestations of this.
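A toy computation, with hypothetical per-step costs chosen only for illustration, makes this tolerance concrete: a single large deviation dominates a sum-of-squares cost but is clamped by the exponentiation, flipping which trajectory is preferred:

```python
import numpy as np

# Hypothetical per-timestep state costs of two 10-step trajectories
steady = np.full(10, 1.0)                      # moderate deviation at every step
spike = np.concatenate([np.zeros(9), [25.0]])  # perfect except one large deviation

# Cost minimization prefers the steady trajectory: total cost 10 < 25
cost_steady, cost_spike = steady.sum(), spike.sum()

# Reward maximization with r_t = exp(-c_t) in (0, 1] prefers the spiky one:
# the spike is clamped, giving ~9.00 vs ~3.68
reward_steady, reward_spike = np.exp(-steady).sum(), np.exp(-spike).sum()
```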
In Section \ref{sec:optcompare}, we shall see that the reward function can indeed result in worse optimization performance. However, the next section discusses how using rewards instead of costs can be more desirable when combined with the technique of early termination.
\section{Effect of early termination} \label{sec:termination}
\begin{figure*}[ht]
\centering
\includegraphics[width=7.0in]{images/effect_of_termination.png}
\caption{Effect of early termination on inverted pendulum trajectory optimization landscape. Termination without an alive bonus increases multimodality of cost minimization, but makes reward maximization more convex.}\label{fig:termination}
\end{figure*}
\begin{figure*}[ht]
\centering
\includegraphics[width=7.0in]{images/effect_of_termination_policy.png}
\caption{Effect of termination on policy optimization. In cost minimization, termination creates a local minimum at $\theta=0.5$, which drives the pendulum to termination to avoid accumulating more costs. Termination removes local optima when combined with an alive bonus or using rewards instead of costs.}\label{fig:termination_policy}
\end{figure*}
Standard continuous control policy optimization benchmark tasks \cite{brockman2016openai,tassa2018deepmind} utilize early termination of simulated episodes; this means that if the agent deviates from some desired region of the state space, e.g. a bipedal agent loses balance, the state is considered a terminal one; the agent stops receiving any rewards and is reset to an initial state. Typically, termination greatly speeds up policy optimization, e.g. \cite{peng2018deepmimic}. As far as we know, early termination has not been utilized in trajectory optimization, but there are no obstacles to doing so.
Figures \ref{fig:termination} and \ref{fig:termination_policy} show the effect of terminating trajectories and episodes if $|\alpha| > 2.0$, i.e. if the pendulum deviates significantly from the desired upright pose. From the figures, one can gain two key insights:
\begin{itemize}
\item \textit{Termination can greatly improve landscape convexity by removing false optima.} The pendulum cannot receive rewards from the local optimum of hanging downwards if termination prevents it from experiencing the corresponding states.
\item \textit{If the rewards are not strictly non-negative, new false optima are introduced into the landscape.} Naturally, if the agent is experiencing costs or negative rewards, a good strategy may be to terminate episodes as early as possible, which is what happens at the $\theta=0.5$ local optimum in Fig. \ref{fig:termination_policy} (second subfigure from the left, where termination is used with cost minimization). The problem can be mitigated by adding a termination penalty to the cost of the terminal simulation step, or a so-called alive bonus for all non-terminal states. This yields a clearly more convex landscape.
\end{itemize}
Although an alive bonus is used in many papers and benchmark tasks \cite{rajeswaran2016epopt, brockman2016openai, yu2018learning}, we feel the danger of combining termination and negative rewards is not emphasized enough in previous literature. Our visualizations highlight that early termination can be a double-edged sword---it can be harmful when the reward function consists of a mixture of positive and negative terms that are not well-balanced. Our visualization technique suggests that a good default strategy may be to use a combination of termination and rewards instead of costs. This way, one does not need to fine-tune the termination penalty or alive bonus. Furthermore, converting costs to rewards through exponentiation limits the results to the range $[0,1]$, which is beneficial for most RL algorithms that feature some form of value function learning with neural networks. On the other hand, as shown in the next section, it can result in slower convergence.
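The interplay of termination, negative rewards, and the alive bonus can be summarized with a back-of-the-envelope return calculation (the per-step cost and bonus values below are hypothetical):

```python
# Hypothetical values: horizon, per-step cost, and alive bonus
T, step_cost, alive_bonus = 10, 1.0, 2.0

# With purely negative rewards, terminating after one step beats surviving:
# -1 > -10, so termination itself becomes a false optimum
return_die_early = -step_cost * 1
return_survive = -step_cost * T

# An alive bonus exceeding the per-step cost restores the intended ordering:
# (2 - 1) * 10 = 10 > (2 - 1) * 1 = 1
return_die_early_bonus = (alive_bonus - step_cost) * 1
return_survive_bonus = (alive_bonus - step_cost) * T
```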
\section{Do visualizations predict optimization performance?}\label{sec:optcompare}
Fig. \ref{fig:optCompare} tests how well the visualizations of the previous sections predict actual optimization performance in the inverted pendulum trajectory optimization. The figure compares quadratic costs to exponentiated rewards, and actions parameterized as torques to target angles implemented using a PD-controller as in Section \ref{sec:actionspace}. To keep the figure readable, we did not include curves with and without termination; this does not make a big difference in the simple pendulum problem, and termination is further investigated in the next section. All the optimizations were performed using CMA-ES \cite{hansen2001completely,hansen2016cma}, which is common in animation research and known to perform well on multimodal optimization tasks. A population size of 100 was used. To allow comparing both costs and rewards, progress is graphed as the Euclidean distance from the true optimum.
The results indicate that parameterizing actions as target angles is considerably more efficient, as predicted by the visualizations of Section \ref{sec:actionspace}. The exponential transform of costs to rewards degrades performance, in line with the observations of Section \ref{sec:rewards}.
\begin{figure}[th]
\centering
\includegraphics[width=\linewidth]{images/optimizationResults.png}
\caption{Inverted pendulum trajectory optimization results, plotted as the mean of 10 runs of CMA-ES. We compare both cost minimization and reward maximization with actions parameterized both as torques and target angles. As predicted by our visualizations, the angle parameterization scales better for large $T$, and the reward maximization is less efficient.} \label{fig:optCompare}
\end{figure}
\section{Generalizability to more complex agents}\label{sec:biped}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{images/humanoid.png}
\caption{3D humanoid locomotion test from Unity Machine Learning Agents framework.} \label{fig:humanoid}
\end{figure}
This section tests the generalizability and usefulness of the visualization approach with a more complex agent, using both policy optimization and trajectory optimization. We use the 3D humanoid locomotion test from the Unity Machine Learning Agents framework \cite{juliani2018unity} shown in Fig. \ref{fig:humanoid}. The test is designed for policy optimization; we modify it to also support trajectory optimization by starting each episode/trajectory from a fixed initial state, without randomization. We have also modified the code and environment to be fully deterministic, so that non-reproducible simulation does not corrupt the landscape visualizations.
The action space is 39-dimensional, with actions defining both target angles and the maximum torques that the motors of the humanoid's 16 joints are allowed to use for reaching those targets. We optimize with planning horizons of 1, 3, 5, and 10 seconds, resulting in 585, 1755, 2925, and 5850 optimized variables, respectively. As the global optima are not known, we visualize the landscapes around the optima found in each test, following Li et al. \cite{li2018visualizing}. We use a simulation timestep of $1/75$ seconds, with control actions repeated for 5 timesteps, i.e., taking 15 actions per second.
As detailed below in Sections \ref{sec:biped_traj}-\ref{sec:scalability}, the results support our earlier findings:
\begin{itemize}
\item 2D visualization slices reveal that trajectory optimization is highly multimodal, with optima that become narrower as the length of the planning horizon increases.
\item As hypothesized in Section \ref{sec:splines}, utilizing a spline parameterization results in a better-behaved landscape.
\item Termination based on agent state reduces local optima in both trajectory and policy optimization.
\item Policy optimization with neural network policies scales better for long planning horizons, with the landscape remaining essentially unchanged as the planning horizon grows. Notably, \textit{policy optimization was more efficient than optimizing a single long trajectory}, even though our policy network has over 2M parameters to optimize, i.e., orders of magnitude more.
\end{itemize}
\subsection{Trajectory Optimization}\label{sec:biped_traj}
For trajectory optimization, we use the recent highly scalable CMA-ES variant LM-MA-ES \cite{loshchilov2018large}---as the physics simulator is not differentiable, gradient-based methods are not applicable. We visualize the landscapes around the optimum found by LM-MA-ES. Like CMA-ES, LM-MA-ES is a quasi-parameter-free method, typically only requiring adjustment of the iteration sampling budget (population size), which is increased from the recommended value for more difficult problems. The recommended budget---a logarithmic function of the number of variables---did not produce robust results; instead, we used a 10 times larger budget in all the optimization runs.
We first tested trajectory optimization on action sequences of up to 5 seconds, i.e., up to 2925 optimized variables. This turned out to be a difficult task, resulting in somewhat unstable gaits even after hours of CPU time.
Here, 2D slice visualizations provided useful diagnostic information, revealing the difficult multimodality of the optimization problem. Fig. \ref{fig:humanoid_termination} shows the landscapes around the found local optimum for each tested planning horizon. Although there is a clear optimum in the center, the rest of the landscape is noisy and ill-conditioned. This is exacerbated with the longer planning horizons.
Fig. \ref{fig:humanoid_termination} also shows that termination helps remove local optima, and produces a smoother landscape. In the humanoid locomotion test, termination takes place when a body part other than the feet touches the ground.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{images/humanoid_trajectory_comparison.png}
\caption{Trajectory optimization landscapes of humanoid locomotion with different planning horizons. Termination removes local optima, producing a smoother landscape.} \label{fig:humanoid_termination}
\end{figure}
\subsection{Trajectory Optimization with Splines}
In an effort to better handle long planning horizons, we tested trajectory optimization with spline-based parameterization. Instead of setting target angles and maximum torques for joints once every 5 timesteps, we interpolated the actions for each timestep using Catmull-Rom splines. The control points of the splines were optimized using LM-MA-ES, akin to the method presented by Hämäläinen et al. \cite{Hamalainen2014} for online optimization. Fig. \ref{fig:humanoid_spline} shows how this slightly improves the landscapes, with reduced noise and gentler slopes towards the optimum.
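As a reference for the interpolation step, a minimal uniform Catmull-Rom evaluator can be sketched as follows; this is a generic textbook formulation with a hypothetical helper for expanding control points into per-timestep actions, not the actual implementation used in our tests:

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """Uniform Catmull-Rom interpolation between p1 and p2, with t in [0, 1]."""
    return 0.5 * ((2.0 * p1)
                  + (-p0 + p2) * t
                  + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t**2
                  + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t**3)

def interpolate_actions(control_points, steps_per_segment):
    """Expand a short sequence of control-point actions into per-timestep actions."""
    cp = np.asarray(control_points, dtype=float)
    out = []
    for i in range(1, len(cp) - 2):  # interior segments only
        for s in range(steps_per_segment):
            t = s / steps_per_segment
            out.append(catmull_rom(cp[i - 1], cp[i], cp[i + 1], cp[i + 2], t))
    return np.array(out)
```

A convenient property is that the spline passes through its control points (`catmull_rom(..., t=0)` returns `p1`), so the optimized control points remain directly interpretable as target poses at their own timesteps.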
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{images/humanoid_spline.png}
\caption{Spline-parameterized trajectory optimization landscapes of humanoid locomotion. The landscapes exhibit less noise than the trajectory optimization landscapes of Fig. \ref{fig:humanoid_termination}.\label{fig:humanoid_spline}}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{images/humanoid_per_time_step.png}
\caption{Non-spline trajectory optimization landscapes around action sequences resulting from evaluating the optimal splines of Fig. \ref{fig:humanoid_spline}. The landscapes become noisier and introduce more elongated ridges and valleys.} \label{fig:humanoid_per_time_step}
\end{figure}
To provide a more direct comparison to optimizing action sequences without splines, Fig. \ref{fig:humanoid_per_time_step} shows the non-spline trajectory optimization landscape around the action sequences resulting from the optimized splines of Fig. \ref{fig:humanoid_spline}. Without the spline parameterization, the 3s and 5s landscapes degenerate, becoming noisier and more ill-conditioned.
Although trajectory optimization with splines results in cleaner landscapes and qualitatively better and smoother movement than non-spline trajectory optimization, finding good long trajectories required hours of CPU time. This motivated us to also test policy optimization, as explained below.
\subsection{Policy Optimization} \label{sec:biped_PPO}
We trained a neural network policy for solving the humanoid locomotion task using Proximal Policy Optimization (PPO) \cite{schulman2017proximal} and different planning horizons. PPO utilizes episodic experience collection, i.e., the agent is started from some initial state, and explores states and actions until a terminal state, or maximum episode length, is reached. We used the planning horizon as the episode length.
The resulting landscapes (Fig. \ref{fig:humanoid_policy}) show a significant improvement in convexity, sphericity, and unimodality. The landscape also remains essentially unchanged with a longer planning horizon.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{images/humanoid_policy_comparison.png}
\caption{Policy optimization landscapes of humanoid locomotion. As opposed to trajectory optimization, the task scales well with increasing planning horizon. Termination removes local optima.} \label{fig:humanoid_policy}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{images/convergence-unity-combined.png}
\caption{Best trajectory or episode reward as a function of timesteps simulated during optimization, with different planning horizons and optimization approaches. The graphs show the mean and standard deviation of 5 independent optimization runs. Policy optimization with PPO scales significantly better for long planning horizons. \label{fig:humanoid_comparison}}
\end{figure}
\subsection{Scalability of Trajectory and Policy Optimization}\label{sec:scalability}
The policy optimization visualizations suggest that as trajectory lengths increase, policy optimization should be more efficient than trajectory optimization. Fig. \ref{fig:humanoid_comparison} provides evidence supporting this hypothesis. It compares optimization progress with different planning horizons and the three optimization strategies: LM-MA-ES, LM-MA-ES with splines, and PPO. With a trajectory length of 5 seconds, PPO already shows improved performance over the other methods, and the advantage is dramatically larger with 10-second trajectories. However, this comes with a caveat: with a neural network policy, there is more overhead per simulation step, as each optimization iteration requires training the policy and value networks using multiple minibatch gradient updates. Optimizing for 5 million timesteps with PPO was approximately 4 times slower than with LM-MA-ES in our tests, when measured in wall-clock time on a single 4-core computer.
Interestingly, although the spline landscapes look slightly better in Fig. \ref{fig:humanoid_spline}, spline trajectory optimization results in slightly lower rewards for a given simulation budget. A plausible explanation is that the Unity locomotion test has a very simple reward function that trajectory optimization can exploit with unnatural and jerky movements, as shown in the supplemental video\footnote{\url{https://youtu.be/5v_lsGCahSI}} at 03:01. From a pure reward maximization perspective, a spline parameterization that enforces a degree of smoothness may not be ideal, and one may expect more gains if the reward function favors smooth movement. In general, the best results are achieved when the action parameterization induces a useful prior for the optimization task.
Fig. \ref{fig:mujoco-results} replicates the result of Fig. \ref{fig:humanoid_comparison} with other agents and optimization methods, providing additional evidence that policy optimization scales better to long planning horizons. To generate the figure, we conducted trajectory and policy optimizations using 4 common OpenAI Gym \cite{brockman2016openai} MuJoCo agents and locomotion tasks (or ``environments'' in the Gym lingo): 2D monopedal hopper (Hopper-v2), 2D bipedal walker (Walker2d-v2), 2D half quadruped (HalfCheetah-v2), and 3D humanoid (Humanoid-v2). In policy optimization, we tested both PPO and Soft Actor-Critic (SAC) \cite{haarnoja2018soft}, a more recent method that is growing in popularity. We used the Stable Baselines \cite{stable-baselines} PPO and SAC implementations with their default settings. In trajectory optimization, we used CMA-ES instead of LM-MA-ES, as it was easily available for the Python-based Gym framework. For each task and optimization method, we performed 10 independent training runs with different random seeds. To allow aggregating the convergence curves of all tasks in a single plot, we normalized the episode/trajectory rewards of each task over all optimization runs and methods to the range $[0,1]$. We used the default MuJoCo reward functions and episode termination, but removed the initial state randomization to allow direct comparison of trajectory and policy optimization, similar to the Unity humanoid tests above. We used a control frequency of 20Hz for all the tasks.
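The normalization is a plain min-max rescaling over all runs and methods of a task; a sketch (with a hypothetical helper name, not the exact evaluation code):

```python
import numpy as np

def normalize_task_rewards(curves):
    """Min-max normalize the reward curves of one task, pooled over all runs
    and methods, to the range [0, 1]."""
    pooled = np.concatenate(curves)
    lo, hi = pooled.min(), pooled.max()
    return [(c - lo) / (hi - lo) for c in curves]
```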
\begin{figure}[t]
\includegraphics[width=\linewidth]{images/mujoco-results.png}
\caption{Replicating the result of Fig. \ref{fig:humanoid_comparison} in 4 OpenAI Gym MuJoCo tasks (Walker2d-v2, Hopper-v2, HalfCheetah-v2, Humanoid-v2), using 3 optimization methods (CMA-ES, PPO, and SAC), planning horizons 1s, 5s, 10s, and 20s, and 10 independent optimization runs per method and task. Each learning curve aggregates the results from all 4 tasks, using normalized episode/trajectory rewards. \label{fig:mujoco-results}}
\end{figure}
\input{appendix_in_body}
\section{Conclusion}
We have presented several novel visualizations of continuous control trajectory and policy optimization landscapes, demonstrating the usefulness of the random 2D slice visualization approach of Li et al. \cite{li2018visualizing} in this domain. We have also presented a mathematical analysis of the limitations of the random 2D slice visualizations.
Our visualizations provide new intuitions of movement optimization problems and explain why common best practices are powerful. The visualization approach can be used as a diagnostic tool for understanding why optimization does not converge, or progresses slowly. Even when a global optimum is not known, as in Section \ref{sec:biped}, it can be useful to plot the landscape around a found optimum: If the optimized movement is not satisfactory or one optimization approach performs worse than another, visualization can provide insights on why this is the case, e.g., due to ill-conditioning or multimodality.
We acknowledge that some of our results---e.g. the efficiency of episode termination---are already known to experienced readers. We do, however, provide novel visual evidence of the underlying reasons; for example, we show how termination based on agent state removes local optima in the space of optimized actions or policy parameters. This contributes to the understanding of movement optimization, and, as representative images are known to increase understanding and recall \cite{carney2002pictorial}, it should also have pedagogical value in educating new researchers and practitioners.
To conclude, the key insights from our work can be summarized as:
\begin{itemize}
\item Random 2D slice visualizations are useful in analyzing high-dimensional movement optimization landscapes, and can predict movement optimization efficiency.
\item The curse of dimensionality hits trajectory optimization hard, as it can become increasingly ill-conditioned with longer planning horizons. Policy optimization scales better in this regard. Perhaps counterintuitively, optimizing a neural network policy can be more efficient than optimizing a single action trajectory with orders of magnitude fewer parameters.
\item Parameterizing actions as (partial) target states---e.g. target angles that the character's joints are driven towards---is strongly motivated, as opposed to optimizing raw control torques. It can make trajectory optimization more well-conditioned and separable.
\item Combining the two points above, one can explain the power of the common practice of optimizing splines that define time-varying target poses; pose parameterization leads to more well-conditioned optimization, and the spline control points define a shorter sequence of macro actions, which further counteracts ill-conditioning caused by sequence length. However, the smoothness constraints that splines impose on movements may not be ideal for all reward functions.
\item Using early termination appears strongly motivated, as it typically results in a more convex landscape in both trajectory and policy optimization. However, combining termination with costs or negative rewards is dangerous, which our visualizations clearly illustrate.
\end{itemize}
In our future work, we aim to investigate the parameterization of actions as target states for complex controlled agents with unactuated roots. We hypothesize that this can be implemented for both trajectory and policy optimization, using a general-purpose neural network controller trained for reaching the target states.
\section*{Acknowledgements}
This research has been supported by Academy of Finland grant 299358.
\bibliographystyle{IEEEtran}
\newpage
\begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{images/perttu.png}}]{Perttu Hämäläinen}
received an M.Sc.(Tech) degree from Helsinki University of Technology in 2001, an M.A. degree from the University of Art and Design Helsinki in 2002, and a doctoral degree in computer science from Helsinki University of Technology in 2007. Presently, Hämäläinen is an associate professor at Aalto University, publishing on human-computer interaction, computer animation, machine learning, and game research. Hämäläinen is passionate about human movement in its many forms, ranging from analysis and simulation to first-hand practice of movement arts such as parkour or contemporary dance.
\end{IEEEbiography}
\begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{images/juuso.jpg}}]{Juuso Toikka}
received his M.Sc.(Tech) degree in Computer Science from Aalto University in 2019. While this manuscript was in preparation, Toikka worked as a research assistant at the Department of Computer Science at Aalto University, Finland, but he has since moved on to Ubisoft RedLynx, pursuing a game industry career. His professional interests include animation tools, procedural animation, movement control optimization, reinforcement learning, and emergence in games.
\end{IEEEbiography}
\begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{images/amin.png}}]{Amin Babadi}
is a doctoral candidate at the Department of Computer Science, Aalto University, Finland. His research focuses on developing efficient, creative movement artificial intelligence for physically simulated characters in multi-agent settings. Babadi has previously worked on three commercial games, developing AI, animation, gameplay, and physics simulation systems.
\end{IEEEbiography}
\begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{images/karen.jpg}}]{C. Karen Liu}
is an associate professor in the Department of Computer Science at Stanford University. She received her Ph.D. degree in Computer Science from the University of Washington. Liu's research interests are in computer graphics and robotics, including physics-based animation, character animation, optimal control, reinforcement learning, and computational biomechanics. She developed computational approaches to modeling realistic and natural human movements, learning complex control policies for humanoids and assistive robots, and advancing fundamental numerical simulation and optimal control algorithms. The algorithms and software developed in her lab have fostered interdisciplinary collaboration with researchers in robotics, computer graphics, mechanical engineering, biomechanics, neuroscience, and biology. Liu received a National Science Foundation CAREER Award, an Alfred P. Sloan Fellowship, and was named Young Innovators Under 35 by Technology Review. In 2012, Liu received the ACM SIGGRAPH Significant New Researcher Award for her contribution in the field of computer graphics.
\end{IEEEbiography}
\twocolumn[\section*{Paper Supplement: Visualizing Movement Control Optimization Landscapes}]
\begin{figure*}[b]
\centering
{\includegraphics[width=0.9\linewidth]{images/landscape-grid-Hopper-v2.png}}
\caption{Trajectory optimization (CMA-ES) and policy optimization (PPO, SAC) landscapes for the Hopper-v2 MuJoCo environment, with trajectory/episode lengths ranging from 1 to 20 seconds. Trajectory optimization becomes highly ill-conditioned for long trajectories.}
\label{fig:landscape_hopper}
\end{figure*}
This supplementary document presents additional results to augment the paper's Section 9.4. Recall that the central result of Section 9.4 is that policy optimization scales better to long trajectories/episodes, although a policy neural network typically has orders of magnitude more parameters to optimize than a single trajectory (even a long one). This was tested with multiple locomotion tasks and optimizers: a Unity Machine Learning Agents 3D humanoid (optimized using LM-MA-ES and PPO) and four different MuJoCo agents: a 2D monopedal hopper (Hopper-v2), a 2D bipedal walker (Walker2d-v2), a 2D half-quadruped (HalfCheetah-v2), and a 3D humanoid (Humanoid-v2), optimized using CMA-ES, PPO, and SAC. The optimization landscape visualizations of Section 9.4---from the Unity humanoid locomotion case---corroborate this result, displaying much less multimodality and ill-conditioning in the policy optimization case.
Similar landscape plots of the MuJoCo agents are included below. All landscape visualizations are centered around the found optima (a vector of control torques for each time step in trajectory optimization, or a vector of neural network parameters in policy optimization). The visualizations were computed using grids of $100\times100$ points, computing the mean return of 10 trajectories/episodes for each grid point. To improve visual clarity, all landscapes were also filtered using Gaussian blur with $\sigma=1.0$.
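The slice computation itself is straightforward. The following is a minimal pure-Python sketch of evaluating an objective on a random 2D slice through a point; the toy objective, grid resolution, and function names are illustrative only, not the actual experiment code (which averages stochastic episode returns and applies Gaussian blur):

```python
import math
import random

def toy_return(params):
    # Stand-in objective: a smooth unimodal "return" peaked at the origin.
    return -sum(p * p for p in params)

def landscape_slice(f, center, half_width=1.0, resolution=21, rng=None):
    """Evaluate f(center + a*u + b*v) on a 2D grid, where u and v are
    random orthonormal directions obtained via Gram-Schmidt."""
    rng = rng or random.Random(0)
    d = len(center)
    u = [rng.gauss(0.0, 1.0) for _ in range(d)]
    r = [rng.gauss(0.0, 1.0) for _ in range(d)]
    norm_u = math.sqrt(sum(x * x for x in u))
    u = [x / norm_u for x in u]
    proj = sum(ui * ri for ui, ri in zip(u, r))
    v = [ri - proj * ui for ui, ri in zip(u, r)]
    norm_v = math.sqrt(sum(x * x for x in v))
    v = [x / norm_v for x in v]
    # Grid coordinates a, b range over [-half_width, half_width].
    ts = [half_width * (2.0 * i / (resolution - 1) - 1.0) for i in range(resolution)]
    return [[f([c + a * ui + b * vi for c, ui, vi in zip(center, u, v)])
             for b in ts] for a in ts]

grid = landscape_slice(toy_return, center=[0.0] * 50)
```

Since the toy objective is maximized at the center point, the middle cell of the grid is the largest value, mirroring how the paper's visualizations are centered on the found optima.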
These MuJoCo landscapes support the results of Section 9.4. This is clearest in the hopper landscapes shown in Fig.~\ref{fig:landscape_hopper}. CMA-ES trajectory optimization landscapes become increasingly ill-conditioned with long trajectories, exhibiting narrow ridges where an optimizer typically zigzags back and forth across the ridge, making very slow progress along it. In contrast, the policy optimization landscapes show almost spherical optima. Note that the policy networks for PPO and SAC have different parameter counts (as per the default parameters of the Stable Baselines implementations that we used); thus, the landscapes cannot display exactly the same optima.
The plots for the other MuJoCo environments (Figs.~\ref{fig:landscape_halfcheetah}--\ref{fig:landscape_humanoid}) exhibit similar qualities, although less clearly. The trajectory optimization landscapes also become increasingly multimodal and/or noisy with longer trajectories. It should be noted that each landscape's vertical axis is normalized to show maximal detail, i.e., the heights of the optima in different landscapes cannot be directly compared.
\begin{figure*}[!ht]
\centering
{\includegraphics[width=0.9\linewidth]{images/landscape-grid-HalfCheetah-v2.png} }
\caption{Trajectory optimization (CMA-ES) and policy optimization (PPO, SAC) landscapes for the HalfCheetah-v2 MuJoCo environment, with trajectory/episode lengths ranging from 1 to 20 seconds.}
\label{fig:landscape_halfcheetah}
\end{figure*}
\begin{figure*}[!ht]
\centering
{\includegraphics[width=0.9\linewidth]{images/landscape-grid-Walker2d-v2.png} }
\caption{Trajectory optimization (CMA-ES) and policy optimization (PPO, SAC) landscapes for the Walker2d-v2 MuJoCo environment, with trajectory/episode lengths ranging from 1 to 20 seconds.}
\label{fig:landscape_walker}
\end{figure*}
\begin{figure*}[!ht]
\centering
{\includegraphics[width=0.9\linewidth]{images/landscape-grid-Humanoid-v2.png} }
\caption{Trajectory optimization (CMA-ES) and policy optimization (PPO, SAC) landscapes for the Humanoid-v2 MuJoCo environment, with trajectory/episode lengths ranging from 1 to 20 seconds.}
\label{fig:landscape_humanoid}
\end{figure*}
\end{document}
\section{Proof of Proposition 2}
Deriving the exact formula for the condition number of the 2D slice as a function of $d$ is beyond the scope of this paper. However, the visualization will display the ill-conditioning if the projection norms $||\mathbf{u}_{[:k]}||$ and $||\mathbf{v}_{[:k]}||$ differ from each other, i.e., if $f(\mathbf{x})$ grows faster when moving along one axis than along the other. Such vector pairs, and all their rotations within the plane they span, form good visualization basis vectors. Thus, we need to prove that randomly selecting such a pair becomes less probable with large $d$. Assume that one first selects $\mathbf{u}$ randomly and then calculates $\mathbf{v}$ through orthogonalization as:
\begin{equation}
\mathbf{v}=\frac{\mathbf{r} - \mathbf{u} (\mathbf{u}^T\mathbf{r})}{||\mathbf{r} - \mathbf{u} (\mathbf{u}^T\mathbf{r})||},
\end{equation}
where $\mathbf{r}$ is another random unit-length vector independent of $\mathbf{u}$. With large $d$, $\mathbf{u}$ and $\mathbf{r}$ are likely to be nearly orthogonal to begin with, such that $\mathbf{u}^T\mathbf{r} \approx 0$ and $\mathbf{v} \approx \mathbf{r}$, which means that the norms $||\mathbf{u}_{[:k]}||$ and $||\mathbf{v}_{[:k]}||$ are independent random variables with the same expectation. Conversely, with small $d$, the orthogonalization has a larger effect on the norm of $\mathbf{v}_{[:k]}$, making it more likely that the norms differ. For example, consider the case $k=2$. If $\mathbf{u}$ is parallel to the subspace, i.e., lies in the plane spanned by the first two unit vectors, orthogonalization will rotate $\mathbf{v}$ away from that plane unless $\mathbf{r}$ is also parallel to it. $\qed$
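The near-orthogonality argument is easy to check numerically. Below is a small pure-Python sketch (the dimension and seed are chosen arbitrarily): with large $d$, the overlap $|\mathbf{u}^T\mathbf{r}|$ of two independent random unit vectors is small, and the Gram-Schmidt step of the equation above barely changes $\mathbf{r}$.

```python
import math
import random

def unit_gaussian(d, rng):
    """A uniformly random unit vector in R^d (normalized Gaussian)."""
    x = [rng.gauss(0.0, 1.0) for _ in range(d)]
    n = math.sqrt(sum(v * v for v in x))
    return [v / n for v in x]

def orthogonalize(u, r):
    # v = (r - u (u^T r)) / ||r - u (u^T r)||, as in the equation above.
    p = sum(ui * ri for ui, ri in zip(u, r))
    w = [ri - p * ui for ui, ri in zip(u, r)]
    n = math.sqrt(sum(v * v for v in w))
    return [v / n for v in w]

rng = random.Random(42)
d = 2000
u = unit_gaussian(d, rng)
r = unit_gaussian(d, rng)
# In high dimension, independent random unit vectors are nearly orthogonal:
overlap = abs(sum(ui * ri for ui, ri in zip(u, r)))
v = orthogonalize(u, r)
```

Here `overlap` has standard deviation roughly $1/\sqrt{d}$, so for $d=2000$ it is typically a few percent, while `v` is exactly unit-length and orthogonal to `u` by construction.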
% https://arxiv.org/abs/1802.09792
\title{Constructing Representative Scenarios to Approximate Robust Combinatorial Optimization Problems}
\begin{abstract}
In robust combinatorial optimization with discrete uncertainty, two general approximation algorithms are frequently used, both based on constructing a single scenario representing the whole uncertainty set. In the midpoint method, one optimizes for the average-case scenario. In the element-wise worst-case approach, one constructs a scenario by taking the worst case in each component over all scenarios. Both methods are known to be $N$-approximations, where $N$ is the number of scenarios. In this paper, these results are refined by reconsidering their respective proofs as optimization problems. We present a linear program to construct a representative scenario for the uncertainty set, which yields an approximation guarantee that is at least as good as that of the previous methods. Incidentally, we show that the element-wise worst-case approach can have an advantage over the midpoint approach if the number of scenarios is large. In numerical experiments on the selection problem we demonstrate that our approach can improve the approximation guarantee of the midpoint approach by around 20\%.
\end{abstract}
\section{Introduction}
We consider combinatorial optimization problems of the general form
\[ \min_{\pmb{x}\in\mathcal{X}} \pmb{c}\pmb{x} \]
where $\pmb{c} \ge \pmb{0}$ is a cost vector, and $\mathcal{X} \subseteq \{0,1\}^n$ is a set of feasible solutions. As real-world problems may suffer from uncertainty, robust counterparts to combinatorial problems have been considered in the literature, see \cite{Aissi2009,kasperski2016robust} for surveys on the topic. The resulting robust (or min-max) optimization problem is then of the form
\[ \min_{\pmb{x}\in\mathcal{X}} \max_{\pmb{c}\in\mathcal{U}} \pmb{c}\pmb{x} \tag{\textsc{MinMax}}\]
where $\mathcal{U}$ contains all possible cost vectors $\pmb{c}^1, \ldots, \pmb{c}^N$ against which we wish to protect.
As robust combinatorial problems are usually NP-hard, approximation methods have been considered \cite{Aissi2007281}. Two such heuristics stand out in the literature, as they are easy to use and implement, and provide the best-known approximation guarantee for a wide range of problems. While this guarantee has been improved for specific problems, these two remain the best-known general methods (see \cite{approx}).
Both algorithms are based on constructing a single scenario that represents the whole uncertainty $\mathcal{U}$. For the midpoint algorithm, we use $\hat{\pmb{c}}$ with $\hat{c}_i = 1/N \sum_{j\in[N]} c^j_i$ for all $i\in[n]$. For the element-wise worst-case algorithm, we set $\overline{\pmb{c}}$ by using $\overline{c}_i = \max_{j\in[N]} c^j_i$. Let us denote by $\pmb{x}(\pmb{c})$ a minimizer for the nominal problem with costs $\pmb{c}$, and set $\hat{\pmb{x}}:=\pmb{x}(\hat{\pmb{c}})$ (the midpoint solution) and $\overline{\pmb{x}}:=\pmb{x}(\overline{\pmb{c}})$ (the element-wise worst-case solution). The following results can be found in \cite{Aissi2009}.
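Both constructions are component-wise one-liners. The following is a minimal pure-Python sketch (the three-scenario cost matrix is only an illustration; it happens to match the example used later in the paper):

```python
def midpoint_scenario(scenarios):
    """Midpoint scenario: the component-wise average over all scenarios."""
    N = len(scenarios)
    return [sum(c[j] for c in scenarios) / N for j in range(len(scenarios[0]))]

def elementwise_worst_case(scenarios):
    """Element-wise worst case: the component-wise maximum over all scenarios."""
    return [max(c[j] for c in scenarios) for j in range(len(scenarios[0]))]

U = [[5, 5, 3, 3], [3, 8, 9, 7], [3, 2, 1, 6]]
c_hat = midpoint_scenario(U)       # ≈ [3.67, 5.0, 4.33, 5.33]
c_bar = elementwise_worst_case(U)  # [5, 8, 9, 7]
```

The heuristic solutions $\hat{\pmb{x}}$ and $\overline{\pmb{x}}$ are then obtained by solving the nominal problem once for `c_hat` or `c_bar`, respectively.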
\begin{theorem}\label{th-n1}
The midpoint solution $\hat{\pmb{x}}$ is an $N$-approximation for \textsc{MinMax}.
\end{theorem}
\begin{theorem}\label{th-n2}
The element-wise worst-case solution $\overline{\pmb{x}}$ is an $N$-approximation for \textsc{MinMax}.
\end{theorem}
Frequently, problems with ``nice'' structure (such as shortest path, spanning tree, selection, or assignment) have been considered in the literature, where it is possible to solve the nominal problem in polynomial time. In particular, this setting makes it possible to apply both of the above approaches in polynomial time by solving the nominal problem for one specific scenario (i.e., finding $\pmb{x}(\hat{\pmb{c}})$ or $\pmb{x}(\overline{\pmb{c}})$). This can then be used, e.g., as part of a branch-and-bound procedure for the (hard) robust problem.
Recently, data-driven robust optimization approaches have been investigated in the literature (see, e.g., \cite{atmos,bertsimas2018data}). This paper has a similar research outlook by using the available data for better approximation guarantees, instead of ignoring structure that may be present. In a similar spirit, by analyzing the symmetry of an uncertainty set, \cite{conde2012constant} is able to derive improved approximation bounds for the related \textsc{MinMax Regret} problem with compact uncertainty sets.
The contributions of this paper are as follows. By re-examining the proofs for Theorems~\ref{th-n1} and \ref{th-n2}, we present a linear program (LP) to construct a scenario $\pmb{c}'$ that is ``representative'' for the uncertainty set $\mathcal{U}$. We show that the resulting solution $\pmb{x}(\pmb{c}')$ has an approximation guarantee that is at least as good as the guarantee for $\hat{\pmb{x}}$ and $\overline{\pmb{x}}$. We also compare the midpoint and element-wise worst-case approach in more detail and find that the latter can outperform the former if the number of scenarios is large. In numerical experiments, we compare the quality of upper and lower bounds of our approach with the midpoint method, and demonstrate that it is possible to find considerably smaller a-priori and a-posteriori gaps by solving a simple linear program.
\section{Scenario construction based on the midpoint approach}\label{mainsec}
Let $OPT$ be the optimal objective value of problem \textsc{MinMax}, and let $\pmb{x}^*$ be any optimal solution. We make the following distinctions.
\begin{definition}
Let some scenario $\pmb{c}$ (not necessarily in $\mathcal{U}$) be given. Then
\[ UB(\pmb{c}) = \max_{i\in[N]} \pmb{c}^i\pmb{x}(\pmb{c}) \]
is an upper bound on $OPT$.
If it is possible to compute a lower bound from $\pmb{c}$, we denote this as $LB(\pmb{c})$, and a bound on the ratio as
\[ r(\pmb{c}) \ge UB(\pmb{c}) / LB(\pmb{c}) \]
We call $r(\pmb{c})$ an \emph{a-priori} bound, if it does not require the computation of $\pmb{x}(\pmb{c})$ to find. Otherwise, we call it an \emph{a-posteriori} bound.
\end{definition}
The reason for this distinction is that calculation of $\pmb{x}$ can be costly, if the nominal problem is not solvable in polynomial time.
As an example, the midpoint method uses $\hat{\pmb{c}} := \frac{1}{N}\sum_{i\in[N]} \pmb{c}^i$. It comes with an a-priori bound of $N$, but by using $LB(\hat{\pmb{c}}) = \hat{\pmb{c}}\pmb{x}(\hat{\pmb{c}})$, we can calculate a stronger a-posteriori bound.
We now consider the problem of finding a better a-priori bound than $N$. To this end, note that Theorem~\ref{th-n1} can be proven in the following way.
\begin{proof}[Proof of Theorem~\ref{th-n1}]
\[
UB(\hat{\pmb{c}}) = \max_{i\in[N]} \pmb{c}^i \hat{\pmb{x}} \stackrel{(i)}{\le} N \hat{\pmb{c}} \hat{\pmb{x}} \le N \hat{\pmb{c}} \pmb{x}^* \stackrel{(ii)}{\le} N \max_{i\in[N]} \pmb{c}^i \pmb{x}^* = N\cdot OPT
\]
Here, (i) holds because the maximum of the $N$ nonnegative values $\pmb{c}^i\hat{\pmb{x}}$ is at most their sum $N\hat{\pmb{c}}\hat{\pmb{x}}$, the middle inequality uses that $\hat{\pmb{x}}$ minimizes costs $\hat{\pmb{c}}$, and (ii) holds because the average $\hat{\pmb{c}}\pmb{x}^*$ is at most the maximum.
\end{proof}
To mirror the steps of this proof, let us consider the following optimization problem:
\begin{align}
\min_{t,\pmb{c}}\ &t \label{eq0} \\
\text{s.t. } & \max_{i\in[N]} \pmb{c}^i\pmb{x}(\pmb{c}) \le t\cdot \pmb{c}\pmb{x}(\pmb{c}) \label{eq1}\\
& \pmb{c} \pmb{x}^* \le \max_{i\in[N]} \pmb{c}^i \pmb{x}^* \label{eq2}
\end{align}
\begin{lemma}\label{lem1}
Let $(t,\pmb{c})$ be a feasible solution to problem (\ref{eq0}--\ref{eq2}). Then, $\pmb{x}(\pmb{c})$ is a $t$-approximation for \textsc{MinMax}.
\end{lemma}
\begin{proof}
Analogous to the proof of Theorem~\ref{th-n1}.
\end{proof}
Note that Problem~(\ref{eq0}--\ref{eq2}) cannot be solved directly, as both the optimal solution $\pmb{x}^*$ and $\pmb{x}(\pmb{c})$ are unknown. To circumvent these two issues, we use different, sufficient constraints instead.
\begin{lemma}\label{lem2}
Let $\pmb{c}$ fulfil
\begin{equation}
\sum_{j\in S} c^i_j \le t \sum_{j\in S} c_j \quad \forall i\in[N], S\subseteq[n] : |S| = k \label{suf1}
\end{equation}
for some value of $t$, and constant $k$ such that $k\le \sum_{j\in[n]} x_j$ for all $x\in\mathcal{X}$. Then, $(t,\pmb{c})$ also fulfils \eqref{eq1}.
\end{lemma}
\begin{proof}
Let $X = \{j\in[n]: x_j(\pmb{c}) = 1\}$ and $\mathcal{S} = \{S\subseteq[n] : |S| = k, S\subseteq X\}$. Then, the number of sets $S$ in $\mathcal{S}$ containing a specific item $j\in X$ is the same for all $j$. Let $\ell$ be this number. By summing \eqref{suf1} over all $S\in\mathcal{S}$, we find that
\[ \ell \sum_{j\in X} c^i_j \le t \ell \sum_{j\in X} c_j \qquad \forall i\in[N] \]
and the claim follows.
\end{proof}
Note that for constant $k$, it is possible in polynomial time to check if $k\le \sum_{j\in[n]} x_j$ for all $x\in\mathcal{X}$. Also, the set $\mathcal{S}$ contains polynomially many elements. As an example, for $k=1$, Constraint~\eqref{suf1} becomes
\[ c^i_j \le tc_j \qquad \forall i\in[N], j\in[n] \]
and for $k=2$, it becomes
\[ c^i_j + c^i_l \le t(c_j+c_l) \qquad \forall i\in[N], j,l\in[n], j\neq l \]
In general, the constraints for some fixed $k$ also imply the constraints for any larger $k$. This means that the larger the value of $k$, the larger the set of feasible solutions to our optimization problem, and the better the approximation guarantees we can obtain.
\begin{lemma}\label{lem3}
Let $\pmb{c}$ be in $conv(\mathcal{U}) = conv\{\pmb{c}^1,\ldots,\pmb{c}^N\}$. Then, $\pmb{c}$ fulfils \eqref{eq2}.
\end{lemma}
\begin{proof}
Let $\pmb{c} = \sum_{i\in[N]} \lambda_i \pmb{c}^i$ with $\sum_{i\in[N]} \lambda_i = 1$ and $\lambda_i \ge 0$ for all $i\in[N]$. Then, for any $\pmb{x}\in\mathcal{X}$,
\[ \pmb{c}\pmb{x} = \sum_{i\in[N]} \lambda_i \pmb{c}^i\pmb{x} \le \sum_{i\in[N]} \lambda_i \max_{j\in[N]}\pmb{c}^j\pmb{x} = \max_{i\in[N]} \pmb{c}^i\pmb{x} \]
\end{proof}
We now consider the following linear program:
\begin{align}
\max\ & t \label{neq0}\\
\text{s.t. } & t \sum_{j\in S} c^i_j \le \sum_{j\in S} c_j & \forall i\in[N], S\subseteq[n] : |S| = k \label{neq1} \\
& \pmb{c} = \sum_{i\in[N]} \lambda_i \pmb{c}^i \label{neq2}\\
& \sum_{i\in[N]} \lambda_i = 1 \label{neq3}\\
& \lambda_i \ge 0 & \forall i\in[N] \label{neq4}
\end{align}
Note that we replaced variable $t$ in Problem~(\ref{eq0}--\ref{eq2}) with $1/t$ to linearize terms.
\begin{theorem}
Let $(t^*,\pmb{c}^*)$ be an optimal solution to Problem~(\ref{neq0}--\ref{neq4}). Then, $\pmb{x}(\pmb{c}^*)$ is a $1/t^*$-approximation for \textsc{MinMax}, and $1/t^* \le N$.
\end{theorem}
\begin{proof}
By Lemmas~\ref{lem2} and \ref{lem3}, $(1/t^*,\pmb{c}^*)$ is feasible for Problem~(\ref{eq0}--\ref{eq2}). Using Lemma~\ref{lem1}, we therefore find that $\pmb{x}(\pmb{c}^*)$ is a $1/t^*$-approximation for \textsc{MinMax}.
To see that $1/t^* \le N$, note that $(1/N,\hat{\pmb{c}})$ is a feasible solution to Problem~(\ref{neq0}--\ref{neq4}).
\end{proof}
Once a solution $(t^*,\pmb{c}^*)$ has been computed, we have found an a-priori approximation guarantee. If we then compute $\pmb{x}(\pmb{c}^*)$, we can derive a lower bound $\pmb{c}^*\pmb{x}(\pmb{c}^*)$, as $\pmb{c}^*\in conv(\mathcal{U})$, and an upper bound by calculating the objective value of $\pmb{x}(\pmb{c}^*)$ for \textsc{MinMax}. This way, a stronger a-posteriori guarantee is found.
\begin{example}
We illustrate our approach using a small selection problem as an example. Given four items, the task is to choose two of them that minimize the worst-case costs over three scenarios. The upper part of Table~\ref{extable} shows the item costs in each scenario.
\begin{table}[htb]
\begin{center}
\begin{tabular}{c|rrrr}
item & 1 & 2 & 3 & 4\\
\hline
$\pmb{c}^1$ & 5 & 5 & 3 & 3 \\
$\pmb{c}^2$ & 3 & 8 & 9 & 7 \\
$\pmb{c}^3$ & 3 & 2 & 1 & 6 \\
\hline
$\hat{\pmb{c}}$ & 3.67 & 5.00 & 4.33 & 5.33 \\
$\pmb{c}'$ & 3.75 & 6.88 & 6.75 & 5.50 \\
$\pmb{c}''$ & 3.00 & 8.00 & 9.00 & 7.00
\end{tabular}
\caption{Example item costs, with midpoint scenario ($\hat{\pmb{c}}$), our LP-based scenario with $k=1$ ($\pmb{c}'$), and with $k=2$ ($\pmb{c}''$).}\label{extable}
\end{center}
\end{table}
The midpoint scenario (i.e., the average cost of each item) is shown in the row below ($\hat{\pmb{c}}$). An optimal solution for this scenario is to pack items 1 and 3. This means that we have an a-priori approximation ratio of $N=3$, and can calculate a lower bound $LB(\hat{\pmb{c}}) = \hat{\pmb{c}}\hat{\pmb{x}} = 8$ and an upper bound $UB(\hat{\pmb{c}}) = \max_{i\in[N]}\pmb{c}^i\hat{\pmb{x}} = 12.$ Combining lower and upper bound, we find the stronger a-posteriori bound of $1.50$.
Using our linear program~(\ref{neq0}--\ref{neq4}) with $k=1$, we construct the scenario given in the next row $(\pmb{c}')$ and find an a-priori guarantee of $1.33$. For this scenario, an optimal solution is to take items 1 and 4. Accordingly, we find a lower bound of $9.25$, an upper bound of $10$, and an a-posteriori ratio of $1.08$.
Finally, we also use our LP with $k=2$ to find the scenario $\pmb{c}''$ and an a-priori guarantee of 1. This means that even before we have solved the problem, we already know that the resulting solution will be optimal. Indeed, we find that packing items 1 and 4 gives the optimal solution with objective value 10.
\end{example}
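For an instance this small, the $k=1$ linear program can even be solved by brute force over a grid on the simplex of weights $\lambda$. The following pure-Python sketch does exactly that (a real implementation would of course use an LP solver; the function below is hardcoded to three scenarios, and exploits that for $k=1$ only the element-wise maximum $\overline{c}_j$ yields a binding constraint per item):

```python
def best_k1_scenario(scenarios, step=0.005):
    """Maximize t s.t. t * c^i_j <= c_j for all i, j, where
    c = sum_l lambda_l * c^l, by brute force over a grid on the
    probability simplex (three scenarios assumed)."""
    n = len(scenarios[0])
    # For each item j, the tightest constraint comes from max_i c^i_j.
    worst = [max(c[j] for c in scenarios) for j in range(n)]
    best_t, best_c = -1.0, None
    m = int(round(1.0 / step))
    for a in range(m + 1):
        for b in range(m + 1 - a):
            lam = (a * step, b * step, 1.0 - (a + b) * step)
            c = [sum(l * s[j] for l, s in zip(lam, scenarios)) for j in range(n)]
            t = min(c[j] / worst[j] for j in range(n))
            if t > best_t:
                best_t, best_c = t, c
    return best_t, best_c

U = [[5, 5, 3, 3], [3, 8, 9, 7], [3, 2, 1, 6]]
t_star, c_prime = best_k1_scenario(U)
a_priori = 1.0 / t_star  # ≈ 1.33, matching the example's k=1 guarantee
```

The recovered scenario is $\pmb{c}' \approx (3.75, 6.88, 6.75, 5.50)$, matching Table~\ref{extable}, with the optimal weights $\lambda \approx (0.375, 0.625, 0)$.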
Note that we can also use the linear program~(\ref{neq0}--\ref{neq4}) to strengthen the approximation guarantee of the midpoint scenario $\hat{\pmb{c}}$ without calculating $\hat{\pmb{x}}$, by only keeping $t$ variable.
We conclude this section by introducing an alternative approach to calculate a-posteriori bounds, which cannot be used for a-priori bounds. To this end, note that
\[ \max_{\pmb{c}\in conv(\mathcal{U})} \min_{\pmb{x}\in\mathcal{X}} \pmb{c}\pmb{x} \le \min_{\pmb{x}\in\mathcal{X}} \max_{i\in[N]} \pmb{c}^i\pmb{x}\]
If the nominal problem can be written as a linear program, it can be dualized to find a compact formulation for the max-min problem. As both $\hat{\pmb{c}}$ and the optimal solution to problem (\ref{neq0}--\ref{neq4}) are in $conv(\mathcal{U})$, this approach will result in a lower bound that is at least as good as the lower bounds of the other two approaches. This may not result in a better ratio between upper and lower bound, however. We will test this approach in the experimental section.
\section{On the element-wise worst-case}
We now focus on the element-wise worst-case scenario $\overline{\pmb{c}}$ with $\overline{c}_i = \max_{j\in[N]} c^j_i$. A proof for Theorem~\ref{th-n2} is the following.
\begin{proof}[Proof of Theorem~\ref{th-n2}]
\[ UB(\overline{\pmb{c}}) = \max_{i\in[N]} \pmb{c}^i \overline{\pmb{x}} \stackrel{(i)}{\le} \overline{\pmb{c}}\overline{\pmb{x}} \le \overline{\pmb{c}}\pmb{x}^* \stackrel{(ii)}{\le} N \max_{i\in[N]} \pmb{c}^i\pmb{x}^* = N \cdot OPT\]
Here, (i) holds because $\overline{\pmb{c}}$ dominates every scenario component-wise, the middle inequality uses that $\overline{\pmb{x}}$ minimizes costs $\overline{\pmb{c}}$, and (ii) holds because $\overline{c}_j = \max_{i\in[N]} c^i_j \le \sum_{i\in[N]} c^i_j$, so $\overline{\pmb{c}}\pmb{x}^* \le \sum_{i\in[N]} \pmb{c}^i\pmb{x}^* \le N \max_{i\in[N]} \pmb{c}^i\pmb{x}^*$.
\end{proof}
Accordingly, we can generalize this proof to an optimization problem by writing
\begin{align}
\min_{t,\pmb{c}}\ &t \label{wc0}\\
\text{s.t. } & \max_{i\in[N]} \pmb{c}^i \pmb{x}(\pmb{c}) \le \pmb{c}\pmb{x}(\pmb{c}) \label{wc1}\\
& \pmb{c}\pmb{x}^* \le t \max_{i\in[N]} \pmb{c}^i \pmb{x}^* \label{wc2}
\end{align}
By substituting $\pmb{c}' := \pmb{c}/t$, Problem~(\ref{wc0}--\ref{wc2}) becomes equivalent to Problem~(\ref{eq0}--\ref{eq2}). Hence, we can apply the same techniques to transform this into a conservative linear program~(\ref{neq0}--\ref{neq4}) as in the previous section. Note, however, that while $\hat{\pmb{c}}$ is a feasible solution for this problem, this may not be the case for $\overline{\pmb{c}}$.
Related to the \textsc{MinMax} approach is \textsc{MinMax Regret}, where objective values are normalized by the optimal objective value in each scenario, i.e.,
\[ \min_{\pmb{x}\in\mathcal{X}} \max_{i\in[N]} \left( \pmb{c}^i\pmb{x} - \pmb{c}^i\pmb{x}(\pmb{c}^i) \right) \tag{\textsc{MinMax Regret}} \]
The following result is also from \cite{Aissi2009}.
\begin{theorem}\label{th-n3}
The midpoint algorithm is an $N$-approximation for \textsc{MinMax Regret}; this does not hold for the element-wise worst-case algorithm.
\end{theorem}
In combination with Theorems~\ref{th-n1} and \ref{th-n2}, this means that there are no known problem classes where the element-wise worst-case solution gives a better performance guarantee than the midpoint solution. The midpoint solution has also been found to be the best-known general approximation algorithm for interval uncertainty problems \cite{kasperski2006approximation}. For these reasons, the midpoint solution has seen more attention in the research literature than the element-wise worst-case approach. However, in the following we show that if the number of scenarios is large, the element-wise worst-case approach can perform better than the midpoint approach, i.e., not only the size of the uncertainty set but also the problem dimension plays a role for approximability.
\begin{theorem}\label{maintheorem}
The element-wise worst-case algorithm is a $|X|$-approximation for \textsc{MinMax}, where
$|X|=\max_{\pmb{x}\in\mathcal{X}} \sum_{j\in[n]} x_j$.
\end{theorem}
\begin{proof}
It holds that
\begin{align*}
&\max_{i\in[N]} \sum_{j\in[n]} c^i_j \overline{x}_j
\le \sum_{j\in[n]} \overline{c}_j \overline{x}_j
\le \sum_{j\in[n]} \overline{c}_j x^*_j
= \sum_{j\in[n]} \max_{i\in[N]} c^i_j x^*_j \\
& \le |X| \cdot \max_{j\in[n]} \max_{i\in[N]} c^i_j x^*_j
= |X|\cdot \max_{i\in[N]} \max_{j\in[n]} c^i_j x^*_j
\le |X| \cdot \max_{i\in[N]} \sum_{j\in[n]} c^i_j x^*_j = |X| \cdot OPT
\end{align*}
\end{proof}
Note that $|X| \le n$. The approximation guarantees from Theorems~\ref{th-n1} and \ref{th-n2} are tight, as the following two examples for robust shortest path problems demonstrate (see also \cite{Aissi2009}).
\begin{figure}[htbp]
\centering
\subfigure[Hard instance for the midpoint solution.\label{graph1}]{\includegraphics[width=.25\textwidth]{graph1}}\hspace*{2cm}
\subfigure[Hard instance for the element-wise worst-case solution.\label{graph2}]{\includegraphics[width=.35\textwidth]{graph2}}
\caption{Example instances for robust shortest path with two scenarios.}\label{graphs}
\end{figure}
In Figure~\ref{graph1}, the midpoint solution cannot distinguish between the upper edge and the lower edge. Hence, in this case, the $N$-approximation guarantee is tight with $N=2$. In Figure~\ref{graph2}, the element-wise worst-case solution cannot differentiate between the upper and the lower path. This instance is an example where the $N$-approximation guarantee is tight for this approach.
Note that the instance from Figure~\ref{graph1} can be extended to more scenarios without additional edges, such that the $N$-approximation guarantee of the midpoint solution remains tight. This is not the case for the element-wise worst-case scenario in Figure~\ref{graph2}: to extend this instance to more scenarios, additional edges are required. This demonstrates that the midpoint solution does not admit an $|X|$-approximation guarantee of the kind shown for the element-wise worst-case approach.
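The $|X|$-approximation guarantee of Theorem~\ref{maintheorem} can be sanity-checked numerically by full enumeration on small selection instances. The following pure-Python sketch does this (instance sizes, seed, and function names are arbitrary choices for illustration, not the paper's experimental code):

```python
import random
from itertools import combinations

def wc_vs_opt(n, p, N, rng):
    """Random selection instance: return (robust value of the element-wise
    worst-case solution, exact robust optimum found by enumeration)."""
    U = [[rng.randint(0, 100) for _ in range(n)] for _ in range(N)]
    def robust(items):
        return max(sum(c[j] for j in items) for c in U)
    opt = min(robust(s) for s in combinations(range(n), p))
    wc = [max(c[j] for c in U) for j in range(n)]
    # The p cheapest items under wc minimize the worst-case scenario.
    x_bar = sorted(range(n), key=lambda j: wc[j])[:p]
    return robust(x_bar), opt

rng = random.Random(7)
results = [wc_vs_opt(n=8, p=3, N=5, rng=rng) for _ in range(50)]
```

For the selection problem, $|X| = p$, so every sampled instance should satisfy $opt \le UB(\overline{\pmb{c}}) \le p \cdot opt$; note that here $N = 5 > p = 3$, so the $|X|$ bound is the stronger one.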
\section{Experiments}
To test the quality of our LP-based scenario construction approach, we consider instances of the selection problem (see, e.g., \cite{kasperski2016robust}). Here, $\mathcal{X} = \{ \pmb{x}\in\{0,1\}^n : \sum_{j\in[n]} x_j = p\}$ for some integer parameter $p$. We generate item costs $c^i_j$ by sampling uniformly i.i.d. from $\{0,1,\ldots,100\}$. We use instance sizes ranging from $n=10$, $p=3$ to $n=30$, $p=9$, and use $N\in\{2,5,10,50,100\}$. For each parameter combination, we generate 1000 instances and average the results.
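As a concrete sketch of this setup, the following pure-Python snippet generates random selection instances and computes the midpoint a-posteriori ratio $UB(\hat{\pmb{c}})/LB(\hat{\pmb{c}})$; the instance sizes, seed, and repetition count are arbitrary here, and the nominal selection problem is solved by simply picking the $p$ cheapest items:

```python
import random

def midpoint_aposteriori_ratio(n, p, N, rng):
    """Random selection instance: a-posteriori ratio UB(c_hat)/LB(c_hat)
    of the midpoint heuristic."""
    U = [[rng.randint(0, 100) for _ in range(n)] for _ in range(N)]
    c_hat = [sum(c[j] for c in U) / N for j in range(n)]
    # The p cheapest items under c_hat solve the nominal midpoint problem.
    x_hat = sorted(range(n), key=lambda j: c_hat[j])[:p]
    lb = sum(c_hat[j] for j in x_hat)              # midpoint lower bound
    ub = max(sum(c[j] for j in x_hat) for c in U)  # robust objective of x_hat
    return ub / lb

rng = random.Random(0)
ratios = [midpoint_aposteriori_ratio(10, 3, 10, rng) for _ in range(100)]
```

By Theorem~\ref{th-n1} and the definition of $LB$, every ratio lies between $1$ and $N$; the averages reported in the tables below are of exactly this kind of quantity.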
Table~\ref{apriori} shows the a-priori bounds for the midpoint approach when using our linear program~(\ref{neq0}--\ref{neq4}) for evaluation with $k=1$, $k=2$ and $k=3$ (Mid-1-Pre, Mid-2-Pre, and Mid-3-Pre, respectively). We compare this to the a-priori bounds that are found when also optimizing over the scenario $\pmb{c}$ for $k=1$, $k=2$ and $k=3$ (LP-1-Pre, LP-2-Pre, and LP-3-Pre, respectively). Note that overall, all guarantees are considerably smaller than $N$. Furthermore, our approach is able to improve the bound of the midpoint algorithm. On average, the guarantee that the midpoint approach gives is more than 20\% larger than our guarantee.
\begin{table}
\centering
\begin{tabular}{rrr|rrr|rrr}
$n$ & $p$ & $N$ & Mid-1-Pre & Mid-2-Pre & Mid-3-Pre & LP-1-Pre & LP-2-Pre & LP-3-Pre \\
\hline
10 & 3 & 2 & 1.86 & 1.75 & 1.65 & 1.70 & 1.57 & 1.46 \\
10 & 3 & 5 & 2.41 & 2.09 & 1.90 & 1.83 & 1.67 & 1.54 \\
10 & 3 & 10 & 2.45 & 2.13 & 1.97 & 1.79 & 1.65 & 1.53 \\
10 & 3 & 50 & 2.26 & 2.10 & 2.00 & 1.59 & 1.53 & 1.46 \\
10 & 3 & 100 & 2.18 & 2.08 & 2.00 & 1.52 & 1.48 & 1.43 \\
20 & 6 & 2 & 1.93 & 1.86 & 1.80 & 1.84 & 1.76 & 1.70 \\
20 & 6 & 5 & 2.66 & 2.32 & 2.14 & 2.09 & 1.94 & 1.82 \\
20 & 6 & 10 & 2.63 & 2.32 & 2.16 & 2.01 & 1.89 & 1.80 \\
20 & 6 & 50 & 2.32 & 2.18 & 2.09 & 1.77 & 1.73 & 1.69 \\
20 & 6 & 100 & 2.23 & 2.13 & 2.06 & 1.70 & 1.67 & 1.64 \\
30 & 9 & 2 & 1.96 & 1.92 & 1.87 & 1.90 & 1.84 & 1.79 \\
30 & 9 & 5 & 2.78 & 2.45 & 2.27 & 2.24 & 2.08 & 1.97 \\
30 & 9 & 10 & 2.73 & 2.42 & 2.26 & 2.13 & 2.03 & 1.94 \\
30 & 9 & 50 & 2.36 & 2.22 & 2.14 & 1.87 & 1.83 & 1.79 \\
30 & 9 & 100 & 2.26 & 2.16 & 2.10 & 1.79 & 1.77 & 1.74
\end{tabular}
\caption{Average a-priori bounds.}\label{apriori}
\end{table}
We contrast the a-priori bounds with a-posteriori bounds in Table~\ref{aposteriori}, i.e., we calculate the solutions $\pmb{x}(\pmb{c})$ for the respective scenarios $\pmb{c}$ and the resulting ratio of upper and lower bound. On average, the bound provided by the midpoint solution is around $17\%$ larger than the bound provided by our approach with $k=2$ or $k=3$. The max-min approach (denoted by MM) performs slightly better than our approach (Mid-Post is on average $19\%$ larger than MM-Post), but this comes without an a-priori guarantee and at the cost of higher computational effort, and it is not always possible to compute, as explained in Section~\ref{mainsec}.
\begin{table}
\centering
\begin{tabular}{rrr|r|rrr|r}
$n$ & $p$ & $N$ & Mid-Post & LP-1-Post & LP-2-Post & LP-3-Post & MM-Post \\
\hline
10 & 3 & 2 & 1.30 & 1.24 & 1.22 & 1.21 & 1.24 \\
10 & 3 & 5 & 1.57 & 1.35 & 1.30 & 1.32 & 1.29 \\
10 & 3 & 10 & 1.66 & 1.39 & 1.34 & 1.36 & 1.34 \\
10 & 3 & 50 & 1.82 & 1.37 & 1.36 & 1.38 & 1.37 \\
10 & 3 & 100 & 1.85 & 1.35 & 1.35 & 1.36 & 1.35 \\
20 & 6 & 2 & 1.21 & 1.18 & 1.17 & 1.16 & 1.14 \\
20 & 6 & 5 & 1.40 & 1.30 & 1.26 & 1.24 & 1.19 \\
20 & 6 & 10 & 1.47 & 1.33 & 1.28 & 1.28 & 1.24 \\
20 & 6 & 50 & 1.59 & 1.34 & 1.31 & 1.32 & 1.32 \\
20 & 6 & 100 & 1.63 & 1.33 & 1.31 & 1.32 & 1.32 \\
30 & 9 & 2 & 1.17 & 1.16 & 1.15 & 1.14 & 1.10 \\
30 & 9 & 5 & 1.32 & 1.26 & 1.21 & 1.20 & 1.14 \\
30 & 9 & 10 & 1.38 & 1.30 & 1.26 & 1.25 & 1.19 \\
30 & 9 & 50 & 1.48 & 1.30 & 1.28 & 1.28 & 1.28 \\
30 & 9 & 100 & 1.52 & 1.30 & 1.28 & 1.28 & 1.30
\end{tabular}
\caption{Average a-posteriori bounds.}\label{aposteriori}
\end{table}
Finally, we show more details on the a-posteriori bounds by providing both the upper and lower bounds in Tables~\ref{ubs} and \ref{lbs}. We find that our approach gives both better upper and better lower bounds than the midpoint approach. While the max-min approach provides the best lower bounds, its upper bounds are often worse than those of the midpoint solution.
\begin{table}
\centering
\begin{tabular}{rrr|r|r|rrr|r}
$n$ & $p$ & $N$ & OPT & Mid-UB & LP-1-UB & LP-2-UB & LP-3-UB & MM-UB \\
\hline
10 & 3 & 2 & 96.6 & 108.0 & 105.3 & 103.8 & 103.3 & 110.3 \\
10 & 3 & 5 & 142.9 & 169.5 & 162.8 & 158.0 & 158.8 & 165.9 \\
10 & 3 & 10 & 170.4 & 199.3 & 198.2 & 189.0 & 189.1 & 202.0 \\
10 & 3 & 50 & 219.0 & 248.3 & 249.8 & 241.9 & 239.9 & 254.1 \\
10 & 3 & 100 & 234.8 & 260.4 & 262.6 & 256.3 & 253.6 & 265.4 \\
20 & 6 & 2 & 172.1 & 193.7 & 190.6 & 188.9 & 187.5 & 189.1 \\
20 & 6 & 5 & 247.6 & 296.6 & 289.4 & 282.2 & 280.2 & 276.9 \\
20 & 6 & 10 & 292.7 & 351.0 & 346.2 & 334.6 & 332.3 & 337.2 \\
20 & 6 & 50 & 369.4 & 431.8 & 438.8 & 424.6 & 420.6 & 440.3 \\
20 & 6 & 100 & 395.6 & 457.7 & 461.2 & 450.8 & 446.6 & 464.5 \\
30 & 9 & 2 & 247.2 & 276.1 & 273.9 & 273.0 & 271.9 & 266.0 \\
30 & 9 & 5 & 351.2 & 416.2 & 408.6 & 398.3 & 395.9 & 384.1 \\
30 & 9 & 10 & 409.2 & 491.1 & 483.3 & 471.7 & 467.7 & 461.6 \\
30 & 9 & 50 & 513.1 & 605.5 & 610.3 & 592.4 & 588.6 & 607.0 \\
30 & 9 & 100 & 547.5 & 638.3 & 645.1 & 628.5 & 623.6 & 648.3
\end{tabular}
\caption{Average upper bounds.}\label{ubs}
\end{table}
\begin{table}
\centering
\begin{tabular}{rrr|r|r|rrr|r}
$n$ & $p$ & $N$ & OPT & Mid-LB & LP-1-LB & LP-2-LB & LP-3-LB & MM-LB \\
\hline
10 & 3 & 2 & 96.6 & 82.9 & 85.1 & 85.8 & 86.1 & 90.1 \\
10 & 3 & 5 & 142.9 & 108.3 & 121.9 & 122.1 & 121.1 & 129.4 \\
10 & 3 & 10 & 170.4 & 120.3 & 143.6 & 141.6 & 139.6 & 151.2 \\
10 & 3 & 50 & 219.0 & 136.7 & 183.0 & 178.2 & 174.4 & 186.2 \\
10 & 3 & 100 & 234.8 & 140.8 & 194.8 & 190.6 & 186.2 & 196.6 \\
20 & 6 & 2 & 172.1 & 160.5 & 161.6 & 162.1 & 162.4 & 166.2 \\
20 & 6 & 5 & 247.6 & 212.9 & 223.7 & 225.2 & 225.6 & 234.1 \\
20 & 6 & 10 & 292.7 & 238.5 & 260.9 & 261.2 & 260.3 & 272.9 \\
20 & 6 & 50 & 369.4 & 272.5 & 327.8 & 323.6 & 319.9 & 333.1 \\
20 & 6 & 100 & 395.6 & 280.8 & 348.1 & 343.8 & 339.7 & 351.2 \\
30 & 9 & 2 & 247.2 & 236.3 & 237.2 & 237.5 & 237.8 & 242.1 \\
30 & 9 & 5 & 351.2 & 316.3 & 325.0 & 328.3 & 328.9 & 337.9 \\
30 & 9 & 10 & 409.2 & 355.1 & 373.2 & 375.6 & 375.8 & 389.2 \\
30 & 9 & 50 & 513.1 & 408.0 & 467.8 & 464.3 & 460.7 & 475.3 \\
30 & 9 & 100 & 547.5 & 420.4 & 495.9 & 491.5 & 487.4 & 500.1
\end{tabular}
\caption{Average lower bounds.}\label{lbs}
\end{table}
\section{Conclusion}
Most robust combinatorial optimization problems are hard, which has led to the development of general approximation algorithms. The two best-known such approaches are the midpoint method and the element-wise worst-case approach. Both rely on constructing a single scenario that is representative of the whole uncertainty set. By reconsidering the respective proofs that both are $N$-approximation algorithms, we derive an optimization problem for constructing a representative scenario, which yields an approximation guarantee at least as good as that of the two previous approaches.
In computational experiments using the selection problem, we test this approach numerically. We find that the midpoint method gives a guarantee that is about 20\% larger than ours, while we only need to solve a simple linear program to construct the representative scenario. The improved a-priori guarantee is also reflected in an improved a-posteriori guarantee, with our approach providing both better upper and lower bounds than before. This smaller gap could potentially be used within branch-and-bound algorithms for a more efficient search for an optimal solution.
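To make the guarantees discussed above concrete, the following self-contained sketch (our own illustrative code, not the paper's implementation; the instance data and all function names are made up) runs the midpoint heuristic on a toy min-max selection instance and verifies the chain LB <= OPT <= UB <= N * LB that underlies the $N$-approximation argument:

```python
# Illustrative sketch (not from the paper): the midpoint heuristic for the
# robust selection problem -- choose p of n items minimizing the worst-case
# total cost over N discrete scenarios with non-negative costs.
from itertools import combinations

def midpoint_heuristic(scenarios, p):
    """Solve the selection problem for the average (midpoint) scenario."""
    n = len(scenarios[0])
    N = len(scenarios)
    mid = [sum(c[i] for c in scenarios) / N for i in range(n)]
    # Under a single scenario, selection is easy: pick the p cheapest items.
    x = sorted(range(n), key=lambda i: mid[i])[:p]
    return x, mid

def worst_case(scenarios, x):
    """Worst-case cost of the selection x over all scenarios."""
    return max(sum(c[i] for i in x) for c in scenarios)

def robust_optimum(scenarios, p):
    """Exact robust optimum by brute force (toy sizes only)."""
    n = len(scenarios[0])
    return min(worst_case(scenarios, x) for x in combinations(range(n), p))

scenarios = [[4, 2, 7, 1, 6], [3, 8, 2, 5, 4], [6, 1, 5, 9, 2]]
p, N = 2, len(scenarios)
x_hat, mid = midpoint_heuristic(scenarios, p)
ub = worst_case(scenarios, x_hat)    # a-posteriori upper bound
lb = sum(mid[i] for i in x_hat)      # midpoint value: a valid lower bound
opt = robust_optimum(scenarios, p)
# max_c <= N * average for non-negative costs gives UB <= N * LB.
assert lb <= opt <= ub <= N * lb
```

The ratio `ub / lb` is exactly the a-posteriori guarantee reported in the tables above; replacing the midpoint scenario by an optimized representative scenario only changes the first function.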
| {
"timestamp": "2018-02-28T02:07:48",
"yymm": "1802",
"arxiv_id": "1802.09792",
"language": "en",
"url": "https://arxiv.org/abs/1802.09792",
"abstract": "In robust combinatorial optimization with discrete uncertainty, two general approximation algorithms are frequently used, which are both based on constructing a single scenario representing the whole uncertainty set. In the midpoint method, one optimizes for the average case scenario. In the element-wise worst-case approach, one constructs a scenario by taking the worst case in each component over all scenarios. Both methods are known to be $N$-approximations, where $N$ is the number of scenarios.In this paper, these results are refined by reconsidering their respective proofs as optimization problems. We present a linear program to construct a representative scenario for the uncertainty set, which guarantees an approximation guarantee that is at least as good as for the previous methods. Incidentally, we show that the element-wise worst-case approach can have an advantage over the midpoint approach if the number of scenarios is large. In numerical experiments on the selection problem we demonstrate that our approach can improve the approximation guarantee of the midpoint approach by around 20%.",
"subjects": "Optimization and Control (math.OC)",
"title": "Constructing Representative Scenarios to Approximate Robust Combinatorial Optimization Problems",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9817357248544007,
"lm_q2_score": 0.8198933381139645,
"lm_q1q2_score": 0.8049185805966071
} |
https://arxiv.org/abs/math/0605401 | Bounds on the $f$-Vectors of Tight Spans | The tight span $T_d$ of a metric $d$ on a finite set is the subcomplex of bounded faces of an unbounded polyhedron defined by~$d$. If $d$ is generic then $T_d$ is known to be dual to a regular triangulation of a second hypersimplex. A tight upper and a partial lower bound for the face numbers of $T_d$ (or the dual regular triangulation) are presented. | \section{Introduction}
\noindent
Associated with a finite metric~$d:\{1,\dots,n\}\times\{1,\dots,n\}\to\RR$ is the unbounded polyhedron
\[ P_d \ = \ \SetOf{x\in\RR^n}{x_i+x_j\ge d(i,j)\text{ for all $i,j$}} \quad . \] Note that the condition ``for all
$i,j$'' includes the diagonal case $i=j$, implying that $P_d$ is contained in the positive orthant and thus pointed.
Following Dress~\cite{MR753872} we call the polytopal subcomplex $T_d$ formed of the bounded faces of~$P_d$ the
\emph{tight span} of~$d$; see also Bandelt and Dress~\cite{MR858908}. In Isbell's paper~\cite{MR0182949} the same
object arises as the \emph{injective envelope} of~$d$. The metric $d$ is said to be \emph{generic} if the
polyhedron~$P_d$ is simple.
Up to a minor technicality, the tight span $T_d$ is dual to a regular subdivision of the \emph{second hypersimplex}
\[ \Delta_{n,2} \ = \ \conv\SetOf{e_i+e_j}{1\le i<j\le n} \quad ,\]
and the tight spans for generic metrics correspond to regular triangulations.
The tight spans of metric spaces with at most six points have been classified by Dress~\cite{MR753872} and Sturmfels and
Yu~\cite{MR2097310}; see also De Loera, Sturmfels, and Thomas~\cite{MR1357285} for further details.
Develin~\cite{Develin} obtained sharp upper and lower bounds for the dimension of a tight span of a metric on a given
number of points. The present paper can be seen as a refined analysis of Develin's paper. Our main result is the
following.
\begin{thm*}
The number of $k$-faces in a tight span of a metric on $n$ points is at most
\[ 2^{n-2k-1}\frac{n}{n-k}\binom{n-k}{k} \quad , \]
and for each $n$ there is a metric $\dmax^n$ uniformly attaining this upper bound.
\end{thm*}
In particular, this result says that there are no $k$-faces for $k>\lfloor n/2\rfloor$, which is Develin's upper bound
on the dimension of a tight span. Since the vertices of the tight span correspond to the facets of a hypersimplex
triangulation, and since further $\Delta_{n,2}$ admits a unimodular triangulation, the upper bound of $2^{n-1}$ for
the number of vertices of~$T_d$ (the case $k=0$ above) is essentially the volume of $\Delta_{n,2}$. In fact, the normalized volume of
$\Delta_{n,2}$ equals $2^{n-1}-n$, but this minor difference will be explained later.
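The bound of the theorem is easy to evaluate; the following short script (our own, not part of the paper) does so exactly with rational arithmetic and checks the two boundary cases just mentioned:

```python
# Sketch: evaluate the face-number bound 2^{n-2k-1} * n/(n-k) * C(n-k, k)
# from the theorem above; the function name is our own.
from fractions import Fraction
from math import comb

def face_bound(n, k):
    """Upper bound for the number of k-faces of a tight span on n points."""
    return Fraction(2) ** (n - 2 * k - 1) * Fraction(n, n - k) * comb(n - k, k)

# k = 0 recovers the bound 2^(n-1) on the number of vertices of T_d ...
assert all(face_bound(n, 0) == 2 ** (n - 1) for n in range(3, 12))
# ... and the bound vanishes for k > floor(n/2), matching Develin's
# upper bound on the dimension of a tight span.
assert face_bound(7, 4) == 0
```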
The paper is organized as follows. We start out with a section on the combinatorics of unbounded convex polyhedra.
In particular, we are concerned with the situation where such a polyhedron, say $P$, of dimension~$n$, is simple, that is,
each vertex is contained in exactly $n$ facets. It then turns out that the $h$-vector of the simplicial ball which is
dual to the bounded subcomplex of~$P$ has an easy combinatorial interpretation in terms of the vertex-edge graph of~$P$. This
is based on, and at the same time generalizes, a result of Kalai~\cite{MR964396}. Further, translated to the dual,
Develin's upper bound on the dimension of a tight span says that a regular triangulation of a second
hypersimplex $\Delta_{n,2}$ does not have any interior faces of dimension up to $\lfloor (n-1)/2\rfloor-1$. As a
variation of a concept studied by McMullen~\cite{MR2070631} and others we call such triangulations \emph{almost
small-face-free}, or \emph{asff} for short. The Dehn-Sommerville equations for the boundary then yield strong restrictions
on the $h$-vector of an asff simplicial ball. Applying these techniques to the specific case of hypersimplex
triangulations leads to the desired result. The final two sections focus on the construction of extremal metrics. Here
the metric $\dmax^n$ is shown to uniformly attain the upper bound on the $f$-vector. The situation turns out to be more
complicated as far as lower bounds are concerned. The paper concludes with a lower bound for the number of faces of
maximal dimension of a tight span of dimension $\lceil n/3\rceil$, which is Develin's lower bound, and we construct
a metric $\dmin^n$ which attains this lower bound. However, we do not have a tight lower bound for the number of faces
of smaller dimension. Our analysis suggests that such a result might require classifying all possible $f$-vectors of
tight spans, a task beyond the scope of this paper.
\section{Combinatorics of Unbounded Polyhedra}
\noindent
A \emph{(convex) polyhedron} is the intersection of finitely many affine halfspaces in Euclidean space. Equivalently, it
is the set of feasible solutions of a linear program. A polyhedron $P$ is called \emph{pointed} if it does not contain
any affine line or, equivalently, its lineality space is trivial. Further, $P$ is pointed if and only if it has at
least one vertex. A \emph{(convex) polytope} is a bounded polyhedron. For basic facts about polytopes and polyhedra
the reader may consult Ziegler~\cite{MR1311028}.
For a (not necessarily bounded) pointed polyhedron $P$ we denote the face poset by $\cF(P)$. If $P$ is bounded then
$\cF(P)$ is an Eulerian lattice. Two pointed polyhedra are called \emph{combinatorially equivalent} if their face posets
are isomorphic.
A polyhedron $P$ is pointed if and only if it is projectively equivalent to a polytope. For this reason one can always
think of a pointed polyhedron $P$ as a polytope $P'$ with one face marked: the \emph{face at infinity}. However, this
is not the only way to turn an unbounded polyhedron into a polytope: Take an affine halfspace $H^+$ which contains all
the vertices of $P$ and whose boundary hyperplane $H$ intersects all the unbounded edges.
\begin{lem}
The combinatorial type of the polytope $\bar{P}=P\cap H^+$ only depends on the combinatorial type of $P$.
\end{lem}
\begin{proof}
The vertices of $\bar{P}$ come in two kinds: Either they are vertices of~$P$ or they are intersections of rays of~$P$
with the hyperplane~$H$. The rays can be recognized in the face poset of the unbounded polyhedron~$P$ as those edges
which contain only one vertex. The claim now follows from the fact that the face lattice of the polytope $\bar{P}$ is
atomic, that is, each face of~$\bar{P}$ is the join of vertices of~$\bar{P}$.
\end{proof}
We call $\bar{P}$ the \emph{closure} of~$P$.
The vertices and the bounded edges of a polyhedron~$P$ form an abstract graph which we denote by~$\Gamma(P)$. Note that
in the unbounded case the rays (or unbounded edges) of~$P$ are not represented in~$\Gamma(P)$.
An $n$-dimensional pointed polyhedron $P$ is \emph{simple} if each vertex is contained in exactly $n$ facets. Clearly,
simplicity is a combinatorial property. If $P$ is bounded, that is, $P$ is a polytope, then it is simple if and only if
the graph $\Gamma(P)$ is $n$-regular.
\begin{prop}
The pointed polyhedron $P$ is simple if and only if its closure $\bar{P}$ is.
\end{prop}
\begin{proof}
If $P$ is a simple polyhedron, then $P$ is combinatorially equivalent to a polyhedron $Q$ which is the intersection of
(facet defining) affine halfspaces in general position. Without loss of generality we can choose an affine hyperplane
$H$ which is in general position with respect to the facets of~$Q$ and which has the property that $H^+$ contains the
vertices of~$P$. Then $Q\cap H$ is simple, that is, $\Gamma(Q\cap H)$ is $(n-1)$-regular. By construction each
vertex of $Q\cap H$ is contained in exactly one unbounded edge of~$Q$. This implies that the graph of the closure
$\Gamma(Q\cap H^+)$ is $n$-regular, whence $\bar{Q}=Q\cap H^+$ is simple. The reverse implication is trivial.
\end{proof}
\begin{prop}
The combinatorial type of $\bar{P}$ is determined by the $2$-skeleton $\cF_{\le 2}(P)$.
\end{prop}
\begin{proof}
The unbounded edges of~$P$ are exactly those edges which contain exactly one vertex each. Hence $\cF_{\le 2}(P)$
determines the vertices of the face $P\cap H$ in the closure $\bar{P}=P\cap H^+$. The edges of $P\cap H$ correspond
to the unbounded $2$-faces of~$P$, that is, those $2$-faces which contain two unbounded edges. Altogether $\cF_{\le
2}(P)$ determines the graph of the simple polytope $\bar{P}$. A result of Blind and Mani~\cite{MR921106} then
yields the claim.
\end{proof}
The \emph{bounded subcomplex} $\bounded{P}$ of an unbounded polyhedron $P$ is the polyhedral subcomplex of the boundary
$\partial P$ of~$P$ which is formed of the bounded faces. Clearly, $\bounded{P}$ is contractible. The graph
$\Gamma(P)$ is the $1$-skeleton of the bounded subcomplex.
Kalai's proof~\cite{MR964396} of the aforementioned result of Blind and Mani~\cite{MR921106} is based on a
characterization of the $h$-vector of a simple polytope in terms of acyclic orientations of its graph. The remainder of
this section is devoted to explaining how this can be extended to bounded subcomplexes of unbounded polyhedra.
Consider an $n$-dimensional pointed polyhedron $P\subset\RR^n$ which is unbounded and a generic linear objective
function~$\alpha:\RR^n\to\RR$. Let us assume that $\alpha$ is \emph{generic} on $\bar{P}=P\cap H^+$, that is, it is
$1$--$1$ on the vertices of~$\bar{P}$. In this way each edge of~$P$, bounded or not, becomes a directed arc, oriented,
say, in the direction of decreasing~$\alpha$. Let us assume further that $\alpha$ is \emph{initial} with respect to $\bar{P}\cap H=P\cap H$,
that is, there are no arcs pointing towards the face $\bar{P}\cap H$ of $\bar{P}$. In the language of linear
optimization, this means that the linear program $\max\smallSetOf{\alpha x}{x\in P}$ is unbounded and that the reverse
linear program
\[ \min\SetOf{\alpha x}{x\in P} \]
has a unique optimal vertex.
For each vertex $v$ of $\bar{P}$ let the \emph{out-degree} $\outdeg v$, with respect to~$\alpha$, be the number of edges
in $\bar{P}$ which are incident with~$v$ and directed away from~$v$. For any subset $U$ of the vertices of~$\bar{P}$ we
let
\[ h_i(U)=\#\SetOf{v\in U}{\outdeg v=i} \; ;\]
in particular, $h_i(P)$ below refers to the set of vertices of~$P$, that is, the vertices of~$\bar{P}$ which do not lie on~$H$.
\begin{prop}\label{prop:f-from-h}
We have
\[
f_k(\bounded{P})\ =\ \sum_{i=k}^n\binom{i}{k}h_i(P) \quad .
\]
\end{prop}
\begin{proof}
Each non-empty bounded face $F$ of $P$ has a unique $\alpha$-maximal vertex~$v=\argmax\alpha(F)$. Conversely, $F$ is
the unique face of~$P$ spanned by the edges of~$F$ which are incident with $v$. This way
$\binom{i}{k}h_i(P)$ counts those $k$-dimensional faces $F\le \bar{P}$ whose maximal vertex is not in $\bar{P}\cap H$ and
which satisfy $\outdeg\argmax\alpha(F)=i$.
\end{proof}
Later we will be interested in maximizing the $f$-vector of the bounded subcomplexes of certain unbounded polyhedra.
Because the binomial coefficients are non-negative, the previous proposition implies that maximizing the $f$-vector is
equivalent to maximizing the $h$-vector.
\section{Combinatorics of Simplicial Balls}\label{section:simplicial-balls}
\noindent
For an arbitrary $n$-dimensional simplicial complex $K$ with $f$-vector $f(K)$ we can define its \emph{$h$-vector} by
letting
\begin{equation}\label{eq:h-from-f}
h_k(K)=\sum_{i=0}^k(-1)^{k-i}\binom{n+1-i}{n+1-k}f_{i-1}(K) \quad .
\end{equation}
Moreover, the \emph{$g$-vector} is set to $g_0(K)=1$ and $g_k(K)=h_k(K)-h_{k-1}(K)$ for $k\ge 1$.
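Equation~\eqref{eq:h-from-f} is a finite binomial transform and easy to implement; the following sketch (our own code, with $f_{-1}=1$ for the empty face) computes the $h$-vector from the $f$-vector and reproduces the vectors listed later in Example~\ref{exmp:octahedron}:

```python
# Sketch of equation (h-from-f): the h-vector of an n-dimensional
# simplicial complex from its f-vector (f_0, ..., f_n), with f_{-1} = 1.
from math import comb

def h_from_f(f):
    n = len(f) - 1                      # dimension of the complex
    fe = [1] + list(f)                  # prepend f_{-1} = 1 (the empty face)
    return [sum((-1) ** (k - i) * comb(n + 1 - i, n + 1 - k) * fe[i]
                for i in range(k + 1))
            for k in range(n + 2)]

# The triangulation Theta of the octahedron (Example exmp:octahedron):
assert h_from_f([6, 13, 12, 4]) == [1, 2, 1, 0, 0]   # h(Theta)
assert h_from_f([6, 12, 8]) == [1, 3, 3, 1]          # h(boundary Theta)
```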
As a consequence of the Euler equation, iteratively applied to intervals in the face lattice, we obtain the
\emph{Dehn-Sommerville relations}.
\begin{thm}\label{thm:DS}
For each simplicial $(n-1)$-sphere~$S$ we have
\[
h_k(S)\ =\ h_{n-k}(S) \quad .
\]
\end{thm}
As a further consequence the $f$-vectors (or $g$- or $h$-vectors) of a simplicial ball and its boundary are related.
\begin{thm}{\rm (McMullen and Walkup~\cite{MR0298557})}\label{thm:DS-ball} For each simplicial $(n-1)$-ball~$B$ we have
\[
g_k(\partial B)\ =\ h_k(B)-h_{n-k}(B) \quad .
\]
\end{thm}
See also Billera and Bj\"orner~\cite{MR1730171} and McMullen~\cite[Corollary 2.6]{MR2070631}.
Let $\interior{B}$ be the set of interior faces of the ball~$B$. Although $\interior{B}$ is not a polyhedral complex we
nonetheless write $f(\interior{B}):=f(B)-f(\partial B)$ for its $f$-vector. Formally, we can also define the $h$-vector
of the interior faces of a ball by using the equation~\eqref{eq:h-from-f}.
\begin{prop}\label{prop:h-int}
For each simplicial $(n-1)$-ball~$B$ we have
\[ h_{n-k}(B) \ = \ h_k(\interior{B}) \quad . \]
\end{prop}
\begin{proof}
\begin{align*}
h_{n-k}(B)\ \whyrelation{\ref{thm:DS-ball}}{=}\
& h_k(B)-g_k(\partial B)\ \whyrelation{\eqref{eq:h-from-f}}{=}\
\sum_{i=0}^k(-1)^{k-i}\binom{n-i}{n-k}f_{i-1}(B) \\
& - \left(\sum_{i=0}^k(-1)^{k-i}\binom{n-i-1}{n-k-1}f_{i-1}(\partial B)
- \sum_{i=0}^{k-1}(-1)^{k-i-1}\binom{n-i-1}{n-k}f_{i-1}(\partial B)\right)\\
=\ & \sum_{i=0}^k(-1)^{k-i}\binom{n-i}{n-k}\bigl(f_{i-1}(\interior{B})+f_{i-1}(\partial B)\bigr)
\;-\; \sum_{i=0}^k(-1)^{k-i}\binom{n-i}{n-k}f_{i-1}(\partial B)\\
=\ & \sum_{i=0}^k(-1)^{k-i}\binom{n-i}{n-k}f_{i-1}(\interior{B}) \ \whyrelation{\eqref{eq:h-from-f}}{=} \ h_k(\interior{B}) \quad .
\end{align*}
\end{proof}
The following proposition is due to McMullen~\cite[Proposition 2.4c]{MR2070631}. We include its simple proof for the
sake of completeness.
\begin{prop}\label{prop:asff-h-vanishing}
Let $B$ be a simplicial $(n-1)$-ball without any interior faces of dimension up to~$e$. Then
\[
h_k(B)\ = \ 0 \; \text{ for $k\ge n-e-1$}\qquad \text{and}\qquad h_k(B)\ =\ g_k(\partial B) \; \text{ for $k\le e+1$}\quad .
\]
\end{prop}
\begin{proof}
Our assumption on the interior faces says that $f_k(\interior{B})=0$ for $k\le e$. From the proof of
Proposition~\ref{prop:h-int} we see that
\[
h_{n-k}(B) \ = \ \sum_{i=0}^k(-1)^{k-i}\binom{n-i}{n-k}f_{i-1}(\interior{B}) \quad ,
\]
which directly proves $h_{n-k}(B)=0$ for $k\le e+1$. Applying Theorem~\ref{thm:DS-ball} once again also proves the
second claim.
\end{proof}
Of special interest is the case of a simplicial ball without small interior faces. Following
McMullen~\cite[\S3]{MR2070631} we call a face $\sigma$ of a simplicial $(n-1)$-ball \emph{small} if
$\dim\sigma\le\lfloor(n-1)/2\rfloor$, and it is \emph{very small} if $\dim\sigma<\lfloor(n-1)/2\rfloor$. A simplicial
$(n-1)$-ball is \emph{(almost) small-face-free}, abbreviated \emph{(a)sff}, if it does not have any (very) small
interior faces.
\begin{cor}\label{cor:defined-by-boundary-n-odd}
The $f$-vector of an $(n-1)$-dimensional asff simplicial ball, for $n$ odd, is determined by the $f$-vector of its boundary.
\end{cor}
\begin{proof}
Assume that $B$ is an $(n-1)$-dimensional asff simplicial ball. Then we have
\[
f_k(B) \ = \ \sum_{i=k}^{n} \binom{i}{k}h_i(B)
\ \whyrelation{\ref{prop:asff-h-vanishing}}{=} \
\sum_{i=k}^{(n-1)/2} \binom{i}{k}g_i(\partial B) \quad .
\]
\end{proof}
A similar computation shows the following analog for $n$ even.
\begin{cor}\label{cor:defined-by-boundary-n-even}
The $f$-vector of an $(n-1)$-dimensional asff simplicial ball, for $n$ even, is determined by the $f$-vector of its
boundary and $f_{n/2-1}=h_{n/2-1}$.
\end{cor}
A polytope is \emph{simplicial} if each proper face is a simplex. Equivalently, its boundary complex is a simplicial
sphere. In terms of cone polarity, simplicity and simpliciality of polytopes are dual notions. In this way, the bounded
subcomplex $\bounded{P}$ of an unbounded simple $n$-polyhedron $P$ becomes the set of interior faces of a simplicial
$(n-1)$-ball $B(P)$ in the boundary of the polar dual $\bar{P}^*$ of the closure. The facets of $B(P)$ bijectively
correspond to the vertices of~$P$. As an equation of $f$-vectors this reads as follows.
\begin{equation}\label{eq:bounded-f}
f_k(\bounded{P}) \ = \ f_{n-k-1}(B(\bar{P}^*))-f_{n-k-1}(\partial B(\bar{P}^*)) \ = \ f_{n-k-1}(\interior{B(\bar{P}^*)})
\end{equation}
Moreover, since $h(\interior{B(\bar{P}^*)})$ is defined via the equation~\eqref{eq:h-from-f},
Proposition~\ref{prop:f-from-h} implies that
\begin{equation}\label{eq:h-h}
h_{n-k}(\bounded{P}) \ = \ h_k(\interior{B(\bar{P}^*)}) \ \whyrelation{\ref{prop:h-int}}{=} \ h_{n-k}(B(\bar{P}^*))\quad .
\end{equation}
\begin{exmp}
A simplicial $n$-polytope is \emph{neighborly} if any set of $\lfloor n/2\rfloor$ vertices forms a face. Examples are
provided by the \emph{cyclic polytopes}, that is, the convex hulls of finitely many points on the \emph{moment curve}
\[ t\mapsto(t,t^2,\dots,t^n) \quad .\] The definition of neighborliness readily implies that any triangulation of a
neighborly simplicial polytope without additional vertices is asff. Corollary~\ref{cor:defined-by-boundary-n-odd} now
says that each triangulation of an even-dimensional neighborly simplicial polytope has the same $f$-vector. Such
polytopes are called \emph{equidecomposable}.
\end{exmp}
The next example will suitably be generalized in Section~\ref{sec:max}.
\begin{exmp}\label{exmp:octahedron}
Any triangulation of a $3$-polytope without additional vertices is asff. For instance, see the triangulation $\Theta$
of the regular octahedron in Figure~\ref{fig:exmp}. Here we have
\[
\begin{array}{lclclclclcl}
f(\Theta) &=& (6,13,12,4) \ , &\quad& f(\partial\Theta) &=& (6,12,8) \ , &\quad& f(\interior{\Theta}) &=&
(0,1,4,4) \ , \\
h(\Theta) &=& (1,2,1,0,0) \ , &\quad& h(\partial\Theta) &=& (1,3,3,1) \ , &\quad& h(\interior{\Theta}) &=&
(0,0,1,2,1) \ .
\end{array}
\]
\end{exmp}
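On the numbers of this example, the relations of Theorems~\ref{thm:DS} and~\ref{thm:DS-ball} and Proposition~\ref{prop:h-int} can be verified directly; the following few lines (our own, purely illustrative) do so:

```python
# Sketch: checking the Dehn-Sommerville-type relations on the h-vectors
# listed in the example above (triangulation Theta of the octahedron, n = 4).
h_B = [1, 2, 1, 0, 0]        # h(Theta), a simplicial 3-ball
h_boundary = [1, 3, 3, 1]    # h(boundary Theta), a simplicial 2-sphere
h_int = [0, 0, 1, 2, 1]      # h(interior Theta)
n = 4

# Theorem (thm:DS): the h-vector of the boundary sphere is palindromic.
assert h_boundary == h_boundary[::-1]

# Theorem (thm:DS-ball): g_k(dTheta) = h_k(Theta) - h_{n-k}(Theta).
g_boundary = [h_boundary[0]] + [h_boundary[k] - h_boundary[k - 1]
                                for k in range(1, len(h_boundary))]
assert all(g_boundary[k] == h_B[k] - h_B[n - k]
           for k in range(len(h_boundary)))

# Proposition (prop:h-int): h_{n-k}(B) = h_k(interior B), i.e. reversal.
assert h_int == h_B[::-1]
```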
\section{Tight Spans and Triangulations of Hypersimplices}
\noindent
A \emph{distance function} is a symmetric matrix with real coefficients and a zero diagonal. We identify distance
functions with vectors in $\RR^{\binom{n}{2}}$ in a natural way. A non-negative distance function $d$ is a
\emph{metric} if it satisfies the triangle inequality $d(i,k)\le d(i,j)+d(j,k)$ for all $i,j,k$.
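These axioms are straightforward to check by machine; the sketch below (our own code; the matrix is the metric~$d$ of Example~\ref{exmp:4points} further down) tests symmetry, the zero diagonal, non-negativity, and the triangle inequality:

```python
# Sketch: checking the metric axioms for a distance matrix given as a
# list of rows; the function name is ours, the matrix is the example
# metric d from equation (eq:exmp) later in the text.
from itertools import product

def is_metric(d):
    n = len(d)
    return (all(d[i][i] == 0 for i in range(n))
            and all(d[i][j] == d[j][i] >= 0
                    for i in range(n) for j in range(n))
            and all(d[i][k] <= d[i][j] + d[j][k]
                    for i, j, k in product(range(n), repeat=3)))

d = [[0, 2, 3, 2],
     [2, 0, 2, 3],
     [3, 2, 0, 2],
     [2, 3, 2, 0]]
assert is_metric(d)
```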
We recall some definitions from the introduction. Each finite metric $d\in\RR^{\binom{n}{2}}$ gives rise to a pointed
unbounded polyhedron
\[ P_d \ = \ \SetOf{x\in\RR^n}{x_i+x_j\ge d(i,j)\text{ for all $i,j$}} \quad . \] The bounded subcomplex $T_d :=
\bounded{P_d}$ is called the \emph{tight span} of~$d$. The metric $d$ is \emph{generic} if the polyhedron~$P_d$ is
simple.
The \emph{second hypersimplex} \[ \Delta_{n,2} \ = \ \conv\SetOf{e_i+e_j}{1\le i<j\le n} \] is an $(n-1)$-polytope which
is not simplicial. In fact, its facets are either $(n-2)$-simplices or $(n-2)$-dimensional hypersimplices
$\Delta_{n-1,2}$. As in De Loera, Sturmfels, and Thomas~\cite{MR1357285} we will use graph theory language in order to
describe a regular polyhedral subdivision $\Delta^d$ of $\Delta_{n,2}$ induced by the metric~$d$: If we identify the
vertices of $\Delta_{n,2}$ with the edges of the complete graph $K_n$ in a natural way then the cells of $\Delta^d$
correspond to subgraphs $\Gamma$ of $K_n$ (represented by their edge sets) which admit a height function
$\lambda\in\RR^n$ satisfying
\[
\lambda_i+\lambda_j=d(i,j) \text{ if $\{i,j\}$ is an edge}
\quad\text{and}\quad
\lambda_i+\lambda_j>d(i,j) \text{ if $\{i,j\}$ is not an edge of~$\Gamma$}\quad .
\]
The metric $d$ is generic if and only if $\Delta^d$ is a (regular) triangulation. Conversely, each regular
triangulation of $\Delta_{n,2}$ gives rise to a generic metric. Hence in the generic case we can apply the results from
the previous sections.
In the next few steps we will explore the structure of $T_d$ in terms of the dual simplicial ball $\Delta^d$. To this
end it is instrumental to begin with detailed information about the dual graph of~$\Delta_{n,2}$. The small cases are,
of course, special: $\Delta_{3,2}$ is a triangle, and $\Delta_{4,2}$ is an octahedron, as studied in
Example~\ref{exmp:octahedron}. The following is well known, which is why we omit the (simple) proof.
\begin{lem}\label{lem:hypersimplex-dual-graph}
Let $n\ge 5$. Then the second hypersimplex $\Delta_{n,2}$ has $n$~facets isomorphic with $\Delta_{n-1,2}$ and $n$
simplex facets. Any two facets of hypersimplex type are adjacent, and their intersection is isomorphic with
$\Delta_{n-2,2}$. No two simplex facets are adjacent. Each simplex facet is adjacent to $n-1$ hypersimplex facets.
\end{lem}
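Part of Lemma~\ref{lem:hypersimplex-dual-graph} can be confirmed for $n=5$ by direct enumeration; the sketch below (our own, purely illustrative) uses the facet description $x_i=0$ (hypersimplex type) and $x_i=1$ (simplex type) of $\Delta_{n,2}$:

```python
# Sketch verifying facet counts of the lemma above for n = 5: the second
# hypersimplex Delta_{5,2} has vertices e_i + e_j, hypersimplex facets on
# the hyperplanes x_i = 0, and simplex facets on x_i = 1.
from itertools import combinations
from math import comb

n = 5
vertices = [tuple(1 if m in {i, j} else 0 for m in range(n))
            for i, j in combinations(range(n), 2)]

for i in range(n):
    on_zero = [v for v in vertices if v[i] == 0]   # hypersimplex facet
    on_one = [v for v in vertices if v[i] == 1]    # simplex facet
    assert len(on_zero) == comb(n - 1, 2)          # vertex count of Delta_{n-1,2}
    assert len(on_one) == n - 1                    # vertex count of an (n-2)-simplex

# Two simplex facets x_i = 1 and x_j = 1 meet in the single vertex
# e_i + e_j only, so no two simplex facets are adjacent.
for i, j in combinations(range(n), 2):
    common = [v for v in vertices if v[i] == 1 and v[j] == 1]
    assert len(common) == 1
```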
A consequence of this observation is that all the faces of a hypersimplex are either hypersimplices or simplices.
\begin{prop}\label{prop:inductive-step}
For $n\ge 5$ let $\Delta$ be a triangulation of $\Delta_{n,2}$ such that on each $m$-dimensional hypersimplex face a
triangulation with the same $f$-vector $(f^{(m)}_0,\dots,f^{(m)}_m)$ is induced. Then we obtain
\[
f_{n-2}(\partial\Delta) \ = \ n + n f^{(n-2)}_{n-2}
\quad \text{and} \quad
f_k(\partial\Delta) \ = \ \sum_{i=1}^{n-1-k} (-1)^{i-1} \binom{n}{i} f^{(n-1-i)}_k \; \text{for $k<n-2$} \quad .
\]
\end{prop}
\begin{proof}
The claim for $f_{n-2}$ follows from the fact that $\Delta_{n,2}$ has $n$ simplex facets and $n$ hypersimplex facets,
and that we assumed that each hypersimplex facet is triangulated into $f^{(n-2)}_{n-2}$ simplices of dimension~$n-2$.
Lemma~\ref{lem:hypersimplex-dual-graph} says that the subgraph of the dual graph of $\Delta_{n,2}$ induced on the
hypersimplex facets is a complete graph $K_n$. Moreover, each face of dimension less than $n-2$ arises as a subface
of a hypersimplex facet. Therefore only the triangulations of the hypersimplex facets have to be taken into account,
where doubles have to be removed. The claim then follows from a standard inclusion-exclusion argument.
\end{proof}
Clearly, Proposition~\ref{prop:inductive-step} translates into various equations for the $g$- and $h$-vectors. We
choose to establish the following relation.
\begin{cor}\label{cor:inductive-step}
For $n\ge 5$ let $\Delta$ be a triangulation of $\Delta_{n,2}$ such that on each $m$-dimensional hypersimplex face a
triangulation with the same $f$-vector $(f^{(m)}_0,\dots,f^{(m)}_m)$ is induced. Then we obtain
\[
g_k(\partial\Delta) \ = \ \sum_{i=1}^n\sum_{j=0}^{\min(i,k)}(-1)^{i+j-1}\binom{n}{i}\binom{i}{j}h^{(n-1-i)}_{k-j} \quad
\text{for $k\le\lfloor n/2\rfloor$.}
\]
Here $(h^{(k)}_0,\dots,h^{(k)}_k)$ denotes the common $h$-vector of the $k$-faces.
\end{cor}
\begin{proof}
\begin{align*}
g_k(\partial\Delta) \ & = \ \sum_{i=0}^k(-1)^{k-i}\binom{n-i}{k-i}f_{i-1}(\partial\Delta) \\
& \whyrelation{\ref{prop:inductive-step}}{=} \
\sum_{i=0}^k(-1)^{k-i}\binom{n-i}{k-i} \, \left(\sum_{j=1}^{n-i}(-1)^{j-1}\binom{n}{j}f^{(n-1-j)}_{i-1}\right) \\
& = \ \sum_{i=0}^k(-1)^{k-i}\binom{n-i}{k-i} \, \left(\sum_{j=1}^{n}(-1)^{j-1}\binom{n}{j}f^{(n-1-j)}_{i-1}\right)
\quad \text{(since $f^{(n-1-j)}_{i-1}=0$ if $j>n-i$)} \\
& = \ \sum_{i=0}^k\sum_{j=1}^{n}\binom{n}{j}(-1)^{k-i+j-1}\binom{n-i}{k-i} f^{(n-1-j)}_{i-1}\\
& = \ \sum_{i=0}^k\sum_{j=1}^{n}\binom{n}{j}(-1)^{k-i+j-1}\left(\sum_{l=0}^j\binom{j}{l}\binom{n-j-i}{k-l-i}\right)
f^{(n-1-j)}_{i-1} \\
& = \ \sum_{j=1}^{n} \sum_{l=0}^j (-1)^{j+l-1}\binom{n}{j}\binom{j}{l} \sum_{i=0}^k (-1)^{k-l-i}\binom{n-j-i}{k-l-i}
f^{(n-1-j)}_{i-1} \\
& = \ \sum_{j=1}^{n} \sum_{l=0}^j (-1)^{j+l-1}\binom{n}{j}\binom{j}{l} \sum_{i=0}^{k-l} (-1)^{k-l-i}\binom{n-j-i}{(n-j)-(k-l)}
f^{(n-1-j)}_{i-1} \\
& = \ \sum_{j=1}^{n} \sum_{l=0}^j (-1)^{j+l-1}\binom{n}{j}\binom{j}{l} h^{(n-1-j)}_{k-l} \\
& = \ \sum_{j=1}^{n} \sum_{l=0}^{\min(j,k)} (-1)^{j+l-1}\binom{n}{j}\binom{j}{l} h^{(n-1-j)}_{k-l} \quad .
\end{align*}
\end{proof}
We call a distance function $e\in\RR^{\binom{n}{2}}$ \emph{isolated} if there is an index $i\in\{1,\dots,n\}$ and a (not
necessarily positive) real number $\lambda\ne 0$ such that $e(i,j)=e(j,i)=\lambda$ for all $j\ne i$ and $e(j,k)=0$
otherwise. Moreover, we say that two metrics are \emph{equivalent} if they differ by a linear combination of isolated
distance functions. The following is known.
\begin{prop}\label{prop:equivalent-metrics}
Let $d$ be a generic metric.
\begin{enumerate}
\item If $d$ and $d'$ are equivalent metrics then $\Delta^d=\Delta^{d'}$.
\item For each generic metric $d$ there is a unique equivalent generic metric $d'$ such that $B(P_{d'})$ is
combinatorially equivalent to $\Delta^{d'}=\Delta^d$.\label{it:ideal}
\end{enumerate}
\end{prop}
A metric $d'$ is \emph{ideal} if it satisfies $\Delta^{d'}\cong B(P_{d'})$.
Proposition~\ref{prop:equivalent-metrics}\eqref{it:ideal} then reads as: Each generic metric is equivalent to an ideal
one. The equivalence class of metrics of an ideal generic metric $d'$ on $n$ points can be described as follows: The
triangulation $\Delta^{d'}$ induces a triangulation of the boundary of the hypersimplex $\Delta_{n,2}$. For $n\ge 5$,
$\Delta_{n,2}$ has $n$ simplex facets, and the simplicial balls $B(P_d)$ corresponding to non-ideal metrics equivalent
to $d'$ arise from $\Delta^{d'}=\Delta^d$ by gluing additional $(n-1)$-simplices to the simplex facets of
$\Delta_{n,2}$.
\begin{exmp}\label{exmp:4points}
Consider the metric on four points given by the matrix
\begin{equation}\label{eq:exmp}
d=
\begin{pmatrix}
0 & 2 & 3 & 2\\
2 & 0 & 2 & 3\\
3 & 2 & 0 & 2\\
2 & 3 & 2 & 0
\end{pmatrix} \quad .
\end{equation}
The metric~$d$ turns out to be generic, and the tight span $T_d=\bounded{P_d}$ is $2$-dimensional. The corresponding
simplicial ball $\Delta^d$ is a triangulation of the regular octahedron, that is, the hypersimplex $\Delta_{4,2}$. See
Figure~\ref{fig:exmp}.
\begin{figure}[htbp]\centering
\includegraphics[width=.2\textwidth]{4points}
\qquad
\includegraphics[width=.2\textwidth]{4points-eq}
\qquad
\includegraphics[width=.2\textwidth]{octa}
\caption{The tight span of the metric~$d$ defined in~\eqref{eq:exmp}, the tight span of an equivalent ideal metric
$d'$, and the corresponding triangulation $\Delta^d=\Delta^{d'}$. Images produced with \texttt{polymake}~\cite{polymake} and
\texttt{JavaView}~\cite{javaview}\label{fig:exmp}}
\end{figure}
The metric
\[
d'=
\begin{pmatrix}
0 & 1 & 2 & 1\\
1 & 0 & 1 & 2\\
2 & 1 & 0 & 1\\
1 & 2 & 1 & 0
\end{pmatrix}
\]
is equivalent to~$d$ and ideal, that is, its tight span satisfies $T_{d'}\cong\Delta^{d'}=\Delta^d$.
\end{exmp}
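The stated properties of these two matrices are easy to verify mechanically. The following sketch (plain Python; the function name is ours) checks that both matrices are symmetric with zero diagonal and positive off-diagonal entries, and that they satisfy the triangle inequality, i.e., that both define metrics on four points.

```python
# The two distance matrices of the example above (0-indexed).
d = [[0, 2, 3, 2],
     [2, 0, 2, 3],
     [3, 2, 0, 2],
     [2, 3, 2, 0]]

dp = [[0, 1, 2, 1],
      [1, 0, 1, 2],
      [2, 1, 0, 1],
      [1, 2, 1, 0]]

def is_metric(m):
    n = len(m)
    for i in range(n):
        if m[i][i] != 0:
            return False
        for j in range(n):
            if m[i][j] != m[j][i] or (i != j and m[i][j] <= 0):
                return False
    # triangle inequality d(i,j) <= d(i,k) + d(k,j)
    return all(m[i][j] <= m[i][k] + m[k][j]
               for i in range(n) for j in range(n) for k in range(n))

print(is_metric(d), is_metric(dp))  # True True
```

Note that for $d'$ the triangle inequality is tight, e.g. $d'(1,3)=d'(1,2)+d'(2,3)=2$.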
\begin{lem}\label{lem:ideal}
Let $d,d'\in\RR^{\binom{n}{2}}$ be equivalent metrics such that $d'$ is ideal. Then $h_k(T_d)=h_k(T_{d'})$ for
$k\ne1$ and $h_1(T_d)\le h_1(T_{d'})+n$.
\end{lem}
Throughout the following we consider a fixed generic metric~$d$.
We summarize results of Develin~\cite{Develin}. As before we identify a metric $d$ on $n$ points with an element of
$\RR^{\binom{n}{2}}$ and a graph on $n$ nodes with a $0/1$-vector of the same length $\binom{n}{2}$.
\begin{defn}
For a given weight vector $w\in\RR_+^n$ on $n$ points we call a non-negative vector $\mu\in\RR^{\binom{n}{2}}$ a
\emph{fractional $w$-matching} if $\sum_{i=1}^{n}\mu(i,j)=w_j$ for all $1\le j\le n$. The \emph{support} $\supp\mu$ is
the graph of those edges $(i,j)$ with $\mu(i,j)>0$.
\end{defn}
For a given graph $\Gamma\in\{0,1\}^{\binom{n}{2}}$, and $w\in\RR_+^n$ with $w_i=\deg_\Gamma(i)$, consider the linear
program
\begin{equation}\label{eq:LP}
\begin{array}{ll}
\max\ \langle \mu,d \rangle & \text{subject to}\\ [0.5ex]
\sum_{i=1}^n \mu(i,j)=w_j & \text{for all $1\le j\le n$, and}\\
\mu\ge 0 & \hspace{10em} .
\end{array}
\end{equation}
A fractional $w$-matching is called \emph{optimal} if it is an optimal solution of this linear program.
\begin{thm}{\rm (Develin~\cite{Develin})}\label{thm:Develin}
Let $d$ be a generic metric on $n$ points.
\begin{enumerate}
\item For each graph $\Gamma\in\{0,1\}^{\binom{n}{2}}$ the linear program~\eqref{eq:LP} has a unique optimal
solution~$\opt{\mu}(\Gamma)$.\label{it:opt}
\item The graphs $\Gamma$ with $\supp\opt{\mu}(\Gamma)=\Gamma$ are precisely the cells of~$\Delta^d$.\label{it:cells}
\item A cell $\Gamma$ is an interior simplex if and only if it is a spanning subgraph of~$K_n$ which is not isomorphic
with the star $K_{1,n-1}$. \label{it:spanning}
\item The support of an optimal $w$-matching for an arbitrary $w\in\RR_+^n$ is a cell of $\Delta^d$.\label{it:support}
\item No cell $\Gamma$ contains a non-trivial even tour. \label{it:even-cycle}
\item The dimension of $T_d$ is bounded by
\[ \lceil n/3 \rceil \ \le \ \dim T_d \ \le \ \lfloor n/2 \rfloor \quad . \] \label{it:bounds}
\end{enumerate}
\end{thm}
Here a \emph{tour} in the graph~$\Gamma$ is any closed path $(v_0,v_1,\dots,v_m=v_0)$; it is \emph{trivial} if each of
its edges occurs at least twice. A \emph{cycle} is a tour in which each edge occurs only once. In particular,
statement~\eqref{it:even-cycle} in the theorem implies that each vertex is contained in at most one cycle (which must
further be odd, if it exists). Moreover, it turns out that property~\eqref{it:even-cycle} characterizes the
genericity of~$d$; see \cite[Proposition~2.10]{Develin}.
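The uniqueness statement \eqref{it:opt} can be probed computationally for small instances. A classical fact about fractional matching polytopes (used here without proof) is that for the all-ones weight $w=(1,\dots,1)$ every vertex of the feasible region of \eqref{eq:LP} is half-integral, with support a disjoint union of matched edges and odd cycles. The sketch below (plain Python with exact rational arithmetic; all names are ours) enumerates these candidate vertices for the four-point metric of Example~\ref{exmp:4points} and for the metric $\dmax^5$ of Section~\ref{sec:max}, and confirms that in both cases the maximum of $\langle\mu,d\rangle$ is attained by exactly one candidate, as uniqueness demands.

```python
from fractions import Fraction
from itertools import combinations, permutations

def candidates(n):
    """Candidate optimal supports for w = (1,...,1): set partitions of
    {0,...,n-1} into pairs (matched edges of weight 1) and odd blocks of
    size >= 3 (odd cycles whose edges carry weight 1/2)."""
    def rec(rest):
        if not rest:
            yield []
            return
        v = min(rest)
        others = sorted(rest - {v})
        for u in others:                       # v covered by a matched edge
            for tail in rec(rest - {v, u}):
                yield [('edge', (v, u))] + tail
        for s in range(3, len(rest) + 1, 2):   # v lies on an odd cycle
            for block in combinations(others, s - 1):
                for perm in permutations(block):
                    if perm[0] > perm[-1]:     # one orientation per cycle
                        continue
                    cyc = (v,) + perm
                    for tail in rec(rest - set(cyc)):
                        yield [('cycle', cyc)] + tail
    return rec(frozenset(range(n)))

def weight(cand, d):
    total = Fraction(0)
    for kind, vs in cand:
        if kind == 'edge':
            total += d[vs[0]][vs[1]]
        else:
            total += sum(d[vs[t]][vs[(t + 1) % len(vs)]]
                         for t in range(len(vs))) / 2
    return total

def unique_max(d):
    vals = [weight(c, d) for c in candidates(len(d))]
    best = max(vals)
    return best, vals.count(best)

d4 = [[Fraction(x) for x in row] for row in
      [[0, 2, 3, 2], [2, 0, 2, 3], [3, 2, 0, 2], [2, 3, 2, 0]]]
best4, mult4 = unique_max(d4)

def dmax(n):  # the metric of Section "A Metric with Maximal f-Vector", 0-indexed
    d = [[Fraction(0)] * n for _ in range(n)]
    for i in range(1, n + 1):
        for j in range(i + 1, n + 1):
            d[i - 1][j - 1] = d[j - 1][i - 1] = 1 + Fraction(1, n * n + i * n + j)
    return d

best5, mult5 = unique_max(dmax(5))
print(best4, mult4, mult5)  # 6 1 1
```

For the example metric the unique optimum is the perfect matching $\{1,3\},\{2,4\}$ of value $6$.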
The following lemma is a key step in obtaining upper bounds on the $f$-vectors of tight spans. It gives a bound on the
number of facets of~$T_d$ in the case where the dimension $\dim T_d=\lfloor n/2 \rfloor$ is maximal.
\begin{lem}\label{lem:n/2-bound}
The triangulation $\Delta^d$ is asff. Moreover,
\[
f_{\lfloor n/2\rfloor}(T_d) \ = \ f_{\lceil n/2\rceil-1}(\interior{\Delta^d}) \ \le \
\begin{cases}
1 & \text{if $n$ even}\\
n & \text{if $n$ odd} \quad .
\end{cases}
\]
\end{lem}
\begin{proof}
Any spanning subgraph of the complete graph $K_n$ needs at least $\lceil n/2 \rceil$ edges. In view of
Theorem~\ref{thm:Develin}\eqref{it:spanning} this implies that an interior face of $\Delta^d$ is at least of dimension
$\lceil n/2 \rceil-1=\lfloor (n-1)/2\rfloor$ or, equivalently, that $\Delta^d$ is asff.
Assume first that $n$ is even, and that $\Gamma$ is a graph with $n/2$ edges which corresponds to an interior simplex
of~$\Delta^d$. This says that $\Gamma$ is a perfect matching of~$K_n$ and hence an optimal solution of the linear
program~\eqref{eq:LP} for the weight $w=(1,1,\dots,1)$. From the uniqueness result
Theorem~\ref{thm:Develin}\eqref{it:opt} it thus follows that $f_{n/2-1}(\interior{\Delta^d})\le 1$.
Now let $n$ be odd. Then $\Gamma$ is a spanning subgraph of~$K_n$ with $(n+1)/2$ edges. This implies that $\Gamma$
has a unique node $t$ of degree~$2$, while all other nodes have degree~$1$. There are $n$ choices for $t$, and for
each choice $\Gamma$ is the unique optimal solution of~\eqref{eq:LP} for the weight $w$ with $w_t=2$ and $w_i=1$
otherwise, again by Theorem~\ref{thm:Develin}\eqref{it:opt}. Hence $f_{(n-1)/2}(\interior{\Delta^d})\le n$.
\end{proof}
Note that $\Delta^d$ being asff is equivalent to the upper bound $\dim T_d \le \lfloor n/2 \rfloor$ in
Theorem~\ref{thm:Develin}\eqref{it:bounds}.
As a further piece of notation we introduce
\begin{align*}
H_k(n) \
:=& \ \max\SetOf{h_k(\Delta)}{\text{$\Delta$ regular triangulation of $\Delta_{n,2}$}} \\
\whyrelation{\eqref{eq:h-h}}{=}& \ \max\SetOf{h_k(T_d)}{\text{$d$ ideal metric on $n$ points}} \quad .
\end{align*}
We are now ready to prove our main result.
\begin{thm}\label{thm:upper-bound}
The $h$-vector of a regular triangulation $\Delta$ of the hypersimplex $\Delta_{n,2}$ is bounded from above by
\[
H_k(n) \ \le \ \binom{n}{2k} \quad \text{for $k\ne 1$}
\]
and $H_1(n) \le \binom{n}{2}-n$.
\end{thm}
Via Proposition~\ref{prop:f-from-h} this upper bound on the $h$-vector gives the recursion
\[
F_k(n) \ = \ 2F_k(n-1)+F_{k-1}(n-2) \quad ,
\]
where $F_k(n)$ is the maximal number of $k$-faces of the tight span of any generic metric on $n$ points. This further
translates into the following equivalent upper bound for the $f$-vector:
\[ F_k(n) \ \le \ 2^{n-2k-1}\frac{n}{n-k}\binom{n-k}{k} \quad . \] In Section~\ref{sec:max} it will be shown that these
bounds are tight. There even is a regular triangulation of~$\Delta_{n,2}$ which simultaneously maximizes all entries of
the $h$-vector; this fact will be used in the proof of Theorem~\ref{thm:upper-bound}.
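The recursion and the closed form can be cross-checked mechanically. The sketch below (plain Python with exact arithmetic; the function name is ours) verifies, for a range of parameters, that the closed form, extended by $G(n,-1)=0$, satisfies the recursion; note also that $G(5,\cdot)=(16,20,5)$ and $G(6,\cdot)=(32,48,18,1)$ agree with the $f$-vectors of $\Tmax^5$ and $\Tmax^6$ shown in Section~\ref{sec:max}.

```python
from fractions import Fraction
from math import comb

def G(n, k):
    # Closed form 2^(n-2k-1) * n/(n-k) * C(n-k, k); Fraction handles the
    # negative exponent arising for k = (n-1)/2 with n odd.
    if k < 0:
        return Fraction(0)
    return Fraction(2) ** (n - 2 * k - 1) * Fraction(n, n - k) * comb(n - k, k)

for n in range(5, 15):
    for k in range((n - 1) // 2 + 1):
        # the recursion F_k(n) = 2 F_k(n-1) + F_{k-1}(n-2)
        assert G(n, k) == 2 * G(n - 1, k) + G(n - 2, k - 1)

print([int(G(5, k)) for k in range(3)], [int(G(6, k)) for k in range(4)])
# [16, 20, 5] [32, 48, 18, 1]
```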
The bound $F_0(n) \le 2^{n-1}$ for the number of vertices of a tight span also follows from the known fact that the
normalized volume of $\Delta_{n,2}$ equals $2^{n-1}-n$: The vertices of a tight span of an ideal generic metric are in
$1-1$ correspondence with the facets of a regular triangulation of~$\Delta_{n,2}$; and changing from the ideal metric to
an equivalent non-ideal metric allows for another $n$ vertices in the tight span. As there are unimodular (and regular)
triangulations of $\Delta_{n,2}$, for instance, the thrackle triangulations studied by De Loera, Sturmfels, and
Thomas~\cite{MR1357285}, it is clear that this bound is tight.
We need some elementary facts about multinomial coefficients, which we phrase as equations of binomial coefficients.
Moreover, it will be convenient to make use of \emph{Kronecker's delta} notation
\[
\delta_{i,k} \ = \ \begin{cases} 1 & \text{if $i=k$,} \\ 0 & \text{otherwise.} \end{cases}
\]
\begin{lem}\label{lem:main:a}
\[
\sum_{i=1}^n (-1)^{i+k} \binom{n}{i} \binom{i}{k-1} (n-i) \ = \ n\,\delta_{1,k}
\]
\end{lem}
\begin{proof}
For $k=0$ we have $\binom{i}{-1}=0$, and the claim is obvious. So we assume that $k>0$.
\begin{align*}
\sum_{i=1}^n (-1)^{i+k} \binom{n}{i} \binom{i}{k-1} (n-i) \ &= \
\sum_{i=k-1}^n (-1)^{i+k} \binom{n}{i} \binom{i}{k-1} (n-i) \; - \; \delta_{1,k}(-1)^1 \binom{n}{0} \binom{0}{0} (n-0) \\
&= \ k\binom{n}{k} \sum_{i=k-1}^n (-1)^{i+k} \binom{n-k}{i-(k-1)} \; + \; n\delta_{1,k} \\
&= - \ k\binom{n}{k} \sum_{i=0}^{n-(k-1)} (-1)^{i} \binom{n-k}{i} \; + \; n\delta_{1,k} \\
&= \ n\delta_{1,k} \quad .
\end{align*}
\end{proof}
\begin{lem}\label{lem:main:b}
\[ \sum_{i=j}^n (-1)^{i+j-1}\binom{n}{i}\binom{i}{j}\binom{n-i}{2(k-j)} \ = \ 0 \quad . \]
\end{lem}
\begin{proof}
\begin{align*}
\sum_{i=j}^n (-1)^{i+j-1}\binom{n}{i}\binom{i}{j}\binom{n-i}{2(k-j)} \
=& \ \binom{n}{j}\binom{n-j}{2(k-j)} \sum_{i=j}^n (-1)^{i+j-1}\binom{n-2k+j}{i-j} \\
=& \ - \binom{n}{j}\binom{n-j}{2(k-j)} \sum_{i=0}^{n-j} (-1)^{i} \binom{n-2k+j}{i} \\
=& \ - \binom{n}{j}\binom{n-j}{2(k-j)} \sum_{i=0}^{n-2k+j} (-1)^{i} \binom{n-2k+j}{i} \\
=& \ 0 \quad .
\end{align*}
\end{proof}
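Both identities are readily checked by machine. Note that the vanishing arguments above implicitly require $n>k$ in Lemma~\ref{lem:main:a} and $n>2k-j$ in Lemma~\ref{lem:main:b}, so that the alternating binomial sums run over a non-trivial row of Pascal's triangle; these restrictions hold in the ranges needed for the proof of Theorem~\ref{thm:upper-bound}. The sketch below (plain Python; helper names are ours) verifies the identities exactly on those ranges.

```python
from math import comb

def binom(a, b):
    # math.comb already returns 0 for b > a >= 0; guard b < 0 as well
    return comb(a, b) if b >= 0 else 0

def lhs_a(n, k):
    return sum((-1) ** (i + k) * binom(n, i) * binom(i, k - 1) * (n - i)
               for i in range(1, n + 1))

def lhs_b(n, k, j):
    return sum((-1) ** (i + j - 1) * binom(n, i) * binom(i, j)
               * binom(n - i, 2 * (k - j)) for i in range(j, n + 1))

for n in range(2, 12):
    for k in range(n):                                 # Lemma A needs k < n
        assert lhs_a(n, k) == (n if k == 1 else 0)
    for k in range(n // 2 + 1):
        for j in range(max(0, 2 * k - n + 1), k + 1):  # Lemma B needs 2k - j < n
            assert lhs_b(n, k, j) == 0
print("identities verified for n = 2,...,11")
```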
\begin{proof}[Proof of Theorem~\ref{thm:upper-bound}]
The hypersimplex $\Delta(4,2)$ is the regular octahedron, and (up to combinatorial equivalence) it has a unique
triangulation $\Theta$ without additional vertices; see the Examples~\ref{exmp:octahedron} and~\ref{exmp:4points}.
Then $h(\Theta)=(1,2,1,0,0)$. This settles the case $n=4$.
We will proceed by induction on $n$. From Proposition~\ref{prop:asff-h-vanishing} and Equation~\eqref{eq:h-h} it
follows that maximizing the $h$-vector of $\Delta$ amounts to the same as maximizing the $g$-vector of the boundary
$\partial\Delta$. Hence, inductively we can assume that each hypersimplex $l$-face of $\Delta_{n,2}$ is maximally
triangulated, that is, in the notation of Corollary~\ref{cor:inductive-step},
$h^{(l)}_k=\binom{l+1}{2k}-(l+1)\delta_{1,k}$ for all~$k$. We can write this as an equation rather than an inequality
since we know from the construction in Section~\ref{sec:max} that this bound is attained.
\begin{align*}
h_k(\Delta)
\ \whyrelation{\ref{prop:asff-h-vanishing}}{=}& \ g_k(\partial\Delta) \\
\ \whyrelation{\ref{cor:inductive-step}}{=}& \
\sum_{i=1}^n\sum_{j=0}^{\min(i,k)}(-1)^{i+j-1}\binom{n}{i}\binom{i}{j}h^{(n-1-i)}_{k-j} \\
\ =& \
\sum_{i=1}^n\sum_{j=0}^{\min(i,k)}(-1)^{i+j-1}\binom{n}{i}\binom{i}{j}\left[\binom{n-i}{2(k-j)}-(n-i)\delta_{1,k-j}\right] \\
=& \ \sum_{i=1}^n\sum_{j=0}^{\min(i,k)}(-1)^{i+j-1}\binom{n}{i}\binom{i}{j}\binom{n-i}{2(k-j)} -
\sum_{i=1}^n\sum_{j=0}^{\min(i,k)}(-1)^{i+j-1}\binom{n}{i}\binom{i}{j}(n-i)\delta_{1,k-j} \\
=& \ \sum_{i=1}^n\sum_{j=0}^{\min(i,k)}(-1)^{i+j-1}\binom{n}{i}\binom{i}{j}\binom{n-i}{2(k-j)}
\; - \; \sum_{i=1}^n (-1)^{i+k}\binom{n}{i}\binom{i}{k-1}(n-i)\\
\whyrelation{\ref{lem:main:a}}{=}& \ \sum_{j=0}^{k}\sum_{i=1}^n (-1)^{i+j-1}\binom{n}{i}\binom{i}{j}\binom{n-i}{2(k-j)} \; - \; n \delta_{1,k} \\
=& \ \sum_{j=0}^{k}\sum_{i=j}^n (-1)^{i+j-1}\binom{n}{i}\binom{i}{j}\binom{n-i}{2(k-j)}
\; - \; (-1)^{-1}\binom{n}{0}\binom{0}{0}\binom{n-0}{2(k-0)} \; - \; n \delta_{1,k} \\
\whyrelation{\ref{lem:main:b}}{=}& \ \binom{n}{2k} \; - \; n \delta_{1,k}
\end{align*}
\end{proof}
\section{A Metric with Maximal $f$-Vector}
\label{sec:max}
\noindent
In the sequel we will prove that the upper bounds given are tight. To this end, for each $n\ge 4$, we define the metric
$\dmax^n$ by letting
\[
\dmax^n(i,j) \ = \ 1 + \frac{1}{n^2+in+j} \quad ,
\]
for $1\le i < j \le n$. We suitably abbreviate $\Pmax^n=P_{\dmax^n}$ and $\Tmax^n=T_{\dmax^n}$.
\begin{prop}
The metric $\dmax^n$ is generic.
\end{prop}
\begin{proof}
Due to \cite[Proposition~2.10]{Develin} it suffices to show that no graph $\Gamma$ corresponding to a cell of
$\Delta^d$ contains a non-trivial even tour. Assuming the contrary, let $C=(i_1,i_2,\dots,i_{2m},i_1)$ be such a
tour. Then we have a non-trivial affine dependence
\[
\sum_{(k,l) \in A}\dmax^n(k,l)= \sum_{(k,l) \in B}\dmax^n(k,l)
\]
with $A=\{(i_1,i_2),(i_3,i_4),\dots,(i_{2m-1},i_{2m})\}$ and $B=\{(i_2,i_3),\dots,(i_{2m-2},i_{2m-1}),(i_{2m},i_1)\}$. But this
contradicts the fact that $\{\dmax^n(i,j)\}$ is a linearly independent set over $\QQ$.
\end{proof}
\begin{figure}[htbp]\centering
\includegraphics[height=.35\textwidth]{max5}\quad
\includegraphics[height=.35\textwidth]{max6}
\caption{Visualization of (the graphs of) the tight spans $\Tmax^5$, with $f$-vector $(16,20,5)$, and $\Tmax^6$, with
$f$-vector $(32,48,18,1)$. The unique $3$-face of $\Tmax^6$ is a cube. The two corresponding triangulations occur
under the name ``thrackle triangulations'' in De Loera, Sturmfels, and Thomas~\cite{MR1357285}. Moreover,
$\Tmax^6$, or rather the tight span of an equivalent ideal metric, is $\#66$ in Sturmfels and Yu~\cite{MR2097310}.}
\end{figure}
The key property of the metric $\dmax^n$ is the following.
\begin{lem}\label{lem:d-max-property}
For $1\leq i\leq j\leq k \leq l\leq n$ we have
\[
\dmax^n(i,j)-\dmax^n(i,k) \ \leq \ \dmax^n(j,l)-\dmax^n(k,l)\phantom{\quad.}
\]
and
\[
\dmax^n(i,l)-\dmax^n(i,k) \ \leq \ \dmax^n(j,l)-\dmax^n(j,k)\quad.
\]
\end{lem}
\begin{proof}
Without loss of generality we can assume $i<j<k<l$. Then we have
\begin{align*}
\dmax^n(i,j)-\dmax^n(i,k) \ =& \ \frac{1}{n^2+in+j} - \frac{1}{n^2+in+k}\\
=& \ \frac{k-j}{(n^2+in+j)(n^2+in+k)} \ < \ \frac{(k-j)n}{(n^2+jn+l)(n^2+kn+l)}\\
=& \ \frac{1}{n^2+jn+l} - \frac{1}{n^2+kn+l}
\ = \ \dmax^n(j,l)-\dmax^n(k,l) \quad .
\end{align*}
The other inequality follows from a similar computation.
\end{proof}
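Both families of inequalities can be verified exactly by machine; the following sketch (plain Python with exact rationals; names are ours) checks them for $\dmax^6$ over all quadruples $1\le i\le j\le k\le l\le 6$, with the convention $\dmax^n(i,i)=0$.

```python
from fractions import Fraction
from itertools import combinations_with_replacement

def dmax(n):
    # 1-indexed distance matrix of the metric d_max^n
    d = [[Fraction(0)] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(i + 1, n + 1):
            d[i][j] = d[j][i] = 1 + Fraction(1, n * n + i * n + j)
    return d

n = 6
d = dmax(n)
for i, j, k, l in combinations_with_replacement(range(1, n + 1), 4):
    assert d[i][j] - d[i][k] <= d[j][l] - d[k][l]   # first inequality
    assert d[i][l] - d[i][k] <= d[j][l] - d[j][k]   # second inequality
print("all inequalities verified for n =", n)
```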
Clearly, all submetrics of $\dmax^n$, that is, the metrics induced on subsets of $\{1,\dots,n\}$, share this
property. To further analyze $\dmax^n$ and its tight span we require an additional characterization of the cells in the
tight span of a generic metric. In the sequel we write $E(\Gamma)$ for the set of edges of a graph~$\Gamma$.
\begin{prop}\label{prop:connected-criteria}
Let $d$ be a generic metric on $n$ points, and let $\Gamma$ be a connected graph with $n$ vertices, $n$ edges and without
non-trivial even tours. Then $\Gamma$ defines a cell of $\Delta^d$ if and only if for all $\{v,w\} \not\in E(\Gamma)$ we have
\begin{equation}\label{eq:fo}
d(v,w) \ \leq \ \sum_{k=1}^{m-1} (-1)^{k-1} d(v_k,v_{k+1}),
\end{equation}
where $P=(v=v_1,v_2,\dots,v_m=w)$ is any path from $v$ to $w$ of odd length.
\end{prop}
\begin{proof}
A connected graph with $n$ nodes and $n-1$ edges is a tree. Therefore, $\Gamma$ can be seen as a tree with an
additional edge which is contained in the unique (odd) cycle. This implies that there is a path of odd length between
any two vertices $v$ and $w$ (go around the odd cycle once if necessary). While this path of odd length is not unique
two such paths only differ by the insertion/deletion of trivial even tours or the direction in which the odd cycle is
traversed. Moreover, the set $P'$ of those edges occurring an odd number of times in the path~$P$ is independent of
the choice of the path~$P$. A direct computation then shows that the value $\sum_{k=1}^{m-1} (-1)^{k-1}
d(v_k,v_{k+1})$ is also independent of the choice of~$P$.
Let $\Gamma$ be a cell of $\Delta^d$, and let $\{v,w\} \notin E(\Gamma)$ be an non-edge. We consider the graph $C$
consisting of $\{w,v\}$ and the edge set $P'$ of those edges which occur in the path $P$ an odd number of times.
Clearly, $C$ is an even cycle in the complete graph, and we define $c'\in \RR^{\binom n2}$ as
\begin{align}\label{cc}
c'_{\alpha \beta} \ := \
\begin{cases}
-1 &\text{for } \{\alpha,\beta\}=\{v_k,v_{k+1}\}\in E(C)\text{ and } k \text{ odd} \\
1 &\text{for } \{\alpha,\beta\}=\{v_k,v_{k+1}\}\in E(C)\text{ and } k \text{ even}\\
1 &\text{for } \{\alpha,\beta\}=\{v,w\} \\
0 &\text{otherwise}
\end{cases}\quad.
\end{align}
Then $c:=\Gamma+\frac 1 2 c'$ is a feasible point of \eqref{eq:LP} and we have
\[
\langle c, d\rangle
\ = \ \langle \Gamma, d \rangle+ \frac{1}{2} \left(-\sum_{k=1}^{m-1} (-1)^{k-1} d(v_k,v_{k+1}) +d(v,w)\right)\quad.
\]
If the non-edge $\{v,w\}$ violated the inequality~\eqref{eq:fo}, the right hand side would be strictly greater than
$\langle\Gamma,d\rangle$, contradicting the optimality of~$\Gamma$ established by Theorem
\ref{thm:Develin}\eqref{it:cells}. Hence every non-edge $\{v,w\}$ satisfies the inequality~\eqref{eq:fo}.
For the reverse direction let $\Gamma$ be a graph such that \eqref{eq:fo} is true for all $\{v,w\} \notin E(\Gamma)$.
Further let $\opt{\mu}(\Gamma)$ be the optimal solution to the linear program~\eqref{eq:LP}, which is unique due to
Theorem \ref{thm:Develin}\eqref{it:opt}. Then Theorem \ref{thm:Develin}\eqref{it:cells} tells us that we have to show
$\opt{\mu}(\Gamma)=\Gamma$. Assuming the contrary, Theorem \ref{thm:Develin}\eqref{it:support} gives us $\{v,w\}\notin
E(\Gamma)$ with $\opt{\mu}(\Gamma)_{vw}>0$. Let $c=\opt{\mu}(\Gamma)-\frac{\opt{\mu}(\Gamma)_{vw}}{2} c'$ with $C$ and
$c'$ as in the first part of the proof. Then we have
\[
\langle c,d \rangle
\ = \ \langle \opt{\mu}(\Gamma),d \rangle + \frac{\opt{\mu}(\Gamma)_{vw}} 2 \left(\sum_{k=1}^{m-1} (-1)^{k-1} d(v_k,v_{k+1}) -
d(v,w)\right)
\ \geq \ \langle \opt{\mu}(\Gamma),d \rangle \quad .
\]
But $c\ne\opt{\mu}(\Gamma)$, so this contradicts the fact that $\opt{\mu}(\Gamma)$ is the unique optimal solution.
\end{proof}
As mentioned previously, Lemma~\ref{lem:d-max-property} is the only property of $\dmax^n$ which actually matters.
\begin{lem}\label{lem:cycle-opt}
Let $d$ be any generic metric on $n$ points for which the inequalities in Lemma \ref{lem:d-max-property} hold, for
example, $d=\dmax^n$. Then the cycle
\[ C \ = \ (1,(n+1)/{2}+1,2,(n+1)/{2}+2,\dots,(n-1)/{2},n,(n+1)/2,1)\]
is a cell of $\Delta^d$ if $n$ is odd. If $n$ is even then the graph $D$ consisting of the cycle
\[ C' \ = \ (1,n/{2}+2,2,n/{2}+3,\dots,n/{2}-1,n,n/{2},1) \]
and the additional edge $\{1,n/{2}+1\}$ defines a cell.
\end{lem}
\begin{proof}
We consider the case where $n$ is odd. For each non-edge $\{j,l\}\notin E(C)$ we verify the conditions of Proposition
\ref{prop:connected-criteria}. The proof distinguishes four cases, the first of them being $j<l<(n+1)/2$. The
distance of $j$ and $l$ in the cycle~$C$ is even then, and as a path
of odd length we can take
\begin{align*}
P \ = \ (&l,(n+1)/2+l,l+1,(n+1)/2+l+1,\dots,(n-1)/2,n,(n+1)/2,\\
&1,(n+1)/2+1,2,(n+1)/2+2,\dots,j) \quad .
\end{align*}
Hence we have to show that
\begin{align}\label{eq:cycle-opt}
d(j,l) \ \leq \ & \sum_{k=l}^{(n-1)/2}\left( d(k,(n+1)/2+k)-d(k+1,(n+1)/2+k)\right) \notag\\
& +\,d(1,(n+1)/2)\\
& - \, \sum_{k=1}^{j-1}\left(d(k,(n+1)/2+k)-d(k+1,(n+1)/2+k)\right) \quad \notag.
\end{align}
We compute
\begin{align*}
d(j,l)-d(1,(n+1)/2) \ = \ &\sum_{k=l}^{(n-1)/2} \left(d(j,k)-d(j,k+1)\right)\\
&- \, \sum_{k=1}^{j-1}\left( d(k,(n+1)/2)-d(k+1,(n+1)/2)\right)\quad.
\end{align*}
Considering the summands of the first sum, the first part of Lemma \ref{lem:d-max-property} yields
\[
d(j,k)-d(j,k+1) \ \leq \ d(k,(n+1)/2+k)-d(k+1,(n+1)/2+k)
\]
because $j\leq k\leq k+1\leq (n+1)/2 +k$. The summands of the second sum satisfy $k\leq k+1\leq (n+1)/2\leq
(n+1)/2 +k$, whence the second part of Lemma \ref{lem:d-max-property} says that
\[
d(k,(n+1)/2)-d(k+1,(n+1)/2) \ \geq \ d(k,(n+1)/2+k)-d(k+1,(n+1)/2+k)\quad.
\]
By summing up we obtain the inequality~\eqref{eq:cycle-opt} as desired.
The remaining three cases are $(n-1)/2<j<l$, $j<l-(n+1)/2<(n+1)/2+j<l$, and $l-(n-1)/2<j<(n+1)/2,(n-1)/2<l<j+(n+1)/2$.
These, as well as the situation for $n$ even, are reduced to similar computations.
\end{proof}
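For odd $n$ the criterion of Proposition~\ref{prop:connected-criteria} can be tested directly: for every non-edge of the cycle $C$ one of the two arcs between its endpoints has odd length, and the alternating sum along that arc must dominate the distance. The sketch below (plain Python with exact rationals; names are ours) performs this check for $n=5$ and $n=7$.

```python
from fractions import Fraction

def dmax(n):
    # 1-indexed distance matrix of the metric d_max^n
    d = [[Fraction(0)] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(i + 1, n + 1):
            d[i][j] = d[j][i] = 1 + Fraction(1, n * n + i * n + j)
    return d

def check_cycle_is_cell(n):
    # The cycle C = (1, (n+1)/2+1, 2, (n+1)/2+2, ..., (n-1)/2, n, (n+1)/2, 1)
    assert n % 2 == 1
    d = dmax(n)
    m = (n + 1) // 2
    seq = []
    for i in range(1, m):
        seq += [i, m + i]
    seq.append(m)
    pos = {v: p for p, v in enumerate(seq)}
    cycle_edges = {frozenset((seq[p], seq[(p + 1) % n])) for p in range(n)}
    for v in range(1, n + 1):
        for w in range(v + 1, n + 1):
            if frozenset((v, w)) in cycle_edges:
                continue
            a = (pos[w] - pos[v]) % n          # forward arc length from v to w
            step, length = (1, a) if a % 2 == 1 else (-1, n - a)
            path = [seq[(pos[v] + step * t) % n] for t in range(length + 1)]
            alt = sum((-1) ** t * d[path[t]][path[t + 1]] for t in range(length))
            assert d[v][w] <= alt              # condition of Prop. connected-criteria
    return True

print(check_cycle_is_cell(5), check_cycle_is_cell(7))  # True True
```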
\begin{figure}[htbp]\centering
\begin{overpic}[width=.8\textwidth]{dmax_cycles}
\put( 0.5 ,22){$1$}
\put(12.25,22){$2$}
\put(24 ,22){$3$}
\put(35.75,22){$4$}
\put( 0.5 ,3){$5$}
\put(12.25,3){$6$}
\put(24 ,3){$7$}
\put(35.75,3){$8$}
\put(47.5 ,3){$9$}
\put(62.5 ,22){$1$}
\put(74.25,22){$2$}
\put(86 ,22){$3$}
\put(97.75,22){$4$}
\put(62.5 ,3){$5$}
\put(74.25,3){$6$}
\put(86 ,3){$7$}
\put(97.75,3){$8$}
\end{overpic}
\caption{This illustrates Lemma~\ref{lem:cycle-opt}: Cycle~$C$ for $n=9$ odd (left) and graph $D$ for $n=8$ even (right).}
\end{figure}
\begin{thm}
We have
\[ h_i(\Tmax^n) \ = \ \binom{n}{2i} \quad . \]
\end{thm}
\begin{proof}
First we show that equality holds in the bound of Lemma \ref{lem:n/2-bound} for $\dmax^n$ and all its submetrics. This is
immediate from Lemma \ref{lem:cycle-opt}: for $n$ even the graph $D$ has a spanning subgraph with
$n/2$ edges corresponding to an interior simplex of $\Delta^d$ by Theorem \ref{thm:Develin}\eqref{it:spanning}, and for $n$ odd we
find $n$ spanning subgraphs of $C$ with $(n+1)/2$ edges each. Thus the bounds of Lemma \ref{lem:n/2-bound} are attained.
Now the result follows from the computation in the proof of Theorem \ref{thm:upper-bound}.
\end{proof}
\section{Towards a Lower Bound}
\noindent
Before we can prove something about lower bounds we require an additional lemma on the graphs defining cells of
$\Delta^d$, that is, graphs supporting optimal solutions of the linear program~\eqref{eq:LP}.
\begin{lem}\label{lem:B11}
Let $w=(b,1,\dots,1)\in \RR^n$, and let $\Gamma$ be the support of the optimal
fractional $w$-matching. Then for the connected component $C$ of $\Gamma$ containing vertex $1$ exactly one of the
following is true:
\begin{enumerate}
\item Either the component $C$ consists of one odd cycle and $b-1$ additional edges incident with the vertex~$1$,
\label{it:B11-cycle}
\item or the component $C$ consists of $b$ edges incident with the vertex~$1$.\label{it:B11-non_cycle}
\end{enumerate}
All other connected components of $\Gamma$ are isolated edges or odd cycles.
\end{lem}
\begin{proof}
Let $\mu$ be a fractional $w$-matching with support graph $\Gamma$. First, no vertex other than~$1$ can have a degree
greater than or equal to~$3$: Suppose, to the contrary, that there is a vertex~$x\ne 1$ with three neighbors $u,v,y$. Since
the total weight of the edges through~$x$ equals one, we have $\mu(x,u),\mu(x,v),\mu(x,y)<1$. This implies that each
of $u,v,y$ must be adjacent to another vertex (via an edge of weight less than one), and these paths continue further
into all three directions starting from $x$. Because the graph~$\Gamma$ is finite eventually these three paths must
reach a vertex that they already saw previously. Since we started into three directions it is not possible that all
the vertices that we saw lie on one cycle. Therefore there are at least two cycles in the connected component of~$x$,
which implies that there is a non-trivial even tour through~$x$. And this is forbidden by
Theorem~\ref{thm:Develin}\eqref{it:even-cycle}.
The same argument also shows that the vertex~$1$ is contained in at most one (odd) cycle. Moreover, each vertex
adjacent to~$1$ which is not contained in the odd cycle through~$1$ (if it exists) cannot be adjacent to any other
vertex: Otherwise it would also generate a path which must end in a cycle as above. Note that all edges in a cycle
necessarily have weight $1/2$.
If $\mu(x,y)=1$ for some $x,y\ne 1$, then both $x$ and $y$ are contained only in the edge $\{x,y\}$. This proves the
claim.
\end{proof}
\begin{figure}[htbp]\centering
\includegraphics[width=.8\textwidth]{b11}
\caption{Graphs supporting an optimal $(b,1,\dots,1)$-matching as in Lemma~\ref{lem:B11} for $n=8$ and $b=4$ (two
components to the left) and $b=3$ (three components to the right), respectively.}
\end{figure}
The case $b=1$ in the preceding result (with the same kind of argument) occurs in the proof of
Theorem~\ref{thm:Develin}\eqref{it:bounds} which is \cite[Theorem~3.1]{Develin}.
As a lower bound analog to Lemma \ref{lem:n/2-bound} for generic metrics we show the following theorem. The three
different cases correspond to the congruence class of $n$ modulo~$3$.
\begin{thm}\label{thm:lower-bound}
Let $d$ be a generic metric on $n$ points such that $T_d$ has dimension $\lceil n/3 \rceil$. Then we have
\[
f_{\lceil n/3 \rceil}(T_d) \ = \ f_{\lfloor 2n/3\rfloor-1}(\interior{\Delta^d})\ \geq \
\begin{cases}
n\cdot3^{k-2}+3^k & \text{if $n=3k$} \\
3^{k-1} & \text{if $n=3k+1$} \\
5\cdot 3^{k-1} & \text{if $n=3k+2$}\quad.
\end{cases}
\]
\end{thm}
\begin{proof}
First let $n=3k+1$, and let $\Gamma$ be the support of the optimal fractional $w$-matching for $w=(1,\dots,1)$.
Lemma~\ref{lem:B11} yields that $\Gamma$ only consists of isolated edges and odd cycles. As $\Gamma$ cannot have a
spanning subgraph with more than $\lfloor 2n/3\rfloor$ edges (since we assumed that $\dim T_d=\lceil n/3\rceil$) the
only possibility is that $\Gamma$ consists of $k-1$ cycles of length three and two isolated edges. Since each
$3$-cycle has exactly three spanning subgraphs with two edges, we get at least $3^{k-1}$ faces of dimension $\lceil n/3\rceil=k+1$, as desired.
For $n=3k$ a similar argument yields $3^k$ faces of dimension~$k$. Additionally, we consider the support $\Gamma'$ of
the optimal fractional $w$-matching for $w=(3,1,\dots,1)$, and again we can apply Lemma~\ref{lem:B11}. If we were in
case \eqref{it:B11-cycle} then $\Gamma'$ would have a spanning subgraph of at most $3$ (from the connected component
containing vertex $1$) plus $2(k-2)$ (from $k-3$ cycles of length three and two isolated edges in the rest) edges,
summing up to $2k-1<2n/3$ altogether, which is impossible. So we are in case \eqref{it:B11-non_cycle} of
Lemma~\ref{lem:B11}. Then we get spanning subgraphs of $\Gamma'$ with $3$ (connected component containing vertex $1$)
plus $2k-3$ edges, which makes $2n/3$ altogether. Again each of the $k-2$ cycles of length three of $\Gamma'$ has
three possible spanning subgraphs yielding $3^{k-2}$ faces. These are all different from those obtained as subgraphs
of $\Gamma$ since they have a vertex of degree~$3$. Repeating this argument for all the $n$ vertices instead of
vertex $1$ proves the claim for $n=3k$.
Finally, let $n=3k+2$. Again we use a similar argument as in the case $n=3k+1$ to get $3^k$ facets. The
corresponding graph $\Gamma$ has exactly one edge not contained in any $3$-cycle; let $i$ and $j$ be its two
vertices. Consider $w\in\RR^n$ with $w_i=2$ and all other components equal to
$1$. We proceed as in the case $n=3k$, and again we apply Lemma~\ref{lem:B11}: As before the case
\eqref{it:B11-cycle} is impossible because it would only yield spanning subgraphs with at most $2+2(k-1)=2k<\lfloor
2n/3\rfloor$ edges. Hence we are in case \eqref{it:B11-non_cycle} and get a graph $\Gamma'$ with spanning subgraphs of size
$3+2(k-1)=2k+1=\lfloor 2n/3\rfloor$. There are $3^{k-1}$ of that kind which are different from the spanning subgraphs
of $\Gamma$ because $i$ has degree $2$. A similar argument with $j$ instead of $i$ completes the proof of the
theorem.
\end{proof}
We can also construct a metric for which this bound is tight. For an arbitrary graph $\Gamma$ on $n$ points we define a metric via
\[
d_\Gamma(i,j) \ = \
\begin{cases} 2 & \text{if } \{i,j\}\in E(\Gamma)\\
1+\frac{1}{n^2+in+j} & \text{otherwise,}
\end{cases}
\]
for $i<j$.
Notice that our metric $\dmax^n$ corresponds to the graph on $n$ vertices without any edges.
We define $\dmin^n:=d_{\Gamma_{\min}^n}$ by letting
\begin{equation*}
\{i,j\}\in E(\Gamma^n_{\min}) :\Leftrightarrow
\begin{cases}
\lfloor\frac{i-1}3\rfloor= \big\lfloor\frac{j-1}3\big\rfloor & \text{for $n\equiv 0,1\mod 3$}\\
\lfloor{\frac{i-1}3}\rfloor= \big\lfloor{\frac{j-1}3}\big\rfloor \text{ and }i,j<n & \text{for $n\equiv 2\mod 3$} \quad .
\end{cases}
\end{equation*}
So $\Gamma^n_{\min}$ consists of $\lfloor n/3\rfloor$ triangles and $n \bmod 3$ isolated vertices. In fact, $d_{\min}^n$ is a slight
modification and generalization of the metric given by Develin in \cite[Proposition 3.3]{Develin} to prove the tightness of his lower
bound. The proof that our bound is tight is obtained by analyzing the proof of \cite[Proposition 3.3]{Develin} and
refining its techniques.
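The definition of $\Gamma^n_{\min}$ is easy to implement; the sketch below (plain Python; names are ours) builds the edge set and confirms the stated component structure: $3\lfloor n/3\rfloor$ edges, all degrees $0$ or $2$, and exactly $n\bmod 3$ isolated vertices.

```python
def gamma_min(n):
    # Edge set of the graph Gamma_min^n defined above (1-indexed vertices).
    edges = set()
    for i in range(1, n + 1):
        for j in range(i + 1, n + 1):
            if n % 3 == 2 and not (i < n and j < n):
                continue
            if (i - 1) // 3 == (j - 1) // 3:   # same block of three vertices
                edges.add((i, j))
    return edges

for n in range(4, 13):
    e = gamma_min(n)
    deg = {v: 0 for v in range(1, n + 1)}
    for i, j in e:
        deg[i] += 1
        deg[j] += 1
    assert len(e) == 3 * (n // 3)                  # floor(n/3) triangles
    assert all(x in (0, 2) for x in deg.values())  # triangle vertex or isolated
    assert list(deg.values()).count(0) == n % 3    # n mod 3 isolated vertices
print("component structure confirmed for n = 4,...,12")
```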
\begin{figure}[htbp]\centering
\includegraphics[height=.35\textwidth]{min5}\quad
\includegraphics[height=.35\textwidth]{min6}
\caption{Visualization of (the graphs of) the tight spans $\Tmin^5$, with $f$-vector $(16,20,5)$, and $\Tmin^6$, with $f$-vector
$(31,45,15)$. Note that the image of $\Tmin^6$ shown is slightly misleading as the three collinear vertices in
the center, in fact, form a triangle. The tight span $\Tmin^6$, or rather the tight span of an equivalent ideal
metric, occurs as $\#7$ in the list of Sturmfels and Yu~\cite{MR2097310}.}
\end{figure}
It is natural to ask if we can find a lower bound for all components of the $f$-vector from
Theorem~\ref{thm:lower-bound} in the same way as we derived Theorem \ref{thm:upper-bound} from Lemma
\ref{lem:n/2-bound}. Unfortunately, this requires a much greater effort. The main problem is that there are
non-isomorphic subgraphs of $\Gamma_{\min}^n$ induced by submetrics of $\dmin^n$ on the same number of points; they even
give tight spans with different $f$-vectors. In fact, such a proof would include the computation of the full
$f$-vector of all metrics $d_\Gamma$ with all components of $\Gamma$ of size at most~$3$. Therefore, we suggest
investigating the combinatorics and the $f$-vectors of the metrics $d_\Gamma$ for arbitrary graphs $\Gamma$. This should
lead to a complete classification of all possible $f$-vectors of tight spans of generic metrics.
\bibliographystyle{amsplain}
% arXiv:math/0605401 (math.MG), ``Bounds on the $f$-Vectors of Tight Spans''
% arXiv:1710.02879
\title{Equality of orthogonal transvection group and elementary orthogonal transvection group}
\begin{abstract}
H. Bass defined the orthogonal transvection group of an orthogonal module and the elementary orthogonal transvection group of an orthogonal module with a hyperbolic direct summand. We also have the notions of the relative orthogonal transvection group and the relative elementary orthogonal transvection group with respect to an ideal of the ring. According to the definition of Bass, the relative elementary orthogonal transvection group is a subgroup of the relative orthogonal transvection group of an orthogonal module with a hyperbolic direct summand. Here we show that these two groups are the same in the case when the orthogonal module splits locally.
\end{abstract}
\section{\large Introduction}
In Section 5 of \cite{SV} L.N. Vaserstein proved that the first row of an elementary linear matrix of even size
(at least 4) is the same as the first row of a symplectic matrix of the same size w.r.t. an alternating
form. This result motivated us to prove that the orbit of a unimodular row of even size under the action of the
elementary linear group is the same as the orbit of a unimodular row of the same size under the action of the elementary
symplectic group (see Theorem 4.1, \cite{cr}); we also proved a version of this result relative to an ideal of the ring (see Theorem 5.5, \cite{cr}). Generalising this result in the setting of finitely generated projective modules
involving the transvection groups as defined by H. Bass, we proved that
in the case of a symplectic module with a hyperbolic direct summand the orbits of any unimodular element of the
symplectic module under the actions of the elementary linear transvection group and the elementary symplectic transvection group
coincide (see Theorem 6.1, \cite{cr2}).
While proving the above result on the equality of orbits of unimodular
elements of symplectic modules, we observed that in the case relative to an ideal of the ring the equality holds between the linear transvection
group and the elementary linear transvection group (see Proposition 4.10, \cite{cr2}).
We also noticed that in the
case relative to an ideal of the ring the symplectic transvection group and the elementary symplectic transvection
group coincide (see Theorem 5.23, \cite{cr2}). In the absolute case
the equalities of the linear transvection group, the symplectic
transvection group, and the orthogonal transvection group with the corresponding elementary transvection groups
were proved in \cite{bbr}. In view of the above results it is natural to ask whether the equality of the orthogonal transvection group and the elementary orthogonal
transvection group holds in the case relative to an ideal of the ring.
In this article we prove the equality of these two groups in the case when the orthogonal module splits locally.
\section{\large Preliminaries}
\medskip
In this article we will always assume that $R$ is a commutative ring with unit.
A row ${ v} = (v_{1}, \ldots, v_{n}) \in R^{n}$ is said to be {\it
unimodular} if there are elements $w_{1}, \ldots, w_{n}$ in $R$ such
that $v_{1}w_{1} + \cdots + v_{n}w_{n} = 1$. Um$_{n}(R)$ will denote
the set of all unimodular rows ${ v} \in R^{n}$. Let $I$ be an ideal
in $R$. We denote by ${\rm Um}_n(R, I)$ the set of all unimodular rows
of length $n$ which are congruent to $e_1 = (1, 0, \ldots, 0)$ modulo
$I$. (If $I = R$, then ${\rm Um}_n(R, I)$ is ${\rm Um}_n(R)$).
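Over $R=\mathbb{Z}$ these notions are concrete: a row is unimodular exactly when the gcd of its entries is $1$ (the Bezout coefficients serve as the $w_i$), and relative unimodularity adds a congruence condition. A minimal sketch, restricted to $R=\mathbb{Z}$ and a principal ideal $I=(m)$ (plain Python; function names are ours):

```python
from math import gcd
from functools import reduce

def is_unimodular(v):
    # Over R = Z a row is unimodular iff the gcd of its entries is 1:
    # the Bezout coefficients from the extended Euclidean algorithm
    # provide the required w_1, ..., w_n.
    return reduce(gcd, v) == 1

def in_um_rel(v, m):
    # Membership in Um_n(Z, I) for the principal ideal I = (m):
    # unimodular and congruent to e_1 = (1, 0, ..., 0) modulo m.
    e1 = (1,) + (0,) * (len(v) - 1)
    return is_unimodular(v) and all((x - y) % m == 0 for x, y in zip(v, e1))

print(is_unimodular((3, 5)), is_unimodular((2, 4, 6)))   # True False
print(in_um_rel((7, 3, 6), 3), in_um_rel((7, 3, 5), 3))  # True False
```

For instance, $(3,5)$ is unimodular since $2\cdot 3 - 1\cdot 5 = 1$, and $(7,3,6)\equiv(1,0,0) \bmod 3$.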
\begin{de} {\rm Let $P$ be a finitely generated projective $R$-module.
An element $p \in P$ is said to be {\it unimodular} if there exists an
$R$-linear map $\phi: P \to R$ such that $\phi(p)=1$. The collection
of unimodular elements of $P$ is denoted by ${\rm Um}(P)$.
Now let $P$ be of the form $R \oplus Q$, so that the element
$(1,0)$ is unimodular. An element $(a,q)
\in P$ is said to be {\it relative unimodular} w.r.t. an ideal $I$ of
$R$ if $(a,q)$ is unimodular and $(a,q)$ is congruent to $(1,0)$
modulo $IP$. The collection of all relative unimodular elements
w.r.t. an ideal $I$ is denoted by ${\rm Um}(P,IP)$. }
\end{de}
Let us recall that if $M$ is a finitely presented $R$-module and $S$
is a multiplicative set of $R$, then $S^{-1} {\rm Hom}_R(M,R) \cong
{\rm Hom}_{R_S}(M_S, R_S)$ (Theorem 2.13$''$, Chapter I, \cite{lam}).
Also recall that if $f=(f_1, \ldots,
f_n)\in R^n := M$, then $\Theta_M(f)=\{ \phi(f): \phi \in {\rm
Hom}(M,R) \}= \sum_{i=1}^n Rf_i$. Therefore, if $P$ is a finitely
generated projective $R$-module of rank $n$, $\mathfrak{m}$ is a maximal ideal
of $R$ and $v\in {\rm Um}(P)$, then $v_\mathfrak{m} \in {\rm Um}_n(R_\mathfrak{m})$. Similarly if
$v \in {\rm Um}(P,IP)$ then $v_\mathfrak{m} \in {\rm Um}_n(R_\mathfrak{m}, I_\mathfrak{m})$.
\begin{de} {\bf Elementary Linear Group:}
{\rm The elementary linear group E$_{n}(R)$ is the subgroup of SL$_{n}(R)$ consisting of all
{\it elementary} matrices, i.e., those matrices which are finite
products of the {\it elementary generators} E$_{ij}(\lambda) = I_{n} +
e_{ij}(\lambda)$, $1 \leq i \neq j \leq n$, $\lambda \in R$, where
$e_{ij}(\lambda) \in$ M$_{n}(R)$ has an entry $\lambda$ in its $(i,
j)$-th position and zeros elsewhere.}
\end{de}
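For readers who wish to experiment, the elementary generators can be realised concretely. The following illustrative Python sketch (0-based indices; the helper \texttt{E} is ours, not notation from the paper) checks that products of elementary generators have determinant $1$ and that ${\rm E}_{ij}(a)^{-1} = {\rm E}_{ij}(-a)$:

```python
import numpy as np

def E(n, i, j, lam):
    """Elementary generator E_ij(lam) = I_n + lam * e_ij, i != j (0-based)."""
    m = np.eye(n, dtype=np.int64)
    m[i, j] = lam
    return m

# elementary generators are unipotent, so any finite product has determinant 1 ...
g = E(3, 0, 1, 5) @ E(3, 2, 0, -2) @ E(3, 1, 2, 7)
assert round(np.linalg.det(g)) == 1

# ... and E_ij(a) E_ij(b) = E_ij(a+b), so E_ij(a)^{-1} = E_ij(-a)
assert np.array_equal(E(3, 0, 2, 4) @ E(3, 0, 2, -4), np.eye(3, dtype=np.int64))
```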
In the sequel, if $\alpha$ denotes an $m \times n$ matrix, then we let
$\alpha^t$ denote its {\it transpose} matrix. This is of course an
$n\times m$ matrix. However, we will mostly be working with square
matrices, or rows and columns.
\begin{de} {\bf The Relative Groups E$_n(I)$, E$_n(R,I)$:} {\rm Let $I$ be
an ideal of $R$. The relative elementary linear group {\rm E}$_n(I)$ is the
subgroup of {\rm E}$_n(R)$ generated as a group by the elements {\rm
E}$_{ij}(x)$, $x \in I$, $1 \leq i \neq j \leq n$.
The relative elementary linear group ${\rm E}_n(R, I)$ is the normal
closure of {\rm E}$_n(I)$ in {\rm E}$_n(R)$.
$($Equivalently, ${\rm E}_n(R, I)$ is generated as a group by ${\rm
E}_{ij}(a)\, {\rm E}_{ji}(x)\, {\rm E}_{ij}(-a)$, with $a \in R$, $x
\in I$, $i \neq j$, provided $n \geq 3$ {\rm (see \cite{V3}, Lemma 8)}$)$.}
\end{de}
\begin{de}
{\rm E$_n^1(R, I)$ is the subgroup of ${\rm E}_n(R)$ generated by the
elements of the form $E_{1i}(a)$ and $E_{j1}(x)$, where $a \in R, x
\in I$, and $2 \le i, j \le n$. }
\end{de}
\begin{rmk} \label{unimodularOverLocalRing}
It is easy to check that if $v \in {\rm Um}_n(R,I)$, where $(R, \mathfrak{m})$ is a
local ring and $I$ is an ideal of $R$, then $v = e_1 \beta$ for some
$\beta \in {\rm E}_n(R,I)$.
\end{rmk}
\begin{de} {\bf Orthogonal Group:} {\rm The orthogonal group ${\rm O}_{2n}(R)$
with respect to the standard symmetric matrix $\widetilde {\psi}_n = \underset{i=1}{\overset{n}\sum}
e_{2i-1,2i} + \underset{i=1}{\overset{n}\sum} e_{2i,2i-1}$ is the collection
$\{\alpha \in {\rm GL}_{2n}(R)\,\,|\,\, \alpha^t \widetilde{\psi}_n
\alpha = \widetilde{\psi}_n \}$. For an ideal $I$ of $R$, ${\rm O}_{2n}(R, I)$
represents the kernel of the natural map ${\rm O}_{2n}(R) \longrightarrow {\rm O}_{2n}(R/I)$.}
\end{de}
Let $\sigma$ denote the permutation of the set $\{1, 2, \ldots, 2n \}$ given
by $\sigma(2i)=2i-1$ and $\sigma(2i-1)=2i$.
\begin{de} \label{2.4} {\bf Elementary Orthogonal Group:}
{\rm As in \S 2 of \cite{SK} we define for $z \in R$, $1\le i\ne j\le 2n$,
\begin{eqnarray*}
oe_{ij}(z) = 1_{2n} + z e_{ij} - z e_{\sigma (j) \sigma (i)} & {\rm if} ~ i\ne \sigma(j).
\end{eqnarray*}
It is easy to check that all these elements belong to O$_{2n}(R)$.
We call them {\it elementary orthogonal matrices} with respect to the standard
symmetric matrix $\widetilde{\psi}_n$ over $R$ and the
subgroup of {\rm O}$_{2n}(R)$ generated by them is called the
elementary orthogonal group {\rm EO}$_{2n}(R)$ with respect to the standard
symmetric matrix $\widetilde{\psi}_n$.}
\end{de}
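That each generator $oe_{ij}(z)$ satisfies $\alpha^t \widetilde{\psi}_n \alpha = \widetilde{\psi}_n$ can also be confirmed numerically. The following illustrative Python sketch (0-based indices, so the pairing is $(0,1), (2,3), \ldots$; the helpers \texttt{sigma}, \texttt{psi}, \texttt{oe} are ours) performs this check for a few admissible choices of $(i, j, z)$:

```python
import numpy as np

def sigma(k):          # 0-based version of the permutation 2i-1 <-> 2i
    return k + 1 if k % 2 == 0 else k - 1

def psi(n):            # standard symmetric matrix ~psi_n (0-based entries)
    m = np.zeros((2*n, 2*n), dtype=np.int64)
    for k in range(2*n):
        m[k, sigma(k)] = 1
    return m

def oe(n, i, j, z):    # oe_ij(z) = 1 + z e_ij - z e_{sigma(j) sigma(i)}, i != sigma(j)
    assert i != j and i != sigma(j)
    m = np.eye(2*n, dtype=np.int64)
    m[i, j] += z
    m[sigma(j), sigma(i)] -= z
    return m

# every generator satisfies alpha^t psi alpha = psi, i.e. lies in O_{2n}(R)
n = 3
for (i, j, z) in [(0, 2, 5), (4, 1, -3), (2, 5, 7)]:
    a = oe(n, i, j, z)
    assert np.array_equal(a.T @ psi(n) @ a, psi(n))
```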
\begin{de} {\bf The Relative Groups EO$_{2n}(I)$, EO$_{2n}(R,I)$:}
{\rm Let $I$ be an ideal of $R$. The relative elementary group ${\rm EO}_{2n}(I)$
is the subgroup of ${\rm EO}_{2n}(R)$ generated as a group by the elements
$oe_{i j}(x)$, $x \in I$, $1 \le i \ne j \le 2n$.
The relative elementary group ${\rm EO}_{2n}(R, I)$ is the normal closure of
${\rm EO}_{2n}(I)$ in ${\rm EO}_{2n}(R)$.}
\end{de}
\begin{lem} \label{equiv-defn}
${\rm EO}_{2n}(R, I)$ is generated as a group by the elements of the form $g ~oe_{ij}(x) g^{-1}$, where
$g \in {\rm EO}_{2n}(R), x \in I$, and either $i=1$ or $j=1$.
\end{lem}
Proof: Any element of the form $g ~oe_{ij}(x) g^{-1}$, where
$g \in {\rm EO}_{2n}(R), x \in I$, and either $i=1$ or $j=1$, clearly belongs to ${\rm EO}_{2n}(R, I)$. Conversely, consider an elementary generator
$oe_{kl}(a) oe_{ij}(x) oe_{kl}(-a)$ of
${\rm EO}_{2n}(R, I)$, where $a \in R, x \in I$, and $i, j \ne 1$. Then
\begin{eqnarray*}
&& oe_{kl}(a) oe_{ij}(x) oe_{kl}(-a)\\
&=& ^{oe_{kl}(a)} [oe_{i1}(x), oe_{1j}(1)] \\
&=& ^{oe_{kl}(a)} \{ oe_{i1}(x) ~ oe_{1j}(1) ~ oe_{i1}(-x) ~ oe_{1j}(-1) \} \\
&=& ^{oe_{kl}(a)} oe_{i1}(x) ^{oe_{kl}(a)} \{ oe_{1j}(1) ~ oe_{i1}(-x) ~ oe_{1j}(-1) \}
\end{eqnarray*}
and hence the lemma follows.
\hfill{$\square$}
\begin{de}\label{defn-EO^1}
{\rm The group ${\rm EO}_{2n}^1(R,I)$ is the subgroup of ${\rm EO}_{2n}(R)$
generated by the elements of the form $oe_{1 i}(a)$ and $oe_{j
1}(x)$, where $a \in R$, $x \in I$ and $3 \le i, j \le 2n$.}
\end{de}
In the following lemma and proposition we obtain some useful facts regarding elementary
orthogonal groups. An analogous result in the linear case was proved in
\cite{vdK1}, and in the symplectic case in the Appendix
of \cite{cr2}.
\begin{lem} \label{subset}
Let $R$ be a commutative ring and $I$ be an ideal of $R$. Then ${\rm EO}_{2n}(R, I)
\subseteq {\rm EO}_{2n}^1(R, I)$, for $n \ge 3$.
\end{lem}
Proof: Let us define $S_{ij} = \{ oe_{ij}(a) oe_{ji}(x) oe_{ij}(-a) : a \in R, x \in I \}$. It suffices
to show that ${\rm EO}_{2n}^1(R, I)$ contains the set $S_{ij}$ for all $1 \le i \ne j \le 2n$.
Note that $S_{ij} = S_{\sigma(j) \sigma(i)}$ and
$S_{1j} \subseteq {\rm EO}_{2n}^1(R, I)$, for $3 \le j \le 2n$. First we state the following
identities
\begin{align}\label{commprop}
[gh,k] &~ = ~ \big({}^g[h,k]\big)[g,k],\\
[g,hk] &~ = ~ [g,h]\big({}^h[g,k]\big),\\
{}^g[h,k] &~ = ~ [{}^gh,{}^gk],
\end{align}
where $^gh$ denotes $ghg^{-1}$ and $[g,h]=ghg^{-1}h^{-1}$. Using these
identities and the commutator law $[oe_{ik}(a), oe_{kj}(b)] = oe_{ij}(ab)$
we establish the inclusion.
Note that $oe_{1j}(x), oe_{i1}(x) \in {\rm EO}_{2n}^{1}(R,I)$, for $3 \le
i,j \le 2n$ and $x \in I$. For $3 \le i,j \le 2n$ and $x \in I$, we have $oe_{ij}(x) = [oe_{i1}(x),
oe_{1j}(1)] \in {\rm EO}_{2n}^{1}(R,I)$. In
the following computation we will express the generators of
$S_{ij}$ in terms of $oe_{i1}(x)$ or $oe_{1j}(a)$, where $x \in
I, a \in R $. Also, note that we will use $\circledast$ to represent
elements of ${\rm EO}_{2n}^1(R, I)$.
\begin{eqnarray*}
^{oe_{ij}(a)} oe_{ij}(x) &=& ^{oe_{ij}(a)} [oe_{j2}(1), oe_{2i}(x)] \\
&=& [oe_{i2}(a) oe_{j2}(1), oe_{2i}(x) oe_{2j}(ax)] \\
&=& ^{oe_{i2}(a)} [oe_{j2}(1), oe_{2i}(x) oe_{2j}(ax)] [oe_{i2}(a), oe_{2i}(x) oe_{2j}(ax)] \\
&=& ^{oe_{i2}(a)} [oe_{j2}(1), oe_{2i}(x)] ^{oe_{i2}(a) oe_{2i}(x)} [oe_{j2}(1), oe_{2j}(ax)] \\
&& [oe_{i2}(a), oe_{2i}(x)] oe_{2j}(a^2x^2) oe_{2i}(x) \\
&=& ^{oe_{i2}(a)} oe_{ji}(x) ^{oe_{i2}(a)}[oe_{j2}(1) oe_{ji}(-x), oe_{2j}(ax)] ~ \circledast\\
&=& \circledast ~ [oe_{j2}(1) oe_{ji}(x) oe_{j2}(ax), oe_{ij}(a^2x) oe_{2j}(ax)] ~ \circledast\\
&=& \circledast ~ ^{oe_{j2}(1)} [oe_{ji}(x) oe_{j2}(ax), oe_{ij}(a^2x) oe_{2j}(ax)] \\
&& [oe_{j2}(1), oe_{ij}(a^2x) oe_{2j}(ax)] ~ \circledast \\
&=& \circledast ~ [oe_{ij}(x) oe_{j2}(ax), oe_{ij}(a^2x) oe_{i2}(-a^2x) ^{oe_{j2}(1)} oe_{2j}(ax)] \\
&& oe_{i2}(-a^2x) ^{oe_{ij}(a^2 x^2)} [oe_{j2}(1), oe_{2j}(ax)] ~ \circledast
\end{eqnarray*}
Since $S_{i2}, S_{j2} \subseteq
{\rm EO}_{2n}^1(R,I)$, it follows that $S_{ij} \subseteq {\rm EO}_{2n}^1(R, I)$, for $3 \le i, j \le 2n$. Similarly $S_{ik},
S_{1k} \subseteq {\rm EO}_{2n}^1(R, I)$ imply that $S_{i1} \subseteq {\rm EO}_{2n}^1(R, I)$, for $3 \le i \le 2n$.
\hfill{$\square$}
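The commutator law $[oe_{i1}(x), oe_{1j}(1)] = oe_{ij}(x)$ used repeatedly above admits a quick numerical sanity check (illustrative Python, 0-based indices; the helpers \texttt{sigma}, \texttt{oe}, \texttt{comm} are ours and not part of the proof):

```python
import numpy as np

def sigma(k):          # 0-based version of the permutation 2i-1 <-> 2i
    return k + 1 if k % 2 == 0 else k - 1

def oe(N, i, j, z):    # oe_ij(z) on an N x N identity, 0-based indices
    m = np.eye(N, dtype=np.int64)
    m[i, j] += z
    m[sigma(j), sigma(i)] -= z
    return m

def comm(g, h):        # [g, h] = g h g^{-1} h^{-1}; inverses are integral
    gi = np.rint(np.linalg.inv(g)).astype(np.int64)
    hi = np.rint(np.linalg.inv(h)).astype(np.int64)
    return g @ h @ gi @ hi

# the law [oe_i1(x), oe_1j(1)] = oe_ij(x), here with 1-based (i, j) = (3, 5)
N, x = 6, 7
lhs = comm(oe(N, 2, 0, x), oe(N, 0, 4, 1))
assert np.array_equal(lhs, oe(N, 2, 4, x))
```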
\begin{pr} \label{vanderk1}
Let $R$ be a commutative ring and let $I$ be an
ideal of $R$. Then for $n \ge 3$ the following sequence is exact
\[ 1 \longrightarrow {\rm EO}_{2n}(R,I) \longrightarrow {\rm EO}^1_{2n}(R,I) \longrightarrow {\rm EO}^1_{2n}(R/I,0) \longrightarrow 1. \]
Thus ${\rm EO}_{2n}(R,I)$ equals ${\rm EO}^1_{2n}(R,I) \cap {\rm O}_{2n}(R,I)$.
\end{pr}
Proof: Let $f : {\rm EO}_{2n}^1(R, I) \longrightarrow {\rm EO}_{2n}^1(R/I, 0)$ be the natural map. Note that
$\ker(f) = {\rm EO}_{2n}^1(R,I) \cap {\rm O}_{2n}(R,I)$. We shall prove that $\ker(f) = {\rm EO}_{2n}(R,I)$.
Let $E = \prod_{k=1}^r
oe_{j_k 1}(x_k) oe_{1i_k}(a_k)$ be an element of $\ker(f)$. Then $E$ can be
written as $oe_{j_1 1}(x_1) \prod_{k=2}^r \gamma_k oe_{j_k 1}(x_k)
\gamma_k^{-1}$, where $\gamma_l =
\prod_{k=1}^{l-1}~oe_{1i_k}(a_k) \in {\rm EO}_{2n}(R)$, and hence $\ker(f)
\subseteq {\rm EO}_{2n}(R,I)$. The reverse inclusion follows from the fact
that ${\rm EO}_{2n}(R,I) \subseteq {\rm EO}_{2n}^1(R,I)$ (see Lemma \ref{subset}).
\hfill{$\square$}
\section{{\large Local Global Principle for Relative Elementary Group}}
In this section we prove Lemma \ref{ness4-E1} and Lemma \ref{E1-dil-strong}, which will be used in proving the main result in the final section.
In Lemma \ref{E1-dil-strong} we obtain the Local-Global Principle for an extended
ideal for the slightly larger group EO$_{2n}^1(R,I)$ rather than the relative group EO$_{2n}(R, I)$. This group was introduced in the linear case
by W. van der Kallen in \cite{vdK1}.
The Local-Global Principle for an extended
ideal in the linear, orthogonal and symplectic groups was proved in \cite{acr}.
The line of argument in the proofs below closely follows the Local-Global Principle for an extended ideal in the symplectic case
in \cite{cr2}. However, as far as the computational details are concerned, there are substantial deviations from \cite{cr2} in many steps.
\begin{lem} \label{ness4-E1} Let $R$ be a commutative ring and $I$ be an
ideal of $R$. Let $n \ge 3$. Let $\varepsilon =
\varepsilon_1 \ldots \varepsilon_r$ be an element of ${\rm EO}_{2n}^1(R,I)$,
where each $\varepsilon_k$ is an elementary generator. If
$oe_{ij}(Xf(X))$ is an elementary generator of ${\rm EO}_{2n}^1(R[X],I[X])$,
then
\begin{eqnarray*}
\varepsilon ~ oe_{ij}(Y^{4^r}Xf(Y^{4^r}X)) ~ \varepsilon^{-1}
&=& \prod_{t=1}^s oe_{i_t j_t}(Y h_t(X,Y)),
\end{eqnarray*}
where either $i_t=1$ or $j_t=1$ and $h_t(X,Y) \in R[X,Y]$, when
$i_t=1$; $h_t(X,Y) \in I[X,Y]$ when $j_t=1$.
\end{lem}
Proof: We prove the result by induction on $r$, where $\varepsilon$ is a
product of $r$ elementary generators. Let $r=1$ and $\varepsilon
= oe_{pq}(a)$. Note that $a \in R$ when $p=1$, and $a \in I$ when
$q=1$. Recall that $oe_{ij}(Xf(X))$ is an elementary generator of
${\rm EO}_{2n}^1(R[X],I[X])$. First we assume $i=1$; hence
$f(X) \in R[X]$.
{\it Case $($1$)$:} Let $(p,q)=(1,j)$. In this case
\begin{eqnarray*}
oe_{1j}(a) ~ oe_{1j}(Y^4X f(Y^4X)) ~ oe_{1j}(-a) &=& oe_{1j}(Y^4X f(Y^4X)).
\end{eqnarray*}
{\it Case $($2$)$:} Let $(p,q)=(1, \sigma(j))$. In this case
\begin{eqnarray*}
oe_{1 \sigma(j)}(a) ~ oe_{1j}(Y^4X f(Y^4X)) ~ oe_{1 \sigma(j)}(-a) &=& oe_{1j}(Y^4X f(Y^4X)).
\end{eqnarray*}
{\it Case $($3$)$:} Let $(p,q)=(1, k), k \ne j , \sigma(j)$. In this case
\begin{eqnarray*}
oe_{1 k}(a) ~ oe_{1j}(Y^4X f(Y^4X)) ~ oe_{1 k}(-a) &=& oe_{1j}(Y^4X f(Y^4X)).
\end{eqnarray*}
{\it Case $($4$)$:} Let $(p,q)=(j, 1)$. In this case
\begin{eqnarray*}
&& oe_{j 1}(a) ~ oe_{1j}(Y^4X f(Y^4X)) ~ oe_{j 1}(-a) \\
&=& ^{oe_{j1}(a)} [oe_{1k}(Y^2X f(Y^4X)), oe_{kj}(Y^2)] \\
&=& [oe_{jk}(aY^2X f(Y^4X)) ~ oe_{1k}(Y^2X f(Y^4X)), oe_{k1}(a Y^2) ~ oe_{kj}(Y^2)] \\
&=& oe_{jk}(a Y^2X f(Y^4X)) ~ oe_{1k}(Y^2X f(Y^4X)) ~ oe_{k1}(a Y^2) ~ oe_{kj}(Y^2) \\
&& oe_{1k}(-Y^2X f(Y^4X)) ~ oe_{jk}(- aY^2X f(Y^4X)) ~ oe_{kj}(-Y^2) ~ oe_{k1}(-a Y^2) \\
&=& oe_{jk}(aY^2X f(Y^4X)) ~ oe_{1k}(Y^2X f(Y^4X)) ~ oe_{k1}(a Y^2) ~ oe_{1k}(-Y^2X f(Y^4X)) \\
&& oe_{1k}(Y^2X f(Y^4X)) ~ oe_{kj}(Y^2) ~ oe_{1k}(-Y^2X f(Y^4X)) ~ oe_{kj}(-Y^2) ~ oe_{kj}(Y^2) \\
&& [oe_{j1}(-aY), oe_{1k}(Y X f(Y^4X))] ~ oe_{kj}(-Y^2) ~ oe_{k1}(-a Y^2) \\
&=& [oe_{j1}(aY), oe_{1k}(YX f(Y^4X))] ~ oe_{1k}(Y^2X f(Y^4X)) ~ oe_{k1}(aY^2) \\
&& oe_{1k}(Y^2X f(Y^4X)) oe_{1j}(Y^2X f(Y^4X)) [oe_{k1}(-aY^4) oe_{j1}(-aY), \\
&& oe_{1k}(Y X f(Y^4X)) ~ oe_{1j}(Y^3 X f(Y^4X))] oe_{k1}(-a Y^2)
\end{eqnarray*}
{\it Case $($5$)$:} Let $(p,q)=(\sigma(j), 1)$. In this case
\begin{eqnarray*}
&& oe_{\sigma(j) 1}(a) ~ oe_{1j}(Y^4X f(Y^4X)) ~ oe_{\sigma(j) 1}(-a) \\
&=& ^{oe_{\sigma(j) 1}(a)} [oe_{1k}(Y^2), oe_{kj}(Y^2X f(Y^4X))] \\
&=& [oe_{\sigma(j) k} (aY^2) ~ oe_{1k}(Y^2), oe_{kj}(Y^2X f(Y^4X))] \\
&=& oe_{\sigma(j) k} (aY^2) ~ oe_{1k}(Y^2) ~ oe_{kj}(Y^2X f(Y^4X)) ~ oe_{1k}(-Y^2) ~ oe_{\sigma(j) k} (-aY^2) \\
&& oe_{kj}(-Y^2X f(Y^4X)) \\
&=& oe_{\sigma(j) k} (aY^2) ~ oe_{1j} (Y^4X f(Y^4X)) ~ oe_{kj}(Y^2X f(Y^4X)) \\
&& [oe_{\sigma(j) 1}(-aY), oe_{1k}(Y)] ~ oe_{kj}(-Y^2X f(Y^4X)) \\
&=& [oe_{\sigma(j) 1}(aY), oe_{1k}(Y)] ~ oe_{1j} (Y^4X f(Y^4X))\\
&& [oe_{\sigma(j) 1}(-aY), oe_{1k}(Y) oe_{1j}(Y^3X f(Y^4X))]
\end{eqnarray*}
{\it Case $($6$)$:} Let $(p,q)=(k, 1), k \ne j , \sigma(j)$. In this case
\begin{eqnarray*}
&& oe_{k 1}(a) ~ oe_{1j}(Y^4X f(Y^4X)) ~ oe_{k 1}(-a) \\
&=& oe_{kj} (aY^4X f(Y^4X)) ~ oe_{1j}(Y^4X f(Y^4X)) \\
&=& [oe_{k1}(aY^2), oe_{1j}(Y^2X f(Y^4X))] ~ oe_{1j}(Y^4X f(Y^4X))
\end{eqnarray*}
Hence the result is true when $i=1$ and $\varepsilon$ is an elementary
generator. Carrying out a similar calculation one can show that the result is
true when $j=1$ and $\varepsilon$ is an elementary generator. Let us
assume that the result is true when $\varepsilon$ is a product of $r-1$
elementary generators, i.e., $\varepsilon_2 \ldots \varepsilon_r ~
oe_{ij}(Y^{4^{r-1}}X f(Y^{4^{r-1}}X)) ~ \varepsilon_r^{-1} \ldots
\varepsilon_2^{-1} = \prod_{t=1}^k oe_{p_t q_t}(Y g_t(X,Y))$, where
either $p_t=1$ or $q_t=1$. Note that $g_t(X,Y) \in R[X,Y]$ when
$p_t=1$ and $g_t(X,Y) \in I[X,Y]$ when $q_t=1$.
We now establish the result when $\varepsilon$ is a product of $r$
elementary generators. We have
\begin{eqnarray*}
&& \varepsilon ~ oe_{ij}(Y^{4^r}X f(Y^{4^r}X)) ~ \varepsilon^{-1} \\
& = & \varepsilon_1 \varepsilon_2 \ldots \varepsilon_r ~ oe_{ij}(Y^{4^r}X f(Y^{4^r}X))
~ \varepsilon_r^{-1} \ldots \varepsilon_2^{-1} \varepsilon_1^{-1} \\
& = & \varepsilon_1 ~ \big( \prod_{t=1}^k oe_{p_t q_t}(Y^4 g'_t(X,Y)) \big) ~ \varepsilon_1^{-1} \\
& = & \prod_{t=1}^k \varepsilon_1 ~ oe_{p_t q_t}(Y^4 g'_t(X,Y)) ~ \varepsilon_1^{-1} \\
& = & \prod_{t=1}^s oe_{i_t j_t}(Y h_t(X,Y)).
\end{eqnarray*}
To get the last equality one needs to repeat the calculation that was
done for a single elementary generator. Note that in the last line
either $i_t=1$ or $j_t=1$. Also note that $h_t(X,Y) \in R[X,Y]$ when
$i_t=1$, and $h_t(X,Y) \in I[X,Y]$ when $j_t=1$.
\hfill{$\square$}
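As a sanity check on Case $($6$)$, the identity $oe_{k1}(a)\, oe_{1j}(w)\, oe_{k1}(-a) = oe_{kj}(aw)\, oe_{1j}(w) = [oe_{k1}(aY^2), oe_{1j}(Y^2Xf)]\, oe_{1j}(Y^4Xf)$ with $w = Y^4 X f(Y^4X)$ can be verified for concrete values (illustrative Python, not part of the proof; 0-based indices, helpers are ours):

```python
import numpy as np

def sigma(k):
    return k + 1 if k % 2 == 0 else k - 1

def oe(N, i, j, z):    # oe_ij(z) on an N x N identity, 0-based indices
    m = np.eye(N, dtype=np.int64)
    m[i, j] += z
    m[sigma(j), sigma(i)] -= z
    return m

# Case (6) with concrete values: Y = 2, X = 3, f(Y^4 X) = 5, a = 7,
# and 1-based indices (k, j) = (3, 5), i.e. 0-based (2, 4)
N, Y, a = 6, 2, 7
w = Y**4 * 3 * 5                                      # w = Y^4 X f(Y^4 X)
lhs = oe(N, 2, 0, a) @ oe(N, 0, 4, w) @ oe(N, 2, 0, -a)
step = oe(N, 2, 4, a * w) @ oe(N, 0, 4, w)            # oe_kj(a w) oe_1j(w)
g, h = oe(N, 2, 0, a * Y**2), oe(N, 0, 4, Y**2 * 3 * 5)
gi = np.rint(np.linalg.inv(g)).astype(np.int64)
hi = np.rint(np.linalg.inv(h)).astype(np.int64)
rhs = (g @ h @ gi @ hi) @ oe(N, 0, 4, w)              # commutator form
assert np.array_equal(lhs, step) and np.array_equal(step, rhs)
```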
\begin{nt}
{\rm Let $M$ be a finitely presented $R$-module and $a$ be a
non-nilpotent element of $R$. Let $R_a$ denote the ring $R$
localised at the multiplicative set $\{a^i : i \ge 0 \}$ and $M_a$
denote the $R_a$-module $M$ localised at $\{a^i : i \ge 0 \}$. Let
$\alpha(X)$ be an element of ${\rm End}(M[X])$. The localization map $i:
M \to M_a$ induces a map $i^*: {\rm End}(M[X]) \to {\rm End}(M[X]_a)
= {\rm End}(M_a[X])$. We shall denote $i^*(\alpha(X))$ by $\alpha(X)_a$
in the sequel.}
\end{nt}
We need the following two lemmas.
\begin{lem} \label{equal-auto}
Let $M$ be a finitely presented $R$-module and $I$ be an ideal of
$R$. Let $\alpha(X), \beta(X) \in
{{\rm End}}(M[X],IM[X])=\ker({{\rm End}}(M[X]) \longrightarrow {{\rm End}}(M[X]/IM[X]))$, with
$\alpha(0)=\beta(0).$ Let $a$ be a non-nilpotent element in
$R$ such that $\alpha(X)_a = \beta(X)_a$ in
${{\rm End}}(M_a[X],IM_a[X])$. Then $\alpha(a^N X)=\beta(a^N X)$ in
${{\rm End}}(M[X],IM[X])$, for $N \gg 0$.
\end{lem}
\begin{lem} \label{E1-dil-strong} Let $R$ be a commutative ring and
$I$ be an ideal of $R$. Let $n \ge 3$. Let $a$ be a
non-nilpotent element in $R$ and $\alpha(X)$ be in
${\rm EO}_{2n}^1(R_a[X],I_a[X])$, with $\alpha(0)=Id$. Then there exists
$\alpha^*(X) \in {\rm EO}_{2n}^1(R[X],I[X])$, with $\alpha^*(0) = Id.$, such
that $\alpha^*(X)$ localises to $\alpha(bX)$, for $b \in (a^d)$, $d
\gg 0$.
\end{lem}
The proofs of Lemma 3.3 and Lemma 3.4 in \cite{cr2} work verbatim for Lemma \ref{equal-auto} and Lemma \ref{E1-dil-strong} above, respectively, and hence we omit them.
The following result was proved in \cite{acr}. We next apply Lemma \ref{E1-dil-strong} to record a different proof.
\begin{thm} \label{rel-dil-strong} Let $R$ be a commutative ring and
$I$ be an ideal of $R$. Let $n \ge 3$. Let $a$ be a
non-nilpotent element in $R$ and $\alpha(X)$ be in
${\rm EO}_{2n}(R_a[X],I_a[X])$, with $\alpha(0)=Id$. Then there exists
$\alpha^*(X) \in {\rm EO}_{2n}(R[X],I[X])$, with $\alpha^*(0) = Id.$, such that
$\alpha^*(X)$ localises to $\alpha(bX)$, for $b \in (a^d)$, $d \gg
0$.
\end{thm}
Proof: Follows from Proposition \ref{vanderk1} and Lemma \ref{E1-dil-strong}.
\hfill{$\square$}
\section{\large{Orthogonal Modules and Orthogonal Transvections}}
In this section we prove Theorem \ref{eql-rel}, which is the main result of this paper. We begin with a sequence of definitions.
\begin{de}
{\rm
Let $M$ be an $R$-module. A {\it bilinear form} on $M$ is a function
$\beta: M \times M \longrightarrow R$ such that $\beta(x, y)$ is $R$-linear as a function
of $x$ for fixed $y$, and $R$-linear as a function of $y$ for fixed $x$. The pair
$(M, \beta)$ is called a bilinear form module over $R$. We call $\beta$ an {\it inner product}
if it is non-degenerate, i.e., the natural map $M \longrightarrow M^*$ induced by $\beta$
is an isomorphism. In this case
the pair $(M, \beta)$ is called an inner product module over $R$. A bilinear form
or inner product $\beta$ is called {\it symmetric} if $\beta(x, y) = \beta(y, x)$ for all $x, y \in M$. An inner
product module $(M, \beta)$ is called an {\it inner product space} if $M$ is finitely
generated and projective over $R$.
}
\end{de}
\begin{de}
{\rm
An {\it orthogonal $R$-module} is a pair $(P,\langle , \rangle)$,
where $P$ is a finitely generated projective $R$-module of even rank
and $\langle , \rangle: P \times P \longrightarrow R$ is a non-degenerate symmetric bilinear form.
This is also known as a {\it symmetric inner product space}.}
\end{de}
\begin{de}
{\rm
Let $(P_1,\langle , \rangle_1)$ and $(P_2,\langle , \rangle_2)$ be
two orthogonal $R$-modules. Their {\it orthogonal sum} is the pair
$(P,\langle , \rangle)$, where $P=P_1 \oplus P_2$ and the inner product
is defined by $\langle (p_1,p_2),(q_1,q_2)\rangle = \langle
p_1,q_1\rangle_1 + \langle p_2,q_2\rangle_2$.}
\end{de}
\begin{de}
{\rm There is a non-degenerate symmetric bilinear form $\langle , \rangle$ on the
$R$-module $R \oplus R^*$, namely $\langle (a_1,f_1),
(a_2,f_2) \rangle = f_2(a_1) + f_1(a_2)$. The orthogonal module $R \oplus R^*$ with this
symmetric bilinear form is denoted by $\mathbb{H}(R)$ and is called the {\it hyperbolic plane}.
Note that $\mathbb{H}^n(R)$ is the orthogonal sum of $n$ copies of $\mathbb{H}(R)$.}
\end{de}
\begin{de}
{\rm An {\it isometry} of an orthogonal module $(P,\langle , \rangle)$ is
an automorphism of $P$ which preserves the bilinear form. The group of
isometries of $(P, \langle , \rangle)$ is denoted by
${\rm O}(P)$. }
\end{de}
\begin{de}
{\rm
Let $(P, \langle, \rangle)$ be an orthogonal module. Following H. Bass, an
{\it orthogonal transvection}
of $(P, \langle, \rangle)$ is an automorphism of the form
\begin{eqnarray*}
\tau(p) &=& p - \langle u , p \rangle v + \langle v , p \rangle u,
\end{eqnarray*}
where $u,v \in P$ are isotropic, i.e., $\langle u, u \rangle = \langle v, v \rangle = 0$,
with $\langle
u,v \rangle=0$, and either $u$ or $v$ is unimodular. It is easy to
check that $\langle \tau(p), \tau(q) \rangle = \langle p, q
\rangle$, i.e., $\tau \in {\rm O}(P)$, and that $\tau$ has inverse $\sigma(p) = p + \langle u, p
\rangle v - \langle v, p \rangle u$.
The subgroup of ${\rm O}(P)$ generated by the orthogonal
transvections is called the orthogonal transvection group and is denoted by ${\rm Trans}_{{\rm O}}(P, \langle , \rangle)$ (see
\cite{bass2} or \cite{HO}). }
\end{de}
{\bf From now on, $Q$ will denote $(R^2 \oplus P)$ with the induced form
on $(\mathbb{H}(R)~\oplus~P)$, and $Q[X]$ will denote $(R[X]^2 \oplus
P[X])$ with the induced form on $(\mathbb{H}(R[X])~\oplus~P[X])$.}
\begin{de} {\rm The orthogonal transvections of $Q=(R^2 \oplus P)$ of
the form
\begin{eqnarray*}
(a, b, p) & \mapsto & (a, b + \langle p, q \rangle, p-aq),
\end{eqnarray*}
or of the form
\begin{eqnarray*}
(a, b, p) & \mapsto & (a + \langle p, q \rangle, b, p-bq),
\end{eqnarray*}
where $a, b \in R$ and $p, q \in P$,
are called {\it elementary orthogonal transvections}. Let us denote the first
isometry by $\rho(q)$ and the second one by $\mu(q)$. It can
be verified that the elementary orthogonal transvections are orthogonal
transvections on $Q$. Indeed, consider $(u, v) =((0,1,0),(0,0,q))$ to get
$\rho(q)$ and consider $(u, v)= ((1,0,0),(0,0,q))$ to get $\mu(q)$.
The subgroup of ${\rm Trans}_{{\rm O}}(Q, \langle , \rangle)$ generated by
elementary orthogonal transvections is denoted by ${\rm ETrans}_{{\rm O}}(Q,
\langle , \rangle)$.}
\end{de}
\begin{de}
{\rm Let $I$ be an ideal of $R$. A {\it relative orthogonal
transvection} with respect to the ideal $I$ is an orthogonal
transvection of the form $\sigma(p) = p - \langle u, p \rangle v +
\langle v, p \rangle u$, where either $u \in IP$ or $v \in IP$. The group
generated by the relative orthogonal
transvections is denoted by ${\rm Trans}_{{\rm O}}(P,IP, \langle , \rangle)$.}
\end{de}
\begin{de} {\rm Let $I$ be an ideal of $R$. The elementary orthogonal
transvections of $Q$ of the form $\rho(q), \mu(q)$, where
$q \in IP$ are called {\it relative elementary
orthogonal transvections} to an ideal $I$.
The subgroup of ${\rm ETrans}_{{\rm O}}(Q, \langle , \rangle)$ generated by
relative elementary orthogonal transvections is denoted by
${\rm ETrans}_{{\rm O}}(IQ,\langle , \rangle )$. The normal closure of
${\rm ETrans}_{{\rm O}}(IQ,\langle , \rangle )$ in
${\rm ETrans}_{{\rm O}}(Q, \langle , \rangle)$ is denoted by
${\rm ETrans}_{{\rm O}}(Q,IQ, \langle , \rangle)$.}
\end{de}
\begin{rmk} \label{free}
Let $P=\oplus_{i=1}^{2n} Re_i$ be a free $R$-module with $R=2R$.
The non-degenerate symmetric bilinear form $\langle,\rangle$ on $P$
corresponds to a symmetric matrix $\varphi$ with
respect to the basis $\{e_1, e_2, \ldots, e_{2n} \}$ of $P$ and we write
$\langle p, q \rangle = p^t \varphi q$.
In this case the orthogonal transvection $\tau(p) = p - \langle u,
p \rangle v + \langle v, p \rangle u$ corresponds to the matrix
$(I_{2n} - v u^t \varphi + u v^t \varphi)$ and the group generated by them
is denoted by ${\rm Trans}_{{\rm O}}(P, \langle , \rangle_{\varphi})$.
Also in this case ${\rm ETrans}_{{\rm O}}(Q, \langle , \rangle_{\widetilde{\psi}_1 \perp \varphi})$ will
be generated by the matrices of the form $\rho_{\varphi}(q) =
\Big( \begin{smallmatrix} 1 & 0 & 0 \\ 0 & 1 & q^t \varphi \\ -q &
0 & I_{2n} \end{smallmatrix} \Big)$, and $\mu_{\varphi}(q) =
\Big( \begin{smallmatrix} 1 & 0 & q^t \varphi \\ 0 & 1 & 0 \\ 0 &
-q & I_{2n} \end{smallmatrix} \Big)$.
Note that for the standard symmetric matrix $\widetilde{\psi}_n$ and for $q=(q_1, \ldots, q_{2n}) \in
R^{2n}$ with $q^t \widetilde{\psi}_n q = 0$, we have
\begin{eqnarray} \label{relatn5}
\rho_{\widetilde{\psi}_n}(q) &=& \prod_{i=3}^{2n+2} oe_{i1}(-q_{i-2}), \\
\label{relatn6}
\mu_{\widetilde{\psi}_n}(q) &=& \prod_{i=3}^{2n+2} oe_{1i}(-q_{\sigma(i-2)}).
\end{eqnarray}
\end{rmk}
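The first displayed relation of the remark can be tested numerically for small $n$. The following illustrative Python sketch (0-based indices; the helpers \texttt{sigma}, \texttt{psi}, \texttt{oe}, \texttt{rho} are ours) checks, for an isotropic $q$, that $\rho_{\widetilde{\psi}_n}(q)$ equals the stated product of $oe_{i1}$'s and is an isometry of $\widetilde{\psi}_1 \perp \widetilde{\psi}_n = \widetilde{\psi}_{n+1}$:

```python
import numpy as np

def sigma(k):
    return k + 1 if k % 2 == 0 else k - 1

def psi(n):
    m = np.zeros((2*n, 2*n), dtype=np.int64)
    for k in range(2*n):
        m[k, sigma(k)] = 1
    return m

def oe(N, i, j, z):
    m = np.eye(N, dtype=np.int64)
    m[i, j] += z
    m[sigma(j), sigma(i)] -= z
    return m

def rho(n, q):         # matrix of rho_{psi_n}(q) on Q = R^2 + R^{2n}
    m = np.eye(2*n + 2, dtype=np.int64)
    m[1, 2:] = q @ psi(n)            # row 2: q^t psi_n
    m[2:, 0] = -q                    # column 1: -q
    return m

n = 2
q = np.array([1, 0, 2, 0], dtype=np.int64)
assert q @ psi(n) @ q == 0           # q is isotropic
# rho_{psi_n}(q) = prod_{i=3}^{2n+2} oe_{i1}(-q_{i-2})  (1-based indices)
prod = np.eye(2*n + 2, dtype=np.int64)
for i in range(2, 2*n + 2):
    prod = prod @ oe(2*n + 2, i, 0, -q[i - 2])
assert np.array_equal(rho(n, q), prod)
# and rho is an isometry of psi_{n+1} = psi_1 perp psi_n
assert np.array_equal(rho(n, q).T @ psi(n + 1) @ rho(n, q), psi(n + 1))
```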
In the following four lemmas we shall use the assumptions and notations in the
statement of Remark \ref{free}.
\begin{lem} \label{kopeiko}
Let $R$ be a commutative ring with $R=2R$, and $I$ be an ideal of $R$. Let $(P, \langle, \rangle)$
be an orthogonal $R$-module with $P$ a free $R$-module of rank $2n$, $n \ge 2$, and $Q= R^2
\oplus P$ with the induced form on $\mathbb{H}(R) \oplus P$. If the symmetric bilinear form $\langle, \rangle$ corresponds
(w.r.t. some basis) to $\widetilde{\psi}_n$, the standard symmetric matrix, then ${\rm Trans}_{{\rm O}}(Q,IQ,
\langle , \rangle_{\widetilde{\psi}_{n+1}}) = {\rm EO}_{2n+2}(R,I)$.
\end{lem}
Proof: See \S 2 of \cite{SK}. \hfill{$\square$}
\begin{lem} \label{free,psi} Let $R$ be a commutative ring with
$R=2R$, and let $I$ be an ideal of $R$. Let $(P, \langle, \rangle)$
be an orthogonal $R$-module with $P$ a free $R$-module of rank $2n$, $n \ge 2$, and $Q= R^2
\oplus P$ with the induced form on $\mathbb{H}(R) \oplus P$. If the symmetric bilinear form $\langle, \rangle$ corresponds
(w.r.t. some basis) to $\widetilde{\psi}_n$, the standard
symmetric matrix, then ${\rm ETrans}_{{\rm O}}(Q,IQ,
\langle , \rangle_{\widetilde{\psi}_{n+1}}) = {\rm EO}_{2n+2}(R,I)$.
\end{lem}
Proof: We first show ${\rm ETrans}_{{\rm O}}(Q,IQ, \langle ,
\rangle_{\widetilde{\psi}_{n+1}})$ is a subset of ${\rm EO}_{2n+2}(R,I)$. An element of ${\rm ETrans}_{{\rm O}}(Q,IQ, \langle ,
\rangle_{\widetilde{\psi}_{n+1}})$ is of the form $T_1(q) T_2(s) T_1(q)^{-1}$, where $q \in R^{2n}$, $s \in I^{2n} (\subseteq R^{2n})$. Here $T_1$ and $T_2$ can be either of $\rho_{\widetilde{\psi}_n}$ or $\mu_{\widetilde{\psi}_n}$.
Using equations (\ref{relatn5}) and (\ref{relatn6}) we
show that each such element belongs to ${\rm EO}_{2n+2}(R,I)$,
and hence ${\rm ETrans}_{{\rm O}}(Q,IQ, \langle , \rangle_{\widetilde{\psi}_{n+1}}) \subseteq
{\rm EO}_{2n+2}(R,I)$.
To show the other inclusion we recall that ${\rm EO}_{2n+2}(R,I)$ is generated
by the elements $g ~ oe_{ij}(x) g^{-1}$, where $g \in {\rm EO}_{2n+2}(R), x \in I$, and
either $i=1$ or $j=1$ (see Lemma \ref{equiv-defn}). Using the commutator
relation $[oe_{ik}(a), oe_{kj}(b)] = oe_{ij}(ab)$ and the
equations (\ref{relatn5}), (\ref{relatn6}) we can show that ${\rm EO}_{2n+2}(R,I)
\subseteq {\rm ETrans}_{{\rm O}}(Q,IQ, \langle , \rangle_{\widetilde{\psi}_{n+1}})$, and hence
the equality is established.
\hfill{$\square$}
\begin{lem} \label{phi=phi*,ab}
Let $P$ be a free $R$-module of rank $2n$. Let $(P,\langle ,
\rangle_{\varphi})$ and $(P,\langle , \rangle_{\varphi^*})$ be two
orthogonal $R$-modules with $\varphi= \varepsilon^t
\varphi^* \varepsilon$, for some $\varepsilon \in
{\rm GL}_{2n}(R)$. Then
\begin{eqnarray*}
{\rm Trans}_{{\rm O}}(P,\langle , \rangle_{\varphi}) &=& \varepsilon^{-1} ~
{\rm Trans}_{{\rm O}}(P,\langle , \rangle_{\varphi^*}) ~ \varepsilon,\\
{\rm ETrans}_{{\rm O}}(Q, \langle , \rangle_{\widetilde{\psi}_1 \perp \varphi}) &=& (I_2 \perp
\varepsilon)^{-1}~ {\rm ETrans}_{{\rm O}}(Q,\langle , \rangle_{\widetilde{\psi}_1 \perp \varphi^*}) ~ (I_2
\perp \varepsilon).
\end{eqnarray*}
\end{lem}
Proof: In the free case for orthogonal transvections we have
\begin{eqnarray*}
(I_{2n} - v u^t \varphi + u v^t \varphi) & = & \varepsilon^{-1} ~ (I_{2n} - \tilde{v} \tilde{u}^t \varphi^*
+ \tilde{u} \tilde{v}^t \varphi^*) ~ \varepsilon,
\end{eqnarray*}
where $\tilde{u}=\varepsilon u$ and $\tilde{v} =
\varepsilon v$. Hence the first equality follows.
For elementary orthogonal transvections we have
\begin{eqnarray*}
(I_2 \perp \varepsilon)^{-1} \rho_{\varphi^*}(q) (I_2 \perp \varepsilon) &=& \rho_{\varphi} (\varepsilon^{-1} q),\\
(I_2 \perp \varepsilon)^{-1} \mu_{\varphi^*}(q) (I_2 \perp \varepsilon) &=& \mu_{\varphi} (\varepsilon^{-1} q),
\end{eqnarray*}
hence the second equality follows.
\hfill{$\square$}
\begin{lem} \label{phi=phi*,rel} Let $I$ be an ideal of $R$ and $P$ be
a free $R$-module of rank $2n$. Let $(P,\langle , \rangle_{\varphi})$
and $(P,\langle , \rangle_{\varphi^*})$ be two orthogonal $R$-modules
with $\varphi= \varepsilon^t \varphi^* \varepsilon$, for some $\varepsilon \in {\rm GL}_{2n}(R)$. Then
\begin{eqnarray*}
{\rm Trans}_{{\rm O}}(P,IP,\langle , \rangle_{\varphi}) &=&
\varepsilon^{-1} ~ {\rm Trans}_{{\rm O}}(P,IP,\langle , \rangle_{\varphi^*}) ~ \varepsilon,\\
{\rm ETrans}_{{\rm O}}(Q,IQ,\langle , \rangle_{\widetilde{\psi}_1 \perp \varphi}) &=& (I_2 \perp
\varepsilon)^{-1} ~ {\rm ETrans}_{{\rm O}}(Q,IQ,\langle , \rangle_{\widetilde{\psi}_1 \perp \varphi^*}) ~ (I_2
\perp \varepsilon).
\end{eqnarray*}
\end{lem}
Proof: Using the three equations appearing in the proof of Lemma
\ref{phi=phi*,ab}, we get these equalities.
\hfill{$\square$}
\begin{pr} \label{local-case,rel} Let $R$ be a commutative ring with
$R=2R$, and let $I$ be an ideal of $R$. Let $(P,\langle ,
\rangle_{\varphi})$ be an orthogonal $R$-module with $P$ free of rank
$2n$, $n \ge 2$ and $Q = R^2 \oplus P$ with the induced form on $\mathbb{H}(R) \oplus P$. If $\varphi=\varepsilon^t \widetilde{\psi}_n \varepsilon$,
for some $\varepsilon \in {\rm GL}_{2n}(R)$, then
${\rm Trans}_{{\rm O}}(Q,IQ,\langle , \rangle_{\widetilde{\psi}_1 \perp \varphi}) =
{\rm ETrans}_{{\rm O}}(Q,IQ,\langle , \rangle_{\widetilde{\psi}_1 \perp \varphi})$.
\end{pr}
Proof: Using Lemma \ref{kopeiko}, Lemma \ref{free,psi}, and Lemma
\ref{phi=phi*,rel} we get,
\begin{eqnarray*}
{\rm Trans}_{{\rm O}}(Q,IQ,\langle , \rangle_{\widetilde{\psi}_1 \perp \varphi}) &=& (I_2
\perp \varepsilon)^{-1}~ {\rm Trans}_{{\rm O}}(Q,IQ,\langle , \rangle_{\widetilde{\psi}_{n+1}})
~ (I_2 \perp \varepsilon)\\
&=& (I_2 \perp \varepsilon)^{-1}~
{\rm EO}_{2+2n}(R,I) ~ (I_2 \perp \varepsilon),
\end{eqnarray*}
and
\begin{eqnarray*}
{\rm ETrans}_{{\rm O}}(Q,IQ,\langle , \rangle_{\widetilde{\psi}_1 \perp \varphi}) &=& (I_2 \perp
\varepsilon)^{-1} ~ {\rm ETrans}_{{\rm O}}(Q,IQ,\langle , \rangle_{\widetilde{\psi}_{n+1}}) ~ (I_2
\perp \varepsilon)\\
&=& (I_2 \perp \varepsilon)^{-1} ~ {\rm EO}_{2+2n}(R,I)
~ (I_2 \perp \varepsilon),
\end{eqnarray*}
and hence the equality is established.
\hfill{$\square$}
\begin{de}
{\rm
An orthogonal module $(P, \langle, \rangle)$ over the ring $R$ is called {\it split} if there
exists a submodule $N \subseteq P$ which is a direct summand of $P$
and which equals its orthogonal complement $N^{\perp} = \{ p \in P :
\langle p, n \rangle = 0 ~{\rm for~ all}~ n \in N \}$.
Moreover, an orthogonal module $(P, \langle, \rangle)$ over the ring $R$ is called {\it locally split} if
$(P_\mathfrak{m}, \langle, \rangle)$
is a split orthogonal $R_\mathfrak{m}$-module for every maximal ideal $\mathfrak{m}$ of $R$.
}
\end{de}
\begin{lem} \label{split&free} (See Lemma 6.3, Chapter I in \cite{MH})
Let $R$ be a ring such that every finitely generated projective module over $R$ is free. Then
an inner product space over $R$ is split if and only if it possesses a basis so that the associated
inner product matrix has the form $\big( \begin{smallmatrix} 0 & I \\ I & A \end{smallmatrix} \big)$.
If we also assume that $2$ is a unit in the ring, then every split inner product space has matrix
$\big( \begin{smallmatrix} 0 & I \\ I & 0 \end{smallmatrix} \big)$ with respect to a suitable basis.
\end{lem}
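The second statement of the lemma can be made explicit: for a symmetric block $A$, the change of basis $\varepsilon = \big(\begin{smallmatrix} I & -A/2 \\ 0 & I \end{smallmatrix}\big)$ carries $\big(\begin{smallmatrix} 0 & I \\ I & A \end{smallmatrix}\big)$ to $\big(\begin{smallmatrix} 0 & I \\ I & 0 \end{smallmatrix}\big)$, since $\varepsilon^t \big(\begin{smallmatrix} 0 & I \\ I & A \end{smallmatrix}\big) \varepsilon = \big(\begin{smallmatrix} 0 & I \\ I & X^t + X + A \end{smallmatrix}\big)$ for $\varepsilon = \big(\begin{smallmatrix} I & X \\ 0 & I \end{smallmatrix}\big)$. An illustrative Python check, with $A$ chosen with even entries so the computation stays integral:

```python
import numpy as np

# congruence of the split form [[0, I], [I, A]] with the hyperbolic form
# [[0, I], [I, 0]] via epsilon = [[I, -A/2], [0, I]]; A has even entries
# here so that -A/2 is an integer matrix in this toy check
I2 = np.eye(2, dtype=np.int64)
Z2 = np.zeros((2, 2), dtype=np.int64)
A = np.array([[2, 4], [4, 6]], dtype=np.int64)        # symmetric
split = np.block([[Z2, I2], [I2, A]])
hyper = np.block([[Z2, I2], [I2, Z2]])
eps = np.block([[I2, -A // 2], [Z2, I2]])
assert np.array_equal(eps.T @ split @ eps, hyper)
```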
\begin{rmk} \label{localcase} In view of Proposition \ref{local-case,rel}
and Lemma \ref{split&free}, for any
split orthogonal module $(P,\langle , \rangle_{\varphi})$ over a local
ring $(R,\mathfrak{m})$ with $R=2R$, we have ${\rm Trans}_{{\rm O}}(Q,IQ,\langle,\rangle_{\widetilde{\psi}_1
\perp \varphi} ) = {\rm ETrans}_{{\rm O}}(Q,IQ, \langle , \rangle_{\widetilde{\psi}_1
\perp \varphi} )$. Here $I$ is an ideal of the ring $R$.
\end{rmk}
Next we establish the dilation principle for the relative elementary orthogonal
transvection group.
\begin{lem} \label{rel-dilatn-ETrans_O} Let $R$ be a commutative ring
with $R=2R$, and let $I$ be an ideal of $R$. Let $(P,\langle ,
\rangle)$ be an orthogonal $R$-module with $P$ of rank
$2n$, $n \ge 2$, and let $Q=R^2 \oplus P$ carry the form induced from $\mathbb{H}(R) \oplus P$. Suppose that $a$ is a
non-nilpotent element of $R$ such that $P_a$ is a free $R_a$-module,
$(P_a, \langle, \rangle)$ is a split orthogonal $R_a$-module,
and the bilinear form $\langle, \rangle$ corresponds to the
symmetric matrix $\varphi$ (with respect to some basis). Let
$\alpha(X) \in {\rm Aut}(Q[X])$ with $\alpha(0) = Id$, and $\alpha(X)_a
\in {\rm ETrans}_{{\rm O}}(Q_a[X],IQ_a[X],\langle , \rangle_{\widetilde{\psi}_1 \perp
\varphi})$. Then there exists $\alpha^*(X) \in
{\rm ETrans}_{{\rm O}}(Q[X],IQ[X],\langle , \rangle)$, with $\alpha^*(0) =
Id.$, such that $\alpha^*(X)$ localises to $\alpha(bX)$, for $b \in
(a^d)$, $d \gg 0$.
Proof: We have $P_a \cong R_a^{2n}$. Let $e_1, \ldots, e_{2n+2}$ be the standard basis of $Q_a$, with respect to which the bilinear form on $Q_a$ corresponds to $\widetilde{\psi}_1 \perp \varphi$. Since $(P_a, \langle, \rangle)$ is a split orthogonal $R_a$-module with $R_a = 2R_a$, we have $\varphi = \varepsilon^t \widetilde{\psi}_n \varepsilon$, for some $\varepsilon \in {\rm GL}_{2n}(R_a)$ by Lemma \ref{split&free}. Therefore,
${\rm ETrans}_{{\rm O}}(Q_a[X], IQ_a[X], \langle, \rangle_{\widetilde{\psi}_1 \perp \varphi}) = (I_2 \perp \varepsilon)^{-1} {\rm EO}_{2n+2}(R_a[X], I_a[X]) (I_2 \perp \varepsilon)$ by Lemma \ref{free,psi}, and Lemma
\ref{phi=phi*,rel}. Hence, $\alpha(X)_a = (I_2 \perp \varepsilon)^{-1} ~\beta(X)~ (I_2 \perp \varepsilon)$, for some $\beta(X) \in {\rm EO}_{2n+2}(R_a[X], I_a[X])$, with $\beta(0)=Id.$ By Lemma \ref{vanderk1} we have
\begin{eqnarray*}
{\rm EO}_{2n+2}(R_a[X], I_a[X]) &=& {\rm EO}_{2n+2}^1(R_a[X], I_a[X]) ~\cap {\rm O}_{2n+2}(R_a[X], I_a[X]).
\end{eqnarray*}
Hence we can write $\beta(X) = \prod_t \gamma_t ~ oe_{i_t j_t}(X f_t(X)) ~ \gamma_t^{-1}$, where either $i_t =1$, or $j_t=1$, and $\gamma_t \in {\rm EO}_{2n+2}^1(R_a, I_a)$. Note that $f_t(X) \in R_a[X]$, when $i_t=1$ and $f_t(X) \in I_a[X]$, when $j_t=1$. Using Lemma \ref{ness4-E1} we get $\beta(Y^{4^r}X) = \prod_k oe_{i_k j_k}(Y h_k(X,Y)/ a^m)$, with either $i_k =1$ or $j_k=1$. Note that $h_k(X,Y) \in R[X,Y]$, when $i_k=1$ and $h_k(X,Y) \in I[X,Y]$, when $j_k=1$. We have
\begin{eqnarray*}
oe_{1 j_k}(Y h_k(X,Y)/ a^m) &=& I_{2n+2} - (-Y h_k(X,Y)/ a^m) ~ e_1 ~ e_{\sigma(j_k)}^t ~ \widetilde{\psi}_{n+1} \\
&& + (-Y h_k(X,Y)/ a^m) ~ e_{\sigma(j_k)} ~ e_1^t ~ \widetilde{\psi}_{n+1}, ~ {\rm for} ~ j_k \ge 3, \\
oe_{i_k 1}(Y h_k(X,Y)/ a^m) &=& I_{2n+2} - (-Y h_k(X,Y)/ a^m) ~ e_{i_k} ~ e_2^t ~ \widetilde{\psi}_{n+1}\\
&& + (- Y h_k(X,Y)/ a^m) ~ e_2 ~ e_{i_k}^t ~ \widetilde{\psi}_{n+1}, ~ {\rm for} ~ i_k \ge 3.
\end{eqnarray*}
Let $\varepsilon_1, \ldots, \varepsilon_{2n}$ be the columns of the matrix $\varepsilon \in {\rm GL}_{2n}(R_a)$. Let $\widetilde{e_i}$ denote the column vector $(I_2 \perp \varepsilon) e_i$ of length $2n+2$. Note that $\widetilde{e_1}=e_1, \widetilde{e_2}=e_2$, and $\widetilde{e_i}^t = (0, 0, \varepsilon_{i-2}^t)$, for $i \ge 3$. Using Lemma \ref{phi=phi*,ab} we can write $\alpha(Y^{4^r}X)_a$ as a product of elements of the form
\begin{eqnarray*}
& I_{2n+2} - (-Y h_k(X,Y)/ a^m) \widetilde{e_1} \widetilde{e}_{\sigma(j_k)}^t \left( \begin{smallmatrix} \widetilde{\psi}_1 & 0 \\ 0 & \varphi \end{smallmatrix} \right ) + (-Y h_k(X,Y)/ a^m) \widetilde{e}_{\sigma(j_k)} \widetilde{e_1}^t \left ( \begin{smallmatrix} \widetilde{\psi}_1 & 0 \\ 0 & \varphi \end{smallmatrix} \right )& \\
&= \mu_{\varphi}((Y h_k(X,Y)/ a^m) \varepsilon_{\sigma(j_k) -2}), & \\
& I_{2n+2} - (-Y h_k(X,Y)/ a^m) \widetilde{e}_{i_k} \widetilde{e_2}^t \left ( \begin{smallmatrix} \widetilde{\psi}_1 & 0 \\ 0 & \varphi \end{smallmatrix} \right ) + (-Y h_k(X,Y)/ a^m) \widetilde{e_2} \widetilde{e}_{i_k}^t \left ( \begin{smallmatrix} \widetilde{\psi}_1 & 0 \\ 0 & \varphi \end{smallmatrix} \right ) & \\
&= \rho_{\varphi}(-(Y h_k(X,Y)/ a^m) \varepsilon_{i_k -2}), &
\end{eqnarray*}
for $i_k, j_k \ge 3$. Note that $\alpha(Y^{4^r}X)_a \in {\rm ETrans}_{{\rm O}}(Q_a[X, Y],IQ_a[X, Y],\langle , \rangle_{\widetilde{\psi}_1 \perp
\varphi})$, hence $\alpha(Y^{4^r}X)_a = id ~{\rm mod}~ (IQ_a[X, Y])$. Since $\rho_{\varphi}$ and $\mu_{\varphi}$ satisfy the splitting property $\rho_{\varphi}(q_1 + q_2) = \rho_{\varphi}(q_1) \rho_{\varphi}(q_2)$ and $\mu_{\varphi}(q_1 + q_2) = \mu_{\varphi}(q_1) \mu_{\varphi}(q_2)$, we get that $\alpha(Y^{4^r}X)_a$ is a product of elements of the form $T_1((Y f_k(X, Y)/a^m) \varepsilon_{p_k}) T_2((Y g_k(X, Y)/a^m) \varepsilon_{q_k}) T_1(-(Y f_k(X, Y)/a^m) \varepsilon_{p_k})$, where $T_1, T_2$ are either $\rho_{\varphi}$ or $\mu_{\varphi}$, $f_k(X, Y) \in R[X, Y]$, $g_k(X, Y) \in I[X, Y]$, and $p_k , q_k \ge 3$.
Let $s \ge 0$ be an integer such that $\widetilde{\varepsilon_i} = a^s \varepsilon_i \in P$ for all $i = 1, \ldots, 2n$. Let $d = s+m$. Therefore $\alpha((a^{d}Y)^{4^r} X)_a$ is a product of elements of the form $T_1((a^d Y f_k(X, a^dY)/a^m) \varepsilon_{p_k}) T_2((a^d Y g_k(X, a^d Y)/a^m) \varepsilon_{q_k}) T_1(-(a^d Y f_k(X, a^d Y)/a^m) \varepsilon_{p_k})$.
Substituting $Y=1$ we get that $\alpha(a^dX)_a$ is a product of elements of the form
\begin{eqnarray*}
T_1(a^s f'_k(X) \varepsilon_{p_k}) T_2(a^s g'_k(X) \varepsilon_{q_k}) T_1(-a^s f'_k(X) \varepsilon_{p_k}).
\end{eqnarray*}
Let us set $\alpha^*(X)$ to be the product of elements of the form
\begin{eqnarray*}
T_1(f'_k(X) \widetilde{\varepsilon}_{p_k}) T_2(g'_k(X) \widetilde{\varepsilon}_{q_k}) T_1(- f'_k(X) \widetilde{\varepsilon}_{p_k}).
\end{eqnarray*}
From the construction it is clear that $\alpha^*(X)$ belongs to ${\rm ETrans}_{{\rm O}}(Q[X], IQ[X], \langle, \rangle)$, $\alpha^*(0)=Id.$, and $\alpha^*(X)$ localises to $\alpha(bX)$, for some $b \in (a^d), d \gg 0$.
\hfill{$\square$}
\begin{lem} \label{LG-ETrans-rel} Let $R$ be a commutative ring with
$R=2R$, and let $I$ be an ideal of $R$. Let $(P,\langle , \rangle)$
be a locally split orthogonal $R$-module with $P$ of rank $2n$, $n \ge 2$,
and let $Q = R^2 \oplus P$ carry the form induced from $\mathbb{H}(R) \oplus P$.
Let $\alpha(X) \in {{\rm O}}(Q[X])$, with $\alpha(0) = Id$. If $\alpha(X)_\mathfrak{m} \in
{\rm ETrans}_{{\rm O}}(Q_\mathfrak{m}[X],IQ_\mathfrak{m}[X],\langle , \rangle_{\widetilde{\psi}_1 \perp
\varphi_\mathfrak{m}})$, for each maximal ideal $\mathfrak{m}$ of $R$, then
$\alpha(X) \in {\rm ETrans}_{{\rm O}}(Q[X],IQ[X],\langle , \rangle)$.
\end{lem}
Proof: One can suitably choose an element $a_\mathfrak{m}$ from $R \setminus
\mathfrak{m}$ such that $\alpha(X)_{a_\mathfrak{m}}$ belongs to
${\rm ETrans}_{{\rm O}}(Q_{a_\mathfrak{m}}[X],IQ_{a_\mathfrak{m}}[X])$. Let us set $\gamma(X,Y) =
\alpha(X+Y) \alpha(Y)^{-1}$. Note that $\gamma(X,Y)_{a_\mathfrak{m}}$ belongs
to ${\rm ETrans}_{{\rm O}}(Q_{a_\mathfrak{m}}[X,Y],IQ_{a_\mathfrak{m}}[X,Y])$, and $\gamma(0,Y) =
Id$. From Lemma \ref{rel-dilatn-ETrans_O} it follows that $\gamma(b_\mathfrak{m}
X,Y) \in {\rm ETrans}_{{\rm O}}(Q[X,Y],IQ[X,Y])$, for $b_\mathfrak{m} \in (a_\mathfrak{m}^d)$, where $d
\gg 0$. Note that the ideal generated by $a_\mathfrak{m}^d$'s is the whole ring
$R$. Therefore, $c_1 a_{\mathfrak{m}_1}^d+ \cdots + c_k a_{\mathfrak{m}_k}^d = 1$, for
some $c_i \in R$. Let $b_{\mathfrak{m}_i}= c_i a_{\mathfrak{m}_i}^d \in (a_{\mathfrak{m}_i}^d)$. It is
easy to see that $\alpha(X)=\prod_{i=1}^{k-1}\gamma(b_{\mathfrak{m}_i}X,T_i)
\gamma(b_{\mathfrak{m}_k}X,0)$, where $T_i = b_{\mathfrak{m}_{i+1}}X+ \cdots +
b_{\mathfrak{m}_k}X$. Each term on the right-hand side of this expression belongs
to ${\rm ETrans}_{{\rm O}}(Q[X], IQ[X])$, and hence $\alpha(X) \in {\rm ETrans}_{{\rm O}}(Q[X],
IQ[X])$.
\hfill{$\square$}
\medskip
We now establish the equality of the orthogonal transvection group and the
elementary orthogonal transvection group (relative to an
ideal) of a locally split orthogonal $R$-module
with $R=2R$. An absolute version of this result (i.e., when $I=R$)
was proved in \cite{bbr} (see Theorem 3.10). Before proving the main
result we establish a lemma
showing that orthogonal transvections are homotopic to the identity.
\begin{lem} \label{ortho-trans-hom-to-identity}
Let $(P, \langle, \rangle)$ be an orthogonal $R$-module and $\alpha \in
{\rm Trans}_{{\rm O}}(P, \langle, \rangle)$. Then there exists $\beta(X) \in
{\rm Trans}_{{\rm O}}(P[X], \langle, \rangle)$ such that $\beta(1)=\alpha$ and
$\beta(0)=Id.$
\end{lem}
Proof: As $\alpha \in {\rm Trans}_{{\rm O}}(P, \langle, \rangle)$, it is
a product of orthogonal transvections of the form $\tau$, where
$\tau$ takes $p \in P$ to $p - \langle u, p \rangle v + \langle v, p
\rangle u$, for $u, v \in P$
isotropic with $\langle u, v \rangle = 0$ and either $u$ or
$v$ unimodular. Define $\tau X$ as the map which takes $p \in P$
to either $p - \langle u, p \rangle vX + \langle vX, p \rangle u$ or
$p - \langle uX, p \rangle v + \langle v, p \rangle uX$, according to
whether $u$ or $v$ is unimodular. Here $uX$
denotes $u$ {\it times} $X$, and similarly for $vX$; note that
$uX, vX \in P[X]$. We set $\beta(X)$ to be the product of
elements of the form $\tau X$, whenever $\tau$ appears in the
expression of $\alpha$. Then $\beta(1)=\alpha$ and $\beta(0)=Id$.
\hfill{$\square$}
\begin{thm} \label{eql-rel} Let $R$ be a commutative ring with $R=2R$,
and let $I$ be an ideal of $R$. Let $(P, \langle , \rangle)$ be a
locally split orthogonal $R$-module with $P$ of rank $2n$, $n \ge 2$,
and let $Q = R^2 \oplus P$ carry the form induced from $\mathbb{H}(R) \oplus P$.
Then ${\rm Trans}_{{\rm O}}(Q,IQ,\langle , \rangle) = {\rm ETrans}_{{\rm O}}(Q,IQ,\langle , \rangle)$.
\end{thm}
Proof: We have ${\rm ETrans}_{{\rm O}}(Q,IQ,\langle,\rangle) \subseteq
{\rm Trans}_{{\rm O}}(Q,IQ,\langle,\rangle)$, so it remains to show the
other inclusion. Let $\alpha \in {\rm Trans}_{{\rm O}}(Q,IQ,\langle,\rangle)$. By
Lemma \ref{ortho-trans-hom-to-identity} there exists $\alpha(X)$ in
${\rm Trans}_{{\rm O}}(Q[X],IQ[X],\langle,\rangle)$ such that $\alpha(1) =
\alpha$ and $\alpha(0) = Id$. Note that
${\rm Trans}_{{\rm O}}(Q_{\mathfrak{m}}[X],IQ_{\mathfrak{m}}[X],\langle,\rangle_{\widetilde{\psi}_1 \perp
\varphi_{\mathfrak{m}}}) = {\rm ETrans}_{{\rm O}}(Q_{\mathfrak{m}}[X],IQ_{\mathfrak{m}}[X],\langle,
\rangle_{\widetilde{\psi}_1 \perp \varphi_{\mathfrak{m}}})$, for each maximal ideal $\mathfrak{m}$
of $R$ (follows from Remark \ref{localcase}). Hence $\alpha(X)_\mathfrak{m}$
belongs to
${\rm ETrans}_{{\rm O}}(Q_{\mathfrak{m}}[X],IQ_{\mathfrak{m}}[X],\langle,\rangle_{\widetilde{\psi}_1 \perp
\varphi \otimes R_{\mathfrak{m}}[X]})$, for each maximal ideal $\mathfrak{m}$ of
$R$. Therefore, $\alpha(X) \in
{\rm ETrans}_{{\rm O}}(Q[X],IQ[X],\langle,\rangle)$ (see Lemma
\ref{LG-ETrans-rel}). Substituting $X=1$ we get the result.
\hfill{$\square$}
\medskip
In closing we make Remark \ref{final-rmk} below
for which we need the following elementary observation.
\begin{lem} \label{local-split}
Let $(P, \langle, \rangle)$ be a split orthogonal $R$-module. Then $(P_\mathfrak{m}, \langle, \rangle)$
is a split orthogonal $R_\mathfrak{m}$-module for every maximal ideal $\mathfrak{m}$ of $R$.
\end{lem}
Proof: Let us consider an equivalent form of the definition of split orthogonal $R$-modules,
as stated in \S 6, Chapter I, \cite{MH}. The orthogonal module $(P, \langle, \rangle)$ is
split if it is a direct sum of two submodules $M$ and $N$ that are dually paired by the
inner product,
\begin{eqnarray*}
M \stackrel{\cong}{\longrightarrow} {\rm Hom}_R(N,R), ~{\rm and}~ N \stackrel{\cong}{\longrightarrow} {\rm Hom}_R(M,R),
\end{eqnarray*}
and such that $N$ is self-orthogonal, i.e., $\langle N, N \rangle= 0$. Tensoring with $R_\mathfrak{m}$ we get
$P_\mathfrak{m} = M_\mathfrak{m} \oplus N_\mathfrak{m}$. Moreover, since $P$ is projective, both $M$ and $N$ are
projective and hence finitely presented ($M$ being finitely presented means there is an exact
sequence $R^k \longrightarrow R^l \longrightarrow M \longrightarrow 0$, for suitable natural numbers $k,l$). Therefore, by
Proposition 2.13" in Chapter I, \cite{lam} we get
\begin{eqnarray*}
M_\mathfrak{m} \stackrel{\cong}{\longrightarrow} {\rm Hom}_{R_\mathfrak{m}}(N_\mathfrak{m}, R_\mathfrak{m}), ~{\rm and}~ N_\mathfrak{m}
\stackrel{\cong}{\longrightarrow} {\rm Hom}_{R_\mathfrak{m}}(M_\mathfrak{m},R_\mathfrak{m})
\end{eqnarray*}
Also, since $N$ is self-orthogonal, $N_\mathfrak{m}$ is self-orthogonal, and hence
$(P_\mathfrak{m}, \langle, \rangle)$ is a split orthogonal $R_\mathfrak{m}$-module for every maximal ideal $\mathfrak{m}$
of $R$.
\hfill{$\square$}
\begin{rmk}\label{final-rmk} In view of the above lemma, Theorem \ref{eql-rel} also holds when
$(P, \langle , \rangle)$ is assumed only to be a split orthogonal $R$-module.
\end{rmk}
\medskip
\noindent {\bf Acknowledgement:} The author thanks the Department of
Science and Technology, Govt. of India, for the INSPIRE Faculty Award
[IFA-13 MA-24] that supported this work.
% https://arxiv.org/abs/2101.02299
\title{Enumerating Labeled Graphs that Realize a Fixed Degree Sequence}
\begin{abstract}
A finite non-increasing sequence of positive integers $d = (d_1\geq \cdots\geq d_n)$ is called a degree sequence if there is a graph $G = (V,E)$ with $V = \{v_1,\ldots,v_n\}$ and $\deg(v_i)=d_i$ for $i=1,\ldots,n$. In that case we say that the graph $G$ realizes the degree sequence $d$. We show that the exact number of labeled graphs that realize a fixed degree sequence satisfies a simple recurrence relation. Using this relation, we then obtain a recursive algorithm for the exact count. We also show that in the case of regular graphs the complexity of our algorithm is better than the complexity of the same enumeration that uses generating functions.
\end{abstract}
\section*{Introduction}
A finite non-increasing sequence of positive integers
$d_1\geq \cdots\geq d_n$ is called a \emph{degree sequence} if there
is a graph $(V,E)$ with $V = \{v_1,\ldots,v_n\}$ and $\deg(v_i)=d_i$
for $i=1,\ldots,n$. In that case, we say that the graph $G$
\emph{realizes the degree sequence} $d$. In this article, in
Theorem~\ref{thm:main} we give a remarkably simple recurrence relation
for the exact number of labeled graphs that realize a fixed degree
sequence $(d_1,\ldots,d_n)$. We also give an algorithm and a concrete
implementation to explicitly count three classes of labeled graphs for
a moderate number of vertices.
There is an extensive volume of research on the asymptotics of the
number of graphs that realize a fixed degree sequence. We refer the
reader to Wormald's excellent ICM lecture~\cite{Wormald18} for a
comprehensive survey, and to the references therein. As for the exact number
of graphs that realize a fixed degree sequence, Read obtains
enumeration formulas in~\cite{Read59} and \cite{Read60} as
applications of Polya's
\emph{Hauptsatz}~\cite{Polya39,PolyaRead87}. However, Read himself
admits
\begin{quote}
``It may readily be seen that to evaluate the above expressions in
particular cases may involve an inordinate amount of
computation.''~\cite[Sect.7]{Read60}
\end{quote}
However, in our search of the literature we found no explicit
complexity analysis of Read's formulas. On the other hand, in~\cite{McKay83}
McKay writes explicit generating polynomials whose complexities can
readily be calculated, and in which coefficients of certain monomials
yield the exact number of different classes of labeled graphs. In
particular, he writes a generating polynomial (see
Equation~\eqref{eq:McKay}), whose computational complexity is
$\C{O}(2^{n^2/2})$, in which the coefficient of the monomial
$x_1^m\cdots x_n^m$ gives the exact count of labeled $m$-regular
graphs on $n$-vertices.
Our recurrence relation works for all degree sequences, but for those
degree sequences where there is a uniform upper limit $m$ for the
degrees, our algorithm has the worst-case complexity of
$\C{O}(n^{mn})$. This means, in the specific case of $m$-regular
graphs we achieve a better complexity than generating polynomials.
While the factorial-like worst-case complexity of the
enumeration~\eqref{eq:enumeration} may render practical calculations
difficult, the fact that it is recursive allows us to employ
computational tactics such as \emph{dynamic
programming}~\cite{Bellman57,LewMauch07} or
\emph{memoization}~\cite{Michie68} to achieve better average
complexity. We explore this avenue in our implementation given in the
Appendix. To demonstrate the versatility of our recurrence
relation and the resulting implementation, we tabulate the number of
$m$-regular labeled graphs, the number of labeled graphs that realize
the same degree sequence with binary trees, and the number of labeled
graphs that realize the same degree sequence with complete bipartite
graphs.
One can also read a given degree sequence $(d_1,\ldots,d_n)$ as a
partition of $N = \sum_i d_i$. The Erdős-Gallai
Theorem~\cite{ErdosGallai, Chodum:ErdosGallai} tells us when such a
partition is realizable as a degree sequence; alternatively, one can use
the Havel-Hakimi algorithm to decide whether a given partition is
realizable~\cite{Havel55,Hakimi62}. One could also use our enumeration
to decide whether a given degree sequence is realizable, but admittedly
the Havel-Hakimi algorithm has a much better complexity.
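As an illustration, the Havel-Hakimi test takes only a few lines. The following Python sketch is our own illustrative translation (the paper's code in the Appendix is in Common Lisp, and the name \texttt{is\_realizable} is ours); it decides realizability by repeatedly satisfying the largest remaining degree:

```python
def is_realizable(seq):
    """Havel-Hakimi test: decide whether `seq` is the degree
    sequence of some simple graph."""
    d = sorted((x for x in seq if x > 0), reverse=True)
    while d:
        k = d.pop(0)        # largest remaining degree
        if k > len(d):      # not enough other vertices left
            return False
        for i in range(k):  # connect to the k next-largest vertices
            d[i] -= 1
        d = sorted((x for x in d if x > 0), reverse=True)
    return True

print(is_realizable([3, 3, 2, 2, 2]))  # True
print(is_realizable([3, 3, 3, 1]))     # False
```

For example, $(3,3,3,1)$ is not realizable: on four vertices each degree-$3$ vertex must be adjacent to all others, forcing the remaining vertex to have degree $3$ as well.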
\subsection*{Plan of the article} We prove our recurrence relation,
analyze its complexity and compare it with generating functions for
regular graphs in Section~\ref{Sect:Enumeration}. We present explicit
calculations we made in Section~\ref{Sect:Calculations}, and the code
we used to perform the calculations in the Appendix.
\subsection*{Notations and conventions}
We assume all graphs are simple, labeled, and undirected throughout
the article.
\subsection*{Acknowledgments}
This article was written while the author was on academic leave at
Queen’s University in Canada from Istanbul Technical University. The
author would like to thank both universities for their support.
\section{Enumerating Graphs That Realize a Fixed Degree Sequence}
\label{Sect:Enumeration}
\subsection{The recurrence relation}
Assume $d=(d_1,\ldots,d_n)$ is a non-increasing sequence of strictly
positive integers $d_i>0$. Let us consider the trivial cases first:
It is clear that there is a single graph on the empty sequence
$\epsilon$: the empty graph. Also, in case $n=1$, the only case for
which the sequence $(d_1)$ is realized by a graph is when $d_1=0$
which is excluded by our assumption. So the count is 0 for every
sequence $(d_1)$ with $d_1>0$. We also exclude the case where the sum
$\sum_i d_i$ is odd, since such sequences cannot be realized as degree
sequences because of the Handshake Lemma.
We assume $n>1$ and the sum $\sum_i d_i$ is even. If we consider the
vertex $x_n$ we see that it needs to be connected to exactly $d_n$
vertices in the set $\{x_1,\ldots,x_{n-1}\}$. We need to consider the
set of all subsets of $\{x_1,\ldots,x_{n-1}\}$ of size $d_n$ to
enumerate all possibilities. So, let $S$ be an arbitrary subset of
$\{1,\ldots,n-1\}$ of size $d_n$, and let $\chi_S$ be the
characteristic function of the set $S$. Every graph in which $x_n$ is
connected to each vertex in $\{x_i\mid i\in S\}$ realizes the same
degree sequence
\begin{equation}
\label{eq:1}
(d_1 - \chi_S(1),\ldots, d_{n-1} - \chi_S(n-1))
\end{equation}
if we remove $x_n$ and all the edges connected to $x_n$. Let us write
$(d_1,\ldots,d_{n-1})\slash S$ for the sequence~\eqref{eq:1} after we
reorder the sequence in descending order and remove all 0's. Let
$C((d_1,\ldots,d_n))$ be the number of graphs that realize the
degree sequence $(d_1,\ldots,d_n)$. Thus we obtain:
\begin{theorem}\label{thm:main}
The total number of labeled graphs that realize the degree sequence
$(d_1,\ldots,d_n)$ satisfies the recurrence relation
\begin{equation}
\label{eq:enumeration}
C((d_1,\ldots,d_n)) = \sum_{S\in\binom{\{1,\ldots,n-1\}}{d_n}} C((d_1,\ldots,d_{n-1})\slash S)
\end{equation}
where we write $\binom{X}{k}$ for the set of all subsets of size $k$
of a set $X$.
\end{theorem}
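The recurrence~\eqref{eq:enumeration} translates directly into code. The following Python sketch is our own illustrative translation (the paper's implementation, in Common Lisp, is given in the Appendix; all names here are ours); it memoizes on the sorted degree tuple:

```python
from functools import lru_cache
from itertools import combinations

@lru_cache(maxsize=None)
def _count(d):
    """C(d) for a non-increasing tuple d of positive integers,
    via the recurrence of Theorem 1."""
    if not d:
        return 1                 # the empty graph
    if sum(d) % 2 == 1:
        return 0                 # Handshake Lemma
    total = 0
    # connect the last vertex to every size-d[-1] subset of the others
    for S in combinations(range(len(d) - 1), d[-1]):
        rest = list(d[:-1])
        for i in S:
            rest[i] -= 1
        total += _count(tuple(sorted((x for x in rest if x > 0),
                                     reverse=True)))
    return total

def graph_count(degrees):
    """Number of labeled graphs realizing the given degree sequence."""
    return _count(tuple(sorted((x for x in degrees if x > 0),
                               reverse=True)))

print(graph_count([2, 2, 2]))     # 1: the labeled triangle
print(graph_count([2, 2, 2, 2]))  # 3: the three labelings of the 4-cycle
```

For instance, $C((2,2,2,2)) = 3$, matching the three labelings of the $4$-cycle.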
\subsection{The complexity analysis}
In~\cite{McKay83} McKay writes a generating polynomial
\begin{equation}
\label{eq:McKay}
f(x) = \prod_{1\leq i<j\leq n} (1 + x_i x_j)
\end{equation}
in which the coefficient of the monomial $x_1^{m}\cdots x_n^{m}$
yields the number of $m$-regular graphs on $n$-vertices. If we assume
the complexity of the calculation is given by the number of
multiplications in the product, then the computational complexity of
the generating polynomial is $\C{O}(2^{n^2/2})$.
Let $d=(d_1,\ldots,d_n)$ be a degree sequence, and let us use
$\# C(d)$ for the total number of summands (which is the number of
leaves in the recursion tree) in $C(d)$ in
Equation~\eqref{eq:enumeration} which will be the complexity measure
for our enumeration.
\begin{proposition}\label{prop:complexity}
Assume there is a fixed upper bound $m$ for the degrees in $d$.
Then the complexity of the enumeration given in Theorem~\ref{thm:main} is
$\C{O}(n^{mn})$. In particular, the enumeration complexity for
$m$-regular graphs is also $\C{O}(n^{mn})$.
\end{proposition}
\begin{proof}
As long as $2m\leq n$, the function $\binom{n}{m}$ is increasing in
$m$. Then
\begin{align}
\label{eq:10}
\# C((d_1,\ldots,d_n))
= & \sum_{S\in\binom{\{1,\ldots,n-1\}}{d_n}}\# C((d_1,\ldots,d_{n-1})\slash S)\\
\leq & \binom{n}{m} \max_{S\in 2^{\{1,\ldots,n-1\}}}\# C((d_1,\ldots,d_{n-1})\slash S)\\
\leq & \cdots \leq \binom{n}{m}\cdots \binom{2m}{m} \max_{S\in 2^{\{1,\ldots,2m-1\}}} \# C((d_1,\dots,d_{2m-1})\slash S) \\
\leq & \binom{n}{m}^{n-2m} C_m
\end{align}
for some constant $C_m$. Since $m$ is fixed and $\binom{n}{m}$ is
of order $n^m$ we get that the number of summands in
$C((d_1,\ldots,d_n))$ is $\C{O}(n^{m(n-2m)}) = \C{O}(n^{mn})$.
\end{proof}
One can easily see that the enumeration complexity
of~\eqref{eq:enumeration} obtained in
Proposition~\ref{prop:complexity} is better than the complexity of the
generating polynomial for regular graphs. However, we still have to
work around the fact that the worst-case complexity is factorial-like
with a constant exponent. Fortunately, one can employ powerful
computational tactics such as dynamic programming or memoization to
improve average complexity of the enumeration since it is
recursive. See the Appendix for how we used memoization to improve
average complexity of our calculations.
\section{Explicit Calculations}
\label{Sect:Calculations}
Let us start with calculating an explicit example by hand. The degree
sequence of the complete graph $K_n$ on $n$-vertices is the constant
sequence $n-1$ of length $n$. Since one has only one subset of
$\{1,\ldots,n-1\}$ of size $n-1$ we get that
\begin{align}\label{eq:13}
C((\underbrace{n-1,\ldots,n-1}_\text{$n$-times}))
= C((\underbrace{n-2,\ldots,n-2}_\text{$n-1$-times}))
= \cdots = C((1,1)) = C(\epsilon) = 1.
\end{align}
In other words, the labeled complete graph $K_n$ is the only graph
with that specific degree sequence.
\subsection{Enumerating regular graphs}
An $m$-regular graph on $n$-vertices is similar to $K_{m+1}$ in that
it is a graph on $n$-vertices where every vertex has the same constant
degree $m$
\begin{equation}
\label{eq:3}
(\underbrace{m,\ldots,m}_{\text{$n$-times}}).
\end{equation}
Now, let us write
\begin{equation}
\label{eq:4}
R(n,m) = C((\underbrace{m,\ldots,m}_{\text{$n$-times}})).
\end{equation}
We need to note that $R(n,m)=0$ when $m\geq n$, or when both $n$ and
$m$ are odd since in these cases there are no graphs that can realize
the sequences given in~\eqref{eq:3}.
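A few small values of $R(n,m)$ can be reproduced directly from the recurrence of Theorem~\ref{thm:main}. The following self-contained Python sketch is our own illustration (the paper's code is in Common Lisp; the function names are ours):

```python
from functools import lru_cache
from itertools import combinations

@lru_cache(maxsize=None)
def count(d):
    """C(d) for a non-increasing tuple d of positive degrees
    (the recurrence of Theorem 1)."""
    if not d:
        return 1
    if sum(d) % 2 == 1:
        return 0
    total = 0
    for S in combinations(range(len(d) - 1), d[-1]):
        rest = list(d[:-1])
        for i in S:
            rest[i] -= 1
        total += count(tuple(sorted((x for x in rest if x > 0),
                                    reverse=True)))
    return total

def R(n, m):
    """Number of labeled m-regular graphs on n vertices."""
    return count((m,) * n)

print(R(4, 2), R(5, 2), R(6, 3), R(7, 2))  # 3 12 70 465
```

For example, $R(5,2) = 12 = 4!/2$ counts the labeled $5$-cycles, and $R(7,2) = 465$ splits as $360$ labeled $7$-cycles plus $105$ disjoint unions of a labeled triangle and a $4$-cycle.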
We calculated $R(n,m)$ for $1\leq n\leq 30$ and $1\leq m\leq 8$. The
results for $1\leq m\leq 5$ took about 2 minutes while the cases for
$6\leq m\leq 8$ took about 10 minutes on a moderate
computer\footnote{On an Intel i5-8250U CPU working at 1.60GHz with 8Gb
of RAM on a Linux operating system.}. These calculations strongly
indicate the average complexity of the enumeration algorithm with
memoization is much better than the worst-case complexity.
We tabulated the results for $2\leq n\leq 15$ in Table~\ref{table:1}.
A smaller version of the tables can be found
at~\cite[pg. 279]{Comtet:AdvancedCombinatorics}, and as the sequence
A295193 at OEIS~\cite{OEIS}. The individual sequences for
$m=1,\ldots,6$ in Table~\ref{table:1} are respectively the sequences
A001147, A001205, A002829, A005815, A338978 and A339847 at OEIS.
\begin{table}[t]
\footnotesize
\begin{tabular}{|r|l|l|l|l|l|}\hline
$n$ & $m=1$ & $m=2$ & $m=3$ & $m=4$ & $m=5$\\\hline\hline
2 & 1 &0 & 0 & 0 & 0 \\ \hline
3 & 0 &1 & 0 & 0 & 0 \\ \hline
4 & 3 &3 & 1 & 0 & 0 \\ \hline
5 & 0 &12 & 0 & 1 & 0 \\ \hline
6 & 15 &70 & 70 & 15 & 1 \\ \hline
7 & 0 &465 & 0 & 465 & 0 \\ \hline
8 & 105 &3507 & 19355 & 19355 & 3507 \\ \hline
9 & 0 &30016 & 0 & 1024380 & 0 \\ \hline
10 & 945 &286884 & 11180820 & 66462606 & 66462606 \\ \hline
11 & 0 &3026655 & 0 & 5188453830 & 0 \\ \hline
12 & 10395 &34944085 & 11555272575 & 480413921130 & 2977635137862 \\ \hline
13 & 0 &438263364 & 0 & 52113376310985 & 0 \\ \hline
14 & 135135 &5933502822 & 19506631814670 & 6551246596501035 & 283097260184159421 \\ \hline
15 & 0 &86248951243 & 0 & 945313907253606891 & 0 \\ \hline
\end{tabular}
\begin{tabular}{|r|l|l|l|l|}\hline
$n$ & $m=6$ & $m=7$ & $m=8$ \\\hline\hline
2 & 0 & 0 & 0 \\ \hline
3 & 0 & 0 & 0 \\ \hline
4 & 0 & 0 & 0 \\ \hline
5 & 0 & 0 & 0 \\ \hline
6 & 0 & 0 & 0 \\ \hline
7 & 1 & 0 & 0 \\ \hline
8 & 105 & 1 & 0 \\ \hline
9 & 30016 & 0 & 1 \\ \hline
10 & 11180820 & 286884 & 945 \\ \hline
11 & 5188453830 & 0 & 3026655 \\ \hline
12 & 2977635137862 & 480413921130 & 11555272575 \\ \hline
13 & 2099132870973600 & 0 & 52113376310985 \\ \hline
14 & 1803595358964773088 & 1803595358964773088 & 283097260184159421 \\ \hline
15 & 1872726690127181663775 & 0 & 1872726690127181663775 \\ \hline
\end{tabular}
\normalsize
\vspace{2mm}
\caption{The number of labeled $m$-regular graphs on $n$-vertices
for $m=1,\ldots,8$.}\label{table:1}
\end{table}
\subsection{Enumerating graphs that realize the same degree sequences as
binary trees}
Any binary tree with $k+1$ leaves has $k-1$ internal vertices of
degree 3. Thus any such tree has the degree sequence
\begin{equation}
\label{eq:5}
(\underbrace{3,\ldots,3}_{\text{$k-1$-times}},\underbrace{1,\ldots,1}_{\text{$k+1$-times}}).
\end{equation}
Now, let
\begin{equation}
\label{eq:6}
T(k) = \binom{2k}{k-1} C((\underbrace{3,\ldots,3}_{\text{$k-1$-times}},\underbrace{1,\ldots,1}_{\text{$k+1$-times}}))
\end{equation}
be the number of labeled graphs that realize the degree sequence
given in~\eqref{eq:5}. Notice that we include a correction factor
$\binom{2k}{k-1}$ since in the original enumeration $C(d)$ vertices
are not allowed to change degree. In the case of regular graphs, one
does not need a correction factor since every vertex has the same
degree.
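The first few values of $T(k)$ defined in~\eqref{eq:6} can be checked with a short self-contained Python sketch (our own illustration; function names are ours):

```python
from functools import lru_cache
from itertools import combinations
from math import comb

@lru_cache(maxsize=None)
def count(d):
    """C(d) for a non-increasing tuple d of positive degrees
    (the recurrence of Theorem 1)."""
    if not d:
        return 1
    if sum(d) % 2 == 1:
        return 0
    total = 0
    for S in combinations(range(len(d) - 1), d[-1]):
        rest = list(d[:-1])
        for i in S:
            rest[i] -= 1
        total += count(tuple(sorted((x for x in rest if x > 0),
                                    reverse=True)))
    return total

def T(k):
    """Labeled graphs with the degree sequence of a binary tree
    on 2k vertices, including the binomial correction factor."""
    return comb(2 * k, k - 1) * count((3,) * (k - 1) + (1,) * (k + 1))

print(T(2), T(3))  # 4 90
```

For instance, $T(2) = \binom{4}{1}\cdot C((3,1,1,1)) = 4$, since the star $K_{1,3}$ is the only graph realizing $(3,1,1,1)$.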
Using our implementation of the enumeration algorithm we calculated
these numbers on the same setup described above. The results were
computed almost immediately; they are given in
Table~\ref{table:3}.
\begin{table}[h]
\footnotesize
\begin{tabular}{{|r|l|}}\hline
$k$ & $T(k)$ \\\hline\hline
1 & 1 \\\hline
2 & 4 \\\hline
3 & 90 \\\hline
4 & 8400 \\\hline
5 & 1426950 \\\hline
6 & 366153480 \\\hline
7 & 134292027870 \\\hline
8 & 67095690261600 \\\hline
9 & 43893900947947050 \\\hline
10 & 36441011093916429000 \\\hline
11 & 37446160423265535041100 \\\hline
12 & 46669357647008722700474400 \\\hline
13 & 69367722399061403579194432500 \\\hline
14 & 121238024532751529573125745790000 \\\hline
15 & 246171692450596203263023527657431250 \\\hline
\end{tabular}
\normalsize
\vspace{2mm}
\caption{The number of labeled graphs that realize the same degree
sequence as any binary tree on $2k$ vertices.}\label{table:3}
\end{table}
In~\cite[pg.6]{Moon70} the number of labeled trees on $n$ vertices
that realize a fixed degree sequence $(d_1,\ldots,d_n)$ is calculated
as
\begin{equation}
\label{eq:11}
\binom{n-2}{d_1-1,\ldots,d_n-1}
\end{equation}
which differs from our count for the case
$(d_1,\ldots,d_n)$ given as~\eqref{eq:5}. But note that Moon's
formula enumerates only trees that realize a particular degree
sequence while we count all graphs.
\subsection{Enumerating graphs that realize the same degree sequences as
complete bipartite graphs}
We fix two positive integers $n\leq m$. A complete bipartite graph
$K_{n,m}$ contains $n+m$ vertices, which are split into two disjoint
sets, say black and white. Black vertices are connected to every
white vertex and vice versa, but vertices of the same color are not
connected. Any such graph would have the degree sequence
\begin{equation}
\label{eq:7}
(\underbrace{m,\ldots,m}_{\text{$n$-times}},\underbrace{n,\ldots,n}_{\text{$m$-times}})
\end{equation}
Let us write
\begin{equation}
\label{eq:8}
K(n,m) =
\begin{cases}
\binom{n+m}{n} C((\underbrace{m,\ldots,m}_{\text{$n$-times}},\underbrace{n,\ldots,n}_{\text{$m$-times}})) & \text{ if } n\neq m\\
C((\underbrace{n,\ldots,n}_{\text{$2n$-times}})) & \text{ if } n=m
\end{cases}
\end{equation}
for the number of labeled graphs that realize the degree sequence
given in~\eqref{eq:7} for every $m\geq 2$ and $1\leq n\leq m$. We
tabulated the results for $2\leq m\leq 10$ and $2\leq n\leq 6$ in
Table~\ref{table:4}. The first column of Table~\ref{table:4} is
sequence A002061 at OEIS.
\begin{table}
\centering
\footnotesize
\begin{tabular}{{|r|l|l|l|l|l|l|l|}}\hline
$m$ & $n=2$ & $n=3$ & $n=4$ & $n=5$ & $n=6$ \\\hline
2 & 3 & & & & \\ \hline
3 & 7 & 70 & & & \\ \hline
4 & 13 & 553 & 19355 & & \\ \hline
5 & 21 & 3211 & 527481 & 66462606 & \\ \hline
6 & 31 & 13621 & 10649191 & 6445097701 & 2977635137862 \\ \hline
7 & 43 & 44962 & 153984573 & 466128461506 & 1051046246482968 \\ \hline
8 & 57 & 123145 & 1601363093 & 24363074013321 & 277358348828368109 \\ \hline
9 & 73 & 293293 & 12389057785 & 905113150135831 & 53355534127828683775 \\ \hline
10 & 91 & 627571 & 74598011761 & 23985623638038361 & 7334781492338569314961 \\ \hline
\end{tabular}
\normalsize
\vspace{2mm}
\caption{The number of labeled graphs that realize the same degree
sequence as the complete bipartite graph
$K_{n,m}$.}\label{table:4}
\end{table}
\section*{Appendix: The Code}
Since our implementation is simple and short, we list the code we
used for our calculations in Figure~\ref{fig:1} in this Appendix, so
that our results can be reproduced and verified.
We implemented our enumeration using Common Lisp~\cite{Steel:CLTL} and
a suitable memoization to control the depth of the recursive
calls. However, due to efficiency issues of the data structures we
use, our degree sequences are non-decreasing instead of being
non-increasing. We used SBCL version 2.0.11 to run the Lisp
code~\cite{SBCL}.
Each call of our enumeration reduces the problem to a finite number of
shorter degree sequences. For memoization we keep a global table of
already computed results: when the count for a shorter degree sequence
is needed and has already been computed on another branch of the
recursion, we recall the stored result instead of calculating it from
scratch.
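For cross-checking, the same recursion can be transcribed in Python (this sketch is ours and is not part of the published implementation; \texttt{functools.lru\_cache} plays the role of the global memoization table):

```python
from functools import lru_cache
from itertools import combinations

@lru_cache(maxsize=None)
def graph_count(ds):
    # ds: non-decreasing tuple of positive degrees
    if not ds:
        return 1          # the empty graph realizes the empty sequence
    if sum(ds) % 2:
        return 0          # odd degree sum: no realization
    d, rest = ds[0], ds[1:]
    total = 0
    # join the minimum-degree vertex to each possible d-subset of the
    # remaining vertices; this partitions the graphs being counted
    for nbrs in combinations(range(len(rest)), d):
        ys = list(rest)
        for i in nbrs:
            ys[i] -= 1
        total += graph_count(tuple(sorted(y for y in ys if y > 0)))
    return total

def k_nm_sequence(n, m):
    # degree sequence of K_{n,m}, stored non-decreasingly as in our code
    return tuple(sorted([m] * n + [n] * m))
```

On the small entries of Table~\ref{table:4}, this sketch reproduces the counts $3$ and $7$ for the degree sequences of $K_{2,2}$ and $K_{2,3}$.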
\begin{figure}[h]
\begin{lstlisting}
(defun subsets (k xs) ; all k-element subsets of the list xs
(cond ((null xs) 'nil)
((= 1 k) (loop for x in xs collect (list x)))
(t (union (subsets k (cdr xs))
(mapcar (lambda (x) (cons (car xs) x))
(subsets (1- k) (cdr xs)))))))
\end{lstlisting}
\begin{lstlisting}
(defun new-degree-sequence (ds is) ; decrement degrees at positions is, drop zeros
(let ((ys (copy-list (cdr ds))))
(dolist (i is) (decf (nth i ys)))
(delete 0 (sort ys #'<))))
\end{lstlisting}
\begin{lstlisting}
(let ((table (make-hash-table :test #'equal)))
(defun graph-count (ds) ; memoized count of graphs realizing ds
(cond
((null ds) 1)
((oddp (reduce #'+ ds)) 0)
(t (or (gethash ds table)
(setf (gethash ds table)
(let* ((index-set (loop for i from 0 below (1- (length ds))
collect i))
(all-subsets (subsets (car ds) index-set)))
(loop for xs in all-subsets sum
(graph-count (new-degree-sequence ds xs))))))))))
\end{lstlisting}
\caption{Common Lisp implementation of the enumeration algorithm
given in Theorem~\ref{thm:main}.}\label{fig:1}
\end{figure}
| {
"timestamp": "2021-01-08T02:03:58",
"yymm": "2101",
"arxiv_id": "2101.02299",
"language": "en",
"url": "https://arxiv.org/abs/2101.02299",
"abstract": "A finite non-increasing sequence of positive integers $d = (d_1\\geq \\cdots\\geq d_n)$ is called a degree sequence if there is a graph $G = (V,E)$ with $V = \\{v_1,\\ldots,v_n\\}$ and $deg(v_i)=d_i$ for $i=1,\\ldots,n$. In that case we say that the graph $G$ realizes the degree sequence $d$. We show that the exact number of labeled graphs that realize a fixed degree sequence satisfies a simple recurrence relation. Using this relation, we then obtain a recursive algorithm for the exact count. We also show that in the case of regular graphs the complexity of our algorithm is better than the complexity of the same enumeration that uses generating functions.",
"subjects": "Combinatorics (math.CO)",
"title": "Enumerating Labeled Graphs that Realize a Fixed Degree Sequence"
} |
https://arxiv.org/abs/1301.0095 | A New Proof of Kemperman's Theorem | Let $G$ be an additive abelian group and let $A,B \subseteq G$ be finite and nonempty. The pair $(A,B)$ is called critical if the sumset $A+B = \{a+b \mid a \in A \mbox{ and } b\in B\}$ satisfies $|A+B| < |A| + |B|$. Vosper proved a theorem which characterizes all critical pairs in the special case when $|G|$ is prime. Kemperman generalized this by proving a structure theorem for critical pairs in an arbitrary abelian group. Here we give a new proof of Kemperman's Theorem. | \section{Introduction}
Throughout this paper we shall assume that $G$ is an additive abelian group. For subsets $A,B \subseteq G$, we define the \emph{sumset} of $A$ and $B$ to be $A + B = \{ a + b \mid \mbox{$a \in A$ and $b \in B$} \}.$ If $g \in G$ we let $g + A = \{g\} + A$ and $A + g = A + \{g\}$. The \emph{complement} of $A$ is the set $\overline{A} = G \setminus A$, and we let $-A = \{ -a \mid a \in A \}$.
The classical direct problem for addition in groups asks: how small can the sumset $A + B$ be? If $G \cong \mathbb{Z}$ (or, more generally, $G$ is torsion-free), it is not difficult to argue that $| A + B | \geq | A | + | B | - 1$ holds for every pair of finite nonempty sets $(A,B)$. In 1813 Cauchy proved that this assertion remains true when the order of $G$ is prime and $A + B \neq G$. This result was rediscovered by Davenport in 1935, and it is now known as the Cauchy-Davenport theorem.
\begin{theorem}[Cauchy \cite{cauchy} - Davenport \cite{davenport}]
\label{thm:CD}
If $A,B \subseteq \mathbb{Z} / p\mathbb{Z}$ are nonempty and $p$ is prime then
\[ |A+B| \ge \min \{ p, |A| + |B| - 1 \}. \]
\end{theorem}
For arbitrary abelian groups we cannot expect such a lower bound. For instance, if $H$ is a finite proper nontrivial subgroup of $G$ and $A=B=H$, then $A+B=H$, so $|A+B| = |H| < 2|H| - 1 = |A| + |B| - 1$. So any generalization of Theorem \ref{thm:CD} will have to take subgroup structure into account. Next we introduce an important theorem of Kneser which yields a generalization of Cauchy-Davenport to arbitrary abelian groups.
We define the \emph{stabilizer} of a subset $A\subseteq G$, denoted $\stab{A}$, to be the subgroup of $G$ defined by $\stab{A}= \{ g \in G \mid g + A = A \}.$ Note that $A$ is a union of $\stab{A}$-cosets, and $\stab{A}$ is the maximal subgroup of $G$ with this property. For a subgroup $H\leq G$, we say that a subset $A$ is \emph{$H$-stable} if $A+H=A$ (equivalently, $H \le \stab{A}$).
\begin{theorem}[Kneser \cite{kneser}, version I]
\label{thm:Kneser}
If $A$ and $B$ are finite nonempty subsets of $G$ and $H = \stab{A+B}$, then
\begin{equation} \label{eq:Kneser}
|A+B| \ge |A + H| + |B+H| - |H|.
\end{equation}
\end{theorem}
To better understand Kneser's theorem, let us introduce some further notation. Whenever $H \le G$ we let $\varphi_{G/H}$ denote the canonical homomorphism from $G$ to the quotient group $G/H$. Now for $H = \stab{A+B}$ let $\tilde{A}=\varphi_{G/H}(A)$ and $\tilde{B}=\varphi_{G/H}(B)$. By definition we have $|A+B|=|\tilde{A}+\tilde{B}||H|$, $|A+H|=|\tilde{A}||H|$ and $|B+H|=|\tilde{B}||H|$. Using these simple equalities, we can express (\ref{eq:Kneser}) as $|\tilde{A}+\tilde{B}|\geq |\tilde{A}|+|\tilde{B}|-1$, which is precisely the Cauchy-Davenport lower bound, now in $G/H$.
Define the \emph{deficiency} of a pair $(A,B)$ to be $\delta(A,B) = |A| + |B| - |A+B|$. We will say that a pair $(A,B)$ is \emph{critical} if $\delta (A,B)> 0$. The Cauchy-Davenport Theorem implies that, apart from the case when $A+B= \mathbb{Z} / p\mathbb{Z}$, all critical pairs in $\mathbb{Z} / p\mathbb{Z}$ satisfy $\delta(A,B)=1$. Meanwhile, Kneser's theorem asserts that for a critical pair $(A,B)$ in $G$, the pair $(\tilde{A},\tilde{B})$ of $G/H$ as defined above, will be critical with deficiency $\delta(\tilde{A},\tilde{B})=1$. Indeed, Kneser's Theorem is equivalent to the statement that every critical pair $(A,B)$ with $H = G_{A+B}$ satisfies $|A+B| = |A+H| + |B+H| - |H|$.
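As a concrete illustration (a toy computation of our own, not part of the development), Kneser's bound can be verified mechanically for a critical pair in a small cyclic group:

```python
from itertools import product

n = 12  # work in G = Z/12

def sumset(A, B):
    return {(a + b) % n for a, b in product(A, B)}

def stabilizer(S):
    # subgroup of elements translating S onto itself
    return {g for g in range(n) if {(g + s) % n for s in S} == S}

A = {0, 1, 6}       # the subgroup {0, 6} plus the extra element 1
B = {0, 6}          # the subgroup H = {0, 6} itself
S = sumset(A, B)    # A + B = {0, 1, 6, 7}
H = stabilizer(S)   # stab(A+B) = {0, 6}
# Kneser: |A+B| >= |A+H| + |B+H| - |H|
assert len(S) >= len(sumset(A, H)) + len(sumset(B, H)) - len(H)
```

Here $\delta(A,B) = 3 + 2 - 4 = 1 > 0$, so $(A,B)$ is critical, and the bound holds with equality ($4 = 4 + 2 - 2$), as Kneser's Theorem requires for critical pairs.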
Now we shall turn our attention to the structure of critical pairs. One simple construction for a critical pair $(A,B)$ is to choose $A,B$ so that $\min\{ |A|, |B| \} = 1$. A second, more interesting construction is to choose $A$ and $B$ to be arithmetic progressions with a common difference. In 1956 Vosper proved the following theorem which characterizes critical pairs in groups of prime order, and these structures feature prominently in his result.
\begin{theorem}[Vosper \cite{vosper1} \cite{vosper2}, version I]
\label{thm:vosper1}
If $(A,B)$ is a critical pair of nonempty subsets of $\mathbb{Z}/p\mathbb{Z}$ and $p$ is prime, then one of the following holds.
\begin{enumerate}
\item $|A| + |B| > p$ and $A+B = \mathbb{Z}/p\mathbb{Z}$.
\item $|A| + |B| = p$ and $|A+B| = p-1$.
\item $\min\{|A|, |B| \} = 1$.
\item $A$ and $B$ are arithmetic progressions with a common difference.
\end{enumerate}
\end{theorem}
In 1960 Kemperman proved a structure theorem which characterizes critical pairs in an arbitrary abelian group. Although this theorem was published a few years after Vosper's, it took some time before it achieved the recognition and attention it deserved. This resulted in part from the inherent complexity of critical pairs, and in part from the difficult nature of Kemperman's paper. Recently, this situation has improved considerably thanks to the work of Grynkiewicz \cite{grynkiewicz-qpk} \cite{grynkiewicz-step}, Lev \cite{lev-kemp}, and Hamidoune \cite{hamidoune-kemperman} \cite{hamidoune-structure}. Grynkiewicz recasts Kemperman's Theorem and then takes a step further by characterizing those pairs $(A,B)$ with $|A+B| = |A| + |B|$. Lev gives a more convenient ``top-down'' formulation of Kemperman's Theorem which we shall adopt here. Finally, Hamidoune showed that all of these results could be achieved using the isoperimetric method.
Here we shall give a new proof of Kemperman's theorem based on some recent work of the second author which generalizes Kemperman's Theorem to arbitrary groups. Although this generalization leans heavily on the isoperimetric method, we shall not adopt those techniques here. Instead we will exploit Kneser's theorem, thus making our proof rather closer in spirit to Kemperman's original than to any of these more recent works. Our paper also differs from the existing literature in its statement of Kemperman's Theorem. The main difference is that we work with triples of subsets instead of pairs, which has the effect of reducing the number of configurations we need to consider.
The remainder of this paper is organized as follows. Over the next two sections, we reduce the original classification problem to a classification problem for certain types of triples of subsets. Section~\ref{sec:critrios} contains our new statement of Kemperman's theorem, and the remaining sections are devoted to its proof.
\section{Pure Pairs}
We define a pair $(A,B)$ to be \emph{pure} if $G_A = G_B = G_{A+B}$. Our main goal in this section is to reduce our original problem to that of classifying pure critical pairs. However, we shall first address some of the uninteresting constructions of critical pairs.
Consider the behaviour appearing in the first outcome of Theorem~\ref{thm:vosper1}, in the context of a general abelian group. If $A,B \subseteq G$ satisfy $|A| + |B| > |G|$, then every $g \in G$ satisfies $B \cap (g-A) \neq \emptyset$ , and it follows that $A+B = G$. So every such pair will be critical. Therefore, the critical pairs $(A,B)$ with $A+B = G$ are precisely those for which $|A| + |B| > |G|$. Accordingly, we will call such pairs \emph{trivial}. Another rather uninteresting construction of a critical pair $(A,B)$ is to take exactly one of $A$ or $B$ to be empty. So, we will also call a pair $(A,B)$ \emph{trivial} if either $A = \emptyset$ or $B = \emptyset$, and we will generally restrict our attention to nontrivial critical pairs.
Now we turn our attention to the notion of pure.
\begin{observation}
\label{pureobs}
If $G_{A+B} = H$, then $(A+H, B+H)$ is pure.
\end{observation}
\begin{proof}
This follows from $H \le G_{A+H} \le G_{A+H + B} = G_{A+B} = H$ and a similar chain of inequalities for $G_{B+H}$.
\end{proof}
Note that by our discussion from the previous section, every pure critical pair $(A,B)$ satisfies $|A+B| = |A| + |B| - |G_{A+B}|$. Next we will show that the problem of classifying critical pairs reduces to that of classifying pure critical pairs. In short, critical pairs $(A,B)$ are at most $G_{A+B}$ elements away from a pure critical pair $(A^*,B^*)$ where $A \subseteq A^*$ and $B \subseteq B^*$. This \emph{superpair} / \emph{subpair} relation is denoted $(A,B) \subseteq (A^*,B^*)$.
\begin{proposition}
For every nontrivial pair of finite subsets $(A,B)$ of $G$ the following are equivalent.
\begin{enumerate}
\item The pair $(A,B)$ is critical.
\item There exists a pure critical superpair $(A^*,B^*) \supseteq (A,B)$ for which
$|A^* \setminus A| + |B^* \setminus B| < |G_{A^*+B^*}|$.
\end{enumerate}
\end{proposition}
\begin{proof}
If (1) holds, then set $H = G_{A+B}$ and note that Observation \ref{pureobs} implies that $A^* = A+H$ and $B^* = B+H$ have $(A^*,B^*)$ pure and critical. Thus $|A| + |B| > |A+B| = |A^* + B^*| = |A^*| + |B^*| - |H|$ so (2) holds.
If (2) holds, then set $H = G_{A^* + B^*}$, let $z \in A^* + B^*$ and choose $a \in A^*$ and $b \in B^*$ with $a+b = z$. Now, for every $h \in H$ the elements $a' = a+h$ and $b' = b-h$ satisfy $a' \in A^*$ and $b' \in B^*$ and $a' + b' = z$. So, $z$ has at least $|H|$ distinct representations as a sum of an element in $A^*$ and an element in $B^*$. It follows from this and $|A^* \setminus A| + |B^* \setminus B| < |H|$ that $A+B = A^* + B^*$. This gives us $|A+B| = |A^* + B^*| = |A^*| + |B^*| - |H| > |A| + |B|,$ so $(A,B)$ is critical and (1) holds.
\end{proof}
In light of the above proposition, to classify all critical pairs, it suffices to classify the nontrivial pure critical pairs.
\section{Trios}
\label{sec:trio}
In the study of critical pairs, there is a third set which appears naturally in conjunction with $A$ and $B$, namely $C = \overline{-(A+B)}$. For simplicity, let us assume for a moment that $G$ is finite and $(A,B)$ is critical. Then we have
\begin{itemize}
\item $0 \not\in A+ B + C$
\item $|A| + |B| + |C| > |G|$.
\end{itemize}
In this case we see that the pair $(B,C)$ is critical since $B+C$ is disjoint from $-A$ (so $|B+C| \le |G| - |A| < |B| + |C|$) and similarly $(A,C)$ is critical. So, in other words, taking the set $C$ as defined above gives us a triple of sets so that each of the three pairs is critical. Accordingly we now extend our definitions from pairs to triples. To allow for infinite groups we shall permit sets which are infinite but cofinite.
\begin{definition}
If $A,B,C \subseteq G$ satisfy $0 \not\in A + B + C$ and each of $A$, $B$, $C$ is
either finite or cofinite, then we say that $(A,B,C)$ is a \emph{trio}.
\end{definition}
\begin{definition}
If $(A,B,C)$ is a trio and $n$ is the size of the complement of the largest of the sets $A,B,C$ and $\ell, m$ are the sizes of the other two sets, then we define the \emph{deficiency} of $(A,B,C)$ to be $\delta(A,B,C) = \ell + m - n$. We say that $(A,B,C)$ is \emph{critical} if $\delta(A,B,C) > 0$. In the case that $G$ is finite, we have $\delta(A,B,C) = |A|+|B|+|C| - |G|$.
\end{definition}
We say that a trio $(A,B,C)$ is \emph{trivial} if one of $A$, $B$, or $C$ is empty. These definitions for trios naturally extend our notions for pairs. More precisely, if $A,B \subseteq G$ are finite and $C = \overline{-(A+B)}$ then $(A,B,C)$ is a trio, and we have
\begin{itemize}
\item $(A,B)$ is trivial if and only if $(A,B,C)$ is trivial.
\item $\delta(A,B) = \delta(A,B,C)$
\item $(A,B)$ is critical if and only if $(A,B,C)$ is critical.
\end{itemize}
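These equivalences between a pair and its associated trio are easy to check by machine on a small example (our own illustration; the choice $G = \mathbb{Z}/12$ is arbitrary):

```python
n = 12  # G = Z/12

def sumset(A, B):
    return {(a + b) % n for a in A for b in B}

A = {0, 1, 6, 7}
B = {0, 6}
# C is the complement of -(A+B)
C = set(range(n)) - {(-x) % n for x in sumset(A, B)}
# (A, B, C) is a trio: 0 does not lie in A + B + C
assert 0 not in sumset(sumset(A, B), C)
# for finite G the deficiencies agree: |A|+|B|+|C|-|G| = |A|+|B|-|A+B|
delta_pair = len(A) + len(B) - len(sumset(A, B))
delta_trio = len(A) + len(B) + len(C) - n
assert delta_pair == delta_trio
# hence (A, B) is critical exactly when (A, B, C) is
assert (delta_pair > 0) == (delta_trio > 0)
```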
Vosper's Theorem has a convenient restatement in terms of trios, as the extra symmetry in a trio eliminates one of the outcomes (and assuming nontriviality eliminates another).
\begin{theorem}[Vosper, version II]
\label{vosper2}
If $(A,B,C)$ is a nontrivial critical trio in $\mathbb{Z}/p\mathbb{Z}$ and $p$ is prime, then one of the following holds.
\begin{enumerate}
\item $\min\{ |A|, |B|, |C| \} = 1$.
\item $A$, $B$, and $C$ are arithmetic progressions with a common difference.
\end{enumerate}
\end{theorem}
Similar to pairs, we define the \emph{supertrio} relation $(A,B,C) \subseteq (A^*,B^*,C^*)$ if $A \subseteq A^*, B \subseteq B^*$, and $C \subseteq C^*$, and call a trio $(A,B,C)$ \emph{maximal} if the only supertrio $(A^*, B^*, C^*) \supseteq (A,B,C)$ is $(A,B,C)$ itself. Note that $(A,B,C)$ is maximal if and only if $C = \overline{-(A+B)}$ and $B = \overline{-(A+C)}$ and $A = \overline{-(B+C)}$. The following proposition shows that pure critical pairs come from maximal critical trios.
\begin{proposition}
Let $A,B,C \subseteq G$ be nonempty and assume $A,B$ are finite. Then the following are equivalent.
\begin{enumerate}
\item $(A,B)$ is a pure critical pair and $C = \overline{-(A+B)}$.
\item $(A,B,C)$ is a maximal critical trio.
\end{enumerate}
\end{proposition}
\begin{proof}
Assume that $(A,B)$ is pure and critical with $H = G_A = G_B = G_{A+B}$, and let $C = \overline{-(A+B)}$. Suppose (for a contradiction) that $(A^*,B,C)$ is a supertrio with $A \subset A^*$. Then $A^* + B \subseteq \overline{-C} = A+B$ so $A^* + B = A+B$, but then $|A^* + B| = |A + B| = |A| + |B| - |H| < |A^*| + |B| - |H|$ contradicts Kneser's Theorem. By a similar argument there is no trio $(A,B^*,C)$ with $B^* \supset B$, and thus $(A,B,C)$ is maximal.
Next assume $(A,B,C)$ is maximal and critical, and note that whenever $H \le G_B$ we must also have $H \le G_A$ (otherwise $(A+H,B,C)$ contradicts maximality). It follows that $G_A = G_B = G_C$. Now $G_{A+B} = G_C$ implies that $(A,B)$ is pure, and $\delta(A,B) = \delta(A,B,C) > 0$ implies that $(A,B)$ is critical. That $C = \overline{-(A+B)}$ follows from maximality.
\end{proof}
The above proposition further reduces the general classification problem to that of determining all maximal critical trios. Next we give a version of Kneser's Theorem for trios which illustrates a key property of maximal critical trios, and prove the equivalence of the two versions.
\begin{theorem}[Kneser, version II]
\label{kneser2}
If $(A,B,C)$ is a maximal critical trio in $G$, then $G_A = G_B = G_C$ and $\delta(A,B,C) = |G_A|$.
\end{theorem}
\begin{proof}[Proof of Equivalence]
To see that version I implies version II, let $(A,B,C)$ be a maximal critical trio and note that the previous proposition implies that $(A,B)$ is pure and critical. Thus $G_A = G_B = G_{A+B} = G_C$, and by Kneser's Theorem $\delta(A,B,C) = \delta(A,B) = |G_{A+B}|$.
For the other direction, let $A,B \subseteq G$ be finite and nonempty and assume $(A,B)$ is critical (otherwise the result is trivial). Set $C = \overline{-(A+B)}$ and choose a maximal supertrio $(A^*,B^*,C) \supseteq (A,B,C)$. Now applying the theorem gives us a subgroup $H = G_C = G_{A^*} = G_{B^*}$ with $\delta(A^*, B^*, C) = |H|$. Since $\delta(A,B,C) > 0$ we have $|A^* \setminus A| < |H|$ which implies $A^* = A+H$ and by a similar argument $B^* = B+H$. Thus $|A+H| + |B+H| - |A+B| = \delta(A^*,B^*) = |H|$ as desired.
\end{proof}
\section{Critical Trios}\label{sec:critrios}
\newcommand{\impureH}{
\fill[gray] (.1,.1) circle (4pt);
\fill[gray] (1.1,.1) circle (2pt);
\fill[gray] (2.1,.1) circle (5pt);
}
\newcommand{\pureH}{
\foreach \x in {0,1,2} {
\fill[gray] (\x,0) -- (\x + .2,0) -- (\x + .2,.2) -- (\x, .2);
}
}
\newcommand{\triomaker}[2]{
\begin{tikzpicture}[very thick,x=2cm,y=2cm]
\foreach \x/\Y in {#1} {
\foreach \y/\l in \Y {
\fill[gray] (\x,\y * .2) -- (\x + .2, \y * .2) -- (\x + .2, \y * .2 + .2) -- (\x, \y * .2 + .2);
\node [label=right:{$\l$}] (l) at (\x + .1, \y * .2 + .1){};
}
}
#2
\foreach \x in {0,1,2} {
\foreach \y in {.2,.4,...,1.2} {
\draw (\x,\y) -- (\x + .2,\y);
}
\node [label=right:{$H$}] (H) at (\x + .1, .1){};
\draw (\x,0) -- (\x,1.4) -- (\x + .2, 1.4) -- (\x + .2, 0) -- cycle;
}
\node [label=below:{$A$}] (A) at (.1,0){};
\node [label=below:{$B$}] (B) at (1.1,0){};
\node [label=below:{$A+B$}] (C) at (2.1,0){};
\end{tikzpicture}
}
\begin{figure}
\begin{tabular}{|c|c|}
\hline
pure beat & pure chord \\
\triomaker{0/{}, 1/{2/x_1+H, 3/x_2+H,5/x_3+H}, 2/{2/x_1+H, 3/x_2+H,5/x_3+H}}{\pureH} &
\triomaker{0/{1/R}, 1/{1/R,2/2R,3/3R}, 2/{1/R,2/2R,3/3R,4/4R}}{\pureH} \\
\hline
impure beat & impure chord \\
\triomaker{0/{}, 1/{1/x_1+H, 3/x_2+H,4/x_3+H}, 2/{1/x_1+H, 3/x_2+H,4/x_3+H}}{\impureH} &
\triomaker{0/{1/R,2/2R}, 1/{1/R,2/2R}, 2/{1/R,2/2R,3/3R,4/4R}}{\impureH}\\
\hline
\end{tabular}
\caption{Structure Atlas}\label{fig:atlas}
\end{figure}
Note that if $(A,B,C)$ is a trio, then any permutation of these three sets yields a new trio. In addition, for every $g \in G$ we have that $(A+g, B-g, C)$ is a trio. It follows immediately that these operations preserve nontriviality, maximality, criticality, and deficiency, and we say that two trios are \emph{similar} if one can be turned into the other by a sequence of these operations.
Next we will introduce some terminology to describe the types of behaviour present in the structure of nontrivial maximal critical trios. We begin with a structure which generalizes those critical pairs $(A,B)$ with $\min\{ |A|, |B| \} = 1$ by allowing for subgroups.
\begin{definition} Let $H< G$ be finite. A trio $\Upsilon$ is a \emph{pure beat relative to} $H$ if $\Upsilon$ is similar to a trio $(A,B,C)$ which satisfies the following:
\begin{enumerate}
\item $A =H$,
\item $\stab{B} = H$, and
\item $C = \overline{-(A+B)} \neq \emptyset$.
\end{enumerate}
\end{definition}
Before introducing our next structure, we require a bit more terminology. Let $H < G$ be a finite subgroup, let $R \in G/H$ and assume that $G/H$ is a cyclic group generated by $R$. Then we define any set of the form $S = \bigcup_{i=0}^{k} (Q + iR)$ with $Q \in G/H$ to be an $R$-\emph{sequence}. We call $Q$ the \emph{head} of this sequence, $Q+kR$ the \emph{tail} of the sequence, and we say that $S$ is \emph{basic} if it has head $H$. We define $k+1$ to be the \emph{length} of the sequence, and we call the sequence \emph{nontrivial} if it has length at least $2$. It is easy to see that a pair of $R$-sequences will be critical, and this is the form in which we will encounter arithmetic progressions.
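For instance (again a toy computation of our own), take $G = \mathbb{Z}/12$, $H = \{0,6\}$, and $R = 1 + H$, which generates $G/H \cong \mathbb{Z}/6$; a pair of short basic $R$-sequences is then critical with deficiency $|H|$:

```python
n = 12
H = {0, 6}                # subgroup H of G = Z/12

def coset(i):
    # the H-coset i + H, i.e. the i-th multiple of the generator R = 1 + H
    return {(i + h) % n for h in H}

def basic_r_sequence(k):
    # basic R-sequence of length k: H ∪ R ∪ ... ∪ (k-1)R
    return set().union(*(coset(i) for i in range(k)))

A = basic_r_sequence(3)   # {0, 1, 2, 6, 7, 8}
B = basic_r_sequence(2)   # {0, 1, 6, 7}
S = {(a + b) % n for a in A for b in B}
# |A+B| = |A| + |B| - |H|: the pair of R-sequences is critical
assert len(A) + len(B) - len(S) == len(H)
```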
\begin{definition} Let $H < G$ be finite with $G/H$ cyclic. A trio $\Upsilon$ is a \emph{pure chord relative to} $H$, if there exists $R \in G/H$ which generates $G/H$ and a trio $(A,B,C)$ similar to $\Upsilon$ for which the following hold.
\begin{enumerate}
\item $A,B$ are nontrivial $R$-sequences. \label{itm:nontriv}
\item $C = \overline{-(A+B)}$ is not contained in a single $H$-coset. \label{itm:third}
\end{enumerate}
\end{definition}
It follows immediately from our definitions that every pure beat or pure chord relative to $H$ is a maximal critical trio with deficiency $|H|$.
For each of these two basic structures, there is a variant which allows for recursive constructions of maximal critical trios. Before introducing these variants, we require another bit of terminology. For every set $A \subseteq G$ there is a unique minimal subgroup $H$ for which $A$ is contained in an $H$-coset. We denote this $H$-coset by $[A]$ and call it the \emph{closure} of $A$.
\begin{definition} A trio $\Upsilon$ is an \emph{impure beat} \emph{relative to} $H < G$, if
there is a trio $(A,B,C)$ similar to $\Upsilon$ for which
\begin{enumerate}
\item $[A] = H$,
\item $B \setminus H$ is $H$-stable,
\item $C \setminus H = \overline{-(A+B)}\setminus H$, and
\item $B \cap H \neq \emptyset$ and $C \cap H \neq \emptyset$.
\end{enumerate}
In this case $(A, B \cap H, C \cap H)$ is a trio in $H$ which we call a \emph{continuation} of $\Upsilon$.
\end{definition}
\begin{definition}
Let $H < G$ be finite and assume $G/H$ is cyclic. A trio $\Upsilon$ is an \emph{impure chord relative to} $H$, if there exists $R \in G/H$ which generates $G/H$ and a trio $(A,B,C)$ similar to $\Upsilon$ satisfying
\begin{enumerate}
\item $H \cup A$ and $H \cup B$ are nontrivial basic $R$-sequences,
\item $C \setminus H = \overline{- (A+B)} \setminus H \neq \emptyset$, and
\item $A \cap H$, $B \cap H$, and $C \cap H$ are all nonempty.
\end{enumerate}
As above, $(A \cap H, B \cap H, C \cap H)$ is a trio in $H$ which we call a \emph{continuation} of $\Upsilon$.
\end{definition}
Note that if $(A,B,C)$ is an impure beat or impure chord relative to $H$ and $(A',B',C')$ is a continuation, then our definitions imply that $(A',B',C')$ is a nontrivial trio in $H$. Furthermore, it follows from these constructions that $(A',B',C')$ will be maximal whenever $(A,B,C)$ is maximal, and $\delta(A',B',C') = \delta(A,B,C)$.
With this, we can finally state Kemperman's structure theorem.
\begin{theorem}[Kemperman]
\label{thm:kemperman}
Let $\Upsilon_1$ be a maximal nontrivial critical trio in $G_1$. Then there exists a sequence of trios $\Upsilon_1, \Upsilon_2, \cdots, \Upsilon_m$ in respective subgroups $G_1 > G_2 > \cdots > G_m$ satisfying
\begin{enumerate}
\item $\Upsilon_i$ is an impure beat or an impure chord with continuation $\Upsilon_{i+1}$ for $1 \le i \le m-1$, and
\item $\Upsilon_m$ is either a pure beat or a pure chord.
\end{enumerate}
\end{theorem}
\section{Incomplete Closure}
In this section we focus our attention on critical pairs and trios which contain a set $A$ for which $[A] \neq G$. In particular, we shall prove a stability lemma which shows that every maximal critical trio containing such a set must be a pure or impure beat. We begin with a lemma which was proved for general groups by Olson \cite{olson}, but which follows from Kneser's Theorem for abelian groups (as observed by Lev \cite{lev-kemp}).
\begin{lemma}
\label{cor:kneser}
Let $A,B$ be nonempty finite subsets of $G$ and assume that $A+B \neq G$ and $[A] = G$. Then
$|A+B| \ge \tfrac{1}{2}|A| + |B|$.
\end{lemma}
\begin{proof}
By Theorem~\ref{thm:Kneser}, $H=\stab{A+B}$ satisfies $|A+B| \geq |A+H|+|B+H|-|H|$ and $H \neq G$ since $A+B \neq G$. Since $A$ is not contained in any $H$-coset, $|A+H| - |H| \ge \frac{1}{2}|A+H| \ge \frac{1}{2}|A|$.
Combining these two inequalities yields the desired bound.
\end{proof}
For a set $A \subseteq G$ and a subgroup $H \le G$, we say that $A$ is $H$-\emph{quasistable} if there exists $R \in G/H$ so that $A \setminus R$ is $H$-stable. The sets in a pure beat or pure chord relative to $H < G$ are $H$-stable; the impure versions consist of $H$-quasistable sets, and their continuations are built from the partially filled $H$-cosets.
\begin{lemma}
\label{lem:discon_sum}
Let $(A,B)$ be a critical pair of finite sets and assume $[A] \in G/H$ for some $H < G$. Then $A+B$ is $H$-quasistable. Furthermore, if $H$ is finite, then $\delta(H,B) \ge \delta(A,B)$.
\end{lemma}
\begin{proof}
By replacing $A$ by $g+A$ for a suitable $g \in G$, we may assume that $[A] = H$. Let $R_1, \ldots, R_k \in G/H$ be the $H$-cosets which have nonempty intersection with $B$, and for every $1 \le i \le k$ let $B_i = B \cap R_i$. Now we have two inequalities,
\begin{enumerate}
\item $|A + B_i| \ge |B_i|$ and
\item if $A+B_i \neq R_i$, then $|A+B_i| \ge \frac{1}{2}|A| + |B_i|$,
\end{enumerate}
the second of which follows from the previous lemma. Since $A+B$ is the disjoint union $\bigcup_{i=1}^k (A + B_i)$, there is at most one $1 \le i \le k$ with $A+B_i \neq R_i$ (two such indices would give $|A+B| \ge |A| + |B|$, contradicting criticality), so $A+B$ is $H$-quasistable.
For the last part, we may assume that $H$ is finite and that $A+B_i = R_i$ for all $2 \le i \le k$. Since $|A+B_1| \ge |A|$ we find \[ \delta(A,B) = |A| + |B| - \sum_{i=1}^k |A + B_i| \le |B| - (k-1)|H| = \delta(H,B) \] which completes the proof.
\end{proof}
We are now ready to prove our stability lemma for trios which contain a set with closure not equal to $G$.
\begin{lemma}[Beat Stability]
\label{lem:beat}
If $(A,B,C)$ is a maximal critical trio and $[A] \in G/H$ for some $H <G$, then $(A,B,C)$ is either a pure or impure beat.
\end{lemma}
\begin{proof} By possibly moving from $(A,B,C)$ to a similar trio, we may assume that $[A] = H < G$ and that $B$ is finite. Now, $(A,B)$ is a critical pair, so by Lemma~\ref{lem:discon_sum}, $A+B$ is $H$-quasistable. If $A+B$ is $H$-stable, then $H$ is finite and it follows from maximality that $A=H$ and $H = \stab{B} = \stab{C}$, so $(A,B,C)$ is a pure beat. If $A+B$ is not $H$-stable, we may assume (by possibly passing to a similar trio) that $\emptyset \neq (A+B) \cap H \neq H$, and it then follows from maximality that $(A,B,C)$ is an impure beat.
\end{proof}
\section{Purification}
In this section we will develop a process we call purification which will allow us to make a subtle modification to a critical trio to obtain a new trio with deficiency no smaller than the original. This will be a key tool in the remainder of the paper.
We have already defined notions of deficiency for pairs of finite sets and for trios. It is also convenient to have a notion of deficiency for a single finite set. If $\emptyset \neq A \subset G$ is finite we define the \emph{deficiency} of $A$ to be
\[ \delta(A) = \max_{B \subset G : A+B \neq G} \delta(A,B). \]
Here the maximum is taken over finite nonempty sets $B$. Note that $\delta(A)$ is indeed well defined, since every such $B$ satisfies $\delta(A,B) = |A| + |B| - |A+B| \le |A|$, so the maximum in the formula is attained. The following theorem of Mann shows that there is always a finite subgroup which achieves this maximum.
\begin{theorem}[Mann]
\label{thm:mann}
If $A \subset G$ is finite and nonempty, there exists a finite subgroup $H < G$ with $\delta(A,H) = \delta(A)$ and $A+H \neq G$.
\end{theorem}
\begin{proof}
Choose $\emptyset \neq B \subseteq G$ so that $\delta(A,B) = \delta(A)$ and $A+B \neq G$. Now set $H = \stab{A+B}$ and apply Kneser's Theorem to obtain
\begin{eqnarray*}
\delta(A,B)
&=& |A| + |B| - |A+B| \\
&\le& |A| + |B| - |A+H| - |B+H| + |H| \\
&\le& |A| - |A+H| + |H| \\
&=& \delta(A,H).
\end{eqnarray*}
Finally, $A+H \neq G$: otherwise $A+B = A+B+H = (A+H) + B = G + B = G$ (using $H = \stab{A+B}$ for the first equality), contradicting $A+B \neq G$.
\end{proof}
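Mann's Theorem can be confirmed exhaustively in a small group (our own sanity check in $G = \mathbb{Z}/12$, whose proper subgroups are generated by the divisors of $12$):

```python
from itertools import combinations

n = 12
G = set(range(n))

def sumset(A, B):
    return {(a + b) % n for a in A for b in B}

def delta(A, B):
    return len(A) + len(B) - len(sumset(A, B))

A = {0, 1, 6, 7}
# delta(A): max of delta(A, B) over nonempty B with A + B != G
delta_A = max(delta(A, set(Bs))
              for r in range(1, n + 1)
              for Bs in combinations(range(n), r)
              if sumset(A, set(Bs)) != G)
# proper subgroups of Z/12, one of each order d | 12, d < 12
subgroups = [set(range(0, n, n // d)) for d in (1, 2, 3, 4, 6)]
# Mann: some subgroup H with A + H != G attains the maximum
assert any(sumset(A, H) != G and delta(A, H) == delta_A
           for H in subgroups)
```

For this $A$ the maximum $\delta(A) = 2$ is attained at $H = \{0,6\}$.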
Next we establish a lemma which is a key part of purification.
\begin{lemma}
\label{inc_bd}
Let $H < G$ and $A \subset G$ be finite and assume $(A,H)$ is critical. If $B \subseteq H$, then $\delta(A,B) \le \delta(A,H)$.
\end{lemma}
\begin{proof}
We may assume that $(A,B)$ is critical, as otherwise the result holds immediately. Choose $K \le G$ so that $[B] \in G/K$ and note that Lemma \ref{lem:discon_sum} implies $\delta(A,B) \le \delta(A,K)$. Since $K \le H$, to complete the proof, it suffices to show $\delta(A,K) \le \delta(A,H)$ under the assumption $K < H$.
Define $S = (A+H) \setminus A$ and let $S' = \{ g \in S \mid g+K \subseteq S \}$ and $S'' = S \setminus S'$. Since $(A,H)$ is critical, $|S'| \le |S| < |H|$, and then we must have $|S'| \le |H| - |K|$ (since $|S'|$ and $|H|$ are both multiples of $|K|$). Thus
\[ \delta(A,H) = |H| - |S| \ge |K| - |S''| = |K| - |(A+K) \setminus A| = \delta(A,K) \]
which completes the proof.
\end{proof}
\begin{lemma}[Purification]
\label{purification}
Let $(A,B,C)$ be a critical trio in $G$, let $H \le G$, and assume $A$ and $H$ are finite and $(A,H)$ is critical. If $R \in G/H$ satisfies $\emptyset \neq R \cap B \neq R$ and $S = \overline{-(A+R)}$, then $ \delta(A,B \cup R, C \cap S) \ge \delta(A,B,C)$.
\end{lemma}
\begin{proof}
Since $(A,B,C)$ and $(A,R,S)$ are trios, it follows that both $(A, B \cup R, C \cap S)$ and $(A, B \cap R, C \cup S)$ are trios. Furthermore
\[ \delta(A, B \cup R, C \cap S) + \delta(A, B \cap R, C \cup S) = \delta(A,B,C) + \delta(A,R,S). \]
The previous lemma implies $\delta(A,R,S) = \delta(A,R) \ge \delta(A,B \cap R) \ge \delta(A,B \cap R, C \cup S)$ and together with the above equation, this yields the desired result.
\end{proof}
Note that the above lemma also applies to pairs. More precisely, if $A,B \subseteq G$ and $H < G$ are finite and both $(A,B)$ and $(A,H)$ are critical, then for every $R
\in G/H$ with $B \cap R \neq \emptyset$, we have $\delta(A,B \cup R) \ge \delta(A,B)$.
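As a quick machine check of the pair form (once more a toy example of our own in $\mathbb{Z}/12$):

```python
n = 12
H = {0, 6}                # finite subgroup of G = Z/12
R = {1, 7}                # the H-coset 1 + H

def sumset(A, B):
    return {(a + b) % n for a in A for b in B}

def delta(A, B):
    return len(A) + len(B) - len(sumset(A, B))

A = {0, 1, 6, 7}
B = {0, 1, 6}             # meets the coset R without containing it
assert delta(A, H) > 0 and delta(A, B) > 0   # both pairs critical
# purification: filling in the coset R cannot decrease the deficiency
assert delta(A, B | R) >= delta(A, B)
```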
\section{Near Sequences}
The goal of this section is to establish two important lemmas concerning a type of set called a near sequence. The first is a stability lemma which will show that whenever $(A,B,C)$ is a maximal critical trio with some additional properties, and $A$ is a near sequence, then $(A,B,C)$ must be a pure or impure chord. The second will show that whenever $(A^*,B^*,C^*)$ is a pure or impure chord, of which $(A,B,C)$ is a critical subtrio, then every finite set among $(A,B,C)$ must be a near sequence.
We begin by introducing a couple of important definitions. For this purpose we shall assume that $H < G$ is a finite subgroup and $R \in G/H$ generates the group $G/H$.
\begin{definition}
We say that $A \subseteq G$ is a \emph{near} $R$-\emph{sequence} if $A+H$ is an $R$-sequence and $|(A+H) \setminus A| < |H|$.
\end{definition}
\begin{definition}
We say that $A \subseteq G$ is a \emph{fringed} $R$-\emph{sequence} if
\begin{enumerate}
\item $A+H$ is an $R$-sequence, and
\item if $A+H$ has head $S$ and tail $T$, then either $A \setminus S$ or $A \setminus T$ is $H$-stable.
\end{enumerate}
\end{definition}
If $A$ is an $R$-sequence, near $R$-sequence, or fringed $R$-sequence, we say that $A$ is \emph{proper} if $| \overline{A} | \ge 2|H|$, and we call it \emph{nontrivial} if $|A| > |H|$. Next we prove a technical lemma where fringed sequences emerge.
\begin{lemma}\label{lem:fringed}
Let $(A,B)$ be a nontrivial critical pair, and assume that $A$ is a nontrivial near $R$-sequence for $R \in G/H$, and that $B$ is not contained in any $H$-coset. If there exists an $R$-sequence $B^*$ with $B \subseteq B^*$ and $A+B^* \neq G$, then $A+B$ is a fringed $R$-sequence.
\end{lemma}
\begin{proof} Suppose (for a contradiction) that the lemma fails, and let $A,B$ be a counterexample for which $|B|$ is minimum. By shifting $A$ (i.e. replacing $A$ by $A+g$ for some $g \in G$) and $B$ we may assume that $A+H = \bigcup_{i=0}^{\ell} iR$ and $B^* = \bigcup_{i=0}^{ m} iR$. For convenience let us define $A_i = A \cap i R$ and $B_i = B \cap i R$ for every $i \in \mathbb{Z}$. By replacing $B^*$ with a smaller $R$-sequence, we may assume that $B_0 \neq \emptyset$ and $B_{ m} \neq \emptyset$. We first prove a series of three claims.
\begin{claim}
$B_i \neq \emptyset$ for $0 \le i \le m$.
\end{claim}
It follows by repeatedly applying our purification lemma to the $H$-cosets $S \in G/H$ for which $\emptyset \neq S \cap B \neq S$ that $(A,B+H)$ is critical and thus $(A+H,B+H)$ is critical. It follows from this that the sets $\tilde{A},\tilde{B} \subseteq \mathbb{Z}$ given by $\tilde{A} = \{0,1,\ldots,{\ell} \}$ and $\tilde{B} = \{ i \in \mathbb{Z} \mid iR \cap B \neq \emptyset \}$ are such that $(\tilde{A},\tilde{B})$ is critical. It follows, e.g., from Lemma~1.3 of Nathanson~\cite{nathanson} that $\tilde{B}$ is the interval $\{ 0, 1, \ldots, m\}$, which implies the claim.
\begin{claim}\label{clm:noguts}
$A+B$ does not contain $\bigcup_{i=1}^{ \ell + m -1} iR$.
\end{claim}
Suppose for a contradiction that this claim fails. Let $K_0 = \stab{A_0 + B_0}$ and $K_1 = \stab{A_{{\ell}} + B_{ m}}$. We have $K_0, K_1 < H$ since $A+B$ is not a fringed sequence. Now applying Kneser's Theorem to the sumsets $A_0 + B_0$ and $A_{{\ell}} + B_{ m}$ we find
\begin{eqnarray*}
|A+B|
&=& ({\ell} + m - 1)|H| + |A_0 +B_0| + |A_{{\ell}} + B_{ m}| \\
&\ge& ({\ell} + m-1)|H| + |A_0| + |A_{\ell}| + |B_0| + |B_{ m}| - |K_0| - |K_1| \\
&\ge& \pr{({\ell}-1)|H| + |A_0| + |A_{\ell}|} + \pr{( m-1)|H| + |B_0| + |B_{m}|} \\
&=& |A| + |B|
\end{eqnarray*}
which gives us the desired contradiction.
\begin{claim}
$ m = 1$
\end{claim}
As usual, we suppose for contradiction that $m > 1$. First consider the set $B' = B \setminus B_m$. It follows easily from the inequality $|A_{\ell} + B_m| \ge |B_m|$ that $\delta(A,B') \ge \delta(A,B) > 0$, so $(A,B')$ is critical. By the minimality of our counterexample, it follows that $A+B'$ must contain $\bigcup_{i=1}^{\ell+m-2} iR$. Similarly, $(A,B\setminus B_0)$ is a critical pair, and hence $A+(B\setminus B_0)$ contains $\bigcup_{i=2}^{\ell+m-1} iR$. Putting these together, we find $\bigcup_{i=1}^{\ell+m-1} iR \subseteq A+B$, contradicting Claim~\ref{clm:noguts}.
With these claims in place, we are ready to complete the proof. By the purification lemma, $(A, B \cup R)$ and $(A, B \cup H)$ are critical. This gives us
\begin{eqnarray}
0 < \delta(A,B \cup R) &=& |A| + |B_0| - {\ell} |H| - |A_0 + B_0| \label{eqn:fringed1} \\
0 < \delta(A,B \cup H) &=& |A| + |B_1| - {\ell} |H| - |A_{\ell} + B_1| \label{eqn:fringed2}
\end{eqnarray}
We also have
\begin{equation}
|A| - |A_0 + B_0| - |A_{\ell} + B_1| \le |A| - |A_0| - |A_{\ell}| \le ({\ell}-1)|H| \label{eqn:fringed3}
\end{equation}
Now summing equations (\ref{eqn:fringed1}) and (\ref{eqn:fringed2}) and substituting (\ref{eqn:fringed3}) yields
\begin{equation}
0 < |A| + |B_0| + |B_1| - ({\ell}+1)|H|. \label{eqn:fringed4}
\end{equation}
It follows from Claim~\ref{clm:noguts} that we may choose a point $z \in \pr{\bigcup_{i=1}^{\ell} iR } \setminus (A+B)$. Assuming $z \in iR$, we have $B_0 \cap (z - A_i) = \emptyset$ and $B_1 \cap (z - A_{i-1}) = \emptyset$. These together with $|A \setminus (A_{i-1} \cup A_i)| \le ({\ell}-1)|H|$ then imply
\begin{equation}
|B_0| + |B_1| \le 2 |H| - |A_{i-1}| - |A_i| \le ({\ell}+1)|H| - |A| \label{eqn:fringed5}
\end{equation}
Inequalities (\ref{eqn:fringed4}) and (\ref{eqn:fringed5}) are contradictory, and this completes the proof.
\end{proof}
\begin{lemma}
Let $(A,B,C)$ be a nontrivial critical trio with $A,B$ finite, and assume that every supertrio $(A,B^*, C^*) \supseteq (A,B,C)$ has $(A,B,C) = (A,B^*,C^*)$. Let $H < G$, let $R \in G/H$ and assume $A$ is a nontrivial proper near $R$-sequence. If neither $B$ nor $C$ is contained in an $H$-coset, then $B$ and $\overline{C}$ are fringed $R$-sequences.
\end{lemma}
\begin{proof} \setcounter{claim}{0}
Suppose (for a contradiction) that there is a counterexample to the lemma using the set $A$, and then choose $B$ and $C$ so that $(A,B,C)$ is a counterexample for which
\begin{enumerate}
\item $\delta(A,B,C)$ is maximum.
\item $| \{ S \in G/H \mid \emptyset \neq S \cap B \neq S \} | + | \{ S \in G/H \mid \emptyset \neq S \cap C \neq S \} |$ is minimum. (subject to 1).
\end{enumerate}
Observe that if $B$ is a fringed $R$-sequence, we can automatically conclude that $\overline{C} = -(A+B)$ is a fringed $R$-sequence by the maximality of $C$.
\begin{claim}\label{clm:Gfinite}
There does not exist an $R$-sequence $D$ with $A+D \neq G$ so that $B \subseteq D$ or $C \subseteq D$. So,
in particular, $G$ must be finite.
\end{claim}
If such a set $D$ exists with $B \subseteq D$, then by applying the previous lemma we deduce that $A+B$ is a fringed $R$-sequence. But then, the maximality of $B$ implies that $B$ is a fringed $R$-sequence. Similarly, if such a set $D$ exists with $C \subseteq D$, then $G$ is finite and $A+C$ is a fringed $R$-sequence. But then $B = \third{A+C}$ is also a fringed $R$-sequence.
\begin{claim}\label{clm:impure}
There does not exist $S \in G/H$ with $S \subseteq B$ or with $S \subseteq C$.
\end{claim}
If $S \in G/H$ satisfies $S \subseteq B$ then Claim~\ref{clm:Gfinite} is violated by $D = \third{S+A}$ since $C \subset D$. A similar argument holds if $S \subseteq C$.
Since $G$ is finite by Claim~\ref{clm:Gfinite}, we will show that both $B$ and $C$ (hence $\overline{C}$) are fringed $R$-sequences. Without loss of generality, $|C|\geq |B|$. In particular, $|C|>|H|$ since $|G \setminus A| \geq 2|H|$ and $(A,B,C)$ is critical.
Using Claim~\ref{clm:impure}, choose an $H$-coset $S \in G/H$ so that $\emptyset \neq B \cap S \neq S$. Now setting $B' = B \cup S$ and $C' = C \cap \third{A+S}$, Lemma~\ref{inc_bd} implies $\delta(A,B',C') \ge \delta(A,B,C)$ and it follows that $|C \setminus C'| \le |B' \setminus B| < |H|$. Since $|C| > |H|$ we have that $C' \neq \emptyset$, so $(A,B',C')$ is a nontrivial critical trio. Choose $B'',C''$ maximal so that $(A,B'',C'')$ is a supertrio of $(A,B',C')$.
If $B'' \neq B'$ or $C'' \neq C'$ then $\delta(A,B'',C'') > \delta(A,B,C)$, so by the first criterion in our choice of counterexample, the lemma holds for $(A,B'',C'')$. On the other hand, if $B'' = B'$ and $C'' = C'$ then the quantity in our second optimization criterion improves, so again we find that the lemma holds for $(A,B'',C'')$.
If $C''$ is not contained in a single $H$-coset, then $B''$ is a fringed sequence, but then Claim~\ref{clm:Gfinite} is violated by $D = B'' + H$ since $B \subset D$. So, we may assume that $C'' \subseteq T$ for some $T \in G/H$. Now let $U \in G/H$ satisfy $U \subseteq \third{A+T}$ and suppose for contradiction that $B \cap U = \emptyset$. In this case $(A, B' \cup U, C')$ is a trio and Lemma \ref{purification} implies
\[ \delta(A, B', C') + |H| = \delta(A, B' \cup U, C') \le \delta(A, C') \le \delta(A,H) \le |H| \]
which is a contradiction. It follows that every $H$-coset contained in $\third{A+T}$ must have nonempty intersection with $B$.
With this knowledge, we now return to our original trio $(A,B,C)$ and modify it to form a new trio by setting $C''' = C \cup T$ and $B''' = B \cap \third{A + T}$. It follows from our purification lemma that $(A,B''',C''')$ is a trio with $\delta(A,B''',C''') \ge \delta(A,B,C)$. Furthermore, since $\third{A+T}$ contains at least two $H$-cosets, the set $B'''$ cannot be contained in a single $H$-coset. Now, by repeating the argument from above, we may extend $(A, B''', C''')$ to a trio with the second and third sets maximal, and then the lemma will hold for this new trio. This then implies that $C'''$ is contained in an $R$-sequence which violates Claim~\ref{clm:Gfinite}. This completes the proof.
\end{proof}
\begin{lemma}[Sequence Stability]
\label{lem:chord}
Let $(A,B,C)$ be a maximal critical trio with $[A] = [B] =[C] = G$. If $A$ is a proper near sequence, then $(A,B,C)$ is either a pure or an impure chord.
\end{lemma}
\begin{proof}
Let $A$ be a proper near $R$-sequence for $R \in G/H$ and assume (without loss) that $B$ is finite. By the previous lemma we deduce that $B$ is a fringed $R$-sequence. However, then by maximality $A$ is also a fringed $R$-sequence. Again using maximality, we conclude that $(A,B,C)$ is either a pure or impure chord relative to $H$.
\end{proof}
\begin{lemma}
\label{lem:near}
Let $(A,B,C)$ be a critical trio of which $(A^*, B^*, C^*)$ is a maximal critical supertrio. If $(A^*, B^*, C^*)$ is a pure or impure chord, and $A$ is finite, then $A$ is a proper near sequence.
\end{lemma}
\begin{proof}
If $(A^*, B^*, C^*)$ is a pure chord relative to $H \le G$ then $\delta(A^*, B^*, C^*) = |H|$ and since $(A,B,C)$ is critical, we must have $|A^* \setminus A| < |H|$. Since $A^*$ is finite, it is a proper $R$-sequence for some $R \in G/H$ and it follows immediately that $A$ is a proper near $R$-sequence. Next suppose that $(A^*, B^*, C^*)$ is an impure chord relative to $H \le G$. In this case, there exists a subgroup $K < H$ so that $K = \stab{A^*} = \stab{B^*} = \stab{C^*}$. Now since $(A,B,C)$ is critical, it follows that $|A^* \setminus A| < |K|$. Since $A^*$ is a proper fringed $R$-sequence for some $R \in G/H$, we again find that $A$ is a proper near $R$-sequence.
\end{proof}
\section{Proof}
In this section we prove Kemperman's Theorem.
\begin{proof}[Proof of Theorem~\ref{thm:kemperman}] \setcounter{claim}{0}
Suppose (for a contradiction) that the theorem fails and let $(A,B,C)$ be a counterexample with $|A| \le |B| \le |C|$ so that
\begin{enumerate}
\item If there is a finite counterexample, then $|G|$ is minimum.
\item $|\overline{C}|$ is minimum (subject to 1).
\item the number of terms in $( [A], [B], [C] )$ equal to $G$ is maximum (subject to 1, 2).
\end{enumerate}
We shall establish properties of our trio with a series of claims.
\begin{claim}
The group $H = \stab{A} = \stab{B} = \stab{C}$ is trivial.
\end{claim}
Otherwise we obtain a smaller counterexample by passing to the quotient group $G/H$ and the trio $\big( \varphi_{G/H}(A), \varphi_{G/H}(B), \varphi_{G/H}(C) \big)$.
\begin{claim}\label{clm:beat}
None of the sets $A$, $B$, or $C$ is contained in a proper coset.
\end{claim}
Suppose for contradiction that $A$, $B$, or $C$ is contained in a proper coset, and apply Lemma~\ref{lem:beat}. If $(A,B,C)$ is a pure beat, we have an immediate contradiction. Otherwise, we may assume that $(A,B,C)$ is an impure beat relative to $H$ with continuation $(A',B',C')$. If $H$ is finite, then $(A',B',C')$ contradicts our choice of $(A,B,C)$ by the first criterion. Otherwise $H$ is infinite, and we may assume (without loss) that $C'$ is infinite.
Now, $A$ and $B$ are finite and $H$-quasiperiodic, so $A$ and $B$ are both contained in a single $H$-coset. Hence criteria (1) and (2) agree on $(A,B,C)$ and $(A',B',C')$. Furthermore, only one term in $( [A], [B], [C] )$ is equal to $G$, but by construction, one of $A'$, $B'$ has closure $H$, and $[C'] = H$ (since $C'$ is cofinite). Therefore $(A',B',C')$ is a counterexample which contradicts our choice.
\begin{claim}\label{clm:chord}
$A$ is not a proper near sequence.
\end{claim}
Otherwise it follows from Lemma~\ref{lem:chord} that either $(A,B,C)$ is a pure or impure chord. In the former case we have an immediate contradiction. In the latter a continuation $(A',B',C')$ contradicts our choice of $(A,B,C)$.
\begin{claim}\label{clm:crithalf}
If $D \subseteq G$ satisfies $(A,D)$ critical and $|D| > \frac{1}{2}|A|$ then $[D] = G$.
\end{claim}
Suppose (for a contradiction) that $(A,D)$ is critical and $|D| > \frac{1}{2}|A|$ and that $[D] = H + x$ for $H < G$. If $A$ contains points in at least three $H$-cosets, then $A+D$ is $H$-quasiperiodic by Lemma~\ref{lem:discon_sum} so $|A+D| \ge 2|H| + |D| \ge 3|D| \ge |D| + |A|$ which contradicts the assumption that $(A,D)$ is critical. It then follows from Claim~\ref{clm:beat} that $A$ must contain points in exactly two $H$-cosets. Now, if $|A| \le |H|$ then we have $|A+D| \ge |H| + |D| \ge |A| + |D|$ which is contradictory. Otherwise, $|A| > |H|$ and $A$ contains points in exactly two $H$-cosets, but then $A$ is a near $R$-sequence for some $R \in G/H$ and this contradicts Claim~\ref{clm:chord}.
\begin{claim}\label{clm:nocoset_crit}
There does not exist a nontrivial finite subgroup $H < G$ so that $(A,H)$ is critical.
\end{claim}
Suppose for contradiction that $(A,H)$ is critical with $\br{0} < H < G$. By Claim 1 we may choose an $H$-coset $R \in G/H$ so that $\emptyset \neq C \cap R \neq R$. Now setting $C' = C \cup R$ and $B' = B \cap \third{A+R}$ our purification lemma implies $\delta(A,B',C') \ge \delta(A,B,C)$. It follows from this that $0 \le |C' \setminus C| - |B \setminus B'| < |H| - |B \setminus B'|$. If $|A+H| = 2|H|$ then $A$ is a near sequence which contradicts Claim~\ref{clm:chord}. Therefore, we have $|A| + |H| > |A+H| \ge 3|H|$. This gives us $|B'| > |B| - |H| \ge |A| - |H| \ge \frac{1}{2}|A|$ so by Claim~\ref{clm:crithalf} we have that $[B'] = G$.
Now we let $(A^*,B^*,C^*)$ be a maximal supertrio of $(A,B',C')$. Since $(A^*,B^*,C^*)$ is maximal with $\delta(A^*,B^*,C^*) \ge \delta(A,B',C') \ge \delta(A,B,C)$ and $|\overline{C^*}| < |\overline{C}|$, the theorem holds for $(A^*,B^*,C^*)$. Therefore, $(A^*, B^*,C^*)$ must either be a pure or impure chord (since $[A] = [B] = [C] = G$). Now Lemma~\ref{lem:near}
implies that $A$ is a proper near sequence, but this contradicts Claim~\ref{clm:chord}.
\begin{claim}\label{clm:deficiency1}
Let $D \subseteq G$ be finite and assume that $(A,D)$ is nontrivial and critical. Then $\delta(A,D) = 1$ and further, either $|D| = 1$ or $[D] = G$.
\end{claim}
It follows immediately from Claim~\ref{clm:nocoset_crit} and Mann's theorem that $\delta(A,D) = 1$. Suppose for contradiction that $[D] = H + x$ for $\br{0} < H < G$. Then Lemma~\ref{lem:discon_sum} implies that $A+D$ is $H$-quasistable. Since $[A] = G$, it follows that $H$ is finite. Again, by Lemma~\ref{lem:discon_sum}, $\delta(A,H) \geq \delta(A,D)$, contradicting Claim~\ref{clm:nocoset_crit}.
\begin{claim}
$B$ is not a Sidon set: $|(g+B) \cap B| > 1$ for some $g \in G \setminus \br{0}$.
\end{claim}
Suppose (for a contradiction) that $B$ is a Sidon set. We must have $|A| \ge 3$ as otherwise either $[A] \neq G$ or $A$ is a near sequence. Choose distinct elements $a_1,a_2,a_3 \in A$. Now we have
\[ |A+B| \ge | (a_1 + B) \cup (a_2 + B) \cup (a_3 + B) | \ge 3|B| - 3 \ge |A| + |B| \]
which is contradictory.
With this last claim in place, we are now ready to complete the proof. Since $B$ is not a Sidon set, we may choose $g \in G \setminus \br{0}$ so that $B' = B \cap (g+B)$ satisfies $|B'| \ge 2$. Set $C' = C \cup (-g + C)$ and $B'' = B \cup (g+B)$ and $C'' = C \cap (-g + C)$. It now follows from basic principles that $(A,B',C')$ and $(A,B'',C'')$ are trios and
\begin{equation}\label{eqn:defsum}
\delta(A,B',C') + \delta(A,B'',C'') = 2 \delta(A,B,C).
\end{equation}
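The identity \cref{eqn:defsum} is a consequence of inclusion-exclusion on cardinalities. The following sketch (ours; it assumes $\delta(X,Y,Z)=|X|+|Y|+|Z|-|G|$ for finite $G$, as in the computation that follows) checks it numerically in $\mathbb{Z}/11$:

```python
# With B' = B n (g+B), B'' = B u (g+B), C' = C u (-g+C), C'' = C n (-g+C),
# inclusion-exclusion gives |B'| + |B''| = 2|B| and |C'| + |C''| = 2|C|,
# hence delta(A,B',C') + delta(A,B'',C'') = 2*delta(A,B,C), where
# delta(X,Y,Z) = |X| + |Y| + |Z| - |G| for finite G (an assumed definition).
n = 11
A, B, C = {0, 1, 2}, {0, 1, 3, 4}, {0, 2, 5, 6, 7}
g = 3

def shift(S, t):
    return {(s + t) % n for s in S}

delta = lambda X, Y, Z: len(X) + len(Y) + len(Z) - n
Bp, Bpp = B & shift(B, g), B | shift(B, g)
Cp, Cpp = C | shift(C, -g), C & shift(C, -g)
assert len(Bp) + len(Bpp) == 2 * len(B)
assert len(Cp) + len(Cpp) == 2 * len(C)
assert delta(A, Bp, Cp) + delta(A, Bpp, Cpp) == 2 * delta(A, B, C)
```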
If $C'' = \emptyset$ then $G$ must be finite and we have $|C'| = 2|C|$, so
\begin{eqnarray*}
\delta(A,B',C') &=& |A| + |B'| + |C'| - |G|\\
&\ge& |A| + 2 + 2|C| - |G| \\
&>& |A| + |B| + |C| - |G| \\
&=& \delta(A,B,C)
\end{eqnarray*}
which contradicts Claim~\ref{clm:deficiency1}. Therefore $C'' \neq \emptyset$ and then both $(A,B',C')$ and $(A,B'',C'')$ are nontrivial. Then, (\ref{eqn:defsum}) and Claim~\ref{clm:deficiency1} imply that $\delta(A,B',C') = \delta(A,B'',C'') = 1$ and that $(A,B',C')$ and $(A,B'',C'')$ are both maximal. Since $|G \setminus C'| < |G \setminus C|$ the theorem holds for the trio $(A,B',C')$. Since $|B'| \ge 2$, Claim~\ref{clm:deficiency1} implies that $[B'] = G$. However, then $(A,B',C')$ must be a pure or impure chord, and then Lemma~\ref{lem:near} implies that $A$ is a near sequence which contradicts Claim~\ref{clm:chord}. This completes the proof.
\end{proof}
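Kneser's Theorem, invoked repeatedly above, gives the bound $|A+B| \ge |A|+|B|-|K|$ where $K$ is the stabilizer of $A+B$. As a sanity check (ours, not part of the original argument), the bound can be verified exhaustively in a small cyclic group:

```python
from itertools import chain, combinations

# Brute-force check of Kneser's bound in G = Z/6: for all nonempty A, B,
# |A+B| >= |A| + |B| - |K| where K = stab(A+B) = {g : g + (A+B) = A+B}.
n = 6
G = list(range(n))

def subsets(xs):
    return chain.from_iterable(combinations(xs, k) for k in range(1, len(xs) + 1))

def stab(S):
    return {g for g in G if {(g + s) % n for s in S} == S}

for A in subsets(G):
    for B in subsets(G):
        AB = {(a + b) % n for a in A for b in B}
        assert len(AB) >= len(A) + len(B) - len(stab(AB))
```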
| {
"timestamp": "2013-03-19T02:09:35",
"yymm": "1301",
"arxiv_id": "1301.0095",
"language": "en",
"url": "https://arxiv.org/abs/1301.0095",
"abstract": "Let $G$ be an additive abelian group and let $A,B \\subseteq G$ be finite and nonempty. The pair $(A,B)$ is called critical if the sumset $A+B = \\{a+b \\mid a \\in A, b \\in B\\}$ satisfies $|A+B| < |A| + |B|$. Vosper proved a theorem which characterizes all critical pairs in the special case when $|G|$ is prime. Kemperman generalized this by proving a structure theorem for critical pairs in an arbitrary abelian group. Here we give a new proof of Kemperman's Theorem.",
"subjects": "Combinatorics (math.CO)",
"title": "A New Proof of Kemperman's Theorem"
} |
https://arxiv.org/abs/2302.10731 | Real roots of real cubics and optimization | The solution of the cubic equation has a centuries-long history; however, the usual presentation is geared towards applications in algebra and is somewhat inconvenient to use in optimization where frequently the main interest lies in real roots. In this note, we present the roots of the cubic in a form that makes them convenient to use and we also focus on information on the location of the real roots. Armed with this, we provide several applications in optimization where we compute Fenchel conjugates, proximal mappings and projections. | \section{Introduction}
The history of solving cubic equations is rich and centuries old; see, e.g.,
Confalonieri's recent book \cite{Confa} on Cardano's work. Cubics do also appear in convex and nonconvex optimization.
However, treatises on solving the cubic often focus on the general complex case making the results less useful to optimizers.
The purpose of this note is two-fold. We present a largely self-contained derivation of the solution of the cubic with an emphasis on usefulness to practitioners. We do not claim novelty of these results; however, the presentation appears to be particularly convenient. We then turn to novel results. We show how the formulas can be used to compute Fenchel conjugates and proximal mappings of some convex functions. We also discuss projections on convex and nonconvex sets.
\subsection{Outline of the paper}
The paper is organized as follows.
In \cref{subsec:facts},
we collect some facts on polynomials.
\cref{sec:depcubic} contains a self-contained treatment of the depressed cubic; in turn, this leads quickly to counterparts for the general cubic in \cref{sec:gencubic}.
\cref{sec:convquar} concerns convex quartics --- we compute their
Fenchel conjugates and proximal mappings.
In \cref{sec:alpha/x}, we present a formula for the proximal mapping of the convex reciprocal function.
An explicit formula for the projection onto the epigraph of a parabola is provided in \cref{sec:projepipar}.
In \cref{sec:missing}, we derive a formula for the projection of certain points onto a rectangular hyperbolic paraboloid.
In the final \cref{sec:perspective}, we revisit the proximal mapping of the closure of a perspective function.
\subsection{Some facts}
\label{subsec:facts}
We now collect some properties of polynomials that are well known;
as a reference, we recommend \cite{RahSch}.
\begin{fact}
\label{f:reproots}
Let $f(x)$ be a nonconstant complex polynomial and let $r\in \CC$ such that $f(r)=0$.
Then the multiplicity of $r$ is the smallest integer $k$ such that the $k$th derivative of $f$ at $r$ is nonzero; that is, $f(r)=f'(r)=\cdots=f^{(k-1)}(r)=0$ and $f^{(k)}(r)\neq 0$.
When $k=1$, $2$, or $3$,
then we say that $r$ is a simple, double, or triple root, respectively.
\end{fact}
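As a quick numeric illustration of this derivative test (ours, not part of the original text), consider $f(x)=(x-1)^2(x+2)=x^3-3x+2$:

```python
# Derivative test for multiplicity on f(x) = (x - 1)^2 (x + 2) = x^3 - 3x + 2:
# r = 1 is a double root since f(1) = f'(1) = 0 while f''(1) != 0,
# and r = -2 is a simple root since f(-2) = 0 while f'(-2) != 0.
f   = lambda x: x**3 - 3 * x + 2
df  = lambda x: 3 * x**2 - 3
ddf = lambda x: 6 * x

assert f(1) == 0 and df(1) == 0 and ddf(1) != 0   # double root (k = 2)
assert f(-2) == 0 and df(-2) != 0                 # simple root (k = 1)
```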
\begin{fact} {\bf (Vieta)}
\label{f:Vieta}
Suppose $f(x)=ax^3+bx^2+cx+d$ is a cubic polynomial (i.e., $a\neq 0$) with complex coefficients.
If $r_1,r_2,r_3$ denote the (possibly repeated and complex) roots of $f$, then
\begin{subequations}
\label{e:Vieta}
\begin{align}
r_1+r_2+r_3&= -\frac{b}{a}\\
r_1r_2+r_1r_3 + r_2r_3 &= \frac{c}{a}\\
r_1r_2r_3 &= -\frac{d}{a}.
\end{align}
\end{subequations}
Conversely, if $r_1,r_2,r_3$ in $\mathbb{C}$ satisfy \cref{e:Vieta},
then they are the (possibly repeated) roots of $f$.
\end{fact}
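Vieta's formulas are easy to sanity-check numerically. The following sketch (ours) builds a cubic from prescribed roots and verifies \cref{e:Vieta}:

```python
# Expand f(x) = a (x - r1)(x - r2)(x - r3) and compare with Vieta's formulas.
a = 2.0
r1, r2, r3 = 1.5, -2.0, 0.25
b = -a * (r1 + r2 + r3)
c = a * (r1 * r2 + r1 * r3 + r2 * r3)
d = -a * r1 * r2 * r3

f = lambda x: a * x**3 + b * x**2 + c * x + d
for r in (r1, r2, r3):
    assert abs(f(r)) < 1e-12                                  # each ri is a root
assert abs((r1 + r2 + r3) - (-b / a)) < 1e-12                 # sum of roots
assert abs((r1 * r2 + r1 * r3 + r2 * r3) - (c / a)) < 1e-12   # pairwise products
assert abs((r1 * r2 * r3) - (-d / a)) < 1e-12                 # product of roots
```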
\begin{fact}
\label{f:cubicstart}
Suppose $f(x)=ax^3+bx^2+cx+d$ is a cubic polynomial (i.e., $a\neq 0$) with real coefficients. Then $f$ has three (possibly complex) roots (counting multiplicity).
More precisely, exactly one of the following holds:
\begin{enumerate}
\item $f$ has exactly one real root which either is simple (and the two remaining roots are nonreal simple roots and conjugate to each other) or is a triple root.
\item $f$ has exactly two distinct real roots: one is simple and the other double.
\item $f$ has exactly three distinct simple real roots.
\end{enumerate}
\end{fact}
\begin{remark}
We mention that the roots of a polynomial of a \emph{fixed} degree depend continuously on the coefficients --- see \cite[Theorem~1.3.1]{RahSch}
for a precise statement and also the other results in \cite[Section~1.3]{RahSch}.
\end{remark}
\section{The depressed cubic}
\label{sec:depcubic}
In this section, we study the \emph{depressed} cubic
\begin{empheq}[box=\mybluebox]{equation}
g(z) := z^3+pz+q,
\quad\text{where}\;\;
p\in\ensuremath{\mathbb R} \;\;\text{and}\;\; q\in\ensuremath{\mathbb R}.
\end{empheq}
\begin{theorem}
\label{t:roots}
We have
\begin{equation}
g'(z)=3z^2+p
\;\;\text{and}\;\;
g''(z)=6z.
\end{equation}
Then
$0$ is the only inflection point of $g$:
$g$ is strictly concave on $\ensuremath{\mathbb{R}_-}$ and $g$ is strictly convex on $\ensuremath{\mathbb{R}_+}$.
Moreover,
exactly one of the following cases occurs:
\begin{enumerate}
\item
\label{t:roots1}
$p<0$: Set $z_\pm :=\pm\sqrt{-p/3}$. Then $z_-<z_+$, $z_\pm$ are two distinct simple roots of $g'$,
$g$ is strictly increasing on $\left]\ensuremath{-\infty},z_-\right]$,
$g$ is strictly decreasing on $[z_-,z_+]$,
$g$ is strictly increasing on $\left[z_+,\ensuremath{+\infty}\right[$.
Moreover,
\begin{equation}
\label{e:221212c}
g(z_-)g(z_+)= 4\Delta,
\quad\text{where}\;\;
\Delta := (p/3)^3+(q/2)^2,
\end{equation}
and this case trifurcates further as follows:
\begin{enumerate}
\item
\label{t:roots1a}
$\Delta>0$: Then $g$ has exactly one real root $r$. It is simple and
given by
\begin{equation}
r := u_-+u_+,\quad
\text{where}\;\;
u_{\pm} := \sqrt[\mathlarger 3]{\frac{-q}{2}\pm \sqrt{\Delta}}.
\end{equation}
The two remaining simple nonreal roots are
\begin{equation}
-\ensuremath{\tfrac{1}{2}}(u_-+u_+)\pm\ensuremath{\mathrm i}\ensuremath{\tfrac{1}{2}}\sqrt{3}(u_--u_+).
\end{equation}
\item
\label{t:roots1b}
$\Delta=0$: If $q>0$ (resp.\ $q<0$), then
$2z_-$ (resp.\ $2z_+$) is a simple real root while $z_+$ (resp.\ $z_-$) is a double root.
Moreover, these cases can be combined into\footnote{Observe that this is the case when $\Delta\to 0^+$ in \cref{t:roots1a}.}
\begin{equation}
\frac{3q}{p}= 2\sqrt[\mathlarger 3]{\frac{-q}{2}}\;\text{is a simple root of $g$}\;\;\text{and}\;\;
\frac{-3q}{2p}= -\sqrt[\mathlarger 3]{\frac{-q}{2}}\;\text{is a double root of $g$.}
\end{equation}
\item
\label{t:roots1c}
$\Delta<0$:
Then $g$ has three simple real roots $r_-,r_0,r_+$ where
$r_-<z_-<r_0<z_+<r_+$.
Indeed,
set
\begin{equation}
\theta := \arccos \frac{-q/2}{(-p/3)^{3/2}},
\end{equation}
which lies in
$\left]0,\pi\right[$, and then define $z_0,z_1,z_2$ by
\begin{equation}
z_k := 2(-p/3)^{1/2}\cos\Big(\frac{\theta+2k\pi}{3} \Big).
\end{equation}
Then $r_- = z_1$, $r_0=z_2$, and $r_+ = z_0$.
\end{enumerate}
\item
\label{t:roots2}
$p=0$: Then $g'$ has a double root at $0$, and $g$ is strictly increasing on $\ensuremath{\mathbb R}$.
The only real root is
\begin{equation}
r := (-q)^{1/3}.
\end{equation}
If $q=0$, then $r$ is a triple root.
If $q\neq 0$, then $r$ is a simple root and the remaining nonreal simple roots
are $-\ensuremath{\tfrac{1}{2}} r \pm \ensuremath{\mathrm i}\ensuremath{\tfrac{1}{2}}\sqrt{3}r$.
\item
\label{t:roots3}
$p>0$: Then $g'$ has no real root, $g$ is strictly increasing on $\ensuremath{\mathbb R}$, and
$g$ has exactly one real root $r$. It is simple and given by
\begin{equation}
r := u_-+u_+,\quad
\text{where}\;\;
u_{\pm} := \sqrt[\mathlarger 3]{\frac{-q}{2}\pm \sqrt{\Delta}} \;\;\text{and}\;\;
\Delta := (p/3)^3+(q/2)^2.
\end{equation}
Once again, the two remaining simple nonreal roots are
\begin{equation}
-\ensuremath{\tfrac{1}{2}}(u_-+u_+)\pm\ensuremath{\mathrm i}\ensuremath{\tfrac{1}{2}}\sqrt{3}(u_--u_+).
\end{equation}
\end{enumerate}
\end{theorem}
\begin{proof}
Except for the formulas for the roots, all statements on $g$ follow from standard calculus.
\cref{t:roots1a}:
Because $\Delta>0$ and $g$ is strictly decreasing on $[z_-,z_+]$,
it follows from \cref{e:221212c}
that $g$ has the same sign on $[z_-,z_+]$ and so $g$ has no root in that interval.
Now $g$ is strictly increasing on $\left]\ensuremath{-\infty},z_-\right]$ and on
$\left[z_+,\ensuremath{+\infty}\right[$; hence,
$g$ has exactly one real root $r$ and it lies outside $[z_-,z_+]$.
Note that $r$ must be simple because the roots of $g'$ are $z_\mp$ and
$r\neq z_\mp$.
Note that $u_-<u_+$.
Next,
$u_-^3u_+^3
= (q/2)^2-\Delta = -(p/3)^3
$
and so
\begin{equation}
\label{e:221212a}
u_-u_+ = -p/3.
\end{equation}
Also,
\begin{equation}
\label{e:221212b}
u_-^3+u_+^3 = \frac{-q}{2}-\sqrt{\Delta}+\frac{-q}{2}+\sqrt{\Delta}=-q.
\end{equation}
Hence
\begin{align*}
g(r) &= r^3+pr+q\\
&=(u_-+u_+)^3+p(u_-+u_+)+q\\
&=u_-^3 + u_+^3 +3u_-u_+(u_-+u_+)+p(u_-+u_+)+q\\
&=\big(u_-^3 + u_+^3 \big) + (3u_-u_++p)(u_-+u_+)+q\\
&= -q + \big(3(-p/3)+p\big)(u_-+u_+)+q \tag{using \cref{e:221212a} and \cref{e:221212b}}\\
&= 0
\end{align*}
as claimed.
Observe that we only need the properties \cref{e:221212a} and \cref{e:221212b}
about $u_-,u_+$ to conclude that $u_-+u_+$ is a root of $g$.
This observation leads us quickly to the two remaining complex roots:
First, denote the primitive 3rd root of unity by $\omega$, i.e.,
\begin{equation}
\label{e:prim3root}
\omega := \exp(2\pi \ensuremath{\mathrm i}/3) = \cos(2\pi/3)+\ensuremath{\mathrm i}\sin(2\pi/3)=-\ensuremath{\tfrac{1}{2}} +\ensuremath{\mathrm i}\ensuremath{\tfrac{1}{2}}\sqrt{3}.
\end{equation}
Then $\omega^2 = \overline{\omega} = -\ensuremath{\tfrac{1}{2}} -\ensuremath{\mathrm i}\ensuremath{\tfrac{1}{2}}\sqrt{3}$
and $\omega^3 = \overline{\omega}^3 = 1$.
Now set
\begin{equation*}
v_- := \omega u_- \;\;\text{and}\;\; v_+ := \omega^2u_+ = \overline{\omega}u_+.
\end{equation*}
Then $v_-v_+=(\omega u_-)(\omega^2u_+)=\omega^3u_-u_+=u_-u_+=-p/3$
by \cref{e:221212a}, and
$v_-^3+v_+^3=(\omega u_-)^3+(\omega^2u_+)^3 = \omega^3u_-^3+\omega^6u_+^3
=u_-^3+u_+^3=-q$ by \cref{e:221212b}.
Hence
\begin{align*}
v_-+v_+
&=
\omega u_- + \overline{\omega}u_+\\
&=\big(-\ensuremath{\tfrac{1}{2}} +\ensuremath{\mathrm i}\ensuremath{\tfrac{1}{2}}\sqrt{3}\big)u_- + \big(-\ensuremath{\tfrac{1}{2}} -\ensuremath{\mathrm i}\ensuremath{\tfrac{1}{2}}\sqrt{3}\big)u_+\\
&= -\ensuremath{\tfrac{1}{2}}(u_-+u_+)+\ensuremath{\mathrm i}\ensuremath{\tfrac{1}{2}}\sqrt{3}(u_--u_+)
\end{align*}
and its conjugate are the remaining simple complex roots of $g$.
\cref{t:roots1b}:
From \cref{e:221212c}, it follows that $z_-$ or $z_+$ is a root of $g$.
In view of \cref{f:reproots} and $g'(z_-)=g'(z_+)=0$, it follows that one of
$z_-,z_+$ is at least a double root, but not both; moreover, it cannot be a triple root
because $0$ is the only root of $g''$ and $z_-<0<z_+$.
Hence exactly one of $z_-,z_+$ is a double root.
To verify the remaining parts, we first define
\begin{equation*}
r_1 := \frac{3q}{p}\;\;\text{and}\;\;
r_2 := \frac{-3q}{2p}.
\end{equation*}
Because $\Delta=0$, it follows that $4p^3+27q^2=0$.
Hence
\begin{align*}
g(r_1)
&= r_1^3+pr_1+q
= \frac{27q^3}{p^3}+\frac{3pq}{p}+q
= \frac{27q^3}{p^3}+4q
= \frac{q}{p^3}\big(27q^2+4p^3 \big)
= 0
\end{align*}
and
\begin{align*}
g(r_2)
&= r_2^3+pr_2+q
= \frac{-27q^3}{8p^3}+\frac{-3pq}{2p}+q
= \frac{-27q^3}{8p^3}-\frac{q}{2}
= \frac{-q}{8p^3}\big(27q^2+4p^3 \big)
= 0.
\end{align*}
The assumption that $\Delta=0$ readily yields
\begin{equation*}
p = \frac{-3q^{2/3}}{2^{2/3}}\;\;\text{and}\;\;
|q| = \frac{2(-p)^{3/2}}{3^{3/2}}.
\end{equation*}
Hence
\begin{equation*}
r_1 = 3q p^{-1}
= 3q (-1)3^{-1}q^{-2/3}2^{2/3} = 2^{2/3}(-q)^{1/3}
\end{equation*}
and
\begin{equation*}
r_2 = -3q 2^{-1}p^{-1}
= -3q 2^{-1}(-1)3^{-1}q^{-2/3}2^{2/3} = -2^{-1/3}(-q)^{1/3}
\end{equation*}
as claimed.
If $q>0$, then
\begin{equation*}
r_1 = \frac{3q}{p} = \frac{3\cdot 2(-p)^{3/2}}{3^{3/2}p}=-2(-p/3)^{1/2}=2z_-
\end{equation*}
and
\begin{equation*}
r_2 = \frac{-3q}{2p}=-\ensuremath{\tfrac{1}{2}}\frac{3q}{p}=-\ensuremath{\tfrac{1}{2}} r_1 = -\ensuremath{\tfrac{1}{2}} 2z_-=z_+.
\end{equation*}
Similarly, if $q<0$, then
$r_1=2z_+$ and $r_2=z_-$.
No matter the sign of $q$, we have $r_2\in\{z_-,z_+\}$ and thus $g'(r_2)=0$, i.e.,
$r_2$ is the double root.
\cref{t:roots1c}:
In view of \cref{e:221212c}, $g(z_-)$ and $g(z_+)$ have opposite signs.
Because $g$ is strictly decreasing on $[z_-,z_+]$, it follows that
$g(z_-)>0>g(z_+)$.
Hence there is at least one real root $r_0$ in $\left]z_-,z_+\right[$.
On the other hand, $g$ is strictly increasing on $\left]\ensuremath{-\infty},z_-\right]$
and on $\left[z_+,\ensuremath{+\infty}\right[$ which yields further roots $r_-$ and $r_+$ as announced. Having now three real roots, they must all be simple.
Next, note that $\Delta<0$
$\Leftrightarrow$
$0\leq (q/2)^2<-(p/3)^3 = (-p/3)^3$
$\Leftrightarrow$
$0\leq (q/2)^2/(-p/3)^3<1$
$\Leftrightarrow$
$0\leq (|q|/2)/(-p/3)^{3/2}<1$
$\Leftrightarrow$
$-1<(-q/2)/(-p/3)^{3/2}<1$.
It follows that
\begin{equation}
\label{e:221213b}
\theta = \arccos \frac{-q/2}{(-p/3)^{3/2}} \in \left]0,\pi\right[
\end{equation}
as claimed.
For convenience, we set, for $k\in\{0,1,2\}$,
\begin{equation}
\theta_k := \frac{\theta+2k\pi}{3};
\quad\text{hence,}\;\;
z_k = 2(-p/3)^{1/2}\cos(\theta_k).
\end{equation}
Recall that $0<\theta<\pi$, which allows us to draw three conclusions:
\begin{subequations}
\begin{align}
0<\theta_0=\theta/3<\pi/3
&\Rightarrow
1>\cos(\theta_0)=\cos(\theta/3)>1/2;\\
2\pi/3<\theta_1=(\theta+2\pi)/3<\pi
&\Rightarrow
-1/2 > \cos(\theta_1)=\cos((\theta+2\pi)/3) > -1;\\
4\pi/3<\theta_2 = (\theta+4\pi)/3<5\pi/3
&\Rightarrow
-1/2 < \cos(\theta_2)=\cos((\theta+4\pi)/3) < 1/2.
\end{align}
\end{subequations}
Hence $\cos(\theta_1)<\cos(\theta_2)<\cos(\theta_0)$ and thus
\begin{equation}
z_1<z_2<z_0.
\end{equation}
All we need to do is to verify that each $z_k$ is actually a root of $g$.
To this end, observe first that
the triple-angle formula for the cosine
(see, e.g., \cite[Formula~4.3.28 on page~72]{AS}) yields
\begin{subequations}
\label{e:221213a}
\begin{align}
\cos^3(\theta_k)
&=\frac{3\cos(\theta_k)+\cos(3\theta_k)}{4}
= \frac{3\cos(\theta_k)+\cos(\theta+2k\pi)}{4}\\
&= \frac{3\cos(\theta_k)+\cos(\theta)}{4}.
\end{align}
\end{subequations}
Then
\begin{align*}
g(z_k) &=
z_k^3+pz_k +q\\
&=
8(-p/3)^{3/2}\cos^3(\theta_k)+p2(-p/3)^{1/2}\cos(\theta_k)+q\\
&=
2(-p/3)^{3/2}\big(3\cos(\theta_k)+\cos(\theta) \big)+2(-p/3)^{1/2}p\cos(\theta_k)+q
\tag{using \cref{e:221213a}}\\
&=2(-p/3)^{1/2}\cos(\theta_k)\big(3(-p/3)+p\big)+2(-p/3)^{3/2}\cos(\theta)+q\\
&=2(-p/3)^{3/2}\cos(\theta)+q\\
&=
2(-p/3)^{3/2}\frac{-q/2}{(-p/3)^{3/2}}
+ q\tag{using \cref{e:221213b}}\\
&= 0,
\end{align*}
and this completes the proof for this case.
\cref{t:roots2}: If $q=0$, then $g(z)=z^3$ so $z=0$ is the only root of $g$ and it is of multiplicity $3$. Thus we assume that $q\neq 0$.
Then $g(z)=0$ $\Leftrightarrow$ $z^3+q =0$
$\Leftrightarrow$ $z^3=-q$
$\Rightarrow$ $z=(-q)^{1/3}\neq 0$.
Because $g$ is strictly increasing on $\ensuremath{\mathbb R}$, $r:= (-q)^{1/3}$ is the only real root of $g$.
Because $g'$ has only one real root, namely $0$, it follows that
$g'(r)\neq 0$ and so $r$ is a simple root.
Denoting again by $\omega$ the primitive 3rd root of unity (see \cref{e:prim3root}),
it is clear
that the remaining complex (simple) roots
are $\omega r$ and $\overline{\omega}r$ as claimed.
\cref{t:roots3}:
Note that $\Delta \geq (p/3)^3>0$ because $p>0$.
The fact that $r$ is a root is shown exactly as in \cref{t:roots1a}.
It is simple because $g'$ has no real roots,
and $r$ is unique because $g$ is strictly increasing.
The complex roots are derived exactly as in \cref{t:roots1a}.
\end{proof}
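As a concrete check of case \cref{t:roots1b} (ours, not part of the original proof), take $p=-3$ and $q=2$, so that $\Delta=0$ and $z^3-3z+2=(z-1)^2(z+2)$:

```python
# Case (1b) on a concrete depressed cubic: p = -3, q = 2 gives Delta = 0,
# and z^3 - 3z + 2 = (z - 1)^2 (z + 2), so 3q/p = -2 is the simple root
# and -3q/(2p) = 1 is the double root.
p, q = -3.0, 2.0
Delta = (p / 3) ** 3 + (q / 2) ** 2
assert Delta == 0.0

g = lambda z: z**3 + p * z + q
dg = lambda z: 3 * z**2 + p

r_simple = 3 * q / p          # -2.0
r_double = -3 * q / (2 * p)   # 1.0
assert g(r_simple) == 0.0 and dg(r_simple) != 0.0   # simple root
assert g(r_double) == 0.0 and dg(r_double) == 0.0   # double root
```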
We now provide a concise version of \cref{t:roots}:
\begin{corollary} {\bf (trichotomy)}
\label{c:roots}
Set $\Delta := (p/3)^3+(q/2)^2$.
Then exactly one of the following holds:
\begin{enumerate}
\item $p=0$ or $\Delta>0$: Then $g$ has exactly one real root and it is given by
\begin{equation}
\sqrt[\mathlarger 3]{\frac{-q}{2}+ \sqrt{\Delta}}
+
\sqrt[\mathlarger 3]{\frac{-q}{2}-\sqrt{\Delta}}.
\end{equation}
\item $p<0$ and $\Delta=0$:
Then $g$ has exactly two real roots which are given by
\begin{equation}
\frac{3q}{p}= 2\sqrt[\mathlarger 3]{\frac{-q}{2}}
\;\;\text{and}\;\;
\frac{-3q}{2p}= -\sqrt[\mathlarger 3]{\frac{-q}{2}}.
\end{equation}
\item $\Delta<0$:
Then $g$ has exactly three real roots $z_0,z_1,z_2$ which are given by
\begin{equation}
z_k := 2(-p/3)^{1/2}\cos\Big(\frac{\theta+2k\pi}{3} \Big),
\quad\text{where}\;\;
\theta := \arccos \frac{-q/2}{(-p/3)^{3/2}},
\end{equation}
and where $z_1<z_2<z_0$.
\end{enumerate}
\end{corollary}
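The trichotomy translates directly into a real-root solver for the depressed cubic. A minimal Python sketch follows the three cases verbatim (the function name and the exact-zero test for $\Delta$ are ours and only illustrative):

```python
import math

def cbrt(v):
    """Real cube root, valid for negative arguments as well."""
    return math.copysign(abs(v) ** (1.0 / 3.0), v)

def depressed_cubic_real_roots(p, q):
    """Real roots of z^3 + p*z + q, following the trichotomy above."""
    Delta = (p / 3) ** 3 + (q / 2) ** 2
    if p == 0 or Delta > 0:                  # exactly one real root
        return [cbrt(-q / 2 + math.sqrt(Delta)) + cbrt(-q / 2 - math.sqrt(Delta))]
    if Delta == 0:                           # p < 0: a simple and a double root
        return [3 * q / p, -3 * q / (2 * p)]
    theta = math.acos((-q / 2) / (-p / 3) ** 1.5)
    return [2 * math.sqrt(-p / 3) * math.cos((theta + 2 * k * math.pi) / 3)
            for k in range(3)]               # Delta < 0: three real roots
```

For instance, $z^3-3z+2=(z-1)^2(z+2)$ has $\Delta=0$ and the solver returns the simple root $-2$ and the double root $1$.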
\section{The general cubic}
\label{sec:gencubic}
In this section, we turn to the general cubic
\begin{empheq}[box=\mybluebox]{equation}
\label{e:gencubic}
f(x) := ax^3+bx^2+cx+d,
\quad\text{where}\;\;
a,b,c,d\;\text{are in}\;\ensuremath{\mathbb R}\;\text{and}\; a>0.
\end{empheq}
(The case $a<0$ is treated similarly.)
Note that $f''(x)=6ax+2b$ has exactly one zero, namely
\begin{equation}
\label{e:defx0}
x_0 := \frac{-b}{3a}.
\end{equation}
The change of variables
\begin{equation}
\label{e:charvar}
x = z+x_0
\end{equation}
leads to the well known depressed cubic
\begin{equation}
\label{e:defpq}
g(z) := z^3 + pz+q, \;\;\text{where}\;\; p := \frac{3ac-b^2}{3a^2}
\;\;\text{and}\;\; q := \frac{27a^2d+2b^3-9abc}{27a^3}
\end{equation}
which we reviewed in \cref{sec:depcubic}.
Here $ag(z)=f(x)=f(z+x_0)$, so
the roots of $f$ are precisely the roots of $g$ translated by $x_0$:
\begin{equation}
\text{$x$ is a root of $f$ $\Leftrightarrow$ $x-x_0$ is a root of $g$.}
\end{equation}
So all we need to do is find the roots of $g$, and then add $x_0$ to them, to obtain
the roots of $f$. Because the change of variables \cref{e:charvar} is linear,
the multiplicities of the roots are preserved.
Translating some of the results from \cref{t:roots} for $g$ to $f$ gives the following:
\begin{theorem}
\label{t:genroots}
$f$
is strictly concave on $\left]\ensuremath{-\infty},x_0\right]$ and
is strictly convex on $\left[x_0,\ensuremath{+\infty}\right[$, where $x_0$ is the unique inflection point of $f$ defined in \cref{e:defx0}.
Recall the definitions of $p,q$ from \cref{e:defpq} and also set
\begin{equation}
\Delta := (p/3)^3+(q/2)^2 = \frac{(3ac-b^2)^3}{(9a^2)^3}
+ \frac{(27a^2d+2b^3-9abc)^2}{(54a^3)^2}.
\end{equation}
Then exactly one of the following cases occurs:
\begin{enumerate}
\item
\label{t:genroots1}
\fbox{$b^2>3ac \Leftrightarrow p<0$}\,: Set $x_\pm :=(-b\pm\sqrt{b^2-3ac})/(3a)$.
Then $x_\pm$ are two distinct simple roots of $f'$,
$f$ is strictly increasing on $\left]\ensuremath{-\infty},x_-\right]$,
$f$ is strictly decreasing on $[x_-,x_+]$,
$f$ is strictly increasing on $\left[x_+,\ensuremath{+\infty}\right[$.
This case trifurcates further as follows:
\begin{enumerate}
\item
\label{t:genroots1a}
\fbox{$\Delta>0$}\,: Then $f$ has exactly one real root; moreover, it
is simple and given by
\begin{equation}
x_0+u_-+u_+,\quad
\text{where}\;\;
u_{\pm} := \sqrt[\mathlarger 3]{\frac{-q}{2}\pm \sqrt{\Delta}}.
\end{equation}
The two remaining simple nonreal roots are
$x_0-\ensuremath{\tfrac{1}{2}}(u_-+u_+)\pm\ensuremath{\mathrm i}\ensuremath{\tfrac{1}{2}}\sqrt{3}(u_--u_+)$.
\item
\label{t:genroots1b}
\fbox{$\Delta=0$}\,:
Then $f$ has two distinct real roots: The simple root is
\begin{equation}
x_0+\frac{3q}{p} =
x_0+2\sqrt[\mathlarger 3]{\frac{-q}{2}} =
\frac{4 a b c -b^3 -9 a^{2} d}{a{\left(b^{2} - 3 a c\right)} }
\end{equation}
and the double root is
\begin{equation}
x_0-\frac{3q}{2p} = x_0 -\sqrt[\mathlarger 3]{\frac{-q}{2}} =
\frac{9ad-bc}{2(b^{2} - 3 a c)}.
\end{equation}
\item
\label{t:genroots1c}
\fbox{$\Delta<0$}\,:
Then $f$ has three simple real roots $r_-,r_0,r_+$ where
$r_-<x_-<r_0<x_+<r_+$.
Indeed,
set
\begin{equation}
\theta := \arccos \frac{-q/2}{(-p/3)^{3/2}},
\end{equation}
which lies in
$\left]0,\pi\right[$, and then define $y_0,y_1,y_2$ by
\begin{equation}
y_k := x_0+2(-p/3)^{1/2}\cos\Big(\frac{\theta+2k\pi}{3} \Big).
\end{equation}
Then $r_- = y_1$, $r_0=y_2$, and $r_+ = y_0$.
\end{enumerate}
\item
\label{t:genroots2}
\fbox{$b^2=3ac \Leftrightarrow p=0$}\,:
Then $f$ is strictly increasing on $\ensuremath{\mathbb R}$ and its
only real root is
\begin{equation}
r := x_0 + (-q)^{1/3}.
\end{equation}
If $q=0$, then $r$ is a triple root.
If $q\neq 0$, then $r$ is a simple root and the remaining nonreal simple roots
are $x_0-\ensuremath{\tfrac{1}{2}} (-q)^{1/3} \pm \ensuremath{\mathrm i}\ensuremath{\tfrac{1}{2}}\sqrt{3}(-q)^{1/3}$.
\item
\label{t:genroots3}
\fbox{$b^2<3ac \Leftrightarrow p>0$}\,:
Then $f$ is strictly increasing on $\ensuremath{\mathbb R}$, and
$f$ has exactly one real root; moreover, it is simple and given by
\begin{equation}
x_0+u_-+u_+,\quad
\text{where}\;\;
u_{\pm} := \sqrt[\mathlarger 3]{\frac{-q}{2}\pm \sqrt{\Delta}}.
\end{equation}
The two remaining simple nonreal roots are
$x_0-\ensuremath{\tfrac{1}{2}}(u_-+u_+)\pm\ensuremath{\mathrm i}\ensuremath{\tfrac{1}{2}}\sqrt{3}(u_--u_+)$.
\end{enumerate}
\end{theorem}
In turn, \cref{c:roots} turns into
\begin{corollary}
\label{c:genroots}
Recall \cref{e:defx0} and \cref{e:defpq}, and
set
\begin{equation}
\Delta := (p/3)^3+(q/2)^2 = \frac{(3ac-b^2)^3}{(9a^2)^3}
+ \frac{(27a^2d+2b^3-9abc)^2}{(54a^3)^2}.
\end{equation}
Then exactly one of the following holds:
\begin{enumerate}
\item
\label{c:genroots1}
\fbox{$b^2=3ac$ or $\Delta>0$}\,:
Then $f$ has exactly one real root and it is given by
\begin{equation}
x_0+ \sqrt[\mathlarger 3]{\frac{-q}{2}+ \sqrt{\Delta}}
+
\sqrt[\mathlarger 3]{\frac{-q}{2}-\sqrt{\Delta}}.
\end{equation}
\item
\label{c:genroots2}
\fbox{$b^2>3ac$ and $\Delta=0$}\,:
Then $f$ has exactly two real roots which are given by
\begin{equation}
x_0+\frac{3q}{p}= x_0+2\sqrt[\mathlarger 3]{\frac{-q}{2}}
\;\;\text{and}\;\;
x_0+\frac{-3q}{2p}= x_0-\sqrt[\mathlarger 3]{\frac{-q}{2}}.
\end{equation}
\item
\label{c:genroots3}
\fbox{$\Delta<0$}\,:
Then $f$ has exactly three real (simple) roots $r_0,r_1,r_2$, where
\begin{equation}
r_k := x_0+2(-p/3)^{1/2}\cos\Big(\frac{\theta+2k\pi}{3} \Big),
\quad\text{where}\;\;
\theta := \arccos \frac{-q/2}{(-p/3)^{3/2}},
\end{equation}
and $r_1<r_2<r_0$.
\end{enumerate}
\end{corollary}
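Combining the substitution $x=z+x_0$ with the trichotomy gives a solver for the general cubic. A Python sketch (our own illustration; the function name and sample polynomials are not from the text):

```python
import math

def cbrt(v):
    """Real cube root, valid for negative arguments as well."""
    return math.copysign(abs(v) ** (1.0 / 3.0), v)

def cubic_real_roots(a, b, c, d):
    """Real roots of a*x^3 + b*x^2 + c*x + d (a > 0), via the shift x = z + x0."""
    x0 = -b / (3 * a)
    p = (3 * a * c - b * b) / (3 * a * a)
    q = (27 * a * a * d + 2 * b ** 3 - 9 * a * b * c) / (27 * a ** 3)
    Delta = (p / 3) ** 3 + (q / 2) ** 2
    if p == 0 or Delta > 0:                  # one real root
        return [x0 + cbrt(-q / 2 + math.sqrt(Delta))
                   + cbrt(-q / 2 - math.sqrt(Delta))]
    if Delta == 0:                           # simple root and double root
        return [x0 + 3 * q / p, x0 - 3 * q / (2 * p)]
    theta = math.acos((-q / 2) / (-p / 3) ** 1.5)
    return [x0 + 2 * math.sqrt(-p / 3) * math.cos((theta + 2 * k * math.pi) / 3)
            for k in range(3)]               # three real roots
```

For example, $2x^3-14x^2+28x-16=2(x-1)(x-2)(x-4)$ lands in the trigonometric case, while $x^3-3x^2+4=(x-2)^2(x+1)$ lands in the $\Delta=0$ case.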
\section{Convex Analysis of the general quartic}
\label{sec:convquar}
In this section, we study the function
\begin{empheq}[box=\mybluebox]{equation}
\label{e:genquartic}
h(x) := \alpha x^4 + \beta x^3+ \gamma x^2+ \delta x + \varepsilon, \quad
\text{where $\alpha,\beta,\gamma,\delta,\varepsilon$ are in $\ensuremath{\mathbb R}$ with $\alpha\neq 0$.}
\end{empheq}
We start by characterizing convexity.
\begin{proposition} {\bf (convexity)}
\label{p:convexquartic}
The general quartic \cref{e:genquartic} is convex
if and only if
\begin{equation}
\label{e:221214a}
\alpha>0 \;\;\text{and}\;\; 8\alpha\gamma\geq 3\beta^2.
\end{equation}
\end{proposition}
\begin{proof}
Note that
$h'(x)=4\alpha x^3 + 3\beta x^2 + 2\gamma x + \delta$ and, also completing the square,
\begin{align}
h''(x)&=12 \alpha x^2 + 6\beta x + 2\gamma
= \frac{3}{4}\alpha\Big(4x+\frac{\beta}{\alpha}\Big)^2 + 2\gamma - \frac{3\beta^2}{4\alpha}.
\end{align}
Hence $h''\geq 0$
$\Leftrightarrow$
[$\alpha>0$ and $2\gamma \geq 3\beta^2/(4\alpha)$]
$\Leftrightarrow$ \cref{e:221214a}.
(For further information on deciding nonnegativity of polynomials,
see \cite[Section~3.1.3]{BPT}.)
\end{proof}
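The inequality in \cref{e:221214a} is trivial to test in code; the following Python predicate (ours, for illustration) mirrors it directly:

```python
def quartic_is_convex(alpha, beta, gamma):
    """h(x) = alpha x^4 + beta x^3 + gamma x^2 + delta x + eps is convex
    iff alpha > 0 and 8*alpha*gamma >= 3*beta**2 (delta, eps are irrelevant)."""
    return alpha > 0 and 8 * alpha * gamma >= 3 * beta ** 2

# Example: x^4 + x^3 + x^2 + ... is convex, but x^4 + 2x^3 + x^2 + ... is not:
# there h''(x) = 12x^2 + 12x + 2 is negative at x = -1/2.
```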
Having characterized convexity, we shall assume this condition for the remainder of this section:
\begin{empheq}[box=\mybluebox]{equation}
\label{e:convexquartic}
\text{$h$ is convex, i.e.,\;\;}
\alpha >0 \;\text{and}\; 8\alpha\gamma\geq 3\beta^2.
\end{empheq}
\begin{proposition} {\bf (Fenchel conjugate)}
\label{p:Fenchelquartic}
Recall our assumptions \cref{e:genquartic} and \cref{e:convexquartic}.
Let $y\in \ensuremath{\mathbb R}$.
Then
\begin{equation}
h^*(y) = yx_y-h(x_y),
\end{equation}
where
$p:=(8\alpha\gamma-3\beta^2)/(16\alpha^2)\geq 0$,
$q:=(8\alpha^2(\delta-y)+\beta^3-4\alpha\beta\gamma)/(32\alpha^3)$,
$\Delta := (p/3)^3+(q/2)^2\geq 0$, and
\begin{equation}
\label{e:theo1}
x_y :=
-\frac{\beta}{4\alpha}+
\sqrt[\mathlarger 3]{\frac{-q}{2}+\sqrt{\Delta}}
+
\sqrt[\mathlarger 3]{\frac{-q}{2}-\sqrt{\Delta}}.
\end{equation}
\end{proposition}
\begin{proof}
Because $h$ is supercoercive, it follows from
\cite[Proposition~14.15]{BC2017} that $\ensuremath{\operatorname{dom}} h^*=\ensuremath{\mathbb R}$.
Combining with the differentiability of $h$, it follows that
$y\in \ensuremath{\operatorname{int}\operatorname{dom}}\, h^*\subseteq \ensuremath{\operatorname{dom}} \partial h^* = \ensuremath{{\operatorname{ran}}\,} \partial h = \ensuremath{{\operatorname{ran}}\,} h'$.
Hence, if $h'(x)=y$, then $h^*(y)=xy-h(x)$ and we have found the conjugate.
It remains to solve $h'(x)=y$, i.e.,
\begin{equation}
\label{e:221220a}
4\alpha x^3+3\beta x^2+2\gamma x+\delta-y=0.
\end{equation}
So we set
\begin{equation*}
f(x) := ax^3+bx^2+cx+d,
\quad\text{where}\;\;
a := 4\alpha,
\;\;
b := 3\beta,
\;\;
c := 2\gamma,
\;\;
d := \delta-y.
\end{equation*}
To solve \cref{e:221220a}, i.e., $f(x)=0$, we first note that
\begin{align*}
p
&=
\frac{3ac-b^2}{3a^2}
= \frac{3(4\alpha)(2\gamma)-(3\beta)^2}{3(4\alpha)^2}
=
\frac{8\alpha\gamma-3\beta^2}{16\alpha^2}
\geq 0,
\end{align*}
where the inequality follows from \cref{e:convexquartic}.
Next,
\begin{align*}
q
&=
\frac{3^3a^2d+2b^3-3^2abc}{(3a)^3}
= \frac{3^34^2\alpha^2(\delta-y)+2(3^3\beta^3)-3^2(4\alpha)(3\beta)(2\gamma)}{3^34^3\alpha^3}\\
&=
\frac{8\alpha^2(\delta-y)+\beta^3-4\alpha\beta\gamma}{32\alpha^3}
\end{align*}
and
\begin{equation*}
\Delta = (p/3)^3 + (q/2)^2\geq 0,
\end{equation*}
where the inequality follows because $p\geq 0$.
Then $-b/(3a)=-\beta/(4\alpha)$ and now
\cref{c:genroots}\cref{c:genroots1} yields the unique solution of $f(x)=0$ as
\cref{e:theo1}.
\end{proof}
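For a numerical check of this closed form, here is a Python sketch (the function name is ours; only the standard \texttt{math} module is used) that returns both $h^*(y)$ and the maximizer $x_y$:

```python
import math

def cbrt(v):
    # real cube root (handles negative arguments)
    return math.copysign(abs(v) ** (1.0 / 3.0), v)

def quartic_conjugate(alpha, beta, gamma, delta, eps, y):
    """Fenchel conjugate h*(y) of the convex quartic
    h(x) = alpha x^4 + beta x^3 + gamma x^2 + delta x + eps;
    returns (h*(y), x_y), where h'(x_y) = y."""
    p = (8 * alpha * gamma - 3 * beta ** 2) / (16 * alpha ** 2)
    q = (8 * alpha ** 2 * (delta - y) + beta ** 3
         - 4 * alpha * beta * gamma) / (32 * alpha ** 3)
    Delta = (p / 3) ** 3 + (q / 2) ** 2          # >= 0 since p >= 0
    x_y = (-beta / (4 * alpha)
           + cbrt(-q / 2 + math.sqrt(Delta)) + cbrt(-q / 2 - math.sqrt(Delta)))
    h_xy = (alpha * x_y ** 4 + beta * x_y ** 3 + gamma * x_y ** 2
            + delta * x_y + eps)
    return y * x_y - h_xy, x_y
```

One can then confirm numerically that $h'(x_y)=y$ and that $h^*(y)\geq yx-h(x)$ on sample points $x$.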
\begin{proposition} {\bf (proximal mapping)}
\label{p:proxquartic}
Recall our assumptions \cref{e:genquartic} and \cref{e:convexquartic}.
Let $y\in \ensuremath{\mathbb R}$.
Then
\begin{equation}
\label{e:proxquartic}
\ensuremath{\operatorname{Prox}}_h(y) = -\frac{\beta}{4\alpha}+\sqrt[\mathlarger 3]{\frac{-q}{2}+ \sqrt{\Delta}}
+
\sqrt[\mathlarger 3]{\frac{-q}{2}-\sqrt{\Delta}},
\end{equation}
where
\begin{align}
\label{e:proxquarticpq}
p := \frac{4\alpha(1+2\gamma)-3\beta^2}{16\alpha^2},
\quad
q := \frac{8\alpha^2(\delta-y)+\beta^3-2\alpha\beta(1+2\gamma)}{32\alpha^3},
\end{align}
and
$\Delta := (p/3)^3+(q/2)^2\geq 0$.
\end{proposition}
\begin{proof}
Because $h$ is differentiable and has full domain,
it follows that $\ensuremath{\operatorname{Prox}}_h(y)$ is the \emph{unique} solution $x$
of the equation $h'(x)+x-y=0$.
The proof thus proceeds analogously to that of
\cref{p:Fenchelquartic}; this time we must solve $f(x)=0$, where
\begin{equation*}
f(x) := ax^3+bx^2+cx+d,
\quad\text{where}\;\;
a := 4\alpha,
\;\;
b := 3\beta,
\;\;
c := 2\gamma+1,
\;\;
d := \delta-y.
\end{equation*}
(The only difference is that $c=2\gamma+1$ rather than $2\gamma$ due to the additional term ``$+x$''.)
Thus we know \emph{a priori} that the resulting cubic must have a unique real solution.
We now have
\begin{align*}
\label{e:221214b}
0 &< \frac{1}{4\alpha} \leq
\frac{1}{4\alpha}+\frac{8\alpha\gamma-3\beta^2}{16\alpha^2}
= \frac{4\alpha(1+2\gamma)-3\beta^2}{16\alpha^2}=p
= \frac{12\alpha(1+2\gamma)-9\beta^2}{48\alpha^2}\\
&= \frac{3ac-b^2}{3a^2},
\end{align*}
which is our usual $p$ from discussing roots of the cubic $f$.
Similarly, the $q$ defined here is the same as the usual $q$ for $f(x)$ (see \cref{e:defpq}).
Finally, the formula for $x=\ensuremath{\operatorname{Prox}}_h(y)$ now follows from
\cref{c:genroots}\cref{c:genroots1}.
\end{proof}
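The resulting closed form is again easy to validate numerically; the Python sketch below (function name ours) returns $\ensuremath{\operatorname{Prox}}_h(y)$:

```python
import math

def cbrt(v):
    # real cube root (handles negative arguments)
    return math.copysign(abs(v) ** (1.0 / 3.0), v)

def quartic_prox(alpha, beta, gamma, delta, y):
    """Prox of the convex quartic h at y: the unique solution of
    h'(x) + x - y = 0 (the constant term eps plays no role)."""
    p = (4 * alpha * (1 + 2 * gamma) - 3 * beta ** 2) / (16 * alpha ** 2)
    q = (8 * alpha ** 2 * (delta - y) + beta ** 3
         - 2 * alpha * beta * (1 + 2 * gamma)) / (32 * alpha ** 3)
    Delta = (p / 3) ** 3 + (q / 2) ** 2          # > 0 since p > 0
    return (-beta / (4 * alpha)
            + cbrt(-q / 2 + math.sqrt(Delta)) + cbrt(-q / 2 - math.sqrt(Delta)))
```

The defining equation $h'(x)+x-y=0$ at the returned $x$ serves as the consistency check.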
\begin{example}
\label{ex:geoquart}
Suppose that
\begin{equation}
h(x)=x^4+x^3+x^2+x+1,
\end{equation}
and let $y\in\ensuremath{\mathbb R}$.
Then $h$ is convex and
\begin{subequations}
\begin{align}
\label{e:221220b}
h^*(y)&=yx_y-h(x_y),\quad\text{where}\\
x_y &=
-\frac{1}{4}+
\frac{1}{2}\sqrt[\mathlarger 3]{y-\tfrac{5}{8}+\sqrt{(y-\tfrac{5}{8})^2+(\tfrac{5}{12})^3}}
+
\frac{1}{2}\sqrt[\mathlarger 3]{y-\tfrac{5}{8}-\sqrt{(y-\tfrac{5}{8})^2+(\tfrac{5}{12})^3}}.
\end{align}
\end{subequations}
Moreover,
\begin{equation}
\label{e:221215b}
\ensuremath{\operatorname{Prox}}_h(y) = -\frac{1}{4}+
\frac{1}{2}\sqrt[\mathlarger 3]{y-\tfrac{3}{8}+{\sqrt{(y-\tfrac{3}{8})^2+(\tfrac{3}{4})^3}}}
+\frac{1}{2}\sqrt[\mathlarger 3]{y-\tfrac{3}{8}-{\sqrt{(y-\tfrac{3}{8})^2+(\tfrac{3}{4})^3}}}.
\end{equation}
See \cref{fig:figure3} for a visualization.
\end{example}
\begin{proof}
Note that $h$ fits the pattern of \cref{e:genquartic} with
$\alpha=\beta=\gamma=\delta=\varepsilon=1$.
The characterization of convexity presented in
\cref{p:convexquartic} turns into $1>0$ and $8\geq 3$ which are both obviously true. Hence $h$ is convex.
To compute the Fenchel conjugate, we apply \cref{p:Fenchelquartic} and get
$p=5/16$, $q=(5-8y)/32=-(y-5/8)/4$, and
$\Delta = 5^3/48^3 + (y-5/8)^2/8^2$. Then $-q/2=(y-5/8)/8$.
Hence
\cref{e:theo1} turns into
\begin{align*}
x_y &=
-\frac{1}{4}+
\sqrt[\mathlarger 3]{\frac{y-5/8}{8}+\sqrt{\frac{(y-5/8)^2}{8^2}+\frac{5^3}{48^3}}}
+
\sqrt[\mathlarger 3]{\frac{y-5/8}{8}-\sqrt{\frac{(y-5/8)^2}{8^2}+\frac{5^3}{48^3}}}\\
&=
-\frac{1}{4}+
\sqrt[\mathlarger 3]{\frac{y-5/8}{8}+\sqrt{\frac{(y-5/8)^2}{8^2}+\frac{5^3}{8^2\cdot 12^3}}}
+
\sqrt[\mathlarger 3]{\frac{y-5/8}{8}-\sqrt{\frac{(y-5/8)^2}{8^2}+\frac{5^3}{8^2\cdot 12^3}}},
\end{align*}
which simplifies to \cref{e:221220b}.
To compute $\ensuremath{\operatorname{Prox}}_h(y)$, we utilize
\cref{p:proxquartic}. With fresh values for $p,q,\Delta$, we now have $p=9/16$, $q=(3-8y)/32$, and
$\Delta = ((8y-3)^2+27)/4096 = ((8y-3)^2+3^3)/64^2$.
Hence
$-q/2 = (8y-3)/64$ and
$\sqrt{\Delta}=\sqrt{(8y-3)^2+3^3}/64$.
It follows that
\begin{align*}
\sqrt[\mathlarger 3]{\frac{-q}{2}\pm\sqrt{\Delta}}
&=
\sqrt[\mathlarger 3]{\frac{8y-3}{64}\pm\frac{\sqrt{(8y-3)^2+3^3}}{64}}
= \frac{1}{4}\sqrt[\mathlarger 3]{8y-3\pm{\sqrt{(8y-3)^2+3^3}}}\\
&= \frac{1}{2}\sqrt[\mathlarger 3]{y-\tfrac{3}{8}\pm{\sqrt{(y-\tfrac{3}{8})^2+(\tfrac{3}{4})^3}}}.
\end{align*}
This, $-\beta/(4\alpha)=-1/4$, and \cref{e:proxquartic}
now yield \cref{e:221215b}.
\end{proof}
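The computations in \cref{ex:geoquart} can also be verified numerically from the values of $p$, $q$, and $\Delta$ derived in the proof. The Python sketch below (the sample points $y=1$ and $y=2$ are our choices) checks that $x_y$ satisfies $h'(x_y)=y$ and that $\ensuremath{\operatorname{Prox}}_h(y)$ satisfies $h'(x)+x=y$:

```python
import math

def cbrt(v):
    # real cube root (handles negative arguments)
    return math.copysign(abs(v) ** (1.0 / 3.0), v)

# h(x) = x^4 + x^3 + x^2 + x + 1, so h'(x) = 4x^3 + 3x^2 + 2x + 1.
hp = lambda x: 4 * x ** 3 + 3 * x ** 2 + 2 * x + 1

# Conjugate: with p = 5/16 and q = (5 - 8y)/32, x_y must satisfy h'(x_y) = y.
y = 1.0                                   # h'(0) = 1, so we expect x_y = 0
q = (5 - 8 * y) / 32
D = (5 / 48) ** 3 + (q / 2) ** 2          # Delta = (p/3)^3 + (q/2)^2
x_y = -0.25 + cbrt(-q / 2 + math.sqrt(D)) + cbrt(-q / 2 - math.sqrt(D))

# Prox: with p = 9/16 and q = (3 - 8y)/32, Prox_h(y) solves h'(x) + x = y.
y2 = 2.0                                  # here the solution is exactly 1/4
q2 = (3 - 8 * y2) / 32
D2 = (3 / 16) ** 3 + (q2 / 2) ** 2
x_p = -0.25 + cbrt(-q2 / 2 + math.sqrt(D2)) + cbrt(-q2 / 2 - math.sqrt(D2))
```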
\begin{figure}[htp]
\centering
\includegraphics[width=.31\textwidth]{zoomedinfinalh.pdf}\hfill
\includegraphics[width=.32\textwidth]{zoomedinfinalhstar.pdf}\hfill
\includegraphics[width=.325\textwidth]{zoomedinfinalproxh.pdf}
\caption{A visualization of \cref{ex:geoquart}.
Depicted are $h$ (left), its conjugate $h^*$ (middle), and the proximal mapping $\ensuremath{\operatorname{Prox}}_h$ (right).}
\label{fig:figure3}
\end{figure}
\begin{example}
Suppose that
\begin{equation}
h(x) = \alpha x^4, \quad\text{where $\alpha>0$,}
\end{equation}
and let $y\in\ensuremath{\mathbb R}$.
Then
\begin{equation}
\label{e:alpha4*}
h^*(y)=\frac{3}{4(4\alpha)^{1/3}}|y|^{4/3}
\end{equation}
and
\begin{equation}
\label{e:221215a}
\ensuremath{\operatorname{Prox}}_h(y)=\frac{1}{2}\sqrt[\mathlarger 3]{\frac{y}{\alpha}+ \sqrt{\frac{1+27\alpha y^2}{27\alpha^3}}}
+
\frac{1}{2}\sqrt[\mathlarger 3]{\frac{y}{\alpha}- \sqrt{\frac{1+27\alpha y^2}{27\alpha^3}}}.
\end{equation}
\end{example}
\begin{proof}
Note that $h$ fits the pattern of \cref{e:genquartic} with
$\beta=\gamma=\delta=\varepsilon=0$.
The characterization of convexity presented in
\cref{p:convexquartic} turns into $\alpha>0$ and $0\geq 0$ which are both obviously true. Hence $h$ is convex.
We start by computing the Fenchel conjugate of $h$ using \cref{p:Fenchelquartic}.
We have $p=0$, $q=-y/(4\alpha)$, $\Delta = (y/(8\alpha))^2$,
and $-\beta/(4\alpha)=0$.
Hence $-q/2=y/(8\alpha)$ and $\sqrt{\Delta}=|y|/(8\alpha)$ which imply
$\sqrt[3]{(-q/2)\pm\sqrt{\Delta}} =\sqrt[3]{y/(8\alpha)\pm |y|/(8\alpha) }
= \sqrt[3]{\max\{0,y/(4\alpha)\}}$ or
$\sqrt[3]{\min\{0,y/(4\alpha)\}}$.
Using \cref{e:theo1}, we get
\begin{align*}
x_y &=
\sqrt[\mathlarger 3]{\max\{0,y/(4\alpha)\}}
+\sqrt[\mathlarger 3]{\min\{0,y/(4\alpha)\}}
= \sqrt[3]{y/(4\alpha)}.
\end{align*}
Using \cref{p:Fenchelquartic}, we obtain
\begin{align*}
h^*(y)
&=yx_y-h(x_y)
=yy^{1/3}/(4\alpha)^{1/3}
-\alpha y^{4/3}/(4\alpha)^{4/3}\\
&=
\frac{|y|^{4/3}}{4^{1/3}\alpha^{1/3}}-
\frac{|y|^{4/3}}{4^{4/3}\alpha^{1/3}}
=\frac{|y|^{4/3}}{4^{1/3}\alpha^{1/3}}\big(1-\tfrac{1}{4} \big)
=\frac{3|y|^{4/3}}{4(4\alpha)^{1/3}}
\end{align*}
as claimed.
To compute $\ensuremath{\operatorname{Prox}}_h(y)$, we utilize \cref{p:proxquartic}.
Obtaining fresh values of $p,q,\Delta$, we now have
$p=1/(4\alpha)>0$ and $q=-y/(4\alpha)$ (see \cref{e:proxquarticpq}).
Hence $\Delta = (p/3)^3+(q/2)^2 = (1+27\alpha y^2)/(1728\alpha^3)$ and so
$\sqrt{\Delta}=\sqrt{1+27\alpha y^2}/(8(3\alpha)^{3/2})$.
Now $-\beta/(4\alpha)=0$ and $-q/2=y/(8\alpha)$, so \cref{e:proxquartic} yields
\begin{align*}
\ensuremath{\operatorname{Prox}}_h(y) &=
\sqrt[\mathlarger 3]{\frac{y}{8\alpha}+ \frac{\sqrt{1+27\alpha y^2}}{8(3\alpha)^{3/2}}}
+
\sqrt[\mathlarger 3]{\frac{y}{8\alpha}- \frac{\sqrt{1+27\alpha y^2}}{8(3\alpha)^{3/2}}}\\
&=
\frac{1}{2}\sqrt[\mathlarger 3]{\frac{y}{\alpha}+ \sqrt{\frac{1+27\alpha y^2}{27\alpha^3}}}
+
\frac{1}{2}\sqrt[\mathlarger 3]{\frac{y}{\alpha}- \sqrt{\frac{1+27\alpha y^2}{27\alpha^3}}}
\end{align*}
as claimed.
\end{proof}
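A quick numerical check of \cref{e:221215a} (Python sketch; the function name is ours): the returned value must be the unique real solution of $h'(x)+x-y=0$, i.e., of $4\alpha x^3+x-y=0$.

```python
import math

def cbrt(v):
    # real cube root (handles negative arguments)
    return math.copysign(abs(v) ** (1.0 / 3.0), v)

def prox_quartic_monomial(alpha, y):
    """Prox of h(x) = alpha*x^4 (alpha > 0): the unique real solution
    of 4*alpha*x^3 + x - y = 0, in the closed form above."""
    s = math.sqrt((1 + 27 * alpha * y ** 2) / (27 * alpha ** 3))
    return 0.5 * cbrt(y / alpha + s) + 0.5 * cbrt(y / alpha - s)
```

For instance, with $\alpha=1$ and $y=5$, the cubic $4x^3+x-5$ has the root $x=1$, which the formula reproduces; by oddness, $y=-5$ yields $-1$.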
\begin{remark}
The Fenchel conjugate formula \cref{e:alpha4*} is known and can also be computed by
combining, e.g., \cite[Example~13.2(i) with Proposition~13.23(i)]{BC2017}.
The prox formula \cref{e:221215a} appears --- with a typo though --- in \cite[Example~24.38(v)]{BC2017}.
Finally, for a Maple implementation for quartics, see \cite{Yves}.
\end{remark}
\section{The proximal mapping of $\alpha/x$}
\label{sec:alpha/x}
In this section, we study the convex reciprocal function
\begin{empheq}[box=\mybluebox]{equation}
\label{e:alpha/x}
h(x) := \begin{cases} \alpha/x, &\text{if $x>0$;}\\
\ensuremath{+\infty}, &\text{if $x\leq 0$,}
\end{cases}\qquad
\text{where $\alpha> 0$.}
\end{empheq}
The Fenchel conjugate $h^*$, which requires only solving a \emph{quadratic} equation,
is essentially known (e.g., combine \cite[Example~13.2(ii) and Proposition~13.23(i)]{BC2017}), and given by
\begin{equation}
h^*(y) = \begin{cases}
-2\sqrt{-\alpha y}, &\text{if $y\leq 0$;}\\
\ensuremath{+\infty}, &\text{if $y>0$.}
\end{cases}
\end{equation}
The purpose of this section is to explicitly compute $\ensuremath{\operatorname{Prox}}_h$.
We have the following result:
\begin{proposition}
\label{p:alpha/x}
Suppose that $h$ is given by \cref{e:alpha/x},
and let $y\in\ensuremath{\mathbb R}$.
Set $y_0 := -3\sqrt[3]{\alpha/4}\approx -1.88988 \sqrt[3]{\alpha}$.
Then we have the following three possibilities:
\begin{enumerate}
\item If $y_0<y$, then
\begin{align*}
\ensuremath{\operatorname{Prox}}_h(y)
&= \frac{y}{3} +
\sqrt[\mathlarger 3]{\frac{\alpha}{2}+\Big(\frac{y}{3}\Big)^3 +\sqrt{\alpha \Big( \frac{\alpha}{4}+\Big(\frac{y}{3}\Big)^3 \Big)}}
+\sqrt[\mathlarger 3]{\frac{\alpha}{2}+\Big(\frac{y}{3}\Big)^3 -\sqrt{\alpha \Big( \frac{\alpha}{4}+\Big(\frac{y}{3}\Big)^3 \Big)}}.
\end{align*}
\item If $y=y_0$, then
\begin{equation*}
\ensuremath{\operatorname{Prox}}_h(y_0) = \sqrt[3]{\alpha}/\sqrt[3]{4}\approx 0.62996\sqrt[3]{\alpha}.
\end{equation*}
\item If $y<y_0$, then
\begin{equation*}
\ensuremath{\operatorname{Prox}}_h(y)=
\frac{y}{3}\bigg(1- 2\cos\Big(\frac{1}{3}\arccos\frac{(y/3)^3+\alpha/2}{-(y/3)^{3}}\Big)\bigg).
\end{equation*}
\end{enumerate}
\end{proposition}
\begin{proof}
Because $\ensuremath{\operatorname{dom}} h = \ensuremath{\,\left]0,+\infty\right[}$, we must find the \emph{positive}
solution of the equation $h'(x)+x-y=0$.
Since $h'(x)=-\alpha x^{-2}$, we are looking for the (necessarily unique) positive solution
of $x^2(h'(x)+x-y)=0$, i.e., of
\begin{equation*}
x^3-yx^2-\alpha=0.
\end{equation*}
This fits the pattern of \cref{e:gencubic} in \cref{sec:gencubic},
with parameters
$a=1$, $b=-y$, $c=0$, and $d=-\alpha$.
As in \cref{e:defpq}, we set
\begin{align}
\label{e:Ilop}
p &:=
\frac{3ac-b^2}{3a^2}
= -\frac{y^2}{3}
\begin{cases}
<0, &\text{if $y\neq 0$;}\\
=0, &\text{if $y=0$}
\end{cases}
\end{align}
and
\begin{align*}
q &:=
\frac{27a^2d+2b^3-9abc}{27a^3}
= -\alpha - 2(y/3)^3.
\end{align*}
Next,
\begin{align*}
\Delta
&= (p/3)^3 + (q/2)^2
= -y^6/9^3 + (\alpha+2(y/3)^3)^2/4\\
&=
-(y/3)^6+\alpha^2/4+\alpha(y/3)^3+(y/3)^6\\
&= \alpha\big(\alpha/4 + (y/3)^3\big).
\end{align*}
Hence
\begin{equation}
\Delta
\begin{cases}
< 0 \Leftrightarrow y < y_0; \\
= 0 \Leftrightarrow y = y_0; \\
> 0 \Leftrightarrow y>y_0,
\end{cases}
\quad \text{where}\;\; y_0 := -\frac{3}{\sqrt[3]{4}}\sqrt[3]{\alpha}\approx -1.88988 \sqrt[3]{\alpha}.
\end{equation}
Now set
\begin{equation}
\label{e:Ilofirst}
x_0 := -\frac{b}{3a} = \frac{y}{3}.
\end{equation}
Note that
\begin{equation}
\label{e:Ilo-q/2}
-q/2=(y/3)^3+\alpha/2.
\end{equation}
We now discuss the three possibilities from \cref{c:genroots} --- these will correspond to the three items of the result!
\emph{Case~1}: $b^2=3ac$ or $\Delta>0$; equivalently, $y=0$ or $y>y_0$; equivalently, $y_0<y$.\\
Then \cref{c:genroots}\cref{c:genroots1} yields
\begin{align*}
\ensuremath{\operatorname{Prox}}_h(y)
&= x_0 + \sqrt[\mathlarger 3]{\frac{-q}{2}+\sqrt{\Delta}} + \sqrt[\mathlarger 3]{\frac{-q}{2}-\sqrt{\Delta}}\\
&= \frac{y}{3} +
\sqrt[\mathlarger 3]{\frac{\alpha}{2}+\Big(\frac{y}{3}\Big)^3 +\sqrt{\alpha \Big( \frac{\alpha}{4}+\Big(\frac{y}{3}\Big)^3 \Big)}}
+\sqrt[\mathlarger 3]{\frac{\alpha}{2}+\Big(\frac{y}{3}\Big)^3 -\sqrt{\alpha \Big( \frac{\alpha}{4}+\Big(\frac{y}{3}\Big)^3 \Big)}}
\end{align*}
as claimed.
\emph{Case~2}: $\Delta=0$; equivalently, $y=y_0$.\\
Then \cref{c:genroots}\cref{c:genroots2} yields two distinct real roots.
We can take a short cut here, though:
By exploiting the continuity of $\ensuremath{\operatorname{Prox}}_h$ at $y_0$ via
$\ensuremath{\operatorname{Prox}}_h(y_0)=\lim_{y\to y_0^+}\ensuremath{\operatorname{Prox}}_h(y)$, we get
\begin{equation*}
\ensuremath{\operatorname{Prox}}_h(y_0) =
\frac{y_0}{3}+2\sqrt[\mathlarger 3]{\frac{\alpha}{2}+\Big(\frac{y_0}{3}\Big)^3 }
= \sqrt[3]{\alpha}/\sqrt[3]{4}\approx 0.62996\sqrt[3]{\alpha}.
\end{equation*}
\emph{Case~3}: $\Delta<0$; equivalently, $y<y_0$.\\
By uniqueness of $\ensuremath{\operatorname{Prox}}_h(y)$, the desired root must be the \emph{largest} (and the only positive) real root offered in this case (see \cref{c:genroots}\cref{c:genroots3}):
\begin{equation}
\ensuremath{\operatorname{Prox}}_h(y)=
x_0 + 2(-p/3)^{1/2}\cos\Big(\frac{\theta}{3}\Big),
\quad\text{where}\;\;
\theta := \arccos\frac{-q/2}{(-p/3)^{3/2}}.
\end{equation}
By \cref{e:Ilop},
$-p/3=y^2/9$; thus, using $y<y_0<0$, we have $(-p/3)^{1/2}=-y/3$,
$(-p/3)^{3/2}=-(y/3)^3$, and \cref{e:Ilo-q/2} yields
$-(q/2)/(-p/3)^{3/2}=-((y/3)^3+\alpha/2)/(y/3)^3$.
This and \cref{e:Ilofirst} result in
\begin{equation}
\ensuremath{\operatorname{Prox}}_h(y)=
\frac{y}{3} - 2\frac{y}{3}\cos\Big(\frac{\theta}{3}\Big),
\quad\text{where}\;\;
\theta := \arccos \bigg(-\frac{(y/3)^3+\alpha/2}{(y/3)^3}\bigg).
\end{equation}
\end{proof}
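The three cases of \cref{p:alpha/x} translate directly into code. A Python sketch (the function name is ours; the floating-point equality test for $y=y_0$ is only illustrative): the returned value is the unique positive root of $x^3-yx^2-\alpha=0$.

```python
import math

def cbrt(v):
    # real cube root (handles negative arguments)
    return math.copysign(abs(v) ** (1.0 / 3.0), v)

def prox_reciprocal(alpha, y):
    """Prox of h(x) = alpha/x (on x > 0, alpha > 0): the unique
    positive root of x^3 - y*x^2 - alpha = 0, split into three cases."""
    y0 = -3 * cbrt(alpha / 4)
    if y > y0:                                   # Delta >= 0 here
        c = alpha / 2 + (y / 3) ** 3
        s = math.sqrt(alpha * (alpha / 4 + (y / 3) ** 3))
        return y / 3 + cbrt(c + s) + cbrt(c - s)
    if y == y0:                                  # boundary case
        return cbrt(alpha / 4)
    theta = math.acos(((y / 3) ** 3 + alpha / 2) / (-(y / 3) ** 3))
    return (y / 3) * (1 - 2 * math.cos(theta / 3))   # Delta < 0
```

Checking the residual $x^3-yx^2-\alpha$ at the returned $x$ exercises all three branches.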
\begin{remark}
Suppose that $\alpha=1$. Then $\ensuremath{\operatorname{Prox}}_h$ was discussed in \cite{p-o.net};
however, no explicit formulae were presented.
For a visualization of $\ensuremath{\operatorname{Prox}}_h$ in this case, see \cref{fig2}.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{oneoverxfinal}
\caption{A visualization of \cref{p:alpha/x} when $\alpha=1$.
}
\label{fig2}
\end{figure}
\end{remark}
\section{Projection onto epigraph of a parabola}
\label{sec:projepipar}
In this section, we study projection onto the epigraph of the function
\begin{empheq}[box=\mybluebox]{equation}
h\colon \ensuremath{\mathbb R}^n\to\ensuremath{\mathbb R}\colon \ensuremath{\mathbf{x}} \mapsto \alpha\|\ensuremath{\mathbf{x}}\|^2,
\quad
\text{where $\alpha> 0$.}
\end{empheq}
\begin{theorem}
\label{t:projepipar}
Set $E := \epi h \subseteq \ensuremath{\mathbb R}^{n+1}$.
Let $(\ensuremath{\mathbf{y}},\eta)\in(\ensuremath{\mathbb R}^n\times\ensuremath{\mathbb R})$.
If $(\ensuremath{\mathbf{y}},\eta)\in E$, then $P_E(\ensuremath{\mathbf{y}},\eta)=(\ensuremath{\mathbf{y}},\eta)$.
So we assume that
$(\ensuremath{\mathbf{y}},\eta)\in(\ensuremath{\mathbb R}^n\times\ensuremath{\mathbb R})\smallsetminus E$, i.e.,
$\alpha\|\ensuremath{\mathbf{y}}\|^2>\eta$. Set $\nu := \|\ensuremath{\mathbf{y}}\|\geq 0$,
\begin{equation}
p := -\frac{(2\alpha \eta-1)^2}{12\alpha^2},
\quad
q := \frac{(2\alpha \eta-1)^3-27\alpha^2\nu^2}{108\alpha^3},
\end{equation}
$\Delta := (p/3)^3+(q/2)^2 = (27\alpha^2{\nu^2}-2(2\alpha\eta-1)^3)\nu^2/(1728\alpha^4)$, and
\begin{equation}
x := \begin{cases}
-\dfrac{\alpha \eta+1}{3\alpha} +
\sqrt[\mathlarger 3]{-q/2 +\sqrt{\Delta}}
+
\sqrt[\mathlarger 3]{-q/2 -\sqrt{\Delta}}, &\text{if $\Delta\geq 0$;}\\[+5mm]
-\dfrac{\alpha \eta+1}{3\alpha} +
\dfrac{|2\alpha \eta-1|}{3\alpha}\cos\Big(\dfrac{1}{3}\arccos \dfrac{-q/2}{(-p/3)^{3/2}}\Big), &\text{if $\Delta<0$.}
\end{cases}
\end{equation}
Then
\begin{equation}
P_E(\ensuremath{\mathbf{y}},\eta) = \Big(\frac{\ensuremath{\mathbf{y}}}{1+2\alpha x},\eta+x\Big).
\end{equation}
See \cref{fig3} for an illustration for the case $\alpha=1/2$.
\end{theorem}
\begin{proof}
For $x\geq 0$, we have $x h=x\alpha\|\cdot\|^2$,
$x \nabla h=2\alpha x\ensuremath{\operatorname{Id}}$,
$\ensuremath{\operatorname{Id}}+ x \nabla h=(1+2\alpha x)\ensuremath{\operatorname{Id}}$ and therefore
$\ensuremath{\operatorname{Prox}}_{x h}=(1+2\alpha x)^{-1}\ensuremath{\operatorname{Id}}$.
In view of \cite[Theorem~6.36]{Beck2}, we must first find a positive root $x$ of
$\varphi(x) := h(\ensuremath{\operatorname{Prox}}_{x h}(\ensuremath{\mathbf{y}}))-x-\eta=\alpha \|\ensuremath{\mathbf{y}}\|^2/(1+2\alpha x)^2-x-\eta=0$.
Note that $\varphi(0)>0$, that $\varphi$ is strictly decreasing on $\ensuremath{\mathbb{R}_+}$, and that
$\varphi(x)\to\ensuremath{-\infty}$ as $x\to\ensuremath{+\infty}$.
Hence $\varphi$ has \emph{exactly one} positive root.
Multiplying by $(1+2\alpha x)^2>0$, where $x>0$, results in the cubic
$\alpha \nu^2-(x+\eta)(1+2\alpha x)^2=0$,
which must have \emph{exactly one} positive root.
Re-arranging leads us to
\begin{equation}
f(x) := 4\alpha^2 x^3 + 4\alpha(\alpha \eta+1)x^2 + (4\alpha \eta+1)x +\eta-\alpha \nu^2=0,
\end{equation}
a cubic which we know has exactly one positive root.
As in \cref{sec:gencubic}, we set
\begin{equation}
a := 4\alpha^2,
\;\;
b := 4\alpha(\alpha \eta+1),
\;\;
c := 4\alpha \eta+1,
\;\;
d := \eta-\alpha \nu^2 < 0,
\end{equation}
\begin{equation}
p := \frac{3ac-b^2}{3a^2} = -\frac{(2\alpha \eta-1)^2}{12\alpha^2}\leq 0,
\end{equation}
and
\begin{equation}
q := \frac{27a^2d+2b^3-9abc}{27a^3} = \frac{8\alpha^3\eta^3-12\alpha^2\eta^2+6\alpha \eta-27\alpha^2\nu^2-1}{108\alpha^3}.
\end{equation}
We then have
\begin{subequations}
\begin{align}
\Delta &:= (p/3)^3+(q/2)^2 = \frac{\big(8\alpha^3\eta^3-12\alpha^2\eta^2+6\alpha \eta-27\alpha^2\nu^2-1\big)^2-(2\alpha \eta-1)^6}{(6\alpha)^6}\\
&=-\frac{\nu^2}{1728\alpha^4}\big(16\alpha^3\eta^3-24\alpha^2\eta^2+12\alpha \eta-27\alpha^2\nu^2-2 \big)\\
&=\frac{\nu^2}{1728\alpha^4}\big(27\alpha^2\nu^2-2(2\alpha\eta-1)^3\big),
\end{align}
\end{subequations}
as claimed.
Utilizing \cref{c:genroots}, we have
\begin{subequations}
\begin{equation}
x = -\frac{\alpha \eta+1}{3\alpha} +
\sqrt[\mathlarger 3]{-q/2 +\sqrt{\Delta}}
+
\sqrt[\mathlarger 3]{-q/2 -\sqrt{\Delta}},
\quad\text{if $\Delta\geq 0$}
\end{equation}
and
\begin{equation}
x = -\frac{\alpha \eta+1}{3\alpha} +
\frac{|2\alpha \eta-1|}{3\alpha}\cos\Big(\frac{1}{3}\arccos \frac{-q/2}{(-p/3)^{3/2}}\Big)
\quad\text{if $\Delta< 0$.}
\end{equation}
\end{subequations}
(Because we know there is \emph{exactly one} positive root, it is clear that we must pick
$r_0$ in \cref{c:genroots}\cref{c:genroots3} when $\Delta<0$.)
Finally, \cite[Theorem~6.36]{Beck2} yields
\begin{equation}
P_E(\ensuremath{\mathbf{y}},\eta) = \big(\ensuremath{\operatorname{Prox}}_{xh}(\ensuremath{\mathbf{y}}),\eta+x\big)
= \Big(\frac{\ensuremath{\mathbf{y}}}{1+2\alpha x},\eta+x\Big)
\end{equation}
as claimed.
\end{proof}
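\cref{t:projepipar} is straightforward to implement. The following stdlib-only Python sketch (the function name and the list-based vectors are our choices) returns $P_E(\ensuremath{\mathbf{y}},\eta)$ and covers both signs of $\Delta$:

```python
import math

def cbrt(v):
    # real cube root (handles negative arguments)
    return math.copysign(abs(v) ** (1.0 / 3.0), v)

def project_epi_parabola(alpha, y_vec, eta):
    """Projection of (y_vec, eta) onto the epigraph of alpha*||.||^2
    (alpha > 0); y_vec is a list of floats."""
    nu2 = sum(t * t for t in y_vec)
    if alpha * nu2 <= eta:                       # already in the epigraph
        return list(y_vec), eta
    p = -(2 * alpha * eta - 1) ** 2 / (12 * alpha ** 2)
    q = ((2 * alpha * eta - 1) ** 3 - 27 * alpha ** 2 * nu2) / (108 * alpha ** 3)
    Delta = (p / 3) ** 3 + (q / 2) ** 2
    if Delta >= 0:
        x = (-(alpha * eta + 1) / (3 * alpha)
             + cbrt(-q / 2 + math.sqrt(Delta)) + cbrt(-q / 2 - math.sqrt(Delta)))
    else:                                        # trigonometric branch
        x = (-(alpha * eta + 1) / (3 * alpha)
             + abs(2 * alpha * eta - 1) / (3 * alpha)
             * math.cos(math.acos((-q / 2) / (-p / 3) ** 1.5) / 3))
    return [t / (1 + 2 * alpha * x) for t in y_vec], eta + x
```

For points outside the epigraph the projection lies on the boundary, i.e., $\alpha\|\ensuremath{\mathbf{y}}/(1+2\alpha x)\|^2=\eta+x$, which provides an easy consistency check.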
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{Test1}
\caption{A visualization of \cref{t:projepipar} when $n=1$ and $\alpha=1/2$.
The epigraph is shown in gray.
The red curve corresponds to $\Delta=0$, the green region to $\Delta<0$ (where trig functions are used) and the blue region to $\Delta>0$.}
\label{fig3}
\end{figure}
\section{On the projection of a rectangular hyperbolic paraboloid }
\label{sec:missing}
In this section, $X$ is a real Hilbert space and we set
\begin{empheq}[box=\mybluebox]{equation}
S := \menge{(\ensuremath{\mathbf{x}},\ensuremath{\mathbf{y}},\gamma)\in X\times X\times\ensuremath{\mathbb R}}{\scal{\ensuremath{\mathbf{x}}}{\ensuremath{\mathbf{y}}}=\alpha\gamma},
\quad
\text{where $\alpha\in\ensuremath{\mathbb R}\smallsetminus\{0\}$.}
\end{empheq}
Using the Hilbert product space norm
$\|(\ensuremath{\mathbf{x}},\ensuremath{\mathbf{y}},\gamma)\| := \sqrt{\|\ensuremath{\mathbf{x}}\|^2+\|\ensuremath{\mathbf{y}}\|^2+\beta^2\gamma^2}$,
where $\beta>0$, we are interested
in finding the projection onto $S$.
Various cases were discussed in \cite{hypar},
but three were treated only implicitly.
Armed with the cubic, we are now able to treat two of these cases explicitly (the remaining case features a quintic and remains hard).
The first case concerns
\begin{equation}
\label{e:case1bla}
P_S(\ensuremath{\mathbf{z}},-\ensuremath{\mathbf{z}},\gamma),
\quad\text{when}\;\;
\ensuremath{\mathbf{z}}\in X\smallsetminus\{0\}\;\text{and}\;
\alpha(\gamma-\alpha/\beta^2)<-\|\ensuremath{\mathbf{z}}\|^2/4,
\end{equation}
while the second case is
\begin{equation}
\label{e:case2bla}
P_S(\ensuremath{\mathbf{z}},\ensuremath{\mathbf{z}},\gamma),
\quad\text{when}\;\;
\ensuremath{\mathbf{z}}\in X\smallsetminus\{0\}\;\text{and}\;
\alpha(\gamma+\alpha/\beta^2)>\|\ensuremath{\mathbf{z}}\|^2/4.
\end{equation}
\subsection{The case when \cref{e:case1bla} holds}
\begin{theorem}
Suppose
$\ensuremath{\mathbf{z}}\in X\smallsetminus\{0\}$, set
\begin{equation}
\zeta := \|\ensuremath{\mathbf{z}}\|>0,
\end{equation}
and assume that
\begin{equation}
\label{e:230109a}
\alpha(\gamma-\alpha/\beta^2)<-\zeta^2/4.
\end{equation}
Set
\begin{equation}
\label{e:230109b}
p := -\frac{(\alpha+\beta^{2} \gamma )^{2}}{3 \alpha^{2}},
\quad
q :=
\frac{2(\alpha+\beta^2\gamma)^3}{27\alpha^3}
+ \frac{\beta^2\zeta^2}{\alpha^2},
\end{equation}
and
\begin{equation}
\label{e:230109c}
\Delta := (p/3)^3+(q/2)^2=
\frac{\beta^2\zeta^2}{\alpha^2}\Big(\frac{\beta^2\zeta^2}{4\alpha^2}+\frac{(\alpha+\beta^2\gamma)^3} {27\alpha^3}\Big).
\end{equation}
If $\Delta\geq 0$, then set
\begin{equation}
\label{e:xDnonneg}
x := \frac{2\alpha-\beta^2\gamma}{3\alpha}+ \sqrt[\mathlarger 3]{\frac{-q}{2}+ \sqrt{\Delta}}
+
\sqrt[\mathlarger 3]{\frac{-q}{2}-\sqrt{\Delta}};
\end{equation}
and if $\Delta<0$, then set
\begin{subequations}
\label{e:xDneg}
\begin{equation}
x := \frac{2\alpha-\beta^2\gamma}{3\alpha}
+\delta\frac{2(\alpha+\beta^2\gamma)}{3\alpha}\cos\bigg(\frac{1}{3}\Big((3+\delta)\pi+\arccos \frac{-q/2}{(-p/3)^{3/2}}\Big) \bigg)
\end{equation}
where
\begin{equation}
\delta := \sign(\alpha^2+\alpha\beta^2\gamma)\in\{-1,0,1\}.
\end{equation}
\end{subequations}
Then $-1<x<1$ and
\begin{equation}
P_S(\ensuremath{\mathbf{z}},-\ensuremath{\mathbf{z}},\gamma) = \Big(\frac{\ensuremath{\mathbf{z}}}{1-x},\frac{-\ensuremath{\mathbf{z}}}{1-x},\gamma+\frac{\alpha x}{\beta^2} \Big).
\end{equation}
\end{theorem}
\begin{proof}
By \cite[Theorem~4.1(ii)(a)]{hypar},
there exists a \emph{unique} $x\in\left]-1,1\right[$ such that
\begin{equation}
\frac{2\zeta^2}{(1-x)^2}+\frac{2\alpha^2x}{\beta^2}+2\alpha\gamma=0;
\end{equation}
multiplying by
$\beta^2(1-x)^2/2>0$ yields the cubic
\begin{equation}
f(x) := ax^3+bx^2+cx+d=0
\end{equation}
where
\begin{equation}
a :=
\alpha^{2}>0,
\;\;
b :=
\alpha \beta^{2} \gamma - 2\alpha^{2},
\;\;
c :=
\alpha^2-2\alpha \beta^{2} \gamma,
\;\;
d :=
\alpha \beta^{2} \gamma + \beta^{2} \zeta^{2}.
\end{equation}
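As a sanity check on this bookkeeping, the identity obtained by multiplying the defining equation by $\beta^2(1-x)^2/2$ can be verified exactly at five rational points, which determine a cubic. The following sketch does this in exact arithmetic; the parameter values are our own arbitrary choices, not taken from the text:

```python
from fractions import Fraction as F

# Arbitrary exact parameter values (illustrative; any nonzero alpha, beta work)
alpha, beta, gamma, zeta = F(3), F(2), F(-5), F(1)

a = alpha**2
b = alpha * beta**2 * gamma - 2 * alpha**2
c = alpha**2 - 2 * alpha * beta**2 * gamma
d = alpha * beta**2 * gamma + beta**2 * zeta**2

def lhs(x):
    # beta^2 (1-x)^2 / 2 times the defining equation from the proof
    return beta**2 * zeta**2 + alpha**2 * x * (1 - x)**2 + alpha * beta**2 * gamma * (1 - x)**2

def rhs(x):
    return a * x**3 + b * x**2 + c * x + d

# five sample points suffice for an identity between cubics
agree = all(lhs(F(x)) == rhs(F(x)) for x in range(-2, 3))
```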
Our strategy is to systematically discuss all cases of
\cref{t:genroots} and then combine cases as much as possible.
As usual, we set
\begin{equation}
\label{e:230109d}
x_0 := -\frac{b}{3a} =
\frac{2\alpha-\beta^{2}\gamma }{3 \alpha}
=
\frac{2\alpha^2-\alpha \beta^{2}\gamma }{3 \alpha^{2}}
\quad\text{and}\quad
p := \frac{3ac-b^2}{3a^2} =
-\frac{(\alpha+\beta^{2} \gamma )^{2}}{3 \alpha^{2}} \leq 0,
\end{equation}
and we note that the definition of $p$ is consistent with the one given
in \cref{e:230109b}.
We have the characterization
\begin{equation}
p = 0
\;\Leftrightarrow\;
\alpha+\beta^2\gamma=0
\;\Leftrightarrow\;
\gamma=-\alpha/\beta^2.
\end{equation}
Again as usual, we set
\begin{equation}
q := \frac{27a^2d+2b^3-9abc}{27a^3} =
\frac{2(\alpha+\beta^2\gamma)^3}{27\alpha^3}
+ \frac{\beta^2\zeta^2}{\alpha^2},
\end{equation}
which matches \cref{e:230109b}, and of course
\begin{equation}
\Delta := (p/3)^3+(q/2)^2=
\frac{\beta^2\zeta^2}{\alpha^2}\Big(\frac{\beta^2\zeta^2}{4\alpha^2}+\frac{(\alpha+\beta^2\gamma)^3} {27\alpha^3}\Big),
\end{equation}
which matches \cref{e:230109c}.
We now systematically discuss the cases of \cref{t:genroots}.
\noindent
\emph{Case~1:} $p<0$, i.e., $\alpha+\beta^2\gamma\neq 0$ by \cref{e:230109b}.
\emph{Case~1(a):} $p<0$ and $\Delta>0$.\\
Then \cref{t:genroots}\cref{t:genroots1a} and
the definition of $x_0$ in \cref{e:230109d} yield
\cref{e:xDnonneg}.
\emph{Case~1(b):} $p<0$ and $\Delta=0$.\\
By \cref{t:genroots}\cref{t:genroots1b}, there are two roots,
$x_0+3q/p$ and $x_0-3q/(2p)$, one of which lies in $\left]-1,1\right[$.
Now
\begin{align*}
x_0-\frac{3q}{2p}-1
&= \frac{2\alpha-\beta^2\gamma}{3\alpha}
-\frac{3}{2}\frac{2(\alpha+\beta^2\gamma)^3+27\alpha\beta^2\zeta^2} {27\alpha^3}
\frac{-3\alpha^2}{(\alpha+\beta^2\gamma)^2}-1\\
&=
\frac{2\alpha-\beta^2\gamma}{3\alpha}+
\frac{(\alpha+\beta^2\gamma)^3+27\alpha\beta^2\zeta^2/2}{3\alpha(\alpha+\beta^2\gamma)^2}-1\\
&=
\frac{\big((2\alpha-\beta^2\gamma)+(\alpha+\beta^2\gamma)-(3\alpha)\big)}{3\alpha(\alpha+\beta^2\gamma)^2}(\alpha+\beta^2\gamma)^2
+ \frac{27\alpha\beta^2\zeta^2/2}{3\alpha(\alpha+\beta^2\gamma)^2}\\
&= \frac{9\beta^2\zeta^2}{2(\alpha+\beta^2\gamma)^2}\\
&\geq 0;
\end{align*}
hence the root $x_0-3q/(2p)$ lies in $\left[1,\ensuremath{+\infty}\right[$ and therefore
our desired root is the remaining one, namely $x_0+3q/p$, which also
allows us to use the representation \cref{e:xDnonneg}.
\emph{Case~1(c):} $p<0$ and $\Delta<0$.\\
According to \cref{t:genroots}\cref{t:genroots1c}, there are three distinct real roots; we must determine which of them lies in $\left]-1,1\right[$.
First,
$b^2-3ac=(\alpha^2+\alpha\beta^2\gamma)^2$
which yields
$\sqrt{b^2-3ac}=|\alpha^2+\alpha\beta^2\gamma|$.
This and the definition of $b$ yields
\begin{align*}
x_\pm &:= \frac{-b\pm\sqrt{b^2-3ac}}{3a}
= \frac{2\alpha^2-\alpha\beta^2\gamma\pm |\alpha^2+\alpha\beta^2\gamma|}{3\alpha^2}\\
&=\frac{1}{3\alpha^2}
\frac{(3\alpha^2)+(\alpha^2-2\alpha\beta^2\gamma)\pm\big|(3\alpha^2)-(\alpha^2-2\alpha\beta^2\gamma)\big|}{2}.
\end{align*}
Hence
\begin{equation}
x_-=\frac{\min\{3\alpha^2,\alpha^2-2\alpha\beta^2\gamma\}}{3\alpha^2}
<\frac{\max\{3\alpha^2,\alpha^2-2\alpha\beta^2\gamma\}}{3\alpha^2}=x_+.
\end{equation}
We now bifurcate one last time.
\emph{Case~1(c)($+$):} $p<0$, $\Delta<0$, and $\alpha^2+\alpha\beta^2\gamma>0$.\\
Then $3\alpha^2>\alpha^2-2\alpha\beta^2\gamma$ and
therefore $x_+=1$.
It follows that our desired root $x$ is the ``middle root'' corresponding to
$k=2$ in \cref{t:genroots}\cref{t:genroots1c}:
\begin{align*}
x
&=x_0+2(-p/3)^{1/2}\cos\bigg(\frac{1}{3}\Big(4\pi+\arccos \frac{-q/2}{(-p/3)^{3/2}}\Big) \bigg)\\
&=
\frac{2\alpha-\beta^2\gamma}{3\alpha}
+\frac{2|\alpha+\beta^2\gamma|}{3|\alpha|}\cos\bigg(\frac{1}{3}\Big(4\pi+\arccos \frac{-q/2}{(-p/3)^{3/2}}\Big) \bigg)\\
&=
\frac{2\alpha-\beta^2\gamma}{3\alpha}
+\frac{2(\alpha+\beta^2\gamma)}{3\alpha}\cos\bigg(\frac{1}{3}\Big(4\pi+\arccos \frac{-q/2}{(-p/3)^{3/2}}\Big) \bigg),
\end{align*}
where in the last line we used the assumption to deduce that
$|\alpha+\beta^2\gamma|/|\alpha|
=|\alpha^2+\alpha\beta^2\gamma|/\alpha^2
=(\alpha^2+\alpha\beta^2\gamma)/\alpha^2=
(\alpha+\beta^2\gamma)/\alpha$.
\emph{Case~1(c)($-$):} $p<0$, $\Delta<0$, and $\alpha^2+\alpha\beta^2\gamma\leq 0$.\\
Then
$3\alpha^2\leq \alpha^2-2\alpha\beta^2\gamma$
and therefore $x_-=1$.
It follows that our desired root is the ``smallest root'' corresponding to
$k=1$ in \cref{t:genroots}\cref{t:genroots1c}:
\begin{align*}
x
&=x_0+2(-p/3)^{1/2}\cos\bigg(\frac{1}{3}\Big(2\pi+\arccos \frac{-q/2}{(-p/3)^{3/2}}\Big) \bigg)\\
&=
\frac{2\alpha-\beta^2\gamma}{3\alpha}
+\frac{2|\alpha+\beta^2\gamma|}{3|\alpha|}\cos\bigg(\frac{1}{3}\Big(2\pi+\arccos \frac{-q/2}{(-p/3)^{3/2}}\Big) \bigg)\\
&=
\frac{2\alpha-\beta^2\gamma}{3\alpha}
-\frac{2(\alpha+\beta^2\gamma)}{3\alpha}\cos\bigg(\frac{1}{3}\Big(2\pi+\arccos \frac{-q/2}{(-p/3)^{3/2}}\Big) \bigg),
\end{align*}
where in the last line we used the assumption to deduce that
$|\alpha+\beta^2\gamma|/|\alpha|
=|\alpha^2+\alpha\beta^2\gamma|/\alpha^2
=-(\alpha^2+\alpha\beta^2\gamma)/\alpha^2=
-(\alpha+\beta^2\gamma)/\alpha$.
Note that the last two cases can be combined to obtain \cref{e:xDneg}.
\noindent
\emph{Case~2:} $p=0$, i.e., $\alpha+\beta^2\gamma= 0$ by \cref{e:230109b}.
Then $\Delta = (q/2)^2\geq 0$; hence, $\sqrt{\Delta}=|q|/2$ and
thus $\{-q/2\pm\sqrt{\Delta}\} = \{-q,0\}$.
By \cref{t:genroots}\cref{t:genroots2}, the only real root is
$x_0+(-q)^{1/3}=x_0+(-q/2+\sqrt{\Delta})^{1/3} + (-q/2-\sqrt{\Delta})^{1/3}$ which is the same as \cref{e:xDnonneg} using \cref{e:230109d}.
\noindent
\emph{Case~3:} $p>0$. In view of \cref{e:230109b}, this case never occurs.
\end{proof}
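The formulas of the theorem can also be checked numerically. The sketch below is our own illustration: the parameter values are chosen by us to satisfy \cref{e:230109a} (one instance for each branch of the formula), and the computed $x$ is verified to lie in $\left]-1,1\right[$ and to solve the defining equation from the proof:

```python
import math

def _cbrt(t):
    # real cube root, valid for negative arguments
    return math.copysign(abs(t) ** (1.0 / 3.0), t)

def root_in_interval(alpha, beta, gamma, zeta):
    """x in ]-1,1[ with 2*zeta^2/(1-x)^2 + 2*alpha^2*x/beta^2 + 2*alpha*gamma = 0,
    evaluated via the theorem's formulas; assumes alpha*(gamma - alpha/beta**2) < -zeta**2/4."""
    A = alpha + beta ** 2 * gamma
    p = -A ** 2 / (3 * alpha ** 2)
    q = 2 * A ** 3 / (27 * alpha ** 3) + beta ** 2 * zeta ** 2 / alpha ** 2
    Delta = (p / 3) ** 3 + (q / 2) ** 2
    x0 = (2 * alpha - beta ** 2 * gamma) / (3 * alpha)
    if Delta >= 0:
        return x0 + _cbrt(-q / 2 + math.sqrt(Delta)) + _cbrt(-q / 2 - math.sqrt(Delta))
    delta = 1 if alpha ** 2 + alpha * beta ** 2 * gamma > 0 else -1
    theta = ((3 + delta) * math.pi + math.acos((-q / 2) / (-p / 3) ** 1.5)) / 3
    return x0 + delta * 2 * A / (3 * alpha) * math.cos(theta)

def residual(alpha, beta, gamma, zeta, x):
    return 2 * zeta ** 2 / (1 - x) ** 2 + 2 * alpha ** 2 * x / beta ** 2 + 2 * alpha * gamma

x_pos = root_in_interval(1.0, 1.0, -2.0, 1.0)   # here Delta >= 0 (Cardano branch)
x_trig = root_in_interval(1.0, 1.0, -4.0, 1.0)  # here Delta < 0 (trigonometric branch)
```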
\subsection{The case when \cref{e:case2bla} holds}
\begin{theorem}
Suppose
$\ensuremath{\mathbf{z}}\in X\smallsetminus\{0\}$, set
\begin{equation}
\zeta := \|\ensuremath{\mathbf{z}}\|>0,
\end{equation}
and assume that
\begin{equation}
\label{e:230111a}
\alpha(\gamma+\alpha/\beta^2)>\zeta^2/4.
\end{equation}
Set
\begin{equation}
\label{e:230111b}
p := -\frac{(\beta^{2} \gamma - \alpha )^{2}}{3 \alpha^{2}},
\quad
q :=
\frac{2(\beta^2\gamma - \alpha)^3}{27\alpha^3}
- \frac{\beta^2\zeta^2}{\alpha^2},
\end{equation}
and
\begin{equation}
\label{e:230111c}
\Delta := (p/3)^3+(q/2)^2=
\frac{\beta^2\zeta^2}{\alpha^2}\Big(\frac{\beta^2\zeta^2}{4\alpha^2}-\frac{(\beta^2\gamma - \alpha)^3} {27\alpha^3}\Big).
\end{equation}
If $\Delta\geq 0$, then set
\begin{equation}
\label{e:xDnonneg11}
x := -\frac{2\alpha+\beta^2\gamma}{3\alpha}+ \sqrt[\mathlarger 3]{\frac{-q}{2}+ \sqrt{\Delta}}
+
\sqrt[\mathlarger 3]{\frac{-q}{2}-\sqrt{\Delta}};
\end{equation}
and if $\Delta<0$, then set
\begin{subequations}
\label{e:xDneg11}
\begin{equation}
x := -\frac{2\alpha+\beta^2\gamma}{3\alpha}
+\delta\frac{2(\alpha-\beta^2\gamma)}{3\alpha}\cos\bigg(\frac{1}{3}\Big((2+2\delta)\pi+\arccos \frac{-q/2}{(-p/3)^{3/2}}\Big) \bigg)
\end{equation}
where
\begin{equation}
\delta := \sign(\alpha^2-\alpha\beta^2\gamma)\in\{-1,0,1\}.
\end{equation}
\end{subequations}
Then $-1<x<1$ and
\begin{equation}
P_S(\ensuremath{\mathbf{z}},\ensuremath{\mathbf{z}},\gamma) = \Big(\frac{\ensuremath{\mathbf{z}}}{1+x},\frac{\ensuremath{\mathbf{z}}}{1+x},\gamma+\frac{\alpha x}{\beta^2} \Big).
\end{equation}
\end{theorem}
\begin{proof}
By \cite[Theorem~4.1(iii)(a)]{hypar},
there exists a \emph{unique} $x\in\left]-1,1\right[$ such that
\begin{equation}
\frac{2\zeta^2}{(1+x)^2}-\frac{2\alpha^2x}{\beta^2}-2\alpha\gamma=0;
\end{equation}
multiplying by
$-\beta^2(1+x)^2/2<0$ yields the cubic
\begin{equation}
f(x) := ax^3+bx^2+cx+d=0
\end{equation}
where
\begin{equation}
a :=
\alpha^{2}>0,
\;\;
b :=
\alpha \beta^{2} \gamma + 2\alpha^{2},
\;\;
c :=
\alpha^2+2\alpha \beta^{2} \gamma,
\;\;
d :=
\alpha \beta^{2} \gamma -
\beta^{2} \zeta^{2}.
\end{equation}
Our strategy is to systematically discuss all cases of
\cref{t:genroots} and then combine cases as much as possible.
As usual, we set
\begin{equation}
\label{e:230111d}
x_0 := -\frac{b}{3a} =
-\frac{2\alpha+\beta^{2}\gamma }{3 \alpha}
=
-\frac{2\alpha^2+\alpha \beta^{2}\gamma }{3 \alpha^{2}}
\quad\text{and}\quad
p := \frac{3ac-b^2}{3a^2} =
-\frac{(\beta^{2} \gamma-\alpha)^{2}}{3 \alpha^{2}} \leq 0,
\end{equation}
and we note that the definition of $p$ is consistent with the one given
in \cref{e:230111b}.
We have the characterization
\begin{equation}
p = 0
\;\Leftrightarrow\;
\beta^2\gamma-\alpha=0
\;\Leftrightarrow\;
\gamma=\alpha/\beta^2.
\end{equation}
Again as usual, we set
\begin{equation}
q := \frac{27a^2d+2b^3-9abc}{27a^3} =
\frac{2(\beta^2\gamma-\alpha)^3}{27\alpha^3}
- \frac{\beta^2\zeta^2}{\alpha^2},
\end{equation}
which matches \cref{e:230111b}, and of course
\begin{equation}
\Delta := (p/3)^3+(q/2)^2=
\frac{\beta^2\zeta^2}{\alpha^2}\Big(\frac{\beta^2\zeta^2}{4\alpha^2}-\frac{(\beta^2\gamma-\alpha)^3} {27\alpha^3}\Big),
\end{equation}
which matches \cref{e:230111c}.
We now systematically discuss the cases of \cref{t:genroots}.
\noindent
\emph{Case~1:} $p<0$, i.e., $\beta^2\gamma-\alpha\neq 0$ by \cref{e:230111b}.
\emph{Case~1(a):} $p<0$ and $\Delta>0$.\\
Then \cref{t:genroots}\cref{t:genroots1a} and
the definition of $x_0$ in \cref{e:230111d} yield
\cref{e:xDnonneg11}.
\emph{Case~1(b):} $p<0$ and $\Delta=0$.\\
By \cref{t:genroots}\cref{t:genroots1b}, there are two roots,
$x_0+3q/p$ and $x_0-3q/(2p)$, one of which lies in $\left]-1,1\right[$.
Now
\begin{align*}
x_0-\frac{3q}{2p}+1
&= -\frac{2\alpha+\beta^2\gamma}{3\alpha}
-\frac{3}{2}\frac{2(\beta^2\gamma-\alpha)^3-27\alpha\beta^2\zeta^2} {27\alpha^3}
\frac{-3\alpha^2}{(\beta^2\gamma-\alpha)^2}+1\\
&=
-\frac{2\alpha+\beta^2\gamma}{3\alpha}+
\frac{(\beta^2\gamma-\alpha)^3-27\alpha\beta^2\zeta^2/2}{3\alpha(\beta^2\gamma-\alpha)^2}+1\\
&=
\frac{\big((-2\alpha-\beta^2\gamma)+(\beta^2\gamma-\alpha)+(3\alpha)\big)}{3\alpha(\beta^2\gamma-\alpha)^2}(\beta^2\gamma-\alpha)^2
-\frac{27\alpha\beta^2\zeta^2/2}{3\alpha(\beta^2\gamma-\alpha)^2}\\
&= -\frac{9\beta^2\zeta^2}{2(\beta^2\gamma-\alpha)^2}\\
&\leq 0;
\end{align*}
hence the root $x_0-3q/(2p)$ lies in $\left]\ensuremath{-\infty},-1 \right]$
and therefore
our desired root is the remaining one, namely $x_0+3q/p$, which also
allows us to use the representation \cref{e:xDnonneg11}.
\emph{Case~1(c):} $p<0$ and $\Delta<0$.\\
According to \cref{t:genroots}\cref{t:genroots1c}, there are three distinct real roots; we must determine which of them lies in $\left]-1,1\right[$.
First,
$b^2-3ac=(\alpha^2-\alpha\beta^2\gamma)^2$
which yields
$\sqrt{b^2-3ac}=|\alpha^2-\alpha\beta^2\gamma|$.
This and the definition of $b$ yields
\begin{align*}
x_\pm &:= \frac{-b\pm\sqrt{b^2-3ac}}{3a}
= \frac{-2\alpha^2-\alpha\beta^2\gamma\pm |\alpha^2-\alpha\beta^2\gamma|}{3\alpha^2}\\
&=\frac{1}{3\alpha^2}
\frac{(-3\alpha^2)+(-\alpha^2-2\alpha\beta^2\gamma)\pm\big|(-3\alpha^2)-(-\alpha^2-2\alpha\beta^2\gamma)\big|}{2}.
\end{align*}
Hence
\begin{equation}
x_-=\frac{\min\{-3\alpha^2,-\alpha^2-2\alpha\beta^2\gamma\}}{3\alpha^2}
<\frac{\max\{-3\alpha^2,-\alpha^2-2\alpha\beta^2\gamma\}}{3\alpha^2}=x_+.
\end{equation}
We now bifurcate one last time.
\emph{Case~1(c)($+$):} $p<0$, $\Delta<0$, and $\alpha^2-\alpha\beta^2\gamma>0$.\\
Then $-3\alpha^2<-\alpha^2-2\alpha\beta^2\gamma$ and
therefore $x_-=-1$.
It follows that our desired root $x$ is the ``middle root'' corresponding to
$k=2$ in \cref{t:genroots}\cref{t:genroots1c}:
\begin{align*}
x
&=x_0+2(-p/3)^{1/2}\cos\bigg(\frac{1}{3}\Big(4\pi+\arccos \frac{-q/2}{(-p/3)^{3/2}}\Big) \bigg)\\
&=
-\frac{2\alpha+\beta^2\gamma}{3\alpha}
+\frac{2|\beta^2\gamma-\alpha|}{3|\alpha|}\cos\bigg(\frac{1}{3}\Big(4\pi+\arccos \frac{-q/2}{(-p/3)^{3/2}}\Big) \bigg)\\
&=
-\frac{2\alpha+\beta^2\gamma}{3\alpha}
+\frac{2(\alpha-\beta^2\gamma)}{3\alpha}\cos\bigg(\frac{1}{3}\Big(4\pi+\arccos \frac{-q/2}{(-p/3)^{3/2}}\Big) \bigg),
\end{align*}
where in the last line we used the assumption to deduce that
$|\beta^2\gamma-\alpha|/|\alpha|
=|\alpha\beta^2\gamma-\alpha^2|/\alpha^2
=(\alpha^2-\alpha\beta^2\gamma)/\alpha^2=
(\alpha-\beta^2\gamma)/\alpha$.
\emph{Case~1(c)($-$):} $p<0$, $\Delta<0$, and $\alpha^2-\alpha\beta^2\gamma\leq 0$.\\
Then
$-3\alpha^2\geq -\alpha^2-2\alpha\beta^2\gamma$
and therefore $x_+=-1$.
It follows that our desired root is the ``largest root'' corresponding to
$k=0$ in \cref{t:genroots}\cref{t:genroots1c}:
\begin{align*}
x
&=x_0+2(-p/3)^{1/2}\cos\bigg(\frac{1}{3}\Big(\arccos \frac{-q/2}{(-p/3)^{3/2}}\Big) \bigg)\\
&=
-\frac{2\alpha+\beta^2\gamma}{3\alpha}
+\frac{2|\beta^2\gamma-\alpha|}{3|\alpha|}\cos\bigg(\frac{1}{3}\Big(\arccos \frac{-q/2}{(-p/3)^{3/2}}\Big) \bigg)\\
&=
-\frac{2\alpha+\beta^2\gamma}{3\alpha}
+\frac{2(\beta^2\gamma-\alpha)}{3\alpha}\cos\bigg(\frac{1}{3}\Big(\arccos \frac{-q/2}{(-p/3)^{3/2}}\Big) \bigg),
\end{align*}
where in the last line we used the assumption to deduce that
$|\beta^2\gamma-\alpha|/|\alpha|
=|\alpha\beta^2\gamma-\alpha^2|/\alpha^2
=(\alpha\beta^2\gamma-\alpha^2)/\alpha^2=
(\beta^2\gamma-\alpha)/\alpha$.
Note that the last two cases can be combined to obtain \cref{e:xDneg11}.
\noindent
\emph{Case~2:} $p=0$, i.e., $\alpha-\beta^2\gamma= 0$ by \cref{e:230111b}.
Then $\Delta = (q/2)^2\geq 0$; hence, $\sqrt{\Delta}=|q|/2$ and
thus $\{-q/2\pm\sqrt{\Delta}\} = \{-q,0\}$.
By \cref{t:genroots}\cref{t:genroots2}, the only real root is
$x_0+(-q)^{1/3}=x_0+(-q/2+\sqrt{\Delta})^{1/3} + (-q/2-\sqrt{\Delta})^{1/3}$ which is the same as \cref{e:xDnonneg11} using \cref{e:230111d}.
\noindent
\emph{Case~3:} $p>0$. In view of \cref{e:230111b}, this case never occurs.
\end{proof}
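As for the previous theorem, a numerical check is straightforward. The sketch below is our own illustration: the parameter values are chosen to satisfy \cref{e:230111a} (one instance per branch), and the computed $x$ is verified against the defining equation from the proof:

```python
import math

def _cbrt(t):
    # real cube root, valid for negative arguments
    return math.copysign(abs(t) ** (1.0 / 3.0), t)

def root_in_interval(alpha, beta, gamma, zeta):
    """x in ]-1,1[ with 2*zeta^2/(1+x)^2 - 2*alpha^2*x/beta^2 - 2*alpha*gamma = 0,
    evaluated via the theorem's formulas; assumes alpha*(gamma + alpha/beta**2) > zeta**2/4."""
    B = beta ** 2 * gamma - alpha
    p = -B ** 2 / (3 * alpha ** 2)
    q = 2 * B ** 3 / (27 * alpha ** 3) - beta ** 2 * zeta ** 2 / alpha ** 2
    Delta = (p / 3) ** 3 + (q / 2) ** 2
    x0 = -(2 * alpha + beta ** 2 * gamma) / (3 * alpha)
    if Delta >= 0:
        return x0 + _cbrt(-q / 2 + math.sqrt(Delta)) + _cbrt(-q / 2 - math.sqrt(Delta))
    delta = 1 if alpha ** 2 - alpha * beta ** 2 * gamma > 0 else -1
    theta = ((2 + 2 * delta) * math.pi + math.acos((-q / 2) / (-p / 3) ** 1.5)) / 3
    return x0 + delta * 2 * (alpha - beta ** 2 * gamma) / (3 * alpha) * math.cos(theta)

def residual(alpha, beta, gamma, zeta, x):
    return 2 * zeta ** 2 / (1 + x) ** 2 - 2 * alpha ** 2 * x / beta ** 2 - 2 * alpha * gamma

x_pos = root_in_interval(1.0, 1.0, 2.0, 1.0)   # here Delta >= 0 (Cardano branch)
x_trig = root_in_interval(1.0, 1.0, 4.0, 1.0)  # here Delta < 0 (trigonometric branch)
```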
\section{A proximal mapping of a closure of a perspective function}
\label{sec:perspective}
The following completes \cite[Example~24.57]{BC2017}, which stopped short of providing the solution of the cubic equation encountered there.
\begin{example}
Define the function $h$ on
$\ensuremath{\mathbb R}^n\times\ensuremath{\mathbb R}$ by
\begin{equation}
h(y,\eta) :=
\begin{cases}
\|y\|^2/(2\eta), &\text{if $\eta>0$;}\\
0, &\text{if $y=0$ and $\eta=0$;}\\
\ensuremath{+\infty}, &\text{otherwise.}
\end{cases}
\end{equation}
Let $\gamma>0$ and $(y,\eta)\in\ensuremath{\mathbb R}^n\times\ensuremath{\mathbb R}$.
Then
\begin{equation}
\ensuremath{\operatorname{Prox}}_{\gamma h}(y,\eta) =
\begin{cases}
(0,0), &\text{if $\|y\|^2+2\gamma\eta\leq 0$;}\\
\Big(\big(1-\frac{\gamma\lambda}{\|y\|} \big)y,\eta+\frac{\gamma\lambda^2}{2} \Big), &\text{if $\|y\|^2+2\gamma\eta>0$,}
\end{cases}
\end{equation}
where
$p = 2(\eta+\gamma)/\gamma$,
$\Delta = (p/3)^3+(\|y\|/\gamma)^2$, and
\begin{equation}
\label{e:230106}
\lambda = \begin{cases}
\sqrt[\mathlarger 3]{\dfrac{\|y\|}{\gamma}+\sqrt{\Delta}}
+
\sqrt[\mathlarger 3]{\dfrac{\|y\|}{\gamma}-\sqrt{\Delta}}, &\text{if $\Delta\geq 0$;}\\[+7mm]
2(-p/3)^{1/2}\cos\Big(\dfrac{1}{3}\arccos\dfrac{\|y\|/\gamma}{(-p/3)^{3/2}}\Big), &\text{if $\Delta<0$.}
\end{cases}
\end{equation}
\end{example}
\begin{proof}
The first cases were already provided in \cite[Example~24.57]{BC2017}.
Now assume
$\|y\|^2+2\gamma\eta>0$.
It was also observed in \cite[Example~24.57]{BC2017} that if $y=0$, then $\ensuremath{\operatorname{Prox}}_{\gamma h}(y,\eta)=(0,\eta)$.
So assume also that $y\neq 0$.
It follows from the discussion in
\cite[Example~24.57]{BC2017} that
$\lambda$ is the unique positive solution of
the already depressed cubic
\begin{equation}
\lambda^3 + \frac{2(\eta+\gamma)}{\gamma}\lambda
-\frac{2\|y\|}{\gamma}=0,
\end{equation}
which is where the discussion in \cite{BC2017} halted. Continuing here, we set
\begin{equation}
p := \frac{2(\eta+\gamma)}{\gamma},
\quad
q := -\frac{2\|y\|}{\gamma} < 0,
\end{equation}
and $\Delta := (p/3)^3+(q/2)^2$.
Using \cref{c:roots}, we see that if $\Delta<0$, then
\begin{equation}
\lambda = 2(-p/3)^{1/2}\cos\Big(\frac{1}{3}\arccos\frac{-q/2}{(-p/3)^{3/2}}\Big)
\end{equation}
while if $\Delta\geq 0$, then
\begin{equation}
\lambda =
\sqrt[\mathlarger 3]{\frac{-q}{2}+\sqrt{\Delta}}
+
\sqrt[\mathlarger 3]{\frac{-q}{2}-\sqrt{\Delta}}
\end{equation}
which slightly simplifies to the expression provided in \cref{e:230106}.
Finally, notice that if $y=0$, then
the assumption that $\|y\|^2+2\gamma\eta>0$ yields
$\eta>0$; thus, $p>0$, $q=0$, and hence $\Delta>0$. Formally, our $\lambda$ then simplifies to $0$ which conveniently allows us to combine this case with the case $y\neq 0$.
\end{proof}
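In algorithmic form, the example reads as follows. This is our own sketch (the function and variable names are ours); the $\lambda$-solver is checked against the depressed cubic it is meant to solve:

```python
import math

def _cbrt(t):
    # real cube root, valid for negative arguments
    return math.copysign(abs(t) ** (1.0 / 3.0), t)

def solve_lambda(norm_y, eta, gamma):
    """Unique positive root of lam^3 + (2*(eta+gamma)/gamma)*lam - 2*norm_y/gamma = 0;
    assumes norm_y > 0 and norm_y**2 + 2*gamma*eta > 0."""
    p = 2 * (eta + gamma) / gamma
    Delta = (p / 3) ** 3 + (norm_y / gamma) ** 2
    if Delta >= 0:
        return _cbrt(norm_y / gamma + math.sqrt(Delta)) + _cbrt(norm_y / gamma - math.sqrt(Delta))
    return 2 * math.sqrt(-p / 3) * math.cos(math.acos((norm_y / gamma) / (-p / 3) ** 1.5) / 3)

def prox_perspective(y, eta, gamma):
    """Prox of gamma*h at (y, eta), following the example's case distinction."""
    norm_y = math.sqrt(sum(t * t for t in y))
    if norm_y ** 2 + 2 * gamma * eta <= 0:
        return [0.0] * len(y), 0.0
    if norm_y == 0:
        return list(y), eta        # then eta > 0 and the point is fixed
    lam = solve_lambda(norm_y, eta, gamma)
    return [(1 - gamma * lam / norm_y) * t for t in y], eta + gamma * lam ** 2 / 2
```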
\section*{Acknowledgments}
The authors thank Dr.\ Amy Wiebe for referring us to \cite{BPT}.
HHB is supported by the Natural Sciences and
Engineering Research Council of Canada. MKL was partially supported by a SERB-UBC fellowship and by the NSERC Discovery grants of HHB and XW.
https://arxiv.org/abs/2001.03938 | Edge ideals with almost maximal finite index and their powers | A graded ideal $I$ in $\mathbb{K}[x_1,\ldots,x_n]$, where $\mathbb{K}$ is a field, is said to have almost maximal finite index if its minimal free resolution is linear up to the homological degree $\mathrm{pd}(I)-2$, while it is not linear at the homological degree $\mathrm{pd}(I)-1$, where $\mathrm{pd}(I)$ denotes the projective dimension of $I$. In this paper we classify the graphs whose edge ideals have this property. This in particular shows that for edge ideals the property of having almost maximal finite index does not depend on the characteristic of $\mathbb{K}$. We also compute the non-linear Betti numbers of these ideals. Finally, we show that for the edge ideal $I$ of a graph $G$ with almost maximal finite index, the ideal $I^s$ has a linear resolution for $s\geq 2$ if and only if the complementary graph $\bar{G}$ does not contain induced cycles of length $4$.

\section*{Introduction}
In this paper, we consider the edge ideals whose minimal free resolution has a relatively large number of linear steps. Let $I$ be a graded ideal in the polynomial ring $S=\mathbb{K}[x_1,\ldots,x_n]$, where $\mathbb{K}$ is a field, generated by homogeneous polynomials of degree $d$. The ideal is called $r$-steps linear if $I$ has a linear resolution up to the homological degree $r$, that is, the graded Betti numbers $\beta_{i,i+j}(I)$ vanish for all $i\leq r$ and all $j>d$. The number
\[
\mathrm{index}(I)=\inf\{r:\ \text{$I$ is not $r$-steps linear}\}
\]
is called the Green--Lazarsfeld index (or briefly index) of $I$. A related invariant, called the $N_{d,r}$-property, was first considered by Green and Lazarsfeld in \cite{GL1, GL2}. In the paper \cite{BC} the index was introduced for the quotient ring $S/I$, where $I$ is generated by quadratics, to be the largest integer $r$ such that the $N_{2,r}$-property holds. It is in general very hard to determine the value of the index. One reason is that this value, in general, depends on the characteristic of $\mathbb{K}$. The index of quadratic monomial ideals is more studied in the literature taking advantage of some combinatorial methods. Indeed, since the index is preserved passing through polarization, one may reduce to the case of squarefree quadratic monomial ideals which can be viewed as the edge ideals of simple graphs, and the index of these ideals is proved to be characteristic independent, see \cite[Theorem~2.1]{EGHP}.
The main question regarding the study of the index of edge ideals is to classify the graphs with respect to this index; in particular, it is especially interesting to see when the index attains its largest or smallest value.
In 1990, Fr\"oberg \cite{Fr} classified the graphs whose edge ideals have a linear resolution. A graded ideal $I$ is said to have a linear resolution if $\mathrm{index}(I)=\infty$. In fact Fr\"oberg showed that given a graph $G$, its edge ideal $I(G)$ has a linear resolution over all fields if and only if the complement $\bar{G}$ of $G$ is chordal, which means that all cycles in $\bar{G}$ of length $>3$ have a chord. In 2005, Eisenbud et al. \cite{EGHP} gave a purely combinatorial description of the index of edge ideals in terms of the size of the smallest cycle(s) of length $>3$ in the complementary graph, c.f. Theorem~\ref{index of graphs}. This result shows that the index gets its smallest value $1$ if and only if $G$ admits a gap, i.e. $\bar{G}$ contains an induced cycle of length $4$. If the index of $I$ attains the largest finite value, we have $\mathrm{index}(I)=\mathrm{pd}(I)$, where $\mathrm{pd}(I)$ denotes the projective dimension of $I$. In this case the ideal $I$ is said to have maximal finite index, see \cite{BHZ}. In \cite[Theorem~4.1]{BHZ}, it was shown that the edge ideal $I(G)$ has maximal finite index if and only if $\bar{G}$ is a cycle of length~$>3$. In this paper, we proceed one more step and consider the edge ideals $I(G)$ with $\mathrm{index}(I(G))=\mathrm{pd}(I(G))-1$. We call them edge ideals with almost maximal finite index. In Section~\ref{classify} of this paper we precisely determine the simple graphs whose edge ideals have this property, see Theorem~\ref{check-out}. These graphs are presented in Figures~\ref{type a}--\ref{type d}. In particular, it is deduced that the property of having almost maximal finite index is characteristic independent for edge ideals, though this is not the case for ideals generated in higher degrees, as discussed in the beginning of Section~\ref{classify}. It is also seen that the graded Betti numbers of these edge ideals do not depend on the characteristic of the base field. 
We will compute the Betti numbers in the non-linear strands in Proposition~\ref{Bettis of almost}. The main tool used throughout this section is Hochster's formula, Formula~(\ref{Hochster}).
In the second half of the paper we study the index of powers of edge ideals with almost maximal finite index. Although, for arbitrary ideals, many properties such as depth, projective dimension or regularity stabilize for large powers (see e.g., \cite{Ba,Ch,Ca,Co, CHT,HH, HHZh1,HHZ1,HW}), their initial behaviour is often quite mysterious. However, edge ideals are more tractable from the beginning. In the study of the index of powers of edge ideals, one of the main results is due to Herzog, Hibi and Zheng \cite[Theorem~3.2]{HHZh1}. They showed that for a graph $G$, all powers of the edge ideal $I(G)$ have a linear resolution if and only if so does $I(G)$. On the other hand, it was shown in \cite[Theorem~3.1]{BHZ} that all powers of $I(G)$ have index $1$ if and only if $I(G)$ also has index $1$. In the same paper it was proved that if $I(G)$ has maximal finite index~$>1$, then $I(G)^s$ has a linear resolution for all $s\geq 2$. This shows that chordality of the complement of $G$ is not a necessary condition on $G$ so that all high powers of its edge ideal have a linear resolution. Francisco, H\`a and Van~Tuyl proved, in a personal communication, that being gap-free is a necessary condition for a graph $G$ in order that a power of its edge ideal has a linear resolution (see also \cite[Proposition~1.8]{NP}). However, Nevo and Peeva showed, by an example, that being gap-free alone is not a sufficient condition so that all high powers of the edge ideal have a linear resolution \cite[Counterexample~1.10]{NP}. Later, Banerjee \cite{Ba}, and Erey \cite{Er, Er1} respectively proved that if a gap-free graph $G$ is also cricket-free or diamond-free or $C_4$-free, then the ideal $I(G)^s$ has a linear resolution for all $s\geq 2$. The definitions of these concepts are recalled in Section~\ref{powers of almost maximal}.
Section~\ref{powers of almost maximal} is devoted to answering the question of whether the high powers of edge ideals with almost maximal finite index have a linear resolution. Not all graphs whose edge ideals have this property are cricket-free or diamond-free. However, using upper bounds for the regularity of powers of edge ideals and of general monomial ideals, given in \cite[Theorem~5.2]{Ba} and \cite[Lemma~2.10]{DHS} respectively, we give a positive answer to this question in case the graphs are gap-free; see Theorem~\ref{powers}. We will prove this theorem in several parts, mainly in Theorem~\ref{main G_a3} and Theorem~\ref{I^k has lin res}.
Theorem~\ref{powers} together with \cite[Theorem~4.1]{BHZ} yield the following consequence which is a partial generalization of the result of Herzog et al. in \cite[Theorem~3.2]{HHZh1}.
\begin{thm}\label{bound}
Let $G$ be a simple gap-free graph and let $I\subset S$ be its edge ideal. Suppose $\mathrm{pd}(I)\!-\!\mathrm{index}(I)\leq\!1$. Then $I^s$ has a linear resolution over all fields for any $s\geq 2$.
\end{thm}
One may ask which is the largest integer $c$ such that Theorem~\ref{bound} remains valid if one replaces $\mathrm{pd}(I) - \mathrm{index}(I) \leq 1$ by $\mathrm{pd}(I) - \mathrm{index}(I) \leq c$. Computation by {\em Macaulay~2}, \cite{M2}, shows that in the example of Nevo and Peeva \cite[Counterexample~1.10]{NP}, $\mathrm{index}(I)=2$, and $\mathrm{pd}(I)=8$. Hence $c$ must be an integer with $1\leq c\leq 5$.
\subsection*{Acknowledgement} Research was supported by a grant from IPM. This work was initiated while the author was resident at MSRI during the Spring 2017 semester and supported by National Science Foundation under Grant No. DMS-1440140. Theorem~\ref{check-out} is a consequence of a question David Eisenbud asked the author. She would like to thank him for the invaluable discussions throughout her postdoctoral fellowship at MSRI. She also extends her gratitude to Rashid Zaare-Nahandi for his comments on this manuscript.
Finally, the author would like to express her appreciation to the anonymous referee for the remarkable comments and useful suggestions which helped to improve the manuscript.
\section{Preliminaries}\label{section 1}
In this section we recall some concepts, definitions and results from Commutative Algebra and Combinatorics which will be used throughout the paper. Let $S=\mathbb{K}[x_1, \ldots, x_n]$ be the polynomial ring over a field $\mathbb{K}$ with $n$ variables, and let $M$ be a finitely generated graded $S$-module. Let the sequence
$$
0\to F_p\to\cdots \to F_2 \to F_1 \to F_0 \to M \to 0
$$
be the minimal graded free resolution of $M$, where for all $i \geq 0$ the modules $F_i = \oplus_j S(-j)^{\beta_{i,j}^{\mathbb{K}}(M)}$ are free $S$-modules of rank $\beta_{i}^{\mathbb{K}}(M):=\sum_{j}\beta_{i,j}^{\mathbb{K}}(M)$.
The numbers $\beta_{i,j}^{\mathbb{K}}(M) = \dim_{\mathbb{K}} \mbox{Tor}^S_i(M, \mathbb{K})_j$ are called the \textit{graded Betti numbers} of $M$ and $\beta_i^{\mathbb{K}}(M)$ is called the $i$-th {\em Betti number} of $M$. We write $\beta_{i,j}(M)$ for $\beta_{i,j}^{\mathbb{K}}(M)$ when the field is fixed.
The {\em projective dimension} of $M$, denoted by $\mathrm{pd}(M)$, is the largest $i$ for which $\beta_{i}(M)\neq 0$. The {\em Castelnuovo-Mumford regularity } of $M$, $\mathrm{reg}(M)$, is defined to be $$\mathrm{reg}(M)=\sup\{j-i:\ \beta_{i,j}(M)\neq 0\}.$$
Let $I$ be a graded ideal of $S$ generated in a single degree $d$.
The {\em Green--Lazarsfeld index} (briefly index) of $I$, denoted by $\mathrm{index}(I)$, is defined to be
$$\mathrm{index}(I)=\inf\{i:\ \beta_{i,j}(I)\neq 0,\ \text{for some } j>i+d\}.$$
Since $\beta_{0,j}(I)=0$ for all $j>d$, one always has $\mathrm{index}(I)\geq 1$.
The ideal $I$ is said to have a {\em $d$-linear resolution} if $\mathrm{index}(I)=\infty$. This means that for all $i$, $\beta_{i}(I)=\beta_{i,i+d}(I)$, and this is the case if and only if $\mathrm{reg}(I)=d$. Otherwise $\mathrm{index}(I)\leq \mathrm{pd}(I)$. In case $I$ has the largest possible finite index, that is $\mathrm{index}(I)=\mathrm{pd}(I)$, $I$ is said to have {\em maximal finite index}.
\medspace
In Section~\ref{classify} of this paper we deal with squarefree monomial ideals generated in degree $2$. These ideals are the edge ideals of simple graphs. Recall that a {\em simple} graph is a graph with no loops and no multiple edges, and given a graph $G$ on the vertex set $[n]:=\{1,\ldots,n\}$, its edge ideal $I(G)\subset S$ is an ideal generated by all quadratics $x_ix_j$, where $\{i,j\}$ is an edge in $G$. We denote by $E(G)$ the set of all edges of $G$, and by $V(G)$ the vertex set of $G$. For a vertex $v\in V(G)$, the neighbourhood $N_G(v)$ of $v$ in $G$ is defined to be
$$N_G(v)=\{u\in V(G):\ \{u,v\}\in E(G) \}.$$
The complement $\bar{G}$ of $G$ is a graph on $V(G)$ whose edges are those pairs of $V(G)$ which do not belong to $E(G)$. The simplicial complex
$$\Delta(G)=\{F\subseteq V(G):\ \text{for all } \{i,j\}\subseteq F \text{ one has } \{i,j\} \in E(G)\}$$
is called the {\em flag complex} of $G$. The {\em{independence complex}} of $G$ is the flag complex of $\bar{G}$. One can check that $I(G)=I_{\Delta(\bar{G})}$, where $I_{\Delta(\bar{G})}$ is the Stanley-Reisner ideal of $\Delta(\bar{G})$.
We assume that the reader is familiar with the definition and elementary properties of simplicial complexes. For more details, consult \cite{HHBook}.
\medspace
The main tool used widely in Section~\ref{classify} for the computation of the graded Betti numbers is Hochster's formula~\cite[Theorem~8.1.1]{HHBook}. Let $\Delta$ be a simplicial complex on $[n]$, and let $\tilde{C}(\Delta, \mathbb{K})$ be the augmented oriented chain complex of $\Delta$ over a field $\mathbb{K}$ with the differentials
\begin{align*}
&\quad\quad\quad\quad\quad\quad \partial_i: \bigoplus_{F\in\Delta\atop \dim F=i}\mathbb{K}F\to \bigoplus_{G\in\Delta\atop \dim G=i-1}\mathbb{K}G,\\
&\partial_i([v_0,\ldots,v_{i}])=\sum_{0\leq j\leq i}(-1)^{j}[v_0,v_1,\ldots, v_{j-1},v_{j+1}, \ldots, v_{i}],
\end{align*}
where by $[v_0,v_1,\ldots,v_{i}]$ we mean the face $\{v_0,v_1,\ldots,v_{i}\}\subseteq [n]$ of $\Delta$ with $v_0<v_1<\cdots<v_i$.
Hochster's formula states that for the Stanley-Reisner ideal $I:=I_{\Delta}\subset S$ one has
\begin{eqnarray}\label{Hochster}
\beta_{i,j}(I)=\sum_{W\subseteq [n],\ |W|=j} \dim_\mathbb{K} \widetilde{H}_{j-i-2}(\Delta_W;\mathbb{K}),
\end{eqnarray}
where $\Delta_W$ is the induced subcomplex of $\Delta$ on $W$ and $\widetilde{H}_i(\Delta_W;\mathbb{K})$ is the $i$-th reduced homology of the complex $\widetilde{C}(\Delta_W, \mathbb{K})$. We denote by $\partial_i^W$ the differentials of the chain complex $\widetilde{C}(\Delta_W, \mathbb{K})$.
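For small complexes, Hochster's formula can be evaluated by brute force. The sketch below is our own illustration (not code from the paper): it computes $\beta_{i,j}(I_\Delta)$ over $\mathbb{Q}$ by summing dimensions of reduced homologies of induced subcomplexes, with ranks obtained by exact Gaussian elimination. The test case is the $4$-cycle, i.e.\ the independence complex of the graph with edges $\{1,3\}$ and $\{2,4\}$, whose edge ideal is $(x_1x_3,x_2x_4)$:

```python
from fractions import Fraction
from itertools import combinations

def _rank(mat):
    # rank over the rationals via Gaussian elimination
    m = [[Fraction(x) for x in row] for row in mat]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def _faces(facets):
    # all faces of the complex, including the empty face
    fs = set()
    for F in facets:
        for r in range(len(F) + 1):
            fs.update(frozenset(s) for s in combinations(F, r))
    return fs

def _bd_rank(faces, k):
    # rank of the boundary map from k-faces to (k-1)-faces (k = 0 hits the empty face)
    dom = [sorted(f) for f in faces if len(f) == k + 1]
    cod = {f: idx for idx, f in enumerate(f for f in faces if len(f) == k)}
    if not dom or not cod:
        return 0
    mat = [[0] * len(dom) for _ in cod]
    for j, F in enumerate(dom):
        for s, v in enumerate(F):
            mat[cod[frozenset(F) - {v}]][j] = (-1) ** s
    return _rank(mat)

def _reduced_homology_dim(faces, k):
    nk = sum(1 for f in faces if len(f) == k + 1)
    return nk - _bd_rank(faces, k) - _bd_rank(faces, k + 1)

def betti(facets, i, j, vertices):
    # Hochster: beta_{i,j}(I_Delta) = sum over |W| = j of dim H~_{j-i-2}(Delta_W)
    assert j >= i + 2
    faces = _faces(facets)
    return sum(
        _reduced_homology_dim({f for f in faces if f <= frozenset(W)}, j - i - 2)
        for W in combinations(vertices, j)
    )
```

For the $4$-cycle with facets $\{1,2\},\{2,3\},\{3,4\},\{1,4\}$, the two generators give $\beta_{0,2}=2$, and the hollow square contributes the single non-linear Betti number $\beta_{1,4}=1$.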
\medspace
Theorem~\ref{index of graphs}, which is due to Eisenbud et al. \cite{EGHP}, provides a combinatorial method for determining the index of the edge ideal of a graph. To this end, one needs to consider the length of the minimal cycles of the complementary graph. A minimal cycle is an induced cycle of length~$>3$, and by an induced cycle we mean a cycle with no chord. The length of an induced cycle $C$ is denoted by $|C|$.
\begin{thm}[{\cite[Theorem~2.1]{EGHP}}]\label{index of graphs}
Let $I(G)$ be the edge ideal of a simple graph $G$. Then
$$\mathrm{index}(I(G))=\inf\{|C|: \ C \text{ is a minimal cycle} \text{ in } \bar{G}\}-3.$$
\end{thm}
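The theorem translates directly into a brute-force procedure for small graphs: form the complement, search for induced cycles of length $>3$, and subtract $3$ from the shortest length found. The following sketch is our own illustration (an induced subgraph is a cycle if and only if it is connected and $2$-regular):

```python
from itertools import combinations

def index_of_edge_ideal(vertices, edges):
    """Green--Lazarsfeld index of I(G): (length of a shortest minimal cycle in the
    complement) - 3, or infinity if the complement is chordal (linear resolution)."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # adjacency of the complement graph
    cadj = {v: set(vertices) - {v} - adj[v] for v in vertices}

    def induced_cycle(W):
        W = set(W)
        if any(len(cadj[v] & W) != 2 for v in W):
            return False
        seen, stack = set(), [next(iter(W))]
        while stack:
            v = stack.pop()
            if v not in seen:
                seen.add(v)
                stack.extend((cadj[v] & W) - seen)
        return seen == W

    lengths = [r for r in range(4, len(vertices) + 1)
               if any(induced_cycle(W) for W in combinations(vertices, r))]
    return lengths[0] - 3 if lengths else float("inf")
```

For example, for $G=C_5$ (whose complement is again a $5$-cycle) the procedure returns index $2$; for two disjoint edges (a gap) it returns $1$; and for the path on four vertices, whose complement is chordal, it returns infinity.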
\section{Edge ideals with almost maximal finite index}\label{classify}
A graded ideal $I\subset S$ is said to have {\em almost maximal finite index} over $\mathbb{K}$ if $\mathrm{index}(I)=\mathrm{pd}(I)-1$. Since, in general, $\mathrm{pd}(I)$ and $\mathrm{index}(I)$ depend on the characteristic of the base field, the property of having almost maximal finite index may also be characteristic dependent. For example, setting $\Delta$ to be a triangulation of a real projective plane, the Stanley-Reisner ideal of $\Delta$ is generated in degree $3$ and it has almost maximal finite index over all fields of characteristic $2$, while it has a linear resolution over other fields (cf. \cite[\S 5.3]{BHBook}). However, as we will see in Corollary~\ref{final note}, in the case of quadratic monomial ideals, having almost maximal finite index is characteristic independent. Note that, although by Theorem~\ref{index of graphs}, the index of an arbitrary edge ideal does not depend on the base field, its projective dimension may depend. M.~Katzman presents a graph in \cite[Section~4]{Ka} whose edge ideal has different projective dimensions over different fields.
\medspace
In this section, we give a classification of the graphs whose edge ideals have almost maximal finite index.
We will present this classification in Theorem~\ref{check-out}, but first we need some intermediate steps which give more insight into the complement of such graphs.
Unless otherwise stated, throughout this section, $G$ is a simple graph on the vertex set $[n]$ and $\bar{G}$ is its complement, $\Delta$ denotes the independence complex $\Delta(\bar{G})$, and $\partial$, $\partial^W$ denote respectively the differentials of the augmented oriented chain complexes of $\Delta(\bar{G})$, $\Delta(\bar{G})_W$ over a fixed field $\mathbb{K}$.
\medspace
First, in order to avoid repetition of some arguments, we gather in the following observation some facts which will be used frequently in the sequel; meanwhile, we also fix some notation.
\begin{remno}\label{connected induced graphs}\hfill\par\rm
Let $G$ be a simple graph on the vertex set $[n]$ and let $I:=I(G)\subset S$ be its edge ideal.
{\bf (O-1)} The graph $G$ is connected if and only if its flag complex $\Delta(G)$ is connected. On the other hand, for an arbitrary simplicial complex $\Gamma$ and any field $\mathbb{K}$, $$\dim_\mathbb{K}\widetilde{H}_0(\Gamma;\mathbb{K})=(\text{number of connected components of }\Gamma)-1,$$ see \cite[Problem~8.2]{HHBook}.
Moreover, for any subset $W\subseteq [n]$ one has $\Delta(G_W)=\Delta(G)_W$, where $G_W$ is the induced subgraph of $G$ on the vertex set $W$.
It follows that $G_W$ is connected if and only if $\widetilde{H}_0(\Delta(G)_W;\mathbb{K})=0$. Now if $\beta_{i,i+2}(I)=0$ for some $i$, then by Hochster's formula
$\widetilde{H}_{0}(\Delta_W;\mathbb{K})=0$ and hence $\bar{G}_W$ is connected for all $W\subseteq [n]$ with $|W|=i+2$.
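The count in (O-1) can be verified numerically: over $GF(2)$, $\dim\widetilde{H}_0$ of a graph (viewed as a $1$-dimensional complex) equals $|V|-1-\mathrm{rank}\,\partial_1$, which coincides with the number of components minus one. A small sketch (ours; names are hypothetical):

```python
def gf2_rank(rows):
    """Rank over GF(2) of a matrix whose rows are given as int bitmasks."""
    pivots = {}
    for r in rows:
        while r:
            lead = r.bit_length() - 1
            if lead in pivots:
                r ^= pivots[lead]   # cancel the leading bit
            else:
                pivots[lead] = r
                break
    return len(pivots)

def components(n, edges):
    """Number of connected components of a graph on vertices 1..n."""
    parent = list(range(n + 1))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for a, b in edges:
        parent[find(a)] = find(b)
    return len({find(v) for v in range(1, n + 1)})

def h0_reduced_gf2(n, edges):
    """dim H~_0 over GF(2) = dim C_0 - rank(augmentation) - rank(d_1)
    = n - 1 - rank(d_1), for a graph as a 1-dimensional complex."""
    d1 = [(1 << (a - 1)) | (1 << (b - 1)) for a, b in edges]
    return n - 1 - gf2_rank(d1)

# H~_0 counts components minus one, as (O-1) states
for n, edges in [(5, [(1, 2), (2, 3)]),                  # path + 2 isolated points
                 (4, [(1, 2), (2, 3), (3, 4), (1, 4)])]:  # a 4-cycle
    assert h0_reduced_gf2(n, edges) == components(n, edges) - 1
```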
\medskip
{\bf (O-2)} Throughout, by $P=u_1-u_2-\cdots-u_r$ in $G$ we mean a path in $G$ on $r$ distinct vertices with the set of edges $\bigcup_{1\leq i\leq r-1}\{\{u_i,u_{i+1}\}\}$. If, in addition, $\{u_1,u_r\}\in E(G)$, then
$C=u_1-u_2-\cdots-u_{r}-u_1$ is a cycle in $G$. Then
\begin{align}\label{cycle kernel}
T(C):=(\sum_{1\leq i\leq r-1}[u_i,u_{i+1}])-[u_1,u_{r}]\in \ker \partial^{\Delta(G)}_1,
\end{align}
where $\partial^{\Delta(G)}$ denotes the differentials of the chain complex of $\Delta(G)$. It is shown in \cite[Theorem~3.2]{Co} that $\widetilde{H}_1(\Delta(G); \mathbb{K})\neq 0$ if and only if there exists a minimal cycle $C$ in $G$ such that $T(C)\notin \mathrm{Im\ } \partial_2^{\Delta(G)}$. Indeed, it is proved that $\widetilde{H}_1(\Delta(G); \mathbb{K})$ is minimally generated by the nonzero homology classes $T(C)+\mathrm{Im\ } \partial_2^{\Delta(G)}$, where $C$ is a minimal cycle in $G$.
If $C$ is the base of a cone whose apex is the vertex $u_{r+1}$, then
$$T(C)=\partial_{2}^{\Delta(G)}((\sum_{1\leq i\leq r-1}[u_{r+1},u_i,u_{i+1}])-[u_{r+1},u_1,u_{r}])$$
which implies that $T(C)+\mathrm{Im\ }\partial_2^{\Delta(G)}=0$. Recall that an $r$-gonal {\em cone} with the apex $a$ is a graph $G'$ with the vertex set $V(G')=V(C)\cup\{a\}$, where $a\notin V(C)$ and $C$ is an $r$-cycle in $G'$ which is called the base of $G'$, and $E(G')=E(C)\cup\{\{a,u_i\}: u_i\in V(C)\}$.
\medskip
{\bf (O-3)} Now let $D$ be an $r$-gonal dipyramid in $G$; that is, a subgraph of $G$ with the vertex set $V(D)=V(C)\cup\{a,b\}$ and $E(D)= \bigcup_{1\leq i\leq r}\{\{a,u_i\},\{b,u_i\}\} \cup E(C)$, where $C$ is an $r$-cycle as above, called the {\em waist} of $D$. Then
\begin{align}\label{dipyramid kernel}
T(D):=(\sum_{1\leq i\leq r-1}[a,u_i,u_{i+1}]-[b,u_i,u_{i+1}])-[a,u_1,u_{r}]+[b,u_1,u_{r}]\in \ker \partial_2^{\Delta(G)}.
\end{align}
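The kernel memberships (\ref{cycle kernel}) and (\ref{dipyramid kernel}), as well as the cone filling in (O-2), can be checked mechanically. The following sketch (ours; not part of the text) does so for a $4$-cycle $C=1-2-3-4-1$ with apexes $5$ and $6$, using a signed simplicial boundary on sorted tuples:

```python
from collections import defaultdict

def boundary(chain):
    """Simplicial boundary of an integer chain {sorted tuple: coefficient}."""
    out = defaultdict(int)
    for simplex, c in chain.items():
        for i in range(len(simplex)):
            out[simplex[:i] + simplex[i + 1:]] += (-1) ** i * c
    return {f: c for f, c in out.items() if c}

def oriented(*verts):
    """Sign and sorted tuple of an oriented simplex [v_0, ..., v_k]."""
    sign, vs = 1, list(verts)
    for i in range(len(vs)):            # bubble sort, tracking the sign
        for j in range(len(vs) - 1 - i):
            if vs[j] > vs[j + 1]:
                vs[j], vs[j + 1] = vs[j + 1], vs[j]
                sign = -sign
    return sign, tuple(vs)

def add(chain, coeff, *verts):
    s, f = oriented(*verts)
    chain[f] = chain.get(f, 0) + coeff * s
    return chain

# T(C) for the 4-cycle C = 1-2-3-4-1, as in (cycle kernel)
TC = {}
for i in (1, 2, 3):
    add(TC, 1, i, i + 1)
add(TC, -1, 1, 4)
assert boundary(TC) == {}               # T(C) lies in ker d_1

# cone over C with apex 5: T(C) becomes a boundary, as in (O-2)
L = {}
for i in (1, 2, 3):
    add(L, 1, 5, i, i + 1)
add(L, -1, 5, 1, 4)
assert boundary(L) == TC

# T(D) for the dipyramid with waist C and apexes 5, 6, as in (O-3)
TD = {}
for i in (1, 2, 3):
    add(TD, 1, 5, i, i + 1)
    add(TD, -1, 6, i, i + 1)
add(TD, -1, 5, 1, 4)
add(TD, 1, 6, 1, 4)
assert boundary(TD) == {}               # T(D) lies in ker d_2
```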
\par {\bf (O-4)} Suppose $\mathrm{index}(I)=t$. By Theorem~\ref{index of graphs}, $\bar{G}$ contains a minimal cycle $C=u_1-u_2-\cdots-u_{t+3}-u_1$ which has the smallest length among all minimal cycles of $\bar{G}$.
\begin{itemize}
\item[$(i)$] If $\beta_{t+1,t+4}(I)=0$, then $\widetilde{H}_{1}(\Delta_W;\mathbb{K})=0$ for all $W\subseteq [n]$ with $|W|=t+4$.
Set $W=\{u_{t+4}\}\cup V(C)$ for an arbitrary vertex $u_{t+4}\in [n]\setminus V(C)$. Then $C$ is a minimal cycle in $\bar{G}_W$ and $T(C)\in\ker\partial_1^{W}$ implies that $T(C)\in\mathrm{Im\ }\partial_2^{W}$.
It follows that each edge $e$ of $C$ is contained in a $2$-face $F_e$ of ${\Delta_W}$. Since $C$ is minimal, we must have $F_e=e\cup\{u_{t+4}\}$ which means that $u_{t+4}$ is adjacent to all vertices of $C$ in $\bar{G}$ and hence $\bar{G}_W$ is a cone.
\item[$(ii)$] If $\beta_{t+2,t+5}(I)=0$, then $\widetilde{H}_{1}(\Delta_W;\mathbb{K})=0$ for all $W\subseteq [n]$ with $|W|=t+5$. Set $W=\{u_{t+4}, u_{t+5}\}\cup V(C)$ for arbitrary vertices $u_{t+4}, u_{t+5}\in [n]\setminus V(C)$.
As in (i), $T(C)=\partial_2^{W}(L)$ for some $L\in \bigoplus_{F\in \Delta_W\atop{\dim F=2}} \mathbb{K}F$, and hence each edge of $C$ is contained in a $2$-face of $\Delta_W$.
It follows that for each edge $e$ of $C$ either $\{u_{t+4}\}\cup e\in\Delta_W$ or $\{u_{t+5}\}\cup e\in \Delta_W$. If $\{u_{t+4}\}\cup e\in\Delta_W$ for all $e\in E(C)$, then $\Delta_W$ contains a cone; the same holds with $u_{t+5}$ in place of $u_{t+4}$. Suppose $\{u_{t+4}\}\cup e, \{u_{t+5}\}\cup e' \notin\Delta_W$ for some $e, e'\in E(C)$, which implies that $u_{t+4}, u_{t+5}$ are not adjacent to all vertices of $C$ in $\bar{G}$.
Without loss of generality suppose $\{u_{t+5},u_1,u_2\}\notin \Delta_W$. It follows that $\{u_{t+4}\}\cup\{u_1,u_2\}\in \Delta_W$. If $\{u_1,u_2\}$ is the only edge $e$ of $C$ with $\{u_{t+4}\}\cup e\in\Delta_W$, then for all $e'\in E(C)$ with $e'\neq \{u_1,u_2\}$ one has $\{u_{t+5}\}\cup e'\in \Delta_W$. In particular, $\{u_1,u_{t+3}, u_{t+5}\},\{u_2,u_3,u_{t+5}\}\in \Delta_W$ which implies by the definition of $\Delta_W=\Delta(\bar{G}_W)$ that $\{u_{t+5},u_1,u_2\}\in \Delta_W$, a contradiction. Since
$u_{t+4}$ is not adjacent to all vertices of $C$ in $\bar{G}$, and since $\{u_{t+4},u_1\},\{u_{t+4},u_2\}\in E(\bar{G})$, it follows that there exists $3\leq j\leq t+3$ such that $\{u_j, u_{t+4}\}\notin E(\bar{G})$. Let $a, b$ be respectively the largest and the smallest integers with $2\leq a<j<b\leq t+3$ for which $\{u_a,u_{t+4}\},\{u_b,u_{t+4}\}\in E(\bar{G})$. If such a $b$ does not exist, we let $b=1$; this implies that $a\neq 2$, because otherwise $\{u_{t+5}\} \cup e' \in \Delta_W$ for all $e' \in E(C) \setminus \{\{u_1, u_2\}\}$, so $u_{t+5}$ would be adjacent to all vertices of $C$ in $\bar{G}$.
Now if $b\neq 1$, then $C':=u_{t+4}-u_a-u_{a+1}-\cdots-u_{b}-u_{t+4}$ is a minimal cycle in $\bar{G}_W$ of length $b-a+2$, and if $b=1$ then $C':=u_{t+4}-u_a-u_{a+1}-\cdots-u_{t+3}-u_{1}-u_{t+4}$ is a minimal cycle of length $t+6-a$. Since $\mathrm{index}(I)=t$, we must have $|C'|\geq t+3$ in either case, and so $\{a,b\}=\{1,3\}$ if $b=1$, and $\{a,b\}=\{2,t+3\}$ if $b\neq 1$. In both cases the vertex $u_{t+4}$ is adjacent to only three successive vertices of $C$ in $\bar{G}$. Without loss of generality we may assume that $u_1,u_2,u_3$ are these three vertices. Thus $u_{t+4}$ is adjacent to only the two edges $\{u_1,u_2\}, \{u_2,u_3\}$ of $C$ in $\bar{G}$, and hence $\{u_1,u_3,u_4,\ldots, u_{t+3}\}\subseteq N_{\bar{G}}(u_{t+5})$. It follows that $\{u_{t+5}, u_2\}\notin E(\bar{G})$, because $u_{t+5}$ is not adjacent to all vertices of $C$ in $\bar{G}$. Thus we get the minimal $4$-cycle $C'':=u_{t+5}-u_1-u_2-u_3-u_{t+5}$. It follows that $t=1$ because $\mathrm{index}(I)=t$. Therefore $|C|=4$ and the only $2$-faces of $\Delta_W$ containing an edge of $C$ are $\{u_1,u_2,u_5\}, \{u_2,u_3,u_5\}, \{u_3,u_4,u_6\}, \{u_1,u_4,u_6\}$. But no linear combination of these faces yields an $L$ with $\partial_{2}^{W}(L)=T(C)$; we need more $2$-faces in $\Delta_W$. It follows that $\{u_i, u_5,u_6\}\in \Delta_W$ for some $1\leq i\leq 4$.
In particular, $\{u_5,u_6\}\in E(\bar{G})$. This forms a graph $\bar{G}_W$ which is drawn as the graph $G_{(d)_2}$ in Figure~\ref{type d}.
\item[$(iii)$ ]If $\beta_{t+2,t+6}(I)=0$, then $\widetilde{H}_{2}(\Delta(\bar{G})_W;\mathbb{K})=0$ for all $W\subseteq [n]$ with $|W|=t+6$. Suppose $\bar{G}$ contains a dipyramid $D$ with the vertex set $\{u_{t+4}, u_{t+5}\}\cup V(C)$, where the waist $C$ is a minimal cycle of length $t+3$.
Set $W=\{u_{t+4}, u_{t+5},u_{t+6}\}\cup V(C)$ for arbitrary vertex $ u_{t+6}\in [n]\setminus (V(C)\cup \{u_{t+4}, u_{t+5}\})$. Then by (O-3) one has $T(D)\in \ker\partial_2^{W}$ and hence $T(D)\in \mathrm{Im\ }\partial_3^W$. This implies that each $2$-face of $D$ is contained in a $3$-face of ${\Delta_W}$. Since $C$ is minimal, it follows that either $\{u_{t+4},u_{t+5}\}\in E(\bar{G})$ or $\{u_{t+4}, u_{t+5}\}\cup V(C)\subseteq N_{\bar{G}}(u_{t+6})$.
\end{itemize}
\end{remno}
\begin{ex}\label{mesal}
Here we give seven types of graphs $G$ whose edge ideal $I:=I(G)$ has almost maximal finite index over all fields. Indeed, we present the complementary graphs $\bar{G}$ for which $\mathrm{pd}(I)=\mathrm{index}(I)+1$. Take $t = 1$ in cases (c) and (d) below. Since the smallest minimal cycles in the following graphs $\bar{G}$ have length $t+3\geq 4$, Theorem~\ref{index of graphs} gives $\mathrm{index}(I)=t$. We show that $\mathrm{pd}(I)=t+1$. Note that, as is also clear from Hochster's formula, $\beta_{i,j}(I)=0$ for all $j<i+2$; hence, in order to show that $\mathrm{pd}(I)=t+1$, it is enough to prove that $\beta_{t+1,j}(I)\neq 0$ for some $j\geq t+3$ and $\beta_{t+2,j}(I)= 0$ for all $t+4\leq j\leq n$.
The argument below is independent of the choice of the base field.
\medspace
\indent (a) Let $\bar{G}$ be either of the graphs $G_{(a)_1}, G_{(a)_2}, G_{(a)_3}$ shown in Figure~\ref{type a} with $t\geq 1$. The two graphs $G_{(a)_1}, G_{(a)_2}$ have one minimal cycle $C=1-2-\cdots-(t+3)-1$, and the graph $G_{(a)_3}$ has two minimal cycles, $C$ and $C'=1-(t+4)-3-4-\cdots-(t+3)-1$.
Setting $W=[t+4]$, we have $T(C)\in \ker \partial_1^W$ by (O-2). Since $t>0$, there are edges of $C$ in all three graphs which are not contained in a $2$-face of $\Delta_W$. In particular, $T(C)\notin \mathrm{Im\ }\partial_2^W$. Hence $\widetilde{H}_1(\Delta_W;\mathbb{K})\neq 0$ which implies that $\beta_{t+1,t+4}(I)\neq 0$. Thus $\mathrm{pd}(I)\geq t+1$. If $\beta_{t+2,j}(I)\neq 0$ for some $j$, then there exists $W\subseteq [t+4]$ with $|W|=j$ such that $\widetilde{H}_{|W|-t-4}(\Delta_W;\mathbb{K})\neq 0$. It then follows that $W=[t+4]$ and $\widetilde{H}_0(\Delta_W;\mathbb{K})\neq 0$. But ${\bar{G}}_W=\bar{G}$ is connected meaning that $\Delta_W$ is connected, by (O-1). Hence $\widetilde{H}_0(\Delta_W;\mathbb{K})=0$, a contradiction. Therefore $\mathrm{pd}(I)=t+1$.
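The Betti numbers used in this argument can be computed directly from Hochster's formula. The sketch below (ours; over $GF(2)$, which suffices here since the argument is characteristic independent) does this for $\bar{G}=G_{(a)_1}$ with $t=1$, i.e. the $4$-cycle $1-2-3-4-1$ plus the edge $\{1,5\}$:

```python
from itertools import combinations

def gf2_rank(rows):
    """Rank over GF(2); rows are int bitmasks."""
    pivots = {}
    for r in rows:
        while r:
            lead = r.bit_length() - 1
            if lead in pivots:
                r ^= pivots[lead]
            else:
                pivots[lead] = r
                break
    return len(pivots)

def reduced_betti(faces):
    """Reduced Betti numbers over GF(2) of a nonempty simplicial complex."""
    by_dim = {}
    for f in faces:
        by_dim.setdefault(len(f) - 1, []).append(tuple(sorted(f)))
    top = max(by_dim)
    idx = {d: {f: i for i, f in enumerate(fs)} for d, fs in by_dim.items()}
    rk = {0: 1, top + 1: 0}                 # rank 1: the augmentation map
    for d in range(1, top + 1):
        rows = []
        for f in by_dim[d]:
            m = 0
            for i in range(len(f)):
                m |= 1 << idx[d - 1][f[:i] + f[i + 1:]]
            rows.append(m)
        rk[d] = gf2_rank(rows)
    return {d: len(by_dim[d]) - rk[d] - rk[d + 1] for d in range(top + 1)}

def graded_betti(n, gbar_edges):
    """beta_{i,j}(I(G)) over GF(2) via Hochster's formula; the input is
    the complement graph Gbar, whose flag complex is Delta(Gbar)."""
    E = {frozenset(e) for e in gbar_edges}
    beta = {}
    for j in range(2, n + 1):
        for W in combinations(range(1, n + 1), j):
            faces = [f for r in range(1, j + 1) for f in combinations(W, r)
                     if all(frozenset(p) in E for p in combinations(f, 2))]
            for d, h in reduced_betti(faces).items():
                if h and j - d - 2 >= 0:
                    beta[(j - d - 2, j)] = beta.get((j - d - 2, j), 0) + h
    return beta

beta = graded_betti(5, [(1, 2), (2, 3), (3, 4), (1, 4), (1, 5)])
pd = max(i for i, j in beta)
index = min(i for i, j in beta if j > i + 2)   # first nonlinear syzygy step
assert (index, pd) == (1, 2)                   # pd(I) = index(I) + 1
```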
\begin{figure}[ht!]
\begin{center}
\hspace*{-2cm}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=0.4cm,y=0.4cm]
\clip(54.5,3.2) rectangle (68.5,14);
\draw [color=black] (64.,10.)-- (60.04207935839705,9.202683366710461);
\draw (59.2,9.98) node[anchor=north west] {\begin{scriptsize}1\end{scriptsize}};
\draw (62.6,6) node[anchor=north west] {\begin{scriptsize}3\end{scriptsize}};
\draw (60.7,7.2) node[anchor=north west] {\begin{scriptsize}2\end{scriptsize}};
\draw (63.2,11.2) node[anchor=north west] {\begin{scriptsize}t+4\end{scriptsize}};
\draw (58.6,12.4) node[anchor=north west] {\begin{scriptsize}t+3\end{scriptsize}};
\draw (62.5,4.5) node[anchor=north west] {$G_{(a)_1}$};
\draw [shift={(64.,10.)},line width=0.4pt] plot[domain=5.257759132311878:5.982897107046088,variable=\t]({1.*4.037431066773457*cos(\t r)+0.*4.037431066773457*sin(\t r)},{0.*4.037431066773457*cos(\t r)+1.*4.037431066773457*sin(\t r)});
\draw [shift={(64.,10.)},color=black] plot[domain=0.21715133778930423:5.982897107046091,variable=\t]({1.*4.037431066773395*cos(\t r)+0.*4.037431066773395*sin(\t r)},{0.*4.037431066773395*cos(\t r)+1.*4.037431066773395*sin(\t r)});
\begin{scriptsize}
\draw [fill=black] (64.,10.) circle (1.5pt);
\draw [fill=black] (64.,10.) circle (1.5pt);
\draw [fill=black] (60.39320973638185,11.814363142597488) circle (1.5pt);
\draw [fill=black] (60.04207935839705,9.202683366710461) circle (1.5pt);
\draw [fill=black] (61.10367540176967,7.187144966296193) circle (1.5pt);
\draw [fill=black] (68.,10.) circle (0.3pt);
\draw [fill=black] (67.98,9.76) circle (0.3pt);
\draw [fill=black] (63,6.1) circle (1.5pt);
\draw [fill=black] (67.96341651367774,9.465941589917382) circle (0.3pt);
\end{scriptsize}
\end{tikzpicture}
\hspace*{-1.4cm}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=0.4cm,y=0.4cm]
\clip(54.5,3.2) rectangle (68.5,14);
\draw [color=black] (64.,10.)-- (60.04207935839705,9.202683366710461);
\draw [color=black] (64.,10.)-- (61.10367540176967,7.187144966296193);
\draw (59.2,9.98) node[anchor=north west] {\begin{scriptsize}1\end{scriptsize}};
\draw (60.7,7.2) node[anchor=north west] {\begin{scriptsize}2\end{scriptsize}};
\draw (62.6,6) node[anchor=north west] {\begin{scriptsize}3\end{scriptsize}};
\draw (63.2,11.2) node[anchor=north west] {\begin{scriptsize}t+4\end{scriptsize}};
\draw (58.6,12.4) node[anchor=north west] {\begin{scriptsize}t+3\end{scriptsize}};
\draw (62.5,4.5) node[anchor=north west] {$G_{(a)_2}$};
\draw [shift={(64.,10.)},line width=0.4pt] plot[domain=5.257759132311878:5.982897107046088,variable=\t]({1.*4.037431066773457*cos(\t r)+0.*4.037431066773457*sin(\t r)},{0.*4.037431066773457*cos(\t r)+1.*4.037431066773457*sin(\t r)});
\draw [shift={(64.,10.)},color=black] plot[domain=0.21715133778930423:5.982897107046091,variable=\t]({1.*4.037431066773395*cos(\t r)+0.*4.037431066773395*sin(\t r)},{0.*4.037431066773395*cos(\t r)+1.*4.037431066773395*sin(\t r)});
\begin{scriptsize}
\draw [fill=black] (64.,10.) circle (1.5pt);
\draw [fill=black] (64.,10.) circle (1.5pt);
\draw [fill=black] (60.39320973638185,11.814363142597488) circle (1.5pt);
\draw [fill=black] (60.04207935839705,9.202683366710461) circle (1.5pt);
\draw [fill=black] (61.10367540176967,7.187144966296193) circle (1.5pt);
\draw [fill=black] (68.,10.) circle (0.3pt);
\draw [fill=black] (67.98,9.76) circle (0.3pt);
\draw [fill=black] (63,6.1) circle (1.5pt);
\draw [fill=black] (67.96341651367774,9.465941589917382) circle (0.3pt);
\end{scriptsize}
\end{tikzpicture}
\hspace*{-1.4cm}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=0.4cm,y=0.4cm]
\clip(54.5,3.2) rectangle (68.5,14);
\draw [color=black] (64.,10.)-- (60.04207935839705,9.202683366710461);
\draw [color=black] (64.,10.)-- (61.10367540176967,7.187144966296193);
\draw [color=black] (64.,10.)-- (63,6.1);
\draw (64.6,6) node[anchor=north west] {\begin{scriptsize}4\end{scriptsize}};
\draw (62.6,6) node[anchor=north west] {\begin{scriptsize}3\end{scriptsize}};
\draw (59.2,9.98) node[anchor=north west] {\begin{scriptsize}1\end{scriptsize}};
\draw (60.7,7.2) node[anchor=north west] {\begin{scriptsize}2\end{scriptsize}};
\draw (63.2,11.2) node[anchor=north west] {\begin{scriptsize}t+4\end{scriptsize}};
\draw (58.6,12.4) node[anchor=north west] {\begin{scriptsize}t+3\end{scriptsize}};
\draw (62.5,4.5) node[anchor=north west] {$G_{(a)_3}$};
\draw [shift={(64.,10.)},line width=0.4pt] plot[domain=5.257759132311878:5.982897107046088,variable=\t]({1.*4.037431066773457*cos(\t r)+0.*4.037431066773457*sin(\t r)},{0.*4.037431066773457*cos(\t r)+1.*4.037431066773457*sin(\t r)});
\draw [shift={(64.,10.)},color=black] plot[domain=0.21715133778930423:5.982897107046091,variable=\t]({1.*4.037431066773395*cos(\t r)+0.*4.037431066773395*sin(\t r)},{0.*4.037431066773395*cos(\t r)+1.*4.037431066773395*sin(\t r)});
\begin{scriptsize}
\draw [fill=black] (65,6.1) circle (1.5pt);
\draw [fill=black] (63,6.1) circle (1.5pt);
\draw [fill=black] (64.,10.) circle (1.5pt);
\draw [fill=black] (60.39320973638185,11.814363142597488) circle (1.5pt);
\draw [fill=black] (60.04207935839705,9.202683366710461) circle (1.5pt);
\draw [fill=black] (61.10367540176967,7.187144966296193) circle (1.5pt);
\draw [fill=black] (68.,10.) circle (0.3pt);
\draw [fill=black] (67.98,9.76) circle (0.3pt);
\draw [fill=black] (67.96341651367774,9.465941589917382) circle (0.3pt);
\end{scriptsize}
\end{tikzpicture}
\end{center}
\vspace*{-.2cm} \caption{The graphs $G_{(a)_i}$}\label{type a}
\end{figure}
\indent (b) Let $\bar{G}$ be the graph $G_{(b)}$ in Figure~\ref{type b}, where $t\geq 1$ and $\{i,t+4\}\in E({\bar{G}})$ for all $i\in[t+3]$. Then $\bar{G}$ has one minimal cycle $C=1-2-\cdots-(t+3)-1$ as in (a). Setting $W=[t+3]\cup\{t+5\}$, we have $T(C)\in \ker \partial_1^W\setminus \mathrm{Im\ } \partial_2^W$. It follows that $\beta_{t+1,t+4}(I)\neq 0$, and therefore $\mathrm{pd}(I)\geq t+1$. For any $W\subseteq [t+5]$ with $|W|=t+4$, the graph $\bar{G}_W$ is connected, so $\beta_{t+2,t+4}(I)=0$. Now suppose $W=[t+5]$. Then $T(C)\in \ker \partial_1^W$, but also $T(C)\in \mathrm{Im\ } \partial_2^W$, because $C$ is the base of a cone with apex $t+4$. Hence, according to (O-2),
$\beta_{t+2,t+5}(I)=0$. It follows that $\mathrm{pd}(I)=t+1$.
\begin{figure}[ht!]
\begin{center}
\hspace*{-1.8cm}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=0.4cm,y=0.4cm]
\clip(54.5,5.8) rectangle (68.5,14.1);
\draw [color=black] (64.,10.)-- (60.04207935839705,9.202683366710461);
\draw [color=black] (64.,10.)-- (61.10367540176967,7.187144966296193);
\draw (59.2,9.98) node[anchor=north west] {\begin{scriptsize}1\end{scriptsize}};
\draw (60.7,7.2) node[anchor=north west] {\begin{scriptsize}2\end{scriptsize}};
\draw (63.2,11.2) node[anchor=north west] {\begin{scriptsize}t+4\end{scriptsize}};
\draw (58.6,12.4) node[anchor=north west] {\begin{scriptsize}t+3\end{scriptsize}};
\draw [shift={(64.,10.)},line width=0.4pt] plot[domain=5.257759132311878:5.982897107046088,variable=\t]({1.*4.037431066773457*cos(\t r)+0.*4.037431066773457*sin(\t r)},{0.*4.037431066773457*cos(\t r)+1.*4.037431066773457*sin(\t r)});
\draw [color=black] (64.,10.)-- (60.39320973638185,11.814363142597488);
\draw [color=black] (56.47518012335943,6.228129663627268)-- (60.04207935839705,9.202683366710461);
\draw [color=black] (56.47518012335943,6.228129663627268)-- (61.10367540176967,7.187144966296193);
\draw (54.7,6.7) node[anchor=north west] {\begin{scriptsize}t+5\end{scriptsize}};;
\draw [shift={(64.,10.)},color=black] plot[domain=0.21715133778930423:5.982897107046091,variable=\t]({1.*4.037431066773395*cos(\t r)+0.*4.037431066773395*sin(\t r)},{0.*4.037431066773395*cos(\t r)+1.*4.037431066773395*sin(\t r)});
\draw [color=black] (64.,10.)-- (65.63698868515817,9.130168501265073);
\draw [color=black] (64.,10.)-- (65.66097247720477,10.689114984293646);
\begin{scriptsize}
\draw [fill=black] (64.,10.) circle (1.5pt);
\draw [fill=black] (60.39320973638185,11.814363142597488) circle (1.5pt);
\draw [fill=black] (60.04207935839705,9.202683366710461) circle (1.5pt);
\draw [fill=black] (61.10367540176967,7.187144966296193) circle (1.5pt);
\draw [fill=black] (56.47518012335943,6.228129663627268) circle (1.5pt);
\draw [fill=black] (68.,10.) circle (0.3pt);
\draw [fill=black] (67.98,9.76) circle (0.3pt);
\draw [fill=black] (67.96341651367774,9.465941589917382) circle (0.3pt);
\draw [fill=black] (65.46910214083202,10.137487767221996) circle (0.3pt);
\draw [fill=black] (65.47,9.93) circle (0.3pt);
\draw [fill=black] (65.44511834878543,9.729763302429909) circle (0.3pt);
\end{scriptsize}
\end{tikzpicture}
\end{center}
\vspace*{-.2cm} \caption{The graph $G_{(b)}$}\label{type b}
\end{figure}
\indent (c) Let $\bar{G}$ be the graph $G_{(c)}$ shown in Figure~\ref{type c}. This graph contains three minimal cycles of length $4$, so $\mathrm{index}(I)=1$. Moreover, $\beta_{2,5}(I)\neq 0$ because
$T(C)\in \ker \partial_1\setminus \mathrm{Im\ }\partial_2$ for all minimal cycles $C$ in $\bar{G}$, and $\beta_{3,5}(I)=0$ because $\bar{G}$ is connected. Hence $\mathrm{pd}(I)=2$.
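Since $G_{(c)}$ is triangle-free, $\Delta=\Delta(\bar{G})$ is $1$-dimensional, and its reduced homology can be read off from graph counts over any field: $\dim\widetilde{H}_0=c-1$ and $\dim\widetilde{H}_1=|E|-|V|+c$, with $c$ the number of components. A sketch (ours) confirming the two Betti-number claims:

```python
from itertools import combinations

def components(vertices, edges):
    """Number of connected components, by depth-first search."""
    vs = list(vertices)
    adj = {v: set() for v in vs}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, count = set(), 0
    for v in vs:
        if v in seen:
            continue
        count += 1
        seen.add(v)
        stack = [v]
        while stack:
            for w in adj[stack.pop()] - seen:
                seen.add(w)
                stack.append(w)
    return count

# Gbar = G_{(c)}: the 4-cycle 1-2-3-4-1 together with 5 joined to 1 and 3
V = list(range(1, 6))
E = [(1, 2), (2, 3), (3, 4), (1, 4), (1, 5), (3, 5)]
Eset = {frozenset(e) for e in E}

# triangle-free, so Delta(Gbar) is the graph itself (1-dimensional)
assert not any(all(frozenset(p) in Eset for p in combinations(T, 2))
               for T in combinations(V, 3))

c = components(V, E)
h0 = c - 1                  # dim H~_0(Delta; K), any field K
h1 = len(E) - len(V) + c    # dim H~_1(Delta; K) = cycle rank, any field K
assert (h0, h1) == (0, 2)
```

By Hochster's formula this gives $\beta_{3,5}(I)=h_0=0$ and $\beta_{2,5}(I)=h_1=2\neq 0$, matching $\mathrm{pd}(I)=2$.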
\begin{figure}[ht!]
\begin{center}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=0.6cm,y=0.6cm]
\clip(7.,4) rectangle (12.5,8.1);
\draw (8.,4.)-- (12.,4.);
\draw (12.,4.)-- (12.,8.);
\draw (12.,8.)-- (8.,8.);
\draw (8.,8.)-- (8.,4.);
\draw (12.,8.)-- (8.,4.);
\draw (7.3,4.553017458995247) node[anchor=north west] {\begin{scriptsize}1\end{scriptsize}};
\draw (12,4.553017458995247) node[anchor=north west] {\begin{scriptsize}2\end{scriptsize}};
\draw (9.8,6.84034134932995) node[anchor=north west] {\begin{scriptsize}5\end{scriptsize}};
\draw (7.3,8.3) node[anchor=north west] {\begin{scriptsize}4\end{scriptsize}};
\draw (12,8.3) node[anchor=north west] {\begin{scriptsize}3\end{scriptsize}};
\begin{scriptsize}
\draw [fill=black] (8.,4.) circle (1.5pt);
\draw [fill=black] (12.,4.) circle (1.5pt);
\draw [fill=black] (12.,8.) circle (1.5pt);
\draw [fill=black] (8.,8.) circle (1.5pt);
\draw [fill=black] (10.,6.) circle (1.5pt);
\end{scriptsize}
\end{tikzpicture}
\end{center}
\caption{The graph $G_{(c)}$}\label{type c}
\end{figure}
\medskip
(d) Let $\bar{G}$ be either of the graphs $G_{(d)_1}, G_{(d)_2}$ in Figure~\ref{type d}.
Both graphs have three minimal cycles of length~$4$. Since $G_{(d)_1}$ is a dipyramid,
by (O-3) one has $\widetilde{H}_2(\Delta(\overline{G_{(d)_1}});\mathbb{K})\neq 0$, which implies that $\beta_{2,6}(I(\overline{G_{(d)_1}}))\neq 0$. Although $G_{(d)_2}$ is not a dipyramid, it contains the minimal cycle $C=1-2-3-4-1$, which gives a nonzero homology class in $\widetilde{H}_1(\Delta(\overline{G_{(d)_2}})_W;\mathbb{K})$, where $W=V(C)\cup\{5\}$. Hence $\beta_{2,5}(I(\overline{G_{(d)_2}}))\neq 0$.
Therefore $\mathrm{pd}(I)\geq 2$ in both cases.
To prove that $\mathrm{pd}(I)=2$ it is enough to show that $\beta_{3,5}(I)=\beta_{3,6}(I)=0$.
For any subset $W$ of $[6]$ with $|W|=5$, the graph $\bar{G}_W$, and hence $\Delta_W$, is connected in both cases. It follows that $\widetilde{H}_0(\Delta_W;\mathbb{K})=0$, and hence $\beta_{3,5}(I)=0$. Moreover, $\widetilde{H}_1(\Delta;\mathbb{K})=0$, because except for the cycle $C=1-2-3-4-1$ in $G_{(d)_2}$, all minimal cycles in $G_{(d)_1}, G_{(d)_2}$ are bases of cones, and for the cycle $C$ we have
$$T(C)=\partial_2^W([1,2,5]+[2,3,5]+[3,4,6]-[1,4,6]-[3,5,6]+ [1,5,6]).$$
Consequently, $\beta_{3,6}(I)=0$ and hence $\mathrm{pd}(I)=2$.
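The displayed filling of $T(C)$ can be checked by a direct boundary computation; in the sketch below (ours) the sign bookkeeping is trivial because every simplex in the filling is already written with increasing vertices:

```python
from collections import defaultdict

def boundary(chain):
    """Simplicial boundary of an integer chain {increasing tuple: coefficient}."""
    out = defaultdict(int)
    for simplex, c in chain.items():
        for i in range(len(simplex)):
            out[simplex[:i] + simplex[i + 1:]] += (-1) ** i * c
    return {f: c for f, c in out.items() if c}

# T(C) for C = 1-2-3-4-1 in G_{(d)_2}
TC = {(1, 2): 1, (2, 3): 1, (3, 4): 1, (1, 4): -1}

# the filling used in the text; each triple is a 2-face of Delta_W
L = {(1, 2, 5): 1, (2, 3, 5): 1, (3, 4, 6): 1,
     (1, 4, 6): -1, (3, 5, 6): -1, (1, 5, 6): 1}
assert boundary(L) == TC    # so T(C) indeed lies in Im d_2
```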
\begin{figure}[ht!]
\begin{center}
\hspace*{-.5cm}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=0.55cm,y=0.55cm]
\clip(5.,0) rectangle (11.5,9.4);
\draw (8.,6.)-- (11.,6.);
\draw (11.,6.)-- (8.74,4.98);
\draw (8.74,4.98)-- (5.66,4.98);
\draw (5.66,4.98)-- (8.,6.);
\draw (8.48,8.86)-- (11.,6.);
\draw (8.48,8.86)-- (8.74,4.98);
\draw (8.48,8.86)-- (5.66,4.98);
\draw [dash pattern=on 1pt off 1pt] (8.48,8.86)-- (8.,6.);
\draw (8.,2.)-- (8.74,4.98);
\draw (8.,2.)-- (5.66,4.98);
\draw [dash pattern=on 1pt off 1pt] (8.,2.)-- (8.,6.);
\draw (8.,2.)-- (11.,6.);
\draw (5.2,4.99) node[anchor=north west] {\begin{scriptsize}3\end{scriptsize}};
\draw (8.74,5.22) node[anchor=north west] {\begin{scriptsize}2\end{scriptsize}};
\draw (11.,6.4) node[anchor=north west] {\begin{scriptsize}1\end{scriptsize}};
\draw (7.4,6.6) node[anchor=north west] {\begin{scriptsize}4\end{scriptsize}};
\draw (8.2,9.62) node[anchor=north west] {\begin{scriptsize}5\end{scriptsize}};
\draw (7.7,2.) node[anchor=north west] {\begin{scriptsize}6\end{scriptsize}};
\draw (7.2,1) node[anchor=north west] {$G_{(d)_1}$};
\begin{scriptsize}
\draw [fill=black] (8.,6.) circle (1.5pt);
\draw [fill=black] (11.,6.) circle (1.5pt);
\draw [fill=black] (8.74,4.98) circle (1.5pt);
\draw [fill=black] (5.66,4.98) circle (1.5pt);
\draw [fill=black] (8.48,8.86) circle (1.5pt);
\draw [fill=black] (8.,2.) circle (1.5pt);
\end{scriptsize}
\end{tikzpicture}
\hspace{2.5cm}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=0.55cm,y=0.55cm]
\clip(5.,0) rectangle (11.5,9.5);
\draw (8.,6.)-- (11.,6.);
\draw (11.,6.)-- (8.74,4.98);
\draw (8.74,4.98)-- (5.66,4.98);
\draw (5.66,4.98)-- (8.,6.);
\draw (8.48,8.86)-- (11.,6.);
\draw (8.48,8.86)-- (8.74,4.98);
\draw (8.48,8.86)-- (5.66,4.98);
\draw [dash pattern=on 1pt off 1pt] (8.48,8.86)-- (8.,6.);
\draw (8.,2.)-- (5.66,4.98);
\draw [dash pattern=on 1pt off 1pt] (8.,2.)-- (8.,6.);
\draw (8.,2.)-- (11.,6.);
\draw (5.2,4.99) node[anchor=north west] {\begin{scriptsize}3\end{scriptsize}};
\draw (8.74,5.22) node[anchor=north west] {\begin{scriptsize}2\end{scriptsize}};
\draw (11.,6.4) node[anchor=north west] {\begin{scriptsize}1\end{scriptsize}};
\draw (7.4,6.6) node[anchor=north west] {\begin{scriptsize}6\end{scriptsize}};
\draw (8.2,9.62) node[anchor=north west] {\begin{scriptsize}5\end{scriptsize}};
\draw (7.7,2.) node[anchor=north west] {\begin{scriptsize}4\end{scriptsize}};
\draw (7.2,1) node[anchor=north west] {$G_{(d)_2}$};
\begin{scriptsize}
\draw [fill=black] (8.,6.) circle (1.5pt);
\draw [fill=black] (11.,6.) circle (1.5pt);
\draw [fill=black] (8.74,4.98) circle (1.5pt);
\draw [fill=black] (5.66,4.98) circle (1.5pt);
\draw [fill=black] (8.48,8.86) circle (1.5pt);
\draw [fill=black] (8.,2.) circle (1.5pt);
\end{scriptsize}
\end{tikzpicture}
\end{center}
\vspace*{-.3cm} \caption{The graphs $G_{(d)_i}$}\label{type d}
\end{figure}
\end{ex}
The next lemma gives more information about the lengths of minimal cycles in $\bar{G}$ when $I(G)$ has almost maximal finite index. For an integer $k$, we denote by $\overline{k}$ the remainder of $k$ modulo $t+3$, i.e., $\overline{k}\equiv k\pmod{t+3}$ with $0 \leq \overline{k} < t + 3$, where $t\geq 1$ is an integer.
\begin{lem}\label{sedaghashang}
Let $G$ be a simple graph on $[n]$. Assume $I:=I(G)$ has almost maximal finite index. Then any minimal cycle in $\bar{G}$ is of length $\mathrm{index}(I)+3$.
\end{lem}
\begin{proof}
Let $\mathrm{index}(I)=t$. Then $\mathrm{pd}(I)=t+1$ which means that $\beta_{i,j}(I)=0$ for all $i>t+1$ and all $j$.
Using Theorem~\ref{index of graphs}, there exists a minimal cycle $C$ in $\bar{G}$ of length $t+3$ which has the smallest length among all the minimal cycles in $\bar{G}$. Let $C=u_1-u_2-\cdots-u_{t+3}-u_1$. Suppose $C'\neq C$ is a minimal cycle in $\bar{G}$ with $C'=v_1-v_2-\cdots-v_l-v_1$. Setting $W=V(C')$ and $T(C')$ as defined in (\ref{cycle kernel}) one has
$T(C')\in \ker \partial_1^W$, while $\mathrm{Im\ }\partial_2^W=0$. Hence $\widetilde{H}_{1}(\Delta_W;\mathbb{K})\neq 0$, and Hochster's formula implies that $\beta_{l-3,l}(I)\neq 0$. Since $\beta_{i,j}(I)=0$ for all $i>t+1$, we have $l\leq t+4$. We claim that $l<t+4$; since $t+3$ is the smallest length of a minimal cycle in $\bar{G}$, the claim yields $l=t+3$, as desired.
\medskip
{\em Proof of the claim:} Suppose $l=t+4$ and let $u\in [n]\setminus V(C')$. Note that such a $u$ exists, since otherwise $V(C)\subset [n]=V(C')$, which contradicts the minimality of $C'$. Let $W=V(C')\cup\{u\}$.
Since $\beta_{t+2,t+5}(I)= 0$, it follows that $\widetilde{H}_{1}(\Delta_W;\mathbb{K})= 0$. Therefore, $T(C')\in \ker \partial_1^W$ implies that $T(C')\in \mathrm{Im\ } \partial_2^W$, and hence, as in (O-4)(i), $\{u, v_i\}\in E(\bar{G})$ for all $1\leq i\leq t+4$.
On the other hand, since $n>t+4$, there exist $v, v'\in [n]\setminus V(C)$ with $v\neq v'$. Setting $W=V(C)\cup\{v,v'\}$, by (O-4)(ii) one of the following cases occurs:
\begin{itemize}
\item[(i)] either $\{v,u_i\}\in E(\bar{G})$ for all $u_i\in V(C)$ or $\{v',u_i\}\in E(\bar{G})$ for all $u_i\in V(C)$;
\item[(ii)] else, $t=1$ and $\bar{G}_W$ is isomorphic to the graph $G_{(d)_2}$ in Figure~\ref{type d}. In particular, $\{v,v'\}\in E(\bar{G})$.
\end{itemize}
We show that $V(C)\cap V(C')= \emptyset$. Suppose to the contrary that $V(C)\cap V(C')\neq \emptyset$, say $u_1\in V(C')$. Then $\{u_j, u_1\}\in E(\bar{G})$ for all $u_j\in V(C)\setminus V(C')$. Since $C$ is minimal, we conclude that $V(C)\setminus V(C')\subseteq \{u_2,u_{t+3}\}$. Therefore $\{u_1,u_3,\ldots,u_{t+2}\}\subset V(C')$.
Note that $V(C)\setminus V(C')\neq \emptyset$ because otherwise $V(C)\subset V(C')$ which does not hold.
If $|V(C)\setminus V(C')|=1$, without loss of generality we may suppose $V(C)\setminus V(C')=\{u_{t+3}\}$. Then since $|V(C')|-|V(C)|=1$, we have $|V(C')\setminus V(C)|=2$. Let $v_{j_1},v_{j_2}\in V(C')\setminus V(C)$. Then $V(C')=\{v_{j_1},v_{j_2}\}\cup\{u_1,\ldots,u_{t+2}\}$ with $\{v_{j_1},v_{j_2}\}\cap\{u_1,\ldots,u_{t+2}\}=\emptyset$. Suppose (i) happens for $v_{j_1},v_{j_2}$. We may assume that $\{v_{j_1}, u_i\}\in E(\bar{G})$ for all $1\leq i\leq t+3$. Since $t\geq 1$, $|\{u_1,\ldots,u_{t+2}\}|\geq 3$ which implies that $v_{j_1}$ is adjacent to at least $3$ vertices of $C'$ in $\bar{G}$ which contradicts the minimality of $C'$. So (i) cannot happen when $|V(C)\setminus V(C')|=1$. Therefore by (ii), $t=1$ and the induced subgraph of $\bar{G}$ on $V(C)\cup\{v_{j_1},v_{j_2}\}$ is isomorphic to $G_{(d)_2}$. Since $V(C')\subset V(C)\cup\{v_{j_1},v_{j_2}\}$, the cycle $C'$ which is of length $5$ is an induced subgraph of $G_{(d)_2}$. This is a contradiction because all cycles in $G_{(d)_2}$ are of length $4$. Therefore $|V(C)\setminus V(C')|=2$.
It follows from $V(C)\setminus V(C')=\{u_2,u_{t+3}\}$ that $V(C')=\{v_{j_1},v_{j_2}, v_{j_3}\}\cup\{u_1,u_3,\ldots,u_{t+2}\}$ with $\{v_{j_1},v_{j_2}, v_{j_3}\}\cap\{u_1,u_3,\ldots,u_{t+2}\}=\emptyset$. If at least two of the vertices $v_{j_1},v_{j_2},v_{j_3}$, say $v_{j_1},v_{j_2}$, are adjacent to all vertices of $C$ in $\bar{G}$, then $v_{j_1}-u_1-v_{j_2}-u_3-v_{j_1}$ is a $4$-cycle in $C'$ which contradicts the minimality of $C'$, because $|C'|\geq 5$. Hence at most one vertex from $v_{j_1},v_{j_2},v_{j_3}$ is adjacent to all vertices of $C$ in $\bar{G}$. If none of them is adjacent to all $u_i$ in $\bar{G}$, by (ii) we have $\{v_{j_1},v_{j_2}\},\{v_{j_2},v_{j_3}\},\{v_{j_1},v_{j_3}\}\in E(\bar{G})$ and hence $C'$ contains a triangle which is a contradiction. Therefore, exactly one vertex among $v_{j_1},v_{j_2},v_{j_3}$, say $v_{j_1}$, is adjacent to all vertices $u_i$ in $\bar{G}$. Now
$\{u_1,u_3,\ldots,u_{t+2}\}\subset V(C')$ and minimality of $C'$ imply that $t=1$ and that $v_{j_1}$ is not adjacent to $v_{j_2},v_{j_3}$.
Setting $W=\{v_{j_2},v_{j_3}\}\cup V(C)$, since (i) does not happen for this $W$, one concludes that $\bar{G}_W$ is isomorphic to $G_{(d)_2}$. Therefore $\{v_{j_2},v_{j_3}\}\in E(\bar{G})$ and, in $\bar{G}$, the vertex $v_{j_2}$ is adjacent to three successive vertices $u_{\overline{i-1}}, u_{i}, u_{\overline{i+1}}$ of $C$, and the vertex $v_{j_3}$ is adjacent to $u_{\overline{i+1}}, u_{\overline{i+2}}, u_{\overline{i-1}}$, where $1\leq i\leq 4$. Since $\{v_{j_2},v_{j_3}\}\in E(C')$, and since $V(C')=\{v_{j_1}, v_{j_2},v_{j_3},u_1,u_3\}$,
it follows that either $i=1$ or $i=3$, otherwise $C'$ is not minimal. Without loss of generality suppose $i=1$. Thus $v_{j_2}$ is adjacent to $u_1,u_2,u_4$ but not to $u_3$, and $v_{j_3}$ is adjacent to $u_2,u_3,u_4$ but not to $u_1$.
Setting $W=V(C)\cup\{v_{j_1},v_{j_2},v_{j_3}\}$ one has
\begin{align*}
T'=(&\sum_{1\leq i\leq 3}[v_{j_1},u_i,u_{i+1}])-[v_{j_1},u_1,u_4]- [v_{j_2},u_1,u_{2}]+[v_{j_2},u_1,u_{4}]-[v_{j_3},u_2,u_{3}]\\
&-[v_{j_3},u_3,u_4]+[v_{j_2},v_{j_3},u_2]-[v_{j_2},v_{j_3},u_4]\in\ker\partial_2^W.
\end{align*}
Since $\beta_{3,7}(I)=0$, we have $T'\in \mathrm{Im\ } \partial_3^W$, which requires that $\Delta_W$ contain faces of dimension $3$; this is not the case here, a contradiction.
Consequently, $V(C)\cap V(C')= \emptyset$, as desired.
Setting $W=\{u_j\}\cup V(C')$ for some $1\leq j\leq t+3$, since $T(C')\in \ker\partial_1^W$ and $\beta_{t+2,t+5}(I)=0$ we conclude that $u_j$ is adjacent to all vertices of $C'$ in $\bar{G}$. In particular,
$\{u_1,v_i\}, \{u_3,v_i\}\in E(\bar{G})$ for all $1\leq i\leq t+4$. Let $W=V(C')\cup\{u_1,u_3\}$. Then $\bar{G}_W$ contains an induced dipyramid $D$ with waist $C'$ and apexes $u_1, u_3$.
Thus $T(D)\in \ker \partial_2^W$, while $\mathrm{Im\ } \partial_3^W=0$, where $T(D)$ is defined in (\ref{dipyramid kernel}).
It follows that $\widetilde{H}_{2}(\Delta_W;\mathbb{K})\neq 0$ and so $\beta_{t+2,t+6}(I)\neq 0$, a contradiction. Therefore $l<t+4$ and the claim follows.
\end{proof}
In the next corollary we highlight some information obtained from Observation~\ref{connected induced graphs} about the vertices not belonging to a minimal cycle.
\begin{cor}\label{ostad}
Let $G$ be a simple graph on $[n]$. Assume $I:=I(G)$ has almost maximal finite index. Let $C$ be a minimal cycle in $\bar{G}$. Then
\begin{itemize}
\item[(a)] All vertices in $[n]\setminus V(C)$ are adjacent to some vertex in $V(C)$ in the graph $\bar{G}$.
\item[(b)] For any pair of vertices $v,v'\in [n]\setminus V(C)$, if $|N_{\bar{G}}(v)\cap V(C)|\leq 2$, then $V(C)\subseteq N_{\bar{G}}(v')$.
\item[(c)] If $\mathrm{index}(I)=1$, then there are at most two vertices in $[n]\setminus V(C)$ which are not adjacent to all vertices of ${C}$ in $\bar{G}$.
\item[(d)] If $\mathrm{index}(I)>1$, then there is at most one vertex in $[n]\setminus V(C)$ which is not adjacent to all vertices of ${C}$ in $\bar{G}$.
\end{itemize}
\end{cor}
\begin{proof}
Let $\mathrm{index}(I)=t$. By assumption $\mathrm{pd}(I)=t+1$. By Lemma~\ref{sedaghashang} all minimal cycles of $\bar{G}$ are of length $t+3$. Let $C=u_1-u_2-\cdots-u_{t+3}-u_1$ be a minimal cycle of $\bar{G}$.
(a) Let $u_{t+4}\in [n]\setminus V(C)$, and set $W=V(C)\cup \{u_{t+4}\}$. Since $\beta_{t+2,t+4}(I)=0$ we conclude that $\bar{G}_W$ is connected using (O-1). It follows that $u_{t+4}$ is adjacent to some vertex of $C$ in the graph $\bar{G}$.
\medspace
(b) If $|[n]\setminus V(C)|\leq 1$, then there is nothing to prove. Suppose $u_{t+4},u_{t+5}\in [n]\setminus V(C)$. Set $W= V(C)\cup\{u_{t+4},u_{t+5}\}$. Since $\beta_{t+2,t+5}(I)=0$, (O-4)(ii) implies that for each edge $e$ of $C$ we either have $e\cup\{u_{t+4}\}\in \Delta_W$ or $e\cup\{u_{t+5}\}\in \Delta_W$. This in particular shows that if $u_{t+4}$ is adjacent to at most $2$ vertices of $C$ in $\bar{G}$, then $u_{t+5}$ is adjacent to all of them in $\bar{G}$.
\medspace
(c) Suppose $u_{5},u_{6}\in [n]\setminus V(C)$ are not adjacent to all vertices of $C$ in $\bar{G}$. The argument in (O-4)(ii) shows that we may assume that $\{u_1,u_2,u_3\}\subseteq N_{\bar{G}}(u_5)$ but $u_4\notin N_{\bar{G}}(u_5)$ and $\{u_1,u_3,u_4\}\subseteq N_{\bar{G}}(u_6)$ but $u_2\notin N_{\bar{G}}(u_6)$. Now suppose $u_7\in [n]\setminus V(C)$ is not adjacent to all vertices of $C$ in $\bar{G}$. By replacing $u_6$ with $u_7$ in (O-4)(ii) one sees that $u_7$ is not adjacent to $u_2$ in $\bar{G}$, and replacing $u_5$ with $u_7$ in the same argument shows that $u_7$ is adjacent to $u_2$ in $\bar{G}$, a contradiction.
\medspace
(d) Suppose $u_{t+4},u_{t+5}$ are two vertices in $[n]\setminus V(C)$ which are not adjacent to all vertices of $C$ in $\bar{G}$. The argument in (O-4)(ii) shows that $t=1$, a contradiction.
\end{proof}
The crucial point in the classification of the edge ideals with almost maximal finite index is to determine the number of vertices of the graph in terms of the index of the ideal. In the following, we compute this number.
\begin{prop}\label{number of vertices}
Let $G$ be a simple graph on $[n]$ with no isolated vertex such that $I=I(G)$ has almost maximal finite index $t$. Then $G$ has either $n=t+4$ or $n=t+5$ vertices.
\end{prop}
\begin{proof}
Since $\mathrm{index}(I)=t$ there is a minimal cycle $C=u_1-u_2-\cdots-u_{t+3}-u_1$ in $\bar{G}$. Moreover, $\bar{G}\neq C$, because otherwise $\mathrm{pd}(I)=\mathrm{index}(I)$ by \cite[Theorem~4.1]{BHZ}. Since $C$ is a minimal cycle, $\bar{G}\neq C$ means that there exists $v\in [n]\setminus V(C)$. Therefore $n\geq t+4$.
Suppose to the contrary that $n> t+5$. Then $n-|V(C)|>2$.
Suppose first $t>1$. It follows from Corollary~\ref{ostad}(d) that there exist $u_{t+4},u_{t+5}\in [n]\setminus V(C)$ such that $u_{t+4},u_{t+5}$ are adjacent to all vertices of $C$ in $\bar{G}$. Therefore $C':=u_{t+4}-u_1-u_{t+5}-u_3-u_{t+4}$ is a $4$-cycle. Since $t>1$, $C'$ is not minimal and hence $\{u_{t+4},u_{t+5}\}\in E(\bar{G})$.
\medspace
Since $u_{t+4}, u_{t+5}$ are not isolated in $G$, there exist $v_1, v_2\in [n]\setminus (V(C)\cup \{ u_{t+4}, u_{t+5}\})$ such that $\{v_1, u_{t+4}\}, \{v_2,u_{t+5}\}\notin E(\bar{G})$. By Corollary~\ref{ostad}(a), $v_1, v_2$ are adjacent to some vertices of $C$ in $\bar{G}$. If $v_1$ is adjacent in $\bar{G}$ to at least two vertices of $C$, say $u_a, u_b\in V(C)$ with $b\neq \overline{a+1}$ and $a\neq \overline{b+1}$, then we obtain a minimal $4$-cycle $v_1-u_a-u_{t+4}-u_b-v_1$, which contradicts $t>1$. Thus $v_1$ is adjacent to either one vertex $u_a$ or two vertices $u_a, u_{\overline{a+1}}$ of $C$ in $\bar{G}$. In particular, $v_1$ is not adjacent to all vertices of $C$ in $\bar{G}$. The same holds for $v_2$. Corollary~\ref{ostad}(d) implies that $v_1=v_2$.
If $v_1$ is adjacent to only one vertex $u_a$ of $C$ in $\bar{G}$, setting $W=\{u_{t+4},v_1\}\cup V(C)\setminus \{u_a\}$, $\Delta_W$ is not connected and so $\beta_{t+2,t+4}(I)\neq 0$, a contradiction. Therefore, $v_1$ is adjacent to $u_a, u_{\overline{a+1}}$ in $\bar{G}$. Setting $W=\{u_{t+4},u_{t+5},v_1\}\cup V(C)\setminus \{u_a, u_{\overline{a+1}}\}$, $\Delta_W$ is not connected and so $\beta_{t+2,t+4}(I)\neq 0$, a contradiction. Consequently, $n\leq t+5$ when $t>1$.
\medspace
Now suppose $t=1$. Since $n-|V(C)|>2$ we have $n\geq 7$. By Corollary~\ref{ostad}(c), at least one vertex in $[n]\setminus V(C)$, say $v_1$, is adjacent to all vertices of $C$ in $\bar{G}$. Since $v_1$ is not isolated in $G$, there exists $v_2\in [n]\setminus(V(C)\cup \{v_1\})$ such that $\{v_1,v_2\}\notin E(\bar{G})$. We claim that $v_2$ is not adjacent to some vertex of $C$ in $\bar{G}$.
\medspace
{\em Proof of the claim:} Suppose to the contrary that $v_2$ is adjacent to all vertices of $C$ in $\bar{G}$. Then we get an induced dipyramid $D$ on the vertex set $V(C)\cup\{v_1,v_2\}$. Now set $W=V(C)\cup \{v_1,v_2,v_3\}$ for some $v_3\in [n]\setminus(V(C)\cup \{v_1,v_2\})$. Since $\beta_{3,7}(I)=0$ we have $T(D)\in \mathrm{Im\ }\partial_{3}^W$, with $T(D)$ similar to the one in (\ref{dipyramid kernel}), which implies that each $2$-face of $D$ is contained in a $3$-face of $\Delta_W$ and hence $v_3$ is adjacent to all vertices of $V(C)\cup \{v_1,v_2\}$ in $\bar{G}$. As $v_3$ is not isolated in $G$, there exists $v_4\in [n]\setminus(V(C)\cup \{v_1,v_2,v_3\})$ such that $\{v_3,v_4\}\notin E(\bar{G})$. Replacing $v_3$ with $v_4$ in the above argument, we conclude that $v_4$ is also adjacent to all vertices of $V(C)\cup \{v_1,v_2\}$ in $\bar{G}$.
Now
set $W=\{v_1,v_2,v_3,v_4\}\cup V(C)$. Then
\vspace*{-.55cm}
\begin{align*}
T=&\sum_{1\leq i\leq 3}\left([v_1,v_3,u_i,u_{i+1}]-[v_1,v_4,u_i,u_{i+1}]-[v_2,v_3,u_i,u_{i+1}]+[v_2,v_4,u_i,u_{i+1}]\right)\\
&-[v_1,v_3,u_1,u_4]+[v_1,v_4,u_1,u_4]+[v_2,v_3,u_1,u_4]-[v_2,v_4,u_1,u_4]\in\ker\partial_3^W
\end{align*}
while $T\notin\mathrm{Im\ }\partial_{4}^W$, because $\Delta_W$ contains no $4$-face.
This implies that $\beta_{3,8}(I)\neq 0$ which is a contradiction. So the claim follows.
Without loss of generality suppose $\{v_2,u_4\}\notin E(\bar{G})$. Now consider $v'_3\in [n]\setminus (V(C)\cup\{v_1,v_2\})$. We show that $v'_3$ is adjacent to all vertices of $C$ in $\bar{G}$. Otherwise, setting $W=\{v_2,v'_3\}\cup V(C)$, the same discussion as in (O-4)(ii) shows that $\bar{G}_W$ is isomorphic to the graph $G_{(d)_2}$ in Figure~\ref{type d}, where $\{v'_3,u_2\}\notin E(\bar{G}_W)$. Hence setting $W=\{v_1,v_2,v'_3\}\cup V(C)$, we have
\begin{align*}
T'=(&\sum_{1\leq i\leq 3}[v_1,u_i,u_{i+1}])-[v_1,u_1,u_4]- [v_2,u_1,u_{2}]-[v_2,u_2,u_{3}]-[v'_3,u_3,u_{4}]\\
&+[v'_3,u_1,u_4]-[v_2,v'_3,u_1]+[v_2,v'_3,u_3]\in\ker\partial_2^W,
\end{align*}
while $T'\notin \mathrm{Im\ }\partial^W_3$ because $\{v_2,u_1,u_2\}$ is not contained in a $3$-face of $\Delta_W$, and we get a contradiction. Thus
$v'_3$ is adjacent to all vertices of $C$ in $\bar{G}$. It follows that setting $W=\{v_1,v_2,v'_3\}\cup V(C)$, a dipyramid $D$ with the vertex set $V(C)\cup \{v_1,v'_3\}$ lies in $\Delta_W$ and so $T(D)\in \ker\partial_{2}^W$ which implies that $T(D)\in \mathrm{Im\ }\partial_{3}^W$. Thus each $2$-face of $D$ is contained in a $3$-face of $\Delta_W$. Since $v_2$ is not adjacent to $u_4$ in $\bar{G}$, we conclude that $\{v_1,v'_3\}\in E(\bar{G})$.
Note that by $\beta_{3,5}(I)= 0$, setting $W=V(C)\cup \{v_2\}$, the vertex $v_2$ is adjacent to some vertex $u_i$ of $V(C)$ in $\bar{G}$. Now setting $W=\{v_1,v_2\}\cup V(C)\setminus\{u_i\}$, the same reasoning implies that $v_2$ is adjacent in $\bar{G}$ to some vertex $u_j$ in $V(C)\setminus\{u_i\}$. Finally, setting $W=\{v_1,v_2, v'_3\}\cup V(C)\setminus \{u_i,u_j\}$ shows that in the graph $\bar{G}$ the vertex $v_2$ is adjacent either to three vertices $u_i,u_j,u_k$ of $C$, or to the two vertices $u_i,u_j$ of $C$ and to $v'_3$. We show that in the first case $v_2$ is also adjacent to $v'_3$ in $\bar{G}$. Suppose the first case happens. Since $\{v_2,u_4\}\notin E(\bar{G})$, setting $W=\{v_1,v_2,v'_3,u_1,u_3,u_4\}$ we have a minimal cycle $C':=v_2-u_3-u_4-u_1-v_2$ in $\bar{G}_W$ with $T(C')\in \ker\partial_1^W$. Since $\beta_{3,6}(I)=0$, any edge of $C'$ must be contained in a $2$-face of $\Delta_W$, and since $\{v_1,v_2\}\notin E(\bar{G})$ it follows that $v_2$ is adjacent to $v'_3$ in $\bar{G}$.
Now since $v'_3$ is not isolated in $G$, there exists $v'_4\in [n]\setminus (V(C)\cup \{v_1,v_2,v'_3\})$ with $\{v'_3,v'_4\}\notin E(\bar{G})$. Replacing $v'_3$ with $v'_4$ in the above discussion, we see that $v'_4$ is adjacent to all vertices of $C$ in $\bar{G}$. Setting $W=V(C)\cup \{v_2,v'_3,v'_4\}$, we have an induced dipyramid $D$ on the vertex set $V(C)\cup\{v'_3,v'_4\}$ with $T(D)\in \ker\partial^W_{2}$, and since $\{v_2,u_4\}\notin E(\bar{G}_W)$ we have $T(D)\notin \mathrm{Im\ }\partial^W_{3}$, which contradicts $\beta_{3,7}(I)=0$.
Thus $n\leq t+5$ when $t=1$.
\end{proof}
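The case analyses above repeatedly locate minimal cycles in $\bar{G}$. Taking a minimal cycle to mean an induced cycle of length at least $4$ (our reading of the paper's usage), such cycles can be enumerated by brute force: a vertex subset $W$ spans one exactly when the induced subgraph on $W$ is connected and $2$-regular. The following Python sketch (function name and input format are our own illustration, not part of the paper) does this:

```python
from itertools import combinations

def minimal_cycles(vertices, edges):
    """All minimal cycles (induced cycles of length >= 4), returned as
    vertex sets: W spans a minimal cycle iff the induced subgraph on W
    is connected and every vertex of W has exactly two neighbours in W."""
    E = {frozenset(e) for e in edges}
    found = []
    for k in range(4, len(vertices) + 1):
        for W in combinations(vertices, k):
            deg = {v: sum(1 for u in W if frozenset({u, v}) in E) for v in W}
            if any(d != 2 for d in deg.values()):
                continue
            # 2-regular and connected <=> a single chordless cycle on W
            seen, stack = set(), [W[0]]
            while stack:
                v = stack.pop()
                if v not in seen:
                    seen.add(v)
                    stack += [u for u in W if frozenset({u, v}) in E]
            if len(seen) == k:
                found.append(set(W))
    return found
```

For instance, a chordless $5$-cycle is its own unique minimal cycle, while a $4$-cycle with a chord contains no minimal cycle at all.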
Now we are ready to state the main result of this section which determines the graphs whose edge ideals have almost maximal finite index.
\begin{thm}\label{check-out}
Let $G$ be a simple graph on $[n]$ with no isolated vertex and let $I=I(G)\subset\!S$. Then $I$ has almost maximal finite index if and only if $\bar{G}$ is isomorphic to one of the graphs given in Example~\ref{mesal}.
\end{thm}
\begin{proof}
The ``if'' implication follows from Example~\ref{mesal}. We prove the converse.
Suppose $\mathrm{index}(I)=t$. Then there is a minimal cycle $C:=u_1-u_2-\cdots-u_{t+3}-u_1$ in $\bar{G}$. Moreover, by Proposition~\ref{number of vertices} there exists $u_{t+4}\in [n]\setminus V(C)$ which by Corollary~\ref{ostad}(a) is adjacent to some vertex $u_i$ of $V(C)$ in $\bar{G}$. Without loss of generality we may assume that $i=1$. By Proposition~\ref{number of vertices} we have $n-(t+3)\leq 2$. We consider two cases:
\medspace
{\em Case} (i): Suppose $[n]\setminus V(C)=\{u_{t+4}\}$ and let $1\leq l\leq t+3$ be the largest integer such that $\{u_{l},u_{t+4}\}\in E(\bar{G})$.
\begin{itemize}
\item[$\bullet$] If $l=1$, then $\bar{G}=G_{(a)_1}$ in Figure~\ref{type a}.
\item[$\bullet$] If $l=2$, then $\bar{G}=G_{(a)_2}$ in Figure~\ref{type a}.
\item[$\bullet$] If $3\leq l<t+3$, then there is a minimal cycle $C'=u_1-u_{t+4}- u_l- u_{l+1}-\cdots-u_{t+3}-u_1$ of length $t+6-l$. By Lemma~\ref{sedaghashang}, $t+6-l=t+3$ which implies $l=3$. If $\{u_2,u_{t+4}\}\notin E(\bar{G})$, then we will have a minimal $4$-cycle on the vertex set $\{u_1,u_2,u_3,u_{t+4}\}$. It follows from Lemma~\ref{sedaghashang} that $|C|=4$. Hence, $\bar{G}$ is isomorphic to $G_{(c)}$ in Figure~\ref{type c}. If $\{u_2,u_{t+4}\}\in E(\bar{G})$, then $\bar{G}=G_{(a)_3}$ in Figure~\ref{type a}.
\item[$\bullet$] If $l=t+3$, then since $G$ does not have isolated vertices, there exists $1<j<t+3$ such that $\{u_{t+4},u_j\}\notin E(\bar{G})$. Let $k,k'$ with $1\leq k< j<k'\leq t+3$ be respectively the largest index and the smallest index such that $\{u_k,u_{t+4}\}, \{u_{k'},u_{t+4}\}\in E(\bar{G})$. It follows that $C'=u_{t+4}-u_k- u_{k+1}-\cdots-u_{k'}-u_{t+4}$ is a minimal cycle and hence $|C'|=k'-k+2=t+3$. Therefore we have either $(k,k')=(1,t+2)$ or $(k,k')=(2,t+3)$. In both cases $\bar{G}$ is isomorphic to $G_{(a)_3}$.
\end{itemize}
{\em Case} (ii): Suppose $[n]\setminus V(C)=\{u_{t+4},u_{t+5}\}$. By Corollary~\ref{ostad}(a) both $u_{t+4},u_{t+5}$ are adjacent to at least one vertex of $C$ in $\bar{G}$.
\begin{itemize}
\item[$\bullet$] Suppose in the graph $\bar{G}$ the vertex $u_{t+4}$ is adjacent to at most $2$ vertices of $C$, one of which is $u_1$. Then $u_{t+5}$ is adjacent to all vertices of $C$ in $\bar{G}$ by Corollary~\ref{ostad}(b). Since $\Delta_W$ is connected for $W=[n]\setminus \{u_1\}$, we conclude that $u_{t+4}$ is adjacent to some vertex in $[n]\setminus \{u_1\}$ in $\bar{G}$; since $u_{t+5}$ is not isolated in $G$, $u_{t+5}$ is not adjacent to $u_{t+4}$ in $\bar{G}$, and hence $u_{t+4}$ is adjacent to some $u_j\in V(C)$ with $j\neq 1$ in $\bar{G}$. We show that either $j=2$ or $j=t+3$. Otherwise there is a minimal cycle $C':=u_{t+4}-u_1-u_2-\cdots-u_j-u_{t+4}$ of length $j+1$, which is equal to $t+3$ by Lemma~\ref{sedaghashang}. Thus $j=t+2$, which implies that $C'':=u_{t+4}-u_{t+2}-u_{t+3}-u_1-u_{t+4}$ is a minimal $4$-cycle and hence $t=1$. But $T(C''')\in \ker\partial_1\setminus\mathrm{Im\ }\partial_{2}$, where $C''':=u_{5}-u_3-u_{6}-u_1-u_{5}$ is a minimal $4$-cycle in $\bar{G}$. It follows that $\beta_{3,6}(I)\neq 0$, a contradiction. Thus either $j=2$ or $j=t+3$, and hence $\bar{G}$ is isomorphic to $G_{(b)}$ in Figure~\ref{type b}. The same holds if we interchange $u_{t+4}$ and $u_{t+5}$ in the above argument.
\item[$\bullet$] Now suppose $u_{t+4}$ and $u_{t+5}$ are adjacent to at least $3$ vertices of $C$ in $\bar{G}$.
If each of $u_{t+4}$ and $u_{t+5}$ fails to be adjacent to some vertex of $C$ in $\bar{G}$, then, as seen in the argument of (O-4)(ii), the graph $\bar{G}$ is isomorphic to $G_{(d)_2}$ in Figure~\ref{type d}.
Now consider the case that at least one of the vertices $u_{t+4}, u_{t+5}$, say $u_{t+5}$, is adjacent to all vertices of $C$ in $\bar{G}$. The argument below also works if we interchange $u_{t+4}$ and $u_{t+5}$.
Suppose $u_{t+4}$ is adjacent in the graph $\bar{G}$ to (at least) three vertices $u_1, u_k, u_j$ of $C$ with $1<k<j\leq t+3$. Since $u_{t+5}$ is not isolated in $G$, we have $\{u_{t+4},u_{t+5}\}\notin E(\bar{G})$. If $(k,j)\neq (2,t+3)$, then we get the minimal $4$-cycle $C'=u_{t+4}-u_{1}-u_{t+5}-u_l-u_{t+4}$, where $l=k$ if $k\neq 2$ and $l=j$ otherwise, and hence $t=1$. If $(k,j)= (2,t+3)$, then we get the minimal $4$-cycle $C''=u_{t+4}-u_{2}-u_{t+5}-u_{t+3}-u_{t+4}$, and so $t=1$ also in this case.
From $t=1$ we conclude that $u_1, u_k, u_j$ are successive vertices in $C$.
Without loss of generality we may assume that $(k,j)=(2,3)$. Since $\{u_5,u_6\}\notin E(\bar{G})$, in case $\{u_{5}, u_4\}\notin E(\bar{G})$ the graph $\bar{G}$ is isomorphic to $G_{(d)_2}$, and in case $\{u_{5}, u_4\}\in E(\bar{G})$ it is isomorphic to $G_{(d)_1}$ in Figure~\ref{type d}.
\end{itemize}
This completes the proof.
\end{proof}
All the arguments so far in this section are independent of the characteristic of the base field; consequently:
\begin{cor}\label{final note}
The property of having almost maximal finite index for edge ideals is independent of the characteristic of the base field. In other words, given a simple graph $G$, its edge ideal $I(G)$ has almost maximal finite index over some field if and only if it has this property over all fields.
\end{cor}
\begin{cor}\label{depth}
Let $G$ be a simple graph on $[n]$ with no isolated vertex such that $I=I(G)\subset\!S$ has almost maximal finite index. Then over all fields
\begin{align*}
\mathrm{pd} (I)=\begin{cases} n-3 \quad \text{if } \bar{G}=G_{(c)} \text{ or } G_{(a)_i},\ i=1,2,3,\\ n-4 \quad \text{if } \bar{G}=G_{(b)}\text{ or } G_{(d)_i}, \ i=1,2.\end{cases}
\end{align*}
In particular, $3\leq \mathrm{depth} (I)\leq 4$.
\end{cor}
\begin{proof}
Let $\mathrm{index}(I)=t$. By Theorem~\ref{check-out}, $\bar{G}\in\{G_{(a)_1}, G_{(a)_2}, G_{(a)_3}, G_{(b)}, G_{(c)}, G_{(d)_1}, G_{(d)_2}\}$.
It follows that
\begin{align*}
\hspace{1cm} n=\begin{cases} t+4\quad \text{if } \bar{G}=G_{(c)}, G_{(a)_i},\ i=1,2,3,\\ t+5\quad \text{if } \bar{G}=G_{(b)}, G_{(d)_i},\ i=1,2.\end{cases}
\end{align*}
Since $\mathrm{pd}(I)=t+1$, the assertion follows. The last assertion follows from the Auslander-Buchsbaum formula.
\end{proof}
In the rest of this section we study the last graded Betti numbers of edge ideals with almost maximal finite index. We first see in the following lemma that the graded Betti numbers of edge ideals with this property are independent of the characteristic of the base field.
The proof benefits greatly from Katzman's paper \cite{Ka}.
\begin{lem}\label{char 2}
Let $I\subset S$ be the edge ideal of a simple graph with almost maximal finite index. Then the Betti numbers of $I$ are characteristic independent.
\end{lem}
\begin{proof}
\cite[Theorem~4.1]{Ka} states that the Betti numbers of the edge ideals of the graphs with at most $10$ vertices are independent of the characteristic of the base field.
It follows that the graded Betti numbers of $I=I(G)$ with $\mathrm{index} (I)=t$ are characteristic independent when $\bar{G}\in \{G_{(c)},G_{(d)_1}, G_{(d)_2}\}$.
By \cite[Corollary~1.6, Lemma~3.2(b)]{Ka}, if $G$ has a vertex $v$ of degree $1$ or at least $|V(G)|-4$, then the Betti numbers of $I(G)$ are characteristic independent if and only if the Betti numbers of $I(G-\{v\})$ are characteristic independent. Here $G-\{v\}$ is the induced subgraph of $G$ on $V(G)\setminus\{v\}$. Since the vertex $t+4$ is of degree one in $\overline{G_{(b)}}$, it follows that the Betti numbers of $I(\overline{G_{(b)}})$ are characteristic independent if and only if so are the Betti numbers of $I(\overline{G_{(a)_2}})$. For $\bar{G}\in \{G_{(a)_i}:\ 1\leq i\leq 3\}$, since $t+4$ is adjacent to at least $t$ vertices in the graph $G$ and since $|V(G)|=t+4$, it is enough to show that the Betti numbers of $I(G-\{t+4\})$ are characteristic independent. But $G-\{t+4\}$ is the complement of a minimal cycle of length $t+3$. Note that by Hochster's formula, all the linear Betti numbers $\beta_{i,i+2}(I)$ are obtained from computing the dimension of $\widetilde{H}_0(\Delta(\bar{G})_W;\mathbb{K})$ with $W\subseteq V(G)$ and $|W|=i+2$, and this dimension equals the number of connected components of $\bar{G}_W$ minus one. Therefore these Betti numbers do not depend on the characteristic of the base field, see also \cite[Corollary~1.2(b)]{Ka}. Moreover, as seen in \cite[Theorem~4.1, Proposition~4.3]{BHZ}, the edge ideal of the complement of a minimal cycle has one nonzero non-linear Betti number $\beta_{t,t+3}(I)=1$ over all fields. Therefore the Betti numbers of $I(G-\{t+4\})$ are characteristic independent, as desired.
\end{proof}
For edge ideals with a linear resolution, all non-linear Betti numbers are zero. For edge ideals with maximal finite index $t$, it is shown in \cite{EGHP, BHZ} that there is only one nonzero non-linear Betti number, $\beta_{t,t+3}(I)=1$, over all fields. For ideals with almost maximal finite index $\mathrm{index}(I)=t$, the non-linear Betti numbers appear in the last two homological degrees of the minimal free resolution. By the arguments so far, it is easy to compute the $(t+1)$-th graded Betti numbers and also the $t$-th non-linear Betti numbers of an edge ideal $I$ with almost maximal finite index. Nevertheless, in the cases $\bar{G}=G_{(c)}$ and $\bar{G}=G_{(d)_i}$ for $i=1,2$ one can compute the whole Betti table using {\em Macaulay2} \cite{M2}.
Note that since all the graphs in Example~\ref{mesal} have at most $t+5$ vertices, where $\mathrm{index}(I(G))=t$, and since the edge ideals are generated in degree $2$, by Hochster's formula it is enough to consider $\beta_{i,j}(I(G))$ for $ i+2\leq j\leq t+5$.
\begin{prop}\label{Bettis of almost}
Let $G$ be a graph such that $I:=I(G)$ has almost maximal finite index $t$. Then over all fields, $\beta_{t,t+4}(I)=\beta_{t,t+5}(I)=0$ and
\begin{align*}
\beta_{t,t+3}(I)=\begin{cases} 1\quad \text{if } \bar{G}=G_{(a)_1} \text{ or } G_{(a)_2}\text{ or } G_{(b)},\\ 2\quad\text{if } \bar{G}=G_{(a)_3},\\ 3\quad\text{otherwise},\end{cases}
\end{align*}
\begin{align*}
\beta_{t+1,t+3}(I)&=\begin{cases} 1\quad \text{if } \bar{G}=G_{(a)_1} \text{ or } G_{(b)},\\ 0\quad\text{otherwise,}\end{cases}\\
\beta_{t+1,t+4}(I)&=\begin{cases} 2\quad \text{if } \bar{G}=G_{(c)} \text{ or } G_{(d)_2},\\ 0\quad\text{if }\bar{G}=G_{(d)_1},\\ 1\quad\text{otherwise,}\end{cases}\\
\beta_{t+1,t+5}(I)&=\begin{cases} 1\quad \text{if } \bar{G}=G_{(d)_1},\\ 0\quad\text{otherwise.}\end{cases}
\end{align*}
In particular,
\begin{align*}
\hspace*{-2.cm}\mathrm{reg} (I)=\begin{cases} 4\quad \text{if } \bar{G}=G_{(d)_1},\\ 3\quad\text{otherwise.}\end{cases}
\end{align*}
\end{prop}
\begin{proof}
All the equalities are straightforward consequences of Hochster's formula and Observation~\ref{connected induced graphs}. However, the Betti number $\beta_{t,t+3}(I)$ can also be deduced from \cite[Theorem~4.6]{FG}.
It is worth emphasizing that although $\widetilde{H}_1(\Delta,\mathbb{K})$ is spanned by the elements $0\neq T(C)+\mathrm{Im\ }\partial_2$ for all minimal cycles $C$ of $\bar{G}$, this set may not be a basis. In case $\bar{G}=G_{(c)}$, the graph $\bar{G}$ has three minimal cycles $C$ of length $4$ with $T(C)\notin \mathrm{Im\ }\partial_{2}$, but for the cycle $C=1-2-3-4-1$, $T(C)$ is a linear combination of $T(C')$ and $T(C'')$, where $C', C''$ are the two other cycles of $G_{(c)}$. Hence $\dim_{\mathbb{K}}\widetilde{H}_1(\Delta(G_{(c)}),\mathbb{K})=2$. In the case of $G_{(a)_3}$, we have $0\neq T(C)+ \mathrm{Im\ }\partial_{2}$, where $C$ is the minimal cycle on $[t+3]$, but $T(C)$ is a linear combination of $T(C'), T(C''),T(C''')$, where $C'$ is the minimal cycle on $[t+4]\setminus \{2\}$, and $C'', C'''$ are the two triangles in $G_{(a)_3}$; hence $\dim_\mathbb{K}\widetilde{H}_1(\Delta(G_{(a)_3}),\mathbb{K})=1$.
\end{proof}
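The use of Hochster's formula in this section is purely combinatorial: the linear Betti number $\beta_{i,i+2}(I(G))$ is the sum, over all subsets $W$ of size $i+2$, of the number of connected components of $\bar{G}_W$ minus one. The following Python sketch (our own illustration, not code from the paper) carries out this count:

```python
from itertools import combinations

def linear_betti(n, comp_edges, i):
    """beta_{i,i+2}(I(G)) via Hochster's formula: sum over all W of size
    i+2 of (#components of the induced complement graph on W) - 1.
    comp_edges are the edges of the complement graph Gbar on 1..n."""
    comp = {frozenset(e) for e in comp_edges}

    def components(W):
        # count connected components of Gbar restricted to W
        seen, count = set(), 0
        for start in W:
            if start in seen:
                continue
            count += 1
            stack = [start]
            while stack:
                x = stack.pop()
                if x in seen:
                    continue
                seen.add(x)
                stack.extend(y for y in W if frozenset({x, y}) in comp)
        return count

    return sum(components(W) - 1
               for W in combinations(range(1, n + 1), i + 2))
```

For the path $1-2-3$, whose complement graph has the single edge $\{1,3\}$, this gives $\beta_{0,2}=2$ and $\beta_{1,3}=1$, matching the minimal free resolution of $(x_1x_2,\, x_2x_3)$.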
\section{Powers of edge ideals with large Index}\label{powers of almost maximal}
Due to a result of Herzog, Hibi and Zheng \cite[Theorem~3.2]{HHZh1}, if the edge ideal $I:=I(G)$ has a linear resolution, that is, $\mathrm{index} (I)=\infty$, then all of its powers have a linear resolution as well. If $I$ has maximal finite index $t>1$, then by \cite[Corollary~4.4]{BHZ} the ideal $I^s$ has a linear resolution for all $s\geq 2$. Note that in general, for any edge ideal $I$ with $\mathrm{index}(I)=1$, one has $\mathrm{index}(I^s)=1$ for all $s\geq 2$; see Remark~\ref{part 2 of theorem} below. In this section we investigate when the higher powers of an edge ideal $I$ with almost maximal finite index have a linear resolution. Indeed, the aim of this section is to prove the following:
\begin{thm}\label{powers}
Let $G$ be a simple graph with no isolated vertex whose edge ideal $I(G)\subset S$ has almost maximal finite index. Then $I(G)^s$ has a linear resolution for all $s\geq 2$ if and only if $G$ is gap-free.
\end{thm}
Theorem~\ref{powers} follows from Remarks~\ref{part 2 of theorem} and \ref{part 1 of theorem}, and Theorems~\ref{main G_a3} and \ref{I^k has lin res}. Indeed, we will see in Remark~\ref{part 1 of theorem} that this theorem holds for $G\in\{\overline{G_{(a)_1}}, \overline{G_{(a)_2}}\}$ with $t>1$. For $G=\overline{G_{(a)_3}}$ with $t>1$ and $G=\overline{G_{(b)}}$ we will prove the assertion in Theorem~\ref{main G_a3} and Theorem~\ref{I^k has lin res}, respectively. As mentioned in Remark~\ref{part 2 of theorem} below, all other graphs whose edge ideals have almost maximal finite index contain a gap.
Recall that a {\em gap} in a graph $G$ is an induced subgraph on $4$ vertices consisting of a pair of edges with no vertex in common which are not linked by a third edge; see the graph $G_1$ in Figure~\ref{gcd}. The graph $G$ is called {\em gap-free} if it does not admit a gap; equivalently, if $\bar{G}$ does not contain an induced $4$-cycle. This property plays an important role in the study of the resolutions of powers of edge ideals; for example:
\begin{prop}\label{gap free}{$($Francisco-H\`a-Van Tuyl; unpublished, see \cite[Proposition~1.8]{NP} and \cite[Theorem~3.1]{BHZ}$)$}
Let $G$ be a simple graph. If $I(G)^s$ has a linear resolution for some $s\geq 1$, then $G$ is gap-free.
\end{prop}
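Gap-freeness can be tested directly from the definition by inspecting all pairs of vertex-disjoint edges; the following Python sketch (function name and input format are our own, not from the paper) does exactly this:

```python
from itertools import combinations

def is_gap_free(edges):
    """True iff the graph has no gap (induced 2K2): two vertex-disjoint
    edges such that no edge of the graph joins their endpoints."""
    edge_set = {frozenset(e) for e in edges}
    for e, f in combinations(edge_set, 2):
        if e & f:
            continue  # the two edges share a vertex, so no gap here
        # e, f form a gap unless some edge links an endpoint of e to one of f
        linked = any(frozenset({a, b}) in edge_set for a in e for b in f)
        if not linked:
            return False
    return True
```

For instance, two disjoint edges alone form a gap, while a $4$-cycle is gap-free since each pair of opposite edges is linked by a third edge.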
On the other hand,
\begin{rem}\rm \label{part 2 of theorem}
A more precise statement about gap-free graphs is given in \cite[Theorem~3.1]{BHZ}, which says that for a graph $G$ the following are equivalent:
\begin{itemize}
\item[(a)] $G$ admits a gap;
\item[(b)] $\mathrm{index}(I(G)^s)=1$ for all $s\geq 1$;
\item [(c)] there exists $s\geq 1$ with $\mathrm{index}(I(G)^s)=1$.
\end{itemize}
If $G$ is the graph whose complement is one of $G_{(a)_1}, G_{(a)_2}, G_{(a)_3}, G_{(b)}$ with $t=1$, or one of $G_{(c)}, G_{(d)_1}, G_{(d)_2}$, then $G$ has a gap. So by the above equivalence $\mathrm{index}(I(G)^s)=1$ for all $s\geq 1$ in this case.
\end{rem}
In order to prove Theorem~\ref{powers} for $G=\overline{G_{(a)_3}}$ with $t>1$, we need the following result of Banerjee~\cite{Ba}.
\begin{thm} \cite[Theorem~5.2]{Ba}\label{Banerjee1}
Let $G$ be a simple graph and let $I:=I(G)$ be its edge ideal. Let $\mathcal{G}(I^s)=\{m_1,\ldots, m_r\}$ be the set of minimal monomial generators of $I^s$. Then for all $s\geq1$
$$\mathrm{reg}(I^{s+1})\leq\max\{\mathrm{reg}(I^s),\ \mathrm{reg}(I^{s+1}: m_k) + 2s \text{ for } 1\leq k\leq r \},$$
where $(I^{s+1}: m_k)$ denotes the colon ideal, i.e., $(I^{s+1}: m_k)=\{f\in S:\ fm_k\in I^{s+1}\}$.
\end{thm}
As a consequence of this theorem, Banerjee showed in \cite[Theorem~6.17]{Ba} that for any gap-free and cricket-free graph $G$, the ideal $I(G)^s$ has a linear resolution for all $s\geq 2$. A {\em cricket} is a graph isomorphic to the graph $G_2$ in Figure~\ref{gcd}, and a graph $G$ is called {\em cricket-free} if $G$ contains no cricket as an induced subgraph.
Two other classes of graphs which produce edge ideals whose higher powers have a linear resolution were given by Erey. She proved in \cite{Er, Er1} that $I(G)^s$ has a linear resolution for all $s\geq 2$ if $G$ is gap-free and, in addition, either diamond-free or $C_4$-free. A {\em diamond} is a graph isomorphic to the graph $G_3$ in Figure~\ref{gcd}, and a {\em diamond-free} graph is a graph with no diamond as an induced subgraph. A {\em $C_4$-free} graph is a graph which does not contain a $4$-cycle as an induced subgraph; i.e., its complement is gap-free.
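Cricket-freeness, diamond-freeness, and $C_4$-freeness are all statements about small induced subgraphs, so for graphs of moderate size they can be decided by brute force over vertex subsets and bijections. The following Python sketch is our own illustration (the encodings of the patterns of Figure~\ref{gcd} in the usage below are assumptions):

```python
from itertools import combinations, permutations

def contains_induced(vertices, edges, pat_vertices, pat_edges):
    """True iff (vertices, edges) has an induced subgraph isomorphic to
    the pattern graph.  Brute force over vertex subsets and bijections;
    adequate for small patterns such as a gap, cricket, or diamond."""
    E = {frozenset(e) for e in edges}
    P = {frozenset(e) for e in pat_edges}
    k = len(pat_vertices)
    for W in combinations(vertices, k):
        # edge set induced on W
        induced = {frozenset({a, b}) for a, b in combinations(W, 2)
                   if frozenset({a, b}) in E}
        for perm in permutations(W):
            phi = dict(zip(pat_vertices, perm))
            if {frozenset({phi[a], phi[b]}) for a, b in P} == induced:
                return True
    return False
```

For example, with the diamond encoded as $K_4$ minus one edge, a graph containing a diamond plus a pendant vertex is detected, while a plain $4$-cycle contains no induced diamond.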
\begin{figure}[ht!]
\hspace{.1cm}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=.5cm,y=.5cm]
\clip(2.7,5.3) rectangle (22,11.5);
\draw [line width=.7pt] (3,11)-- (3,7);
\draw [line width=.7pt] (5,11)-- (5,7);
\draw [line width=.7pt] (9.04,10.98)-- (9.04,6.98);
\draw [line width=.7pt] (11,9)-- (9.04,6.98);
\draw [line width=.7pt] (11,9)-- (9.04,10.98);
\draw [line width=.7pt] (11,9)-- (13,11);
\draw [line width=.7pt] (11,9)-- (13,7);
\draw [line width=.7pt] (17.02,11)-- (17.02,7);
\draw [line width=.7pt] (21.02,7)-- (17.02,7);
\draw [line width=.7pt] (21.02,11)-- (21.02,7);
\draw [line width=.7pt] (21.02,11)-- (17.02,11);
\draw [line width=.7pt] (17.02,11)-- (21.02,7);
\draw (3.5,6.3) node[anchor=north west] {$G_1$};
\draw (10.5,6.3) node[anchor=north west] {$G_2$};
\draw (18.5,6.3) node[anchor=north west] {$G_3$};
\begin{scriptsize}
\draw [fill=black] (3,11) circle (1.5pt);
\draw [fill=black] (3,7) circle (1.5pt);
\draw [fill=black] (5,11) circle (1.5pt);
\draw [fill=black] (5,7) circle (1.5pt);
\draw [fill=black] (9.04,10.98) circle (1.5pt);
\draw [fill=black] (9.04,6.98) circle (1.5pt);
\draw [fill=black] (11,9) circle (1.5pt);
\draw [fill=black] (13,11) circle (1.5pt);
\draw [fill=black] (13,7) circle (1.5pt);
\draw [fill=black] (17.02,11) circle (1.5pt);
\draw [fill=black] (17.02,7) circle (1.5pt);
\draw [fill=black] (21.02,7) circle (1.5pt);
\draw [fill=black] (21.02,11) circle (1.5pt);
\end{scriptsize}
\end{tikzpicture}
\caption{$G_1$ a gap, $G_2$ a cricket, $G_3$ a diamond}
\label{gcd}
\end{figure}
\begin{rem}\rm \label{part 1 of theorem}
Clearly, the graphs $\overline{G_{(a)_1}}$ and $\overline{G_{(a)_2}}$ are cricket-free, and hence the statement of Theorem~\ref{powers} holds in these two cases by \cite[Theorem~6.17]{Ba}. Note that these graphs are gap-free for $t\geq 2$.
On the other hand, the graphs $\overline{G_{(a)_3}}$ and $\overline{G_{(b)}}$ contain crickets for large enough $t$. Indeed, for $t\geq 3$ the induced subgraph of $\overline{G_{(a)_3}}$ on the vertex set $\{1,2,3,5,t+4\}$ is isomorphic to a cricket, and for $t\geq 2$ so is the induced subgraph of $\overline{G_{(b)}}$ on $\{3,4,5,t+4,t+5\}$. These graphs are not diamond-free in general either, because for $t\geq 3$ the induced subgraphs on the vertex sets $\{2,4,6,t+4\}$ and $\{3,5,6,t+5\}$ form diamonds in $\overline{G_{(a)_3}}$ and $\overline{G_{(b)}}$, respectively. They are not even $C_4$-free for $t\geq 3$, since ${G_{(a)_3}}$ and ${G_{(b)}}$ contain gaps.
Therefore, when $G\in \{\overline{G_{(a)_3}},\overline{G_{(b)}}\}$ and $t$ is large enough, one cannot rely on the results of Banerjee or Erey to deduce Theorem~\ref{powers}.
\end{rem}
It is shown in \cite[Section~6]{Ba} that for the edge ideal $I$ of a simple graph $G$ and a minimal generator $m_k$ of $I^s$, $s\geq1$, the ideal $(I^{s+1}: m_k)$ is a quadratic monomial ideal whose polarization coincides with the edge ideal of a simple graph, with the construction explained in Lemma~\ref{Banerjee3} below. For details about the polarization technique, the reader may consult \cite{HHBook}. Throughout this section, for an edge $e=\{i,j\}$ of a graph $G$, its associated quadratic monomial $x_ix_j$ is denoted by ${\bf x}_e$.
\begin{lem} \cite[Lemma~6.11]{Ba}\label{Banerjee3}
Let $G$ be a simple graph with the edge ideal $I:=I(G)$, and let $m_k={\bf x}_{e_1}\cdots {\bf x}_{e_s}$ be a minimal generator of $I^s$, where $e_1,\ldots, e_s$ are some edges of $G$. Then the polarization $(I^{s+1}: m_k)^{pol}$ of the ideal $(I^{s+1}: m_k)$ is the edge ideal of a new graph $G_{e_1\ldots e_s}$ with the following structure:
\begin{itemize}
\item[(1)] $V(G)\subseteq V(G_{e_1\ldots e_s})$, $E(G)\subseteq E(G_{e_1\ldots e_s})$.
\item[(2)] Any two vertices $u, v$, $u\neq v$, of $G$ that are even-connected with respect to $e_1\cdots e_s$ are connected by an edge in $G_{e_1\ldots e_s}$.
\item[(3)] For every vertex $u$ which is even-connected to itself with respect to $e_1\cdots e_s$ there is a new vertex $u'\notin V(G)$ which is connected to $u$ in $G_{e_1\ldots e_s}$ by an edge and not connected to any other vertex $($so $\{u,u'\}$ is a whisker in $G_{e_1\ldots e_s}$$)$.
\end{itemize}
\end{lem}
In \cite{Ba}, two vertices $u$ and $v$ of a graph $G$ ($u$ may be the same as $v$) are said to be {\em even-connected} with respect to an $s$-fold product $e_1\cdots e_s$ of edges in $G$ if there is a path $P=p_0-p_1-\cdots-p_{2k+1}$, $k\geq 1$, in $G$ such that:
\begin{itemize}
\item[(1)] $p_0=u, p_{2k+1}=v.$
\item[(2)] For all $0\leq l \leq k-1$, $\{p_{2l+1}, p_{2l+2}\}=e_i$ for some $1\leq i\leq s$.
\item[(3)] For all $i$, $|\{l:\ 0\leq l\leq k-1, \ \{p_{2l+1}, p_{2l+2} \}=e_i \} | \leq | \{j :\ 1\leq j\leq s, \ e_j=e_i \} |$.
\item[(4)] For all $0 \leq r \leq 2k$, $\{p_r, p_{r+1}\}$ is an edge in $G$.
\end{itemize}
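To illustrate even-connectedness on a small example (a hypothetical graph $H$, not one of the graphs studied in this paper), let $H$ be the path $1-2-3-4$, so $I(H)=(x_1x_2,x_2x_3,x_3x_4)$, and take $s=1$ with $e_1=\{2,3\}$. The path $P=1-2-3-4$ satisfies conditions (1)--(4) with $k=1$: indeed $p_0=1$, $p_3=4$, the middle edge is $\{p_1,p_2\}=e_1$, it is used only once, and all consecutive pairs are edges of $H$. Hence $1$ and $4$ are even-connected with respect to $e_1$, so $\{1,4\}$ is an edge of $H_{e_1}$ in the construction of Lemma~\ref{Banerjee3}, in accordance with
\begin{align*}
x_1x_4\cdot x_2x_3=(x_1x_2)(x_3x_4)\in I(H)^2, \quad \text{i.e. } x_1x_4\in (I(H)^2:x_2x_3).
\end{align*}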
\medskip
Now we are ready to prove Theorem~\ref{powers} for $G=\overline{G_{(a)_3}}$ with $t>1$.
\begin{thm}\label{main G_a3}
Let $G$ be a graph on $n\geq 6$ vertices such that $G_{(a)_3}$ is its complement. Let $I:=I(G)$ be the edge ideal of $G$. Then $I^s$ has a linear resolution for $s\geq 2$.
\end{thm}
\begin{proof}
Note that $t+4=n\geq 6$ implies that $t>1$. We first show that for any
$s\geq 1$ and any $s$-fold product $e_1\cdots e_s$ of edges in $G$, the graph $\overline{G_{e_1\cdots e_s}}$ is chordal, where $G_{e_1\cdots e_s}$ is the simple graph described in Lemma~\ref{Banerjee3}, with edge ideal $I(G_{e_1\cdots e_s})=(I^{s+1}:\ {\bf x}_{e_1}\cdots {\bf x}_{e_s})^{pol}$.
Since by \cite[Lemmas~6.14, 6.15]{Ba}, any induced cycle of $\overline{G_{e_1\cdots e_s}}$ is an induced cycle of $\bar{G}$, we conclude that if $\overline{G_{e_1\cdots e_s}}$ contains an induced cycle $C$ of length $>3$, then $C\in \{C_1, C_2\}$, where $C_1=1-2-\cdots-(t+3)-1$ and $C_2=1-(t+4)-3-4-\cdots-(t+3)-1$. Thus, in order to prove that $\overline{G_{e_1\cdots e_s}}$ is chordal, we need to show that
$C_1,C_2$ are not induced cycles in $\overline{G_{e_1\cdots e_s}}$ for $s\geq 1$.
We claim that there exist $k,l\in V(G_{e_1\cdots e_s})$ such that $\{k,l\}\in E(G_{e_1\cdots e_s})\cap E(C_r)$, $r=1,2$. It follows that $C_r$ is not a subgraph of $\overline{G_{e_1\cdots e_s}}$, as desired.
\medskip
\textit {Proof of the claim:}
Let $e_1=\{i,j\}$ with $i<j$ and let $s\geq 1$. We choose $\{k,l\}\in E(C_1)$ as follows:
\begin{itemize}
\item[$(a)$] If $e_1=\{4,t+4\}$, then let $k=1$ and $l=t+3$;
\item[$(b)$] if $e_1=\{1,t+2\}$, then let $k=3$ and $l= 2$;
\item[$(c)$] if $e_1=\{2,t+3\}$, then let $k=4$ and $l= 3$;
\item[$(d)$] otherwise, let $k=\overline{i-2}$ and $l=\overline{i-1}$.
\end{itemize}
Since $C_2$ is obtained from $C_1$ by replacing $2$ with $t+4$, in order to find $\{k,l\}\in E(C_2)$ we choose $\{k,l\}\in E(C_2)$ as suggested in $(a)$--$(d)$, with the extra condition that if $\{k,l\}$ is obtained from $(b)$ or $(d)$ and contains $2$, then we replace $2$ with $t+4$ in this pair.
\medskip
By the above choices of $k,l$, although $\{k,l\}\notin E(G)$, we have $\{k,i\}, \{j, l\}\in E(G)$. It follows that $k-i-j-l$ is a path in $G$ and hence, by definition, $k$ and $l$ are even-connected with respect to $e_1\cdots e_s$. Therefore $\{k,l\}\in E(G_{e_1\cdots e_s})$.
This completes the proof of the claim.
\medskip
Now since $\overline{G_{e_1\cdots e_s}}$ is chordal for $s\geq 1$, by \cite[Theorem~1]{Fr}, $I(G_{e_1\cdots e_s})$ has a $2$-linear resolution for $s\geq 1$. It follows that for any choice of the edges $e_1,\ldots, e_s$ of $G$ one has
\begin{align*}\mathrm{reg}((I^{s+1}:\ {\bf x}_{e_1}\cdots {\bf x}_{e_s}))=\mathrm{reg}((I^{s+1}:\ {\bf x}_{e_1}\cdots {\bf x}_{e_s})^{pol})=\mathrm{reg}(I(G_{e_1\cdots e_s}))=2.
\end{align*}
The first equality follows from \cite[Corollary~1.6.3]{HHBook}.
By Proposition~\ref{Bettis of almost} we have $\mathrm{reg}(I)=3$. Theorem~\ref{Banerjee1} implies that $\mathrm{reg}(I^2)\leq 4$. Since $I^2$ is generated in degree $4$, we conclude that $\mathrm{reg}(I^2)=4$.
Now induction on $s>1$ together with Theorem~\ref{Banerjee1} yields the assertion.
\end{proof}
\medskip
Now it remains to prove Theorem~\ref{powers} for $\overline{G_{(b)}}$. The crucial point in the proof of Theorem~\ref{powers} for $\overline{G_{(a)_3}}$ was to show that $\mathrm{reg}((I(\overline{G_{(a)_3}})^{s+1}:\ m_k))=2$ for all minimal generators $m_k$ of $I(\overline{G_{(a)_3}})^{s}$. Having proved this statement, we deduced that the upper bound on $\mathrm{reg}(I(\overline{G_{(a)_3}})^{s+1})$ in Theorem~\ref{Banerjee1} is $2s+2$, and the desired conclusion followed.
The same method will not work for $\overline{G_{(b)}}$. Indeed,
computations with {\em Macaulay~2} \cite{M2} show that $\mathrm{reg}((I(\overline{G_{(b)}})^{s+1}:\ m_k))=3$, where $s\geq 1$ and $m_k=x_{t+5}^sx_{t+4}^s$ is a minimal generator of $I(\overline{G_{(b)}})^{s}$. Hence the upper bound on $\mathrm{reg}(I(\overline{G_{(b)}})^{s+1})$ in Theorem~\ref{Banerjee1} is at least $2s+3$, which is greater than the degree of the generators of $I(\overline{G_{(b)}})^{s+1}$; consequently, one cannot deduce that this ideal has a linear resolution merely by computing the upper bound in Theorem~\ref{Banerjee1}. Therefore,
in order to prove Theorem~\ref{powers} for $\overline{G_{(b)}}$ we need some other tool. This tool is provided in the following result of Dao et al.
\begin{lem}\label{Dao}{\cite[Lemma~2.10]{DHS}}
Let $I\subset S$ be a monomial ideal, and let $x$ be a variable appearing in some generator of $I$. Then $$\mathrm{reg}(I)\leq \max\{\mathrm{reg}((I:x)) + 1,\mathrm{reg}(I+(x))\}.$$ Moreover, if $I$ is squarefree, then $\mathrm{reg}(I)$ is equal to one of these terms.
\end{lem}
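As a quick illustration of Lemma~\ref{Dao} on a toy ideal (unrelated to the graphs above), let $I=(x_1x_2,x_1x_3,x_2x_3)$ be the edge ideal of a triangle and $x:=x_1$. Then $(I:x_1)=(x_2,x_3)$ and $I+(x_1)=(x_1,x_2x_3)$, whence
\begin{align*}
\mathrm{reg}(I)\leq \max\{\mathrm{reg}((x_2,x_3))+1,\ \mathrm{reg}((x_1,x_2x_3))\}=\max\{2,2\}=2,
\end{align*}
and since $I$ is squarefree, this bound is attained: indeed $\mathrm{reg}(I)=2$ because $I$ has a linear resolution.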
We will apply this result for $I:=I(\overline{G_{(b)}})^{s+1}$, $s\geq 1$, and $x:=x_{t+5}$. In
Theorem~\ref{linquo of square} we will compute the regularity of the ideal $(I(\overline{G_{(b)}})^{s+1}:x_{t+5})$, $s\geq 1$, by showing that it has linear quotients.
Recall that
a graded ideal $I$ is said to have {\em linear quotients} if there exists a homogeneous generating set of $I$, say $\{f_1,\ldots,f_m\}$, such that the colon ideal $\left((f_1,\ldots,f_{i-1}):f_i\right)$ is generated by variables for all $i>1$. By \cite[Theorem~8.2.1]{HHBook} equigenerated ideals with linear quotients have a linear resolution.
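As a toy instance (not tied to the graphs of this paper), the equigenerated ideal $I=(x_1x_2,x_1x_3,x_2x_3)$ has linear quotients with respect to the listed generating sequence: $\left((x_1x_2):x_1x_3\right)=(x_2)$ and $\left((x_1x_2,x_1x_3):x_2x_3\right)=(x_1)$ are both generated by variables, so by the cited theorem $I$ has a linear resolution.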
In the next result, Proposition~\ref{step1}, we carry out one step of the proof of Theorem~\ref{linquo of square}; it is somewhat long, yet easy to follow. In the proofs of this proposition and of Theorem~\ref{linquo of square} we need to order the generators of the given ideals. To this end we first order the multisets of edges of the associated graphs. We use the following order in both proofs:
\medskip
{\em Let $G$ be a simple graph. For $e=\{i,j\}\in E(G)$ with $i<j$ and $e'=\{i',j'\}$ with $i'<j'$, we let $e<e'$ if either $i<i'$ or $i=i'$ with $j<j'$.
Let $r\geq 1$. We denote by $\mathbf{e}:=(e_{i_1},\ldots,e_{i_{r}})$ the multiset $\{e_{i_1},\ldots,e_{i_{r}}\}$ of the edges of $G$, where $e_{i_1}\leq\cdots\leq e_{i_{r}}$. If $\mathbf{e'}:=(e_{i'_1},\ldots,e_{i'_{r}})$ is another ordered multiset in $E(G)$ of the same size $r$, we let
$\mathbf{e}\leq \mathbf{e'}$ if either $\mathbf{e}=\mathbf{e'}$ or else there exists $1\leq j\leq r$ such that $e_{i_l}=e_{i'_l}$ for all $l<j$ and $e_{i_j}<e_{i'_j}$.}
\medskip
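For instance, taking for $G$ the complement $\bar{C}$ of the cycle $C=1-2-3-4-5-1$ (the case $t=2$ considered below), the five edges of $\bar{C}$ are ordered as $\{1,3\}<\{1,4\}<\{2,4\}<\{2,5\}<\{3,5\}$, and accordingly $(\{1,3\},\{2,4\})<(\{1,3\},\{2,5\})<(\{1,4\},\{2,4\})$ as ordered multisets of size $2$.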
For the ordered multiset $\mathbf{e}:=(e_{i_1},\ldots,e_{i_{r}})$ of the edges of the graph $G$, we denote by $\x{\mathbf{e}}$ the monomial $\x{e_{i_1}}\cdots\x{e_{i_r}}$.
Moreover, we denote by $\mathrm{supp}(m)$ the set of all variables dividing the monomial $m\in S$ and also denote by $\deg_{m}x_{i}$ the largest integer $d$ such that $x_i^d$ divides $m$. We use the notation $m|m'$ ($m\centernot|m'$ resp.) when a monomial $m$ divides (does not divide resp.) a monomial $m'$.
\begin{prop}\label{step1}
Let $C=1-2-\cdots-(t+3)-1$, $t\geq 1$, be a cycle graph and let $t+4$ be a vertex not belonging to $C$. Then for $s\geq 0$ the ideal $L=I(\bar{C})^{s}(x_3,\ldots,x_{t+4})$ has linear quotients.
\end{prop}
\begin{proof}
For $s=0$ the assertion is obvious. Suppose $s\geq 1$.
We order the edges of $G:=\bar{C}$ as described above.
Each element $m$ in the minimal generating set $\mathcal{G}(L)$ of $L$ can be written as $m=\x{\e{}}x_k$,
where $\e{}=(e_{1},\ldots, e_{{s}})$ is an ordered multiset of the edges of $\bar{C}$ and $x_{k}\in \{x_3,\ldots,x_{t+4}\}$.
Note that there may be different multisets associated to $m$, and hence different presentations of $m$ as above.
In this case we consider the presentation of $m$ whose associated ordered multiset of edges is the smallest: if there is another presentation of $m$ as $\x{\e{}'}x_{k'}$ with $\e{}'\neq \e{}$, then we take the presentation $\x{\e{}}x_k$ for $m$ whenever $\e{}<\e{}'$. With this convention, each minimal generator $m_l$ of $L$ has a unique {\em smallest} presentation $m_l=\x{\e{l}}x_k$, where $\e{l}$ denotes the smallest multiset of edges associated to $m_l$.
Now we order the generators of $L$ as follows: for $m_q,m_l\in\mathcal{G}(L)$ with $m_q=\x{\e{q}}x_{k'}$, $m_l=\x{\e{l}}x_{k}$, we let $m_q<m_l$ if either $\mathbf{e}_q<\mathbf{e}_{l}$ or $\mathbf{e}_q=\mathbf{e}_{l}$ with $k'<k$.
Suppose $\mathcal{G}(L)=\{m_1,\ldots,m_r\}$ with $m_1<\!\cdots\!<m_r$. We show that for any $m_l\in\!\mathcal{G}(L)$ with $l\!>1$, the ideal $\left((m_1,\ldots, m_{l-1}):m_l\right)$ is generated by some variables. Set $J_l:=(m_1,\ldots, m_{l-1})$. By \cite[Proposition~1.2.2]{HHBook}, the ideal $(J_l:m_l)$ is generated by the elements of the set $\{m_q/\gcd(m_q,m_l):\ 1\leq q\leq l-1\}$. Let $m_{q,l}:=m_q/\gcd(m_q,m_l)$ for $m_q<m_l$. Suppose $m_l=\mathbf{x}_{\mathbf{e}_l} x_k$, $m_q=\mathbf{x}_{\mathbf{e}_q} x_{k'}$ with $\mathbf{e}_l:=(e_{1},\ldots,e_{s})$, $\mathbf{e}_q:=(e'_{1},\ldots,e'_{s})$ and $3\leq k, k'\leq t+4$. Let $e_{i}=\{a_i, b_i\}$, $e'_{i}=\{a'_i, b'_i\}$ with $1\leq a_i<b_i-1\leq t+2$ and $1\leq a'_i<b'_i-1\leq t+2$ for $1\leq i\leq s$.
In order to show that $(J_l:m_l)$ is generated by variables, we show that for each $q<l$, there exists $p<l$ such that $m_{p,l}$ is of degree one and it divides $m_{q,l}$. If $\deg m_{q,l}=1$, then we set $p:=q$ and so we are done. Assume that $\deg m_{q,l}>1$. First suppose $q=1$. Then $m_q=x_1^sx_3^{s+1}$. If $x_1| m_{q,l}$, then there exists $1\leq i\leq s$ with $1\notin e_i=\{a_i,b_i\}$. If $b_i\neq t+3$,
then set $e:=\{1,b_i\}$ and if $b_i=t+3$ with $a_i\neq 2$, set $e:=\{1,a_i\}$. Now set $m_p:=(\x{\e {l}}\x{e}/\x{e_i})x_k$.
Since $e<e_i$ we have $p<l$. Moreover, $m_{p,l}=x_1$ and so we are done in this case. Suppose $e_i=\{2,t+3\}$ for all $e_i\in \e{l}$ with $1\notin e_i$. It follows that $x_3|m_{q,l}$. If $k\neq 3$, then set $m_p:=\x{\e{l}}x_{3}$. Since $3<k$ we have $p<l$ and $m_{p,l}=x_3$. If $k=3$, then set $e:=\{1,3\}$ and $m_p:=(\x{\e {l}}\x{e}/\x{e_i})x_{t+3}$.
Now suppose $x_1\centernot| m_{q,l}$. Then $x_3|m_{q,l}$ and $1\in e_i$ for all $1\leq i\leq s$, and since $\deg m_{q,l}>1$ there exists $e_i\in\e{l}$ with $3\notin e_i$. Set $e:=\{1,3\}$ and $m_p:=(\x{\e {l}}\x{e}/\x{e_i})x_k$. So we are done if $q=1$. Now suppose $q>1$ and for all $q'<q$ there is $p'<l$ with $\deg m_{p',l}=1$ and $m_{p',l}|m_{q',l}$. We prove the assertion by induction on $q$.
Suppose there exist $e'_i\in \mathbf{e}_q$ and $e_j\in\mathbf{e}_l$ with $e'_i=e_j$.
The monomials $m'_{l}:= m_l/\mathbf{x}_{e_j}, m'_q:=m_q/\mathbf{x}_{e'_i}$ belong to $I(\bar{C})^{s-1}(x_3,\ldots,x_{t+4})$ and $m'_q<m'_l$ and $m_{q,l}=m'_q/\gcd(m'_q,m'_l)$. If there exists $m'_p\in I(\bar{C})^{s-1}(x_3,\ldots,x_{t+4})$ with $m'_p<m'_l$, where $m'_p/\gcd(m'_p,m'_l)$ is of degree one dividing $m'_q/\gcd(m'_q,m'_l)$, then setting $m_p:=m'_p\mathbf{x}_{e_i}$ one has $m_p\in J_l$ and $\deg m_{p,l}=1$, where $m_{p,l}$ divides $m_{q,l}$, as desired. So it is enough to prove the assertion for $m'_q,m'_l$. Consequently, from now on we may suppose that $\mathbf{e}_q\cap \mathbf{e}_l=\emptyset$. In particular, $\mathbf{e}_q\neq\mathbf{e}_l$ and hence
$\mathbf{e}_q<\mathbf{e}_l$. Since $\mathbf{e}_q, \mathbf{e}_l$ do not share an edge, it follows that $e'_1<e_1$ which means that either $a'_1<a_1$ or $a'_1=a_1$ with $b'_1<b_1$.
{\em Case} (i): $a'_1<a_1$. If $a'_1=k$, then $3\leq k<a_1<b_1-1$ implies that $e:=\{k, b_1\}\in E(\bar{C})$ with $e<e_1$ and hence by interchanging
$x_{a_1}$ in $\x{e_{1}}$ and $x_k$ we get
a smaller presentation for $m_l$, a contradiction.
Therefore $a'_1\neq k$. Note that $a'_1<a_1\leq a_i<b_i$ for all $i$. Thus $x_{a'_1}\centernot|m_l$ which implies that $x_{a'_1}|m_{q,l}$.
Since $a'_1<b_i-1$ for all $i$, we have $e:=\{a'_1,b_i\}\in E(\bar{C})$, unless $\{a'_1,b_i\}=\{1,t+3\}$. If $\{a'_1,b_i\}=\{1,t+3\}$ for all $i$ and if there exists $i$ with $a_i\neq 2$, then set $e:=\{a'_1,a_i\}$.
In both cases we have $e<e_i$ and hence $m_p:= (\x{\e {l}}\x{e}/\x{e_i})x_k<m_l$ with $m_{p,l}=x_{a'_1}$. Suppose $a'_1=1$ and $e_i=\{2,t+3\}$ for all $i$. We have $k\in\{3,t+3,t+4\}$, because otherwise by interchanging $x_{t+3}$ in $\x{e_i}$ and $x_k$ we get a smaller presentation for $m_l$, a contradiction. If $k=3$, then set $e:=\{1,3\}$ and $m_p:=(\x{\e {l}}\x{e}/\x{e_i})x_{t+3}$. If $k\in\{t+3,t+4\}$, since $b'_1\notin\{2,t+3,t+4\}$, we have $x_{b'_1}\centernot|m_l$ and hence $x_{b'_1}|m_{q,l}$. Set $m_p:=\x{\e{l}}x_{b'_1}$.
\medskip
{\em Case} (ii): $a'_1=a_1$ and $b'_1<b_1$. If $x_{b'_1}|m_{q,l}$ then set $m_p:=(\x{\e {l}}\x{e'_1}/\x{e_1})x_k$. Suppose $x_{b'_1}\centernot|m_{q,l}$ and hence $x_{b'_1}|m_l$. If $k=b'_1$, then interchanging $x_{b_1}$ in $\x{e_1}$ and $x_k=x_{b'_1}$ will result in a smaller presentation for $m_l$, a contradiction. Therefore, $k\neq b'_1$ and hence $b'_1\in e_h$ for some $e_h\in\e{l}$. Thus $e_h\neq e_1$. If $b$ is another vertex of $e_h$, then $b\in\{b_1,b_1-1,\overline{b_1+1}\}$ since otherwise we get a smaller presentation of $m_l$ by interchanging $b_1$ in $e_1$ and $b'_1$ in $e_h$, a contradiction.
Since $\deg m_{q,l}>1$, we have $\mathrm{supp}(m_{q,l})\neq \{x_{t+4}\}$. Thus $x_a|m_{q,l}$ for some $a\neq t+4$. If $b_1\notin \{a,\overline{a-1},\overline{a+1}\}$ ($b\notin \{a,\overline{a-1},\overline{a+1}\}$ resp.), then set $e:=\{a,b_1\}$ ($e:=\{a,b\}$ resp.) and $m_p:=(\x{\e {l}}\x{e'_1}\x{e}/(\x{e_1}\x{e_h}))x_k$. Suppose
$b_1, b\in \{a,\overline{a-1},\overline{a+1}\}$ for all $a\neq t+4$ with $x_a|m_{q,l}$.
If $b_1= \overline{a+1}$, since $b_1>1$ we have $a\neq t+3$ and $b_1=a+1$. Since $a_1<b'_1-1<b_1-1=a$ one has $e:=\{a_1,a\}\in E(\bar{C})$. Set $m_p:=(\x{\e{l}}\x{e}/\x{e_1})x_k$. Now suppose $b_1\in \{a,\overline{a-1}\}$ for all $a$ with $x_a|m_{q,l}$ and $a\neq t+4$.
If $a=1$, then since $a'_1=a_1$ is the smallest vertex in $\e{q}$ one has $a_1=1$ and $b_1\in\{1,t+3\}$ which is a contradiction. If $a\in\{2,3\}$, then $b_1\in \{1,2,3\}$ which is again a contradiction because $1\leq a_1<b'_1-1<b_1-1$. Therefore, $a\geq 4$ for all $a$ with $x_a|m_{q,l}$. In particular, $\overline{a-i}=a-i$ for $a\neq t+4$ and $i=1,2,3$. If $a<k$, then set $m_p:=\x{\e{l}}x_a$ and so we are done. Suppose $a\geq k$ for all $a$ with $x_a|m_{q,l}$.
Since $\deg m_{q,l}>1$, there exists $a$ with $x_a\in\mathrm{supp}(m_{q,l})\cap \mathrm{supp}(\x{\e{q}})$. Suppose $e'_{i_1}=\{a,c\}\in\e{q}$.
If $x_{c}|m_{q,l}$, then $c\neq t+4$ implies that $b_1\in \{a,a-1\}\cap\{c,{c-1}\}$. But $c\notin \{a,a-1,\overline{a+1}\}$ and hence $ \{a,a-1\}\cap\{c,{c-1}\}=\emptyset$, a contradiction. Thus $x_{c}\centernot|m_{q,l}$ and consequently $x_{c}|m_l$.
Suppose $c=k$. Then $c\leq a$ and since $\{a,c\}\in E(\bar{C})$ we have $c<a-1$. If $c\neq {a-2}$, then $b\in \{a,{a-1},\overline{a+1}\}$ implies that $e:=\{c, b\}\in E(\bar{C})$. Set $m_p:=(\x{\e{l}}\x{e}\x{e'_1}/(\x{e_1}\x{e_h}))x_a$. We have $m_p\leq m_l$. If $m_p=m_l$, then $b_1=a$ which implies that $(\x{\e{l}}\x{e}\x{e'_1}/(\x{e_1}\x{e_h}))x_a$ is a smaller presentation for $m_l$, a contradiction.
Thus $m_p<m_l$ and $m_{p,l}=x_a$, as desired. Now suppose $k=c={a-2}$. Then $a\geq 5$, and $a_1\in\{a-1,a-2,{a-3}\}$ since otherwise $\{a_1,a-2\}\in E(\bar{C})$ and by interchanging $x_{b_1}$ in $\x{e_1}$ and $x_k=x_{a-2}$ in the presentation of $m_l$ we get a smaller presentation which is a contradiction. From $a_1\in\{a-1,a-2,{a-3}\}$, and $a_1+1<b'_1<b_1\in \{a,a-1\}$ we conclude that $a_1={a-3}$, $b'_1=a-1$ and $b_1=a$. Thus $b\in \{a,a-1,\overline{a+1}\}$ implies that $b=\overline{a+1}$ and therefore interchanging $x_{b'_1}$ in $\x{e_h}$, where $e_h=\{b'_1,b\}=\{a-1,\overline{a+1}\}$, and $x_k=x_{a-2}$ will give a smaller presentation, a contradiction. { Note that since $a\geq 5$, we have $\{b, a-2\}=\{\overline{a+1}, a-2\}\in E(\bar{C})$. }
Assume now that $c\neq k$. Then there is $e_{i_2}\in\e{l}$ with $c\in e_{i_2}$. If $d$ is another vertex of $e_{i_2}$, then we may assume that $d<a$, because $\e{q}\cap\e{l}=\emptyset$ and if $d>a$, then we can set $m_p:=(\x{\e{l}}\x{e'_{i_1}}/\x{e_{i_2}})x_k$ which yields the result.
First assume that $b_1=a$. If $b'_1\leq d$, since $a_1+1<b'_1\leq d<a=b_1$, we have $e:=\{a_1,d\}\in E(\bar{C})$ with $e<e_1$ and hence interchanging $a$ in $e_1$ and $d$ in $e_{i_2}$ will give a smaller presentation for $m_l$, a contradiction. Thus $b_1=a$ implies that $b'_1>d$ and since $d<b'_1<b_1$ we have
$e:=\{d, b_1\}\in E(\bar{C})$, because otherwise $d=1$ and $b_1=t+3$ and since $a_1\leq d$ we have $a_1=1$ and $e_1=\{1,t+3\}$, a contradiction. Moreover, $e_{i_2}\neq e_1,e_h$.
Set $m_p:=(\x{\e{l}}\x{e'_1}\x{e}\x{e'_{i_1}}/(\x{e_1}\x{e_{i_2}}\x{e_h}))x_k$. We have $m_p\leq m_l$. If $b=a$, then $(\x{\e{l}}\x{e'_1}\x{e}\x{e'_{i_1}}/(\x{e_1}\x{e_{i_2}}\x{e_h}))x_k$ is a smaller presentation for $m_l$, a contradiction. Hence $b\neq a$ and thus $m_p<m_l$ with $m_{p,l}=x_a$.
Now assume that $b_1= a-1$ which implies that $b\in \{a,a-1\}$, because $b\in\{b_1-1, b_1,\overline{b_1+1}\}\cap\{a-1,a,\overline{a+1}\}$. If $d<b'_1$, then $d<b'_1<b_1=a-1\leq b$. If $\{d,b\}= \{1,t+3\}$, then $d=1$ implies that $a_1=1$ and since $c\neq a-1$ we have $e_1\neq e_{i_2}$ which implies that $\{1,a-1\}=e_1<e_{i_2}=\{1,c\}$ and hence $b_1=a-1<c$. Moreover, $b=t+3\in \{a-1,a\}$ implies that $a=t+3$ and hence $c=t+3$, a contradiction to $\{a,c\}\in E(\bar{C})$.
Thus $\{d,b\}\neq \{1,t+3\}$ which implies that $e:=\{d,b\}\in E(\bar{C})$. Set $m_p:=(\x{\e{l}}\x{e}\x{e'_1}\x{e'_{i_1}}/(\x{e_h}\x{e_1}\x{e_{i_2}}))x_k$.
Thus we may suppose that $b'_1\leq d$. If $d<a-1$, then set $e:=\{a_1,d\}$ and since $e_1\neq e_{i_2}$ we set $m_p:=(\x{\e{l}}\x{e}\x{e'_{i_1}}/(\x{e_1}\x{e_{i_2}}))x_k$. Suppose now that $d=a-1$.
Note that from $k\leq a$ we conclude that $k\in\{a-1,a\}$ since otherwise if $k=a-2$, then by interchanging $x_{b_1}=x_{a-1}$ in $\x{e_1}$ and $x_{k}=x_{a-2}$ we get a smaller presentation for $m_l$, and
if $k<a-2$, setting $e:=\{k,b\}$, we again get $(\x{\e{l}}\x{e'_1}\x{e}/(\x{e_1}\x{e_h}))x_{a-1}$ as a smaller presentation for $m_l$.
If $x_{a-1}|m_{q,l}$, or if $\deg_{m_q}x_{a-1}< \deg_{m_l}x_{a-1}$, then set $m_{q'}:=(\x{\e{q}}\x{e_{i_2}}/\x{e'_{i_1}})x_{k'}$. Since $m_{q'}<m_q$ and since $\mathrm{supp}(m_{q',l})\subseteq\mathrm{supp}(m_{q,l})$, by induction hypothesis we are done. Suppose $\deg_{m_q}x_{a-1}= \deg_{m_l}x_{a-1}$. Since $x_{a-1}|m_l$ we have $x_{a-1}|m_q$ as well. Note that $k'\neq a-1$, otherwise interchanging $x_{k'}$ and $x_a$ in $\x{e'_{i_1}}$ will result in a smaller presentation for $m_q$, a contradiction. It follows that there exists $e'_{i_3}=\{a-1,f\}\in \e{q}$ for some $f$ with $f\neq c, a_1$ because $\e{q}\cap \e{l}=\emptyset$.
If $x_f|m_{q,l}$, then we must have $a-1=b_1\in \{f,\overline{f-1}, \overline{f+1}\}$, a contradiction.
Thus $x_f\centernot|m_{q,l}$. As $f\neq a,a-1$ we have $f\neq b$ and also $f\neq k$ which implies that $f$ appears in an edge of $\e{l}$. If $f=b'_1$, then again $\e{q}\cap\e{l}=\emptyset$ implies that $b=a$. Therefore we have $e'_{i_1}=\{a,c\}, e'_{i_3}=\{a-1, b'_1\}\in \e{q}$ and $e_{i_2}=\{a-1,c\}, e_h=\{a,b'_1\}\in \e{l}$ which contradict the fact that $\e{l}$ and $\e{q}$ are the smallest multisets associated to $m_l$ and $m_q$, respectively. Thus $f\neq b'_1$ and hence $f\notin e_1\cup e_h\cup e_{i_2}\cup\{k\}$. It follows that there exists $e_{i_4}\neq e_1,e_{i_2}, e_h$ such that $e_{i_4}=\{f,g\}\in \e l$ for some $g$ with $g\neq a-1$. If $g>a$, { then $a\leq t+2$ and hence $\overline{a+1}=a+1$. }
We have $f\neq a+1$ because otherwise we will have $g>a+2$ and since $k\in\{a,a-1\}$ by interchanging $x_f$ in $\x{e_{i_4}}$ and $x_k$ one gets a smaller presentation for $m_l$ which is a contradiction. {Note that assuming $f=a+1$ one deduces from $g>a$ and $g\notin \{a,a+1,\overline{a+2}\}$ that $a+1<g\leq t+3$ and hence $\overline{a+2}=a+2$.}
Now set $e:=\{f,a\}\in E(\bar{C})$ and $m_p:=(\x{\e{l}}\x{e}/\x{e_{i_4}})x_k$. If $g=a$, then we have $e'_{i_1}=\{a,c\}, e'_{i_3}=\{a-1, f\}\in \e{q}$ and $e_{i_2}=\{a-1,c\}, e_{i_4}=\{a,f\}\in \e{l}$ which again contradict the fact that $\e{l}$ and $\e{q}$ are the smallest multisets associated to $m_l$ and $m_q$, respectively. Thus $g\neq a$ which implies that $g<a-1$.
If $g\notin \{a_1,a_1+1, \overline{a_1-1}\}$, then interchanging $a-1$ in $e_1$ and $g$ in $e_{i_4}$ will give a smaller presentation for $m_l$, a contradiction. Thus $g\in \{a_1,a_1+1, \overline{a_1-1}\}$. Since $g\geq a_1$ we have $g\in \{a_1, a_1+1\}$. If $g=a_1$, then $e_1\leq e_{i_4}$ implies that $f\geq a-1$. Since $f\neq a, a-1$ we have $f>a$. This in particular implies that $a\neq t+3$ and hence it follows from $a_1+1<a-1$ that $e:=\{a_1, a\}\in E(\bar{C})$. Now set $m_p:=(\x{\e{l}}\x{e}/\x{e_{i_4}})x_k$. Suppose $g=a_1+1$. Then $e:=\{a_1, f\}\in E(\bar{C})$ because $a_1\leq f$ and $f\notin \{a_1,a_1+1\}$ by $e_{i_4}=\{f, a_1+1\}\in\e{l}$. Moreover, $e':=\{a_1+1, a-1\}\in E(\bar{C})$ because $a_1+1<b'_1<a-1$. If $f<a-1$, then $(\x{\e{l}}\x{e}\x{e'}/(\x{e_1}\x{e_{i_4}}))x_k$ is a smaller presentation for $m_l$ which is a contradiction. Thus $f>a$ and hence set $m_p:=(\x{\e{l}}\x{e''}/\x{e_{i_4}})x_k$, where $e'':=\{a_1+1, a\}$.
This completes the proof.
\end{proof}
Now we extend the ideal $L$ of Proposition~\ref{step1} to another ideal which contains $L$ and has linear quotients.
\begin{thm}\label{linquo of square}
Let $I\subset S$ be the edge ideal of the graph $G=\overline{G_{(b)}}$, with $t\geq 1$. Then the ideal $(I^{s+1}:x_{t+5})$ has linear quotients for all $s\geq 0$.
\end{thm}
\begin{proof}
Set $J:= (I^{s+1}:x_{t+5})$. We first determine the minimal generating set $\mathcal{G}(J)$ of $J$. Note that $E(G)=E(\bar{C})\cup\{\{t+5,i\}:\ 3\leq i\leq t+4\}$, where $C=1-2-\cdots-(t+3)-1$ is the unique induced cycle of $G_{(b)}$ of length $>3$. Hence,
$$I^{s+1}=\sum_{k=0}^{s+1}I(\bar{C})^{s+1-k}(x_{t+5})^k(x_3,\ldots,x_{t+4})^k.$$
By \cite[Proposition~1.2.2]{HHBook}, the ideal $J$ is generated by monomials $m/\gcd(m, x_{t+5})$, where $m\in I^{s+1}$. It follows that
$$J=I(\bar{C})^{s+1}+\sum_{k=0}^{s}I(\bar{C})^{s-k}(x_{t+5})^k(x_3,\ldots,x_{t+4})^{k+1}.$$
Since each edge of $\bar{C}$ contains a vertex in $\{3,\ldots,t+3\}$, we have $I(\bar{C})^{s+1}\subset I(\bar{C})^{s}(x_3,\ldots,x_{t+4})$. Therefore,
$$J=\sum_{k=0}^{s}I(\bar{C})^{s-k}(x_{t+5})^k(x_3,\ldots,x_{t+4})^{k+1}.$$
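For instance, for $s=1$ this decomposition reads
\begin{align*}
J=I(\bar{C})(x_3,\ldots,x_{t+4})+x_{t+5}(x_3,\ldots,x_{t+4})^{2},
\end{align*}
and all generators have degree $3=2s+1$.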
For $0\leq k\leq s$, let $L_{k}:=I(\bar{C})^{s-k}(x_{t+5})^k(x_3,\ldots,x_{t+4})^{k+1}$. Clearly, for $0\leq k, k'\leq s$ with $k\neq k'$ we have $\mathcal{G}(L_k)\cap\mathcal{G}(L_{k'})=\emptyset$, where $\mathcal{G}(L_{k})$ denotes the minimal generating set of $L_k$. Therefore $\mathcal{G}(J)$ is the disjoint union of all $\mathcal{G}(L_k)$ for $0\leq k\leq s$. In particular, $J$ is generated by monomials of degree $2s+1$. For $s=0$, $J$ is generated by variables and hence we have the assertion. Suppose $s\geq 1$.
We order the multisets of the edges of $\bar{C}$ as described before Proposition~\ref{step1}.
Each element $m_l$ of $\mathcal{G}(L_k)$ can be written as $m_l=\x{\e{l}}{x_{t+5}}^k\x{l}$, where $\e{l}=(e_{1},\ldots,e_{{s-k}})$ is an ordered multiset of the edges of $\bar{C}$ of size $s-k$ with $e_i=\{a_i,b_i\}$, $a_i<b_i$, and $\x{l}:=x_{j_1}\cdots x_{j_{k+1}}$ with $3\leq j_1\leq \cdots\leq j_{k+1}\leq t+4$. As in the proof of Proposition~\ref{step1}, we consider the {\em smallest} presentation for $m_l$, i.e.\ the one in which $\e{l}$ is the smallest possible multiset associated to $m_l$; this presentation is unique.
Now we give an order on the generators of $J$. { To this end we use the lexicographic order $<_{lex}$ on the monomials of the ring $S$ induced by $x_1<x_2<\cdots<x_{t+5}$; see \cite[Section~2.1.2]{HHBook} for the definition of the lexicographic order.
}
For $m_q,m_l\in \mathcal{G}(J)$ we let $m_q<m_l$ in the following cases:
\begin{itemize}
\item $m_q\in L_{k'}$ and $m_l\in L_{k}$ with $0\leq k'<k\leq s$;
\item $m_q,m_l\in L_k$ for some $0\leq k\leq s-1$ and either $\mathbf{e}_q<\mathbf{e}_{l}$ or $\mathbf{e}_q=\mathbf{e}_{l}$ with $\mathbf{x}_q<_{lex}\mathbf{x}_l$;
\item $m_q,m_l\in L_s$, and { either $m_q\neq x_{t+5}^sx_i^{s+1}, m_l\neq x_{t+5}^sx_j^{s+1}$ for all $3\leq i, j\leq t+4$ with $m_q<_{lex}m_l$, or $m_q= x_{t+5}^sx_i^{s+1}$ and $m_l= x_{t+5}^sx_j^{s+1}$ for some $3\leq i<j\leq t+4$, or $m_q\neq x_{t+5}^sx_i^{s+1}$ for all $3\leq i\leq t+4$ and $m_l= x_{t+5}^sx_j^{s+1}$ for some $3\leq j\leq t+4$.}
\end{itemize}
Suppose $\mathcal{G}(J)=\{m_1,\ldots,m_r\}$ with $m_1<\!\cdots\!<m_r$. We show that for any $m_l\in\!\mathcal{G}(J)$ with $l\!>1$, the ideal $\left((m_1,\ldots, m_{l-1}):m_l\right)$ is generated by some variables. Set $J_l:=(m_1,\ldots, m_{l-1})$. By \cite[Proposition~1.2.2]{HHBook}, the ideal $(J_l:m_l)$ is generated by the elements of the set $\{m_q/\gcd(m_q,m_l):\ 1\leq q\leq l-1\}$. Let $m_{q,l}:=m_q/\gcd(m_q,m_l)$ for $m_q<m_l$.
Suppose $m_q=\x{\e q}x_{t+5}^{k'}\x{q}\in \mathcal{G}(L_{k'})$ with $0\leq k'\leq s$ and $\mathbf{e}_{q}=(e'_1,\ldots,e'_{s-k'})\subseteq E(\bar{C})$ with $e'_i=\{a'_i,b'_i\}$, $a'_i<b'_i$, and $\mathbf{x}_{q}=x_{j'_1}\cdots x_{j'_{k'+1}}$ with $3\leq j'_1\leq \cdots\leq j'_{k'+1}\leq t+4$, and suppose $m_l\in \mathcal{G}(L_{k})$ with $k'\leq k\leq s$. Suppose $\deg m_{q,l}>1$.
We show that there is $1\leq p<l$ such that $\deg m_{p,l}=1$ and $m_{p,l}| m_{q,l}$. This will imply that $J$ has linear quotients. We may assume $k\geq 1$ because by Proposition~\ref{step1}, $L_0=I(\bar{C})^{s}(x_3,\ldots,x_{t+4})$ has linear quotients. By the same argument as in the proof of Proposition~\ref{step1} we may assume that $\e{q}\cap \e{l}=\emptyset$.
First assume $q=1$. Then $m_q=x_1^sx_3^{s+1}$ and $x_1|m_{q,l}$, because $k\geq 1$. { If $x_3|\x{l}$, then set $e:=\{1,3\}$ and $m_p:=\x{e}m_l/(x_3x_{t+5})\in L_{k-1}$. Otherwise we have $x_3|m_{q,l}$ and we may set $m_p:=m_lx_3/x_{j_i}$ for some $j_i$.}
Suppose now that $q>1$ and suppose that for all $m_{q'}$ with $q'<q$ there exists $m_{p'}<m_l$ with $\deg m_{p',l}=1$ and $m_{p',l}|m_{q',l}$. We prove the assertion by induction on $q$.
Note that
\begin{enumerate}
\item[(a)] $x_{t+5}\notin \mathrm{supp}(m_{q,l})$, because $\deg_{m_q}x_{t+5}\leq \deg_{m_l}x_{t+5}$.
\item[(b)] Assume $a\in \{3,4,\ldots, j_{k+1}-1\}$. Except for the case where $k=s$ with
$m_l= x_{t+5}^sx_a^{s}x_{j_{s+1}}$, one has $x_a\in (J_l:m_l)$, because $m_p:=x_am_l/x_{j_{k+1}}\in J_l$ and $m_{p,l}=x_a$.
\item[(c)] If $k=s$ and $m_l=x_{t+5}^s x_a^{s}x_{j_{s+1}}$, where $3\leq a<j_{s+1}-1<t+3$, then $e\!:=\{a,j_{s+1}\}\in E(\bar{C})$ and by setting $m_p:=\x{e}m_l/(x_{j_{s+1}}x_{t+5})$ one has $m_p\in L_{s-1}\subseteq J_l$ and $x_a\in (J_l:m_l)$.
\item[(d)] For any $a$ with $j_{k+1} + 1 < a < t + 4$, we have $x_a \in (J_l : m_l)$: setting $e:=\{j_{k+1},a\}$, which is an edge of $\bar{C}$, one has
$m_p:=\mathbf{x}_em_l/(x_{t+5}x_{j_{k+1}})\in L_{k-1}\subseteq J_l$ and $m_{p,l}=x_a$.
\end{enumerate}
\medskip
If $\mathrm{supp}(m_{q,l}) \cap \{x_3, x_4, \ldots , x_{j_{k+1}-1}\}\neq \emptyset$, then we are
done. Indeed, assume that there exists $x_a \in \mathrm{supp}(m_{q,l})\cap\{x_3, x_4, \ldots , x_{j_{k+1}-1}\}$.
Then by (b) and (c), it is sufficient to check only the case where $k = s$,
$m_l = x^s_{t+5}x^s_ax_{j_{s+1}}$, and $j_{s+1} \in \{a + 1, t + 4\}$.
Since $\deg_{m_q}x_a\geq s+1$, if $m_q\in L_s$, then $m_q=x_{t+5}^sx_a^{s+1}>m_l$, a contradiction. Thus $m_q\notin L_s$ and hence there exists $e'_i\in \e{q}$ with $a\in e'_i$. Suppose $d$ is another vertex of $e'_i$. Since $d\notin\{a,a+1, t+4,t+5\}$ we have $x_d|m_{q,l}$. Set $m_p:=\x{e'_i}m_l/(x_ax_{t+5})$. Then $m_p<m_l$ and $m_{p,l}=x_d$ and so we are done.
\medskip
Moreover, if $x_a\in \mathrm{supp}(m_{q,l})\cap \{x_{j_{k+1}+2}, \ldots, x_{t+3}\}$, then we are again done by (d).
Thus, using (a) and the above discussion, we may suppose that
\begin{eqnarray}\label{pizza}
\mathrm{supp}(m_{q,l})\subseteq \{x_1,x_2,x_{j_{k+1}},x_{j_{k+1}+1}, x_{t+4}\}\setminus\{x_{t+5}\}
\end{eqnarray}
Note that
\begin{itemize}
\item[(i)] if $x_1\in \mathrm{supp}(m_{q,l})$ and $j_1\neq {t+3}, t+4$, then set $e:=\{1,j_1\}$ and $m_p:=\x{e}m_l/(x_{j_1}x_{t+5})$;
\item[(ii)] if $x_2\in \mathrm{supp}(m_{q,l})$ and there exists $j_i\neq 3,{t+4}$, $1\leq i\leq k+1$, then set $e:=\{2,j_i\}$ and $m_p:=\x{e}m_l/(x_{j_i}x_{t+5})$;
\item[(iii)] if $x_{j_{k+1}}\in \mathrm{supp}(m_{q,l})$ and $j_1<j_{k+1}-1{ <t+3}$, then set $e:=\{j_1,j_{k+1}\}$ and $m_p:=\x{e}m_l/(x_{j_1}x_{t+5})$;
\item[(iv)] if $x_{j_{k+1}+1}\in \mathrm{supp}(m_{q,l})$ and $j_1<j_{k+1}{ <t+3}$, then set $e:=\{j_1,j_{k+1}+1\}$ and $m_p:=\x{e}m_l/(x_{j_1}x_{t+5})$.
\end{itemize}
Thus, by (\ref{pizza}) it remains to find $m_p$ in the following cases:
\begin{itemize}
\item[(v)] $x_1\in \mathrm{supp}(m_{q,l})$
and $j_1\in \{{t+3}, t+4\}$;
\item[(vi)] $x_2\in \mathrm{supp}(m_{q,l})$;
and $j_i\in \{3,{t+4}\}$ for all $1\leq i\leq k+1$;
{ \item[(vii)] $x_{j_{k+1}}\in \mathrm{supp}(m_{q,l})$ and either $j_1\in\{ j_{k+1}-1,j_{k+1}\}$ or $j_{k+1}=t+4$;
\item[(viii)] $x_{j_{k+1}+1}\in \mathrm{supp}(m_{q,l})$ and either $j_1=j_{k+1}$ or $j_{k+1}=t+3$;
\item[(ix)] $x_{t+4}\in\mathrm{supp}(m_{q,l})$. }
\end{itemize}
{ In (viii) we have $j_{k+1}\neq t+4$ because $x_{t+5}\notin \mathrm{supp}(m_{q,l})$. Since we will check the case $x_{t+4}\in\mathrm{supp}(m_{q,l})$ in (ix), we may suppose in Case~(vii) that $j_{k+1}\neq t+4$ and in Case~(viii) that $j_{k+1}\neq t+3$. Moreover,
having $j_1\in \{j_{k+1}-1, j_{k+1}\}$ in (vii) we have either $x_{j_1}\in \mathrm{supp}(m_{q,l})$ with $\mathrm{supp}(\x{l})= \{x_{j_1}\}$ or $x_{j_1+1}\in \mathrm{supp}(m_{q,l})$ with $\mathrm{supp}(\x{l})= \{x_{j_1}, x_{j_1+1}\}$. In Case~(viii), since $j_1=j_{k+1}$ we get $x_{j_1+1}\in \mathrm{supp}(m_{q,l})$ and $\mathrm{supp}(\x{l})=\{x_{j_1}\}$. So combining the two cases (vii) and (viii), we will end up with the following ones:
\begin{itemize}
\item[(vii')]
$x_{j_1}\in \mathrm{supp}(m_{q,l})$
and $\mathrm{supp}(\x{l})=\{x_{j_{1}}\}$;
\item[(viii')] $x_{j_1+1}\in \mathrm{supp}(m_{q,l})$
and either $\mathrm{supp}(\x{l})=\{x_{j_{1}}\}$ or $\mathrm{supp}(\x{l})=\{x_{j_{1}}, x_{j_1+1}\}$.
\end{itemize}}
So we replace (vii), (viii) with (vii'), (viii'). Now we prove the assertion in the above five cases. Note that since $m_q\neq m_l$ there exists $1\leq b\leq t+5$ such that $\deg_{m_q}x_b<\deg_{m_l}x_b$. Let $B$ be the set of all such $b$.
\medspace
Case~(v): Since $j_1\in \{t+3,t+4\}$ we have $\mathrm{supp}(\x{l})\subseteq \{x_{t+3},x_{t+4}\}$ and hence $j_{k+1}\in \{t+3,t+4\}$ implies that $\mathrm{supp}(m_{q,l})\subseteq\{x_1,x_2,x_{t+3}{, x_{t+4}}\}$ by (\ref{pizza}). Since $x_1\in \mathrm{supp}(m_{q,l})$, there exists $e'_i\in\e{q}$ with $e'_i=\{1,b'_i\}$ for some $3\leq b'_i\leq t+2$. Since $x_{b'_i}\notin\{x_1,x_2,x_{t+3},x_{t+4}\}$ we have $x_{b'_i}|m_l$, and it follows from $\mathrm{supp}(\x{l})\subseteq \{x_{t+3},x_{t+4}\}$ that $x_{b'_i}\centernot|\x{ l}$.
Thus there exists $e_j\in\e{l}$ with $b'_i\in e_j$. If $d$ is another vertex of $e_j$, then $d> 1$ because $\e{l}\cap\e{q}=\emptyset$. Set $m_p:=\x{e'_i}m_l/\x{e_j}$ and so we are done in this case. This case together with (i) imply that if $x_1|m_{q,l}$, then we have the desired $m_p$. Suppose in the remaining cases that $x_1\notin\mathrm{supp}(m_{q,l})$.
\medspace
Case~(vi): Since $x_{j_{k+1}}\in \mathrm{supp}(\x{l})\subseteq \{x_3,x_{t+4}\}$ we have $\mathrm{supp}(m_{q,l})\subseteq\{x_2,x_{3},x_4{, x_{t+4}}\}$ by (\ref{pizza}).
Since $x_2\in \mathrm{supp}(m_{q,l})$, there exists $e'_i\in\e{q}$ with $e'_{i}=\{2,b'_{i}\}$ for some $4\leq b'_{i}\leq t+3$.
First suppose $m_l\in L_s$. Then $m_l=x_{t+5}^s\x{l}$ and $x_{b'_{i}}\centernot|m_l$ because $\mathrm{supp}(\x{l})\subseteq \{x_3,x_{t+4}\}$. Thus $x_{b'_{i}}|m_{q,l}$ and therefore $b'_{i}=4$. In case $x_{t+4}|\x l$ we set $m_p:=x_{4}m_l/x_{t+4}$. Otherwise, we have $m_l=x_{t+5}^sx_3^{s+1}$, and hence we can set $m_p:=x_{t+5}^sx_3^{s}x_{4}$.
Suppose now that $m_l\notin L_s$.
There exists $e_{j}=\{a_{j},b_{j}\}\in\e{l}$ with $a_{j}\neq 2$. If $a_{j}\neq 1$ then set $e:=\{2,b_{j}\}$ and $m_p:=\x{e}m_l/\x{e_{j}}$. Suppose that $e_{j}=\{1,b_{j}\}$ for all $e_{j}\in\e{l}$ with $2\notin e_{j}$.
If $x_{b'_{i}}|m_{q,l}$, then $b'_{i}=4$.
If $x_{t+4}|\x{l}$, then set $m_p:=x_{4}m_l/x_{t+4}$. Otherwise, we have $\mathrm{supp}(\x{l})=\{x_3\}$. Then $b_{j}=3$ for all $e_j = \{1, b_j\} \in \e l$, since otherwise we get a contradiction to the fact that we have considered the smallest presentation for $m_l$.
If $x_2|m_l$, then there exists $e_{r}=\{2,b_{r}\}\in \e{l}$, where $b_{r}>4$ because $\e{q}\cap\e{l}=\emptyset$. Then set $m_p:=\x{e'_{i}}m_l/\x{e_{r}}$. If $x_2\centernot|m_l$, then $m_l=x_{t+5}^kx_1^{s-k}x_3^{s+1}$. Thus $3\in B$ because $3\notin e'_i = \{2, 4\}\in \e q$. If $1\in B$, then set $e:=\{1,4\}$ and $m_{q'}=\x{e}m_q/\x{e'_i}$, and if $1\notin B$, then there exists $e'_f=\{1,b'_f\}\in \e{q}$ with $b'_f>3$ because $e_j=\{1, 3\} \in \e l$ and $\e q \cap \e l = \emptyset$, so set $m_{q'}=\x{e_j}m_q/\x{e'_f}$ and use induction.
Now suppose $x_{b'_i}\centernot|m_{q,l}$. Since $\mathrm{supp}(\x{l})\subseteq\{x_3,x_{t+4}\}$, we have $e_r:=\{1,b'_i\}\in \e{l}$ because all edges in $\e{l}$ contain either $1$ or $2$ and $\e{q}\cap\e{l}=\emptyset$. Since $b'_i>3$ we have $\mathrm{supp}(\x{l})=\{x_{t+4}\}$ because otherwise we get a smaller presentation for $m_l$.
If $1\notin B$, then $e'_f:=\{1,b'_f\}\in \e{q}$ with $b'_f\neq b'_i$ because $\e{q}\cap\e{l}=\emptyset$. If $x_{b'_f}|m_{q,l}$, then set $m_p:=x_{b'_f}m_l/x_{t+4}$. If $x_{b'_f}\centernot|m_{q,l}$, then we have $\{2,b'_f\}\in \e{l}$ since $\e{q}\cap\e{l}=\emptyset$ and each edge of $m_l$ contains either $1$ or $2$. Now we have $\{1,b'_i\}, \{2,b'_f\}\in \e{l}$ and $\{1,b'_f\}, \{2,b'_i\}\in \e{q}$ which contradict the fact that both $\e{q}, \e{l}$ are the smallest multisets associated to $m_q,m_l$ respectively. Suppose $1\in B$. Then we can use inductive hypothesis for $m_{q'}:=\x{e_r}m_q/\x{e'_i}$.
So we are done in this case too. By settling this case and according to Case~(ii) we have the desired $m_p$ if $x_2|m_{q,l}$. Suppose in the remaining cases that $\mathrm{supp}(m_{q,l})\subseteq\{x_{j_{k+1}}, x_{{j_{k+1}}+1}{ , x_{t+4}}\}$.
\medspace
Case~(vii'):
Since $\mathrm{supp}(\x l)=\{x_{j_1}\}$, we have $\mathrm{supp}(m_{q,l})\subseteq\{x_{j_1}, x_{j_1+1}, { x_{t+4}\}}$.
There exists $e'_{i_1}=\{j_1,c\}\in\e{q}$ for some $c$ because otherwise $\deg_{m_q}x_{j_1}\leq k'+1\leq k+1=\deg_{m_l}x_{j_1}$, a contradiction. It follows that $j_1\leq t+3$.
Since $x_c\centernot|m_{q,l}$ we have $x_c|m_l$ and since $c\neq j_1$, there exists $e_{i_2}=\{c,d\}\in \e{l}$ for some $d$. By $\e{q}\cap\e{l}=\emptyset$, we have $d\neq j_1$. Note that $d<j_1$, because otherwise interchanging $x_d$ in $\x{e_{j}}$ and $x_{j_1}$ in $\x{l}$ will result in a smaller presentation of $m_l$.
Suppose $d\!<\!j_1\!-\!1$. { If $\!\{d,\!j_1\}\!\neq\!\{1,\!t+3\}$,} then set $e\!:=\!\{d,\!j_1\}$ and $m_p\!:=\!\x{e}\x{e'_{i_1}}\!m_l/(\x{e_{i_2}}\!x_{j_1}\!x_{t+5})$.
{ If $\{d, j_1\}= \{1,t+3\}$, then in case $1\in B$, set $m_{q'}:=\x{e_{i_2}}m_q/\x{e'_{i_1}}<m_q$ and use induction. In case $1\notin B$, there exists $e'_{i_3}=\{1,f\}\in \e{q}$ for some $f\notin\{ c, 1,2,t+3, t+4\}$ which implies that $x_f\centernot| m_{q,l}$. Hence $x_f|m_l$ and thus $e_{i_4}=\{f,g\}\in \e{l}$ for some $g\neq 1$. If $g=t+3$, then $\{c,t+3\}, \{1,f\}\in \e q$ and $\{1,c\}, \{f,t+3\}\in \e l$ which contradict the fact that $\e{q}$ and $\e{l}$ both have minimum presentations. Thus $g\neq t+3$. If $g\neq t+2$, set $e:=\{g,t+3\}$ and $m_p:=\x{e}\x{e'_{i_1}}\x{e'_{i_3}}m_l/(\x{e_{i_2}}\x{e_{i_4}}x_{t+3}x_{t+5})$, and if $g=t+2$, set $e=\{1,t+2\}, e'=\{f, t+3\}$ and $m_p:=\x{e}\x{e'}\x{e'_{i_1}}m_l/(\x{e_{i_2}}\x{e_{i_4}}x_{t+3}x_{t+5})$.
}
Suppose $d=j_1-1$. If $j_1-1\in B$, then set $m_{q'}:=\x{e_{i_2}}m_q/\x{e'_{i_1}}$ and use induction hypothesis. If $j_1-1\notin B$, then $x_{j_1-1}|m_q$. If $x_{j_1-1}|\x{q}$, then interchanging $x_{j_1-1}$ in $\x{q}$ and $x_{j_1}$ in $\x{e'_{i_1}}$ will give a smaller presentation of $m_q$, a contradiction. Therefore there exists $e'_{i_3}=\{j_1-1, f\}\in \e{q}$ for some $f\neq c$. Then $f\notin \{j_1-2,j_1-1, j_1\}$. If $x_f|m_{q,l}$, then $f={j_1+1}$. Set $m_p:=\x{e'_{i_1}}\x{e'_{i_3}}m_l/(\x{e_{i_2}}x_{j_1}x_{t+5})$. Suppose $x_f\centernot|m_{q,l}$.
It follows that there exists $e_{i_4}=\{f, g\}\in \e{l}$ for some $g\neq j_1-1$. We have $f\neq {j_1+1}$ because otherwise, it follows that $g\notin \{j_1,j_1-1,{j_1+1}\}$ which implies that $m_l$ will have a smaller presentation by interchanging $x_f$ in $\x{e_{i_4}}$ and $x_{j_1}$ in $\x{l}$. Since $f\neq j_1+1$ we have either $g< j_1-1$ or $g=j_1$ because otherwise one can interchange $x_g$ in $\x{e_{i_4}}$ and $x_{j_1}$ in $\x{l}$ to get a smaller presentation. If $g=j_1$, then we have $\{j_1,c\}, \{j_1-1, f\}\in \e{q}$ and $\{j_1-1, c\}, \{j_1,f\}\in \e{l}$ which contradict the fact that both $\e{q}, \e{l}$ are the smallest multisets associated to $m_q, m_l$, respectively. Thus $g<j_1-1$. { In case $\{g, j_1\}\neq \{1,t+3\}$, set $e:=\{g, j_1\}$ and $m_p:=\x{e'_{i_1}}\x{e'_{i_3}}\x{e}m_l/(\x{e_{i_2}}\x{e_{i_4}}x_{j_1}x_{t+5})$. In case $\{g, j_1\}= \{1,t+3\}$, set $e:=\{1,t+2\}, e':=\{f,t+3\}$ and $m_p:=\x{e'_{i_1}}\x{e}\x{e'}m_l/(\x{e_{i_2}}\x{e_{i_4}}x_{t+3}x_{t+5})$. Note that since $e'_{i_3}, e_{i_4}\in E(\bar{C})$ we have $f\notin \{1,t+2,t+3\}$ and hence $e'\in E(\bar{C})$. Thus we are done in this case too.
}
\medspace
{ In general, if $x_{j_1}\in \mathrm{supp}(m_{q,l})$, then by (\ref{pizza}) we have ${j_1}\in \{j_{k+1}, j_{k+1}+1, t+4\}$. But ${j_1}\leq j_{k+1}$ implies that $j_1=j_{k+1}$ and hence
$\mathrm{supp}{(\x l)}=\{x_{j_1}\}$. Thus by the discussion in (vii') we are done if $x_{j_1}\in \mathrm{supp}(m_{q,l})$. Therefore, we may assume in the rest of the proof that
$x_{j_1}\notin\mathrm{supp}(m_{q,l})$.}
\medspace
Case~(viii'):
Since $\mathrm{supp}(\x l)\subseteq\{x_{j_1},x_{j_1+1}\}$, we have $\mathrm{supp}(m_{q,l})\subseteq\{x_{j_1+1},x_{j_1+2},x_{t+4}\}$ by (\ref{pizza}). Moreover, if $j_1+1= t+4$, then $\mathrm{supp}(m_{q,l})=\{x_{t+4}\}$ and since this case will be discussed in (ix) we may assume here that $j_1+1\neq t+4$.
Assume first that $x_{j_1}|\x{\e q}$. Then $e'_{i_1}=\{j_1,c\}\in\e{q}$ for some $c$. Since $c\notin\{j_1, j_1+1\}$, we have $e_{i_2}=\{c,d\}\in\e l$ for some $d$. Since $\e{q}\cap \e l=\emptyset$ and since we have the smallest presentation of $m_l$ we have $d<j_1$. { If $\{d, j_1+1\}\neq \{1,t+3\}$, then }
set $e:=\{d,j_1+1\}$ and $m_p:=\x{e'_{i_1}}\x{e}m_l/(\x{e_{i_2}}x_{j_1}x_{t+5})$. { If $\{d, j_1+1\}=\{1,t+3\}$, then set $e:=\{c, t+3\}, e':=\{1,t+2\}$ and $m_p:=\x{e}\x{e'}m_l/(\x{e_{i_2}}x_{t+2}x_{t+5})$. Note that $e:=\{c, t+3\}\in E(\bar{C})$ because $c\notin \{1,t+2,t+3\}$.
Now assume $x_{j_1}\centernot|\x{\e q}$. Suppose $x_{j_1+1}\centernot|\x{\e q}$. Since $x_{j_1+1}|m_{q,l}$ we have $x_{j_1+1}|\x q$. If $x_{j_1+2}|\x{\e q}$, then $e:=\{a,j_1+2\}\in \e q$ for some $a\notin\{j_1+1, j_1+2, \overline{j_1+3}\}$. Since $x_{j_1}\centernot|\x{\e q}$ we have $a\neq j_1$ too. Thus $\{a, j_1+1\}\in E(\bar{C})$ and hence one can interchange $x_{j_1+2}$ in $\x e$ and $x_{j_1+1}$ in $\x q$ to get a smaller presentation for $m_q$, a contradiction. Thus $x_{j_1+2}\centernot|\x{\e q}$. Therefore $\mathrm{supp}(\x{\e q})\cap \mathrm{supp}(m_{q,l})=\emptyset$ which implies that
$\x{\e q}|\x{\e l}$ and since $k'\leq k$ we have $\e q=\e l$. But $\e{q}\cap \e l=\emptyset$ implies that $\e q=\emptyset =\e l$. Thus $m_q, m_l\in L_s$. If $m_l=x_{t+5}^sx_{j_1}^{s+1}$, then set $m_p:=x_{t+5}^sx_{j_1}^{s}x_{j_1+1}$. Suppose $m_l= x_{t+5}^sx_{j_1}^{r}x_{j_1+1}^{s+1-r}$, where $0<r<s+1$. By the order of the generators of $L_s$ we have
$m_q\neq x_{t+5}^sx_{j_1+1}^{s+1}$. Since $\mathrm{supp}(m_q)\subseteq \mathrm{supp}(m_{q,l})\cup \mathrm{supp}(m_l)\subseteq \{x_{j_1}, x_{j_1+1},x_{j_1+2},x_{t+4}, x_{t+5}\}$ and $m_q<m_l$, we have $x_{j_1}\in \mathrm{supp}(m_q)$. If $x_{j_1+2}\in \mathrm{supp}(m_q)$ ($x_{t+4}\in \mathrm{supp}(m_q)$ resp.), then set $m_{q'}:=x_{j_1+1}m_q/x_{j_1+2}$ ($m_{q'}:=x_{j_1+1}m_q/x_{t+4}$ resp.) and use induction. Otherwise, we have $m_q=x_{t+5}^sx_{j_1}^{r'}x_{j_1+1}^{s+1-r'}$ with $0<r'<r$ because $m_q<m_l$. Set $m_{q'}:=x_{j_1}m_q/x_{j_1+1}$ and use induction.
}
\iffalse
If $m_q=x_{t+5}^sx_{j_1+1}^{s+1}$, then since $m_q<m_l$ we have $m_l=x_{t+5}^sx_{i}^{s+1}$ for some $i>j_1+1$ which is a contradiction since $x_{j_1}|m_l$. Therefore $m_q\neq x_{t+5}^sx_{j_1+1}^{s+1}$. According to the order of the generators, since $\e q=\e l$
we have $\x{q}<_{lex}\x{l}$. Since $\mathrm{supp}(\x{l})\subseteq \{x_{j_1}, x_{j_1+1}\}$ and since $x_{j_1+1}|m_{q,l}=\x{q}/\gcd(\x{q},\x{l})$, it follows that $\deg_{\x{q}}x_{j_1+1}>\deg_{\x{l}}x_{j_1+1}$ which implies that $\deg_{\x{q}}x_{j_1}<\deg_{\x{l}}x_{j_1}$ because $k'=k$. But since $\mathrm{supp}(m_{q,l})=\{x_{j_1+1}\}$ we conclude that $\mathrm{supp}(\x{q})\subseteq \{x_{j_1}, x_{j_1+1}\}$ and hence we get a contradiction with $\x{q}<_{lex}\x{l}$.
\fi
Suppose now that $x_{j_1+1}|\x{\e q}$. There exists $e'_{i_1}=\{j_1+1, c\}\in\e{q}$ for some $c$. Since $x_c\notin \mathrm{supp}(\x l)\cup\mathrm{supp}(m_{q,l})$, there exists $e_{i_2}=\{c,d\}\in \e{l}$ for some $d$ with $d\neq j_1+1$. If $d>j_1+1$, then set $m_p:=\x{e'_{i_1}}m_l/\x{e_{i_2}}$. If $d<j_1-1$, then set $e:=\{d,j_1\}$ which is an edge of $\bar{C}$ because $j_1\neq t+3$.
Now set $m_p:=\x{e}\x{e'_{i_1}}m_l/(\x{e_{i_2}}x_{j_1}x_{t+5})$. If $d=j_1-1$, then $c\neq j_1-1$ and hence one can set $e:=\{j_1-1,j_1+1\}, e':=\{c,j_1\}$ which are edges of $\bar{C}$. Now set $m_p:=\x{e}\x{e'}m_l/(\x{e_{i_2}}x_{j_1}x_{t+5})$. Suppose $d=j_1$ which implies that $c\neq j_1-1$. It follows that $x_{j_1}\centernot|\x{q}$ because otherwise, interchanging $x_{j_1}$ in $\x{q}$ and $x_{j_1+1}$ in $\x{e'_{i_1}}$ will give a smaller presentation for $m_q$. Since $x_{j_1}\centernot|\x{\e q}$ we have
${j_1}\in B$. Set $m_{q'}=\x{e_{i_2}}m_q/\x{e'_{i_1}}$ and use induction. So we are done also in this case.
\iffalse
If ${j_1}\notin B$, then $x_{j_1}|m_q$. Note that $x_{j_1}\centernot|\x{q}$ because otherwise, interchanging $x_{j_1}$ in $\x{q}$ and $x_{j_1+1}$ in $\x{e'_{i_1}}$ will give a smaller presentation for $m_q$. Therefore $e'_{i_3}=\{j_1,f\}\in \e{q}$ for some $f\neq c$. Since $f\notin \{j_1+1, j_1\}$ we have $e_{i_4}=\{f,g\}\in \e l$ for some $g$ with $g<j_1$ because $\e{q}\cap\e{l}=\emptyset$ and if $g>j_1$, then there will be a smaller presentation for $m_l$ by interchanging $x_{g}$ in $\x{e_{i_4}}$ and $x_{j_1}$ in $\x{l}$, a contradiction.
{ In case $\{g, j_1+1\}\neq \{1,t+3\}$, set $e:=\{g, j_1+1\}$ and $m_p:=\x{e}\x{e'_{i_3}}m_l/(\x{e_{i_4}}x_{j_1}x_{t+5})$, and otherwise set $e:=\{1,t+2\}$ and $m_p:= \x{e}\x{e'_{i_1}}\x{e'_{i_3}}m_l/(\x{e_{i_2}}\x{e_{i_4}}x_{t+2}x_{t+5})$. So we are doe also in this case.
\fi
{ Now by (iii), (iv), (vii') and (viii') we may assume in the remaining case that $\mathrm{supp}(m_{q,l})=\{x_{t+4}\}$.}
\medspace
Case~(ix): If there exists $b\in B\setminus\{1,2, t+5\}$, then setting $m_{q'}:=x_bm_q/x_{t+4}$, we are done by induction hypothesis.
Suppose $B\subseteq \{1,2,t+5\}$.
If $1\in B$, then $m_q, m_l\notin L_s$ and there exists $e'_{i}=\{a'_{i},b'_{i}\}\in\e{q}$ with $1\notin e'_{i}$. If $a'_{i}\neq 2$ set $e:=\{1,a'_{i}\}$, else if $b'_{i}\neq t+3$ set $e:=\{1,b'_{i}\}$. Then $m_{q'}:=\x{e}m_q/\x{e'_{i}}<m_q$ and $m_{q',l}$ divides $m_{q,l}$. By induction hypothesis we are done. Suppose for all $e'_{i}\in\e{q}$ with $1\notin e'_{i}$ one has $e'_{i}=\{2,t+3\}$. Since $\deg_{m_q}{x_1}<\deg_{m_l}{x_1}$ there exists $e_j=\{1,b_{j}\}\in\e{l}$ with $b_{j}\neq 1,2,t+3$ and since $\e{q}\cap\e{l}=\emptyset$ we have $\deg_{\x{e_q}}{x_{b_{j}}}=0$ for all such $b_{j}$. Since $b_{j}\notin B$, we must have $x_{b_{j}}|\x{q}$. If $b_{j}\neq 3$ by interchanging $x_{b_{j}}$ in $\x{q}$ and $x_{t+3}$ in $\x{e'_{i}}$ we get a smaller presentation of $m_q$ which is a contradiction. Thus $b_{j}=3$. Set $m_{q'}:=\x{e_j}x_{t+3}m_q/(\x{e'_{i}}x_3)$. Then $m_{q'}<m_q$ and $m_{q',l}|m_{q,l}$ and so we are done by induction hypothesis.
Now assume $1\notin B$ and $2\in B$. Again $m_q, m_l\notin L_s$ and there exists $e'_{i}=\{a'_{i},b'_{i}\}\in\e{q}$ with $2\neq a'_{i}<b'_{i}$. If $e'_{i}=\{1,b'_{i}\}$ for all $e'_i\in \e{q}$ with $2\notin e'_{i}$, then $x_1|m_{q,l}$ because otherwise $s-k'=\deg_{m_q}x_1+\deg_{m_q}x_2<\deg_{m_l}x_1+\deg_{m_l}x_2\leq s-k$ and hence $k'>k$, a contradiction. But $x_1\in \mathrm{supp}(m_{q,l})=\{x_{t+4}\}$ is also a contradiction. Therefore there exists $e'_{i}=\{a'_{i},b'_{i}\}\in\e{q}$ with $2< a'_{i}<b'_{i}$.
Set $e:=\{2,b'_{i}\}$ and $m_{q'}:=\x{e}m_q/\x{e'_{i}}$. Then $m_{q'}<m_q$ and $m_{q',l}|m_{q,l}$ and so we are again done by induction hypothesis.
{ Suppose now that
$B=\{t+5\}$. Since $\mathrm{supp}(m_{q,l})=\{x_{t+4}\}$ we have
\begin{eqnarray}\label{simple}
\x{\e{q}}(\x{q}/m_{q,l})=\x{\e{l}}\x{l}.
\end{eqnarray}
Since $\deg_{m_q}x_{t+5}<\deg_{m_l}x_{t+5}$, we have $k'<k$. Moreover, $\deg_{m_l}x_1+\deg_{m_l}x_2\leq s-k$ which implies by (\ref{simple}) that
at most $s-k$ edges of $\e q$ contain either $1$ or $2$.
Now we choose $s-k+1$ edges $e''_1,\ldots, e''_{s-k+1} \in \e{q}$ with the property that no edge in $\e{q}\setminus \{e''_1,\ldots, e''_{s-k+1}\}$ contains $1$ or $2$. Set $\e{p}:=\{{e''_1}, \ldots, e''_{s-k+1}\}$ and $\x{p}:=x_{t+4}\x{\e{q}}\x{q}/(\x{\e p}m_{q,l})$. It follows from the choice of $\e p$ that neither $x_1$ nor $x_2$ divides $\x p$.
Hence $m_p:=\x{\e{p}}x_{t+5}^{k-1}\x{p}\in L_{k-1}$. Since by (\ref{simple}), we have $\x{\e p}\x p/x_{t+4}=\x{\e{l}}\x{l}$, we conclude that $m_{p,l}=x_{t+4}$.
This completes the proof.}
\end{proof}
Now we use Theorem~\ref{linquo of square} to show that $I(\overline{G_{(b)}})^s$ has a linear resolution for $s\geq 2$ when $G_{(b)}$ does not have an induced $4$-cycle, that is, when the number of its vertices is at least $7$.
\begin{thm}\label{I^k has lin res}
Let $G$ be a graph on $n\geq 7$ vertices such that $G_{(b)}$ is its complement. Let $I:=I(G)$ be the edge ideal of $G$. Then $I^s$ has a linear resolution for $s\geq 2$.
\end{thm}
\begin{proof}
By construction, $n=t+5$, where $t\geq 2$. We apply Lemma~\ref{Dao} for $I^s$ and $x:=x_{t+5}$ to prove the assertion. To this end, we first compute $\mathrm{reg}(I^s+( x_{t+5}))$. Setting $C=1-2-\cdots-(t+3)-1$, for all $s\geq 1$ we have
\begin{align*}
I^s+( x_{t+5})&=(I(\bar{C})+(x_{t+5})(x_3,\ldots, x_{t+4}))^s+(x_{t+5})
=I(\bar{C})^s+(x_{t+5}).
\end{align*}
Since $x_{t+5}$ does not appear in the support of the generators of $I(\bar{C})^s$, we have
$$\mathrm{reg}(I^s+(x_{t+5}))=\mathrm{reg}(I(\bar{C})^s+(x_{t+5}))=\mathrm{reg}(I(\bar{C})^s).$$
It is proved in \cite[Corollary~4.4]{BHZ} that $I(\bar{C})^s$ has a linear resolution for $s\geq 2$ when $|C|>4$, which is the case here because $t+3>4$. Thus $\mathrm{reg}(I^s+(x_{t+5}))= 2s$ for $s\geq 2$.
On the other hand $(I^s:x_{t+5})$ has linear quotients by Theorem~\ref{linquo of square}, and it is seen in its proof that $(I^s:x_{t+5})$ is generated in degree $2s-1$ for $s\geq 1$. Therefore, $(I^s:x_{t+5})$ has a $(2s-1)$-linear resolution for $s\geq 1$, see \cite[Theorem~8.2.1]{HHBook}, and hence $\mathrm{reg}((I^s:x_{t+5}))=2s-1$. Now using Lemma~\ref{Dao} we have $\mathrm{reg}(I^s)\leq 2s$ for $s\geq 2$. Since $I^s$ is generated in degree $2s$ we conclude that $I^s$ has a linear resolution for $s\geq 2$.
\end{proof}
% arXiv:2001.03938: Edge ideals with almost maximal finite index and their powers
% arXiv:0910.0265: The centers of gravity of the associahedron and of the permutahedron are the same
\section{Introduction.}\label{se:Intro}
In 1963, J.~Stasheff discovered the associahedron~\cite{stasheff,stasheff2}, a polytope of great importance in algebraic topology.
The associahedron in $\mathbb R^n$ is a simple
$(n-1)$-dimensional convex polytope. The classical realization of the associahedron given by
S.~Shnider and S.~Sternberg in \cite{shnider_sternberg} was
completed by J.~L.~Loday in 2004~\cite{loday}. Loday gave a
combinatorial algorithm to compute the integer coordinates of the
vertices of the associahedron, and showed that it can be obtained
naturally from the classical permutahedron of dimension $n-1$.
F.~Chapoton observed that the centers of gravity of the
associahedron and of the permutahedron are the same \cite[Section 2.11]{loday}.
As far as we know, this property of Loday's realization has never been proved.
\smallskip
In 2007, the first author and C.~Lange gave a family of realizations
of the associahedron that contains the classical realization of the
associahedron. Each of these realizations is also obtained naturally
from the classical permutahedron \cite{realisation1}. They
conjectured that for any of these realizations, the center of
gravity coincides with the center of gravity of the permutahedron. In
this article, we prove this conjecture to be true.
\smallskip
The associahedron fits in a larger family of polytopes, {\em
generalized associahedra}, introduced by S.~Fomin and A.~Zelevinsky
in \cite{fomin_zelevinsky} within the framework of cluster algebras
(see \cite{chapoton_fomin_zelevinsky,realisation2} for their
realizations).
In 1994, R.~Bott and C.~Taubes discovered the
cyclohedron~\cite{bott_taubes} in connection with knot theory. It
was rediscovered independently by R.~Simion \cite{simion}. In
\cite{realisation1}, the first author and C.~Lange also gave a
family of realizations for the cyclohedron, starting with the
permutahedron of type $B$.
We also show that the centers of gravity of the cyclohedron and of
the permutahedron of type $B$ are the same.
The article is organized as follows. In \S\ref{se:1}, we first
recall the realization of the permutahedron and how to compute its
center of gravity. Then we compute the center of gravity of Loday's
realization of the associahedron. In order to do this, we partition
its vertices into isometry classes of triangulations, which
parameterize the vertices, and we show that the center of gravity
for each of those classes is the center of gravity of the
permutahedron.
In \S\ref{se:2}, we show that the computation of the center of
gravity of any of the realizations given by the first author and
C.~Lange is reduced to the computation of the center of gravity of the classical
realization of the associahedron. We do the same for the cyclohedron in \S\ref{se:3}.
We are grateful to Carsten Lange for allowing us to use some of the pictures he made in~\cite{realisation1}.
\section{Center of gravity of the classical permutahedron and
associahedron}\label{se:1}
\subsection{The permutahedron}
Let $S_n$ be the symmetric group acting on the set
$[n]=\{1,2,\dots,n\}$. The {\em permutahedron} $\Perm(S_n)$ is the
classical $(n-1)$-dimensional simple convex polytope defined as the
convex hull of the points
$$
M(\sigma)=(\sigma(1),\sigma(2),\dots, \sigma (n))\in\mathbb R^n,\qquad \forall \sigma\in S_n.
$$
The {\em center of gravity} (or {\em isobarycenter}) is the unique point $G$ of $\mathbb R^n$ such that
$$
\sum_{\sigma\in S_n} \vect{GM(\sigma)}=\vect 0.
$$
Since the permutation $w_0:i\mapsto n+1-i$ preserves $\Perm(S_n)$,
we see, by sending $M(\sigma)$ to
$$
M(w_0\sigma)=(n+1-\sigma(1),n+1-\sigma(2),\dots, n+1-\sigma (n)),
$$
that the center of gravity is $
G=(\frac{n+1}{2},\frac{n+1}{2},\dots,\frac{n+1}{2}). $
\subsection{Loday's realization}
We present here the realization of the associahedron given by
J.~L.~Loday \cite{loday}. However, instead of using planar binary
trees, we use triangulations of a regular polygon to parameterize
the vertices of the associahedron (see \cite[Remark
1.2]{realisation1}).
\subsubsection{Triangulations of a regular polygon} Let $P$ be a
regular $(n+2)$-gon in the Euclidean plane with vertices
$A_0,A_1,\dots,A_{n+1}$ in counterclockwise direction. A {\em
triangulation of $P$} is a set of $n-1$ noncrossing diagonals of $P$.
Let us be more explicit. A {\em triangle
of $P$} is a triangle whose vertices are vertices of $P$. Therefore
a side of a triangle of $P$ is either an edge or a diagonal of $P$.
A triangulation of $P$ is then a collection of $n$ distinct
triangles of $P$ with noncrossing sides. Any triangle of a triangulation $T$ can be described as $A_i A_j A_k$ with $0\leq i<j<k\leq n+1$, and each $1\leq j\leq n$ corresponds to a unique triangle $\Delta_j(T)$ in $T$ because the sides of the triangles in $T$ are noncrossing.
Therefore we write $T=\{\Delta_1(T),\dots, \Delta_n(T)\}$ for a
triangulation $T$, where $\Delta_j(T)$ is the unique triangle in $T$
with vertex $A_j$ and the two other vertices $A_i$ and $A_k$
satisfying $0\leq i<j<k\leq n+1$.
Denote by $\SOT_{n+2}$ the set of triangulations of $P$.
\subsubsection{Loday's realization of the associahedron}
Let $T$ be a triangulation of $P$. The {\em weight} $\delta_j(T)$ of
the triangle $\Delta_j(T)=A_i A_jA_k$, where $i<j<k$, is the positive
number
$$
\delta_j(T)=(j-i)(k-j).
$$
The weight $\delta_j(T)$ of $\Delta_j(T)$ is the product of two numbers: the number of
boundary edges of $P$ between $A_i$ and $A_j$ passing through vertices
indexed by numbers smaller than $j$, and the number of boundary
edges of $P$ between $A_j$ and $A_k$ passing through vertices
indexed by numbers larger than $j$.
The {\em classical associahedron} $\Ass(S_n)$ is obtained as the
convex hull of the points
$$
M(T)=(\delta_1(T),\delta_2(T),\dots, \delta_n(T))\in \mathbb
R^n,\quad\forall T\in\SOT_{n+2}.
$$
We are now able to state our first result.
\begin{thm}\label{thm:Main} The center of gravity of $\Ass(S_n)$ is $G=(\frac{n+1}{2},\frac{n+1}{2},\dots,\frac{n+1}{2})$.
\end{thm}
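Theorem~\ref{thm:Main} can also be checked by brute force for small $n$. The sketch below (our addition) enumerates all triangulations of the $(n+2)$-gon by the standard recursion, reads off the Loday coordinates $\delta_j$, and averages them exactly:

```python
from fractions import Fraction

def triangulations(verts):
    """All triangulations of the convex polygon on the given (sorted) vertex
    labels, each returned as a list of triangles (i, j, k) with i < j < k."""
    if len(verts) < 3:
        return [[]]
    a, b = verts[0], verts[-1]
    result = []
    for m in range(1, len(verts) - 1):           # apex of the triangle on edge a-b
        for left in triangulations(verts[:m + 1]):
            for right in triangulations(verts[m:]):
                result.append(left + [(a, verts[m], b)] + right)
    return result

def loday_point(T, n):
    """Loday coordinates (delta_1, ..., delta_n) of a triangulation T."""
    delta = [0] * (n + 1)
    for (i, j, k) in T:                           # j is the middle vertex, 1 <= j <= n
        delta[j] = (j - i) * (k - j)
    return tuple(delta[1:])

def associahedron_centroid(n):
    pts = [loday_point(T, n) for T in triangulations(list(range(n + 2)))]
    return tuple(sum(Fraction(p[i]) for p in pts) / len(pts) for i in range(n))

# Theorem 1: every coordinate of the centroid equals (n + 1)/2.
for n in range(1, 6):
    assert associahedron_centroid(n) == tuple(Fraction(n + 1, 2) for _ in range(n))
```

For instance, for $n=3$ the five vertices are $(1,2,3)$, $(3,1,2)$, $(1,4,1)$, $(2,1,3)$, $(3,2,1)$, whose centroid is $(2,2,2)$.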
In order to prove this theorem, we need to study closely a certain partition of the vertices of $P$.
\subsection{Isometry classes of triangulations}\label{se:centergravity}
As $P$ is a regular $(n+2)$-gon, its isometry group is the dihedral
group ${\mathcal D}_{n+2}$ of order $2(n+2)$. So ${\mathcal D}_{n+2}$ acts on the set
$\SOT_{n+2}$ of all triangulations of $P$: for $f\in{\mathcal D}_{n+2}$ and
$T\in\SOT_{n+2}$, we have $f\cdot T\in\SOT_{n+2}$. We denote by
$\mathcal O (T)$ the orbit of $T\in\SOT_{n+2}$ under the action of
${\mathcal D}_{n+2}$.
We know that $G$ is the center of gravity of $\Ass(S_n)$ if and only if
$$
\sum_{T\in\SOT_{n+2}} \vect{GM(T)} =\vect 0.
$$
As the orbits of the action of ${\mathcal D}_{n+2}$ on $\SOT_{n+2}$ form a partition of the set $\SOT_{n+2}$, it is sufficient to compute
$$
\sum_{T\in\mathcal O} \vect{GM(T)}
$$
for any orbit $\mathcal O$. The following key observation implies
directly Theorem~\ref{thm:Main}.
\begin{thm}\label{thm:key} Let $\mathcal O$ be an orbit of the action of ${\mathcal D}_{n+2}$ on $\SOT_{n+2}$, then $G$ is the center of gravity of $\{M(T)\,|\, T\in\mathcal O\}$. In particular, $
\sum_{T\in\mathcal O} \vect{GM(T)}=\vect 0.
$
\end{thm}
Before proving this theorem, we need to prove the following result.
\begin{prop}\label{prop:canonique} Let $T\in\SOT_{n+2}$ and $j\in [n]$, then
$\displaystyle{\sum_{f\in {\mathcal D}_{n+2}} \delta_j(f\cdot T) =
(n+1)(n+2)}$.
\end{prop}
\begin{proof}
We prove this proposition by induction on $j\in [n]$. For any triangulation $T'$, we denote by
$a_j(T')<j<b_j(T')$ the indices of the vertices of $\Delta_j(T')$. Let $H$ be the group of rotations
in ${\mathcal D}_{n+2}$. It is well-known that for any reflection $s\in {\mathcal D}_{n+2}$, the classes $H$ and
$sH$ form a partition of ${\mathcal D}_{n+2}$ and that $|H|=n+2$. We consider
also the unique reflection $s_k\in{\mathcal D}_{n+2}$ which maps $A_x$ to
$A_{n+3+k-x}$, where the indices are taken modulo
$n+2$. In particular, $s_k(A_0)=A_{n+3+k}=A_{k+1}$, $s_k(A_1)=A_k$,
$s_k(A_{k+1})=A_{n+2}=A_0$, and so on.
\smallskip
\noindent {\bf Base step $j=1$:} We know that $a_1(T')=0$ for any
triangulation $T'$, hence the weight of $\Delta_1(T')$ is
$\delta_1(T')=(1-0)(b_1(T')-1)=b_1(T')-1$.
The reflection $s_0\in {\mathcal D}_{n+2}$ maps $A_x$
to $A_{n+3-x}$ (where $A_{n+2}=A_0$ and $A_{n+3}=A_1$). In other
words, $s_0(A_0)=A_1$ and $s_0(\Delta_1(T'))$ is a triangle in
$s_0\cdot T'$. Since
$$
s_0(\Delta_1( T'))= s_0(A_0A_1A_{b_1(T')})= A_0A_1A_{n+3-b_1(T')}
$$
and $0<1<n+3-b_1(T')$, $s_0(\Delta_1(T'))$ has to be
$\Delta_1(s_0\cdot T')$. In consequence, we obtain that
$$
\delta_1(T')+\delta_1(s_0\cdot T')= (b_1(T')-1)+(n+3-b_1(T')-1)=n+1,
$$
for any triangulation $T'$. Therefore
$$
\sum_{f\in {\mathcal D}_{n+2}} \delta_1(f\cdot T) = \sum_{g\in H}\big(
(\delta_1(g\cdot T)+\delta_1(s_0\cdot (g\cdot T))\big)= |H|
(n+1)=(n+1)(n+2),
$$
proving the initial case of the induction.
\smallskip
\noindent {\bf Inductive step:} Assume that, for a given $1\leq j<n$, we
have
$$
\sum_{f\in {\mathcal D}_{n+2}}\delta_j(f\cdot T) = (n+1)(n+2).
$$
We will show that
$$
\sum_{f\in {\mathcal D}_{n+2}}\delta_{j+1}(f\cdot T) = \sum_{f\in
{\mathcal D}_{n+2}}\delta_j(f\cdot T).
$$
Let $r\in H\subseteq {\mathcal D}_{n+2}$ be the unique rotation mapping
$A_{j+1}$ to $A_{j}$. In particular, $r(A_0)=A_{n+1}$. Let $T'$ be a
triangulation of $P$. We have two cases:
\smallskip
\noindent {\bf Case 1.} If $a_{j+1}(T')>0$ then
$a_{j+1}(T')-1<j<b_{j+1}(T')-1$ are the indices of the vertices of
the triangle $r(\Delta_{j+1}(T'))$ in $r\cdot T'$. Therefore, by
uniqueness, $r(\Delta_{j+1}(T'))$ must be $\Delta_j(r\cdot T')$. Thus
\begin{eqnarray*}
\delta_{j+1}(T')&=&(b_{j+1}(T')-(j+1))(j+1-a_{j+1}(T'))\\
&=&\big((b_{j+1}(T')-1)-j\big)(j-(a_{j+1}(T')-1))\\
&=&\delta_j(r\cdot T').
\end{eqnarray*}
In other words:
\begin{eqnarray}\label{equ:1}
\sum_{{f\in {\mathcal D}_{n+2},\atop a_{j+1}(f\cdot T)\not =
0}}\delta_{j+1}(f\cdot T) & =& \sum_{{f\in {\mathcal D}_{n+2},\atop
a_{j+1}(f\cdot T)\not = 0}}\delta_j(r\cdot(f\cdot T))\\\nonumber &
=& \sum_{{g\in {\mathcal D}_{n+2},\atop b_{j}(g\cdot T)\not =
n+1}}\delta_j(g\cdot T).
\end{eqnarray}
\smallskip
\noindent {\bf Case 2.} If $a_{j+1}(T')=0$, then
$j<b_{j+1}(T')-1<n+1$ are the indices of the vertices of
$r(\Delta_{j+1}(T'))$, which is therefore not $\Delta_j(r\cdot T')$:
it is $\Delta_{b_{j+1}(T')-1}(r\cdot T')$. To handle this, we need
to use the reflections $s_j$ and $s_{j-2}$.
On one hand, observe that $j+1<n+3+j-b_{j+1}(T')$ because
$b_{j+1}(T')<n+1$.
Therefore
$$
s_j(\Delta_{j+1}(T'))=A_{j+1}A_0
A_{n+3+j-b_{j+1}(T')}=\Delta_{j+1}(s_j\cdot T').
$$
Hence
\begin{eqnarray*}
\delta_{j+1}(T')+\delta_{j+1}(s_j\cdot
T')&=&(j+1)(b_{j+1}(T')-(j+1))\\
&&+(j+1)(n+3+j-b_{j+1}(T')-(j+1))\\
&=&(j+1)(n+1-j).
\end{eqnarray*}
On the other hand, consider the triangle $\Delta_j(r\cdot T')$ in
$r\cdot T'$. Since
$$
r(\Delta_{j+1}(T'))=A_{j}A_{b_{j+1}(T')-1}A_{n+1}=\Delta_{b_{j+1}(T')-1}(r\cdot T')
$$
is in $r\cdot T'$, $[j,n+1]$ is a diagonal in $r\cdot T'$. Hence
$b_j(r\cdot T')=n+1$. Thus $\Delta_j(r\cdot T')=A_{a_j(r\cdot
T')}A_j A_{n+1}$ and $\delta_j(r\cdot T')=(j-a_j(r\cdot
T'))(n+1-j)$. We have $s_{j-2}(A_j)=A_{n+1}$, $s_{j-2}(A_{n+2})=A_j$
and $s_{j-2}(A_{a_j(r\cdot T')})=A_{n+1+j-a_j(r\cdot
T')}=A_{j-a_j(r\cdot T')-1}$ since $a_j(r\cdot T')<j$. Therefore
$s_{j-2}(\Delta_j(r\cdot T'))=A_{j-a_j(r\cdot
T')-1}A_jA_{n+1}=\Delta_j(s_{j-2}r\cdot T')$ and
$\delta_j(s_{j-2}r\cdot T')=(a_j(r\cdot T')+1)(n+1-j)$. Finally we
obtain that
\begin{eqnarray*}
\delta_{j}(r\cdot T')+\delta_{j}(s_{j-2}r\cdot
T')&=&(j-a_j(r\cdot T'))(n+1-j)+(a_j(r\cdot
T')+1)(n+1-j)\\
&=&(j+1)(n+1-j).
\end{eqnarray*}
Since $\{H,s_k H\}$ forms a partition of ${\mathcal D}_{n+2}$ for any $k$,
we have
\begin{eqnarray}\label{equ:2}
\sum_{{f\in {\mathcal D}_{n+2},\atop a_{j+1}(f\cdot T)=0}}\delta_{j+1}(f\cdot T) & =& \sum_{{f\in H,\atop a_{j+1}(f\cdot T)=0}}\big(\delta_{j+1}(f\cdot T) +\delta_{j+1}(s_j f\cdot T)\big)\\ \nonumber
&=& \sum_{{f\in H,\atop a_{j+1}(f\cdot T)=0}} (j+1)(n+1-j)\\ \nonumber
&=& \sum_{{rf\in H,\atop b_{j}(rf\cdot T)=n+1}}\big(\delta_{j}(rf\cdot T) +\delta_{j}(s_{j-2} rf\cdot T)\big),\ \textrm{since }r\in H\\ \nonumber
&=& \sum_{{g\in H,\atop b_{j}(g\cdot T)=n+1}}\delta_{j}(g\cdot T).
\end{eqnarray}
\smallskip
\noindent We conclude the induction by adding
Equations~(\ref{equ:1}) and (\ref{equ:2}).
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:key}] We have to prove that
$$
\vect u=\sum_{T'\in\mathcal O(T)} \vect{GM(T')}=\vect 0.
$$
Denote by ${\textnormal{Stab}}(T')=\{f\in{\mathcal D}_{n+2}\,|\, f\cdot T'=T'\}$ the stabilizer of $T'$. Then
$$
\sum_{f\in {\mathcal D}_{n+2}} M(f\cdot T) = \sum_{T'\in \mathcal O(T)} |{\textnormal{Stab}}(T')| M(T').
$$
Since $|{\textnormal{Stab}}(T')|=|{\textnormal{Stab}}(T)|=\frac{2(n+2)}{|\mathcal O(T)|}$ for every $T'\in\mathcal O(T)$, we have
$$
\sum_{f\in {\mathcal D}_{n+2}} M(f\cdot T) = \frac{2(n+2)}{|\mathcal O(T)|} \sum_{T'\in \mathcal O(T)} M(T') .
$$
Therefore by Proposition~\ref{prop:canonique} we have for any $i\in [n]$
\begin{equation}\label{equ:3}
\sum_{T'\in \mathcal O(T)} \delta_i(T')= \frac{|\mathcal
O(T)|}{2(n+2)}(n+1)(n+2)=\frac{|\mathcal O(T)|(n+1)}{2}.
\end{equation}
Denote by $O$ the origin of $\mathbb R^n$; then
$\vect{OM}=M$ for any point $M$ of $\mathbb R^n$. Finally, by
Chasles' relation we have
$$
\vect u=\sum_{T'\in\mathcal O(T)} \vect{GM(T')}= \sum_{T'\in\mathcal O(T)} (M(T')-G) =\sum_{T'\in\mathcal O(T)} M(T') - |\mathcal O(T)| G.
$$
So the $i^{th}$ coordinate of $\vect u$ is $ \sum_{T'\in \mathcal
O(T)} \delta_i(T')- \frac{|\mathcal O(T)|(n+1)}{2}=0 $, hence $\vect
u =\vect 0$ by (\ref{equ:3}).
\end{proof}
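Theorem~\ref{thm:key} can also be checked by computer for small $n$. The following Python sketch (all function names are ours, not from the paper) identifies the vertices $A_0,\dots,A_{n+1}$ of the $(n+2)$-gon with $0,\dots,n+1$, enumerates all triangulations, lets the dihedral group ${\mathcal D}_{n+2}$ act, and verifies that every orbit has center of gravity $(\frac{n+1}{2},\dots,\frac{n+1}{2})$.

```python
from fractions import Fraction

def triangulations(verts):
    # All triangulations of the convex polygon on the cyclically ordered
    # vertex tuple `verts`; each one is a frozenset of 3-element frozensets.
    if len(verts) < 3:
        return [frozenset()]
    out = []
    for i in range(1, len(verts) - 1):
        # The edge (verts[0], verts[-1]) lies in a triangle with apex verts[i].
        tri = frozenset({frozenset({verts[0], verts[i], verts[-1]})})
        for left in triangulations(verts[:i + 1]):
            for right in triangulations(verts[i:]):
                out.append(tri | left | right)
    return out

def delta(T, j):
    # delta_j(T) = (j - a)(b - j) for the unique triangle A_a A_j A_b, a < j < b.
    for tri in T:
        a, mid, b = sorted(tri)
        if mid == j:
            return (j - a) * (b - j)
    raise ValueError("no triangle with middle vertex %d" % j)

def orbit(T, m):
    # Orbit of T under the dihedral group D_m acting on the vertices 0..m-1.
    maps = [lambda v, k=k: (v + k) % m for k in range(m)]   # rotations
    maps += [lambda v, k=k: (k - v) % m for k in range(m)]  # reflections
    return {frozenset(frozenset(f(v) for v in tri) for tri in T) for f in maps}

def centers_ok(n):
    # Check that every D_{n+2}-orbit of triangulations of the (n+2)-gon has
    # center of gravity ((n+1)/2, ..., (n+1)/2).
    m, seen = n + 2, set()
    for T in triangulations(tuple(range(m))):
        if T in seen:
            continue
        O = orbit(T, m)
        seen |= O
        for j in range(1, n + 1):
            if Fraction(sum(delta(Tp, j) for Tp in O), len(O)) != Fraction(n + 1, 2):
                return False
    return True
```

The number of triangulations produced is the Catalan number $C_n$, which gives an independent sanity check on the enumeration.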
\section{Center of gravity of generalized associahedra of type $A$ and $B$}\label{se:2}
\subsection{Realizations of associahedra} As a Coxeter group (of type $A$), $S_n$ is generated by the simple
transpositions $\tau_i=(i,\, i+1)$, $i\in [n-1]$. The Coxeter graph
$\Gamma_{n-1}$ is then
\begin{figure}[h]
\psfrag{t1}{$\tau_{1}$}
\psfrag{t2}{$\tau_{2}$}
\psfrag{t3}{$\tau_{3}$}
\psfrag{tn}{$\tau_{n-1}$}
\psfrag{dots}{$\ldots$}
\begin{center}
\includegraphics[width=8cm]{ACG.eps}
\end{center}
\end{figure}
Let $\ADG$ be an orientation of $\Gamma_{n-1}$. We distinguish
between {\em up} and {\em down} elements of $[n]$: an element $i\in
[n]$ is {\em up} if the edge $\{\tau_{i-1}, \tau_i\}$ is directed
from $\tau_i$ to $\tau_{i-1}$ and {\em down} otherwise (we set $1$
and $n$ to be down). Let $\Do_\ADG$ be the set of down elements and
let $\Up_\ADG$ be the set of up elements (possibly empty).
The notion of up and down induces a labeling of the $(n+2)$-gon $P$
as follows. Label $A_0$ by $0$. Then the vertices of $P$ are, in
counterclockwise direction, labeled by the down elements in
increasing order, then by $n+1$, and finally by the up elements in
decreasing order. An example is given in
Figure~\ref{fig:example_labelling}.
\begin{figure}[h]
\psfrag{0}{$0$}
\psfrag{1}{$1$}
\psfrag{2}{$2$}
\psfrag{3}{$3$}
\psfrag{4}{$4$}
\psfrag{5}{$5$}
\psfrag{6}{$6$}
\psfrag{s1}{$\tau_{1}$}
\psfrag{s2}{$\tau_{2}$}
\psfrag{s3}{$\tau_{3}$}
\psfrag{s4}{$\tau_{4}$}
\begin{center}
\begin{minipage}{0.95\linewidth}
\begin{center}
\includegraphics[height=6cm]{example_labelling.eps}
\end{center}
\caption[]{A labeling of a heptagon that corresponds to
the orientation $\ADG$ of $\Gamma_{4}$ shown inside
the heptagon. We have $\Do_{\ADG} = \{1, 3, 5 \}$ and
$\Up_{\ADG} = \{ 2,4\}$.}
\label{fig:example_labelling}
\end{minipage}
\end{center}
\end{figure}
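The labeling rule can be made concrete with a short sketch (the function name and calling convention are ours): given $n$ and the set of up elements, which by convention is a subset of $\{2,\dots,n-1\}$ since $1$ and $n$ are down, it returns the label of each vertex $A_0,\dots,A_{n+1}$ in counterclockwise order.

```python
def polygon_labels(n, up):
    # Labels of the vertices A_0, ..., A_{n+1} of the (n+2)-gon: A_0 gets 0,
    # then come the down elements in increasing order, then n+1, then the up
    # elements in decreasing order.  `up` must be a subset of {2, ..., n-1},
    # since 1 and n are down by convention.
    down = [i for i in range(1, n + 1) if i not in up]
    return [0] + down + [n + 1] + sorted(up, reverse=True)
```

For the orientation of Figure~\ref{fig:example_labelling} ($n=5$, up elements $\{2,4\}$) this returns $[0, 1, 3, 5, 6, 4, 2]$: for instance $A_2$ is labeled $3$ and $A_6$ is labeled $2$.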
We recall here a construction due to Hohlweg and
Lange~\cite{realisation1}. Consider $P$ labeled according to a
fixed orientation~$\ADG$ of $\Gamma_{n-1}$. For each $l\in [n]$ and
any triangulation $T$ of $P$, there is a unique triangle
$\Delta^\ADG_l(T)$ whose vertices are labeled by $k<l<m$. Now,
count the number of edges of $P$ between $k$ and $l$ whose vertices
are labeled by numbers smaller than $l$. Then multiply it by the
number of edges of $P$ between $l$ and $m$ whose vertices are
labeled by numbers greater than $l$. The result $\omega_l^\ADG(T)$
is called the {\em weight} of $\Delta_l^\ADG(T)$. The injective map
\begin{align*}
M_{\ADG}: \SOT_{n+2} &\longrightarrow \R^n \\
T &\longmapsto (x^\ADG_1(T),x^\ADG_2(T),\dots,x^\ADG_n(T))
\end{align*}
that assigns explicit coordinates to a triangulation is defined as follows:
\[
x^\ADG_j(T) := \begin{cases}
\omega_j^\ADG (T) & \textrm{if } j\in\Do_\ADG\\
n+1-\omega_j^\ADG(T) & \textrm{if } j\in\Up_\ADG.
\end{cases}
\]
Hohlweg and Lange showed that the convex hull $\Ass_\ADG(S_n)$
of~$\{M_{\ADG}(T)\,|\,T\in \SOT_{n+2}\}$ is a realization of the
associahedron with integer coordinates \cite[Theorem
1.1]{realisation1}. Observe that if the orientation $\ADG$ is {\em
canonical}, that is, if $\Up_\ADG=\emptyset$, then
$\Ass_\ADG(S_n)=\Ass(S_n)$.
The key is now to observe that the weight of $\Delta_{j}^\ADG(T)$ in
$T$ is precisely the weight of $\Delta_j (T')$ where $T'$ is a
triangulation in the orbit of $T$ under the action of ${\mathcal D}_{n+2}$, as
stated in the next proposition.
\begin{prop}\label{prop:weight} Let $\ADG$ be an orientation of $\Gamma_{n-1}$. Let $j\in [n]$
and let $A_l$ be the vertex of $P$ labeled by $j$. There is an
isometry $r_j^\ADG\in \mathcal D_{n+2}$ such that:
\begin{enumerate}
\item[(i)] $r_j^\ADG(A_l)=A_j$;
\item[(ii)] the label of the vertex $A_k$ is smaller than $j$ if and
only if the index $i$ of the vertex $A_i=r_j^\ADG(A_k)$ is smaller
than $j$.
\end{enumerate}
Moreover, for any triangulation $T$ of $P$ we have
$\omega_j^\ADG(T)=\delta_j(r_j^\ADG\cdot T).$
\end{prop}
\begin{proof} If $\ADG$ is the canonical orientation, then $r_j^\ADG$ is
the identity, and the proposition is straightforward. In the
following proof, we suppose therefore that $\Up_\ADG\not=\emptyset$.
\smallskip
\noindent Case 1: Assume that $j\in\Do_\ADG$. Let $\alpha$ be the
greatest up element smaller than $j$ and let $A_{\alpha+1}$ be the
vertex of $P$ labeled by $\alpha$. Then by construction of the
labeling, $A_{\alpha}$ is labeled by a larger number than $j$, and
$[A_{\alpha},A_{\alpha+1}]$ is the
unique edge of $P$ such that $A_{\alpha+1}$ is labeled by a
smaller number than $j$. Denote by $\Lambda_\ADG$ the path from
$A_l$ to $A_{\alpha+1}$ passing through vertices of $P$ labeled by
smaller numbers than $j$. This is the path going from $A_l$ to
$A_{\alpha+1}$ in clockwise direction on the boundary of $P$.
By construction, $A_k\in \Lambda_\ADG$ if and only if the label of
$A_k$ is smaller than $j$. In other words, the path $\Lambda_\ADG$
consists of {\em all} vertices of $P$ labeled by smaller numbers
than $j$. Therefore the cardinality of $\Lambda_\ADG$ is $j+1$.
Consider $r_j^\ADG$ to be the rotation mapping $A_l$ to $A_j$.
Recall that a rotation is an isometry preserving the orientation of
the plane. Then the path $\Lambda_\ADG$, which is obtained by
walking on the boundary of $P$ from $A_l$ to $A_{\alpha+1}$ in
clockwise direction, is sent to the path $\Lambda$ obtained by
walking on the boundary of $P$ in clockwise direction from $A_j$ and
going through $j+1=|\Lambda_\ADG|$ vertices of $P$. Therefore
$\Lambda=\{A_0,A_1,\dots, A_j\}$, thus proving the first claim of
our proposition in this case.
\smallskip
\noindent Case 2: Assume that $j\in \Up_\ADG$. The proof is almost
the same as in the case of a down element. Let $\alpha$ be the
greatest down element smaller than $j$ and let $A_{\alpha}$ be the
vertex of $P$ labeled by $\alpha$. Then by construction of the
labeling, $A_{\alpha+1}$ is labeled by a larger number than $j$, and
$[A_{\alpha},A_{\alpha+1}]$ is the unique edge of $P$ such that
$A_{\alpha}$ is labeled by a smaller number than $j$. Denote by
$\Lambda_\ADG$ the path from $A_l$ to $A_{\alpha}$ passing through
vertices of $P$ labeled by smaller numbers than $j$. This is the
path going from $A_{\alpha}$ to $A_l$
in clockwise direction on the boundary of $P$.
As above, $A_k\in \Lambda_\ADG$ if and only if the label of $A_k$ is
smaller than $j$. In other words, the path $\Lambda_\ADG$ consists
of all the vertices of $P$ labeled by smaller numbers than $j$.
Therefore, again, the cardinality of $\Lambda_\ADG$ is $j+1$.
Let $r_j^\ADG$ be the reflection mapping $A_\alpha$ to $A_0$ and
$A_{\alpha+1}$ to $A_{n+1}$. Recall that a reflection is an isometry
reversing the orientation of the plane. Then the path
$\Lambda_\ADG$, which is obtained by walking on the boundary of $P$
from $A_\alpha$ to $A_{l}$ in clockwise direction, is sent to the
path $\Lambda$ obtained by walking on the boundary of $P$ in
clockwise direction from $A_\alpha$ and going through
$j+1=|\Lambda_\ADG|$ vertices of $P$. Therefore
$\Lambda=\{A_0,A_1,\dots, A_j\}$. Hence $r_j^\ADG(A_l)$ is sent on
the final vertex of the path $\Lambda$ which is $A_j$, proving the
first claim of our proposition.
\smallskip
Thus it remains to show that for a triangulation $T$ of $P$ we have
$\omega_j^\ADG(T)=\delta_j(r_j^\ADG\cdot T).$ We know that
$\Delta_j^\ADG(T)=A_k A_l A_m$ such that the label of $A_k$ is
smaller than $j$, which is smaller than the label of $A_m$. Write
$A_a=r_j^\ADG(A_k)$ and $A_b=r_j^\ADG(A_m)$. By properties (i) and (ii) above, $a<j<b$ and therefore
$$
r_j^\ADG(\Delta_j^\ADG(T))= A_a A_jA_b=\Delta_j(r_j^\ADG\cdot T).
$$
So $(j-a)$ is the number of edges of $P$ between $A_l$ and $A_k$
whose vertices are labeled by numbers smaller than $j$, and $(b-j)$
is the number of edges of $P$ between $A_l$ and $A_m$ whose
vertices are labeled by numbers larger than $j$. So
$\omega_j^\ADG(T)=(j-a)(b-j)=\delta_j(r_j^\ADG\cdot T)$.
\end{proof}
\begin{cor}\label{cor:Canon} For any orientation $\ADG$ of the Coxeter graph of $S_n$ and for any
$j\in [n]$, we have
$$
\sum_{f\in {\mathcal D}_{n+2}} x^\ADG_j(f\cdot T) = (n+1)(n+2).
$$
\end{cor}
\begin{proof} Let $r_j^\ADG\in \mathcal D_{n+2}$ be as in Proposition~\ref{prop:weight}.
Suppose first that $j\in \Up_\ADG$. Then
\begin{eqnarray*}
\sum_{f\in {\mathcal D}_{n+2}} x^\ADG_j(f\cdot T) &=&2(n+2)(n+1)-\sum_{f\in {\mathcal D}_{n+2}} \omega_j^\ADG(f\cdot T)\\
&=&2(n+2)(n+1)-\sum_{f\in {\mathcal D}_{n+2}} \delta_j(r_j^\ADG f\cdot T),\ \textrm{by Proposition~\ref{prop:weight}} \\
&=&2(n+2)(n+1)-\sum_{g\in {\mathcal D}_{n+2}} \delta_j(g\cdot T),\ \textrm{since $r_j^\ADG\in\mathcal D_{n+2}$} \\
&=& (n+1)(n+2),\ \textrm{by Proposition~\ref{prop:canonique}.}
\end{eqnarray*}
If $j\in \Do_\ADG$, the result follows from a similar calculation.
\end{proof}
\subsection{Center of gravity of associahedra}
\begin{thm}\label{thm:Main2} The center of gravity of $\Ass_\ADG(S_n)$ is $G=(\frac{n+1}{2},\frac{n+1}{2},\dots,\frac{n+1}{2})$ for any orientation $\ADG$.
\end{thm}
Following precisely the same arguments as in
\S\ref{se:centergravity}, it suffices to prove the following
generalization of Theorem~\ref{thm:key}.
\begin{thm}\label{thm:keyGenAss} Let $\mathcal O$ be an orbit of the action of ${\mathcal D}_{n+2}$ on $\SOT_{n+2}$, then $G$ is the center of gravity of $\{M_\ADG(T)\,|\, T\in\mathcal O\}$. In particular, $\sum_{T\in\mathcal O} \vect{GM_\ADG(T)}=\vect 0. $
\end{thm}
\begin{proof} The proof is entirely similar to the proof of Theorem~\ref{thm:key}, using Corollary~\ref{cor:Canon} instead of Proposition~\ref{prop:canonique}.
\end{proof}
\section{Center of gravity of the cyclohedron}\label{se:3}
\subsection{The type $B$-permutahedron}
The hyperoctahedral group $W_n$ is defined by $W_n=\{\sigma\in S_{2n}\,|\, \sigma(i)+\sigma(2n+1-i)=2n+1,\ \forall i\in[n]\}$. The {\em type $B$-permutahedron} $\Perm(W_n)$ is the simple $n$-dimensional convex polytope defined as the convex hull of the points
$$
M(\sigma)=(\sigma(1),\sigma(2),\dots, \sigma (2n))\in\mathbb R^{2n},\qquad \forall \sigma\in W_n.
$$
As $w_0=(2n,2n-1,\dots,3,2,1)\in W_n$, we deduce from the same
argument as in the case of $\Perm(S_n)$ that the center of gravity
of $\Perm(W_n)$ is
$$G=(\frac{2n+1}{2},\frac{2n+1}{2},\dots,\frac{2n+1}{2}).$$
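For small $n$ this center of gravity can be confirmed by brute force. The sketch below (names ours) generates $W_n$ from its defining condition inside $S_{2n}$, encoding $\sigma$ as the tuple of all $2n$ values, and averages the coordinates of the points $M(\sigma)$.

```python
from fractions import Fraction
from itertools import permutations

def hyperoctahedral(n):
    # W_n = { sigma in S_{2n} : sigma(i) + sigma(2n+1-i) = 2n+1 for all i },
    # with sigma encoded as the tuple (sigma(1), ..., sigma(2n)).
    m = 2 * n
    for s in permutations(range(1, m + 1)):
        if all(s[i] + s[m - 1 - i] == m + 1 for i in range(n)):
            yield s

def center_of_gravity(n):
    # Coordinatewise average of the points M(sigma) for sigma in W_n.
    pts = list(hyperoctahedral(n))
    return [Fraction(sum(p[i] for p in pts), len(pts)) for i in range(2 * n)]
```

For example, `center_of_gravity(2)` returns the point with all four coordinates equal to $5/2$, in line with the formula for $G$.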
\subsection{Realizations of the cyclohedron}
An orientation~$\ADG$ of~$\Gamma_{2n-1}$ is {\em symmetric} if the
edges $\{\tau_i,\tau_{i+1}\}$ and $\{\tau_{2n-i-1},\tau_{2n-i}\}$
are oriented in \emph{opposite directions} for all~$i\in [2n-2]$.
There is a bijection between symmetric orientations
of~$\Gamma_{2n-1}$ and orientations of the Coxeter graph of
$W_n$ (see \cite[\S1.2]{realisation1}). A triangulation $T\in \SOT_{2n+2}$ is {\em centrally
symmetric} if~$T$, viewed as a triangulation of $P$, is centrally
symmetric. Let $\SOT_{2n+2}^B$ be the set of the centrally symmetric
triangulations of $P$. In \cite[Theorem 1.5]{realisation1} the
authors show that for any symmetric orientation $\ADG$ of
$\Gamma_{2n-1}$, the convex hull $\Ass_\ADG(W_{n})$ of
$\{M_{\ADG}(T)\,|\,T\in \SOT^B_{2n+2}\}$ is a realization of the
cyclohedron with integer coordinates.
Since the action of ${\mathcal D}_{2n+2}$ preserves central symmetry, the full orbit of a centrally symmetric triangulation provides vertices of $\Ass_\ADG(W_{n})$, and vice-versa; hence Theorem~\ref{thm:keyGenAss} implies the following corollary.
\begin{cor}\label{cor:Main} Let $\ADG$ be a symmetric orientation of $\Gamma_{2n-1}$, then the center of gravity of $\Ass_\ADG(W_n)$ is $G=(\frac{2n+1}{2},\frac{2n+1}{2},\dots,\frac{2n+1}{2})$.
\end{cor}
% arXiv:0910.0265, ``The centers of gravity of the associahedron and of the permutahedron are the same''
% arXiv:1511.08653
\title{Longest increasing subsequences and log concavity}
\begin{abstract}
Let $\pi$ be a permutation of $[n]=\{1,\dots,n\}$ and denote by $\ell(\pi)$ the length of a longest increasing subsequence of $\pi$. Let $\ell_{n,k}$ be the number of permutations $\pi$ of $[n]$ with $\ell(\pi)=k$. Chen conjectured that the sequence $\ell_{n,1},\ell_{n,2},\dots,\ell_{n,n}$ is log concave for every fixed positive integer $n$. We conjecture that the same is true if one is restricted to considering involutions and we show that these two conjectures are closely related. We also prove various analogues of these conjectures concerning permutations whose output tableaux under the Robinson-Schensted algorithm have certain shapes. In addition, we present a proof of Deift that part of the limiting distribution is log concave. Various other conjectures are discussed.
\end{abstract}
\section{Introduction}
Let ${\mathfrak S}_n$ be the symmetric group of all permutations of $[n]=\{1,2,\dots,n\}$. We will view $\pi=\pi_1 \pi_2 \dots \pi_n\in{\mathfrak S}_n$ as a sequence (one-line notation). Let $\ell(\pi)$ denote the length of a longest increasing subsequence of $\pi$. For example, if $\pi=4172536$ then $\ell(\pi)=4$ because the subsequence $1256$ is increasing and there is no longer such sequence. Define
$$
L_{n,k}=\{\pi\in{\mathfrak S}_n\ :\ \ell(\pi)=k\} \qmq{and} \ell_{n,k}=\#L_{n,k}
$$
where the hash symbol denotes cardinality.
The statistic $\ell(\pi)$ plays an important role in a number of combinatorial contexts, for example in famous theorems of Erd\H{o}s and Szekeres~\cite{es:cpg} and of Schensted~\cite{sch:lid}.
The problem of determining the distribution of $\ell(\pi)$ in a random permutation $\pi$ of length $n$ was solved in a tour de force by Baik, Deift, and Johansson~\cite{bdj:dli}.
The history of this problem is described by Aldous and Diaconis~\cite{aldous}; see also the recently published book by Romik \cite{romik} on the subject.
The statistic $\ell(\pi)$ is not only interesting from a combinatorial or algorithmic point of view: it is also connected with biology via the Ulam distance, which is used to model evolutionary distance in DNA research~\cite{ula:ipb}. The {\em Ulam distance} between $\pi,\sigma\in{\mathfrak S}_n$, denoted $U(\pi,\sigma)$, is the minimum number of steps needed to obtain $\sigma$ from $\pi$, where a step consists of taking an element of a sequence and placing it somewhere else in the sequence. If $\id$ is the identity permutation then it is easy to see that $U(\id,\pi)=n-\ell(\pi)$. Indeed, one can fix a longest increasing subsequence of $\pi$ and then move all the elements of $\id$ which are not in that subsequence to the appropriate places to form $\pi$.
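The identity $U(\id,\pi)=n-\ell(\pi)$ is easy to test by computer: compute $\ell(\pi)$ with a quadratic dynamic program and compare against a brute-force breadth-first search over remove-and-reinsert moves (a sketch; function names are ours).

```python
from collections import deque

def lis_len(seq):
    # Length of a longest increasing subsequence, O(n^2) dynamic program.
    best = []
    for i, x in enumerate(seq):
        best.append(1 + max((best[j] for j in range(i) if seq[j] < x), default=0))
    return max(best, default=0)

def ulam_to_identity(pi):
    # Minimum number of remove-and-reinsert moves from pi to the identity,
    # found by breadth-first search (feasible only for very small n).  The
    # moves are invertible, so this also equals U(id, pi).
    pi, target = tuple(pi), tuple(sorted(pi))
    dist, queue = {pi: 0}, deque([pi])
    while queue:
        p = queue.popleft()
        if p == target:
            return dist[p]
        for i in range(len(p)):
            rest = p[:i] + p[i + 1:]
            for j in range(len(p)):
                q = rest[:j] + (p[i],) + rest[j:]
                if q not in dist:
                    dist[q] = dist[p] + 1
                    queue.append(q)
```

For the running example, `lis_len((4, 1, 7, 2, 5, 3, 6))` is $4$, so the Ulam distance from that permutation to the identity is $7-4=3$.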
A sequence of real numbers $l_1,l_2,\dots,l_n$ is said to be {\em log concave} if $l_{k-1} l_{k+1}\le l_k^2$ for all $k\in [n]$. Here we use the convention that $l_0=l_{n+1}=0$. Log concave sequences appear often in algebra, combinatorics, and geometry; see the survey articles of Stanley~\cite{sta:lus} and Brenti~\cite{bre:lus}. It is interesting to note that there are many other ways to define distance in molecular biology, and these sequences are typically either known to
be log concave, or conjectured to be log concave. See the book \cite{genome} for a collection of examples.
Our main object of study is the following conjecture which appeared in an unpublished manuscript of William Chen from 2008.
\begin{conj}[\cite{che:lqc}]
\label{ell_nk}
For any fixed $n$, the sequence
$$
\ell_{n,1}, \ell_{n,2},\dots,\ell_{n,n}
$$
is log concave.
\end{conj}
We have verified this conjecture for $n\le 50$ by computer and will give other evidence for its truth below.
Let $\mathfrak I_n$ denote the set of involutions in ${\mathfrak S}_n$, i.e., those permutations whose square is the identity. Also define
$$
I_{n,k}=\{\pi\in\mathfrak I_n\ :\ \ell(\pi)=k\} \qmq{and} i_{n,k}=\#I_{n,k}.
$$
We conjecture the following.
\begin{conj}
\label{i_nk}
For any fixed $n$, the sequence
$$
i_{n,1}, i_{n,2},\dots,i_{n,n}
$$
is log concave.
\end{conj}
Again, this conjecture has been verified for $n\le 50$. We will show that these two conjectures are closely related.
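Both conjectures can be checked for small $n$ without enumerating permutations: anticipating the facts about the Robinson-Schensted correspondence recalled in the next section, $\ell_{n,k}=\sum_{\lambda\vdash n,\,\lambda_1=k}(f^\lambda)^2$ and $i_{n,k}=\sum_{\lambda\vdash n,\,\lambda_1=k}f^\lambda$, where $f^\lambda$ is the number of standard Young tableaux of shape $\lambda$, computable by the hook length formula. A sketch (names ours):

```python
from math import factorial

def partitions(n, maxpart=None):
    # Partitions of n as weakly decreasing tuples.
    maxpart = n if maxpart is None else maxpart
    if n == 0:
        yield ()
        return
    for first in range(min(n, maxpart), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def num_syt(shape):
    # Number of standard Young tableaux of `shape`, by the hook length formula.
    n, hooks = sum(shape), 1
    for i, row in enumerate(shape):
        for j in range(row):
            arm = row - j - 1
            leg = sum(1 for r in shape[i + 1:] if r > j)
            hooks *= arm + leg + 1
    return factorial(n) // hooks

def ell(n, k):
    # Number of permutations of [n] with longest increasing subsequence k.
    return sum(num_syt(lam) ** 2 for lam in partitions(n) if lam[0] == k)

def inv(n, k):
    # Number of involutions of [n] with longest increasing subsequence k.
    return sum(num_syt(lam) for lam in partitions(n) if lam[0] == k)

def log_concave(seq):
    # l_{k-1} l_{k+1} <= l_k^2 with the convention l_0 = l_{n+1} = 0.
    seq = [0] + list(seq) + [0]
    return all(seq[k - 1] * seq[k + 1] <= seq[k] ** 2 for k in range(1, len(seq) - 1))
```

For instance $\ell_{4,2}=13$, and both sequences pass the log-concavity test for every small $n$ one cares to try.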
The rest of this paper is structured as follows. In the next section we will see that there is a close connection between Conjectures~\ref{ell_nk} and~\ref{i_nk}. We will also derive another relation between sequences counting certain permutations and those counting certain involutions. Section~\ref{htr} restricts attention to permutations whose output tableaux under the Robinson-Schensted map have certain shapes. Fixed-point free involutions are considered in Section~\ref{fpf}. Baik, Deift, and Johansson~\cite{bdj:dli} proved that, with suitable scaling, the sequence in Conjecture~\ref{ell_nk} converges to the Tracy-Widom distribution as $n\rightarrow\infty$. In Section~\ref{twd}, we present a proof of Deift that this distribution is log concave for nonnegative values of the independent variable $x$. We end with more conjectures related to Conjectures~\ref{ell_nk} and~\ref{i_nk}.
\section{Involutions}
\label{inv}
To connect Conjectures~\ref{ell_nk} and~\ref{i_nk}, we will need some properties of the Robinson-Schensted correspondence. For more information about this important map, see the texts of Sagan~\cite{sag:sym} or Stanley~\cite{sta:ec2}.
Let $\lambda=(\lambda_1,\lambda_2,\dots,\lambda_k)$ be a partition of $n$, written $\lambda\vdash n$.
We denote by $\SYT \lambda$ the set of all standard Young tableaux of shape $\lambda$, and if $P\in\SYT\lambda$, then we will also write $\sh P=\lambda$.
The Robinson-Schensted map is a bijection
$$
\RS:{\mathfrak S}_n \rightarrow \bigcup_{\lambda\vdash n} (\SYT\lambda)^2,
$$
i.e., a permutation of length $n$ is identified with a pair of standard Young tableaux of size $n$ and of the same shape.
We will use the notation $\RS(\pi)=(P,Q)$. Since $\sh P =\sh Q$ we can define the {\em shape of $\pi$} to be the common shape of its output tableaux. It will also be convenient to define for pairs of permutations $\sh(\pi,\sigma)=(\sh\pi,\sh\sigma)$.
We need two important results about $\RS$, the first due to Schensted~\cite{sch:lid} and the second to Sch\"utzenberger~\cite{sch:rcs}.
\begin{thm}
\label{RS}
The map $\RS$ has the following properties.
\begin{enumerate}
\item[(1)] If $\sh\pi=(\lambda_1,\dots,\lambda_k)$ then $\ell(\pi)=\lambda_1$. Also, the length of a longest decreasing subsequence of $\pi$ is the number of cells in the first column of $\lambda$.
\item[(2)] A permutation $\pi$ is an involution if and only if $\RS(\pi)=(P,P)$ for some standard Young tableau $P$.\hfill \qed
\end{enumerate}
\eth
By (2), there is a canonical bijection between involutions and standard Young tableaux. Because of this, we will go freely back and forth between involutions and tableaux without further mention.
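Both parts of Theorem~\ref{RS} are easy to probe experimentally with a textbook implementation of row insertion (a sketch with our own function names, not the construction used later in the paper):

```python
from bisect import bisect_right

def rsk(pi):
    # Row insertion: returns the insertion tableau P and recording tableau Q.
    P, Q = [], []
    for pos, x in enumerate(pi, start=1):
        row = 0
        while True:
            if row == len(P):            # start a new row
                P.append([x])
                Q.append([pos])
                break
            r = P[row]
            i = bisect_right(r, x)       # leftmost entry strictly larger than x
            if i == len(r):              # x fits at the end of this row
                r.append(x)
                Q[row].append(pos)
                break
            x, r[i] = r[i], x            # bump and continue on the next row
            row += 1
    return P, Q

def lis_len(seq):
    # Length of a longest increasing subsequence, O(n^2) dynamic program.
    best = []
    for i, x in enumerate(seq):
        best.append(1 + max((best[j] for j in range(i) if seq[j] < x), default=0))
    return max(best, default=0)
```

On every small permutation one checks that the first row of $P$ has length $\ell(\pi)$, and that $P=Q$ exactly when $\pi$ is an involution.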
One way to prove Conjecture~\ref{ell_nk} would be to find injections $F:L_{n,k-1}\times L_{n,k+1}\rightarrow L_{n,k}^2$ for all $n,k$. Call any map with this domain and range {\em shape preserving} if
$$
\sh(\pi,\pi')=\sh(\sigma,\sigma') \implies \sh F(\pi,\pi') = \sh F(\sigma,\sigma')
$$
for all $(\pi,\pi'), (\sigma,\sigma')\in L_{n,k-1}\times L_{n,k+1}$. We will also apply this terminology to functions
$f:I_{n,k-1}\times I_{n,k+1}\rightarrow I_{n,k}^2$. Our first result shows that if one can prove Conjecture~\ref{i_nk} using a shape-preserving injection, then one gets Conjecture~\ref{ell_nk} for free.
\begin{thm}
\label{sp}
Suppose that there is a shape-preserving injection $f:I_{n,k-1}\times I_{n,k+1}\rightarrow I_{n,k}^2$ for some $n,k$. Then there is a shape-preserving injection $F:L_{n,k-1}\times L_{n,k+1}\rightarrow L_{n,k}^2$.
\eth
\begin{proof}
Given $f$, one can construct $F$ as the composition of the following maps.
\begin{align*}
(\pi,\pi')&\stackrel{RS^2}{\longmapsto} \left( (P,Q),\ (P',Q')\right)\\[5pt]
&\longmapsto \left( (P,P'),\ (Q,Q')\right)\\
&\stackrel{f^2}{\longmapsto} \left(( S,S'),\ (T,T')\right)\\[5pt]
&\longmapsto \left( (S,T),\ (S',T')\right)\\
& \hspace{-0.2cm} \stackrel{(RS^{-1})^2}{\longmapsto} \left(\sigma,\sigma'\right)
\end{align*}
Note that in applying $f^2$ we are treating each of the tableaux $P,\dots,T'$ as involutions in the manner discussed earlier. Also, the fact that $f$ is shape preserving guarantees that $\sh S =\sh T$ and $\sh S' = \sh T'$ so that one can apply the inverse Robinson-Schensted map at the last stage.
\end{proof}\medskip
There is another way to relate the log concavity of sequences such as those in Conjectures~\ref{ell_nk} and~\ref{i_nk}. Let $\Lambda$ be a set of partitions of $n$. Define
$$
L_{n,k}^\Lambda = \{\pi\in L_{n,k}\ |\ \sh\pi\in\Lambda\} \qmq{and} \ell_{n,k}^\Lambda=\# L_{n,k}^\Lambda.
$$
Similarly define $I_{n,k}^\Lambda$ and $i_{n,k}^\Lambda$. Clearly our original sequences are obtained by choosing $\Lambda$ to be all partitions of $n$. At the other extreme, there is also a nice relationship between the log concavity of these two sequences.
\begin{lem}
\label{La}
Let $\Lambda$ contain at most one partition with first row of length $k$ for all $1\le k\le n$. Then $\ell_{n,1}^\Lambda,\dots,\ell_{n,n}^\Lambda$ is log concave if and only if $i_{n,1}^\Lambda,\dots,i_{n,n}^\Lambda$ is log concave.
\end{lem}
\begin{proof}
The hypothesis on $\Lambda$ and Theorem~\ref{RS} imply that $\ell_{n,k}^\Lambda=(i_{n,k}^\Lambda)^2$. The result now follows since the square function is increasing on nonnegative values.
\end{proof}\medskip
\section{Hooks and two-rowed tableaux}
\label{htr}
If one considers permutations whose output tableaux under $\RS$ have a certain shape, it becomes easier to prove log-concavity results analogous to Conjectures~\ref{ell_nk} and~\ref{i_nk}. Using the notation developed at the end of the previous section, let $\Lambda=\hook$ be the set of all partitions of $n$ that have the shape of a hook. That is, these are the Ferrers shapes of partitions of $n$
in which the first part is $k$ and the remaining $n-k$ parts each equal $1$, where $1\le k\le n$.
These shapes will be denoted by $(k, 1^{n-k})$.
\begin{thm}
\label{hook}
For any fixed $n$, the sequences
$$
\ell_{n,1}^{\hook}, \ell_{n,2}^{\hook},\dots,\ell_{n,n}^{\hook}
\qmq{and}
i_{n,1}^{\hook}, i_{n,2}^{\hook},\dots, i_{n,n}^{\hook}
$$
are log concave.
\eth
\begin{figure} \begin{center}
\begin{tikzpicture}
\draw[gray] (0,0) grid (5,5);
\draw[ultra thick,dashed] (1,1)--(3,1)--(3,2)--(4,2)--(4,3);
\node at (.55,.7) {$(a,b)$};
\end{tikzpicture}
\caption{ The path $p=EENEN$ \label{p}}
\end{center} \end{figure}
\begin{proof}
Since the set ``$\hook$" satisfies the hypothesis of Lemma~\ref{La}, it suffices to prove the involution result. For an algebraic proof note that, by the hook formula or direct counting,
$$
i_{n,k}^{\hook}=\mbox{number of standard Young tableaux of shape $(k,1^{n-k})$}= \binom{n-1}{k-1}.
$$
It is well known and easy to prove by cancellation of factorials that this sequence of binomial coefficients is log concave.
There is also a standard combinatorial proof of the log concavity of this sequence using the technique of Lindstr\"om~\cite{lin:vri}, later used to great effect by Gessel and Viennot~\cite{gv:bdp}. We review it here for use in the proof of the next theorem. An {\em NE-lattice path}, $p$, starts at a point $(a,b)\in{\mathbb Z}^2$ and takes unit steps north and east, denoted $N$ and $E$ respectively. See Figure~\ref{p} for an illustration. There is a bijection between the set of standard Young tableaux $P$ with
$\sh P=(k,1^{n-k})$ and NE-lattice paths from $(a,b)$ to $(a+k-1,b+n-k)$ where the $i$-th step of $p$ is $E$ if and only if $i+1$ is in the first row of $P$. Our example path corresponds to the tableau
$$
P=\begin{ytableau}
1&2&3&5\\
4\\
6
\end{ytableau}\ .
$$
To construct an injection $f:I_{n,k-1}^{\hook}\times I_{n,k+1}^{\hook}\rightarrow (I_{n,k}^{\hook})^2$ we interpret a pair of involutions in the domain as a pair of lattice paths $(p,p')$ where $p$ goes from $(1,0)$ to $(k-1,n-k+1)$ and $p'$ goes from $(0,1)$ to $(k,n-k)$. The paths $p$ and $p'$ must intersect, since $p$ starts on the southwest side of $p'$ and ends on the northeast side of it. Let $z$ be the last (most northeast) point in which $p$ and $p'$ intersect. We then map this pair to $(q,q')$ where $q$ follows $p$ up to $z$ and then follows $p'$, and vice-versa for $q'$. It is now a simple matter to show that this gives a well-defined injection $f$.
\end{proof}\medskip
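The tail-swapping injection in the proof can be implemented and verified directly for small $n$ and $k$ (a sketch; names ours). Paths are strings of `E` and `N` steps; both paths start on the line $x+y=1$, so the last intersection point $z$ is reached after the same number of steps on each path.

```python
from itertools import combinations
from math import comb

def ne_paths(e, n):
    # All NE-lattice paths with e east and n north steps, as step strings.
    for pos in combinations(range(e + n), e):
        yield ''.join('E' if i in pos else 'N' for i in range(e + n))

def path_points(start, path):
    (x, y), pts = start, [start]
    for s in path:
        x, y = (x + 1, y) if s == 'E' else (x, y + 1)
        pts.append((x, y))
    return pts

def swap_tails(p, pp):
    # p starts at (1,0), pp at (0,1).  Points of a monotone path have distinct
    # coordinate sums, so the last (most northeast) intersection point z sits
    # at the same step index t on both paths; swap the tails after z.
    meets = set(path_points((1, 0), p)) & set(path_points((0, 1), pp))
    z = max(meets, key=lambda pt: pt[0] + pt[1])
    t = z[0] + z[1] - 1
    return p[:t] + pp[t:], pp[:t] + p[t:]

def injection_ok(n, k):
    # Map pairs of paths (1,0)->(k-1, n-k+1) and (0,1)->(k, n-k) by swap_tails
    # and check that distinct pairs produce distinct images.
    images, count = set(), 0
    for p in ne_paths(k - 2, n - k + 1):
        for pp in ne_paths(k, n - k - 1):
            images.add(swap_tails(p, pp))
            count += 1
    return count == comb(n - 1, k - 2) * comb(n - 1, k) and len(images) == count
```

In the smallest case ($n=3$, $k=2$) the single pair `('NN', 'EE')` maps to `('NE', 'EN')`, exactly the crossing-swap pictured in Figure~\ref{fig:2row_paths}.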
We next turn our attention to shapes with at most two rows. Let $\Lambda=\2row$ be the set of shapes of the form $(k,n-k)$ as $k$ varies. Note that
$ \ell_{n,k}^{\2row}=0$ for $k<n/2$, but we will still start our sequences at $k=1$ for simplicity.
\begin{thm}
For any fixed $n$, the sequences
$$
\ell_{n,1}^{\2row}, \ell_{n,2}^{\2row},\dots,\ell_{n,n}^{\2row}
\qmq{and}
i_{n,1}^{\2row}, i_{n,2}^{\2row},\dots, i_{n,n}^{\2row}
$$
are log concave.
\label{thm:2row}
\eth
\begin{proof}
The arguments used to prove Theorem~\ref{hook} can be used here as well. One only needs to be careful about the lattice path proof. First of all, one maps a tableau to a lattice path using all the elements of the first row (including $1$) for the $E$ steps, and those of the second row for the $N$ steps. Returning to our example path in Figure~\ref{p}, the corresponding tableau is now
$$
P=\begin{ytableau}
1&2&4\\
3&5
\end{ytableau}\ .
$$
The lattice paths which correspond to $2$-rowed tableaux are the Dyck paths, those which never go above the line of slope $1$ passing through the initial point $(a,b)$.
See Figure~\ref{fig:2row_paths} for an illustration.
One must now check that if $(p,p')$ maps to $(q,q')$ then $q$ and $q'$ are still Dyck. This is clear for $q'$ since the portion of $p$ which it uses lies below the line $y=x-1$ and so certainly lies below $y=x+1$. For $q$ one must use the fact that $z$ is the last point of intersection. Indeed, it is easy to see that if $q$ is not Dyck then there would have to be an intersection point of $p$ and $p'$ later than $z$, which is a contradiction.
\end{proof}\medskip
\begin{figure}
\centering
\hspace{0.02\textwidth}
\begin{minipage}{.45\textwidth}
\centering
\begin{tikzpicture}[scale=1]
\draw[dashed] (1,-1)--( 2,-1)--(3,-1)--( 3,0.05)--(4,0.05)--(4,1);
\foreach \x/\y in {1/-1, 2/-1, 3/-1, 3/0, 4/0,4/1}
\node[draw=black,fill=white,rectangle, inner sep=1.5mm] at (\x,\y) {};
\foreach \x/\y in {0/0, 1/0, 2/0, 3/0, 4/0, 5/0}
\node[fill=black,circle, inner sep=0.8mm] at (\x,\y) {};
\draw (0,0)--(1,0)--(2,0)--(3,0) -- (4,0)--(5,0);
\node[anchor=north] at (1.5,-1) {$p$};
\node[anchor=north] at (0.5,0) {$p'$};
;
\node[anchor=north west] at (4,0) {$z$};
\draw[dotted] (0,0)--(2,2);
\draw[loosely dotted] (1.2,-0.8)--(4,2);
\end{tikzpicture}
\end{minipage}%
\hfill
\begin{minipage}{0.45\textwidth}
\centering
\begin{tikzpicture}[scale=1]
\draw[dashed] (1,-1)--( 2,-1)--(3,-1)--( 3,0.05)--(4,0.05)--(4,0)--(5,0);
\foreach \x/\y in {1/-1, 2/-1, 3/-1, 3/0, 4/0,5/0}
\node[draw=black,fill=white,rectangle, inner sep=1.5mm] at (\x,\y) {};
\foreach \x/\y in {0/0, 1/0, 2/0, 3/0, 4/0, 4/1}
\node[fill=black,circle, inner sep=0.8mm] at (\x,\y) {};
\draw (0,0)--(1,0)--(2,0)--(3,0) -- (4,0)--(4,1);
\node[anchor=north] at (1.5,-1) {$q$};
\node[anchor=north] at (0.5,0) {$q'$};
;
\node[anchor=north west] at (4,0) {$z$};
\draw[dotted] (0,0)--(2,2);
\draw[loosely dotted] (1.2,-0.8)--(4,2);
\end{tikzpicture}
\end{minipage}%
\hfill
\hspace{0.05\textwidth}
\caption{The main step in the lattice path proof of Theorem~\ref{thm:2row} }
\label{fig:2row_paths}
\end{figure}
We remark that the previous result is related to pattern avoidance.
If $\pi\in{\mathfrak S}_k$ is a permutation called the {\em pattern} then a permutation $\sigma$ {\em avoids} $\pi$ if there is no subsequence $\sigma'$ of $\sigma$ of length $k$ which standardizes to $\pi$ when one replaces the smallest element of $\sigma'$ by $1$, the next smallest by $2$, and so forth.
So, by Theorem~\ref{RS} (1), the permutations whose output tableaux under RS consist of at most two rows are exactly those that avoid the pattern $321$.
Thus Theorem~\ref{thm:2row} can be expressed in terms of $321$-avoidance.
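This reformulation is easy to confirm by brute force for small $n$: by Theorem~\ref{RS} (1), a permutation has output tableaux with at most two rows exactly when its longest decreasing subsequence has length at most $2$, i.e., when it avoids $321$, and there are Catalan-many such permutations. A sketch (names ours):

```python
from itertools import permutations
from math import comb

def lds_len(seq):
    # Length of a longest decreasing subsequence, O(n^2) dynamic program.
    best = []
    for i, x in enumerate(seq):
        best.append(1 + max((best[j] for j in range(i) if seq[j] > x), default=0))
    return max(best, default=0)

def lis_len(seq):
    # An increasing subsequence of seq is a decreasing one of its negation.
    return lds_len([-x for x in seq])

def catalan(n):
    return comb(2 * n, n) // (n + 1)

def two_row_counts(n):
    # counts[k] = number of 321-avoiding permutations of [n] (equivalently,
    # those whose RS shape has at most two rows) with longest increasing
    # subsequence of length k.
    counts = [0] * (n + 1)
    for pi in permutations(range(1, n + 1)):
        if lds_len(pi) <= 2:
            counts[lis_len(pi)] += 1
    return counts
```

For example `two_row_counts(4)` gives $[0, 0, 4, 9, 1]$, whose nonzero entries are $(f^{(2,2)})^2$, $(f^{(3,1)})^2$, $(f^{(4)})^2$ and whose sum is the Catalan number $14$.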
Let us now turn to a class of permutations closely related to those of hook shape. A permutation is {\em skew merged} if it is a shuffle of an increasing permutation and a decreasing permutation. These permutations have been characterized by Stankova~\cite{sta:fs} and the generating function for this class was found by Albert and Vatter~\cite{av:ge3}. Again using part (1) of Theorem~\ref{RS}, we see that if $\pi$ is skew merged then $\sh\pi$ has first row of length at least $k$ and first column of length at least $n-k$ for some $k$. It follows that $\sh\pi$ is either a hook or the union of a hook and the box in the second row and column. On the other hand, if $\sh\pi$ is a hook then similar reasoning shows that $\pi$ must be skew merged. However, not all permutations whose shape is of the second type are skew merged. For example, both $2413$ and $2143$ have shape $(2,2)$ but the first one is skew merged while the second is not. We will use a superscript $\skm$ in our notation to restrict to the set of skew merged permutations.
\begin{conj}
The sequence
$$
\ell_{n,1}^{\skm}, \ell_{n,2}^{\skm},\dots,\ell_{n,n}^{\skm}
$$
is log concave.
\end{conj}
This conjecture has been verified for $n\le 9$. The reason for the difference in the size of this bound and the previous ones is that for the earlier conjectures we were able to use the hook formula to speed up computations considerably.
It is natural to ask about the analogue of the previous conjecture for involutions. But it turns out that we have already answered this question in Theorem~\ref{hook}. To see why, we need the following result of Stankova.
\begin{thm}[\cite{sta:fs}]
\label{sta}
A permutation is skew merged if and only if it avoids the patterns $2143$ and $3412$.\hfill \qed
\eth
\begin{figure} \begin{center}
\begin{tikzpicture}
\draw (0,0)--(4,0)--(4,4)--(0,4)--(0,0) (2,0)--(2,4) (0,2)--(4,2);
\filldraw (2,4) circle(.1);
\filldraw (4,2) circle(.1);
\draw (2,4.4) node{$(j,n)$};
\draw (4.7,2) node {$(n,j)$};
\draw (1,1) node {$A$};
\draw (3,1) node {$B$};
\draw (1,3) node {$C$};
\draw (3,3) node {$D$};
\end{tikzpicture}
\caption{The diagram of $\iota$ \label{io}}
\end{center} \end{figure}
\begin{cor}
Let $\iota$ be an involution. Then $\iota$ is skew merged if and only if $\iota$ is of hook shape.
\end{cor}
\begin{proof}
We have already observed that the backwards direction holds for all permutations. For the forward implication, we induct on $n$ where $\iota\in{\mathfrak S}_n$. There are two cases depending on whether $n$ is a fixed point or is in a two cycle of $\iota$. In the first case, we have the concatenation $\iota=\iota' n$. Thus $\sh\iota$ is just $\sh\iota'$ with a box for $n$ appended at the end of the first row, and we are done.
In the second case, suppose $n$ is in a cycle with $j<n$. We represent $\iota=\iota_1\dots \iota_n$ as the set of points $(i,\iota_i)$, $i\in[n]$, in the first quadrant of the plane. The vertical line through $(j,n)$ and the horizontal line through $(n,j)$ divide the box containing $\iota$ into four open areas as displayed in Figure~\ref{io}.
Since $\iota$ avoids $2143$ by Theorem~\ref{sta}, the points in area $A$ must be increasing. Since $\iota$ is an involution, if there is a point in area $B$ then there must be a corresponding point in area $C$ and vice-versa. However, this would contradict the fact that $\iota$ also avoids $3412$. So areas $B$ and $C$ are empty. This implies that area $A$ contains exactly the fixed points $1,2,\dots,j-1$. By induction, the involution in area $D$ has shape $(a,1^b)$ for some $a,b$. It is now easy to see that $\sh\iota=(a+j-1,1^{b+2})$ finishing the proof.
\end{proof}\medskip
\section{Fixed-point free involutions}
\label{fpf}
Chen's manuscript also included some conjectures about perfect matchings. These correspond in ${\mathfrak S}_n$ to involutions without fixed points. Their output shapes are characterized by the following result of Sch\"utzenberger.
\begin{thm}[\cite{sch:cr}]
If $\iota$ is an involution then the number of fixed points of $\iota$ equals the number of columns of odd length of $\sh\iota$.\hfill \qed
\eth
Let $\Lambda=\ecol$ be the set of partitions of $n$ all of whose column lengths are even; the parity of the columns forces the number of elements permuted to be even. Chen's perfect matching conjecture can now be stated as follows.
\begin{conj}[\cite{che:lqc}]
For any fixed $n$, the sequence
$$
i_{2n,1}^{\ecol}, i_{2n,2}^{\ecol},\dots,i_{2n,2n}^{\ecol}
$$
is log concave.
\end{conj}
This conjecture has been verified for $2n\le 80$. Given the development so far, it is natural to also make the following conjecture.
\begin{conj}
For any fixed $n$, the sequence
$$
\ell_{2n,1}^{\ecol}, \ell_{2n,2}^{\ecol},\dots,\ell_{2n,2n}^{\ecol}
$$
is log concave.
\end{conj}
This conjecture has been verified by computer for $2n\le 80$.
For certain subsets of $\ecol$ it is possible to prove log concavity results. We define the {\em double} of a partition $\lambda=(\lambda_1,\lambda_2,\dots,\lambda_l)$ to be $\lambda^2=(\lambda_1,\lambda_1,\lambda_2,\lambda_2,\dots,\lambda_l,\lambda_l)$. First consider the set $\Lambda=\dhook$ of double hooks, namely the partitions of shape $(k^2,1^{2n-2k})$ where $1\le k\le n$.
\begin{thm}
\label{dhook}
For any fixed $n$, the sequences
$$
\ell_{2n,1}^{\dhook}, \ell_{2n,2}^{\dhook},\dots,\ell_{2n,n}^{\dhook}
\qmq{and}
i_{2n,1}^{\dhook}, i_{2n,2}^{\dhook},\dots, i_{2n,n}^{\dhook}
$$
are log concave.
\eth
\begin{proof}
Appealing to Lemma~\ref{La} again, we only need to prove the statement about involutions. Applying the hook formula gives the following expression in terms of a multinomial coefficient
$$
i_{2n,k}^{\dhook}=\frac{1}{(2n-k)(2n-k+1)}\binom{2n}{1,k-1,k,2n-2k}.
$$
Substituting this into the defining inequality for log concavity and cancelling shows that it suffices to prove
\begin{align*}
& (k-1)(2n-2k)(2n-2k-1)(2n-k+1)(2n-k)\\
& \qquad \le(k+1)(2n-2k+2)(2n-2k+1)(2n-k+2)(2n-k-1).
\end{align*}
Substituting $d=2n-2k$ one can write both sides of this inequality as a polynomial in $k$ and $d$. Moving all terms to the right-hand side we see that one must show
\begin{align*}
2 d^4 + 8 d^3 k + 10 d^2 k^2 + 4 d k^3 + 4 d^3 + 10 d^2 k + 10 d k^2 + 2 k^3 + 2 d^2 + 2dk + 4 k^2 - 4d - 2k -4 \ge0.
\end{align*}
Since $d\ge0$ and $k\ge1$ it is easy to see that the last inequality is valid.
\end{proof}\medskip
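The hook formula computation behind this proof is easy to mechanize. In the sketch below (our code and names, not the authors'), `syt_count` is a generic hook length formula implementation, and we check it against the displayed closed form for double hooks on small cases.

```python
from math import factorial

def syt_count(shape):
    """Number of standard Young tableaux of a partition, by the hook length formula."""
    n, prod = sum(shape), 1
    for i, row in enumerate(shape):
        for j in range(row):
            arm = row - j - 1                              # cells strictly to the right
            leg = sum(1 for r in shape[i + 1:] if r > j)   # cells strictly below
            prod *= arm + leg + 1
    return factorial(n) // prod

def i_dhook(n, k):
    """Closed form from the proof for the double hook (k, k, 1^(2n-2k))."""
    N = 2 * n
    multi = factorial(N) // (factorial(k - 1) * factorial(k) * factorial(N - 2 * k))
    return multi // ((N - k) * (N - k + 1))

for n in range(2, 6):
    for k in range(1, n + 1):
        shape = (k, k) + (1,) * (2 * n - 2 * k)
        assert syt_count(shape) == i_dhook(n, k)
```

Involutions of a given shape are in bijection with standard Young tableaux of that shape under RS, which is why `syt_count` computes $i_{2n,k}^{\dhook}$.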
We next turn to two-rowed partitions which have been doubled. Let $\Lambda=\d2row$ be partitions of shape $(k^2,(n-k)^2)$ as $k$ varies.
\begin{thm}
For any fixed $n$, the sequences
$$
\ell_{2n,1}^{\d2row}, \ell_{2n,2}^{\d2row},\dots,\ell_{2n,n}^{\d2row}
\qmq{and}
i_{2n,1}^{\d2row}, i_{2n,2}^{\d2row},\dots, i_{2n,n}^{\d2row}
$$
are log concave.
\eth
\begin{proof}
As usual, it suffices to consider the case concerning involutions. The hook formula gives, for $n/2\le k\le n$,
$$
i_{2n,k}^{\d2row}=\frac{(2k-n+1) (2k-n+2)^2 (2k-n+3)}{ (k+1)^2 (k+2)^2 (k+3) (n-k+1)}\binom{2n}{k,k,n-k,n-k}.
$$
One now follows the proof of the previous result, except that one uses the substitutions $d=2k-n$ and $c=n-k$, both of which are nonnegative in the given range of $k$. The result is a polynomial in $c,d$ which has only positive coefficients and so we are done.
\end{proof}\medskip
\section{The Tracy-Widom distribution}
\label{twd}
We now investigate the limiting distribution of the sequence in Conjecture~\ref{ell_nk}. The {\em Tracy-Widom distribution}, first studied by Tracy and Widom \cite{tracy1994level}, has cumulative distribution function
\begin{equation}
\label{F(x)}
F(x)=\exp\left(-\int_x^\infty (t-x) u^2(t) dt\right)
\end{equation}
where $u(x)$ is a solution to the Painlev\'e II equation
\begin{equation}
\label{PII}
u''(x)= x u(x) + 2 u^3(x)
\end{equation}
satisfying
\begin{equation}
\label{bc}
u(x), u'(x) \rightarrow 0 \qmq{as} x\rightarrow\infty.
\end{equation}
Call any twice-differentiable function $f:{\mathbb R}\rightarrow{\mathbb R}$ {\em concave at $x$} if $f''(x)\le 0$. Similarly, if $D\subseteq {\mathbb R}$ then we say $f$ is {\em concave on $D$} if it is concave at any point of $D$. If $f$ only takes on positive values then we will say the function is {\em log concave} at $x$ if the function $\log f$ is concave at $x$. If $f$ is log concave on an interval of radius one about $x$ then one can prove that $f(x-1) f(x+1)\le f(x)^2$.
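For a concrete illustration of this last implication (our own toy example, not part of the argument), take the standard normal density, whose logarithm $-x^2/2+\text{const}$ is concave everywhere:

```python
import math

def normal_density(x):
    """Standard normal density; log f(x) = -x^2/2 - log(sqrt(2*pi)) is concave."""
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

# The discrete consequence f(x-1) f(x+1) <= f(x)^2 holds at every point;
# here the ratio f(x-1)f(x+1) / f(x)^2 is exactly exp(-1), independent of x.
for x in (-2.0, -0.5, 0.0, 1.3, 4.0):
    assert normal_density(x - 1) * normal_density(x + 1) <= normal_density(x) ** 2
```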
Percy Deift (personal communication) has shown that the density function of the Tracy-Widom distribution is log concave for nonnegative $x$. We thank him for his kind permission to reproduce his proof here.
\begin{thm}[Deift]
\label{TW}
The density function of the Tracy-Widom distribution is log concave for $x\ge0$.
\eth
\begin{proof}
To get the density function for the Tracy-Widom distribution, one must take the derivative $F'(x)$ where $F(x)$ is as given in equation~\eqref{F(x)}. Then to determine log concavity, we need to compute $(\log F'(x))''$. Straightforward calculations give
$$
(\log F'(x))'' = - \frac{u^2(x) h^2(x) + 2 u(x) u'(x) h(x) + u^4(x)}{h^2(x)}
$$
where
\begin{equation}
\label{h(x)}
h(x) = \int_x^\infty u^2(t) dt.
\end{equation}
So it suffices to show that
\begin{equation}
\label{uh}
u^2(x) h^2(x) + 2 u(x) u'(x) h(x) + u^4(x)\ge 0.
\end{equation}
Multiplying the Painlev\'e II equation~\ree{PII} by $u'(x)$ and integrating from $x$ to infinity one obtains, with the help of the boundary conditions~\ree{bc},
\begin{equation}
\label{u'2}
(u'(x))^2 = x u^2(x) + h(x) + u^4(x).
\end{equation}
We now make the following claims.
\begin{enumerate}
\item[(i)] We have $u'(x) u(x) < 0$ for $x\ge0$.
\item[(ii)] We have $h(x)\le u^2(x)$ for $x\ge 1/4$.
\end{enumerate}
To prove the first claim we first assume, towards a contradiction, that $u'(x)=0$ for some $x\ge0$. Since every term on the right-hand side of equation~\eqref{u'2} is nonnegative for $x\ge 0$, this forces $u(x)=0$ as well. But then, by uniqueness of solutions to the second order equation~\eqref{PII} with given initial data, $u$ must be the zero function, which is our desired contradiction.
Now, since $u'(x)$ is continuous, we must have $u'(x)>0$ for all $x\ge0$ or $u'(x)<0$ for all $x\ge0$. Without loss of generality, we can assume the latter since the substitution of $-u(x)$ for $u(x)$ takes a solution of~\ree{PII} into another solution. We now break the proof of (i) into two cases, the first being when $u(x)\neq 0$ for the given value of $x$. Assume, towards a contradiction, that $u'(x) u(x) > 0$. This forces $u(x)<0$. But then $u(t)<u(x)<0$ for $t>x$ because we have $u'(t)<0$. This contradicts the boundary condition in~\ree{bc} that $u(t)\rightarrow 0$ as $t\rightarrow\infty$. So this case can not occur.
The other case is when $u(x)=0$. But since $u'(x)<0$ there is a $y>x$ where $u(y)<0$ and now we continue as in the previous case. This finishes the proof of (i). Note that in the process we have shown that $u(x)>0$ for $x\ge0$.
To prove (ii), it suffices to show that the function $f(x)=u^2(x)-h(x) $ is nonnegative. Using the definition of $h(x)$, we have
$$
f'(x) = 2 u(x) u'(x) + u^2(x) = u(x) (2 u'(x) + u(x)).
$$
Using equation~\ree{u'2} we see that $(u'(x))^2 > x u^2(x) \ge u^2(x)/4$ for $x\ge1/4$. Recalling that $u'(x)<0$, this translates to $2u'(x)+u(x)<0$. Combining this with $u(x)>0$, we see that $f'(x)<0$. Also the boundary conditions and definition of $h(x)$ show that $f(x)\rightarrow 0$ as $x\rightarrow \infty$. Thus $f(x)\ge 0$ as desired.
We are now ready to prove the theorem itself. Dividing equation~\eqref{uh} by $u^2(x)$, it suffices to prove that
$$
g(x) \stackrel{\rm def}{=} h^2(x) - 2 v(x) h(x) + u^2(x) > 0
$$
where
$$
v(x)\stackrel{\rm def}{=} - u'(x)/u(x) >0
$$
by Claim (i).
We now compute, omitting the independent variable and using the definitions of $h(x)$ and $v(x)$,
\begin{align*}
g'&= -2hu^2 - 2v'h +2v u^2 + 2uu'\\
&=-2hu^2 +2 h \frac{u''u-(u')^2}{u^2} -2u'u+2u u'\\
&=-2h\frac{u^4-u''u+(u')^2}{u^2}.
\end{align*}
Using first the Painlev\'e II equation~\ree{PII} and then equation~\eqref{u'2} on the numerator gives
$$
u^4-u''u+(u')^2 = u^4 - x u^2 - 2 u^4 + (u')^2= h.
$$
Thus $g'(x) = -2h^2(x) / u^2(x) <0$.
We claim that $g(x)\rightarrow 0$ as $x\rightarrow \infty$ which will finish the proof since, together with $g'(x)<0$ for $x\ge0$, this forces $g(x)> 0$. We first need to analyze what happens to $v(x)$. By equation~\eqref{u'2} we have $v^2 = x + u^2 + h/u^2$. As $x\rightarrow\infty$ we know that $u^2(x)\rightarrow 0$ and, by Claim (ii), $h(x)/u^2(x)\le 1$. Thus $v(x)\sim \sqrt{x}$ and $-2v(x) h(x) \rightarrow 0$ since it is known that $h(x)$ decreases as $\exp(-x^{3/2})$.
We already know that the other two summands in $g(x)$ approach zero, so we are done with the claim and with the proof of the theorem.
\end{proof}\medskip
\section{Other conjectures}
\label{oc}
\subsection{More on the Tracy-Widom distribution}
Approximate values for the mean and variance of the Tracy-Widom distribution are $\mu\approx -1.77$ and $\sigma^2\approx 0.81$. So Theorem~\ref{TW}
only addresses what is going on in the tail. It would be very interesting to see what can be said for negative $x$. Note that the point in the proof where the bound $x\ge0$ was imposed is in the proof of Claim (i). (Claim (ii) is only needed to analyze what goes on as $x\rightarrow\infty$ and so that bound is not the controlling one.) In particular, we need $x$ to be nonnegative when using equation~\ree{u'2} to conclude that $u'(x)\neq 0$.
If log concavity of the whole distribution could be proved, then one might be able to prove Conjecture~\ref{ell_nk} as follows. First one would try to find a bound $N$ such that for $n\ge N$ the log concavity of the Tracy-Widom distribution implies the log concavity of the $\ell_{n,k}$ sequence. Then, if $N$ were not too large, one could check log concavity for $n<N$ by computer.
\subsection{Real zeros}
Let $a_0,a_1,\dots,a_n$ be a sequence of real numbers. Consider the corresponding generating function $a(q)=a_0+a_1 q+\dots + a_n q^n$. It is well known that if the $a_k$ are positive then $a(q)$ having only real roots implies that the original sequence is log concave. So one might ask about the polynomials $\ell_n(q)$ and $i_n(q)$ for the sequences in Conjectures~\ref{ell_nk} and~\ref{i_nk}, respectively. (The fact that they both begin with a zero just contributes a factor of $q$.) Unfortunately, neither seems to be real rooted in general. In particular, $\ell_{12}(q)$ and $i_4(q)$ both have non-real roots.
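The failure for $i_4(q)$ can be reproduced by brute force (our code): the involutions of ${\mathfrak S}_4$ give coefficients $1,5,3,1$, so $i_4(q)=q+5q^2+3q^3+q^4$, and the cubic factor $q^3+3q^2+5q+1$ has negative discriminant, hence a pair of non-real roots.

```python
from itertools import permutations

def lis(p):
    """Longest increasing subsequence length by quadratic dynamic programming."""
    dp = []
    for j, v in enumerate(p):
        dp.append(1 + max([dp[i] for i in range(j) if p[i] < v], default=0))
    return max(dp)

def is_involution(p):
    return all(p[p[i] - 1] == i + 1 for i in range(len(p)))

coeffs = [0] * 5                      # coeffs[k] = i_{4,k}
for p in permutations(range(1, 5)):
    if is_involution(p):
        coeffs[lis(p)] += 1
assert coeffs[1:] == [1, 5, 3, 1]     # i_4(q) = q + 5q^2 + 3q^3 + q^4

# Discriminant of q^3 + 3q^2 + 5q + 1; a negative value means one real root
# and a conjugate pair of complex roots, so i_4(q) is not real rooted.
a, b, c, d = 1, 3, 5, 1
disc = 18*a*b*c*d - 4*b**3*d + b**2*c**2 - 4*a*c**3 - 27*a**2*d**2
assert disc == -140
```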
\subsection{$q$-log convexity}
The $\ell_n(q)$ discussed in the previous subsection do seem to enjoy another interesting property. We partially order polynomials with real coefficients by letting $f(q)\le g(q)$ if $g(q)-f(q)$ has nonnegative coefficients. Equivalently, for any power $q^i$ its coefficient in $f(q)$ is less than or equal to its coefficient in $g(q)$. Define a sequence $f_1(q), f_2(q), \dots$ to be {\em $q$-log convex} if $f_{n-1}(q) f_{n+1}(q) \ge f_n^2(q)$ for all $n\ge2$. Another conjecture from the paper of Chen is as follows.
\begin{conj}[\cite{che:lqc}]
The sequence $\ell_1(q),\ell_2(q),\dots$ is $q$-log convex.
\end{conj}
We have verified this conjecture up to $n=50$. Interestingly the corresponding conjecture for the involution sequence is false as
$i_3(q) i_5(q) \not\ge i_4^2(q)$.
\subsection{Infinite log concavity}
There is another way in which one can generalize log concavity. The {\em ${\cal L}$-operator} applied to a sequence ${\bf a}=(a_0,a_1, \dots, a_n)$ returns a sequence ${\bf b}=(b_0,b_1,\dots,b_n)$ where $b_k=a_k^2-a_{k-1} a_{k+1}$ for $1\le k \le n$ (with the convention that $a_{-1}=a_{n+1}=0$). Clearly ${\bf a}$ being log concave is equivalent to ${\bf b}={\cal L}({\bf a})$ being nonnegative. But now one can apply the operator multiple times. Call ${\bf a}$ {\em infinitely log concave} if ${\cal L}^i({\bf a})$ is a nonnegative sequence for all $i\ge1$. This concept was introduced by Boros and Moll~\cite{bm:ii} and has since been studied by a number of authors including Br\"and\'en~\cite{bra:isg}, McNamara and Sagan~\cite{ms:ilc}, and Uminsky and Yeats~\cite{uy:uri}. Here is yet another conjecture of Chen.
\begin{conj}[\cite{che:lqc}]
For any fixed $n$, the sequence
$$
\ell_{n,1}, \ell_{n,2},\dots,\ell_{n,n}
$$
is infinitely log concave.
\end{conj}
Note that proving that a given sequence is infinitely log concave can not necessarily be done by computer since one needs to apply the ${\cal L}$-operator an infinite number of times. However, we are able to prove the previous conjecture for small $n$ by using a technique developed by McNamara and Sagan~\cite{ms:ilc}. Given a real number $r$ we say that the sequence ${\bf a}$ is {\em $r$-factor log concave} if $a_k^2\ge r a_{k-1} a_{k+1}$ for all $k$. So the sequence will be log concave as long as $r\ge1$. Let $r_0=(3+\sqrt{5})/2$.
\begin{lem}[\cite{ms:ilc}]
Let ${\bf a}$ be a sequence of nonnegative real numbers. If ${\bf a}$ is $r_0$-factor log concave, then so is ${\cal L}({\bf a})$. It follows that if ${\bf a}$ is $r_0$-factor log concave then it is also infinitely log concave. \hfill \qed
\end{lem}
Using this lemma, we can try to prove that ${\bf a}$ is infinitely log concave as follows. Apply the ${\cal L}$-operator a finite number of times, checking that at each stage the sequence is nonnegative. If the sequence becomes $r_0$-factor log concave after the finite number of applications, then the lemma allows us to stop and conclude infinite log concavity. Using this technique and a computer we have proved the following.
\begin{prop}
The sequence
$$
\ell_{n,1}, \ell_{n,2},\dots,\ell_{n,n}
$$
is infinitely log concave for $n\le50$.
\end{prop}
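The certification routine behind this proposition is short to sketch (our code and names, not the authors' implementation); the constant is $r_0=(3+\sqrt5)/2\approx 2.618$.

```python
import math

R0 = (3 + math.sqrt(5)) / 2

def L(a):
    """One application of the L-operator, with the convention a[-1] = a[n+1] = 0."""
    ext = [0] + list(a) + [0]
    return [ext[k] ** 2 - ext[k - 1] * ext[k + 1] for k in range(1, len(ext) - 1)]

def r0_factor_log_concave(a):
    return all(a[k] ** 2 >= R0 * a[k - 1] * a[k + 1] for k in range(1, len(a) - 1))

def certify_infinitely_log_concave(a, max_iter=100):
    """Apply L until the sequence is r0-factor log concave (a certificate by the
    lemma of McNamara and Sagan) or a negative entry / iteration cap is hit."""
    for _ in range(max_iter):
        if any(x < 0 for x in a):
            return False
        if r0_factor_log_concave(a):
            return True
        a = L(a)
    return False                      # inconclusive within max_iter

row = [1, 4, 6, 4, 1]                 # a row of Pascal's triangle
assert L(row) == [1, 10, 20, 10, 1]
assert certify_infinitely_log_concave(row)
```

Here one application of ${\cal L}$ already produces an $r_0$-factor log concave sequence, which certifies infinite log concavity of the row.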
Finally, one could wonder about infinite log concavity of the involution sequence. But again, it behaves differently and is not infinitely log concave starting at $n=4$.
\newcommand{\etalchar}[1]{$^{#1}$}
% ---- End of arXiv:1511.08653 (math.CO), "Longest increasing subsequences and log concavity". ----
% ---- arXiv:math/0506334, "On the X-rays of permutations" ----
% Abstract: The X-ray of a permutation is defined as the sequence of antidiagonal
% sums in the associated permutation matrix. X-rays of permutations are interesting
% in the context of Discrete Tomography since many types of integral matrices can
% be written as linear combinations of permutation matrices. This paper is an
% invitation to the study of X-rays of permutations from a combinatorial point of
% view. We present connections between these objects and nondecreasing differences
% of permutations, zero-sum arrays, decomposable permutations, score sequences of
% tournaments, queens' problems and rooks' problems.
\section{Introduction}
Let $\mathcal{S}_{n}$ be the set of all permutations of $[n]=\{1,2,\ldots,n%
\} $ and let $P_{\pi}$ be the permutation matrix corresponding to $\pi \in%
\mathcal{S}_{n}$. For $k=2,\ldots,2n$, the $(k-1)$-th \emph{antidiagonal sum}
of $P_{\pi}$ is $x_{k-1}(\pi)={\textstyle\sum_{i+j=k}}[P_{\pi}]_{i,j}$. The
sequence of nonnegative integers $x(\pi)=x_{1}(\pi )x_{2}(\pi)\ldots
x_{2n-1}(\pi)$ is called the (\emph{antidiagonal}) \emph{X-ray} of $\pi$.
The \emph{diagonal X-ray} of $\pi$, denoted by $x_{d}(\pi)$, is similarly
defined. Note that $x(\pi)=x(\pi^{-1})$, for every $\pi\in\mathcal{S}_{n}$.
The sequence $x(\pi)$ may be also seen as a word over the alphabet $[n]$. As
an example, the following table contains the X-rays of all permutations in $%
\mathcal{S}_{3}$:\vspace{10pt}
{\small
\begin{tabular}{||c|c||c|c||c|c||c|c||c|c||}
\hline
$\pi$ & $x(\pi)$ & $\pi$ & $x(\pi)$ & $\pi$ & $x(\pi)$ & $\pi$ & $x(\pi)$ & $%
\pi$ & $x(\pi)$ \\ \hline
$123$ & $10101$ & $231,312$ & $01110$ & $132$ & $10020$ & $213$ & $02001$ & $%
321$ & $00300$ \\ \hline
\end{tabular}%
}\vspace{10pt}
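The definition is straightforward to implement; the short sketch below (our code, with `xray` our own name) reproduces the table above and checks the identity $x(\pi)=x(\pi^{-1})$.

```python
from itertools import permutations

def xray(p):
    """Antidiagonal X-ray of a permutation in one-line notation (values 1..n)."""
    n = len(p)
    x = [0] * (2 * n - 1)
    for i, v in enumerate(p, start=1):
        x[i + v - 2] += 1     # the 1 at row i, column v lies on antidiagonal i + v
    return tuple(x)

assert xray((1, 2, 3)) == (1, 0, 1, 0, 1)
assert xray((2, 3, 1)) == xray((3, 1, 2)) == (0, 1, 1, 1, 0)
assert xray((3, 2, 1)) == (0, 0, 3, 0, 0)

def inverse(p):
    q = [0] * len(p)
    for i, v in enumerate(p, start=1):
        q[v - 1] = i
    return tuple(q)

# x(pi) = x(pi^{-1}) for every permutation, since transposing the matrix
# permutes each antidiagonal within itself
assert all(xray(p) == xray(inverse(p)) for p in permutations(range(1, 6)))
```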
Although X-rays of permutations are interesting objects in their own right, among
the reasons why they are of general interest in Discrete Tomography \cite{k}
is that many types of integral matrices can be written as linear
combinations of permutation matrices (for example, binary matrices with
equal row-sums and column-sums, like the adjacency matrices of Cayley
graphs). Deciding whether for a given word $w=w_{1}\ldots w_{2n-1}$ there
exists $\pi\in\mathcal{S}_{n}$ such that $w=x(\pi)$ is an NP-complete
problem \cite{b} (see also \cite{g}). The complexity is polynomial if the
permutation matrix is promised to be wrapped around a cylinder \cite{d}. One
must also take into account that permutations are not, in general, specified
by their X-rays: consider the permutations $\pi=73142865$ and $%
\sigma=72413865$; we have $x(\pi )=x(\sigma)=000110200002100$, $%
x_{d}(\pi)=x_{d}(\sigma)=00021111100010$ and $\pi\neq\sigma^{-1}$. This
hints that an issue concerning X-rays is to quantify how much information
about $\pi$ is contained in $x(\pi)$. In this paper we present some
connections between X-rays of permutations and a variety of combinatorial
objects. From a practical perspective, this may be useful in isolating and
approaching special cases of the above problem.
The remainder of the paper is organized as follows. In Section 2 we consider
the problem of counting X-rays. We prove a bijection between X-rays and
nondecreasing differences of permutations. We define the \emph{degeneracy}
of an X-ray $x(\pi)$ as the number of permutations $\sigma$ such that $%
x(\pi)=x(\sigma)$, and we characterize the X-rays with the maximum
degeneracy. We prove a bijection between X-ray of length $4k+1$ having
maximum degeneracy and zero-sum arrays. In Section 3 we consider the notion
of simple permutations. This notion seems to provide a good framework to
study the degeneracy of X-rays, but the relation between simple permutations
and X-rays with small degeneracy remains unclear. Section 4 is devoted to
binary X-rays, that is, X-rays whose entries are only zeros and ones. We
characterize the X-rays of circulant permutation matrices of odd order.
Moreover, we present a relation between binary X-rays, the $n$-queens
problem (see, \emph{e.g.}, \cite{v}), the score sequences of tournaments on $%
n$ vertices (see \cite[Sequence A000571]{s}), and extremal Skolem sequences,
see~\cite[Conjecture 2.2]{N}.
A number of conjectures and open problems will be explicitly formulated or
will simply stand out from the context. We use the standard notation for
integers sequences from the OEIS \cite{s}.
\section{Counting X-rays}
We begin by addressing the following natural question:\ what is the number
of different X-rays of permutations in $\mathcal{S}_{n}$? Although we are
unable to find a generating function for the sequence, we show a bijection
between X-rays and nondecreasing differences of permutations. The \emph{%
difference} of permutations $\pi,\sigma\in\mathcal{S}_{n}$ is the integer
sequence $\pi-\sigma=(w_{1},w_{2},\ldots,w_{n})$, where $w_{1}=\pi_{1}-%
\sigma_{1},w_{2}=\pi_{2}-\sigma_{2},\ldots,w_{n}=\pi_{n}-\sigma_{n}$. For
example, if $e=1234$ is the identity and $\sigma=2413$, we have $e-\sigma
=(-1,-2,2,1)$. Let $x_{n}$ be the number of different X-rays of permutations
in $\mathcal{S}_{n}$. Let $d_{n}$ be the number of nondecreasing differences of
permutations in $\mathcal{S}_{n}$. The number $d_{n}$ equals the number of
different differences $e-\sigma$ with entries rearranged in the
nondecreasing order. In other words, $d_{n}$ equals the number of different
multisets of the form $M(\sigma
)=\{1-\sigma_{1},2-\sigma_{2},\ldots,n-\sigma_{n}\}$, with entries
rearranged in the nondecreasing order. The entries of $x(\pi)$ are then the
entries of the vector $e_{1-\sigma_{1}}+e_{2-\sigma_{2}}+\ldots+e_{n-%
\sigma_{n}}$, where $e_{i}$ is the $i$-th coordinate vector of length $2n-1$%
. For example, for $\pi=3124$ we have $x(3124)=0101200$ and $%
e_{1-3}+e_{2-1}+e_{3-2}+e_{4-4}=(0,1,0,0,0,0,0)+(0,0,0,0,1,0,0)+(0,0,0,0,1,0,0)+(0,0,0,1,0,0,0)=(0,1,0,1,2,0,0)
$. On the basis of this reasoning we can state the following result.
\begin{proposition}
The number $x_{n}$ of different X-rays of permutations in $\mathcal{S}_{n}$
is equal to the number $d_{n}$ \textrm{(}see \textrm{\cite[Sequence A019589]%
{s})} of nondecreasing differences of permutations in $\mathcal{S}_{n}$.
\end{proposition}
Let us define and denote the \emph{degeneracy} of an X-ray $x(\pi)$ by
\[
\delta(x(\pi))=|\{\sigma:x(\sigma)=x(\pi)\}|.
\]
If $x(\pi)$ is such that $\delta(x(\pi))\geq\delta(x(\sigma))$ for all $%
\sigma\in S_{n}$, we write $x_{\max}^{n}=x(\pi)$ and we say that $x(\pi)$
has \emph{maximum degeneracy}. The following table contains $x_{n}$, $%
x_{\max}^{n}$ and $\delta(x_{\max}^{n})$ for $n=1,\ldots,8$.\vspace{10pt}
\begin{tabular}{||llll||llll||}
\hline
$n$ & $x_{n}$ & $x_{\max}^{n}$ & $\delta(x_{\max}^{n})$ & $n$ & $x_{n}$ & $%
x_{\max}^{n}$ & $\delta(x_{\max}^{n})$ \\ \hline
$1$ & \multicolumn{1}{|l}{$1$} & \multicolumn{1}{|l}{$1$} &
\multicolumn{1}{|l||}{$1$} & $5$ & \multicolumn{1}{|l}{$59$} &
\multicolumn{1}{|l}{$001111100$} & \multicolumn{1}{|l||}{$6$} \\ \hline
$2$ & \multicolumn{1}{|l}{$2$} & \multicolumn{1}{|l}{$020,101$} &
\multicolumn{1}{|l||}{$1$} & $6$ & \multicolumn{1}{|l}{$246$} &
\multicolumn{1}{|l}{$00011211000$} & \multicolumn{1}{|l||}{$12$} \\ \hline
$3$ & \multicolumn{1}{|l}{$5$} & \multicolumn{1}{|l}{$01110$} &
\multicolumn{1}{|l||}{$2$} & $7$ & \multicolumn{1}{|l}{$1105$} &
\multicolumn{1}{|l}{$0001111111000$} & \multicolumn{1}{|l||}{$28$} \\ \hline
$4$ & \multicolumn{1}{|l}{$16$} & \multicolumn{1}{|l}{$0012100$} &
\multicolumn{1}{|l||}{$3$} & $8$ & \multicolumn{1}{|l}{$5270$} &
\multicolumn{1}{|l}{$000011121110000$} & \multicolumn{1}{|l||}{$76$} \\
\hline
\end{tabular}%
\vspace{15pt}
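The first entries of this table are cheap to reproduce by exhaustive search (a brute-force sketch, our code; `xray` computes the antidiagonal sums of the definition):

```python
from collections import Counter
from itertools import permutations

def xray(p):
    n = len(p)
    x = [0] * (2 * n - 1)
    for i, v in enumerate(p, start=1):
        x[i + v - 2] += 1
    return tuple(x)

# (x_n, delta(x_max^n)) for n = 1..5, as in the table
expected = {1: (1, 1), 2: (2, 1), 3: (5, 2), 4: (16, 3), 5: (59, 6)}
for n, (xn, delta_max) in expected.items():
    counts = Counter(xray(p) for p in permutations(range(1, n + 1)))
    assert len(counts) == xn                  # number of distinct X-rays
    assert max(counts.values()) == delta_max  # maximum degeneracy

# the tabulated most degenerate X-rays for n = 4 and n = 5
counts4 = Counter(xray(p) for p in permutations(range(1, 5)))
assert counts4[(0, 0, 1, 2, 1, 0, 0)] == 3            # 0012100
counts5 = Counter(xray(p) for p in permutations(range(1, 6)))
assert counts5[(0, 0, 1, 1, 1, 1, 1, 0, 0)] == 6      # 001111100
```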
It is not difficult to characterize the X-rays with maximum degeneracy. One
can verify by induction that for $n$ even,%
\[
x_{\max}^{n}=00\ldots011\ldots121\ldots110\ldots00,
\]
with $n/2$ left-zeros and right-zeros, and $n/2-1$ ones on each side of the central $2$; for $n$ odd,
\[
x_{\max}^{n}=00\ldots011\ldots110\ldots00,
\]
with $(n-1)/2$ left-zeros and right-zeros, and $n$ ones. Notice that if $%
x(\pi)=x_{\max}^{n}$ (for $n$ odd) then $P_{\pi}$ can be seen as a
hexagonal lattice with all sides of length $\left( n+1\right) /2$. In each
cell of the lattice there is $0$ or $1$, and $1$ is in exactly $n$ cells;
the column-sums are $1$ and the diagonal and anti-diagonal sums are $0$.
This observation describes a bijection between permutations of odd order
whose X-ray is $x_{\max}^{n}$ and zero-sum arrays. An $(m,2n+1)$-\emph{%
zero-sum} array is an $m\times(2n+1)$ matrix whose $m$ rows are permutations
of the $2n+1$ integers $-n,-n+1,\ldots,n$ and in which the sum of each
column is zero \cite{bp}. The matrix
\[
\left[
\begin{array}{rrr}
-1 & 0 & 1 \\
0 & 1 & -1 \\
1 & -1 & 0%
\end{array}
\right]
\]
is an example of $(3,3)$-zero-sum array. Thus we have the next result.
\begin{proposition}
The number $\delta(x_{\max}^{n})$ for $n$ odd is equal to the number of $%
(3,2n+1)$-zero-sum arrays \textrm{(}see \textrm{\cite[Sequence A002047]{s})}.
\end{proposition}
Before concluding the section, it may be interesting to notice that if we
sum entry-wise the X-rays of all permutations in $\mathcal{S}_{n}$ we obtain
the following sequence of $2n-1$ terms:
\[
(n-1)!,2(n-1)!,\ldots,(n-1)(n-1)!,n!,(n-1)(n-1)!,\ldots,2(n-1)!,(n-1)!.
\]
The meaning of the terms is clear: each cell of the $n\times n$ matrix is occupied in exactly $(n-1)!$ of the permutation matrices, and the $k$-th antidiagonal contains $\min(k,2n-k)$ cells.
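This coordinate-wise sum is easy to confirm for small $n$ (our code; for $n=4$ the sequence is $6,12,18,24,18,12,6$):

```python
from itertools import permutations
from math import factorial

def xray(p):
    n = len(p)
    x = [0] * (2 * n - 1)
    for i, v in enumerate(p, start=1):
        x[i + v - 2] += 1
    return tuple(x)

n = 4
# sum the X-rays of all permutations of [n] coordinate by coordinate
totals = [sum(col) for col in zip(*(xray(p) for p in permutations(range(1, n + 1))))]
assert totals == [6, 12, 18, 24, 18, 12, 6]
# each antidiagonal k has min(k, 2n-k) cells, each hit by (n-1)! permutations
assert totals == [min(k, 2 * n - k) * factorial(n - 1) for k in range(1, 2 * n)]
```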
\section{Simple permutations and X-rays}
In the previous section we have considered the X-rays with maximum
degeneracy. What can we say about X-rays with degeneracy $1$? If $%
\delta(x(\pi))=1$ then $\pi$ is an involution (in such a case $%
P_{\pi}=P_{\pi}^{-1}$) but the converse is not necessarily true. In fact
consider the involution $\pi=1267534$. One can verify that $%
x(\pi)=x(\sigma)=x(\rho)=1010000212000$, for $\rho=1275634$ and $%
\sigma=1267453$. In a first approach to the problem, it seems useful to
study what kind of operations can be done \textquotedblleft
inside\textquotedblright\ a permutation matrix $P_{\pi}$ in order to obtain
another permutation, say $P_{\sigma}$, such that $x(\pi)=x(\sigma)$ and $%
P_{\pi}\neq P_{\sigma}^{-1}$. An intuitively good framework for this task is
provided by the notion of \emph{block permutation}. A \emph{segment} and a
\emph{range} of a permutation are a set of consecutive positions and a set
of consecutive values. For example, in the permutation $34512$, the segment
formed by the positions $2,3,4$ is occupied by the values $4,5,1$; the
elements $1,2,3$ form a range. A \emph{block} is a segment whose values form
a range. Every permutation has singleton blocks together with the block $%
12\ldots n$. A permutation is called \emph{simple} if these are the only
blocks \cite{a}. A permutation is said to be a \emph{block permutation} if
it is not simple. Note that if $\pi$ is simple then so is $\pi^{-1}$. Let $%
S=(\pi_{1}\in\mathcal{S}_{n_{1}},\ldots ,\pi_{k}\in\mathcal{S}_{n_{k}})$ be
an ordered set and let $\pi\in\mathcal{S}_{k}$. We assume that in $S$ there
exists $1\leq i\leq k$ such that $n_{i}>1$. We denote by $P(\pi,S)$ the $%
(n_{1}+\cdots +n_{k})$-dimensional permutation matrix which is partitioned
in $k^{2}$ blocks, $B_{1,1},\ldots ,B_{k,k}$, such that $B_{i,j}=P_{\pi_{i}}$
if $\pi(i)=j$ and $B_{i,j}=0$, otherwise. We denote by $\pi\lbrack\pi_{1},%
\ldots ,\pi_{k}]$ (or equivalently by $(\pi)[S]$) the permutation
corresponding to $P(\pi,S)$. For example, let $S=(231,21,312)$ and $\pi=231$%
. Then
\[
P(231,S)=\left[
\begin{array}{ccc}
0 & P_{231} & 0 \\
0 & 0 & P_{21} \\
P_{312} & 0 & 0%
\end{array}
\right]
\]
and $231[S]=231[231,21,312]=56487312$. The matrix $P(231,S)$ can be modified
leaving the X-ray of $(\pi)[S]$ invariant:%
\[
P(231,(312,21,312))=\left[
\begin{array}{ccc}
0 & P_{312}=P_{231}^{T} & 0 \\
0 & 0 & P_{21} \\
P_{312} & 0 & 0%
\end{array}
\right].
\]
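The construction $P(\pi,S)\mapsto\pi[S]$ is easy to mechanize; the following sketch (our code and naming conventions) reproduces the example and checks the X-ray-preserving substitution.

```python
def inflate(pattern, blocks):
    """Compute pattern[blocks]: block i occupies the pattern(i)-th value interval,
    whose size equals the size of the block feeding that interval."""
    k = len(pattern)
    feeder = {pattern[i]: i for i in range(k)}              # block-row sent to column j
    col_sizes = [len(blocks[feeder[j + 1]]) for j in range(k)]
    offsets = [sum(col_sizes[:j]) for j in range(k)]        # value offset of interval j+1
    out = []
    for i, b in enumerate(blocks):
        off = offsets[pattern[i] - 1]
        out.extend(off + v for v in b)
    return tuple(out)

def xray(p):
    n = len(p)
    x = [0] * (2 * n - 1)
    for i, v in enumerate(p, start=1):
        x[i + v - 2] += 1
    return tuple(x)

S = [(2, 3, 1), (2, 1), (3, 1, 2)]
assert inflate((2, 3, 1), S) == (5, 6, 4, 8, 7, 3, 1, 2)    # 231[231,21,312] = 56487312

# replacing the block 231 by its inverse 312 leaves the X-ray unchanged,
# since a block and its inverse have equal X-rays and the same size
S2 = [(3, 1, 2), (2, 1), (3, 1, 2)]
assert xray(inflate((2, 3, 1), S)) == xray(inflate((2, 3, 1), S2))
```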
It is clear that $56487312$ is a block permutation. Even if $\pi$ is a simple
permutation, we may have $\delta(x(\pi))>1$. In fact, the permutation $%
\pi=531642$ is simple, but $\delta(x(\pi))=6$, since $x(\pi
)=00111011100=x(526134)=x(461253)$, plus the respective inverses. The
permutations $526134$ and $461253$ are decomposable. This means that there
possibly exists a decomposable permutation $\sigma$ such that $x(\sigma
)=x(\pi)$, even if $\pi$ is simple. The relation between simple
permutations and X-rays of small degeneracy is not clear. Intuitively, a
simple permutation allows less \textquotedblleft freedom of
movement\textquotedblright\ than a block permutation. It is also intuitive
that we have low degeneracy when the nonzero entries of the X-ray are
\textquotedblleft distributed widely\textquotedblright\ among the $2n-1$
coordinates. The following result is easily proved.
\begin{proposition}
Let $\sigma=\pi\lbrack S]=\pi\lbrack\pi_{1},\ldots,\pi_{k}]$ be a block
permutation. Then $\delta(x(\pi))>1$ if one of the following two conditions
is satisfied:
(1) If $\pi\neq12\ldots n$ then there is at least one $\pi_{i}\in S$ which
is not an involution;
(2) If $\pi=12\ldots n$ then there are at least two $\pi_{i},\pi_{j}\in S$
which are not involutions.
\end{proposition}
\begin{proof}
(1) Let $\pi\neq12\ldots n$ be any permutation. Take $\pi_{i}^{-1}$ for some
$\pi_{i}\in S$. Let $\rho=\pi\lbrack\pi_{1},\ldots,\pi_{i}^{-1},\ldots,%
\pi_{k}]$. Since $\sigma$ is a block permutation, $x(\sigma)=x(\rho)$.
However, if $\pi_{i}\neq\pi_{i}^{-1}$ then $\sigma\neq\rho$ and $%
\sigma^{-1}\neq\sigma$. It follows that $x(\sigma)$ does not specify $\sigma$%
. (2) Let $\pi=12\ldots n$. Let all elements of $S$ be involutions except $%
\pi_{i}$. Take $\pi_{i}^{-1}$. Let $\rho=\pi\lbrack\pi
_{1},\ldots,\pi_{i}^{-1},\ldots,\pi_{k}]$. Again, $x(\sigma)=x(\rho)$, but
this time $\rho=\sigma^{-1}$. Then $x(\sigma)$ possibly specifies $\sigma$.
If, for distinct $i,j$, there are $\pi_{i},\pi_{j}\in S$ such that $%
\pi_{i}\neq\pi _{i}^{-1}$ and $\pi_{j}\neq\pi_{j}^{-1}$ then
\[
x(\sigma^{\prime})=x(\pi
\lbrack\pi_{1},\ldots,\pi_{i}^{-1},\ldots,\pi_{j}^{-1},\ldots,\pi_{k}])=x(%
\sigma),
\]
but $x(\sigma)$ does not specify $\sigma$, given that $\rho\neq\sigma^{-1}$.
\end{proof}
These conditions are, however, not necessary for having $\delta(x(\pi))>1$.
Permutations with equal X-rays are said to be in the same \emph{degeneracy
class}. The table below contains the number of permutations in $\mathcal{S}%
_{n}$ which are in each degeneracy class, and the number of different
degeneracy classes with the same cardinality, for $n=2,\ldots ,7$. These
numbers provide a partition of $n!$. We denote by $C(n)$ the number of
distinct cardinalities of degeneracy classes. We write $a(b)$, where $a$ is
the number of permutations in the degeneracy class and $b$ the number of
degeneracy classes of that cardinality:\vspace{10pt}
\begin{tabular}{||l||}
\hline
$C(2)=1$: 1(2) \\ \hline
$C(3)=2$: 1(4),2(1) \\ \hline
$C(4)=3$: 1(9),2(6),3(1) \\ \hline
$C(5)=5$:\ 1(20),2(26),3(6),4(6),6(1) \\ \hline
$C(6)=10$: 1(49),2(100),3(19),4(43),5(1),6(19),7(2),8(11),9(1),12(1) \\ \hline
$C(7)=20$: 1(114),2(345),3(60),4(229),5(18),6(118),7(11),8(98),10(29) \\
\hline
11(2),12(33),14(13),16(14),18(6),20(4),21(1),22(2),26(1),28(1). \\ \hline
\end{tabular}%
\ .\vspace{10pt}
We conjecture that if $\delta(x(\pi))=1$ then $x(\pi)$ does not have more
than $2$ adjacent nonzero coordinates. The converse is not true for $%
\pi \in\mathcal{S}_{n}$ with $n\geq8$: for $\pi=17543628$ and $%
\sigma=16547328$, we have $x(\pi)=x(\sigma)=100000320010001$, so $%
\delta(x(\pi))>1$, although this X-ray has no more than $2$ adjacent
nonzero coordinates.
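Both the table row for $n=4$ and the pair of permutations in $\mathcal{S}_{8}$ above can be verified by brute force. The following minimal Python sketch (hypothetical helper names) does so.

```python
from itertools import permutations
from collections import Counter

def xray(p):
    """Antidiagonal sums of the permutation matrix of p (1-indexed values)."""
    n = len(p)
    x = [0] * (2 * n - 1)
    for i, v in enumerate(p, start=1):
        x[i + v - 2] += 1
    return tuple(x)

def degeneracy_partition(n):
    """Map {class cardinality: number of degeneracy classes of that cardinality}."""
    classes = Counter(xray(p) for p in permutations(range(1, n + 1)))
    return Counter(classes.values())

print(degeneracy_partition(4))          # table row for n = 4: 1(9), 2(6), 3(1)

pi    = (1, 7, 5, 4, 3, 6, 2, 8)        # 17543628
sigma = (1, 6, 5, 4, 7, 3, 2, 8)        # 16547328
print(xray(pi) == xray(sigma), ''.join(map(str, xray(pi))))
```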
\section{Binary X-rays}
In general, it does not seem to be an easy task to characterize X-rays. A
special case is given by X-rays associated with circulant permutation
matrices, for which an exact characterization is available. An X-ray $x(\pi)$
is said to be \emph{binary} if $x_{i}(\pi)\in\{0,1\}$ for every $1\leq
i\leq2n-1$. The set of all permutations in $\mathcal{S}_{n}$ with a binary X-ray
is denoted by $\mathcal{B}_{n}$. Counting binary X-rays means solving a
modified version of the $n$-queens problem (see, \emph{e.g.}, \cite{v}) in
which two queens do not attack each other if they are in the same
northwest--southeast diagonal. The permutations with binary X-rays associated
to circulant matrices are characterized in a straightforward way. Let $C_{n}$
be the permutation matrix associated with the permutation $c_{n}=23\ldots n1$%
, that is, the \emph{basic circulant permutation matrix}. The matrices in the
set $\mathcal{C}_{n}=\{C_{n}^{0},C_{n},C_{n}^{2},\ldots,C_{n}^{n-1}\}$ ($%
C_{n}^{0}$ is the identity matrix) are called the \emph{circulant
permutation matrices}. The matrix $C_{n}^{k}$ is associated to $c_{n}^{k}$.
Observe that, for $\pi\in\mathcal{B}_{n}$, $x(\pi)$ can be seen as a binary
number, since $x_{i}(\pi)\in\{0,1\}$ for every $i$. Let
\[
d_{j}(\pi)=2^{2n-1-j}\cdot x_{j}(\pi),\quad j=1,2,\ldots,2n-1,
\]
and $d(\pi)=\sum_{j=1}^{2n-1}d_{j}(\pi)$, that is, the value of $x(\pi)$
read as a binary number. The table below lists the X-rays of $\mathcal{C}_{3}$
and $\mathcal{C}_{5}$, and their decimal values:\vspace{10pt}
\[
\begin{tabular}{||l|l|l||l|l|l||}
\hline
$\pi$ & $x(\pi)$ & $d(\pi)$ & $\pi$ & $x(\pi)$ & $d(\pi)$ \\ \hline
$123$ & $10101$ & $21$ & $12345$ & $101010101$ & $341$ \\ \hline
$231$ & $01110$ & $14$ & $23451$ & $010111010$ & $186$ \\ \hline
& & & $34512$ & $001111100$ & $124$ \\ \hline
\end{tabular}%
\ .
\]
For $\pi=c_{n}^{k}$, $0\leq k\leq n-1$, one can verify that
\[
d(\pi)=\frac{1}{3}\left( 2^{2n-k}-2^{k}+2^{n+k}-2^{n-k}\right) =\frac{1}{3}%
\left( 2^{n}-1\right) \left( 2^{k}+2^{n-k}\right) .
\]
In particular $d(c_{n}^{k})=d(c_{n}^{n-k})$ for $1\leq k\leq n-1$, and for
odd $n$ and $0\leq k\leq\frac{n-1}{2}$ this can be rewritten as $d(\pi
)=a\left( \tfrac{n+1}{2}-k\right) \left( 2^{n}-1\right) 2^{k}$, where
$a(j)=(2^{2j-1}+1)/3$ (A007583).
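One convenient product form is $d(c_{n}^{k})=\tfrac{1}{3}(2^{n}-1)(2^{k}+2^{n-k})$, which reproduces all the tabulated values. The following minimal Python sketch (hypothetical helper names) checks it against direct computation for the odd $n$ of the tables.

```python
def xray_decimal(p):
    """d(pi): the X-ray of pi read as a base-2 digit string."""
    n = len(p)
    x = [0] * (2 * n - 1)
    for i, v in enumerate(p, start=1):
        x[i + v - 2] += 1
    # d_j = 2^(2n-1-j) * x_j for j = 1..2n-1 (here j is 0-indexed)
    return sum(xi * 2 ** (2 * n - 2 - j) for j, xi in enumerate(x))

def circulant(n, k):
    """The permutation c_n^k (k-fold power of the basic cycle 23...n1)."""
    return tuple((i + k) % n + 1 for i in range(n))

# product form (2^n - 1)(2^k + 2^(n-k)) / 3, checked for odd n and all k
for n in (3, 5, 7):
    for k in range(n):
        assert xray_decimal(circulant(n, k)) == (2**n - 1) * (2**k + 2**(n - k)) // 3
print(xray_decimal(circulant(3, 0)), xray_decimal(circulant(5, 1)))   # 21 186
```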
In the attempt to count binary X-rays, we relate these objects to score
sequences of tournaments. A \emph{tournament}
is a loopless digraph such that for every two distinct vertices $i$ and $j$
either $\left( i,j\right) $ or $\left(j,i\right)$ is an arc \cite{bk}. The
\emph{score sequence} of a tournament on $n$ vertices is the vector of
length $n$ whose entries are the out-degrees of the vertices of the
tournament rearranged in nondecreasing order.
\begin{proposition}
Let $b_{n}$ be the number of binary X-rays of permutations in $\mathcal{S}_{n}$ and let $%
s_{n}$ be the number of different score sequences of tournaments on $n$
vertices \textrm{(}see \textrm{\cite[Sequence A000571]{s})}. Then $b_n\leq
s_n$.
\end{proposition}
\begin{proof}
The number $s_{n}$ equals the number of integer lattice points $%
(p_{0},\dots,p_{n})$ in the polytope $P_{n}$ given by the inequalities $%
p_{0}=p_{n}=0$, $2p_{i}-p_{i+1}-p_{i-1}\leq1$ and $p_{i}\geq0$, for $%
i=1,\dots,n-1$, see \cite{bk}. Let $x_{1},\dots,x_{n}$ be the coordinates
related to $p_{1},\dots,p_{n}$ by $p_{i}=x_{1}+\dots+x_{i}-i^{2}$, for $%
i=1,\dots,n$. We can rewrite the inequalities defining the polytope $P_{n}$
in these coordinates as follows: $x_{1}+\dots+x_{i}\geq i^{2}$, $x_{i+1}\geq
x_{i}+1$ and $x_{1}+\dots+x_{n}=n^{2}$. For a permutation $w\in\mathcal{S}%
_{n}$ with a binary X-ray, let $l_{i}=l_{i}(w)$ be the position of the $i$%
-th `$1$' in its X-ray. In other words, the sequence $(l_{1},\dots,l_{n})$
is the increasing rearrangement of the sequence $(w_{1},w_{2}+1,w_{3}+2,%
\dots,w_{n}+n-1)$. Then the numbers $l_{1},\dots ,l_{n}$ satisfy the
inequalities defining the polytope $P_{n}$ (in the $x$-coordinates). Indeed,
$l_{1}+\dots+l_{n}=w_{1}+(w_{2}+1)+\cdots +(w_{n}+n-1)=n^{2}$; $l_{i+1}\geq
l_{i}+1$; and the minimal possible value of $l_{1}+\cdots+l_{i}$ is $%
(1+0)+(2+1)+\cdots+(i+(i-1))=i^{2}$. This finishes the proof. In order
to prove that $b_{n}=s_{n}$, it would be enough to show that, for any
integer point $(x_{1},\dots,x_{n})$ satisfying the above inequalities, we
can find a permutation $w\in\mathcal{S}_{n}$ with $x_{i}=l_{i}(w)$.
\end{proof}
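The injection used in the proof is easy to verify computationally for small $n$: every permutation with a binary X-ray yields a lattice point of the polytope. A minimal Python sketch (hypothetical helper names) follows.

```python
from itertools import permutations

def xray(p):
    n = len(p)
    x = [0] * (2 * n - 1)
    for i, v in enumerate(p, start=1):
        x[i + v - 2] += 1
    return tuple(x)

def lattice_point(p):
    """Increasing rearrangement of (w_1, w_2 + 1, ..., w_n + n - 1)."""
    return sorted(v + i for i, v in enumerate(p))

def in_polytope(l, n):
    """The inequalities defining P_n in the x-coordinates."""
    return (sum(l) == n * n
            and all(l[i + 1] >= l[i] + 1 for i in range(n - 1))
            and all(sum(l[:i]) >= i * i for i in range(1, n + 1)))

for n in range(2, 7):
    binary = [p for p in permutations(range(1, n + 1)) if max(xray(p)) == 1]
    assert all(in_polytope(lattice_point(p), n) for p in binary)
print("checked n = 2..6")
```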
\begin{conjecture}
\label{conjbin} All binary X-rays of permutations in $\mathcal{S}_{n}$ are
in a bijective correspondence with integer lattice points $%
(x_{1},\dots,x_{n})$ of the polytope given by the inequalities
\[
\begin{array}{l}
x_{1}+\cdots+x_{i}\geq i^{2},\qquad i=1,\dots,n; \\
x_{1}+\cdots+x_{n}=n^{2}, \\
x_{i+1}-x_{i}\geq1,\qquad i=1,\dots,n-1.%
\end{array}
\]
For a permutation $w\in\mathcal{S}_{n}$, the corresponding sequence $(x_{1},\dots
,x_{n})$ is defined as the increasing rearrangement of the sequence $%
(w_{1},w_{2}+1,w_{3}+2,\dots,w_{n}+n-1)$.
\end{conjecture}
Again, it is clear that X-rays injectively map into the integer points of
the above polytope. One needs to show that there will be no gaps in the
image. Also, it can be shown that the above conjecture is equivalent to
Conjecture 2.2 from \cite{N} concerning extremal Skolem sequences. The
conjecture turns out to be false when not restricted to binary X-rays.
We conjecture also that the number of different X-rays of permutations in
$\mathcal{S}_{n}$ whose possible entries are $0$ and $2$ is equal to
the number of score sequences of tournaments with $n$ players, when 3 points
are awarded in each game (see \cite[Sequence A047729]{s}).
\section{Palindromic X-rays}
What can we say about X-rays with special symmetries? The \emph{reverse} of $%
x(\pi)$, denoted by $\overline{x}(\pi)$, is the mirror image of $x(\pi)$. If
$x(\pi)=\overline{x}(\pi)$ then $\pi$ is said to be \emph{palindromic}. The
\emph{reverse} of $\pi$, denoted by $\overline{\pi}$, is the mirror image of $%
\pi $. For example, if $\pi=25143$ then $\overline{\pi}=34152$. The
permutation matrix $P_{\overline{\pi}}$ is obtained by writing the rows of $%
P_{\pi}$ in reverse order. In general $\overline{x}(\pi)\neq x(\overline{\pi}%
)$. In fact, for $\pi=25143$, we have $x(\pi)=011001200$, $\overline{x}%
(\pi)=002100110$ and $x(\overline{\pi})=002011010$. We denote by $|M$ and $%
\underline{M}$ the matrices obtained by writing the columns and the rows of
a matrix $M$ in reverse order, respectively.
\begin{proposition}
Let $l_{n}$ be the number of permutations in $\mathcal{S}_{n}$ with
palindromic X-rays and let $i_{n}$ be the number of involutions in $\mathcal{%
S}_{n}$ \textrm{(}see \textrm{\cite[Sequence A000085]{s})}. Then, in
general, $l_{n}>i_{n}$.
\end{proposition}
\begin{proof}
Recall that a permutation $\pi$ is an \emph{involution} if $\pi=\pi^{-1}$.
Since $P_{\pi}=P_{\pi}^{T}$, it is clear that the diagonal X-ray of an
involution $\pi$ is palindromic. The X-ray of $\sigma$ such that $P_{\sigma
}=|P_{\pi}$ is then also palindromic. This shows that $l_{n}\geq i_{n}$.
Now, consider a permutation matrix of the form%
\[
P_{\sigma}=\left[
\begin{array}{cc}
P_{\rho} & 0 \\
0 & P_{\rho}^{T}%
\end{array}
\right],
\]
for some permutation $\rho$ which is not an involution. Then $P_{\rho}\neq
P_{\rho}^{T}$, $P_{\sigma}\neq P_{\sigma}^{T}$ and $\sigma$ is not an
involution, but the diagonal X-ray of $\sigma$ is palindromic. The X-ray of $%
\pi$ such that $P_{\pi}=|P_{\sigma}$ is then also palindromic. This proves
the proposition.
\end{proof}
The next table contains the values of $l_{n}$ for small $n$:
\[
\begin{tabular}{||l|l||l|l||l|l||l|l||}
\hline
$n$ & $l_{n}$ & $n$ & $l_{n}$ & $n$ & $l_{n}$ & $n$ & $l_{n}$ \\ \hline
$2$ & $2$ & $4$ & $12$ & $6$ & $128$ & $8$ & $2110$ \\ \hline
$3$ & $4$ & $5$ & $32$ & $7$ & $436$ & $9$ & $8814$ \\ \hline
\end{tabular}
\ .
\]
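The small values in the table can be reproduced by exhaustive search; a minimal Python sketch (hypothetical helper names) follows.

```python
from itertools import permutations

def xray(p):
    n = len(p)
    x = [0] * (2 * n - 1)
    for i, v in enumerate(p, start=1):
        x[i + v - 2] += 1
    return tuple(x)

def palindromic_count(n):
    """l_n: number of permutations in S_n whose X-ray is palindromic."""
    return sum(1 for p in permutations(range(1, n + 1))
               if xray(p) == xray(p)[::-1])

print([palindromic_count(n) for n in range(2, 6)])   # table values: 2, 4, 12, 32
```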
\begin{proposition}
Let $l_{n,A=D}$ be the number of permutations in $\mathcal{S}_{n}$ with:
(1) equal diagonal and antidiagonal X-rays;
(2) palindromic X-rays.\newline
Let $r_{n}$ be the number of permutations in $\mathcal{S}_{n}$ invariant
under the operation of first reversing and then taking the inverse \textrm{(}%
see \textrm{\cite[Sequence A097296]{s})}. Then, in general, $l_{n,A=D}>r_{n}$.
\end{proposition}
\begin{proof}
We first construct the permutations which are invariant under the operation
of first reversing and then taking the inverse. Let $\pi\in\mathcal{S}_{n}$,
where $n=2k$. We look at $P_{\pi}$ as partitioned into $4$ blocks:
\[
P_{\pi}=\left[
\begin{array}{cc}
A & B \\
C & D%
\end{array}
\right] .
\]
If
\[
P_{\pi}=(\underline{P_{\pi}})^{T}=\left[
\begin{array}{cc}
A & B \\
C & D%
\end{array}
\right] ^{T}=\left[
\begin{array}{cc}
\underline{B} & \underline{A} \\
\underline{D} & \underline{C}%
\end{array}
\right] ^{T}=\left[
\begin{array}{cc}
(\underline{B})^{T} & (\underline{D})^{T} \\
(\underline{A})^{T} & (\underline{C})^{T}%
\end{array}
\right]
\]
then $A=(\underline{B})^{T},B=(\underline{D})^{T},C=(\underline{A})^{T}$ and
$D=(\underline{C})^{T}$. This implies that the X-ray of $P_{\pi}$ is
palindromic and, moreover, that the diagonal and antidiagonal X-rays are equal.
Note that we can construct $P_{\pi}$ only if $n\equiv0\pmod{4}$, and in
this case $r_{n}\neq0$. However, for fixed $n\equiv0\pmod{4}$, we have $%
r_{n}=r_{n+1}$, since the permutation matrix%
\[
P_{\sigma}=\left[
\begin{array}{ccc}
A & & \mathbf{0} \\
& 1 & \\
\mathbf{0} & & D%
\end{array}
\right] +\left[
\begin{array}{ccc}
\mathbf{0} & & B \\
& 1 & \\
C & & \mathbf{0}%
\end{array}
\right]
\]
can be always constructed from $P_{\pi}$. (Permutation matrices like $P_{\pi}
$ and $P_{\sigma}$ provide the solutions of the \textquotedblleft
rotationally invariant\textquotedblright\ $n$-rooks problem. This shows
that A097296 and A037224 are indeed the same sequence.) Now, the proposition
is easily proved by observing that, for $\rho=369274185$, $P_{\rho}$ is not
of the form of $P_{\sigma}$. A direct calculation shows that $r_{9}=12$ and $%
l_{9,A=D}=20$.
\end{proof}
% arXiv:math/0506334, https://arxiv.org/abs/math/0506334
% Title: On the X-rays of permutations (Combinatorics; Quantum Physics)
% arXiv:2004.05410, https://arxiv.org/abs/2004.05410
% Title: A lower bound on the saturation number, and graphs for which it is sharp
\begin{abstract}
Let $H$ be a fixed graph. We say that a graph $G$ is $H$-saturated if it has
no subgraph isomorphic to $H$, but the addition of any edge to $G$ results in
an $H$-subgraph. The saturation number $\mathrm{sat}(H,n)$ is the minimum
number of edges in an $H$-saturated graph on $n$ vertices. K\'aszonyi and
Tuza, in 1986, gave a general upper bound on the saturation number of a graph
$H$, but a nontrivial lower bound has remained elusive. In this paper we give
a general lower bound on $\mathrm{sat}(H,n)$ and prove that it is
asymptotically sharp (up to an additive constant) on a large class of graphs.
This class includes all threshold graphs and many graphs for which the
saturation number was previously determined exactly. Our work thus gives an
asymptotic common generalization of several earlier results. The class also
includes disjoint unions of cliques, allowing us to address an open problem
of Faudree, Ferrara, Gould, and Jacobson.
\end{abstract}
\section{Introduction}
Given a fixed forbidden graph $H$, what is the minimum number of edges
that any graph $G$ on $n$ vertices can have such that $G$ contains no
copy of $H$, but the addition of any single edge to $G$ results in a
copy of $H$? This question is a variation of the well-known forbidden
subgraph problem in extremal graph theory, which asks for the maximum
number of edges in an $H$-free graph on $n$ vertices. Asking for
the \emph{minimum} number of edges instead (and tailoring the
definition so that this is a sensible question) yields the notion of
the \emph{saturation number} of a graph $H$, first defined by
Erd\H{o}s, Hajnal, and Moon~\cite{EHM}, albeit with slightly
different terminology.
\begin{definition}
Let $H$ be a graph. For any graph $G$, we say that $G$ is
\emph{$H$-free} if it contains no subgraph isomorphic to $H$. We
say that $G$ is \emph{$H$-saturated} if it is $H$-free and for any
$xy \in \overline{E(G)}$, the graph $G+xy$ contains a subgraph
isomorphic to $H$. For $n \geq \sizeof{V(H)}$, let
$\Sat(H,n)$ denote the set of all $H$-saturated graphs on $n$
vertices, and let the \emph{saturation number} of $H$
be \[\sat(H,n) = \min_{G \in \Sat(H,n)} {|E(G)|}.\] In the event
that $\Sat(H,n) = \emptyset$, we adopt the convention that
$\sat(H,n)=\infty$. Note that this will only happen if $H$ has no
edges.
\end{definition}
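For very small cases, $\sat(H,n)$ can be computed by exhaustive search over all graphs on $n$ vertices. The following minimal Python sketch (hypothetical names; brute force over the $2^{10}$ graphs on $5$ vertices) recovers $\sat(K_3,5)=4$, attained by the star $K_{1,4}$.

```python
from itertools import combinations

n = 5
vertices = range(n)
all_edges = list(combinations(vertices, 2))

def has_triangle(edges):
    es = set(edges)
    return any({(a, b), (a, c), (b, c)} <= es
               for a, b, c in combinations(vertices, 3))

def is_k3_saturated(edges):
    es = set(edges)
    if has_triangle(es):
        return False
    return all(has_triangle(es | {e}) for e in all_edges if e not in es)

# minimum edge count over all K_3-saturated graphs on 5 vertices
best = min(len(es) for r in range(len(all_edges) + 1)
           for es in combinations(all_edges, r) if is_k3_saturated(es))
print(best)   # 4
```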
In their paper introducing the concept, Erd\H{o}s, Hajnal, and
Moon~\cite{EHM} determined $\sat(H,n)$ in the case where $H$ is a
complete graph. Since then, the saturation numbers have been
determined for various classes of graphs. A nice survey on these
results and more was written by Faudree, Faudree, and Schmitt
\cite{survey}.
The best known general upper bound on $\sat(H,n)$ was given by
K\'{a}szonyi and Tuza \cite{KT} and later slightly improved by Faudree
and Gould~\cite{FG13}. However, as mentioned in \cite{survey} and in
\cite{FFGJ09}, there is no known nontrivial general lower bound on
this function. In this paper we give such a bound, and determine a
class of graphs for which this bound is asymptotically sharp: for such
graphs, we can prove that $\sat(H,n) = \alpha_H n + O(1)$, where
$\alpha_H$ and the $O(1)$ term depend only on $H$. (We remark that it
is not known, in general, whether the limit
$\lim_{n \to \infty}\frac{\sat(H,n)}{n}$ exists, even though it is known~\cite{KT} that $\sat(H,n)$ is always
bounded by a linear function of $n$; the existence of
this limit was stated as a conjecture by Tuza~\cite{Tuza88}.)
This class of graphs includes all threshold graphs as well as some
non-threshold graphs. In particular, many previously-studied classes
of graphs fall into this class, including cliques~\cite{EHM},
stars~\cite{KT}, generalized books~\cite{CFG08}, disjoint unions of
cliques~\cite{FFGJ09}, generalized friendship graphs~\cite{FFGJ09},
and several of the ``nearly complete'' graphs of~\cite{FG13}. Our
result can be considered as an asymptotic common generalization of
these previous results: at the cost of no longer determining the
\emph{exact} saturation number as in the previous results, we obtain a
simple unified proof that gives the saturation number up to an
additive constant number of edges.
The rest of the paper is organized as follows. In
Section~\ref{sec:weight} we state and prove our general lower bound on
the saturation number, and prove an upper bound on the saturation
number of the graph $H'$ obtained from a graph $H$ by adding a
dominating vertex. In Section~\ref{sec:sat-sharp}, we define the
\emph{sat-sharp} graphs to be the graphs whose saturation numbers are
asymptotically equal to the lower bound of Section~\ref{sec:weight},
and prove that this class of graphs is closed under adding
isolated vertices and dominating vertices. In
Section~\ref{sec:threshold} we discuss threshold graphs, which are
contained within the class of sat-sharp graphs and encompass several graphs
whose saturation numbers were previously determined. Finally, in
Section~\ref{sec:disjoint-cliques}, we prove that any graph consisting
of a disjoint union of cliques is sat-sharp, and discuss the
implications of this.
\section{A weight function and some general bounds}\label{sec:weight}
In this section, we will define a weight function for a general graph
$H$, and prove that it gives a lower bound on the saturation number
$\sat(H,n)$. We will also prove a general bound relating the
saturation number of $H$ to the saturation number of the graph
$H'$ obtained from $H$ by adding a dominating vertex.
\begin{definition}
For a vertex $x$ in a graph $G$, let $N_G(x)$ and $N_G[x]$ denote the
\emph{open} and \emph{closed neighborhoods} of $x$ respectively:
\begin{align*}
N_G(x)&=\{y \in V(G)\colon\, xy \in E(G)\}, \\
N_G[x]&=N_G(x) \cup \{x\}.
\end{align*}
Let $d_G(x) = \sizeof{N_G(x)}$ denote the degree of $x$,
and for a vertex set $S$, let $d_{G,S}(x)$ denote the number of
neighbors of $x$ in the set $S$:
\[d_{G,S}(x) = \sizeof{N(x) \cap S}.\]
When the graph $G$ is clear from context, we omit it from our
notation and simply write $N(v)$, $d(v)$, or $d_S(v)$ as
appropriate.
\end{definition}
\begin{definition}
Let $uv$ be an edge in a graph with $d(u) \leq d(v)$. Define the
\emph{weight} $\wt(uv)$ of the edge $uv$ by
\[ \wt(uv) = 2\sizeof{N(u) \cap N(v)} + \sizeof{N(v) - N(u)}. \]
Define the weight of the graph $H$ by
\[ \wt(H) = \min_{uv \in E(H)} \wt(uv). \]
If $E(H)=\emptyset$, we define $\wt(H)=\infty$.
\end{definition}
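To illustrate the definition, the following minimal Python sketch (hypothetical helper names) computes $\wt(H)$ for a few familiar graphs; the resulting lower-bound slopes $(\wt(H)-1)/2$ are $2$, $1$, and $1/2$ for $K_4$, $K_{1,3}$, and $P_3$, in line with their known saturation numbers.

```python
def wt(edges):
    """wt(H): minimum over edges uv of 2|N(u) & N(v)| + |N(v) - N(u)|,
    with the edge oriented so that d(u) <= d(v)."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    def w(u, v):
        if len(adj[u]) > len(adj[v]):
            u, v = v, u
        return 2 * len(adj[u] & adj[v]) + len(adj[v] - adj[u])
    return min(w(a, b) for a, b in edges)

K4    = [(a, b) for a in range(4) for b in range(a + 1, 4)]
star3 = [(0, 1), (0, 2), (0, 3)]   # K_{1,3}
P3    = [(0, 1), (1, 2)]           # path on 3 vertices
print(wt(K4), wt(star3), wt(P3))   # 5 3 2
```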
\begin{lemma}\label{lem:wtlower}
For every graph $H$, there exists a constant $c'_H$ such that
\[ \sat(H, n) \geq \frac{\wt(H)-1}{2}n - c'_H. \]
\end{lemma}
\begin{proof}
First observe $\wt(H) \geq 1$ for all $H$ and that the claim is
trivial when $\wt(H) = 1$, so (as $\wt(H)$ is an integer) we may
assume that $\wt(H) \geq 2$. Let $G$ be an $H$-saturated graph,
let $x^*$ be a vertex of minimum degree in $G$, and let
$B = N_G(x^*)$, so that $\sizeof{B} = d_G(x^*)$. Observe that if
$d_G(x^*) \geq \wt(H)-1$, then the degree-sum formula immediately
gives $\sizeof{E(G)} \geq \frac{\wt(H)-1}{2}n$, so we may assume
that $d_G(x^*) < \wt(H) - 1$. As both of these quantities are
integers, we have $d_G(x^*) \leq \wt(H) - 2$.
Consider any vertex $y \in V(G) - N[x^*]$. By hypothesis, the graph
$G+x^*y$ contains a copy of $H$. Let $\phi : V(H) \to V(G+x^*y)$ be
an embedding of $H$ into $G+x^*y$. Since $G$ is $H$-free, the new
edge $x^*y$ must be the image of some edge $uv \in E(H)$. We may
take our notation so that $d_H(u) \leq d_H(v)$. Let
$b = \sizeof{N_H(u) \cap N_H(v)}$ and let $a = d_H(v) - b$,
so that $\wt(uv) = a + 2b$.
We first claim that $d_G(y) \geq a+b-1$; this requires considering
two cases, depending on whether $y = \phi(u)$ or $y = \phi(v)$.
If $y = \phi(u)$, then
\[d_G(y) \geq \delta(G) = d_G(x^*) \geq d_H(v) - 1 = a+b-1.\]
Similarly, if $y = \phi(v)$, then
$d_G(y) \geq d_H(v) -1 \geq a+b-1$. This establishes the claim.
Now, observe that regardless of whether $y = \phi(u)$ or
$y = \phi(v)$, we have
\[\phi(N_H(u) \cap N_H(v)) \subset N_G(x^*) = B.\] So in
$G$, the vertex $y$ has $b$ guaranteed neighbors in $B$, together
with at least $a-1$ additional edges which may go to $B$ or may go
to $\bar{B} - x^*$, where $\bar{B} = V(G) - B$. Therefore,
\[2d_{G,B}(y) + d_{G,\bar{B}}(y) \geq 2b + a - 1 = \wt(uv)-1 \geq \wt(H)-1\]
for all $y \in \bar{B} - x^*$.
Now, note that $\sum_{x \in B}d_G(x) \geq \sum_{y \in \bar{B}} d_{G,B}(y)$. So it follows that
\begin{align*}
\sizeof{E(G)} &= \frac{1}{2}\left(\sum_{x \in B} d_G(x)+\sum_{y \in \bar{B}} d_G(y) \right)\\
&\geq \frac{1}{2}\left(\sum_{y \in \bar{B}} d_{G,B}(y) +\sum_{y \in \bar{B}} d_G(y) \right)\\
&= \frac{1}{2}\sum_{y \in \bar{B}} \left(2d_{G,B}(y)+d_{G,\bar{B}}(y)\right)\\
&\geq \frac{2d_G(x^*)+\left(\wt(H)-1\right)\sizeof{\bar{B} - x^*}}{2}\\
&= \frac{2d_G(x^*)+\left(\wt(H)-1\right)\left(n-1-d_G(x^*)\right)}{2}\\
&= \frac{\wt(H)-1}{2}n - c'_H,
\end{align*}
where \[c'_H = \frac{d_G(x^*)(\wt(H) - 3)+(\wt(H) - 1)}{2}.\] Since we
assume $0 \leq d_G(x^*) \leq \wt(H)-2$, the value $c'_H$, considered
as a formal function of the quantity $d_G(x^*)$, is
maximized at $d_G(x^*) = \wt(H)-2$ whenever $\wt(H) \geq 3$; when
$\wt(H) = 2$ this function is decreasing in $d_G(x^*)$, but then
$d_G(x^*) \leq \wt(H)-2 = 0$, so the maximum is again attained at
$d_G(x^*) = \wt(H)-2$. Therefore,
\[ \sat(H, n) \geq \frac{\wt(H)-1}{2}n -
\frac{\wt(H)^2-4\wt(H)+5}{2} \] for $\wt(H) \geq 2$.
\end{proof}
\begin{figure}
\centering
\begin{tikzpicture}
\begin{scope}[xshift=-2cm]
\apoint{} (u) at (-.67cm, 0cm) {};
\apoint{} (v) at (.67cm, 0cm) {};
\apoint{} (w) at (0cm, 1cm) {};
\apoint{} (z1) at (-.5cm, 1.5cm) {};
\apoint{} (z2) at (.5cm, 1.5cm) {};
\draw (w) -- (v) -- (u) -- (w) -- (z1);
\draw (w) -- (z2);
\node[anchor=north] at (0cm, -1em) {$H$};
\end{scope}
\begin{scope}[xshift=2cm]
\apoint{} (u) at (-.67cm, 0cm) {};
\apoint{} (v) at (.67cm, 0cm) {};
\apoint{} (w) at (0cm, 1cm) {};
\apoint{} (z1) at (-.5cm, 1.5cm) {};
\apoint{} (z2) at (.5cm, 1.5cm) {};
\apoint{v^*} (x) at (0cm, 2cm) {};
\draw (w) -- (v) -- (u) -- (w) -- (z1);
\draw (w) -- (z2);
\draw (x) -- (z1); \draw (x) -- (z2); \draw (x) -- (w);
\draw (x) .. controls ++(0:1cm) and ++(45:1cm) .. (v);
\draw (x) .. controls ++(180:1cm) and ++(135:1cm) .. (u);
\node[anchor=north] at (0cm, -1em) {$H'$};
\end{scope}
\end{tikzpicture}
\caption{Forming $H'$ by adding a dominating vertex $v^*$ to the
graph $H$.}
\label{fig:add-dom}
\end{figure}
A central goal of this paper is to explore the effect on the
saturation number of the operation of adding a dominating vertex to
$H$, as shown in Figure~\ref{fig:add-dom}. It turns out that that
this gives a general upper bound on the saturation number of the new
graph in terms of the saturation number of $H$; we wish to know when
this upper bound is sharp. We believe that this upper bound is in
the same general spirit as Lemma 9 of K\'aszonyi--Tuza~\cite{KT}.
\begin{lemma}\label{lem:adddom}
If $H'$ is obtained from $H$ by adding a dominating vertex $v^*$,
then for all $n \geq \sizeof{V(H')}$, we have
$\sat(H', n) \leq (n-1) + \sat(H, n-1)$.
\end{lemma}
\begin{proof}
It suffices to produce an $H'$-saturated graph with at most the
indicated number of edges. Let $G$ be a minimum $H$-saturated graph
on $n-1$ vertices, and let $G'$ be obtained from $G$ by adding a new
dominating vertex $x^*$. It is clear that
$\sizeof{E(G')} = (n-1) + \sat(H, n-1)$; we show that $G'$ is
$H'$-saturated.
First we argue that $G'$ is $H'$-free. Suppose to the contrary that
$\phi : V(H') \to V(G')$ is an embedding of $H'$ into $G'$. If
$x^* \notin \phi(V(H'))$, then $\phi$ is an embedding of $H'$ into
$G$. Hence $G$ has a copy of $H'$ and thus a copy of $H$,
contradicting the $H$-freeness of $G$. If $\phi(v^*) = x^*$, then
the restriction of $\phi$ to $V(H)$ is an embedding of $H$ into $G$,
again a contradiction.
Hence we can assume that $\phi(v^*) \neq x^*$ and there is some
vertex $w^* \in V(H)$ with $\phi(w^*) = x^*$. Construct a new
embedding $\phi_0 : V(H) \to V(G)$ by letting
$\phi_0(w^*) = \phi(v^*)$ and taking $\phi_0(w) = \phi(w)$ for all
$w \neq w^*$. Since $\phi(v^*)$ dominates the image of $\phi$ in $G$
(as $v^*$ is a dominating vertex of $H'$), we see that $\phi_0$ is
still a valid embedding. Hence we have again obtained a copy of $H$
in $G$, a contradiction. We conclude that $G'$ is $H'$-free.
Finally we argue that adding any missing edge to $G'$ produces a new
copy of $H'$. Since $x^*$ is dominating, any missing edge in $G'$ is
an edge of the form $yz$ where $y,z \in V(G)$. Now $G + yz$
contains a copy of $H$, since $G$ is $H$-saturated; adding the
dominating vertex $x^*$ to this copy of $H$ gives a copy of $H'$ in
$G'+yz$.
\end{proof}
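The construction in the proof can be checked directly on a small example: $K_{1,4}$ is $K_3$-saturated, and adding a dominating vertex yields a $K_4$-saturated graph. A minimal Python sketch (hypothetical helper names) follows.

```python
from itertools import combinations

def norm(es):
    return {tuple(sorted(e)) for e in es}

def has_clique(es, vertices, size):
    E = norm(es)
    return any(all(e in E for e in combinations(S, 2))
               for S in combinations(vertices, size))

def is_saturated(es, vertices, size):
    """K_size-saturation: K_size-free, and every missing edge creates a K_size."""
    E = norm(es)
    if has_clique(E, vertices, size):
        return False
    missing = [e for e in combinations(vertices, 2) if e not in E]
    return all(has_clique(E | {e}, vertices, size) for e in missing)

# G = K_{1,4} (a minimum K_3-saturated graph); G' adds a dominating vertex 5
G  = [(0, i) for i in range(1, 5)]
Gp = G + [(5, i) for i in range(5)]
print(is_saturated(G, range(5), 3), is_saturated(Gp, range(6), 4))   # True True
```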
\section{Sat-sharp graphs}\label{sec:sat-sharp}
For a graph $H$, let
$\satlim(H) = \lim_{n \to \infty} \frac{\sat(H,n)}{n}$, provided that
this limit exists. Say a graph $H$ is \emph{sat-sharp} if
$\satlim(H) = \frac{\wt(H) - 1}{2}$. Moreover, say that a graph $H$ is
\emph{strongly sat-sharp} if
$\sat(H,n) = \frac{\wt(H) - 1}{2}n + O(1)$. Note that any strongly
sat-sharp graph is also sat-sharp. Also, note that by adopting the
convention that $\wt(H)=\infty$ when $E(H)=\emptyset$, we can conclude
that any graph with no edges is strongly sat-sharp since
$\sat(H,n)=\infty$ for all $n \geq \sizeof{V(H)}$.
In this section we will show that the classes of sat-sharp graphs and
strongly sat-sharp graphs are each closed under adding isolated and
dominating vertices. To express these results concisely, we write
statements like ``if $H$ is (strongly) sat-sharp, then $H'$ is
(strongly) sat-sharp'' as shorthand for the pair of statements ``if
$H$ is sat-sharp, then $H'$ is sat-sharp; if $H$ is strongly
sat-sharp, then $H'$ is strongly sat-sharp''.
As $K_1$ is strongly sat-sharp, these closure results immediately
imply that all threshold graphs are strongly sat-sharp (as we will
discuss in Section~\ref{sec:threshold}). They also imply that
\emph{any} graph $H$ which can be proven to be (strongly) sat-sharp
gives rise to many (strongly) sat-sharp graphs derived from $H$ by
these operations. In particular, we will prove in
Section~\ref{sec:disjoint-cliques} that a disjoint union of cliques is
strongly sat-sharp, although it is not in general a threshold graph;
this implies that any graph obtained from a disjoint union of cliques
via these operations is also strongly sat-sharp.
\begin{lemma}
If $H$ is a (strongly) sat-sharp graph, and $H'$ is obtained from $H$ by adding
isolated vertices, then $H'$ is (strongly) sat-sharp, and $\satlim(H') = \satlim(H)$.
\end{lemma}
\begin{proof}
For all $n \geq \sizeof{V(H')}$, a graph $G$ is $H'$-saturated if
and only if it is $H$-saturated, hence $\sat(H', n) = \sat(H, n)$
for all sufficiently large $n$.
\end{proof}
To handle the operation of adding a dominating vertex, we prove the
following two lemmas, which taken together show that the class of
(strongly) sat-sharp graphs is closed under the operation of
adding a dominating vertex.
\begin{lemma}\label{lem:sat-dom-smallwt}
Let $H$ be a $k$-vertex (strongly) sat-sharp graph, and let $H'$
be obtained from $H$ by adding a dominating vertex $v^*$.
If $H$ has no isolated vertices, or if $\wt(H) \leq k-2$,
then $\wt(H') = 2 + \wt(H)$ and $\satlim(H') = 1 + \satlim(H)$.
Moreover, $H'$ is also (strongly) sat-sharp.
\end{lemma}
\begin{lemma}\label{lem:sat-isol-dom-bigwt}
Let $H$ be a $k$-vertex graph with an isolated vertex $u$,
and let $H'$ be obtained from $H$ by adding a dominating vertex
$v^*$. If $\wt(H) > k-2$, then $H'$ is strongly sat-sharp, with $\wt(H') = k$ and
$\satlim(H') = \frac{k-1}{2}$.
\end{lemma}
Note that Lemma~\ref{lem:sat-isol-dom-bigwt} does not actually require the
graph $H$ to be sat-sharp, although that is the main case we are concerned with. In
the case where $H$ has no edges and so $\wt(H) = \infty$, the
hypothesis of Lemma~\ref{lem:sat-isol-dom-bigwt} applies, yielding
$\satlim(K_{1,k}) = \frac{k-1}{2}$; this is an asymptotic version of
the exact result of K\'aszonyi and Tuza~\cite{KT}.
\begin{proof}[Proof of Lemma~\ref{lem:sat-dom-smallwt}]
Let $\epsilon(H, n) = \sat(H,n) - \satlim(H)n$, so that $\epsilon(H,n) = o(n)$
when $H$ is sat-sharp and $\epsilon(H,n) = O(1)$ when $H$ is strongly sat-sharp.
By Lemma~\ref{lem:adddom}, we have
\[\sat(H', n) \leq (n-1) + \sat(H, n-1) = (\satlim(H)+1)n + \epsilon(H, n-1) - 1 .\] If we can prove that $\wt(H') \geq \wt(H) + 2$, then
Lemma~\ref{lem:wtlower} will give
\[\sat(H', n) \geq \frac{\wt(H')-1}{2}n-c'_{H'} = (\satlim(H)+1)n -
c'_{H'}.\] In particular, this implies that
$\satlim(H') = \frac{\wt(H')-1}{2}$ and that
$\sizeof{\epsilon(H', n)} \leq \sizeof{\epsilon(H,n)} + \sizeof{c'_{H'}} + 1$, so if
$H$ is (strongly) sat-sharp, then $H'$ is (strongly) sat-sharp.
An edge $e \in E(H)$ can be viewed (and its weight computed) either
as an edge of $H$ or as an edge of $H'$. We will use $\wt(e)$ and
$\wt'(e)$ to refer to the weight of such an edge computed in $H$ or
$H'$, respectively. Observe that if $uw \in E(H)$, then when we pass
to $H'$, we add $v^*$ as a new element of $N(u) \cap N(w)$ and change
nothing else about the sets $N(u) \cap N(w)$ or $N(w) - N(u)$. Hence,
$\wt'(e) = \wt(e) + 2$ for all $e \in E(H)$.
The only remaining edges of $H'$ are edges of the form $v^*u$ for
$u \in V(H)$. We claim that all such edges have weight at most
$2 + \wt(H)$. If $u$ is isolated in $H$, then we have
\[ \wt'(uv^*) = 2\sizeof{N(u) \cap N(v^*)} +
\sizeof{N(v^*) - N(u)} = 0 + k = k \geq 2+\wt(H), \] where the
last inequality follows from the assumption that $\wt(H) \leq k-2$
(since we assumed that $u$ is isolated and that $H$ either obeys
this weight inequality or is isolate-free).
On the other hand, if $u$ is not isolated in $H$, let $ut$ be
another edge incident to $u$. Observe that
\begin{align*}
\wt'(v^*u) &= 2\sizeof{N(u) \cap N(v^*)} + \sizeof{N(v^*) - N(u)} \\
&= 2d_H(u) + (k - d_H(u)) \\
&= d_H(u) + k
\end{align*}
and that $\wt(ut) \leq (d_H(u)-1) + (k-1)$ for any edge
$ut \in E(H)$. It follows that
$\wt'(v^*u) \geq \wt(ut)+2 = \wt'(ut)$. Hence, an edge of minimum
weight in $H'$ is found among the edges of $H$, and the smallest
such weight is $\wt(H) + 2$.
\end{proof}
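For graphs with no isolated vertices, the relation $\wt(H')=\wt(H)+2$ can be spot-checked numerically; the sketch below (hypothetical helper names, Python) does so for $H=P_3$.

```python
def wt(edges):
    """wt(H): minimum edge weight, edge oriented so that d(u) <= d(v)."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    def w(u, v):
        if len(adj[u]) > len(adj[v]):
            u, v = v, u
        return 2 * len(adj[u] & adj[v]) + len(adj[v] - adj[u])
    return min(w(a, b) for a, b in edges)

def add_dominating(edges, vertices, new):
    return edges + [(new, x) for x in vertices]

P3 = [(0, 1), (1, 2)]                  # no isolated vertices, wt(P3) = 2
H2 = add_dominating(P3, [0, 1, 2], 3)  # P3 plus a dominating vertex
print(wt(P3), wt(H2))                  # 2 4: the weight goes up by exactly 2
```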
\begin{proof}[Proof of Lemma~\ref{lem:sat-isol-dom-bigwt}]
We again write $\wt(e)$ to refer to the weight of an edge $e$
computed in $H$ and $\wt'(e)$ to refer to the weight of an edge $e$
when computed in $H'$.
As previously discussed, we have $\wt'(e) = \wt(e) + 2$ for every edge $e \in E(H)$.
On the other hand, considering the isolated vertex $u$, we see that $\wt'(uv^*) = k$,
as $\sizeof{N(u) \cap N(v^*)} = 0$ and $\sizeof{N(v^*) - N(u)} = k$.
If $\wt(H)+2 > k$, then this implies $\wt(H') = k$, with the only
edges of minimum weight being those edges joining $v^*$ with an
isolated vertex of $H$. This establishes the first claim of the lemma.
Lemma~\ref{lem:wtlower} now gives the lower bound
\[\sat(H',n) \geq \frac{k-1}{2}n - c'_{H'}.\] We establish a matching upper bound by constructing an
$H'$-saturated graph on $n$ vertices, for any $n \geq \sizeof{V(H')}$.
Let any $n \geq \sizeof{V(H')}$ be given, and write $n = qk + r$,
where $0 \leq r < k$. Let $G$ be the $n$-vertex graph consisting of $q$ disjoint copies of $K_k$
and a single copy of $K_r$. Clearly
\[ \sizeof{E(G)} =\frac{n-r}{k}{k \choose 2} + {r \choose 2} =
\frac{k-1}{2}n - \frac{r(k-r)}{2} \leq \frac{k-1}{2}n.\] So if we can argue that $G$ is
$H'$-saturated, then we will have $\satlim(H') = \frac{k-1}{2}$, and
we will have that $H'$ is strongly sat-sharp.
It is clear that $G$ is $H'$-free, since $H'$ is connected and has
$k+1$ vertices, while every component of $G$ has at most $k$
vertices. We claim that adding any edge to $G$ produces a subgraph
isomorphic to $H'$. Let $xy$ be a missing edge in $G$; we may assume
that $y$ lies in a copy of $K_k$. Let $Q$ be the set of vertices of
the copy of $K_k$ containing $y$.
Now observe that we can embed $H'$ into $G+xy$ by any injection $\phi : V(H') \to V(G)$
that satisfies:
\begin{itemize}
\item $\phi(u) = x$,
\item $\phi(v^*) = y$, and
\item $\phi(V(H') - \{u,v^*\}) = Q - y$,
\end{itemize}
and with $k-1$ vertices in $Q-y$, there is enough room to complete
the last part of the embedding. The key point is that there is no
edge, in $H'$, from $u$ to any vertex of $H'$ except for $v^*$, and
all vertices of $H'$ except for $u$ are being embedded into a clique
of $G$, so any edges they require are present. Thus, $G$ is
$H'$-saturated, which completes the proof.
\end{proof}
\section{Threshold Graphs}\label{sec:threshold}
A natural class of strongly sat-sharp graphs is the class of \emph{threshold graphs}. A
simple graph $G$ with vertex set $\{v_1, \ldots, v_n\}$ is a
\emph{threshold graph} if there exist weights
$x_1, \ldots, x_n \in \mathbb{R}$ such that, for all $i \neq j$, we have
$v_iv_j \in E(G)$ if and only if $x_i + x_j \geq 0$. Threshold graphs
were first introduced by Chv\'atal and Hammer~\cite{CH73, CH77},
albeit with a slightly different definition than the one we give here.
Threshold graphs admit many equivalent characterizations. For our
purposes, the following characterization is the most useful one.
\begin{theorem}[Chv\'atal--Hammer~\cite{CH77}; see also~\cite{MP}]
For a simple graph $G$, the following are equivalent:
\begin{enumerate}
\item $G$ is a threshold graph;
\item $G$ can be obtained from $K_1$ by iteratively adding a new vertex
which is either an isolated vertex or dominates all previous vertices.
\end{theorem}
In fact, \cite{MP} gives several other equivalent characterizations of
threshold graphs, but this is the one we will be interested in.
The results of Section~\ref{sec:sat-sharp}, together with this
characterization, immediately imply that all threshold graphs are
strongly sat-sharp. Furthermore, when a construction sequence for a threshold graph $G$
is given, one can use the lemmas from Section~\ref{sec:sat-sharp} to easily
compute $\wt(G)$ by iteratively computing the weight of each intermediate
subgraph, keeping track of the previous subgraph's weight and whether
or not it had an isolated vertex.
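The peeling view of the characterization above is easy to run in code: repeatedly remove a vertex that is either isolated or dominating, and accept exactly when the whole graph is consumed. The following Python sketch is our own illustration (function names are ours); it checks a small graph defined by real weights and rejects the standard non-threshold example $C_4$.

```python
def is_threshold(adj):
    """adj: dict vertex -> set of neighbours; peel isolated/dominating vertices."""
    adj = {v: set(nb) for v, nb in adj.items()}   # work on a copy
    while adj:
        n = len(adj)
        target = next((v for v, nb in adj.items()
                       if len(nb) == 0 or len(nb) == n - 1), None)
        if target is None:
            return False          # no isolated or dominating vertex left
        del adj[target]
        for nb in adj.values():
            nb.discard(target)
    return True

x = [-3, -1, 2, 4]                # weights: edge iff x_i + x_j >= 0
V = range(len(x))
adj = {i: {j for j in V if j != i and x[i] + x[j] >= 0} for i in V}
print(is_threshold(adj))          # True: threshold by construction

c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}   # 4-cycle
print(is_threshold(c4))           # False
```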
As discussed in the introduction, several graphs whose saturation
numbers were determined in previous work fall into the class of
strongly sat-sharp graphs. In particular, complete graphs~\cite{EHM},
stars~\cite{KT}, generalized books~\cite{CFG08}, stars plus an edge
\cite{FFGJ-trees}, and ``nearly complete'' graphs of the form $K_t - H$
for $H \in \{K_{1,3},K_4-K_{1,2},K_4-K_2\}$ are all threshold graphs.
Thus, all of these graphs are strongly sat-sharp, and their saturation
number is determined (up to a constant number of edges) by the
results of Section~\ref{sec:sat-sharp}.
As a non-example, we note that among the ``nearly complete'' graphs of
\cite{CFG08}, the graph $K_t - 2K_2$ is not a threshold graph, and in
fact \cite{CFG08} prove that
$\sat(K_t - 2K_2, n) = (t-\frac{5}{2})n + O(1)$, whereas
Lemma~\ref{lem:wtlower} only gives the lower bound
$\sat(K_t-2K_2, n) \geq (t - \frac{7}{2})n$.
K\'aszonyi and Tuza~\cite{KT} observed the ``irregularity'' that if
$H$ is the graph obtained from $K_4$ by adding a pendant edge, then
$\sat(H, n) \leq \frac{3}{2}n$ while $\sat(K_4, n) = 2n-3$, so that
$\sat(H,n) < \sat(K_4, n)$ for sufficiently large $n$ even though
$K_4 \subset H$. Both $K_4$ and the graph $H$ are threshold graphs; in
terms of our weight function, the irregularity can be seen to arise
from the fact that all edges of $K_4$ have weight $5$ while the
pendant edge of $H$ has weight $4$.
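The weight values quoted in this section are easy to replicate. The Python sketch below assumes the edge weight reduces to $\wt(uv)=d(u)+d(v)-1$ (an assumption on our part, but one consistent with every value computed here: weight $5$ for all edges of $K_4$, weight $4$ for the pendant edge of $H$, and minimum weight $2(p_1-2)+1$ inside a smallest clique $K_{p_1}$).

```python
from itertools import combinations

def min_edge_weight(vertices, edges):
    """Minimum over edges uv of d(u) + d(v) - 1 (assumed form of the weight)."""
    deg = {v: 0 for v in vertices}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return min(deg[u] + deg[v] - 1 for u, v in edges)

# K4: every edge has weight 3 + 3 - 1 = 5, matching sat(K4, n) = 2n - 3
# via the coefficient (wt - 1)/2 = 2.
k4_edges = list(combinations(range(4), 2))
print(min_edge_weight(range(4), k4_edges))    # 5

# H = K4 plus a pendant edge at vertex 0: the pendant edge has weight
# 4 + 1 - 1 = 4, giving the smaller coefficient (4 - 1)/2 = 3/2.
h_edges = k4_edges + [(0, 4)]
print(min_edge_weight(range(5), h_edges))     # 4
```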
\section{$H$-saturated construction when $H$ is the disjoint union of cliques}\label{sec:disjoint-cliques}
Faudree, Ferrara, Gould, and Jacobson~\cite{FFGJ09} determined the
saturation numbers of generalized friendship graphs $F_{t,p,\ell}$,
consisting of $t$ copies of $K_p$ which all intersect in a common
$K_\ell$ but are otherwise pairwise disjoint. When $\ell=0$, this includes
the case of $tK_p$, consisting of $t$ disjoint copies of $K_p$. They
also determined the saturation numbers of two disjoint cliques,
$K_p \cup K_q$, when $p \neq q$, but left determining the saturation
number of three or more disjoint cliques with general orders as an
open problem. Here, we give a proof that all of these graphs are
strongly sat-sharp, and determine their saturation numbers up to an additive
constant for all sufficiently large $n$.
\begin{proposition}\label{prop:disjoint-cliques}
Let $2 \leq p_1 \leq \cdots \leq p_m$ be positive integers. The
graph $H=K_{p_1} \cup \cdots \cup K_{p_m}$ is strongly sat-sharp. In
particular, \[(p_1-2)n-c'_H \leq \sat(H,n) \leq (p_1-2)n + c_H\] for
some constants $c_H,c'_H$ depending only on $H$ and for all
$n \geq \sum_{i=1}^m p_i$.
\end{proposition}
\begin{proof}
First, note that $\wt(H)=2(p_1-2)+1$. So by
Lemma~\ref{lem:wtlower}, \[\sat(H,n) \geq (p_1-2)n-c'_H\] for some
constant $c'_H$.
On the other hand, let $G$ be the graph on $n$ vertices defined as the
join, $G=K_{p_1-2} \vee G'$ where $G'=K_{t} \cup I$, the disjoint
union of a clique on $t= 1+ \sum_{i=2}^m p_i $ vertices and a set $I$
with $n-t-p_1+2$ isolated vertices. Figure~\ref{fig:disclique-sat} shows
the graph $G$ that is constructed for $H = K_4 \cup K_5 \cup K_6$.
\begin{figure}
\centering
\begin{tikzpicture}
\begin{scope}[yshift=.5cm]
\foreach \i in {1,...,12}
{
\apoint{} (v\i) at (30*\i : 1cm) {};
\foreach \j in {1,...,\i}
{
\draw (v\i) -- (v\j);
}
}
\end{scope}
\begin{scope}[yshift=-1.25cm, yscale=.15]
\apoint{} at (-1.5cm, 0cm) {};
\apoint{} at (-1cm, 0cm) {};
\apoint{} at (-.5cm, 0cm) {};
\node at (0cm, 0cm) {$\cdots$};
\apoint{} at (.5cm, 0cm) {};
\apoint{} at (1cm, 0cm) {};
\apoint{} at (1.5cm, 0cm) {};
\draw (0cm, 0cm) circle (1.75cm);
\node[anchor=north] at (0cm, -1.75cm) {$I$};
\end{scope}
\node[anchor=north west] at (-2cm, 2cm) {$G'$};
\draw (-2cm, 2cm) rectangle (2cm, -2cm) {};
\apoint{} (u) at (-4cm, 1cm) {};
\apoint{} (v) at (-4cm, -1cm) {};
\draw (u) -- (v);
\foreach \i in {0,...,10}
{
\node[coordinate] (z\i) at (-2cm, 2cm-.4*\i cm) {};
\draw (u) -- (z\i);
\draw (v) -- (z\i);
}
\end{tikzpicture}
\caption{Construction of the saturated graph $G$ for $H = K_4 \cup K_5 \cup K_6$.}
\label{fig:disclique-sat}
\end{figure}
We claim that $G$ is $H$-free and $H$-saturated. To see that $G$ is
$H$-free, consider its maximal cliques. Let $Q$ denote the subgraph of
$G$ induced by the vertices of the $K_{p_1-2}$ and the $K_t$. Then $Q$
is a maximal clique with $p_1-2+t$ vertices. All other maximal cliques
of $G$ are formed from the $K_{p_1-2}$ and one vertex from $I$, and
thus have only $p_1-1 < p_1$ vertices. Therefore, if we were to find a
copy of $H$ in $G$, then each of the disjoint cliques of $H$ would
have to be found in $Q$. But $Q$ has only $|V(H)|-1$ vertices, so this
cannot happen.
To see that $G$ is $H$-saturated, consider the graph $G+xy$ for
some $xy \notin E(G)$. Without loss of generality, either
$x,y \in I$ or $x \in I$ and $y \in K_t$. In either case, the vertices
of $K_{p_1-2} \cup \{x,y\}$ form a $p_1$-clique in $G+xy$, while at
least $t-1=p_2 + \cdots +p_m$ vertices of the $K_t$ remain disjoint
from this clique and can be used to embed the remaining cliques of
$H$. So $G$ is $H$-saturated.
Since $G$ has $(p_1-2)\left(n+1-\sum_{i=1}^m p_i\right)+{p_1+\cdots+p_m-1 \choose 2}$ edges, it follows that \[\sat(H,n) \leq (p_1-2)n + c_H\] for some constant $c_H$. Therefore, $H$ is strongly sat-sharp.
\end{proof}
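For small parameters the construction in the proof can be verified by brute force. The sketch below is our own code (not from the paper): it builds $G$ for $H = K_2 \cup K_3$ and $n = 10$, where $t = 1 + p_2 = 4$ and the join part $K_{p_1-2}$ is empty, and checks that $G$ is $H$-free while every added edge creates a copy of $H$.

```python
from itertools import combinations

def has_disjoint_cliques(edges, n, sizes):
    """Does the graph contain vertex-disjoint cliques of the given sizes?"""
    adj = [[False] * n for _ in range(n)]
    for u, v in edges:
        adj[u][v] = adj[v][u] = True
    def rec(sizes, used):
        if not sizes:
            return True
        free = [v for v in range(n) if v not in used]
        for S in combinations(free, sizes[0]):
            if all(adj[a][b] for a, b in combinations(S, 2)):
                if rec(sizes[1:], used | set(S)):
                    return True
        return False
    return rec(sizes, frozenset())

# H = K_2 u K_3 (p1 = 2, p2 = 3): G is a K_4 on {0,1,2,3} plus
# isolated vertices 4..9, since K_{p1-2} is empty.
n, sizes = 10, [3, 2]
G = list(combinations(range(4), 2))
assert not has_disjoint_cliques(G, n, sizes)               # G is H-free
missing = set(combinations(range(n), 2)) - set(G)
assert all(has_disjoint_cliques(G + [e], n, sizes) for e in missing)
print("G is H-saturated for H = K_2 u K_3")
```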
An immediate corollary to this proposition and the results of Section~\ref{sec:sat-sharp} is the following result.
\begin{corollary}
Let $\ell$ and $2 \leq p_1 \leq \cdots \leq p_m$ be positive
integers. Let $H' = K_{p_1} \cup \cdots \cup K_{p_m}$, and let
$H=K_{\ell} \vee H'$. Then $H$ is sat-sharp. In
particular, \[(p_1+\ell-2)n-c'_H \leq \sat(H,n) \leq (p_1+\ell-2)n + c_H\]
for some constants $c_H,c'_H$ depending only on $H$ and for all
$n \geq \ell+ \sum_{i=1}^m p_i$.
\end{corollary}
Note that this class of graphs includes all generalized friendship
graphs $F_{t,p,\ell}$ for $p \geq \ell+2$. Since $F_{t,p,\ell}$ for
$p=\ell+1$ is a threshold graph, we already know from the discussion
in Section~\ref{sec:threshold} that it is strongly sat-sharp.
While a disjoint union of cliques is not, in general, a threshold
graph, each of its components is a threshold
graph. Proposition~\ref{prop:disjoint-cliques} therefore suggests that
perhaps a disjoint union of threshold graphs is always strongly sat-sharp.
More boldly, the following conjecture appears to be plausible:
\begin{conjecture}\label{coj:disjoint-union}
If $H_1$ and $H_2$ are (strongly) sat-sharp graphs, then their
disjoint union $H_1 + H_2$ is (strongly) sat-sharp. That is, the
class of (strongly) sat-sharp graphs is closed under taking disjoint
unions.
\end{conjecture}
Conjecture~\ref{coj:disjoint-union}, together with the other closure
properties from Section~\ref{sec:sat-sharp}, would immediately imply
Proposition~\ref{prop:disjoint-cliques}. We have found ad-hoc
constructions for some small disjoint unions of particular threshold
graphs which suggest that Conjecture~\ref{coj:disjoint-union} might
hold, but it has been difficult to extract a general construction.
\section*{Acknowledgements}
We would like to thank Ron Gould for the interesting talk on
saturation numbers that he gave at the Atlanta Lecture Series in 2018
which stimulated work on this paper.
\bibliographystyle{plain}
% https://arxiv.org/abs/2002.11025
\title{New bounds for perfect $k$-hashing}
\begin{abstract}
Let $C\subseteq \{1,\ldots,k\}^n$ be such that for any $k$ distinct elements of $C$ there exists a coordinate where they all differ simultaneously. Fredman and Koml\'os studied upper and lower bounds on the largest cardinality of such a set $C$, in particular proving that as $n\to\infty$, $|C|\leq \exp(n k!/k^{k-1}+o(n))$. Improvements over this result were first derived by different authors for $k=4$. More recently, Guruswami and Riazanov showed that the coefficient $k!/k^{k-1}$ is certainly not tight for any $k>3$, although they could only determine explicit improvements for $k=5,6$. For larger $k$, their method gives numerical values modulo a conjecture on the maxima of certain polynomials. In this paper, we first prove their conjecture, completing the explicit computation of an improvement over the Fredman-Koml\'os bound for any $k$. Then, we develop a different method which gives substantial improvements for $k=5,6$.
\end{abstract}
\section{Introduction}
For positive integers $k\geq 2$ and $n \geq 1$, consider a subset $C\subset \{1,\ldots, k\}^n$ with the property that for any $k$ distinct elements of $C$ there exists a coordinate where they all differ. We call such a set a \emph{perfect $k$-hash code} of length $n$, or simply $k$-hash for brevity. The name is motivated by the idea that if each coordinate of $C$ is interpreted as a $k$-hash function on a set $U$ of cardinality $|C|$, then any $k$ elements of $U$ are hashed onto $\{1,2,\ldots,k\}$ by at least one function.
Determining the largest possible cardinality of such a set $C$ as a function of $k$ and $n$ is a classic combinatorial problem in theoretical computer science. One standard formulation is to study, for fixed $k$, the growth of the largest possible $|C|$ as $n$ goes to infinity. It is known that $|C|$ grows exponentially in $n$. One then usually defines the \emph{rate} of the code as\footnote{Here and in the whole paper $\log{x}$ is understood to be in base two.}
\begin{equation}
R=\frac{\log|C|}{n}
\end{equation}
and asks for bounds on the rate of codes of maximal cardinality as $n\to \infty$. This formulation of the problem can also be cast as the information-theoretic problem of determining the zero-error capacity under list decoding for certain channels (REF).
In this paper we consider upper bounds on $R_k$, defined as the limsup, as $n\to\infty$, of the rate of the largest $k$-hash codes of length $n$.
A simple packing argument (see \cite{Elias1}) shows that for all $k\geq 2$ one has $R_k\leq \log(k/(k-1))$.
For $k=3$, the simplest non-trivial case, this evaluates to $\log(3/2) \approx 0.5850$ and is still the best known upper bound to date (the best lower bound is $1/4\log(9/5)\approx 0.212$).
For $k\geq 4$, the first important result was derived by Fredman and Koml{\'o}s \cite{FredmanKomlos}, who proved that
\begin{equation}
\label{eq:fredmankomlosRk}
R_k\leq k!/k^{k-1}\,.
\end{equation}
We also refer to \cite{Korner1}, \cite{korner86}, \cite{Korner2} and \cite{Korner3} where the Fredman-Koml{\'o}s bound (and some generalizations to hypergraphs) has been cast using the language of graph entropy and to \cite{nilli} where a simple probabilistic proof has been presented.
Improvements were obtained for $k=4$ in \cite{Arikan1}, \cite{Arikan2} and more recently in \cite{DalaiVenkatJaikumar}, \cite{DalaiVenkatJaikumar2}. The most recent progress we are aware of was obtained in \cite{venkat}, where the Fredman-Koml{\'o}s bound is proved to be non-tight for any $k\geq 5$, with explicit new numerical bounds for $k=5,6$. For larger $k$, the authors show that an explicit improvement of the Fredman-Koml{\'o}s bound can be obtained subject to a conjecture on the maxima of certain polynomials. Other recent papers on this topic that deserve to be recalled are \cite{Jaikumar2}, where the asymptotic behavior of $R_k$ has been studied, and \cite{CostaDalai}, where the authors attempt to use the polynomial method to upper-bound $R_3$ and state some limitations of this method.
In this paper we make further progress on this problem. We first prove the conjecture formulated in \cite{venkat} and thus complete their proof of explicit new upper bounds on $R_k$ which beat the Fredman-Koml{\'o}s bound for all $k\geq 5$.
Our main contribution is then to expand on the idea used in \cite{DalaiVenkatJaikumar2} to derive a further improvement for $k=5,6$.
In Section \ref{background} we give a brief summary of the approaches used in \cite{DalaiVenkatJaikumar2} and in \cite{venkat}, upon which we build our contribution.
In Section \ref{sec:venkat_conj} we prove the conjecture stated in \cite{venkat} and give a numerical evaluation of the ensuing bound for $k>6$. In Section \ref{sec:newbounds} we present our improvement for $k=5,6$.
\section{Background}
\label{background}
The bounds presented in \cite{FredmanKomlos}, \cite{Arikan2}, \cite{DalaiVenkatJaikumar2} and \cite{venkat} can all be derived by starting with the following Lemma on graph covering (see \cite{Jkumar}).
\begin{Lemma}[Hansel \cite{hansel}]
Let $K_r$ be a complete graph on $r$ vertices and let $G_1,\ldots,G_m$ be bipartite graphs on those same vertices such that $\cup_{i}G_i=K_r$. Finally, let $\tau(G_i)$ denote the fraction of non-isolated vertices in $G_i$. Then
\begin{equation}
\sum_{i=1}^m \tau(G_i) \geq \log r\,.
\end{equation}
\end{Lemma}
The connection with $k$-hashing comes from the following application. Given a $k$-hash code $C$, fix any $(k-2)$-element subset $\{x_1,x_2,\ldots, x_{k-2}\}$ of $C$. For any coordinate $i$ let $G_i^{x_1, \ldots, x_{k-2}}$ be the bipartite graph with vertex set $C\setminus\{x_1,x_2,\ldots, x_{k-2}\}$ and edge set
\begin{equation}
\label{eq:FKgraph}
E=\left\{ (v,w) : x_{1,i},x_{2,i},\ldots, x_{k-2,i},v,w \mbox{ are all distinct} \right\}\,.
\end{equation}
Then, since $C$ is a $k$-hash code, we note that $\bigcup_i G_i^{x_1, \ldots, x_{k-2}}$ is the complete graph on $C\setminus\{x_1,x_2,\ldots, x_{k-2}\}$ and so
\begin{equation}
\label{eq:hansel_hash}
\sum_{i=1}^n \tau(G_i^{x_1, \ldots, x_{k-2}})\geq \log(|C|-k+2)\,.
\end{equation}
This inequality can be used to prove upper bounds on $|C|$. Since it holds for any choice of $x_1,x_2,\ldots, x_{k-2}$, one can show that the right hand side is small by proving that left hand side cannot be too large for all possible choices of $x_1,x_2,\ldots, x_{k-2}$. One can either use it for some specific choice or take expectation over any random selection.
Let $f_{i}$ be the probability distribution of the $i$-th coordinate of $C$; that is, $f_{i,a}$ is the fraction of elements of $C$ whose $i$-th coordinate is $a$. Note that the graph in \eqref{eq:FKgraph} is empty if $x_{1,i},x_{2,i},\ldots, x_{k-2,i}$ are not all distinct. We will say in this case that $x_1,x_2,\ldots,x_{k-2}$ \emph{collide} in coordinate $i$.
Then, we have
\begin{equation}
\tau(G_i^{x_1, \ldots, x_{k-2}})=
\begin{cases}
0 & \mbox{if } x_1,\ldots,x_{k-2} \mbox{ collide in coordinate }i\,,\\
\left(\frac{|C|}{|C|-k+2}\right)\left(1-\sum_{j=1}^{k-2}f_{i,x_{j,i}}\right) & \mbox{otherwise.}
\end{cases}
\end{equation}
So, one can make the left hand side in \eqref{eq:hansel_hash} small by either taking a set $x_{1}, \ldots,x_{{k-2}}$ which collide in many coordinates, so forcing the corresponding $\tau$'s to zero, or by taking a set which uses ``popular'' values in many coordinates.
The Fredman-Koml{\'o}s bound is obtained by taking expectation in \eqref{eq:hansel_hash} over a uniform random extraction of $x_1,x_2,\ldots, x_{k-2}$. By linearity of expectation the computation can be performed over each single coordinate. Denoting with $\mathbb{E}$ the expectation, for large $n$ and $|C|$ we have
\begin{align*}
\mathbb{E} [ & \tau(G_i^{x_1, \ldots, x_{k-2}})] \\
& =\left(1+o(1)\right)\sum_{\stackrel{\text{distinct }}{ a_1,\ldots,a_{k-2}}} f_{i,a_1}f_{i,a_2}\cdots f_{i,a_{k-2}}(1-f_{i,a_1}-\cdots-f_{i,a_{k-2}} )
\end{align*}
where the coefficient $o(1)$ is due to sampling without replacement. One can show that the worst-case $f_i$ is the uniform distribution, which gives
\begin{equation}
\label{eq:fredmankomlostau}
\mathbb{E} [ \tau(G_i^{x_1, \ldots, x_{k-2}})] \leq \frac{k!}{k^{k-1}}\left(1+o(1)\right)\,.
\end{equation}
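As a sanity check, the expectation above can be evaluated exactly at the uniform distribution: there are $k!/2$ ordered distinct $(k-2)$-tuples and each contributes $(1/k)^{k-2}\cdot(2/k)$, giving $k!/k^{k-1}$. A short numerical verification (our own code) of both the exact value and the worst-case claim:

```python
import random
from itertools import permutations
from fractions import Fraction
from math import factorial

def expected_tau(f):
    """Sum over ordered distinct (k-2)-tuples of f_{a1}...f_{a_{k-2}}(1 - sum)."""
    k = len(f)
    total = 0
    for tup in permutations(range(k), k - 2):
        prod = 1
        for a in tup:
            prod *= f[a]
        total += prod * (1 - sum(f[a] for a in tup))
    return total

# exact value at the uniform distribution, k = 3..7
for k in range(3, 8):
    uniform = [Fraction(1, k)] * k
    assert expected_tau(uniform) == Fraction(factorial(k), k ** (k - 1))

# the uniform distribution is the worst case (randomized spot check, k = 5)
random.seed(0)
k = 5
for _ in range(200):
    w = [random.random() for _ in range(k)]
    f = [x / sum(w) for x in w]
    assert expected_tau(f) <= factorial(k) / k ** (k - 1) + 1e-12
print("uniform f attains the maximum k!/k^(k-1)")
```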
The procedures used in \cite{DalaiVenkatJaikumar2} and \cite{venkat} are based on the idea that one can also take $x_1,x_2,\ldots, x_{k-2}$ uniformly from a subset $C'\subset C$ which ensures they collide in all coordinates $i$ in some subset $T \subset \{1,2\ldots, n\}$. Then, if $g_{i,a}$ is the frequency of symbol $a$ in the coordinate $i\notin T$ of $C'$, one has
\begin{align}
\mathbb{E} [ & \tau(G_i^{x_1, \ldots, x_{k-2}})] \nonumber \\
& =\left(1+o(1)\right)\sum_{\stackrel{\text{distinct }}{ a_1,\ldots,a_{k-2}}} g_{i,a_1}g_{i,a_2}\cdots g_{i,a_{k-2}}(1-f_{i,a_1}-\cdots-f_{i,a_{k-2}} )\label{eq:exptaugf}
\end{align}
The worst-case $g$ and $f$ here, if chosen independently, give in general a value which exceeds the $k!/k^{k-1}$ of \eqref{eq:fredmankomlostau}.
In \cite{DalaiVenkatJaikumar2}, for $k=4$, it was shown that one can deal with this by also taking $C'$ randomly from a partition of $C$ (based on the values in positions $i\in T$), thus adding an additional (outer) expectation. In that case $g_i$ is also random and constrained to satisfy $\mathbb{E}[g_i]=f_i$. Using a concavity argument it was shown that under this random selection the bound \eqref{eq:fredmankomlostau} still holds for $i\notin T$, thus gaining on average compared to \cite{FredmanKomlos}. However, for $k>4$ that approach seems infeasible. The idea used in \cite{venkat} is to suppress the random selection of $C'$ and show that one can carefully choose $C'$ so that $x_1, \ldots, x_{k-2}$ collide in a portion of the coordinates large enough to more than compensate for the increase in $\mathbb{E}[\tau(G_i^{x_1, \ldots, x_{k-2}})]$ for $i\notin T$ obtained in \eqref{eq:exptaugf} with respect to \eqref{eq:fredmankomlostau}. This leads to a proof that \eqref{eq:fredmankomlosRk} is not tight for all $k>4$. However, explicit numerical improvements were only proved for $k=5,6$, and given for $k>6$ modulo a conjecture on the optimal value of some polynomials.
In the next two sections we present our contribution. First we prove the conjecture formulated by the authors in \cite{venkat}, thus completing their proof of the new bounds on $R_k$ for all $k$. Then, we prove stronger results for $k=5,6$. Our idea is based on a symmetrization of \eqref{eq:exptaugf} which allows us to resurrect the random selection of $C'$ in an effective way, replacing the concavity argument of \cite{DalaiVenkatJaikumar2} with new bounds on the maxima of some polynomials.
\section{Guruswami-Riazanov bounds}
\label{sec:venkat_conj}
A crucial role in all bounds discussed in this paper is played by the sum appearing in equation \eqref{eq:exptaugf}. We simplify the notation and set, for general probability vectors $g=(g_1,\ldots,g_k)$ and $f=(f_1,\ldots,f_k)$,
\begin{equation}
\psi(g,f) =\sum_{\sigma\in S_k} g_{\sigma(1)}g_{\sigma(2)}\cdots g_{\sigma(k-2)}f_{\sigma(k-1)}\,,
\end{equation}
observing that equation \eqref{eq:exptaugf} can be rewritten as
\begin{equation}
\label{eq:Expinpsi}
\mathbb{E} [ \tau(G_i^{x_1, \ldots, x_{k-2}})] = \left(1+o(1)\right)\psi(g_i,f_i)
\end{equation}
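The identity behind this rewriting is easy to check numerically: since $\sum_a f_a = 1$, summing $f_{\sigma(k-1)}$ over the two symbols not used by $\sigma(1),\dots,\sigma(k-2)$ reproduces the factor $1-f_{a_1}-\cdots-f_{a_{k-2}}$ of \eqref{eq:exptaugf}. A quick verification (our own code):

```python
import random
from itertools import permutations

def psi(g, f):
    """psi(g, f) = sum over sigma in S_k of g_{s(1)}...g_{s(k-2)} f_{s(k-1)}."""
    k = len(g)
    total = 0.0
    for sigma in permutations(range(k)):
        prod = 1.0
        for j in range(k - 2):
            prod *= g[sigma[j]]
        total += prod * f[sigma[k - 2]]
    return total

def tuple_sum(g, f):
    """The sum over ordered distinct (k-2)-tuples appearing in eq:exptaugf."""
    k = len(g)
    total = 0.0
    for tup in permutations(range(k), k - 2):
        prod = 1.0
        for a in tup:
            prod *= g[a]
        total += prod * (1.0 - sum(f[a] for a in tup))
    return total

random.seed(1)
k = 5
for _ in range(100):
    w = [random.random() for _ in range(k)]
    g = [x / sum(w) for x in w]
    w = [random.random() for _ in range(k)]
    f = [x / sum(w) for x in w]
    assert abs(psi(g, f) - tuple_sum(g, f)) < 1e-9
print("psi(g, f) matches the sum in eq:exptaugf")
```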
We can now prove the conjecture stated in \cite{venkat}.
\begin{Proposition}[Conjecture 1 of \cite{venkat}]\label{conjecture}
Under the constraints $f_i\geq \gamma$ for all $i$, $\psi(g,f)$ attains its maximum at a point $(g,f)$ with $f$ of the form $f=(\gamma,\dots,\gamma,1-(k-1)\gamma)$.
\end{Proposition}
\proof
Since $\psi(g,f)$ is invariant under (identical) permutations of $g$ and $f$, we can study maxima for which $g_k$ is the minimum among the values $g_1,g_2,\dots, g_k$ and show that for those points $f=(\gamma,\dots,\gamma,1-(k-1)\gamma)$. We prove this by considering the components of $f$ one by one. Assume on the contrary that $f_1>\gamma$. Given $0<\epsilon \leq f_1-\gamma$, set $\tilde{f}=(f_1-\epsilon,f_2,\ldots,f_{k-1},f_k+\epsilon)$. Then
\begin{align*}
\psi(g,\tilde{f}) & = \sum_{\sigma:\sigma(k-1)\neq 1,k} g_{\sigma(1)}g_{\sigma(2)}\dots g_{\sigma(k-2)}f_{\sigma(k-1)}\\ &\qquad +
\sum_{\sigma:\sigma(k-1)=1} g_{\sigma(1)}g_{\sigma(2)}\dots g_{\sigma(k-2)}(f_1-\epsilon)\\
&\qquad +
\sum_{\sigma:\sigma(k-1)=k} g_{\sigma(1)}g_{\sigma(2)}\dots g_{\sigma(k-2)}(f_k+\epsilon)\\
& = \psi(g,f) - \epsilon\cdot \sum_{\sigma:\sigma(k-1)=1} g_{\sigma(1)}g_{\sigma(2)}\dots g_{\sigma(k-2)}\\
& \qquad +
\epsilon \cdot \sum_{\sigma:\sigma(k-1)=k} g_{\sigma(1)}g_{\sigma(2)}\dots g_{\sigma(k-2)}\,.
\end{align*}
Since we assumed $g_1\geq g_k$,
\begin{equation}
\sum_{\sigma:\sigma(k-1)=1} g_{\sigma(1)}g_{\sigma(2)}\dots g_{\sigma(k-2)}\leq \sum_{\sigma:\sigma(k-1)=k} g_{\sigma(1)}g_{\sigma(2)}\dots g_{\sigma(k-2)}\,,
\end{equation}
and hence $\psi(g,\tilde{f})\geq \psi(g,f)$. By repeating the above procedure for $f_2$, $f_3,\ldots,f_{k-1}$, we find that indeed $f=(\gamma,\dots,\gamma,1-(k-1)\gamma)$ maximizes $\psi(g,f)$ under the considered constraints whenever $g_k$ is the minimum among $g_1,g_2,\dots, g_k$, and in particular for the optimal $g$ sorted in this way.\endproof
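The mass-shifting argument in the proof can be tested numerically: whenever $g_k=\min_i g_i$, moving $f$ all the way to the corner $(\gamma,\dots,\gamma,1-(k-1)\gamma)$ should never decrease $\psi$. A randomized check (our own code):

```python
import random
from itertools import permutations

def psi(g, f):
    """psi(g, f) = sum over sigma in S_k of g_{s(1)}...g_{s(k-2)} f_{s(k-1)}."""
    k = len(g)
    total = 0.0
    for sigma in permutations(range(k)):
        prod = 1.0
        for j in range(k - 2):
            prod *= g[sigma[j]]
        total += prod * f[sigma[k - 2]]
    return total

random.seed(0)
k, gamma = 5, 0.1
corner = [gamma] * (k - 1) + [1 - (k - 1) * gamma]
for _ in range(300):
    w = [random.random() for _ in range(k)]
    g = sorted((x / sum(w) for x in w), reverse=True)   # g_k is the minimum
    w = [random.random() for _ in range(k)]
    slack = [x / sum(w) * (1 - k * gamma) for x in w]
    f = [gamma + s for s in slack]                      # feasible: f_i >= gamma
    assert psi(g, corner) >= psi(g, f) - 1e-12
print("corner f maximizes psi for every tested g with g_k minimal")
```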
In terms of $g$, it was already shown in \cite{venkat} that, assuming the above result, the maximum value of $\psi(g,f)$ under the constraint $f_i\geq \gamma$ for all $i$ is attained at a point $(g,f)$ with $g$ of the form $(\beta,\beta,\ldots,1-(k-1)\beta)$.
With this in hand, it was shown in \cite{venkat} that a new explicit numerical bound can be given on $R_k$ which strictly improves the Fredman-Koml\'os bound for all $k$.
Table \ref{tab:numbounds} gives numerical results\footnote{We believe that due to a minor error in the computation, the bound given for $R_5$ in \cite{venkat} is not really the best possible using their method. We report here the optimal.} for the first values of $k$.
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
$k$ & 5 & 6 & 7 & 8 \\
\hline
Bound from \cite{FredmanKomlos} & 0.19200 & 0.092593 & 0.04284 & 0.019227\\
\hline
Bound from \cite{venkat} & 0.19079 & 0.092279 & 0.04279 & 0.019213\\
\hline
\end{tabular}
\caption{Numerical values for the bounds on $R_k$ from \cite{FredmanKomlos} and from \cite{venkat} in light of Proposition \ref{conjecture}. All numbers are rounded upwards.}
\label{tab:numbounds}
\end{table}
\section{Better bounds for small $k$}
\label{sec:newbounds}
In this section we combine insights from both the approaches of \cite{DalaiVenkatJaikumar2} and \cite{venkat}. Instead of looking at one subcode $C'$, as done in \cite{venkat}, we follow the idea in \cite{DalaiVenkatJaikumar2}. We consider a partition $\{C_{\omega}: \omega \in \Omega\}$ of our $k$-hash code $C$ and randomly select a subcode $C_{\omega}$. Then we randomly extract codewords $x_1,\ldots,x_{k-2}$ from $C_{\omega}$ and bound the expected value in \eqref{eq:exptaugf} over both random code and codewords. At this point, we replace the concavity argument of \cite{DalaiVenkatJaikumar2} with a symmetrization trick combined with new bounds on the maxima of certain polynomials. This procedure leads to the following nontrivial improvement on the rates $R_5$ and $R_6$.
\begin{Theorem}\label{main}
For $k=5,6$ the following bounds hold
\begin{itemize}
\item $R_5\leq 0.1697$;
\item $R_6\leq 0.0875$.
\end{itemize}
\end{Theorem}
\subsection{Proof of Theorem \ref{main}}
Here our goal is to find a family of subcodes such that any $k-2$ codewords $x_1,x_2,\dots,x_{k-2}$ of a given subcode $C_{\omega}$ collide in all coordinates of $T=[1,\ell]$ for a carefully chosen value of $\ell$; that is, for any coordinate $t\in T$ there exist $i\neq j$ such that $x_{i,t}=x_{j,t}$. This will ensure that the coordinates from $T$ contribute $0$ to the left-hand side of \eqref{eq:hansel_hash}. To do this, we cover all the possible prefixes of length $\ell$; the following lemma can be seen as a special case of known results on the fractional clique covering number (see \cite{PartialCovering}).
\begin{Lemma}\label{covering}
For any positive $\epsilon$, for $\ell$ large enough, there exists a partition $\Omega$ of $\{1,2,\dots,k\}^{\ell}$ such that:
\begin{enumerate}
\item $|\Omega|\leq \left\lfloor{\left(\frac{k}{k-3}+\epsilon\right)^{\ell}}\right\rfloor$.
\item For all $\omega\in\Omega$ and $i=1,\ldots,\ell$, the $i$-th projection of $\omega$ has cardinality at most $ k-3$.
\end{enumerate}
In particular, for any $\omega\in\Omega$, any $k-2$ sequences in $\omega$ collide in all coordinates $i=1,\ldots, \ell$.
\end{Lemma}
\proof
For any $i\in [1,k]$, consider the set $A_i=\{i,i+1,\dots,i+(k-4)\}$, where the sums are performed modulo $k$ in $[1,k]$.
To a string $s=(i_1,\dots,i_{\ell})$ in $[1,k]^{\ell}$ we associate a set $\omega_s=A_{i_1}\times A_{i_2}\times \dots \times A_{i_{\ell}}\subset [1,k]^\ell$. Fix a word $x\in [1,k]^{\ell}$, and choose uniformly at random the string $s$; the probability that $x\not\in \omega_s$ is $1-\left(\frac{k-3}{k}\right)^{\ell}$. Therefore, if we choose randomly $h$ strings $s_1,\dots,s_h$, the probability that $x\not \in (\omega_{s_1}\cup \dots \cup \omega_{s_h})$ is $\left(1-\left(\frac{k-3}{k}\right)^{\ell}\right)^h$. Hence, the expected number of words $x\in [1,k]^{\ell}$ that do not belong to any of the $\omega_{s_1}, \dots, \omega_{s_h}$ is
\begin{align*}
\mathbb{E}(|\{x\in[1,k]^ \ell:\ x\not \in \omega_{s_1}\cup \dots \cup \omega_{s_h}\}|) = k^{\ell}\left(1-\left(\frac{k-3}{k}\right)^{\ell}\right)^h.
\end{align*}
If this value is smaller than $1$, then there exists a choice of $s_1,\dots, s_h$ such that the family $\{\omega_{s_1},\dots, \omega_{s_h}\}$ covers the whole set $[1,k]^{\ell}$.
This happens whenever
$$k^{\ell}\left(1-\left(\frac{k-3}{k}\right)^{\ell}\right)^h<1 $$
or equivalently
$$h>\frac{-\ell \log{k}}{\log\left(1-\left(\frac{k-3}{k}\right)^{\ell}\right)}\,, $$
which holds for
$$h>\ell \left(\frac{k}{k-3}\right)^{\ell} \frac{\log{k}}{\log{e}}.$$
For $\ell$ large enough, setting $h=\left\lfloor{\left(\frac{k}{k-3}+\epsilon\right)^{\ell}}\right\rfloor$ we have the desired inequality.
Removing possible intersections between the sets $\omega_s$ we obtain a partition of $[1,k]^\ell$ with the desired properties, since condition 2) is satisfied by construction.
\endproof
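The covering step can be carried out explicitly for small parameters. The following sketch (our own code, using $0$-based symbols) greedily builds a cover of $[1,k]^\ell$ by product sets $A_{i_1}\times\cdots\times A_{i_\ell}$ for $k=5$, $\ell=2$, and checks that every coordinate projection of every part has at most $k-3$ symbols, so that any $k-2=3$ sequences in one part collide in every coordinate.

```python
from itertools import product

k, ell = 5, 2
# A_i = {i, i+1, ..., i+(k-4)} mod k, so |A_i| = k - 3
A = {i: {(i + d) % k for d in range(k - 3)} for i in range(k)}

def omega(s):
    """Product set A_{s_1} x ... x A_{s_ell}."""
    return set(product(*(A[i] for i in s)))

strings = list(product(range(k), repeat=ell))
uncovered = set(strings)
cover = []
while uncovered:
    # greedy: pick the string whose product set covers most uncovered points
    best = max(strings, key=lambda s: len(omega(s) & uncovered))
    cover.append(best)
    uncovered -= omega(best)

print(len(cover), "product sets cover all", k ** ell, "strings")
# small projections imply collisions among any k-2 sequences in one part
assert all(len({x[i] for x in omega(s)}) <= k - 3
           for s in cover for i in range(ell))
```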
Let $\Omega=\{\omega_1,\ldots,\omega_h\}$ be a partition of $[1,k]^\ell$ as derived from Lemma \ref{covering} and consider the family of subcodes $C_{\omega_1},\dots, C_{\omega_h}$ of $C$ defined by
$$C_\omega=\{x\in C: (x_1,x_2,\ldots, x_{\ell})\in \omega\}.$$
Clearly, any $k-2$ codewords $x_1,x_2,\dots,x_{k-2}$ of a given subcode $C_{\omega}$ collide in all coordinates of $T=[1,\ell]$. As in \cite{DalaiVenkatJaikumar2}, define a subcode $C_{\omega}$ to be \emph{heavy} if $|C_{\omega}|> n$ and to be \emph{light} otherwise. We can show that, if $\ell$ is not too large, most of the codewords are contained in heavy subcodes.
Indeed, if we consider $\ell$ such that $\left(\frac{k}{k-3}+\epsilon\right)^{\ell}\leq 2^{nR-2\log{n}}$,
that is, $\ell\leq\frac{nR-2\log{n}}{\log\left(\frac{k}{k-3}+\epsilon\right)}$, we have that
$$\left|\bigcup_{C_{\omega} \mbox{ is light}} C_{\omega}\right|\leq n\left(\frac{k}{k-3}+\epsilon\right)^{\ell}\leq n2^{nR-2\log{n}}= \frac{|C|}{n}.$$
This means that at least a fraction $(1-1/n)$ of the codewords are in heavy subcodes. If we remove from $C$ the light subcodes, the rate changes by an amount $\frac{1}{n}\log(1-1/n)$, which vanishes as $n$ grows. So, in the following we can assume, without loss of generality, that all the subcodes are heavy.
We are finally ready to describe our strategy to pick the codewords $x_1,\dots,x_{k-2}$: first we choose a subcode $C_{\omega}$ with probability $\lambda_{\omega}=|C_{\omega}|/|C|$ and then we pick uniformly at random (and without replacement) $x_1,\dots,x_{k-2}$ from $C_{\omega}$. Since those codewords collide in all the coordinates from the set $T=[1,\ell]$, we obtain in \eqref{eq:hansel_hash}:
\begin{align}
\log(|C|-k+2)& \leq\mathbb{E}_{\omega\in \Omega}(\mathbb{E}[\sum_{i\in [\ell+1,n]}\tau(G_i^{x_1,x_2,\dots,x_{k-2}})]) \\
&=\sum_{i\in [\ell+1,n]}\mathbb{E}_{\omega\in \Omega}(\mathbb{E}[\tau(G_i^{x_1,x_2,\dots,x_{k-2}})])\label{eq:sumellton}.
\end{align}
Let again $f_{i}$ be the probability distribution of the $i$-th coordinate of $C$, and let $f_{i|\omega}$ be the corresponding distribution of the subcode $C_\omega$.
Invoking \eqref{eq:Expinpsi} for the expectation over the random choice of $x_1,\ldots,x_{k-2}$, we can write for $i\in [\ell+1,n]$
\begin{align*}
\mathbb{E}_{\omega\in \Omega} (\mathbb{E}[\tau(G_i^{x_1,x_2,\dots,x_{k-2}})])
=(1+o(1))\sum_{\omega\in \Omega} \lambda_{\omega} \psi(f_{i|\omega},f_i).
\end{align*}
Since $f_i=\sum_{\mu \in \Omega} \lambda_{\mu} f_{i|\mu}$ and $\psi$ is linear in its second variable, we have that
$$
\mathbb{E}_{\omega\in \Omega}(\mathbb{E}[\tau(G_i^{x_1,x_2,\dots,x_{k-2}})])
=(1+o(1))\sum_{\omega,\mu\in \Omega}\lambda_{\omega}\lambda_{\mu}\psi(f_{i|\omega},f_{i|\mu})\,.
$$
We now exploit a simple yet effective trick. Since the sum above is symmetric in $\omega$ and $\mu$, we can write
\begin{align}
\mathbb{E}_{\omega\in \Omega} (\mathbb{E}[\tau &(G_i^{x_1,x_2,\dots,x_{k-2}})]) \nonumber\\
&=\left(1+o(1)\right)\frac{1}{2}\sum_{\omega,\mu\in \Omega}\lambda_{\omega}\lambda_{\mu}[\psi(f_{i|\omega},f_{i|\mu})+\psi(f_{i|\mu},f_{i|\omega})].\label{simmetrizzata}
\end{align}
Here, we note that the distributions $f_{i|\omega}$ and $f_{i|\mu}$ bear no relation to each other.
Therefore we can simply consider the following polynomial function of two generic probability vectors $p=(p_1,p_2,\dots,p_k)$ and $q=(q_1,q_2,\dots,q_k)$:
\begin{align}
\Psi(p;q)& : =\psi(p,q)+\psi(q,p)\nonumber\\
&=\sum_{\sigma\in S_k} p_{\sigma(1)}p_{\sigma(2)}\dots p_{\sigma(k-2)}q_{\sigma(k-1)}+ q_{\sigma(1)}q_{\sigma(2)}\dots q_{\sigma(k-2)}p_{\sigma(k-1)}.\label{eq:defPsi}
\end{align}
By \eqref{simmetrizzata}, if $M_k$ is the maximum of $\Psi$ over probability vectors $p$ and $q$, then equation \eqref{eq:sumellton} says that
\begin{align*}
\log{|C|} & \leq (1+o(1))\frac{1}{2}(n-\ell)\sum_{\omega,\mu\in \Omega}\lambda_{\omega}\lambda_{\mu}M_k\\
& =(1+o(1))\frac{1}{2}(n-\ell) M_k.
\end{align*}
Recalling that $|C|=2^{n R}$ and taking $\ell=\left\lfloor{\frac{nR-2\log{n}}{\log\left(\frac{k}{k-3}+\epsilon\right)}}\right\rfloor$, we obtain
\begin{align*}
R\leq (1+o(1))\left[1-\frac{R-2\log(n)/n}{\log\left(\frac{k}{k-3}+\epsilon\right)}\right]\frac{M_k}{2}.
\end{align*}
Rearranging the terms and taking $n\to\infty$ first and then $\epsilon\to 0$, we deduce the following proposition.
\begin{Proposition}\label{FromMaxToRate}
Let $M_k$ be the maximum of $\Psi$ over probability vectors $p=(p_1,p_2,\dots,p_k)$ and $q=(q_1,q_2,\dots,q_k)$. Then we have the following upper bound on $R_k$:
$$R_k\leq \left(\frac{2}{M_k}+\frac{1}{\log(k/(k-3))}\right)^{-1}.$$
\end{Proposition}
In the next subsection we will prove that $M_5=\frac{15(48+\sqrt{5})}{1936}\approx 0.389226$ and $M_6=24/125$. This implies Theorem \ref{main}.
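Proposition \ref{FromMaxToRate} turns these maxima into explicit numerical bounds. A minimal sketch of the computation (the function name \texttt{rate\_bound} is ours; logarithms are base $2$, consistent with $|C|=2^{nR}$):

```python
from math import log2

def rate_bound(k, M_k):
    """Upper bound on R_k from the proposition:
    R_k <= (2/M_k + 1/log2(k/(k-3)))^(-1)."""
    return 1.0 / (2.0 / M_k + 1.0 / log2(k / (k - 3)))

M5 = 15 * (48 + 5 ** 0.5) / 1936   # ~ 0.389226
M6 = 24 / 125                      # = 0.192

print(rate_bound(5, M5))  # ~ 0.1696
print(rate_bound(6, M6))  # ~ 0.0876
```

Note that for $k=6$ the second term simplifies, since $\log(6/3)=1$.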
\subsection{Bounds on $\Psi$}
The goal of this subsection is to find the maximum of the function $\Psi$ as defined in \eqref{eq:defPsi}. For this purpose we first introduce two lemmas that provide some restrictions on this maximum.
\begin{Lemma}\label{lagrange1}
Let $\bar{p}=(\bar{p}_1,\dots,\bar{p}_k)$ and $\bar{q}=(\bar{q}_1,\dots,\bar{q}_k)$ be two probability vectors. If $(\bar{p};\bar{q})$ is a maximum for $\Psi$ such that $\bar{p}_1,\bar{p}_2,\bar{q}_1,\bar{q}_2$ are nonzero, then also $(\frac{\bar{p}_1+\bar{p}_2}{2},\frac{\bar{p}_1+\bar{p}_2}{2},\bar{p}_3,\dots,\bar{p}_k;\frac{\bar{q}_1+\bar{q}_2}{2},\frac{\bar{q}_1+\bar{q}_2}{2},\bar{q}_3,\dots,\bar{q}_k)$ is a maximum for $\Psi$.
\end{Lemma}
\proof
If $\bar{P}=(\bar{p};\bar{q})$ is a maximum for $\Psi(p;q)$ under the constraints $p_1+p_2+\dots+p_k=1$ and $q_1+q_2+\dots+q_k=1$, then it is a maximum also under the stronger constraints $p_1+p_2=c_1$, $q_1+q_2=c_2$ where $c_1=\bar{p}_1+\bar{p}_2$, $c_2=\bar{q}_1+\bar{q}_2$, and $p_i=\bar{p}_i,q_i=\bar{q}_i$ for $i\in\{3,4,\dots,k\}$. By the method of Lagrange multipliers, this means that:
$$\frac{\partial \Psi}{\partial p_1}\Big|_{\bar{P}}=\frac{\partial \Psi}{\partial p_2}\Big|_{\bar{P}}$$
and
$$\frac{\partial \Psi}{\partial q_1}\Big|_{\bar{P}}=\frac{\partial \Psi}{\partial q_2}\Big|_{\bar{P}}\,.$$
It follows that:
$$(\bar{p}_1-\bar{p}_2)a+(\bar{q}_1-\bar{q}_2)b=0$$
and
$$(\bar{q}_1-\bar{q}_2)d+(\bar{p}_1-\bar{p}_2)c=0$$
where $a=\frac{\partial^2 \Psi}{\partial p_1\partial p_2}\big|_{\bar{P}}$, $b=\frac{\partial^2 \Psi}{\partial p_1\partial q_2}\big|_{\bar{P}}=\frac{\partial^2 \Psi}{\partial q_1\partial p_2}\big|_{\bar{P}}=c$ and $d=\frac{\partial^2 \Psi}{\partial q_1\partial q_2}\big|_{\bar{P}}$.
If we set $\bar{p}_1-\bar{p}_2=x$, $\bar{q}_1-\bar{q}_2=y$, the previous equations become:
$$\begin{cases} ax+by=0;\\
cx+dy=0.
\end{cases}$$
In the case $ad-bc\not=0$ the previous system admits only the solution $x=y=0$, which means $\bar{p}_1=\bar{p}_2$ and $\bar{q}_1=\bar{q}_2$. In this case $\bar{p}_1=\frac{\bar{p}_1+\bar{p}_2}{2}=\bar{p}_2$ and $\bar{q}_1=\frac{\bar{q}_1+\bar{q}_2}{2}=\bar{q}_2$, and hence the claim holds.
Let us assume $ad-bc=0$.
Then there exists a line $L$ of points $P(t)$ such that $P(1)=\bar{P}$, $P(0)=(\frac{\bar{p}_1+\bar{p}_2}{2},\frac{\bar{p}_1+\bar{p}_2}{2},\bar{p}_3,\dots,\bar{p}_k;\frac{\bar{q}_1+\bar{q}_2}{2},\frac{\bar{q}_1+\bar{q}_2}{2},\bar{q}_3,\dots,\bar{q}_k)$ and
$$\frac{\partial \Psi}{\partial p_1}\Big|_{P(t)}-\frac{\partial \Psi}{\partial p_2}\Big|_{P(t)}=\frac{\partial \Psi}{\partial q_1}\Big|_{P(t)}-\frac{\partial \Psi}{\partial q_2}\Big|_{P(t)}=0.$$
It follows that $\Psi(P(t))$ is constant along $L$, equal to the value of $\Psi$ at $\bar{P}=P(1)$. Since $(\frac{\bar{p}_1+\bar{p}_2}{2},\frac{\bar{p}_1+\bar{p}_2}{2},\bar{p}_3,\dots,\bar{p}_k;\frac{\bar{q}_1+\bar{q}_2}{2},\frac{\bar{q}_1+\bar{q}_2}{2},\bar{q}_3,\dots,\bar{q}_k)$ belongs to the line $L$, this point is also a maximum for $\Psi$.
\endproof
With essentially the same proof we also obtain the following result.
\begin{Lemma}\label{lagrange2}
Let $\bar{p}=(\bar{p}_1,\dots,\bar{p}_k)$ and $\bar{q}=(\bar{q}_1,\dots,\bar{q}_k)$ be two probability vectors. If $(\bar{p};\bar{q})$ is a maximum for $\Psi$ such that $\bar{p}_1,\bar{p}_2$ are nonzero while $\bar{q}_1=\bar{q}_2=0$, then also $(\frac{\bar{p}_1+\bar{p}_2}{2},\frac{\bar{p}_1+\bar{p}_2}{2},\bar{p}_3,\dots,\bar{p}_k;0,0,\bar{q}_3,\dots,\bar{q}_k)$ is a maximum for $\Psi$.
\end{Lemma}
In the next two lemmas, we provide some further restrictions on the maximum of $\Psi$ using purely combinatorial arguments.
\begin{Lemma}\label{LemmaAmmazzaCasi1}
We have that:
$$\Psi(0,p_2,\dots,p_k;0,q_2,\dots, q_k)\leq \Psi(0,p_2,\dots,p_k;q_2,0,q_3,\dots, q_k).$$
\end{Lemma}
\proof
By definition, $\Psi(0,p_2,\dots,p_k;0,q_2,\dots, q_k)$ evaluates as
$$\sum_{\sigma:\ \sigma(k)=1} p_{\sigma(1)}p_{\sigma(2)}\dots p_{\sigma(k-2)}q_{\sigma(k-1)}+ q_{\sigma(1)}q_{\sigma(2)}\dots q_{\sigma(k-2)}p_{\sigma(k-1)}.$$
Similarly, we have that $\Psi(0,p_2,\dots,p_k;q_2,0,q_3,\dots, q_k)$ equals
\begin{align*}
\sum_{\sigma:\ \sigma(k)=1} & p_{\sigma(1)}p_{\sigma(2)}\dots p_{\sigma(k-2)}q_{\sigma(k-1)}+ q_{\sigma(1)}q_{\sigma(2)}\dots q_{\sigma(k-2)}p_{\sigma(k-1)}+\\
& (k-2)p_2q_2\left(\sum_{\sigma\in Sym(3,\dots,k)} p_{\sigma(3)}\dots p_{\sigma(k-1)}+ q_{\sigma(3)}\dots q_{\sigma(k-1)}\right).
\end{align*}
The claim follows since each term of the last sum is nonnegative.
\endproof
The following lemma is in the same spirit as Proposition \ref{conjecture}.
\begin{Lemma}\label{LemmaAmmazzaCasi2}
We have that:
$$\Psi(p_1,\dots,p_{k-3},0,0,0;q_1,q_2,\dots, q_k)\leq \Psi\left(1,0,\dots,0;0,\frac{1}{k-1},\dots,\frac{1}{k-1}\right).$$
\end{Lemma}
\proof
We suppose, without loss of generality, that $q_1$ is the minimum among the values $q_1,q_2,\dots,q_{k-3}$. Setting $p=(p_1,\dots,p_{k-3},0,0,0)$ and $q=(q_1,\dots,q_k)$, we have
\begin{align*}
\Psi(p;q)& =
\sum_{\sigma:\ \sigma(k-1)\not\in \{1,2\}} q_{\sigma(1)}q_{\sigma(2)}\dots q_{\sigma(k-2)}p_{\sigma(k-1)}\\&
+\frac{p_1+p_2}{2}\sum_{\sigma:\ \{\sigma(k-1), \sigma(k)\}=\{1,2\}} q_{\sigma(1)}\dots q_{\sigma(k-2)}\\
&+(p_1q_2+q_1p_2)(k-2)\sum_{\sigma\in Sym(3,\dots,k)} q_{\sigma(3)}\dots q_{\sigma(k-1)}.\end{align*}
Similarly, setting $p'=(p_1+p_2,0,p_3,\dots,p_{k-3},0,0,0)$, we have that:
\begin{align*}
\Psi(p';q)& =
\sum_{\sigma:\ \sigma(k-1)\not\in \{1,2\}} q_{\sigma(1)}q_{\sigma(2)}\dots q_{\sigma(k-2)}p_{\sigma(k-1)}\\&+
\frac{p_1+p_2}{2}\sum_{\sigma:\ \{\sigma(k-1), \sigma(k)\}=\{1,2\}} q_{\sigma(1)}\dots q_{\sigma(k-2)}\\& +(p_1+p_2)q_2(k-2)\sum_{\sigma\in Sym(3,\dots,k)} q_{\sigma(3)}\dots q_{\sigma(k-1)}.
\end{align*}
Since $q_1\leq q_2$ we have that
$$
\Psi(p;q)\leq \Psi(p';q)\,.
$$
Reiterating the previous procedure, since $q_1$ is the minimum among the values $q_1,\dots,q_{k-3}$, we obtain
\begin{equation}\label{maggiorazione}\Psi(p_1,\dots,p_{k-3},0,0,0;q_1,q_2,\dots, q_k)\leq \Psi(1,0,\dots,0,0;q_1,q_2,\dots, q_k).\end{equation}
Since $q_1$ does not appear in the value of $\Psi(1,0,\dots,0,0;q_1,q_2,\dots, q_k)$, this expression is certainly maximized for $q_1=0$. Finally, by Muirhead's inequality, the RHS of \eqref{maggiorazione} is maximized for $q_2=q_3=\dots=q_k=\frac{1}{k-1}$.
\endproof
As a consequence of the previous lemmas, $\Psi$ attains its maximum at a point of one of the following types:
\begin{itemize}
\item[a)] $\left(1,0,\dots,0;0,\frac{1}{k-1},\dots,\frac{1}{k-1}\right)$;
\item[b)] $(1/k,\dots,1/k;1/k,\dots,1/k)$;
\item[c)] $(0,0,\alpha,\dots,\alpha,\beta,\beta;\gamma,\gamma,\delta,\dots,\delta,0,0)$\\ where $(k-4)\alpha+2\beta=1$ and $2\gamma+(k-4)\delta=1$;
\item[d)] $(0,0,\alpha,\dots,\alpha,\beta;\gamma,\gamma,\delta,\dots,\delta,0)$\\
where $(k-3)\alpha+\beta=1$ and $2\gamma+(k-3)\delta=1$;
\item[e)] $(0,0,1/(k-2),\dots,1/(k-2);\gamma,\gamma,\delta,\dots,\delta)$\\
where $2\gamma+(k-2)\delta=1$;
\item[f)] $(0,\alpha,\dots,\alpha,\beta;\gamma,\delta,\dots,\delta,0)$\\
where $(k-2)\alpha+\beta=1$ and $\gamma+(k-2)\delta=1$;
\item[g)] $(0,1/(k-1),\dots,1/(k-1);\gamma,\delta,\dots,\delta)$\\
where $\gamma+(k-1)\delta=1$.
\end{itemize}
In particular, by Lemma \ref{LemmaAmmazzaCasi2}, a maximum with three or more $p$-coordinates (resp.\ $q$-coordinates) equal to zero is also attained at a point of the form $(a)$. Otherwise, there are at most two zero coordinates in each of the vectors $p$ and $q$. By Lemma \ref{LemmaAmmazzaCasi1}, we can then assume those zeros are in different positions; finally, using Lemmas \ref{lagrange1} and \ref{lagrange2}, we obtain the required characterization of the maximum.
For $k=5,6$, we inspected all the cases listed above using Mathematica and determined the maximum explicitly.
\begin{Theorem}\label{max}
The following hold:
\begin{itemize}
\item for $k=5$, the global maximum of $\Psi$ is $\frac{15(48+\sqrt{5})}{1936}\approx 0.389226$ and is obtained in case $(g)$ with $\delta=\frac{4+\sqrt{5}}{44}$ and $\gamma=1-4\delta$;
\item for $k=6$, the global maximum of $\Psi$ is $24/125=0.192$, obtained in case $(a)$.
\end{itemize}
\end{Theorem}
Theorem \ref{main} follows immediately from Theorem \ref{max} and Proposition \ref{FromMaxToRate}.
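The values in Theorem \ref{max} can be double-checked by evaluating $\Psi$ directly from definition \eqref{eq:defPsi} at the claimed maximizers. A brute-force sketch over all $k!$ permutations (variable names are ours):

```python
from itertools import permutations

def Psi(p, q):
    """Psi(p;q) = psi(p,q) + psi(q,p), summed over all permutations of {1,...,k}."""
    k = len(p)
    total = 0.0
    for s in permutations(range(k)):
        pp = qq = 1.0
        for i in s[:k - 2]:     # positions sigma(1), ..., sigma(k-2)
            pp *= p[i]
            qq *= q[i]
        total += pp * q[s[k - 2]] + qq * p[s[k - 2]]
    return total

# k = 5, case (g): p = (0, 1/4, ..., 1/4), q = (gamma, delta, ..., delta)
delta = (4 + 5 ** 0.5) / 44
gamma = 1 - 4 * delta
v5 = Psi((0, .25, .25, .25, .25), (gamma,) + (delta,) * 4)

# k = 6, case (a): p = (1, 0, ..., 0), q = (0, 1/5, ..., 1/5)
v6 = Psi((1,) + (0,) * 5, (0,) + (.2,) * 5)

print(v5)   # ~ 0.389226 = 15(48 + sqrt(5))/1936
print(v6)   # ~ 0.192    = 24/125
```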
\begin{Remark}
For $k>6$, the value obtained for $p$ and $q$ as in case $(a)$, which we conjecture to be the true maximum, is too large to improve the known upper bounds on $R_k$.
\end{Remark}
\section*{Acknowledgements}
This research was partially supported by the Italian Ministry of Education under Grant PRIN 2015 D72F16000790001. Helpful discussions with Jaikumar Radhakrishnan and Venkatesan Guruswami are gratefully acknowledged.
https://arxiv.org/abs/2002.11025 | New bounds for perfect $k$-hashing | Combinatorics (math.CO); Information Theory (cs.IT)
https://arxiv.org/abs/1502.05030 | Spherical sets avoiding a prescribed set of angles

Let $X$ be any subset of the interval $[-1,1]$. A subset $I$ of the unit sphere in $\mathbb{R}^n$ will be called \emph{$X$-avoiding} if $\langle u,v \rangle \notin X$ for any $u,v \in I$. The problem of determining the maximum surface measure of a $\{0\}$-avoiding set was first stated in a 1974 note by Witsenhausen; there the upper bound of $1/n$ times the surface measure of the sphere is derived from a simple averaging argument. A consequence of the Frankl-Wilson theorem is that this fraction decreases exponentially, but until now the $1/3$ upper bound for the case $n=3$ has not moved. We improve this bound to $0.313$ using an approach inspired by Delsarte's linear programming bounds for codes, combined with some combinatorial reasoning. In the second part of the paper, we use harmonic analysis to show that for $n\geq 3$ there always exists an $X$-avoiding set of maximum measure. We also show with an example that a maximiser need not exist when $n=2$.

\section{Introduction}
Witsenhausen \cite{witsenhausen74} in 1974 presented the following problem:
Let $S^{n-1}$ be the unit sphere in $\mathbb{R}^n$ and suppose $I \subset S^{n-1}$
is a Lebesgue measurable set such that no two vectors in $I$ are orthogonal. What is the largest possible Lebesgue surface measure of $I$?
Let $\alpha(n)$ denote the supremum of the measures of such sets $I$,
divided by the total measure of $S^{n-1}$.
The first upper bounds for $\alpha(n)$ appeared in
\cite{witsenhausen74}, where Witsenhausen deduced that $\alpha(n) \leq 1/n$.
In \cite{frankl-wilson81} Frankl and Wilson proved their powerful combinatorial result on
intersecting set systems, and
as an application they gave the first exponentially decreasing upper bound
$\alpha(n) \leq (1+o(1))(1.13)^{-n}$.
Raigorodskii \cite{raigorodskii99} improved the bound to $(1+o(1))(1.225)^{-n}$
using a refinement of the Frankl-Wilson method.
Gil Kalai conjectured in his weblog \cite{kalai09} that an extremal example
is to take two opposite caps, each of geodesic radius $\pi/4$; if true, this implies that
$\alpha(n) = (\sqrt{2} + o(1))^{-n}$.
Besides being of independent interest, the above \emph{Double Cap Conjecture}
is also important
because, if true, it would imply new lower bounds for the
measurable chromatic number of Euclidean space, which we now discuss.
Let $c(n)$ be the smallest integer $k$ such that $\mathbb{R}^n$ can be partitioned into sets $X_1, \dots, X_k$,
with $\|x-y\|_2 \neq 1$ for each $x,y \in X_i$, $1 \leq i \leq k$. The number $c(n)$ is
called the \emph{chromatic number of $\mathbb{R}^n$}, since the sets $X_1,\dots,X_k$ can
be thought of as colour classes for a proper colouring of the graph on the vertex set $\mathbb{R}^n$, in which
we join two points with an edge when they have distance $1$.
Frankl and Wilson \cite[Theorem~3]{frankl-wilson81} showed that $c(n) \geq (1+o(1))(1.2)^n$,
proving a conjecture of Erd\H{o}s that $c(n)$ grows exponentially.
Raigorodskii in 2000 \cite{raigorodskii00}
improved the lower bound to $(1+o(1))(1.239)^n$.
Requiring the classes $X_1,\dots,X_k$ to be Lebesgue measurable yields the
\emph{measurable chromatic number} $c_m(n)$. Clearly $c_m(n) \geq c(n)$. Remarkably,
it is still open whether the inequality is strict for at least one $n$, although one can prove better lower bounds
on $c_m(n)$. In particular, the exponent in Raigorodskii's bound
was recently beaten by Bachoc, Passuello and Thiery \cite{bachoc14} who showed that $c_m(n) \geq (1.268+o(1))^n$. If the Double Cap Conjecture is true,
then $c_m(n)\ge (\sqrt{2}+o(1))^n$ because, as is not hard to show,
$c_m(n) \geq 1/\alpha(n)$ for every $n\ge 2$.
Note that the best known asymptotic upper bound
on $c_m(n)$ (as well as on $c(n)$) is $(3+o(1))^n$, by Larman and Rogers~\cite{larman+rogers:72}.
Despite progress on the asymptotics of $\alpha(n)$, the upper bound of $1/3$ for $\alpha(3)$
has not been improved since the original statement of the problem in \cite{witsenhausen74}.
Note that the two-cap construction gives $\alpha(3)\ge 1-1/\sqrt{2}=0.2928\ldots$.
Our first main result is that $\alpha(3) < 0.313$. The proof involves tightening a Delsarte-type
linear programming upper bound (cf.\ \cite{delsarte73}, \cite{delsarte77}, \cite{bachoc09}, \cite{oliveira09}) by adding combinatorial constraints.
Let $\mathcal{L}$ be the $\sigma$-algebra of Lebesgue
surface measurable subsets of $S^{n-1}$, and let $\lambda$ be the surface
measure, for simplicity normalised so that $\lambda(S^{n-1}) = 1$.
For $X \subset [-1,1]$,
a subset $I \subset S^{n-1}$ will be called \emph{$X$-avoiding} if
$\langle \xi, \eta \rangle \notin X$ for all $\xi, \eta \in I$, where $\langle \xi, \eta \rangle$ denotes the
standard inner product of the vectors $\xi, \eta$. The corresponding extremal problem
is to determine
\begin{align}\label{eq:ir}
\alpha_X(n):=\sup \{ \lambda(I)~:~\text{$I \in \mathcal{L}$, $I$ is $X$-avoiding} \}.
\end{align}
For example, if $t \in (-1,1)$ and $X=[-1,t)$, then $I \subset S^{n-1}$ is $X$-avoiding if and only if its geodesic diameter
is at most $\arccos(t)$. Thus Levy's isodiametric inequality~\cite{levy:pcaf} shows that $\alpha_X(n)$
is attained by a spherical cap of the appropriate size.
A priori, it is not clear that the value of $\alpha_X(n)$
is actually attained by some measurable $X$-avoiding set $I$
(so Witsenhausen \cite{witsenhausen74} had to use supremum to
define $\alpha(n)$).
We prove in Theorem~\ref{thm:attainment} that the supremum is attained as a maximum
whenever $n \geq 3$. Remarkably,
this result holds under no additional assumptions whatsoever on the set $X$. However, in a sense
only closed sets $X$ matter: our Theorem~\ref{thm:closureIR} shows that $\alpha_X(n)$ does not change if we replace
$X$ by its closure. When $n=2$ the conclusion of Theorem~\ref{thm:attainment} fails; that is,
the supremum in \eqref{eq:ir} need not be a maximum: an example is given in Theorem
\ref{thm:irCircle}.
Besides answering a natural question, the attainment result
can also be appreciated through a historical lens: in 1838 Jakob Steiner tried
to prove that a circle maximizes the area among all plane figures having some given perimeter.
He showed that any non-circle could be improved, but he was not able to
rule out the possibility that a sequence of ever improving plane shapes of equal perimeter could have
areas approaching some supremum which is not achieved as a maximum.
Only 40 years later in 1879 was the proof
completed, when Weierstrass showed that a maximizer must indeed exist.
The layout of the paper will be as follows. In Section \ref{sec:prelim} we make some general
definitions and fix notation. In Section \ref{sec:comb} we prove a simple and general proposition
giving combinatorial upper bounds for $\alpha_X(n)$; this is basically a formalisation of
the method used
by Witsenhausen in \cite{witsenhausen74} to obtain the $\alpha(n) \leq 1/n$ bound.
We then apply the proposition to calculate $\alpha_X(2)$ when $|X|=1$.
In Section \ref{sec:lp} we deduce linear programming
upper bounds for $\alpha(n)$, in the spirit of the Delsarte bounds for binary
\cite{delsarte73} and spherical \cite{delsarte77} codes.
We then strengthen the linear programming bound in the $n=3$ case
in Section \ref{sec:lp+comb} to obtain the first main result.
In Section \ref{sec:max} we prove
that the supremum $\alpha_X(n)$ is a maximum when $n \geq 3$, and in
Section \ref{sec:closure}
we show that $\alpha_X(n)$ remains unchanged when $X$ is replaced with its
topological closure. In Section \ref{sec:single} we formulate a conjecture
generalising the Double Cap Conjecture for the sphere in $\mathbb{R}^3$, in which other
forbidden inner products are considered.
\section{Preliminaries}\label{sec:prelim}
If $u,v \in \mathbb{R}^n$ are two vectors, their standard inner product will be denoted $\langle u, v \rangle$.
All vectors will be assumed to be column vectors.
The transpose of a matrix $A$ will be denoted $A^t$.
We denote by $SO(n)$ the group of $n \times n$ matrices $A$ over $\mathbb{R}$
having determinant $1$, for which $A^t A$ is equal to the identity matrix.
We will think of $SO(n)$ as a compact topological group, and we will always
assume its Haar measure is normalised so that $SO(n)$ has measure $1$.
We denote by $S^{n-1}$ the set of unit vectors in $\mathbb{R}^n$,
\[
S^{n-1} = \{ x \in \mathbb{R}^n : \langle x, x \rangle = 1 \},
\]
equipped with its usual topology. The Lebesgue measure $\lambda$ on $S^{n-1}$
is always taken to be normalised so that $\lambda(S^{n-1}) = 1$.
Recall that the standard surface measure of $S^{n-1}$ is
\begin{equation}\label{eq:OmegaN}
\omega_n = \frac{2 \pi^{n/2}}{\Gamma(n/2)},
\end{equation} where $\Gamma$ denotes Euler's gamma-function.
The Lebesgue $\sigma$-algebra on $S^{n-1}$ will be denoted by $\mathcal{L}$.
When $(X, \mathcal{M}, \mu)$ is a measure space and $1 \leq p < \infty$,
we use
\[
L^p(X) = \left\{ f :~\text{$f$ is an $\mathbb{R}$-valued $\mathcal{M}$-measurable function and
$\int |f|^p\,\mathrm{d}\mu < \infty$} \right\}.
\]
For $f \in L^p(X)$, we define $\|f\|_p := \left( \int |f|^p\,\mathrm{d}\mu \right)^{1/p}$. Identifying
two functions when they agree $\mu$-almost everywhere, we make $L^p(X)$
a Banach space with the norm $\| \cdot \|_p$.
We will use bold letters (for example $\bm{X}$) for random variables.
The expectation of a function $f$ of a random variable $\bm{X}$ will be denoted
$\mathbb{E}_{\bm{X}}[f(\bm{X})]$, or just $\mathbb{E}[f(\bm{X})]$. The probability of an event $E$
will be denoted $\mathbb{P}[E]$.
When $X$ is a set, we use $\mathbbm{1}_X$ to denote its characteristic function; that is
$\mathbbm{1}_X(x) = 1$ if $x \in X$ and $\mathbbm{1}_X(x) = 0$ otherwise.
If $G = (V,E)$ is a graph, a set $I$ is called \emph{independent}
if $\{u,v\} \notin E$ for any $u,v \in I$. The \emph{independence number} $\alpha(G)$ of $G$
is the cardinality of a largest independent set in $G$. We define
$\alpha_X(n)$ as in \eqref{eq:ir}, and for brevity we let $\alpha(n) = \alpha_{\{0\}}(n)$.
\section{Combinatorial upper bound}\label{sec:comb}
Let us begin by deriving a simple ``combinatorial'' upper bound for the quantity $\alpha_X(n)$.
\begin{proposition}\label{pr:comb}
Let $n \geq 2$ and $X \subset [-1,1]$. For a finite subset $V \subset S^{n-1}$,
we let $H=(V,E)$ be the graph on the vertex set $V$ with edge set defined by
putting $\{ \xi, \eta \} \in E$ if and only if $\langle \xi, \eta \rangle \in X$.
Then $\alpha_X(n) \leq \alpha(H) / |V|$.
\end{proposition}
\begin{proof} Let $I \subset S^{n-1}$ be an $X$-avoiding set, and
take a uniform $\bm{O}\in SO(n)$. Let the random variable $\bm{Y}$ be the number of
$\xi\in V$ with $\bm{O}\xi\in I$. Since $\bm{O}\xi\in S^{n-1}$ is uniformly distributed
for every $\xi\in V$, we have by the linearity of expectation that $\mathbb{E}(\bm{Y})=|V|\, \lambda(I)$.
On the other hand, $\bm{Y} \le \alpha(H)$ for every outcome $\bm{O}$.
Thus $\lambda(I) \leq \alpha(H)/|V|$.
\end{proof}
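As a concrete illustration, take $V$ to be the $q$-th roots of unity in $S^1$ and $X=\{\cos(2\pi p/q)\}$ with $p,q$ coprime: the graph $H$ is then a $q$-cycle, so Proposition \ref{pr:comb} gives $\alpha_X(2)\le \lfloor q/2\rfloor / q$. A small brute-force sketch (helper names are ours):

```python
from itertools import combinations
from math import cos, pi, isclose

def independence_number(n_vertices, edges):
    """Independence number of a small graph, by brute force."""
    for r in range(n_vertices, 0, -1):
        for subset in combinations(range(n_vertices), r):
            s = set(subset)
            if all(not (u in s and v in s) for u, v in edges):
                return r
    return 0

def forbidden_angle_graph(q, p=1):
    """Vertices 0..q-1 are the q-th roots of unity; edge when the
    inner product equals cos(2*pi*p/q)."""
    x = cos(2 * pi * p / q)
    return [(i, j) for i, j in combinations(range(q), 2)
            if isclose(cos(2 * pi * (i - j) / q), x, abs_tol=1e-9)]

q = 5
alpha_H = independence_number(q, forbidden_angle_graph(q))
print(alpha_H, alpha_H / q)   # 2 and 0.4: alpha_X(2) <= 2/5 for X = {cos(2*pi/5)}
```

For odd $q$ this bound $\lfloor q/2\rfloor/q=(q-1)/(2q)$ is exactly the value appearing in the next theorem.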
We next use Proposition~\ref{pr:comb} to find the largest possible Lebesgue measure
of a subset of the unit circle in $\mathbb{R}^2$ in which no two points lie at some fixed
forbidden angle.
\begin{theorem}\label{thm:irCircle}
Let $X = \{x\}$ and put $t = \frac{\arccos{x}}{2\pi}$.
If $t$ is rational and $t = p/q$ with $p$ and $q$ coprime integers, then
\begin{align*}
\alpha_X(2) = \begin{cases}
1/2, &~\text{if $q$ is even,}\\
(q-1)/(2q), &~\text{if $q$ is odd.}
\end{cases}
\end{align*}
In this case $\alpha_X(2)$ is attained as a maximum. If $t$ is irrational then
$\alpha_X(2) = 1/2$, but there exists no measurable $X$-avoiding set $I \subset S^1$
with $\lambda(I) = 1/2$.
\end{theorem}
\begin{proof}
Write $\alpha = \alpha_X(2)$, and identify $S^1$ with the interval $[0,1)$
via the map $(\cos x, \sin x) \mapsto x/2\pi$.
We regard $[0,1)$ as a group with the operation of addition modulo $1$.
Notice that $I \subset [0,1)$ is $X$-avoiding if and only if $I \cap (t+I) = \emptyset$.
This implies immediately that $\alpha \leq 1/2$ for all values of~$x$.
Now suppose $t = p/q$ with $p$ and $q$ coprime integers, and
suppose that $q$ is even.
Let $S$ be any open subinterval of $[0,1)$ of length $1/q$, and
define $T : [0,1) \to [0,1)$ by $T x = x+t \mod 1$.
Using the fact that $p$ and $q$ are coprime,
one easily verifies that $I = S \cup T^2 S \cup \cdots \cup T^{q-4} S \cup T^{q-2} S$
has measure $1/2$. Also $I$ is $X$-avoiding since
$T I = T S \cup T^3 S \cup \cdots \cup T^{q-3} S \cup T^{q-1} S$
is disjoint from $I$. Therefore $\alpha = 1/2$.
Next suppose $q$ is odd.
With notation as before, a similar argument shows that
$S \cup T^2 S \cup \cdots \cup T^{q-3} S$
is an $X$-avoiding set of measure $(q-1)/(2q)$,
and Proposition~\ref{pr:comb} shows that this is largest possible,
since the points $x, T x, T^2 x, \dots, T^{q-1} x$ induce a $q$-cycle.
Finally suppose that $t$ is irrational. By Dirichlet's approximation theorem
there exist infinitely many pairs of coprime integers $p$ and $q$ such that
$| t - p/q | < 1/q^2$. For each such pair, let $\varepsilon = \varepsilon(q) = | t - p/q |$.
Using an open interval $S$ of length $\frac{1}{q} - \varepsilon$ and applying
the same construction as above with $T$ defined by $Tx = x+p/q$, one obtains an
$X$-avoiding set of measure at least
$((q-1)/2)(1/q - \varepsilon) = 1/2 - o(1)$. Alternatively,
the lower bound $\alpha \ge 1/2$ follows from
Rohlin's tower theorem (see e.g.\ \cite[Theorem~169]{kalikow+mccutcheon:oet}) applied to the ergodic transformation $Tx = x+t$. Therefore $\alpha = 1/2$.
However this supremum can never be attained. Indeed, if $I \subset [0,1)$ is
an $X$-avoiding set with $\lambda(I) = 1/2$ and $T$ is defined by $Tx = x+t$, then
$I \cap T I = \emptyset$ and $T I \cap T^2 I = \emptyset$. Since $\lambda(I)=1/2$,
the set $T I$ is, up to a nullset, the complement of both $I$ and $T^2 I$; hence $I$ and $T^2 I$ differ only on a nullset,
contradicting the ergodicity of the irrational rotation $T^2$.
\end{proof}
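The interval construction in the proof can also be verified symbolically: each copy $T^j S$ occupies the cell of $[0,1)$ indexed by $jp \bmod q$, so for odd $q$ the set $I$ is $X$-avoiding precisely when its index set is disjoint from its shift by $p$. A sketch in exact arithmetic (the helper name is ours):

```python
from fractions import Fraction

def construction(q, p):
    """Shift indices j*p mod q of the intervals T^j S, j = 0, 2, ..., q-3 (q odd)."""
    assert q % 2 == 1
    return {(j * p) % q for j in range(0, q - 2, 2)}

for q, p in [(5, 2), (7, 3), (9, 4)]:
    I = construction(q, p)
    shifted = {(i + p) % q for i in I}      # index set of T(I)
    assert I.isdisjoint(shifted)            # so I is X-avoiding
    assert Fraction(len(I), q) == Fraction(q - 1, 2 * q)   # measure (q-1)/(2q)
    print(q, sorted(I))
```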
\section{Gegenbauer polynomials and Schoenberg's theorem}\label{sec:gegenbauer}
Before proving the first main result, we recall the Gegenbauer polynomials
and Schoenberg's theorem from the theory of spherical harmonics.
For $\nu > -1/2$, define the \emph{Gegenbauer weight function}\index{Gegenbauer weight function}
\[
r_\nu(t) := (1-t^2)^{\nu-1/2},~~ -1 < t < 1.
\]
To motivate this definition, observe that if we take a uniformly distributed vector $\bm{\xi}\in S^{n-1}$, $n \geq 2$,
and project it to any given axis, then the density of the obtained random variable $\bm{X}\in[-1,1]$ is proportional to
$r_{(n-2)/2}$, with the coefficient
$\left( \int_{-1}^1 r_{(n-2)/2}(x)\,\mathrm{d}x \right)^{-1} =\frac{\omega_{n-1}}{\omega_n}$
where $\omega_n$ is as in (\ref{eq:OmegaN}). (In particular, $\bm{X}$ is
uniformly distributed in $[-1,1]$ if $n=3$.)
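For $n=3$ the parenthetical remark is Archimedes' classical observation that the axial projection of the uniform measure on $S^2$ is uniform on $[-1,1]$. A quick Monte Carlo sanity check (sampling by normalising Gaussian vectors, a standard device; variable names are ours):

```python
import random
random.seed(0)

def uniform_sphere_point():
    """Uniform point on S^2: normalise a standard Gaussian vector."""
    while True:
        v = [random.gauss(0, 1) for _ in range(3)]
        r = sum(c * c for c in v) ** 0.5
        if r > 1e-12:
            return [c / r for c in v]

N = 100_000
zs = [uniform_sphere_point()[2] for _ in range(N)]
mean_z = sum(zs) / N
frac = sum(1 for z in zs if 0 <= z <= 0.5) / N
print(mean_z)   # close to 0
print(frac)     # close to 1/4, as for a uniform variable on [-1, 1]
```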
Applying the Gram-Schmidt process to the polynomials $1,t,t^2,\dots$ with respect to the inner
product $\langle f, g \rangle_\nu = \int_{-1}^1 f(t) g(t) r_\nu(t)\,\mathrm{d}t$, one obtains
the \emph{Gegenbauer polynomials}\index{Gegenbauer polynomials}
$C_i^\nu(t)$, $i=0,1,2,\dots$, where $C_i^\nu$ is of degree $i$. For a concise
overview of these polynomials, see e.g.\ \cite[Section B.2]{dai+xu:athasb}. Here, we always use the normalisation
$C_i^\nu(1)=1$.
For a fixed $n \geq 2$, a continuous function $f : [-1,1] \to \mathbb{R}$ is called
\emph{positive definite}\index{positive definite!function on $S^{n-1}$}
if for every set of distinct points $\xi_1, \dots, \xi_s \in S^{n-1}$, the
matrix $(f(\langle \xi_i, \xi_j \rangle))_{i,j=1}^s$ is positive semidefinite.
We will need the following result of Schoenberg~\cite{shoenberg:38}
(for a modern presentation, see e.g.\ \cite[Theorem~14.3.3]{dai+xu:athasb}).
\begin{theorem}[Schoenberg's theorem]\label{thm:schoenberg}\index{Schoenberg's theorem}
For $n \geq 2$, a continuous function $f : [-1,1] \to \mathbb{R}$ is positive definite if and only if
there exist coefficients $a_i \geq 0$, for $i \geq 0$, such that
\[
f(t) = \sum_{i=0}^\infty a_i C_i^{(n-2)/2}(t),~~~~\text{for all}~t \in [-1,1].
\]
Moreover, the convergence on the right-hand side is absolute and uniform for every
positive definite function $f$.
\end{theorem}
For a given positive definite function $f$, the coefficients $a_i$ in
Theorem~\ref{thm:schoenberg} are unique and can be computed
explicitly; a formula is given in \cite[Equation (14.3.3)]{dai+xu:athasb}.
We are especially interested in the case $n=3$. Then $\nu = 1/2$, and the
first few Gegenbauer polynomials $C_i^{1/2}(x)$ are
\begin{gather*}
C_0^{1/2}(x) = 1,~~~C_1^{1/2}(x) =x,~~~C_2^{1/2}(x) = \frac{1}{2} \left( 3x^2 - 1\right),\\
C_3^{1/2}(x) = \frac{1}{2}\left( 5x^3 - 3x \right),
~~~C_4^{1/2}(x) = \frac{1}{8}\left( 35x^4 -30x^2+3 \right).
\end{gather*}
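Since the weight $r_{1/2}$ is constant on $(-1,1)$, the polynomials $C_i^{1/2}$ coincide with the Legendre polynomials normalised so that $C_i^{1/2}(1)=1$, and the listed formulas can be reproduced from the standard three-term recurrence. A sketch in exact arithmetic (the function name is ours):

```python
from fractions import Fraction

def legendre(n):
    """Ascending coefficients of the Legendre polynomial P_n, from the
    recurrence (m+1) P_{m+1}(x) = (2m+1) x P_m(x) - m P_{m-1}(x).
    For nu = 1/2 the weight r_nu is constant, so C_n^{1/2} (normalised
    by C_n^{1/2}(1) = 1) coincides with P_n."""
    prev = [Fraction(1)]                       # P_0 = 1
    if n == 0:
        return prev
    cur = [Fraction(0), Fraction(1)]           # P_1 = x
    for m in range(1, n):
        nxt = [Fraction(0)] * (m + 2)
        for i, c in enumerate(cur):            # + (2m+1)/(m+1) * x * P_m
            nxt[i + 1] += Fraction(2 * m + 1, m + 1) * c
        for i, c in enumerate(prev):           # - m/(m+1) * P_{m-1}
            nxt[i] -= Fraction(m, m + 1) * c
        prev, cur = cur, nxt
    return cur

print(legendre(2))   # coefficients of (3x^2 - 1)/2
print(legendre(4))   # coefficients of (35x^4 - 30x^2 + 3)/8
```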
\section{Linear programming relaxation}\label{sec:lp}
Schoenberg's theorem allows us to
set up a linear program whose value upper bounds
$\alpha(n)$ for $n \geq 3$. The same result appears in
\cite{bachoc09} and \cite{oliveira09}; we present a self-contained
(and slightly simpler) proof for the reader's convenience.
In the next section we strengthen the linear program,
obtaining a better bound for~$\alpha(3)$.
\begin{lemma}\label{lm:pt}
Suppose $f,g \in L^2(S^{n-1})$ and define $k : [-1,1] \to \mathbb{R}$ by
\begin{align}\label{eq:correlation}
k(t) := \mathbb{E}[f(\bm{O} \xi) g(\bm{O} \eta)],
\end{align}
where the expectation is taken over randomly chosen $\bm{O} \in SO(n)$,
and $\xi,\eta \in S^{n-1}$ are any two points satisfying $\langle \xi, \eta \rangle = t$.
Then $k(t)$ exists for every $-1 \leq t \leq 1$, and
$k$ is continuous. If $f=g$, then $k$ is positive definite.
\end{lemma}
\begin{proof}
The expectation in \eqref{eq:correlation} clearly does not depend on the particular
choice of $\xi,\eta \in S^{n-1}$. Fix any point $\xi_0 \in S^{n-1}$ and let
$P : [-1,1] \to SO(n)$ be any continuous function satisfying
$\langle \xi_0, P(t) \xi_0 \rangle = t$ for each $-1 \leq t \leq 1$. We have
\begin{align}\label{eq:correlation1}
k(t) = \mathbb{E}[f(\bm{O}\xi_0) g(\bm{O}P(t)\xi_0)].
\end{align}
The functions $O \mapsto f(O\xi_0)$ and $O \mapsto g(OP(t)\xi_0)$ on $SO(n)$
belong to $L^2(SO(n))$; being an inner product in $L^2(SO(n))$, the expectation
\eqref{eq:correlation1} therefore exists for every $t \in [-1,1]$.
We next show that $k$ is continuous.
For each $O \in SO(n)$, let $R_O : L^2(SO(n)) \to L^2(SO(n))$ be the
\emph{right translation} operator defined by $(R_O F)(O') = F(O' O)$,
for $F \in L^2(SO(n))$. For fixed $F$, the map $O \mapsto R_O F$ is continuous
from $L^2(SO(n))$ to $L^2(SO(n))$ (see e.g. \cite[Lemma~1.4.2]{deitmar09}).
Therefore the function $t \mapsto R_{P(t)} F$ is continuous from $[-1,1]$
to $L^2(SO(n))$. Using $F(O) = g(O\xi_0)$, the continuity of $k$ follows.
Now suppose $f=g$; we show that $k$ is positive definite. Let $\xi_1, \dots, \xi_s \in S^{n-1}$.
We need to show that the $s \times s$ matrix $K = (k(\langle \xi_i, \xi_j \rangle))_{i,j=1}^s$ is positive
semidefinite. But if $v = (v_1, \dots, v_s)^t \in \mathbb{R}^s$ is any column vector, then
\begin{align*}
v^t K v &= \sum_{i=1}^s \sum_{j=1}^s \mathbb{E}[f(\bm{O} \xi_i) f(\bm{O} \xi_j)] v_i v_j
= \mathbb{E} \left[ \left( \sum_{i=1}^s f(\bm{O} \xi_i) v_i \right)^2 \right] \geq 0.
\end{align*}
\end{proof}
\begin{theorem}\label{thm:lp-upper-bound}
The quantity $\alpha(n)$ is at most the
value of the following infinite-dimensional linear program.
\begin{align}\label{eq:lpweak}
\begin{gathered}
\max x_0\\
\sum_{i=0}^\infty x_i = 1\\
\sum_{i=0}^\infty x_i C_i^{(n-2)/2}(0) = 0\\
x_i \geq 0,~\text{for all $i = 0,1,2,\dots$}~.
\end{gathered}
\end{align}
\end{theorem}
\begin{proof}
Let $I \in \mathcal{L}$ be a $\{0\}$-avoiding subset of $S^{n-1}$ with $\lambda(I)>0$.
We construct a feasible solution to the linear program \eqref{eq:lpweak}
having value $\lambda(I)$.
Let $k : [-1,1] \to \mathbb{R}$ be defined as in \eqref{eq:correlation}, with $f=g=\mathbbm{1}_I$.
Then $k$ is a positive definite function satisfying $k(1) = \lambda(I)$ and $k(0) = 0$.
By Theorem~\ref{thm:schoenberg},
$k$ has an expansion in terms of the Gegenbauer polynomials:
\begin{align}\label{eq:gegenbauer}
k(t) = \sum_{i=0}^\infty a_i C_i^{(n-2)/2}(t),
\end{align}
where each $a_i\ge 0$ and the convergence of the right-hand side is uniform on $[-1,1]$.
Moreover, for each fixed $\xi_0 \in S^{n-1}$, we have by Fubini's theorem and
\eqref{eq:correlation} that
\begin{align}\label{eq:gegenbauerInt}
\int_{S^{n-1}} k(\langle \xi_0, \eta \rangle) \,\mathrm{d}\eta
&= \int_{S^{n-1}} \int_{S^{n-1}} k(\langle \xi, \eta \rangle) \, \mathrm{d} \xi \,\mathrm{d}\eta\\
&= \mathbb{E} \left[ \left( \int_{S^{n-1}} \mathbbm{1}_I(\bm{O} \xi) \, \mathrm{d}\xi \right)^2 \right]
= \lambda(I)^2.
\end{align}
Note that
\begin{align*}
\int_{S^{n-1}} C_i^{(n-2)/2}(\langle \xi_0, \eta \rangle) \, \mathrm{d}\eta
= \frac{\omega_{n-1}}{\omega_n} \int_{-1}^1 C_i^{(n-2)/2}(t) (1-t^2)^{(n-3)/2}\,\mathrm{d}t = 0
\end{align*}
whenever $i \geq 1$, since each $C_i^{(n-2)/2}$ with $i \geq 1$ is orthogonal to the constant polynomial $C_0^{(n-2)/2}$ with respect to the weight $(1-t^2)^{(n-3)/2}$.
Putting \eqref{eq:gegenbauer} and \eqref{eq:gegenbauerInt} together
and using that $C_0^{(n-2)/2} \equiv 1$, we conclude
that $a_0 = \lambda(I)^2$.
Recalling that $C_i^{(n-2)/2}(1) = 1$ for $i \geq 0$, we find that
setting $x_i = a_i / \lambda(I)$ for $i=0,1,2,\dots$ gives a feasible solution of
value $\lambda(I)$ to the linear program \eqref{eq:lpweak}.\qedhere
\end{proof}
Unfortunately in the case $n=3$, the value of \eqref{eq:lpweak} is at least $1/3$,
which is the same bound obtained when Witsenhausen first stated
the problem in \cite{witsenhausen74}. This can be seen from the feasible solution
$x_0 = 1/3, x_2=2/3$ and $x_i = 0$ for all $i \neq 0,2$.
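For $n=3$, the polynomials $C_i^{1/2}$, normalized so that $C_i^{1/2}(1)=1$, are the Legendre polynomials $P_i$, so the feasibility of this solution can be confirmed with exact rational arithmetic. A minimal Python sketch (the helper \texttt{legendre\_at\_zero} is our own, not from any library):

```python
from fractions import Fraction

def legendre_at_zero(i):
    # P_i(0) via the recurrence (k+1) P_{k+1}(0) = -k P_{k-1}(0),
    # carried out in exact rational arithmetic.
    p_prev, p = Fraction(1), Fraction(0)   # P_0(0) = 1, P_1(0) = 0
    if i == 0:
        return p_prev
    for k in range(1, i):
        p_prev, p = p, Fraction(-k, k + 1) * p_prev
    return p

# The feasible solution x_0 = 1/3, x_2 = 2/3 (all other x_i = 0):
x = {0: Fraction(1, 3), 2: Fraction(2, 3)}
assert sum(x.values()) == 1                                     # mass constraint
assert sum(c * legendre_at_zero(i) for i, c in x.items()) == 0  # the C_i(0) constraint
```

Since $P_2(0) = -1/2$, the second constraint reads $1/3 - 1/3 = 0$, and the objective value is $x_0 = 1/3$.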
\section{Adding combinatorial constraints}\label{sec:lp+comb}
For each $\xi \in S^{n-1}$ and $-1 < t < 1$, let $\sigma_{\xi,t}$ be the unique
probability measure on the Borel subsets of $S^{n-1}$ whose support is equal to the set
$$
\xi^t := \{ \eta \in S^{n-1} : \langle \eta, \xi \rangle = t \},
$$ and which is invariant under all
rotations fixing $\xi$.
Now let $n=3$.
As before, let $I \in \mathcal{L}$ be a $\{0\}$-avoiding subset of $S^2$ and define $k : [-1,1] \to \mathbb{R}$
as in \eqref{eq:correlation} with $f=g=\mathbbm{1}_I$; i.e.
\[
k(t) = \mathbb{E}[\mathbbm{1}_I(\bm{O}\xi) \mathbbm{1}_I(\bm{O}\eta)],
\]
where $\xi, \eta \in S^2$ satisfy $\langle \xi, \eta \rangle = t$.
Our aim now is to strengthen \eqref{eq:lpweak} for the case $n=3$
by adding combinatorial inequalities coming from Proposition~\ref{pr:comb}
applied to the sections of $S^2$ by affine planes.
We proceed as follows.
Let $p$ and $q$ be coprime integers with $1/4 \leq p/q \leq 1/2$, and let
$$
t_{p,q} = \sqrt{\frac{-\cos(2\pi p/q)}{1-\cos(2\pi p/q)}}.$$
Let $\xi \in S^{n-1}$ be arbitrary.
Take two orthogonal unit vectors whose endpoints lie in $\xi^{t_{p,q}}$, together with
the centre $\xi_0=t_{p,q}\xi$ of this circle; these three points form an isosceles triangle with two equal sides of length $(1-t_{p,q}^2)^{1/2}$
and base $\sqrt{2}$. By the law of cosines, the angle at $\xi_0$ is
$2\pi p/q$.
Let $\xi_0, \eta_0 \in S^{n-1}$ be arbitrary points satisfying $\langle \xi_0, \eta_0 \rangle = t_{p,q}$.
By Fubini's theorem we have
\begin{align*}
k(t_{p,q}) &= \mathbb{E}[\mathbbm{1}_I(\bm{O}\xi_0) \mathbbm{1}_I(\bm{O}\eta_0)]
= \int_{\xi_0^{t_{p,q}}} \mathbb{E}[\mathbbm{1}_I(\bm{O}\xi_0) \mathbbm{1}_I(\bm{O}\eta)]
\, \mathrm{d}\sigma_{\xi_0,t_{p,q}}(\eta)\\
&= \mathbb{E} \left[ \mathbbm{1}_I(\bm{O}\xi_0) \int_{\xi_0^{t_{p,q}}} \mathbbm{1}_I(\bm{O}\eta)
\, \mathrm{d}\sigma_{\xi_0,t_{p,q}}(\eta) \right].
\end{align*}
But if $q$ is odd, then $\int_{\xi_0^{t_{p,q}}} \mathbbm{1}_I(O \eta) \, \mathrm{d}
\sigma_{\xi_0,t_{p,q}}(\eta) \leq \frac{q-1}{2q}$ for all $O \in SO(n)$
by Proposition~\ref{pr:comb} applied to the circle $(O\xi_0)^{t_{p,q}}\cong S^1$,
since the subgraph it induces contains a cycle of length $q$.
Therefore $k(t_{p,q}) \leq \lambda(I) \frac{q-1}{2q}$.
It follows that the inequalities
\begin{align}\label{eq:combIneq}
\sum_{i=0}^\infty x_i C_i^{1/2}(t_{p,q}) \leq (q-1)/2q,
\end{align}
are valid for the relaxation and can be added to \eqref{eq:lpweak}.
The same holds for the inequalities $\sum_{i=0}^\infty x_i C_i^{1/2}(-t_{p,q}) \leq (q-1)/2q$.
So we have just proved the following result.
\begin{theorem}
$\alpha(3)$ is no more than the value of the following infinite-dimensional linear program.
\begin{align}\label{eq:lp-super-strong}
\begin{gathered}
\max x_0\\
\sum_{i=0}^\infty x_i = 1\\
\sum_{i=0}^\infty x_i C_i^{1/2}(0) = 0\\
\sum_{i=0}^\infty x_i C_i^{1/2}(\pm t_{p,q}) \leq (q-1)/2q,~~\text{for $q$ odd and $p,q$
coprime with $1/4 \leq p/q \leq 1/2$}\\
x_i \geq 0,~\text{for all $i = 0,1,2,\dots$}\ .
\end{gathered}
\end{align}
\end{theorem}
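As a floating-point sanity check of the derivation above (not part of any proof), one can confirm the closed form of $t_{p,q}$ and the law-of-cosines computation for the two pairs $(p,q)$ that will be used later:

```python
import math

def t_pq(p, q):
    # t_{p,q} = sqrt(-cos(2 pi p/q) / (1 - cos(2 pi p/q)))
    c = math.cos(2 * math.pi * p / q)
    return math.sqrt(-c / (1 - c))

# The two values used in the proof of the upper bound:
assert abs(t_pq(1, 3) - 1 / math.sqrt(3)) < 1e-12   # t_{1,3} = 1/sqrt(3)
assert abs(t_pq(2, 5) - 5 ** (-0.25)) < 1e-12       # t_{2,5} = 5^{-1/4}

# Law-of-cosines check: the isosceles triangle with two equal sides of
# length r = sqrt(1 - t^2) and base sqrt(2) has apex angle 2 pi p/q.
for p, q in [(1, 3), (2, 5)]:
    t = t_pq(p, q)
    r2 = 1 - t * t
    cos_angle = (2 * r2 - 2) / (2 * r2)
    assert abs(cos_angle - math.cos(2 * math.pi * p / q)) < 1e-12
```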
Rather than attempting to find the exact value of the linear program \eqref{eq:lp-super-strong},
the idea will be to discard all but finitely many of the combinatorial constraints, and
then to apply the weak duality theorem of linear programming. The dual linear program
has only finitely many variables, and any feasible solution gives an upper bound
for the value of program \eqref{eq:lp-super-strong}, and therefore also for $\alpha(3)$.
At the heart of the proof is the verification of the feasibility of a particular dual
solution which we give explicitly.
While part of the verification has been carried out by computer in order to deal with
the large numbers that appear, it can be done using only rational arithmetic and can therefore
be considered rigorous.
\begin{theorem}
$\alpha(3) < 0.313$.
\end{theorem}
\begin{proof}
Consider the following linear program
\begin{align}\label{eq:lpstrong}
\max \Big\{ x_0 &: \sum_{i=0}^\infty x_i = 1, \sum_{i=0}^\infty x_i C_i^{1/2}(0) = 0,
\sum_{i=0}^\infty x_i C_i^{1/2}(t_{1,3}) \leq 1/3,\\
&\sum_{i=0}^\infty x_i C_i^{1/2}(t_{2,5}) \leq 2/5,
\sum_{i=0}^\infty x_i C_i^{1/2}(-t_{2,5}) \leq 2/5,\nonumber\\
&~~~~~~~~~~~x_i \geq 0,~\text{for all $i = 0,1,2,\dots$}\nonumber \Big\}.
\end{align}
The linear programming dual of \eqref{eq:lpstrong} is the following.
\begin{align}\label{eq:lpdual}
\begin{gathered}
\min ~b_1 + \frac{1}{3}b_{1,3} + \frac{2}{5}b_{2,5} + \frac{2}{5}b_{2,5-}\\
b_1 + b_0 + b_{1,3} + b_{2,5} + b_{2,5-} \geq 1\\
b_1 + C_i^{1/2}(0) b_0 + C_i^{1/2}(t_{1,3}) b_{1,3} + C_i^{1/2}(t_{2,5}) b_{2,5}
+ C_i^{1/2}(-t_{2,5}) b_{2,5-} \geq 0
~\text{for $i = 1,2,\dots$}\\
b_1, b_0 \in \mathbb{R}, ~b_{1,3}, b_{2,5}, b_{2,5-} \geq 0.
\end{gathered}
\end{align}
By linear programming duality,
any feasible solution for program \eqref{eq:lpdual} gives an upper bound
for \eqref{eq:lpstrong}, and therefore also for $\alpha(3)$.
So in order to prove the claim $\alpha(3) < 0.313$, it suffices
to give a feasible solution to \eqref{eq:lpdual} having objective value no more than $0.313$.
Let
\begin{align*}
b = (b_1, b_0, b_{1,3}, b_{2,5}, b_{2,5-})
= \frac{1}{10^6}(128614, 404413, 36149, 103647, 327177).
\end{align*}
It is easily verified that $b$ satisfies the first constraint of \eqref{eq:lpdual}
and that its objective value is less than $0.313$.
To verify the infinite family of constraints
\begin{align}\label{eq:constraints}
b_1 + C_i^{1/2}(0) b_0 + C_i^{1/2}(t_{1,3}) b_{1,3} + C_i^{1/2}(t_{2,5}) b_{2,5}
+ C_i^{1/2}(-t_{2,5}) b_{2,5-} \geq 0
\end{align}
for $i=1,2,\dots$, we apply Theorem~8.21.11 from \cite{szego92} (where
$C_i^{\lambda}$ is denoted as $P_i^{(\lambda)}$), which implies
\begin{align}\label{eq:gegenbauer-upper-bound}
| C_i^{1/2}(\cos{\theta}) | \leq \frac{\sqrt{2}}{\sqrt{\pi} \sqrt{\sin{\theta}}}\,
\frac{\Gamma(i+1)}{\Gamma(i+3/2)}
+ \frac{1}{\sqrt{\pi} 2^{3/2} (\sin{\theta})^{3/2}}\, \frac{\Gamma(i+1)}{\Gamma(i+5/2)}
\end{align}
for each $0 < \theta < \pi$.
Note that $t_{1,3} = 1/\sqrt{3}$ and $t_{2,5}=5^{-1/4}$.
When $\theta \in A:= \{ \pi/2, \arccos{t_{1,3}}, \arccos{t_{2,5}}, \arccos{(-t_{2,5})} \}$,
we have $\sin{\theta} \in \{ 1, \sqrt{\frac{2}{3}}, \gamma \}$, where
$\gamma =\frac{2}{\sqrt{5+\sqrt{5}}}$. The right-hand side of equation
\eqref{eq:gegenbauer-upper-bound} is maximized over $\theta\in A$ at $\sin{\theta} = \gamma$
for each fixed $i$, and since the right-hand side is decreasing in $i$,
one can verify using rational arithmetic only that it is no greater than
$128614 / 871386 = b_1 / (b_0+b_{1,3}+b_{2,5}+b_{2,5-})$ when $i \geq 40$, by
evaluating at $i=40$. Therefore,
\begin{align*}
&b_1 + C_i^{1/2}(0) b_0 + C_i^{1/2}(t_{1,3}) b_{1,3} + C_i^{1/2}(t_{2,5}) b_{2,5}
+ C_i^{1/2}(-t_{2,5}) b_{2,5-}\\
\geq&~ b_1 - (b_0+b_{1,3}+b_{2,5}+b_{2,5-})\max_{\theta \in A}\{ | C_i^{1/2}(\cos{\theta}) | \}\\
\geq&~0
\end{align*}
when $i \geq 40$. It now suffices to check that $b$ satisfies the constraints
\eqref{eq:constraints} for $i=0,1,\dots,39$. This can also be accomplished using
rational arithmetic only.
\end{proof}
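The bookkeeping for the explicit dual solution $b$ can be reproduced with exact rational arithmetic; the following Python sketch checks the first dual constraint and the objective value:

```python
from fractions import Fraction as F

# The explicit dual solution from the proof, with entries scaled by 10^6.
b1, b0, b13, b25, b25m = (F(v, 10**6) for v in (128614, 404413, 36149, 103647, 327177))

# First dual constraint: b1 + b0 + b13 + b25 + b25m >= 1
# (in fact it holds with equality, since the entries sum to 10^6).
assert b1 + b0 + b13 + b25 + b25m == 1

# Objective value b1 + (1/3) b13 + (2/5) b25 + (2/5) b25m is strictly below 0.313.
obj = b1 + F(1, 3) * b13 + F(2, 5) * (b25 + b25m)
assert obj < F(313, 1000)
```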
The rational arithmetic calculations required in the above proof were carried out with \emph{Mathematica}.
When verifying the upper bound for the right-hand side of \eqref{eq:gegenbauer-upper-bound},
it is helpful to recall the identity $\Gamma(i+1/2) = (i-1/2)(i-3/2) \cdots (1/2) \sqrt{\pi}$.
When verifying the constraints \eqref{eq:constraints} for $i=0,1,\dots,39$, it can be
helpful to observe that $t_{1,3}$ and $\pm t_{2,5}$ are roots of the polynomials
$x^2 - 1/3$ and $x^4-1/5$ respectively; this can be used to cut down the degree
of the polynomials $C_i^{1/2}(x)$ to at most $3$ before evaluating them. The ancillary folder
of the \texttt{arxiv.org} version of this paper contains a \emph{Mathematica} notebook that
verifies all calculations.
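As an additional floating-point sanity check (not a substitute for the rational-arithmetic verification described above), one can evaluate the right-hand side of \eqref{eq:gegenbauer-upper-bound} at $i=40$ and $\sin\theta = \gamma$, using \texttt{lgamma} for the Gamma-function ratios, and compare with $128614/871386$:

```python
import math

def szego_bound(i, sin_theta):
    # Right-hand side of the bound on |C_i^{1/2}(cos theta)| from Szego's book.
    r1 = math.exp(math.lgamma(i + 1) - math.lgamma(i + 1.5))  # Gamma(i+1)/Gamma(i+3/2)
    r2 = math.exp(math.lgamma(i + 1) - math.lgamma(i + 2.5))  # Gamma(i+1)/Gamma(i+5/2)
    return (math.sqrt(2) / (math.sqrt(math.pi) * math.sqrt(sin_theta)) * r1
            + r2 / (math.sqrt(math.pi) * 2 ** 1.5 * sin_theta ** 1.5))

gamma = 2 / math.sqrt(5 + math.sqrt(5))   # smallest value of sin(theta) over the set A
threshold = 128614 / 871386               # b_1 / (b_0 + b_{1,3} + b_{2,5} + b_{2,5-})
assert szego_bound(40, gamma) < threshold
```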
The combinatorial inequalities
of the form \eqref{eq:combIneq} we chose to include in the strengthened linear program
\eqref{eq:lpstrong} were found as follows: Let $L_0$ denote the linear program \eqref{eq:lpweak}.
We first find an optimal solution $\sigma_0$ to $L_0$.
We then proceed recursively; having defined the linear program $L_{i-1}$ and found
an optimal solution $\sigma_{i-1}$,
we search through the inequalities \eqref{eq:combIneq} until
one is found which is violated by $\sigma_{i-1}$, and we strengthen $L_{i-1}$ with
that inequality to produce $L_i$. At each stage, an optimal solution to $L_i$
is found by first solving the dual minimisation problem, and then applying
the complementary slackness theorem from linear programming
to reduce $L_i$ to a linear programming
maximisation problem with just a finite number of variables.
Adding more inequalities of the form \eqref{eq:combIneq} appears
to give no improvement on the upper bound.
Also adding the constraints $\sum_{i=0}^\infty x_i C_i^{1/2}(t) \geq 0$
for $-1\leq t \leq 1$ appears to give no improvement.
A small (basically insignificant)
improvement can be achieved by allowing the odd cycles to embed into $S^2$ in
more general ways, for instance with the points lying on two different latitudes rather
than just one.
\section{Adjacency operator}\label{sec:adj}
Let $n \geq 3$. For $\xi \in S^{n-1}$ and $-1 < t < 1$, we use the notations $\xi^t$
and $\sigma_{\xi,t}$ from Section \ref{sec:lp+comb}.
For $f \in L^2(S^{n-1})$ define $A_tf: S^{n-1}\to\I R$ by
\begin{align}\label{eq:transOperator}
(A_t f)(\xi) := \int_{\xi^t} f(\eta)\,\mathrm{d}\sigma_{\xi,t}(\eta),\quad \xi\in S^{n-1}.
\end{align}
Here we establish some basic properties of $A_t$ which will be helpful later.
The operator $A_t$ can be thought of as an adjacency operator for the graph
with vertex set $S^{n-1}$, in which we join two points with an edge
when their inner product is $t$.
Adjacency operators for infinite graphs are explored in greater detail and generality in
\cite{bachoc13}.
\begin{lemma}\label{lm:adjacency}
For every $t \in (-1,1)$, $A_t$ is a bounded linear operator from $L^2(S^{n-1})$ to
$L^2(S^{n-1})$ having operator norm equal to $1$.
\end{lemma}
\begin{proof}
The right-hand side of \eqref{eq:transOperator} involves integration
over nullsets of a function $f \in L^2(S^{n-1})$ which is only defined almost everywhere,
and so strictly speaking one should argue that \eqref{eq:transOperator} really makes sense.
In other words, given a particular representative~$f$ from its
$L^2$-equivalence class, we need to check that the integral on the right-hand side of
\eqref{eq:transOperator} is defined for almost all $\xi \in S^{n-1}$,
and that the $L^2$-equivalence class of $A_t f$ does not
depend on the particular choice of representative~$f$.
Our main tool will be Minkowski's integral inequality (see e.g.\ \cite[Theorem~6.19]{folland99}).
Let $e_n = (0,\dots,0,1)$ be the $n$-th basis
vector in $\mathbb{R}^n$ and let
\[
S = \{ (x_1, x_2, \dots, x_n) : x_n=0, x_1^2 + \dots + x_{n-1}^2 =1 \}
\]
be a copy of $S^{n-2}$ inside $\mathbb{R}^n$. Considering $f$ as a particular measurable function
(not an $L^2$-equivalence class), we define $F : SO(n) \times S \to \mathbb{R}$ by
\[
F(\rho,\eta)
=f\left (\rho\left(t e_n + \sqrt{1-t^2}\, \eta\right)\right),\qquad \rho \in SO(n),\ \eta\in S.
\]
Let us formally check all the hypotheses of Minkowski's integral inequality applied
to $F$, where $SO(n)$ is equipped with the Haar measure, and where $S$ is
equipped with the normalised Lebesgue measure;
this will show that the function $\tilde{F} : SO(n) \to \mathbb{R}$ defined by
$\tilde{F}(\rho) = \int_S F(\rho, \eta)\,\mathrm{d}\eta$
belongs to $L^2(SO(n))$.
Clearly the function $F$ is measurable.
To see that the function $\rho \mapsto F(\rho, \eta)$ belongs to $L^2(SO(n))$
for each fixed $\eta \in S$, simply note that
\[
\int_{SO(n)} \left| F(\rho,\eta) \right|^2 \,\mathrm{d}\rho
= \int_{SO(n)} \left| f(\rho(te_n + \sqrt{1-t^2}\,\eta)) \right|^2 \,\mathrm{d}\rho
= \|f\|_2^2.
\]
That the function $\eta \mapsto \| F(\cdot, \eta) \|_2$ belongs to $L^1(S)$
then also follows easily (in fact, this function is constant):
\[
\int_S \left( \int_{SO(n)} \left| F(\rho,\eta) \right|^2 \,\mathrm{d}\rho \right)^{1/2} \,\mathrm{d}\eta
= \int_S \|f\|_2 \,\mathrm{d}\eta
= \|f\|_2.
\]
Minkowski's integral inequality now gives that the function
$\eta \mapsto F(\rho, \eta)$ is in $L^1(S)$ for a.e.\ $\rho$,
the function $\tilde{F}$ is in $L^2(SO(n))$, and its norm can be
bounded as follows:
\begin{align}
\|\tilde{F}\|_2 &=
\left( \int_{SO(n)} \left| \int_S F(\rho,\eta) \,\mathrm{d}\eta \right|^2
\,\mathrm{d}\rho\right)^{1/2}\nonumber\\
&\leq \int_{S} \left( \int_{SO(n)} |F(\rho,\eta)|^2 \,\mathrm{d}\rho \right)^{1/2} \,\mathrm{d}\eta
= \|f\|_2.\label{eq:FNormBound}
\end{align}
Applying \eqref{eq:FNormBound} to $f-g$ where $g$ is a.e.\ equal to $f$,
we conclude that the $L^2$-equivalence class of $\tilde{F}$ does
not depend on the particular choice of representative $f$ from its equivalence class.
Now $(A_t f)(\xi)$ is simply $\tilde{F}(\rho)$, where $\rho \in SO(n)$
can be any rotation such that $\rho e_n = \xi$. This shows that the
integral in \eqref{eq:transOperator} makes sense for almost all $\xi \in S^{n-1}$.
We have $\|A_t\| \leq 1$
since for any $f \in L^2(S^{n-1})$,
\begin{align*}
\| A_t f \|_2 = \left( \int_{S^{n-1}} \left| (A_t f)(\xi) \right|^2 \,\mathrm{d}\xi \right)^{1/2}
&= \left( \int_{SO(n)} \left| (A_t f)(\rho e_n) \right|^2 \,\mathrm{d}\rho \right)^{1/2}\\
&= \left( \int_{SO(n)} \left| \tilde{F}(\rho) \right|^2 \,\mathrm{d}\rho \right)^{1/2}
\leq \| f \|_2,
\end{align*}
by \eqref{eq:FNormBound}.
Finally, applying $A_t$
to the constant function $1$ shows that $\|A_t\| = 1$.\qedhere
\end{proof}
\begin{lemma}\label{lm:twoPoint}
Let $f$ and $g$ be functions in $L^2(S^{n-1})$, let $\xi, \eta \in S^{n-1}$
be arbitrary points, and write $t = \langle \xi, \eta \rangle$.
If $\bm{O} \in SO(n)$ is chosen uniformly at random with respect to the Haar measure
on $SO(n)$, then
\begin{align}\label{eq:k-t}
\int_{S^{n-1}} f(\zeta) (A_t g)(\zeta) \,\mathrm{d}\zeta = \mathbb{E}[ f(\bm{O}\xi) g(\bm{O}\eta) ],
\end{align}
which is exactly the definition of $k(t)$ from \eqref{eq:correlation}.
\end{lemma}
\begin{proof}
We have
\begin{align*}
\int_{S^{n-1}} f(\zeta) (A_t g)(\zeta)\,\mathrm{d}\zeta &=
\int_{SO(n)} f(O\xi) (A_t g)(O\xi)\,\mathrm{d}O\\
&= \int_{SO(n)} f(O\xi) \int_{(O \xi)^{t}} g(\psi)
\,\mathrm{d}\sigma_{O\xi, t}(\psi)\,\mathrm{d}O.
\end{align*}
If $H$ is the subgroup of all elements in $SO(n)$ which fix $\xi$, then the
above integral can be rewritten
\begin{align*}
\int_{SO(n)} f(O\xi) \int_H g(Oh\eta)
\,\mathrm{d}h \,\mathrm{d}O.
\end{align*}
By Fubini's theorem, this integral is equal to
\begin{align*}
&\int_H \int_{SO(n)} f(O\xi) g(Oh\eta) \,\mathrm{d}O \,\mathrm{d}h\\
=& \int_H \int_{SO(n)} f(O h^{-1} \xi) g(O\eta) \,\mathrm{d}O \,\mathrm{d}h\\
=& \int_{SO(n)} f(O \xi) g(O\eta) \,\mathrm{d}O,
\end{align*}
where we use the right-translation invariance of the Haar integral on $SO(n)$ at the first equality,
and the second equality follows by noting that the integrand is constant
with respect to $h$, since $h^{-1}\xi = \xi$ for every $h \in H$.
\end{proof}
\begin{lemma}\label{lm:SelfAdj} For every $t\in (-1,1)$, the operator $A_t:L^2(S^{n-1})\to L^2(S^{n-1})$ is self-adjoint.
\end{lemma}
\begin{proof}
Fix $\xi,\eta \in S^{n-1}$ that satisfy
$\langle \xi, \eta \rangle = t$. Lemma~\ref{lm:twoPoint} implies that for any $f,g\in L^2(S^{n-1})$,
we have
\[
\langle A_t f, g \rangle = \mathbb{E}_{\bm{O} \in SO(n)}[ f(\bm{O}\xi) g(\bm{O} \eta) ]
= \langle f, A_t g \rangle,
\]
as required.\end{proof}
\section{Existence of a measurable maximum independent set}\label{sec:max}
Let $n \geq 2$ and $X \subset [-1,1]$.
From Theorem~\ref{thm:irCircle} we know that the supremum $\alpha_X(n)$
is sometimes attained as a maximum, and sometimes not.
It is therefore interesting to ask when a maximizer exists. The main positive
result in this direction is Theorem~\ref{thm:attainment}, which says
that a largest measurable $X$-avoiding set always exists when $n \geq 3$.
Remarkably, this result holds under no additional restrictions (not even Lebesgue measurability)
on the set $X$ of forbidden inner products.
Before arriving at this theorem, we shall need to establish a number of technical results.
For the remainder of this section we suppose $n \geq 3$.
For $d \geq 0$, let $\mathrsfs{H}_d^n$ be the vector space of homogeneous
polynomials $p(x_1,\dots,x_n)$ of degree $d$ in $n$ variables belonging to the kernel of the Laplace operator; that is
\[
\frac{\partial^2 p}{\partial x_1^2} + \cdots + \frac{\partial^2 p}{\partial x_n^2} = 0.
\]
Note that each $\mathrsfs{H}_d^n$ is finite-dimensional.
The restrictions of the elements of $\mathrsfs{H}_d^n$ to the surface of the unit
sphere are called the \emph{spherical harmonics}. For fixed $n$, we
have $L^2(S^{n-1}) = \oplus_{d=0}^\infty \mathrsfs{H}_d^n$
(\cite[Theorem~2.2.2]{dai+xu:athasb}); that is, each function in $L^2(S^{n-1})$ can be
written uniquely as an infinite sum of elements from $\mathrsfs{H}_d^n$, $d=0,1,2,\dots$,
with convergence in the $L^2$-norm.
Recall the definition \eqref{eq:transOperator} of the adjacency operator from
Section \ref{sec:adj}:
\[
(A_t f)(\xi) := \int_{\xi^t} f(\eta) \,\mathrm{d}\sigma_{\xi,t}(\eta),\quad f\in L^2(S^{n-1}).
\]
The next lemma states that each spherical harmonic is an eigenfunction
of the operator $A_t$. It extends the Funk-Hecke formula (\cite[Theorem~1.2.9]{dai+xu:athasb})
to Dirac measures and yields the eigenvalues of $A_t$ explicitly.
The proof relies on the fact that integral kernel operators $K$
having the form $(Kf)(\xi) = \int f(\zeta) k(\langle \zeta, \xi \rangle) \,\mathrm{d}\zeta$
for some function $k : [-1,1] \to \mathbb{R}$
are diagonalised by the spherical harmonics, and moreover that the eigenvalue
of a specific spherical harmonic depends only on its degree.
\begin{proposition}\label{pr:adjEigs}
Let $t \in (-1,1)$. Then for every spherical harmonic $Y_d$ of degree~$d$,
\[
(A_t Y_d)(\xi) = \int_{\xi^t} Y_d(\eta) \,\mathrm{d}\sigma_{\xi,t}(\eta)
= \mu_d(t) Y_d(\xi), ~~\xi \in S^{n-1},
\]
where $\mu_d(t)$ is the constant
\[
\mu_d(t) = C_d^{(n-2)/2}(t)
(1-t^2)^{(n-3)/2}.
\]
\end{proposition}
\begin{proof}
Let $\,\mathrm{d}s$ be the Lebesgue measure on $[-1,1]$ and
let $\{ f_\alpha \}_\alpha$ be a net of functions in $L^1([-1,1])$
such that $\{ f_\alpha \,\mathrm{d}s \}$ converges to the Dirac point
mass $\delta_t$ at $t$ in the weak-* topology on the set of Borel measures on $[-1,1]$.
By Theorem~1.2.9 in \cite{dai+xu:athasb}, we have
\[
\int_{S^{n-1}} Y_d(\eta) f_\alpha(\langle \xi, \eta \rangle) \,\mathrm{d}\eta = \mu_{d,\alpha} Y_d(\xi),
\]
where
\[
\mu_{d,\alpha} = \int_{-1}^1 C_d^{(n-2)/2}(s) (1-s^2)^{(n-3)/2} f_\alpha(s) \,\mathrm{d}s.
\]
Letting $f_\alpha \,\mathrm{d}s$ tend to $\delta_t$, the left-hand side converges to $(A_t Y_d)(\xi)$ while $\mu_{d,\alpha}$ converges to $C_d^{(n-2)/2}(t)(1-t^2)^{(n-3)/2}$, which finishes the proof.
\end{proof}
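For $n=3$, where $C_d^{1/2}$ (normalized so that $C_d^{1/2}(1)=1$) is the Legendre polynomial $P_d$, the proposition can be illustrated numerically: averaging the degree-$2$ spherical harmonic $z^2 - 1/3$ over the circle $\xi^t$ with $\xi = e_3$ should give $P_2(t)$ times its value at $e_3$. A small expository sketch:

```python
import math

def Y2(x, y, z):
    # Restriction to S^2 of the harmonic polynomial z^2 - (x^2 + y^2 + z^2)/3.
    return z * z - 1.0 / 3.0

def average_over_circle(t, m=20000):
    # (A_t Y2)(e_3): average of Y2 over the circle {eta : <eta, e_3> = t},
    # discretised at m equally spaced points.
    r = math.sqrt(1 - t * t)
    total = 0.0
    for j in range(m):
        phi = 2 * math.pi * j / m
        total += Y2(r * math.cos(phi), r * math.sin(phi), t)
    return total / m

t = 0.4
mu2 = (3 * t * t - 1) / 2            # P_2(t), the predicted eigenvalue for d = 2
assert abs(average_over_circle(t) - mu2 * Y2(0, 0, 1)) < 1e-9
```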
The next lemma is a general fact about weakly convergent sequences in
a Hilbert space.
\begin{lemma}\label{lm:weakCompact}
Let $\mathcal{H}$ be a Hilbert space and let $K : \mathcal{H} \to \mathcal{H}$
be a compact operator. Suppose $\{ x_i \}_{i=1}^\infty$ is a sequence in
$\mathcal{H}$ converging weakly to $x \in \mathcal{H}$. Then
\[
\lim_{i \to \infty} \langle K x_i, x_i \rangle = \langle K x, x \rangle.
\]
\end{lemma}
\begin{proof}
Let $C$ be the maximum of $\| x \|$ and $\sup_{i \geq 1} \| x_i \| $,
which is finite by the Principle of Uniform Boundedness.
Let $\{ K_m \}_{m=1}^\infty$ be a sequence of finite rank operators such that
$K_m \to K$ in the operator norm as $m \to \infty$. Clearly
\[
\lim_{i \to \infty} \langle K_m x_i, x_i \rangle = \langle K_m x,x \rangle
\]
for each $m=1,2,\dots$\ . Let $\varepsilon > 0$ be given and choose $m_0$ so that
$\| K - K_{m_0} \| < \varepsilon/(3C^2)$. Choosing $i_0$ so that
$| \langle K_{m_0} x_i, x_i \rangle - \langle K_{m_0} x, x \rangle | < \varepsilon/3$
whenever $i \geq i_0$, we have
\begin{align*}
& | \langle K x_i, x_i \rangle - \langle K x, x \rangle | \\
\leq& | \langle K x_i, x_i \rangle - \langle K_{m_0} x_i, x_i \rangle |
+ | \langle K_{m_0} x_i, x_i \rangle - \langle K_{m_0} x, x \rangle |
+ | \langle K_{m_0} x, x \rangle - \langle Kx, x \rangle | \\
\leq& \| K - K_{m_0} \| C^2 + \varepsilon/3 + \| K - K_{m_0} \| C^2 \ <\ \varepsilon,
\end{align*}
and the lemma follows.
\end{proof}
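A finite-dimensional illustration of the lemma (ours, purely expository): in a truncation of $\ell^2$, the vectors $x_i = x + e_i$ converge weakly to $x$, and for the compact diagonal operator $K = \operatorname{diag}(1, 1/2, 1/3, \dots)$ the quadratic forms $\langle K x_i, x_i \rangle$ converge to $\langle K x, x \rangle$:

```python
# Truncate l^2 to N coordinates. x_i = x + e_i converges weakly to x,
# and K = diag(1, 1/2, 1/3, ...) is compact since its eigenvalues tend to 0.
N = 2000
x = [1.0 / (k + 1) for k in range(N)]
kdiag = [1.0 / (k + 1) for k in range(N)]
qf_x = sum(kdiag[k] * x[k] * x[k] for k in range(N))   # <K x, x>

def qf(i):
    # <K (x + e_i), x + e_i> = <K x, x> + 2 * kdiag[i] * x[i] + kdiag[i]
    return qf_x + 2 * kdiag[i] * x[i] + kdiag[i]

# The perturbation 2 * kdiag[i] * x[i] + kdiag[i] tends to 0 as i grows,
# so the quadratic forms converge to <K x, x>, as the lemma predicts.
assert abs(qf(N - 1) - qf_x) < 1e-2
assert abs(qf(N - 1) - qf_x) < abs(qf(10) - qf_x)
```

For the (non-compact) identity operator the analogous quadratic forms $\langle x_i, x_i \rangle$ stay bounded away from $\langle x, x \rangle$, so compactness cannot be dropped.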
The next corollary is also a result stated in \cite{bachoc13}.
\begin{corollary}\label{cor:compact}
If $n \geq 3$ and $t \in (-1,1)$, then $A_t$ is compact.
\end{corollary}
\begin{proof}
The operator $A_t$ is diagonalisable by Proposition~\ref{pr:adjEigs},
since the spherical harmonics
form an orthonormal basis for $L^2(S^{n-1})$. It therefore suffices
to show that its eigenvalues cluster only at $0$.
By Theorem~8.21.8 of \cite{szego92} and Proposition~\ref{pr:adjEigs}, the eigenvalues
$\mu_d(t)$ tend to zero as $d \to \infty$. The eigenspace corresponding
to the eigenvalue $\mu_d(t)$ is precisely the vector space of
spherical harmonics of degree $d$, which is finite dimensional.
Therefore $A_t$ is compact.
\end{proof}
For each $\xi \in S^{n-1}$, let $C_h(\xi)$ be the open spherical cap
of height $h$ in $S^{n-1}$ centred at $\xi$. Recall that $C_h(\xi)$
has volume proportional to $\int_{1-h}^1 (1-t^2)^{(n-3)/2} \,\mathrm{d}t$.
\begin{lemma}\label{lm:caps}
For each $\xi \in S^{n-1}$, we have $\lambda(C_h(\xi)) = \Theta(h^{(n-1)/2})$, and\\
$\lambda(C_{h/2}(\xi)) \geq \lambda(C_{h}(\xi))/2^{(n-1)/2} - o(h^{(n-1)/2})$
as $h \to 0^+$.
\end{lemma}
\begin{proof}
If $f(h) = \int_{1-h}^1 (1-t^2)^{(n-3)/2} \,\mathrm{d}t$, then
$\frac{df}{dh}(h) = (2h-h^2)^{(n-3)/2} = (2h)^{(n-3)/2}(1-h/2)^{(n-3)/2}$.
Since $f(0) = 0$, integrating gives
$f(h) = \frac{2^{(n-1)/2}}{n-1}\, h^{(n-1)/2}(1+O(h))$ as $h \to 0^+$,
which proves the first claim.
For the second, note that the leading coefficient
of $f(h)$ is $2^{(n-1)/2}$
times that of $f(h/2)$, viewed as functions of $h$.
\end{proof}
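The scaling claims in Lemma~\ref{lm:caps} can be checked numerically by evaluating $f(h) = \int_{1-h}^1 (1-t^2)^{(n-3)/2}\,\mathrm{d}t$ with a simple midpoint rule for small $h$; a sketch (tolerances chosen ad hoc):

```python
import math

def f(h, n, m=20000):
    # Midpoint-rule value of int_{1-h}^1 (1 - t^2)^((n-3)/2) dt.
    s = 0.0
    for j in range(m):
        t = 1 - h + (j + 0.5) * h / m
        s += (1 - t * t) ** ((n - 3) / 2)
    return s * h / m

# lambda(C_h) is proportional to f(h), and f(h) = Theta(h^{(n-1)/2}):
# halving h should divide f by roughly 2^{(n-1)/2} when h is small.
for n in [3, 4, 5]:
    h = 1e-4
    ratio = f(h, n) / f(h / 2, n)
    assert abs(ratio - 2 ** ((n - 1) / 2)) < 0.01
```

For $n=3$ the integrand is constant, so $f(h)=h$ and the ratio is exactly $2$; for larger $n$ the ratio approaches $2^{(n-1)/2}$ as $h \to 0^+$.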
\begin{lemma}\label{lm:density}
Suppose $n \geq 3$ and
let $I \subset S^{n-1}$ be a Lebesgue measurable set with $\lambda(I) > 0$.
Define $k:[-1,1] \to \mathbb{R}$ by
\[
k(t) := \int_{S^{n-1}} \mathbbm{1}_I(\zeta) (A_t \mathbbm{1}_I)(\zeta) \,\mathrm{d}\zeta,
\]
which by Lemma~\ref{lm:twoPoint} is the same as Definition \eqref{eq:correlation}
applied with $f=g=\mathbbm{1}_I$.
If $\xi_1, \xi_2 \in S^{n-1}$ are Lebesgue density points of $I$,
then $k(\langle \xi_1, \xi_2 \rangle) > 0$.
\end{lemma}
\begin{proof}
Let $t = \langle \xi_1, \xi_2 \rangle$.
If $t = 1$, then the conclusion holds since $k(1)=\lambda(I)>0$.
If $t=-1$, then $\xi_2 = -\xi_1$, and by the Lebesgue density theorem
we can choose $h>0$ small enough that
$\lambda(C_h(\xi_i) \cap I) > \frac{2}{3} \lambda(C_h(\xi_i))$ for $i=1,2$.
By Lemma~\ref{lm:twoPoint} we have
\begin{align*}
k(-1) &= \mathbb{E}[\mathbbm{1}_I(\bm{O}\xi_1) \mathbbm{1}_I(\bm{O}(-\xi_1))]\\
&\geq \mathbb{E}[\mathbbm{1}_{I \cap C_h(\xi_1) }(\bm{O}\xi_1) \mathbbm{1}_{I \cap C_h(\xi_2) }(\bm{O}(-\xi_1))]
\geq \frac{1}{3}\lambda(C_h(\xi_1)).
\end{align*}
From now on we may therefore assume $-1 < t < 1$.
Let $h>0$ be a small number which
will be determined later. Suppose $x \in C_h(\xi_1)$. The intersection
$x^t \cap C_h(\xi_2)$ is a spherical cap in the $(n-2)$-dimensional sphere
$x^t$ having height proportional to $h$; this is because $C_h(\xi_2)$ is
the intersection of $S^{n-1}$ with a certain halfspace $H$, and
$x^t \cap C_h(\xi_2) = x^t \cap H$.
We have $\sigma_{x,t}(x^t \cap C_h(\xi_2)) = O(h^{(n-2)/2})$ by
Lemma~\ref{lm:caps}, and it follows
that there exists $D>0$ such that $\sigma_{x,t}(x^t \cap C_h(\xi_2)) \leq Dh^{(n-2)/2}$
for sufficiently small $h>0$.
If $x \in C_{h/2}(\xi_1)$, then $x^t \cap C_{h/2}(\xi_2) \neq \emptyset$,
since $x^t$ is just a rotation of the subsphere $\xi_1^t$ through an
angle equal to the angle between $x$ and $\xi_1$.
Therefore $x^t \cap C_h(\xi_2)$ is a spherical cap
in $x^t$ having height at least $h/2$.
Thus there exists $D' > 0$ such that
$\sigma_{x,t}(x^t \cap C_h(\xi_2)) \geq D' h^{(n-2)/2}$ for all $x \in C_{h/2}(\xi_1)$,
by Lemma~\ref{lm:caps}.
Now choose $h>0$ small enough that
$\lambda(C_h(\xi_i) \cap I) \geq (1 - \frac{D'}{2^n D}) \lambda(C_h(\xi_i))$ for $i=1,2$;
this is possible by the Lebesgue density theorem
since $\xi_1$ and $\xi_2$ are density points.
We have by Lemma~\ref{lm:twoPoint} that
\[
k(t) = \mathbb{P}[\bm{\eta}_1 \in I, \bm{\eta}_2 \in I],
\]
if $\bm{\eta}_1$ is chosen uniformly at random from $S^{n-1}$, and if
$\bm{\eta}_2$ is chosen uniformly at random from $\bm{\eta}_1^t$.
Then
\begin{align*}
k(t) &\geq \mathbb{P}[\bm{\eta}_1 \in I \cap C_h(\xi_1), \bm{\eta}_2 \in I \cap C_h(\xi_2)]\\
&\geq \mathbb{P}[\bm{\eta}_1 \in C_h(\xi_1), \bm{\eta}_2 \in C_h(\xi_2)]
-\mathbb{P}[\bm{\eta}_1 \in C_h(\xi_1) \setminus I, \bm{\eta}_2 \in C_h(\xi_2) ]\\
&~~~~~~~~~~~~~~-\mathbb{P}[\bm{\eta}_1 \in C_h(\xi_1), \bm{\eta}_2 \in C_h(\xi_2) \setminus I].
\end{align*}
The first probability is at least
\begin{align*}
D' h^{(n-2)/2} \lambda(C_{h/2}(\xi_1))
\geq \frac{D'}{2^{(n-1)/2}} h^{(n-2)/2} \lambda(C_h(\xi_1)) - o(h^{(2n-3)/2})
\end{align*}
by Lemma~\ref{lm:caps}. The second and third probabilities are each no more than
$$
\frac{D'}{2^n D} \lambda(C_h(\xi_1)) D h^{(n-2)/2}
= \frac{D'}{2^n} \lambda(C_h(\xi_1)) h^{(n-2)/2}
$$ for
sufficiently small $h>0$, and therefore by the first part
of Lemma~\ref{lm:caps},
\[
k(t) \geq \frac{D'}{2^{(n-1)/2}} \lambda(C_h(\xi_1)) h^{(n-2)/2} - o(h^{(2n-3)/2})
- \frac{D'}{2^{n-1}} \lambda(C_h(\xi_1)) h^{(n-2)/2},
\]
and this is strictly positive for sufficiently small $h>0$.
\end{proof}
We are now in a position to prove the second main result of this paper.
\begin{theorem}\label{thm:attainment}
Suppose $n \geq 3$ and let $X$ be any subset of $[-1,1]$.
Then there exists an $X$-avoiding set $I \in \mathcal{L}$
such that $\lambda(I) = \alpha_X(n)$.
\end{theorem}
\begin{proof}
We may suppose that $1 \not\in X$ for otherwise every $X$-avoiding set
is empty and the theorem holds with $I=\emptyset$.
Let $\{ I_i \}_{i=1}^\infty$ be a sequence of measurable $X$-avoiding sets
such that $\lim_{i \to \infty} \lambda(I_i) = \alpha_X(n)$. Passing to a
subsequence if necessary, we may suppose that the sequence
$\{ \mathbbm{1}_{I_i} \}$ of characteristic functions converges weakly in
$L^2(S^{n-1})$; let $h$ be its limit. Then $0 \leq h \leq 1$ almost everywhere
since $0 \leq \mathbbm{1}_{I_i} \leq 1$ for every $i$.
Denote by $I'$ the set $h^{-1}((0,1])$, and let $I$ be the set of Lebesgue
density points of $I'$. We claim that $I$ is $X$-avoiding.
For all $t \in X \setminus \{ -1 \}$,
the operator $A_t : L^2(S^{n-1}) \to L^2(S^{n-1})$ is self-adjoint and compact
by Lemma~\ref{lm:SelfAdj} and Corollary~\ref{cor:compact}.
Since $\langle A_t \mathbbm{1}_{I_i}, \mathbbm{1}_{I_i} \rangle = 0$ for each $i$,
Lemma~\ref{lm:weakCompact} implies $\langle A_t h, h \rangle = 0$.
Since $h \geq 0$, it follows from the definition of $A_t$ that
$\langle A_t \mathbbm{1}_{I'}, \mathbbm{1}_{I'} \rangle = 0$,
and therefore also that $\langle A_t \mathbbm{1}_{I}, \mathbbm{1}_{I} \rangle = 0$.
But if there exist points $\xi, \eta \in I$ with
$t_0 = \langle \xi, \eta \rangle \in X \setminus \{ -1 \}$, then
$\langle A_{t_0} \mathbbm{1}_{I}, \mathbbm{1}_{I} \rangle > 0$ by Lemma~\ref{lm:density}.
Thus, in order to show that $I$ is $X$-avoiding, it remains to derive a contradiction from assuming that $-1\in X$ and $-\xi,\xi\in I$ for some $\xi\in S^{n-1}$. Since $\xi$ and $-\xi$ are Lebesgue density points of $I$, there is a spherical cap $C$ centred at $\xi$
such that $\lambda(I \cap C) > \frac{2}{3} \lambda(C)$ and
$\lambda(I \cap (-C)) > \frac{2}{3} \lambda(C)$. The same applies to $I_i$ for all large $i$
(since a cap
is a continuity set). But this contradicts the fact that
$I_i$ and its reflection $-I_i$ are disjoint for every $i$. Thus $I$ is $X$-avoiding.
Finally, we have
\begin{align*}
\lambda(I) &= \lambda(I')
\geq \langle \mathbbm{1}_{S^{n-1}}, h \rangle
= \lim_{i \to \infty} \langle \mathbbm{1}_{S^{n-1}}, \mathbbm{1}_{I_i} \rangle
= \lim_{i \to \infty} \lambda(I_i)
= \alpha_X(n),
\end{align*}
whence $\lambda(I) = \alpha_X(n)$ since $\lambda(I) \leq \alpha_X(n)$.
\end{proof}
\section{Invariance of $\alpha_X(n)$ under taking the closure of $X$}\label{sec:closure}
Again let $n \geq 2$ and $X \subset [-1,1]$. We will use $\overline{X}$ to
denote the topological closure of $X$ in $[-1,1]$.
In general it is false that $X$-avoiding sets are $\overline{X}$-avoiding.
In spite of this, we have the following result.
\begin{theorem}\label{thm:closureIR}
Let $X$ be an arbitrary subset of $[-1,1]$.
Then $\alpha_X(n) = \alpha_{\overline{X}}(n)$.
In particular $\alpha_X(n) = 0$ if $1 \in \overline{X}$.
\end{theorem}
\begin{proof}
Clearly $\alpha_X(n) \geq \alpha_{\overline{X}}(n)$.
For the reverse inequality,
let $I' \subset S^{n-1}$ be any measurable $X$-avoiding set. Let
$I \subset I'$ be the set of Lebesgue density points of $I'$,
and define $k:[-1,1] \to \mathbb{R}$ by
$k(t) = \int_{S^{n-1}} \mathbbm{1}_I(\zeta) (A_t \mathbbm{1}_I)(\zeta) \,\mathrm{d}\zeta$.
Then $k$ is continuous by Lemma~\ref{lm:pt} and Lemma~\ref{lm:twoPoint},
and since $k(t) = 0$ for every
$t \in X$, it follows that $k(t) = 0$ for every $t \in \overline{X}$.
Lemma~\ref{lm:density} now implies that $I$ is $\overline{X}$-avoiding.
The theorem now follows since $I'$ was arbitrary, and $\lambda(I) = \lambda(I')$
by the Lebesgue density theorem.
\end{proof}
\section{Single forbidden inner product}\label{sec:single}
An interesting case to consider is when $|X|=1$, motivated by the fact that $1/\alpha_{\{t\}}(n)$ is
a lower bound on the measurable chromatic number of $\I R^n$ for any $t\in(-1,1)$ and this freedom
of choosing $t$ may lead to better bounds.
Let us restrict ourselves to the special case when $n=3$ (that is, we look at the 2-dimensional sphere).
For a range of $t\in [-1,\cos \frac{2\pi}5]$, the best construction that we could find consists of one or two spherical caps as follows. Given $t$, let $h$ be the maximum height of an open spherical cap which is $\{t\}$-avoiding. A simple calculation shows that $h=1-\sqrt{(t+1)/2}$. If $t\le -1/2$, then we just
take a single cap $C$ of height $h$, which gives that $\alpha_{\{t\}}(3)\ge h/2$ then. When $-1/2< t\le 0$,
we can add another cap $C'$ whose centre is opposite to that of $C$. When $t$ reaches $0$,
the caps $C$ and $C'$ have the same height (and we get the two-cap construction from Kalai's conjecture).
When $0<t\le \cos\frac{2\pi}5$, we can form a $\{t\}$-avoiding set by taking two caps of the same height $h$.
(Note that the last construction cannot be optimal for $t>\cos\frac{2\pi}5$, as then the two caps can be arranged
so that a set of positive measure can be added; see the third picture of Figure~\ref{fg:1}.)
\begin{figure}
\begin{center}
\includegraphics[height=4cm]{C3.pdf}\hspace{1cm}
\includegraphics[height=4cm]{C4.pdf}\hspace{1cm}
\includegraphics[height=4cm]{C5.pdf}
\end{center}
\caption{$\{t\}$-Avoiding set for $t=-\frac12$, $0$ and $\cos\frac{2\pi}5$}
\label{fg:1}
\end{figure}
Calculations show that the above construction gives the following lower bound (where $h=1-\sqrt{(t+1)/2}$):
\begin{equation}\label{eq:lower}
\alpha_{\{t\}}(3)\ge \left\{\begin{array}{ll}
\frac h2,& -1\le t\le -\frac12,\\
h+t-ht,& -\frac12\le t\le 0,\\
h,& 0\le t\le \cos \frac{2\pi}5.
\end{array}\right.
\end{equation}
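The bound~\eqref{eq:lower} is elementary to evaluate. The following minimal Python sketch (with hypothetical helper names of our choosing) computes the cap height $h=1-\sqrt{(t+1)/2}$ and the resulting piecewise lower bound; note that the three branches agree at the breakpoints $t=-1/2$ and $t=0$, so the bound is continuous.

```python
import math

def cap_height(t):
    # Maximum height h of an open spherical cap on S^2 that is {t}-avoiding.
    return 1.0 - math.sqrt((t + 1.0) / 2.0)

def lower_bound(t):
    # Piecewise lower bound on alpha_{t}(3) from the one- and two-cap constructions.
    h = cap_height(t)
    if -1.0 <= t <= -0.5:
        return h / 2.0            # single cap of height h
    if -0.5 < t <= 0.0:
        return h + t - h * t      # cap of height h plus an opposite cap
    if 0.0 < t <= math.cos(2.0 * math.pi / 5.0):
        return h                  # two caps of the same height h
    raise ValueError("construction only covers -1 <= t <= cos(2*pi/5)")
```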
We conjecture that the bounds in (\ref{eq:lower}) are all equalities. In particular, our conjecture states that,
for $t\le -1/2$, we can strengthen Levy's isodiametric inequality by forbidding a single inner product $t$ instead of the whole interval $[-1,t]$.
As in Section~\ref{sec:lp+comb}, one can write an infinite linear program that gives an upper bound on $\alpha_{\{t\}}(3)$. Although our numerical experiments indicate that the upper bound
given by the LP exceeds the lower bound in (\ref{eq:lower}) by at most $0.062$ for all $-1\le t\le 0.3$, we were not able to
determine the exact value of $\alpha_{\{t\}}(3)$ for any single $t\in(0,\cos \frac{2\pi}5]$.
\section*{Acknowledgements}
Both authors acknowledge Anusch Taraz and his research group
for their hospitality during the summer
of 2013. The first author would like to thank his thesis advisor Frank Vallentin
for careful proofreading, and for pointing him to the Witsenhausen problem \cite{witsenhausen74}.
\bibliographystyle{alpha}
\section{Formulation of the problem}
\noindent{\bf Correlation-based ``and"-operation.} In many practical situations, we know the probabilities $a$ and $b$ of two events $A$ and $B$, and we need to estimate the joint probability ${\rm Prob}(A\,\&\,B)$. An algorithm $f_\&(a,b)$ that transforms the known values $a$ and $b$ into such an estimate is usually called an {\it and-operation}.
One important case when such an estimate is possible is when, in addition to the probabilities $a$ and $b$, we also know the correlation $\rho$ between the corresponding two random events. It is known (see, e.g., \cite{Lucas 1995,Miralles 2022}) that in this case, we can uniquely determine the probability of ${\rm Prob}(A\,\&\,B)$ as
$$a\cdot b+\rho\cdot \sqrt{a\cdot (1-a)\cdot b\cdot (1-b)}.\eqno{(1)}$$
While this formula is true whenever the correlation is known, this formula does not lead to an everywhere defined and-operation. For example, for $a=b=0.1$ and $\rho=-1$, this formula leads to a meaningless negative probability
$$0.1\cdot 0.1+(-1)\cdot \sqrt{0.1\cdot 0.9\cdot 0.1\cdot 0.9}=0.01-0.09=-0.08<0.$$
To avoid such meaningless estimates, we need to take into account that the joint probability ${\rm Prob}(A\,\&\,B)$ must satisfy Fr\'echet inequalities (see, e.g., \cite{Frechet 1935}):
$$\max(a+b-1,0)\le {\rm Prob}(A\,\&\,B)\le\min(a,b).\eqno{(2)}$$
So, if an expert claims to know the correlation $\rho$ and the estimate for ${\rm Prob}(A\,\&\,B)$ based on this value $\rho$ is smaller than the lower bound $\max(a+b-1,0)$ -- which cannot be -- a reasonable idea is to take the closest possible value of the joint probability, i.e., the value $\max(a+b-1,0)$. Similarly, if the estimate for ${\rm Prob}(A\,\&\,B)$ based on the expert-provided value $\rho$ is larger than the upper bound $\min(a,b)$ -- which also cannot be -- a reasonable idea is to take the closest possible value of the joint probability, i.e., the value $\min(a,b)$. Thus, we arrive at the following and-operation -- which we will call {\it correlation-based and-operation}:
$$f_\rho(a,b)=T_{a,b}\left(a\cdot b+\rho\cdot \sqrt{a\cdot (1-a)\cdot b\cdot (1-b)}\right),\eqno{(3)}$$
where
$$T_{a,b}(c)=\max(a+b-1,0)\mbox{ if } c<\max(a+b-1,0);$$
$$T_{a,b}(c)=c \mbox{ if } \max(a+b-1,0)\le c\le\min(a,b);\mbox{ and}\eqno{(4)}$$
$$T_{a,b}(c)=\min(a,b)\mbox{ if } \min(a,b)<c.$$
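The definition (3)-(4) is simply the estimate (1) clamped to the Fr\'echet interval (2). A minimal Python sketch (the function name is ours, not from the cited papers):

```python
import math

def f_rho(a, b, rho):
    # Correlation-based and-operation (3)-(4): the estimate (1),
    # truncated to the Frechet interval [max(a+b-1, 0), min(a, b)].
    c = a * b + rho * math.sqrt(a * (1.0 - a) * b * (1.0 - b))
    lo = max(a + b - 1.0, 0.0)
    hi = min(a, b)
    return min(max(c, lo), hi)
```

For example, for $a=b=0.1$ and $\rho=-1$, the raw estimate (1) is the meaningless $-0.08$, and the truncation returns the lower Fr\'echet bound $0$.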
\medskip
\noindent{\bf Question: is this and-operation a copula?} In probability theory, there is a known class of and-operations known as {\it copulas} (see, e.g., \cite{Nelsen 2007,Schweizer 2011}). These are functions $C(a,b)$ for which, for some random 2-D vector $(X,Y)$, the joint cumulative distribution function $F_{XY}(x,y)\stackrel{\rm def}{=}{\rm Prob}(X\le x\,\&\,Y\le y)$ has the form $F_{XY}(x,y)=C(F_X(x),F_Y(y))$, where $F_X(x)\stackrel{\rm def}{=}{\rm Prob}(X\le x)$ and $F_Y(y)\stackrel{\rm def}{=}{\rm Prob}(Y\le y)$ are known as {\it marginals}.
One important aspect of (3)-(4) is that these formulas can be expressed as a copula (2-copula) family as described in \cite{Miralles 2022}, allowing us to operate not only with precise probabilities, but also with interval probabilities and probability boxes.
A 2-copula must satisfy the following properties:
\begin{enumerate}
\item Grounded: $C(0,b)=C(a,0)=0$
\item Uniform margins: $C(a,1)=a;C(1,b)=b$
\item 2-increasing: $C(\overline a,\overline b)+C(\underline a,\underline b)-C(\overline a,\underline b)-C(\underline a,\overline b)\ge 0$ for all $\underline a<\overline a$ and $\underline b<\overline b$; in what follows, we refer to this inequality as (4a)
\end{enumerate}
It is easy to see that (3)-(4) satisfies the two first properties. In \cite{Miralles 2022} the third property was checked for a dense set of tuples $(\underline a,\overline a,\underline b,\overline b,\rho)$, and for all these tuples, the inequality was satisfied. However, at that moment, we could not prove that the correlation-based and-operation is indeed a 2-copula.
In this paper we provide the missing proof.
\section{Main result}
\noindent{\bf Proposition.} {\it For every $\rho\in[-1,1]$, the correlation and-operation $f_\rho(a,b)$ described by the formulas (3)-(4) is a copula.}
\medskip
\noindent{\bf Proof.}
\medskip
\noindent $1^\circ$. It is known that the desired inequality has the following property -- if we represent a box $[\underline a,\overline a]\times[\underline b,\overline b]$ as a union of several sub-boxes, then the left-hand side of the desired inequality is equal to the sum of the left-hand sides corresponding to sub-boxes.
Indeed, as one can easily check, there is the following {\it additivity} property: for each box consisting of several sub-boxes, the left-hand side of the inequality (4a) that corresponds to the larger box is equal to the sum of expressions (4a) corresponding to sub-boxes. Thus, if the expressions corresponding to sub-boxes are non-negative, then the expression (4a) corresponding to the larger box is also non-negative.
In general, the and-operation described by the formula (4) has three different expressions. So, to prove that the expression (4a) is always non-negative, we need to consider cases when different vertices of the box are described by different expressions. The good news is that every box whose vertices are described by different expressions can be represented as the union of sub-boxes in which:
\begin{itemize}
\item either all vertices are described by the same expression
\item or two vertices are on the boundary between the areas of different expressions.
\end{itemize}
This is easy to see visually: the following box, in which the slanted line represents the boundary between the areas
\medskip
\begin{center}
\begin{picture}(150,50)
\put(0,0){\line(1,0){150}}
\put(0,50){\line(1,0){150}}
\put(0,0){\line(0,1){50}}
\put(150,0){\line(0,1){50}}
\put(50,0){\line(1,1){50}}
\end{picture}
\end{center}
\medskip
\noindent can be represented as the union of sub-boxes with the desired property:
\medskip
\begin{center}
\begin{picture}(150,50)
\put(0,0){\line(1,0){150}}
\put(0,50){\line(1,0){150}}
\put(0,0){\line(0,1){50}}
\put(150,0){\line(0,1){50}}
\put(50,0){\line(1,1){50}}
\put(50,0){\line(0,1){50}}
\put(100,0){\line(0,1){50}}
\end{picture}
\end{center}
\medskip
Thus, to prove that our and-operation is a copula, it is sufficient to consider only boxes of the following type:
\begin{itemize}
\item boxes for which all four vertices belong to the same area, and
\item boxes for which two vertices belong to the boundary between two areas.
\end{itemize}
The functions $\max(a+b-1,0)$ and $\min(a,b)$ are known to be copulas, so if all four vertices belong to one of these areas, then the desired inequality (4a) is satisfied. So, it is sufficient to consider:
\begin{itemize}
\item boxes for which all four vertices belong to the new area, in which the and-operation is described by the expression (1); we will consider such boxes in Parts 2--4 of this proof, and
\item boxes for which two vertices belong to the boundary between two areas; these boxes will be considered in the following Parts of the proof.
\end{itemize}
\medskip
\noindent $2^\circ$. Let us start by considering boxes for which all four vertices belongs to the area in which the and-operation is described by the formula (1).
\medskip
It is known \cite{Durante 2010} -- and it is easy to prove by considering infinitesimal differences $\overline x-\underline x$ and $\overline y-\underline y$ -- that for smooth functions, the desired inequality is equivalent to the fact that the partial derivative $$\frac{\partial C}{\partial a}$$ is non-decreasing in $b$, i.e., equivalently, that the mixed derivative is non-negative: $$d\stackrel{\rm def}{=}\frac{\partial^2 C}{\partial a\,\partial b}\ge 0.$$ Thus, to prove that $f_\rho(a,b)$ is a copula, it is sufficient to prove that its mixed derivative is non-negative everywhere where the new formula is applied.
Indeed, at the points where the formula (1) is applied, the derivative of $f_\rho(a,b)$ with respect to $a$ has the form
$$\frac{\partial f_\rho}{\partial a}=b+\rho\cdot \frac{1-2\cdot a}{2\cdot\sqrt{a\cdot (1-a)}}\cdot \sqrt{b\cdot (1-b)},\eqno{(4{\rm b})}$$ and thus, the mixed derivative has the following form:
$$d=\frac{\partial}{\partial b}\left(\frac{\partial f_\rho}{\partial a}\right)=1+\rho\cdot \frac{(1-2\cdot a)\cdot (1-2\cdot b)}
{4\cdot \sqrt{a\cdot (1-a)\cdot b\cdot (1-b)}}.\eqno{(5)}$$
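Before going through the case analysis, the sign condition $d\ge 0$ can be spot-checked numerically on the region where the formula (1) applies, i.e., where the untruncated estimate already lies inside the Fr\'echet bounds (2). This is only a sanity-check sketch with helper names of our choosing, not part of the proof:

```python
import math
import itertools

def mixed_derivative(a, b, rho):
    # Mixed derivative (5) of the expression (1).
    return 1.0 + rho * (1.0 - 2.0 * a) * (1.0 - 2.0 * b) / (
        4.0 * math.sqrt(a * (1.0 - a) * b * (1.0 - b)))

def in_first_area(a, b, rho):
    # True when the untruncated estimate (1) lies within the Frechet bounds (2),
    # i.e., no truncation by (4) takes place.
    c = a * b + rho * math.sqrt(a * (1.0 - a) * b * (1.0 - b))
    return max(a + b - 1.0, 0.0) <= c <= min(a, b)

grid = [i / 50.0 for i in range(1, 50)]
ok = all(mixed_derivative(a, b, r) >= -1e-9
         for r in (-1.0, -0.5, 0.5, 1.0)
         for a, b in itertools.product(grid, grid)
         if in_first_area(a, b, r))
```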
Since the expression (1) does not change if we swap $a$ and $b$, it is sufficient to consider the case when $a\le b$.
When $\rho=0$, we get a known copula $f_0(a,b)=a\cdot b$. So, it is sufficient to consider cases when $\rho\ne 0$. This can happen
when $\rho>0$ and when $\rho<0$. Let us consider these cases one by one.
\medskip
\noindent $3^\circ$. Let us first consider the case when $\rho>0$.
\medskip
In this case, since $a\le b$, we have $\min(a,b)=a$ and thus, the condition (4) takes the form
$$a\cdot b+\rho\cdot \sqrt{a\cdot (1-a)\cdot b\cdot (1-b)}\le a,\eqno{(6)}$$
i.e., equivalently,
$$\rho\cdot \sqrt{a\cdot (1-a)\cdot b\cdot (1-b)}\le a-a\cdot b=a\cdot (1-b)\eqno{(7)}$$
and thus,
$$\rho\le \frac{a\cdot (1-b)}{\sqrt{a\cdot (1-a)\cdot b\cdot (1-b)}}=\frac{\sqrt{a\cdot (1-b)}}{\sqrt{(1-a)\cdot b}}.\eqno{(8)}$$
For all such $\rho$, we need to prove that the expression (5) is non-negative.
When both $a$ and $b$ are larger than 0.5 or both are smaller than 0.5, the differences $1-2a$ and $1-2b$ have the same sign and thus, their product is non-negative and the expression (5) is non-negative. So, the only case when we need to check that $d\ge 0$ is when one of the two values $a$ and $b$ is smaller than 0.5 and the other one is larger than 0.5. Since $a\le b$, this means that $a<0.5<b$. In this case, the condition $d\ge 0$ takes the form
$$1-\rho\cdot \frac{(1-2\cdot a)\cdot (2\cdot b-1)}{4\cdot\sqrt{a\cdot (1-a)\cdot b\cdot (1-b)}}\ge 0,\eqno{(9)}$$
i.e., equivalently,
$$\rho\cdot \frac{(1-2\cdot a)\cdot (2\cdot b-1)}{4\cdot \sqrt{a\cdot (1-a)\cdot b\cdot (1-b)}}\le 1,\eqno{(10)}$$ and
$$\rho\le\frac{4\cdot \sqrt{a\cdot (1-a)\cdot b\cdot (1-b)}}{(1-2\cdot a)\cdot (2\cdot b-1)}.\eqno{(11)}$$
So, to prove that we always have $d\ge 0$, we need to prove that every $\rho$ that satisfies the inequality (8) also satisfies the inequality (11). Clearly, if some value $\rho$ satisfies the inequality (11), then every smaller value $\rho$ also satisfies this inequality. Thus, to prove the desired implication, it is sufficient to check that the inequality (11) is satisfied for the largest possible value $\rho$ that satisfies the inequality (8), i.e., for the value $\rho$ which is equal to the right-hand side of the inequality (8). For this $\rho$, the desired inequality (11) takes the form
$$\frac{\sqrt{a\cdot (1-b)}}{\sqrt{(1-a)\cdot b}}\le \frac{4\cdot \sqrt{a\cdot (1-a)\cdot b\cdot (1-b)}}{(1-2\cdot a)\cdot (2\cdot b-1)}.\eqno{(12)}$$
Dividing both sides by $\sqrt{a\cdot (1-b)}$, we get an equivalent inequality
$$\frac{1}{\sqrt{(1-a)\cdot b}}\le \frac{4\cdot \sqrt{(1-a)\cdot b}}{(1-2\cdot a)\cdot (2\cdot b-1)}.\eqno{(13)}$$
Multiplying both sides by both denominators, we get the following equivalent inequality:
$$(1-2\cdot a)\cdot(2\cdot b-1)\le 4\cdot (1-a)\cdot b.\eqno{(14)}$$
If we open parentheses, this inequality takes the equivalent form
$$2\cdot b-4\cdot a\cdot b-1+2\cdot a\le 4\cdot b-4\cdot a\cdot b,\eqno{(15)}$$
i.e., by adding $4\cdot a\cdot b-2\cdot b$ to both sides, the form
$$-1+2\cdot a\le 2\cdot b.\eqno{(16)}$$ We are considering the case when $a\le b$ -- since, as we have mentioned earlier, it is sufficient to only consider this case -- so the inequality (16) is true. Hence the equivalent inequality (12) is also true and thus, for the case when $\rho>0$, we indeed have $d\ge 0$.
\medskip
\noindent $4^\circ$. To complete the proof, it is now sufficient to consider the case when $\rho<0$.
\medskip
In this case, if one of the values $a$ and $b$ is smaller than 0.5 and another one is larger than 0.5, then the differences $1-2\cdot a$ and $1-2\cdot b$ have different signs, so the right-hand side of the expression (5) for $d$ is larger than 1 and thus, non-negative. Thus, it is sufficient to consider the cases when:
\begin{itemize}
\item either both $a$ and $b$ are larger than 0.5
\item or both $a$ and $b$ are smaller than 0.5.
\end{itemize}
Let us consider these two cases one by one.
\medskip
\noindent $4.1^\circ$. Let us first consider the case when $a>0.5$ and $b>0.5$.
\medskip
In this case, $a+b-1>0$, so the inequality (4) takes the form
$$a\cdot b+\rho\cdot \sqrt{a\cdot (1-a)\cdot b\cdot (1-b)}\ge a+b-1,\eqno{(17)}$$
i.e., equivalently, that
$$|\rho|\cdot \sqrt{a\cdot (1-a)\cdot b\cdot (1-b)}\le a\cdot b-a-b+1=(1-a)\cdot (1-b),\eqno{(18)}$$
or that
$$|\rho|\le\frac{(1-a)\cdot (1-b)}{\sqrt{a\cdot (1-a)\cdot b\cdot (1-b)}}=\frac{\sqrt{(1-a)\cdot(1-b)}}{\sqrt{a\cdot b}}.\eqno{(19)}$$
In this case, the condition $d\ge 0$ that the value (5) is non-negative takes the form
$$1-|\rho|\cdot \frac{(2\cdot a-1)\cdot (2\cdot b-1)}{4\cdot \sqrt{a\cdot (1-a)\cdot b\cdot (1-b)}}\ge 0,\eqno{(20)}$$
i.e., equivalently,
$$|\rho|\cdot \frac{(2\cdot a-1)\cdot (2\cdot b-1)}{4\cdot \sqrt{a\cdot (1-a)\cdot b\cdot (1-b)}}\le 1\eqno{(21)}$$
and
$$|\rho|\le \frac{4\cdot \sqrt{a\cdot (1-a)\cdot b\cdot (1-b)}}{(2\cdot a-1)\cdot (2\cdot b-1)}.\eqno{(22)}$$
Similarly to the case when $\rho>0$, to check that all values $|\rho|$ satisfying the inequality (19) also satisfy the inequality (22), it is sufficient to check that the largest possible value $|\rho|$ satisfying the inequality (19) satisfies the inequality (22), i.e., that
$$\frac{\sqrt{(1-a)\cdot(1-b)}}{\sqrt{a\cdot b}}\le
\frac{4\cdot \sqrt{a\cdot (1-a)\cdot b\cdot (1-b)}}
{(2\cdot a-1)\cdot (2\cdot b-1)}.\eqno{(23)}$$
If we divide both sides by $\sqrt{(1-a)\cdot (1-b)}$, we get the following equivalent inequality
$$\frac{1}{\sqrt{a\cdot b}}\le \frac{4\cdot \sqrt{a\cdot b}}{(2\cdot a-1)\cdot (2\cdot b-1)}.\eqno{(24)}$$
Multiplying both sides by both denominators, we get the following equivalent inequality
$$(2\cdot a-1)\cdot (2\cdot b-1)\le 4\cdot a\cdot b.\eqno{(25)}$$
Opening parentheses, we get
$$4\cdot a\cdot b-2\cdot a-2\cdot b+1\le 4\cdot a\cdot b.\eqno{(26)}$$
Adding $2\cdot a+2\cdot b-4\cdot a\cdot b$ to both sides, we get an equivalent inequality
$$1\le 2\cdot a+2\cdot b,\eqno{(27)}$$ which is true since we consider the case when $a+b>1$.
So, in this case, we indeed have $d\ge 0$.
\medskip
\noindent $4.2^\circ$. Let us now consider the case when $a<0.5$ and $b<0.5$.
\medskip
In this case, $a+b-1<0$, so the inequality (4) takes the form
$$a\cdot b+\rho\cdot \sqrt{a\cdot (1-a)\cdot b\cdot (1-b)}\ge 0,\eqno{(28)}$$
i.e., equivalently, that
$$|\rho|\cdot \sqrt{a\cdot (1-a)\cdot b\cdot (1-b)}\le a\cdot b,\eqno{(29)}$$
or that
$$|\rho|\le\frac{a\cdot b}{\sqrt{a\cdot (1-a)\cdot b\cdot (1-b)}}=\frac{\sqrt{a\cdot b}}{\sqrt{(1-a)\cdot(1-b)}}.\eqno{(30)}$$
In this case, the condition $d\ge 0$ that the value (5) is non-negative takes the form
$$1-|\rho|\cdot \frac{(1-2\cdot a)\cdot (1-2\cdot b)}{4\cdot \sqrt{a\cdot (1-a)\cdot b\cdot (1-b)}}\ge 0,\eqno{(31)}$$
i.e., equivalently,
$$|\rho|\cdot \frac{(1-2\cdot a)\cdot (1-2\cdot b)}{4\cdot \sqrt{a\cdot (1-a)\cdot b\cdot (1-b)}}\le 1\eqno{(32)}$$
and
$$|\rho|\le \frac{4\cdot \sqrt{a\cdot (1-a)\cdot b\cdot (1-b)}}{(1-2\cdot a)\cdot (1-2\cdot b)}.\eqno{(33)}$$
Similarly to the cases when $\rho>0$ and when $a+b>1$, to check that all values $|\rho|$ satisfying the inequality (30) also satisfy the inequality (33), it is sufficient to check that the largest possible value $|\rho|$ satisfying the inequality (30) satisfies the inequality (33), i.e., that
$$\frac{\sqrt{a\cdot b}}{\sqrt{(1-a)\cdot(1-b)}}\le \frac{4\cdot \sqrt{a\cdot (1-a)\cdot b\cdot (1-b)}}{(1-2\cdot a)\cdot (1-2\cdot b)}.\eqno{(34)}$$
If we divide both sides by $\sqrt{a\cdot b}$, we get the following equivalent inequality
$$\frac{1}{\sqrt{(1-a)\cdot (1-b)}}\le \frac{4\cdot \sqrt{(1-a)\cdot (1-b)}}{(1-2\cdot a)\cdot (1-2\cdot b)}.\eqno{(35)}$$
Multiplying both sides by both denominators, we get the following equivalent inequality
$$(1-2\cdot a)\cdot (1-2\cdot b)\le 4\cdot (1-a)\cdot (1-b).\eqno{(36)}$$
Opening parentheses, we get
$$1-2\cdot a-2\cdot b+4\cdot a\cdot b\le 4-4\cdot a-4\cdot b+4\cdot a\cdot b.\eqno{(37)}$$
Adding $4\cdot a+4\cdot b-4\cdot a\cdot b-1$ to both sides, we get an equivalent inequality
$$2\cdot a+2\cdot b\le 3,\eqno{(38)}$$ which is true since we consider the case when $a+b<1$.
So, in this case, we indeed have $d\ge 0$.
\medskip
In all cases, we have $d\ge 0$. Thus, for boxes in which all four vertices belong to the area described by the expression (1), the inequality (4a) is always satisfied.
\medskip
\noindent $5^\circ$. Let us now consider the boxes in which two vertices belong to the boundary between two areas. First, we will consider the case when $\rho>0$ and then, we will consider the case when $\rho<0$.
\medskip
\noindent $6^\circ$. Let us first consider the case when $\rho>0$. For this case, let us first describe the boundaries between the areas.
\medskip
\noindent $6.1^\circ$. Let us analyze which of the three areas listed in formula (4) are possible in this case.
\medskip
When $\rho>0$, we have $$a\cdot b+\rho\cdot \sqrt{a\cdot (1-a)\cdot b\cdot (1-b)}\ge a\cdot b,$$ and since it is known that we always have $a\cdot b\ge \max(a+b-1,0)$, we have $$a\cdot b+\rho\cdot \sqrt{a\cdot (1-a)\cdot b\cdot (1-b)}\ge \max(a+b-1,0).$$ So, for $\rho>0$, we cannot have the first of the three cases described by the formula (4). So, we only have two areas:
\begin{itemize}
\item the area where the and-operation is described by the formula (1), and
\item the area where the and-operation is described by the formula $\min(a,b)$.
\end{itemize}
\medskip
\noindent $6.2^\circ$.
Let us describe the two possible areas and the boundary between these two areas.
\medskip
The first area is characterized by the inequality
$$a\cdot b+\rho\cdot \sqrt{a\cdot (1-a)\cdot b\cdot (1-b)}\le \min(a,b).\eqno{(39)}$$
Similarly to the previous part of the proof, without losing generality, we can consider the case when $a\le b$. In this case, the inequality (39) describing the first area takes the following form:
$$a\cdot b+\rho\cdot \sqrt{a\cdot (1-a)\cdot b\cdot (1-b)}\le a.\eqno{(40)}$$
If we subtract $a\cdot b$ from both sides of this inequality, we get the following equivalent inequality:
$$\rho\cdot \sqrt{a\cdot (1-a)\cdot b\cdot (1-b)}\le a\cdot (1-b).\eqno{(41)}$$
Both sides of this inequality are non-negative, so we can get an equivalent inequality if we square both sides:
$$\rho^2\cdot a\cdot (1-a)\cdot b\cdot (1-b)\le a^2\cdot (1-b)^2.\eqno{(42)}$$
The cases when $a$ or $b$ are equal to 0 or 1 can be obtained by taking a limit from the cases when both $a$ and $b$ are located inside the interval $(0,1)$. For such values, we can divide both sides of the inequality by the positive numbers $a^2$, $b$, and $1-b$, and get the following equivalent inequality:
$$\rho^2\cdot \frac{1-a}{a}\le\frac{1-b}{b},\eqno{(43)}$$
i.e., equivalently,
$$\rho^2\cdot \frac{1-a}{a}\le\frac{1}{b}-1.\eqno{(44)}$$
By adding 1 to both sides of this inequality, we get
$$\frac{a+\rho^2\cdot (1-a)}{a}\le \frac{1}{b},\eqno{(45)}$$
i.e., equivalently, that
$$b\le \frac{a}{a+\rho^2\cdot (1-a)}.\eqno{(46)}$$
This inequality describes the first area, in which the and-operation is described by the formula (1).
Thus, the boundary between the two areas is described by the equality
$$b=\frac{a}{a+\rho^2\cdot (1-a)}.\eqno{(47)}$$
\medskip
\noindent{\it Comment.}
One can see that for $a=0$ we get $b=0$, and for $a=1$ we get $b=1$.
\medskip
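One can also verify the boundary curve (47) directly: at $b=a/(a+\rho^2\cdot(1-a))$, the untruncated estimate (1) equals $\min(a,b)=a$ exactly. A short numerical sketch (helper names are ours):

```python
import math

def boundary_b(a, rho):
    # Boundary curve (47) between the two areas, for 0 < rho <= 1.
    return a / (a + rho**2 * (1.0 - a))

def estimate(a, b, rho):
    # Untruncated estimate (1).
    return a * b + rho * math.sqrt(a * (1.0 - a) * b * (1.0 - b))
```

On the curve (47) we have $b\ge a$ (Part 6.3), the estimate (1) meets $\min(a,b)=a$, and $b$ is increasing in $a$ (Part 6.4).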
\noindent $6.3^\circ$. Let us prove that for all $a$, the corresponding boundary value $b$ is greater than or equal to $a$ -- i.e., that for all the points $(a,b)$ on this boundary, we have $a\le b$.
\medskip
Indeed, for the expression (47), the desired inequality $a\le b$ takes the form
$$a\le \frac{a}{a+\rho^2\cdot (1-a)}.\eqno{(48)}$$
If we divide both sides by $a$ and multiply both sides by the denominator of the right-hand side, we get the following equivalent inequality
$$a+\rho^2\cdot (1-a)\le 1.\eqno{(49)}$$
If we move all the terms to the right-hand side, we get an equivalent inequality
$$0\le 1-a-\rho^2\cdot (1-a)=(1-\rho^2)\cdot (1-a).\eqno{(50)}$$
This inequality is always true, since $\rho^2\le 1$ and $a\le 1$, so indeed, for all boundary points, we have $a\le b$.
\medskip
\noindent $6.4^\circ$. Let us prove that the boundary describes $b$ as an increasing function of $a$.
\medskip
By applying, to the equality (47) that describes the boundary, the same transformations that established the equivalence of the inequalities (43) and (46), we can conclude that the equality (47) is equivalent to
$$\rho^2\cdot \frac{1-a}{a}=\frac{1-b}{b},\eqno{(51)}$$
i.e., to
$$\rho^2\cdot \left(\frac{1}{a}-1\right)=\frac{1}{b}-1.\eqno{(52)}$$
The left-hand side is decreasing with respect to $a$, the right-hand side is a decreasing function of $b$. Thus, as $a$ increases, the left-hand side decreases, thus the right-hand side also decreases and hence, the value $b$ increases as well.
\medskip
\noindent $6.5^\circ$. For $\rho=1$, the condition (46) describing the first area takes the form $b\le a$. Since we have $a\le b$, this means that this condition is only satisfied for $a=b$. For these values, the expression (1) is equal to $$a\cdot a+\sqrt{a\cdot(1-a)\cdot a\cdot (1-a)}=a^2+a\cdot (1-a)=a^2+a-a^2=$$ $$a=\min(a,b),\eqno{(53)}$$ which means that our and-operation is always equal to $\min(a,b)$. The expression $\min(a,b)$ is known to be a copula.
So, we only need to prove the fact that our and-operation is a copula for the case when $\rho<1$. This is the case we will consider from now on.
\medskip
\noindent $6.6^\circ$. Let us prove that for $\rho<1$, the only boundary points for which $a=b$ are points for which $a=b=0$ and $a=b=1$.
\medskip
Indeed, as we have mentioned, the points $(0,0)$ and $(1,1)$ are boundary points. Let us prove, by contradiction, that there are no other boundary points for which $a=b$. Indeed, when $a=b$, the equality (52) that describes the boundary takes the form:
$$\rho^2\cdot \left(\frac{1}{a}-1\right)=\frac{1}{a}-1.\eqno{(54)}$$
Dividing both sides of this equality by the non-zero right-hand side, we get $\rho^2=1$. This contradicts the fact that we are considering the case when $\rho<1$ and thus, $\rho^2<1$. This contradiction shows that other boundary points with $a=b$ are not possible.
\medskip
\noindent $6.7^\circ$. The boundary is a curve that is disjoint from the line $a=b$, except for its endpoints. So, if we limit ourselves to a sub-box $[\varepsilon,1-\varepsilon]\times [\varepsilon,1-\varepsilon]$ for some small $\varepsilon>0$, the boundary curve is separated from the line $a=b$: there is a smallest distance $\delta>0$ between points of these two lines. So, if we have a box that includes both points with $a\le b$ and points with $a\ge b$, we can divide this box into sub-boxes of linear size $<\delta/2$ and thus make sure that every sub-box that contains boundary points cannot contain any points with $a=b$ -- and therefore only contains points with $a\le b$.
So, due to additivity, it is sufficient to prove the inequality (4a) for boxes for which:
\begin{itemize}
\item two vertices lie on the boundary, and
\item we have $a\le b$ for all the points from this sub-box.
\end{itemize}
This will allow us to prove the inequality (4a) for all sub-boxes of the square $[\varepsilon,1-\varepsilon]\times [\varepsilon,1-\varepsilon]$. We can do it for any $\varepsilon$ and thus, in the limit, get the desired inequality for all sub-boxes of the original square $[0,1]\times [0,1]$ as well.
So, suppose that we have a box for which:
\begin{itemize}
\item two vertices lie on the boundary, and
\item we have $a\le b$ for all the points from this box.
\end{itemize}
Since the boundary describes $b$ as an increasing function of $a$, the corresponding box has the form
\medskip
\begin{center}
\begin{picture}(50,50)
\put(0,0){\line(1,0){50}}
\put(0,50){\line(1,0){50}}
\put(0,0){\line(0,1){50}}
\put(50,0){\line(0,1){50}}
\put(0,0){\line(1,1){50}}
\end{picture}
\end{center}
\medskip
So, in the corresponding box:
\begin{itemize}
\item the two vertices $(\underline a,\underline b)$ and $(\overline a,\overline b)$ are on the boundary,
\item the vertex $(\overline a,\underline b)$ is in the first area, i.e., for this point, we have the expression (1), and
\item the vertex $(\underline a,\overline b)$ is in the second area, i.e., here $C(\underline a,\overline b)=\min(\underline a,\overline b)$.
\end{itemize}
The desired inequality (4a) has the form
$$C(\overline a,\underline b)-C(\underline a,\underline b)\le
C(\overline a,\overline b)-C(\underline a,\overline b).\eqno{(55)}$$
The points $(\overline a,\overline b)$ and $(\underline a,\overline b)$ both satisfy $C(a,b)=\min(a,b)$: the second of them lies in the second area, while the first lies on the boundary, where the and-operation also equals $\min(a,b)$. For all the points from the box, $a\le b$, so we have
$$C(\overline a,\overline b)-C(\underline a,\overline b)=\min(\overline a,\overline b)-\min(\underline a,\overline b)=\overline a-\underline a.\eqno{(56)}$$
On the other hand, for the difference in the left-hand side of the formula (55), we have
$$C(\overline a,\underline b)-C(\underline a,\underline b)=\int_{\underline a}^{\overline a} \frac{\partial C}{\partial a}\,da.\eqno{(57)}$$
So, if we prove that the partial derivative $\partial C/\partial a$ is always smaller than or equal to 1, we would indeed conclude that
$$C(\overline a,\underline b)-C(\underline a,\underline b)\le\int_{\underline a}^{\overline a} 1\,da=\overline a-\underline a,\eqno{(58)}$$
which, in view of (56), is exactly the desired inequality (55).
For the points $(\overline a,\underline b)$ and $(\underline a,\underline b)$ -- and the points from the interval connecting these two points -- the expression $C(a,b)$ is described by the formula (1). Thus, the partial derivative of $C(a,b)$ with respect to $a$ is described by the formula (4b). Thus, the inequality $$\frac{\partial C}{\partial a}(a,b)\le 1,\eqno{(59)}$$ takes the form
$$b+\rho\cdot \frac{1-2\cdot a}{2\cdot\sqrt{a\cdot (1-a)}}\cdot \sqrt{b\cdot (1-b)}\le 1.\eqno{(60)}$$
Subtracting $b$ from both sides of (60), we get an equivalent inequality
$$\rho\cdot \frac{1-2\cdot a}{2\cdot\sqrt{a\cdot (1-a)}}\cdot \sqrt{b\cdot (1-b)}\le 1-b.\eqno{(61)}$$
To separate the variables, we can divide both sides by $\sqrt{b\cdot (1-b)}$, then we get an equivalent inequality
$$\rho\cdot \frac{1-2\cdot a}{2\cdot\sqrt{a\cdot (1-a)}}\le \sqrt{\frac{1-b}{b}}.\eqno{(62)}$$
The inequality (46) is equivalent to the inequality (43); by taking the square root of both sides of (43), we conclude that:
$$\rho\cdot \sqrt{\frac{1-a}{a}}\le \sqrt{\frac{1-b}{b}}.\eqno{(63)}$$
Thus, if we prove that the left-hand side of the inequality (62) is smaller than or equal to the left-hand side of the inequality (63), i.e., that
$$\rho\cdot \frac{1-2\cdot a}{2\cdot\sqrt{a\cdot (1-a)}}\le \rho\cdot \sqrt{\frac{1-a}{a}};\eqno{(64)}$$
this will prove the inequality (62) and thus, the desired upper bound (60) on the partial derivative. We can simplify the inequality (64) by dividing both sides by $\rho$ and multiplying both sides by $2\cdot \sqrt{a\cdot (1-a)}$. Then, we get an equivalent inequality
$$1-2\cdot a\le 2\cdot (1-a)=2-2\cdot a,\eqno{(65)}$$ which is equivalent to $1\le 2$ and is, thus, always true. Thus, (55) holds, so the inequality (4a) is true for all the boxes in which two vertices are located on the boundary.
This completes the proof of the Proposition for the case when $\rho>0$.
\medskip
\noindent $7^\circ$. Let us now consider the case when $\rho<0$.
For this case, let us first describe the boundaries between the areas.
\medskip
\noindent $7.1^\circ$. Let us analyze which of the three areas listed in formula (4) are possible in this case.
\medskip
When $\rho<0$, we have $$a\cdot b+\rho\cdot \sqrt{a\cdot (1-a)\cdot b\cdot (1-b)}\le a\cdot b,$$ and since it is known that we always have $a\cdot b\le \min(a,b)$, we have $$a\cdot b+\rho\cdot \sqrt{a\cdot (1-a)\cdot b\cdot (1-b)}\le\min(a,b).$$ So, for $\rho<0$, we cannot have the third of the three cases described by the formula (4). So, we only have two areas:
\begin{itemize}
\item the area where the and-operation is described by the formula (1), and
\item the area where the and-operation is described by the formula $$\max(a+b-1,0).$$
\end{itemize}
\medskip
\noindent $7.2^\circ$.
Let us describe the two possible areas and the boundary between these two areas.
\medskip
The first area is characterized by the inequality $C(a,b)\ge \max(a+b-1,0)$, i.e., equivalently, by two inequalities
$$a\cdot b-|\rho|\cdot \sqrt{a\cdot (1-a)\cdot b\cdot (1-b)}\ge 0\eqno{(66)}$$
and
$$a\cdot b-|\rho|\cdot \sqrt{a\cdot (1-a)\cdot b\cdot (1-b)}\ge a+b-1.\eqno{(67)}$$
Let us consider these two inequalities one by one.
\medskip
\noindent $7.2.1^\circ$. The inequality (66) is equivalent to:
$$a\cdot b\ge |\rho|\cdot \sqrt{a\cdot (1-a)\cdot b\cdot (1-b)}.\eqno{(68)}$$
To separate the variables, let us divide both sides of this inequality by $$a\cdot \sqrt{b\cdot (1-b)},$$ then we get an equivalent inequality
$$\sqrt{\frac{b}{1-b}}\ge |\rho|\cdot \sqrt{\frac{1-a}{a}}.\eqno{(69)}$$
Both sides of this inequality are non-negative, thus if we square both sides, we get an equivalent inequality
$$\frac{b}{1-b}\ge \rho^2\cdot \frac{1-a}{a}.\eqno{(70)}$$
Taking reciprocals of both sides (which reverses the inequality), we get an equivalent inequality
$$\frac{1-b}{b}\le \frac{a}{\rho^2\cdot (1-a)},\eqno{(71)}$$
i.e., equivalently,
$$\frac{1}{b}-1\le \frac{a}{\rho^2\cdot (1-a)}.\eqno{(72)}$$
By adding 1 to both sides, we get
$$\frac{1}{b}\le \frac{\rho^2\cdot (1-a)+a}{\rho^2\cdot (1-a)},\eqno{(73)}$$
i.e., equivalently,
$$b\ge \frac{\rho^2\cdot (1-a)}{\rho^2\cdot (1-a)+a}.\eqno{(74)}$$
\medskip
\noindent $7.2.2^\circ$. The inequality (67) is equivalent to
$$a\cdot b-a-b+1\ge |\rho|\cdot \sqrt{a\cdot (1-a)\cdot b\cdot (1-b)},\eqno{(75)}$$
i.e.,
$$(1-a)\cdot (1-b)\ge |\rho|\cdot \sqrt{a\cdot (1-a)\cdot b\cdot (1-b)}.\eqno{(76)}$$
To separate the variables, let us divide both sides by $(1-a)\cdot \sqrt{b\cdot (1-b)}$, then we get an equivalent inequality
$$\sqrt{\frac{1-b}{b}}\ge |\rho|\cdot \sqrt{\frac{a}{1-a}}.\eqno{(77)}$$
Both sides of this inequality are non-negative, thus if we square both sides, we get an equivalent inequality
$$\frac{1-b}{b}\ge \rho^2\cdot \frac{a}{1-a},\eqno{(78)}$$
i.e., equivalently,
$$\frac{1}{b}-1\ge \rho^2\cdot \frac{a}{1-a}.\eqno{(79)}$$
By adding 1 to both sides, we get
$$\frac{1}{b}\ge \frac{\rho^2\cdot a + (1-a)}{1-a},\eqno{(80)}$$
i.e., equivalently,
$$b\le \frac{1-a}{\rho^2\cdot a+(1-a)}.\eqno{(81)}$$
\medskip
\noindent $7.2.3^\circ$. By combining the inequalities (74) and (81), we get the following description of the area in which the and-operation is described by the formula (1):
$$\frac{\rho^2\cdot (1-a)}{\rho^2\cdot (1-a)+a}\le b\le \frac{1-a}{\rho^2\cdot a+(1-a)}.\eqno{(82)}$$
Thus, the boundary between the two areas consists of the following two curves:
$$b=\frac{\rho^2\cdot (1-a)}{\rho^2\cdot (1-a)+a}\eqno{(83)}$$
and
$$b=\frac{1-a}{\rho^2\cdot a+(1-a)}.\eqno{(84)}$$
\medskip
\noindent $7.3^\circ$. Let us prove that:
\begin{itemize}
\item the curve (83) lies in the area where $a+b\le 1$, and
\item the curve (84) lies in the area where $a+b\ge 1$.
\end{itemize}
\medskip
\noindent $7.3.1^\circ$. Let us first prove that for each value $b$ described by the formula (83), we have $a+b\le 1$.
\medskip
We need to prove the inequality
$$a+\frac{\rho^2\cdot (1-a)}{\rho^2\cdot (1-a)+a}\le 1.\eqno{(85)}$$
Subtracting $a$ from both sides, we get an equivalent inequality
$$\frac{\rho^2\cdot (1-a)}{\rho^2\cdot (1-a)+a}\le 1-a.\eqno{(86)}$$
Dividing both sides by $1-a$ and multiplying both sides by the denominator of the left-hand side, we get the following equivalent inequality:
$$\rho^2\le \rho^2\cdot (1-a)+a=\rho^2+(1-\rho^2)\cdot a,\eqno{(87)}$$ which is, of course, always true, since $\rho^2\le 1$ and $a\ge 0$. The statement is proven.
\medskip
\noindent $7.3.2^\circ$. Let us now prove that for each value $b$ described by the formula (84), we have $a+b\ge 1$.
\medskip
We need to prove the inequality
$$a+\frac{1-a}{\rho^2\cdot a+(1-a)}\ge 1.\eqno{(88)}$$
Subtracting $a$ from both sides, we get an equivalent inequality
$$\frac{1-a}{\rho^2\cdot a+(1-a)}\ge 1-a.\eqno{(89)}$$
Dividing both sides by $1-a$ and multiplying both sides by the denominator of the left-hand side, we get the following equivalent inequality:
$$1 \ge \rho^2\cdot a+(1-a)=1-(1-\rho^2)\cdot a,\eqno{(90)}$$ which is, of course, always true, since $(1-\rho^2)\cdot a\ge 0$. The statement is proven.
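The boundary formulas (83) and (84), together with the claims of Parts 7.3.1 and 7.3.2, can be verified numerically. The sketch below is for illustration only (grid step and tolerance are arbitrary choices; we write $r$ for $|\rho|$): it checks that on each curve the corresponding inequality (66) or (67) becomes an equality, and that the curves lie on the claimed sides of the line $a+b=1$:

```python
import math

def check_rho_negative_boundaries(step=0.05, tol=1e-9):
    """For r = |rho| in (0, 1), check that:
    - on the curve (83), b = r^2(1-a)/(r^2(1-a)+a), equality holds in (66):
      a*b = r*sqrt(a(1-a)b(1-b)), and a + b <= 1 (Part 7.3.1);
    - on the curve (84), b = (1-a)/(r^2*a+(1-a)), equality holds in (67):
      (1-a)(1-b) = r*sqrt(a(1-a)b(1-b)), and a + b >= 1 (Part 7.3.2)."""
    grid = [step * k for k in range(1, int(1 / step))]
    for r in grid:
        r2 = r * r
        for a in grid:
            b1 = r2 * (1 - a) / (r2 * (1 - a) + a)   # curve (83)
            b2 = (1 - a) / (r2 * a + (1 - a))        # curve (84)
            s1 = r * math.sqrt(a * (1 - a) * b1 * (1 - b1))
            s2 = r * math.sqrt(a * (1 - a) * b2 * (1 - b2))
            if abs(a * b1 - s1) > tol or a + b1 > 1 + tol:
                return False
            if abs((1 - a) * (1 - b2) - s2) > tol or a + b2 < 1 - tol:
                return False
    return True
```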
\medskip
\noindent $7.4^\circ$. Similarly to Part 6 of this proof, it is sufficient to prove the inequality (4a) for boxes in which two vertices are on the boundary and for which:
\begin{itemize}
\item either we have $a+b\le 1$ for all the points from the box,
\item or we have $a+b\ge 1$ for all the points from the box.
\end{itemize}
Let us consider the two parts of the boundary one by one.
\medskip
\noindent $7.4.1^\circ$. Let us first consider the case when we have $a+b\le 1$ for all the points from the box. In this case, the corresponding part of the boundary is described by the formula (83). By reformulating this expression in the equivalent form
$$b=\frac{1}{1+\displaystyle\frac{a}{\rho^2\cdot (1-a)}}=
\frac{1}
{
1+\displaystyle\frac{1}{\rho^2}\cdot
\displaystyle\frac{1}{\displaystyle\frac{1}{a}-1}
},\eqno{(91)}$$
we can see that $b$ is a decreasing function of $a$. Thus, the corresponding box has the form
\medskip
\begin{center}
\begin{picture}(50,50)
\put(0,0){\line(1,0){50}}
\put(0,50){\line(1,0){50}}
\put(0,0){\line(0,1){50}}
\put(50,0){\line(0,1){50}}
\put(0,50){\line(1,-1){50}}
\end{picture}
\end{center}
\medskip
So, in the corresponding box:
\begin{itemize}
\item the two vertices $(\underline a,\overline b)$ and $(\overline a,\underline b)$ are on the boundary,
\item the vertex $(\overline a,\overline b)$ is in the first area, i.e., for this point, we have the expression (1), and
\item the vertex $(\underline a,\underline b)$ is in the second area, i.e., here $C(\underline a,\underline b)=\max(\underline a+\underline b-1,0)$.
\end{itemize}
So, for three vertices, we have $C(a,b)=\max(a+b-1,0)$. Since for all the points from the box, we have $a+b\le 1$, this means that for three vertices, we have $C(a,b)=0$. In this case, the inequality (4a) is clearly true.
\medskip
\noindent $7.4.2^\circ$. Let us now consider the case when we have $a+b\ge 1$ for all the points from the box. In this case, the corresponding part of the boundary is described by the formula (84). By reformulating this expression in the equivalent form
$$b=\frac{1}{1+\rho^2\cdot \displaystyle\frac{a}{1-a}}=
\frac{1}
{
1+\rho^2\cdot
\displaystyle\frac{1}{\displaystyle\frac{1}{a}-1}
},\eqno{(91a)}$$
we can see that $b$ is also a decreasing function of $a$. Thus, the corresponding box has the same form as in the case $a+b\le 1$:
\medskip
\begin{center}
\begin{picture}(50,50)
\put(0,0){\line(1,0){50}}
\put(0,50){\line(1,0){50}}
\put(0,0){\line(0,1){50}}
\put(50,0){\line(0,1){50}}
\put(0,50){\line(1,-1){50}}
\end{picture}
\end{center}
\medskip
So, in the corresponding box:
\begin{itemize}
\item the two vertices $(\underline a,\overline b)$ and $(\overline a,\underline b)$ are on the boundary,
\item the vertex $(\underline a,\underline b)$ is in the first area, i.e., for this point, we have the expression (1), and
\item the vertex $(\overline a,\overline b)$ is in the second area, i.e., here $C(\overline a,\overline b)=\max(\overline a+\overline b-1,0)$.
\end{itemize}
Similarly to Part 6 of the proof, we can show that the desired inequality (4a) is satisfied if the corresponding partial derivative is less than or equal to 1, i.e., if
$$\frac{\partial C}{\partial a}=b-|\rho|\cdot \frac{1-2\cdot a}{2\cdot \sqrt{a\cdot (1-a)}}\cdot \sqrt{b\cdot (1-b)}\le 1.\eqno{(92)}$$
Subtracting $b$ from both sides, we get an equivalent inequality
$$-|\rho|\cdot \frac{1-2\cdot a}{2\cdot \sqrt{a\cdot (1-a)}}\cdot \sqrt{b\cdot (1-b)}\le 1-b.\eqno{(93)}$$
We can separate the variable if we divide both sides by $\sqrt{b\cdot (1-b)}$, then we get an equivalent inequality
$$-|\rho|\cdot \frac{1-2\cdot a}{2\cdot \sqrt{a\cdot (1-a)}}\le \sqrt{\frac{1-b}{b}}.\eqno{(94)}$$
We know a lower bound on the expression in the right-hand side -- it is provided by the inequality (77). Thus, to prove the inequality (94), it is sufficient to prove that the left-hand side of the formula (94) is smaller than or equal to this lower bound, i.e., that
$$-|\rho|\cdot \frac{1-2\cdot a}{2\cdot \sqrt{a\cdot (1-a)}}
\le|\rho|\cdot \sqrt{\frac{a}{1-a}}.\eqno{(95)}$$
Let us prove this inequality. Dividing both sides of (95) by $|\rho|$ and multiplying both sides by $2\cdot \sqrt{a\cdot (1-a)}$, we get an equivalent inequality $-(1-2\cdot a)\le 2\cdot a$, i.e., $2\cdot a-1\le 2\cdot a$, which is always true. Thus, the inequality (94) holds, hence the inequality (92) also holds, and therefore, in this case, the inequality (4a) that describes a copula is also true.
\medskip
\noindent $8^\circ$. We have considered all possible cases, and in all these cases, we have shown that the inequality (4a) -- that defines a copula -- is true. Thus, our and-operation is indeed a copula. The proposition is proven.
\section*{Acknowledgments}
This research was partly funded by the
EPSRC and ESRC CDT in Risk and Uncertainty (EP/L015927/1), established within
the Institute for Risk and Uncertainty at the University of Liverpool. This work has
been carried out within the framework of the EUROfusion Consortium, funded by the
European Union via the Euratom Research and Training Programme (Grant Agreement
No 101052200 - EUROfusion).
Views and opinions expressed are however those
of the author(s) only and do not necessarily reflect those of the European Union or the
European Commission. Neither the European Union nor the European Commission
can be held responsible for them.
V.K. was supported in part by the National Science Foundation
grants 1623190 (A Model of Change for Preparing a New Generation
for Professional Practice in Computer Science), and HRD-1834620 and
HRD-2034030 (CAHSI Includes), and by the AT\&T Fellowship in
Information Technology. He was also supported by the program of the
development of the Scientific-Educational Mathematical Center of
Volga Federal District No. 075-02-2020-1478, and by a grant from
the Hungarian National Research, Development and Innovation Office
(NRDI).
\makeatletter
\renewcommand\section{\@startsection {section}{1}{\z@}
{-30pt \@plus -1ex \@minus -.2ex}
{2.3ex \@plus.2ex}
{\normalfont\normalsize\bfseries\boldmath}}
\renewcommand\subsection{\@startsection{subsection}{2}{\z@}
{-3.25ex\@plus -1ex \@minus -.2ex}
{1.5ex \@plus .2ex}
{\normalfont\normalsize\bfseries\boldmath}}
\renewcommand{\@seccntformat}[1]{\csname the#1\endcsname. }
\makeatother
\newtheorem{theorem}{Theorem}
\newtheorem{lemma}{Lemma}
\newtheorem{conjecture}{Conjecture}
\newtheorem{proposition}{Proposition}
\newtheorem{corollary}{Corollary}
\newtheorem*{definition}{Definition}
\newtheorem*{remark}{Remark}
\newcommand{\mathcal{R}(L)}{\mathcal{R}(L)}
\newcommand{\mathbb{N}}{\mathbb{N}}
\newcommand{\mathbb{Z}}{\mathbb{Z}}
\newcommand{\mathbb{I}}{\mathbb{I}}
\newcommand{\mathbb{R}}{\mathbb{R}}
\newcommand{\mathbb{Q}}{\mathbb{Q}}
\newcommand{{\mathcal{G}\mathcal{P}}(L)}{{\mathcal{G}\mathcal{P}}(L)}
\newcommand{{\mathcal{D}}(L)}{{\mathcal{D}}(L)}
\newcommand{\alpha}{\alpha}
\newcommand{\alpha_0}{\alpha_0}
\newcommand{\alpha_n}{\alpha_n}
\newcommand{\Z[i]}{\mathbb{Z}[i]}
\newcommand{{\text{Re}}}{{\text{Re}}}
\newcommand{{\text{Im}}}{{\text{Im}}}
\newcommand{\begin{equation}}{\begin{equation}}
\newcommand{\end{equation}}{\end{equation}}
\newcommand{\begin{eqnarray*}}{\begin{eqnarray*}}
\newcommand{\end{eqnarray*}}{\end{eqnarray*}}
\begin{document}
\begin{center}
\uppercase{\bf Walking to infinity along Gaussian lines}
\vskip 20pt
{\bf Elsa Magness
}\\
{\smallit Department of Mathematics, Seattle University, Seattle, WA 98122, USA}\\
{\tt emagness@brynmawr.edu}\\
\vskip 10pt
{\bf Brian Nugent
}\\
{\smallit Department of Mathematics, Seattle University, Seattle, WA 98122, USA}\\
{\tt bnugent@uw.edu}\\
\vskip 10pt
{\bf Leanne Robertson}\\
{\smallit Department of Mathematics, Seattle University, Seattle, WA 98122, USA}\\
{\tt robertle@seattleu.edu}\\
\end{center}
\vskip 20pt
\vskip 30pt
\centerline{\bf Abstract}
\noindent
We study analogies between the rational integers on the real line and the Gaussian integers on other lines in the complex plane. This includes a Gaussian analog of Bertrand's Postulate, the Chinese Remainder Theorem, and the periodicity of divisibility. We also computationally investigate the distribution of Gaussian primes along these lines and leave the reader with several open problems.
\pagestyle{myheadings}
\thispagestyle{empty}
\baselineskip=12.875pt
\vskip 30pt
\section{Introduction.}\label{intro}
Is it possible to walk from the origin in the complex plane to infinity using steps of bounded length and stepping only on Gaussian primes? Several authors have worked on this intriguing question since it was first posed by Basil Gordon in 1962. Erd\H{o}s conjectured that such a walk to infinity is impossible, but the problem remains unsolved today (see \cite{Gethner} for a discussion of the contradictory references to Erd\H{o}s' role in this problem). In 1970, Jordan and Rabung \cite{JR} showed that steps of length at least 4 would be required, and in 1998, Gethner, Wagon, and Wick~\cite{Gethner} showed that steps of length $\sqrt{26}$ or less are insufficient to reach infinity. In the same paper they showed that it is impossible to walk to infinity on any line in the complex plane by stepping only on Gaussian primes and taking steps of bounded length, and thus established the Gaussian analog of the classical result that there are arbitrarily long sequences of composites on the real line. In 2017, West and Sittinger \cite{West} generalized
this result and showed that in any quadratic field of class number 1, it is similarly impossible to walk to infinity along any line using steps of bounded length and stepping only on primes in the ring of integers of the field. Motivated by these results, we further investigate the idea of walking to infinity on lines in the complex plane stepping only on Gaussian integers, and analogies to walking to infinity along the real line.
Recall that the ring $\mathbb{Z} [i]$ of {\em Gaussian integers} consists of all complex numbers of the form $\alpha = a+bi$, where $a$ and $b$ are rational integers. Following Gethner et al., we call a line in the complex plane a {\em{Gaussian line}} if it contains two, and hence infinitely many, Gaussian integers. We call a Gaussian line {\em{primitive}} if the integers on the line do not all share a common divisor. With these definitions, we ask what you might discover if instead of wandering freely on Gaussian integers in the complex plane, you walked along a primitive Gaussian line stepping only on Gaussian integers? How different or similar would this stroll to infinity be to that of walking to infinity along the real line stepping only on rational integers? Would you stroll on infinitely many Gaussian primes, or perhaps none at all? Could you observe an analog of Bertrand's postulate on your walk? Would you see a periodicity of divisibility similar to that on the real line? What other properties of the Gaussian integers might you discover?
An overview of the paper and our results is as follows. In Section \ref{background}, we establish the background and notation used throughout.
In Section \ref{primes}, we investigate the distribution of Gaussian primes on Gaussian lines. We discuss what a theorem of T. Tao says about primes on Gaussian lines and formulate and computationally support an extension of Bertrand's Postulate to these lines. The main questions posed in this section are equivalent to famous open problems about quadratic polynomials representing prime numbers, so we turn to more tractable problems in subsequent sections. In Section~\ref{div}, we prove key divisibility properties of Gaussian integers on Gaussian lines that are important for the rest of the paper. This includes an analogy of the periodicity of divisibility of rational integers on the real line and a characterization of the rational integers and Gaussian primes that divide some Gaussian integer on a given Gaussian line. In Section~\ref{CRT}, we extend the Chinese Remainder Theorem to Gaussian lines and prove a theorem that shows there are always infinitely many Gaussian lines that satisfy any given CRT-type divisibility properties.
Finally, in Section \ref{divisor} we return to questions raised in Section \ref{div} and completely characterize the set of Gaussian integers that divide some Gaussian integer on a given Gaussian line.
\section{Background and Notation.}\label{background}
We begin with some background on Gaussian integers and by establishing the notation concerning Gaussian lines that is used throughout the paper.
\medskip
The unit group of the Gaussian integers $\mathbb{Z}[i]$ is
$\{\pm 1, \pm i\}$, so two Gaussian integers, $\alpha$ and $\beta$, are {\em associates} if and only if $\alpha=\pm \beta$ or $\alpha = \pm i \beta$.
The {\em norm} of the Gaussian integer $\alpha=a+bi$ is defined by $N(a+bi)=\alpha\cdot\overline\alpha =a^2+b^2\in\mathbb{Z}$, where the ``bar'' denotes complex conjugation, and its {\em trace} is defined by $Tr(a+bi)=\alpha+\overline\alpha=2a\in\mathbb{Z}$. Unique factorization holds in $\mathbb{Z}[i]$, and this gives the Gaussian integers a well-defined notion of primality. To avoid confusion, we use the terminology {\em rational prime} for a prime in the rational integers $\mathbb{Z}$, and {\em Gaussian prime} for a prime in $\mathbb{Z} [i]$.
The Gaussian primes can be classified in terms of the factorization of the rational primes $p\in \mathbb{Z}$ into Gaussian primes as follows:
\begin{center}
\begin{enumerate}
\item If $p=2$, then $p$ ramifies in $\mathbb{Z} [i]$. Specifically, $2=-i(1+i)^2$, so $1+i$ is a Gaussian prime of norm 2.
\item If $p \equiv 1 \pmod 4$, then $p = \pi\cdot\overline{\pi}$ splits as a product of two conjugate Gaussian primes of norm $p$ that are not associates in $\mathbb{Z}[i]$.
\item If $p \equiv 3 \pmod 4$, then $p$ remains prime in $\mathbb{Z} [i]$ and has norm $p^2$.\\
\end{enumerate}
\end{center}
Every Gaussian prime is an associate of one of the Gaussian primes described above.
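This classification can be illustrated computationally: an odd rational prime $p$ splits in $\mathbb{Z}[i]$ exactly when it is a sum of two squares, i.e., when $p\equiv 1\pmod 4$, while $2=1^2+1^2$ ramifies. The brute-force sketch below (an illustration, not an efficient algorithm) searches for such a representation:

```python
def sum_of_two_squares(p):
    """Return (x, y) with x^2 + y^2 = p if such a representation exists,
    else None.  For an odd prime p this happens exactly when p % 4 == 1
    (then p = (x+yi)(x-yi) splits in Z[i]); if p % 4 == 3, p stays prime.
    For p = 2 we recover 2 = 1^2 + 1^2, the ramified case."""
    x = 0
    while x * x <= p:
        y2 = p - x * x
        y = int(y2 ** 0.5)
        if y * y == y2:
            return (x, y)
        x += 1
    return None
```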
If $\pi$ is a Gaussian prime then we say $\pi$ \textit{lies over} $p$ if $\pi$ divides the rational prime $p$.
For every Gaussian line $L$, we distinguish two Gaussian integers, $\alpha_0=a+bi$ and $\delta=c+di$, that define $L$ as follows. Let $\alpha_0$ be the Gaussian integer on $L$ of minimum norm, and if there are two such integers, let $\alpha_0 $ be the one with the larger real part. If $L$ is vertical, then take $\delta=i$. Otherwise, let $\alpha_1$ be the Gaussian integer on $L$ closest to $\alpha_0$ (so $N(\alpha_1-\alpha_0)$ is minimal) and with ${\text{Re}}(\alpha_1)>{\text{Re}}(\alpha_0)$. Then take $\delta=\alpha_1-\alpha_0$. Thus $\alpha_0$ is on the line $L$, but $\delta$ is not, provided $\alpha_0\neq 0$. Note that there are only two primitive Gaussian lines with $\alpha_0=0$, namely the real line ${\text{Im}}(z)=0$ and the imaginary line ${\text{Re}}(z)=0$.
With $\alpha_0$ and $\delta$ defined in this way, the lemma below describes all Gaussian integers on $L$. This lemma is essentially Lemma 4.2 in \cite{Gethner}, except that we describe the primitive case and specify $\alpha_0$ and $\delta$, since this is convenient for our work.
\begin{lemma}\label{notation} Let $L$ be a Gaussian line, and let $\alpha_0=a+bi$ and $\delta=c+di$ be as defined above. Then $c$ and $d$ are relatively prime, $c\geq 0$, and the Gaussian integers on $L$ are exactly the Gaussian integers $\alpha_n$ given by
$$ \alpha_n=\alpha_0+\delta n,\ n\in\mathbb{Z}.\label{an}$$
Moreover, $L$ is primitive if and only if $\alpha_0$ and $\delta$ are relatively prime over $\mathbb{Z}[i]$.
\end{lemma}
\begin{proof} If $L$ is vertical then $\delta=i$ and $\alpha_0=k$ for some $k\in\mathbb{Z}$. Then the Gaussian integers on $L$ are given by $\alpha_n=k+ni, n\in\mathbb{Z}$, $L$ is primitive, and $\alpha_0$ and $\delta$ are relatively prime. Thus, the lemma holds for all vertical Gaussian lines.
If $L$ is not vertical, then by our choice of $\delta=c+di$ we have $c> 0$ and $L$ has slope $d/c$. Thus $c$ and $d$ must be relatively prime since otherwise there would be a Gaussian integer on $L$ between $\alpha_0$ and $\alpha_1$, contradicting our choice of $\alpha_1$. Let $\beta$ be a Gaussian integer on $L$. Then $\beta=\alpha_0+r\delta$ for some real number $r$. But, $r=(\beta-\alpha_0)/\delta$ is in the quotient field $\mathbb{Q}(i)$, so $r\in\mathbb{Q}$. Now $r\delta=rc+rdi=\beta-\alpha_0\in\mathbb{Z}[i]$ implies $rc, rd\in\mathbb{Z}$. Since $c$ and $d$ are relatively prime, it follows that $r\in\mathbb{Z}$ as needed.
For the second part of the lemma, first suppose $\alpha_0$ and $\delta$ have a common Gaussian prime divisor $\pi$. Then $\pi$ divides $\alpha_0+\delta n$ for all $n\in\mathbb{Z}$, $i.e.$, $\pi$ divides all Gaussian integers $\alpha_n$ on $L$ and $L$ is not primitive. Conversely, if $\alpha_0$ and $\delta$ are relatively prime, then $\alpha_0$ and ${\alpha_1}=\alpha_0+\delta$ are also relatively prime, and $L$ must be primitive since it contains at least two Gaussian integers that do not share a common divisor. \end{proof}
Throughout this paper, we define a Gaussian line $L$ by its values of
$\alpha_0$ and $\delta$ as given in Lemma \ref{notation}. Given these values, we also define a rational integer $\Delta$ associated to $L$ by
\begin{equation}\label{D}\Delta=ad-bc.\end{equation}
Note that if $\alpha_n=x+yi=\alpha_0+n\delta$, $n\in\mathbb{Z}$, is any other Gaussian integer on $L$, then $x=a+nc$ and $y=b+nd$ and so
$\Delta$ can also be computed by
$\Delta=xd-yc$. That is, $\Delta$ can easily be computed from the values of $\alpha_n$ and $\delta$, not just from $\alpha_0$ and $\delta$. In Section \ref{div}, we use $\Delta$ to characterize the set of rational integers that divide some Gaussian integer on $L$.
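The invariance of $\Delta$ under the choice of base point is easy to check on a small example. In the sketch below, the line with $\alpha_0=3+2i$ and $\delta=1+4i$ is a hypothetical example chosen purely for illustration:

```python
def Delta(a, b, c, d):
    """Delta = a*d - b*c for alpha_0 = a + bi and delta = c + di."""
    return a * d - b * c

def Delta_from_point(x, y, c, d):
    """The same quantity computed from any Gaussian integer
    alpha_n = x + yi on L: Delta = x*d - y*c."""
    return x * d - y * c
```

For every $n$, the point $\alpha_n=\alpha_0+n\delta$ gives the same value, since $xd-yc=(a+nc)d-(b+nd)c=ad-bc$.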
Another use of $\Delta$ is given by the following easy lemma.
\begin{lemma}\label{delta} Let $L$ be a primitive Gaussian line. Then $\Delta=0$ if and only if $L$ is the real or imaginary line, which holds if and only if $\alpha_0=0$.\end{lemma}
\begin{proof} The only part of the lemma that doesn't follow directly from the definitions is the fact that if $\Delta=0$ then $L$ is the real or imaginary line. For this assume $\Delta=0$, so
$ad=bc$. Since $c$ and $d$ are relatively prime, it follows that $c\mid a$ and $d\mid b$. Thus, $a=cx$ and $b=dy$ for some $x,y\in\mathbb{Z}$. This gives $cdx=cdy$. We may assume $c$ and $d$ are both nonzero since otherwise $a$ or $b$ is equal to zero and $L$ is the real or imaginary line. Thus, it follows that $x=y$ and $\alpha_0=x\delta$. Hence, $x=0$ since $\alpha_0$ and
$\delta$ are relatively prime and $\delta\neq 0$. Therefore, $\alpha_0=0$, and $L$ is either the real or imaginary line.
\end{proof}
\section{Primes on Gaussian Lines.}\label{primes} One of the first questions we had when we began our study of Gaussian lines was about the distribution of Gaussian primes on these lines. We wondered whether every primitive Gaussian line contains infinitely many Gaussian primes, or if the existence of even one prime is guaranteed. This led us to consider what T.~Tao's~\cite{Tao} beautiful theorem about arbitrarily shaped constellations in the Gaussian primes says about primes on Gaussian lines, and to formulate and computationally support an analog of Bertrand's Postulate to Gaussian lines.
\medskip
The real and imaginary lines contain infinitely many primes, so it is natural to wonder whether every primitive Gaussian line similarly contains infinitely many Gaussian primes. Finding even one other primitive Gaussian line that contains infinitely many Gaussian primes is a very difficult problem, however, since this is equivalent (by taking norms) to finding a quadratic polynomial that takes on infinitely many rational prime values, and no such polynomials are known. For example, determining whether or not there are infinitely many Gaussian primes on the Gaussian line with
$\alpha_0=1$ and $\delta=i$ ($i.e.$ Gaussian primes of the form $\alpha_n=1+ni$) is equivalent to determining whether or not there are infinitely many rational primes of the form $1+n^2$, which is
Landau's fourth problem given at the 1912 International Congress of Mathematicians and remains open today. In general, it is also not known whether every irreducible quadratic polynomial attains at least one prime value, so similarly we cannot easily decide whether every primitive Gaussian line contains at least one Gaussian prime.
Despite the difficulty of finding a Gaussian line that contains infinitely many Gaussian primes, we can apply a result of Iwaniec and Lemke Oliver to prove that infinitely many Gaussian lines contain infinitely many elements that are the product of at most two Gaussian primes.
For example, it is a deep theorem of Iwaniec \cite{Iwaniec}
that there are infinitely many values of $n$ such that $1+n^2$ is the product of at most two rational primes, from which it is immediate that the vertical Gaussian line defined by $\alpha_0=1$ and $\delta =i$ contains infinitely many elements that are the product of at most two Gaussian primes. Iwaniec notes that his proof generalizes to show that
if $G(n)=An^2+Bn+C$ is an irreducible polynomial with $A>0$ and $C$ odd, then there exist infinitely many integers $n$ such that $G(n)$ has at most two rational prime factors. This theorem also follows from a result of Lemke Oliver~\cite{Oliver} generalizing Iwaniec's work. Applied to Gaussian lines, this result yields the following:
\begin{theorem}\label{2primes}
Let $L$ be a primitive Gaussian line such that $1+i$ does not divide $\alpha_0$. Then $L$ contains infinitely many Gaussian integers that are the product of at most two Gaussian primes.
\end{theorem}
\begin{proof} Let $L$ be a primitive Gaussian line with $\alpha_0=a+bi$, $\delta=c+di$, and $\Delta=ad-bc$ as defined in (\ref{D}). Assume $1+i$ does not divide $\alpha_0$.
The norm of an arbitrary Gaussian integer $\alpha_n$ on $L$ can be viewed as a quadratic polynomial $f(n)$ as follows:
\begin{align}\label{N} f(n)=N(\alpha_n)=N(\alpha_0+\delta n)
&=N(\delta)n^2+Tr(\alpha_0\overline\delta)n+N(\alpha_0)\\
&=(c^2+d^2)n^2+2(ac+bd)n+a^2+b^2\nonumber.
\end{align}
The discriminant of $f$ is equal to $-4\Delta^2$, which is negative unless $\Delta=0$. Thus, $f$ is irreducible over $\mathbb{Z}$ unless $L$ is the real or imaginary line, by Lemma \ref{delta}. Moreover, the leading coefficient of $f$ is positive and the constant term $N(\alpha_0)$ is odd, since we are assuming $1+i$ does not divide $\alpha_0$. It follows from Iwaniec's theorem discussed above that there are infinitely many $n$ such that $f(n)=N(\alpha_n)$ is a product of at most two rational primes, $i.e.$, $\alpha_n$ is a product of at most two Gaussian primes.
\end{proof}
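The computation of the discriminant of $f$ used in this proof can be checked on a small example. The sketch below uses the hypothetical values $\alpha_0=1+2i$ and $\delta=3+i$ (so $\Delta=1\cdot 1-2\cdot 3=-5$), chosen only for illustration:

```python
def norm_poly_coeffs(a, b, c, d):
    """Coefficients (A, B, C) of f(n) = N(alpha_0 + n*delta), equation (N):
    f(n) = (c^2 + d^2) n^2 + 2(a c + b d) n + (a^2 + b^2)."""
    return (c * c + d * d, 2 * (a * c + b * d), a * a + b * b)

# hypothetical example: alpha_0 = 1 + 2i, delta = 3 + i, Delta = 1*1 - 2*3 = -5
A, B, C = norm_poly_coeffs(1, 2, 3, 1)
```

One can verify both that $f(n)$ agrees with $N(\alpha_0+n\delta)=(a+nc)^2+(b+nd)^2$ and that $B^2-4AC=-4\Delta^2$, as claimed.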
Unfortunately Theorem \ref{2primes} does not say anything about the distribution of Gaussian primes on Gaussian lines. For this, we first apply T. Tao's \cite{Tao} astonishing theorem about arbitrarily shaped constellations in the Gaussian primes to Gaussian lines. His theorem says the following:
\begin{theorem}[T. Tao \cite{Tao}] Given any distinct Gaussian integers $v_1,\ldots,v_{k}$, there are infinitely many sets $\{\alpha+rv_1,\ldots, \alpha+rv_{k}\}$, with $\alpha\in\mathbb{Z}[i]$ and $r\in\mathbb{Z}\setminus \{0\}$, all of whose elements are Gaussian primes.\end{theorem}
By choosing $\delta=c+di\in\mathbb{Z}[i]$ with $\gcd(c,d)=1$ as usual, we can apply Tao's theorem with $v_1=\delta$, $v_2=2\delta,\ldots,v_{k}=k\delta$. The theorem guarantees the existence of infinitely many pairs $(\alpha, r)$ such that all the elements in the set
$$P_{\alpha, r}=\{\alpha+r\delta,\alpha+2r\delta,\ldots, \alpha+kr\delta\}$$
are Gaussian primes. For each $\alpha$, there is a primitive Gaussian line $L_\alpha$ with slope $m=d/c$ ($i.e.$, $\delta=c+di$) that passes through all the elements in $P_{\alpha, r}$. Thus, $L_\alpha$ contains $k$ Gaussian primes in arithmetic progression. It is possible that infinitely many of the sets $P_{\alpha, r}$ are actually on the same Gaussian line (that is, infinitely many of the lines $L_\alpha$ have the same $\alpha_0$). In this case, we thus have a Gaussian line that contains infinitely many Gaussian primes. It follows that for a fixed slope $m\in\mathbb{Q}$, either there is a Gaussian line with slope $m$ that contains infinitely many Gaussian primes or, for all $k\geq 1$, there are infinitely many Gaussian lines with slope $m$ that contain $k$ Gaussian primes in arithmetic progression. Considering this for all $m$ and excluding the real and imaginary lines (the case $\alpha_0=0$), gives the following:
\begin{corollary} At least one of the following two statements is true:
\begin{enumerate}
\item There is a Gaussian line with $\alpha_0\neq 0$ that contains infinitely many Gaussian primes.
\item For every rational number $m$ and every positive integer $k$, there are infinitely many distinct Gaussian lines with slope $m$ that contain $k$ Gaussian primes in arithmetic progression.
\end{enumerate}
\end{corollary}
Note that if the first statement in the corollary is true, then by taking norms it is also true that there is a quadratic polynomial that takes on infinitely many prime values.
Regarding the second statement, note that it is not possible for a Gaussian line to contain infinitely many Gaussian primes in arithmetic progression. This follows from the result of Gethner et al. \hskip-.05in \cite{Gethner} mentioned earlier that every Gaussian line contains arbitrarily long sequences of consecutive Gaussian composites.
We also wondered where to look for primes on Gaussian lines. On the real line, Bertrand's Postulate guarantees the existence of a rational prime between $n$ and $2n$ for every rational integer $n\geq 3$. In other words, there exists a prime between $n$ and the next integer that is divisible by $n$. We wondered if the analogous statement holds on Gaussian lines. If $\alpha_n$ is on a Gaussian line $L$ then to characterize the next Gaussian integer on $L$ divisible by $\alpha_n$, we define a function $\nu:\mathbb{Z}[i]\rightarrow \mathbb{Z}$ by
\begin{equation}\label{morm}
\nu(x+iy) = \frac{N(x+iy)}{{\rm gcd}(x,y)}.
\end{equation}
The function $\nu$ is useful because if $\beta\in\mathbb{Z}[i]$ then the smallest positive rational integer divisible by $\beta$ is $\nu(\beta)$, and furthermore, $\nu(\beta)$ divides every rational integer that is divisible by $\beta$. For example, if $\beta = 2+6i = 2(1+3i)$ then the smallest positive rational integer divisible by $\beta$ is $2(1+3i)(1-3i)=20=\nu(\beta)$,
and $\beta$ divides a rational integer $r$ if and only if $r$ is divisible by 20.
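The function $\nu$ is easy to compute directly from (\ref{morm}); the following minimal Python sketch (the function name \texttt{nu} is ours, not from the paper) reproduces the $\beta = 2+6i$ example:

```python
from math import gcd

def nu(x, y):
    """nu(x + iy) = N(x + iy) / gcd(x, y): the smallest positive
    rational integer divisible by the nonzero Gaussian integer x + iy."""
    return (x * x + y * y) // gcd(x, y)

print(nu(2, 6))  # 20: the smallest rational integer divisible by 2 + 6i
print(nu(7, 0))  # 7:  nu(r) = r for a positive rational integer r
```

Note that \texttt{math.gcd} returns $x$ when $y=0$, so the rational-integer case falls out automatically.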
With regards to Bertrand's postulate, if $\alpha_n$ is on a Gaussian line $L$ then the next Gaussian integer on $L$ divisible by $\alpha_n$ is
$\alpha_{n+\nu(\alpha_n)}=\alpha_{n}+\nu(\alpha_n)\cdot\delta$. Notice that $\nu(r)=r$ for all $r\in \mathbb{Z}$, so Conjecture \ref{bertrand2} below is equivalent to Bertrand's Postulate when $L$ is the real line. We include a second conjecture because
$\alpha_{n+N(\alpha_n)}=\alpha_n+ N(\alpha_n)\cdot\delta$ is also divisible by $\alpha_n$ and, as we discuss below, it is more efficient to use the norm when searching for Gaussian primes on lines.
Thus, we make the following two conjectures that generalize Bertrand's Postulate.
\medskip
\begin{conjecture} [Strong Bertrand for Gaussian lines] \label{bertrand2}
Let $L$ be a primitive Gaussian line. If $n>1$, then there is always at least one Gaussian prime on $L$ that lies between $\alpha_n$ and $\alpha_{n+\nu(\alpha_n)}$.
\end{conjecture}
\begin{conjecture} [Weak Bertrand for Gaussian lines] \label{bertrand}
Let $L$ be a primitive Gaussian line. If $n>1$, then there is always at least one Gaussian prime on $L$ that lies between $\alpha_n$ and $\alpha_{n+N(\alpha_n)}$.
\end{conjecture}
\medskip
We wrote a program in Sage \cite{Sage} to search for lines $L$ where Conjecture~\ref{bertrand} fails for some Gaussian integer on $L$. We tested well over $10^{10}$ consecutive Gaussian integers on about $\numprint{700000}$ lines and the conjecture held in every case. About $\numprint{607000}$ of the lines we checked had $\alpha_0=1$ and $\delta=c+di$, where $c$ and $d$ were relatively prime integers ranging from 1 to 1,000. Additionally, we checked over 24,000 lines where $c$ and $d$ were random integers between 300 and $10^{18}$. Finally, we checked about 65,000 lines with $\alpha_0\neq 1$.
Our algorithm for testing Conjecture \ref{bertrand} relies on the fact that if $\alpha_\ell=\pi$ is a Gaussian prime between $\alpha_n$ and $\alpha_{n+N(\alpha_n)}$ for some $0<n<\ell$, then $\pi$ is also between $\alpha_k$ and $\alpha_{k+N(\alpha_k)}$ whenever $n<k<\ell$. This holds because $N(\alpha_n)<N(\alpha_k)$ whenever $0<n<k$ by our choice of $\alpha_0$ being the element of smallest norm on $L$. The corresponding statement does not hold for $\nu(\alpha_n)$, which is why we focus on Conjecture~\ref{bertrand}.
Specifically, for every line $L$ that we tested, we found a sequence of $10^{10}$ Gaussian integers $\alpha_{\ell_i}$, $1\leq i\leq 10^{10}$, on $L$ such that the following three conditions are satisfied:
\begin{enumerate}
\item Each $\alpha_{\ell_i}$ is a Gaussian prime;
\item The first Gaussian prime in the sequence, $\alpha_{\ell_1}$, lies between $\alpha_1$ and $\alpha_{1+N(\alpha_1)}$, $i.e.$, $1<\ell_1<1+N(\alpha_1)$;
\item For $i\geq 1$, the Gaussian prime $\alpha_{\ell_{i+1}}$ lies between the previous prime $\alpha_{\ell_i}$ and the Gaussian integer $\alpha_{{\ell_i}+N(\alpha_{\ell_i})}$ on $L$, $i.e.$, $\ell_i<\ell_{i+1}<\ell_i+N(\alpha_{\ell_i})$.
\end{enumerate}
This verifies Conjecture \ref{bertrand} on the line $L$ for all $1<n\leq\ell_{10^{10}}$.
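The search procedure can be sketched in Python (this is not the authors' Sage program; the test line $\alpha_0=1$, $\delta=1+2i$ and the range of $n$ below are our illustrative choices):

```python
def is_prime(n):
    """Trial-division primality test for rational integers."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

def is_gaussian_prime(x, y):
    """x + iy is a Gaussian prime iff N(x + iy) is a rational prime,
    or one coordinate is 0 and the other is +/-p with p prime, p = 3 mod 4."""
    if x == 0:
        return is_prime(abs(y)) and abs(y) % 4 == 3
    if y == 0:
        return is_prime(abs(x)) and abs(x) % 4 == 3
    return is_prime(x * x + y * y)

def weak_bertrand_holds(a, b, c, d, n):
    """Is there a Gaussian prime alpha_k on the line with alpha_0 = a + bi,
    delta = c + di, where n < k < n + N(alpha_n)?"""
    norm = (a + n * c) ** 2 + (b + n * d) ** 2
    return any(is_gaussian_prime(a + k * c, b + k * d)
               for k in range(n + 1, n + norm))

# Check weak Bertrand on the line alpha_0 = 1, delta = 1 + 2i for small n.
print(all(weak_bertrand_holds(1, 0, 1, 2, n) for n in range(1, 12)))  # True
```

For example, with $n=1$ we have $\alpha_1=2+2i$ of norm $8$, and the Gaussian prime $\alpha_4=5+8i$ (norm $89$) lies in the window $1<k<9$.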
If either conjecture is true, then it would follow that there are infinitely many Gaussian primes on every Gaussian line. But proving either conjecture for even one Gaussian line (with $\alpha_0\neq 0$) would give a Gaussian line with infinitely many Gaussian primes, and hence a quadratic polynomial that takes on infinitely many rational prime values.
\section{Divisibility on Gaussian Lines.}\label{div}
Every second integer on the real line is divisible by 2, every third by 3, every fourth by 4, and so on. We wondered if this basic periodicity property of divisibility extends to Gaussian lines, and furthermore, if there is a simple way to characterize those Gaussian primes that occur as divisors on a particular Gaussian line.
In this section we show that the answer to both of these questions is {\em YES}.
\medskip
Throughout this section, let $L$ be a primitive Gaussian line with $\alpha_0=a+bi$ and $\delta = c+di$ as defined in Section \ref{background}. Then $\alpha_0$ and $\delta$ are relatively prime Gaussian integers, $c$ and $d$ are relatively prime rational integers, and the Gaussian integers on $L$ are exactly the numbers $\alpha_n=\alpha_0+\delta n$, $n\in \mathbb{Z}$.
Also, recall the function $\nu:\mathbb{Z}[i]\rightarrow \mathbb{Z}$ defined in (\ref{morm}), as it is used here and throughout the rest of the paper.
In the special case where $L$ is the real line, we have $\alpha_0=0$, $\delta=1$, and $\alpha_n=n$, $n\in\mathbb{Z}$. In this case, divisibility of integers on the line $L$ by a rational integer $r$ is periodic with period $r$. Our first theorem shows that this periodicity generalizes to arbitrary primitive Gaussian lines, specifically that divisibility by a Gaussian integer $\beta$ is periodic with period $\nu(\beta)$. Note that the periodicity of divisibility on the real line is a special case of the following theorem.
\begin{theorem} \label{periodicity}
Suppose $\beta\in\mathbb{Z}[i]$ divides some Gaussian integer $\alpha_t$ on $L$.
Then $\beta$ divides $\alpha_{n}$ if and only if $n \equiv t \pmod {\nu(\beta)}$.
\end{theorem}
\begin{proof}
Suppose $\beta $ divides $\alpha_{t}$ for some $t$.
Then $\beta $ and $\delta$ are relatively prime, since any common divisor would also divide $\alpha_0= \alpha_t - \delta t$, but $\delta$ and $\alpha_0$ are relatively prime. Thus, $\beta$ divides $\alpha_n$ if and only if $\beta$ divides $\alpha_n-\alpha_t$, which in turn holds if and only if $\beta$ divides $n-t$ since $\alpha_n- \alpha_t = \delta(n-t)$.
But $n-t \in \mathbb{Z}$, and $\beta$ divides a rational integer if and only if $\nu(\beta)$ does, so $\beta$ divides $\alpha_n$ if and only if $\nu(\beta)$ divides $n-t$.
\end{proof}
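As a concrete numerical check of Theorem \ref{periodicity} (the line and the divisor below are our choices, not from the text): on the line with $\alpha_0=1$ and $\delta=1+2i$, the Gaussian prime $\beta=2+i$ divides $\alpha_2=3+4i=(2+i)^2$, and $\nu(2+i)=5$, so $\beta$ should divide $\alpha_n$ precisely when $n\equiv 2\pmod 5$:

```python
def divides(c, d, x, y):
    """Does c + di divide x + iy in Z[i]?  True iff both coordinates
    of (x + iy)(c - di) are divisible by N(c + di) = c^2 + d^2."""
    n = c * c + d * d
    return (x * c + y * d) % n == 0 and (y * c - x * d) % n == 0

# alpha_n = (1 + n) + 2n*i on the line alpha_0 = 1, delta = 1 + 2i.
# Periodicity: 2 + i divides alpha_n iff n = 2 (mod nu(2 + i)) = 2 (mod 5).
print(all(divides(2, 1, 1 + n, 2 * n) == (n % 5 == 2)
          for n in range(-25, 25)))  # True
```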
Theorem \ref{periodicity} implies that consecutive Gaussian integers $\alpha_n$ and $\alpha_{n+1}$ on $L$ are always relatively prime over $\mathbb{Z}[i]$, just as consecutive rational integers on the real line are always relatively prime over $\mathbb{Z}$. Also, because Theorem \ref{periodicity} is about Gaussian integers that divide some element on $L$, a natural follow-up problem is to characterize those Gaussian integers that occur as divisors of elements on $L$. In this section we specialize to rational integer and Gaussian prime divisors, and in Section \ref{divisor} we give the complete characterization of the set of Gaussian integer divisors.
We define the {\em divisor set of $L$}, denoted ${\mathcal{D}}(L)$, to be the set of Gaussian integers that divide some Gaussian integer on $L$. Our main result in Section \ref{divisor} (Theorem \ref{bigtheorem}) is a complete characterization of this set. Here we begin by characterizing two of its subsets, the {\em rational set} and the {\em Gaussian-prime set}, which we need for our work in Section~\ref{CRT}. We define the {\em rational set of $L$}, denoted $\mathcal{R}(L)$, to be the set of rational integers that divide some Gaussian integer on $L$, and the {\em Gaussian-prime set of $L$}, denoted ${\mathcal{G}\mathcal{P}}(L)$,
to be the set of non-rational Gaussian primes that divide some Gaussian integer on $L$. For easy reference, below are the set theoretical definitions of these three sets for a given Gaussian line $L$:
\begin{align*}
\mathcal{R}(L) &=\{r \in \mathbb{Z} : r|\alpha_n \text{ for some } n \in \mathbb{Z} \}; \\
{\mathcal{G}\mathcal{P}}(L) &= \{ \pi\in\mathbb{Z}[i] : \pi\ \text{is a Gaussian prime, }\pi \not\in \mathbb{Z}, \text{ and }\pi|\alpha_n \text{ for some } n \in \mathbb{Z}\};\\
{\mathcal{D}}(L) &= \{ \beta\in\mathbb{Z}[i]:\beta|\alpha_n\ \text{for some } n \in \mathbb{Z} \}.
\end{align*}
Note that an element in any of these three sets does not necessarily lie on the line $L$, but simply divides some Gaussian integer that lies on $L$.
In general, the divisor set ${\mathcal{D}}(L)$ of $L$ is not closed under multiplication. For example, suppose
$1+2i$ divides $\alpha_0$ and $1-2i$ divides $\alpha_1$, so $1+2i, 1-2i\in{\mathcal{D}}(L)$. Since $\nu(1+2i)=\nu(1-2i)=5$, it follows from Theorem \ref{periodicity} that $1+2i$ and $1-2i$ both divide every fifth Gaussian integer on $L$, starting with $\alpha_0$ and $\alpha_1$ respectively. Thus,
no Gaussian integer on $L$ is divisible by their product, $i.e.$,
$(1+2i)( 1-2i)=5\notin{\mathcal{D}}(L)$, and ${\mathcal{D}}(L)$ is not closed under multiplication. Our first lemma shows that this type of restriction from Theorem~\ref{periodicity} is really the only property preventing the divisor set from being closed under multiplication.
\begin{lemma} \label{ncrt}
Let $\beta$ and $\gamma$ be in the divisor set ${\mathcal{D}}(L)$ of $L$. If $\nu (\beta)$ and $\nu (\gamma)$ are relatively prime, then $\beta \gamma$ is in ${\mathcal{D}}(L)$.
\end{lemma}
\begin{proof}
Suppose $\beta,\gamma\in{\mathcal{D}}(L)$. Then, by Theorem \ref{periodicity}, there exist integers $s$ and $t$ such that $\beta|\alpha_n$ if and only if $n \equiv s \pmod{\nu (\beta)}$ and $\gamma|\alpha_n$ if and only if $n \equiv t \pmod{\nu (\gamma)}$. By the Chinese Remainder Theorem, there is an $n$ that satisfies both congruences. Therefore, $\beta \gamma \in {\mathcal{D}}(L)$.
\end{proof}
We use Lemma \ref{ncrt} to prove our next theorem and characterize the rational set of $L$. Recall from (\ref{D}) that $\Delta=ad-bc$ is a rational integer associated to $L$.
\begin{theorem} \label{ZL} Let $r\in\mathbb{Z}$. Then $r$ is in the rational set $\mathcal{R}(L)$ of $L$ if and only if $r$ divides~$\Delta$.\end{theorem}
\begin{proof} Note that if $r,s\in\mathbb{Z}$ satisfy $rs\in\mathcal{R}(L)$, then $r\in\mathcal{R}(L)$ and $s\in\mathcal{R}(L)$ by the definition of the rational set.
It follows from this and Lemma \ref{ncrt} that it is sufficient to prove Theorem \ref{ZL} for prime powers.
Let $r=p^t$, where $p$ is a rational prime. Then $r\in\mathcal{R}(L)$ if and only if $p^t$ divides $\alpha_n$ for some $n\in\mathbb{Z}$. We have that $\alpha_n=\alpha_0+n\delta$, so ${\text{Re}}(\alpha_n)=a+nc$ and ${\text{Im}}(\alpha_n)=b+nd$.
Thus, $p^t$ divides $\alpha_n$ if and only if $p^t$ divides both $a+nc$ and $b+nd$. Recall that $c$ and $d$ are relatively prime, so at least one of them is not divisible by $p$. Without loss of generality, we assume that $p$ does not divide $c$. Then $c$ has a multiplicative inverse modulo $p^t$. Thus we have:
\begin{eqnarray*} p^t|\alpha_n &\Longleftrightarrow &a+nc\equiv 0\pmod{p^t}
\quad {\rm and}\quad b+nd\equiv 0\pmod{p^t}\\
&\Longleftrightarrow &b\equiv -nd\pmod {p^t}, \text{ where } n\equiv -ac^{-1}\pmod {p^t}\\
&\Longleftrightarrow &b\equiv ac^{-1}d\pmod {p^t}\\
&\Longleftrightarrow &ad\equiv bc\pmod{p^t}\\
&\Longleftrightarrow &p^t|\,\Delta,
\end{eqnarray*}
as needed.
\end{proof}
Thus, the rational integers that divide some Gaussian integer $\alpha_n$ on $L$ are exactly the divisors of
$\Delta$. Consequently, the rational set $\mathcal{R}(L)$ of $L$ is finite unless $\Delta=0$; that is, unless $L$ is the real or imaginary line.
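Theorem \ref{ZL} is easy to test numerically. Here is a hedged Python sketch on the line $\alpha_0=1$, $\delta=2+5i$, for which $\Delta=ad-bc=5$ (the line and the search bounds are our choices); it uses the fact that divisibility of both coordinates of $\alpha_n$ by $r$ is periodic in $n$ with period $r$:

```python
def rational_divisors(a, b, c, d, r_max):
    """Rational integers 1 <= r <= r_max that divide some alpha_n on the
    line alpha_0 = a + bi, delta = c + di.  By periodicity it suffices
    to test n in the range 0 <= n < r."""
    return {r for r in range(1, r_max + 1)
            if any((a + n * c) % r == 0 and (b + n * d) % r == 0
                   for n in range(r))}

# Line alpha_0 = 1, delta = 2 + 5i, so Delta = 1*5 - 0*2 = 5.
found = rational_divisors(1, 0, 2, 5, 30)
print(found == {r for r in range(1, 31) if 5 % r == 0})  # True: {1, 5}
```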
Our next theorem characterizes the Gaussian prime set of $L$ and shows, by contrast, that this set is always infinite.
\begin{theorem} \label{PL} Let $\pi$ be a Gaussian prime with $\pi\not\in\mathbb{Z}$. Then $\pi\in{\mathcal{G}\mathcal{P}}(L)$ if and only if $\pi$ does not divide $\delta$.
\end{theorem}
\begin{proof}
First suppose $\pi$ divides $\delta$. Then $\pi$ does not divide $\alpha_n=\alpha_0+\delta n$ for all $n$ since $\alpha_0$ and $\delta$ are relatively prime. Thus, $\pi \not \in {\mathcal{G}\mathcal{P}}(L)$ in this case.
Conversely, suppose $\pi$ does not divide $\delta$. Let $\pi$ lie over the rational prime $p$. If $p$ divides $\Delta$, then $p$ divides some Gaussian integer $\alpha_n$ on $L$ by Theorem~\ref{ZL}. Thus $\pi$ also divides $\alpha_n$, and $\pi\in{\mathcal{G}\mathcal{P}}(L)$ as needed.
Thus, from now on we assume $p$ does not divide $\Delta$, and show that $\pi\in{\mathcal{G}\mathcal{P}}(L)$ in this case as well.
As in (\ref{N}), the norm of an arbitrary Gaussian integer $\alpha_n$ on $L$ can be viewed as a quadratic polynomial
$$f(n)=N(\alpha_0+\delta n)=N(\delta)n^2+Tr(\alpha_0\overline\delta\,)n+N(\alpha_0),$$
with discriminant Disc$(f)=-4\Delta^2$.
If $p\neq 2$, then $p\equiv 1\pmod 4$ since $\pi\notin\mathbb{Z}$. In this case, $-1$ is a square modulo $p$ and so Disc$(f)$ is a non-zero square modulo $p$. Therefore, $f(n)$ has two distinct roots modulo $p$, so there are $r,s\in\mathbb{Z}$, $r\not\equiv s\pmod p$, such that
$N(\alpha_r)\equiv N(\alpha_s)\equiv 0\pmod p$. It follows from Theorem \ref{periodicity} that $\pi$ and $\overline\pi$ both divide exactly one of $\alpha_r$ and $\alpha_s$. Thus $\pi\in{\mathcal{G}\mathcal{P}}(L)$ in this case.
If $p=2$, then Disc$(f)\equiv 0\pmod p$ and $f$ has a double root modulo $p$.
It follows that $\pi$ divides either $\alpha_0$ or $\alpha_1$. Thus, $\pi\in{\mathcal{G}\mathcal{P}}(L)$ in this case as well.
\end{proof}
Since $\delta\neq 0$, it follows from Theorem \ref{PL} that the divisor set of a Gaussian line always contains infinitely many Gaussian primes. In particular, we have the following corollary to Theorem \ref{PL}.
\begin{corollary} \label{minprimes} The divisor set ${\mathcal{D}}(L)$ of $L$ contains at least one Gaussian prime that lies over $p$ for every rational prime $p\equiv 1\pmod 4$.
\end{corollary}
\begin{proof}
Let $\pi$ be a Gaussian prime that lies over the rational prime $p\equiv 1\pmod 4$. Suppose that neither $\pi$ nor $\overline{\pi}$ are in ${\mathcal{D}}(L)$. Then neither is in ${\mathcal{G}\mathcal{P}}(L)$ and so both divide $\delta$ by Theorem \ref{PL}. Thus $p$ divides $\delta$ and $p$ is a common divisor of $c$ and $d$, which contradicts $L$ being primitive.\end{proof}
Taken together, Theorems \ref{ZL} and \ref{PL} imply that if a Gaussian prime $\pi$ divides $\delta$, and $\pi$ lies over $p$, then $p$ does not divide $\Delta$ (or, equivalently, $\pi$ does not divide $\Delta$). This can also be seen directly: If $\pi$ is a common divisor of both $\delta$ and $\Delta$, then $\pi$ divides $d$ since $d\alpha_0=\Delta+b\delta$ and $\alpha_0$ and $\delta$ are relatively prime. Now, $\delta=c+di$, so $\pi$ also divides $c$. But $c, d\in \mathbb{Z}$, so it follows that $p$ is a common divisor of $c$ and $d$, which contradicts $L$ being primitive.
Theorems \ref{ZL} and \ref{PL} characterize the rational and Gaussian prime set of a Gaussian line. In Section \ref{divisor} we use these theorems to give a complete characterization of the divisor set as well. First we turn to some consequences of the theorems in this section.
\section{The Chinese Remainder Theorem for Gaussian Lines.}\label{CRT}
In this section we prove a theorem about Gaussian lines that is analogous to the Chinese Remainder Theorem for $\mathbb{Z}$. We also use the Chinese Remainder Theorem for $\mathbb{Z}[i]$ to prove that there are always infinitely many Gaussian lines that satisfy any given CRT-type divisibility properties.
\medskip
The Chinese Remainder Theorem (CRT) for $\mathbb{Z}$ implies that there will always be a solution to a system of linear congruences over $\mathbb{Z}$ when the moduli are pairwise relatively prime. It is well known that this theorem generalizes with the same proof to the Gaussian integers (or to any Euclidean domain). We state this more general version here since we will need it in our later work.
\begin{theorem}[CRT for $\Z[i]$] \label{chinese}
Let $\mu_1,\mu_2,\ldots, \mu_k$ be pairwise relatively prime Gaussian integers and $\beta_1, \beta_2,\ldots, \beta_k$ be arbitrary Gaussian integers.
Then the system of $k$ congruences
$$x \equiv \beta_j \pmod{\mu_j}, \ 1\leq j\leq k,$$
has a unique solution $\tau\in\mathbb{Z}[i]$ modulo $\mu_1\mu_2\ldots \mu_k$.
\end{theorem}
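As a small brute-force illustration of Theorem \ref{chinese} (the moduli $\mu_1=1+i$, $\mu_2=2+i$ and residues below are our choices; we search a small box rather than using the constructive Euclidean argument):

```python
def divides(c, d, x, y):
    """Does c + di divide x + iy in Z[i]?"""
    n = c * c + d * d
    return (x * c + y * d) % n == 0 and (y * c - x * d) % n == 0

# Solve x = 1 (mod 1+i) and x = i (mod 2+i) over a small box of Z[i].
solutions = [(u, v) for u in range(-3, 4) for v in range(-3, 4)
             if divides(1, 1, u - 1, v) and divides(2, 1, u, v - 1)]

# All solutions found are congruent modulo mu_1 * mu_2 = (1+i)(2+i) = 1 + 3i,
# matching the uniqueness statement of the theorem.
print(all(divides(1, 3, u - solutions[0][0], v - solutions[0][1])
          for (u, v) in solutions))  # True
```

For instance, $x=i$ is a solution: $i-1$ is divisible by $1+i$, and $i-i=0$ is divisible by $2+i$.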
Note that CRT for $\mathbb{Z}$ is just Theorem \ref{chinese} with $\beta_j, \mu_j\in\mathbb{Z}$, $1\leq j\leq k$. In the spirit of this paper, we extend CRT for $\mathbb{Z}$ to CRT for Gaussian lines. First we restate CRT for $\mathbb{Z}$ in terms of divisibility since the analogous statement for Gaussian lines is given in terms of divisibility.
\begin{theorem}[CRT for $\mathbb{Z}$] \label{crtZ}
Let $m_1, m_2,\ldots, m_k$ be pairwise relatively prime rational integers and $b_1, b_2,\ldots, b_k$ be arbitrary rational integers. Then there is a unique rational integer $t$ modulo $m_1 m_2\cdots m_k$ such that
$$m_1|{(t+b_1)}, \ m_2|{(t+b_2)}, \ \ldots, \ m_k|{(t+b_k)}.$$
\end{theorem}
We use the function $\nu:\mathbb{Z}[i]\rightarrow\mathbb{Z}$ defined in (\ref{morm}) to extend Theorem \ref{crtZ} to any Gaussian line. Since $\nu(n)=n$ for all $n\in\mathbb{Z}$, the following theorem is exactly CRT for $\mathbb{Z}$ when $L$ is the real line.
\begin{theorem}[CRT for Gaussian lines] \label{crtGL} Let $L$ be a primitive Gaussian line, and suppose
$\mu_1, \mu_2,\ldots, \mu_k$ are Gaussian integers in the divisor set ${\mathcal{D}}(L)$ of $L$ such that $\nu(\mu_1),\nu(\mu_2), \ldots, \nu(\mu_k)$ are pairwise relatively prime. Let $b_1, b_2,\ldots, b_k$ be arbitrary rational integers. Then there is a unique rational integer $t$ modulo $\nu(\mu_1)\nu(\mu_2) \cdots \nu(\mu_k)$ such that
$$ \mu_1|\alpha_{t+b_1}, \ \mu_2|\alpha_{t+b_2}, \ \ldots, \ \mu_k|\alpha_{t+b_k}.$$
\end{theorem}
\begin{proof}
Since $\mu_j\in{\mathcal{D}}(L)$, $1\leq j\leq k$, it follows from Theorem \ref{periodicity} that for each $j$ there exists $m_j\in\mathbb{Z}$ such that $\mu_j$ divides the Gaussian integer $\alpha_{n}$ on $L$ if and only if $n \equiv m_j \pmod {\nu(\mu_j)}$.
By Theorem \ref{crtZ}, the system of $k$ congruences
$$x \equiv m_j-b_j \pmod {\nu(\mu_{j})}, \ 1\leq j\leq k,$$
has a unique solution $x\equiv t\pmod{\nu(\mu_1)\nu(\mu_2) \cdots \nu(\mu_k)}$.
Thus, for $1\leq j\leq k$, we have $t+b_j\equiv m_j \pmod{\nu(\mu_j)}$ and $\alpha_{t+b_j}$ is divisible by $\mu_j$ as needed.
\end{proof}
Now, suppose you want to find a primitive Gaussian line that satisfies certain CRT-type divisibility properties. For instance, suppose you want a line where $2+i$ divides $\alpha_1$, $2+3i$ divides $\alpha_2$, and $4080 + 1397i$ divides $\alpha_3$. It follows from our next theorem that infinitely many such lines exist (one example in this case is the line defined by $\alpha_0=1$ and $\delta=6297+8234i$), and the proof shows how to construct them.
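The example line is easy to verify; a minimal Python check (the helper \texttt{divides} is ours):

```python
def divides(c, d, x, y):
    """Does c + di divide x + iy in Z[i]?  Equivalent to both coordinates
    of (x + iy)(c - di) being divisible by N(c + di)."""
    n = c * c + d * d
    return (x * c + y * d) % n == 0 and (y * c - x * d) % n == 0

# The line alpha_0 = 1, delta = 6297 + 8234i from the text:
# alpha_n = (1 + 6297n) + 8234n*i.
alpha = lambda n: (1 + 6297 * n, 8234 * n)
print(divides(2, 1, *alpha(1)))        # True: (2+i)  divides alpha_1
print(divides(2, 3, *alpha(2)))        # True: (2+3i) divides alpha_2
print(divides(4080, 1397, *alpha(3)))  # True: (4080+1397i) divides alpha_3
```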
\begin{theorem} \label{newchinese}
Let $b_1, b_2, \ldots, b_k $ be
rational integers and $\mu_1, \mu_2, \ldots, \mu_k$ be pairwise relatively prime Gaussian integers. Then there are infinitely many primitive Gaussian lines $L$ such that $\mu_j $ divides the Gaussian integer $ \alpha_{b_j}$ \hskip-.03in on $L$ for all $1 \leq j\leq k$.
\end{theorem}
\begin{proof}
To show there are infinitely many primitive Gaussian lines $L$ that satisfy the desired divisibility conditions, we show that there are infinitely many Gaussian integers
$\alpha_0=a+bi$ and $\delta=c+di$ that satisfy all of the following properties:
\begin{quote} \begin{enumerate}
\item[(a)] $N(\alpha_0+n\delta)>N(\alpha_0)$ for all $n\neq 0$, $n\in\mathbb{Z}$;
\item[(b)] gcd$(c,d)=1$ and $c\geq 0$;
\item[(c)] $\alpha_0$ and $\delta$ are relatively prime over $\Z[i]$;
\item[(d)] $\mu_j $ divides $ \alpha_{b_j}\hskip-.035in=\alpha_0+b_j\delta$ for all $1 \leq j\leq k$.
\end{enumerate}
\end{quote}
We first choose $\alpha_0$. For $1 \leq j\leq k$, let $\gamma_j\in\mathbb{Z}[i]$ be a common divisor of $\mu_j$ and $b_j$ with maximal norm (each $\gamma_j$ is uniquely defined up to multiplication by a unit in $\Z[i]$). Let $\lambda$ be any Gaussian integer that is relatively prime to both $\mu_1\mu_2\cdots\mu_k$ and $b_1b_2\cdots b_k$.
Define $\alpha_0$ by
$$\alpha_0=\lambda\prod_{j=1}^k \gamma_j\in\Z[i].$$
There are infinitely many possibilities for $\lambda$, so there are infinitely many possibilities for $\alpha_0$.
For each $\alpha_0$, we show there are infinitely many $\delta=c+di\in\Z[i]$ such that the above properties (a)--(d) are satisfied. Property (d) is equivalent to $\delta$ being a solution to the system of $k$ congruences
$$\alpha_0+b_j x\equiv 0\pmod{\mu_j}, \ \ 1 \leq j\leq k.$$
Dividing by $\gamma_j$ for each $j$, this is equivalent to $\delta$ being a solution to the system
$$x\equiv -\left({\alpha_0\over\gamma_j}\right)\kappa_j^{-1} \pmod{\omega_j}, \ \ 1 \leq j\leq k,$$
where each $\kappa_j=b_j/\gamma_j\in\Z[i]$ is relatively prime to $\omega_j=\mu_j/\gamma_j\in\Z[i].$
Note that each $\alpha_0/\gamma_j$ is also relatively prime to $\omega_j$ since
$\omega_1,\omega_2,\ldots,\omega_k$
are pairwise relatively prime. Thus, any solution to this latter system of congruences is relatively prime to the product $\omega_1\omega_2\cdots\omega_k$.
Since $\delta$ will be a solution, we include an additional congruence to ensure that any solution is also relatively prime to $\alpha_0$, and so property~(c) will automatically be satisfied. Let $\beta$ be the product of all the Gaussian primes that divide $\alpha_0$ but do not divide
$\omega_1\omega_2\cdots\omega_k$, and let $\beta=1$ if no such Gaussian primes exist.
Then $\delta$ is relatively prime to $\alpha_0$ if it is relatively prime to both $\beta$ and
$\omega_1\omega_2\cdots\omega_k$.
Thus, to ensure that properties (c) and (d) are both satisfied, it is sufficient that $\delta$ be a solution to the following system of $k+1$ congruences:
\begin{align*}
x & \equiv 1\pmod\beta,\ \ {\text{and}} \\
x &\equiv -\left({\alpha_0\over\gamma_j}\right)\kappa_j^{-1} \pmod{\omega_j}, \ \ 1 \leq j\leq k.
\end{align*}
This system has a unique solution
$\tau=r+si\in\Z[i]$ modulo $\beta\omega_1\omega_2\cdots\omega_k$ by CRT for Gaussian integers (Theorem
\ref{chinese}) since $\beta,\omega_1,\omega_2,\ldots,\omega_k$
are pairwise relatively prime. Thus, it remains to construct $\delta=c+di$ that satisfies properties (a) and (b), and such that $\delta\equiv\tau\pmod{\beta\omega_1\omega_2\cdots\omega_k}$, so that properties (c) and (d) hold as well.
To satisfy property (a), we construct $\delta=c+di$
such that
$$N(\alpha_n)=N(\alpha_0+n\delta)=(c^2+d^2)n^2+2(ac+bd)n+a^2+b^2, \ \ n\in\mathbb{Z},$$
obtains its minimum value only when $n=0$. For any $c,d\in\mathbb{Z}$, the quadratic function,
$$f(x)=(c^2+d^2)x^2+2(ac+bd)x+a^2+b^2, \ \ x\in\mathbb{R},$$
obtains its absolute minimum when $f^\prime(x)=0$, $i.e.$, when
$x={-(ac+bd)/({c^2+d^2})}.$
Thus, since $f$ is symmetric, for property (a) to be satisfied and $f(0)$ to be the minimum integer value of $f$, it is sufficient that $c$ and $d$ satisfy
\begin{equation}\label{1prime}-{1\over 2}<{{ac+bd}\over{c^2+d^2}}<{1\over 2}.\end{equation}
For a fixed $d$,
$$\lim_{c\rightarrow\infty}\left({{ac+bd}\over{c^2+d^2}}\right)=0,$$
so (\ref{1prime}) holds for all $c$ larger than some bound that depends on $d$. We use this fact to complete the proof.
It is sufficient to choose $\delta=c+di$ such that (\ref{1prime}) holds, gcd$(c,d)=1$, $c\geq 0$, and
$\delta\equiv\tau\equiv r+si\pmod{\beta\omega_1\omega_2\cdots\omega_k}$.
Let $M=N(\beta\omega_1\omega_2\cdots\omega_k)\in\mathbb{Z}$. We first consider $s=0$. In this case, $\tau=r$ is relatively prime to $M$ since it is a non-zero rational integer that is relatively prime to
$\beta\omega_1\omega_2\cdots\omega_k$.
It follows from Dirichlet's Theorem on Primes in Arithmetic Progressions that there are infinitely many rational primes congruent to $r$ modulo $M$. Thus, we can choose a rational prime $p$ such that $p\equiv r\pmod{M}$, $p>M$, and $p$ is large enough so that (\ref{1prime}) holds for $c=p$ and $d=M$.
Define $\delta$ by
$\delta = p+Mi$. Then, (\ref{1prime}) holds and gcd$(c,d)=1$, since $p$ is prime and larger than $M$.
Also, $\delta\equiv\tau\pmod{M}$, so $\delta\equiv\tau\pmod{\beta\omega_1\omega_2\cdots\omega_k}$, since $\beta\omega_1\omega_2\cdots\omega_k$ divides $M$.
Thus, $\alpha_0$ and $\delta$ define a primitive Gaussian line that satisfies the divisibility conditions stated in Theorem~\ref{newchinese}. Moreover, according to Dirichlet's Theorem, there are infinitely many choices of the prime $p$. Thus, there are infinitely many choices for $\delta$, and so infinitely many primitive Gaussian lines with the same $\alpha_0$ that satisfy the conditions.
Similarly, if $r=0$, then $\tau=si$, where $s$ is a non-zero rational integer that is relatively prime to $M$. Proceed as above to get a rational prime $p\equiv s\pmod{M}$ with $p>M$ and $p$ large enough so that (\ref{1prime}) holds for $c=M$ and $d=p$.
Define $\delta$ by $\delta=M+pi$. Then $\alpha_0$ and $\delta$ define a primitive Gaussian line that satisfies the divisibility conditions stated in Theorem \ref{newchinese}. Again, by Dirichlet's Theorem, there are infinitely many choices of the prime $p$ and thus infinitely many primitive Gaussian lines with this $\alpha_0$ that satisfy these conditions.
Finally, suppose $r$ and $s$ are both non-zero rational integers. Let $h$ be the smallest positive rational divisor of $r$ such that gcd$(r/h,M)=1$. Again by
Dirichlet's Theorem, we can find a rational prime $p>s$ such that $p\equiv r/h\pmod M$
and $p$ is large enough so that (\ref{1prime}) holds for $c=ph$ and $d=s$.
Define $\delta$ by
$$\delta=ph+si.$$
Then $\delta\equiv\tau\pmod{\beta\omega_1\omega_2\cdots\omega_k}$.
To see that gcd$(ph,s)=1$, first observe that gcd$(p,s)=1$ since $p>s$ is prime. Also, gcd$(h,s)=1$, since any common rational prime divisor $q$ of $h$ and $s$ is also a common divisor of $\tau$ and $M$. Hence, there is a Gaussian prime that lies over $q$ that divides both $\tau$ and $\beta\omega_1\omega_2\cdots\omega_k$, which is a contradiction since they are relatively prime. Thus, as above, $\alpha_0$ and $\delta$ define a primitive Gaussian line that satisfies the required divisibility conditions, and again there are infinitely many choices of $\delta$ by Dirichlet's Theorem.
\end{proof}
\section{The Divisor Set of a Gaussian Line.}\label{divisor}
We now return to questions about divisibility on Gaussian lines related to those discussed in Section \ref{div}. For a given Gaussian line $L$, we first characterize those Gaussian-prime powers that exactly divide some Gaussian integer on $L$. Using this, our main theorem in this section gives a complete characterization of the divisor set ${\mathcal{D}}(L) $ of $L$.
\medskip
Theorem \ref{PL} in Section \ref{div} resolves the question of which Gaussian primes occur in the divisor set ${\mathcal{D}}(L)$ of $L$, but does not address division by prime powers. For example, Theorem~\ref{PL} does not answer the following question: If $\pi\in{\mathcal{D}}(L)$, then is $\pi^{100}$ guaranteed to be in ${\mathcal{D}}(L)$? Nor does it say anything about which prime powers $\pi^k$ {\em exactly divide} some Gaussian integer $\alpha_n$ on $L$ ($i.e.$, $\pi^k$ divides $\alpha_n$, but $\pi^{k+1}$ does not). For example, if $\pi^{50}\in{\mathcal{D}}(L)$, then certainly $\pi, \pi^2,\ldots,\pi^{49}\in{\mathcal{D}}(L)$, but is $\pi$ guaranteed to exactly divide some Gaussian integer on $L$? What about $\pi^2$ or $\pi^3$ or any other power of $\pi$? Our next theorem shows that the answer to all of these questions is {\em YES} whenever $\pi$ lies over a rational prime $p\equiv 1\pmod 4$, but is conditional for other values of $p$. We restrict to lines with $\Delta\neq 0$ since this simplifies the proof and exact division by all prime powers holds on the real and imaginary lines.
\begin{theorem} \label{exact} Let $L$ be a primitive Gaussian line with $\Delta\neq 0$. Suppose $\pi$ is a Gaussian prime that lies over the rational prime $p$.
\begin{enumerate}
\item If $p\equiv 1\pmod 4$, then the following are equivalent:
\begin{enumerate}
\item $\pi$ does not divide $\delta$.
\item $\pi^k\in{\mathcal{D}}(L)$ for some positive integer $k$.
\item For every positive integer $r$, $\pi^r$ exactly divides some Gaussian integer on $L$. In particular, $\pi^r\in{\mathcal{D}}(L)$ for all positive integers $r$.
\end{enumerate}
\item If $p=2$, then the following are equivalent:
\begin{enumerate}
\item $1+i$ does not divide $\delta$.
\item $(1+i)^k\in{\mathcal{D}}(L)$ for some positive integer $k$.
\item Let $2^s$ be the exact power of $2$ that divides $\Delta$, and $\beta\in\Z[i]$ have 2-power norm. Then $\beta$ exactly divides some Gaussian integer $\alpha_n$ on $L$ if and only if $\beta$ is an associate of $2, 2^2, \ldots, 2^s$, or $2^{s}(1+i)$. That is, $(1+i)^t\in{\mathcal{D}}(L)$ if and only if $0\leq t\leq 2s+1$, but $(1+i)^t$ exactly divides a Gaussian integer on $L$ if and only if in addition $t$ is even or $t=2s+1$.
\end{enumerate}
\item If $p\equiv 3\pmod 4$ (so $\pi$ is an associate of $p$), then $p^k$ exactly divides some Gaussian integer $\alpha_n$ on $L$ if and only if $p^k$ divides $\Delta$.
\end{enumerate}
\end{theorem}
\begin{proof} We consider the three cases separately.
\medskip
\noindent {\underline{\em{Case 1}}:}
Suppose $p\equiv 1\pmod 4$. Statements 1(a) and 1(b) are equivalent by Theorem \ref{PL}. Since 1(c) trivially implies 1(b), we only need to show that 1(b) implies 1(c). For this, suppose that $\pi^k\in{\mathcal{D}}(L)$, say $\pi^k$ divides $\alpha_m$. Then $\pi^h$ exactly divides $\alpha_m$ for some $h\geq k$. Let $r$ be a positive integer. If $r< h$, then 1(c) holds since $\pi^r$ exactly divides $\alpha_n$ for $n=m+p^rq$, where $q$ is any integer not divisible by $p$. To see this, write $\alpha_n=\alpha_0+(m+p^rq)\delta=\alpha_m+p^rq\delta$, and use that $\pi^h$ exactly divides $\alpha_m$ while $\pi^r$ exactly divides $p^rq\delta$. Note that by considering the special case where $r=1$, this shows in general that if a Gaussian prime $\pi$ does not divide $\delta$ then $\pi$ {\em exactly} divides some Gaussian integer $\alpha_n$ on $L$.
We use induction and the general fact for $r=1$ given above to show that 1(c) holds for $r\geq h$ as well. If $r=h$, then $\pi^r$ exactly divides $\alpha_m$ by hypothesis, so 1(c) holds in this case. Suppose it holds for some $t\geq h$, say $\pi^t$ exactly divides $\alpha_s$. Let $\omega=\alpha_s/\pi^t\in\Z[i]$.
For $q\in\mathbb{Z}$, consider
$$\alpha_{s+p^{t}q}=\alpha_s+p^{t}q\delta=\pi^t(\omega+\overline\pi^{t}\delta q),$$
where $p=\pi\overline\pi$ and $\overline\pi$ is not an associate of $\pi$ since $p\equiv 1\pmod 4$.
Now, $\overline{\pi}^{t}\delta$ has no rational integer divisors since $\pi\nmid\delta$. Also, $\omega$ and $\overline{\pi}^{t}\delta$ are relatively prime since $\alpha_s$ and $p^t\delta$ are relatively prime. Thus, the numbers $\omega+\overline\pi^{t}\delta q$, $q\in\mathbb{Z}$, are the Gaussian integers on a different primitive Gaussian line $L^\prime$ with $\delta^\prime=\overline\pi^{t}\delta$. Since $\pi \nmid \delta^\prime$, it follows from the general result for $r=1$, that there is a $q_0\in\mathbb{Z}$ such that $\pi$ exactly divides the Gaussian integer $\omega+\overline\pi^{t}\delta q_0$ on $L^\prime$. Thus $\pi^{t+1}$ exactly divides the Gaussian integer $\alpha_n$ on $L$ for $n=s+p^{t}q_0$, and 1(c) holds for $r=t+1$. By induction it holds for all $r$.
\medskip
\noindent {\underline{\em{Case 2}}:}
Suppose $p=2$. As above, it is sufficient to prove statement 2(b) implies 2(c). Suppose $(1+i)^k\in{\mathcal{D}}(L)$ for some positive integer $k$. Then $(1+i)\in{\mathcal{D}}(L)$ and $1+i$ does not divide $\delta$.
Let $2^s$, $s\geq 0$, be the exact power of $2$ that divides $\Delta$. Then $2^s\in{\mathcal{D}}(L)$, but $2^{s+1}\not\in{\mathcal{D}}(L)$ by Theorem \ref{ZL}. Since $2$ ramifies in $\Z[i]$, this is equivalent to $(1+i)^{2s}\in{\mathcal{D}}(L)$, but $(1+i)^{2s+2}\not\in{\mathcal{D}}(L)$.
We first claim that $(1+i)^{2s+1}\in{\mathcal{D}}(L)$. For this, note that since $(1+i)^{2s}\in{\mathcal{D}}(L)$, there is a Gaussian integer $\alpha_m$ on $L$ such that $(1+i)^{2s}$ divides $\alpha_m$. If $(1+i)^{2s+1}$ divides $\alpha_m$ then $(1+i)^{2s+1}\in{\mathcal{D}}(L)$ as claimed. So suppose $(1+i)^{2s+1}$ does not divide $\alpha_m$. By Theorem \ref{periodicity}, $(1+i)^{2s}$ divides $\alpha_{m+2^s}$ since $\nu((1+i)^{2s})=2^s$. Now,
$$\alpha_{m+2^s}=\alpha_m+2^s\delta=2^s(\omega+\delta),$$
where $\omega=\alpha_m/2^s\in\Z[i]$ is not divisible by $1+i$. Since neither $\omega$ nor $\delta$ is divisible by $1+i$, their sum must be divisible by $1+i$. Thus, $(1+i)^{2s+1}$ divides $\alpha_{m+2^s}$, and $(1+i)^{2s+1}\in{\mathcal{D}}(L)$ in this case as well.
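The parity fact used in the last step — that the sum of two Gaussian integers, neither divisible by $1+i$, is divisible by $1+i$ — amounts to saying that $a+bi$ is divisible by $1+i$ exactly when $a+b$ is even. A brute-force check of this fact (a standalone sketch, independent of the paper's notation):

```python
# a + bi is divisible by 1 + i iff a + b is even, since
# (a + bi)/(1 + i) = ((a + b) + (b - a)i)/2 is integral iff a + b is even.
def divisible_by_1_plus_i(a, b):
    return (a + b) % 2 == 0

# Exhaustively verify on a small box: if neither summand is divisible
# by 1 + i, then their sum is.
checks = []
for a1 in range(-5, 6):
    for b1 in range(-5, 6):
        for a2 in range(-5, 6):
            for b2 in range(-5, 6):
                if not divisible_by_1_plus_i(a1, b1) and not divisible_by_1_plus_i(a2, b2):
                    checks.append(divisible_by_1_plus_i(a1 + a2, b1 + b2))

all_sums_divisible = all(checks)
```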
Thus we have $(1+i)^t\in{\mathcal{D}}(L)$ if and only if $0\leq t\leq 2s+1$, and so it remains to consider exact division by $(1+i)^t$. We consider $t$ even and $t$ odd separately. First suppose that $t=2h$ is even. We claim that $(1+i)^t$ exactly divides some Gaussian integer $\alpha_n$ on $L$, or equivalently, that $2^h$ divides $\alpha_n$ but $2^h(1+i)$ does not. This is true when $t=2s$ since $(1+i)^{2s}$ exactly divides either $\alpha_m$ or $\alpha_{m+2^s}$ by the preceding paragraph. So suppose that for some $h$, $0<h\leq s$, we have $(1+i)^{2h}$ exactly divides $\alpha_n$ for some $n$. Consider,
$$\alpha_{n+2^{h-1}}=\alpha_n+2^{h-1}\delta=2^{h-1}(\omega+\delta),$$
where $\omega=\alpha_n/2^{h-1}\in\Z[i]$ is divisible by $(1+i)^2$ and $\delta$ is not divisible by $1+i$. Thus, $\omega+\delta$ is not divisible by $1+i$, and so $(1+i)^{2h-2}$ exactly divides $\alpha_{n+2^{h-1}}$.
The claim for even $t$ then follows by descending induction.
Now suppose $t$ is odd and $(1+i)^t$ exactly divides some Gaussian integer $\alpha_r$ on $L$. For instance, this holds for $t=2s+1$ since $(1+i)^{2s+1}$ exactly divides either $\alpha_m$ or $\alpha_{m+2^s}$. Write $t=2j+1$, so $\nu((1+i)^t)=2^{j+1}$. Thus, by Theorem \ref{periodicity}, $(1+i)^t$ divides $\alpha_n$ for $n=r+2^{j+1}q$, $q\in\mathbb{Z}$. Now,
$$\alpha_n=\alpha_{r+2^{j+1}q}=\alpha_r+2^{j+1}\delta q=(1+i)^t\left( \omega+\mu(1+i)\delta q\right),$$
where $\omega=\alpha_r/(1+i)^t\in\Z[i]$ is not divisible by $1+i$ and $\mu\in\Z[i]$ is a unit. Now, the real and imaginary parts of $\mu(1+i)\delta$ must be relatively prime since $1+i$ does not divide $\delta$ and the real and imaginary part of $\delta$ are relatively prime. Also, $\omega$ and $\mu(1+i)\delta$ are relatively prime over $\Z[i]$ since $1+i$ does not divide $\omega$ and $\alpha_r$ and $\delta$ are relatively prime.
Thus, the numbers $\omega+(1+i)\delta q$, $q\in\mathbb{Z}$, are the Gaussian integers on a different primitive Gaussian line $L^\prime$ with $\delta^\prime=(1+i)\delta$. Since $1+i$ divides $\delta^\prime$, it follows from
Theorem \ref{ZL} that none of the Gaussian integers $\omega+(1+i)\delta q$ are divisible by $1+i$, that is,
$(1+i)\not\in{{\mathcal{D}}(L^\prime)}$. Thus, $(1+i)^{t+1}\not\in{\mathcal{D}}(L)$, or equivalently, $2^{j+1}\not\in{\mathcal{D}}(L)$. This is a contradiction unless $j=s$. Therefore, if $t$ is odd then $(1+i)^t$ exactly divides some Gaussian integer on $L$ if and only if $t=2s+1$.
\medskip
\noindent {\underline{\em{Case 3}}:}
Suppose $p\equiv 3\pmod 4$. Then $p$ remains prime in $\Z[i]$ and $\pi$ is an associate of $p$. By Theorem \ref{ZL}, we know that then $p^k$ divides some Gaussian integer $\alpha_n$ on $L$ if and only if $p^k$ divides $\Delta$. For {\em exact} divisibility, let $s$ be such that $p^s$ exactly divides $\Delta$. Then $p^s$ exactly divides some $\alpha_m$ on $L$ since $p^{{s+1}}\not\in{\mathcal{D}}(L)$. Then, as in the case $p=2$, we have that $p^{s-1}$ exactly divides
$$\alpha_{m+p^{s-1}}=\alpha_m+p^{s-1}\delta=p^{s-1}\left( \omega+\delta \right),$$
since $\omega=\alpha_m/p^{s-1}$ is divisible by $p$ but $\delta$ is not.
Continue in the same way to get that $p^k$ exactly divides some Gaussian integer on $L$ for all $k$ with $0\leq k\leq s$.
\end{proof}
Putting Theorem \ref{exact} together with the results in Section \ref{div} yields a characterization of the divisor set ${\mathcal{D}}(L)$ of $L$ as follows.
\begin{theorem}\label{bigtheorem} Let $L$ be a primitive Gaussian line with $\Delta\neq 0$. A Gaussian integer $\beta$ is in the divisor set ${\mathcal{D}}(L)$ of $L$
if and only if $\beta$ can be written as
$$\beta= \mu r(1+i)^t\pi_1^{k_1}\pi_2^{k_2} \cdots \pi_m^{k_m},$$
where the variables in this expression are defined as follows:
\begin{enumerate}
\item[(a)] $\mu\in\{\pm1, \pm i\}$ is a unit in $\Z[i]$;
\item[(b)] $r$ is a rational integer that divides $\Delta$;
\item[(c)] $t=0$ if $1+i$ divides $\delta$, and $t\in \{0, 1\}$ otherwise;
\item[(d)] For $1\leq j\leq m$, $\pi_j$ is a Gaussian prime such that $\pi_j$ does not divide $\delta$,
$N(\pi_j)\neq 2$, and $N(\pi_j)\neq N(\pi_n)$ for $j\neq n$;
\item[(e)] For $1\leq j\leq m$, $k_j \geq 0$ is a rational integer.
\end{enumerate}
\end{theorem}
\begin{proof}
By Lemma \ref{ncrt}, it is sufficient to characterize those $\beta\in{\mathcal{D}}(L)$ where $\nu(\beta)$ is a prime power. Thus, let $p$ be a rational prime and $\beta\in\Z[i]$ satisfy $\nu(\beta)=p^n$ for some positive integer $n$.
First suppose $p\equiv 1\pmod 4$, and let $\pi$ be a Gaussian prime that lies over $p$. We may assume $\pi\in{\mathcal{G}\mathcal{P}}(L)$ by Corollary \ref{minprimes} of Theorem \ref{PL}. If $\overline\pi\not\in{\mathcal{G}\mathcal{P}}(L)$, then by Theorems \ref{ZL} and~\ref{exact}, $\beta\in{\mathcal{D}}(L)$ if and only if $\beta = \mu p^t\pi^k$, where $\mu$ is a unit in $\Z[i]$, $t$ and $k$ are non-negative integers, and $p^t$ divides $\Delta$. If, instead, $\overline\pi\in{\mathcal{G}\mathcal{P}}(L)$ as well, then $\beta$ can also be of the form $\mu p^t\overline\pi^k$.
If $p=2$ then, up to associates, $1+i$ is the only Gaussian prime that lies over $p$.
Let $2^s$ be the power of $2$ that exactly divides $\Delta$. It follows from Theorem \ref{exact} that $\beta\in{\mathcal{D}}(L)$ if and only if $\beta = \mu 2^r(1+i)^t$, where $\mu$ is a unit in $\Z[i]$, $0\leq r\leq s$, and
$t=0$ if $1+i$ divides $\delta$ and $t\in \{0, 1\}$ otherwise.
Finally, if $p\equiv 3\pmod 4$, then $p$ remains prime in $\Z[i]$. In this case, it follows from Theorem \ref{ZL} that $\beta\in{\mathcal{D}}(L)$ if and only if $\beta=\mu p^r$, where $\mu$ is a unit in $\Z[i]$ and $p^r$ divides $\Delta$.
\end{proof}
| {
"timestamp": "2020-01-16T02:01:11",
"yymm": "2001",
"arxiv_id": "2001.05018",
"language": "en",
"url": "https://arxiv.org/abs/2001.05018",
"abstract": "We study analogies between the rational integers on the real line and the Gaussian integers on other lines in the complex plane. This includes a Gaussian analog of Bertrands Postulate, the Chinese Remainder Theorem, and the periodicity of divisibility. We also computationally investigate the distribution of Gaussian primes along these lines and leave the reader with several open problems.",
"subjects": "Number Theory (math.NT)",
"title": "Walking to infinity on gaussian lines",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9898303413461358,
"lm_q2_score": 0.8128673155708975,
"lm_q1q2_score": 0.8046007324406586
} |
https://arxiv.org/abs/0909.3822 | A derivation of Benford's Law ... and a vindication of Newcomb | We show how Benford's Law (BL) for first, second, ..., digits, emerges from the distribution of digits of numbers of the type $a^{R}$, with $a$ any real positive number and $R$ a set of real numbers uniformly distributed in an interval $[ P\log_a 10, (P +1) \log_a 10) $ for any integer $P$. The result is shown to be number base and scale invariant. A rule based on the mantissas of the logarithms allows for a determination of whether a set of numbers obeys BL or not. We show that BL applies to numbers obtained from the {\it multiplication} or {\it division} of numbers drawn from any distribution. We also argue that (most of) the real-life sets that obey BL are because they are obtained from such basic arithmetic operations. We exhibit that all these arguments were discussed in the original paper by Simon Newcomb in 1881, where he presented Benford's Law. | \section{Introduction.}
Benford's Law (BL) asserts that in certain sets of numbers, most of them of real-life origin, the first digit is distributed non-uniformly in the form
\begin{equation}
P_B^{(1)}(d) = \log_{10} \left( 1 + \frac{1}{d} \right) , \label{BL}
\end{equation}
where $d$ is the first digit of the number and $\log_{10}$ is the logarithm base 10. In other words, $P_B^{(1)}(d)$ is the fraction of the numbers with first digit $d$ in the given set. There are also forms of Benford's Law for second, third, etc., digits, namely $P_B^{(n)}(d)$. Table 1 shows the values of $P_B^{(1)}(d)$ for $d = 1, 2, \dots , 9$.
\begin{table}[htdp]
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline \hline
$d$ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ \hline
$P_B^{(1)}$ & $\> 0.3010 \>$& $\> 0.1761 \>$& $\> 0.1249 \>$& $\> 0.0969 \>$ & $\> 0.0792 \>$& $\> 0.0669 \>$ & $\> 0.0580 \>$ &$\> 0.0512 \>$& $\> 0.0458\>$ \\
\hline \hline
\end{tabular}
\caption{First digit Benford law.}
\end{table}
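The entries of Table 1 follow directly from Eq. (\ref{BL}); a quick sketch (in Python, for illustration only) reproduces them:

```python
import math

# Reproduce Table 1 from Eq. (1): P_B^(1)(d) = log10(1 + 1/d), d = 1, ..., 9.
p_first = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

# The nine probabilities telescope to log10(10) = 1, as a distribution must.
total = sum(p_first.values())
```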
BL has been found to be obeyed quite well in a variety of situations, many of them checked by Frank Benford himself\cite{Benford}. These sets include population censuses, stock market indices, utility bills, tax returns, areas of rivers, physical and mathematical constants, and molecular weights, among others\cite{Benford,Fewster}. At first sight, the Law is certainly baffling and counterintuitive\cite{Lines}, since one's naive intuition is that the digits of numbers should be uniformly or randomly distributed. Although Frank Benford has been credited with the law for his work of 1938\cite{Benford}, it was originally discovered by the astronomer Simon Newcomb in 1881\cite{Newcomb}, following his observation that the pages of the tables of logarithms in his university library were worn out according to BL, as given by equation (\ref{BL}). What is rarely told is that Newcomb {\it derived} Benford's Law. His demonstration may now look obscure to us, and probably just sketchy, because he used arguments that were not difficult for those familiar with log tables ... and we certainly are no longer. We shall advance a plausible explanation of Newcomb's observation of the worn pages of the log tables and argue why many sets of real-life origin also obey BL; alas, this argument too was used by Newcomb.
We shall first prove a general result that appears to be known already\cite{Fewster,Hill,Pietronero,Raimi,others}, although to the best of our knowledge it has not been shown explicitly in the form presented here; we shall see that it yields exactly the Benford distributions for all digits, allowing us also to conclude that the law is scale and number-base invariant\cite{Hill}. We demonstrate that if $R$ is a set of uniformly distributed real numbers, then the distributions of the digits of $a^R$ obey BL for any real positive number $a$. Then, we discuss the main result of Newcomb, namely, the fact that a given set of numbers obeys BL if the mantissas of their logarithms are uniformly distributed. We then analyze two main types of sequences of numbers that obey BL: those obtained from the multiplication of numbers drawn from any distribution, and those that are part of a geometric progression of numbers uniformly distributed in an arbitrary interval.
\section{A general result concerning Benford's Law.}
Let $\{R_1, R_2, \dots , R_N\}$ be a sequence of real numbers drawn from a uniform distribution in the interval $R_i \in \left[ \left. P \log_a 10, (P +1) \log_a 10 \right) \right.$, with $P$ any integer. Then, the first, second, ..., digit distributions of the sequence $\{a^{R_1}, a^{R_2}, \dots , a^{R_N}\}$, with $a$ any real positive number, approach Benford's Law, Eq.(\ref{BL}) and its generalizations, as $N \to \infty$.
Let us look first at the first digit distribution. In Fig. \ref{GR-2} we plot $a^R$ vs $R$ in a semi-log (base $a$) scale. In this graph, $a^R$ vs $R$ appears as a straight line. Now take in the $R$-axis the sequence $\{R_1, R_2, \dots , R_N\}$ within the interval $\left[ \left. P \log_a 10, (P +1) \log_a 10 \right) \right.$. Take any number of the sequence, say $R_i$. Then, in the logarithmic scale, $\log_a a^{R_i}$ must lie within any of the following ``bins": $b_1$, the interval $[\log_{a}\left(1\times 10^P\right), \log_{a} \left(2\times 10^P\right) )$; or $b_2$, the interval $[ \log_{a}\left(2 \times 10^P\right), \log_{a} \left( 3 \times 10^P\right) )$; $\dots$; or, $b_9$, the interval $[ \log_{a} \left(9 \times 10^P\right), \log_{a} \left( 10 \times 10^P\right) )$. The main point is this: if $ \log_{a} a^{R_i}$ lies within the bin $b_d$, then the first digit of $a^{R_i}$ is $d$.
\begin{figure}[ htbp!]
\centering
\includegraphics[width=0.5\textwidth,keepaspectratio]{fig1.pdf}
\caption{$a^R$ vs $R$ in semi-log scale. The dotted line shows an example of an arbitrary point $R_i$ in the chosen interval, such that the first digit of $a^{R_i}$ is 2 because it falls in bin $b_2$.}
\label{GR-2}
\end{figure}
Since $R_i$ was drawn from a uniform distribution, it has the same chance to take any value within the interval $\left[ \left. P\log_a 10, (P +1) \log_a 10 \right) \right.$ and, therefore, the probability of $ \log_a a^{R_i}$ to fall within the bin $b_d$ is the length of the bin $b_d$ divided by the length of the full interval, namely
\begin{eqnarray}
P\left[ \log_a a^{R_i} \in b_d \right] &=& \frac{\log_{a} \left((d + 1) \times 10^P \right) - \log_{a} \left(d \times 10^P \right)}{\log_{a} \left(10^{P+1} \right) - \log_{a} \left(10^P \right)} \nonumber \\
&=& \frac{ \log_{a} \left(\frac{1 + d}{d} \right)}{\log_{a} 10} \nonumber \\
& = & \log_{10} \left( 1 + \frac{1}{d} \right) .\label{BL1}
\end{eqnarray}
This has the form of Benford's Law for the first digit $P_B^{(1)}(d)$, Eq. (\ref{BL}). Thus, as $N \to \infty$, the first digit distribution of the sequence $\{a^{R_1}, a^{R_2}, \dots , a^{R_N}\}$ will approach $P_B^{(1)}(d)$. We shall call this the {\it General Result} (GR). Note that GR is independent of the integer value of $P$ of the interval as long as the sequence is uniformly distributed. Clearly, the result holds if we change the interval to $\left[ \left. P \log_a 10, (P + M) \log_a 10 \right) \right.$ with $M$ any integer. Note also that we nowhere used that $a$, $R$, or $d$ is expressed in base 10; the graph in Fig. \ref{GR-2} is plotted for numbers base 10 for illustration purposes, but the result would have been the same for any number base. Thus, we conclude that BL is base invariant, i.e. valid for any number base $K$, with $d = 0, 1, 2, \dots, K -1$.
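GR can be illustrated with a small Monte Carlo experiment for the simplest case $a=10$, $P=0$. This is a hedged sketch, not part of the derivation; the seed and sample size are arbitrary choices:

```python
import math
import random

# Draw R uniformly in [0, 1) and read off the first digit of 10^R,
# which should follow Benford's first-digit law by the General Result.
random.seed(1)
N = 100_000
counts = [0] * 10
for _ in range(N):
    r = random.random()          # uniform in [0, 1)
    d = int(10 ** r)             # 10^r lies in [1, 10), so int() is its first digit
    counts[d] += 1

freq = [counts[d] / N for d in range(1, 10)]
benford = [math.log10(1 + 1 / d) for d in range(1, 10)]
max_err = max(abs(f - b) for f, b in zip(freq, benford))
```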
The second, third, ..., digit distributions follow right away from the same argument. For instance, the probability that the second digit of $a^{R_i}$ is $d$ equals the sum of the lengths of the ``sub-bins'',
\begin{eqnarray}
b_{1d} & = & \left[\log_{a}\left((1 + \frac{d}{10}) \times 10^P\right), \log_{a} \left((1 + \frac{ d+1}{10}) \times 10^P\right) \right) \nonumber \\
b_{2d} & = & \left[\log_{a}\left((2 + \frac{d}{10}) \times 10^P\right), \log_{a} \left((2 + \frac{ d+1}{10}) \times 10^P\right) \right)\nonumber \\
& \vdots & \nonumber \\
b_{9d} & = & \left[\log_{a}\left((9 + \frac{d}{10}) \times 10^P\right), \log_{a} \left((9 + \frac{ d+1}{10}) \times 10^P\right) \right)\nonumber
\end{eqnarray}
where $d$ can now take all values $0, 1, \dots, 9$. Thus, the second digit distribution is,
\begin{eqnarray}
P_B^{(2)}(d) &=& \frac{1}{\log_{a} \left(10^{P+1} \right) - \log_{a} \left(10^P \right)} \sum_{m=1}^9 \left[ \log_{a} \left((m + \frac{ d+1}{10}) \times 10^P \right) - \log_{a} \left((m + \frac{ d}{10}) \times 10^P \right) \right] \nonumber \\
&=& \frac{1}{\log_{a} \left(10^{P+1} \right) - \log_{a} \left(10^P \right)} \sum_{m=1}^9 \log_{a}\frac{ 10m + d + 1}{10 m + d } \nonumber \\
& = & \sum_{m=1}^9 \log_{10} \left( 1 + \frac{ 1}{10 m + d } \right). \label{BL2}
\end{eqnarray}
The argument is easily generalized to the $n$-th digit and the result is,
\begin{equation}
P_B^{(n)}(d) = \sum_{m=10^{n-2}}^{10^{n-1}-1} \log_{10} \left( 1 + \frac{ 1}{10 m + d } \right). \label{BLn}
\end{equation}
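Eq. (\ref{BLn}) is straightforward to evaluate numerically; the sketch below implements it directly and recovers, e.g., the known second-digit probability $P_B^{(2)}(0)\approx 0.1197$:

```python
import math

# Eq. (4) implemented directly: the Benford probability that the n-th digit
# equals d, for n >= 2 and d = 0, ..., 9.
def benford_nth(n, d):
    return sum(math.log10(1 + 1 / (10 * m + d))
               for m in range(10 ** (n - 2), 10 ** (n - 1)))

second = [benford_nth(2, d) for d in range(10)]
```

For each $n$ the ten probabilities sum to 1, since the logarithms telescope across each decade.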
The present derivation is extremely simple. Although Newcomb never wrote the formulas for $P_B^{(n)}(d)$, given his mastery of log tables and numerical analysis\cite{Newcomb-PT}, it is clear that he knew them, since he writes the values of $P_B^{(1)}(d)$ and $P_B^{(2)}(d)$ explicitly and mentions how $P_B^{(3)}(d)$ and $P_B^{(4)}(d)$ behave (the latter are almost uniform). Given Newcomb's most important result, discussed in the next section, it seems to the author that he knew a derivation very similar to this one. Benford's Law and its generalizations have been rigorously shown by Hill\cite{Hill} to follow as a consequence of the base invariance of the underlying law. We make no pretense of such mathematical rigour here; rather, we aim to show the law's simplicity to a wider audience.
\subsection{Scale invariance.}
A very important property of BL that follows from GR above is that BL is scale invariant\cite{Pietronero}. Add to the values $R_i$ any constant $c$. This is equivalent to considering a uniform sequence of numbers $R_i$ in the interval $\left[ \left. c + P \log_a 10, c + (P +1) \log_a 10 \right) \right.$. Referring to Fig. \ref{GR-2}, one can see that in the semi-log graph this also amounts to shifting the interval on the ordinate by a constant factor, $a^c$; one also sees, however, that the sizes of the bins $b_n$ remain unchanged. Thus, the sequence $\{a^c a^{R_1}, a^c a^{R_2}, \dots, a^c a^{R_N}\}$ also obeys BL. But this new sequence is just the original one $\{a^{R_1}, a^{R_2}, \dots , a^{R_N}\}$ multiplied by an arbitrary constant factor $a^c$.
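Scale invariance can also be observed numerically: multiplying a BL sample by an arbitrary constant shifts every mantissa by the same amount modulo 1, leaving the first-digit frequencies unchanged. A hedged sketch (the constant $c=7.3$, the seed, and the sample size are arbitrary choices for this illustration):

```python
import math
import random

random.seed(2)
N = 50_000
c = 7.3  # arbitrary scale factor

def first_digit(x):
    # first digit of x via the mantissa of log10(x)
    return int(10 ** (math.log10(x) % 1.0))

sample = [10 ** random.random() for _ in range(N)]   # obeys BL by GR
digits = [first_digit(c * x) for x in sample]        # rescaled sample

freq_scaled = [digits.count(d) / N for d in range(1, 10)]
benford = [math.log10(1 + 1 / d) for d in range(1, 10)]
max_err = max(abs(f - b) for f, b in zip(freq_scaled, benford))
```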
\subsection{The mantissa rule.}
The sequences or sets of numbers that usually follow BL are not of the form $\{ a^R \}$. So, one may ask for a rule that tells us whether a given sequence follows BL or not. The answer was also given by Newcomb in his two-page paper\cite{Newcomb}. We shall now demonstrate that for a given sequence of numbers $\{A_1, A_2, \dots , A_N\}$, if the mantissas of the logarithms of the $A_i$, namely of $\{ \log_{10} A_1, \log_{10} A_2, \dots , \log_{10} A_N \}$, are uniformly distributed, then the sequence $\{A_1, A_2, \dots , A_N\}$ obeys BL. Before we give the demonstration, we note that GR can be restated much more simply for the case $a = 10$: if a sequence of numbers $R_i$ is uniformly distributed in the interval $[0, 1)$, the sequence $10^{R_i}$ follows BL. We use this form below.
The demonstration can be done writing the log of $A_i$ as,
\begin{equation}
\log_{10} A_i = C(A_i) + m(A_i) ,
\end{equation}
where $C(A_i)$ is an integer, the so-called {\it characteristic} of the log, and $m(A_i)$ the fractional part of the logarithm or {\it mantissa}. Note that by definition the mantissas of logarithms base 10 are within the interval $[0, 1)$. It is clear, then, that when taking the ``antilogarithm" $10^{\log_{10} A_i} = 10^{C(A_i)} 10^{m(A_i)}$ the digits of $A_i$ will be determined only by $10^{m(A_i)}$ since the factor $10^{C(A_i)}$ just determines the position of the decimal point. Thus, the distribution of digits is determined by considering the sequence of the mantissas only, namely of the sequence $\{ m( A_1),m(A_2), \dots , m(A_N )\}$. Hence, if the latter are uniformly distributed, by GR the sequence $\{10^{m(A_1)}, 10^{m(A_2)}, \dots, 10^{m(A_N)} \}$ obeys BL and, therefore, so does the sequence $\{A_1, A_2, \dots , A_N\}$.
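In code, the mantissa rule says that the leading digits of $A$ can be read off from $10^{m(A)}$ alone. A minimal sketch (note that for $A<1$ Python's `%` operator already returns the mantissa in $[0,1)$, matching the log-table convention of a negative characteristic and a positive mantissa):

```python
import math

# The digits of A depend only on the fractional part (mantissa) of log10(A);
# the characteristic only places the decimal point.
def mantissa(a):
    return math.log10(a) % 1.0

def first_digit(a):
    return int(10 ** mantissa(a))

# 0.00345, 3.45 and 34500.0 share the same mantissa, hence the same digits.
```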
This result is very useful since it allows us to check whether a sequence of numbers obeys BL by looking at the distribution of the mantissas of their logarithms. This is a simple operational rule, as opposed to inspecting the digits themselves.
\section{Some sequences that obey BL.}
The next question is which types of sequences or sets of numbers follow BL. Answering this exhaustively appears to be a difficult task. Here, we discuss two general types of sequences that can be shown quite clearly to obey BL; with these two, we shall conjecture about the general case.
\subsection{Products of variables with arbitrary distributions.}
Consider the set of numbers $\{Q_1, Q_2, \dots , Q_N\}$, with $Q_i$ given by the product of $M$ numbers,
\begin{equation}
Q_i = R_1^{(i)} R_2^{(i)} \cdots R_M^{(i)} ,\label{prod}
\end{equation}
where $R$ are the absolute values of numbers drawn from an {\it arbitrary} distribution (up to a requirement to be given below). We now show that in the limit $N \to \infty$ and $M \to \infty$, the sequence $\{Q_1, Q_2, \dots , Q_N\}$ obeys BL.
The idea is to use the mantissa rule. For this, we consider the log base 10 of the numbers $Q_i$,
\begin{equation}
\log_{10} Q_i = \log_{10} R_1^{(i)} + \log_{10} R_2^{(i)} + \cdots + \log_{10} R_M^{(i)} .
\end{equation}
We now introduce the requirement that the {\it distribution of the logarithms} of the numbers $R$ has finite first and second moments. Then, in the limit $M \to \infty$, by the Central Limit Theorem (CLT)\cite{Feller}, the distribution of $\log_{10} Q$ tends to a normal distribution. That is, the values of $\log_{10} Q$ are distributed as,
\begin{equation}
\rho(\log_{10} Q) = \frac{1}{\sqrt{2 \pi} \sigma} \> e^{- (\log_{10} Q - \log_{10} Q_0)^2/2 \sigma^2 }. \label{gauss}
\end{equation}
Note that this is not the log-normal distribution, but simply the normal distribution for the variable $\log_{10} Q$. The centroid $\log_{10} Q_0 \approx M c_0$ and $\sigma \approx \sqrt{M} \sigma_0$, where $c_0$ and $\sigma_0$ are the mean and standard deviation of the distribution of the {\it logarithms} of the numbers $R$. This point will be further discussed below.
We proceed to show that the mantissas of the sequence $\{\log_{10} Q_1, \log_{10} Q_2, \dots , \log_{10} Q_N\}$ are uniformly distributed in the interval $[0, 1)$, in the limits mentioned. Before giving the general condition, we can see how this limit is achieved. Assume that the Gaussian function given by Eq.(\ref{gauss}) is already wide enough that it covers several orders of magnitude, or ``decades'', of the values of $Q$; see Fig. \ref{Gaus}, where the decades are denoted by $P - 2$, $P - 1$, $ \dots$, $P + 3$. The mantissas of the $\log_{10} Q$ are the fractional parts within each unit interval. Thus, we can ``shift'' all decades onto a single unit interval, placing all the mantissas within the same interval. Adding all the contributions yields, almost, a uniform distribution. This procedure is the same as considering the sum of an infinite number of Gaussians, each centered at $(\log_{10} Q_0 - P)$ with $P$ taking all integer values; in the limit $M \to \infty$, equivalent to $\sigma \to \infty$, one gets the exact result,
\begin{equation}
\lim_{\sigma \to \infty} \> \frac{1}{\sqrt{2 \pi} \sigma} \sum_{P= -\infty}^{\infty} \> e^{- (\log_{10} Q - \log_{10} Q_0 + P)^2/2 \sigma^2 } = 1 .
\end{equation}
This proves that the mantissas are uniformly distributed in the limit, for normally distributed logarithms. Although the previous result is strictly valid only in the limit $\sigma \to \infty$, the convergence is extremely fast. For instance, for $\sigma \approx 1$, the sum differs from 1 in the eighth significant figure. One finds strong deviations from the uniform distribution as $\sigma$ becomes much smaller than 1, that is, when the Gaussian covers less than one decade.
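The claimed rate of convergence is easy to verify numerically: truncating the periodized sum at $|P|\le 100$ for $\sigma=1$ already gives a density flat to within a few parts in $10^{9}$ (a sketch; here $x$ stands for $\log_{10}Q-\log_{10}Q_0$):

```python
import math

# Periodized Gaussian: sum over integer shifts P of the normal density,
# which flattens toward 1 as sigma grows.
def periodized_gaussian(x, sigma, terms=100):
    norm = 1.0 / (math.sqrt(2 * math.pi) * sigma)
    return sum(norm * math.exp(-((x + p) ** 2) / (2 * sigma ** 2))
               for p in range(-terms, terms + 1))

# Maximum deviation from 1 sampled across one unit interval, for sigma = 1.
dev = max(abs(periodized_gaussian(x / 10, 1.0) - 1.0) for x in range(10))
```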
\begin{figure}
\begin{center}
\begin{minipage}[h]{.50\textwidth}
\includegraphics[width=.99\textwidth]{fig2a.pdf}
\end{minipage}%
\hfill
\begin{minipage}[h]{.50\textwidth}
\centering
\includegraphics[width=.99\linewidth]{fig2b.pdf}
\end{minipage}
\end{center}
\caption{First panel, normal distribution of $\log_{10}(A)$, covering 5 decades approximately. Second panel, the mantissas of the normal distribution within one decade, i.e. in the interval $[0, 1)$; the dotted line is the sum of only 5 decades, adding to 1 within 3 significant figures.}
\label{Gaus}
\end{figure}
On the other hand, since $\sigma$ depends not only on $M$ but also on the standard deviation $\sigma_0$ of the distribution of the logarithms of $R$, i.e. $\sigma \approx \sqrt{M} \sigma_0$, the convergence may be very slow if the width, or support, of the distribution of $R$ itself is very narrow. As particular examples, for $R$ taken from a uniform distribution in the interval $[1, 10)$, an $M$ of less than 10 (about 4 or 5) suffices to converge to BL. Conversely, for $R$ in the interval $[5, 6)$ it takes $M \approx 400$ to yield BL.
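Both examples can be re-run as a hedged simulation (seed and sample sizes are arbitrary choices): products of $M$ uniform draws from $[1,10)$ reach BL already at $M=5$, while draws from the narrow interval $[5,6)$ at $M=5$ are still far from it and need $M$ of order 400.

```python
import math
import random

random.seed(3)
BENFORD = [math.log10(1 + 1 / d) for d in range(1, 10)]

def first_digit_error(low, high, M, N):
    """Max deviation of the empirical first-digit law of N products of M
    uniform draws from [low, high) against Benford's Law.  Working with
    log10 of the product avoids floating-point overflow."""
    counts = [0] * 10
    for _ in range(N):
        log_q = sum(math.log10(random.uniform(low, high)) for _ in range(M))
        counts[int(10 ** (log_q % 1.0))] += 1
    return max(abs(counts[d] / N - BENFORD[d - 1]) for d in range(1, 10))

err_wide = first_digit_error(1.0, 10.0, M=5, N=20_000)
err_narrow_small_M = first_digit_error(5.0, 6.0, M=5, N=20_000)
err_narrow_large_M = first_digit_error(5.0, 6.0, M=400, N=5_000)
```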
The result of this section, namely that products of numbers obey BL, is very robust and general\cite{Pietronero}, in the sense that even if the distribution of the numbers $R$ lacks a second moment, that of the logarithm of $R$ may not\cite{Lorentz}. This is because the logarithm function is very ``slow'' and tends to smooth the original distribution. Moreover, even if the numbers $R$ are correlated, the action of the logarithm and the limit of very large products (i.e. large values of $M$) may again yield a normal distribution of the logarithms\cite{GUE,Flores}.
\subsection{Generalized geometric sequences of variables uniformly distributed.}
Here we consider a geometric sequence of products of the form,
\begin{equation}
\{Z^{(1)}, Z^{(2)}, \dots, Z^{(N)}, \dots \} = \{ R_1^{(1)}, R_1^{(2)} R_2^{(2)}, R_1^{(3)}R_2^{(3)}R_3^{(3)}, \dots,
R_1^{(N)}R_2^{(N)}\cdots R_N^{(N)}, \dots \} ,\label{geo}
\end{equation}
where $R_i^{(J)}$ are numbers uniformly distributed in an arbitrary interval $[a, b]$, with $a$ and $b$ real positive numbers. This sequence obeys BL. Although this result may be generalized to arbitrary distributions, we restrict the results here to uniform distributions. We note that if $a = b$, the above sequence is a true geometric progression with ratio $a$. Thus, geometric progressions also obey BL (except if $a = 10^L$ with $L$ any integer).
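A minimal numerical sketch of this claim, mirroring the single-realization experiment of Fig. \ref{geofig} but working with $\log_{10}$ of successive partial products to avoid overflow (the interval, length, and seed are arbitrary choices; the marginal distribution of the $J$-th partial product matches that of the $J$-th term of Eq. (\ref{geo})):

```python
import math
import random

# One realization with 10,000 terms: accumulate log10 of the running product,
# whose fractional part gives the mantissa and hence the first digit of Z^(J).
random.seed(4)
N = 10_000
low, high = 1.0, 9.9
counts = [0] * 10
log_z = 0.0
for _ in range(N):
    log_z += math.log10(random.uniform(low, high))
    counts[int(10 ** (log_z % 1.0))] += 1

benford = [math.log10(1 + 1 / d) for d in range(1, 10)]
freq = [counts[d] / N for d in range(1, 10)]
max_err = max(abs(f - b) for f, b in zip(freq, benford))
```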
Again, we first consider the sequence of the logarithms of the products, $\log_{10} Z^{(J)}$. Since we do not have an analytic demonstration, we resort to a numerical one. In Fig. \ref{geofig} a particular example shows that the distribution of $\log_{10} Z^{(J)}$ becomes uniformly distributed as $J \to \infty$. As this distribution covers many decades of $Z^{(J)}$, obviously the mantissas of $\log_{10} Z^{(J)}$ also become uniform in $[0, 1)$. A numerical comparison with BL is also included. We have extensively verified that these results hold for any sequence of this type, including true geometric progressions\cite{Raimi}.
\begin{figure}
\begin{center}
\begin{minipage}[h]{.33\textwidth}
\includegraphics[width=.99\textwidth]{fig3a.pdf}
\end{minipage}%
\hfill
\begin{minipage}[h]{.33\textwidth}
\centering
\includegraphics[width=.99\linewidth]{fig3b.pdf}
\end{minipage}
\begin{minipage}[h]{.33\textwidth}
\includegraphics[width=.99\textwidth]{fig3c.pdf}
\end{minipage}%
\hfill
\end{center}
\caption{Numerical analysis of the first 10,000 terms of one realization of a generalized geometric sequence, as given by Eq.(\ref{geo}), for numbers uniformly distributed in the interval $[1.0, 9.9]$. First panel shows the uniform distribution of $\log_{10}(A)$ covering more than 40 decades. Second panel shows that the distribution of mantissas is uniform in $[0, 1)$. The third panel is a comparison of the exact Benford Law (circles) with the distribution of the first digit of the 10,000 terms considered (triangles).}
\label{geofig}
\end{figure}
\subsection{A conjecture on the general case.}
From the above two cases, it appears that a generalization is as follows: As long as the distribution of logarithms is wide enough, namely, covering many decades of the set considered, the mantissa distribution will tend to become uniformly distributed. An analogous argument was recently used by Fewster\cite{Fewster} to illustrate when Benford's law should be obeyed.
\section{Why do the pages of log tables wear out following BL?}
Simon Newcomb begins his article by pointing out that the log tables were worn out more at the beginning than at the end, i.e. following BL.
That is, since the tables are for logarithms of numbers going in order from $1.000\dots$ to $9.999\dots$, he found that the pages for numbers starting with 1 were more used than those for numbers starting with 2, etc. Newcomb explained this observation by assuming that ``natural" numbers, i.e. those appearing in Nature, were obtained as ratios of other numbers. Then, he argued that no matter the underlying law of the primitive numbers, their ratios (in the limit of many ratios) had the mantissas of their logarithms uniformly distributed. He then simply stated that this implied Benford's Law. As we have seen, the mantissa rule is equivalent to GR. It is fairly evident that Newcomb knew this result, and thus, that he must be credited with the derivation of BL. We mention, once more, that the arguments given in this article are essentially contained in Newcomb's original paper.
An interesting aspect is why Newcomb considered that ``natural" numbers were the result of ratios, or products for that matter, of other numbers. In the light of the previous sections and a bit of second-guessing, we can advance an explanation for this assertion of Newcomb's. Moreover, this may well also be the explanation for the agreement of actual real-life data with BL.
To begin, we should recall why log tables were used in the first place. We are well into the era of electronic calculators, be it a pocket-size one or a huge supercomputer: numerical calculations are now their task, not ours. But as recently as the early 1970s, not to mention in the XIX century, numerical calculations were done by hand and/or with slide rules. And the log tables were essential for carrying out those tedious and lengthy tasks. As a matter of fact, logarithms were invented (or discovered?) by John Napier in 1614 to perform lengthy calculations! In the words of Napier himself\cite{e},
{\it Seeing there is nothing that is so troublesome to mathematical practice, nor that doth more molest and hinder calculations, than the multiplications, divisions, square and cubic extractions of great numbers. ... I began therefore to consider in my mind by what certain and ready art I might remove those hindrances.} - John Napier, {\it Mirifici logarithmorum canonis descriptio} (1614).
That is, the trouble appears when one must make calculations by hand, especially multiplications of numbers with {\it many} digits. It is lengthy, tedious and prone to mistakes. Thus, one goes to the tables to find the logarithms of the numbers involved, performs sums and subtractions, which are much easier, and then, taking antilogarithms, obtains the result. The point is, where did those {\it long} numbers come from? They were definitely not made up, nor read off from somewhere else, nor measured. The long numbers came from {\it multiplications, divisions or powers} of smaller numbers. The latter may indeed be random, or measured, or taken arbitrarily from somewhere else. But, we insist, the long numbers arose from operations performed on smaller numbers. As we have seen in the previous section, products of numbers typically tend to BL, even if only a few factors are involved, as long as they arise from wide distributions. In other words, the numbers whose logarithms people looked up typically obeyed BL already. Since in the XIX century numbers were not churned out by a computer but arose from arithmetic operations performed by real people, it seems to the author that for Newcomb these were ``naturally" produced. This may also explain why many sets of real-life data obey BL: unless one asks a computer for a random number, numbers that quantify a property, be it the area of a lake or the weight of a molecule, usually arise from arithmetic operations performed on measured quantities with arbitrary constants and units.
{\bf Acknowledgments}. I thank R. Esquivel and A. Robledo for several important references.
https://arxiv.org/abs/1107.4392 | Lower bounds for sumsets of multisets in Z_p^2 | The classical Cauchy-Davenport theorem implies the lower bound n+1 for the number of distinct subsums that can be formed from a sequence of n elements of the cyclic group Z_p (when p is prime and n<p). We generalize this theorem to a conjecture for the minimum number of distinct subsums that can be formed from elements of a multiset in (Z_p)^m; the conjecture is expected to be valid for multisets that are not "wasteful" by having too many elements in nontrivial subgroups. We prove this conjecture in (Z_p)^2 for multisets of size p+k, when k is not too large in terms of p. | \section{Introduction}
Determining the number of elements in a particular abelian group that can be written as sums of given sets of elements is a topic that goes back at least two centuries. The most famous result of this type, involving the cyclic group ${\mathbb Z}_p$ of prime order $p$, was established by Cauchy in 1813~\cite{Cauchy} and rediscovered by Davenport in 1935~\cite{Dav1,Dav2}:
\begin{lemma}[Cauchy--Davenport Theorem]
Let $A$ and $B$ be subsets of ${\mathbb Z}_p$, and define $A+B$ to be the set of all elements of the form $a+b$ with $a\in A$ and $b\in B$. Then $\#(A+B) \ge \min\{ p, \#A + \#B - 1\}$.
\end{lemma}
\noindent The lower bound is easily seen to be best possible by taking $A$ and $B$ to be intervals, for example. It is also easy to see that the lower bound of $\#A + \#B - 1$ does not hold for general abelian groups ${\mathbf G}$ (take $A$ and $B$ to be the same nontrivial subgroup of ${\mathbf G}$). There is, however, a well-known generalization obtained by Kneser in 1953~\cite{Kne}, which we state in a slightly simplified form that will be quite useful for our purposes (see \cite[Theorem 4.1]{Nat} for an elementary proof):
\begin{lemma}[Kneser's Theorem]
Let $A$ and $B$ be subsets of a finite abelian group ${\mathbf G}$, and let $m$ be the largest cardinality of a proper subgroup of ${\mathbf G}$. Then $\#(A+B) \ge \min\{ \#{\mathbf G}, \#A + \#B - m\}$.
\end{lemma}
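For a group as small as ${\mathbb Z}_6$, the simplified form of Kneser's theorem stated above can be confirmed by brute force. The following sketch (ours, not part of the paper) checks every pair of nonempty subsets of ${\mathbb Z}_6$, whose largest proper subgroup $\{0,2,4\}$ has cardinality $m=3$.

```python
from itertools import chain, combinations

G = 6  # Z_6; largest proper subgroup {0, 2, 4} has size m = 3
m = 3

def subsets(n):
    """All nonempty subsets of {0, ..., n-1}."""
    return chain.from_iterable(combinations(range(n), r) for r in range(1, n + 1))

# Kneser's bound: #(A+B) >= min(#G, #A + #B - m).
for A in subsets(G):
    for B in subsets(G):
        AB = {(a + b) % G for a in A for b in B}
        assert len(AB) >= min(G, len(A) + len(B) - m)
```

Taking $A=B=\{0,2,4\}$ shows the bound is attained with equality, exactly the subgroup obstruction mentioned above.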
Given a sequence $A = (a_1,\dots,a_k)$ of (not necessarily distinct) elements of an abelian group ${\mathbf G}$, a related result involves its {\em sumset} $\Sigma A$, which is the set of all sums of any number of elements chosen from $A$ (not to be confused with $A+A$, which it contains but usually properly):
\[
\Sigma A = \bigg\{ \sum_{j\in J} a_j \colon J\subseteq \{1,\dots,k\} \bigg\}.
\]
(Note that we allow $J$ to be empty, so that the group's identity element is always an element of $\Sigma A$.) When ${\mathbf G}={\mathbb Z}_p$, one can prove the following result by writing $\Sigma A = \{0, a_1\} + \cdots + \{ 0,a_k \}$ and applying the Cauchy--Davenport theorem inductively:
\begin{lemma}
\label{CD pre lemma}
Let $A=(a_1,\dots,a_k)$ be a sequence of nonzero elements of ${\mathbb Z}_p$. Then $\#\Sigma A \ge \min\{p,k + 1\}$.
\end{lemma}
\noindent This result can also be proved directly by induction on $k$, and in fact such a proof will discover why the order $p$ of the cyclic group must be prime (intuitively, the sequence $A$ could lie completely within a nontrivial subgroup). For a formal proof, see \cite[Lemma 2]{DEF}. Again the lower bound is easily seen to be best possible, by taking $a_1=\cdots=a_k$.
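For small primes, the lemma can also be confirmed exhaustively. The sketch below (ours, not part of the paper) enumerates every multiset of nonzero elements of ${\mathbb Z}_7$ of size up to $8$ and checks the bound $\#\Sigma A \ge \min\{p, k+1\}$.

```python
from itertools import combinations_with_replacement

def subset_sums(A, p):
    """Distinct subsums of the multiset A modulo p (empty sum 0 included)."""
    sums = {0}
    for a in A:
        sums |= {(s + a) % p for s in sums}
    return sums

p = 7
for k in range(1, 9):
    # combinations_with_replacement enumerates multisets of size k
    for A in combinations_with_replacement(range(1, p), k):
        assert len(subset_sums(A, p)) >= min(p, k + 1)
```

Note that the extremal multiset $a_1=\cdots=a_k$ attains the bound exactly: for instance, $(3,3)$ in ${\mathbb Z}_7$ has subsums $\{0,3,6\}$, of cardinality $3 = k+1$.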
It is a bit misleading to phrase such results in terms of sequences, since the actual order of the elements in the sequence is irrelevant (given that we are considering only abelian groups). We prefer to use {\em multisets}, which are simply sets that are allowed to contain their elements with multiplicity. If we let $m_x$ denote the multiplicity with which the element $x$ occurs in the multiset $A$, then the definition of $\Sigma A$ can be written in the form
\[
\Sigma A = \bigg\{ \sum_{x\in{\mathbf G}} \delta_x x \colon 0\le \delta_x \le m_x \bigg\},
\]
where $\delta_x x$ denotes the group element $x+\cdots+x$ obtained by adding $\delta_x$ summands all equal to~$x$.
When using multisets, we should choose our notation with care: the hypotheses of such results tend to involve the total number of elements of the multiset $A$ counting multiplicity, while the conclusions involve the number of distinct elements of $\Sigma A$. Consequently, throughout this paper, we use the following notational conventions:
\begin{itemize}
\item $|S|$ denotes the total number of elements of the multiset $S$, counted with multiplicity;
\item $\#S$ denotes the number of distinct elements of the multiset $S$, or equivalently the number of elements of $S$ considered as a (mere) set.
\end{itemize}
In this notation, Lemma~\ref{CD pre lemma} can be restated as:
\begin{lemma}
\label{CD lemma}
Let $A$ be a multiset contained in ${\mathbb Z}_p$ such that $0\notin A$. Then $\#\Sigma A \ge \min\{p,|A| + 1\}$.
\end{lemma}
\noindent The purpose of this paper is to improve, as far as possible, this lower bound for multisets contained in the larger abelian group $\F_p^2$. We cannot make any progress without some restriction upon our multisets: if a multiset is contained within a nontrivial subgroup of $\F_p^2$ (of cardinality $p$), then so is its sumset, in which case the lower bound $\min\{p,|A| + 1\}$ from Lemma~\ref {CD lemma} is the best we can do. Therefore we restrict to the following class of multisets. We use the symbol $\text{\bf 0}=(0,0)$ to denote the identity element of~$\F_p^2$.
\begin{definition}
\label{valid definition}
A multiset $A$ contained in $\F_p^2$ is called {\em valid} if:
\begin{itemize}
\item $\text{\bf 0}\notin A$; and
\item every nontrivial subgroup contains fewer than $p$ points of $A$, counting multiplicity.
\end{itemize}
\end{definition}
\noindent The exact number $p$ in the second condition has been carefully chosen: any nontrivial subgroup of $\F_p^2$ is isomorphic to ${\mathbb Z}_p$, and so Lemma~\ref{CD lemma} applies to these nontrivial subgroups. In particular, any multiset $A$ containing $p-1$ nonzero elements of a nontrivial subgroup will automatically have that entire subgroup contained in its sumset $\Sigma A$, so allowing $p$ nonzero elements in a nontrivial subgroup would always be ``wasteful''.
We believe that the following lower bound should hold for sumsets of valid multisets:
\begin{conjecture}
\label{2d conjecture}
Let $A$ be a valid multiset contained in $\F_p^2$ such that $p \le |A| \le 2p-3$. Then $\#\Sigma A \ge (|A|+2-p)p$. In other words, if $|A| = p+k$ with $0\le k\le p-3$, then $\#\Sigma A \ge (k+2)p$.
\end{conjecture}
It is easy to see that this conjectured lower bound would be best possible: if $A$ is the multiset that contains the point $(1,0)$ with multiplicity $p-1$ and the point $(0,1)$ with multiplicity $k+1$, then the set $\Sigma A$ is precisely $\big\{ (s,t)\colon s\in{\mathbb Z}_p,\, 0\le t\le k+1 \big\}$, which has $(k+2)p$ distinct elements. Conjecture~\ref {2d conjecture} is actually part of a larger assertion (see Conjecture~\ref{any d conjecture}) concerning lower bounds for sumsets in ${\mathbb Z}_p^m$.
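This extremal construction is easy to verify computationally for small parameters. The sketch below (ours; the choices $p=7$, $k=2$ are illustrative) builds the multiset above and confirms that its sumset is exactly the claimed strip of $(k+2)p$ points.

```python
def sumset_2d(A, p):
    """Distinct subsums of a multiset A of points in (Z_p)^2."""
    sums = {(0, 0)}
    for (a, b) in A:
        sums |= {((s + a) % p, (t + b) % p) for (s, t) in sums}
    return sums

p, k = 7, 2
# (1,0) with multiplicity p-1 and (0,1) with multiplicity k+1.
A = [(1, 0)] * (p - 1) + [(0, 1)] * (k + 1)
S = sumset_2d(A, p)
assert S == {(s, t) for s in range(p) for t in range(k + 2)}
assert len(S) == (k + 2) * p  # = 28 for p = 7, k = 2
```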
One of our results completely resolves the first two cases of this conjecture:
\begin{theorem}
\label{conjecture true for k tiny}
Let $p$ be a prime.
\begin{enumerate}
\item If $A$ is any valid multiset contained in $\F_p^2$ with $|A| = p$, then $\#\Sigma A \ge 2p$.
\item Suppose that $p\ge5$. If $A$ is any valid multiset contained in $\F_p^2$ with $|A| = p+1$, then $\#\Sigma A \ge 3p$.
\end{enumerate}
\end{theorem}
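Part (a) can be checked exhaustively for the smallest prime. The following sketch (ours, not part of the paper) enumerates all valid multisets of size $p=3$ in ${\mathbb Z}_3^2$ and confirms $\#\Sigma A \ge 2p$.

```python
from itertools import combinations_with_replacement

p = 3
nonzero = [(a, b) for a in range(p) for b in range(p) if (a, b) != (0, 0)]

def line(x, p):
    """The subgroup of (Z_p)^2 generated by a nonzero point x."""
    return {((c * x[0]) % p, (c * x[1]) % p) for c in range(p)}

def sumset_2d(A, p):
    sums = {(0, 0)}
    for (a, b) in A:
        sums |= {((s + a) % p, (t + b) % p) for (s, t) in sums}
    return sums

def valid(A, p):
    # Every nontrivial subgroup meets A in fewer than p points (with
    # multiplicity); only lines through points of A need checking.
    return all(sum(1 for y in A if y in line(x, p)) < p for x in A)

for A in combinations_with_replacement(nonzero, p):
    if valid(A, p):
        assert len(sumset_2d(A, p)) >= 2 * p
```

The multiset $\{(1,0),(1,0),(0,1)\}$ attains the bound $2p=6$ exactly.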
It turns out that proving part (b) of the theorem requires a certain amount of computation for a finite number of primes (see the remarks following the proof of the theorem in Section~\ref{thm proofs section}). Extending the conjecture to larger values of $k$ would require, by our methods, more and more computation to take care of small primes $p$ as $k$ grows. However, we are able to establish the conjecture when $p$ is large enough with respect to $k$, or equivalently when $k$ is small enough with respect to $p$:
\begin{theorem}
\label{conjecture true for k small in terms of p}
Let $p$ be a prime, and let $2\le k\le \sqrt{p/(2\log p+1)}-1$ be an integer. If $A$ is any valid multiset contained in $\F_p^2$ with $|A| = p+k$, then $\#\Sigma A \ge (k+2)p$.
\end{theorem}
A contrapositive version of Theorem~\ref{conjecture true for k small in terms of p} is also enlightening:
\begin{corollary}
\label{main corollary}
Let $p$ be a prime, and let $2\le k\le \sqrt{p/(2\log p+1)}-1$ be an integer. Let $A$ be a multiset contained in $\F_p^2 \setminus \{\text{\bf 0}\}$ with $|A| = p+k$. If $\#\Sigma A < (k+2)p$, then there exists a nontrivial subgroup of $\F_p^2$ that contains at least $p$ points of $A$, counting multiplicity.
\end{corollary}
Our methods of proof stem from two main ideas. First, we will obviously exploit the structure of $\F_p^2$ as a direct sum of cyclic groups of prime order, within which we can apply the known Lemma~\ref{CD lemma} after using projections. Section~\ref{direct projects section} contains several elementary lemmas in this vein (see in particular Lemma~\ref{sweep to the left lemma}). It is important for us to utilize the flexibility coming from the fact that $\F_p^2$ can be decomposed as the direct sum of two subgroups in many different ways. Second, our methods work best when there exists a single subgroup that contains many elements of the given multiset; however, by selectively replacing pairs of elements with their sums, we can increase the number of elements in a subgroup in a way that improves our lower bounds upon the sumset (see Lemma~\ref {any j lemma}). These methods, which appear in Section~\ref {thm proofs section}, combine to provide the proofs of Theorems \ref {conjecture true for k tiny} and~\ref{conjecture true for k small in terms of p}. Finally, Section~\ref{conjecture section} contains a generalization of Conjecture~\ref{2d conjecture} to higher-dimensional direct sums of ${\mathbb Z}_p$, together with examples demonstrating that the conjecture would be best possible.
\section{Sumsets in abelian groups and direct products}
\label{direct projects section}
All of the results in this section are valid for general finite abelian groups and have correspondingly elementary proofs, although the last two lemmas seem rather less standard than the first few. In this section, ${\mathbf G}$, ${\mathbf H}$, and ${\mathbf K}$ denote finite abelian groups, and $e$ denotes a group's identity element.
\begin{lemma}
\label{a la carte lemma}
Let $B_0,B_1,B_2,\dots,B_j$ be multisets in ${\mathbf G}$, and set $A = B_0 \cup B_1 \cup \dots \cup B_j$. For each $1\le i\le j$, specify an element $x_i \in \Sigma B_i$, and set $C = B_0 \cup \{ x_1, \dots ,x_j\}$. Then $\Sigma C \subseteq \Sigma A$.
\end{lemma}
\begin{proof}
For each $1\le i\le j$, choose a submultiset $D_i \subseteq B_i$ such that the sum of the elements of $D_i$ equals $x_i$. By definition, every element $y$ of $\Sigma C$ equals the sum of the elements of some subset $E$ of $B_0$, plus $\sum_{i\in I} x_i$ for some $I\subseteq \{1,\dots,j\}$. But then $y$ equals the sum of the elements of $E \cup \bigcup_{i\in I} D_i$, which is an element of $\Sigma A$ since $E \cup \bigcup_{i\in I} D_i \subseteq B_0 \cup \bigcup_{1\le i\le j} B_i = A$.
\end{proof}
\begin{lemma}
\label{kneser bound}
Let $A_1,A_2,\dots,A_j$ be multisets in ${\mathbf G}$, and set $A = A_1 \cup \dots \cup A_j$. If $m$ is the largest cardinality of a proper subgroup of ${\mathbf G}$, then either $\Sigma A = {\mathbf G}$ or $\#\Sigma A \ge (\sum_{i=1}^j \# \Sigma A_i) - (j-1)m$.
\end{lemma}
\begin{proof}
Since $\Sigma A = \Sigma A_1 + \Sigma A_2 + \cdots + \Sigma A_j$ (viewed as ordinary sets), this follows immediately by inductive application of Kneser's theorem.
\end{proof}
For the remainder of this section, we will be dealing with groups that can be decomposed into a direct sum.
\begin{definition}
A subgroup ${\mathbf H}$ of ${\mathbf G}$ is called an {\em internal direct summand} if there exists a subgroup ${\mathbf K}$ of ${\mathbf G}$ such that ${\mathbf G}$ is the internal direct sum of ${\mathbf H}$ and ${\mathbf K}$, or in other words, such that ${\mathbf H} \cap {\mathbf K} = \{e\}$ and ${\mathbf H} + {\mathbf K} = {\mathbf G}$. Equivalently, ${\mathbf H}$ is an internal direct summand of ${\mathbf G}$ if there exists a {\em projection homomorphism} $\pi_{\mathbf H}\colon {\mathbf G} \to {\mathbf H}$ that is the identity on ${\mathbf H}$. Note that this projection homomorphism does depend on the choice of ${\mathbf K}$ but is uniquely determined by $\pi_{\mathbf H}^{-1}(e) = {\mathbf K}$.
\end{definition}
\begin{lemma}
\label{pi commutes with Sigma lemma}
For any homomorphism $f\colon {\mathbf G}\to {\mathbf H}$, and any subset $X$ of ${\mathbf G}$, we have $f(\Sigma X) = \Sigma (f(X))$. In particular, if ${\mathbf H}$ is an internal direct summand of ${\mathbf G}$, then $\pi_{\mathbf H}(\Sigma X) = \Sigma(\pi_{\mathbf H}(X))$ for any subset $X$ of ${\mathbf G}$.
\end{lemma}
\begin{proof}
Given $y\in f(\Sigma X)$, there exists $x\in \Sigma X$ such that $f(x)=y$. Hence we can find $x_1,\dots,x_j\in X$ such that $x_1+\cdots+x_j = x$, and so $f(x_1+\cdots+x_j) = y$. But $f$ is a homomorphism, and so $f(x_1)+\cdots+f(x_j) = y$, so that $y \in \Sigma(f(X))$. This shows that $f(\Sigma X) \subseteq \Sigma(f(X))$; the proof of the reverse inclusion is similar.
\end{proof}
\begin{lemma}
\label{pi split lemma}
Let ${\mathbf G} = {\mathbf H} \oplus {\mathbf K}$, and let $D$ and $E$ be multisets contained in ${\mathbf H}$ and ${\mathbf K}$, respectively. For any $z\in {\mathbf G}$,
\[
z \in \Sigma(D\cup E) \quad\text{if and only if}\quad \pi_{\mathbf H}(z) \in\Sigma D \text{ and } \pi_{\mathbf K}(z) \in\Sigma E.
\]
\end{lemma}
\begin{proof}
Since $z = \pi_{\mathbf H}(z) + \pi_{\mathbf K}(z)$, the ``if'' direction is obvious. For the converse, note that
\[
\pi_{\mathbf H}(z) \in \pi_{\mathbf H}\big( \Sigma (D\cup E) \big) = \Sigma\big( \pi_{\mathbf H}(D \cup E) \big)
\]
by Lemma~\ref {pi commutes with Sigma lemma}. On the other hand, $\pi_{\mathbf H}(D) = D$ and $\pi_{\mathbf H}(E) = \{e\}$, and so
\[
\pi_{\mathbf H}(z) \in \Sigma\big( \pi_{\mathbf H}(D) \cup \pi_{\mathbf H}(E) \big) = \Sigma \big( D \cup \{e\} \big) = \Sigma D
\]
(since the sumset is not affected by whether $e$ is an allowed summand). A similar argument shows that $\pi_{\mathbf K}(z) \in \Sigma E$, which completes the proof of the lemma.
\end{proof}
\begin{lemma}
\label{direct product lemma}
Let ${\mathbf H}$ and ${\mathbf K}$ be subgroups of ${\mathbf G}$ satisfying ${\mathbf H} \cap {\mathbf K} = \{e\}$. Let $D$ and $E$ be multisets contained in ${\mathbf H}$ and ${\mathbf K}$, respectively. Then $\#\Sigma(D\cup E) = \#\Sigma D \cdot \#\Sigma E$.
\end{lemma}
\begin{proof}
Notice that every element of $\Sigma(D\cup E)$ is contained in ${\mathbf H}+{\mathbf K}$; therefore we may assume without loss of generality that ${\mathbf G} = {\mathbf H} \oplus {\mathbf K}$. In particular, we may assume that ${\mathbf H}$ and ${\mathbf K}$ are internal direct summands of ${\mathbf G}$, so that the projection maps $\pi_{\mathbf H}$ and $\pi_{\mathbf K}$ exist and every element $z\in {\mathbf G}$ has a unique representation $z=x+y$ where $x\in {\mathbf H}$ and $y\in {\mathbf K}$; note that $x=\pi_{\mathbf H}(z)$ and $y=\pi_{\mathbf K}(z)$ in this representation.
To establish the lemma, it therefore suffices to show that $z = \pi_{\mathbf H}(z) + \pi_{\mathbf K}(z) \in \Sigma(D\cup E)$ if and only if $\pi_{\mathbf H}(z) \in\Sigma D \text{ and } \pi_{\mathbf K}(z) \in\Sigma E$; but this is exactly the statement of Lemma~\ref{pi split lemma}.
\end{proof}
The next lemma is a bit less standard yet still straightforward: in a direct product of two abelian groups, it characterizes the elements of a sumset that lie in a given coset of one of the direct summands.
\begin{lemma}
\label{orthogonal structure lemma}
Let ${\mathbf H}$ and ${\mathbf K}$ be subgroups of ${\mathbf G}$ satisfying ${\mathbf H} \cap {\mathbf K} = \{e\}$. Let $D$ and $E$ be multisets contained in ${\mathbf H}$ and ${\mathbf K}$, respectively. For any $y\in {\mathbf K}$:
\begin{enumerate}
\item if $y\in \Sigma E$, then $({\mathbf H}+\{y\}) \cap \Sigma(D\cup E) = \Sigma D + \{y\}$;
\item if $y\notin \Sigma E$, then $({\mathbf H}+\{y\}) \cap \Sigma(D\cup E) = \emptyset$.
\end{enumerate}
\end{lemma}
\begin{proof}
As in the proof of Lemma~\ref {direct product lemma}, we may assume without loss of generality that ${\mathbf G} = {\mathbf H} \oplus {\mathbf K}$. Suppose that $z$ is an element of $({\mathbf H}+\{y\}) \cap \Sigma(D\cup E)$. Since $z\in {\mathbf H}+\{y\}$, we may write $z=x+y$ for some $x\in{\mathbf H}$, whence $\pi_{\mathbf K}(z) = \pi_{\mathbf K}(x) + \pi_{\mathbf K}(y) = e+y = y$. On the other hand, since $z\in \Sigma(D\cup E)$, we see that $y \in \Sigma E$ by Lemma~\ref{pi split lemma}. In other words, the presence of any element $z\in ({\mathbf H}+\{y\}) \cap \Sigma(D\cup E)$ forces $y\in \Sigma E$, which establishes part (b) of the lemma.
We continue under the assumption $y\in \Sigma E$ to prove part (a). The inclusions $\Sigma D + \{y\} \subseteq {\mathbf H}+\{y\}$ and $\Sigma D + \{y\} \subseteq \Sigma(D\cup E)$ are both obvious, and so $\Sigma D + \{y\} \subseteq ({\mathbf H}+\{y\}) \cap \Sigma(D\cup E)$. As~for the reverse inclusion, let $z \in ({\mathbf H}+\{y\}) \cap \Sigma(D\cup E)$ as above; then $\pi_{\mathbf H}(z) \in \Sigma D$ by Lemma~\ref{pi split lemma}, whence $z = \pi_{\mathbf H}(z) + \pi_{\mathbf K}(z) = \pi_{\mathbf H}(z) + y \in \Sigma D + \{y\}$ as required.
\end{proof}
Finally we can establish the lemma that we will make the most use of when we return to the setting ${\mathbf G}=\F_p^2$ in the next section.
\begin{lemma}
\label{sweep to the left lemma}
Let ${\mathbf G} = {\mathbf H} \oplus {\mathbf K}$, and let $C$ be a multiset contained in ${\mathbf G}$. Let $D=C\cap {\mathbf H}$, let $F = C \setminus D$, and let $E = \pi_{\mathbf K}(F)$. Then $\#\Sigma C \ge \#\Sigma D \cdot \#\Sigma E$.
\end{lemma}
\begin{proof}
Lemma~\ref {direct product lemma} tells us that $\#\Sigma (D\cup E) = \#\Sigma D \cdot \#\Sigma E$, and so it suffices to show that $\#\Sigma C \ge \#\Sigma (D\cup E)$. We accomplish this by showing that
\begin{equation}
\# \big( ({\mathbf H} + \{y\}) \cap \Sigma C \big) \ge \# \big( ({\mathbf H} + \{y\}) \cap \Sigma (D\cup E) \big)
\label{one line at a time}
\end{equation}
for all $y\in{\mathbf K}$.
For any $y\in{\mathbf K} \setminus \Sigma E$, Lemma~\ref{orthogonal structure lemma} tells us that $({\mathbf H} + \{y\}) \cap \Sigma (D\cup E) = \emptyset$, in which case the inequality~\eqref{one line at a time} holds trivially. For any $y \in \Sigma E$, Lemma~\ref{orthogonal structure lemma} tells us that $({\mathbf H} + \{y\}) \cap \Sigma (D\cup E) = \Sigma D + \{y\}$, and so the right-hand side of the inequality~\eqref{one line at a time} equals $\#\Sigma D$.
On the other hand, since $\Sigma E = \Sigma (\pi_{\mathbf K}(F)) = \pi_{\mathbf K}(\Sigma F)$ by Lemma~\ref{pi commutes with Sigma lemma}, there exists at least one element $z\in \Sigma F$ satisfying $\pi_{\mathbf K}(z)=y$; as ${\mathbf G} = {\mathbf H} \oplus {\mathbf K}$, this is equivalent to saying that $z \in {\mathbf H} + \{y\}$. Since $\Sigma D \subseteq {\mathbf H}$, we have $\Sigma D + \{z\} \subseteq {\mathbf H} + \{y\}$ as well. But the inclusion $\Sigma D + \{z\} \subseteq \Sigma D + \Sigma F = \Sigma C$ is trivial, and therefore $\Sigma D + \{z\} \subseteq ({\mathbf H} + \{y\}) \cap \Sigma C$; in particular, the left-hand side of the inequality~\eqref{one line at a time} is at least $\#\Sigma D$. Combined with the observation that the right-hand side equals $\#\Sigma D$, this lower bound establishes the inequality~\eqref{one line at a time} and hence the lemma.
\end{proof}
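Lemma~\ref{sweep to the left lemma} can be sanity-checked numerically. The sketch below (ours, not part of the paper) takes ${\mathbf H}$ and ${\mathbf K}$ to be the two coordinate axes of ${\mathbb Z}_5^2$, so that $\pi_{\mathbf K}$ is projection onto the second coordinate, and verifies $\#\Sigma C \ge \#\Sigma D \cdot \#\Sigma E$ for random multisets $C$.

```python
import random

random.seed(1)
p = 5

def sumset_2d(C, p):
    sums = {(0, 0)}
    for (a, b) in C:
        sums |= {((s + a) % p, (t + b) % p) for (s, t) in sums}
    return sums

def subsums_1d(vals, p):
    sums = {0}
    for v in vals:
        sums |= {(s + v) % p for s in sums}
    return sums

# H = x-axis, K = y-axis; pi_K sends (a, b) to its second coordinate.
for _ in range(500):
    C = [(random.randrange(p), random.randrange(p)) for _ in range(6)]
    D = [a for (a, b) in C if b == 0]   # C ∩ H, recorded by first coordinate
    E = [b for (a, b) in C if b != 0]   # pi_K(C \ D)
    assert len(sumset_2d(C, p)) >= len(subsums_1d(D, p)) * len(subsums_1d(E, p))
```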
These lemmas might be valuable for studying sumsets in more general abelian groups. They will prove to be particularly useful for studying sumsets in $\F_p^2$, however, essentially because there are many ways of writing $\F_p^2$ as an internal direct sum of two subgroups (which are simply lines through $\text{\bf 0}$).
\section{Lower bounds for sumsets}
\label{thm proofs section}
In this section we establish Theorems~\ref {conjecture true for k tiny} and~\ref {conjecture true for k small in terms of p}; the proofs employ two combinatorial propositions which we defer to the next section. It would be possible to prove these two theorems at the same time, at the expense of a bit of clarity; however, we find it illuminating to give complete proofs of Theorem~\ref {conjecture true for k tiny} (the cases $|A|=p$ and $|A|=p+1$) first, as the proofs will illustrate the methods used to prove the more general Theorem~\ref {conjecture true for k small in terms of p}. Seeing the limitations of the proof of Theorem~\ref {conjecture true for k tiny} will also motivate the formulation of our main technical tool, Lemma~\ref {any j lemma}.
Throughout this section, $A$ will denote a valid multiset contained in $\F_p^2$. For any $x\in\F_p^2$, we let $\lin x$ denote the subgroup of $\F_p^2$ generated by $x$ (that is, the line passing through both the origin $\text{\bf 0}$ and $x$), and we let $m_x$ denote the multiplicity with which $x$ appears in $A$, so that $|A| = \sum_{x\in\F_p^2} m_x$. The fact that $A$ is valid means that $m_\text{\bf 0}=0$ and $\sum_{t\in\lin x} m_t < p$ for every $x\in\F_p^2\setminus\{\orig\}$.
Our first lemma quantifies the notion that we can establish sufficiently good lower bounds for the cardinality of $\Sigma A$ if we know that there are enough elements of $A$ lying in one subgroup of~$\F_p^2$. Naturally, the method of proof is to partition $A$ into the elements lying in that subgroup and all remaining elements, project the remaining elements onto a complementary subgroup, and then use Lemma~\ref{CD lemma} in each subgroup separately.
\begin{lemma}
\label{conjecture true if enough on line lemma}
Let $A$ be any valid multiset contained in $\F_p^2$.
Suppose that for some $x\in\F_p^2\setminus\{\orig\}$,
\begin{equation}
\label{symmetric bound}
\sum_{y\in\lin x} m_y \ge |A| - (p-1).
\end{equation}
Then $\#\Sigma A \ge (|A|+2-p)p$.
\end{lemma}
\begin{remark}
The conclusion is worse than trivial if $|A| < p-1$; also, the fact that $A$ is valid means that the left-hand side of equation~\eqref{symmetric bound} is at most $p-1$, and so the lemma is vacuous if $|A| > 2p-2$. Therefore in practice the lemma will be applied only to multisets $A$ satisfying $p-1 \le |A| \le 2p-2$.
\end{remark}
\begin{proof}
Let $D = A \cap \lin x$; note that $|D| \le p-1$ since $A$ is a valid multiset, and note also that $|D| = \sum_{y\in\lin x} m_y \ge |A| - (p-1)$ by assumption. Set $F = A \setminus D$. Choose any nontrivial subgroup ${\mathbf K}$ of $\F_p^2$ other than $\lin x$, and set $E = \pi_{\mathbf K}(F)$. Then by Lemma~\ref{sweep to the left lemma}, we know that $\#\Sigma A \ge \#\Sigma D \cdot \#\Sigma E$. By Lemma~\ref {CD lemma} and the fact that $\text{\bf 0}\notin D\cup E$, we obtain
\begin{align}
\#\Sigma A &\ge \min \big\{ p, 1 + |D| \big\} \cdot \min \big\{ p, 1 + |E| \big\} \notag \\
&= \min \big\{ p, 1 + |D| \big\} \cdot \min \big\{ p, 1 + |A| - |D| \big\},
\label{actually special case}
\end{align}
since $|E| = |F| = |A|-|D|$. The inequalities $|D| \le p-1$ and $|A| - |D| \le p-1$ ensure that $p$ is the larger element in both minima, and so we have simply
\[
\#\Sigma A \ge (1+|D|)(1+|A|-|D|) = \tfrac14|A|^2 + |A| + 1 - \big( |D| - \tfrac12|A| \big)^2.
\]
The pair of inequalities $|D| \le p-1$ and $|A| - |D| \le p-1$ is equivalent to the inequality $\big| |D| - \tfrac12|A| \big| \le p-1- \tfrac12|A|$; therefore
\[
\#\Sigma A \ge \tfrac14|A|^2 + |A| + 1 - \big( p-1- \tfrac12|A| \big)^2 = (|A|+2-p)p,
\]
as claimed.
\end{proof}
This lemma alone is sufficient to establish Theorem~\ref{conjecture true for k tiny}.
\begin{proof}[Proof of Theorem~\ref{conjecture true for k tiny}(a)]
When $|A|=p$, the right-hand side of the inequality~\eqref {symmetric bound} equals 1, and so the inequality holds for any $x\in A$. Therefore Lemma~\ref {conjecture true if enough on line lemma} automatically applies, yielding $\#\Sigma A \ge (|A|+2-p)p = 2p$ as desired. (In fact essentially the same proof gives the more general statement: if $A$ is a multiset contained in $\F_p^2\setminus\{\text{\bf 0}\}$ but not contained in any proper subgroup, and $|A|\ge p$, then $\#\Sigma A \ge 2|A|$.)
\end{proof}
\begin{proof}[Proof of Theorem~\ref{conjecture true for k tiny}(b)]
We are assuming that $|A| = p+1$. Suppose first that there exists a nontrivial subgroup of $\F_p^2$ that contains at least two points of $A$ (including possibly two copies of the same point). Choosing any nonzero element $x$ in that subgroup, we see that the inequality~\eqref {symmetric bound} is satisfied, and so Lemma~\ref {conjecture true if enough on line lemma} yields $\#\Sigma A \ge (|A|+2-p)p = 3p$ as desired.
From now on we may assume that there does not exist a nontrivial subgroup of $\F_p^2$ that contains at least two points of $A$. Since there are only $p+1$ nontrivial subgroups of $\F_p^2$, it must be the case that $A$ consists of exactly one point from each of these $p+1$ subgroups; in particular, the elements of $A$ are distinct. We can verify the assertion for $p\le11$ by exhaustive computation (see the remarks after the end of this proof), so from now on we may assume that $p\ge13$.
Suppose first that all sums of pairs of distinct elements from $A$ are distinct. All these sums are elements of $\Sigma A$, and thus $\#\Sigma A \ge \binom{p+1}2 >3p$ since $p\ge13$.
The only remaining case is when two pairs of distinct elements from $A$ sum to the same point of $\F_p^2$. Specifically, suppose that there exist $x_1,y_1,x_2,y_2\in A$ such that $x_1+y_1=x_2+y_2$. Partition $A = B_0 \cup B_1 \cup B_2$ where $B_1=\{x_1,y_1\}$ and $B_2=\{x_2,y_2\}$ and hence $B_0 = A \setminus \{x_1,y_1,x_2,y_2\}$; note that this really is a partition of $A$, as the fact that $x_1+y_1=x_2+y_2$ forces all four elements to be distinct. Moreover, if we define $z=x_1+y_1=x_2+y_2$, then we know that $z\ne\text{\bf 0}$ since $x_1$ and $y_1$ are in different subgroups.
Define $C$ to be the multiset $B_0 \cup \{ z, z \}$; by Lemma~\ref {a la carte lemma}, we know that $\#\Sigma A \ge \#\Sigma C$. Define $D = C \cap \lin z$; we claim that $|D| = 3$. To see this, note that $A$ has exactly one point in every nontrivial subgroup, and in particular $A$ has exactly one point in $\lin z$. Furthermore, that point cannot be $x_1$ for example, since then $y_1 = z-x_1$ would also be in that subgroup; similarly that point cannot be $x_2$, $y_1$, or $y_2$. We conclude that $B_0$ has exactly one point in $\lin z$, whence $C$ has exactly three points in $\lin z$.
Now define $F = C \setminus D$, so that $|F| = |C| - |D| = (|B_0|+2)-3 = (|A|-4+2)-3 = p-4$. Let ${\mathbf K}$ be any nontrivial subgroup other than $\lin z$, and set $E = \pi_{\mathbf K}(F)$. The lower bounds $\#\Sigma D \ge 4$ and $\#\Sigma E \ge p-3$ then follow from Lemma~\ref {CD lemma}. By Lemma~\ref{sweep to the left lemma}, we conclude that $\#\Sigma C \ge \#\Sigma D \cdot \#\Sigma E = 4(p-3) > 3p$ since $p\ge13$.
\end{proof}
\begin{remark}
The computation that verifies Theorem~\ref{conjecture true for k tiny}(b) for $p\le11$ should be done a little bit intelligently, since there are $10^{12}$ subsets $A$ of ${\mathbb Z}_{11}^2$ (for example) consisting of exactly one nonzero element from each nontrivial subgroup. We describe the computation in the hardest case $p=11$. Let us write the elements of ${\mathbb Z}_{11}^2$ as ordered pairs $(s,t)$ with $s$ and $t$ considered modulo~$11$. By separately dilating the two coordinates of ${\mathbb Z}_{11}^2$ (which does not alter the cardinality of $\Sigma A$), we may assume without loss of generality that $A$ contains both $(1,0)$ and $(0,1)$. We also know every such $A$ contains a subset of the form $\{ (i,i), (j,2j), (k,3k), (\ell,4\ell) \}$ for some integers $1\le i,j,k,\ell \le 10$. Therefore the cardinality of every such $\Sigma A$ is at least as large as the cardinality of one of the subsumsets $\Sigma \big( \{ (1,0), (0,1), (i,i), (j,2j), (k,3k), (\ell,4\ell) \} \big)$.
There are $10^4$ such subsumsets, and direct computation shows that all of them have more than $33$ distinct elements except for the sixteen cases $\Sigma \big( \{ (1,0), (0,1), \pm(1,1), \pm(1,2), \pm(1,3), \pm(1,4) \} \big)$, which each contain $32$ distinct elements. It is then easily checked that any subsumset of the form $\Sigma \big( \{ (1,0), (0,1), \pm(1,1), \pm(1,2), \pm(1,3), \pm(1,4), (m,5m) \} \big)$ with $1\le m\le10$ contains more than 33 distinct elements. This concludes the verification of Theorem~\ref{conjecture true for k tiny}(b) for $p=11$, and the cases $p\le7$ are verified even more quickly.
\end{remark}
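The two-stage computation described in the remark can be reproduced in a few lines. The following sketch (ours, mirroring the remark's strategy for $p=11$) checks the $10^4$ six-element subsumsets first, collects the borderline configurations with at most $3p=33$ distinct elements, and then confirms that adjoining any point $(m,5m)$ rescues each of them.

```python
from itertools import product

p = 11

def sumset_2d(A, p):
    """Distinct subsums of a multiset A of points in (Z_p)^2."""
    sums = {(0, 0)}
    for (a, b) in A:
        sums |= {((s + a) % p, (t + b) % p) for (s, t) in sums}
    return sums

base = [(1, 0), (0, 1)]
borderline = []
for i, j, k, l in product(range(1, p), repeat=4):
    A6 = base + [(i, i), (j, (2 * j) % p), (k, (3 * k) % p), (l, (4 * l) % p)]
    if len(sumset_2d(A6, p)) <= 3 * p:
        borderline.append(A6)

# Each borderline six-element configuration is rescued by any point (m, 5m).
for A6 in borderline:
    for m in range(1, p):
        assert len(sumset_2d(A6 + [(m, (5 * m) % p)], p)) > 3 * p
print(len(borderline), "borderline configurations, all rescued")
```

This verifies the theorem for $p=11$ without relying on the exact count of exceptional cases, though the run does recover the sixteen configurations of cardinality $32$ described in the remark.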
We now foreshadow the proof of Theorem~\ref{conjecture true for k small in terms of p} by reviewing the structure of the proof of Theorem~\ref{conjecture true for k tiny}(b). In that proof, we quickly showed that the desired lower bound held if there were enough elements of $A$ in the same subgroup. Also, the desired lower bound certainly held if there were enough distinct sums of pairs of elements of~$A$. If however no subgroup contained enough elements of $A$ and there were only a few distinct sums of pairs of elements of $A$, then we showed that we could find multiple pairs of elements summing to the same point in $\F_p^2$. Replacing those elements in $A$ with multiple copies of their joint sum, we found that the corresponding subgroup now contained enough elements to carry the argument through.
The following lemma quantifies the final part of this strategy, where we replace $j$ pairs of elements of $A$ with their joint sum and then use our earlier ideas to bound the cardinality of the sumset from below.
\begin{lemma}
\label{any j lemma}
Let $A$ be any valid multiset contained in $\F_p^2$, and let $z\in\F_p^2\setminus\{\orig\}$. For any integer $j$ satisfying
\begin{equation}
0\le j \le \tfrac12 \sum_{t\in\F_p^2\setminus\lin z} \min\{m_t,m_{z-t}\},
\label{allows choosing pairs}
\end{equation}
we have
\[
\#\Sigma A \ge \min\bigg\{ p, 1 + j + \sum_{y\in\lin z} m_y \bigg\} \min\bigg\{ p, 1 + |A| - 2j - \sum_{y\in\lin z} m_y \bigg\}.
\]
\end{lemma}
\begin{remark}
This can be seen as a generalization of Lemma~\ref {conjecture true if enough on line lemma}, as equation~\eqref {actually special case} is the special case $j=0$ of this lemma.
\end{remark}
\begin{proof}
Partition $A = B_0 \cup B_1 \cup \cdots \cup B_j$, where for each $1\le i\le j$, the multiset $B_i$ has exactly two elements, neither contained in $\lin z$, that sum to $z$ (the complementary submultiset $B_0$ is unrestricted). The upper bound~\eqref{allows choosing pairs} for $j$ is exactly what is required for such a partition to be possible; the factor of $\frac12$ arises because the sum on the right-hand side of~\eqref{allows choosing pairs} double-counts the pairs $(t,z-t)$ and $(z-t,t)$. Then set $C$ equal to $B_0$ with $j$ additional copies of $z$ inserted. By Lemma~\ref{a la carte lemma}, we know that $\#\Sigma A \ge \#\Sigma C$.
Now let $D$ be the intersection of $C$ with the subgroup $\lin z$, and let $F = C \setminus D$. Let ${\mathbf K}$ be any nontrivial subgroup other than $\lin z$, and set $E = \pi_{\mathbf K}(F)$. By Lemma~\ref{sweep to the left lemma}, we know that $\#\Sigma C \ge \#\Sigma D \cdot \#\Sigma E$. However, the number of elements of $D$ (counting multiplicity) is $j$ more than the number of elements of $B_0 \cap \lin z$; this is the same as $j$ more than the number of elements of $A \cap \lin z$ (since no elements of $B_1,\dots,B_j$ lie on $\lin z$), or in other words $j + \sum_{y\in\lin z} m_y$. Similarly, the number of elements of $E$ (equivalently, of $F$) is equal to the number of elements of $B_0 \setminus \lin z$; this is the same as $2j$ less than the number of elements of $A \setminus \lin z$, or in other words $|A| - 2j - \sum_{y\in\lin z} m_y$. The lower bounds $\#\Sigma D \ge \min\big\{ p, 1 + j + \sum_{y\in\lin z} m_y \big\}$ and $\#\Sigma E \ge \min\big\{ p, 1 + |A| - 2j - \sum_{y\in\lin z} m_y \big\}$ then follow from Lemma~\ref {CD lemma}; the chain of inequalities $\#\Sigma A \ge \#\Sigma C \ge \#\Sigma D \cdot \#\Sigma E$ establishes the lemma.
\end{proof}
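As an independent sanity check on the inequality (not a substitute for the proof), the sketch below picks a small example of our own choosing, $p=7$, $z=(1,1)$, and a valid multiset $A$, and compares $\#\Sigma A$ with the lemma's bound for every $j$ permitted by \eqref{allows choosing pairs}; all helper names are ours.

```python
p = 7
z = (1, 1)
A = [(1, 0), (1, 0), (0, 1), (1, 1), (6, 1)]  # valid: no origin, fewer than p points per line

def subsums(A):
    sums = {(0, 0)}
    for (a, b) in A:
        sums |= {((x + a) % p, (y + b) % p) for (x, y) in sums}
    return sums

def line(v):  # the subgroup <v> of F_p^2
    return {((c * v[0]) % p, (c * v[1]) % p) for c in range(p)}

m = {t: A.count(t) for t in set(A)}
on_line = sum(mult for t, mult in m.items() if t in line(z))
# Right-hand side of the constraint on j (the sum double-counts each pair):
pair_bound = sum(min(m[t], m.get(((z[0] - t[0]) % p, (z[1] - t[1]) % p), 0))
                 for t in m if t not in line(z)) // 2
for j in range(pair_bound + 1):
    bound = min(p, 1 + j + on_line) * min(p, 1 + len(A) - 2 * j - on_line)
    assert len(subsums(A)) >= bound
```

For this example `pair_bound` is $1$; the bound at $j=0$ is $2\cdot5=10$, at $j=1$ it is $3\cdot3=9$, while $\#\Sigma A$ is $16$.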
We are now ready to use Lemma~\ref{any j lemma} to establish Conjecture~\ref{2d conjecture} when $|A|=p+k$, for all but finitely many primes $p$ depending on~$k$. Let $H_k = 1 + \tfrac12 + \cdots + \tfrac1k$ denote the $k$th harmonic number.
\begin{theorem}
\label{conjecture true for p large in terms of k}
Let $k\ge2$ be any integer, and let $A$ be any valid multiset contained in $\F_p^2$ such that $|A| = p+k$. If $p\ge 4(k+1)^2 H_k - 2k$, then $\#\Sigma A \ge (k+2)p$.
\end{theorem}
\begin{remark}
Using the elementary bound $H_k \le \gamma + \log(k+1)$, where $\gamma$ denotes the Euler--Mascheroni constant, we see that Theorem~\ref{conjecture true for p large in terms of k} holds as long as $p \ge 4(k+1)^2 (\gamma + \log(k+1))$. Theorem~\ref{conjecture true for k small in terms of p} can thus be readily deduced from Theorem~\ref {conjecture true for p large in terms of k} as follows: If $k+1 \le \sqrt{p/(2\log p+1)}$ then $p \ge 4(k+1)^2 (\tfrac14 + \tfrac12 \log p)$. In this case we certainly have $p \ge (1 + 2\log 2) (k+1)^2$, whence $\log p \ge \frac45 + 2 \log(k+1)$ and $\tfrac14 + \tfrac12 \log p \ge \gamma + \log(k+1)$.
\end{remark}
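The chain of inequalities in the remark can be spot-checked numerically. The sketch below is ours; it runs over an arbitrary sample of pairs $(k,p)$ and confirms that the hypothesis $k+1\le\sqrt{p/(2\log p+1)}$ forces the hypothesis of Theorem~\ref{conjecture true for p large in terms of k}.

```python
from math import log, sqrt

gamma = 0.5772156649015329  # Euler--Mascheroni constant

def H(k):
    """k-th harmonic number."""
    return sum(1.0 / i for i in range(1, k + 1))

for k in range(2, 60):
    for p in range(11, 20000, 97):  # an arbitrary sample standing in for primes
        if k + 1 <= sqrt(p / (2 * log(p) + 1)):
            assert H(k) <= gamma + log(k + 1)                  # the elementary bound on H_k
            assert p >= 4 * (k + 1)**2 * (gamma + log(k + 1))  # intermediate step
            assert p >= 4 * (k + 1)**2 * H(k) - 2 * k          # hypothesis of the theorem
```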
\begin{proof}
If there are $k+1$ elements of $A$ in some nontrivial subgroup, then we are done by Lemma~\ref {conjecture true if enough on line lemma}. Therefore we may assume that there are at most $k$ points in each subgroup; in particular, $m_x\le k$ for all $x\in\F_p^2$. We now argue that if $\Sigma A$ is small, then there must be lots of pairs of elements of $A$ that add to the same element of $\F_p^2$, at which point we will be able to invoke Lemma~\ref {any j lemma}. We may assume that $\Sigma A \ne \F_p^2$, for otherwise we are done immediately.
For each $1 \le i \le k$, we define the level set $A_i = \{x \in \F_p^2 : m_x \ge i\}$. Notice that $A$ can be written precisely as the multiset union $A_1 \cup A_2 \cup \cdots \cup A_k$, and so $\sum_{i=1}^k \#A_i = |A| = p+k$. Let $B_i$ be the multiset formed by the sums of pairs of elements of $A_i$ not in the same subgroup:
\[
B_i = \big\{ x + y \colon x,y\in A_i,\, \lin x\ne\lin y \big\}.
\]
Note that $\text{\bf 0} \notin B_i$ (the restriction $\lin x\ne\lin y$ ensures that $x \ne -y$) and that every element of $B_i$ occurs with even multiplicity (the restriction $\lin x\ne\lin y$ ensures that $x \ne y$). It is not hard to estimate the relative sizes of $\#A_i$ and $|B_i|$: for each $x \in A_i$ there are at most $k$ elements of $A$ lying in the subgroup $\lin x$. Since each such $x$ occurs with multiplicity at least $i$ in $A$, there are at most $k/i$ distinct values of $y$ excluded by the condition $\lin x \ne \lin y$. Hence
$|B_i| \ge \#A_i (\#A_i - k/i)$, which implies that
\begin{equation}\label{bound on a_i}
\#A_i \le \frac{k}{i} + \sqrt{|B_i|}.
\end{equation}
Since $\sum_{i=1}^k \#A_i$ is fixed, this shows that $|B_i|$ cannot be very small on average. At the same time, $\#B_i$ cannot get very large: if $\sum_{i=1}^k \#B_i \ge (2k+1)p$, then (under our assumption that $\Sigma A \ne \F_p^2$) Lemma~\ref{kneser bound} already yields
\[
\#\Sigma A \ge \sum_{i=1}^k \#\Sigma A_i - (k-1)p > \sum_{i=1}^k \#B_i - (k-1)p \ge (k+2)p,
\]
where the middle inequality holds because $B_i \subseteq \Sigma A_i$. We may therefore assume henceforth that
\begin{equation} \label{bound on b_i}
\sum_{i=1}^k \#B_i < (2k+1)p.
\end{equation}
Let us now introduce the weighted height parameter
\begin{equation} \label{eta def}
\eta = \max_{1\le i \le k} \left\{ \frac{i|B_i|}{2\#B_i} : \#B_i > 0 \right\}.
\end{equation}
We shall show shortly that $\eta > k+1$. Assuming so, then for some $1 \le i \le k$, we have
\[
\frac{|B_i|}{2\#B_i} > \frac{k+1}{i},
\]
so by the pigeonhole principle, there exists some $z \in B_i$ (in particular $z\ne\text{\bf 0}$) occurring with multiplicity greater than $2(k+1)/i$; since this multiplicity is an even integer, it must be at least $2(k+2)/i.$ For each solution $x+y = z$ corresponding to an occurrence of $z$ in $B_i$, we have by construction that $x,y \notin \lin z$ and $m_x, m_y \ge i$, so for this particular choice of~$z$,
\[
\tfrac12 \sum_{t \in \F_p^2 \setminus\lin z } \min\{m_t, m_{z-t}\} \ge k+2.
\]
Furthermore, $\sum_{y\in\lin z} m_y \le k$ by assumption. Therefore we are free to apply Lemma~\ref{any j lemma} with $j = (k+2) - \sum_{y\in\lin z} m_y,$
which gives the lower bound
\[
\#\Sigma A \ge \min\{p,k+3\} \min\bigg\{p,p - k - 3 + \sum_{y\in\lin z} m_y\bigg\} \ge (k+3)(p-k-3) \ge (k+2)p
\]
(the last step used the inequality $p \ge (k+3)^2$, which certainly holds under the hypotheses of the theorem).
It remains only to verify that $\eta > k+1$. Summing the inequality~\eqref{bound on a_i} over all $1\le i\le k$ yields
\[
p+k = \sum_{i=1}^k \#A_i \le k H_k + \sum_{i=1}^k \sqrt{|B_i|} \le kH_k + \sqrt{2\eta} \sum_{i=1}^k \sqrt{\frac{\#B_i}{i}},
\]
using the definition~\eqref{eta def} of~$\eta$. We estimate the rightmost sum using Cauchy--Schwarz together with the inequality~\eqref{bound on b_i}:
\[
\sum_{i=1}^k \sqrt{\frac{\#B_i}{i}} \le \bigg(\sum_{i=1}^k \#B_i\bigg)^{1/2}\bigg(\sum_{i=1}^k \frac1i\bigg)^{1/2} < \sqrt{(2k+1) p H_k}.
\]
Combining the previous two inequalities gives $p+k - kH_k < \sqrt{ \eta(4k+2) p H_k}$, so that
\[
\eta > \frac{(p+k-kH_k)^2}{(4k+2) pH_k} > \frac{p(p+2(k-kH_k))}{(4k+2) pH_k} = \frac{(p+2k)-2kH_k}{(4k+2)H_k} \ge \frac{4(k+1)^2H_k-2kH_k}{(4k+2)H_k}
\]
by the hypothesis on the size of~$p$. In other words,
\[
\eta > \frac{2(k+1)^2-k}{2k+1} = k+1+\frac1{2k+1},
\]
which completes the proof of the theorem.
\end{proof}
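The closing identity, once the common factor $H_k$ is cancelled, can be confirmed in exact arithmetic. This short check is ours and not part of the argument.

```python
from fractions import Fraction

for k in range(1, 200):
    eta_bound = Fraction(4 * (k + 1)**2 - 2 * k, 4 * k + 2)  # H_k cancels from the ratio
    assert eta_bound == Fraction(2 * (k + 1)**2 - k, 2 * k + 1)
    assert eta_bound == k + 1 + Fraction(1, 2 * k + 1)
    assert eta_bound > k + 1  # which is what the proof needs
```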
\section{A wider conjecture}
\label{conjecture section}
As mentioned earlier, Conjecture~\ref{2d conjecture} is just one part of a more far-reaching conjecture concerning sumsets of multisets in ${\mathbb Z}_p^m$. Before formulating that wider conjecture, we must expand the definition of a valid multiset to ${\mathbb Z}_p^m$.
\begin{definition}
\label{more valid definition}
Let $p$ be an odd prime, and let $m$ be a positive integer. A multiset $A$ contained in ${\mathbb Z}_p^m$ is {\em valid} if:
\begin{enumerate}
\item[(a)] $\text{\bf 0}\notin A$; and
\item[(b)] for each $1\le d\le m$, every subgroup of ${\mathbb Z}_p^m$ that is isomorphic to ${\mathbb Z}_p^d$ contains fewer than $dp$ points of $A$, counting multiplicity.
\end{enumerate}
\end{definition}
\noindent When $m=1$, a multiset contained in ${\mathbb Z}_p$ is valid precisely when it does not contain $0$; when $m=2$ and $|A| < 2p$, this definition of valid agrees with Definition~\ref{valid definition} for multisets contained in $\F_p^2$. Note that in particular, Definition~\ref{more valid definition}(b) implies that every valid multiset contained in ${\mathbb Z}_p^m$ has at most $mp-1$ elements, counting multiplicity. We now give an example showing that this upper bound $mp-1$ can in fact be achieved. Throughout this section, let $\{x_1,\dots,x_m\}$ denote a generating set for ${\mathbb Z}_p^m$, and let ${\mathbf K}_d = \lin{x_1,\dots,x_d}$ denote the subgroup of ${\mathbb Z}_p^m$ generated by $\{x_1,\dots,x_d\}$, so that ${\mathbf K}_d \cong {\mathbb Z}_p^d$.
\begin{example}
\label{valid example}
Let $A_1$ be the multiset consisting of $p-1$ copies of $x_1$; for $2\le j\le m$ let $A_j = \{ x_j + ax_1 \colon 0\le a\le p-1 \}$; and define $B_m = \bigcup_{j=1}^m A_j$. Then $|B_m| = (p-1) + (m-1)p = mp-1$ and $\text{\bf 0}\notin B_m$. To verify that $B_m$ is a valid subset of ${\mathbb Z}_p^m$, let ${\mathbf H}$ be any subgroup of ${\mathbb Z}_p^m$ that is isomorphic to ${\mathbb Z}_p^d$; we need to show that $B_m$ contains fewer than $dp$ points of ${\mathbf H}$.
First suppose that $x_1\notin{\mathbf H}$, which implies that $bx_1\notin{\mathbf H}$ for every nonzero multiple $bx_1$ of~$x_1$. Then for each $2\le j\le m$, at most one of the elements of $A_j$ can be in ${\mathbf H}$, since the difference of any two such elements is a nonzero multiple of $x_1$. Therefore $|B_m\cap{\mathbf H}| = \ell$ for some $1\le\ell\le m-1$, and in fact all $\ell$ of these elements are of the form $x_j+ax_1$ for $\ell$ distinct values of~$j$. Since no such element is in the subgroup spanned by the others, we conclude that $d\ge\ell$, and so the necessary inequality $|B_m\cap{\mathbf H}|=\ell\le d<dp$ is amply satisfied.
Now suppose that $x_1\in{\mathbf H}$. Then for each $2\le j\le m$, either all or none of the elements of $A_j$ are in ${\mathbf H}$. By reindexing the $x_i$, we may choose an integer $1\le\ell\le m$ such that ${\mathbf H}$ contains $A_1 \cup \cdots \cup A_\ell$ and is disjoint from $A_{\ell+1} \cup \cdots \cup A_m$. In particular, $|B_m\cap{\mathbf H}| = (p-1)+(\ell-1)p = \ell p-1$. But ${\mathbf H}$ contains $\{x_1,\dots,x_\ell\}$ and hence $d\ge\ell$, so that $\ell p-1\le dp-1$ as required.
\end{example}
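For small parameters the validity of $B_m$ can also be confirmed by brute force over all subgroups. The sketch below is ours; it hard-codes $p=5$ and $m=2$, where the only proper nontrivial subgroups are the $p+1$ lines through the origin, and checks Definition~\ref{more valid definition} directly.

```python
p, m = 5, 2
x1 = (1, 0)
# B_2 from the example: p-1 copies of x1 together with x2 + a*x1 for 0 <= a <= p-1.
B = [x1] * (p - 1) + [(a % p, 1) for a in range(p)]
assert len(B) == m * p - 1 and (0, 0) not in B

# The p+1 rank-1 subgroups of Z_p^2 (lines through the origin):
lines = [{((c * u) % p, (c * v) % p) for c in range(p)}
         for (u, v) in [(1, 0)] + [(s, 1) for s in range(p)]]
for L in lines:
    assert sum(1 for b in B if b in L) < 1 * p  # fewer than dp points, d = 1
assert len(B) < 2 * p                           # fewer than dp points, d = 2
```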
We may now state our wider conjecture; Conjecture~\ref{2d conjecture} is the special case $q=1$ of part (a) of this conjecture.
\begin{conjecture}
\label {any d conjecture}
Let $p$ be an odd prime. Let $m$ be a positive integer, and let $A$ be a valid multiset of ${\mathbb Z}_p^m$ with $|A|\ge p$. Write $|A| = qp+k$ with $0\le k\le p-1$.
\begin{enumerate}
\item If $0\le k\le p-3$, then $\#\Sigma A \ge (k+2)p^q$.
\item If $k=p-2$, then $\#\Sigma A \ge p^{q+1}-1$.
\item If $k=p-1$, then $\#\Sigma A \ge p^{q+1}$.
\end{enumerate}
In particular, if $|A| = mp-1$ then $\Sigma A = {\mathbb Z}_p^m$.
\end{conjecture}
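A small instance of Conjecture~\ref{any d conjecture} can be verified exhaustively. The sketch below is ours; it treats $m=2$ and $p=3$, so that $q=1$ and validity reduces to having fewer than $p$ points on each of the four lines through the origin and fewer than $2p$ points in total.

```python
from itertools import combinations_with_replacement

p = 3
points = [(a, b) for a in range(p) for b in range(p) if (a, b) != (0, 0)]
lines = [{((c * u) % p, (c * v) % p) for c in range(p)}
         for (u, v) in [(1, 0), (0, 1), (1, 1), (1, 2)]]

def subsums(A):
    sums = {(0, 0)}
    for (a, b) in A:
        sums |= {((x + a) % p, (y + b) % p) for (x, y) in sums}
    return sums

def valid(A):
    return (len(A) < 2 * p
            and all(sum(1 for t in A if t in L) < p for L in lines))

for n in range(p, 2 * p):  # |A| = qp + k with q = 1 and 0 <= k <= p-1
    k = n - p
    if k <= p - 3:
        target = (k + 2) * p  # part (a)
    elif k == p - 2:
        target = p**2 - 1     # part (b)
    else:
        target = p**2         # part (c)
    for A in combinations_with_replacement(points, n):
        if valid(A):
            assert len(subsums(A)) >= target
```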
We remark that the quantity $dp$ in Definition~\ref{more valid definition}, bounding the number of elements in a valid multiset that can lie in a rank-$d$ subgroup, has been carefully chosen in light of this conjecture: by Conjecture~\ref {any d conjecture}(c), any valid multiset $A$ with at least $dp-1$ elements counting multiplicity must satisfy $\#\Sigma A \ge p^d$. In particular, if $A$ is a valid multiset contained in a subgroup ${\mathbf H} < {\mathbb Z}_p^m$ that is isomorphic to ${\mathbb Z}_p^d$, then $|A|\ge dp-1$ implies that $\Sigma A = {\mathbf H}$. Therefore allowing $dp$ elements in such a subgroup would always be ``wasteful''. Of course, the validity of Definition~\ref{more valid definition} for rank-$d$ subgroups depends crucially upon the truth of Conjecture~\ref {any d conjecture}(c) for $q=d-1$.
The conjecture is restricted to multisets $A$ with $|A|\ge p$ because we already know the truth for smaller multisets, for which the definition of ``valid'' is simply the condition that $\text{\bf 0} \notin A$: when $|A|\le p-1$, the best possible lower bound is $\#\Sigma A \ge |A|+1$ as in Lemma~\ref {CD lemma}. We remark that Peng~\cite[Theorem 2]{P1} has proved Conjecture~\ref {any d conjecture}(c) in the case $m=2$ and $q=1$, under even a slightly weaker hypothesis; in other words, he has shown that if $A$ is a valid multiset contained in $\F_p^2$ with $|A| = 2p-1$, then $\Sigma A = \F_p^2$. (We remark that Mann and Wou~\cite{MW} have proved in the case that $A$ is actually a set---that is, a multiset with distinct elements---that $\#A = 2p-2$ suffices to force $\Sigma A = \F_p^2$.) Peng considers the higher-rank groups ${\mathbb Z}_p^m$ as well, but the multisets he allows (see \cite[Theorem 1]{P2}) form a much wider class than our valid multisets, and so his conclusions are much weaker than Conjecture~\ref{any d conjecture} for $q\ge2$. Finally, we mention that we have completely verified Conjecture~\ref{any d conjecture} by exhaustive computation for the groups ${\mathbb Z}_p^2$ with $p\le 7$ and also for the group ${\mathbb Z}_3^3$.
It is easy to see that all of the lower bounds in Conjecture~\ref{any d conjecture}(a), if true, would be best possible. Given $q\ge1$ and $0\le k\le p-3$, let $A'$ be any valid multiset contained in ${\mathbf K}_q$ with $|A'| = qp-1$ (such as the one given in Example~\ref{valid example} with $m=q$), and let $A$ be the union of $A'$ with $k+1$ copies of $x_{q+1}$. Then $\Sigma A = \{ y + ax_{q+1}\colon y\in \Sigma A',\, 0\le a\le k+1\}$ and thus $\#\Sigma A' = (k+2)\#\Sigma A \le (k+2)p^q$ since $\Sigma A$ is contained in~${\mathbf K}_q$. Similarly, the fact that there exists a valid multiset contained in ${\mathbf K}_{q+1}$ with $qp+(q-1)=(q+1)p-1$ elements (such as the one given in Example~\ref{valid example} with $m=q+1$) shows that the lower bound in Conjecture~\ref{any d conjecture}(c) would be best possible, since the sumset of this multiset would still be contained in ${\mathbf K}_{q+1}$ and thus would have at most $p^{q+1}$ distinct elements.
The lower bound in Conjecture~\ref{any d conjecture}(b) might seem counterintuitive, especially in comparison with the pattern established in Conjecture~\ref{any d conjecture}(a). However, we can give an explicit example showing that the lower bound $p^{q+1}-1$ for $\#\Sigma A$ cannot be increased:
\begin{example}
\label{border example}
When $p$ is an odd prime, define $B'_m$ to be the set $B_m$ from Example~\ref{valid example} with one copy of $x_1$ removed, so that $B'_m$ contains $x_1$ with multiplicity only $p-2$. Since $B_m$ is a valid multiset contained in ${\mathbb Z}_p^m$, so is $B'_m$. We have $|B'_m| = |B_m|-1 = (mp-1)-1 = (m-1)p + (m-2)$, and we claim that $-x_1\notin \Sigma B'_m$; this will imply that $\#\Sigma B'_m \le p^m-1$, and so the lower bound for $\#\Sigma A$ in Conjecture~\ref{any d conjecture}(b) cannot be increased. (In fact it is not hard to show that every other element of ${\mathbb Z}_p^m$ is in $\Sigma B'_m$, and so $\#\Sigma B'_m$ is exactly equal to $p^m-1$.)
Suppose for the sake of contradiction that $-x_1\in \Sigma B'_m$, and let $C$ be a submultiset of $B'_m$ such that $-x_1 = \sum_{y\in C}y$. For each $2\le j\le m$, define $\ell_j = |C\cap A_j|= \#\big( C \cap \{ x_j + ax_1\colon 0\le a\le p-1 \} \big)$. Then
\[
-x_1 = \sum_{y\in C} y = t x_1 + \ell_2 x_2 + \ell_3 x_3 + \cdots + \ell_m x_m
\]
for some integer~$t$. It follows from this equation that each $\ell_j$ must equal either $0$ or $p$. However, if $\ell_j = p$ then
\[
\sum_{y \in C\cap A_j} y = \sum_{0\le a\le p-1} (x_j + ax_1) = px_j + \frac{p(p-1)}2 x_1 = \text{\bf 0}
\]
(since $p$ is odd). So in either case, if $s = |C\cap A_1|$ is the multiplicity with which $x_1$ appears in $C$, then
\[
-x_1 = \sum_{y\in C} y = s x_1 + \sum_{j=2}^m \sum_{y\in C\cap A_j} y = s x_1 + \text{\bf 0} + \cdots + \text{\bf 0}.
\]
This is a contradiction, however, since $s$ must lie between $0$ and $p-2$. Therefore $-x_1$ is indeed not an element of $\Sigma B'_m$, as claimed.
\end{example}
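The computation in the example is easy to replicate for a small instance. The sketch below is ours; hard-coding $p=3$ and $m=2$, it confirms both that $-x_1\notin\Sigma B'_2$ and that $\#\Sigma B'_2 = p^m-1$ exactly.

```python
p, m = 3, 2
x1 = (1, 0)
# B'_2: p-2 copies of x1 together with x2 + a*x1 for 0 <= a <= p-1.
Bprime = [x1] * (p - 2) + [(a % p, 1) for a in range(p)]

def subsums(A):
    sums = {(0, 0)}
    for (a, b) in A:
        sums |= {((x + a) % p, (y + b) % p) for (x, y) in sums}
    return sums

S = subsums(Bprime)
minus_x1 = ((-x1[0]) % p, (-x1[1]) % p)
assert minus_x1 not in S   # -x1 is the unique missing element
assert len(S) == p**m - 1
```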
The line of questioning in this section turns out to be uninteresting when $p=2$: when the multiset $A$ does not contain $\text{\bf 0}$, the condition that no rank-$1$ subgroup of ${\mathbb Z}_2^m$ contain $2$ points of $A$ is simply equivalent to $A$ not containing any element with multiplicity greater than~$1$. It is easy to check that if $A$ consists of any $q$ points in ${\mathbb Z}_2^m$ that do not lie in any subgroup isomorphic to ${\mathbb Z}_2^{q-1}$, then $\Sigma A$ fills out the entire rank-$q$ subgroup generated by~$A$. In other words, the analogous definition of ``valid'' for multisets in ${\mathbb Z}_2^m$ would simply be a set of $q$ points that generate a rank-$q$ subgroup of ${\mathbb Z}_2^m$, and we would always have $\#\Sigma A = 2^{|A|} = 2^{\#A}$ for valid (multi)sets in~${\mathbb Z}_2^m$.
\section*{Acknowledgments}
The collaboration leading to this paper was made possible thanks to Jean--Jacques Risler, Richard Kenyon, and especially Ivar Ekeland; the authors also thank the University of British Columbia and the Institut d'\'Etudes Politiques de Paris for their undergraduate exchange program.
The first author thanks Andrew Granville for conversations that explored this topic and eventually led to the formulation of the conjectures herein.
\title{On the maximum number of isosceles right triangles in a finite point set}
\begin{abstract}
Let $Q$ be a finite set of points in the plane. For any set $P$ of points in the plane, $S_{Q}(P)$ denotes the number of similar copies of $Q$ contained in $P$. For a fixed $n$, Erd\H{o}s and Purdy asked to determine the maximum possible value of $S_{Q}(P)$, denoted by $S_{Q}(n)$, over all sets $P$ of $n$ points in the plane. We consider this problem when $Q=\triangle$ is the set of vertices of an isosceles right triangle. We give exact solutions when $n\leq9$, and provide new upper and lower bounds for $S_{\triangle}(n)$.
\end{abstract}
\section{Introduction}
In the 1970s Paul Erd\H{o}s and George Purdy \cite{D,E,F} posed the
question, \textquotedblleft Given a finite set of points $Q$, what is
the maximum number of similar copies $S_{Q}(n)$ that can be
determined by $n$ points in the plane?\textquotedblright. This problem remains open in general. However, there has been some progress regarding the order of magnitude of this maximum as a function of $n$. Elekes and Erd\H{o}s \cite{B} noted that $S_{Q}\left( n\right) \leq n\left(
n-1\right) $ for any pattern $Q$ and they also gave a quadratic lower bound
for $S_{Q}(n)$ when $\left\vert Q\right\vert =3$ or when all the
coordinates of the points in $Q$ are algebraic numbers. They also
proved a slightly subquadratic lower bound for all other patterns
$Q$. Later, Laczkovich and Ruzsa \cite{LR97} characterized precisely
those patterns $Q$ for which $S_{Q}\left( n\right) =\Theta(n^{2})$. In spite of this, the coefficient of the quadratic term is not known for any non-trivial pattern; it is not even known if $\lim_{n\rightarrow \infty}S_Q(n)/n^2$ exists!
Apart from being a natural question in Discrete Geometry, this
problem also arose in connection to optimization of algorithms
designed to look for patterns among data obtained from scanners,
digital cameras, telescopes, etc. (See \cite{Bra, BrMoPa, BP05} for
further references.)
Our paper considers the case when $Q$ is the set of vertices of an isosceles right triangle.
The case when $Q$ is the set of vertices of an equilateral triangle has been considered in
\cite{A}. To avoid redundancy, we refer to an isosceles right
triangle as an ${\mathsf{IRT}}$ for the remainder of the paper. We
begin with some definitions.
Let $P$ denote a finite set of points in the plane. We
define $S_{\triangle}(P)$ to be the number of triplets in $P$ that are the vertices of an ${\mathsf{IRT}}$. Furthermore, let
\[
S_{\triangle}(n)=\max_{|P|=n}S_{\triangle}(P).
\]
As mentioned before, Elekes and Erd\H{o}s established that $S_{\triangle}(n)=\Theta(n^2)$ and it is implicit from their work that $1/18 \leq \liminf_{n\rightarrow\infty}
S_{\triangle}(n)/n^{2} \leq 1$. The main goal of this paper is to
derive improved constants that bound the function
$S_{\triangle}(n)/n^{2}$. Specifically, in Sections \ref{sec:
lower} and \ref{sec: upper}, we prove the following result:
\begin{theorem}
\[
0.433064 < \liminf_{n\rightarrow\infty}
\frac{S_{\triangle}(n)}{n^{2}}\leq\frac{2}{3} < 0.66667.
\]
\end{theorem}
We then proceed to determine, in Section \ref{sec: small cases}, the
exact values of $S_{\triangle }(n)$ when $3\leq n\leq9$. Several
ideas for the proofs of these bounds come from the equivalent bounds
for equilateral triangles in \cite{A}.
\section{Lower Bound\label{sec: lower}}
We use the following definition.
For $z\in P$, let $R_{\pi/2}(z,P)$ be the $\pi/2$
counterclockwise rotation of $P$ with center $z$. Furthermore, let $\deg
_{\pi/2}(z)$ be the number of isosceles right triangles in $P$ such that $z$
is the right-angle vertex of the triangle.
If $z\in P$, then $\deg_{\pi/2}(z)$ can be computed by simply rotating our
point set $P$ by $\pi/2$ about $z$ and counting the number of points in the
intersection other than $z$. Therefore,
\begin{equation}
\deg_{\pi/2}(z)=|P\cap R_{\pi/2}(z,P)|-1. \label{degpi/2}
\end{equation}
Since an ${\mathsf{IRT}}$ has only one right angle,
\[
S_{\triangle}(P) = \sum_{z\in P} \deg_{\pi/2}(z).
\]
That is, the sum computes the number of ${\mathsf{IRT}}$s in $P$. From this identity
an initial $5/12$ lower bound can be derived for $\liminf_{n\rightarrow \infty}S_\triangle(n)/n^2$ using the set
\[
P=\left\{ (x,y)\in\mathbb{Z}^{2}:0\leq x\leq \sqrt{n},0\leq y\leq \sqrt{n}\right\}.
\]
We now improve this bound.
The following theorem generalizes our method for finding a lower bound. We denote by $\Lambda$ the lattice generated by the points
$(1,0)$ and $(0,1)$; furthermore, we refer to points in $\Lambda$ as \emph{lattice points}. The next result provides a formula for the leading term of $S_{\triangle}(P)$ when our
points in $P$ are lattice points enclosed by a given shape. This theorem, its
proof, and notation, are similar to Theorem 2 in \cite{A}, where the
authors obtained a similar result for equilateral triangles in place of
${\mathsf{IRT}}$s.
\begin{theorem}
\label{integral} Let K be a compact set with finite perimeter and area 1.
Define $f_{K}:\mathbb{C}\rightarrow\mathbb{R}^{+}$ as $f_{K}(z)=Area(K\cap
R_{\pi/2}(z,K))$ where $z\in K$. If $K_{n}$ is a similar copy of $K$
intersecting $\Lambda$ in exactly $n$ points, then
\[
S_{\triangle}(K_{n}\cap\Lambda)=\left( \int_{K}f_{K}(z)\,dz\right)
n^{2}+O(n^{3/2}).
\]
\end{theorem}
\begin{proof}
Given a compact set $L$ with finite area and perimeter, we have that
\[
\left\vert rL\cap\Lambda\right\vert ={\mathrm{Area}(rL)}
+O(r)=r^{2}\mathrm{Area}(L)+O(r),
\]
where $rL$ is the scaling of $L$ by a factor $r$. Therefore,
\begin{align*}
S_{\triangle}(K_{n}\cap\Lambda) & =\sum_{z\in
K_{n}\cap\Lambda}|(\Lambda \cap K_{n})\cap
R_{\pi/2}(z,(K_{n}\cap \Lambda))|-1\\
& =\sum_{z\in K_{n}\cap\Lambda}\mathrm{Area}(K_{n}\cap R_{\pi/2}(z,K_{n}))+O(\sqrt{n}).
\end{align*}
We see that each error term in the sum is bounded by the perimeter
of $K_{n}$, which is finite by hypothesis. Thus,
\begin{align*}
S_{\triangle}(K_{n}\cap\Lambda) & =n^{2}\sum_{z\in K_{n}\cap\Lambda}\frac{1}{n^{2}}\mathrm{Area}(K_{n}\cap R_{\pi/2}(z,K_{n}))+O(n^{3/2})\\
& =n^{2}\sum_{z\in K_{n}\cap\Lambda}\frac{1}{n}\mathrm{Area}\left(\frac{1}{\sqrt{n}}\left( K_{n}\cap R_{\pi/2}(z,K_{n})\right) \right)+O(n^{3/2})\\
& =n^{2}\sum_{z\in K_{n}\cap\Lambda}\frac{1}{n}\mathrm{Area}\left( \frac{1}{\sqrt{n}}K_{n}\cap R_{\pi/2}\left( \frac{z}{\sqrt{n}},\frac{1}{\sqrt{n}}K_{n}\right) \right) +O(n^{3/2})\text{.}
\end{align*}
The last sum is a Riemann approximation for the function $f_{(1/\sqrt{n})K_{n}}$ over the region $(1/\sqrt{n})K_{n}$, thus
\[
S_{\triangle}(K_{n}\cap\Lambda)=n^{2}\left( \int_{\frac{1}{\sqrt{n}}K_{n}}f_{\frac{1}{\sqrt{n}}K_{n}}(z)\,dz+O\left( \frac{1}{\sqrt{n}}\right)
\right) +O(n^{3/2}).
\]
Since
\[
\mathrm{Area}\left( \frac{1}{\sqrt{n}}K_{n}\right) =\frac{1}{n}\mathrm{Area}(K_{n})=\frac{1}{n}(n+O(\sqrt{n}))=1+O\left( \frac{1}{\sqrt{n}}\right) =\mathrm{Area}(K)+O\left( \frac{1}{\sqrt{n}}\right) ,
\]
it follows that,
\[
\int_{\frac{1}{\sqrt{n}}K_{n}}f_{\frac{1}{\sqrt{n}}K_{n}}(z)\,dz=\int_{K}f_{K}(z)\,dz+O\left( \frac{1}{\sqrt{n}}\right) \text{.}
\]
As a result,
\begin{align*}
S_{\triangle}(K_{n}\cap\Lambda) & =n^{2}\int_{\frac{1}{\sqrt{n}}K_{n}}f_{\frac{1}{\sqrt{n}}K_{n}}(z)\,dz+O(n^{3/2})\\
& =n^{2}\int_{K}f_{K}(z)\,dz+O(n^{3/2})\text{.}\qedhere
\end{align*}
\end{proof}
The importance of this theorem can be seen immediately. Although our $5/12$ lower bound for $\liminf_{n\rightarrow\infty} S_{\triangle}(n)/n^{2}$ was derived by summing the degrees of each point in a square lattice, the same result can be obtained by letting $K$ be the square $\{ (x,y):|x| \leq \frac{1}{2}, |y| \leq \frac{1}{2} \}$. It follows that $f_K(x,y)=(1-|x|-|y|)(1-||x|-|y||)$ and
\[
S_{\triangle}(K_{n}\cap\Lambda)=\left( \int_{K}f_{K}(z)\,dz\right)
n^{2}+O(n^{3/2}) = \frac{5}{12}n^{2}+O(n^{3/2})\text{.}
\]
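The value $5/12$ can also be confirmed numerically. The sketch below is ours: it approximates $\int_K f_K$ by a midpoint Riemann sum over the unit square, using the displayed formula for $f_K$; the grid size $N$ is an arbitrary choice.

```python
N = 400  # grid resolution for the midpoint rule
total = 0.0
for i in range(N):
    for j in range(N):
        x = -0.5 + (i + 0.5) / N
        y = -0.5 + (j + 0.5) / N
        total += (1 - abs(x) - abs(y)) * (1 - abs(abs(x) - abs(y)))
total /= N * N  # each cell has area 1/N^2, and Area(K) = 1
assert abs(total - 5.0 / 12.0) < 1e-3
```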
An improved lower bound will follow
provided that we find a set $K$ such that the value for the
integral in Theorem \ref{integral} is larger than $5/12$. We get a larger value for the integral by letting $K$ be
the circle $\{ z \in \mathbb{C} : |z|\leq 1/\sqrt{\pi} \}$. In this case
\begin{equation} \label{circle function}
f_K(z)=\frac{2}{\pi} \arccos \left(\frac{\sqrt{2 \pi}}{2} |z|\right)-|z|\sqrt{\frac{2}{\pi}-|z|^2}
\end{equation}
and
\[
S_{\triangle}(K_{n}\cap\Lambda)=\left( \int_{K}f_{K}(z)\,dz\right)
n^{2}+O(n^{3/2}) = \left( \frac{3}{4}-\frac{1}{\pi}\right)n^{2}+O(n^{3/2})\text{.}
\]
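Integrating the formula \eqref{circle function} in polar coordinates confirms the constant $3/4-1/\pi\approx0.43169$ numerically. The midpoint-rule sketch below is ours, with an arbitrary step count $M$.

```python
from math import pi, acos, sqrt

r = 1 / sqrt(pi)  # radius of the circle K of area 1
M = 20000
total = 0.0
for i in range(M):
    rho = (i + 0.5) * r / M
    f = (2 / pi) * acos(sqrt(2 * pi) / 2 * rho) - rho * sqrt(2 / pi - rho**2)
    total += f * 2 * pi * rho * (r / M)  # radial slice of the area integral
assert abs(total - (0.75 - 1 / pi)) < 1e-4
```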
It was conjectured in \cite{A} that not only does $\lim_{n\rightarrow \infty} E(n)/n^2$ exist, but it is attained by the uniform lattice in the shape of a circle. ($E(n)$ denotes the maximum number of equilateral triangles determined by $n$ points in the plane.) The corresponding conjecture in the case of the isosceles right triangle turns out to be false. That is, if $\lim_{n\rightarrow \infty}S_{\triangle}(n)/n^2$ exists, then it must be strictly greater than $3/4-1/\pi$. Define $\overline{\Lambda}$ to be the translation of $\Lambda$ by the vector $(1/2,1/2)$. The following lemma will help us to improve our lower bound.
\begin{lemma} \label{Lem: lambda}
If $(j,k)\in \mathbb{R}^2$ and $\Lambda^\prime=\Lambda$ or $\Lambda^\prime=\overline{\Lambda}$, then
\[
R_{\pi/2}((j,k),\Lambda^\prime) \cap \Lambda^\prime=\left\{
\begin{array}{l}
\Lambda^\prime \text{ if } (j,k)\in \Lambda \cup \overline{\Lambda}, \\
\varnothing \text{ else.}
\end{array}
\right.
\]
\end{lemma}
\begin{proof}
Observe that
\[R_{\pi/2} ((j,k),(s,t)) =
\begin{pmatrix}
0 & -1\\
1& 0\\
\end{pmatrix}
\begin{pmatrix}
s-j\\
t-k\\
\end{pmatrix}
+\begin{pmatrix}
j\\
k\\
\end{pmatrix}
=
\begin{pmatrix}
k-t+j\\
s-j+k\\
\end{pmatrix}.
\]
First suppose $(s,t)\in \Lambda$. Since $s,t\in \mathbb{Z}$, then $(k-t+j,s-j+k)\in \Lambda$ if and only if $k-j\in \mathbb{Z}$ and $k+j\in \mathbb{Z}$. This can only happen when either both $j$ and $k$ are half-integers (i.e., $(j,k)\in \overline{\Lambda}$), or both $j$ and $k$ are integers (i.e., $(j,k)\in \Lambda$). Now suppose $(s,t)\in \overline{\Lambda}$. In this case, because both $s$ and $t$ are half-integers, we conclude that $(k-t+j, s-j+k)\in \overline{\Lambda}$ if and only if both $k-j\in \mathbb{Z}$ and $k+j\in \mathbb{Z}$. Once again this occurs if and only if $(j,k)\in \Lambda \cup \overline{\Lambda}$.
\end{proof}
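The matrix computation in the proof is mechanical enough to check directly. The sketch below is ours; it uses exact rational arithmetic on a finite patch of $\Lambda$ and tests both branches of the lemma.

```python
from fractions import Fraction as F

def rot(center, pt):
    """pi/2 counterclockwise rotation of pt about center, as in the proof."""
    (j, k), (s, t) = center, pt
    return (k - t + j, s - j + k)

window = [(F(s), F(t)) for s in range(-3, 4) for t in range(-3, 4)]  # a patch of Lambda

def in_lambda(q):  # membership in the integer lattice
    return q[0].denominator == 1 and q[1].denominator == 1

half = F(1, 2)
# Centers in Lambda, or in its translate by (1/2, 1/2), map Lambda into Lambda:
for c in [(F(2), F(-1)), (half, 3 * half), (-3 * half, half)]:
    assert all(in_lambda(rot(c, q)) for q in window)
# Any other center maps every point of Lambda off Lambda:
for c in [(F(3, 10), F(7, 10)), (half, F(1, 4))]:
    assert not any(in_lambda(rot(c, q)) for q in window)
```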
Recall that if $K$ denotes the circle of area 1, then $(3/4-1/\pi)n^2$ is the leading term of $S_{\triangle}(K_n\cap \Lambda)$. The previous lemma implies that, if we were to adjoin a point $z\in \mathbb{R}^2$ to $K_n\cap \Lambda$ such that $z$ has half-integer coordinates and is located near the center of the circle formed by the points of $K_n\cap \Lambda$, then $\deg_{\pi/2}(z)$ will approximately equal $|K_n\cap \Lambda|$. We obtain the next theorem by further exploiting this idea.
\begin{theorem}
\[.43169\approx \frac{3}{4}-\frac{1}{\pi} < .433064 < \liminf_{n\rightarrow \infty}\frac{S_{\triangle}(n)}{n^2}\]
\end{theorem}
\begin{proof}
Let $K$ be the circle of area 1, $A= K_{m_1}\cap \Lambda$, and $B = K_{m_2}\cap\overline{\Lambda}$. Moreover, position $B$ so that its points are centered on the circle formed by the points in $A$ (See Figure \ref{Fig: Centered}). We let $n=m_1+m_2=|A \cup B|$ and $m_2=x\cdot m_1$, where $0<x<1$ is a constant to be determined.
\begin{figure}[h]
\centering
\includegraphics{AUBandPlot.eps}
\caption{(a) Set $B$ (gray points) centered on set $A$ (black
points), (b) Plot of the $n^2$ coefficient of $S_\triangle(A \cup
B)$ as $x$ ranges from 0 to 1.} \label{Fig: Centered}
\end{figure}
We proceed to maximize the leading coefficient of
$S_{\triangle}(A\cup B)$ as $x$ varies from 0 to $1$. By Lemma
\ref{Lem: lambda}, there cannot exist an \irt$\:$whose right-angle
vertex lies in $A$ while one $\pi/4$ vertex lies in $A$ and the
other lies in $B$. Similarly, there cannot exist an \irt whose
right-angle vertex lies in $B$ while one $\pi/4$ vertex lies in $A$
and the other lies in $B$. Therefore, each \irt with vertices in
$A\cup B$ must fall under one of the following four cases:
\bigskip
\\
\noindent \textit{Case 1: All three vertices in $A$}. Using Theorem \ref{integral}, it follows that there are $(3/4 - 1/\pi)m_1^2 + O(m_1^{3/2})$ {\irt}s in this case. Since $m_1=n/(1+x)$, the number of {\irt}s in terms of $n$ equals
\begin{equation}\label{Case 1}
\left(\frac{3}{4} - \frac{1}{\pi}\right)\frac{n^2}{(1+x)^2} + O(n^{3/2}).
\end{equation}
\\
\noindent \textit{Case 2: All three vertices in $B$}. By Theorem \ref{integral}, there are $(3/4 - 1/\pi)m_2^2 + O(m_2^{3/2})$ {\irt}s in this case. This time $m_2=nx/(1+x)$ and the number of {\irt}s in terms of $n$ equals
\begin{equation}\label{Case 2}
\left(\frac{3}{4} - \frac{1}{\pi}\right)\frac{n^2x^2}{(1+x)^2} + O(n^{3/2}).
\end{equation}
\\
\noindent \textit{Case 3: Right-angle vertex in $B$, $\pi/4$ vertices in $A$}. The relationship given by Lemma \ref{Lem: lambda} allows us to slightly adapt the proof of Theorem \ref{integral} in order to compute the number of {\irt}s in this case. The integral approximation to the number of {\irt}s in this case is given by
\[\sum_{z\in K_{m_2}\cap \overline{\Lambda}} |(K_{m_1}\cap\Lambda)\cap R_{\pi/2}(z,(K_{m_1}\cap\Lambda))| = m_1^2\left(\int_{\frac{1}{\sqrt{m_1}}K_{m_2}} f_{\frac{1}{\sqrt{m_1}}K_{m_1}}(z)\,dz \right) + O(m_1^{3/2}).\]
But
\[\mathrm{Area}\left(\frac{1}{\sqrt{m_1}}K_{m_2}\right) = \mathrm{Area}\left(\sqrt{\frac{m_2}{m_1}}K\right) + O(\sqrt{m_1}),\]
so
\[m_1^2\left(\int_{\frac{1}{\sqrt{m_1}}K_{m_2}} f_{\frac{1}{\sqrt{m_1}}K_{m_1}}(z)\,dz \right) + O(m_1^{3/2}) = m_1^2\left(\int_{\sqrt{\frac{m_2}{m_1}}K} f_{K}(z)\,dz \right) + O(m_1^{3/2}).\]
Expressing this value in terms of $n$ gives
\begin{equation}\label{Case 3}
\left(\int_{\sqrt{x}K} f_{K}(z)\,dz \right)\frac{n^2}{(1+x)^2} + O(n^{3/2}).
\end{equation}
\\
\noindent \textit{Case 4: Right-angle vertex in $A$, $\pi/4$ vertices in $B$}. As in Case 3, the number of {\irt}s is given by
\begin{equation}\label{Eqn: integral}
\sum_{z\in K_{m_1}\cap \Lambda} |(K_{m_2}\cap\overline{\Lambda})\cap R_{\pi/2}(z,(K_{m_2}\cap \overline{\Lambda}))| = m_2^2\left(\int_{\frac{1}{\sqrt{m_2}}K_{m_1}} f_{\frac{1}{\sqrt{m_2}}K_{m_2}}(z)\,dz \right) + O(m_2^{3/2}).
\end{equation}
Now recall that $f_{(1/\sqrt{m_2})K_{m_2}}(z) = \mathrm{Area}\left((1/\sqrt{m_2})K_{m_2}\cap R_{\pi/2}(z,(1/\sqrt{m_2})K_{m_2})\right)$. It follows that $f_{(1/\sqrt{m_2})K_{m_2}}(z_0) = 0$ if and only if $z_0$ is farther than $\sqrt{2/\pi}$ from the center of $(1/\sqrt{m_2})K_{m_2}$. Thus for small enough values of $m_2$, the region of integration in Equation (\ref{Eqn: integral}) is actually $(\sqrt{2/m_2})K_{m_2}$, so it does not depend on $m_1$. We consider two subcases.
First, if $x \leq 1/2$ (i.e., $m_2 \leq m_1/2$), then
\[\sqrt{\frac{2}{\pi}} = \frac{1}{\sqrt{m_2}}\frac{\sqrt{2m_2}}{\sqrt{\pi}} \leq \frac{1}{\sqrt{m_2}}\frac{\sqrt{2}}{\sqrt{\pi}}\frac{\sqrt{m_1}}{\sqrt{2}} = \frac{1}{\sqrt{m_2}}\sqrt{\frac{m_1}{\pi}}.\]
The left side of the above inequality is the radius of $(\sqrt{2/m_2})K_{m_2}$, while the right side is the radius of $(1/\sqrt{m_2})K_{m_1}$; thus the region of integration where $f_{\frac{1}{\sqrt{m_2}}K_{m_2}}$ is nonzero equals $(\sqrt{2/m_2})K_{m_2}$. Hence, the number of {\irt}s equals
\begin{align}\nonumber
m_2^2\left(\int_{\sqrt{\frac{2}{m_2}}K_{m_2}} f_{\frac{1}{\sqrt{m_2}}K_{m_2}}(z)\,dz \right) + O(m_2^{3/2}) &= m_2^2\left(\int_{\sqrt{2}K} f_{K}(z)\,dz \right) + O(m_2^{3/2})\\ \label{Case 4A} &= \left(\int_{\sqrt{2}K} f_{K}(z)\,dz \right)\frac{n^2x^2}{(1+x)^2} + O(n^{3/2}).
\end{align}
Now we consider the case $x>1/2$ (i.e., $m_2> m_1/2$). In this case, $f_{\frac{1}{\sqrt{m_2}}K_{m_2}}$ is nonzero for all points in $\frac{1}{\sqrt{m_2}}K_{m_1}$. Thus the number of {\irt}s in this case equals
\begin{align}\nonumber
m_2^2\left(\int_{\frac{1}{\sqrt{m_2}}K_{m_1}} f_{\frac{1}{\sqrt{m_2}}K_{m_2}}(z)\,dz \right) + O(m_2^{3/2}) &= m_2^2\left(\int_{\sqrt{\frac{m_1}{m_2}}K} f_{K}(z)\,dz \right) + O(m_2^{3/2})\\ \label{Case 4B}
&=\left(\int_{\sqrt{\frac{1}{x}}K} f_{K}(z)\,dz \right)\frac{n^2x^2}{(1+x)^2} + O(n^{3/2})
\end{align}
By Equation (\ref{circle function}), we have that for $t>0$,
\begin{align*}
\int_{tK}f_{K}(z)\,dz =&2\pi \int_{0}^{t/\sqrt{\pi }}\left( \frac{2}{\pi }
\arccos ( \frac{\sqrt{2\pi }}{2}r) -r\sqrt{\frac{2}{\pi }-r^{2}}
\right) r\,dr \\
=&\frac{1}{2\pi }\left( 4t^2\arccos ( \frac{t}{\sqrt{2}}) +2\arcsin
( \frac{t}{\sqrt{2}}) -t(t^{2}+1)\sqrt{2-t^{2}}\right).
\end{align*}
Therefore, putting all four cases together (i.e., expressions
(\ref{Case 1}), (\ref{Case 2}), (\ref{Case 3}), and either
(\ref{Case 4A}) or (\ref{Case 4B})), we obtain that the $n^2$
coefficient of $S_\triangle (A\cup B)$ equals
\begin{equation*}
\frac{1}{4\pi (x+1)^{2}}\left(8x\arccos \sqrt{\frac{x}{2}}+4\arcsin \sqrt{\frac{x}{2}}+(5\pi -4)x^{2}+ (3\pi -4)-2(x+1)\sqrt{2x-x^{2}}\right)
\end{equation*}
if $0<x\leq1/2$, or
\begin{multline*}
\frac{1}{4\pi (x+1)^{2}}\left(8x\left( \arccos \sqrt{\frac{x}{2}}+\arccos \sqrt{
\frac{1}{2x}}\right) +4\arcsin \sqrt{\frac{x}{2}}+4x^{2}\arcsin \sqrt{\frac{1
}{2x}}+ \right.\\ \left.
(3\pi -4)(x^{2}+1)-2(x+1)\left( \sqrt{2x-x^{2}}+\sqrt{2x-1}\right) \right)
\end{multline*}
if $1/2<x<1$. Letting $x$ vary from 0 to 1, it turns out that this
coefficient is maximized (see Figure \ref{Fig: Centered}) when $x\approx
.0356067$ (this corresponds to when the radius of $B$ is
approximately 18.87\% of the radius of $A$). Letting $x$ equal this
value gives $0.433064$ as a decimal approximation to the maximum
value attained by the $n^2$ coefficient.
\end{proof}
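The closed form just derived can be sanity-checked numerically. The following Python sketch (our illustration, not part of the proof) compares it against Simpson's rule applied to the integral above; at $t=1$ the closed form reduces to $3/4-1/\pi$, the leading coefficient of Theorem \ref{integral}.

```python
import math

def f_closed(t):
    # closed form for the integral of f_K over tK derived above, 0 <= t <= sqrt(2)
    return (4*t**2*math.acos(t/math.sqrt(2)) + 2*math.asin(t/math.sqrt(2))
            - t*(t**2 + 1)*math.sqrt(2 - t**2)) / (2*math.pi)

def f_numeric(t, steps=4000):
    # Simpson's rule for 2*pi * int_0^{t/sqrt(pi)} ((2/pi) arccos(sqrt(2 pi) r / 2)
    #                                               - r sqrt(2/pi - r^2)) r dr
    def g(r):
        return 2*math.pi*r*((2/math.pi)*math.acos(math.sqrt(2*math.pi)*r/2)
                            - r*math.sqrt(2/math.pi - r*r))
    a, b = 0.0, t/math.sqrt(math.pi)
    h = (b - a)/steps
    s = g(a) + g(b)
    for i in range(1, steps):
        s += g(a + i*h)*(4 if i % 2 else 2)
    return s*h/3

closed_at_1 = f_closed(1.0)     # equals 3/4 - 1/pi
numeric_at_1 = f_numeric(1.0)
```
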
At this point, one might be tempted to further increase the
quadratic coefficient by placing a third set of lattice points
arranged in a circle and centered on the circle formed by $B$. It
turns out that forming such a configuration does not improve the
results in the previous theorem. This is due to Lemma
\ref{Lem: lambda}. More specifically, given our construction from
the previous theorem, there is no place to adjoin a point $z$ to the
center of $A\cup B$ such that $z\in \Lambda$ or $z\in
\overline{\Lambda}$. Hence, if we were to add the point $z$ to the
center of $A\cup B$, then any new {\irt}s would have their
right-angle vertex located at $z$ with one $\pi/4$ vertex in $A$ and
the other $\pi/4$ vertex in $B$. Doing so can produce at most $2m_2
= 2xm_1\approx .0712m_1$ new {\irt}s (recall that $x\approx
.0356067$ in our construction). On the other hand, adding $z$ to
the perimeter of $A$ gives us $m_1f_{K}(1/\sqrt{\pi}) \approx
.1817m_1$ new {\irt}s.
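The constants quoted in the preceding paragraphs can be reproduced with a short script. The Python sketch below (illustrative only; the grid resolution is an arbitrary choice) scans the piecewise $n^2$ coefficient from the theorem above over $x\in(0,1)$, recovering $x\approx .0356$ and the maximum $\approx 0.433064$, and evaluates $f_K(1/\sqrt{\pi}) = 1/2 - 1/\pi \approx .1817$.

```python
import math

def coeff(x):
    # piecewise n^2 coefficient of S_triangle(A u B), for 0 < x < 1
    c = 1/(4*math.pi*(x + 1)**2)
    if x <= 0.5:
        return c*(8*x*math.acos(math.sqrt(x/2)) + 4*math.asin(math.sqrt(x/2))
                  + (5*math.pi - 4)*x*x + (3*math.pi - 4)
                  - 2*(x + 1)*math.sqrt(2*x - x*x))
    return c*(8*x*(math.acos(math.sqrt(x/2)) + math.acos(math.sqrt(1/(2*x))))
              + 4*math.asin(math.sqrt(x/2)) + 4*x*x*math.asin(math.sqrt(1/(2*x)))
              + (3*math.pi - 4)*(x*x + 1)
              - 2*(x + 1)*(math.sqrt(2*x - x*x) + math.sqrt(2*x - 1)))

xs = [i/50000 for i in range(1, 50000)]   # grid on (0, 1)
best_x = max(xs, key=coeff)
best_c = coeff(best_x)

def f_K(r):
    # overlap-area function of the unit-area disk K (the integrand used earlier)
    return (2/math.pi)*math.acos(math.sqrt(2*math.pi)*r/2) - r*math.sqrt(2/math.pi - r*r)

perimeter_gain = f_K(1/math.sqrt(math.pi))   # = 1/2 - 1/pi, approx .1817
center_gain = 2*best_x                       # approx .0712
```
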
\section{Upper Bound\label{sec: upper}}
We now turn our attention to finding an upper
bound for $S_{\triangle
}(n)/n^{2}$. It is easy to see that $S_{\triangle}(n)\leq n^{2}-n$,
since any pair of points can be the vertices of at most 6 ${\mathsf{IRT}}$s, and each ${\mathsf{IRT}}$ is counted by three of the resulting $6\binom{n}{2}=3(n^{2}-n)$ incidences. Our next theorem
improves this bound. The idea is to
prove that there exists a point in $P$ that does not belong to many
${\mathsf{IRT}}$s. First, we need the following definition.
For every $z\in P$, let $R_{\pi/4}^{+}(z,P)$ and
$R_{\pi/4}^{-}(z,P)$ be the dilations of $P$, centered at $z$, by a factor of
$\sqrt{2}$ and $1/\sqrt{2}$, respectively, followed by a $\pi/4$
counterclockwise rotation with center $z$. Furthermore, let $\deg_{\pi/4}^{+}(z)$ and $\deg_{\pi/4}^{-}(z)$ be the number of isosceles right triangles
$zxy$ with $x,y\in P$ such that $zxy$ is ordered counterclockwise,
and $zy$, respectively $zx$, is the hypotenuse of the triangle $zxy$.
Much like the case of $\deg_{\pi/2}$, $\deg_{\pi/4}^{+}$ and $\deg_{\pi/4}^{-}$ can be computed with the following identities:
\[
\deg_{\pi/4}^{+}\left( z\right) =\left\vert P\cap R_{\pi/4}^{+}(z,P)\right\vert
-1\text{ and }\deg_{\pi/4}^{-}\left( z\right) =\left\vert P\cap R_{\pi/4}^{-}(z,P)\right\vert -1\text{.}
\]
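These identities are easy to test by machine on integer point sets, since dilating by $\sqrt{2}$ about $z$ and rotating by $\pi/4$ counterclockwise amounts to multiplying $w-z$ by $1+i$. The following Python sketch (our illustration, not from the paper) checks the $\deg_{\pi/4}^{+}$ identity against a direct enumeration of counterclockwise ${\mathsf{IRT}}$s with hypotenuse $zy$:

```python
import random

random.seed(1)
pts = {complex(random.randint(-6, 6), random.randint(-6, 6)) for _ in range(25)}

def deg_plus_identity(z, P):
    # |P ∩ R_{pi/4}^+(z, P)| - 1, with R_{pi/4}^+ acting as w -> z + (1+i)(w - z)
    return len(P & {z + (1 + 1j)*(w - z) for w in P}) - 1

def deg_plus_direct(z, P):
    # enumerate triangles z, x, y in counterclockwise order with the right angle
    # at x (so zy is the hypotenuse) and legs |xz| = |xy|
    count = 0
    for x in P:
        for y in P:
            if len({z, x, y}) < 3:
                continue
            a, b = z - x, y - x
            perp = (a*b.conjugate()).real == 0
            iso = a.real**2 + a.imag**2 == b.real**2 + b.imag**2
            ccw = ((x - z).conjugate()*(y - z)).imag > 0
            if perp and iso and ccw:
                count += 1
    return count

ok = all(deg_plus_identity(z, pts) == deg_plus_direct(z, pts) for z in pts)
```

All arithmetic stays exact because the points are Gaussian integers, so set membership tests incur no floating-point error.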
\begin{theorem}
\label{theorem3}For $n\geq3$,
\[
S_{\triangle}(n)\leq\left\lfloor \frac{2}{3}(n-1)^{2}-\frac{5}{3}\right\rfloor
.
\]
\end{theorem}
\begin{proof}
By induction on $n$. If $n=3$, then $S_{\triangle}(3)\leq1 = \left\lfloor \left(
2\cdot4-5\right) /3\right\rfloor$. Now suppose the theorem holds for
$n=k$. We must show this implies the theorem holds for $n=k+1$. Suppose that
there is a point $z\in P$ such that $\deg_{\pi/2}(z)+\deg_{\pi/4}^{+}(z)+\deg_{\pi/4}^{-}(z)\leq\lfloor(4n-5)/3\rfloor$. Then by induction,
\begin{align*}
S_{\triangle}(k+1) & \leq\deg_{\pi/2}(z)+\deg_{\pi/4}^{+}(z)+\deg_{\pi/4}^{-}(z)+S_{\triangle}(k)\\
& \leq\left\lfloor \frac{4k-1}{3}\right\rfloor +\left\lfloor \frac{2}{3}(k-1)^{2}-\frac{5}{3}\right\rfloor =\left\lfloor \frac{2}{3}k^{2}-\frac{5}{3}\right\rfloor .
\end{align*}
The last equality can be verified by considering the three possible residues
of $k$ when divided by 3. Hence, our theorem is proved if we can find a point
$z\in P$ with the desired property.
Let $x,y\in P$ be points such that $x$ and $y$ form the diameter of $P$. In
other words, if $w\in P$, then the distance from $w$ to any other point in $P$
is less than or equal to the distance from $x$ to $y$. We now prove that
either $x$ or $y$ is a point with the desired property mentioned above. We
begin by analyzing $\deg_{\pi/4}^{-}$. We use the same notation from Theorem 1 in \cite{A}.
Define $N_{x}=P \cap R_{\pi/4}^{-}(x,P)\backslash\{x\}$ and $N_{y}=P \cap R_{\pi/4}^{-}(y,P)\backslash\{y\}$. It follows from our identities that $\deg_{\pi/4}^{-}(x)=\vert N_{x}\vert $ and $\deg_{\pi/4}^{-}(y)=\vert N_{y}\vert $. Furthermore, by the Inclusion-Exclusion Principle for
finite sets, we have $\vert N_{x}\vert +\vert N_{y}\vert =\vert N_{x}\cup
N_{y}\vert +\vert N_{x}\cap N_{y}\vert .$
We shall prove by contradiction that $|N_{x}\cap N_{y}|\leq1$. Suppose that there
are two points $u,v\in N_{x}\cap N_{y}$. This means that there are points
$u_{x},v_{x},u_{y},v_{y}\in P$ such that the triangles $xu_{x}u$, $xv_{x}v$, $yu_{y}u$, $yv_{y}v$ are ${\mathsf{IRT}}$s oriented counterclockwise with right
angle at either $u$ or $v$.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=.91]{thelem4.eps}
\end{center}
\caption{Proof of Theorem 4.}
\label{theoremlemma4}
\end{figure}
But notice that the line segments $u_{x}u_{y}$ and $v_{x}v_{y}$ are simply the
$(\pi/2$)-counterclockwise rotations of $xy$ about centers $u$ and $v$
respectively. Hence, $u_{x}u_{y}v_{x}v_{y}$ is a parallelogram with two sides
having length $xy$ as shown in Figure \ref{theoremlemma4}(a). This is a
contradiction since one of the diagonals of the parallelogram is longer than
any of its sides. Thus, $|N_{x}\cap N_{y}|\leq1$. Furthermore, $x\notin N_{y}$ and $y\notin N_{x}$, so $|N_{x}\cup N_{y}|\leq n-2$ and thus
\[
\deg_{\pi/4}^{-}(x)+\deg_{\pi/4}^{-}(y)=\left\vert N_{x} \cup N_{y} \right\vert
+\left\vert N_x \cap N_{y}\right\vert \leq n-2+1=n-1\text{.}
\]
This also implies that
\[
\deg_{\pi/4}^{+}(x)+\deg_{\pi/4}^{+}(y)\leq n-1,
\]
since we can follow the exact same argument applied to the reflection of $P$
about the line $xy$.
We now look at $\deg_{\pi/2}(x)$ and $\deg_{\pi/2}(y)$. First we need the
following lemma.
\begin{lemma}
For every $p\in P$, at most one of $R_{\pi/2}(x,p)$ or $R_{\pi/2}(y,p)$
belongs to $P$.
\end{lemma}
\begin{proof}
Suppose, for contradiction, that both $p_{x}=R_{\pi/2}(x,p)$ and $p_{y}=R_{\pi/2}(y,p)$
belong to $P$ (see Figure \ref{theoremlemma4}(b)). Note that the distance $p_{x}p_{y}$ is exactly the
distance $xy$ scaled by $\sqrt{2}$. This contradicts the fact that $xy$ is
the diameter of $P$.
\end{proof}
Let us define a graph $G$ with vertex set $V(G)=P\backslash\{x,y\}$ and where
$uv$ is an edge of $G$, (i.e., $uv\in E(G)$) if and only if $v=R_{\pi/2}(x,u)$
or $v=R_{\pi/2}(y,u)$.
\begin{lemma} \label{inequalitylemma}
\[
0\leq\deg_{\pi/2}(x)+\deg_{\pi/2}(y)-|E(G)|\leq1\text{.}
\]
\end{lemma}
\begin{proof}
The left inequality follows from the fact that each edge counts an ${\mathsf{IRT}}$
in either $\deg_{\pi/2}(x)$ or $\deg_{\pi/2}(y)$, and possibly in both.
However, if $uv$ is an edge of $G$ such that $v=R_{\pi/2}(x,u)$ and $u=R_{\pi/2}(y,v)$, then $xuyv$ is a square, so this can happen for at most one edge.
\end{proof}
Now, let $\deg_{G}(u)$ be the number of edges in $E(G)$ incident to $u$. We prove the following lemma.
\begin{lemma}
\label{deglemma} For every $u\in V(G)$, $\deg_{G}(u)\leq2$.
\end{lemma}
\begin{proof}
Suppose $uv_{1}\in E(G)$, see Figure \ref{Fig: lemma56}(a). Without loss of
generality we can assume that $u=R_{\pi/2}(y,v_{1})$. If $v_{3}=R_{\pi
/2}(y,u)\in P$, then we conclude that $xv_{3}>xy$ or $xv_1 >xy$, because $\angle xyv_{3}\geq\pi/2$ or $\angle xyv_1 \geq \pi/2$. This contradicts the fact that $xy$ is the diameter of $P$.
Similarly, if $v_{2}$ and $v_{4}$ are defined as $u=R_{\pi/2}(x,v_{4})$ and
$v_{2}=R_{\pi/2}(x,u)$, then at most one of $v_{2}$ or $v_{4}$ can be in $P$.
\end{proof}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=.91]{lemma56.eps}
\end{center}
\caption{Proof of Lemmas \ref{deglemma} and \ref{pathlemma}.}
\label{Fig: lemma56}
\end{figure}
We still need one more lemma for our proof.
\begin{lemma}
\label{pathlemma} All paths in $G$ have length at most 2.
\end{lemma}
\begin{proof}
We prove this lemma by contradiction. Suppose we can have a path of length 3
or more. To assist us, let us place our points on a Cartesian coordinate
system with our diameter $xy$ relabeled as the points $(0,0)$ and $(r,0)$.
Furthermore, assume $p,q\geq0$ and that the four vertices of the path of length $3$ are $(p,-q)$, $(q,p)$, $(r-p,q-r)$, and $(r-q,r-p)$. Our aim is to show that the distance between
$(r-q,r-p)$ and $(r-p,q-r)$ contradicts that $r$ is the diameter of $P$. Now,
if paths of length 3 were possible, then the distance between every pair of
points in Figure \ref{Fig: lemma56}(b) must be less than or equal to $r$.
Since $d((p,-q),(q,p))\leq r$, we have $p^{2}+q^{2}\leq r^{2}/2$.
Now let us analyze the square of the distance from $(r-q,r-p)$ to $(r-p,q-r)$. Because $2(p^2+q^2)\geq (p+q)^2$, it follows that
\begin{align*}
d^{2}((r-q,r-p),(r-p,q-r)) & =(-q+p)^{2}+(2r-p-q)^{2}\\
& =4r^{2}-4r(p+q)+2(p^{2}+q^{2})\\
& \geq 4r^{2}-4\sqrt{2}r\sqrt{p^2+q^2}+2(p^{2}+q^{2})=\left(2r-\sqrt{2(p^2+q^2)}\right)^2.
\end{align*}
But $\sqrt{2(p^2+q^2)} \leq r$, so $(2r-\sqrt{2(p^2+q^2)})\geq r$ and thus
\[
d^{2}((r-q,r-p),(r-p,q-r))\geq r^{2}.
\]
Equality occurs if and only if
$p=r/2$ and $q=r/2$; otherwise, $d((r-q,r-p),(r-p,q-r))$ is strictly
greater than $r$, contradicting the fact that the diameter of $P$ is $r$.
Therefore if $p\neq r/2$ or $q\neq r/2$ then there is no path of length
3. In the case that $p=r/2$ and $q=r/2$ the points $(q,p)$ and $(r-q,r-p)$ become the same and so do the points $(p,-q)$ and $(r-p,q-r)$. Thus we are left with a path of length 1.
\end{proof}
It follows from Lemmas \ref{deglemma} and \ref{pathlemma} that all paths of
length 2 are disjoint. In other words, $G$ is the union of disjoint paths of
length less than or equal to 2. Let $a$ denote the number of paths of length 2
and $b$ denote the number of paths of length 1, then
\[
\left\vert E(G)\right\vert =2a+b\text{ \textup{and} }3a+2b\leq n-2.
\]
Recall from Lemma \ref{inequalitylemma} that either $\deg_{\pi/2}(x)+\deg_{\pi/2}(y)=\left\vert E(G)\right\vert$ or $\deg_{\pi/2}(x)+\deg_{\pi/2}(y)=\left\vert E(G)\right\vert + 1$. If $\deg_{\pi/2}(x)+\deg_{\pi/2}(y)=\left\vert E(G)\right\vert$, then
\[
2\left\vert E(G)\right\vert =4a+2b\leq n-2+a\leq n-2+\frac{n-2}{3},\]
so $\deg_{\pi/2}(x)+\deg_{\pi/2}(y)=\left\vert E(G)\right\vert \leq\frac{2}{3}\left( n-2\right).$
Moreover, if $\deg_{\pi/2}(x)+\deg_{\pi/2}(y)=\left\vert E(G)\right\vert +1$,
then $b\geq1$ and we get a minor improvement,
\[
2\left\vert E(G)\right\vert =4a+2b\leq n-2+a\leq n-4+\frac{n-2}{3},
\]
so $\deg_{\pi/2}(x)+\deg_{\pi/2}(y)=\left\vert E(G)\right\vert +1\leq \left( 2n-7\right)/3 < \frac
{2}{3}\left( n-2\right) $.
We are now ready to put everything together. Between the two points $x$ and
$y$, we derived the following bounds:
\begin{align*}
\deg_{\pi/2}(x)+\deg_{\pi/2}(y) & \leq\frac{2}{3}(n-2),\\
\deg_{\pi/4}^{+}(x)+\deg_{\pi/4}^{+}(y) & \leq(n-1)\text{, and}\\
\deg_{\pi/4}^{-}(x)+\deg_{\pi/4}^{-}(y) & \leq(n-1)\text{.}
\end{align*}
Adding these three bounds, the combined degrees of $x$ and $y$ sum to at most $\frac{2}{3}(n-2)+2(n-1)=\frac{8n-10}{3}$, so one of the two points has combined degree at most $\frac{4n-5}{3}$. Because the degree of a point must take on an integer value, either $x$ or $y$ satisfies $\deg_{\pi/2}+\deg_{\pi/4}^{+}+\deg_{\pi/4}^{-}\leq\left\lfloor (4n-5)/3\right\rfloor $.
\end{proof}
\section{Small Cases\label{sec: small cases}}
In this section we determine the exact values of $S_{\triangle}(n)$ when $3\leq
n\leq9$.
\begin{theorem}
\label{smallcases} For $3\leq n\leq9$, $S_{\triangle}(3)=1$, $S_{\triangle
}(4)=4$, $S_{\triangle}(5)=8$, $S_{\triangle}(6)=11$, $S_{\triangle}(7)=15$,
$S_{\triangle}(8)=20$, and $S_{\triangle}(9)=28$.
\end{theorem}
\begin{figure}[h]
\begin{center}
\includegraphics[height=2.75in]{optimalsets.eps}
\caption{Optimal sets achieving equality for $S_{\triangle}(n)$.}
\label{Fig: optimal}
\end{center}
\end{figure}
\begin{proof}
We begin with $n=3$. Since $3$ points determine at most one triangle, and an \irt$\:$can be realized by 3 points (Figure \ref{Fig: smallcases}(a)), we conclude that
$S_{\triangle}(3)=1.$
Now let $n=4$. In Figure \ref{Fig: smallcases}(b) we show a
point-set $P$ such that $S_{\triangle}(P)=4$. This implies that
$S_{\triangle}(4)\geq4$. However, $S_{\triangle}(4)$ is also bounded
above by $\tbinom{4}{3}=4$. Hence, $S_{\triangle}(4)=4$.
To continue with the proof for the remaining values of $n$, we need the
following two lemmas.
\begin{lemma}
\label{p=4} Suppose $|P|=4$ and $S_{\triangle}(P)\geq2$. The sets in Figure
\ref{Fig: smallcases}(b)--(e), not counting symmetric repetitions, are the
only possibilities for such a set $P$.
\end{lemma}
\begin{proof}
Having $S_{\triangle}(P)\geq2$ implies that we must always have more
than one ${\mathsf{IRT}}$ in $P$. Hence, we can begin with a single
${\mathsf{IRT}}$ and examine the possible ways of adding a point and
producing more ${\mathsf{IRT}}$s. We accomplish this task in Figure
\ref{Fig: smallcases}(a). The 10 numbers in the figure indicate the
location of a point, and the total number of ${\mathsf{IRT}}$s after
its addition to the set of black dots. All other locations not
labeled with a number do not increase the number of
${\mathsf{IRT}}$s. Therefore, except for symmetries, all the
possibilities for $P$ are shown in Figures \ref{Fig:
smallcases}(b)--(e).
\end{proof}
\begin{figure}[ph]
\centering \includegraphics[height = 7.5in]{smallcases1.eps}
\caption{Proof of Theorem 5. Each circle with a number indicates
the location of a point and the total number of ${\mathsf{IRT}}$s
resulting from its addition to the base set of black dots.}
\label{Fig: smallcases}
\end{figure}
\begin{lemma}
\label{sumlemma} Let $P$ be a finite set with $|P|=n$. Suppose that
$S_{\triangle}(A)\leq b$ for all $A\subseteq P$ with $|A|=k$. Then
\[
S_{\triangle}(P)\leq\left\lfloor \frac{n\left( n-1\right) \left(
n-2\right) b}{k\left( k-1\right) \left( k-2\right) }\right\rfloor .
\]
\end{lemma}
\begin{proof}
Suppose that within $P$, every $k$-point configuration contains at most $b$
${\mathsf{IRT}}$s. The number of ${\mathsf{IRT}}$s in $P$ can then be counted
by adding all the ${\mathsf{IRT}}$s in every $k$-point subset of $P$. However,
in doing so, we end up counting a fixed ${\mathsf{IRT}}$ exactly $\tbinom
{n-3}{k-3}$ times. Because $S_{\triangle}(A)\leq b$ we get,
\[
\binom{n-3}{k-3}S_{\triangle}(P) = \sum_{\substack{A\subseteq P\\\left\vert A\right\vert
=k}}S_{\triangle}(A)\leq\binom{n}{k}b.
\]
Notice that $S_{\triangle}(P)$ can only take on integer values so,
\[
S_{\triangle}(P)\leq\left\lfloor \frac{\binom{n}{k}b}{\binom{n-3}{k-3}}\right\rfloor =\left\lfloor \frac{n\left( n-1\right) \left( n-2\right)
b}{k\left( k-1\right) \left( k-2\right) }\right\rfloor .\qedhere
\]
\end{proof}
Now suppose $|P|=5$. If $S_{\triangle}(A)\leq1$ for all $A\subseteq P$ with
$|A|=4$, then by Lemma \ref{sumlemma}, $S_{\triangle}(P)\leq2$. Otherwise, by
Lemma \ref{p=4}, $P$ must contain one of the 4 sets shown in Figures
\ref{Fig: smallcases}(b)--\ref{Fig: smallcases}(e). The result now follows by examining the
possibilities for producing more ${\mathsf{IRT}}$s by placing a fifth point in
the 4 distinct sets. In Figures \ref{Fig: smallcases}(b),
\ref{Fig: smallcases}(c), \ref{Fig: smallcases}(d), and \ref{Fig: smallcases}(e) we accomplish this task. In the same way as we did in Lemma \ref{p=4},
every number in a figure indicates the location of a point, and the total number of
${\mathsf{IRT}}$s after its addition to the set of black dots. It
follows that the maximum value achieved by placing a fifth point is $8$ and so
$S_{\triangle}(5)=8$. The point-set that uniquely achieves equality is shown
in Figure \ref{Fig: smallcases}(f). Moreover, there is exactly one set $P$
with $S_{\triangle}(P)=6$ (shown in Figure \ref{Fig: smallcases}(g)), and two sets $P$ with $S_{\triangle}(P)=5$
(Figures \ref{Fig: smallcases}(h) and \ref{Fig: smallcases}(i)).
Now suppose $|P|=6$. If $S_{\triangle}(A)\leq4$ for all $A\subseteq P$ with
$|A|=5$, then by Lemma \ref{sumlemma}, $S_{\triangle}(P)\leq8$. Otherwise, $P$
must contain one of the sets in Figures \ref{Fig: smallcases}(f)--\ref{Fig: smallcases}(i). We now check all possibilities for adding more
${\mathsf{IRT}}$s by joining a sixth point to our 4 distinct sets. This is
shown in Figures \ref{Fig: smallcases}(f)--\ref{Fig: smallcases}(i). It follows that the maximum
value achieved is $11$ and so $S_{\triangle}(6)=11$. The point-set that
uniquely achieves equality is shown in Figure \ref{Fig: smallcases}(j). Also,
except for symmetries, there are exactly 3 sets $P$ with $S_{\triangle}(P)=10$
(Figures \ref{Fig: smallcases}(k)--\ref{Fig: smallcases}(m)) and only one set $P$ with
$S_{\triangle}(P)=9$ (Figure \ref{Fig: smallcases}(n)).
Now suppose $|P|=7$. If $S_{\triangle}(A)\leq8$ for all $A\subseteq P$ with
$|A|=6$, then by Lemma \ref{sumlemma}, $S_{\triangle}(P)\leq14$.
Otherwise, $P$ must contain one of the sets in Figures \ref{Fig: smallcases}(j)--\ref{Fig: smallcases}(n). We now check all possibilities
for adding more ${\mathsf{IRT}}$s by joining a seventh point to our 5 distinct
configurations. We complete this task in Figures \ref{Fig: smallcases}(j)--\ref{Fig: smallcases}(n). Because the maximum value achieved is $15$, we deduce that
$S_{\triangle}(7)=15$. In this case, there are exactly two point-sets that
achieve 15 ${\mathsf{IRT}}$s.
The proof for the values $n=8$ and $n=9$ follows along the same
lines, but there are many more intermediate sets to be considered.
We omit the details. All optimal sets are shown in Figure \ref{Fig:
optimal}.
\end{proof}
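The exact values above, the bound of Lemma \ref{sumlemma}, and the bound of Theorem \ref{theorem3} can be cross-checked by brute force. The Python sketch below (our illustration; the five-point set is an ad hoc choice, not an optimal configuration from the figures) recognizes an ${\mathsf{IRT}}$ by its sorted squared side lengths $d_1=d_2$ and $d_3=2d_1$:

```python
from itertools import combinations

def count_irts(P):
    # an unordered triple is an IRT iff its sorted squared side lengths
    # satisfy d1 == d2 and d3 == 2*d1 (exact integer arithmetic)
    def d2(p, q):
        return (p[0] - q[0])**2 + (p[1] - q[1])**2
    total = 0
    for a, b, c in combinations(P, 3):
        s = sorted((d2(a, b), d2(a, c), d2(b, c)))
        if s[0] == s[1] and s[2] == 2*s[0]:
            total += 1
    return total

grid = [(0, 0), (1, 0), (0, 1), (1, 1)]            # realizes S(4) = 4
exact = {3: 1, 4: 4, 5: 8, 6: 11, 7: 15, 8: 20, 9: 28}
upper = {n: (2*(n - 1)**2 - 5)//3 for n in exact}   # floor(2(n-1)^2/3 - 5/3)

# averaging bound with |P| = 5 and k = 4: S(P) <= floor(5*4*3*b / (4*3*2))
P5 = grid + [(2, 0)]
b = max(count_irts(A) for A in combinations(P5, 4))
lemma_bound = (5*4*3*b)//(4*3*2)
```
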
Inspired by the method used to prove the exact values of
$S_\triangle(n)$, we devised a computer algorithm that constructs the
best 1-point extension of a given base set. This algorithm,
together with appropriate heuristic choices for some initial sets,
led to the construction of point sets with many ${\mathsf{IRT}}$s,
giving us our best lower bounds for $S_\triangle(n)$ when $10\leq
n\leq25$. These lower bounds are shown in Table \ref{tab: small values} and the
point-sets achieving them in Figure \ref{Fig: bestconst}.
\begin{table}[ph] \centering
\begin{tabular}{||c||c|c|c|c|c|c|c|c||}
\multicolumn{9}{c}{}\\
\hline
$n$ & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17\\
\hline
$S_{\triangle}(n)\geq$ & 35 & 43 & 52 & 64 & 74 & 85 & 97 & 112\\
\hline
\multicolumn{9}{c}{}\\
\hline
$n$ & 18 & 19 & 20 & 21 & 22 & 23 & 24 & 25\\
\hline
$S_{\triangle}(n)\geq$ & 124 & 139 & 156 & 176 & 192 & 210 & 229 & 252\\
\hline
\end{tabular}
\caption{Best lower bounds for
$S_\triangle (n)$.\label{tab: small values}}
\end{table}
\begin{figure}[pht]
\begin{center}
\includegraphics[height=1.69in]{bestconstr.eps}
\caption{Best constructions $A_{n}$ for $n\leq25$. Each set $A_{n}$
is obtained as the union of the starting set (in white) and the
points with label $\leq n$. The value $S_{\triangle}(A_n)$ is given by
Table \ref{tab: small values}.}
\label{Fig: bestconst}
\end{center}
\end{figure}
\bigskip
\noindent \textbf{Acknowledgements.} We thank Virgilio Cerna who, as
part of the CURM mini-grant that supported this project, helped to
implement the program that found the best lower bounds for smaller
values of $n$. We also thank an anonymous referee for some useful
suggestions and improvements to the presentation.
% Source: https://arxiv.org/abs/1102.5347 (Combinatorics, math.CO)
% Title: On the maximum number of isosceles right triangles in a finite point set
% Abstract: Let $Q$ be a finite set of points in the plane. For any set $P$ of points in the plane, $S_{Q}(P)$ denotes the number of similar copies of $Q$ contained in $P$. For a fixed $n$, Erdős and Purdy asked to determine the maximum possible value of $S_{Q}(P)$, denoted by $S_{Q}(n)$, over all sets $P$ of $n$ points in the plane. We consider this problem when $Q=\triangle$ is the set of vertices of an isosceles right triangle. We give exact solutions when $n\leq9$, and provide new upper and lower bounds for $S_{\triangle}(n)$.
% Source: https://arxiv.org/abs/1907.07172
% Title: Ordinal pattern probabilities for symmetric random walks
% Abstract: An ordinal pattern for a finite sequence of real numbers is a permutation that records the relative positions in the sequence. For random walks with steps drawn uniformly from $[-1,1]$, we show an ordinal pattern occurs with probability $\frac{|[1,w]|}{2^n n!}$, where $[1,w]$ is a weak order interval in the affine Weyl group $\widetilde{A}_n$. For random walks with steps drawn from a symmetric Laplace distribution, the probability is $\frac{1}{2^n \prod_{j=1}^n \mathrm{lev}(\pi)_j}$, where $\mathrm{lev}(\pi)_j$ measures how often $j$ occurs between consecutive values in $\pi$. Permutations whose consecutive values are at most two positions apart in $\pi$ are shown to occur with the same probability for any choice of symmetric continuous step distribution. For random walks with steps from a mean zero normal distribution, ordinal pattern probabilities are determined by a matrix whose $ij$-th entry measures how often $i$ and $j$ are between consecutive values.
\section{Introduction}
\label{s:introduction}
Let $(a_1,\ldots,a_n) \in \R^n$ be an arbitrary finite sequence of real numbers. A permutation $\pi$ such that $\pi(i) = j$ if $a_i$ is the $j$-th smallest entry is called the \emph{ordinal pattern for $(a_1,\ldots,a_n)$}.
For a given sequence $Z_1,Z_2,\ldots$ of continuous random variables, it is natural to ask what the probability is that a given permutation $\pi \in S_n$ occurs as an ordinal pattern in a length $n$ subsequence of outcomes. It is known \cite{bandt-shiha} that for exchangeable random variables, such as those that are independent and identically distributed, this probability is $1/n!$ for all $\pi \in S_n$. By contrast, the distribution on $S_n$ is never uniform for ordinal patterns in positions of a random walk when $n \geq 3$.
Exact probabilities have been calculated for ordinal pattern occurrence in a random walk for a few cases. For $n = 3,4$, Bandt and Shiha \cite{bandt-shiha}, DeFord and Moore \cite{deford-moore}, and Zare \cite{zare} gave values for the case of normally distributed steps of mean zero. For $n = 3,4$, DeFord and Moore \cite{deford-moore} gave piecewise polynomials for the case of uniform distributions on $[b-1,b]$ for $b \in \left[\frac{1}{2},1\right)$.
In \cite{elizalde-martinez} and \cite{martinez}, Elizalde and Martinez showed that certain pairs of permutations have the same probability of occurring as an ordinal pattern in a random walk regardless of choice of continuous step distribution. Furthermore, Martinez \cite{martinez} gave a detailed description of regions of steps that generate a given ordinal pattern $\pi$ in terms of a hyperplane arrangement equivalent to the braid arrangement. In this paper, we use hyperplane arrangements and the tools developed in \cite{elizalde-martinez} and \cite{martinez} to find probabilities for ordinal pattern occurrence in certain random walks.
\subsection{Main results}
\label{ss:results}
Let $X_1, X_2,\ldots$ be independent and identically distributed continuous random variables called \emph{steps} with probability density function $f:\R \ra \R$. Let $Z_i := \sum_{j = 1}^{i - 1} X_j$ be the \emph{positions} of the random walk. For $\pi \in S_{n+1}$, let $\PRB(f,\pi)$ denote the probability that $\pi$ occurs as an ordinal pattern in a length $n + 1$ consecutive subsequence of positions generated by $n$ steps drawn from $f$. All of the main results of the paper are statements about $\PRB(f,\pi)$ for various choices of density function $f$.
Let $\mathrm{lev}(\pi)_j$ denote the number of positions $i$ such that $\pi(i) \leq j < \pi(i+1)$ or such that $\pi(i+1) \leq j < \pi(i)$. In Section \ref{s:laplace}, we use a direct calculation to show that when $f$ is a Laplace distribution, the value of $\PRB(f,\pi)$ is computed from the values of $\mathrm{lev}(\pi)_1,\ldots,\mathrm{lev}(\pi)_n$.
\\ \\
\noindent
\textbf{Theorem \ref{t:laplace}.}
Let $\pi \in S_{n+1}$ and let $f$ be the density function for a Laplace distribution with mean zero. Then
\[
\PRB(f,\pi) = \frac{1}{2^n \prod_{j=1}^n \mathrm{lev}(\pi)_j}.
\]
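For example, for $\pi = 132 \in S_3$ we have $\mathrm{lev}(\pi) = (1,2)$, so the theorem predicts $\PRB(f,\pi) = \frac{1}{2^2\cdot 1\cdot 2} = \frac{1}{8}$. The Python sketch below (our illustration, not from the paper) confirms this by Monte Carlo simulation, sampling standard Laplace steps as exponential variates with a random sign:

```python
import random

def lev(pi):
    # lev(pi)_j = #{ i : pi(i) <= j < pi(i+1) or pi(i+1) <= j < pi(i) }
    n = len(pi) - 1
    return [sum(1 for i in range(n)
                if min(pi[i], pi[i + 1]) <= j < max(pi[i], pi[i + 1]))
            for j in range(1, n + 1)]

def laplace_prob(pi):
    # the formula of the theorem: 1 / (2^n * prod_j lev(pi)_j)
    prob = 1.0
    for lj in lev(pi):
        prob /= 2*lj
    return prob

def pattern(walk):
    # pi(i) = |{ j : z_i >= z_j }|
    return tuple(sum(1 for w in walk if z >= w) for z in walk)

random.seed(0)
pi = (1, 3, 2)
N = 200_000
hits = 0
for _ in range(N):
    x1 = random.expovariate(1.0)*random.choice((-1, 1))   # standard Laplace step
    x2 = random.expovariate(1.0)*random.choice((-1, 1))
    if pattern((0.0, x1, x1 + x2)) == pi:
        hits += 1
estimate = hits/N
```
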
We say a permutation is \emph{almost consecutive} if its consecutive values are at most two positions apart in its $1$-line notation. Recall that a function is symmetric if $f(-x) = f(x)$ for all $x \in \R$. In Section \ref{s:universal}, we show that $\PRB(f,\pi)$ does not depend upon the choice of symmetric density function $f$ if $\pi$ is almost consecutive.
\\ \\
\noindent
\textbf{Theorem \ref{t:acformula}.}
Let $\pi \in S_{n+1}$ be an almost consecutive permutation. Let $f:\R \ra \R$ be a symmetric density function for a continuous probability distribution. Then
\[
\PRB(f,\pi) = \frac{1}{2^n \prod_{i=1}^n \mathrm{lev}(\pi)_i}.
\]
In Section \ref{s:affineA}, we show that when $f$ is the density function for the uniform distribution on $[-1,1]$, we calculate $\PRB(f,\pi)$ by counting regions of the {\typeA} inside a rational polytope constructed from $\pi$. The counted regions correspond to elements in a weak order interval of the {\affWeyl}.
\\ \\
\noindent
\textbf{Theorem \ref{t:weakordertheorem}.}
Let $f = \frac{1}{2}\Chi_{[-1,1]}$ be the uniform density function on $[-1,1]$. Let $\pi \in S_{n+1}$. Then
\[
\PRB(f,\pi) = \frac{|[1,w]|}{2^n n!},
\]
where $[1,w]$ is a weak order interval of the {\affWeyl}.
\\
A corollary is that $1 (n + 1)$ or $(n + 1) 1$ appears in the $1$-line notation for $\pi \in S_{n+1}$ if and only if $\PRB(f,\pi) = \frac{1}{2^n n!}$. By contrast, for the Laplace and normal distributions, the other values in the $1$-line notation for $\pi$ typically influence the value of $\PRB(f,\pi)$.
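As an illustration of this corollary (ours, not from the paper): $\pi = 2341 \in S_4$ contains $(n+1)\,1 = 41$ in its $1$-line notation, so for uniform steps on $[-1,1]$ the probability is $\frac{1}{2^3\, 3!} = \frac{1}{48}$. Here the pattern amounts to $x_1>0$, $x_2>0$, $x_3<-(x_1+x_2)$, and a direct integration also gives $1/48$; the Python sketch below checks it by simulation:

```python
import random

def pattern(walk):
    # pi(i) = |{ j : z_i >= z_j }|
    return tuple(sum(1 for w in walk if z >= w) for z in walk)

random.seed(0)
pi = (2, 3, 4, 1)      # "41" occurs in one-line notation
N = 400_000
hits = 0
for _ in range(N):
    x1, x2, x3 = (random.uniform(-1, 1) for _ in range(3))
    if pattern((0.0, x1, x1 + x2, x1 + x2 + x3)) == pi:
        hits += 1
estimate = hits/N
```
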
In Section \ref{s:normaldistribution}, we show that when $f$ is the density function for a mean zero normal distribution, we can sometimes compare $\PRB(f,\pi)$ to $\PRB(f,\tau)$. Let $\mathrm{lev}(\pi)_{i,j}$ be the number of positions $k \in [n]$ satisfying $\pi(k) \leq i,j < \pi(k+1)$ or $\pi(k+1) \leq i,j < \pi(k)$.
\\ \\
\noindent
\textbf{Theorem \ref{t:comparegaussian}.}
Let $f:\R \ra \R$ be the density function for the normal distribution with mean zero and any variance. Let $\pi, \tau \in S_{n+1}$. Suppose $\mathrm{lev}(\pi)_{i,j} \leq \mathrm{lev}(\tau)_{i,j}$ for all $(i,j) \in \Phi^+$. Then $\PRB(f,\pi) \geq \PRB(f,\tau)$.
\subsection{The larger context for these results}
\label{ss:greatercontext}
Using ordinal patterns to analyze a time series is sometimes called \emph{ordinal analysis}. Bandt and Pompe \cite{bandt-pompe} introduced permutation entropy, which involves computing ordinal pattern frequencies in a time series. Subsequently, many papers suggested applying ordinal analysis to a variety of applied contexts. The survey \cite{amigo} shows that ordinal pattern frequency often captures qualitative features of a time series. For example, iterated map dynamical systems with deterministic chaotic behavior tend to have forbidden ordinal patterns, whereas white noise does not.
We can interpret the results of this paper as providing a kind of fingerprint for certain random processes. DeFord and Moore \cite{deford-moore} define KL divergence for the distribution of patterns of length $n$ in $Z$ from those in $X$ by
\[
\mathcal{D}_{\text{KL}_n}(X || Z) = \sum_{\pi \in S_n} P_X(\pi) \log\left(\frac{P_X(\pi)}{P_Z(\pi)}\right),
\]
where $X$ and $Z$ are random variables. Thus, comparisons to walks with steps from symmetric uniform and Laplace densities can be made via this version of KL divergence.
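For a concrete illustration (ours, not from the paper): for $n = 3$, Theorem \ref{t:laplace} gives the Laplace-walk pattern probabilities $(\frac14,\frac14,\frac18,\frac18,\frac18,\frac18)$ on $S_3$, while exchangeable noise assigns $\frac16$ to every pattern, so the divergence of the walk patterns from the noise is $\frac12\ln\frac{9}{8} \approx 0.0589$:

```python
import math

# exact S_3 pattern probabilities for a Laplace-step walk, 1/(2^n prod_j lev_j)
walk = {'123': 1/4, '321': 1/4, '132': 1/8, '213': 1/8, '231': 1/8, '312': 1/8}
noise = {pi: 1/6 for pi in walk}   # exchangeable steps: uniform on S_3

# KL divergence of the walk pattern distribution from the noise distribution
kl = sum(p*math.log(p/noise[pi]) for pi, p in walk.items())
```
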
By Theorem \ref{t:acformula}, there exists a value $p_n$ for the probability that an almost consecutive permutation arises from a length $n$ sequence generated by $n - 1$ steps from a symmetric density function. Thus, for fixed $n$, we have a Bernoulli trial whose ``success'' probability is the same for any random walk whose steps have a symmetric density function. In \cite{denoncourt}, the following values are determined for $p_n$:
\begin{center}
\begin{tabular} { c | c }
$n$ & $p_n$\\
\hline
$2$ & $1$\\
$3$ & $1$\\
$4$ & $2/3$\\
$5$ & $5/12$\\
$6$ & $251/960$\\
$7$ & $463/2880$\\
$8$ & $15281 / 161280$
\end{tabular}
\end{center}
Although the symmetric density function hypothesis is limited in scope, the above values hold for the mean zero Gaussian distribution. Thus, after transforming a sequence generated by a random walk to have mean zero steps, it is reasonable to expect to see values close to those given above on a long enough time scale.
\section{Ordinal pattern preliminaries}
\label{s:ordinal}
Throughout the paper, random walks have $n$ steps and $n + 1$ positions. The $n$ steps are outcomes of $n$ independent and identically distributed continuous random variables $X_1,\ldots,X_n$. Every tuple $(x_1,\ldots,x_n) \in \R^n$ of steps generates a tuple $(z_1,\ldots,z_{n+1}) \in \R^{n+1}$ of walk positions, where $z_1 = 0$, and $z_i = x_1 + \cdots + x_{i-1}$ for $i > 1$. We say a walk $(z_1,\ldots,z_{n+1})$ has \emph{ordinal pattern $\pi \in S_{n+1}$} if $\pi(i) = j$ whenever $z_i$ is the $j$-th smallest position of $(z_1,\ldots,z_{n+1})$. We refer to $(x_1,\ldots,x_n)$ as \emph{step coordinates} for the random walk and the generated tuple $(z_1,\ldots,z_{n+1})$ as \emph{walk coordinates} for the random walk. Ordinal pattern probabilities are calculated as integrals over regions of $\R^n$, but the ordinal patterns themselves are permutations in $S_{n+1}$ derived from walk coordinates in $\R^{n+1}$.
Following \cite{elizalde-martinez}, we define a map $p:\R^n \setminus Z \ra S_{n+1}$ by
\begin{equation*}
p(x_1,\ldots,x_n) = \pi,
\end{equation*}
where
\begin{equation*}
\pi(i) = \left|\{j \in \{1,\ldots,n+1\}: z_i \geq z_j\} \right|
\end{equation*}
and $Z$ is the measure zero set of steps $(x_1,\ldots,x_n)$ such that $z_i = z_j$ for some $i \neq j$.
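As a concrete illustration (our own; the function name is arbitrary), the map $p$ can be evaluated directly from a tuple of steps:

```python
def p(steps):
    """Map steps (x_1,...,x_n) to the ordinal pattern pi in S_{n+1}:
    pi(i) = #{j : z_i >= z_j}, where z_1 = 0 and z_i = x_1 + ... + x_{i-1}."""
    z = [0.0]
    for x in steps:
        z.append(z[-1] + x)
    return tuple(sum(1 for zj in z if zi >= zj) for zi in z)

# steps (1, -2, 0.5) give walk positions (0, 1, -1, -0.5),
# whose ordinal pattern is 3412
print(p((1, -2, 0.5)))  # (3, 4, 1, 2)
```

For example, the steps $(1, -2, 0.5)$ give walk positions $(0, 1, -1, -0.5)$, so the walk realizes the ordinal pattern $3412$.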
\begin{definition}
\label{d:generate}
Let $\pi \in S_{n+1}$. Let $\bm{x} \in \R^n$. We say $\bm{x}$ \emph{generates $\pi$} if $p(\bm{x}) = \pi$. We denote the region of tuples that generate $\pi$ by $D_\pi$. Thus,
\[
D_\pi = \{\bm{x} \in \R^n : p(\bm{x}) = \pi \}.
\]
\end{definition}
We interpret $p$ as mapping a tuple of steps to the ordinal pattern of the generated walk positions. The region $D_\pi$ contains all such tuples for a fixed $\pi \in S_{n+1}$.
In \cite[Section 2]{elizalde-martinez}, Elizalde and Martinez define an \emph{edge diagram} as a collection of oriented vertical line segments connecting $(i,\pi(i))$ and $(i,\pi(i+1))$. The orientation is downward if $\pi(i) > \pi(i+1)$ and upward otherwise. A \emph{level} is a vertical interval, denoted $\ubar{j}$, whose $y$-coordinates are in $[j,j+1]$. In the edge diagram, the \emph{edge} $e_i$ is a formal sum $e_i = \sum_{j=\pi(i)}^{\pi(i+1) - 1} \ubar{j}$ if $\pi(i) < \pi(i+1)$ and $e_i = -\sum_{j = \pi(i+1)}^{\pi(i) - 1} \ubar{j}$ if $\pi(i) > \pi(i+1)$. An edge $e_i$ contains $\ubar{j}$ or its negation if $\pi(i) \leq j < \pi(i+1)$ or $\pi(i+1) \leq j < \pi(i)$. In Section \ref{s:laplace}, we introduce a tuple $\mathrm{lev}(\pi)$ that records, for each $j$, the number of edges that contain $\ubar{j}$ or its negation. (See Figure \ref{f:edgediagram} for an example of an edge diagram and $\mathrm{lev}(\pi)$.)
We primarily use the edge diagram as a visual description of $\mathrm{lev}(\pi)$. The edge diagram can be read off from the matrix given in the next definition, as can properties of the region $D_\pi$.
\begin{figure}[htb]
\centering
\begin{tikzpicture}[scale=.5]
\foreach \x in {1,2,...,6}
\draw[dotted] (0,\x)--(15,\x);
\draw(20.5,3.5) node{$\implies \mathrm{lev}(\pi) = (2,4,3,2,2)$};
\draw[thick,*->-*](4.5,3) node[left]{3}--(4.5,1);
\draw[thick,*->-*](5.5,1) node[left]{1}--(5.5,5);
\draw[thick,*->-*](6.5,5) node[left]{5}--(6.5,6);
\draw[thick,*->-*](7.5,6) node[left]{6}--(7.5,2);
\draw[thick,*->-*](8.5,2) node[left]{2}--(8.5,4) node[right]{4};
\draw(0.5,7) node{Levels};
\draw(6.0,8) node{Edge diagram};
\draw(6.0,7) node{for $\pi = 315624$};
\draw(11.5,7) node{Level count};
{\foreach \x in {1,2,...,5}
\draw (0.5,0.5+\x) node{$\ubar{\x}$};
}
\draw (11.5,1.5) node{$2$};
\draw (11.5,2.5) node{$4$};
\draw (11.5,3.5) node{$3$};
\draw (11.5,4.5) node{$2$};
\draw (11.5,5.5) node{$2$};
\end{tikzpicture}
\caption{The edge diagram for a permutation and its level count. Figure adapted from \cite{elizalde-martinez}.}
\label{f:edgediagram}
\end{figure}
\begin{definition}
\label{d:represent}
(Martinez \cite[Section 2.1]{martinez})
Let $L:S_{n+1} \ra GL_n(\C)$ be defined by $L(\pi) = L_\pi$, where $L_\pi$ is a matrix whose entries are given by
\[
\left(L_\pi\right)_{ij} :=
\begin{cases}
1, & \;\;\; \text{if } \pi(i) \leq j < \pi(i+1),\\
-1, & \;\;\; \text{if } \pi(i+1) \leq j < \pi(i),\\
0 & \;\;\; \text{otherwise.}
\end{cases}
\]
Thus, the $i$-th coordinate of $L_\pi(x_1,\ldots,x_n)$ is given by
\[
L_\pi(x_1,\ldots,x_n)_i =
\begin{cases}
x_{\pi(i)} + \cdots + x_{\pi(i+1) - 1}, & \;\; \text{if } \pi(i) < \pi(i+1),\\
-(x_{\pi(i+1)} + \cdots + x_{\pi(i) - 1}), & \;\; \text{if } \pi(i+1) < \pi(i).
\end{cases}
\]
\end{definition}
Thus, the edges in the edge diagram may be expressed as $e_i = \sum_{j=1}^n (L_\pi)_{i,j} \ubar{j}$. That is, the orientation of edges and which levels appear in edges of an edge diagram can be read from the rows of $L_\pi$.
\begin{example}
\label{ex:ellpi}
Let $\pi = 315624$. Then
\[
L_\pi =
\begin{bmatrix}
-1 & -1 & 0 & 0 & 0\\
1 & 1 & 1 & 1 & 0\\
0 & 0 & 0 & 0 & 1\\
0 & -1 & -1 & -1 & -1\\
0 & 1 & 1 & 0 & 0
\end{bmatrix}
.
\]
\end{example}
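The matrix $L_\pi$ can be built directly from Definition \ref{d:represent}; the sketch below (ours, with arbitrary names) reproduces the matrix of Example \ref{ex:ellpi}.

```python
def L(pi):
    """Matrix L_pi: row i has 1's in columns pi(i) <= j < pi(i+1)
    when pi(i) < pi(i+1), and -1's in columns pi(i+1) <= j < pi(i) otherwise."""
    n = len(pi) - 1
    M = [[0] * n for _ in range(n)]
    for i in range(n):
        a, b = pi[i], pi[i + 1]
        lo, hi, sign = (a, b, 1) if a < b else (b, a, -1)
        for j in range(lo, hi):
            M[i][j - 1] = sign
    return M

# reproduces the matrix of Example ex:ellpi for pi = 315624
for row in L((3, 1, 5, 6, 2, 4)):
    print(row)
```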
The next three lemmas capture the basic properties of the representation $L$.
\begin{lemma}
\label{l:represent}
(Martinez \cite[Lemma 2.1.3]{martinez})
The function $L:S_{n+1} \ra GL_n(\C)$ given in Definition \ref{d:represent} is a homomorphism.
\end{lemma}
\begin{lemma}
\label{l:image}
(Martinez \cite[Lemma 2.1.4]{martinez})
Let $\pi \in S_{n+1}$. Then
\[
L_\pi\left(\R_{> 0}^n\right) = D_\pi.
\]
\end{lemma}
\begin{lemma}
\label{l:det}
(Martinez \cite[Lemma 2.1.3]{martinez})
Let $\pi \in S_{n+1}$. Then $\mathrm{det}(L_\pi) = \pm 1$.
\end{lemma}
Since steps are given by independent and identically distributed continuous random variables, the probability that an ordinal pattern $\pi$ occurs depends only on $\pi$ and the associated probability density function $f:\R \ra \R$. The associated joint density function for a walk of $n$ steps is always the function $g:\R^n \ra \R$ defined by $g(x_1,\ldots,x_n) = \prod_{i=1}^n f(x_i)$.
Suppose $f:\R \ra \R$ is a density function. Denote the probability that an ordinal pattern $\pi \in S_{n+1}$ occurs in a random walk of $n$ steps by $\PRB(f,\pi)$, where $f$ is the density function for the steps. Thus,
\begin{equation}
\label{e:density}
\PRB(f,\pi) = \int_{D_\pi} f(x_1) f(x_2) \cdots f(x_n) \; \D x_1 \D x_2 \cdots \D x_n.
\end{equation}
Since $L_\pi$ is invertible and linear, the change-of-variables theorem applies. Since the determinant of $L_\pi$ is $\pm 1$, the absolute value of the Jacobian is always $1$.
\begin{lemma}
\label{l:general}
Let $\pi \in S_{n+1}$. Let $f:\R \ra \R$ be a density function for a continuous distribution. Let $g:\R^n \ra \R$ be the joint density function for the independent steps produced by $f$ defined by $g(x_1,\ldots,x_n) = \prod_{i=1}^n f(x_i)$. Then
\[
\PRB(f,\pi) = \INT_{D_\pi} g (\bm{x}) \; \D \bm{x} = \INT_{\R_{> 0}^n} g(L_\pi(\bm{x})) \; \D \bm{x} = \INT_{\R_{> 0}^n} \prod_{i=1}^n f(L_\pi(\bm{x})_i) \; \D \bm{x},
\]
where $L_\pi(\bm{x})_i$ is the $i$-th coordinate of $L_\pi(\bm{x})$.
\end{lemma}
\begin{proof}
The second equality follows from Lemma \ref{l:image}, Lemma \ref{l:det}, and the change-of-variables theorem.
\end{proof}
\begin{example}
\label{ex:generalintegral}
Suppose $f$ is the density function of a continuous distribution. Lemma \ref{l:general} implies
\[
\PRB(f,2413) = \INT_0^\infty \INT_0^\infty \INT_0^\infty f(y + z) \cdot f(-x - y - z) \cdot f(x + y) \; \D x \; \D y \; \D z.
\]
\end{example}
In principle, Lemma \ref{l:general} allows for the calculation of $\PRB(f,\pi)$ for any suitable density function $f$, although evaluating the integral exactly is typically infeasible. The main reason to use the second integral of Lemma \ref{l:general} instead of the first is that the integration always occurs over $\R_{>0}^n$.
Since all probability distributions in this paper are assumed to be continuous, the existence of a (not necessarily continuous) density function is guaranteed. Also, walk positions overlap with probability zero, which implies that ordinal pattern probabilities add to $1$. This is also true of many discrete distributions, but we do not address the discrete distribution case in this paper.
Ordinal pattern probabilities are invariant under changes in scale. If $\bm{x} \in D_\pi$, then $c \bm{x} \in D_\pi$ for any $c > 0$. Thus, the region of integration in \eqref{e:density} does not change under the substitution of $c\bm{x}$ for $\bm{x}$, which proves the next lemma. This property is called \emph{scale invariance}.
\begin{lemma}
\label{l:scale-invariance}
Let $f:\R \ra \R$ be a density function of a continuous probability distribution. Let $c \in \R_{>0}$ and let $h:\R \ra \R$ be defined by $h(x) = c f(cx)$. Then $h$ is also a density function and $\PRB(f,\pi) = \PRB(h,\pi)$ for all $\pi \in S_{n+1}$.
\end{lemma}
\section{Pattern probabilities when steps are from Laplace densities}
\label{s:laplace}
The Laplace distribution, also called the double exponential distribution, has density function $f:\R \ra \R$ given by
\[
f(x) = \frac{1}{2b}\EXP{-\frac{|x - \mu|}{b}},
\]
where $\mu$ is the mean and $b$ is a scale parameter. In this section, we restrict our attention to the mean zero case. A mean zero Laplace distribution arises as the distribution for a random variable expressed as the difference of two identically distributed exponential random variables.
By Lemma \ref{l:scale-invariance}, ordinal pattern probabilities are scale invariant. Thus, we lose no generality by restricting our attention to the choice $b = 1$. For the remainder of the section, the density function $f$ is defined by
\[
f(x) = \frac{1}{2}\EXP{-|x|}.
\]
\begin{definition}
\label{d:levelcount}
Let $\pi \in S_{n+1}$. Denote the number of $i \in [n]$ such that $\pi(i) \leq j < \pi(i+1)$ or $\pi(i+1) \leq j < \pi(i)$ by $\mathrm{lev}(\pi)_j$. We call the tuple $(\mathrm{lev}(\pi)_1,\ldots,\mathrm{lev}(\pi)_n)$ the \emph{level count of $\pi$}.
\end{definition}
Note that $\mathrm{lev}(\pi)_j$ counts the number of times level $\ubar{j}$ is contained in an edge of the edge diagram of $\pi$. (See Figure \ref{f:edgediagram}.) Alternatively, it is the sum of the absolute values of the entries in column $j$ of $L_\pi$.
\begin{example}
Let $\pi = 315624$. Then $\mathrm{lev}(\pi) = (2,4,3,2,2)$ as shown in Figure \ref{f:edgediagram}.
\end{example}
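The level count can be computed straight from Definition \ref{d:levelcount}; the following sketch (ours) agrees with Figure \ref{f:edgediagram}.

```python
def lev(pi):
    """Level count: lev(pi)_j counts i with min(pi(i), pi(i+1)) <= j < max(pi(i), pi(i+1)).
    Equivalently, the sum of absolute values of column j of L_pi."""
    n = len(pi) - 1
    return tuple(
        sum(1 for i in range(n)
            if min(pi[i], pi[i + 1]) <= j < max(pi[i], pi[i + 1]))
        for j in range(1, n + 1)
    )

print(lev((3, 1, 5, 6, 2, 4)))  # (2, 4, 3, 2, 2), as in the figure
```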
Recall from Lemma \ref{l:general} that $\PRB(f,\pi) = \int_{\R_{>0}^n} \prod_{i=1}^n f(L_\pi(\bm{x})_i) \D \bm{x}$.
\begin{theorem}
\label{t:laplace}
Let $\pi \in S_{n+1}$ and let $f$ be the density function for a mean zero Laplace distribution. Then
\[
\PRB(f,\pi) = \frac{1}{2^n \prod_{j=1}^n \mathrm{lev}(\pi)_j}.
\]
\end{theorem}
\begin{proof}
Every factor of the last integrand in Lemma \ref{l:general} has the form $f(\pm(x_a + \cdots + x_b))$, where each $x_k > 0$. Thus,
\[
f(\pm(x_a + \cdots + x_b)) = \frac{1}{2}\EXP{-\left|\pm(x_a + \cdots + x_b)\right|} = \frac{1}{2}\EXP{-(x_a + \cdots + x_b)}.
\]
For the $i$-th factor, the term $-x_j$ appears in the exponent precisely when $\pi(i) \leq j < \pi(i+1)$ or $\pi(i+1) \leq j < \pi(i)$.
Thus, by Definition \ref{d:levelcount}, there are $\mathrm{lev}(\pi)_j$ factors of $\prod_{i=1}^n f(L_\pi(\bm{x})_i)$ contributing $-x_j$ inside the exponential for the overall product. By Lemma \ref{l:general},
\begin{align*}
\PRB(f,\pi) &= \INT_{\R_{> 0}^n} \prod_{i=1}^n f(L_\pi(\bm{x})_i) \; \D \bm{x}\\
&= \INT_{\R_{> 0}^n} \frac{1}{2^n}\PROD_{j=1}^n \EXP{-\mathrm{lev}(\pi)_j \; x_j} \; \textrm{d} x_1 \cdots \textrm{d} x_n\\
&= \frac{1}{2^n} \PROD_{j = 1}^n \INT_0^{\infty} \EXP{-\mathrm{lev}(\pi)_j \; x_j} \; \textrm{d} x_j\\
&= \frac{1}{2^n \PROD_{j = 1}^n \mathrm{lev}(\pi)_j}.
\end{align*}
\end{proof}
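Since the ordinal patterns of a walk partition the sample space up to a set of measure zero, the probabilities given by Theorem \ref{t:laplace} must sum to $1$ over $S_{n+1}$. The following exact-arithmetic sketch (ours; it recomputes the level count of Definition \ref{d:levelcount}) verifies this for small $n$ and spot-checks the permutation of Figure \ref{f:edgediagram}, for which the theorem gives $1/(2^5 \cdot 2 \cdot 4 \cdot 3 \cdot 2 \cdot 2) = 1/3072$.

```python
from fractions import Fraction
from itertools import permutations
from math import prod

def lev(pi):
    n = len(pi) - 1
    return [sum(1 for i in range(n)
                if min(pi[i], pi[i + 1]) <= j < max(pi[i], pi[i + 1]))
            for j in range(1, n + 1)]

def laplace_prob(pi):
    """P(f, pi) = 1 / (2^n * prod_j lev(pi)_j) for Laplace-distributed steps."""
    n = len(pi) - 1
    return Fraction(1, 2 ** n * prod(lev(pi)))

# the pattern probabilities over S_{n+1} sum to 1
for n in (2, 3, 4):
    total = sum(laplace_prob(pi) for pi in permutations(range(1, n + 2)))
    print(n, total)

print(laplace_prob((3, 1, 5, 6, 2, 4)))  # 1/3072
```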
\section{Universal pattern probabilities for symmetric step densities}
\label{s:universal}
Martinez \cite[Section 5]{martinez} introduced a hyperplane arrangement $\mathcal{D}_n$ in $\R^n$ such that for any $\pi \in S_{n+1}$, the set $D_\pi$ is a region of $\mathcal{D}_n$. Furthermore, \cite[Lemma 5.1.2]{martinez} shows that the walls of $D_\pi$ are defined by the row vectors of $L_{\pi^{-1}}$. This allows us to show that for certain $\pi$, we may express $D_\pi$ as a union of cells of the type $B$ Coxeter arrangement, which establishes the main result of this section: permutations whose consecutive values occur at most two positions apart have the same ordinal pattern probabilities under every symmetric step density as they do under the Laplace density. Thus, when $\pi$ is such a permutation, we have $\PRB(f,\pi) = \frac{1}{2^n \PROD_{j=1}^n \mathrm{lev}(\pi)_j}$, \emph{regardless of the choice of symmetric density function} for the steps in the random walk.
\subsection{Hyperplane arrangement preliminaries, notation, and terminology}
\label{ss:hyperplaneterminology}
The hyperplane arrangement notation and terminology we use in this section is similar to that found in \cite[Section 1.4]{abramenko-brown} or \cite[Chapter 2]{bjorner-etal}. In particular, a \emph{hyperplane arrangement} is a set $\HYP = \{H_i\}_{i \in I}$ of finitely many hyperplanes. In this section, the arrangements under consideration are \emph{central}, which means they pass through the origin. Thus, associated to each $H_i$ is a linear function $f_i:\R^n \ra \R$ such that $H_i = \{\bm{x} \in \R^n :\;\; f_i(\bm{x}) = 0\}$. For each $H_i \in \HYP$, let
\begin{align*}
H_i^+ &:= \{\bm{x} \in \R^n : \;\;f_i(\bm{x}) > 0\};\\
H_i^0 &:= \{\bm{x} \in \R^n : \;\;f_i(\bm{x}) = 0\}; \text{ and}\\
H_i^- &:= \{\bm{x} \in \R^n : \;\;f_i(\bm{x}) < 0\}.
\end{align*}
A \emph{cell with respect to $\HYP$} is a nonempty set $C$ obtained by choosing for each $i \in I$ a sign $\sigma_i \in \{+,0,-\}$ such that $\bm{x} \in H_i^{\sigma_i}$ for all $\bm{x} \in C$. The family $(\sigma_i)_{i \in I}$ is called the \emph{sign sequence for $C$}. The cell $C$ is represented by
\begin{equation}
\label{e:halfspaces}
C = \bigcap_{i \in I} H_i^{\sigma_i}.
\end{equation}
The intersection may be redundant. Cells such that $\sigma_i \neq 0$ for all $i \in I$ are called \emph{regions}. The regions of $\HYP$, denoted $\REG{\HYP}$, are the nonempty convex open subsets that partition $\R^n \setminus \cup_{i \in I} H_i$. Note that the collection of all cells partitions $\R^n$. However, the cells that are not regions have measure zero and thus contribute nothing to the probability calculations of this section.
\subsection{A hyperplane arrangement for steps of a random walk}
\label{ss:braidarrangement}
Let $H_{i,j}$ be the hyperplane defined by
\[
H_{i,j} = \left\{(x_1,\ldots,x_n) \in \R^n : \;\; x_i + x_{i+1} + \cdots + x_{j-1} = 0\right\}.
\]
Let
\[
\CS_n := \left\{ H_{i,j} : \;\; i,j \in [n] \text{ and } i < j \right\}
\]
be the hyperplane arrangement defined in \cite[Section 5.1]{martinez}.
As noted in \cite[Section 5.1]{martinez}, the arrangement $\CS_n$ is obtained from the standard braid arrangement via a linear substitution. The arrangements have the same face poset and the same number of regions. However, since the geometry is different and the calculation of $\PRB(f,\pi)$ is not always uniform across regions, we distinguish between the two arrangements in this paper. The next lemma motivates the choice of the arrangement $\CS_n$.
\begin{lemma}
\label{l:regions}
(Martinez \cite[Lemma 5.1.1]{martinez})
The set of regions of $\CS_n$ is $\{D_\pi : \;\;\pi \in S_{n+1}\}$.
\end{lemma}
We say a cell is a \emph{face} of the region $R$ if the cell's sign sequence matches $R$'s sign sequence except for one hyperplane $H$ whose sign is $0$. In this case, we say that $H$ is a \emph{wall} of $R$. For convenience, let $H_{j,i} = H_{i,j}$ when $j > i$.
\begin{lemma} \label{l:walls} (Martinez \cite[Lemma 5.1.2]{martinez})
Let $\pi \in S_{n+1}$. The set of walls of $D_\pi$ is
\[
\left\{ H_{\pi^{-1}(i), \pi^{-1}(i + 1)} \in \CS_n: \;\;i \in [n]\right\}.
\]
\end{lemma}
Let $\HYP$ and $\HYP'$ be hyperplane arrangements such that $\HYP \subseteq \HYP'$. Then the cells of $\HYP$ may be written as unions of cells of $\HYP'$. The next lemma follows from the fact that the collection of walls $\HYP_R$ of a region $R$ forms a hyperplane arrangement in its own right. We use this in Section \ref{ss:typeb} to express $D_\pi$ as a union of cells from the type $B$ Coxeter arrangement.
\begin{lemma}
\label{l:chamberunion}
Let $\HYP_R$ be the collection of walls for a region $R$ of a hyperplane arrangement $\HYP$. If $\HYP'$ is any hyperplane arrangement such that $\HYP_R \subseteq \HYP'$, then
\[
R = \bigcup_{j = 1}^k C_j,
\]
for some collection of cells $C_1, \ldots, C_k$ of $\HYP'$.
\end{lemma}
\subsection{The type \texorpdfstring{$B$}{B} Coxeter arrangement}
\label{ss:typeb}
We represent a signed permutation on $[n]$ as a pair $(\omega,\epsilon)$, where $\omega \in S_n$ is a permutation on $[n]$ and $\epsilon \in \{-1,+1\}^n$ is a choice of sign for each position. A signed permutation $(\omega,\epsilon)$ acts on $\R^n$ by mapping $(x_1,\ldots,x_n)$ to $\left(\epsilon_1 x_{\omega(1)},\ldots,\epsilon_n x_{\omega(n)}\right)$.
The \emph{type $B$ Coxeter arrangement} is defined by the following hyperplanes:
\begin{align}
\label{e:ijplus}
X_{i,j,+} = \{ (x_1,\ldots,x_n) \in \R^n : x_i + x_j &= 0\};\\
\label{e:ijminus}
X_{i,j,-} = \{ (x_1,\ldots,x_n) \in \R^n : x_i - x_j &= 0\}; \text{ and}\\
\label{e:coords}
X_i = \{ (x_1,\ldots,x_n) \in \R^n : x_i &= 0\},
\end{align}
where $i,j \in [n]$ and $i < j$. It is known that the group $B_n$ of all signed permutations acts simply transitively on the regions of the type $B$ hyperplane arrangement, which implies that there are $2^n n!$ regions. See \cite[Section 7]{bjorner-wachs} or \cite[Section 1.15]{humphreys}, for example. Furthermore, the group $B_n$ is generated by reflections so that every group element can be represented as a matrix with determinant $\pm 1$.
For a symmetric density function $f:\R \ra \R$, we have $f(-x) = f(x)$ for all $x \in \R$. Let $g:\R^n \ra \R$ be the joint density function defined by $g(x_1,\ldots,x_n) = \prod_{i=1}^n f(x_i)$. Since $f$ is symmetric and products are invariant under permutations, we have $g(w \cdot \bm{x}) = g(\bm{x})$ for any signed permutation $w \in B_n$.
\begin{lemma}
\label{l:probregions}
Let $f:\R \ra \R$ be a symmetric density function of a continuous probability distribution. Let $g:\R^n \ra \R$ be the joint density for the random walk of $n$ steps given by $g(x_1,\ldots,x_n) = \prod_{i=1}^n f(x_i)$. Then, for any signed permutation $w \in B_n$, and any region $R$ of the type $B$ hyperplane arrangement, we have
\[
\int_{R} g(\bm{x}) \D \bm{x} = \frac{1}{2^n n!}.
\]
\end{lemma}
\begin{proof}
Let $R_i$ and $R_j$ be arbitrary regions. Since $B_n$ acts simply transitively on regions, there exists $w \in B_n$ such that $w(R_i) = R_j$. Since the absolute value of the Jacobian for $w$ is $1$, the fact that $g(w \cdot \bm{x}) = g(\bm{x})$ and the change-of-variables theorem imply
\[
\int_{R_i} g(\bm{x}) \D \bm{x} = \int_{R_i} g(w \cdot \bm{x}) \D \bm{x} = \int_{R_j} g(\bm{x}) \D \bm{x}.
\]
Since there are $2^n n!$ regions, the result follows.
\end{proof}
Recall Lemma \ref{l:chamberunion}: If the walls of a region $R$ lie in an arrangement $\HYP'$ distinct from the one that defined $R$, then we can write $R$ as a union of cells from $\HYP'$.
Thus, if the walls of a region $D_\pi$ of $\CS_n$ are type $B$ hyperplanes, the value $\PRB(f,\pi)$ can be calculated by counting the type $B$ regions contained in $D_\pi$.
\begin{lemma}
\label{l:countregions}
Suppose $f:\R \ra \R$ is a symmetric density function. Suppose the walls of $D_\pi$ are hyperplanes in the type $B$ Coxeter arrangement. Then
\[
\PRB(f,\pi) = \frac{1}{2^n \PROD_{i=1}^n \mathrm{lev}(\pi)_i}.
\]
\end{lemma}
\begin{proof}
The hypothesis and Lemma \ref{l:chamberunion} imply that
\[
D_\pi = \bigcup_{i = 1}^k R_i \; \cup \; \bigcup_{j=1}^{\ell} C_j,
\]
where each $R_i$ is a region of the type $B$ Coxeter arrangement and each cell $C_j$ is a measure $0$ cell of the arrangement. Thus,
\[
\PRB(f,\pi) = \int_{D_\pi} f(x_1)\cdots f(x_n) \D x_1 \cdots \D x_n = \sum_{i=1}^k \int_{R_i} f(x_1)\cdots f(x_n) \D x_1 \cdots \D x_n.
\]
Since $f$ is symmetric, the joint density function $g(x_1,\ldots,x_n) = \prod_{i=1}^n f(x_i)$ is invariant under the action of $B_n$.
By Lemma \ref{l:probregions}, the hypotheses imply $\PRB(f,\pi) = \frac{k}{2^n n!}$, where $k$ is the number of type $B$ regions contained in $D_\pi$. Since $k$ depends only on $\pi$, not on the choice of symmetric density function, we may choose $f$ to be the Laplace distribution. The result then follows from Theorem \ref{t:laplace}.
\end{proof}
\subsection{Almost consecutive permutations}
\label{ss:almostconsecutive}
It remains to identify the permutations $\pi \in S_{n+1}$ such that the walls of $D_\pi$ are hyperplanes of the type $B$ Coxeter arrangement. Recall from Lemma \ref{l:walls} that the set of walls for $D_\pi$ is the set of all $H_{\pi^{-1}(i),\pi^{-1}(i+1)}$ such that $i \in [n]$. In Lemma \ref{l:acwalls}, we show the walls of $D_\pi$ are type $B$ hyperplanes if $\pi$ is a permutation whose consecutive values occur no more than two positions apart in its $1$-line notation. These permutations (or their inverses) are called \emph{key permutations} in \cite{page} and \emph{3-determined permutations} in \cite{avgustinovich-kitaev}. In both papers it is shown that the counting sequence for these permutations, which is sequence \href{http://oeis.org/A003274/}{A003274} of the OEIS, grows asymptotically like $(1.4655\ldots)^n$.
\begin{definition}
\label{d:almostconsecutive}
We say $\pi \in S_{n+1}$ is \emph{almost consecutive} if $|\pi^{-1}(i+1) - \pi^{-1}(i)| \leq 2$ for all $i \in [n]$.
\end{definition}
\begin{example}
Let $\pi = 1423$. Then $\pi$ is almost consecutive since all instances of consecutive values are at most two positions apart in the $1$-line notation. By contrast, the permutation $2413$ is not almost consecutive, since the values $2$ and $3$ are three positions apart.
\end{example}
\begin{lemma}
\label{l:acwalls}
Let $\pi \in S_{n+1}$ be an almost consecutive permutation. Then the walls of $D_\pi$ are hyperplanes of the type $B$ Coxeter arrangement.
\end{lemma}
\begin{proof}
By Lemma \ref{l:walls}, the walls of $D_\pi$ are $H_{\pi^{-1}(i), \pi^{-1}(i+1)}$, where $i \in [n]$. Definition \ref{d:almostconsecutive} then implies that every wall of $D_\pi$ has the form $H_{k,k+1}$ or $H_{k,k+2}$ for some $k$. Thus, a given wall of $D_\pi$ is defined by an equation of the form $x_k = 0$ or $x_k + x_{k+1} = 0$, which is a hyperplane of the form~\eqref{e:coords} or~\eqref{e:ijplus}. In either case, a wall of $D_\pi$ is a hyperplane in the type $B$ Coxeter arrangement.
\end{proof}
\begin{theorem}
\label{t:acformula}
Let $\pi \in S_{n+1}$ be an almost consecutive permutation. Let $f:\R \ra \R$ be a symmetric density function. Then
\[
\PRB(f,\pi) = \frac{1}{2^n \PROD_{i=1}^n \mathrm{lev}(\pi)_i}.
\]
\end{theorem}
\begin{proof}
The result follows from Lemma \ref{l:acwalls} and Lemma \ref{l:countregions}.
\end{proof}
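Combined with exact enumeration, Theorem \ref{t:acformula} recovers the values $p_n$ tabulated in the introduction: $p_n$ is the sum of $\frac{1}{2^{n-1} \prod_j \mathrm{lev}(\pi)_j}$ over the almost consecutive $\pi \in S_n$. The sketch below (ours; the helper names are arbitrary) reproduces $p_2,\ldots,p_5$.

```python
from fractions import Fraction
from itertools import permutations
from math import prod

def lev(pi):
    m = len(pi) - 1
    return [sum(1 for i in range(m)
                if min(pi[i], pi[i + 1]) <= j < max(pi[i], pi[i + 1]))
            for j in range(1, m + 1)]

def almost_consecutive(pi):
    pos = {v: i for i, v in enumerate(pi)}
    return all(abs(pos[v + 1] - pos[v]) <= 2 for v in range(1, len(pi)))

def p_n(n):
    """Probability that n - 1 symmetric-density steps yield an almost
    consecutive pattern: sum the formula over such pi in S_n."""
    return sum(
        (Fraction(1, 2 ** (n - 1) * prod(lev(pi)))
         for pi in permutations(range(1, n + 1)) if almost_consecutive(pi)),
        Fraction(0),
    )

print([str(p_n(n)) for n in (2, 3, 4, 5)])  # ['1', '1', '2/3', '5/12']
```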
\section{Uniform random walk patterns and Affine \texorpdfstring{$A$}{A}} \label{s:affineA}
Throughout this section, the only density function $f:\R \ra \R$ under consideration is the uniform density function on $[-1,1]$. It is defined by $f(x) = \frac{1}{2}$ for $x \in [-1,1]$ and $f(x) = 0$ otherwise.
In Section \ref{ss:polytope}, we show that $\PRB(f,\pi)$ is related to the volume of a rational polytope derived from $\pi$. This rational polytope turns out to be an alcoved polytope, which is a union of the regions (called alcoves) of the {\typeA}. Much is known about the {\typeA} and the {\affWeyl} that acts on its regions. Thus, the early subsections of this section are devoted to translating everything into the language of type $A$ root systems. The main result, Theorem \ref{t:weakordertheorem}, states that $\PRB(f,\pi)$ can be computed by counting the elements of a weak order interval of $\A_n$.
\subsection{The polytope of steps that generate \texorpdfstring{$\pi$}{p}}
\label{ss:polytope}
We now define a rational polytope $P_\pi$ that is used to reduce the problem of calculating $\PRB(f,\pi)$ to the problem of calculating the volume of $P_\pi$.
\begin{definition}
\label{d:rationalpolytope}
Let $\pi \in S_{n+1}$. Let $m_i = \mathrm{min}\{\pi(i),\pi(i+1)\}$ and $M_i = \mathrm{max}\{\pi(i),\pi(i+1)\}$. We call the rational polytope $P_\pi$ satisfying
\begin{align}
\label{e:rationalcoords}
&x_i \geq 0,\\
\label{e:linearsystem}
0 \leq &x_{m_i} + \cdots + x_{M_i - 1} \leq 1,
\end{align}
for all $i \in [n]$ the \emph{polytope of steps for $\pi$}.
\end{definition}
\begin{example}
\label{ex:linearsystem}
Let $\pi = 2413$. The system of inequalities defining $P_\pi$ is given by
\begin{align*}
0 \leq x_2 + x_3 &\leq 1,\\
0 \leq x_1 + x_2 + x_3 &\leq 1, \\
0 \leq x_1 + x_2 &\leq 1, \text{ and}\\
x_1,x_2,x_3 &\geq 0.
\end{align*}
\end{example}
Recall that Lemma \ref{l:general} expresses $\PRB(f,\pi)$ as $\int_{\R_{>0}^n} \prod_{i=1}^n f(L_\pi(\bm{x})_i) \D \bm{x}$. Also recall Definition \ref{d:represent}, which expresses the $i$-th coordinate of $L_\pi(x_1,\ldots,x_n)$ as
\[
L_\pi(x_1,\ldots,x_n)_i =
\begin{cases}
x_{\pi(i)} + \cdots + x_{\pi(i+1) - 1}, & \;\; \text{if } \pi(i) < \pi(i+1),\\
-(x_{\pi(i+1)} + \cdots + x_{\pi(i) - 1}), & \;\; \text{if } \pi(i+1) < \pi(i).
\end{cases}
\]
\begin{lemma}
\label{l:volumepolytope}
Let $f = \frac{1}{2} \Chi_{[-1,1]}$ be the uniform density function on $[-1,1]$. Let $\pi \in S_{n+1}$. Let $P_\pi$ be the polytope of steps for $\pi$. Then
\[
\PRB(f,\pi) = \frac{1}{2^n} \cdot \mathrm{volume}(P_\pi).
\]
\end{lemma}
\begin{proof}
Let $m_i = \mathrm{min}\{\pi(i),\pi(i+1)\}$ and $M_i = \mathrm{max}\{\pi(i),\pi(i+1)\}$.
By Lemma \ref{l:general}, we have
\begin{equation}
\label{e:integralpolytope}
\PRB(f,\pi) = \INT_{\R_{>0}^n} \prod_{i=1}^n \frac{1}{2} \Chi_{[-1,1]}(L_\pi(\bm{x})_i) \; \D \bm{x} = \frac{1}{2^n} \INT_{\R_{>0}^n} \prod_{i=1}^n \Chi_{[-1,1]}(L_\pi(\bm{x})_i) \;\D \bm{x}.
\end{equation}
Let $\bm{x} = (x_1,\ldots,x_n) \in \R_{> 0}^n$. Then $x_j > 0$ for all $j \in [n]$. Thus $L_\pi(x_1,\ldots,x_n)_i \in [-1,1]$ if and only if
\[
0 \leq x_{m_i} + \cdots + x_{M_i - 1} \leq 1.
\]
The last integrand of \eqref{e:integralpolytope} is $1$ if the system of inequalities defining $P_\pi$ in Definition \ref{d:rationalpolytope} is satisfied, and $0$ otherwise. Thus, the last integral of \eqref{e:integralpolytope} calculates the volume of $P_\pi$.
\end{proof}
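Both sides of Lemma \ref{l:volumepolytope} can be estimated numerically for $\pi = 2413$ (our own Monte Carlo sketch). Inside $[0,1]^3$, the inequality $x_1 + x_2 + x_3 \leq 1$ from Example \ref{ex:linearsystem} implies the other two upper bounds, so $P_{2413}$ is a simplex of volume $1/6$, and the lemma predicts $\PRB(f, 2413) = \frac{1}{2^3} \cdot \frac{1}{6} = \frac{1}{48}$ for steps uniform on $[-1,1]$.

```python
import random

def pattern(steps):
    """Ordinal pattern of the walk with positions z_1 = 0, z_i = x_1 + ... + x_{i-1}."""
    z = [0.0]
    for x in steps:
        z.append(z[-1] + x)
    return tuple(sum(1 for zj in z if zi >= zj) for zi in z)

random.seed(1)
trials = 100_000

# Monte Carlo volume of P_2413: inside [0,1]^3, the constraint
# x1 + x2 + x3 <= 1 implies the other two, so P_2413 has volume 1/6
inside = sum(
    sum(random.random() for _ in range(3)) <= 1 for _ in range(trials)
)
vol = inside / trials

# direct simulation: frequency of pattern 2413 with steps uniform on [-1, 1];
# the lemma predicts vol / 2^3 = 1/48
freq = sum(
    pattern([random.uniform(-1, 1) for _ in range(3)]) == (2, 4, 1, 3)
    for _ in range(trials)
) / trials

print(vol, freq)
```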
\begin{remark}
A consequence of the coordinate inequalities $x_i \geq 0$ and those of the form $0 \leq x_{m_i} + \cdots + x_{M_i - 1} \leq 1$ is that $x_a + \cdots + x_b \leq 1$ whenever $m_i \leq a \leq b < M_i$.
In particular, it is a consequence of Lemma \ref{l:existcoordinate} that $x_i \leq 1$ for all $i \in [n]$, which implies $P_\pi \subseteq [0,1]^n$.
\end{remark}
\subsection{Type \texorpdfstring{$A$}{A} root system preliminaries}
Let $\epsilon_1,\ldots,\epsilon_{n+1}$ be the standard basis of $\R^{n+1}$. Let $(\cdot,\cdot)$ be the standard inner product on $\R^{n+1}$. Let
\[
V = \left\{\lambda \in \R^{n+1} : \; \;(\lambda, \epsilon_1 + \cdots + \epsilon_{n+1}) = 0 \right\}.
\]
The set
\[
\Phi = \left\{\epsilon_i - \epsilon_j \in V : \; i,j \in [n+1] \text{ and } i \neq j\right\}
\]
is called the \emph{root system of type $A_n$}. The sets
\begin{align*}
\Phi^+ &= \{\epsilon_i - \epsilon_j \in V : \; i,j \in [n+1] \text{ and } i < j\} \text{ and}\\
\Phi^- &= \{-\lambda \in V : \; \lambda \in \Phi^+\},
\end{align*}
respectively, are called the set of \emph{positive roots} and the set of \emph{negative roots}, respectively.
\begin{notation*}
We often abbreviate $\epsilon_i - \epsilon_j \in \Phi^+$ by $(i,j) \in \Phi^+$.
\end{notation*}
Let $\alpha_i = \epsilon_i - \epsilon_{i+1}$. Then
\[
\Delta = \{ \epsilon_i - \epsilon_{i+1} \in V: \;\; i \in [n]\} = \{ \alpha_i \in V: \;\; i \in [n]\}
\]
is a basis for $V$. The vectors $\alpha_1,\ldots,\alpha_n$ contained in $\Delta$ are called \emph{simple roots}. There is a dual basis to $\Delta$ consisting of vectors $\omega_1,\ldots,\omega_n$ satisfying $(\omega_i, \alpha_j) = \delta_{ij}$. The dual basis is called the basis of \emph{fundamental coweights}.
The \emph{\Weyl} is the group generated by reflections about the hyperplanes orthogonal to the simple roots. Explicitly, the reflection $s_i$ about the hyperplane orthogonal to $\alpha_i$ is given by
\begin{equation}
\label{e:reflection}
s_i (\lambda) = \lambda - (\lambda, \alpha_i) \alpha_i.
\end{equation}
The map that sends the adjacent transposition $(i \;\; i+1) \in S_{n+1}$ to the reflection $s_i \in A_n$ is called the \emph{geometric representation}. It is a faithful representation of the symmetric group as a Coxeter group. See \cite[Section 4.2]{bjorner-brenti2}, for example.
The representation $L$ given in Definition \ref{d:represent} is closely related to the geometric representation of $S_{n+1}$ as the {\Weyl}.
\begin{lemma}
\label{l:matrixgeometric}
The matrix representation of $s_i$ in the basis of simple roots is $I + M$, where $I$ is the identity matrix, and $M$ is the matrix whose only nonzero entries are given by $M_{i,i-1} = 1$, $M_{i,i} = -2$, and $M_{i,i+1} = 1$.
\end{lemma}
\begin{proof}
This follows directly from \eqref{e:reflection} and appears in the proof of \cite[Proposition 4.2.1]{bjorner-brenti2}.
\end{proof}
\begin{lemma}
\label{l:coxeterrep}
Let $\pi \in S_{n+1}$. The matrix representation of $\pi$ in the basis of simple roots is $L_\pi^T$. Consequently, the matrix $L_\pi$ is the matrix representation of $\pi$ in the basis of fundamental coweights.
\end{lemma}
\begin{proof}
Recall from Lemma \ref{l:represent} that the function $L:S_{n+1} \ra GL_n(\R)$ that maps $\pi$ to $L_\pi$ is a representation. Thus, it suffices to check the result for the adjacent transpositions.
Let $\pi$ be the adjacent transposition $(i \;\; i+1)$. We may exhaustively check that $L_\pi^T$ is the geometric representation given in Lemma \ref{l:matrixgeometric}.
Note that $\pi(j) = j$ and $\pi(j+1) = j + 1$ except for $j \in \{i-1,i,i+1\}$. Thus all rows of $L_\pi$ match the identity matrix except rows $i-1$, $i$, and $i+1$.
Since $\pi(i-1) = i-1$ and $\pi(i) = i + 1$, the $(i-1)$-st row of $L_\pi$, if it exists, has a $1$ in columns $i-1$ and $i$ and $0$'s in all other positions. Similarly, if row $i+1$ exists, there is a $1$ in columns $i$ and $i+1$ and $0$'s in all other positions. Since $\pi(i) = i+1$ and $\pi(i+1) = i$, the only nonzero entry of row $i$ is a $-1$ in column $i$.
In summary, we may write $L_\pi$ as $I + N$, where the only nonzero entries of $N$ are given by $N_{i-1,i} = 1$, $N_{i,i} = -2$, and $N_{i+1,i} = 1$. This is the transpose of the matrix for the geometric representation given in Lemma \ref{l:matrixgeometric}.
\end{proof}
\subsection{The affine arrangement of type \texorpdfstring{$A_n$}{An} in step coordinates}
The definition of the {\typeA} and its connected components involve inner products of the form $(\lambda, \epsilon_i - \epsilon_j)$. Note that $\lambda$, expressed in the basis of fundamental coweights as $x_1 \omega_1 + \cdots + x_n \omega_n$, satisfies
\begin{align*}
(\lambda, \epsilon_i - \epsilon_j) &= (x_1 \omega_1 + \cdots + x_n \omega_n\; , \; \alpha_i + \cdots + \alpha_{j-1})\\
&= x_i + \cdots + x_{j-1}.
\end{align*}
The linear isomorphism mapping $x_1 \omega_1 + \cdots + x_n \omega_n$ to $(x_1,\ldots,x_n)$ translates results about the {\typeA} to results about $P_\pi$. We refer to the image $(x_1,\ldots,x_n) \in \R^n$ of this isomorphism as \emph{step coordinates} in reference to the steps of the random walk. Whenever it makes sense, we expand the standard results and definitions about the {\typeA} into the basis $\omega_1,\ldots,\omega_n$ in anticipation of what is needed to calculate the volume of $P_\pi$.
\begin{definition}
\label{d:consecutiveaffine}
Let $(i,j) \in \Phi^+$ and $a \in \Z$. Let
\begin{align*}
H_{ij}^a &= \{\lambda \in V : (\lambda, \epsilon_i - \epsilon_j) = a\}\\
&= \{x_1 \omega_1 + \cdots + x_n \omega_n \in V : x_i + \cdots + x_{j-1} = a\}.
\end{align*}
The collection of all hyperplanes of the form $H_{ij}^a$ is called the \emph{\typeA}. The connected components of $V \setminus \cup H_{ij}^a$ are called \emph{alcoves}. The group generated by the set of reflections about hyperplanes of the form $H_{ij}^a$ is the \emph{\affWeyl}.
\end{definition}
Let $\alc$ be an alcove of the affine walk arrangement. For any $\lambda \in \alc$, and any pair $(i,j) \in \Phi^+$, Definition \ref{d:consecutiveaffine} implies the existence of an integer $k_\alc (i,j)$ such that $(\lambda, \epsilon_i - \epsilon_j)$ is strictly between $k_\alc (i,j)$ and $k_\alc (i,j) + 1$.
\begin{definition}
\label{d:address}
Let $\alc$ be an alcove of the affine walk arrangement. Let $\Phi^+$ be the set of positive roots. The function $k_\alc : \Phi^+ \ra \Z$ such that
\begin{align*}
\alc &= \left\{ \lambda \in V : \; \;
k_\alc (i,j) < (\lambda, \epsilon_i - \epsilon_j) < k_\alc (i,j) + 1 \text{ for all } (i,j) \in \Phi^+ \right\}\\
&= \left\{ x_1 \omega_1 + \cdots + x_n \omega_n \in V: \;\; k_\alc (i,j) < x_i + \cdots + x_{j-1} < k_\alc (i,j) + 1 \text{ for all } (i,j) \in \Phi^+\right\}
\end{align*}
is called the \emph{address of $\alc$}. The alcove
\begin{align*}
\alc_{\circ} &= \left\{\lambda \in V : \;\; 0 < (\lambda,\epsilon_i - \epsilon_j) < 1 \textrm{ for all } (i,j) \in \Phi^+ \right\}\\
&= \left\{x_1 \omega_1 + \cdots + x_n\omega_n \in V : \;\; 0 < x_i + \cdots + x_{j-1} < 1 \textrm{ for all } (i,j) \in \Phi^+ \right\}
\end{align*}
is called the \emph{fundamental alcove}.
\end{definition}
Thus, the fundamental alcove is the unique alcove whose address is the constant zero function from $\Phi^+$ to $\Z$.
\begin{example}
\label{ex:alcove}
Every point in the unit hypercube that is not in the measure zero union of hyperplanes of the affine walk arrangement lies in some alcove. For example, the point $(0.2, 0.05, 0.8, 0.4, 0.7)$ in step coordinates is in the alcove $\alc$ whose address is shown in Figure \ref{f:alcove}.
\end{example}
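In step coordinates the address is just the vector of integer parts of the consecutive partial sums, so it can be read off directly from a point. The following sketch (the function name is ours) computes the address of the alcove containing a point that lies off the hyperplanes of the arrangement:

```python
import math

def alcove_address(x):
    """Address of the alcove containing the point x (given in step
    coordinates and assumed to lie off every hyperplane of the
    arrangement): k(i, j) is the integer part of x_i + ... + x_{j-1},
    for pairs 1 <= i < j <= n + 1."""
    n = len(x)
    return {(i, j): math.floor(sum(x[i - 1:j - 1]))
            for i in range(1, n + 1) for j in range(i + 1, n + 2)}
```

Applied to the point $(0.2, 0.05, 0.8, 0.4, 0.7)$ of Example \ref{ex:alcove}, this reproduces the address in Figure \ref{f:alcove}; for instance $k(1,6) = \lfloor 0.2 + 0.05 + 0.8 + 0.4 + 0.7 \rfloor = 2$.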
\begin{figure}[htb]
\centering
\begin{tikzpicture}[scale=.25]
\foreach \i in {1,2,3,4,5} {
\pgfmathsetmacro{\a}{int(\i + 1)}
\foreach \j in {\a,...,6} {
\draw (2*\i,2*\j) node{$\i \j$};
}
}
\draw (14.5,9.5) node{$k_\alc$};
\draw[very thick,->] (13,7) .. controls (14,9.5) and (15,6) .. (17,7);
\draw (20, 4) node{$0$};
\draw (20, 6) node{$0$};
\draw (20, 8) node{$1$};
\draw (20, 10) node{$1$};
\draw (20, 12) node{$2$};
\draw (22, 6) node{$0$};
\draw (22, 8) node{$0$};
\draw (22, 10) node{$1$};
\draw (22, 12) node{$1$};
\draw (24, 8) node{$0$};
\draw (24, 10) node{$1$};
\draw (24, 12) node{$1$};
\draw (26, 10) node{$0$};
\draw (26, 12) node{$1$};
\draw (28, 12) node{$0$};
\end{tikzpicture}
\caption{The address of the alcove $\alc$ containing the point in Example \ref{ex:alcove}.}
\label{f:alcove}
\end{figure}
The group $\A_n$ has generating set $s_1,\ldots,s_{n+1}$, where $s_1,\ldots,s_n$ are the same generators from $A_n$ that reflect about the hyperplanes $H_{i \; i+1}^0$. The generator $s_{n+1}$ reflects about the hyperplane $H_{1 \; n+1}^1$. Thus, the action of $s_i$ for $i \leq n$ is to swap the $i$-th and $(i+1)$-st coordinates of elements of $V$. The action of $s_{n+1}$ is to swap the first and last coordinates, adding one to the first coordinate and subtracting one from the last. See \cite[page 86]{shi1} or \cite[Section 4.3]{humphreys}, for example.
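The action just described is easy to implement and test. The sketch below is our own; coordinates of points of $V$ are stored as vectors of length $n+1$. One can check, for instance, that each $s_i$ is an involution and that $(s_2 s_3)^3$ is the identity when $n = 2$:

```python
def s(i, v):
    """Apply the generator s_i of the affine Weyl group to a coordinate
    vector v of V (length n + 1).  For i <= n, s_i swaps coordinates i
    and i + 1; s_{n+1} swaps the first and last coordinates while adding
    one to the first and subtracting one from the last."""
    n = len(v) - 1
    w = list(v)
    if i <= n:
        w[i - 1], w[i] = w[i], w[i - 1]
    else:
        w[0], w[n] = w[n] + 1, w[0] - 1
    return w
```

For example, starting from $v = (1, -3, 2)$ and applying $s_2 s_3$ three times returns $v$, as the braid relation predicts.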
The first part of the next lemma provides a correspondence between the group $\A_n$ and the alcoves of the {\typeA}. The last part provides the link to calculating the volume of $P_\pi$. Recall that Lemma \ref{l:coxeterrep} identifies $L_\pi^T$ as the matrix representing $\pi$ in the geometric representation. Also recall from Lemma \ref{l:det} that the determinant of $L_\pi$ is $\pm 1$.
\begin{lemma}
The following are true about the {\affWeyl}.
\label{l:affineprelims}
\begin{enumerate}[(i)]
\item
The {\affWeyl} acts simply transitively on the alcoves of the {\typeA}.
\item
Every element of $\A_n$ is a product of an element of $A_n$ and a translation.
\item
Elements of $\A_n$ acting on step coordinates are volume-preserving on $\R^n$ relative to the standard inner product on $\R^n$ and Lebesgue measure.
\end{enumerate}
\end{lemma}
\begin{proof}
Part (i) is \cite[Theorem 4.5]{humphreys}. Part (ii) is \cite[Proposition 4.2]{humphreys}.
Lemma \ref{l:coxeterrep} shows that elements of $A_n$ expressed as matrices relative to the basis of fundamental coweights have the form $L_\pi$ for some $\pi \in S_{n+1}$. Since translation preserves volume in any basis under any inner product, part (ii) and Lemma \ref{l:det} prove part (iii).
\end{proof}
\begin{remark}
When we convert to coordinates in $\R^n$ via the basis of fundamental coweights, we are calculating volumes and integrals with a standard Lebesgue measure on $\R^n$ equipped with the standard inner product. This is not the same inner product as the one on $V$. To see this difference in inner product visually, compare \cite[Figure 5.1]{martinez} to a standard centrally-symmetric representation of the braid arrangement in the plane.
\end{remark}
Part (i) of Lemma \ref{l:affineprelims} ensures that $w(\alc_\circ)$ in the next definition is an alcove.
\begin{definition}
\label{d:alcovelabel}
Let $w \in \A_n$. The \emph{alcove of $w$}, denoted $\alc_w$, is the alcove $w(\alc_\circ)$.
\end{definition}
\subsection{Computing the volume of \texorpdfstring{$P_\pi$}{Pp} by counting alcoves}
\begin{lemma}
\label{l:polytopeunion}
Let $\pi \in S_{n+1}$ and let $P_\pi$ be the polytope of steps for $\pi$. Let $\alc$ be an alcove of the {\typeA} expressed in step coordinates. Then $\alc \subseteq P_\pi$ or $\alc \cap P_\pi = \emptyset$.
\end{lemma}
\begin{proof}
The address $k_\alc: \Phi^+ \ra \Z$ for $\alc$ determines a system of inequalities where each inequality has the form $k_\alc(i,j) < x_i + \cdots + x_{j-1} < k_\alc(i,j) + 1$, for each $(i,j) \in \Phi^+$. This includes the pairs $(m_i, M_i - 1) \in \Phi^+$ in Definition \ref{d:rationalpolytope}. If $k_\alc(m_i,M_i - 1) = 0$ for all $i \in [n]$, then every $\bm{x} \in \alc$ satisfies all the inequalities that define $P_\pi$, which implies $\alc \subseteq P_\pi$. Otherwise, the sum of coordinates $x_{m_i} + \cdots + x_{M_i - 1}$ is incompatible with $P_\pi$ for some $i \in [n]$, which implies $\alc \cap P_\pi = \emptyset$.
\end{proof}
\begin{lemma}
\label{l:affinealcovecount}
Let
\begin{align*}
P &= \left\{ \lambda \in V : \;\; 0 < (\lambda, \alpha_i) < 1 \text{ for all } i \in [n]\right\}\\
&= \left\{ x_1 \omega_1 + \cdots + x_n \omega_n : \;\; 0 < x_i < 1 \text{ for all } i \in [n]\right\}.
\end{align*}
In step coordinates, the parallelepiped $P$ is the unit cube $[0,1]^n$. There are $n!$ alcoves of the {\typeA} contained in $P$.
\end{lemma}
\begin{proof}
See the proof of \cite[Theorem 4.9]{humphreys} or \cite[Section 3]{lam-postnikov2}.
\end{proof}
\begin{corollary}
\label{c:polytopevolume}
In step coordinates, each alcove of the {\typeA} has volume $1/n!$. Thus,
\[
\PRB(f,\pi) = \frac{K_\pi}{2^n n!},
\]
where $K_\pi$ is the number of alcoves contained in $P_\pi$.
\end{corollary}
\begin{proof}
The set of points not in any alcove has measure zero. Thus part (iii) of Lemma \ref{l:affineprelims} and Lemma \ref{l:affinealcovecount} show that alcoves have volume $1/n!$ in step coordinates. The result then follows from Lemma \ref{l:polytopeunion} and Lemma \ref{l:volumepolytope}.
\end{proof}
Not every function from $\Phi^+$ to $\Z$ is the address of an alcove. A characterization of such functions is given by Shi's Theorem. See \cite[Lemma 6.1.3]{shi1} or \cite[Theorem 5.2]{shi2}.
\begin{theorem}
\label{t:shi}
(Shi's Theorem) A function $k:\Phi^+ \ra \Z$ is the address of an alcove if and only if
\[
k(i,t) + k(t,j) \leq k(i,j) \leq k(i,t) + k(t,j) + 1
\]
for all $i,t,j$ satisfying $1 \leq i < t < j \leq n + 1$.
\end{theorem}
Shi's Theorem and Corollary \ref{c:polytopevolume} provide a straightforward, though inefficient, method for computing $\PRB(f,\pi)$. This method, along with an alternative one based on \cite{stanley}, is given in \cite{denoncourt}.
\begin{proposition}
\label{p:computeuniform}
Let $\pi \in S_{n+1}$. Let $f$ be the uniform density function on $[-1,1]$. Let $K_\pi$ denote the number of functions $k:\Phi^+ \ra \N$ satisfying the inequalities
\[
k(i,t) + k(t,j) \leq k(i,j) \leq k(i,t) + k(t,j) + 1,
\]
where $i < t < j$, and also satisfying the equalities $k(i,j) = 0$ whenever there exists $c$ such that $\pi(c) \leq i < j \leq \pi(c+1)$ or $\pi(c+1) \leq i < j \leq \pi(c)$. Then
\[
\PRB(f,\pi) = \frac{K_\pi}{2^n n!}.
\]
\end{proposition}
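For small $n$, Proposition \ref{p:computeuniform} can be implemented by brute force. In the sketch below (function names are ours), the search space is finite because every simple root lies in the zero set, so Shi's Theorem forces $0 \leq k(i,j) \leq j - i - 1$:

```python
from itertools import product

def root_ideal(pi):
    """Roots (a, b) lying below some consecutive root of pi in the root
    poset (these are the roots forced to have k(a, b) = 0)."""
    n = len(pi) - 1
    C = [tuple(sorted((pi[c], pi[c + 1]))) for c in range(n)]
    return {(a, b) for a in range(1, n + 1) for b in range(a + 1, n + 2)
            if any(i <= a < b <= j for (i, j) in C)}

def num_alcoves(pi):
    """K_pi: the number of addresses k >= 0 vanishing on the root ideal
    of pi that satisfy the inequalities of Shi's Theorem."""
    n = len(pi) - 1
    roots = [(i, j) for i in range(1, n + 1) for j in range(i + 1, n + 2)]
    free = [r for r in roots if r not in root_ideal(pi)]
    count = 0
    for vals in product(*(range(j - i) for (i, j) in free)):
        k = dict.fromkeys(roots, 0)
        k.update(zip(free, vals))
        if all(k[(i, t)] + k[(t, j)] <= k[(i, j)] <= k[(i, t)] + k[(t, j)] + 1
               for (i, j) in roots for t in range(i + 1, j)):
            count += 1
    return count
```

For example, \texttt{num\_alcoves((1, 2, 3))} returns $2$, giving $\PRB(f, 123) = 2/(2^2 \cdot 2!) = 1/4$, the probability that both steps are positive; the counts over all of $S_{n+1}$ sum to $2^n\, n!$.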
\subsection{A characterization of the weak order in terms of alcove addresses}
\label{ss:weakordersection}
Recall that $\A_n$ is generated by reflections $s_1,\ldots,s_{n+1}$. The \emph{length of $w$}, denoted $\ell(w)$, is the smallest number of generators in an expression of $w$ as a product of generators. Define a relation $\ra$ by the condition $w \ra ws$ if $s$ is a generator and $\ell(ws) > \ell(w)$. The \emph{weak order} on $\A_n$ is defined as the transitive closure of the relation $\ra$.
The main result of this section, Lemma \ref{l:weakcharacter1}, characterizes the weak order on $\A_n$ in terms of alcove addresses. This characterization may already be known, or be folklore. There is an indirect way to prove the lemma by combining \cite[Theorem 4.1]{shi3} with \cite[Theorem 5.3]{bjorner-brenti2}. The approach given below uses a geometric characterization of the weak order on $\A_n$ given in \cite{humphreys}.
For a given hyperplane $H_{ij}^a$ of the {\typeA}, the two sides of the hyperplane are determined by the conditions $(\lambda, \epsilon_i - \epsilon_j) > a$ and $(\lambda, \epsilon_i - \epsilon_j) < a$. We say a hyperplane $H$ \emph{separates} $\alc$ from $\alc_\circ$ if $\alc$ and $\alc_\circ$ lie on opposite sides of $H$. Based on these conditions, we can determine from the address of $\alc$ whether $H_{ij}^a$ separates $\alc$ from $\alc_\circ$.
\begin{lemma}
\label{l:separatingaddress}
Let $H_{ij}^a$ be a hyperplane in the {\typeA}, let $\alc_\circ$ denote the fundamental alcove, and let $\alc$ be an arbitrary alcove. If $a > 0$, then $H_{ij}^a$ separates $\alc$ from $\alc_\circ$ if and only if $k_\alc(i,j) \geq a$. If $a \leq 0$, then $H_{ij}^a$ separates $\alc$ from $\alc_\circ$ if and only if $k_\alc(i,j) \leq a - 1$.
\end{lemma}
\begin{proof}
Suppose $a > 0$. Since $k_{\alc_\circ}(i,j) = 0$, we have $\alc_\circ$ on the side of $H_{ij}^a$ where $(\lambda,\epsilon_i - \epsilon_j) < a$. Note that $\alc$ is on the side where $(\lambda, \epsilon_i - \epsilon_j) > a$ if and only if $k_\alc(i,j) \geq a$. Thus $H_{ij}^a$ separates $\alc_\circ$ and $\alc$ if and only if $k_\alc(i,j) \geq a$.
The argument for $a \leq 0$ is similar.
\end{proof}
\begin{lemma}
\label{l:separatingweak}
Let $\mathcal{L}(w)$ be the set of hyperplanes separating $\alc_w$ from $\alc_\circ$. Then $u \leq w$ in the weak order if and only if $\mathcal{L}(u) \subseteq \mathcal{L}(w)$.
\end{lemma}
\begin{proof}
This is \cite[Theorem 4.5]{humphreys}.
\end{proof}
Let $k:\Phi^+ \ra \Z$ and $k':\Phi^+ \ra \Z$ be addresses. We write $k' \leq_A k$ if, for every $(i,j) \in \Phi^+$, either $k'(i,j)$ and $k(i,j)$ are both nonnegative with $k'(i,j) \leq k(i,j)$, or both are nonpositive with $k'(i,j) \geq k(i,j)$. We write $k' \leq k$ if $k'(i,j) \leq k(i,j)$ for all $(i,j) \in \Phi^+$, which is the standard pointwise comparison of functions.
\begin{lemma}
\label{l:weakcharacter1}
Let $u,w \in \A_n$. Then $u \leq w$ if and only if $k_u \leq_A k_w$.
\end{lemma}
\begin{proof}
The result follows from Lemma \ref{l:separatingaddress} and Lemma \ref{l:separatingweak}.
\end{proof}
The addresses of alcoves in $P_\pi$ are all greater than or equal to $0$. Thus, we can simplify the previous lemma to characterize the weak order as a pointwise comparison of addresses as functions.
\begin{corollary}
\label{c:weakcharacter2}
Let $u,w \in \A_n$. Suppose $k_u(i,j) \geq 0$ and $k_w(i,j) \geq 0$ for all $(i,j) \in \Phi^+$. Then $u \leq w$ if and only if $k_u \leq k_w$.
\end{corollary}
\subsection{Ideals in the root poset determine the alcoves in \texorpdfstring{$P_\pi$}{Pp}}
\label{ss:rootposet}
If we set $k(i,j) = 0$ whenever required by Proposition \ref{p:computeuniform}, and greedily set each remaining $k(i,j)$ to the maximum value allowed by Shi's Theorem, then we obtain a maximal address satisfying the system of linear inequalities defining the polytope $P_\pi$. By Corollary \ref{c:weakcharacter2}, if this is the unique maximum address satisfying the system, then the alcoves in $P_\pi$ correspond to a weak order interval of $\A_n$. We use a construction due to Sommers \cite{sommers} to show that this is the case.
There is a standard order $\leq$ on $\Phi^+$, called \emph{the root poset}, such that $(i',j') \leq (i,j)$ if and only if $i \leq i' < j' \leq j$, which is equivalent to $[i',j'] \subseteq [i,j]$. Recall that an ideal is a down-closed subset of a poset.
\begin{definition} \label{d:ideal}
Let $\pi \in S_{n+1}$. For $i \in [n]$, we say $(\pi(i),\pi(i+1))$ is a \emph{consecutive root for $\pi$} if $\pi(i) < \pi(i+1)$. Similarly, if $\pi(i+1) < \pi(i)$, we say $(\pi(i+1),\pi(i))$ is a \emph{consecutive root for $\pi$}. Denote the collection of consecutive roots for $\pi$ by $C_\pi$. Define the \emph{root ideal of $\pi$}, denoted $\IDEAL_\pi$, by
\begin{equation*}
\IDEAL_\pi := \{(i',j') : (i',j') \leq (i,j) \text{ for some } (i,j) \in C_\pi\}
\end{equation*}
\end{definition}
The motivation for defining $\IDEAL_\pi$ comes from the next lemma, which states that the address of any alcove in $P_\pi$ vanishes on the ideal $\IDEAL_\pi$.
\begin{lemma} \label{l:ideal}
Let $k_\alc:\Phi^+ \ra \Z$ be the address of an alcove $\alc$ in the polytope $P_\pi$ of steps for $\pi$. For any $(i',j') \in \IDEAL_\pi$ and any $(i,j) \in C_\pi$ with $(i',j') \leq (i,j)$ and $k_\alc(i,j) = 0$, we have $k_\alc(i',j') = 0$.
\end{lemma}
\begin{proof}
Given that $(i,j) \in C_\pi$, the defining inequalities of $P_\pi$ give $0 < x_i + \cdots + x_{j-1} < 1$. Thus, if $i \leq i' < j' \leq j$, we have $0 < x_{i'} + \cdots + x_{j'-1} < 1$ as well. It follows that $k_\alc(i',j') = 0$.
\end{proof}
In the next definition, it is more convenient to regard elements of $\Phi^+$ as vectors, rather than using our abbreviation as pairs $(i,j)$ of integers.
\begin{definition}
\label{d:rootideal}
For a fixed root $\alpha \in \Phi^+$ and a fixed ideal $\IDEAL$ of $\Phi^+$, let $\alpha_{\IDEAL}$ be defined by
\[
\alpha_{\IDEAL} = \text{min} \; \left\{k : \sum_{i = 1}^{k + 1} \gamma_i = \alpha \text{ with } \gamma_i \in \IDEAL \right\}.
\]
\end{definition}
In other words, the smallest number of elements of $\IDEAL$ needed to write $\alpha$ as a sum of roots in $\IDEAL$ is $\alpha_{\IDEAL} + 1$. In particular, $\alpha_{\IDEAL} = 0$ for any element of $\IDEAL$.
As in Section \ref{ss:weakordersection}, we write $k' \leq k$ if $k'(i,j) \leq k(i,j)$ for all $(i,j) \in \Phi^+$ for addresses that are always nonnegative. The next lemma is a dual version of \cite[Theorem 2]{armstrong}.
\begin{lemma}
\label{l:sommers}
(Sommers \cite[Section 5]{sommers})
For any ideal $\IDEAL$ of $\Phi^+$ that contains all the simple roots, there exists a unique maximum address $k_\IDEAL:\Phi^+ \ra \Z$ such that $k_\IDEAL(i,j) = 0$ for all $(i,j) \in \IDEAL$. It is defined by
\[
k_\IDEAL(i,j) = (i,j)_{\IDEAL}.
\]
\end{lemma}
\begin{proof}
In the proof of \cite[Lemma 5.1 part (2)]{sommers}, it is shown that $k(i,j) \leq (i,j)_\IDEAL$ for any address $k$ satisfying $k(i,j) = 0$ for all $(i,j) \in \IDEAL$. In \cite[Lemma 5.2 part (2)]{sommers}, it is shown that there exists an address $k$ such that $k(i,j) = (i,j)_\IDEAL$ for all $(i,j) \in \Phi^+$. Since any address $k'$ satisfying $k'(i,j) = 0$ for all $(i,j) \in \IDEAL$ must also satisfy $k'(i,j) \leq (i,j)_\IDEAL = k(i,j)$, it follows that $k$ is the unique maximum address such that $k(i,j)$ is zero on $\IDEAL$.
\end{proof}
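In type $A$, writing the root $\epsilon_i - \epsilon_j$ as a sum of elements of $\IDEAL$ amounts to splitting the interval from $i$ to $j$ into consecutive subintervals belonging to $\IDEAL$, so $(i,j)_{\IDEAL}$ can be computed by a minimal chain decomposition. A sketch (function names are ours):

```python
from functools import lru_cache

def sommers_address(ideal, n):
    """k_I(i, j) = (i, j)_I: one less than the minimal number of elements
    of the ideal I of Phi^+ summing to the root (i, j)."""
    frozen = frozenset(ideal)

    @lru_cache(maxsize=None)
    def parts(i, j):
        # Minimal number of ideal elements chaining i to j.
        if (i, j) in frozen:
            return 1
        return min((parts(i, t) + parts(t, j) for t in range(i + 1, j)),
                   default=float("inf"))

    return {(i, j): parts(i, j) - 1
            for i in range(1, n + 1) for j in range(i + 1, n + 2)}
```

For the ideal of Example \ref{ex:ideal} (the simple roots together with $(1,3)$ and $(2,4)$, with $n = 5$), this gives, for instance, $k_\IDEAL(4,6) = 1$ and $k_\IDEAL(3,6) = 2$.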
\begin{example}
\label{ex:ideal}
The alcove address of Figure \ref{f:alcove} has $k(i,j) = 0$ for all $(i,j)$ with $j = i + 1$, as well as for $(1,3)$ and $(2,4)$. The maximum address guaranteed by Lemma \ref{l:sommers} is obtained by filling the remaining entries with the maximum values that the conditions of Shi's Theorem allow. The address of this alcove is given in Figure \ref{f:maximumalcove}. Its values are, as expected, at least as large as those of Figure \ref{f:alcove}.
\end{example}
\begin{figure}[htb]
\centering
\begin{tikzpicture}[scale=.25]
\foreach \i in {1,2,3,4,5} {
\pgfmathsetmacro{\a}{int(\i + 1)}
\foreach \j in {\a,...,6} {
\draw (2*\i,2*\j) node{$\i \j$};
}
}
\draw (14.5,9.5) node{$k_\IDEAL$};
\draw[very thick,->] (13,7) .. controls (14,9.5) and (15,6) .. (17,7);
\draw (20, 4) node{$0$};
\draw (20, 6) node{$0$};
\draw (20, 8) node{$1$};
\draw (20, 10) node{$2$};
\draw (20, 12) node{$3$};
\draw (22, 6) node{$0$};
\draw (22, 8) node{$0$};
\draw (22, 10) node{$1$};
\draw (22, 12) node{$2$};
\draw (24, 8) node{$0$};
\draw (24, 10) node{$1$};
\draw (24, 12) node{$2$};
\draw (26, 10) node{$0$};
\draw (26, 12) node{$1$};
\draw (28, 12) node{$0$};
\end{tikzpicture}
\caption{The maximum address that is zero on the ideal $\IDEAL$ of Example \ref{ex:ideal}.}
\label{f:maximumalcove}
\end{figure}
To apply Lemma \ref{l:sommers} to $\IDEAL_\pi$, we must verify that $\IDEAL_\pi$ contains all the simple roots.
\begin{lemma}
\label{l:existcoordinate}
Let $\pi \in S_{n+1}$. Let $m_i = \mathrm{min}\{\pi(i),\pi(i+1)\}$ and $M_i = \mathrm{max}\{\pi(i),\pi(i+1)\}$. For any $j \in [n]$, there exists $i \in [n]$ such that $m_i \leq j < M_i$. Thus every $(j,j+1) \in \Delta$ is in $\IDEAL_\pi$.
\end{lemma}
\begin{proof}
Suppose otherwise, and let $k$ be such that $\pi(k) = j$. If $\pi(k-1)$ or $\pi(k+1)$ is defined and greater than $j$, then $m_i = j$ and $M_i > j$ for $i = k-1$ or $i = k$, contradicting the supposition. Thus every defined neighbor of position $k$ has value less than $j$. Similarly, if there were an index $b$ with $\pi(b) < j < \pi(b+1)$ or $\pi(b+1) < j < \pi(b)$, then $i = b$ would satisfy $m_i \leq j < M_i$. Thus, to the left of $k-1$ and to the right of $k+1$, the values must stay below $j$. Since $\pi$ is a permutation of $[n+1]$, every value other than $\pi(k) = j$ is then less than $j$, which forces $j = n + 1$. This contradicts $j \in [n]$.
\end{proof}
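The lemma is also easy to confirm exhaustively for small $n$ (a sketch; the function name is ours):

```python
from itertools import permutations

def covers_all_levels(pi):
    """Check that for every j in [n] some consecutive pair of pi
    straddles level j, i.e. min(pi(i), pi(i+1)) <= j < max(pi(i), pi(i+1))."""
    n = len(pi) - 1
    return all(any(min(pi[i], pi[i + 1]) <= j < max(pi[i], pi[i + 1])
                   for i in range(n))
               for j in range(1, n + 1))

# Exhaustive check over S_6:
assert all(covers_all_levels(pi) for pi in permutations(range(1, 7)))
```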
\begin{corollary}
Let $f = \frac{1}{2}\Chi_{[-1,1]}$ be the uniform density function on $[-1,1]$. Let $\tau,\pi \in S_{n+1}$. Suppose $\IDEAL_\pi \subseteq \IDEAL_\tau$. Then
\[
\PRB(f,\pi) \geq \PRB(f,\tau).
\]
\end{corollary}
\begin{proof}
Lemma \ref{l:existcoordinate} implies that $\IDEAL_\pi$ and $\IDEAL_\tau$ contain all the simple roots. The hypothesis $\IDEAL_\pi \subseteq \IDEAL_\tau$ implies that $k_{\IDEAL_\tau}(i,j) = 0$ whenever $k_{\IDEAL_\pi}(i,j) = 0$. Lemma \ref{l:sommers} implies that $k_{\IDEAL_\tau}(i,j) \leq k_{\IDEAL_\pi}(i,j)$ for all $(i,j) \in \Phi^+$. The result then follows from Corollary \ref{c:weakcharacter2}.
\end{proof}
\begin{theorem}
\label{t:weakordertheorem}
Let $f = \frac{1}{2}\Chi_{[-1,1]}$ be the uniform density function on $[-1,1]$. Let $\pi \in S_{n+1}$ and let $\IDEAL_\pi$ be the root ideal of $\pi$. Let the address of $w \in \A_n$ be given by $k_w(i,j) = (i,j)_{\IDEAL_\pi}$ for all $(i,j) \in \Phi^+$. Then,
\[
\PRB(f,\pi) = \frac{|[1,w]|}{2^n n!},
\]
where $[1,w]$ consists of all $v \in \A_n$ such that $v \leq w$ in the weak order on $\A_n$.
\end{theorem}
\begin{proof}
Lemma \ref{l:existcoordinate} implies that we may apply Lemma \ref{l:sommers} to $\IDEAL_\pi$. The result then follows from Lemma \ref{l:sommers} and Corollary \ref{c:weakcharacter2}.
\end{proof}
Weak order intervals of $\A_n$ satisfy condition $(1)$ of \cite[Proposition 3.5]{lam-postnikov2}, which implies $P_\pi$ is an \emph{alcoved polytope} in the sense of \cite{lam-postnikov1} and \cite{lam-postnikov2}. Thus \cite[Theorem 3.2]{lam-postnikov1} provides yet another computational approach to calculating the volume of $P_\pi$, although we do not pursue that approach in this paper.
The next proposition is somewhat surprising, in the sense that two consecutive entries of $\pi$ can completely determine $\PRB(f,\pi)$. This is not the case for the Laplace or normal density functions, and it is reasonable to suspect that a typical density function does not exhibit this property.
\begin{proposition}
\label{p:minimumprob}
Let $f = \frac{1}{2}\Chi_{[-1,1]}$ be the uniform density function on $[-1,1]$. Let $\pi \in S_{n+1}$. Then the values $1$ and $n + 1$ occur in consecutive positions (in either order) in the $1$-line notation for $\pi$ if and only if
\[
\PRB(f,\pi) = \frac{1}{2^n n!}.
\]
\end{proposition}
\begin{proof}
If $1$ and $n + 1$ are consecutive in the $1$-line notation, then $\IDEAL_\pi$ is all of $\Phi^+$. Thus $w = 1$ in Theorem \ref{t:weakordertheorem}, which implies there is only one element of $\A_n$ in the interval $[1,w]$.
Conversely, if there are no consecutive occurrences of $1$ and $n+1$, then the ideal $\IDEAL_\pi$ does not contain $(1,n+1)$. Thus $(1,n+1)$ is a sum of at least $2$ elements of $\IDEAL_\pi$, which implies $k_{\IDEAL_\pi}(1,n+1) > 0$. This implies $[1,w]$ contains more than one element.
\end{proof}
\section{Pattern probability comparisons for the normal distribution}
\label{s:normaldistribution}
\begin{figure}[htb] \label{f:edgediagram2}
\centering
\begin{tikzpicture}[scale=.5]
\foreach \x in {1,2,...,6}
\draw[dotted] (0,\x)--(9,\x);
\draw(15.5,3.5) node{$\implies \mathrm{lev}(\pi) = \begin{bmatrix} 2 & 2 & 1 & 1 & 0\\ 0 & 4 & 3 & 2 & 1\\ 0 & 0 & 3 & 2 & 1\\ 0 & 0 & 0 & 2 & 1\\0 & 0 & 0 & 0 & 2 \end{bmatrix}$};
\draw[thick,*->-*](4.5,3) node[left]{3}--(4.5,1);
\draw[thick,*->-*](5.5,1) node[left]{1}--(5.5,5);
\draw[thick,*->-*](6.5,5) node[left]{5}--(6.5,6);
\draw[thick,*->-*](7.5,6) node[left]{6}--(7.5,2);
\draw[thick,*->-*](8.5,2) node[left]{2}--(8.5,4) node[right]{4};
\draw(0.5,7) node{Levels};
\draw(6.0,8) node{Edge diagram};
\draw(6.0,7) node{for $\pi = 315624$};
{\foreach \x in {1,2,...,5}
\draw (0.5,0.5+\x) node{$\ubar{\x}$};
}
\end{tikzpicture}
\caption{The edge diagram for a permutation and its level count matrix. Figure adapted from \cite{elizalde-martinez}.}
\end{figure}
As Zare \cite{zare} suggested, when $f$ is a normal distribution, we calculate $\PRB(f,\pi)$ by finding the volume of a spherical simplex. General equations exist to compute such volumes. See \cite{aomoto} or \cite{ribando}, for example. However, they appear to be computationally intensive, as is Lemma \ref{l:general} when it is applied to the normal distribution. Nonetheless, there are a few direct comparisons we can make involving alcoves and levels of the edge diagram.
Recall that the alcoves of Section \ref{s:affineA} are simplices of volume $1/n!$, by Corollary \ref{c:polytopevolume}. For a given origin-centered ball $B$ in $\R^n$, we obtain an underestimate for $\PRB(f,\pi)$ by counting all alcoves in $D_\pi$ that are fully contained in $B$. Similarly, we obtain an overestimate for $\PRB(f,\pi)$ by counting all alcoves in $D_\pi$ that intersect $B$ or are fully contained in $B$. The address of an alcove and the radius of the ball suffice to determine whether an alcove is fully contained in $B$ or intersects $B$ or is disjoint from $B$.
\begin{proposition}
\label{p:bounds}
Let $B$ be an origin-centered ball in $\R^n$. Let $m_\pi$ be the number of alcoves fully contained in $D_\pi$ and $B$. Let $M_\pi$ be the number of alcoves fully contained in $D_\pi$ that have nonempty intersection with $B$. Then
\[
\frac{m_\pi}{n! \text{ volume}(B)} \leq \PRB(f,\pi) \leq \frac{M_\pi}{n! \text{ volume}(B)}.
\]
\end{proposition}
Note that hypercubes with integer-valued vertices could be used instead of alcoves, but one would need to determine whether the hypercube is fully contained in $D_\pi$ or intersects $D_\pi$ or is disjoint from $D_\pi$. For alcoves, this is directly determined from the alcove's address.
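As a simple sanity check on such calculations (separate from the alcove-counting bounds above), pattern probabilities for normal steps can be estimated by simulation. For $n = 2$ the exact values are $1/4$ for each monotone pattern and $1/8$ for each of the other four patterns. The sketch below (our own) uses the rank-sequence convention for ordinal patterns:

```python
import random

def pattern(walk):
    """Ordinal pattern of a walk, in the rank-sequence convention:
    entry c is the rank of W_{c-1} among W_0, ..., W_n."""
    order = sorted(range(len(walk)), key=walk.__getitem__)
    rank = [0] * len(walk)
    for r, idx in enumerate(order, start=1):
        rank[idx] = r
    return tuple(rank)

def estimate(trials, rng):
    """Monte Carlo frequencies of the patterns of a two-step walk with
    standard normal steps."""
    counts = {}
    for _ in range(trials):
        x1, x2 = rng.gauss(0, 1), rng.gauss(0, 1)
        p = pattern((0.0, x1, x1 + x2))
        counts[p] = counts.get(p, 0) + 1
    return {p: c / trials for p, c in counts.items()}

freq = estimate(200_000, random.Random(1))
```

With $2 \times 10^5$ samples the estimated frequencies agree with the exact values to within about $0.01$.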
For $\pi \in S_{n+1}$, we defined $\mathrm{lev}(\pi)$ on $[n]$ to measure how often a value lies between two consecutive values of $\pi$. We extend the definition of $\mathrm{lev}(\pi)$ to arbitrary pairs of $[n]$.
\begin{definition}
\label{d:levelmatrix}
Let $\pi \in S_{n+1}$. Denote the number of positions $k$ such that $\pi(k) \leq i,j < \pi(k+1)$ or $\pi(k+1) \leq i,j < \pi(k)$ by $\mathrm{lev}(\pi)_{i,j}$. Note that $\mathrm{lev}(\pi)_{i,i}$ is the same as $\mathrm{lev}(\pi)_i$ defined in Definition \ref{d:levelcount}.
\end{definition}
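The level count matrix of Definition \ref{d:levelmatrix} is straightforward to compute from the $1$-line notation; the sketch below (function name ours) reproduces the matrix for $\pi = 315624$ displayed in the edge diagram figure above:

```python
def level_matrix(pi):
    """lev(pi)_{i,j} for i <= j: the number of positions k with
    pi(k) <= i, j < pi(k+1) or pi(k+1) <= i, j < pi(k), returned as an
    upper triangular n x n array."""
    n = len(pi) - 1
    lev = [[0] * n for _ in range(n)]
    for k in range(n):
        lo, hi = sorted((pi[k], pi[k + 1]))
        # The step from pi(k) to pi(k+1) crosses levels lo, ..., hi - 1.
        for i in range(lo, hi):
            for j in range(i, hi):
                lev[i - 1][j - 1] += 1
    return lev
```

The diagonal of the returned matrix is the level count $\mathrm{lev}(\pi)_i$ of Definition \ref{d:levelcount}.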
The measure of the spherical simplex that determines $\PRB(f,\pi)$ for the normal distribution is completely determined by the values of $\mathrm{lev}(\pi)$, as will be seen in the proof of Theorem \ref{t:comparegaussian}. Although such measures may be difficult to calculate, we can sometimes use $\mathrm{lev}(\pi)$ and $\mathrm{lev}(\tau)$ to compare $\PRB(f,\pi)$ and $\PRB(f,\tau)$.
Recall that Lemma \ref{l:general} expresses $\PRB(f,\pi)$ as $\int_{D_\pi} g(L_\pi(\bm{x})) \D \bm{x}$, where $g$ is the joint density function defined by $g(x_1,\ldots,x_n) = f(x_1)f(x_2)\cdots f(x_n)$.
\begin{theorem}
\label{t:comparegaussian}
Let $f:\R \ra \R$ be the density function for a normal distribution with mean zero and any variance. Let $\pi, \tau \in S_{n+1}$ and suppose $\mathrm{lev}(\pi)_{i,j} \leq \mathrm{lev}(\tau)_{i,j}$ for all $(i,j) \in \Phi^+$. Then $\PRB(f,\pi) \geq \PRB(f,\tau)$.
\end{theorem}
\begin{proof}
By scale invariance (Lemma \ref{l:scale-invariance}), we may assume $f$ is given by $f(x) = Ke^{-x^2}$ for some $K \in \R$. Every factor in Lemma \ref{l:general} has the form $f(\pm(x_a + \cdots + x_b))$. We have
\begin{align*}
f(\pm(x_a + \cdots + x_b)) &= K\EXP{-(x_a + \cdots + x_b)^2}\\
&= K\EXP{-\SUM_{k=a}^b x_k^2} \; \cdot \; \EXP{-\sum 2 x_k x_{k'}},
\end{align*}
where the sum in the second exponential is over all pairs between $a$ and $b$ (inclusive). By Definition \ref{d:levelmatrix}, there are $\mathrm{lev}(\pi)_{j,j}$ factors contributing one term of the form $-x_j^2$ and $\mathrm{lev}(\pi)_{i,j}$ factors contributing one term of the form $-2x_i x_j$ to the overall product of exponentials. Thus, if $\mathrm{lev}(\pi)_{i,j} \leq \mathrm{lev}(\tau)_{i,j}$ for all $(i,j)$, the integrand in Lemma \ref{l:general} for $\pi$ is always at least as large as the integrand for $\tau$.
\end{proof}
In \cite[Lemma 2.3]{elizalde-martinez}, Elizalde and Martinez showed that if $L_\pi$ can be obtained from $L_\tau$ by a permutation of rows and columns, then $\PRB(f,\pi) = \PRB(f,\tau)$ for any choice of density function $f$, symmetric or otherwise. By including their guaranteed equalities, we obtain more comparable pairs of permutations than what is guaranteed by Theorem \ref{t:comparegaussian}.
Write $\pi \equiv \tau$ if $L_\pi$ can be obtained from $L_\tau$ by a permutation of rows and columns. Write $\pi \leq_{\mathrm{lev}} \tau$ if $\mathrm{lev}(\pi)_{i,j} \leq \mathrm{lev}(\tau)_{i,j}$ for all $i,j \in [n]$. Define $\leqslant$ as the transitive closure of the union of the relation $\equiv$ and the partial order $\leq_{\mathrm{lev}}$. We then have a broader collection of comparable pairs of permutations for the normal distribution.
\begin{corollary}
\label{c:comparegaussian}
Let $f$ be a normal distribution with mean zero. Let $\pi, \tau \in S_{n+1}$. If $\pi \leqslant \tau$, then $\PRB(f,\pi) \geq \PRB(f,\tau)$.
\end{corollary}
\section{Concluding remarks and problems}
\label{s:postscript}
In \cite{zare}, Zare asks which permutations occur most frequently in random walks whose steps have a normal or uniform distribution with mean zero. Our results provide an imprecise heuristic: permutations with large consecutive changes in their $1$-line notation are less likely to occur than permutations with small consecutive changes. In other words, for a large class of symmetric continuous step densities, we expect $\PRB(f,\pi)$ to be small when the entries $\mathrm{lev}(\pi)_{i,j}$ are large, and vice versa.
As a general problem, we would like to know what general hypotheses are needed to prove $\PRB(f,\pi) \geq \PRB(f,\tau)$ whenever $\mathrm{lev}(\pi)_{i,j} \leq \mathrm{lev}(\tau)_{i,j}$ for all $(i,j) \in \Phi^+$. However, this question is probably too open-ended. We have evidence for the following more precise conjecture.
\\ \\
\textbf{Conjecture:} Let $f:\R \ra \R$ be a density function that is log-concave on $\R_{>0}$ and symmetric on $\R$. Let $\pi,\tau \in S_{n+1}$ and suppose $\mathrm{lev}(\pi)_{i,j} \leq \mathrm{lev}(\tau)_{i,j}$ for all $(i,j) \in \Phi^+$. Then $\PRB(f,\pi) \geq \PRB(f,\tau)$.
\\ \\
From the perspective of computation, Proposition \ref{p:computeuniform} provides a direct approach to computing ordinal pattern probabilities when the steps are uniform. Theorem \ref{t:weakordertheorem} reduces the problem to finding the size of a weak order interval in $\A_n$. A lot is known about these intervals, which is enough to make the computation easier in some cases. For example, in \cite{lapointe-morse}, Lapointe and Morse show that the weak order on the quotient $\PB{k+1}$ is order-isomorphic to the $k$-Young lattice. Furthermore, some intervals of the $k$-Young lattice are intervals of the Young lattice. The size of intervals of the Young lattice is given by a classical determinant formula due to Kreweras, thus providing an alternative calculation to Proposition \ref{p:computeuniform} for some permutations. (See \cite[Section 2.3.7]{kreweras}.)
However, the affine symmetric group contains many weak order intervals isomorphic to weak order intervals of the symmetric group. By \cite[Theorem 1.4]{dittmer-pak}, computing the size of weak order intervals in $S_n$ is $\#P$-complete. Unless there is something special about the weak order intervals in Theorem \ref{t:weakordertheorem}, computing $\PRB(f,\pi)$ is hard when $f$ is uniform.
\\ \\
\textbf{Conjecture}: Computing $\PRB(f,\pi)$ for the uniform density function $f$ on $[-1,1]$ and arbitrary $\pi \in S_n$ is $\#P$-complete.
\begin{center}
ACKNOWLEDGEMENTS
\end{center}
The author would like to thank Jim and Kate Daly and Emily Pavey for their support. The author also thanks Dana Ernst, Michael Falk, and Jim Swift of NAU's Algebra, Combinatorics, Geometry, and Topology Seminar for their comments on a talk based on an earlier version of this paper.
% Source metadata (arXiv):
% https://arxiv.org/abs/1907.07172
% Title: Ordinal pattern probabilities for symmetric random walks
% Subjects: Combinatorics (math.CO)
% Abstract: An ordinal pattern for a finite sequence of real numbers is a permutation that records the relative positions in the sequence. For random walks with steps drawn uniformly from $[-1,1]$, we show an ordinal pattern occurs with probability $\frac{|[1,w]|}{2^n n!}$, where $[1,w]$ is a weak order interval in the affine Weyl group $\widetilde{A}_n$. For random walks with steps drawn from a symmetric Laplace distribution, the probability is $\frac{1}{2^n \prod_{j=1}^n \mathrm{lev}(\pi)_j}$, where $\mathrm{lev}(\pi)_j$ measures how often $j$ occurs between consecutive values in $\pi$. Permutations whose consecutive values are at most two positions apart in $\pi$ are shown to occur with the same probability for any choice of symmetric continuous step distribution. For random walks with steps from a mean zero normal distribution, ordinal pattern probabilities are determined by a matrix whose $ij$-th entry measures how often $i$ and $j$ are between consecutive values.
% Source metadata (arXiv):
% https://arxiv.org/abs/2012.02824
% Title: High order steady-state diffusion approximations
% Abstract: We derive and analyze new diffusion approximations of stationary distributions of Markov chains that are based on second- and higher-order terms in the expansion of the Markov chain generator. Our approximations achieve a higher degree of accuracy compared to diffusion approximations widely used for the past fifty years, while retaining a similar computational complexity. To support our approximations, we present a combination of theoretical and numerical results across three different models. Our approximations are derived recursively through Stein/Poisson equations, and the theoretical results are proved using Stein's method.
\section{Introduction}
\label{fse1}
We propose a new class of approximations for stationary
distributions of Markov chains. The new approximations will be
numerically demonstrated to be accurate in three models: the $M/M/n$ queue known as the Erlang-C model, the hospital model proposed in \cite{DaiShi2017}, and
the autoregressive (AR(1)) model studied in \cite{BlanGlyn2018}.
In addition to numerical results, for the Erlang-C model we provide theoretical guarantees that our approximation achieves higher-order accuracy.
Consider a one-dimensional, positive-recurrent, discrete-time Markov chain (DTMC) $X=\{X(n), n\geq 0\}$ taking values on a subset of $\mathbb{R}$. We introduce our approach in the DTMC setting, but continuous-time Markov chains (CTMC) can be treated analogously; see Section~\ref{fse3} where we treat the Erlang-C model. Call $\mathbb{E} \big( X(1) - X(0) | X(0) = x \big)$ the drift of the DTMC. We center and scale our DTMC by defining
$\tilde X=\{\tilde X(n), n\geq 0\}$, where
$\tilde X(n)=\delta(X(n)-R)$ for some constants $\delta>0$ and
$R\in \mathbb{R}$. We typically take $R$ to be the point where the drift of $X$ equals zero, which also happens to be the equilibrium of the corresponding fluid model; c.f., \cite{Stol2015} or \cite{Ying2016}. The scaling parameter $\delta$ is related to stochastic fluctuations around $R$.
Let $\tilde X(0)$ have the stationary distribution of $\tilde
X$, let $W=\tilde X(0)$, and let $W'=\tilde X(1)$. Stationarity implies that
\ben{\label{f0} \mathbb{E} f(W')-\mathbb{E} f(W)=0 } for all
$f:\mathbb{R}\to \mathbb{R}$ such that the expectations exist. Setting $\Delta = W' - W$, for sufficiently smooth $f(x)$ we can expand the left-hand side to get
\begin{align}
0 =&\ \mathbb{E} f(W') - \mathbb{E} f(W) = \mathbb{E} \bigg[\sum_{i=1}^{n} \frac{1}{i!} \Delta^i f^{(i)}(W) + \frac{1}{(n+1)!} \Delta^{n+1} f^{(n+1)}(\xi)\bigg], \quad n \geq 0, \label{eq:taylorgeneric}
\end{align}
where $\xi = \xi^{(n)}$ lies between $W$ and $W'$. Note that $\Delta$ equals our scaling term $\delta$ multiplied by the displacement of the unscaled DTMC. Informally, the DTMCs we consider are those where the moments of the (unscaled) displacement are bounded by a constant independent of $\delta$, while $\delta$ itself is close to zero. In this setting, the right-hand side of \eqref{eq:taylorgeneric} is governed by its lower-order terms when $\delta$ is small. This motivates our approximations of $W$.
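As a quick sanity check of the stationarity identity \eqref{f0}, the Python sketch below (our own toy example, not from the paper) builds a reflecting birth--death DTMC whose stationary distribution is known in closed form and verifies that $\mathbb{E} f(W') - \mathbb{E} f(W) = 0$ for $f(x) = x^2$:

```python
# Reflecting birth-death DTMC on {0, ..., N}: up with prob p, down with prob q,
# staying put at the boundaries. Detailed balance gives pi_k proportional to (p/q)^k.
N, p = 50, 0.3
q = 1.0 - p
pi = [(p / q) ** k for k in range(N + 1)]
Z = sum(pi)
pi = [w / Z for w in pi]

# Stationarity identity (f0): E[f(W')] - E[f(W)] = 0, here with f(x) = x^2.
f = lambda x: x * x
total = 0.0
for k in range(N + 1):
    up = k + 1 if k < N else k      # reflect by staying at the right boundary
    down = k - 1 if k > 0 else k    # reflect by staying at the left boundary
    total += pi[k] * (p * (f(up) - f(k)) + q * (f(down) - f(k)))
```

The sum telescopes to zero exactly; numerically it is zero up to floating-point rounding.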
Letting $\mathcal{W}$ be the state space of $\tilde{X}$, for each $x \in \mathcal{W}$ let $ b(x)= \mathbb{E}(\Delta |W=x) $
be the drift of the DTMC at state $x$.
Let $(\underline{w}, \overline{w})$ be the smallest interval containing $\mathcal{W}$, and assume $b(x)$ is extended to be defined on all of $(\underline{w}, \overline{w})$. The precise form of the extension is unimportant for the time being and we will see in our examples that this extension often has a natural form.
We approximate $W$ by a continuous random variable $Y \in (\underline{w}, \overline{w})$ with density
\ben{\label{f10}
\frac{\kappa}{v(x)} \exp\Big(\int_0^x \frac{b(y)}{v(y)} dy \Big),\quad x\in (\underline{w}, \overline{w}),
}
where $\kappa$ is the normalizing constant
and $v(x):(\underline{w}, \overline{w})\to\mathbb{R}_+$ is some function to be specified.
We note that the distribution of $Y$ is determined for a given \emph{fixed} set of system parameters of the Markov chain.
In particular, $Y$ is well defined even when no limit is studied, so the stationary distribution of the unscaled DTMC $X$ would then be approximated by $Y/\delta + R$.
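The density in \eqref{f10} is straightforward to evaluate numerically. The following Python sketch (our own illustration, with hypothetical inputs) normalizes it by the trapezoid rule; as a sanity check, the Ornstein--Uhlenbeck case $b(x) = -x$, $v(x) = 1$ recovers the standard normal density:

```python
import math

def stationary_density(b, v, lo, hi, n=2000, m=200):
    """Numerically normalize the density kappa/v(x) * exp(int_0^x b(y)/v(y) dy)
    from Eq. (f10) on the interval (lo, hi), using the trapezoid rule.
    Returns the grid and the normalized density values."""
    def phi(x):  # int_0^x b(y)/v(y) dy by the trapezoid rule (signed for x < 0)
        step = x / m
        s = 0.0
        for j in range(m):
            y0, y1 = j * step, (j + 1) * step
            s += 0.5 * step * (b(y0) / v(y0) + b(y1) / v(y1))
        return s

    h = (hi - lo) / n
    xs = [lo + i * h for i in range(n + 1)]
    vals = [math.exp(phi(x)) / v(x) for x in xs]
    mass = sum(0.5 * h * (vals[i] + vals[i + 1]) for i in range(n))
    return xs, [u / mass for u in vals]

# Ornstein-Uhlenbeck check: b(x) = -x, v(x) = 1 should give the standard
# normal density, so the value at x = 0 is 1/sqrt(2*pi) ~ 0.3989.
xs, dens = stationary_density(lambda x: -x, lambda x: 1.0, -8.0, 8.0)
d0 = dens[len(dens) // 2]  # density at x = 0
```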
To discuss how to choose $v(x)$, suppose $(\underline{w}, \overline{w})=\mathbb{R}$ and consider the diffusion process $\{Y(t), t\ge 0\}$ given by
\begin{align}
Y(t) = Y(0)+ \int_0^t b(Y(s))ds + \int_0^t \sqrt{2 v(Y(s))} dB(s), \label{eq:sde}
\end{align}
where $\{B(t), t\ge 0\}$ is the standard Brownian motion.
Under mild regularity conditions on $b(x)$ and $v(x)$, the above diffusion process is well defined and has a unique stationary distribution whose density is given by (\ref{f10}); for a proof, see Chapter 15.5 of \cite{KarlTayl1981}. Furthermore, the stationary density in (\ref{f10}) is characterized by
\begin{align}\label{eq:bar}
\mathbb{E} b(Y)f'(Y)+ \mathbb{E} v(Y)f''(Y)=0 \quad \text{ for all suitable } f:\mathbb{R}\to \mathbb{R}.
\end{align}
When one or both of the endpoints of $(\underline{w}, \overline{w})$ are finite, we would account for this by adding suitable boundary reflection terms.
In this paper we think of $Y$ as being a \emph{diffusion approximation} of $W$.
The characterization equation (\ref{eq:bar}) is well known for Markov processes; c.f., \cite{EthiKurt1986}. A related version is called the \emph{basic adjoint relationship} in the context of multidimensional reflecting Brownian motions by \cite{HarrWill1987}.
Equation (\ref{eq:bar}) is known in the Stein research community as the \emph{Stein equation}; see, for example, \cite{ChenGoldShao2011}.
Ensuring that $Y$ is a good approximation of $W$ requires a careful choice of $v(x)$. If we consider \eqref{eq:taylorgeneric} with $n = 2$ and ignore the third-order error term, then a natural choice is to use $v(x)=v_1(x)$, where $v_1(x)$ is an extension of $\frac{1}{2}\mathbb{E}(\Delta^2|W=x)$ to all of $(\underline{w}, \overline{w})$. Choosing a diffusion approximation in such a way was done in \cite{MandMassReim1998} and \cite{ WardGlyn2003}, as well as more recently in \cite{DaiShi2017}.
Despite this natural choice, most of the literature in the last fifty years did not use $v_1 $ to develop diffusion approximations. Instead, the typical choice is $v(x) = v_0$, where
\begin{align}
\label{eq:v0}
v_0= v_1(0) = \frac{1}{2}\mathbb{E}(\Delta^2|W=0);
\end{align}
i.e., $v_0$ is $v_1(x)$ evaluated at the fluid equilibrium $x = 0$. For examples, see \cite{HalfWhit1981,HarrNguy1993,Gurv2014, Ward2012}. It is usually the case that $v_0$ and $v_1(W)$ are asymptotically close, so using $v_0$ is enough to prove a limit theorem, which is the focus of most of the diffusion approximation literature. We show, however, that using $v_0$ instead of $v_1$ can lead to significant excess error.
One such case is the Erlang-C model. It was shown in \cite{BravDaiFeng2016} that for a large class of performance measures, the $v_0$ approximation error is at most $C/\sqrt{R}$, where $R$ is a parameter known as the offered load and $C>0$ is a constant. In Section~\ref{fse3} we prove this upper bound is tight. On the other hand, the $v_1$ error vanishes at a faster rate of $1/R$. Moreover, the $v_1$ error is much smaller than the $v_0$ error, even in cases when $R$ is small.
Given the performance of the $v_1$ approximation in the Erlang-C model, it is natural to wonder whether the $v_1$ error vanishes at a faster rate (compared to the $v_0$ error) for other models as well. The answer is mixed; e.g., it is not true for the model in Section~\ref{fse4}.
In this paper we provide other options for $v(x)$ beyond $v_0$ and $v_1(x)$. For $n \geq 1$, we define a $v_n$ approximation to be one that uses information from the first $n+1$ terms of the Taylor expansion in \eqref{eq:taylorgeneric}; $v_n$ approximations are not unique. We adopt the convention that $v_n$ can refer to either the function $v_n(x)$, or the $v_n$ approximation itself. As a preview, we can use third-order information from the Taylor expansion by setting
\begin{align}
\label{eq:v2}
v(x) = v_2(x) = v_2^{(\eta)}(x)=\max \bigg\{\frac{a(x)}{2} -\frac{b(x)c(x)}{3a(x)} -\frac{a(x)}{6}\Big(\frac{c(x)}{a(x)}\Big)', \eta \bigg\} \quad \text{ for } x\in (\underline{w}, \overline{w}),
\end{align}
where $a(x)$ and $c(x)$ are extensions of $ \mathbb{E}(\Delta^2|W=x)$ and $ \mathbb{E}(\Delta^3|W=x)$ to $(\underline{w}, \overline{w})$, respectively, and
$\eta>0$ is a tunable parameter selected to keep $v_2(x)$ positive.
We formally motivate and derive \eqref{eq:v2} in Section~\ref{sec:v2def}, where we also elaborate on the need for $\eta$ and how to choose it. Going beyond $v_2$, we derive $v_3$ for the hospital model of Section~\ref{fse4} and the AR(1) model of Section~\ref{fse5}. In both cases, numerical work suggests that finding an approximation that achieves either a faster convergence rate of the error to zero, or a significantly lower approximation error than $v_0$, requires us to go as far as $v_3$. For a discussion on how to determine which $v_n$ to use, see Section~\ref{sec:hospcompare}.
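To make \eqref{eq:v2} concrete, here is a minimal Python sketch (our own illustration, not the paper's code) that evaluates $v_2^{(\eta)}(x)$ from user-supplied extensions $a(x)$, $b(x)$, and $c(x)$, with the derivative $(c/a)'$ approximated by a central difference:

```python
def v2(a, b, c, x, eta=1e-6, h=1e-5):
    """Evaluate v_2(x) = max{ a/2 - b*c/(3a) - (a/6)*(c/a)'(x), eta }
    from Eq. (eq:v2); (c/a)' is approximated by a central difference."""
    ratio = lambda y: c(y) / a(y)
    d_ratio = (ratio(x + h) - ratio(x - h)) / (2.0 * h)
    val = a(x) / 2.0 - b(x) * c(x) / (3.0 * a(x)) - (a(x) / 6.0) * d_ratio
    return max(val, eta)

# When the third moment c(x) vanishes, v_2 reduces to v_1(x) = a(x)/2.
val = v2(lambda x: 2.0, lambda x: -x, lambda x: 0.0, 1.3)
```

The check in the last line is a handy special case: with $c(x) \equiv 0$ the formula collapses to $a(x)/2$.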
This paper is limited to the setting where the Markov chain is one-dimensional because the derivation of $v_n$ for $n \geq 2$ exploits the one-dimensional nature of the Poisson equation; for an example, see Section~\ref{sec:v2def}. At present we do not know how to generalize this to the multidimensional setting.
The theoretical framework underpinning our work is Stein's method, which was pioneered by \cite{Stei1972}. Specifically, we use the generator comparison framework of Stein's method, which dates back to \cite{Barb1988} and was popularized recently in queueing theory by \cite{Gurv2014}. We remark that deriving the $v_n$ approximations requires only algebra, which is handy from a practical standpoint as one can derive and implement the approximations numerically without worrying about justifying them theoretically.
In addition to deriving the $v_n$ approximations, we also use Stein's method to provide theoretical guarantees. For the Erlang-C model, Theorem~\ref{thm:md-high} establishes Cram\'er-type moderate-deviations error bounds. If $Y$ is an approximation of $W$, then moderate-deviations bounds refer to bounds on the relative error
\begin{align*}
\bigg| \frac{\mathbb{P}(Y \geq z)}{\mathbb{P}(W \geq z)} - 1 \bigg| \quad \text{ and } \quad \bigg| \frac{\mathbb{P}(Y \leq z)}{\mathbb{P}(W\leq z)} - 1 \bigg| .
\end{align*}
Compared to the Kolmogorov distance $\sup_{z \in \mathbb{R}} \big| \mathbb{P}(W \geq z) - \mathbb{P}(Y \geq z) \big|$, the relative error is a much more informative measure when the value being approximated is small,
as is the case in the approximation of small tail probabilities. For
many stochastic systems modeling service operations, such as customer call centers and hospital operations, these small probabilities represent important performance metrics; e.g., the requirement that at most $1\%$ of customers wait more than 10 minutes before entering service.
To summarize, our main contribution is to present a new family of $v_n$ approximations for Markov chains. Using a combination of theoretical and numerical results, we show that the $v_n$ approximations perform significantly better than the traditional $v_0$ approximation across three separate models. Our results suggest that $v_1, v_2, v_3, \ldots$ can, and should, be used whenever possible to achieve much greater approximation accuracy. Before moving on to the main body of the paper, we first provide a brief review of related literature.
\subsection{Literature Review}
\label{sec:literature}
\paragraph{Steady-state diffusion approximations.}
In the last fifty years, diffusion approximations have been a major
research theme in the applied probability community for approximate
steady-state analysis of many stochastic systems; c.f.,
\cite{King1961a,HalfWhit1981,HarrNguy1993,MandZelt2009}. Some of
these approximations were initially motivated by \emph{process-level
limit theorems} that establish functional central limits in certain
asymptotic parameter regions; e.g., \cite{Reim1984,Bram1998,Will1998a}. The pioneering paper of
\cite{GamaZeev2006} initiated a wave of research providing
\emph{steady-state limit theorems}, justifying steady-state
approximations on top of process-level convergence. For some examples of these, see \cite{Tezc2008, ZhanZwar2008, BudhLee2009, Kats2010, GamaStol2012, DaiDiekGao2014, Gurv2014a, YeYao2016}.
Steady-state limit theorems do \emph{not} provide a rate of
convergence or an error bound. Recently, building on earlier work by \cite{GurvHuanMand2014}, \cite{Gurv2014} developed a general
approach to proving the rate of convergence for steady-state
performance measures of many stochastic systems.
In the setting of the $M/Ph/n+M$ queue with phase-type service time distributions, \cite{BravDai2017} refined the approach in
\cite{Gurv2014}, casting it into the Stein framework that has been
extensively studied in the last fifty years. The Stein framework allows
one to obtain an error bound, not just a limit theorem, for approximate
steady-state analysis of a stochastic system with a \emph{fixed} set
of system parameters. Readers are referred to \cite{BravDaiFeng2016}
for a tutorial introduction to using Stein's method for steady-state
diffusion approximations of Erlang-A and Erlang-C models, where
error bounds were established under a variety of metrics, including the Wasserstein distance, Kolmogorov distance, and moment difference.
\paragraph{Stein's method and moderate deviations.}
Stein's method was first introduced by \cite{Stei1972}. We refer the reader to the book by \cite{ChenGoldShao2011} for an introduction to Stein's method.
Moderate deviations date back to
\cite{Cram1938}, who obtained expansions for tail probabilities for sums of independent
random variables about the normal distribution.
Stein's method for moderate deviations for general dependent random variables was first studied in \cite{ChFaSh13a}. See \cite{ChFaSh13b}, \cite{ShZhZh18}, \cite{Zh19}, \cite{FaLuSh19} for further developments.
\paragraph{Refined mean-field approximations.} First-order approximations, such as mean-field, or fluid model approximations capture the deterministic flow of the Markov chain while ignoring the stochastic effects. A recent series of papers, \cite{GastHoud2017}, \cite{GastLateMass2018}, \cite{GastBortTrib2019}, explored refined mean-field approximations for computing moments of the Markov chain stationary distribution. In those papers, the authors were able to explicitly compute correction terms to the mean-field approximation, which significantly improves the accuracy of the approximation and speeds up the rate at which the approximation error converges to zero. However, the computation of these correction terms rests on assuming that the mean-field model is globally exponentially stable and that the drift of the Markov chain is differentiable. These assumptions fail to hold even for some basic queueing models; e.g., the Erlang-C model.
\subsection{Notation and Organization of the Paper}
\label{sec:notation}
For $a, b \in \mathbb{R}$, we use $a^+, a^-, a \wedge b$, and $a \vee b$ to denote $\max(a,0)$, $\max(-a,0)$, $\min(a,b)$, and $\max(a, b)$, respectively. We adopt the convention that $\sum_{l=k_1}^{k_2}=0$ if $k_2<k_1$. In Section~\ref{sec:v2def}, we derive several versions of $v_2$ and discuss how to analyze the approximation error using Stein's method. In Sections~\ref{fse3}--\ref{fse5}, we study the performance of various $v_n$ approximations for three different Markov chains. To keep the main paper a reasonable length, some details of the proofs are left to the Appendix.
\section{Deriving the Diffusion Approximations}\label{sec:v2def}
In the previous section, we said that for $n \geq 1$, a $v_n$ approximation is one that uses information from the first $n+1$ terms of the Taylor expansion in \eqref{eq:taylorgeneric}. In this section, we justify $v_2(x)$ proposed in \eqref{eq:v2} by tapping into the third-order terms in \eqref{eq:taylorgeneric}. For examples of accessing fourth-order terms, we refer the reader to the derivations of $v_3$ for the models in Sections~\ref{fse4} and \ref{fse5}. What follows can be repeated for continuous-time Markov chains (CTMC), with the identity $\mathbb{E} G f(W)=0$ replacing $ \mathbb{E} f(W') - \mathbb{E} f(W) = 0$, where $G$ is the generator of the CTMC. As our starting point, we recall from \eqref{eq:taylorgeneric} that
\begin{align*}
0 =&\ \mathbb{E} f(W') - \mathbb{E} f(W) = \mathbb{E} \bigg[\sum_{i=1}^{n} \frac{1}{i!} \Delta^i f^{(i)}(W) + \frac{1}{(n+1)!} \Delta^{n+1} f^{(n+1)}(\xi)\bigg],
\end{align*}
where $\Delta = W' - W$, and that $b(x), a(x)$, and $c(x)$ are extensions of $\mathbb{E}(\Delta|W=x)$, $\mathbb{E}(\Delta^2|W=x)$, and $ \mathbb{E}(\Delta^3|W=x)$ to $(\underline{w}, \overline{w})$, respectively. Let $d(x)$ be an extension of $ \mathbb{E}(\Delta^4|W=x)$ to $(\underline{w}, \overline{w})$. Setting $n = 3$ in the expansion above yields
\begin{align}
\mathbb{E} b(W) f'(W)+\frac{1}{2}\mathbb{E} a(W)f''(W) + \frac{1}{6} \mathbb{E} c(W) f'''(W) = -\frac{1}{24} \mathbb{E} d(W) f^{(4)}(\xi_1) \label{eq:taylorthird}
\end{align}
where $\xi_1$ lies between $W$ and $W'$. We implicitly assume $f(x)$ is sufficiently differentiable and the expectations above exist. Since $\Delta$ is small, we treat the right-hand side as error and use the left-hand side to derive a diffusion approximation. The challenge to overcome is that the stationary density of the diffusion is characterized by \eqref{eq:bar}, which considers only the first two derivatives of a function $f(x)$, whereas the left-hand side of \eqref{eq:taylorthird} contains three derivatives. We therefore convert $f'''(W)$ into an expression involving $f''(W)$ plus some error. Consider \eqref{eq:taylorgeneric} again, but with $ n = 2$:
\begin{align}
\mathbb{E} b(W) f'(W)+\frac{1}{2}\mathbb{E} a(W)f''(W) = -\frac{1}{6} \mathbb{E} c(W) f'''(\xi_2), \label{eq:taylorsecond}
\end{align}
for some $\xi_2$ between $W$ and $W'$. Fix $f(x)$ and let $g(x) = \int_{0}^{x} \frac{c(y)}{a(y)} f''(y) dy$. Note that
\begin{align*}
g''(x)=&\ \Big( \frac{c(x)}{a(x)} f''(x)\Big)' =\Big(\frac{c(x)}{a(x)}\Big)'f''(x)+\frac{c(x)}{a(x)}f'''(x).
\end{align*}
Evaluating \eqref{eq:taylorsecond} with $g(x)$ in place of $f(x)$ there yields
\begin{align*}
\mathbb{E} \frac{b(W)c(W)}{a(W)}f''(W)+\mathbb{E} \frac{a(W)}{2}\Big(\frac{c(W)}{a(W)}\Big)'f''(W)+\frac{1}{2}\mathbb{E} c(W)f'''(W) = -\frac{1}{6} \mathbb{E} c(W) g'''(\xi_2).
\end{align*}
Rearranging terms, we have
\ben{\label{f3}
\frac{1}{6}\mathbb{E} c(W)f'''(W) = -\mathbb{E}\Big( \frac{b(W)c(W)}{3a(W)} +\frac{a(W)}{6}\Big(\frac{c(W)}{a(W)}\Big)' \Big)f''(W) -\frac{1}{18} \mathbb{E} c(W) g'''(\xi_2).
}
Substituting \eqref{f3} into \eqref{eq:taylorthird}, we obtain
\begin{align}
& \mathbb{E} b(W)f'(W)+\mathbb{E}\Big(\frac{a(W)}{2}-\frac{b(W)c(W)}{3a(W)}-\frac{a(W)}{6}\Big(\frac{c(W)}{a(W)}\Big)'\Big)f''(W) \notag \\
=&\ \frac{1}{18} \mathbb{E} c(W) g'''(\xi_2) -\frac{1}{24} \mathbb{E} d(W) f^{(4)}(\xi_1). \label{f4}
\end{align}
The left-hand side resembles the generator of a diffusion process. Define
\begin{align}
\underline{v}_2(x) = \frac{a(x)}{2}-\frac{b(x)c(x)}{3a(x)}-\frac{a(x)}{6}\Big(\frac{c(x)}{a(x)}\Big)' , \quad x \in (\underline{w}, \overline{w}), \label{eq:underlinev2}
\end{align}
and let $v_2(x) = (\underline v_2(x) \vee \eta)$ for some $\eta > 0$ to recover the $v_2(x)$ in \eqref{eq:v2}. The value of $\eta$ should be chosen close to zero, and if $\inf_{x \in (\underline{w}, \overline{w})} \underline v_2(x) > 0$, then we can pick $v_2(x) = \underline v_2(x)$.
We enforce $v(x) > 0$ because there may be issues with the integrability of the density in \eqref{f10} if $v(x)$ is allowed to be negative. For instance, in all three examples considered in this paper, $b(x) > 0$ when $x$ is to the left of the fluid equilibrium of $W$, and $b(x) < 0$ when $x$ is to the right of the fluid equilibrium; i.e.,\ the DTMC drifts back toward its equilibrium. This drift toward the equilibrium is intimately tied to the positive recurrence of the DTMC and can therefore be thought of as a reasonable assumption even if we go beyond this paper's three examples. Now, if $v(x)$ is allowed to be negative, it may be that $\kappa = \infty$ in \eqref{f10}; e.g.,\ if $v(x) < 0$ for $x > K$ for some threshold $K$. Conversely, $\inf_{x \in (\underline{w}, \overline{w})} v(x) > 0$ is sufficient to ensure that $\kappa < \infty$ in all three of our examples. Another, more intuitive, reason that $v(x) > 0$ is that a diffusion coefficient cannot be negative.
\subsection{The $v_2$ Approximation Error}\label{sec:theoryv2}
Let us discuss the error of our $v_2$ approximation. For simplicity, let us assume that $(\underline{w}, \overline{w}) = \mathbb{R}$ and that $\inf_{x \in \mathbb{R}} \underline v_2(x) >0$, i.e., $v_2(x)$ equals the untruncated version $\underline v_2(x)$. We discuss in Section~\ref{sec:hybrid} what happens when the latter assumption does not hold. Suppose $Y$ is a random variable with density as in \eqref{f10} and with $v(x)$ there equal to $v_2(x)$, i.e.,
\begin{align*}
\frac{\kappa}{v_2(x)} \exp\Big(\int_0^x \frac{b(y)}{v_2(y)} dy \Big),\quad x\in \mathbb{R}.
\end{align*}
Fix a test function $h: \mathbb{R} \to \mathbb{R}$ with $\mathbb{E} \abs{h(Y)} < \infty$, and let $f_h(x)$ be the solution to the Poisson equation
\begin{align}
b(x) f_h'(x) + v_2(x) f_h''(x) = \mathbb{E} h(Y) - h(x), \quad x \in \mathbb{R}. \label{eq:poissonde}
\end{align}
Assume that $\mathbb{E} \abs{f_h (W)} < \infty$, which is typically true in practice, and take expected values with respect to $W$ to get
\begin{align*}
\mathbb{E} h(Y) - \mathbb{E} h(W) =&\ \mathbb{E} b(W) f_h'(W) + \mathbb{E} v_2(W) f_h''(W) = \frac{1}{18} \mathbb{E} c(W) g_{h}'''(\xi_2) -\frac{1}{24} \mathbb{E} d(W) f_{h}^{(4)}(\xi_1).
\end{align*}
The last equality follows from \eqref{f4}, and $g_h(x) = \int_{0}^{x} \frac{c(y)}{a(y)} f_h''(y) dy$. We have again made an implicit assumption that $f_h(x)$ is sufficiently regular. The regularity of $f_h(x)$ is entirely determined by the regularity of $b(x)$, $v(x)$, and $h(x)$.
The right-hand side equals
\begin{align}
&\frac{1}{18} \mathbb{E} \Big[c(W) \Big(\frac{c(x)}{a(x)}\Big)''\Big|_{x = \xi_2}f_h''(\xi_2)\Big] + \frac{2}{18} \mathbb{E} \Big[ c(W) \Big(\frac{c(x)}{a(x)}\Big)'\Big|_{x = \xi_2} f_h'''(\xi_2)\Big] \notag \\
&+ \frac{1}{18} \mathbb{E} \Big[c(W) \frac{c(\xi_2)}{a(\xi_2)} f_h^{(4)}(\xi_2)\Big] -\frac{1}{24} \mathbb{E} d(W) f_h^{(4)}(\xi_1) \label{eq:v2taylorerror}
\end{align}
because
\begin{align*}
g_h'''(x) =&\ \Big( \frac{c(x)}{a(x)} f_h''(x)\Big)'' = \Big(\frac{c(x)}{a(x)}\Big)''f_h''(x) + 2\Big(\frac{c(x)}{a(x)}\Big)'f_h'''(x)+\frac{c(x)}{a(x)}f_h^{(4)}(x).
\end{align*}
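The product-rule identity above is easy to verify symbolically. A small check (our own illustration, assuming the SymPy library is available):

```python
import sympy as sp

x = sp.symbols('x')
a, c, f = sp.Function('a'), sp.Function('c'), sp.Function('f')
ratio = c(x) / a(x)

# (c/a * f'')'' should equal (c/a)'' f'' + 2 (c/a)' f''' + (c/a) f''''
lhs = sp.diff(ratio * f(x).diff(x, 2), x, 2)
rhs = (ratio.diff(x, 2) * f(x).diff(x, 2)
       + 2 * ratio.diff(x) * f(x).diff(x, 3)
       + ratio * f(x).diff(x, 4))
ok = sp.simplify(lhs - rhs) == 0
```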
Note that \eqref{eq:v2taylorerror} contains a term involving $f_h''(x)$ that is not captured by $\underline v_2(x)$. To capture that term, we can consider
\begin{align*}
\overline v_2(x) = \frac{a(x)}{2}-\frac{b(x)c(x)}{3a(x)}-\frac{a(x)}{6}\Big(\frac{c(x)}{a(x)}\Big)' - \frac{1}{18} c(x) \Big(\frac{c(x)}{a(x)}\Big)'' , \quad x \in \mathbb{R}.
\end{align*}
Truncating $\overline v_2(x)$ produces yet another $v_2$ approximation with error
\begin{align}
&\frac{1}{18} \mathbb{E} \Big[c(W) \Big( \Big(\frac{c(x)}{a(x)}\Big)''\Big|_{x = \xi_2}f_h''(\xi_2) - \Big(\frac{c(W)}{a(W)}\Big)'' f_h''(W)\Big)\Big] \notag \\
&+ \frac{2}{18} \mathbb{E} \Big[ c(W) \Big(\frac{c(x)}{a(x)}\Big)'\Big|_{x = \xi_2} f_h'''(\xi_2)\Big] + \frac{1}{18} \mathbb{E} \Big[c(W) \frac{c(\xi_2)}{a(\xi_2)} f_h^{(4)}(\xi_2)\Big] -\frac{1}{24} \mathbb{E} d(W) f_h^{(4)}(\xi_1) \label{eq:v2taylorerroralt}
\end{align}
in place of \eqref{eq:v2taylorerror}. In order to decide between $\underline v_2(x)$ and $\overline v_2(x)$, let us compare the two error terms in \eqref{eq:v2taylorerror} and \eqref{eq:v2taylorerroralt}. We stress that the following is an informal discussion meant to develop intuition. Theoretical guarantees for $v_2$ must be established on a case-by-case basis and fall outside the scope of this paper.
Consider first the error term \eqref{eq:v2taylorerror}. Recall that $a(x),c(x)$, and $d(x)$ equal $\mathbb{E}(\Delta^k|W=x)$ for $k = 2,3,4$, respectively. Now $\Delta = W' - W$ equals $\delta$ times the one-step displacement of the Markov chain. Let us assume that the displacement is bounded by a constant independent of $\delta$, in which case $\mathbb{E}(\Delta^k|W=x)$ shrinks at the rate of at least $\delta^k$ as $\delta \to 0$. In particular, $d(x)$ shrinks at least as fast as $\delta^4$. Since $a(x)$ is the extension of the strictly positive function $\mathbb{E} (\Delta^{2} | W=x)$, we assume that this extension is also strictly positive. Furthermore, we assume that $a(x)$ is of order $\delta^2$, as opposed to merely shrinking at a rate of at least $\delta^2$. Formally, we assume that $\inf\{\delta^{-2} \abs{a(x)} : \delta \in (0,1),\ x \in (\underline{w}, \overline{w})\} > 0$, which implies that, provided the derivatives exist, $c(x) \frac{c(x)}{a(x)}$, $c(x) \Big(\frac{c(x)}{a(x)}\Big)'$, and $c(x) \Big(\frac{c(x)}{a(x)}\Big)''$ all shrink at a rate of at least $\delta^4$ as $\delta \to 0$, making them comparable to $d(x)$.
Now consider the error term \eqref{eq:v2taylorerroralt}, focusing on the first line there. Provided $a(x),c(x),$ and $f_h(x)$ are sufficiently differentiable, the mean value theorem implies
\begin{align*}
&\mathbb{E} \Big[c(W) \Big( \Big(\frac{c(x)}{a(x)}\Big)''\Big|_{x = \xi_2}f_h''(\xi_2) - \Big(\frac{c(W)}{a(W)}\Big)'' f_h''(W)\Big)\Big] \\
=&\ \mathbb{E} \Big[c(W) (\xi_2 - W) \Big( \Big(\frac{c(x)}{a(x)}\Big)'' f_h''(x)\Big)'\Big|_{x = \xi_3}\Big].
\end{align*}
Under the two assumptions from before, the terms in front of the derivatives of $f_h(x)$ above shrink at a rate of at least $\delta^5$. If the rest of the terms in \eqref{eq:v2taylorerroralt} shrink at the rate of $\delta^4$, then using $\overline v_2(x)$ instead of $\underline v_2(x)$ as the $v_2$ approximation would not make the error converge to zero faster. For this reason and also because $\underline v_2(x)$ is simpler than $\overline v_2(x)$, we work with $\underline v_2(x)$ in the models we consider.
\section{Erlang-C Model}\label{fse3}
In this section we consider the Erlang-C model. We prove that the $v_1$ error converges to zero at a faster rate than the $v_0$ error. We also conduct numerical experiments where we observe that the $v_1$ error is much smaller, often by a factor of 10, than the $v_0$ error. After defining the model, we introduce the approximations in Section~\ref{sec:ecv0v1} and then present theoretical and numerical results in Sections~\ref{sec:erlangctheory} and \ref{sec:erlangcnumerical}, respectively.
The Erlang-C, or $M/M/n$, system has a single buffer served by $n$ homogeneous servers working in a first-come-first-served manner. Customers arrive according to a Poisson process with rate $\lambda$, and service times are i.i.d., exponentially distributed with mean $1/\mu$. We let $R = \lambda/\mu$ and $\rho = \frac{\lambda}{n\mu} = R/n$ be the offered load and utilization, respectively.
Let $X(t)$ be the number of customers in the system at time $t$. We assume that $\rho < 1$, implying that $X = \{X(t), t \geq 0\}$ is a positive recurrent CTMC. Set $\delta = 1/\sqrt{R}$, $\tilde X = \{ \tilde X(t) = \delta(X(t) - R),\ t \geq 0\}$, and let $W$ be the random variable having the stationary distribution of $\tilde X$. The support of $W$ is $\mathcal{W} =\{\delta(k-R): k\in \mathbb{Z}_+\}$, so we let
\begin{align}
(\underline{w}, \overline{w}) = (-\delta R, \infty) = (-\sqrt{R}, \infty). \label{eq:smallestinterval}
\end{align}
The generator of $\tilde X$ satisfies (cf. Eq. (3.6) of \cite{BravDaiFeng2016})
\ben{ \label{eq:gx}
G_{\tilde X}f(x)=\lambda (f(x+\delta)-f(x))+\mu\big[ (x/\delta+R) \wedge n \big] (f(x-\delta)-f(x)),
}
where $x = \delta (k-R)$ for some integer $k \geq 0$. Proposition 1.1 in \cite{Hend1997} states that
\ben{ \label{eq:gxbar}
\mathbb{E} G_{\tilde X} f(W) = \mathbb{E} \Big[ \lambda (f(W+\delta)-f(W))+\mu\big[ (W/\delta+ R) \wedge n \big] (f(W-\delta)-f(W)) \Big]=0
}
for all $f(x)$ such that $\mathbb{E} \abs{f(W)} < \infty$.
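Identity \eqref{eq:gxbar} is easy to check numerically. The Python sketch below (our own illustration, with arbitrarily chosen parameters) builds the exact stationary distribution of the $M/M/n$ chain and verifies that $\mathbb{E} G_{\tilde X} f(W) \approx 0$ for $f(x) = x^2$:

```python
import math

lam, mu, n = 4.0, 1.0, 9              # illustrative parameters with rho = 4/9 < 1
R = lam / mu
delta = 1.0 / math.sqrt(R)

# Exact stationary distribution of the M/M/n birth-death chain, truncated at K.
K = 400
p = [1.0]
for k in range(1, K + 1):
    p.append(p[-1] * lam / (mu * min(k, n)))
Z = sum(p)
p = [q / Z for q in p]

# E[G f(W)] for f(x) = x^2, per Eq. (eq:gxbar); the truncation tail is negligible.
f = lambda x: x * x
total = 0.0
for k in range(K):
    x = delta * (k - R)               # x/delta + R = k, so the death rate is mu*min(k, n)
    Gf = lam * (f(x + delta) - f(x)) + mu * min(k, n) * (f(x - delta) - f(x))
    total += p[k] * Gf
```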
\subsection{The $v_0$ and $v_1$ Approximations}
\label{sec:ecv0v1}
Let us perform Taylor expansion on the left-hand side of \eqref{eq:gxbar}:
\begin{align}
\mathbb{E} b(W) f'(W)+\mathbb{E} \frac{a(W)}{2}f''(W) =&\ -\frac{1}{6}\big( \delta^3 \lambda f'''(\xi_1) - \delta^3 \mu\big[ (W/\delta+ R) \wedge n \big]f'''(\xi_2)\big), \label{f6}
\end{align}
where $\xi_{1} \in (W,W+\delta)$, $\xi_2 \in (W-\delta, W)$,
\begin{align}
& b(x) = \delta \big(\lambda -\mu \big[(x/\delta+R)\wedge n\big]\big), \quad \text{ and } \label{eq:tb}\\
& a(x) = \delta^2 \big(\lambda+ \mu \big[(x/\delta+R)\wedge n\big] \big) = 2\mu - \delta b(x), \quad x\in \mathcal{W}. \label{eq:ta}
\end{align}
The second equality in \eqref{eq:ta} holds because $\delta^2 = 1/R = \mu/\lambda$. Let
\begin{align}
\beta= \delta(n-R)>0, \quad \text{ or } \quad n = R +\beta \sqrt{R}. \label{eq:staffing}
\end{align}
When $\beta$ is fixed and $R,n \to \infty$, the asymptotic regime is known as the Halfin-Whitt regime; see \cite{HalfWhit1981}. It is also
known as the \emph{quality- and efficiency-driven regime} because in
this parameter region, the system simultaneously achieves short
average waiting time (quality) and high server utilization
(efficiency); \cite{GansKoolMand2003}. Some of our results assume that $\beta$ is fixed, while others do not.
By considering the cases when $x \leq \beta$ and $x > \beta$ in \eqref{eq:tb}, we see that $b(x) = -(\mu x \wedge \mu \beta)$ for $x \in \mathcal{W}$, and we extend $b(x)$ to the entire real line via
\begin{align}
b(x) = -(\mu x \wedge \mu \beta), \quad x \in \mathbb{R}. \label{eq:bdef}
\end{align}
We also want a strictly positive extension of $a(x)$ to $\mathbb{R}$. Since $\mathcal{W} \subset [ -\sqrt{R}, \infty)$, we define
\begin{align}
a(x) = 2\mu - \delta b(-\sqrt{R} \vee x), \quad x \in \mathbb{R}, \label{eq:adef}
\end{align}
and since $b(x)$ is nonincreasing and $b(-\sqrt{R}) = \mu \sqrt{R} = \mu/\delta$, we have $a(x) \geq a(-\sqrt{R}) = \mu$. Recall from \eqref{f10} that our diffusion approximations all have density of the form
\begin{align*}
\frac{\kappa}{v(x)} \exp\Big(\int_0^x \frac{b(y)}{v(y)} dy \Big),\quad x\in \mathbb{R},
\end{align*}
for some normalizing constant $\kappa > 0$. The $v_0$ and $v_1$ approximations are obtained by setting
\begin{align*}
v(x) = v_0 = \frac{1}{2} a(0) = \mu \quad \text{ and } \quad v(x) = v_1(x) = \frac{1}{2} a(x), \quad x \in \mathbb{R}.
\end{align*}
Let $Y_0$ and $Y_1$ be the random variables corresponding to $v_0$ and $v_1$, respectively.
\begin{remark}
To better approximate $W$, we can use a diffusion process defined on $[-\sqrt{R},\infty)$ with a reflecting condition at the left boundary $x = -\sqrt{R}$. However, our theorems in Section~\ref{sec:erlangctheory} are intended for the asymptotic regime where $R \to \infty$. Since the probability of an empty system shrinks rapidly as $R$ grows, the choice between a reflected diffusion on $[-\sqrt{R}, \infty)$ and a diffusion defined on $\mathbb{R}$ is inconsequential.
\end{remark}
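As a quick numerical illustration (our own, with hypothetical parameters $\lambda = 100$, $\mu = 1$, $n = 110$, so $\beta = 1$), one can compare the exact stationary mean $\mathbb{E} W$ with $\mathbb{E} Y_0$. Since $v_0 = \mu$ is constant, the $v_0$ density is proportional to $e^{-x^2/2}$ for $x \leq \beta$ and to $e^{\beta^2/2 - \beta x}$ for $x > \beta$:

```python
import math

lam, mu, n = 100.0, 1.0, 110          # hypothetical parameters; beta = 1
R = lam / mu
delta = 1.0 / math.sqrt(R)
beta = delta * (n - R)

# Exact E[W] from the M/M/n stationary distribution (truncated far in the tail).
K = 2000
p = [1.0]
for k in range(1, K + 1):
    p.append(p[-1] * lam / (mu * min(k, n)))
Z = sum(p)
EW = sum(delta * (k - R) * q / Z for k, q in enumerate(p))

# E[Y_0] by numerical integration of the piecewise closed-form v_0 density.
def dens0(x):
    return math.exp(-x * x / 2) if x <= beta else math.exp(beta**2 / 2 - beta * x)

lo, hi, m = -10.0, 30.0, 40000
h = (hi - lo) / m
xs = [lo + i * h for i in range(m + 1)]
mass = sum(dens0(x) for x in xs) * h
EY0 = sum(x * dens0(x) for x in xs) * h / mass
gap = abs(EW - EY0)                    # the v_0 error for the mean, order 1/sqrt(R)
```

Both means are close to the Halfin--Whitt value of roughly $0.22$, and the residual gap is what Proposition~\ref{prop:lowerbound} bounds from below.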
\subsection{Theoretical Guarantees for the Approximations}\label{sec:erlangctheory}
We now present several theoretical results showing that the $v_1$ error vanishes faster than the $v_0$ error. Define the class of all Lipschitz-$1$ functions by
\begin{align*}
\text{\rm Lip(1)} = \big\{h: \mathbb{R} \to \mathbb{R} \ \big|\ \abs{h(x)-h(y)} \leq \abs{x-y} \text{ for all $x,y\in \mathbb{R}$}\big\}.
\end{align*}
It was shown in \cite{BravDaiFeng2016} that
\begin{align}
\sup_{h \in \text{\rm Lip(1)}} \big| \mathbb{E} h(W) - \mathbb{E} h(Y_0) \big| \leq \frac{205}{\sqrt{R}}, \quad \text{ if } R < n. \label{eq:oldmain}
\end{align}
The quantity on the left-hand side above is known as the Wasserstein distance and, as was shown in \cite{GibbSu2002}, convergence in the Wasserstein distance implies convergence in distribution. To add to the result of \cite{BravDaiFeng2016}, we prove the following lower bound in Section~\ref{sec:lowerbound} of the electronic companion.
\begin{proposition}
\label{prop:lowerbound}
Assume $n = R + \beta \sqrt{R}$ for some fixed $\beta > 0$.
There exists a constant $C(\beta) > 0$ depending only on $\beta$ such that
\begin{align*}
\big| \mathbb{E} W - \mathbb{E} Y_0 \big| \geq \frac{C(\beta)}{\sqrt{R}}.
\end{align*}
\end{proposition}
An immediate implication of Proposition~\ref{prop:lowerbound} is that the Wasserstein distance between $W$ and $Y_0$ is at least $C(\beta)/\sqrt{R}$. The assumption that $\beta$ is fixed can likely be removed (with additional effort), but that is not the focus of our paper. We turn to the $v_1$ approximation. Define $W_2 = \{h: \mathbb{R} \to \mathbb{R}\ |\ h(x), h'(x) \in \text{\rm Lip(1)} \}$ and for two random variables $U,V$, define the $W_2$ distance as
\begin{align*}
d_{W_2}(U,V) = \sup_{h \in W_2} \big| \mathbb{E} h(U) - \mathbb{E} h(V) \big|.
\end{align*}
Although $W_2 \subset \text{\rm Lip(1)}$, it is still rich enough to imply convergence in distribution. In particular, Lemma 3.5 of \cite{Brav2017} shows that by approximating the indicator function of a half line by Lipschitz functions with bounded second derivatives, convergence
in the $d_{W_2}$ distance implies convergence in distribution. The following result first appeared as Theorem 3.1 in \cite{Brav2017}.
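For intuition, the Wasserstein distance over $\text{\rm Lip(1)}$ test functions has a simple empirical counterpart in one dimension: for two equal-size samples, the optimal coupling pairs order statistics. The following sketch is a standard device, not taken from \cite{BravDaiFeng2016}:

```python
import numpy as np

def wasserstein1(u, v):
    """Wasserstein (Lip(1)) distance between two equal-size one-dimensional
    empirical samples: the optimal coupling pairs sorted values."""
    return np.mean(np.abs(np.sort(u) - np.sort(v)))

# A pure location shift moves the distance by exactly the shift size:
samples = np.random.default_rng(1).normal(size=10_000)
w = wasserstein1(samples, samples + 0.3)   # = 0.3 up to rounding
```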
\begin{theorem}
\label{thm:w2}
There exists a constant $C > 0$ (independent of $\lambda, n$, and $\mu$) such that for all $n \geq 1, \lambda > 0$, and $\mu > 0$ satisfying $1 \leq R < n $,
\begin{align*}
\sup_{h \in W_2} \big| \mathbb{E} h(W) - \mathbb{E} h(Y_1) \big| \le \frac{C}{R}.
\end{align*}
\end{theorem}
Note that $h(x) = x$ belongs to $W_2$, so Theorem~\ref{thm:w2} and Proposition~\ref{prop:lowerbound} tell us that the $v_1$ approximation error of $\mathbb{E} (W)$ is guaranteed to vanish faster than the $v_0$ error as $R \to \infty$.
Error bounds of the flavor of Theorem~\ref{thm:w2} were established in \cite{GurvHuanMand2014, Gurv2014, BravDai2017, BravDaiFeng2016}, all of which studied convergence rates for steady-state diffusion approximations of various models. The rate of $1/R$ is an order of magnitude better than the rates in any of the previously mentioned papers, where the authors obtained rates that would be equivalent to $1/\sqrt{R}$ in our model.
Going beyond error bounds for smooth test functions, we now present moderate-deviations bounds for our two approximations. Namely, we are interested in the relative error of approximating the cumulative distribution function (CDF) and complementary CDF (CCDF). We define the relative error of the right tail to be
\begin{align*}
\abs{\frac{\mathbb{P}(Y_i \geq z)}{\mathbb{P}(W \geq z)} - 1}, \quad i = 0,1.
\end{align*}
The relative error for the left tail is defined similarly. The first result is for $v_0$.
\begin{theorem}
\label{thm:md-std}
Assume that $n = R + \beta \sqrt{R}$ for some fixed $\beta > 0$. \blue{There exist positive constants $c_0$ and $C$ depending only on $\beta$ such that
\begin{align}
& \left|\frac{\mathbb{P}(Y_0\geq z)}{\mathbb{P}(W\geq z)}-1\right|\leq \frac{C}{\sqrt{R}}\left(1+z\right) \quad \text{ for } 0<z\leq c_0 R^{1/2}\ \text{and} \label{f12} \\
&\left|\frac{\mathbb{P}(Y_0\leq -z)}{\mathbb{P}(W\leq -z)}-1\right|\leq \frac{C}{\sqrt{R}}\left(1+z^3\right),\ \text{ for } 0<z\leq \min\{c_0 R^{1/6}, R^{1/2}\}. \label{f15}
\end{align}
}
\end{theorem}
The second result presents analogous bounds for the $v_1$ approximation.
\begin{theorem}
\label{thm:md-high}
Assume $n = R + \beta \sqrt{R}$ for some fixed $\beta > 0$.
\blue{There exist positive constants $c_1$ and $C$ depending only on $\beta$ such that
\begin{align}
& \left|\frac{\mathbb{P}(Y_1\geq z)}{\mathbb{P}(W\geq z)}-1\right|\leq \frac{C}{\sqrt{R}}\left(1+\frac{z}{\sqrt{R}}\right) \quad \text{ for } 0< z\leq c_1 R\ \text{and} \label{f13}\\
&\left|\frac{\mathbb{P}(Y_1\leq -z)}{\mathbb{P}(W\leq -z)}-1\right|\leq \frac{C}{\sqrt{R}}\left(1+z+\frac{z^4}{\sqrt{R}}\right),\ \text{ for } 0<z\leq \min\{c_1 R^{1/4}, R^{1/2}\} \label{f16}.
\end{align}
}
\end{theorem}
Inequality \eqref{f13} follows from Theorem~4.1 of \cite{Brav2017}. We prove \eqref{f16} in Section \ref{fse8};
Theorem \ref{thm:md-std} follows from a similar and simpler proof in Section~\ref{fap3}.
These are called moderate-deviations bounds because they cover the case when $z$ is ``moderately'' far from the origin, with ``moderately'' being quantified by intervals of the form $z \in [0,c_0R^{1/2}]$, $z \in [0,c_1R]$, etc. In contrast, large-deviations results focus on understanding the behavior of $\mathbb{P}(W \geq z)$ as $z \to \infty$. To compare the two theorems, suppose $z = c_0 \sqrt{R}$ and consider the upper bounds in \eqref{f12} and \eqref{f13}. The $v_0$ error is guaranteed to be bounded as $R$ grows, while the $v_1$ error shrinks at a rate of at least $1/\sqrt{R}$.
\subsection{Numerical Results}\label{sec:erlangcnumerical}
Although the Erlang-C system depends on three parameters $\lambda$, $\mu$, and $n$, the stationary distribution depends only on $\rho$ and $n$; see Appendix C in \cite{Alle1990}. Figure~\ref{fig:ecmoment} displays the relative errors of the $v_0$ and $v_1$ approximations of $\mathbb{E} (W)$ when $5 \leq n \leq 100$ and $0.5 \leq \rho \leq 0.99$. Note that the $v_1$ error is about ten times smaller.
We also compare how well $v_0$ and $v_1$ approximate the CCDF of $W$. Figure~\ref{fig:ecmd1} plots the relative error of approximating $\mathbb{P}(W/\delta +R \geq z)$ for various values of $z$ when $n = 10$ and $\rho \in [0.5,0.99]$, and shows that the $v_1$ error is again much smaller. In results not reported in the paper, we observed that the $v_1$ error remains much smaller even as we vary $n$.
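The exact quantities behind such comparisons can be computed without simulation, because the Erlang-C stationary distribution satisfies the birth-death flow balance (see \eqref{f26} in the electronic companion). A minimal sketch, normalizing $\mu = 1$ and using an ad hoc truncation level:

```python
import numpy as np

def erlang_c_mean_w(rho, n):
    """E[W] for W = delta*(X - R), X the M/M/n customer count (mu = 1),
    computed from the flow balance lam*pi_{k-1} = (k ^ n)*pi_k."""
    lam = rho * n
    R, delta = lam, 1.0 / np.sqrt(lam)
    K = n + int(np.ceil(80.0 / (1.0 - rho)))   # truncation level for the tail
    k = np.arange(K + 1)
    # log pi_k up to an additive constant, from the birth-death recursion
    logpi = np.concatenate([[0.0],
                            np.cumsum(np.log(lam) - np.log(np.minimum(k[1:], n)))])
    pi = np.exp(logpi - logpi.max())
    pi /= pi.sum()
    return delta * (k @ pi - R)
```

For $n = 1$ this reduces to the M/M/1 queue, where $\mathbb{E} X = \rho/(1-\rho)$, so for instance $\rho = 1/2$ gives $\mathbb{E} W = \delta(1 - 1/2) = \sqrt{2}/2$.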
\begin{figure}[h!]
\centering
\includegraphics[scale=0.55]{scaledmean.pdf}
\caption{The errors increase towards the bottom-right corner of each plot. This is due to the fact that $\mathbb{E} (W)$ is very close to zero in that region and not because the approximations perform poorly. \label{fig:ecmoment}}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.24]{moddev3dn10part1.pdf}
\caption{The $v_1$ approximation is much more accurate. }
\label{fig:ecmd1}
\end{figure}
\section{Hospital Model}\label{fse4}
In this section we consider the discrete-time model for hospital inpatient flow proposed by \cite{DaiShi2017}. Numerical experiments presented later suggest that both the $v_1$ and $v_2$ errors vanish at the same rate as the $v_0$ approximation error. To observe a faster convergence rate, we have to resort to the $v_3$ approximation.
Consider a discrete-time queueing model with $N$ identical servers. Let $X(n)$ be the number of customers in the system at the end
of time unit $n$. Given $X(0)$, we define
\begin{align*}
X(n) = X(n-1) + A(n) - D(n), \quad n \geq 1,
\end{align*}
where $A(n) \sim$ Poisson$(\Lambda)$ represents the number of new arrivals in the time period $[n-1,n)$. At the end of each time period, every customer in service flips a coin and, with probability $\mu \in (0,1)$, departs the system at the start of the next time period. Thus, conditioned on $X(n-1) = k$, we have $D(n) \sim $ Binomial$(k\wedge N,\mu)$. Assuming $ \Lambda < N \mu$, \cite{DaiShi2017} showed that
$X=\{X(n): n=1, 2, \ldots\}$ is a positive-recurrent DTMC.
\blue{This DTMC is similar to the Erlang-C model, but unlike the Erlang-C model where the customer count only changes by one at a time, the jump size $X(1) - X(0)$ is not bounded because $A(n)$ is unbounded. As a result, computing the stationary distribution takes a long time when $\Lambda$ is large and the utilization $\rho = \Lambda/(N\mu)$ is near one because the state-space truncation has to be large to account for potential arrivals. }
We are interested in the scaled DTMC $\tilde X = \{ \tilde X(n) = \delta (X(n) - N)\}$. To stay consistent with \cite{DaiShi2017}, we center $X(n)$ around $N$. We consider the parameter ranges studied in \cite{DaiShi2017}, which, given some constant $\beta>0$, are
\begin{align}\label{eq:hosqed}
\Lambda=\sqrt{N}-\beta, \quad \mu=\delta=1/\sqrt{N}
\end{align}
\subsection{Motivating the Need for a $v_3$ Approximation}
\label{sec:hospcompare}
This section contains an informal discussion aimed at explaining why the $v_0$, $v_1$, and $v_2$ errors vanish at the same rate of $\delta = 1/\sqrt{N}$, and why we need the $v_3$ approximation to observe a convergence rate of $\delta^2 = 1/N$.
Initialize $\tilde X(0)$ according to the stationary distribution of $\tilde X$, let $W = \tilde X(0)$, $W' = \tilde X(1)$, and set $\Delta = W' - W$. The support of $W$ is $\mathcal{W}=\{ \delta(k-N): k \in \mathbb{N} \} \subset [-\delta N, \infty)$. As we are accustomed to doing by this point, we let $b(x) = \mathbb{E}(\Delta | W = x)$. We know from (37) and (38) of \cite{DaiShi2017} that for $x \in \mathcal{W}$,
\begin{align}
b(x) = \mathbb{E}(\Delta| W= x) =&\ \delta( x^{-} - \beta ), \quad \text{ and } \label{eq:1}\\
\mathbb{E}(\Delta^2|W=x) =&\ 2\delta + \big(b^{2}(x)-\delta b(x) -\delta^2 - 2\delta^2\beta\big) + \delta^3 x^{-}. \label{eq:2}
\end{align}
For the higher moments of $\Delta$, let us use $\epsilon(x)$ to represent a generic function that may change from line to line but always satisfies the property
\begin{align}
\abs{\epsilon(x)} \leq C (1 + \abs{x})^{5} \label{eq:epsdef}
\end{align}
for some constant $C>0$ that depends only on $\beta$. We show in Section~\ref{sec:hosmecproof} that
\begin{align}
\mathbb{E}(\Delta^3|W=x) = \delta^2 \epsilon(x), \quad \mathbb{E}(\Delta^4|W=x) = \delta^2 \epsilon(x), \quad \mathbb{E}(\Delta^5 |W=x) = \delta^3 \epsilon(x), \quad x \in \mathcal{W}. \label{eq:345}
\end{align}
All of our $v_n$ approximations share the same drift $b(x)$, but the diffusion coefficients vary. As always, $v_1(x) = \frac{1}{2} \mathbb{E}(\Delta^2|x )$. Since $b(x) = 0$ at $x = -\beta$, \eqref{eq:2} implies that
\begin{align}
v_0 =&\ v_1(-\beta) = \frac{1}{2}\big( 2\delta - \delta^2 (1+2\beta) + \delta^3 \beta\big). \label{eq:hospv0}
\end{align}
The following informal discussion assumes that all functions are sufficiently differentiable and that all expectations exist. Our starting point, as always, is the Poisson equation
\begin{align}
b(x) f_h'(x) + v(x) f_h''(x) = \mathbb{E} h(Y) - h(x), \quad x \in \mathbb{R}, \label{eq:hosppoisson}
\end{align}
where $v(x)$ is a temporary placeholder and $Y$ has density given by \eqref{f10}. The Taylor expansion in \eqref{eq:taylorgeneric} tells us that
\begin{align*}
\mathbb{E} b(W) f_h'(W) + \mathbb{E} \bigg[\sum_{i=2}^{n} \frac{1}{i!} \Delta^i f_h^{(i)}(W) + \frac{1}{(n+1)!} \Delta^{n+1} f_h^{(n+1)}(\xi)\bigg] = 0,
\end{align*}
where $\xi = \xi^{(n)}$ lies between $W$ and $W'$. Subtracting this equation from \eqref{eq:hosppoisson} evaluated at $x = W$ and taking expected values, we see that for $n \geq 1$,
\begin{align}
\mathbb{E} h(W) - \mathbb{E} h(Y) =&\ -\mathbb{E} v(W) f_h''(W) + \mathbb{E} \bigg[\sum_{i=2}^{n} \frac{1}{i!} \Delta^i f_h^{(i)}(W) + \frac{1}{(n+1)!} \Delta^{n+1} f_h^{(n+1)}(\xi)\bigg]. \label{eq:hosperr}
\end{align}
Consider the $v_0$ error. When $v(x) = v_0$, equation \eqref{eq:hosperr} with $n = 2$ becomes
\begin{align}
\mathbb{E} h(W) - \mathbb{E} h(Y) =&\ \mathbb{E} \Big(\frac{1}{2}\Delta^2 - v_0 \Big) f_h''(W) + \frac{1}{6} \mathbb{E} \Delta^3 f_h'''(\xi). \label{eq:v0errorhosp}
\end{align}
Note that $f_h(x)$ depends on $v(x)$ because it solves \eqref{eq:hosppoisson}.
In Lemma~3 of \cite{DaiShi2017}, the authors proved that $\abs{f_h''(x)} \leq C/\delta$ and $\abs{f_h'''(x)} \leq C/\delta$ for some constant $C > 0$ dependent only on $\beta$. Assuming that $f_h''(x)$ and $f_h'''(x)$ indeed grow at the rate of $C/\delta$ as $\delta \to 0$, the forms of $\mathbb{E} (\Delta^2 | W = x)$ and $v_0$ in \eqref{eq:2} and \eqref{eq:hospv0} yield
\begin{align*}
\frac{1}{2}\mathbb{E} (\Delta^2 | W = x) - v_0 =&\ \frac{1}{2} \Big(b^{2}(x)-\delta b(x) + \delta^3 (x^{-}-\beta)\Big) = \frac{1}{2} \Big(b^{2}(x)-\delta b(x) + \delta^2 b(x)\Big).
\end{align*}
Since $b(x) = \delta(x^{-}-\beta)$, this quantity is of order $\delta^2$. Now \eqref{eq:345} says that $\mathbb{E} (\Delta^3| W =x)$ is also of order $\delta^2$. Therefore, we expect both $ \mathbb{E} \big(\Delta^2/2 - v_0 \big) f_h''(W)$ and $\mathbb{E} \Delta^3 f_h'''(\xi)$ to be of order $\delta$, so even if $ \mathbb{E} \big(\Delta^2/2 - v_0 \big) f_h''(W)$ were not present in \eqref{eq:v0errorhosp}, the approximation error would still be of order $\delta$ due to $\mathbb{E} \Delta^3 f_h'''(\xi)$. We believe this is why the $v_0$ and $v_1$ errors appear to vanish at the same rate despite $v_1(x)$ capturing the entire second order term of
\begin{align}
\mathbb{E} \bigg[\sum_{i=2}^{n} \frac{1}{i!} \Delta^i f_h^{(i)}(W) + \frac{1}{(n+1)!} \Delta^{n+1} f_h^{(n+1)}(\xi)\bigg]. \label{eq:hosptaylorerror}
\end{align}
Going beyond $v_0$ and $v_1$, we see that for the error to be of order $\delta^2$, the diffusion approximation must capture all the terms in \eqref{eq:hosptaylorerror} that are of order $\delta$. If we assume for the moment that the derivatives of $f_h(x)$ are all of order $1/\delta$, we see that our approximation has to capture all terms of order $\delta^2$ or larger in the functions $\{ \mathbb{E}(\Delta^i|x)\}_{i=1}^{\infty}$.
From \eqref{eq:1}, \eqref{eq:2}, and \eqref{eq:345} we see that $\mathbb{E} (\Delta^{1} | x)$ through $\mathbb{E} (\Delta^{4} | x)$ are all of order $\delta$ or $\delta^2$, while $\mathbb{E}(\Delta^{5} | x)$ is of order $\delta^3$. Thus, if we want the error to be of order $\delta^2$, our approximation must capture the terms of order $\delta$ and $\delta^2$ in $\mathbb{E} (\Delta^{1} | x)$ through $\mathbb{E} (\Delta^{4} | x)$ and can ignore terms of order $\delta^3$ like $\mathbb{E}(\Delta^{5} | x)$. We remark that $v_2(x)$ in \eqref{eq:v2} depends only on $\mathbb{E} (\Delta^{1} | x)$ through $\mathbb{E} (\Delta^{3} | x)$ and not on $\mathbb{E} (\Delta^{4} | x)$. We suspect this is why we observe the $v_2$ error to be of order $\delta$. In Section~\ref{app:hospital_proofs} we derive a $v_3$ approximation of the form
\begin{align}
\label{eq:hosv3}
v_3(x)=\max\Big\{ \delta +\frac{1}{2} \Big(\delta^{2} 1(x<0)- \delta b(x)-\delta^2-2\delta^2\beta\Big), \delta/2 \Big\}
\end{align}
and in the following section we present numerical results that suggest that the error of this approximation converges to zero at a rate of $\delta^2$.
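For reference, \eqref{eq:hosv3} is straightforward to evaluate; the sketch below also applies the $\delta/2$ floor, which binds only when $x$ is far in the left tail where $b(x)$ is large:

```python
import numpy as np

def v3_hospital(x, N, beta):
    """v_3 from (eq:hosv3), with b(x) = delta*(x^- - beta) and floor delta/2."""
    delta = 1.0 / np.sqrt(N)
    b = delta * (np.maximum(-x, 0.0) - beta)
    inner = delta + 0.5 * (delta**2 * (x < 0)
                           - delta * b - delta**2 - 2 * delta**2 * beta)
    return np.maximum(inner, delta / 2.0)
```

For example, with $N = 16$ and $\beta = 1$ (so $\delta = 1/4$), both $x = 5$ and $x = -1$ give $v_3(x) = 0.1875$, above the floor $\delta/2 = 0.125$.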
\subsection{Numerical Results} \label{sec:hospital_numeric}
In Figure~\ref{fig:hospmoments} we compare the $v_n$ approximations for $\mathbb{E} (W)$ when $\beta = 1$ and $N \in \{4,16,64,256\}$. The values of $\mathbb{E} (W)$ are estimated using a simulation; the width of the $95\%$ confidence intervals (CIs) is on the order of $10^{-4}$. Though we do not report them, the $v_0, v_1$, and $v_2$ approximation errors appear to decay at the rate of $1/\sqrt{N}$, but the $v_3$ error in the table appears to decay at a rate of $1/N$. When it comes to approximating the CCDF, $v_3$ also outperforms the other approximations; see Figure~\ref{fig:hosp64} for an example when $N = 64$ and $\beta = 1$. Our findings were consistent for other values of $\beta$ and $N$.
\begin{figure}[H]
\centering
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=.8\linewidth]{moment_comparison_beta1.pdf}
\end{subfigure}%
\begin{subfigure}[t]{0.45\textwidth}
\centering
\setlength\tabcolsep{3pt}
\renewcommand{\arraystretch}{0.5}
\vspace{-3.5cm}
\begin{tabular}{rc | c | c }
$N$ & $\beta$ & $\mathbb{E} W$ &$95\% \text{CI for } \big| \mathbb{E} (W) - \mathbb{E} (Y_3) \big|$ \\
\hline
4 & 1 & -0.933 & $[0.0007, 0.0008] $ \\
16 & 1 & -0.865 & $[0.0016, 0.0017]$ \\
64 & 1 & -0.823 &$[0.0004, 0.0005]$ \\
256 & 1 & -0.801 & $[0,0.0002]$ \\~\\
\end{tabular}
\end{subfigure}
\caption{$\beta = 1$. $Y_n$ corresponds to the $v_n$ approximation. }
\label{fig:hospmoments}
\end{figure}
\vspace{-1cm}
\begin{figure}
\centering
\includegraphics[width=3.5in]{N64beta1.pdf}
\caption{$N = 64$ and $\beta = 1$. $\mathbb{P}(W \geq x) \approx 10^{-6}$ for $x$ at the right endpoint of the $x$-axis. }
\label{fig:hosp64}
\end{figure}
\section{AR(1) Model}\label{fse5}
In this section we consider the first-order autoregressive model with random coefficient and general error distribution, which we refer to as the AR$(1)$ model. \cite{BlanGlyn2018} studied this model and used an Edgeworth expansion to approximate its stationary distribution. Following the notation of \cite{BlanGlyn2018}, we compare the performance of their expansion to that of our diffusion approximations.
We encounter the issue that for large values of $x$, the untruncated $v_3(x)$ becomes sufficiently negative that our usual remedy of truncating it from below performs poorly when approximating the tail of the distribution. We resolve this via a hybrid approximation that uses both $v_3(x)$ and $v_2(x)$ to construct a diffusion coefficient $\hat v_3(x)$ that combines the extra accuracy of $v_3(x)$ with the positivity of $v_2(x)$.
To introduce the model, let $\{X_n, n\geq 1\}$, $\{Z_n, n\geq 1\}$ be two independent sequences of i.i.d.\ random variables. We assume that both $X_1$ and $Z_1$ are exponentially distributed with unit mean so that we can compare our approximations to those of \cite{BlanGlyn2018}. Given $D_{0} \in \mathbb{R}$ and $\alpha>0$, consider the DTMC $D=\{D_n, n\geq 0\}$ defined as
\begin{align}
D_{n+1} = e^{-\alpha Z_{n+1}} D_n + X_{n+1} \label{eq:defar1}
\end{align}
and let $D_\infty > 0$ denote the random variable having the stationary distribution of $D$. Using \eqref{eq:defar1} we can see that $D_{\infty}$ is equal in distribution to $\sum_{k=0}^\infty X_k e^{-\alpha\sum_{j=0}^{k-1}Z_j}$, which is the random variable studied in Section 4 of \cite{BlanGlyn2018}.
We consider $\tilde D = \{ \tilde D_n = \delta (D_n - R),\ n \geq 0\}$, where $\delta = \sqrt{\alpha}$ and, to be consistent with \cite{BlanGlyn2018}, we choose $R = 1/\alpha$. The asymptotic regime we consider is $\alpha \to 0$, so going forward we assume that $\alpha \in (0,1)$.
It follows from \eqref{eq:defar1} that
\[
\tilde D_{n+1} = e^{-\alpha Z_{n+1}} \tilde D_n + \delta\big(X_{n+1} + R(e^{-\alpha Z_{n+1}} - 1)\big).
\]
Let $W = \delta(D_{\infty}-R)$ and $W' = e^{-\alpha Z}W + \delta\big(X + R(e^{-\alpha Z} - 1)\big)$, where $(X, Z)$ is an independent copy of $(X_1, Z_1)$, which is also independent of $W$. Since $D_{\infty} > 0$, the support of $W$ is $\mathcal{W} = (-1/\sqrt{\alpha}, \infty)$, which grows as $\alpha \to 0$.
Stationarity implies that $\mathbb{E} f(W')-\mathbb{E} f(W)=0$ provided $\mathbb{E} \abs{f(W)} < \infty$.
Note that the one-step jump size
\begin{align*}
\Delta = W' - W = W\big(e^{-\alpha Z}-1\big) + \delta\big(X + R(e^{-\alpha Z} - 1)\big) = \delta \big(D_{\infty}(e^{-\alpha Z} - 1) + X\big)
\end{align*}
does not depend on the choice of $R$. To present our diffusion approximations, we need expressions for $\mathbb{E}(\Delta^{k} | W = x)$. The following lemma is proved in Section~\ref{app:ar1proof}.
\begin{lemma}\label{lem:ar1}
Recall that $\delta = \sqrt{\alpha}$. For any $k \geq 1$,
\begin{align*}
\mathbb{E}(\Delta^{k} | D_{\infty} = d) = \delta^{k} k! \bigg(1 + \sum_{i=1}^{k} (-1)^{i}d^{i} \prod_{j=1}^{i} \frac{ \alpha }{1 + j \alpha} \bigg), \quad d > 0.
\end{align*}
\end{lemma}
The relationship between $D_{\infty}$ and $W$ implies that
\begin{align*}
\mathbb{E}(\Delta^{k} | W = x) = \mathbb{E}(\Delta^{k} | D_{\infty} = x/\delta + R) =&\ \delta^{k} k! \bigg(1 + \sum_{i=1}^{k} (-1)^{i}\Big( x\sqrt{\alpha} + 1 \Big)^{i} \prod_{j=1}^{i} \frac{1}{1 + j \alpha} \bigg),
\end{align*}
for $x \in \mathcal{W}$, where we used the facts that $\delta = \sqrt{\alpha}$ and $R = 1/\alpha$ in the second equality. Extending $\mathbb{E}(\Delta^{k} | W = x)$ to all $x\in \mathbb{R}$ in the obvious way, we now state the $v_0$, $v_1$, and $v_2$ approximations, whose forms are all standard. Namely, $v_{2}(x)$ follows from \eqref{eq:v2}, $v_1(x) = \mathbb{E}(\Delta^{2} | W = x)/2$, and $v_0 = \frac{1}{2}\mathbb{E}(\Delta^{2} | W = x^{\ast})$, where $x^{\ast} = \sqrt{\alpha}$ (corresponding to $D_{\infty} = 1+1/\alpha$) solves $\mathbb{E}(\Delta | W = x) = 0$. To present $v_3(x)$, let us note that $\mathbb{E}(\Delta^{k} | W = x)$ takes the form $\delta^k p_{k}(x)$ for some degree-$k$ polynomial $p_{k}(x)$; we omit the dependence on $\alpha$ to ease notation. Given a truncation level $\eta > 0$, we let $v_3(x) = (\underline{v}_3(x) \vee \eta)$, where
\begin{align*}
\underline{v}_3(x) =&\ \delta^2 \Big(\frac{p_{2}(x)}{2}-\frac{p_{1}(x)\bar p_3(x)}{ \underline{p}_2(x)}-\delta \underline{p}_2(x)\Big(\frac{\bar p_3(x)}{\underline{p}_2(x)}\Big)'\Big), \\
\bar p_3(x) =&\ \frac{1}{6} \Big( p_3(x) - \frac{p_1(x)p_4(x)}{ 2 p_2(x)} - \frac{1}{4} \delta p_2(x)\Big(\frac{p_4(x)}{p_2(x)}\Big)' \Big),\\
\underline p_2(x) =&\ \Big(\frac{p_2(x)}{2}-\frac{p_1(x)p_3(x)}{3p_2(x)}-\frac{p_2(x)}{6}\Big(\frac{p_3(x)}{p_2(x)}\Big)'\Big).
\end{align*}
We derive $v_3(x)$ by successively applying the ``trick'' we used in Section~\ref{sec:v2def} to derive the $v_2(x)$ approximation in order to gain access to higher order terms in the Taylor expansion. The details are left to Section~\ref{app:ar1proof} of the electronic companion. Before presenting our numerical results, we discuss how to modify the $v_3$ approximation to overcome the issue that $\underline{v}_3(x)$ becomes negative for large values of $x$.
\subsection{Hybrid Approximation} \label{sec:hybrid}
Figure~\ref{fig:v3plots} displays $\underline{v}_3(x)$ for several values of $\alpha$. When $\alpha = 0.001$ or $0.01$, the plots of $\underline{v}_3(x)$ are very close to zero but remain nonnegative. However, $\underline{v}_3(x)$ is negative when $\alpha = 0.1, 0.5$, or $0.9$. The behavior of $\underline{v}_3(x)$ in the left tail is not as important because the left boundary of the support of $W = \sqrt{\alpha} (D - 1/\alpha)$ is $-1/\sqrt{\alpha}$; we therefore ignore the negativity in the left part of $\underline v_3(x)$ for $x < 0$. The farther $\underline{v}_3(x)$ drops below zero, the worse we expect the truncated $\underline{v}_3(x)$ to perform. For example, the plot with $\alpha = 0.9$ in Figure~\ref{fig:arbig} of Section~\ref{sec:ar1numerical} shows that while $v_3$ performs well in regions where $\underline{v}_3(x)>0$, it does not perform as well when estimating $\mathbb{P}(W > x)$ for large $x$.
To improve upon the $v_3$ approximation, we propose the hybrid approximation $\hat v_3(x) = \underline{v}_3(x) 1(x \leq K) + \underline{v}_2(x) 1(x > K)$. The threshold $K$ is numerically chosen to equal the right-most point of intersection of $\underline{v}_2(x)$ and $\underline{v}_3(x)$. The idea is for $\hat v_3(x)$ to enjoy the increased accuracy of $\underline{v}_3(x)$ in the center with the performance of $\underline{v}_2(x)$ far in the tail.
We expect $\hat v_3(x)$ to outperform a truncated $\underline{v}_3(x)$ when $\underline{v}_3(x)$ drops far below zero; e.g., when $\alpha = 0.9$. If $\underline{v}_3(x)$ is nonnegative, we expect little benefit from $\hat v_3(x)$; e.g., when $\alpha = 0.001$. Our expectations are consistent with our numerical findings in Section~\ref{sec:ar1numerical}.
\begin{figure}
\includegraphics[width=2.5in]{v3plots.pdf}
\centering
\caption{ When $\alpha = 0.001$ and $0.01$, $\underline{v}_3(x)$ is nonnegative at all points plotted. \label{fig:v3plots}}
\end{figure}
Lastly, we remark on what can be done in the case when both $\underline{v}_2(x)$ and $\underline{v}_3(x)$ are negative in the same region: instead of falling back on $\underline{v}_2(x)$, we can combine $\underline{v}_3(x)$ with $v_1(x)$, which is always positive because $v_1(x) = \frac{1}{2} \mathbb{E} (\Delta^{2} | W=x) > 0 $ for all $x \in \mathcal{W}$.
\subsection{Numerical Results} \label{sec:ar1numerical}
It is well known that Edgeworth expansions, obtained for the probability distribution at a particular point, can suffer from two issues: (1) they may not be a proper probability distribution function, and (2) they may not be sufficiently accurate in the tails.
Our diffusions approximate the entire distribution.
We compare the quality of the $v_n$ and $\hat v_3$ approximations to the Edgeworth expansion of \cite{BlanGlyn2018}. Figure~\ref{fig:arbig} displays the relative error of approximating the CCDF of $W$ for different values of $\alpha$ and contains two plots: one where $\alpha$ is close to zero and one where $\alpha$ is far from zero. In the latter plot, the hybrid approximation is the best performer because $\alpha = 0.9$ and $\underline{v}_3(x)$ is negative in Figure~\ref{fig:v3plots}, whereas in the former plot $v_3$ is the best performer because $\alpha = 0.001$ and $\underline{v}_3(x)$ is nonnegative in Figure~\ref{fig:v3plots}. In addition to estimating the CCDF of $W$, Table~\ref{tab:ar1log} compares the performance of our approximations when estimating the expectation of a smooth test function like $\mathbb{E} \log(W + \delta R) = \mathbb{E} \log(\alpha D_{\infty})$.
\begin{table}
\begin{center}
\renewcommand{\arraystretch}{0.5}
\begin{tabular}{|c|c|c|c|c|c| }
\hline
& $\alpha = 0.64$ & $\alpha = 0.32$ &$\alpha = 0.16$ & $\alpha = 0.08$& $\alpha = 0.04$\\
\hline
$\abs{\mathbb{E} f(Y_0) - \mathbb{E} f(W)}$ & 0.095 & 0.039 & 0.011 & 0.002 & $3.6 \times 10^{-4}$ \\ \hline
$\abs{\mathbb{E} f(Y_1) - \mathbb{E} f(W)}$ & 0.104 & 0.037 & 0.008 & $9.4 \times 10^{-4}$ & $1.0\times 10^{-4}$ \\ \hline
$\abs{\mathbb{E} f(Y_2) - \mathbb{E} f(W)}$ & 0.034 & 0.011 & 0.003 & $4.7 \times 10^{-4}$ & $7.4 \times 10^{-5}$ \\ \hline
$\abs{\mathbb{E} f(Y_3) - \mathbb{E} f(W)}$ & 0.0201 & 0.005 & $7.9 \times 10^{-4}$ & $8.9 \times 10^{-5}$ &$7.8\times 10^{-6}$ \\ \hline
$\abs{\mathbb{E} f(\hat{Y}_3) - \mathbb{E} f(W)}$ & 0.0194 & 0.005 & $7.8 \times 10^{-4}$ & $8.9 \times 10^{-5}$ & $7.8 \times 10^{-6}$ \\ \hline
$\abs{\mathbb{E} f( {Y}_e) - \mathbb{E} f(W)}$ & 0.148 & 0.053 & 0.009 & $7.3 \times 10^{-4}$ & $6.3 \times 10^{-4}$ \\ \hline
$ \mathbb{E} f(W) $ & 0.510 & 0.721 & 0.994 & 1.302 & 1.629 \\ \hline
\end{tabular}
\end{center}
\caption{ $f(W) = \log(W + \delta R)$. The random variable $Y_n$ corresponds to the $v_n$-approximation, $\hat Y_3$ corresponds to the $\hat v_3$-approximation, and $Y_e$ corresponds to the Edgeworth expansion estimate. \label{tab:ar1log}}
\end{table}
\begin{figure}
\centering
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=.8\linewidth]{armoddevalpha0.001.pdf}
\vspace{-.3cm}
\caption{\footnotesize{$\alpha = 0.001$}}
\end{subfigure}%
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=.8\linewidth]{armoddevalpha0.9.pdf}
\vspace{-.3cm}
\caption{\footnotesize{$\alpha = 0.9$}}
\end{subfigure}%
\caption{The plots exclude $v_0$ and $v_1$, the worst performing approximations.}
\label{fig:arbig}
\end{figure}
\section{Conclusion}
We have outlined a general procedure to derive $v_n$ approximations for one-dimensional Markov chains. Although the expressions for $v_n(x)$ get more complicated as $n$ increases, the diffusion approximations remain computationally tractable. A natural question is how to extend this work to the multi-dimensional setting.
Another direction worth exploring relates to establishing theoretical guarantees for the approximations. The only results we have are for the $v_1$ error in Section~\ref{fse3}, a key ingredient of which is bounds on the derivatives of the solution to the Poisson equation, also known as Stein factor bounds; cf.\ Lemma~\ref{lem:higherders} of the electronic companion. Since the Poisson equation depends on the diffusion coefficient, we have to reestablish Stein factor bounds for each new $v_n(x)$; the difficulty of this grows with the complexity of the expression for $v_n(x)$. The prelimit generator approach, recently proposed by \cite{Brav2022}, may offer a simpler avenue for theoretical guarantees because it uses Stein factor bounds for the Markov chain instead of the diffusion.
\ECSwitch
\ECHead{Accompanying Proofs}
This e-companion contains the proofs of certain theoretical results in the paper. It is divided into three main sections. The first section is about the Erlang-C model, and contains the proofs of Proposition~\ref{prop:lowerbound}, Theorem~\ref{thm:md-high}, and Theorem~\ref{thm:md-std}. The second and third sections derive the $v_3$ approximation for the hospital model and AR(1) model, respectively.
\section{Companion for the Erlang-C Model}
\blue{To prepare for the arguments to come, let us recall the notation related to the Erlang-C model. The Erlang-C model is defined by the customer arrival rate $\lambda >0$, the service rate $\mu > 0$, and the number of servers $n > 0$. Additional important quantities include
\begin{align*}
R=\frac{\lambda}{\mu} <n,\ \beta=\frac{n-R}{\sqrt{R}}>0, \text{ and } \delta=\frac{1}{\sqrt{R}}.
\end{align*}
We study $W$, which has the stationary distribution of the CTMC $\{\tilde X(t) = \delta (X(t) - R)\}$, where $X(t)$ is the number of customers in the system at time $t \geq 0$. Equation \eqref{eq:gxbar} states that
\ben{ \label{eq:gxbarf}
\mathbb{E} G_{\tilde X} f(W) = \mathbb{E} \Big[ \lambda (f(W+\delta)-f(W))+\mu\big[ (W/\delta+ R) \wedge n \big] (f(W-\delta)-f(W)) \Big]=0
}
for all $f(x)$ satisfying $\mathbb{E} \abs{f(W)} < \infty$. We also note that the support of $W$ is
\begin{align}
\mathcal{W} = \{-\sqrt{R}, -\sqrt{R}+\delta, -\sqrt{R}+2\delta,\dots\}. \label{f29}
\end{align}
To define $v_0(x)$ and $v_1(x)$, we recall from \eqref{eq:tb} and \eqref{eq:ta} that for $x \in \mathcal{W}$,
\begin{align}
b(x) =&\ \delta \big(\lambda -\mu \big[(x/\delta+R)\wedge n\big]\big) \quad \text{ and } \quad a(x)= \delta^2 \big(\lambda+ \mu \blue{[(x/\delta+R)\wedge n]} \big), \label{eq:tabeff}
\end{align}
and from \eqref{eq:bdef} and \eqref{eq:adef} that the extensions of these to $\mathbb{R}$ are
\begin{align}
b(x) =&\ -(\mu x \wedge \mu \beta) \quad \text{ and } \quad a(x)= 2\mu - \delta b( -\sqrt{R}\vee x), \quad x \in \mathbb{R}. \label{eq:adeff}
\end{align}
We define $v_1(x) = \frac{1}{2} a(x)$ and $v_0(x) = v_0 = v_1(0) = \mu$, and for $n \in \{0,1\}$ we define the $v_n$ approximation to be the random variable $Y_n$ with density
\begin{equation}
\label{eq:stddenf}
\frac{\kappa}{v_n(x)} \exp\Big({\int_0^x \frac{b(y)}{v_n(y)}dy}\Big), \quad x \in \mathbb{R},
\end{equation}
where $\kappa > 0$ is a normalization constant that depends on $n$. Lastly, assuming that $-z$ belongs to $ \mathcal{W}$ and setting $f(x)=1(x\geq -z)$ in \eqref{eq:gxbarf}, we get
\ben{\label{f26}
\lambda \P(W=-z-\delta)=\mu[(-z/\delta+R)\wedge n] \P(W=-z),
}
which are the flow-balance equations for the CTMC.
}
\subsection{Proving Proposition~\ref{prop:lowerbound}}
\label{sec:lowerbound}
We repeat the statement of Proposition~\ref{prop:lowerbound} for convenience.
\begin{proposition}
\label{prop:lowerboundec}
Assume that $n = R + \beta \sqrt{R}$ for some fixed $\beta > 0$.
There exists a constant $C(\beta) > 0$ depending only on $\beta$ such that
\begin{align*}
\big| \mathbb{E} W - \mathbb{E} Y_0 \big| \geq \frac{C(\beta)}{\sqrt{R}}.
\end{align*}
\end{proposition}
We prove the proposition with the help of four auxiliary lemmas. The lemmas are proved at the end of this section after we prove Proposition~\ref{prop:lowerboundec}. We use $C = C(\beta)>0$ to denote a constant that may change from line to line, but does not depend on anything other than $\beta$.
\begin{lemma} \label{lem:momentequiv}
For any $\beta > 0$,
\begin{align}
\beta \mathbb{P}(W \geq \beta) = -\mathbb{E} \big( W 1(W < \beta)\big) \quad \text{ and } \quad \beta \mathbb{P}(Y_0 \geq \beta) = -\mathbb{E} \big( Y_0 1(Y_0 < \beta)\big). \label{eq:bizero}
\end{align}
Consequently,
\begin{align*}
\mathbb{E} W = \mathbb{E} (W-\beta)^{+} \quad \text{ and } \quad \mathbb{E} Y_0 = \mathbb{E} (Y_0-\beta)^{+}.
\end{align*}
\end{lemma}
Lemma~\ref{lem:momentequiv} implies that
\begin{align*}
\abs{ \mathbb{E} W - \mathbb{E} Y_0} = \abs{ \mathbb{E} (W-\beta)^{+} - \mathbb{E} (Y_0-\beta)^{+}}.
\end{align*}
The next lemma rewrites the right-hand side above using the Poisson equation so that we can bound it from below.
\begin{lemma}
\label{lem:expliciterror}
Fix $h \in \text{\rm Lip(1)}$ and let $f_h(x)$ be the solution to the Poisson equation
\begin{align}
b(x) f_h'(x) + v_0 f_h''(x) = \mathbb{E} h(Y_0) - h(x), \quad x \in \mathbb{R}. \label{eq:poissonv0}
\end{align}
Then $f_h'''(x-) = \lim_{y \uparrow x} f_h'''(y)$ is defined for all $x \in \mathbb{R}$ and
\begin{align*}
\mathbb{E} h(W) - \mathbb{E} h(Y_0) = - \frac{1}{2} \delta \mathbb{E} b(W) f_h''(W)+ \mathbb{E} \varepsilon(W),
\end{align*}
where
\begin{align*}
\varepsilon(W) =&\ \frac{1}{6}\delta^2 b(W) f_h'''(W-)+ \lambda(\varepsilon_{+}(W) + \varepsilon_{-}(W)) - \frac{1}{\delta} b(W) \varepsilon_{-}(W), \\
\varepsilon_{+}(W) =&\ \frac{1}{2}\int_{W}^{W+\delta} (W+\delta - y)^{2} (f_h'''(y) - f_h'''(W-)) dy, \\
\varepsilon_{-}(W) =&\ - \frac{1}{2}\int_{W-\delta}^{W} (y- (W-\delta))^{2} (f_h'''(y) - f_h'''(W-)) dy.
\end{align*}
\end{lemma}
Our plan is to show that for any $h \in \text{\rm Lip(1)}$, the term $\mathbb{E} \varepsilon(W)$ vanishes at a rate of at least $\delta^2$. We then fix $h(x) = (x-\beta)^{+}$ and show that $\abs{\mathbb{E} b(W) f_h''(W)}$ can be bounded away from zero by a constant independent of $\delta$, which implies Proposition~\ref{prop:lowerboundec}. The following two lemmas are needed for this. The first one is for the upper bound on $\mathbb{E} \varepsilon(W)$, and the second is for the lower bound on $\abs{\mathbb{E} b(W) f_h''(W)}$.
\begin{lemma}
\label{lem:higherders}
Assume that $n = R + \beta \sqrt{R}$ for some fixed $\beta > 0$.
There exists $C = C(\beta) > 0$ depending only on $\beta$ such that for any $h \in \text{\rm Lip(1)}$,
\begin{align*}
\abs{ f_h'''(x-)} \leq \frac{C}{\mu }, \quad x \in \mathbb{R}, \quad \text{ and } \quad \mathbb{E} \abs{b(W)} \leq \mu \mathbb{E} \abs{W} \leq \mu C.
\end{align*}
Additionally, if $h(x) = (x-\beta)^{+}$, then for all $x \neq \beta$, $f_h^{(4)}(x)$ exists and $\abs{f_h^{(4)}(x)} \leq \frac{C}{\mu }(1 + \abs{x})$.
\end{lemma}
\begin{lemma}
\label{lem:secondder}
If $h(x) = (x-\beta)^{+}$, then $f_h''(x) = \frac{1}{\mu \beta}$ for $x \geq \beta$ and
\begin{align*}
f_h''(x) =\frac{1}{\mu \beta} \frac{1 + x e^{x^2/2} \int_{-\infty}^{x} e^{-y^2/2} dy }{1 + \beta e^{\beta^2/2} \int_{-\infty}^{\beta} e^{-y^2/2} dy } , \quad x \leq \beta.
\end{align*}
\end{lemma}
Before proving Proposition~\ref{prop:lowerboundec}, let us remark that we restrict ourselves to $h(x) = (x-\beta)^{+}$ to keep the proof simple. Our arguments can likely be extended to work for other $h(x)$ at the expense of added complexity.
\proof{Proof of Proposition~\ref{prop:lowerboundec}}
Lemma~\ref{lem:expliciterror} implies that
\begin{align*}
\mathbb{E} h(W) - \mathbb{E} h(Y_0) = - \frac{1}{2} \delta \mathbb{E} b(W) f_h''(W)+ \mathbb{E} \varepsilon(W).
\end{align*}
In the first part of the proof, we show that $\mathbb{E} \abs{\varepsilon(W)} \leq C \delta^2$. For convenience, we recall that
\begin{align*}
\varepsilon(W) =&\ \frac{1}{6}\delta^2 b(W) f_h'''(W-)+ \lambda(\varepsilon_{+}(W) + \varepsilon_{-}(W)) - \frac{1}{\delta} b(W) \varepsilon_{-}(W), \\
\varepsilon_{+}(W) =&\ \frac{1}{2}\int_{W}^{W+\delta} (W+\delta - y)^{2} (f_h'''(y) - f_h'''(W-)) dy, \\
\varepsilon_{-}(W) =&\ - \frac{1}{2}\int_{W-\delta}^{W} (y- (W-\delta))^{2} (f_h'''(y) - f_h'''(W-)) dy.
\end{align*}
Lemma~\ref{lem:higherders} immediately implies that $\mathbb{E} \abs{b(W) f_h'''(W-)} \leq C$. Next, we show that $\mathbb{E} \abs{\varepsilon_{+}(W)} \leq \frac{C}{\mu} \delta^4 $. By considering the cases when $W = \beta$ and $W \neq \beta$ and using Lemma~\ref{lem:higherders}, we see that
\begin{align*}
\mathbb{E} \abs{\varepsilon_{+}(W)} \leq&\ \frac{1}{2}\mathbb{P}(W = \beta)\bigg|\int_{\beta}^{\beta+\delta} (\beta+\delta - y)^{2} (\abs{f_h'''(y)} + \abs{f_h'''(\beta-)}) dy\bigg| \\
&+ \frac{1}{2}\mathbb{E} \Bigg[1(W \neq \beta)\bigg|\int_{W}^{W+\delta} (W+\delta - y)^{2}\int_{W}^{y} \abs{f_h^{(4)}(u)} du dy \bigg|\Bigg]\\
\leq&\ \frac{C}{\mu}\mathbb{P}(W = \beta)\bigg|\int_{\beta}^{\beta+\delta} (\beta+\delta - y)^{2} dy\bigg|\\
&+\frac{C}{\mu}\mathbb{E} \Bigg[1(W \neq \beta) \bigg|\int_{W}^{W+\delta} \delta (1+\abs{W}+\delta) (W+\delta - y)^{2} dy \bigg|\Bigg]\\
\leq&\ \frac{C}{\mu} \delta^3 \mathbb{P}(W = \beta) + \frac{C}{\mu} \delta^4.
\end{align*}
This argument can be repeated to show $\mathbb{E} \abs{\varepsilon_{-}(W)} \leq \frac{C}{\mu} \delta^4$.
It was shown in (3.29) of \cite{Brav2017} that $\mathbb{P}(W = \beta)\leq C \delta$, but for completeness we repeat the argument at the end of the proof. Combining these results, we arrive at
\begin{align*}
\mathbb{E}\abs{\varepsilon(W)} \leq C \delta^2 + \lambda \frac{C}{\mu} \delta^4 + \mathbb{E} \abs{b(W)} \frac{C}{\mu} \delta^3 \leq C \delta^2,
\end{align*}
where in the last inequality we use the bound on $\mathbb{E} \abs{b(W)}$ from Lemma~\ref{lem:higherders} and the fact that $\delta^2 = 1/R = \mu/\lambda$. We now show that $\abs{\mathbb{E} b(W) f_h''(W)} \geq C$. Combining the fact that $b(x) = -(\mu x \wedge \mu \beta)$ with the form of $f_h''(x)$ from Lemma~\ref{lem:secondder}, we have
\begin{align*}
\mathbb{E} b(W) f_h''(W) =&\ \frac{-\mu \beta}{\mu \beta} \mathbb{P}(W \geq \beta) - \frac{\mu}{\mu \beta} \frac{\mathbb{E}\bigg[ W\Big(1+ W e^{W^2/2}\int_{-\infty}^{W} e^{-y^2/2} dy \Big) 1(W < \beta) \bigg]}{1 + \beta e^{\beta^2/2} \int_{-\infty}^{\beta} e^{-y^2/2} dy } \\
\leq&\ -\mathbb{P}(W \geq \beta) - \frac{1}{\beta} \frac{\mathbb{E}\big( W 1(W < \beta) \big)}{1 + \beta e^{\beta^2/2} \int_{-\infty}^{\beta} e^{-y^2/2} dy }.
\end{align*}
From Lemma~\ref{lem:momentequiv} we know that $\mathbb{E} \big( W 1(W < \beta)\big) = -\beta\mathbb{P}(W \geq \beta)$, so
\begin{align*}
\mathbb{E} b(W) f_h''(W) =&\ -\mathbb{P}(W \geq \beta) + \frac{1}{ \beta} \frac{ \beta \mathbb{P}(W \geq \beta) - \mathbb{E}\bigg[ \Big( W^2 e^{W^2/2}\int_{-\infty}^{W} e^{-y^2/2} dy \Big) 1(W < \beta) \bigg]}{1 + \beta e^{\beta^2/2} \int_{-\infty}^{\beta} e^{-y^2/2} dy } \\
\leq&\ \mathbb{P}(W \geq \beta)\Big( -1 + \frac{1}{1 + \beta e^{\beta^2/2} \int_{-\infty}^{\beta} e^{-y^2/2} dy} \Big)\\
\leq&\ -C \mathbb{P}(W \geq \beta).
\end{align*}
Proposition 1 of \cite{HalfWhit1981} tells us that $\mathbb{P}(W \geq \beta)$ converges to a positive constant (depending on $\beta$) as $R \to \infty$. This implies the lower bound on $\abs{\mathbb{E} b(W) f_h''(W)}$. Lastly, we prove that $\mathbb{P}(W = \beta)\leq C \delta$. Let $\phi_0(x)$ be the density of $Y_0$. Since $W$ is grid valued, for any $z \in (0,\delta)$ we have
\begin{align*}
\mathbb{P}(W = \beta) =&\ \mathbb{P}( \beta - z \leq W \leq \beta + z) \\
\leq&\ \mathbb{P}( \beta - z \leq Y_{0} \leq \beta + z) + \abs{\mathbb{P}( \beta - z \leq W \leq \beta + z) - \mathbb{P}( \beta - z \leq Y_0 \leq \beta + z)}\\
\leq&\ 2z \sup_{x \in \mathbb{R}} \phi_0(x) + 2\sup_{x \in \mathbb{R}} \abs{\mathbb{P}(W \leq x) - \mathbb{P}(Y_0 \leq x)}.
\end{align*}
To reach the desired conclusion, we use Lemma 7 and Theorem 3 of \cite{BravDaiFeng2016}. The former says that $\phi_0(x) \leq \sqrt{2/\pi}$, while the latter result says that $\sup_{x \in \mathbb{R}} \abs{\mathbb{P}(W \leq x) - \mathbb{P}(Y_0 \leq x)} \leq C \delta$.
\hfill $\square$\endproof
\subsubsection{Proof of Lemma~\ref{lem:momentequiv}.}
\proof{Proof of Lemma~\ref{lem:momentequiv}}
If $f(x) = x$, then $\mathbb{E} G_{\tilde X} f(W) = 0$ because $\mathbb{E} |W| < \infty$; see Lemma~2 of \cite{BravDaiFeng2016} for a proof of the latter fact. It follows from \eqref{eq:gxbarf} that
\begin{align*}
\lambda - \mathbb{E} \Big( \mu\big[ (W/\delta+ R) \wedge n \big] \Big) = \mathbb{E} b(W) = - \mu \mathbb{E}(W \wedge \beta) = 0,
\end{align*}
implying that
\begin{align}
-\beta \mathbb{P}(W \geq \beta) = \mathbb{E} \big( W 1(W < \beta)\big). \label{eq:leminterm}
\end{align}
Adding $ \mathbb{E} \big( W 1(W \geq \beta)\big)$ to both sides proves that $\mathbb{E} W = \mathbb{E} (W-\beta)^{+}$. The claim about $Y_0$ follows similarly. The density of $Y_0$ is given by \eqref{eq:stddenf}, so
\begin{align*}
\frac{1}{\mu} \mathbb{E} b(Y_0) = \kappa \int_{-\infty}^{\infty} \frac{b(x)}{\mu} \exp\Big(\int_0^x \frac{b(y)}{\mu} dy \Big) dx = 0.
\end{align*}
The last equality follows from integration by parts and the fact that $\lim_{x\to \pm \infty} \exp\big(\int_0^x \frac{b(y)}{\mu} dy \big) = 0$. Therefore
\begin{align*}
\frac{1}{\mu} \mathbb{E} b(Y_0) = -\beta \mathbb{P}(Y_0 \geq \beta) - \mathbb{E} \big(Y_0 1(Y_0 < \beta)\big) = 0
\end{align*}
and $\mathbb{E} Y_0 = \mathbb{E} (Y_0 -\beta)^{+}$.
\hfill $\square$\endproof
\subsubsection{Proof of Lemma~\ref{lem:expliciterror}.}
\proof{Proof of Lemma~\ref{lem:expliciterror}}
First, note that $f_h''(x) = -\frac{b(x)}{v_0} f_h'(x) + \frac{1}{v_0}( \mathbb{E} h(Y_0) - h(x))$ is differentiable almost everywhere because $f_h'(x)$ is continuously differentiable and both $b(x)$ and $h(x)$ are Lipschitz functions. The former statement follows, for example, from (B.1) of \cite{BravDaiFeng2016}. Therefore, $f_h'''(x)$ exists almost everywhere. Now \eqref{eq:gxbarf} implies that
\begin{align*}
\mathbb{E} G_{\tilde X} f_h(W) = \mathbb{E} \Big[ \lambda (f_h(W+\delta)-f_h(W))+\mu\big[ (W/\delta+ R) \wedge n \big] (f_h(W-\delta)-f_h(W)) \Big]=0,
\end{align*}
provided $\mathbb{E} \abs{f_h(W)} < \infty$. The integrability of $f_h(W)$ has already been established in \cite{BravDaiFeng2016}; see Lemma 1 and Remark 2 there. Since $f_h'''(x)$ does not exist everywhere (for instance at $x = \beta$), to perform Taylor expansion we need to use the integral form of the remainder term. We claim that
\begin{align}
f_h(x+\delta) - f_h(x) =&\ \delta f_h'(x) + \frac{1}{2}\delta^2 f_h''(x) + \frac{1}{6}\delta^{3} f_h'''(x-) + \varepsilon_+(x), \notag \\
f_h(x-\delta) - f_h(x) =&\ -\delta f_h'(x) + \frac{1}{2}\delta^2 f_h''(x) - \frac{1}{6}\delta^{3} f_h'''(x-) + \varepsilon_{-}(x). \label{eq:vareps}
\end{align}
To verify the claim, note that
\begin{align*}
f_h(x+\delta) - f_h(x) = \int_{x}^{x+\delta} f_h'(y) dy =&\ \delta f_h'(x) + \int_{x}^{x+\delta} (f_h'(y)-f_h'(x)) dy \\
=&\ \delta f_h'(x) + \int_{x}^{x+\delta} \int_{x}^{y} f_h''(u) du dy\\
=&\ \delta f_h'(x) + \int_{x}^{x+\delta} (x+\delta - u) f_h''(u) du.
\end{align*}
Expanding $f_h''(u) = f_h''(x) + (u-x) f_h'''(x-) + \int_x^u \big(f_h'''(v) - f_h'''(x-)\big) dv$, integrating term by term, and interchanging the order of integration in the remainder yields the first identity in \eqref{eq:vareps}; a similar treatment of $f_h(x-\delta) - f_h(x)$ yields the second. Letting $s(W) = \mu\big[ (W/\delta+ R) \wedge n \big]$, we therefore have
\begin{align*}
\mathbb{E} G_{\tilde X} f_h(W) =&\ \mathbb{E} \Big[ \delta (\lambda - s(W)) f_h'(W) + \frac{1}{2}\delta^2 (\lambda + s(W)) f_h''(W) \\
&+ \frac{1}{6}\delta^3 (\lambda - s(W)) f_h'''(W-) + \lambda \varepsilon_{+}(W) + s(W) \varepsilon_{-}(W)\Big].
\end{align*}
We know from \eqref{eq:tabeff} that $\delta(\lambda - s(W)) = b(W)$ and $\delta^2(\lambda + s(W)) = a(W)$, and consequently $s(W) = \lambda - b(W) / \delta$. Therefore,
\begin{align*}
& \mathbb{E} G_{\tilde X} f_h(W) \\
=&\ \mathbb{E} \Big[b(W)f_h'(W) + \frac{1}{2} a(W) f_h''(W) + \frac{1}{6}\delta^2 b(W) f_h'''(W-) + \lambda (\varepsilon_{+}(W) + \varepsilon_{-}(W)) - \frac{1}{\delta} b(W) \varepsilon_{-}(W)\Big] = 0.
\end{align*}
Taking expected values in the Poisson equation \eqref{eq:poissonv0}, we get
\begin{align*}
\mathbb{E} h(Y_0) - \mathbb{E} h(W) =&\ \mathbb{E} \Big[ b(W) f_h'(W) + v_0 f_h''(W)\Big] - \mathbb{E} G_{\tilde X} f_h(W) \\
=&\ - \frac{1}{2}\mathbb{E} \big(a(W) - a(0)\big) f_h''(W) - \mathbb{E} \varepsilon(W).
\end{align*}
We conclude by noting that $a(W) - a(0) = -\delta b(W)$.
\hfill $\square$\endproof
\subsubsection{Proof of Lemma~\ref{lem:higherders}.}
\proof{Proof of Lemma~\ref{lem:higherders}}
To bound $\abs{f_h^{(4)}(x)}$, first note that the Poisson equation \eqref{eq:poissonv0} implies that
\begin{align*}
f_h'''(x) = -\frac{b(x)}{v_0} f_h''(x) -\frac{b'(x)}{v_0} f_h'(x) -\frac{1}{v_0} h'(x).
\end{align*}
Since $b(x)$ and $h(x)$ are piece-wise linear with a kink at $x = \beta$, the derivative above exists for all $x \neq \beta$. Differentiating again and using $b''(x) = h''(x) = 0$ for $x \neq \beta$, we get
\begin{align*}
f_h^{(4)}(x) = -\frac{b(x)}{v_0} f_h'''(x) -\frac{2b'(x)}{v_0} f_h''(x) , \quad x \neq \beta.
\end{align*}
We conclude that $|f_h^{(4)}(x)| \leq (C/\mu)(1+\abs{x})$ for $x \neq \beta$ because $v_0 = \mu$, $\abs{b(x)} \leq \mu \abs{x}$, $\abs{b'(x)} \leq \mu$, $\abs{f_h'''(x)} \leq C/\mu$, and $\abs{f_h''(x)} \leq C/\mu $, where the last two inequalities follow from Lemma 3 of \cite{BravDaiFeng2016}. To conclude the proof, we note that $\mathbb{E} \abs{b(W)} \leq \mu\mathbb{E} |W| \leq \mu C$, where the last inequality follows from Lemma 2 of \cite{BravDaiFeng2016}, which tells us that $\mathbb{E} \abs{W} \leq C $.
\hfill $\square$\endproof
\subsubsection{Proof of Lemma~\ref{lem:secondder}.}
\proof{Proof of Lemma~\ref{lem:secondder}}
First assume that $x \geq \beta$. In (B.8) of \cite{BravDaiFeng2016} it is shown that
\begin{align*}
f_h''(x) =&\ e^{-\int_{0}^{x} \frac{b(u)}{v_0} du}\int_{x}^{\infty} \frac{1}{\mu}\big( h'(y) + f_h'(y) b'(y)\big) e^{\int_{0}^{y} \frac{b(u)}{v_0} du} dy = e^{-\int_{\beta}^{x} \frac{b(u)}{v_0} du}\int_{x}^{\infty} \frac{1}{\mu} e^{\int_{\beta}^{y} \frac{b(u)}{v_0} du} dy,
\end{align*}
where in the second equality we use $b'(x) = 0$ and $h'(x) = 1$ for $x \geq \beta$. Since $b(x)/v_0 = -\beta$ for $x \geq \beta$,
\begin{align*}
f_h''(x) = e^{\beta(x-\beta)} \int_{x}^{\infty} \frac{1}{\mu } e^{-\beta(y-\beta)} dy = \frac{1}{\mu \beta}.
\end{align*}
Now suppose that $x \leq \beta$. The Poisson equation \eqref{eq:poissonv0} and the fact that $h(x) = 0$ for $x \leq \beta$ imply that
\begin{align*}
f_h''(x) =&\ -\frac{b(x)}{v_0} f_h'(x) + \frac{1}{v_0} \mathbb{E} h(Y_0) = x f_h'(x) + \frac{\mathbb{E} h(Y_0)}{\mu }.
\end{align*}
One can verify by differentiating that
\begin{align*}
f_h'(x) = e^{-\int_{0}^{x} \frac{b(u)}{v_0} du} \int_{-\infty}^{x} \frac{1}{v_0} \big(\mathbb{E} h(Y_0) - h(y) \big) e^{\int_{0}^{y} \frac{b(u)}{v_0} du} dy = \frac{\mathbb{E} h(Y_0)}{\mu }e^{ \frac{1}{2}x^2 } \int_{-\infty}^{x} e^{- \frac{1}{2} y^2} dy.
\end{align*}
The first equality appears as equation (B.1) in \cite{BravDaiFeng2016}. The second equality follows from the form of $b(x)$ in \eqref{eq:adeff} and the fact that $h(x) = 0$ for $x \leq \beta$. Lastly, since the density of $Y_0$ is given by \eqref{eq:stddenf}, we have
\begin{align*}
\mathbb{E} h(Y_0) = \frac{\int_{-\infty}^{\infty} h(y) e^{\int_{0}^{y} \frac{b(u)}{v_0} du} dy}{\int_{-\infty}^{\infty} e^{\int_{0}^{y} \frac{b(u)}{v_0} du}dy} =&\ \frac{\int_{\beta}^{\infty} (y-\beta)^{+} e^{\int_{\beta}^{y} \frac{b(u)}{v_0} du}dy}{\int_{-\infty}^{\beta} e^{\int_{\beta}^{y} \frac{b(u)}{v_0} du}dy + \int_{\beta}^{\infty} e^{\int_{\beta}^{y} \frac{b(u)}{v_0} du}dy}\\
=&\ \frac{\int_{\beta}^{\infty} (y-\beta) e^{-\beta(y-\beta)}dy}{\int_{-\infty}^{\beta} e^{-\frac{1}{2}(y^2-\beta^2) }dy + \int_{\beta}^{\infty} e^{-\beta(y-\beta)}dy} = \frac{1/\beta^2 }{ e^{\frac{1}{2}\beta^2 } \int_{-\infty}^{\beta} e^{-\frac{1}{2} y^2 }dy + 1/\beta}.
\end{align*}
This verifies the form of $f_h''(x)$ when $x \leq \beta$.
\hfill $\square$\endproof
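The closed form of $\mathbb{E} h(Y_0)$ and Lemma~\ref{lem:secondder} can be cross-checked numerically. The sketch below is illustrative, with hypothetical choices $\mu = 1$ and $\beta = 1.2$: it compares the closed form of $\mathbb{E} h(Y_0)$ with direct quadrature against the density of $Y_0$, and verifies that for $x \leq \beta$ the lemma's $f_h''(x)$ agrees with $x f_h'(x) + \mathbb{E} h(Y_0)/\mu$, as the Poisson equation \eqref{eq:poissonv0} requires.

```python
import math

# Illustrative cross-check of E h(Y_0) and f_h'' for h(x) = (x - beta)^+; mu, beta hypothetical.
mu, beta = 1.0, 1.2

def Phi_int(x):
    # int_{-inf}^x e^{-y^2/2} dy, via the error function
    return math.sqrt(math.pi / 2.0) * (1.0 + math.erf(x / math.sqrt(2.0)))

I = Phi_int(beta)
Eh_closed = (1.0 / beta**2) / (math.exp(beta**2 / 2.0) * I + 1.0 / beta)

# Unnormalized density of Y_0: exp(-x^2/2) for x <= beta, exp(-beta^2/2 - beta(x-beta)) for x >= beta.
def dens(x):
    return math.exp(-x * x / 2.0) if x <= beta else math.exp(-beta**2 / 2.0 - beta * (x - beta))

m = 52_000
dx = 52.0 / m
grid = [-12.0 + dx * (i + 0.5) for i in range(m)]      # midpoint rule on [-12, 40]
Z = sum(dens(x) for x in grid) * dx
Eh_quad = sum(max(x - beta, 0.0) * dens(x) for x in grid) * dx / Z

# For x <= beta: f_h''(x) from the lemma vs. x f_h'(x) + E h(Y_0)/mu,
# with f_h'(x) = (E h(Y_0)/mu) e^{x^2/2} Phi_int(x) as in the proof.
x = 0.3
fpp_lemma = (1.0 / (mu * beta)) * (1.0 + x * math.exp(x**2 / 2.0) * Phi_int(x)) \
            / (1.0 + beta * math.exp(beta**2 / 2.0) * I)
fpp_poisson = (Eh_closed / mu) * (1.0 + x * math.exp(x**2 / 2.0) * Phi_int(x))
```

The two expressions for $f_h''(x)$ coincide algebraically because $\mathbb{E} h(Y_0) = (1/\beta)\big(1 + \beta e^{\beta^2/2}\int_{-\infty}^{\beta} e^{-y^2/2}dy\big)^{-1}$.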
\subsection{Proving Theorem~\ref{thm:md-high}}\label{fse8}
We first recall Theorem~\ref{thm:md-high}.
\begin{theorem}
\label{thm:md-highec}
Assume $n = R + \beta \sqrt{R}$ for some fixed $\beta > 0$.
There exist positive constants $c_1$ and $C$ depending only on $\beta$ such that
\begin{align}
& \left|\frac{\mathbb{P}(Y_1\geq z)}{\mathbb{P}(W\geq z)}-1\right|\leq \frac{C}{\sqrt{R}}\left(1+\frac{z}{\sqrt{R}}\right) \quad \text{ for } 0< z\leq c_1 R\ \text{and} \label{f13ec}\\
&\left|\frac{\mathbb{P}(Y_1\leq -z)}{\mathbb{P}(W\leq -z)}-1\right|\leq \frac{C}{\sqrt{R}}\left(1+z+\frac{z^4}{\sqrt{R}}\right),\ \text{ for } 0<z\leq \min\{c_1 R^{1/4}, R^{1/2}\} \label{f16ec}.
\end{align}
\end{theorem}
We use $c_1,\ C,\ C_1, K$ to denote positive constants which may differ from line to line, but will only depend on $\beta$. We first prove \eqref{f16ec}, and then \eqref{f13ec}.
Note that if $R \leq K$ for some $K > 0$, then $n$ is also bounded because $\beta$ is fixed and $n = R + \beta \sqrt{R}$. We argue that \eqref{f16ec} holds trivially in such a case. Observe that for any $K > 0$, because $\{W = -\sqrt{R}\}$ corresponds to an empty system,
\begin{align*}
\inf_{\substack{0 < R \leq K \\ 0 < z \leq \sqrt{R}}} \mathbb{P}(W \leq -z) \geq \inf_{\substack{0 < R \leq K }} \mathbb{P}(W = -\sqrt{R}) \geq L(K,\beta) > 0,
\end{align*}
where $L(K,\beta)$ is a positive constant depending only on $K$ and $\beta$. The second inequality holds because, for each $n$, the probability of an empty system decreases in $R$, so the infimum of $\mathbb{P}(W = -\sqrt{R})$ over $0 < R \leq K$ is attained at $R = K$. We can then choose a sufficiently large $C$ (depending on $K$) to ensure that \eqref{f16ec} holds trivially. Therefore, in the following we assume $R\geq C_1$ for a sufficiently large $C_1$. Since \eqref{f16ec} requires $ 0 < z \leq c_1 R^{1/4}$, we can also assume that
\ben{\label{fEC4}
\delta = \frac{1}{\sqrt{R}} <\min\{1/2,\beta\}, \quad 0<z<\sqrt{R}-2,\quad \delta(z+1)<1/2.
}
If not, we simply increase the value of $C_1$ and decrease the value of $c_1$ until \eqref{fEC4} holds; without loss of generality we therefore assume \eqref{fEC4} going forward. Given $z \in \mathbb{R}$, we let $f_z(x)$ be the solution (cf. \eqref{f18}) to the Poisson equation
\ben{\label{f17}
\frac{1}{2}a(x)f_z''(x)+b(x)f_z'(x)=\P(Y_1 \leq -z) - 1(x\leq -z), \quad x \in \mathbb{R}.
}
The following object will be of use. Define, for $W$ in its support \eqref{f29},
\begin{align}
K_W(y) =
\begin{cases}
(\lambda - b(W)/\delta) (y+\delta) \geq 0, \quad y \in [-\delta, 0], \\
\lambda(\delta-y) \geq 0, \quad y \in [0,\delta].
\end{cases} \label{eq:kdef}
\end{align}
It can be checked that
\begin{align}
\int_{-\delta}^{0} K_W(y) dy =&\ \frac{1}{2} \delta^2 \lambda -\frac{1}{2}\delta b(W), \quad \int_{0}^{\delta} K_W(y) dy = \frac{1}{2}\delta^2 \lambda, \notag \\
\int_{-\delta}^{\delta} K_W(y) dy =&\ \frac{1}{2}a(W) = \mu - \frac{\delta}{2} b(W), \quad \text{ and } \quad \int_{-\delta}^\delta y K_W(y) dy=\frac{\delta^2 b(W)}{6}. \label{MD:k3}
\end{align}
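The identities in \eqref{MD:k3} are elementary integrals of the piecewise-linear kernel $K_W$, and they can be spot-checked numerically as below. The parameters are illustrative, $b$ stands for an arbitrary admissible value of $b(W)$, and we use $a(W) = 2\mu - \delta b(W)$, which follows from \eqref{eq:tabeff} and $\delta^2 \lambda = \mu$.

```python
import math

# Numerical check of the K_W moment identities (MD:k3); illustrative parameters.
mu, R = 1.0, 25.0
lam = mu * R
delta = 1.0 / math.sqrt(R)
b = -0.7                    # an arbitrary admissible value of b(W) = -mu (W ^ beta)
a = 2.0 * mu - delta * b    # a(W) = 2*mu - delta*b(W)

def K(y):
    # piecewise-linear kernel from (eq:kdef)
    return (lam - b / delta) * (y + delta) if y <= 0.0 else lam * (delta - y)

m = 20_000                  # even, so the kink at y = 0 is a cell boundary
dy = 2.0 * delta / m
ys = [-delta + dy * (i + 0.5) for i in range(m)]       # midpoint rule on [-delta, delta]
I_left  = sum(K(y) for y in ys if y <= 0.0) * dy
I_right = sum(K(y) for y in ys if y > 0.0) * dy
I_all   = sum(K(y) for y in ys) * dy
I_y     = sum(y * K(y) for y in ys) * dy
```

The four sums reproduce $\frac{1}{2}\delta^2\lambda - \frac{1}{2}\delta b$, $\frac{1}{2}\delta^2\lambda$, $\frac{1}{2}a$, and $\frac{\delta^2 b}{6}$, respectively.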
Our first result is an expression for $\mathbb{P}(Y_1 \leq -z) - \mathbb{P}(W \leq -z)$, and is proved in Section~\ref{sec:prooftaylormdhigh}.
\begin{lemma} \label{lem:taylormdhigh}
For any $z \in \mathbb{R}$,
\begin{align}
&\mathbb{P}(Y_1 \leq -z) - \mathbb{P}(W \leq -z) \notag \\
=&\ \mathbb{E} \bigg[\int_{-\delta}^{\delta} \Big(\frac{2b(W+y)}{a(W+y)}f_z'(W+y) - \frac{2b(W)}{a(W)}f_z'(W) \Big) K_W(y) dy \bigg] \notag \\
&-\mathbb{E} \bigg[\int_{-\delta}^{\delta} \Big( \frac{2}{a(W)} 1(W\leq -z) - \frac{2}{a(W+y)}1(W+y\leq -z) \Big) K_W(y) dy \bigg] \notag \\
&-\mathbb{P}(Y_1 \leq -z)\mathbb{E} \bigg[\int_{-\delta}^{\delta} \Big( \frac{2}{a(W+y)} -\frac{2}{a(W)} \Big) K_W(y) dy \bigg]. \label{f20}
\end{align}
\end{lemma}
The bulk of the effort to prove \eqref{f16ec} comes from the first term. The following lemma is proved in Section~\ref{fap4}.
\begin{lemma}\label{l4}
For $x \in \mathbb{R}$, define $r(x) = 2b(x)/a(x)$. There exist constants $c_1,C,C_1 > 0$ depending only on $\beta$ such that for any $R\geq C_1$ and $0<z\leq c_1 R^{1/4}$ satisfying \eqref{fEC4},
\besn{\label{f21}
\left| \mathbb{E} \bigg[ \int_{-\delta}^\delta \big(r(W+y)f_z'(W+y)-r(W)f_z'(W)\big)K_W(y)dy \bigg] \right|\leq&\ C \delta^2 (z\vee 1)^4 \P(Y_1\leq -z).
}
\end{lemma}
\proof{Proof of \eqref{f16ec}}
We first bound the third term in \eqref{f20}. Using the form of $a(x)$ in \eqref{eq:adeff} and the assumption that $\delta < 1/2$ in \eqref{fEC4}, it is not hard to check that
\ben{\label{fEC6}
\mu \leq a(x) \leq 2\mu+\delta\mu \beta \leq C \mu, \quad \text{ and } \quad \abs{a'(x)} \leq \delta \mu,
}
from which it follows that
\begin{align}
\frac{1}{a(x)} \leq 1/\mu \quad \text{ and } \quad \abs{\frac{1}{a(x)} - \frac{1}{a(y)}} = \frac{\abs{a(y) - a(x)}}{a(y)a(x)} \leq \frac{\delta \abs{y-x}}{\mu}. \label{eq:alipsch}
\end{align}
Therefore, the third term in \eqref{f20} satisfies
\begin{align}
\P(Y_1\leq -z)\left| \mathbb{E} \bigg[\int_{-\delta}^{\delta} \Big( \frac{2}{a(W+y)} -\frac{2}{a(W)} \Big) K_W(y) dy \bigg] \right| \leq&\ \P(Y_1\leq -z)\left| \mathbb{E} \bigg[\frac{2\delta^2}{\mu } \int_{-\delta}^{\delta} K_W(y) dy \bigg] \right| \notag \\
=&\ \P(Y_1\leq -z)\left| \mathbb{E} \bigg[\frac{2\delta^2}{\mu } \frac{1}{2}a(W) \bigg] \right| \notag \\
\leq&\ C\delta^2 \P(Y_1\leq -z). \label{f24}
\end{align}
The equality is due to \eqref{MD:k3}. The second term in \eqref{f20} is bounded similarly. Namely,
\begin{align}
& \left| \mathbb{E} \bigg[\int_{-\delta}^{\delta} \Big( \frac{2}{a(W)} 1(W\leq -z) - \frac{2}{a(W+y)}1(W+y\leq -z) \Big) K_W(y) dy \bigg] \right| \notag \\
\leq &\ \left| \mathbb{E} \bigg[\int_{-\delta}^{\delta} \Big( \frac{2}{a(W)} - \frac{2}{a(W+y)} \Big)1(W\leq -z) K_W(y) dy \bigg] \right| \notag \\
&+\left| \mathbb{E} \bigg[\int_{-\delta}^{\delta} \frac{2}{a(W+y)}\Big( 1(W\leq -z) -1(W+y\leq -z) \Big) K_W(y) dy \bigg] \right| \notag \\
\leq &\ C \delta^2 \mathbb{P}(W \leq -z) + \frac{2}{\mu } \mathbb{E} \bigg[\int_{-\delta}^{\delta} \Big| 1(W\leq -z) -1(W+y\leq -z) \Big| K_W(y) dy \bigg]. \label{f22}
\end{align}
Letting $-\tilde{z}$ denote the smallest value in the support of $\{W: W > -z \}$ (cf. \eqref{f29}), we have
\begin{align}
\frac{2}{\mu } \mathbb{E} \bigg[\int_{-\delta}^{\delta} \Big| 1(W\leq -z) -1(W+y\leq -z) \Big| K_W(y) dy \bigg] \leq C \P(W=-\tilde{z})+ C \P(W=-\tilde{z}-\delta). \label{eq:interm30}
\end{align}
At the end of this proof we argue that
\ben{\label{f23}
\P(W=-\tilde{z})+ \P(W=-\tilde{z}-\delta)\leq C\delta (z\vee 1) \P(W\leq -z).
}
Combining the bounds in \eqref{f24}, \eqref{f22}, \eqref{eq:interm30} and \eqref{f23} with Lemma~\ref{l4} yields
\begin{align*}
|\P(Y_1\leq -z)-\P(W\leq -z)|\leq C\delta^2 (z\vee 1)^4 \P(Y_1\leq -z)+C\delta (z\vee 1) \P(W\leq -z).
\end{align*}
Dividing both sides by $\P(W\leq -z)$, which is allowed because our assumption that $z < \sqrt{R} -2$ in \eqref{fEC4} implies that $\P(W\leq -z)\geq \P(W=-\sqrt{R})>0$, we arrive at
\ben{\label{fEC5}
\left|\frac{\P(Y_1\leq -z)}{\P(W\leq -z)}-1 \right|\leq C \bigg( \delta^2 (z\vee 1)^4 \frac{\P(Y_1\leq -z)}{\P(W\leq -z)}+\delta(z\vee 1) \bigg).
}
Since we assumed $z \leq c_1 R^{1/4}$ and $R \geq C_1$, it follows that $\delta^2(z\vee 1)^4 \leq c_1^4 \vee (1/C_1)$, which can be made arbitrarily small (without affecting $C$ above) by decreasing $c_1$ and increasing $C_1$. Therefore, without loss of generality we can assume $C\delta^2(z\vee 1)^4<1/2$, so
\begin{align*}
\frac{1}{2}\frac{\P(Y_1\leq -z)}{\P(W\leq -z)} \leq (1-C\delta^2 (z\vee 1)^4)\frac{\P(Y_1\leq -z)}{\P(W\leq -z)}\leq 1+C\delta (z\vee 1) \leq C,
\end{align*}
where the second inequality is due to \eqref{fEC5} and the last inequality follows from \eqref{fEC4}. Combining the upper bound above with \eqref{fEC5} proves \eqref{f16ec}.
It remains to verify \eqref{f23}. Equation \eqref{f26} implies that for $-y$ in the support of $W$,
\begin{align*}
\lambda \P(W=-y-\delta)=\mu[(-y/\delta+R)\wedge n] \P(W=-y).
\end{align*}
Since $\beta = \delta(n-R)$, the set $\{-y/\delta+R \leq n\}$ equals $\{-y \leq \beta\}$, so dividing both sides above by $\lambda$ and using the fact that $\mu/\lambda = 1/R$, it follows that
\ben{\label{f27}
\P(W=- y-\delta)=(1-\delta y) \P(W=- y) \quad \text{ for $-y\leq \beta$}.
}
Recall that $-\tilde{z}$ denotes the smallest value in the support of $\{W: W > -z \}$, so $-\tilde z - \delta \leq -z < -\tilde z$ and the set $\{W \leq -z\}$ equals $\{W \leq -\tilde z - \delta\}$. Also recall that we assumed in \eqref{fEC4} that $0 < z < \sqrt{R} -2$ and $\delta < \beta$. Therefore,
\begin{align*}
\P(W\leq -z) =&\ \P(W=-\tilde z-\delta)+\P(W=-\tilde z-2\delta)+\cdots +\P(W=-\sqrt{R})\\
=&\ \P(W=-\tilde z) \Big( (1-\delta \tilde z) +\big((1-\delta \tilde z)(1-\delta (\tilde z+\delta))\big)\\
&+\big((1-\delta \tilde z)(1-\delta (\tilde z+\delta))(1-\delta (\tilde z+2\delta))\big)+\cdots \Big)\\
\geq&\ \P(W=-\tilde z) \Big( (1-\delta \tilde z) +\big((1-\delta \tilde z)(1-\delta (\tilde z+\delta))\big)+\cdots\\
& +\big((1-\delta \tilde z)\cdots(1-\delta(\tilde z+ \lfloor \frac{1}{\delta}\rfloor \delta))\big) \Big).
\end{align*}
The second equality follows from \eqref{f27} with $\tilde z$ in place of $y$. This requires that $-\tilde z \leq \beta$, which follows from the fact that $-\tilde z - \delta \leq -z < 0$ and therefore $-\tilde z < \beta$ by our assumption that $\delta < \beta$ in \eqref{fEC4}. In the last inequality, $\lfloor \cdot \rfloor$ denotes the integer part and the inequality itself follows from the fact that $-(\tilde z+ \lfloor \frac{1}{\delta}\rfloor \delta)\geq -(\tilde z+1)>-(z+1)>-\sqrt{R}$. Now for any $0 \leq k \leq \lfloor \frac{1}{\delta}\rfloor$ we have $1 - \delta (\tilde z + \delta k) \geq 1 - \delta(\tilde z + 1) > 1 - \delta(z + 1) > 1/2$, where the last inequality follows from our assumption in \eqref{fEC4}. Therefore, the right-hand side above is bounded from below by
\begin{align*}
& \P(W=-\tilde z) \Big( \big(1-\delta (\tilde z+1)\big) +\big(1-\delta (\tilde z+1)\big)^2+\cdots +\big(1-\delta (\tilde z+1)\big)^{\lfloor \frac{1}{\delta}\rfloor +1}\Big)\\
=&\ \P(W=-\tilde z)\big(1-\delta (\tilde z+1)\big) \frac{1-\big(1-\delta (\tilde z+1)\big)^{\lfloor \frac{1}{\delta}\rfloor+1}}{\delta (\tilde z+1)} \\
\geq & \P(W=-\tilde z)\big(1-\delta (\tilde z+1)\big) \frac{1-(e^{-\delta (\tilde z+1)})^{\lfloor \frac{1}{\delta}\rfloor+1}}{\delta (\tilde z+1)} \\
\geq& \P(W=-\tilde z)[1-\delta ( z+1)] \frac{1-e^{- 1/2}}{\delta ( z+1)} \\
\geq& \P(W=-\tilde z) \frac{1-e^{- 1/2}}{2\delta ( z+1)}.
\end{align*}
The first inequality is true because $1-x \leq e^{-x}$ and $\delta(\tilde z+1)\geq \delta(z-\delta+1)>0$. The second inequality is true because $0<\delta(\tilde z+1)<\delta (z+1)$ and $\delta (\tilde z+1)(\lfloor 1/\delta\rfloor+1)>(\delta/2) (\lfloor 1/\delta\rfloor+1) > 1/2$ (because $\lfloor 1/\delta\rfloor+1 > 1/\delta$, $\delta < 1/2$ by \eqref{fEC4}, and $-\tilde z - \delta \leq -z < 0$, from which it follows that $\tilde z > -\delta > -1/2$). The last inequality follows from $\delta(z+1)<1/2$ in \eqref{fEC4}.
This implies that $\P(W=-\tilde z)\leq C\delta(z\vee 1) \P(W\leq -z)$.
Using $-\tilde z < \delta < 1/2$ and \eqref{f27} again we get
\be{
\P(W=-\tilde z-\delta)=(1-\delta \tilde z) \P(W=-\tilde z)\leq (1+\delta^2) \P(W=-\tilde z)\leq \frac{5}{4} \P(W=-\tilde z).
}
This proves \eqref{f23}.
\hfill $\square$\endproof
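The one-step recursion \eqref{f27} can also be checked directly against the stationary distribution of the underlying birth-death chain. The following sketch is illustrative, with hypothetical parameters $\mu = 1$, $R = 25$, $\beta = 1$: for states with $-y \leq \beta$, i.e., at most $n$ customers in system, the ratio of consecutive point masses is exactly $1 - \delta y$.

```python
import math

# Check of the recursion (f27) against the stationary distribution; parameters illustrative.
mu, R, beta = 1.0, 25.0, 1.0
lam = mu * R
n = int(R + beta * math.sqrt(R))           # n = 30 here
delta = 1.0 / math.sqrt(R)

K = 400                                    # truncation far in the geometric tail
w = [1.0]
for k in range(1, K + 1):
    w.append(w[-1] * lam / (mu * min(k, n)))
Z = sum(w)
pi = [p / Z for p in w]                    # pi[k] = P(X = k) = P(W = delta*(k - R))

# W = -y at state k means -y = delta*(k - R); -y <= beta iff k <= n.
max_err = 0.0
for k in range(1, n + 1):
    y = -delta * (k - R)                   # P(W = -y) = pi[k], P(W = -y - delta) = pi[k-1]
    max_err = max(max_err, abs(pi[k - 1] - (1.0 - delta * y) * pi[k]))
```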
\subsubsection{Proof of Lemma~\ref{lem:taylormdhigh}.} \label{sec:prooftaylormdhigh}
\proof{Proof of Lemma~\ref{lem:taylormdhigh}}
Lemma~\ref{fEC3}, which we state in Section~\ref{fap4}, implies that $f_z'(x)$ is bounded and absolutely continuous with bounded $f_z''(x)$.
Lemma~\ref{lem:higherders} implies $\mathbb{E} \abs{W} < \infty$, which when combined with the fact that $f_z'(x)$ is bounded implies $\mathbb{E} \abs{f_z(W)} < \infty$, and in turn $\mathbb{E} G_{\tilde X} f_z(W) = 0$ due to \eqref{eq:gxbarf}. Letting
\begin{align*}
G_Y f(x)=\frac{1}{2}a(x)f''(x)+b(x) f'(x),
\end{align*}
taking expected values with respect to $W$ in the Poisson equation \eqref{f17} and subtracting $\mathbb{E} G_{\tilde X} f_z(W)=0$ from it, we get
\ben{ \label{eq:f171}
\mathbb{E} G_Y f_z(W) - \mathbb{E} G_{\tilde X} f_z(W) =\P(Y_1\leq -z) - \P(W\leq -z).
}
To prove the lemma we work on the left-hand side. Similar to \eqref{eq:vareps}, for any function $f: \mathbb{R} \to \mathbb{R}$ with an absolutely continuous derivative, and any $x,\delta \in \mathbb{R}$,
\begin{align*}
f(x+\delta) -f(x) = \delta f'(x) + \int_x^{x+\delta} (x+\delta -y)f''(y)dy= \delta f'(x) + \int_{0}^{\delta} (\delta -y)f''(x+y)dy.
\end{align*}
Applying this expansion to $G_{\tilde X}$ in \eqref{eq:gxbarf}, we get
\begin{align*}
G_{\tilde X} f_z(W) =&\ \delta \big( \lambda - \mu \big[(W/\delta+R)\wedge n\big]\big) f_z'(W) \\
&+ \lambda \int_{0}^{\delta} (\delta -y) f_z''(W+y) dy + \mu \big[(W/\delta+R)\wedge n\big] \int_{-\delta}^{0} (y+\delta)f_z''(W+y) dy.
\end{align*}
Recall from \eqref{eq:tabeff} that $b(W) =\delta \big( \lambda - \mu \big[(W/\delta+R)\wedge n\big]\big) $, and consequently $\mu \big[(W/\delta+R)\wedge n\big] = \lambda - b(W)/\delta$. We recall $K_W(y)$ from \eqref{eq:kdef}, so
\begin{align}
G_{\tilde X} f_z(W) =&\ b(W)f_z'(W) +\int_{-\delta}^{\delta} f_z''(W+y) K_{W}(y) dy \notag \\
=&\ b(W)f_z'(W) + \frac{1}{2} a(W) f_z''(W) + \int_{-\delta}^{\delta} \big(f_z''(W+y) - f_z''(W)\big) K_{W}(y) dy . \label{eq:steinx}
\end{align}
The last equality follows from $\int_{-\delta}^{\delta} K_W(y) dy = \frac{1}{2}a(W)$ in \eqref{MD:k3}. Thus,
\begin{align}
\mathbb{P}(Y_1 \leq -z) - \mathbb{P}(W \leq -z) =&\ \mathbb{E} G_{Y} f_z(W) - \mathbb{E} G_{\tilde X} f_z(W) \notag \\
=&\ \mathbb{E} \bigg[ \int_{-\delta}^{\delta} \big(f_z''(W) - f_z''(W+y)\big) K_{W}(y) dy\bigg]. \label{MD:exp1}
\end{align}
In the last equality above we used \eqref{eq:steinx}. The lemma follows from \eqref{f17}; i.e., $f_z''(x) = -\frac{2b(x)}{a(x)}f_z'(x) + \frac{2}{a(x)}\big(\mathbb{P}(Y_1 \leq -z ) - 1(x \leq -z) \big)$.
\hfill $\square$\endproof
\subsubsection{Proof of Lemma~\ref{l4}.} \label{fap4}
We first present a series of intermediary lemmas that represent the main steps in the proof, and then use them to prove Lemma~\ref{l4}. We remind the reader that \eqref{eq:adeff} implies that
\begin{align}
r(x) =
\begin{cases}
-2x, \quad x \leq -1/\delta,\\
\frac{-2x}{2 +\delta x}, \quad x \in [-1/\delta, \beta], \\
\frac{-2\beta}{2 + \delta\beta}, \quad x \geq \beta,
\end{cases}
\quad
r'(x) = \begin{cases}
-2, \quad x \leq -1/\delta, \\
\frac{-4}{(2+\delta x)^2}, \quad x \in (-1/\delta, \beta], \\
0, \quad x > \beta,
\end{cases}\label{eq:rform}
\end{align}
where $r'(x)$ is interpreted as the derivative from the left at the points $x = -1/\delta, \beta$. In particular, note that $\abs{r'(x)} \leq 4$. The first lemma decomposes
\begin{align*}
\mathbb{E} \bigg[ \int_{-\delta}^\delta \big(r(W+y)f_z'(W+y)-r(W)f_z'(W)\big)K_W(y)dy \bigg]
\end{align*}
into a more convenient form. It is proved in Section~\ref{sec:errorexpproof}.
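The piecewise formulas in \eqref{eq:rform} lend themselves to a quick numerical spot check: on $x > -1/\delta$ the displayed $r(x)$ agrees with $2b(x)/a(x)$ computed from $b(x) = -(\mu x \wedge \mu \beta)$ and $a(x) = 2\mu - \delta b(x)$, and away from the kinks a central difference of $r$ reproduces $r'$. The sketch below is illustrative only.

```python
import math

# Spot check of the piecewise formulas (eq:rform); mu = 1, R = 25, beta = 1 are illustrative.
mu, R, beta = 1.0, 25.0, 1.0
delta = 1.0 / math.sqrt(R)                 # so -1/delta = -5

def b(x):
    return -mu * min(x, beta)

def r_piecewise(x):
    if x <= -1.0 / delta:
        return -2.0 * x
    if x <= beta:
        return -2.0 * x / (2.0 + delta * x)
    return -2.0 * beta / (2.0 + delta * beta)

def rprime_piecewise(x):
    if x <= -1.0 / delta:
        return -2.0
    if x <= beta:
        return -4.0 / (2.0 + delta * x) ** 2
    return 0.0

# On x > -1/delta, r(x) should equal 2 b(x)/a(x) with a(x) = 2*mu - delta*b(x).
err_r = max(abs(r_piecewise(x) - 2.0 * b(x) / (2.0 * mu - delta * b(x)))
            for x in [-3.0, 0.0, 0.5, 2.0, 7.0])

# Away from the kinks x = -1/delta and x = beta, a central difference reproduces r'.
h = 1e-5
err_rp = max(abs((r_piecewise(x + h) - r_piecewise(x - h)) / (2.0 * h) - rprime_piecewise(x))
             for x in [-6.0, -3.0, 0.0, 0.9, 2.0, 10.0])

# |r'(x)| <= 4 everywhere, as noted in the text.
max_abs_rp = max(abs(rprime_piecewise(-8.0 + 0.01 * i)) for i in range(1800))
```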
\begin{lemma} \label{lem:error_expansion}
Let $f_z(x)$ solve \eqref{f17}. Then
\besn{\label{26}
& \int_{-\delta}^\delta \big(r(W+y)f_z'(W+y)-r(W)f_z'(W)\big)K_W(y)dy \\
=&\int_{-\delta}^\delta K_W(y) r(W+y) y f_z''(W) dy\\
&- \int_{-\delta}^\delta K_W(y) r(W+y)\int_0^y \int_0^s r(W+u)f_z''(W+u) du ds dy \\
&- \int_{-\delta}^\delta K_W(y) r(W+y)\int_0^y \int_0^s r'(W+u)f_z'(W+u) du ds dy \\
&-\int_{-\delta}^\delta K_W(y) r(W+y)\int_0^y \bigg[\frac{ 1(W+s\leq -z)}{a(W+s)/2}-\frac{ 1(W\leq -z)}{a(W)/2} \bigg]ds dy \\
&+ \P(Y_1\leq -z) \int_{-\delta}^\delta K_W(y) r(W+y) \int_0^y \bigg[\frac{2}{a(W+s)}-\frac{2}{a(W)}\bigg] ds dy \\
&+1(W=-1/\delta) f_z'(W) \int_0^\delta K_W(y) \int_0^y r'(W+s) ds dy\\
&+1(W=\beta) f_z'(W) \int_{-\delta}^0 K_W(y) \int_0^y r'(W+s) ds dy\\
&+1(W\in [-1/\delta+\delta, \beta-\delta]) f_z'(W) \int_{-\delta}^\delta K_W(y) \int_0^y \int_0^s r''(W+u) duds dy\\
&+1(W\in [-1/\delta+\delta, \beta-\delta]) f_z'(W) r'(W) \frac{\delta^2 b(W)}{6}.
}
\end{lemma}
Next we need a bound on $f_z'(x)$ and $f_z''(x)$ as provided by the following lemma.
\begin{lemma}\label{fEC3}
Let $f_z(x)$ solve \eqref{f17}. There exists a constant $C$ that depends only on $\beta$ such that for any $z > 0$,
\begin{align}
\abs{f_z'(x)} \leq&\ \frac{C}{\mu } \bigg(1(x \leq -z) \Big( \frac{\mu}{\abs{b(x)}} \wedge 1 \Big) \notag \\
& \qquad + \P(Y_1\leq -z) \Big( 1(-z < x < 0) e^{-\int_{0}^{x} r(u)du} + 1(x \geq 0)\Big( \frac{\mu}{\abs{b(x)}} \wedge 1 \Big) \Big) \bigg), \label{eq:mdleft1} \\
\abs{f_z''(x)} \leq&\ \frac{C}{\mu } \bigg(1(x \leq -z) + \P(Y_1\leq -z) \Big( 1(-z < x < 0)(1 + \abs{x}) e^{-\int_{0}^{x} r(u)du} + 1(x \geq 0) \Big) \bigg). \label{eq:mdleft2}
\end{align}
\end{lemma}
\proof{Proof of Lemma~\ref{fEC3}}
We recall from \eqref{eq:stddenf} that the density of $Y_1$ is
\begin{align*}
\kappa\frac{2}{a(x)} \exp\Big({\int_0^x \frac{2b(y)}{a(y)}dy}\Big), \quad x \in \mathbb{R},
\end{align*}
where $\kappa$ is the normalizing constant.
One may verify that the solution to the Poisson equation $\frac{1}{2}a(x)f_z''(x)+b(x)f_z'(x)=\P(Y_1 \leq -z) - 1(x\leq -z)$ satisfies
\ben{\label{f18}
f_z'(x)=
\begin{cases}
-\P(Y_1\geq -z) e^{-\int_0^x r(u)du} \frac{1}{\kappa} \P(Y_1\leq x), & x\leq -z, \\
-\P(Y_1\leq -z) e^{-\int_0^x r(u)du} \frac{1}{\kappa} \P(Y_1\geq x), & x\geq -z\magenta{.}
\end{cases}
}
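As a hedged sanity check (not part of the original argument, and using only the density of $Y_1$ from \eqref{eq:stddenf} and the identity $r(x)=2b(x)/a(x)$), one can differentiate the first branch of \eqref{f18} to confirm that it solves the Poisson equation on $x\leq -z$:

```latex
% Differentiating the first branch of (f18); note that
% (1/kappa) d/dx P(Y_1 <= x) = (2/a(x)) e^{\int_0^x r(u)du}.
f_z''(x) = -r(x) f_z'(x)
  - \P(Y_1\geq -z)\, e^{-\int_0^x r(u)du}\,\frac{2}{a(x)}\, e^{\int_0^x r(u)du}
  = -r(x) f_z'(x) - \frac{2\,\P(Y_1\geq -z)}{a(x)}.
% Substituting into the Poisson equation and using a(x) r(x)/2 = b(x):
\frac{a(x)}{2} f_z''(x) + b(x) f_z'(x)
  = -b(x) f_z'(x) - \P(Y_1\geq -z) + b(x) f_z'(x)
  = \P(Y_1\leq -z) - 1(x\leq -z), \qquad x\leq -z,
```

where the last step uses that $Y_1$ has a density, so $\P(Y_1\geq -z) = 1 - \P(Y_1\leq -z)$; the branch $x\geq -z$ is checked in the same way.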
Rearranging the Poisson equation and using \eqref{f18} yields
\besn{\label{f19}
f_z''(x)=&-r(x) f_z'(x) +\frac{2(\P(Y_1\leq -z) - 1(x\leq -z))}{a(x)}\\
=&
\begin{cases}
-\P(Y_1\geq -z) \left[ \frac{2}{a(x)} -\frac{r(x)}{\kappa} e^{-\int_0^x r(u) du} \P(Y_1\leq x) \right], & x\leq -z, \\
-\P(Y_1\leq -z) \left[ -\frac{2}{a(x)} -\frac{r(x)}{\kappa} e^{-\int_0^x r(u) du} \P(Y_1\geq x)\right], & x>-z.
\end{cases}
}
We will start by proving the bound on $f_z''(x)$. The form of the density of $Y_1$ implies
\begin{align*}
\frac{1}{\kappa} \P(Y_1\leq x) = \int_{-\infty}^{x} \frac{2}{ a(y)} e^{\int_{0}^{y}r(u)du} dy.
\end{align*}
Since $b(x)$ is nonincreasing and $b(x) > 0$ when $x < 0$, we have, for $x < 0$,
\begin{align}
\frac{1}{\kappa} \P(Y_1\leq x) = \int_{-\infty}^{x} \frac{2}{ a(y)} e^{\int_{0}^{y}r(u)du} dy \leq \frac{1}{b(x)} \int_{-\infty}^{x} \frac{2 b(y)}{ a(y)} e^{\int_{0}^{y}r(u)du} dy = \frac{1}{b(x)} e^{\int_{0}^{x}r(u)du}, \label{eq:main1}
\end{align}
and since $b(x) < 0$ for $x > 0$, we have, for $x \magenta{>} 0$,
\begin{align}
\frac{1}{\kappa} \P(Y_1\geq x) \leq \frac{1}{b(x)} \int_{x}^{\infty} \frac{2 b(y)}{ a(y)} e^{\int_{0}^{y}r(u)du} dy = \frac{-1}{b(x)} e^{\int_{0}^{x}r(u)du}= \frac{ 1}{\abs{b(x)}} e^{\int_{0}^{x}r(u)du}. \label{eq:main2}
\end{align}
Applying \eqref{eq:main1} and \eqref{eq:main2} to the form of $f_z''(x)$ above, we get the desired upper bound when $x\leq -z$ or $x \geq 0$. When $-z < x < 0$, we use the fact that $\abs{r(x)} \leq C \abs{x}$ to get
\begin{align*}
\abs{f_z''(x)} \leq&\ \magenta{\frac{2}{a(x)}\P(Y_1\leq -z)+C} \P(Y_1\leq -z)\magenta{\abs{x}} e^{-\int_0^x r(u) du} \int_{x}^{\infty} \frac{2}{ a(y)} e^{\int_{0}^{y} r(u)du} dy \\
\leq&\ \frac{C}{\mu } \P(Y_1\leq -z)(1 + \abs{x}) e^{-\int_0^x r(u) du}.
\end{align*}
The last inequality follows from the facts that $a(x) \geq \mu$, and that $\int_{x}^{\infty} e^{\int_{0}^{y} r(u)du} dy$ can be bounded by a constant that depends only on $\beta$, which is evident from the form of $r(x)$. This establishes the bound on $f_z''(x)$. Repeating the same procedure with $f_z'(x)$ gives us the bound
\begin{align*}
\abs{f_z'(x)} \leq&\ C \bigg(1(x \leq -z) \frac{1}{\abs{b(x)}} + \P(Y_1\leq -z) \Big( 1(-z < x < 0) \magenta{\frac{1}{\mu}} e^{-\int_{0}^{x} r(u)du} + 1(x \geq 0)\frac{1}{\abs{b(x)}} \Big) \bigg).
\end{align*}
To conclude the proof, we require the following two inequalities from Lemma B.8 of \cite{Brav2017}:
\begin{align}
& e^{-\int_{0}^{x} r(u)du}\int_{-\infty}^{x} \frac{2}{ a(y)} e^{\int_{0}^{y}r(u)du} dy\leq
\begin{cases}
\frac{3}{ \mu }, \quad x \leq 0,\\
\frac{1}{\mu}e^{\beta^2} (3 + \beta), \quad x \in [0,\beta],
\end{cases} \label{DS:fbound1}\\
& e^{-\int_{0}^{x}r(u)du}\int_{x}^{\infty} \frac{2}{ a(y)} e^{\int_{0}^{y} r(u)du} dy \leq
\begin{cases}
\frac{1}{\mu } \Big( 2 + \frac{1}{\beta} \Big), \quad x \in [0,\beta], \\
\frac{1}{\mu \beta}, \quad x \geq \beta.
\end{cases} \label{DS:fbound2}
\end{align}
By repeating the bounding procedure discussed above, but using \eqref{DS:fbound1} and \eqref{DS:fbound2} in place of \eqref{eq:main1} and \eqref{eq:main2}, we arrive at
\begin{align*}
\abs{f_z'(x)} \leq&\ \frac{C}{\mu } \bigg(1(x \leq -z) + \P(Y_1\leq -z) \Big( 1(-z < x < 0) e^{-\int_{0}^{x} r(u)du} + 1(x \geq 0) \Big) \bigg),
\end{align*}
which establishes the desired bound on $f_z'(x)$.
\hfill $\square$\endproof
Lastly, we will need the following lemmas, which are proved in Section~\ref{fap5}.
\begin{lemma}\label{fl2}
There exist constants $c_1,C_1 > 0$ such that for any integer $k \geq 0$ there is a constant $C(k) > 0$ with the following property: for any $R\geq C_1$ and $0<z\leq c_1 R^{1/4}$,
\ben{\label{11}
\mathbb{E}|1(W\leq -z) W^k|\leq C(k) (z\vee 1)^{k+1} \P(Y_1\leq -z).
}
The constants $c_1,C_1,C(k)$ depend only on $\beta$.
\end{lemma}
\begin{lemma}\label{fl3}
There exist constants $c_1,C_1 > 0$ such that for any integer $k \geq 0$ there is a constant $C(k) > 0$ with the following property: for any $R\geq C_1$ and $0<z\leq c_1 R^{1/4}$,
\ben{\label{14}
\mathbb{E}|1(-z\leq W\leq 0)W^k e^{-\int_0^W r(u)du}|\leq C(k) (z\vee 1)^{k+1}.
}
The constants $c_1,C_1,C(k)$ depend only on $\beta$.
\end{lemma}
We are now ready to prove Lemma~\ref{l4}.
\proof{Proof of Lemma~\ref{l4}}
Again, in the following, we use $C$ to denote a constant whose value may change from line to line but depends only on $\beta$. We take expected values on both sides of \eqref{26} and bound the terms on the right-hand side one line at a time. For the first line, we need to bound
\bes{
\left| \mathbb{E} \bigg[ \int_{-\delta}^\delta K_W(y) r(W+y) y f_z''(W) dy \bigg] \right|
\leq &\left| \mathbb{E} \bigg[ r(W) f_z''(W) \int_{-\delta}^\delta y K_W(y) dy \bigg]\right|\\
&+\left| \mathbb{E} \bigg[ f_z''(W) \int_{-\delta}^\delta y K_W(y) (r(W+y)-r(W)) dy \bigg] \right|.
}
Using \eqref{MD:k3}, it follows that
\begin{align*}
\left| \mathbb{E} \bigg[ r(W) f_z''(W) \int_{-\delta}^\delta y K_W(y) dy \bigg]\right| =&\ \left| \mathbb{E} \bigg[ r(W) f_z''(W) \frac{ \delta^2 b(W)}{6}\bigg]\right| \leq C \mu \delta^2 \mathbb{E} \abs{W^2 f_{z}''(W)},
\end{align*}
where we used $\abs{r(x)} \leq C \abs{x}$ and $\abs{b(x)}\leq \mu \abs{x}$ in the inequality. Furthermore, since $\abs{a(x)} \leq C \mu$ and recalling from \eqref{eq:rform} that $\abs{r'(x)} \leq 4$, we have
\begin{align*}
\left| \mathbb{E} \bigg[ f_z''(W) \int_{-\delta}^\delta y K_W(y) (r(W+y)-r(W)) dy \bigg] \right| \leq&\ \magenta{ \mathbb{E} \bigg[ C |f_z''(W)|\delta^2 \int_{-\delta}^\delta K_W(y) dy \bigg] } \\
=&\ \magenta{ \mathbb{E} \bigg[ C |f_z''(W)|\delta^2 \frac{a(W)}{2} \bigg] } \\
\leq &\ C \mu \delta^2 \mathbb{E} \abs{f_z''(W)}.
\end{align*}
Therefore,
\begin{align*}
\left| \mathbb{E} \bigg[ \int_{-\delta}^\delta K_W(y) r(W+y) y f_z''(W) dy \bigg] \right| \leq C \mu \delta^2 \mathbb{E} \abs{(1 + W^2) f_z''(W)}.
\end{align*}
Applying the bound on $\abs{f_z''(x)}$ from Lemma~\ref{fEC3}, we see that the right-hand side above is bounded by
\begin{align*}
& \magenta{C\delta^2} \mathbb{E} \bigg((1 + W^2)1(W \leq -z) \\
&\qquad + \P(Y_1\leq -z)(1 + W^2) \Big( 1(-z < W < 0)(1 + \abs{W}) e^{-\int_{0}^{W} r(u)du} + 1(W \geq 0) \Big) \bigg)\\
\leq&\ C \P(Y_1\leq -z) \magenta{\delta^2} (z\vee 1)^4,
\end{align*}
where the inequality is due to Lemmas~\ref{fl2} and \ref{fl3} and the fact that $\mathbb{E} W^2 \leq C$, which was proved in Lemma A.1 of \cite{Brav2017}.
Following the same argument, the second line satisfies
\begin{align*}
&\mathbb{E}\abs{\int_{-\delta}^\delta K_W(y) r(W+y)\int_0^y \int_0^s r(W+u)f_z''(W+u) du ds dy}\\
&\magenta{\leq C\mu\delta^2\mathbb{E} (1+W^2) \sup_{-\delta\leq u\leq \delta}|f_z''(W+u)|} \\
&\magenta{\leq C\delta^2 \mathbb{E} \bigg( (1+W^2)1(W\leq -z+\delta)} \\
&\magenta{\quad + \P(Y_1\leq -z) (1+W^2) \Big( 1(-z-\delta<W<\delta)(1+|W|)e^{\sup_{-\delta\leq u\leq \delta}(-\int_0^{W+u} r(v)dv)}+1(W\geq -\delta) \Big) \bigg),}
\end{align*}
\magenta{which} is also bounded by $C \P(Y_1\leq -z) \delta^2 (z\vee 1)^4$ by simple modifications of Lemmas~\ref{fl2} and \ref{fl3}. For the third line, we use the representation $f_z'(W+u) - f_z'(W) = \int_{0}^{u} f_z''(W+v) dv$ and the triangle inequality to get
\begin{align*}
&\left| \mathbb{E} \bigg[\int_{-\delta}^\delta K_W(y) r(W+y)\int_0^y \int_0^s r'(W+u)f_z'(W+u) du ds dy \bigg]\right| \\
\leq&\ \left| \mathbb{E} \bigg[\int_{-\delta}^\delta K_W(y) r(W+y)\int_0^y \int_0^s r'(W+u)\int_{0}^{u} f_z''(W+v) dvdu ds dy \bigg]\right| \\
&+ \left| \mathbb{E} \bigg[\int_{-\delta}^\delta K_W(y) f_z'(W) (r(W+y)-r(W))\int_0^y \int_0^s r'(W+u) du ds dy \bigg]\right| \\
&+ \left| \mathbb{E} \bigg[\int_{-\delta}^\delta K_W(y) f_z'(W) r(W) \int_0^y \int_0^s r'(W+u) du ds dy \bigg]\right| \\
\leq&\magenta{\ C \mathbb{E} \bigg[ (\abs{W} + \delta)\int_{-\delta}^\delta K_W(y) \int_0^y \int_0^s \int_{0}^{u} |f_z''(W+v)| dvdu ds dy \bigg]} \\
&\magenta{+ C \delta^3 \mathbb{E} \bigg[\abs{f_z'(W)} \int_{-\delta}^\delta K_W(y)dy \bigg] + C\delta^2 \mathbb{E} \bigg[\abs{r(W) f_z'(W)}\int_{-\delta}^\delta K_W(y) dy \bigg]}\\
\leq&\magenta{\ C \mathbb{E} \bigg[ (\abs{W} + \delta)\int_{-\delta}^\delta K_W(y) \int_0^y \int_0^s \int_{0}^{u} |f_z''(W+v)| dvdu ds dy \bigg]} \\
&\magenta{+ C \mu \delta^3 \mathbb{E} \abs{f_z'(W)} + C\delta^2 \mathbb{E} \abs{b(W) f_z'(W)} .}
\end{align*}
In the second inequality we used $\abs{r'(x)}\leq 4$; the last inequality is due to \eqref{MD:k3}, the identity $r(x) = 2b(x)/a(x)$, and the fact that $\abs{a(x)} \leq C \mu$. The right-hand side can be bounded by $C\P(Y_1\leq -z) \delta^2 (z\vee 1)^4$ as follows.
The first term on the right-hand side above can be bounded by repeating the procedure used to bound lines one and two of \eqref{26}. The second and third terms can be bounded by combining the bound on $f_z'(x)$ from Lemma~\ref{fEC3} with Lemmas~\ref{fl2} and \ref{fl3}.
For the fourth line, repeating the arguments from \eqref{f22}, \eqref{eq:interm30} and \eqref{f23}, we have
\bes{
&\left| \mathbb{E} \bigg[\int_{-\delta}^\delta K_W(y) r(W+y)\int_0^y \bigg[\frac{ 1(W+s\leq -z)}{a(W+s)/2}-\frac{ 1(W\leq -z)}{a(W)/2} \bigg]ds dy \bigg] \right|\\
\leq & \left| \mathbb{E} \bigg[ \int_{-\delta}^\delta K_W(y) r(W+y)\int_0^y \bigg[\frac{2}{a(W+s)}-\frac{2}{a(W)} \bigg]1(W\leq -z)ds dy \bigg] \right|\\
&+\left| \mathbb{E} \bigg[ \int_{-\delta}^\delta K_W(y) r(W+y)\int_0^y \bigg[\frac{2(1(W+s\leq -z)-1(W\leq -z))}{a(W+s)} \bigg]ds dy \bigg] \right|\\
\leq & C\delta^3 \magenta{\mathbb{E}(1+|W|) 1(W\leq -z)} + C\delta \magenta{(z\vee 1)} [\P(W=-\tilde{z})+\P(W=-\tilde{z}-\delta)]\\
\leq & \magenta{C\P(Y_1\leq -z) \delta^3 (z\vee 1)^2+ C\delta^2 (z\vee 1)^2 \P(W\leq -z)}\\
\leq & C\P(Y_1\leq -z) \delta^2 (z\vee 1)^3,
}
where in the last \magenta{two inequalities} we used Lemma~\ref{fl2}. The fifth line is bounded as follows. Using \eqref{MD:k3}, \eqref{eq:alipsch} and the fact that $\abs{r(x)} \leq C\abs{x}$, we get
\bes{
&\left| \mathbb{E} \bigg[ \int_{-\delta}^\delta K_W(y) r(W+y) \P(Y_1\leq -z) \int_0^y \bigg[\frac{2}{a(W+s)}-\frac{2}{a(W)}\bigg] ds dy \bigg] \right|\\
\leq & C\P(Y_1\leq -z) \delta^3 \magenta{\mathbb{E}(1+|W|)}\leq C\P(Y_1\leq -z) \delta^3.
}
The sixth line is bounded as follows. Using $\abs{r'(x)} \leq 4$, \eqref{MD:k3}, and $\abs{1(W=-1/\delta) f_z'(W)} \leq 1(W=-1/\delta) C/\mu$, which follows from Lemma~\ref{fEC3} and the fact that $z < \sqrt{R}-2 < \sqrt{R} = 1/\delta$ from \eqref{fEC4}, we have
\bes{
\left|\mathbb{E} \bigg[ 1(W=-1/\delta) f_z'(W) \int_0^\delta K_W(y) \int_0^y r'(W+s) ds dy\bigg]\right| \leq & C \delta \P(W=-1/\delta),
}
and now we argue that
\begin{align}
C \delta \P(W=-1/\delta) \leq C \delta^2 (z \vee 1)\P(Y_1 \leq -z). \label{eq:toargue}
\end{align}
For any integer $0 < k < R$, \magenta{it follows from \eqref{f26} that}
\begin{align*}
\P(W=-1/\delta) = \P(W = -\sqrt{R}) \leq \frac{1}{k} \sum_{i=0}^{k} \P(W = \delta(i-R)) = \frac{1}{k} \P(W \leq \delta (k-R)).
\end{align*}
Choose $k$ such that $\delta (k-R)$ is the largest element in the support of $W$ that is less than or equal to $-z$, and use the fact that $z \leq c_1 R^{1/4}$ to conclude that
\begin{align*}
\P(W=-1/\delta) \leq C \delta \P(W \leq -z).
\end{align*}
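To make the choice of $k$ explicit (a sketch of this step, using only the standing assumptions $\delta = 1/\sqrt{R}$ and $z \leq c_1 R^{1/4}$): since $\delta(k-R)$ is the largest support point at most $-z$, we have $\delta(k-R) \geq -z-\delta$, so

```latex
k \;\geq\; R - \frac{z+\delta}{\delta} \;=\; R - z\sqrt{R} - 1
  \;\geq\; R - c_1 R^{3/4} - 1 \;\geq\; \frac{R}{2}
  \quad \text{for } R \text{ large enough},
```

and hence $1/k \leq 2/R = 2\delta^2 \leq C\delta$, which combined with the previous display gives $\P(W=-1/\delta) \leq C\delta\,\P(W\leq -z)$.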
Using Lemma~\ref{fl2} implies \eqref{eq:toargue}. The seventh line is bounded as follows. Using $\abs{r'(x)} \leq 4$, \eqref{MD:k3}, and $\abs{1(W=\beta) f_z'(W)} \leq 1(W=\beta) C/\mu$, which follows from Lemma~\ref{fEC3}, we have
\bes{
& \left| \mathbb{E} \bigg[1(W=\beta) f_z'(W) \int_{-\delta}^0 K_W(y) \int_0^y r'(W+s) ds dy\bigg]\right| \\
\leq & C \delta \P(W=\beta)\P(Y_1\leq -z)\leq C\delta^2 \P(Y_1\leq -z),
}
where we used $\P(W=\beta)\leq C\delta$, which was argued at the end of the proof of Proposition~\ref{prop:lowerboundec}. The eighth line is bounded as follows. Using $\abs{r''(x)} \leq C \delta$ and Lemma~\ref{fEC3},
\bes{
& \left| \mathbb{E} \bigg[1(W\in [-1/\delta+\delta, \beta-\delta]) f_z'(W) \int_{-\delta}^\delta K_W(y) \int_0^y \int_0^s r''(W+u) duds dy \bigg] \right| \\
\leq & C \delta^3 \P(W\leq -z) +C \delta^3 \P(Y_1\leq -z) \mathbb{E}|1(-z\leq W\leq 0) e^{-\int_0^W r(u)du}| +C \delta^3 \P(Y_1\leq -z)\\
\leq & C \delta^3 (z\vee 1) \P(Y_1\leq -z),
}
where in the last inequality we used \magenta{Lemmas~\ref{fl2} and \ref{fl3}}. The ninth line is bounded similarly. Using $\abs{r'(x)} \leq 4$ and the bound on $\abs{f_z'(x) b(x)} $ from Lemma~\ref{fEC3}, we have
\bes{
& \left| \mathbb{E} \bigg[1(W\in [-1/\delta+\delta, \beta-\delta]) f_z'(W) r'(W) \frac{\delta^2 b(W)}{6}\bigg]\right| \\
\leq&\ C \delta^2 \mathbb{E} \abs{W 1(W\leq -z)} +C \delta^2 \P(Y_1\leq -z) \mathbb{E}|1(-z\leq W\leq 0) W e^{-\int_0^W r(u)du}| +C \delta^2 \P(Y_1\leq -z) \\
\leq&\ C \delta^2 (z\vee 1)^2 \P(W\leq -z) \magenta{+C\delta^2 \P(Y_1\leq -z)} \leq C \delta^2 (z\vee 1)^3 \P(Y_1\leq -z).
}
The second inequality is due to \magenta{Lemmas~\ref{fl2} and \ref{fl3}. }
The last inequality is due to Lemma~\ref{fl2} with $k = 0$. Combining the bounds proves Lemma~\ref{l4}.
\hfill $\square$\endproof
\subsubsection{Proof of Lemma~\ref{lem:error_expansion}.} \label{sec:errorexpproof}
\proof{Proof of Lemma~\ref{lem:error_expansion}}
Assume for now that for all $y \in (-\delta, \delta)$,
\begin{align}
&r(W+y)f_z'(W+y) - r(W)f_z'(W) \notag \\
=&\ y r(W+y)f_z''(W) - r(W+y)\int_{0}^{y}\int_{0}^{s} \Big(r(W+u) f_z''(W+u) + r'(W+u)f_z'(W+u) \Big)duds \notag \\
&- r(W+y)\int_{0}^{y}\Big(\frac{2}{a(W+s)} 1(W+s \leq -z) - \frac{2}{a(W)} 1(W \leq -z) \Big)ds \notag \\
&+ \mathbb{P}(Y_1 \leq -z)r(W+y)\int_{0}^{y}\Big(\frac{2}{a(W+s)} - \frac{2}{a(W)} \Big)ds + f_z'(W) \int_{0}^{y}r'(W+s)ds. \label{eq:interm}
\end{align}
We postpone verifying \eqref{eq:interm} to the end of this proof, but \eqref{eq:interm} implies
\begin{align*}
&\int_{-\delta}^{\delta} \Big(r(W+y)f_z'(W+y) - r(W)f_z'(W) \Big) K_W(y) dy \notag \\
=&\ \int_{-\delta}^{\delta} K_W(y) y r(W+y)f_z''(W) dy \\
&- \int_{-\delta}^{\delta} K_W(y) r(W+y)\int_{0}^{y}\int_{0}^{s} r(W+u) f_z''(W+u) duds dy \\
&- \int_{-\delta}^{\delta} K_W(y) r(W+y)\int_{0}^{y}\int_{0}^{s} r'(W+u)f_z'(W+u) duds dy\\
&- \int_{-\delta}^{\delta} K_W(y) r(W+y) \int_{0}^{y}\Big(\frac{2}{a(W+s)} 1(W+s \leq -z) - \frac{2}{a(W)} 1(W \leq -z) \Big)ds dy\\
&+ \mathbb{P}(Y_1\magenta{\leq -z}) \int_{-\delta}^{\delta}K_W(y)r(W+y)\int_{0}^{y}\Big(\frac{2}{a(W+s)} - \frac{2}{a(W)} \Big)ds dy \\
&+ f_z'(W)\int_{-\delta}^{\delta}K_W(y) \int_{0}^{y}r'(W+s)dsdy.
\end{align*}
We are almost done, but the last term on the right-hand side above requires some additional manipulations. Since $r'(x) = 0$ for $x > \beta$ and $K_W(y)=0$ for $W = -1/\delta$ and $y \in [-\delta, 0]$,
\begin{align*}
&\int_{-\delta}^{\delta} K_W(y)\int_{0}^{y}r'(W+s)ds dy \\
=&\ 1(W = -1/\delta)\int_{0}^{\delta} K_W(y)\int_{0}^{y}r'(W+s)ds dy\\
&+ 1(W = \beta) \int_{-\delta}^{0} K_W(y)\int_{0}^{y}r'(W+s)ds dy \\
&+ 1(W \in [-1/\delta + \delta, \beta - \delta]) \int_{-\delta}^{\delta} K_W(y)\int_{0}^{y}r'(W+s)ds dy,
\end{align*}
and for $W \in [-1/\delta + \delta, \beta - \delta]$,
\begin{align*}
&\int_{-\delta}^{\delta} K_W(y)\int_{0}^{y}r'(W+s)ds dy \\
=&\ \int_{-\delta}^{\delta} K_W(y)\int_{0}^{y} \big(r'(W+s) - r'(W)\big)ds dy +r'(W) \int_{-\delta}^{\delta} yK_W(y)dy \\
=&\ \int_{-\delta}^{\delta} K_W(y)\int_{0}^{y} \int_{0}^{s}r''(W+u)duds dy + r'(W) \frac{1}{6}\delta^2b(W).
\end{align*}
To conclude the proof, we verify \eqref{eq:interm}:
\begin{align*}
&r(W+y)f_z'(W+y) - r(W)f_z'(W) \notag \\
=&\ r(W+y)f_z'(W) + r(W+y) \int_{0}^{y} f_z''(W+s) ds - r(W)f_z'(W) \notag \\
=&\ r(W+y) \int_{0}^{y} f_z''(W+s) ds \magenta{+} f_z'(W) \int_{0}^{y} r'(W+s) ds.
\end{align*}
Now
\begin{align*}
\int_{0}^{y} f_z''(W+s)ds =&\ yf_z''(W) + \int_{0}^{y} \big(f_z''(W+s) - f_z''(W) \big)ds \\
=&\ y f_z''(W) - \int_{0}^{y}\Big(r(W+s)f_z'(W+s) - r(W)f_z'(W) \Big)ds \\
&- \int_{0}^{y}\Big(\frac{2}{a(W+s)} 1(W+s \leq -z) - \frac{2}{a(W)} 1(W \leq -z) \Big)ds\\
&+ \mathbb{P}(Y_1 \leq -z)\int_{0}^{y}\Big(\frac{2}{a(W+s)} - \frac{2}{a(W)} \Big)ds,
\end{align*}
and the fundamental theorem of calculus tells us that
\begin{align*}
&r(W+s)f_z'(W+s) - r(W)f_z'(W)\\
=&\ \int_{0}^{s} \Big(r(W+u) f_z''(W+u) + r'(W+u)f_z'(W+u) \Big)du,
\end{align*}
which proves \eqref{eq:interm}.
\hfill $\square$\endproof
\subsubsection{Proving Lemmas~\ref{fl2} and \ref{fl3}.}\label{fap5}
We first prove an upper bound on $\mathbb{E} e^{-tW}$ for $0\leq t\leq c_1 R^{1/4}$. Note that $\mathbb{E} e^{-t W} < \infty$ for $ t \geq 0$ because $W\geq -\sqrt{R}$ (cf. \eqref{f29}).
\begin{lemma}\label{l1}
There exist $c_1,C_1>0$ such that if $R\geq C_1$ and $0\leq t\leq c_1 R^{1/4}$, then
\ben{\label{04}
\mathbb{E} e^{-tW}\leq C e^{\frac{t^2}{2}-\frac{\delta t^3}{6}}.
}
\end{lemma}
\proof{Proof of Lemma \ref{l1}}
For $0\leq s\leq t$, define $h(s)=\mathbb{E} e^{-sW}$,
so $h'(s)= -\mathbb{E} \big(W e^{-sW}\big)$. We will shortly prove that
\ben{\label{05}
\Big(1+\frac{\delta s}{2}+ \frac{\delta^2 s^2}{6}\Big) h'(s)\leq (s+C\delta^2 s^3) h(s),
}
from which we have
\begin{align*}
h'(s)\leq \Big(\frac{s}{1+\frac{\delta s}{2}+ \frac{\delta^2 s^2}{6}}+\frac{C \delta^2 s^3}{1+\frac{\delta s}{2}+ \frac{\delta^2 s^2}{6}}\Big) h(s) \leq&\ \Big(\frac{s}{1+\frac{\delta s}{2}}+C \delta^2 t^3\Big) h(s) \\
\leq&\ \Big(s\big(1-\delta s/2 + (\delta s/2)^2 \big) +C\delta^2 t^3\Big)h(s) \\
\leq&\ \Big(s-\delta s^2/2 +C\delta^2 t^3\Big)h(s).
\end{align*}
Above we have used $0 \leq s \leq t$. The \magenta{third} inequality follows from the inequality $1/(1+x) \leq (1-x+x^2)$ for $x \geq 0$.
Since $h(0) = 1$,
\be{
\log (h(t)) = \int_{0}^{t} \frac{h'(s)}{h(s)} ds \leq \int_0^t (s-\frac{\delta s^2}{2}+C\delta^2 t^3) ds\leq \frac{t^2}{2}-\frac{\delta t^3}{6}+C\delta^2 t^4\leq \frac{t^2}{2}-\frac{\delta t^3}{6}+\magenta{C}c_1^4,
}
which implies \eqref{04}. We are left to prove \eqref{05}. Recall from \eqref{eq:gxbarf} that $\mathbb{E} G_{\magenta{\tilde X}} f(W) = 0$ provided $\mathbb{E} \abs{f(W)} < \infty$. Choose $f(x) = -e^{-sx}/s$, so that $f'(x) = e^{-sx}$ and $f''(x) = -s e^{-sx}$. The form of $G_{\magenta{\tilde X}} f(x)$ in \eqref{eq:steinx} implies
\begin{align}
\mathbb{E} \bigg[ \frac{a(W)}{2}f''(W)+b(W)f'(W)+\int_{-\delta}^\delta (f''(W+y)-f''(W)) K_W(y) dy \bigg] =0. \label{eq:identstein}
\end{align}
Using the form of $b(x)$ from \eqref{eq:adeff}, we have
\besn{\label{06}
\mathbb{E}[b(W)f'(W)] =&\ \mathbb{E}[1(W\leq \beta) (-\mu W) e^{-sW}+1(W>\beta)(-\mu \beta) e^{-sW}]\\
=&\mu h'(s)+\mathbb{E}[1(W>\beta)\mu(W-\beta)e^{-sW}]\\
\geq & \mu h'(s).
}
Similarly, using the form of $a(x)$ from \eqref{eq:adeff} and the fact that $W \geq -1/\delta$, we have
\besn{\label{07}
-\mathbb{E}\Big[\frac{a(W)}{2}f''(W)\Big] =&\ \mu \mathbb{E}\Big[1(-\frac{1}{\delta}\leq W\leq \beta) (1+\frac{\delta W}{2})s e^{-sW}+1(W>\beta)(1+\frac{\delta \beta}{2})s e^{-sW}\Big]\\
=&\ \mu \mathbb{E}\Big[(1+\frac{\delta W}{2})s e^{-sW}\Big]\magenta{-}\mu \mathbb{E}\Big[1(W>\beta) \frac{\delta (W-\beta)}{2}s e^{-sW}\Big]\\
\leq&\ \mu \mathbb{E}\Big[(1+\frac{\delta W}{2})s e^{-sW}\Big]=\mu sh(s)-\mu \frac{\delta s}{2}h'(s).
}
Lastly,
\begin{align*}
-\mathbb{E} \bigg[ \int_{-\delta}^\delta (f''(W+y)-f''(W)) K_W(y) dy\bigg] =&\ \mathbb{E} \bigg[\int_{-\delta}^\delta s (e^{-s(W+y)}-e^{-sW}) K_W(y)dy\bigg] \nonumber\\
\leq&\ \mathbb{E} \bigg[se^{-sW} \int_{-\delta}^\delta (-sy+s^2 y^2e^{\abs{sy}} )K_W(y)dy\bigg] \\
\leq&\ \mathbb{E} \bigg[se^{-sW} \int_{-\delta}^\delta (-sy+ C s^2 y^2 )K_W(y)dy\bigg].
\end{align*}
The first inequality is due to $e^{-x}-1 \leq -x + x^2 e^{\abs{x}}$, and the second inequality is due to $|sy|\leq c_1 R^{1/4}\frac{1}{\sqrt{R}}\leq C$. Recall from \eqref{MD:k3} that $\int_{-\delta}^\delta K_W(y)dy = \frac{1}{2} a(W) $ and that $\int_{-\delta}^\delta y K_W(y)dy = \frac{\delta^2 b(W)}{6}$, and from \eqref{fEC6} that $\abs{a(x)} \leq C \mu$. Also note from \eqref{eq:adeff} that $b(x) = -\mu x - 1(x > \beta)(\mu \beta - \mu x)$. Therefore, the right-hand side above is bounded by
\begin{align}
& \mathbb{E} \bigg[s^2e^{-sW} \frac{-\delta^2 b(W)}{6}\bigg]+ C\mu \delta^2 s^3 \mathbb{E} e^{-sW} \nonumber\\
=&\ \frac{\delta^2 s^2}{6} \mathbb{E} \big[\mu W e^{-sW}\big] +\frac{\delta^2 s^2}{6} \mathbb{E} \Big[1(W>\beta) (\mu \beta - \mu W) e^{-sW}\Big]+ C\mu \delta^2 s^3 \mathbb{E} e^{-sW}\nonumber\\
=&\ -\mu \frac{\delta^2 s^2}{6} h'(s) +\frac{\delta^2 s^2}{6} \mathbb{E} \Big[1(W>\beta) (\mu \beta - \mu W) e^{-sW}\Big]+C\mu \delta^2 s^3 h(s)\nonumber\\
\leq&\ -\mu \frac{\delta^2 s^2}{6} h'(s)+C\mu \delta^2 s^3 h(s).\label{08}
\end{align}
Combining \eqref{06}--\eqref{08} with \eqref{eq:identstein} concludes the proof.
\hfill $\square$\endproof
Now we are ready to prove Lemmas~\ref{fl2} and \ref{fl3}.
\proof{Proof of Lemma~\ref{fl2}}
Suppose $k \geq 1$. We prove the lemma by showing that
\begin{align}
\mathbb{E}|1(W\leq -z) W^k| \leq&\ C(k) (z \vee 1)^{k} e^{-\frac{z^2}{2}-\frac{\delta z^3}{6}}, \notag \\
\text{ and } \quad C \frac{1}{z} e^{-\frac{z^2}{2}-\frac{\delta z^3}{6}} \leq&\ \mathbb{P}(Y_1 \leq -z). \label{eq:l1part1}
\end{align}
Let us start with the first inequality.
\magenta{For $0<z<1$, the first inequality holds because of Lemma \ref{l1}. Therefore, we only need to consider the case $z\geq 1$.}
Integration by parts yields
\be{
\mathbb{E}|1(W\leq -z) W^k|=z^k \P(W\leq -z)+\int_{-\infty}^{-z} k (-y)^{k-1} \P(W\leq y) dy.
}
For the first term, note that
\ben{\label{12}
z^k\P(W\leq -z) \leq z^{k} \mathbb{E}\Big( e^{z(-z -W)} 1(W \leq -z) \Big) \leq z^k e^{-z^2} \mathbb{E} e^{-zW}\leq Cz^k e^{-\frac{z^2}{2}-\frac{\delta z^3}{6}},
}
where the last inequality is due to Lemma~\ref{l1}. For the second term, we have
\bes{
\int_{-\infty}^{-z} k (-y)^{k-1} \P(W\leq y) dy \magenta{\leq}&\ \int_{-\infty}^{-z} k (-y)^{k-1} \mathbb{E}\Big( e^{z(y-W)} 1(W \leq y) \Big) dy\\
\leq&\ \int_{-\infty}^{-z} k (-y)^{k-1} e^{zy} \mathbb{E} e^{-zW} dy\\
\leq&\ C e^{\frac{z^2}{2}-\frac{\delta z^3}{6}} \int_{-\infty}^{-z} k (-y)^{k-1}e^{zy} dy\\
\leq&\ C(k) z^{k-2} e^{-\frac{z^2}{2}-\frac{\delta z^3}{6}}.
}
The second-to-last inequality is due to Lemma~\ref{l1}, and in the last inequality we used $\int_{-\infty}^{-z} (-y)^{k-1}e^{zy}dy\leq C(k)z^{k-2}e^{-z^2}$ for $k\geq 1$ \magenta{and $z\geq 1$}. This proves the first inequality in \eqref{eq:l1part1}, and we now argue the second one. Recall that $r(x) = 2b(x)/a(x)$ and that the density of $Y_1$ in \eqref{eq:stddenf} implies
\bes{
\P(Y_1\leq -z)= \int_{-\infty}^{-z} \frac{2\kappa}{a(y)} e^{\int_{0}^{y} r(u) du} dy \geq \int_{-z-1}^{-z} \frac{2\kappa}{a(y)} e^{\int_{0}^{y} r(u) du} dy = &\ \int_{-z-1}^{-z} \frac{2\kappa}{a(y)} e^{\int_{0}^{y} \frac{-2u}{2 + \delta u} du} dy.
}
The last equality follows from the assumption $z \leq \sqrt{R} - 2 $ in \eqref{fEC4} so that $-z-1 \geq -\sqrt{R} $, meaning $a(x) = 2\mu - \delta b(x)$ for $x \in [-z-1, \magenta{0}]$. We now argue that $2\kappa/a(x) \geq C$ for some constant $C> 0 $ that depends only on $\beta$. The density of $Y_1$ is given by \eqref{eq:stddenf}, so we know that the normalizing constant $\kappa$ satisfies
\begin{align*}
\frac{1}{\kappa} =&\ \int_{-\infty}^{\infty} \frac{2}{a(x)} e^{\int_{0}^{x} 2b(y)/a(y) dy} dx \\
=&\ \int_{-\infty}^{0} \frac{2}{a(x)} e^{\int_{x}^{0} -2\abs{b(y)}/a(y) dy} dx + \int_{0}^{\infty} \frac{2}{a(x)} e^{\int_{0}^{x} -2\abs{b(y)}/a(y) dy} dx \\
\leq &\ \frac{C}{\mu} \int_{-\infty}^{0} e^{\int_{x}^{0} -\frac{2\abs{b(y)}} {\mu (2+\delta \beta)} dy} dx + \frac{C}{\mu} \int_{0}^{\infty} e^{\int_{0}^{x} -\frac{2\abs{b(y)}} {\mu (2+\delta \beta)} dy} dx.
\end{align*}
The second equality is true because $b(x)$ in \eqref{eq:adeff} is nonincreasing and $b(0) = 0$, meaning $b(x) = -\abs{b(x)} 1(x \geq 0) + \abs{b(x)} 1(x < 0)$. The inequality is true because $a(x) \geq \mu$ and is \magenta{nondecreasing} with $ a(x) \leq a(\beta) = \mu(2 + \delta \beta)$. Since $b(x)/\mu$ is a function that depends only on $\beta$, the right-hand side is a quantity that increases in $\delta$. In \eqref{fEC4} we assumed $\delta < 1/2$, so $ 1/\kappa \leq \sup_{ \delta \in (0,1/2)} 1/\kappa \leq C/\mu$. Combining this with the fact that $a(x) \leq C\mu$ from \eqref{fEC6} we get $2\kappa/a(x) \geq C$. Therefore,
\bes{
\P(Y_1\leq -z) \geq &\ C \int_{-z-1}^{-z} e^{\int_{0}^{y} \frac{-2u}{2 + \delta u} du} dy = C\int_{-z-1}^{-z} e^{ \frac{4\log(\delta y+2) -\magenta{2} \delta y}{\delta^2} -\frac{4\log(2)}{\delta^2}} dy .
}
Taylor expansion tells us that
\begin{align}
\frac{4\log(\delta y+2) - \magenta{2}\delta y}{\delta^2} -\frac{4\log(2)}{\delta^2} = \frac{-y^2}{2} + \frac{\delta y^3}{6} - \frac{\delta^2 \xi^4}{16} \label{eq:logtaylor}
\end{align}
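For completeness, \eqref{eq:logtaylor} can be checked as follows (a routine expansion; the Lagrange form of the remainder is one convenient way to read the $\xi$ term): writing $\log(\delta y + 2) = \log 2 + \log(1+\delta y/2)$ and expanding $\log(1+x) = x - x^2/2 + x^3/3 - x^4/(4(1+\theta)^4)$ for some $\theta$ between $0$ and $x$,

```latex
4\log(\delta y + 2) - 4\log 2
  = 2\delta y - \frac{\delta^2 y^2}{2} + \frac{\delta^3 y^3}{6}
    - \frac{\delta^4 y^4}{16(1+\theta)^4},
```

so subtracting $2\delta y$ and dividing by $\delta^2$ yields $-y^2/2 + \delta y^3/6 - \delta^2\xi^4/16$ with $\xi^4 := y^4/(1+\theta)^4$, matching \eqref{eq:logtaylor}.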
for some $\xi$ between $0$ and $y$. Recall that $\delta = 1/\sqrt{R}$ and $z\leq c_1 R^{1/4}$, meaning $\delta^2 \abs{\xi}^4 \leq C$ when $y \in [-z-1,-z]$, so
\bes{
\P(Y_1\leq -z) \geq C\int_{-z-1}^{-z} e^{-\frac{y^2}{2}+\frac{\delta y^3}{6}}dy =&\ C\int_{-1}^0 e^{-\frac{(-z+y)^2}{2}+\frac{\delta(-z+y)^3}{6}}dy.
}
Now for $y \in [-1,0]$ and $z\leq c_1 R^{1/4}$, we use the fact that $\delta z^2 \leq C$ to get
\begin{align*}
-\frac{(-z+y)^2}{2}+\frac{\delta(-z+y)^3}{6} =&\ -\frac{z^2}{2}+zy - \frac{y^2}{2}+ \delta\frac{z^2y}{2} - \delta \frac{z y^2}{2} + \delta \frac{y^3}{6} -\frac{\delta z^3}{6}\\
\geq&\ -\frac{z^2}{2}+zy - \frac{y}{2}(y - \delta z^2) -\frac{\delta z^3}{6} - C\\
\geq&\ -\frac{z^2}{2}+zy -\frac{\delta z^3}{6} - C,
\end{align*}
so
\besn{\label{f30}
\P(Y_1\leq -z) \geq C\int_{-1}^0 e^{-\frac{z^2}{2}+zy-\frac{\delta z^3}{6}}dy \geq C \frac{1}{z}e^{-\frac{z^2}{2}-\frac{\delta z^3}{6}}.
}
This proves \eqref{eq:l1part1} for $k\geq 1$. For $k=0$, the lemma follows from \eqref{12} and \eqref{f30}.
\hfill $\square$\endproof
\proof{Proof of Lemma~\ref{fl3}}
\magenta{The bound \eqref{14} trivially holds if $0<z<1$. Therefore, we assume $z\geq 1$ in the following.}
Using integration by parts, we have for $k\geq 1$,
\besn{\label{15}
&\mathbb{E}|1(-z\leq W\leq 0)W^k e^{-\int_0^W r(u)du}|\\
=&\ -z^k e^{-\int_0^{\magenta{-z}} r(u)du} \P(W\leq -z)+\int_{-z}^0 \big(k(-y)^{k-1}+(-y)^kr(y)\big)e^{-\int_0^y r(u)du}\P(W\leq y) dy\\
\leq&\ C(k)\int_{-z}^0 \big((-y)^{k-1}+(-y)^{k+1}\big)e^{-\int_0^y r(u)du}\P(W\leq y) dy.
}
The last inequality is due to $r(y) = 2b(y)/a(y) = 2\mu(-y)/a(y) \leq C (-y)$ when $y \leq 0$, because $a(y) \geq \mu$. The same argument tells us that for $k=0$,
\ben{\label{16}
\mathbb{E}[1(-z\leq W\leq 0) e^{-\int_0^W r(u)du}]\leq \magenta{1+}C\int_{-z}^0 (-y)e^{-\int_0^y r(u)du}\P(W\leq y) dy.
}
In what follows we prove that for $k\geq 0$,
\ben{\label{13}
\int_{-z}^0 (-y)^k e^{-\int_0^y r(u)du}\P(W\leq y) dy\leq Cz^{k\vee 1}.
}
Lemma~\ref{fl3} follows from \eqref{15}, \eqref{16} and \eqref{13}.
Without loss of generality, assume $z$ is an integer; otherwise, round it up to the next integer.
The Taylor expansion in \eqref{eq:logtaylor} implies that for $0\leq -y\leq z \leq c_1 R^{1/4}$,
\be{
-\int_0^y r(u)du = -\int_0^y \frac{2b(u)}{a(u)} du = -\frac{4\log(\delta y+2) - \magenta{2}\delta y}{\delta^2} + \frac{4\log(2)}{\delta^2} \leq \frac{y^2}{2}-\frac{\delta y^3}{6}+C.
}
Thus,
\bes{
&\int_{-z}^0 (-y)^k e^{-\int_0^y r(u)du}\P(W\leq y) dy\\
\leq &C\sum_{j=-z}^{-1} |j|^k \int_j^{j+1} e^{\frac{y^2}{2}-\frac{\delta y^3}{6}} e^{|j|y}e^{-|j|y} \P(W\leq y) dy\\
\leq &C \sum_{j=-z}^{-1} |j|^k \sup_{j\leq y\leq j+1} [e^{\frac{y^2}{2}+|j|y}] \sup_{j\leq y\leq j+1} [e^{-\frac{\delta y^3}{6}}]
\int_{j}^{j+1} e^{-|j|y} \P(W\leq y) dy\\
\leq & C \sum_{j=-z}^{-1} |j|^k e^{-\frac{j^2}{2}} \sup_{j\leq y\leq j+1} [e^{-\frac{\delta y^3}{6}}] \int_{-\infty}^{\infty} e^{-|j|y} \P(W\leq y) dy\\
=&C \sum_{j=-z}^{-1} |j|^k e^{-\frac{j^2}{2}} \sup_{j\leq y\leq j+1} [e^{-\frac{\delta y^3}{6}}] \frac{1}{|j|} \mathbb{E} e^{-|j| W}.
}
We used $\sup_{j\leq y\leq j+1} [e^{\frac{y^2}{2}+|j|y}]=e^{-\frac{j^2}{2}+\frac{1}{2}}$ in the last inequality and integration by parts in the last equality. Invoking Lemma~\ref{l1}, we have
\bes{
\int_{-z}^0 (-y)^k e^{-\int_0^y r(u)du}\P(W\leq y) dy \leq&\ C \sum_{j=-z}^{-1} |j|^{k-1} e^{-\frac{j^2}{2}} \sup_{j\leq y\leq j+1} [e^{-\frac{\delta y^3}{6}}] e^{\frac{j^2}{2}} e^{-\frac{\delta |j|^3}{6}}\\
\leq&\ C \sum_{j=-z}^{-1} |j|^{k-1} \leq C z^{k\vee 1},
}
where we used
\be{
\sup_{j\leq y\leq j+1} [-\frac{\delta y^3}{6} -\frac{\delta |j|^3}{6}]\leq C \delta j^2\leq C,
\ \text{for}\ 1\leq -j\leq z \leq c_1 R^{1/4}.
}
This proves \eqref{13}.
\hfill $\square$\endproof
\subsubsection{Proof of \eqref{f13ec}.}
\proof{Proof of \eqref{f13ec}.}
Inequality \eqref{f13ec} contains an upper bound on $\left|\frac{\mathbb{P}(Y_1\geq z)}{\mathbb{P}(W\geq z)}-1\right|$.
\magenta{A} similar upper bound on $\left|\frac{\mathbb{P}(W\geq z)}{\mathbb{P}(Y_1\geq z)}-1\right|$ is a consequence of Theorem~4.1 of \cite{Brav2017}. The following simple modification of the argument in \cite{Brav2017} implies \eqref{f13ec}. It follows from (4.8), (4.9), (4.11), and (4.12) of \cite{Brav2017} that there exist some $c_1,C_1 > 0$ such that for $R\geq C_1$ and $0<z\leq c_1 R$,
\bes{
&|\P(Y_1\geq z)-\P(W\geq z)|\\
\leq& C \delta^2 \P(Y_1\geq z)+C\delta^2 \P(W\geq z)+C\delta^2 \min\{(z\vee 1), \frac{1}{\delta^2}\}\P(Y_1\geq z)+C\delta \P(W\geq z).
}
Dividing both sides by $\P(W\geq z)$ we have
\be{
\left|\frac{\P(Y_1\geq z)}{\P(W\geq z)}-1 \right|\leq C\delta^2 (z\vee 1) \frac{\P(Y_1\geq z)}{\P(W\geq z)}+C\delta.
}
Note that $C\delta^2 (z\vee 1) \leq C(\delta^2 z + \delta^2) = C(z/R + 1/R) \leq C(c_1 + 1/C_1) $, and choose $c_1$ small enough and $C_1$ large enough so that $C(c_1 + 1/C_1) < 1/2$. Then, repeating the argument used below \eqref{fEC5} implies \magenta{\eqref{f13ec} for $R\geq C_1$, and the argument above \eqref{fEC4} implies \eqref{f13ec} for $R<C_1$.}
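That rearrangement can be sketched as follows (a hedged reconstruction of the argument below \eqref{fEC5}, writing $x := \P(Y_1\geq z)/\P(W\geq z)$ and $a := C\delta^2 (z\vee 1) < 1/2$): the previous display reads $|x-1| \leq a x + C\delta$, so

```latex
x(1-a) \leq 1 + C\delta
  \quad\Longrightarrow\quad
  x \leq \frac{1+C\delta}{1-a} \leq 2(1+C\delta) \leq C,
```

and therefore $|x-1| \leq aC + C\delta \leq C\delta^2 (z\vee 1) + C\delta$, which gives the claimed bound.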
\hfill $\square$\endproof
\subsection{Proof of Theorem~\ref{thm:md-std}}\label{fap3}
We first recall Theorem~\ref{thm:md-std}.
\begin{theorem}
\label{thm:md-stdec}
Assume $n = R + \beta \sqrt{R}$ for some fixed $\beta > 0$. \blue{There exist positive constants $c_0$ and $C$ depending only on $\beta$ such that
\begin{align}
& \left|\frac{\mathbb{P}(Y_0\geq z)}{\mathbb{P}(W\geq z)}-1\right|\leq \frac{C}{\sqrt{R}}\left(1+z\right) \quad \text{ for } 0<z\leq c_0 R^{1/2}\ \text{and} \label{f12ec} \\
&\left|\frac{\mathbb{P}(Y_0\leq -z)}{\mathbb{P}(W\leq -z)}-1\right|\leq \frac{C}{\sqrt{R}}\left(1+z^3\right),\ \text{ for } 0<z\leq \min\{c_0 R^{1/6}, R^{1/2}\}. \label{f15ec}
\end{align}
}
\end{theorem}
Theorem~\ref{thm:md-stdec} follows from a proof similar to, but simpler than, that of Theorem~\ref{thm:md-highec}.
In particular, the bound \eqref{f15ec} can be proved by a simple adaptation of the arguments in \cite{ChFaSh13a}, and its proof is therefore omitted.
The proof of \eqref{f12ec} below may be useful for other exponential approximation problems.
We \magenta{will use $c_0,\ C,\ C_0$ to denote positive constants} that may differ from line to line, but will only depend on $\beta$. We require the following two lemmas. The first one is proved in Section~\ref{fap7}, and the second in Section~\ref{fap6}.
\begin{lemma}\label{l2}
There exist $C,C_0 > 0$ that depend only on $\beta$, such that for any $R \geq C_0$ and any $z > \beta$,
\ben{\label{5}
\P(W\geq \magenta{z+\delta})\leq C e^{-(\beta- 3\beta^2 \delta)z}.
}
\end{lemma}
To state the second lemma we let $f_z(x)$ solve the Poisson equation
\ben{\label{8}
b(x) f_z'(x) + \mu f_z''(x)= 1(x\geq z) - \P(Y_0\geq z).
}
\begin{lemma}\label{lem:eq9}
There exist $C_0,C > 0$ depending on $\beta$ such that for all $R \geq C_0$ \magenta{and any $z>\beta$},
\begin{align}
&\mathbb{E} \Big( \int_{-\delta}^{\delta} \big( b(W+y)f_z'(W+y) - b(W) f_z'(W)\big) \frac{K_W(y)}{\mu } dy \Big) \notag \\ \leq&\ C\delta \P(Y_0\geq z) \mathbb{E} \Big( \sup_{|s|\leq \delta}\big(1+e^{\beta (W+s)}1(\beta \leq W+s \leq z)\magenta{\big)\Big)}, \label{eq:lem91}
\end{align}
where $K_W(y)$ is defined in \eqref{eq:kdef}. Furthermore,
\begin{align}
& - \frac{\delta}{2\mu } \mathbb{E} \big( b^2(W) f_z'(W)\big) \leq C\delta \Big(\P(W\geq z)+\P(Y_0\geq z)\big(1+\mathbb{E} e^{\beta W}1(\beta\leq W\leq z) + |W|\big) \Big), \label{eq:lem92}\\
&- \frac{\delta}{2\mu } \mathbb{E} \big( b(W)\big( \mathbb{P}(Y_0 \geq z) -1(W \geq \delta + z) \big)\big) \leq C\delta \P(W\geq z+\delta)\magenta{.} \label{eq:lem93}
\end{align}
\end{lemma}
\proof{Proof of \eqref{f12ec}}
Note that if $R$ is bounded, then, for fixed $\beta$, $n$ is also bounded and $\P(W\geq z)$ is bounded away from 0 for $z$ in the bounded range $0<z\leq c R^{1/2}$. By choosing $C$ sufficiently large, \eqref{f12ec} then holds trivially. Therefore, in the following, we assume $R\geq C_0$ for a sufficiently large $C_0$. Additionally, the result for the finite range $z \in (0, \beta+\delta]$ follows from the Berry-Esseen bound in Theorem 3 of \cite{BravDaiFeng2016}, so we assume $z > \beta + \delta$.
Recall that the density of $Y_0$ is given in \eqref{eq:stddenf}, and that $b(x) =-(\mu x \wedge \mu \beta)$.
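For ease of reference in the computations below, we record the explicit piecewise form of this density, obtained by integrating $b(u)/\mu = -(u \wedge \beta)$; the symbols $\pi$ and $\kappa$ are local to this display:
\be{
\pi(y) = \frac{1}{\kappa} e^{\int_{0}^{y} \frac{b(u)}{\mu} du}
= \frac{1}{\kappa}
\begin{cases}
e^{-y^2/2}, & y\leq \beta,\\
e^{\beta^2/2 - \beta y}, & y> \beta,
\end{cases}
\qquad \kappa = \int_{-\infty}^{\infty} e^{\int_{0}^{y} \frac{b(u)}{\mu} du} dy.
}
These two exponents appear repeatedly in the estimates below.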
Just like in \eqref{f18}, one may verify that the solution to the Poisson equation \eqref{8} satisfies
\ben{\label{f33}
f_z'(x)=
\begin{cases}
-\P(Y_0\geq z)e^{-\int_{0}^{x} \frac{b(u)}{\mu} du} \int_{-\infty}^{x} \frac{1}{\mu } e^{\int_{0}^{y} \frac{b(u)}{\mu} du} dy , & x<z, \\
-\P(Y_0\leq z)e^{-\int_{0}^{x} \frac{b(u)}{\mu} du} \int_{x}^{\infty} \frac{1}{\mu } e^{\int_{0}^{y} \frac{b(u)}{\mu} du} dy, & x\geq z,
\end{cases}
}
so \eqref{eq:gxbarf} implies $\mathbb{E} G_{\tilde X} f_z(W) = 0$. Taking expected values in \eqref{8} then gives us
\begin{align*}
&\P(W\geq z) - \P(Y_0\geq z) \\
=&\ \mathbb{E}\Big( b(W) f_z'(W) + \mu f_z''(W)\Big) - \mathbb{E} G_{\tilde X} f_z(W) \\
=&\ \mathbb{E}\bigg(\mu f_z''(W)- \int_{-\delta}^{\delta}f_{z}''(W+y) K_W(y) dy \bigg) \\
=&\ \mathbb{E}\big( 1(W\geq z) - \P(Y_0\geq z) -b(W)f_z'(W)\big)\\
&-\mathbb{E}\bigg(\int_{-\delta}^{\delta} \big( 1(W+y\geq z) - \P(Y_0\geq z) - b(W+y) f_z'(W+y)\big) \frac{1}{\mu}K_W(y) dy\bigg),
\end{align*}
where $K_W(y)$ is defined in \eqref{eq:kdef}, the second equality is due to \eqref{eq:steinx}, and the final equality follows from $\mu f_z''(x) = - b(x) f_z'(x) + 1(x \geq z) - \mathbb{P}(Y_0 \geq z)$. Using $K_W(y) \geq 0$ and recalling from \eqref{MD:k3} that $\int_{-\delta}^{\delta} K_W(y) dy = \mu - \frac{\delta}{2}b(W)$, we have
\begin{align*}
&b(W)f_z'(W) = \int_{-\delta}^{\delta}b(W) f_z'(W) \frac{K_W(y)}{\mu } dy + \frac{\delta}{2\mu } b^2(W) f_z'(W), \\
&-\mathbb{E}\bigg(\int_{-\delta}^{\delta}1(W+y\geq z) \frac{1}{\mu}K_W(y) dy\bigg) \leq -\mathbb{E}\Big(1(W \geq z+\delta) \big(1 - \frac{\delta}{2 \mu} b(W)\big) \Big),\\
&\mathbb{E}\bigg(\int_{-\delta}^{\delta} \P(Y_0\geq z) \frac{1}{\mu}K_W(y) dy\bigg) = \P(Y_0\geq z) \mathbb{E}\Big( 1 - \frac{\delta}{2 \mu} b(W) \Big),
\end{align*}
and therefore,
\begin{align}
&\P(W\geq z) - \P(Y_0\geq z) \notag \\
\leq&\ \P(z \leq W < z+\delta) + \mathbb{E} \Big( \int_{-\delta}^{\delta} \big( b(W+y)f_z'(W+y) - b(W) f_z'(W)\big) \frac{K_W(y)}{\mu } dy \Big) \notag \\
&- \frac{\delta}{2\mu } \mathbb{E} \big( b^2(W) f_z'(W)\big) - \frac{\delta}{2\mu } \mathbb{E} \big( b(W)\big( \mathbb{P}(Y_0 \geq z) -1(W \geq \delta + z) \big)\big). \label{10}
\end{align}
Subtracting $\P(z \leq W < z+\delta)$ from both sides and using Lemma~\ref{lem:eq9} to bound the remaining three terms, we get
\begin{align*}
&\P(W\geq z+\delta) - \P(Y_0\geq z) \notag \\
\leq&\ C\delta\sup_{|s|\leq \delta}\Big( \P(W\geq z)+ \P(Y_0\geq z)\big(1+\mathbb{E}|W|+\mathbb{E} e^{\beta(W+s)}1(\beta\leq W+s\leq z)\big) \Big).
\end{align*}
We now argue that the right-hand side above can be bounded by $ C\delta \P(Y_0\geq z) (z\vee 1)$. Note that $\mathbb{E} \abs{W} \leq C$ due to Lemma~\ref{lem:higherders}. Next, since $b(x)/\mu = -(x \wedge \beta)$ only depends on $\beta$ and $z \geq \beta$, then $\mathbb{P}(Y_0 \geq z) = \frac{\int_{z}^{\infty} e^{\beta^2/2 - \beta y} dy}{\int_{-\infty}^{\infty} e^{\int_{0}^{y} b(u)/\mu\, du } dy} = C e^{-\beta z}$ for some $C>0$ depending only on $\beta$. Thus, Lemma~\ref{l2} tells us that for $R\geq C_0$ and $\beta+\delta<z\leq c_0 R^{1/2}$,
\be{
\P(W\geq z) = \frac{ \P(W\geq z)}{\P(Y_0\geq z)}\P(Y_0\geq z)\leq C\P(Y_0\geq z).
}
Moreover, for $|s|\leq \delta$,
\bes{
&\mathbb{E} \big(e^{\beta(W+s)}1(\beta\leq W+s\leq z)\big)\\
&= e^{\beta^2}\P(\beta\leq W+s\leq z) +\int_\beta^z \beta e^{\beta y} \P(y<W+s\leq z) dy \\
&\leq C+\beta\int_{\beta}^z e^{\beta u} \P(W+s\geq u)du\\
&\leq C+ \int_{\beta}^z C du\\
&\leq C(z\vee 1).
}
The first equality follows from integration by parts, and the second-last inequality is due to Lemma~\ref{l2}. Thus we have shown that
\be{
\P(W\geq z+\delta)-\P(Y_0\geq z)\leq C\delta \P(Y_0\geq z) (z\vee 1).
}
Note that $|\P(Y_0\geq z)/\P(Y_0\geq z+\delta)-1|=|e^{\beta \delta}-1|\leq \delta C$. Therefore,
\be{
\P(W\geq z+\delta)-\P(Y_0\geq z+\delta)(1+\delta C)\leq C\delta \P(Y_0\geq z+\delta)(1+\delta C)(z\vee 1).
}
Dividing both sides of the above inequality by $\P(W\geq z+\delta)$, we obtain
\be{
1-\frac{\P(Y_0\geq z+\delta)}{\P(W\geq z+\delta)}(1+\delta C)\leq C\delta \frac{\P(Y_0\geq z+\delta)}{\P(W\geq z+\delta)}(1+\delta C)(z\vee 1).
}
Choosing $C_0$ large enough and $c_0$ small enough, and using $R\geq C_0$ and $0<z\leq c_0 R^{1/2}$, the constant in front of $\frac{\P(Y_0\geq z+\delta)}{\P(W\geq z+\delta)}$ on the right-hand side can be made less than 1/2, implying
\be{
1-\frac{\P(Y_0\geq z+\delta)}{\P(W\geq z+\delta)} \leq C\delta (z\vee 1).
}
A similar argument can be repeated to show $\frac{\P(Y_0\geq z+\delta)}{\P(W\geq z+\delta)} -1 \leq C\delta (z\vee 1)$, implying \eqref{f12ec}.
\hfill $\square$\endproof
\subsubsection{Proof of Lemma~\ref{l2}.}\label{fap7}
We require the following auxiliary result, whose proof is provided after the proof of Lemma~\ref{l2}.
\begin{lemma}\label{l3}
There exist constants $C,C_0 > 0$ depending only on $\beta$ such that for $R \geq C_0$,
\ben{\label{6}
\mathbb{E} e^{(\beta-3\beta^2 \delta)W}\leq C/\delta.
}
\end{lemma}
\proof{Proof of Lemma~\ref{l2}}
Since $\delta = 1/\sqrt{R}$, we can choose $C_0$ large enough so that $(\beta-3\beta^2 \delta)>\beta/2$. For notational convenience, we set $\nu = 3\beta^2 \delta$. Define
\be{
f''(x)=
\begin{cases}
(\beta-\nu)e^{(\beta-\nu)x}, & x\leq z\\
\text{linear interpolation}, & z<x\leq z+\Delta\\
0, & x>z+\Delta,
\end{cases}
}
with $0<\Delta\leq \delta$ to be chosen,
$f'(x) = 1 + \int_{0}^{x} f''(y) dy$ and $f(x) = \int_{0}^{x} f'(y) dy$. Note that $f(x)$ grows linearly in $x$ when $x \geq z + \Delta$ because $f'(x) = f'(z+\Delta)$ for $x \geq z + \Delta$, so $\mathbb{E} \abs{f(W)} < \infty$ because $W$ is bounded from below and $\mathbb{E} \abs{W} < \infty$. Therefore, $\mathbb{E} G_{\tilde X} f(W) = 0$ due to \eqref{eq:gxbarf}, implying $ \mathbb{E}\big(-b(W)f'(W)\big) = \mathbb{E} \Big(\int_{-\delta}^{\delta} f''(W+y) K_{W}(y) dy\Big)$ if we use the form of $G_{\tilde X} f(x)$ in \eqref{eq:steinx}. Note also that $f'(0) = 1$, $f'(x) \geq 0$, and $f'(x) \geq e^{(\beta-\nu)z}$ for $x \geq z + \Delta$. Using $b(x) = -(\mu x \wedge \mu \beta)$ and the assumption that $z>\beta$ we therefore have
\bes{
\mathbb{E}\big(-b(W)f'(W)\big) \geq&\ \mathbb{E}\big(\mu We^{(\beta-\nu)W}1(W<\beta)\big) \\
&\quad + \mathbb{E}\big(\mu \beta e^{(\beta-\nu)W}1(\beta\leq W\leq z)\big) +\mathbb{E}\big(\mu \beta e^{(\beta-\nu)z}1(W> z+\Delta)\big)\\
\geq&\ -\mu C+ \mathbb{E}\big(\mu \beta e^{(\beta-\nu)W}1(\beta\leq W\leq z)\big) +\mathbb{E}\big(\mu \beta e^{(\beta-\nu)z}1(W> z+\Delta)\big),
}
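The first two terms on the right-hand side of the display above rely on the closed form of $f'$ below $z$, which follows from $f'(0)=1$ and the definition of $f''$ (recorded here as a quick verification):
\be{
f'(x) = 1 + \int_{0}^{x} (\beta-\nu) e^{(\beta-\nu)y} dy = e^{(\beta-\nu)x}, \qquad x\leq z.
}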
where the second inequality is due to $-\abs{x} e^{-(\beta-\nu)\abs{x}} 1(x < \beta) \geq -C$. Recalling from \eqref{MD:k3} that $\int_{-\delta}^{\delta} K_{W}(y)dy = \mu-\delta b(W)/2 $, we have
\bes{
\mathbb{E} \Big(\int_{-\delta}^{\delta} f''(W+y) K_{W}(y) dy\Big) &\leq\mathbb{E}\Big( \sup_{|s|\leq \delta} f''(W+s) \int_{-\delta}^{\delta} K_{W}(y)dy \Big)\\
&=\mathbb{E} \Big( \mu \sup_{|s|\leq \delta} f''(W+s) \big(1-\frac{\delta}{2\mu}b(W)\big)1(W\leq z+\Delta+\delta)\Big)\\
&\leq \mathbb{E}\Big( \mu (\beta-\nu) e^{(\beta-\nu)(W+\delta)}\big(1-\frac{\delta}{2\mu}b(W)\big)1(W\leq z+\Delta+\delta)\Big)\\
&=\mathbb{E}\Big( \mu (\beta-\nu) e^{(\beta-\nu)(W+\delta)}\big(1+\frac{\delta}{2}\beta\big)1(\beta\leq W\leq z+\Delta+\delta)\Big)\\
&\quad +\mathbb{E}\Big( \mu (\beta-\nu) e^{(\beta-\nu)(W+\delta)}\big(1+\frac{\delta}{2}W\big)1(W<\beta )\Big)\\
&\leq \mathbb{E}\Big( \mu (\beta+C\delta) e^{(\beta-\nu)W}1(\beta\leq W\leq z)\Big)\\
&\quad +\mathbb{E} \Big( \mu (\beta+C\delta)e^{(\beta-\nu)z}1(z\leq W\leq z+\Delta+\delta)\Big)+ \mu C.
}
Combining the inequalities above, we have
\bes{
&\mathbb{E}\big(\beta e^{(\beta-\nu)z}1(W> z+\Delta+\delta)\big)\\
&\leq C+C\delta \mathbb{E} \big( e^{(\beta-\nu)W}1(\beta\leq W\leq z+\Delta+\delta)\big)+\beta e^{(\beta-\nu)z}
\P(z\leq W\leq z+\Delta).
}
Without loss of generality we may assume that $z$ is not an atom of $W$; letting $\Delta\to 0$, we observe that $\P(z\leq W\leq z+\Delta) \to 0$.
Therefore, we have
\be{
\beta e^{(\beta-\nu)z}\P(W>z+\delta)\leq C+ C\delta \mathbb{E} e^{(\beta-\nu)W}\leq C,
}
where we have used Lemma~\ref{l3}.
\hfill $\square$\endproof
\proof{Proof of Lemma~\ref{l3}}
Since $\delta = 1/\sqrt{R}$, we can choose $C_0$ large enough so that $(\beta-3\beta^2 \delta)>\beta/2$. For notational convenience, we set $\nu = 3\beta^2 \delta$.
Fix $M > \beta $ and let $f(x) = \int_{0}^{x}e^{(\beta-\nu)(y \wedge M)} dy$. We recall from \eqref{eq:steinx} that $G_{\tilde X} f(W) = b(W)f'(W) +\int_{-\delta}^{\delta} f''(W+y) K_{W}(y) dy$, where $K_W(y)$ is defined in \eqref{eq:kdef}. Since $\beta - \nu > \beta/2$ by assumption, the function $f(x)$ grows linearly for $x \geq M$, so $\mathbb{E} \abs{ f(W)} < \infty$ because $\mathbb{E} \abs{W} < \infty$. Therefore, $\mathbb{E} G_{\tilde X} f(W) = 0$, or $ \mathbb{E}\big(-b(W)f'(W)\big) = \mathbb{E} \Big(\int_{-\delta}^{\delta} f''(W+y) K_{W}(y) dy\Big)$, due to \eqref{eq:gxbarf}. Now
\besn{\label{f40}
\mathbb{E}\big(-b(W)f'(W)\big)=&\ \mathbb{E}\big(\mu \beta e^{(\beta-\nu)(W \wedge M)} 1(W\geq \beta) \big)+\mathbb{E}\big(\mu W e^{(\beta-\nu)W} 1(W< \beta) \big)\\
=&\ \mathbb{E}\big(\mu \beta e^{(\beta-\nu)(W \wedge M)} \big)+\mathbb{E}\big(\mu (W-\beta) e^{(\beta-\nu)W} 1(W< \beta)\big)\\
\geq&\ \mathbb{E}\big(\mu \beta e^{(\beta-\nu)(W \wedge M)} \big)-\mu C
}
where in the last inequality we used the fact that $\abs{(x-\beta) e^{(\beta-\nu)x}} 1(x<\beta) \leq C$ if $(\beta-\nu)>\beta/2$. Furthermore, since $f''(x) = (\beta-\nu) e^{(\beta-\nu)(x \wedge M)} 1(x < M)$ and $\int_{-\delta}^{\delta} K_{W}(y)dy = \mu -\delta b(W)/2 $, we have
\besn{\label{f41}
\mathbb{E} \bigg(\int_{-\delta}^{\delta} f''(W+y) K_{W}(y) dy\bigg) \leq&\ \mathbb{E} \bigg(\int_{-\delta}^{\delta}(\beta-\nu)e^{(\beta-\nu) ((W+y)\wedge M ) }K_W(y) dy\bigg)\\
\leq&\ (\beta-\nu) e^{(\beta-\nu)\delta}\mathbb{E}\Big(e^{(\beta-\nu)(W \wedge M)}(\mu-\frac{\delta}{2}b(W))\Big)\\
=&\ \mu (\beta-\nu) e^{(\beta-\nu)\delta}\mathbb{E}\Big(e^{(\beta-\nu)(W \wedge M)}(1+\frac{\delta}{2}W)1(W \leq \beta)\Big) \\
&+ \mu (\beta-\nu) e^{(\beta-\nu)\delta}\mathbb{E}\Big(e^{(\beta-\nu)(W \wedge M)}(1+\frac{\delta}{2}\beta)1(W \geq \beta)\Big)\\
\leq&\ \mu C+ \mu (\beta-\nu)e^{(\beta-\nu)\delta}(1+\frac{\delta}{2}\beta)\mathbb{E}\big(e^{(\beta-\nu)(W \wedge M)}\big).
}
Divide both sides of \eqref{f40} and \eqref{f41} by $\mu \delta$ and combine these two inequalities, and also substitute $3 \beta^2 \delta$ for $\nu$, to get
\be{
\frac{\Big(\beta-(\beta-3\beta^2\delta)e^{(\beta-3\beta^2\delta)\delta} (1+\frac{\delta}{2}\beta) \Big)}{\delta} \mathbb{E} \big( e^{(\beta-3 \beta^2 \delta)(W\wedge M)}\big)\leq C/\delta.
}
Since the coefficient in front of the expected value on the left-hand side converges to a positive constant as $\delta \to 0$, for sufficiently small $\delta$ (or sufficiently large $C_0$), we have
\be{
\mathbb{E} \big(e^{(\beta-3\beta^2\delta)(W\wedge M)}\big)\leq C/\delta.
}
We conclude by letting $M \to \infty$.
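Explicitly, since $e^{(\beta-3\beta^2\delta)(W\wedge M)}$ increases to $e^{(\beta-3\beta^2\delta)W}$ as $M\to\infty$, the monotone convergence theorem yields
\be{
\mathbb{E} e^{(\beta-3\beta^2\delta)W} = \lim_{M\to\infty}\mathbb{E} e^{(\beta-3\beta^2\delta)(W\wedge M)}\leq C/\delta.
}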
\hfill $\square$\endproof
\subsubsection{Proof of Lemma~\ref{lem:eq9}.}\label{fap6}
We begin by proving \eqref{eq:lem91}.
\begin{align*}
&\mathbb{E} \Big( \int_{-\delta}^{\delta} \big( b(W+y)f_z'(W+y) - b(W) f_z'(W)\big) \frac{K_W(y)}{\mu } dy \Big) \\
=&\ \mathbb{E} \Big( \int_{-\delta}^{\delta}\frac{K_W(y)}{\mu } \int_{0}^{y}\big( b(x)f_z'(x)\big)'\big|_{x=W+s} ds dy \Big) \\
\leq&\ \delta \mathbb{E} \Big( \sup_{|s|\leq \delta} \abs{\big( b(x)f_z'(x)\big)'\big|_{x=W+s}} \big( 1 - \frac{\delta}{2\mu}b(W) \big) \Big)\\
\leq&\ C \delta \mathbb{E} \Big( \sup_{|s|\leq \delta} \abs{\big( b(x)f_z'(x)\big)'\big|_{x=W+s}} \Big).
\end{align*}
The first inequality is due to $K_W(y) \geq 0$ and $\int_{-\delta}^{\delta} K_{W}(y)/\mu dy = 1-\delta b(W)/(2\mu)$ from \eqref{MD:k3}, and the last inequality is true because $\big| \frac{\delta}{2\mu}b(W)\big| = \big|\frac{\delta}{2}(W \wedge \beta) \big| \leq C$ since $W\geq -1/\delta$. To bound the right-hand side we note that (cf. \eqref{f33})
\be{
-b(x)f_z'(x)=
\begin{cases}
\P(Y_0\geq z)b(x) e^{-\int_{0}^{x} \frac{b(u)}{\mu} du} \int_{-\infty}^{x} \frac{1}{\mu } e^{\int_{0}^{y} \frac{b(u)}{\mu} du} dy , & x<z, \\
\P(Y_0\leq z)b(x) e^{-\int_{0}^{x} \frac{b(u)}{\mu} du} \int_{x}^{\infty} \frac{1}{\mu } e^{\int_{0}^{y} \frac{b(u)}{\mu} du} dy, & x\geq z,
\end{cases}
}
so for $x> z>\beta$,
\bes{
(-b(x)f_z'(x))' =&\ -\beta \P(Y_0\leq z)\Big(-1+ \beta e^{-\int_{0}^{x} \frac{b(u)}{\mu} du}\int_{x}^{\infty} e^{ \int_{0}^{y} \frac{b(u)}{\mu} du} dy \Big)\\
=&\ -\beta \P(Y_0\leq z)\Big(-1+ \beta e^{- \frac{\beta^2}{2} + \beta x}\int_{x}^{\infty} e^{ \frac{\beta^2}{2} - \beta y} dy \Big)\\
=&\ 0.
}
For $\beta<x<z$,
\bes{
\abs{(b(x)f_z'(x))'} =&\ \beta \P(Y_0\geq z) \Big(1+ \beta e^{-\frac{\beta^2}{2}+\beta x} \int_{-\infty}^{x} e^{\int_{0}^{y} \frac{b(u)}{\mu} du} dy \Big) \leq \beta \P(Y_0\geq z) (1 + C e^{\beta x}).
}
In the inequality above we used the fact that $\int_{-\infty}^{x} e^{\int_{0}^{y} \frac{b(u)}{\mu} du} dy \leq C$ because $b(x)/\mu = -(x \wedge \beta)$ depends only on $\beta$. Lastly, for $x<\beta$,
\bes{
\abs{(b(x)f_z'(x))'} =&\ \P(Y_0\geq z)\Big|x+(e^{\frac{x^2}{2}}+x^2 e^{\frac{x^2}{2}})\int_{-\infty}^{x} e^{- \frac{y^2}{2}} dy \Big|.
}
When $ -1 \leq x < \beta$, the right-hand side is bounded by $C \P(Y_0\geq z)$ and when $x < -1$, we use the bound $\frac{1}{-x-1} e^{-\frac{x^2}{2}}\leq \int_{-\infty}^{x} e^{- \frac{y^2}{2}} dy \leq \frac{1}{-x} e^{-\frac{x^2}{2}}$ to conclude that
\begin{align*}
\P(Y_0\geq z)\Big|x+(e^{\frac{x^2}{2}}+x^2 e^{\frac{x^2}{2}})\int_{-\infty}^{x} e^{- \frac{y^2}{2}} dy \Big| \leq \P(Y_0\geq z)\Big|x+C-x \Big| \leq C \P(Y_0\geq z).
\end{align*}
Combining the three cases yields \eqref{eq:lem91}. We now prove the bound on $- \frac{\delta}{2\mu } \mathbb{E} \big( b^2(W) f_z'(W)\big)$ in \eqref{eq:lem92}. From the form of $b(x) f_z'(x)$ above, we have for $x>z>\beta$,
\bes{
-\frac{1}{\mu}b^2(x)f_z'(x)&= \beta \P(Y_0\leq z),
}
for $\beta\leq x\leq z$,
\bes{
-\frac{1}{\mu}b^2(x)f_z'(x)&= \P(Y_0\geq z) \beta e^{-\frac{\beta^2}{2}+\beta x} \int_{-\infty}^{x} e^{\int_{0}^{y} \frac{b(u)}{\mu} du} dy
\leq C e^{ \beta x}\P(Y_0\geq z),
}
and for $x<\beta$,
\be{
-\frac{1}{\mu}b^2(x)f_z'(x) = \P(Y_0\geq z) x^2 e^{\frac{x^2}{2}}\int_{-\infty}^{x} e^{- \frac{y^2}{2}} dy \leq C\P(Y_0\geq z) (1+|x|).
}
Combining the three cases implies \eqref{eq:lem92}. Lastly we prove \eqref{eq:lem93}. In the proof of Lemma~\ref{lem:momentequiv} we showed that $\mathbb{E} b(W) = 0$. Furthermore, $-\mu\beta\leq b(x)\leq 0$ for $x\geq 0$, so
\begin{align*}
-\frac{\delta}{2\mu} \mathbb{E} \big( b(W) \big(\P(Y_0\geq z)-1(W\geq z+\delta)\big) \big)= \frac{\delta}{2\mu} \mathbb{E} \big( b(W)1(W\geq z+\delta) \big) \leq C\delta \P(W\geq z+\delta).
\end{align*}
\hfill $\square$\endproof
\section{Companion for the Hospital Model}
\label{app:hospital_proofs}
In this portion of the electronic companion, we motivate the $v_3$ approximation for the hospital model presented in Section~\ref{sec:hospcompare}, where we suggested using
\begin{align}
\label{eq:hosv3ec}
v_3(x)=\max\Big\{ \delta +\frac{1}{2} \Big(\delta^{2} 1(x<0)- \delta^2 ( x^{-} - \beta )-\delta^2-2\delta^2\beta\Big), \delta/2 \Big\}.
\end{align}
We recall that $\tilde X = \{ \tilde X(n) = \delta (X(n) - N)\}$, where $X(n)$ is the customer count at the end of time unit $n$, that $W$ and $W'$ have the distributions of $\tilde X(0)$ and $\tilde X(1)$ when $\tilde X(0)$ is initialized according to the stationary distribution of $\tilde X$, and that $\Delta = W' - W$. We use $\epsilon(x)$ and $\epsilon_i(x)$ to denote generic functions satisfying
\begin{align}
\abs{\epsilon(x)} \leq C (1 + \abs{x})^{5}. \label{eq:epsdefec}
\end{align}
The following lemma gives us the conditional moments of $\Delta$. It is proved in Section~\ref{sec:hosmecproof}.
\begin{lemma}\label{lem:hosmec}
For the hospital model with $N$ servers, $\Lambda=\sqrt{N}-\beta$ and $\mu=\delta=1/\sqrt{N}$,
\begin{align}
b(x) = \mathbb{E}(\Delta| W= x) =&\ \delta( x^{-} - \beta ), \label{eq:1ec}\\
\mathbb{E}(\Delta^2|W=x) =&\ 2\delta + \Big(b^{2}(x)-\delta b(x) -\delta^2 - 2\delta^2\beta\Big) + \delta^3 x^{-}, \label{eq:2ec} \\
\mathbb{E}(\Delta^3|W=x) =&\ 6\delta b(x) + \delta^{3} \epsilon(x) , \label{eq:3ec}\\
\mathbb{E}(\Delta^4|W=x) =&\ 12\delta^2 + \delta^{3} \epsilon(x), \label{eq:4ec}\\
\mathbb{E}(\Delta^5|W=x)=&\ \delta^{3} \epsilon(x). \label{eq:5ec}
\end{align}
\end{lemma}
To derive $v_3(x)$, we begin with the Taylor expansion in \eqref{eq:taylorgeneric} with $n = 4$, which says that for sufficiently smooth $f(x)$,
\begin{align}
- \mathbb{E} \Delta f'(W) = \mathbb{E} \bigg[\sum_{i=2}^{4} \frac{1}{i!} \Delta^i f^{(i)}(W) + \frac{1}{5!} \Delta^{5} f^{(5)}(\xi_1)\bigg], \label{eq:txec}
\end{align}
where $\xi_i$ denote numbers lying between $W$ and $W'$. By combining \eqref{eq:txec} with Lemma~\ref{lem:hosmec}, we will show that
\begin{align}
&- \mathbb{E} b(W) f'(W) - \frac{1}{2} \mathbb{E}\Big(2 \delta + \delta^{2} 1(W<0)- \delta^2( W^{-} - \beta ) -\delta^2-2\delta^2\beta\Big) f''(W)\notag \\
=&\ \delta^3 \Big(\frac{1}{2}\mathbb{E} \epsilon_0(W) f''(W)+ \frac{1}{6} \mathbb{E} \epsilon_3(W) f'''(W) + \frac{1}{24} \mathbb{E} \epsilon_4(W) f^{(4)}(W) + \frac{1}{120} \mathbb{E} \epsilon_5(W) f^{(5)}(\xi_1)\Big) \notag \\
&+ \frac{1}{2}\delta^3 \mathbb{E} \Big(\epsilon_1(W) f^{(4)}(W) + \epsilon_2(W) f^{(5)}(\xi)\Big) \notag \\
&+\frac{1}{2} \delta^3 \mathbb{E} \Big(\epsilon_6(W)f''(W)+\epsilon_7(W)f'''(W) + \epsilon_2(W) \Big(\frac{d^2}{d x^2} \big( (x^{-}-\beta) f''(x) \big)\big|_{x = \xi_3}\Big)\Big). \label{eq:toprovehospec}
\end{align}
Truncating the term in front of $f''(W)$ on the left-hand side from below by $\delta/2$ gives us $v_3(x)$ in \eqref{eq:hosv3ec}. The truncation level $\delta/ 2$ is chosen because the support of $W$ is in $[-\delta N, \infty)$ and the term in front of $f''(W)$ on the left-hand side of \eqref{eq:toprovehospec} equals $\delta \big(\frac{1}{2} - \frac{1}{2}\delta \beta \big) \approx\frac{\delta}{2}$ when evaluated at the point $W = -\delta N$. We could have chosen $\delta \big(\frac{1}{2} - \frac{1}{2}\delta \beta \big)$ instead (when this quantity is positive), but in practice this does not make a big difference. Let us now prove \eqref{eq:toprovehospec}. Combining \eqref{eq:txec} with Lemma~\ref{lem:hosmec} yields
\begin{align*}
- \mathbb{E} b(W) f'(W) =&\ \frac{1}{2} \mathbb{E}\Big(2\delta + b^{2}(W)-\delta b(W) -\delta^2 - 2\delta^2\beta + \delta^3 W^{-}\Big) f''(W) \notag \\
&+ \frac{1}{6} \mathbb{E} \big(6\delta b(W) + \delta^{3} \epsilon_3(W)\big)f'''(W) + \frac{1}{24} \mathbb{E}\big(12\delta^2 + \delta^3 \epsilon_4(W) \big) f^{(4)}(W) \notag \\
&+ \frac{1}{120} \delta^3 \mathbb{E} \epsilon_5(W) f^{(5)}(\xi_1).
\end{align*}
Let us write $W^{-}f''(W)$ as $\epsilon_{0}(W) f''(W)$ and rearrange the right-hand side above into the more convenient form:
\begin{align}
&- \mathbb{E} b(W) f'(W) - \frac{1}{2} \mathbb{E}\Big(2\delta + b^{2}(W)-\delta b(W) -\delta^2 - 2\delta^2\beta \Big) f''(W) \notag \\
=&\ \frac{1}{2} \delta \mathbb{E} b(W) f'''(W) + \frac{1}{2} \delta \big(\mathbb{E} b(W) f'''(W) + \delta \mathbb{E} f^{(4)}(W)\big) \notag \\
&+\delta^3 \Big( \frac{1}{2} \mathbb{E} \epsilon_0(W) f''(W)+ \frac{1}{6} \mathbb{E} \epsilon_3(W) f'''(W) + \frac{1}{24} \mathbb{E} \epsilon_4(W) f^{(4)}(W) + \frac{1}{120} \mathbb{E} \epsilon_5(W) f^{(5)}(\xi_1)\Big). \label{1}
\end{align}
The last row is treated as an error term because of the $\delta^3$ factor. We wish to transform the first row on the right-hand side into an expression involving $f''(x)$ plus error. To this end we require the following lemma, which is proved at the end of this section.
\begin{lemma}
\label{lem:hospaux}
Suppose that $g \in C^{3}(\mathbb{R})$ is such that $\mathbb{E} g(W') - \mathbb{E} g(W) = 0$. Then
\begin{align*}
\mathbb{E} b(W) g'(W) + \delta \mathbb{E} g''(W) =&\ \delta^2 \mathbb{E} \Big(\epsilon_1(W) g''(W) + \epsilon_2(W) g'''(\xi)\Big),
\end{align*}
where $\epsilon_i(x)$ are generic functions satisfying \eqref{eq:epsdefec} and $\xi$ lies between $W$ and $W'$.
\end{lemma}
We apply Lemma~\ref{lem:hospaux} with $g(x) = \frac{1}{2} \delta f''(x)$ to get
\begin{align}
& \frac{1}{2}\delta \Big(\mathbb{E} b(W) f'''(W) + \delta \mathbb{E} f^{(4)}(W)\Big) = \frac{1}{2}\delta^3 \mathbb{E} \Big(\epsilon_1(W) f^{(4)}(W) + \epsilon_2(W) f^{(5)}(\xi)\Big). \label{eq:gfdelta}
\end{align}
The left-hand side above coincides with one of the terms in the second row of \eqref{1}.
Next we choose $g(x) = \int_{0}^{x} b(y) f''(y) dy$ and note that $g''(x) = b'(x)f''(x) + b(x)f'''(x)$. Applying Lemma~\ref{lem:hospaux} with our new choice of $g(x)$, we get
\begin{align*}
& \mathbb{E} b^2(W) f''(W) + \delta \mathbb{E} \big(b'(W)f''(W) + b(W)f'''(W)\big) \\
=&\ \delta^2 \mathbb{E} \Big(\epsilon_1(W) \big(b'(W)f''(W) + b(W)f'''(W)\big) + \epsilon_2(W) \Big(\frac{d^2}{d x^2} \big( b(x) f''(x) \big)\big|_{x = \xi_3}\Big)\Big)\\
=&\ \delta^3 \mathbb{E} \Big(\epsilon_6(W)f''(W)+\epsilon_7(W)f'''(W) + \epsilon_2(W) \Big(\frac{d^2}{d x^2} \big( (x^{-}-\beta) f''(x) \big)\big|_{x = \xi_3}\Big)\Big).
\end{align*}
The last equality follows from the fact that $b(x) = \delta(x^{-}-\beta)$. Multiplying both sides by $1/2$ and rearranging terms, we get
\begin{align}
& \frac{1}{2}\delta \mathbb{E} b(W) f'''(W) \notag \\
=&\ - \frac{1}{2}\mathbb{E}\big( b^2(W) + \delta b'(W)\big) f''(W) \notag \\
&+\frac{1}{2} \delta^3 \mathbb{E} \Big(\epsilon_6(W)f''(W)+\epsilon_7(W)f'''(W) + \epsilon_2(W) \Big(\frac{d^2}{d x^2} \big( (x^{-}-\beta) f''(x) \big)\big|_{x = \xi_3}\Big)\Big). \label{f25}
\end{align}
Plugging \eqref{eq:gfdelta} and \eqref{f25} into \eqref{1}, we conclude that
\begin{align*}
&- \mathbb{E} b(W) f'(W) - \frac{1}{2} \mathbb{E}\Big(2\delta + b^{2}(W)-\delta b(W) -\delta^2 - 2\delta^2\beta \Big) f''(W)\notag \\
=&\ - \frac{1}{2}\mathbb{E}\big( b^2(W) + \delta b'(W)\big) f''(W) \\
&+ \delta^3 \Big(\frac{1}{2}\mathbb{E} \epsilon_0(W) f''(W)+ \frac{1}{6} \mathbb{E} \epsilon_3(W) f'''(W) + \frac{1}{24} \mathbb{E} \epsilon_4(W) f^{(4)}(W) + \frac{1}{120} \mathbb{E} \epsilon_5(W) f^{(5)}(\xi_1)\Big) \\
&+ \frac{1}{2}\delta^3 \mathbb{E} \Big(\epsilon_1(W) f^{(4)}(W) + \epsilon_2(W) f^{(5)}(\xi)\Big) \\
&+\frac{1}{2} \delta^3 \mathbb{E} \Big(\epsilon_6(W)f''(W)+\epsilon_7(W)f'''(W) + \epsilon_2(W) \Big(\frac{d^2}{d x^2} \big( (x^{-}-\beta) f''(x) \big)\big|_{x = \xi_3}\Big)\Big).
\end{align*}
To conclude \eqref{eq:toprovehospec}, we move $- \frac{1}{2}\mathbb{E}\big( b^2(W) + \delta b'(W)\big) f''(W)$ to the left-hand side and note that
\begin{align*}
& 2\delta + b^{2}(W)-\delta b(W) -\delta^2 - 2\delta^2\beta - b^2(W) - \delta b'(W) \\
=&\ 2\delta + \delta^2( W^{-} - \beta )^2 -\delta^2( W^{-} - \beta ) -\delta^2 - 2\delta^2\beta - \delta^2( W^{-} - \beta )^2 + \delta^2 1(W<0) \\
=&\ 2\delta - \delta^2(W^{-}-\beta)- \delta^2 - 2\delta^2\beta + \delta^2 1(W < 0),
\end{align*}
which coincides with the term in front of $f''(W)$ on the left-hand side of \eqref{eq:toprovehospec}.
\proof{Proof of Lemma~\ref{lem:hospaux}}
Since $\mathbb{E} g(W') - \mathbb{E} g(W) = 0$, performing a third-order Taylor expansion gives us
\begin{align*}
0 =&\ \mathbb{E} \Delta g'(W) + \frac{1}{2} \mathbb{E}\Delta^2 g''(W) + \frac{1}{6} \mathbb{E}\Delta^{3} g'''(\xi) \\
=&\ \mathbb{E} b(W) g'(W) + \frac{1}{2} \mathbb{E}\Big(2\delta + b^{2}(W)-\delta b(W) -\delta^2 - 2\delta^2\beta + \delta^3 W^{-}\Big) g''(W) \notag \\
&+ \frac{1}{6} \mathbb{E} \big(6\delta b(W) + \delta^{3} \epsilon_0(W)\big)g'''(\xi).
\end{align*}
The second equality is due to Lemma~\ref{lem:hosmec}.
We set
\begin{align*}
\epsilon_1(x) =&\ -\frac{b^{2}(x)-\delta b(x) -\delta^2 - 2\delta^2\beta + \delta^3 x^{-}}{2\delta^2} \quad \text{ and } \quad \epsilon_2(x) = - \frac{ 6\delta b(x) + \delta^{3} \epsilon_0(x) }{6\delta^2}
\end{align*}
and note that both $\epsilon_1(x)$ and $\epsilon_2(x)$ satisfy \eqref{eq:epsdefec} because $b(x) = \delta(x^{-}-\beta)$.
\hfill $\square$\endproof
\subsection{Proof of Lemma~\ref{lem:hosmec} }
\label{sec:hosmecproof}
By the definition of the hospital model the change in customers $\Delta = W'-W$ satisfies
\begin{align*}
\Delta = \delta(A - D),
\end{align*}
where $A$ is a mean $\Lambda$ Poisson random variable, and
conditioned on $W=x=\delta(k-N)$, $D\sim \text{Binomial}(k\wedge N, \mu)$;
see \cite{DaiShi2017}. To prove the lemma, we utilize the following Stein
identities for a mean $\Lambda$ Poisson random variable $X$ and
a Binomial($k, \mu$) random variable $Y$:
\begin{align}
& \mathbb{E} X f(X) =\Lambda \mathbb{E} f(X+1) \quad \text{ for each } f:\mathbb{Z}_+\to \mathbb{R} \text{ with } \mathbb{E} \abs{ X f(X)} <\infty, \label{eq:steinpoisson} \\
& \mathbb{E} Yf(Y) = \mu k \mathbb{E} f(Y+1) - \mu \mathbb{E}\Big[Y\Big(f(Y+1)-f(Y)\Big)\Big] \text{ for each } f:\mathbb{Z}_+\to \mathbb{R}. \label{eq:steinbinom}
\end{align}
See, for example, Lectures VII and VIII of \cite{Stei1986}.
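As a quick sanity check, \eqref{eq:steinpoisson} follows from a one-line computation with the Poisson mass function; \eqref{eq:steinbinom} is verified analogously:
\be{
\mathbb{E} X f(X) = \sum_{k=1}^{\infty} k f(k) \frac{e^{-\Lambda}\Lambda^{k}}{k!} = \Lambda \sum_{k=1}^{\infty} f(k) \frac{e^{-\Lambda}\Lambda^{k-1}}{(k-1)!} = \Lambda \mathbb{E} f(X+1).
}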
\proof{Proof of Lemma~\ref{lem:hosmec}}
We prove \eqref{eq:1ec}--\eqref{eq:5ec} in sequence. Using the facts that $\Lambda=\sqrt{N}-\beta$ and $\mu=\delta=1/\sqrt{N}$, for $x = \delta(k - N)$ we have
\begin{align}
\mathbb{E}(A-D|W=x)=\Lambda - (k\wedge N) \mu = \mu \big(\Lambda/\mu - N - ((k- N) \wedge 0)\big) =&\ -\beta - (x \wedge 0) = x^{-} - \beta. \label{eq:6}
\end{align}
For the remainder of the proof we adopt the convention that all expectations are conditioned on $W=x$. We now prove \eqref{eq:2ec}:
\begin{align}
\mathbb{E} \Delta^2 &=\delta^2\Big(\mathbb{E}\big[A(A-D)\big]-\mathbb{E}\big[D(A-D)\big]\Big)\nonumber\\
&=\delta^2\Big(\Lambda\mathbb{E} (A-D+1)-\mathbb{E}\big[D(A-D)\big] \Big) \nonumber\\
&=\delta^2\Big(\Lambda\mathbb{E} (A-D+1)-\mu(k\wedge N) \mathbb{E} (A-D-1) - \mu \mathbb{E} D \Big) \nonumber\\
& =\delta^2\Big(2\Lambda + ( \Lambda - \mu(k \wedge N) )\mathbb{E} (A-D-1) + \mu(x^{-} - \sqrt{N})\Big) \nonumber\\
&= \delta^2\Big(2\Lambda + (x^{-} - \beta)^2 - (x^{-} - \beta) + \mu(x^{-} - \sqrt{N})\Big).\label{eq:8}
\end{align}
We used \eqref{eq:steinpoisson} in the second equality and \eqref{eq:steinbinom} in the third equality. The fourth and fifth equalities are due to
\begin{align*}
\mathbb{E} D =&\ \mathbb{E}(D-A)+\mathbb{E} A =-(x^{-} - \beta)+\Lambda = -x^- + \sqrt{N} \\
&\ \text{ and } \Lambda-\mu(k\wedge N) = \mathbb{E}(A-D) = x^- - \beta, \text{ respectively}.
\end{align*}
Lastly, bringing the $\delta^2$ factor in \eqref{eq:8} inside the parentheses and recalling that $\delta^2 \Lambda = \delta - \delta^2 \beta$, $\delta (x^- - \beta) = b(x)$, and $\mu = \delta = 1/\sqrt{N}$ yields \eqref{eq:2ec}. We now prove \eqref{eq:3ec}. Note that $\mathbb{E}\Delta^3 = \delta^3 \mathbb{E} (A-D)^3$ equals
\begin{align*}
& \delta^3\Big(\mathbb{E}\big[A(A-D)^2\big]-\mathbb{E}\big[D(A-D)^2\big]\Big)\\
=&\delta^3\Big(\Lambda \mathbb{E}(A-D+1)^2-\mu (k\wedge N)\mathbb{E}(A-D-1)^2 + \mu \mathbb{E} D\big[(A-D-1)^2-(A-D)^2\big] \Big) \\
=&\delta^3\Big(4 \Lambda \mathbb{E} (A-D) + \big(\Lambda-(k\wedge N)\mu\big) \mathbb{E} (A-D-1)^2 + \mu \mathbb{E} D\big[-2(A-D)+1\big] \Big).
\end{align*}
The first equality is due to \eqref{eq:steinpoisson} and \eqref{eq:steinbinom} and to get the second equality we use $\mathbb{E} (A-D+1)^2 = \mathbb{E} (A-D- 1 + 2)^2 = \mathbb{E}\big[(A-D- 1)^2 + 4(A-D)\big]$. Let us analyze the terms above one by one. First, we have
\begin{align*}
\delta^3 4 \Lambda \mathbb{E} (A-D) =&\ 4 (x^- - \beta) \delta^3 \Lambda = 4 (x^- - \beta) (\delta^2 - \delta^3 \beta) = 4\delta^2(x^- - \beta) + \delta^3\epsilon(x).
\end{align*}
Second, since $\delta(A-D) = \Delta$, we have
\begin{align*}
\delta^3 \big(\Lambda-(k\wedge N)\mu\big) \mathbb{E} (A-D-1)^2 =&\ \delta (x^- - \beta) \mathbb{E} \big(\Delta^2 - 2\delta \mathbb{E} \Delta + \delta^2 \big)\\
=&\ \delta (x^- - \beta) \mathbb{E} \big(\Delta^2 - 2\delta^2(x^- - \beta) + \delta^2 \big) \\
=&\ \delta (x^- - \beta) \mathbb{E} \Delta^2 + \delta^3 \epsilon(x)\\
=&\ 2\delta^2(x^- - \beta) + \delta^3 \epsilon(x).
\end{align*}
The last equality is due to \eqref{eq:2ec}, which says that $\mathbb{E} \Delta^2 = 2\delta + \delta^2 \epsilon(x)$. Lastly,
\begin{align*}
\delta^3 \mu \mathbb{E} D\big[-2(A-D)+1\big] =&\ \delta^4 \big( 2 \mathbb{E} (A-D)^2 - 2\mathbb{E} A(A-D) + \mathbb{E} D\big) \\
=&\ \delta^2 \big( 2 \mathbb{E} \Delta^2 - 2\delta^2\Lambda\mathbb{E} (A-D+1) + \delta^2(- x^- + \sqrt{N})\big) \\
=&\ \delta^3 \epsilon(x).
\end{align*}
The second equality is due to \eqref{eq:steinpoisson} and the last equality follows from $\mathbb{E} \Delta^2 = \delta \epsilon(x)$ and $\delta^2 \Lambda\mathbb{E}(A-D +1) = \delta (1-\delta \beta)(x^- - \beta+1) = \delta \epsilon(x)$. Putting the pieces together yields $\mathbb{E} \Delta^3 = 6\delta^2 (x^- - \beta) + \delta^3 \epsilon(x)$, which proves \eqref{eq:3ec}. We now prove (\ref{eq:4ec}):
\begin{align*}
\mathbb{E} \Delta^4 =&\delta^4\Big(\mathbb{E}\big[A(A-D)^3\big]-\mathbb{E}\big[D(A-D)^3\big]\Big)\\
&= \delta^4\Big(\Lambda \mathbb{E}\big[(A-D+1)^3\big]-\mu(k\wedge N) \mathbb{E}(A-D-1)^3 \\
& \quad + \mu \mathbb{E}\big[D(A-D-1)^3-D(A-D)^3\big]\Big)\\
=&\delta^4\Big(\Lambda \mathbb{E}\big[(A-D+1)^3-(A-D-1)^3\big] + \big(\Lambda-(k\wedge N)\mu\big) \mathbb{E}(A-D-1)^3 \\
&+ \mu \mathbb{E} D\big[(A-D-1)^3-(A-D)^3\big] \Big) \\
=&\delta^4\Big(\Lambda \mathbb{E}\big[6(A-D)^2+2\big] + (x^{-} - \beta) \mathbb{E}(A-D-1)^3 \\
&+ \mu \mathbb{E} D\big[-3(A-D)^2+3(A-D)-1\big] \Big).
\end{align*}
Let us analyze the terms above one by one. First,
\begin{align*}
\delta^4 \Lambda \mathbb{E}\big[6(A-D)^2+2\big] = \delta^2 \Lambda \mathbb{E}\big[6\Delta^2+2\delta^2\big] = 12\delta^2 + \delta^3 \epsilon(x).
\end{align*}
Second,
\begin{align*}
\delta^4 (x^{-} - \beta) \mathbb{E} (A-D-1)^3 = \delta (x^{-} - \beta) \big( \mathbb{E} \Delta^3 - 3 \delta \mathbb{E} \Delta^2 + 3 \delta^2 \mathbb{E} \Delta - \delta^3\big) = \delta^3 \epsilon(x),
\end{align*}
and third,
\begin{align*}
& \delta^4 \mu \mathbb{E} D\big[-3(A-D)^2+3(A-D)-1\big] \\
=&\ \delta^5 \mathbb{E} \big[ 3 (A-D)^3 - 3(A-D)^2 - D \big] + \delta^5 \mathbb{E} A\big[-3(A-D)^2+3(A-D)\big]\\
=&\ \delta^2 \mathbb{E} \big[ 3 \Delta^3 - 3\delta \Delta^2 - \delta^3 D \big] + \delta^5 \Lambda \mathbb{E} \big[-3(A-D+1)^2+3(A-D+1)\big]\\
=&\ \delta^3 \epsilon(x).
\end{align*}
Putting the pieces together yields $\mathbb{E} \Delta^4 = 12\delta^2 + \delta^3 \epsilon(x)$. The proof of (\ref{eq:5ec}) is analogous to the proof of (\ref{eq:4ec}) and is omitted.
\hfill $\square$\endproof
\section{Companion for the AR(1) Model} \label{app:ar1proof}
In this section we derive the $v_3$ approximation for the AR(1) model and then prove Lemma~\ref{lem:ar1} in Section~\ref{sec:ar1pf}. We recall that $W = \delta(D_{\infty}-R)$, $W' = e^{-\alpha Z}W + \delta\big(X + R(e^{-\alpha Z} - 1)\big)$, and $\Delta = W' - W$, where $\delta = \sqrt{\alpha}$, $R = 1/\alpha$, $X$ and $Z$ are independent unit-mean exponentially distributed random variables that are also independent of $W$, and $D_\infty > 0$ has the limiting distribution of the AR(1) model defined by \eqref{eq:defar1}. The asymptotic regime we consider is $\alpha \to 0$, so we assume that $\alpha \in (0,1)$. We recall Lemma~\ref{lem:ar1}:
\begin{lemma}\label{lem:ar1ec}
Recall that $\delta = \sqrt{\alpha}$. For any $k \geq 1$,
\begin{align*}
\mathbb{E}(\Delta^{k} | D_{\infty} = d) = \delta^{k} k! \bigg(1 + \sum_{i=1}^{k} (-1)^{i}d^{i} \prod_{j=1}^{i} \frac{ \alpha }{1 + j \alpha} \bigg), \quad d > 0.
\end{align*}
\end{lemma}
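Since $\delta^k$ cancels on both sides, the lemma reduces to an identity for $\mathbb{E}\big(d(e^{-\alpha Z}-1)+X\big)^{k}$, which can be sanity-checked numerically from the exact moments $\mathbb{E}X^i=i!$ and $\mathbb{E}(e^{-\alpha Z}-1)^m=\sum_{l=0}^m \binom{m}{l}(-1)^{m-l}/(1+l\alpha)$. A minimal Python sketch (ours, not part of the paper's argument):

```python
import math

def moment_direct(k, d, alpha):
    """E[(d(e^{-alpha Z}-1)+X)^k] via the binomial expansion, using the
    exact moments E X^i = i! and E e^{-l alpha Z} = 1/(1+l*alpha)."""
    def m_exp(m):  # E (e^{-alpha Z}-1)^m
        return sum(math.comb(m, l) * (-1) ** (m - l) / (1 + l * alpha)
                   for l in range(m + 1))
    return sum(math.comb(k, i) * math.factorial(i) * m_exp(k - i) * d ** (k - i)
               for i in range(k + 1))

def moment_lemma(k, d, alpha):
    """The closed form of the lemma, without the delta^k factor."""
    total, prod = 1.0, 1.0
    for i in range(1, k + 1):
        prod *= alpha / (1 + i * alpha)
        total += (-1) ** i * d ** i * prod
    return math.factorial(k) * total
```

The two computations agree to machine precision for all small $k$, $d>0$, and $\alpha\in(0,1)$ we tried.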
We also recall that for $x \geq -1/\sqrt{\alpha}$,
\begin{align*}
\mathbb{E}(\Delta^{k} | W = x) = \mathbb{E}(\Delta^{k} | D_{\infty} = x/\delta + R) =&\ \delta^{k} k! \bigg(1 + \sum_{i=1}^{k} (-1)^{i}\Big( x\sqrt{\alpha} + 1 \Big)^{i} \prod_{j=1}^{i} \frac{1}{1 + j \alpha} \bigg),
\end{align*}
and observe that $\mathbb{E}(\Delta^{k} | W = x) = \delta^k p_{k}(x)$ for some degree-$k$ polynomial $p_{k}(x)$. To derive $v_3(x)$, we start with the Taylor expansion in \eqref{eq:taylorgeneric} with $n = 4$; i.e., for any function $f: \mathbb{R} \to \mathbb{R}$ satisfying $\mathbb{E} f(W') - \mathbb{E} f(W) = 0$,
\begin{align}
& \delta \mathbb{E} p_{1}(W) f'(W)+\frac{1}{2}\delta^2 \mathbb{E} p_{2}(W) f''(W) + \frac{1}{6} \delta^3 \mathbb{E} p_{3}(W) f'''(W)+\frac{1}{24}\delta^4 \mathbb{E} p_{4}(W) f^{(4)}(W) \notag \\
=&\ -\frac{1}{120} \delta^5 \mathbb{E} p_{5}(W) f^{(5)}(\xi). \label{eq:taylorfourth}
\end{align}
Since $\sup_{\alpha \in (0,1)} \abs{p_{k}(x)} < \infty$ for each $x \in \mathbb{R}$, the right-hand side is of order $\delta^5$. When deriving $v_3$, we want it to account for all terms of order $\delta, \ldots, \delta^4$ and treat terms of order $\delta^5$ as error. The following lemma is the basis for the $v_3$ approximation. It converts the third and fourth derivative terms in \eqref{eq:taylorfourth} into expressions involving $f''(x)$ plus error. Its proof is similar to the $v_2$ derivation in Section~\ref{sec:v2def}, so we postpone it until the end of this section.
\begin{lemma}
\label{lem:ar1taylor}
Define
\begin{align*}
\bar p_3(x) =&\ \frac{1}{6} \Big( p_3(x) - \frac{p_1(x)p_4(x)}{ 2 p_2(x)} - \frac{1}{4} \delta p_2(x)\Big(\frac{p_4(x)}{p_2(x)}\Big)' \Big),\\
\underline p_2(x) =&\ \Big(\frac{p_2(x)}{2}-\frac{p_1(x)p_3(x)}{3p_2(x)}-\frac{\delta p_2(x)}{6}\Big(\frac{p_3(x)}{p_2(x)}\Big)'\Big).
\end{align*}
Let $W$ and $W'$ be as in Section~\ref{fse5}. If $f \in C^{5}(\mathbb{R})$ is such that $\mathbb{E} f(W') - \mathbb{E} f(W) = 0$, then
\begin{align}
&\delta \mathbb{E} p_{1}(W) f'(W)+\delta^2 \mathbb{E} \Big(\frac{p_{2}(W)}{2}-\frac{p_{1}(W)\bar p_3(W)}{ \underline{p}_2(W)}-\delta \underline{p}_2(W)\Big(\frac{\bar p_3(W)}{\underline{p}_2(W)}\Big)'\Big)f''(W) \notag \\
=&\ \ -\frac{1}{120} \delta^5 \mathbb{E} p_5(W) f^{(5)}(\xi_1)+ \frac{1}{72} \delta^5 \mathbb{E} p_3(W) \Big( \frac{ p_4(x) }{p_2(x) }f'''(x) \Big)''\Big|_{x = \xi_2 } \notag \\
& \hspace{1.2cm} + \frac{1}{24}\delta^5 \mathbb{E} p_4(W) \Big( \frac{\bar p_3(x) }{\underline{p}_2(x) }f''(x) \Big)'''\Big|_{x = \xi_3 } - \frac{1}{18} \delta^5 \mathbb{E} p_3(W) \Big(\frac{p_3(x)}{p_2(x)} \frac{\bar p_3(x) }{\underline{p}_2(x) }f''(x) \Big)'' \Big|_{x = \xi_4 }. \label{eq:arlem}
\end{align}
\end{lemma}
We choose
\begin{align*}
\underline{v}_3(x) = \delta^2 \Big(\frac{p_{2}(x)}{2}-\frac{p_{1}(x)\bar p_3(x)}{ \underline{p}_2(x)}-\delta \underline{p}_2(x)\Big(\frac{\bar p_3(x)}{\underline{p}_2(x)}\Big)'\Big)
\end{align*}
based on the coefficient of $f''(W)$ on the left-hand side of \eqref{eq:arlem}; this is the basis for the $v_3$ approximation in Section~\ref{fse5}. Our choice presumes that all the terms on the right-hand side of \eqref{eq:arlem} are of order $\delta^5$ when $\delta$ is close to zero. We do not prove this claim rigorously in this paper. Nevertheless, our $v_3$ approximation performs quite well numerically.
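For concreteness, $\underline{v}_3(x)$ can be evaluated numerically from the closed-form polynomials $p_k$, with $\bar p_3$ and $\underline p_2$ as in Lemma~\ref{lem:ar1taylor}. The sketch below (ours, not from the paper) uses central finite differences for the derivative terms; note that each derivative correction carries an extra factor $\delta$, since differentiating $p_k$ in $x$ brings out $\sqrt{\alpha}=\delta$ through $x\sqrt{\alpha}+1$.

```python
import math

def p(k, x, alpha):
    # p_k(x) = E(Delta^k | W = x) / delta^k, from the closed form above
    u = x * math.sqrt(alpha) + 1.0
    total, prod = 1.0, 1.0
    for i in range(1, k + 1):
        prod /= 1 + i * alpha
        total += (-1) ** i * u ** i * prod
    return math.factorial(k) * total

def ddx(f, x, h=1e-6):
    # central finite difference; adequate for this illustration
    return (f(x + h) - f(x - h)) / (2 * h)

def underline_v3(x, alpha):
    d = math.sqrt(alpha)
    p1 = lambda y: p(1, y, alpha)
    p2 = lambda y: p(2, y, alpha)
    p3 = lambda y: p(3, y, alpha)
    p4 = lambda y: p(4, y, alpha)
    bar_p3 = lambda y: (p3(y) - p1(y) * p4(y) / (2 * p2(y))
                        - d / 4 * p2(y) * ddx(lambda t: p4(t) / p2(t), y)) / 6
    und_p2 = lambda y: (p2(y) / 2 - p1(y) * p3(y) / (3 * p2(y))
                        - d / 6 * p2(y) * ddx(lambda t: p3(t) / p2(t), y))
    ratio = lambda y: bar_p3(y) / und_p2(y)
    return d ** 2 * (p2(x) / 2 - p1(x) * bar_p3(x) / und_p2(x)
                     - d * und_p2(x) * ddx(ratio, x))
```

As a quick consistency check: when $\alpha\to 0$ we have $p_1,p_3\to 0$ and $p_2\to 2$ pointwise, so $\underline{v}_3(x)/\alpha\to 1$.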
\proof{Proof of Lemma~\ref{lem:ar1taylor}}
In Section~\ref{sec:v2def}, we used the shorthand $b(x), a(x), c(x), d(x)$, and $e(x)$ for the conditional moments $\mathbb{E}(\Delta^{k} | W = x)$; i.e.,
\begin{align}
b(x) =&\ \mathbb{E}(\Delta | W = x), \quad a(x) = \mathbb{E}(\Delta^{2} | W = x), \quad c(x) = \mathbb{E}(\Delta^{3} | W = x), \notag \\
d(x) =&\ \mathbb{E}(\Delta^{4} | W = x), \quad e(x) = \mathbb{E}(\Delta^{5} | W = x). \label{eq:abcd}
\end{align}
Since this proof relies heavily on Section~\ref{sec:v2def}, we use this notation and then convert to use $\delta^k p_k(x) = \mathbb{E}(\Delta^{k} | W = x)$ at the end. Our starting point is equation \eqref{eq:taylorfourth}, which we recall for convenience:
\begin{align}
& \mathbb{E} b(W) f'(W)+\frac{1}{2} \mathbb{E} a(W) f''(W) + \frac{1}{6} \mathbb{E} c(W) f'''(W)+\frac{1}{24} \mathbb{E} d(W) f^{(4)}(W) \notag \\
=&\ -\frac{1}{120} \mathbb{E} e(W) f^{(5)}(\xi). \label{eq:taylorfourthinpf}
\end{align}
Our proof relies on several key equations from Section~\ref{sec:v2def}, which we recall as we go. Let $g_{1}(x) = \int_{0}^{x}\frac{d(y)}{a(y)} f'''(y) dy $ and use \eqref{eq:taylorsecond}, or
\begin{align}
\mathbb{E} b(W) f'(W)+\frac{1}{2}\mathbb{E} a(W)f''(W) = -\frac{1}{6} \mathbb{E} c(W) f'''(\xi_2), \label{eq:taylorsecondec}
\end{align}
with $g_{1}(x)$ in place of $f(x)$ there to get
\begin{align*}
\mathbb{E} \frac{b(W)d(W)}{a(W)}f'''(W)+\mathbb{E} \frac{a(W)}{2}\Big(\frac{d(W)}{a(W)}\Big)'f'''(W)+\frac{1}{2}\mathbb{E} d(W)f^{(4)}(W) = -\frac{1}{6} \mathbb{E} c(W) g_{1}'''(\xi_2).
\end{align*}
We multiply both sides by $1/12$ and subtract the result from \eqref{eq:taylorfourthinpf} to get
\begin{align}
&\mathbb{E} b(W) f'(W)+\frac{1}{2}\mathbb{E} a(W)f''(W) + \mathbb{E} \bar c(W) f'''(W) \notag \\
=&\ \frac{1}{72} \mathbb{E} c(W) g_{1}'''(\xi_2) -\frac{1}{120} \mathbb{E} e(W) f^{(5)}(\xi_1), \label{eq:arintermv3}
\end{align}
where $\bar c(x) = \frac{1}{6} c(x) - \frac{b(x)d(x)}{ 12 a(x)} - \frac{a(x)}{24}\Big(\frac{d(x)}{a(x)}\Big)' $. Note that $\bar c(x) = \delta^3 \bar p_3(x)$. Next, let $g_2(x) = \int_{0}^{x}\frac{\bar c(y) }{\underline{v}_2(y) }f''(y) dy $, where
\begin{align*}
\underline{v}_2(x) = \frac{a(x)}{2}-\frac{b(x)c(x)}{3a(x)}-\frac{a(x)}{6}\Big(\frac{c(x)}{a(x)}\Big)', \quad x \in \mathbb{R}
\end{align*}
is identical to $\underline{v}_2(x)$ defined in \eqref{eq:underlinev2}, and note that $\underline{v}_2(x) = \delta^2 \underline{p}_2(x)$.
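The scaling $\underline{v}_2(x)=\delta^2\underline{p}_2(x)$ can be verified numerically: substituting $b=\delta p_1$, $a=\delta^2 p_2$, $c=\delta^3 p_3$ shows in particular that $(c/a)'=\delta(p_3/p_2)'$, which is the source of the factor $\delta$ on the derivative term of $\underline{p}_2$. A small pure-Python check (ours), using exact derivatives of the closed-form $p_k$:

```python
import math

def p(k, x, alpha):
    # p_k(x) = E(Delta^k | W = x) / delta^k
    u = x * math.sqrt(alpha) + 1.0
    total, prod = 1.0, 1.0
    for i in range(1, k + 1):
        prod /= 1 + i * alpha
        total += (-1) ** i * u ** i * prod
    return math.factorial(k) * total

def dp(k, x, alpha):
    # exact derivative of p_k in x; note the factor sqrt(alpha) = delta
    u = x * math.sqrt(alpha) + 1.0
    total, prod = 0.0, 1.0
    for i in range(1, k + 1):
        prod /= 1 + i * alpha
        total += (-1) ** i * i * math.sqrt(alpha) * u ** (i - 1) * prod
    return math.factorial(k) * total

def v2_moment_form(x, alpha):
    # underline v2 built from a, b, c and (c/a)' by the quotient rule
    d = math.sqrt(alpha)
    a, b, c = d ** 2 * p(2, x, alpha), d * p(1, x, alpha), d ** 3 * p(3, x, alpha)
    da, dc = d ** 2 * dp(2, x, alpha), d ** 3 * dp(3, x, alpha)
    return a / 2 - b * c / (3 * a) - (a / 6) * (dc * a - c * da) / a ** 2

def v2_p_form(x, alpha):
    # delta^2 * underline p2; note the delta on the derivative term
    d = math.sqrt(alpha)
    p1, p2, p3 = p(1, x, alpha), p(2, x, alpha), p(3, x, alpha)
    rp = (dp(3, x, alpha) * p2 - p3 * dp(2, x, alpha)) / p2 ** 2
    return d ** 2 * (p2 / 2 - p1 * p3 / (3 * p2) - d / 6 * p2 * rp)
```

The two forms agree to rounding error at every test point.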
We use \eqref{f4}, or
\begin{align}
& \mathbb{E} b(W)f'(W)+\mathbb{E}\Big(\frac{a(W)}{2}-\frac{b(W)c(W)}{3a(W)}-\frac{a(W)}{6}\Big(\frac{c(W)}{a(W)}\Big)'\Big)f''(W) \notag \\
=&\ \frac{1}{18} \mathbb{E} c(W) g'''(\xi_2) -\frac{1}{24} \mathbb{E} d(W) f^{(4)}(\xi_1), \label{f4ec}
\end{align}
with $g_2(x)$ in place of $f(x)$ there to get
\begin{align*}
& \mathbb{E} b(W)\frac{\bar c(W) }{\underline{v}_2(W) }f''(W)+\mathbb{E} \underline{v}_2(W) \Big(\frac{\bar c(W) }{\underline{v}_2(W) }\Big)' f''(W) + \mathbb{E} \underline{v}_2(W) \frac{\bar c(W) }{\underline{v}_2(W) }f'''(W) \notag \\
=&\ \frac{1}{18} \mathbb{E} c(W) \Big(\frac{c(x)}{a(x)} g_{2}''(x) \Big)'' \Big|_{x = \xi_4 } -\frac{1}{24} \mathbb{E} d(W) g_2^{(4)}(\xi_3).
\end{align*}
Subtracting the equation above from \eqref{eq:arintermv3}, we conclude that
\begin{align*}
&\mathbb{E} b(W) f'(W)+\mathbb{E} \Big(\frac{a(W)}{2}-\frac{b(W)\bar c(W)}{ \underline{v}_2(W)}-\underline{v}_2(W)\Big(\frac{\bar c(W)}{\underline{v}_2(W)}\Big)'\Big)f''(W) \notag \\
=&\ \ - \frac{1}{18} \mathbb{E} c(W) \Big(\frac{c(x)}{a(x)} g_{2}''(x) \Big)'' \Big|_{x = \xi_4 } + \frac{1}{24} \mathbb{E} d(W) g_2^{(4)}(\xi_3) \\
&+ \frac{1}{72} \mathbb{E} c(W) g_{1}'''(\xi_2) -\frac{1}{120} \mathbb{E} e(W) f^{(5)}(\xi_1).
\end{align*}
To conclude, we note that $g_2^{(4)}(\xi_3) =\Big( \frac{\bar c(x) }{\underline{v}_2(x) }f''(x) \Big)'''\Big|_{x = \xi_3 }$ and $g_{1}'''(\xi_2) = \Big( \frac{ d(x) }{a(x) }f'''(x) \Big)''\Big|_{x = \xi_2 }$ and then substitute $\delta p_1(x)$ for $b(x)$, $\delta^2 p_2(x)$ for $a(x)$, etc., where they appear above.
\hfill $\square$\endproof
\subsection{Proof of Lemma~\ref{lem:ar1}}
\label{sec:ar1pf}
\proof{}
Recall that $\Delta = W' - W = \delta \big(D_{\infty}(e^{-\alpha Z} - 1) + X\big)$, so
\begin{align*}
\mathbb{E}(\Delta^{k} | D_{\infty} = d) =&\ \delta^k \mathbb{E} \big(d(e^{-\alpha Z} - 1) + X \big)^{k} = \delta^{k} \sum_{i=0}^{k} {k \choose i} \mathbb{E} \Big(X^{i}\big( e^{-\alpha Z} - 1\big)^{k-i}\Big) d^{k-i}.
\end{align*}
Since $X$ and $Z$ are independent and exponentially distributed with mean $1$, we have $\mathbb{E} X^i = i!$ and
\begin{align*}
{k \choose i} \mathbb{E} \Big(X^{i}\big( e^{-\alpha Z} - 1\big)^{k-i}\Big) =&\ \frac{k!}{(k-i)!} \mathbb{E} \big( e^{-\alpha Z} - 1\big)^{k-i} = \frac{k!}{(k-i)!} \int_{0}^{\infty} \big(e^{-\alpha z} - 1\big)^{k-i} e^{-z} dz.
\end{align*}
Using integration by parts,
\begin{align*}
&\int_{0}^{\infty} \big(e^{-\alpha z} - 1\big)^{k-i} e^{-z} dz \\
=&\ (k-i) (-\alpha) \int_{0}^{\infty} \big(e^{-\alpha z} - 1\big)^{k-i-1} e^{-(1+\alpha)z} dz\\
=&\ (k-i)(k-i-1) (-\alpha)^2 \frac{1}{1 + \alpha} \int_{0}^{\infty} \big(e^{-\alpha z} - 1\big)^{k-i-2} e^{-(1+2\alpha)z} dz\\
& \ldots \\
=&\ (k-i)! (-\alpha)^{k-i} \frac{1}{1 + \alpha}\frac{1}{1 + 2\alpha} \ldots \frac{1}{1+(k-i-1)\alpha} \int_{0}^{\infty} e^{-(1+(k-i)\alpha)z} dz\\
=&\ (k-i)! (-\alpha)^{k-i} \frac{1}{1 + \alpha}\frac{1}{1 + 2\alpha} \ldots \frac{1}{1+(k-i)\alpha}.
\end{align*}
Substituting this expression back into the binomial expansion above and reindexing the sum yields the formula stated in Lemma~\ref{lem:ar1ec}.
\hfill $\square$\endproof
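The closed form produced by the integration by parts can be double-checked by instead expanding $(e^{-\alpha z}-1)^{m}$, $m=k-i$, with the binomial theorem and integrating term by term via $\int_{0}^{\infty} e^{-(1+l\alpha)z}\,dz = 1/(1+l\alpha)$. A short Python sketch (ours):

```python
import math

def integral_closed_form(m, alpha):
    # m! (-alpha)^m prod_{j=1}^{m} 1/(1+j*alpha), from integration by parts
    val = math.factorial(m) * (-alpha) ** m
    for j in range(1, m + 1):
        val /= 1 + j * alpha
    return val

def integral_binomial(m, alpha):
    # expand (e^{-alpha z}-1)^m and integrate term by term
    return sum(math.comb(m, l) * (-1) ** (m - l) / (1 + l * alpha)
               for l in range(m + 1))
```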
\ACKNOWLEDGMENT{We thank Zhuosong Zhang for proving Lemma~\ref{lem:hosmec}. We thank Yige Hong and Zhuoyang Liu for producing
some figures of this paper. Xiao Fang is partially supported by Hong
Kong RGC grants 24301617, 14302418 and 14304917, a CUHK direct grant
and a CUHK start-up grant. J. G. Dai is partially supported by NSF grant CMMI-1537795.}
\bibliographystyle{informs2014}
arXiv:2012.02824, "High order steady-state diffusion approximations" (math.PR), https://arxiv.org/abs/2012.02824
https://arxiv.org/abs/1906.03125 | A new Federer-type characterization of sets of finite perimeter in metric spaces

Federer's characterization states that a set $E\subset \mathbb{R}^n$ is of finite perimeter if and only if $\mathcal H^{n-1}(\partial^*E)<\infty$. Here the measure-theoretic boundary $\partial^*E$ consists of those points where both $E$ and its complement have positive upper density. We show that the characterization remains true if $\partial^*E$ is replaced by a smaller boundary consisting of those points where the \emph{lower} densities of both $E$ and its complement are at least a given number. This result is new even in Euclidean spaces but we prove it in a more general complete metric space that is equipped with a doubling measure and supports a Poincar\'e inequality.

\section{Introduction}
Federer's \cite{Fed} characterization of sets of finite perimeter
states that a set $E\subset {\mathbb R}^n$ is of finite perimeter if and only if
$\mathcal H^{n-1}(\partial^*E)<\infty$,
where $\mathcal H^{n-1}$ is the $n-1$-dimensional Hausdorff measure and
$\partial^*E$ is the measure-theoretic boundary;
see Section \ref{sec:preliminaries} for definitions.
A similar characterization holds also in the abstract setting of complete metric spaces
$(X,d,\mu)$
that are equipped with a doubling measure $\mu$ and support a Poincar\'e inequality;
in such spaces one replaces the $n-1$-dimensional Hausdorff measure
with the \emph{codimension one} Hausdorff measure $\mathcal H$.
The ``only if'' direction of the characterization was
shown in metric spaces by Ambrosio \cite{A1}, and the ``if'' direction was recently
shown by the author \cite{L-Fedchar}.
Federer also showed that if a set $E\subset {\mathbb R}^n$ is of finite perimeter,
then $\mathcal H^{n-1}(\partial^*E\setminus \Sigma_{1/2}E)=0$,
where the boundary $\Sigma_{1/2}E$ consists of those points where both
$E$ and its complement have density exactly $1/2$.
In metric spaces we similarly have
$\mathcal H(\partial^*E\setminus \Sigma_{\gamma}E)=0$,
where $0<\gamma\le 1/2$ is a suitable constant depending on the space
and the \emph{strong boundary} $\Sigma_{\gamma}E$ is defined by
\[
\Sigma_{\gamma} E:=\left\{x\in X:\, \liminf_{r\to 0}\frac{\mu(B(x,r)\cap E)}{\mu(B(x,r))}\ge \gamma\ \ \textrm{and}\ \ \liminf_{r\to 0}\frac{\mu(B(x,r)\setminus E)}{\mu(B(x,r))}\ge \gamma\right\}.
\]
This raises the natural question of whether the condition
$\mathcal H(\Sigma_{\beta} E)<\infty$
for some $\beta>0$, which appears much weaker than $\mathcal H(\partial^* E)<\infty$,
is already enough to imply that $E$
is of finite perimeter.
Recently Chleb\'ik \cite{Chl}
posed this question in Euclidean spaces and noted that
the (positive) answer is known only when $n=1$.
In the current paper we show that this characterization does indeed hold in every Euclidean space and even in the much more general metric spaces that we consider.
\begin{theorem}\label{thm:main theorem}
Let $(X,d,\mu)$ be a complete metric space with $\mu$ doubling and supporting
a $(1,1)$-Poincar\'e inequality.
Let $\Omega\subset X$ be an open set and let $E\subset X$ be a $\mu$-measurable set
with $\mathcal H(\Sigma_{\beta} E\cap \Omega)<\infty$,
where $0<\beta\le 1/2$ only depends on the doubling constant of the measure
and the constants in the Poincar\'e inequality. Then $P(E,\Omega)<\infty$.
\end{theorem}
Explicitly, in the Euclidean space ${\mathbb R}^n$ with $n\ge 2$, we can take
(see \eqref{eq:choice of beta in Euclidean space})
\[
\beta= \frac{n^{13n/2}}{2^{26n^2+64n+15}\omega_n^{13}},
\]
where $\omega_n$ is the volume of the Euclidean unit ball.
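To illustrate how small this admissible constant is, it can be evaluated in logarithmic scale (a quick numerical aside, ours, not needed for the results); for $n=2$ it is of order $10^{-77}$.

```python
import math

def beta(n):
    # beta = n^{13n/2} / (2^{26n^2+64n+15} * omega_n^{13}), n >= 2,
    # computed in log2 scale to avoid overflow of the huge denominator
    omega_n = math.pi ** (n / 2) / math.gamma(n / 2 + 1)  # unit-ball volume
    log2_beta = ((13 * n / 2) * math.log2(n)
                 - (26 * n ** 2 + 64 * n + 15)
                 - 13 * math.log2(omega_n))
    return 2.0 ** log2_beta
```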
Our strategy is to show that if $\mathcal H(\Sigma_{\beta}E\cap \Omega)<\infty$,
then $\mathcal H((\partial^*E\setminus \Sigma_{\beta}E)\cap \Omega)=0$
and so the result follows from the previously known Federer's characterization.
Our proof consists essentially of two steps.
First in Section \ref{sec:strong boundary points},
we show that for every point in the measure-theoretic boundary $\partial^*E$,
arbitrarily close there is a point in the strong boundary $\Sigma_{\beta} E$.
Then, after some preliminary results concerning connected components of sets
of finite perimeter as well as functions of least gradient in Sections
\ref{sec:components} and \ref{sec:least gradient},
in Section \ref{sec:constructing a quasiconvex space} we
show that there exists an open set $V$ containing a suitable part of $\Sigma_{\beta}E$ such that
$X\setminus V$ is itself a metric space with rather good properties.
Thus we can apply the first step in this space.
In Section \ref{sec:proof of the main result} we combine the two steps to prove
Theorem \ref{thm:main theorem}.
\paragraph{Acknowledgments.}
The author wishes to thank Nageswari Shanmugalingam for many helpful comments
as well as for discussions on
constructing spaces where the Mazurkiewicz metric agrees with the ordinary one;
Anders Bj\"orn also for discussions on constructing such spaces; and
Olli Saari for discussions on finding strong boundary
points.
\section{Notation and definitions}\label{sec:preliminaries}
In this section we introduce the notation, definitions,
and assumptions that are employed in the paper.
Throughout this paper, $(X,d,\mu)$ is a complete metric space that is equip\-ped
with a metric $d$ and a Borel regular outer measure $\mu$ satisfying
a doubling property, meaning that
there exists a constant $C_d\ge 1$ such that
\[
0<\mu(B(x,2r))\le C_d\mu(B(x,r))<\infty
\]
for every ball $B(x,r):=\{y\in X:\,d(y,x)<r\}$, with $x\in X$ and $r>0$.
Closed balls are denoted by $\overline{B}(x,r):=\{y\in X:\,d(y,x)\le r\}$.
By iterating the doubling condition, we obtain that for every $x\in X$ and $y\in B(x,R)$ with $0<r\le R<\infty$, we have
\begin{equation}\label{eq:homogenous dimension}
\frac{\mu(B(y,r))}{\mu(B(x,R))}\ge \frac{1}{C_d^2}\left(\frac{r}{R}\right)^{s},
\end{equation}
where $s>1$ only depends on the doubling constant $C_d$.
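For the reader's convenience, we sketch the standard argument behind \eqref{eq:homogenous dimension} (assuming, as we may after increasing $C_d$, that $C_d>2$). Let $k$ be the smallest integer with $2^{k}r\ge 2R$; then $k\le \log_2(4R/r)$, and since $y\in B(x,R)$ we have $B(x,R)\subset B(y,2R)\subset B(y,2^{k}r)$. Iterating the doubling condition $k$ times gives
\[
\mu(B(x,R))\le C_d^{\,k}\,\mu(B(y,r))\le C_d^{\log_2(4R/r)}\,\mu(B(y,r))
=C_d^{2}\left(\frac{R}{r}\right)^{\log_2 C_d}\mu(B(y,r)),
\]
so \eqref{eq:homogenous dimension} holds with $s=\log_2 C_d>1$.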
Given a ball $B=B(x,r)$ and $\beta>0$, we sometimes abbreviate $\beta B:=B(x,\beta r)$;
note that in a metric space, a ball (as a set) does not necessarily have a unique center point and radius, but these will be prescribed for all the balls
that we consider.
We assume that $X$ consists of at least $2$ points.
When we want to state that a constant $C$
depends on the parameters $a,b, \ldots$, we write $C=C(a,b,\ldots)$.
When a property holds outside a set of $\mu$-measure zero, we say that it holds
almost everywhere, abbreviated a.e.
All functions defined on $X$ or its subsets will take values in $[-\infty,\infty]$.
As a complete metric space equipped with a doubling measure, $X$ is proper,
that is, closed and bounded sets are compact.
Since $X$ is proper, for any open set $\Omega\subset X$
we define $L_{\mathrm{loc}}^1(\Omega)$ to be the space of
functions that are in $L^1(\Omega')$ for every open $\Omega'\Subset\Omega$.
Here $\Omega'\Subset\Omega$ means that $\overline{\Omega'}$ is a
compact subset of $\Omega$.
Other local spaces of functions are defined analogously.
For any set $A\subset X$ and $0<R<\infty$, the restricted Hausdorff content
of codimension one is defined by
\[
\mathcal{H}_{R}(A):=\inf\left\{ \sum_{j\in I}
\frac{\mu(B(x_{j},r_{j}))}{r_{j}}:\,A\subset
\bigcup_{j\in I}B(x_{j},r_{j}),\,r_{j}\le R,\,I\subset{\mathbb N}\right\}.
\]
The codimension one Hausdorff measure of $A\subset X$ is then defined by
\[
\mathcal{H}(A):=\lim_{R\rightarrow 0}\mathcal{H}_{R}(A).
\]
In the Euclidean space ${\mathbb R}^n$ (equipped with the Euclidean metric and the $n$-dimensional Lebesgue measure) this is comparable to the $n-1$-dimensional
Hausdorff measure, since then $\mu(B(x,r))/r=\omega_n r^{n-1}$ with $\omega_n$ the volume of the unit ball.
By a curve we mean a rectifiable continuous mapping from a compact interval of the real line into $X$.
The length of a curve $\gamma$
is denoted by $\ell_{\gamma}$. We will assume every curve to be parametrized
by arc-length, which can always be done (see e.g. \cite[Theorem~3.2]{Hj}).
A nonnegative Borel function $g$ on $X$ is an upper gradient
of a function $u$
on $X$ if for all nonconstant curves $\gamma$, we have
\begin{equation}\label{eq:definition of upper gradient}
|u(x)-u(y)|\le \int_{\gamma} g\,ds:=\int_0^{\ell_{\gamma}} g(\gamma(s))\,ds,
\end{equation}
where $x$ and $y$ are the end points of $\gamma$.
We interpret $|u(x)-u(y)|=\infty$ whenever
at least one of $|u(x)|$, $|u(y)|$ is infinite.
Upper gradients were originally introduced in \cite{HK}.
The $1$-modulus of a family of curves $\Gamma$ is defined by
\[
\Mod_{1}(\Gamma):=\inf\int_{X}\rho\, d\mu
\]
where the infimum is taken over all nonnegative Borel functions $\rho$
such that $\int_{\gamma}\rho\,ds\ge 1$ for every curve $\gamma\in\Gamma$.
A property is said to hold for $1$-a.e. curve
if it fails only for a curve family with zero $1$-modulus.
If $g$ is a nonnegative $\mu$-measurable function on $X$
and (\ref{eq:definition of upper gradient}) holds for $1$-a.e. curve,
we say that $g$ is a $1$-weak upper gradient of $u$.
By only considering curves $\gamma$ in a set $A\subset X$,
we can talk about a function $g$ being a ($1$-weak) upper gradient of $u$ in $A$.\label{curve discussion}
Given an open set $\Omega\subset X$, we let
\[
\Vert u\Vert_{N^{1,1}(\Omega)}:=\Vert u\Vert_{L^1(\Omega)}+\inf \Vert g\Vert_{L^1(\Omega)},
\]
where the infimum is taken over all upper gradients $g$ of $u$ in $\Omega$.
Then we define the Newton-Sobolev space
\[
N^{1,1}(\Omega):=\{u:\|u\|_{N^{1,1}(\Omega)}<\infty\}.
\]
In ${\mathbb R}^n$ this coincides, up to a choice of pointwise representatives,
with the usual Sobolev space $W^{1,1}(\Omega)$; this is shown in
Theorem 4.5 of \cite{S}, where the Newton-Sobolev space was originally
introduced.
We understand Newton-Sobolev functions to be defined at every point $x\in \Omega$
(even though $\Vert \cdot\Vert_{N^{1,1}(\Omega)}$ is then only a seminorm).
It is known that for every $u\in N_{\mathrm{loc}}^{1,1}(\Omega)$ there exists a minimal $1$-weak
upper gradient of $u$ in $\Omega$, always denoted by $g_{u}$, satisfying $g_{u}\le g$
a.e. in $\Omega$ for any other $1$-weak upper gradient $g\in L_{\mathrm{loc}}^{1}(\Omega)$
of $u$ in $\Omega$, see \cite[Theorem 2.25]{BB}.
In ${\mathbb R}^n$, the minimal $1$-weak upper gradient coincides (a.e.) with $|\nabla u|$,
see \cite[Corollary A.4]{BB}.
We will assume throughout the paper that $X$ supports a $(1,1)$-Poincar\'e inequality,
meaning that there exist constants $C_P\ge 1$ and $\lambda \ge 1$ such that for every
ball $B(x,r)$, every $u\in L^1_{\mathrm{loc}}(X)$,
and every upper gradient $g$ of $u$,
we have
\[
\vint{B(x,r)}|u-u_{B(x,r)}|\, d\mu
\le C_P r\vint{B(x,\lambda r)}g\,d\mu,
\]
where
\[
u_{B(x,r)}:=\vint{B(x,r)}u\,d\mu :=\frac 1{\mu(B(x,r))}\int_{B(x,r)}u\,d\mu.
\]
As \label{quasiconvex and geodesic}a complete metric space equipped with a doubling measure and supporting a Poincar\'e
inequality, $X$ is \emph{quasiconvex}, meaning that for every
pair of points $x,y\in X$ there is a curve $\gamma$ with $\gamma(0)=x$,
$\gamma(\ell_{\gamma})=y$, and $\ell_{\gamma}\le C\,d(x,y)$,
where the constant $C$ only depends on $C_d$ and $C_P$, see e.g. \cite[Theorem 4.32]{BB}.
Thus a biLipschitz change in the metric gives a geodesic space
(see \cite[Section 4.7]{BB}).
Since Theorem \ref{thm:main theorem}
is easily seen to be invariant under such a biLipschitz
change in the metric, we can assume that $X$ is geodesic.
By \cite[Theorem 4.39]{BB}, in the Poincar\'e inequality we can now choose
$\lambda=1$.
The $1$-capacity of a set $A\subset X$ is defined by
\[
\capa_1(A):=\inf \Vert u\Vert_{N^{1,1}(X)},
\]
where the infimum is taken over all functions $u\in N^{1,1}(X)$ satisfying
$u\ge 1$ in $A$.
The variational $1$-capacity of a set $A\subset \Omega$
with respect to an open set $\Omega\subset X$ is defined by
\[
\rcapa_1(A,\Omega):=\inf \int_X g_u \,d\mu,
\]
where the infimum is taken over functions $u\in N^{1,1}(X)$ satisfying
$u=0$ in $X\setminus\Omega$ and
$u\ge 1$ in $A$, and $g_u$ is the minimal $1$-weak upper gradient of $u$ (in $X$).
By truncation, we see that we can assume $0\le u\le 1$ on $X$.
The variational $1$-capacity is an outer
capacity in the sense that if $A\Subset \Omega$, then
\begin{equation}\label{eq:rcapa outer capacity}
\rcapa_{1}(A,\Omega)
=\inf_{\substack{V\textrm{ open} \\A\subset V\subset \Omega}}\rcapa_{1}(V,\Omega);
\end{equation}
see \cite[Theorem 6.19(vii)]{BB}.
For basic properties satisfied by capacities, such as monotonicity and countable subadditivity, see e.g. \cite{BB}.
We say that a set $U\subset X$ is $1$-quasiopen\label{quasiopen}
if for every $\varepsilon>0$ there exists an
open set $G\subset X$ such that $\capa_1(G)<\varepsilon$ and $U\cup G$ is open.
Next we present the definition and basic properties of functions
of bounded variation on metric spaces, following \cite{M}. See also e.g. \cite{AFP, EvGa, Fed, Giu84, Zie89} for the classical
theory in the Euclidean setting.
Given an open set $\Omega\subset X$ and a function $u\in L^1_{\mathrm{loc}}(\Omega)$,
we define the total variation of $u$ in $\Omega$ by
\[
\|Du\|(\Omega):=\inf\left\{\liminf_{i\to\infty}\int_\Omega g_{u_i}\,d\mu:\, u_i\in N^{1,1}_{\mathrm{loc}}(\Omega),\, u_i\to u\textrm{ in } L^1_{\mathrm{loc}}(\Omega)\right\},
\]
where each $g_{u_i}$ is the minimal $1$-weak upper gradient of $u_i$
in $\Omega$.
In ${\mathbb R}^n$ this agrees with the usual Euclidean definition involving distributional
derivatives, see e.g. \cite[Proposition 3.6, Theorem 3.9]{AFP}.
(In \cite{M}, local Lipschitz constants were used in place of upper gradients, but the theory
can be developed similarly with either definition.)
We say that a function $u\in L^1(\Omega)$ is of bounded variation,
and denote $u\in\mathrm{BV}(\Omega)$, if $\|Du\|(\Omega)<\infty$.
For an arbitrary set $A\subset X$, we define
\[
\|Du\|(A):=\inf\{\|Du\|(W):\, A\subset W,\,W\subset X
\text{ is open}\}.
\]
If $u\in L^1_{\mathrm{loc}}(\Omega)$ and $\Vert Du\Vert(\Omega)<\infty$,
then $\|Du\|(\cdot)$ is
a Borel regular outer measure on $\Omega$ by \cite[Theorem 3.4]{M}.
A $\mu$-measurable set $E\subset X$ is said to be of finite perimeter if $\|D\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_E\|(X)<\infty$, where $\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_E$ is the characteristic function of $E$.
The perimeter of $E$ in $\Omega$ is also denoted by
\[
P(E,\Omega):=\|D\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_E\|(\Omega).
\]
The measure-theoretic interior of a set $E\subset X$ is defined by
\begin{equation}\label{eq:measure theoretic interior}
I_E:=
\left\{x\in X:\,\lim_{r\to 0}\frac{\mu(B(x,r)\setminus E)}{\mu(B(x,r))}=0\right\},
\end{equation}
and the measure-theoretic exterior by
\[
O_E:=
\left\{x\in X:\,\lim_{r\to 0}\frac{\mu(B(x,r)\cap E)}{\mu(B(x,r))}=0\right\}.
\]
The measure-theoretic boundary $\partial^{*}E$ is defined as the set of points
$x\in X$
at which both $E$ and its complement have nonzero upper density, i.e.
\[
\limsup_{r\to 0}\frac{\mu(B(x,r)\cap E)}{\mu(B(x,r))}>0\quad
\textrm{and}\quad\limsup_{r\to 0}\frac{\mu(B(x,r)\setminus E)}{\mu(B(x,r))}>0.
\]
Note that the space $X$ is always partitioned into the disjoint sets
$I_E$, $O_E$, and $\partial^*E$.
By Lebesgue's differentiation theorem (see e.g. \cite[Chapter 1]{Hei}),
for a $\mu$-measurable set $E$ we have $\mu(E\Delta I_E)=0$,
where $\Delta$ is the symmetric difference.
Given a number $0<\gamma\le 1/2$, we also define the strong boundary
\begin{equation}\label{eq:strong boundary}
\Sigma_{\gamma} E:=\left\{x\in X:\, \liminf_{r\to 0}\frac{\mu(B(x,r)\cap E)}{\mu(B(x,r))}\ge \gamma\ \, \textrm{and}\ \, \liminf_{r\to 0}\frac{\mu(B(x,r)\setminus E)}{\mu(B(x,r))}\ge \gamma\right\}.
\end{equation}
For an open set $\Omega\subset X$ and a $\mu$-measurable set $E\subset X$ with
$P(E,\Omega)<\infty$, we have
$\mathcal H((\partial^*E\setminus \Sigma_{\gamma}E)\cap\Omega)=0$
for $\gamma \in (0,1/2]$ that only depends on $C_d$ and $C_P$,
see \cite[Theorem 5.4]{A1}.
Moreover, for any Borel set $A\subset\Omega$ we have
\begin{equation}\label{eq:def of theta}
P(E,A)=\int_{\partial^{*}E\cap A}\theta_E\,d\mathcal H,
\end{equation}
where
$\theta_E\colon \Omega\to [\alpha,C_d]$ with $\alpha=\alpha(C_d,C_P)>0$, see \cite[Theorem 5.3]{A1}
and \cite[Theorem 4.6]{AMP}.
The following coarea formula is given in \cite[Proposition 4.2]{M}:
if $\Omega\subset X$ is an open set and $u\in L^1_{\mathrm{loc}}(\Omega)$, then
\begin{equation}\label{eq:coarea}
\|Du\|(\Omega)=\int_{-\infty}^{\infty}P(\{u>t\},\Omega)\,dt,
\end{equation}
where we abbreviate $\{u>t\}:=\{x\in \Omega:\,u(x)>t\}$.
If $\Vert Du\Vert(\Omega)<\infty$, then \eqref{eq:coarea} holds with $\Omega$ replaced by
any Borel set $A\subset \Omega$.
We know that for an open set $\Omega\subset X$, an arbitrary set $A\subset \Omega$,
and any $\mu$-measurable sets
$E_1,E_2\subset X$, we have
\begin{equation}\label{eq:lattice property of sets of finite perimeter}
P(E_1\cap E_2,A)+P(E_1\cup E_2,A)\le P(E_1,A)+P(E_2,A);
\end{equation}
for a proof in the case $A=\Omega$ see \cite[Proposition 4.7]{M},
and then the general case follows by approximation.
Using this fact as well as the lower semicontinuity of the total
variation with respect to $L_{\mathrm{loc}}^1$-convergence in open sets, we have
for any $E_1,E_2\ldots \subset X$ that
\begin{equation}\label{eq:perimeter of countable union}
P\Bigg(\bigcup_{j=1}^{\infty}E_j,\Omega\Bigg)
\le \sum_{j=1}^{\infty}P(E_j,\Omega).
\end{equation}
Applying the Poincar\'e inequality to sequences of approximating
$N^{1,1}_{\mathrm{loc}}$-functions in the definition of the total variation, we get
the following $\mathrm{BV}$ version:
for every ball $B(x,r)$ and every
$u\in L^1_{\mathrm{loc}}(X)$, we have
\[
\int_{B(x,r)}|u-u_{B(x,r)}|\,d\mu
\le C_P r \Vert Du\Vert (B(x, r)).
\]
Recall here and from now on that we take the constant $\lambda$ to be $1$,
and so it does not appear in the inequalities.
For a $\mu$-measurable set $E\subset X$, by considering the two cases
$(\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_E)_{B(x,r)}\le 1/2$ and $(\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_E)_{B(x,r)}\ge 1/2$, from the above we get
the relative isoperimetric inequality
\begin{equation}\label{eq:relative isoperimetric inequality}
\min\{\mu(B(x,r)\cap E),\,\mu(B(x,r)\setminus E)\}\le 2 C_P rP(E,B(x,r)).
\end{equation}
From the $(1,1)$-Poincar\'e inequality,
by \cite[Theorem 4.21, Theorem 5.51]{BB}
we also get the following Sobolev inequality:
if $x\in X$, $0<r<\frac{1}{4}\diam X$, and $u\in N^{1,1}(X)$ with $u=0$
in $X\setminus B(x,r)$, then
\begin{equation}\label{eq:sobolev inequality}
\int_{B(x,r)} |u|\,d\mu \le C_S r \int_{B(x,r)} g_u\,d\mu
\end{equation}
for a constant $C_S=C_S(C_d,C_P)\ge 1$.
For any $\mu$-measurable set $E\subset B(x,r)$, applying the Sobolev inequality
to a suitable sequence approximating $u$,
we get the isoperimetric inequality
\begin{equation}\label{eq:isop inequality with zero boundary values}
\mu(E)\le C_S r P(E,X).
\end{equation}
The lower and upper approximate limits of a function $u$ on an open set
$\Omega$
are defined respectively by
\begin{equation}\label{eq:lower approximate limit}
u^{\wedge}(x):
=\sup\left\{t\in{\mathbb R}:\,\lim_{r\to 0}\frac{\mu(B(x,r)\cap\{u<t\})}{\mu(B(x,r))}=0\right\}
\end{equation}
and
\begin{equation}\label{eq:upper approximate limit}
u^{\vee}(x):
=\inf\left\{t\in{\mathbb R}:\,\lim_{r\to 0}\frac{\mu(B(x,r)\cap\{u>t\})}{\mu(B(x,r))}=0\right\}
\end{equation}
for $x\in \Omega$.
Unlike Newton-Sobolev functions, we understand $\mathrm{BV}$ functions to be
equivalence classes of a.e. defined functions,
but $u^{\wedge}$ and $u^{\vee}$ are pointwise defined.
The $\mathrm{BV}$-capacity of a set $A\subset X$ is defined by
\[
\capa_{\mathrm{BV}}(A):=\inf \left(\Vert u\Vert_{L^1(X)}+\Vert Du\Vert(X)\right),
\]
where the infimum is taken over all $u\in\mathrm{BV}(X)$ with $u\ge 1$ in a neighborhood of $A$.
By \cite[Theorem 4.3]{HaKi} we know that for some constant
$C_{\textrm{cap}}=C_{\textrm{cap}}(C_d,C_P)\ge 1$ and every
$A\subset X$, we have
\begin{equation}\label{eq:Newtonian and BV capacities are comparable}
\capa_1(A)\le C_{\textrm{cap}}\capa_{\mathrm{BV}}(A).
\end{equation}
We also define a variational $\mathrm{BV}$-capacity for any $A\subset\Omega$, with
$\Omega\subset X$ open, by
\[
\rcapa^{\vee}_{\mathrm{BV}}(A,\Omega):=\inf \Vert Du\Vert(X),
\]
where the infimum is taken over functions $u\in \mathrm{BV}(X)$ such that
$u^{\wedge}=u^{\vee}= 0$ $\mathcal H$-a.e. in $X\setminus \Omega$ and
$u^{\vee}\ge 1$ $\mathcal H$-a.e. in $A$.
By \cite[Theorem 5.7]{L-SS} we know that
\begin{equation}\label{eq:variational one and BV capacity}
\rcapa_{1}(A,\Omega)\le C_{\textrm{r}}\rcapa^{\vee}_{\mathrm{BV}}(A,\Omega)
\end{equation}
for a constant $C_{\textrm{r}}=C_{\textrm{r}}(C_d,C_P)\ge 1$.
\textbf{Standing assumptions:} In Section \ref{sec:strong boundary points}
we will consider a different metric space $Z$ (which will later be taken
to be a subset of $X$), but in Sections \ref{sec:components} to \ref{sec:proof of the main result}
we will assume that $(X,d,\mu)$ is a complete, geodesic metric space that
is equipped with the doubling Radon measure $\mu$ and supports a
$(1,1)$-Poincar\'e inequality with $\lambda=1$.
\section{Strong boundary points}\label{sec:strong boundary points}
In this section we consider a complete metric space
$(Z,\widehat{d},\mu)$ where $\mu$ is a Borel regular outer measure and
doubling with constant $\widehat{C}_d\ge 1$. We define
the Mazurkiewicz metric
\begin{equation}\label{eq:widehat d c}
\widehat{d}_M(x,y):=\inf\{\diam F:\,F\subset Z\textrm{ is a continuum containing }x,y\},
\quad x,y\in Z,
\end{equation}
and we assume the space to be ``geodesic'' in the sense that
$\widehat{d}_M = \widehat{d}$.
As usual, a continuum means a compact connected set.
\begin{definition}
We say that $(x_0,\ldots,x_m)$ is an $\varepsilon$-chain from $x_0$ to $x_m$
if $\widehat{d}(x_j,x_{j+1})<\varepsilon$ for all $j=0,\ldots,m-1$.
\end{definition}
The following proposition gives the existence of a strong boundary point.
\begin{proposition}\label{prop:strong boundary point}
Let $x_0\in Z$, $R>0$, and let $E\subset Z$ be a $\mu$-measurable set
such that
\begin{equation}\label{eq:half measure assumption}
\frac{1}{2\widehat{C}_d^2}\le \frac{\mu(B(x_0,R)\cap E)}{\mu(B(x_0,R))}\le 1-\frac{1}{2\widehat{C}_d^2}.
\end{equation}
Then there exists a point $x\in B(x_0,6 R)$ such that
\begin{equation}\label{eq:desired density point}
\frac{1}{4 \widehat{C}_d^{12}}\le \liminf_{r\to 0}\frac{\mu(B(x,r)\cap E)}{\mu(B(x,r))}
\le \limsup_{r\to 0}\frac{\mu(B(x,r)\cap E)}{\mu(B(x,r))}
\le 1-\frac{1}{4 \widehat{C}_d^{12}}.
\end{equation}
\end{proposition}
\begin{proof}
The proof is by suitable iteration, where we consider two options.
\textbf{Case 1.}
Suppose that
\begin{equation}\label{eq:E smaller than half everywhere}
\frac{\mu(B(x,2^{-2}R)\cap E)}{\mu(B(x,2^{-2}R))}<\frac{1}{2}
\end{equation}
for all $x\in B(x_0,R)$; the case ``$>$'' is considered analogously.
Define a ``bad'' set
\[
P:=\left\{x\in B(x_0,R):\,\frac{\mu(B(x,2^{-2j}R)\cap E)}{\mu(B(x,2^{-2j}R))}
\le \frac{1}{4\widehat{C}_d^6}\ \ \textrm{for some }j\in{\mathbb N}\right\}.
\]
For every $x\in P$ there is a radius $r_x\le R/20$ such that
\[
\frac{\mu(B(x,5r_x)\cap E)}{\mu(B(x,5r_x))}\le \frac{1}{4\widehat{C}_d^6}.
\]
Thus $\{B(x,r_x)\}_{x\in P}$ is a covering of $P$.
By the $5$-covering theorem, pick a countable collection of
pairwise disjoint balls $\{B(x_j,r_j)\}_{j=1}^{\infty}$ such that
$P\subset \bigcup_{j=1}^{\infty}B(x_j,5r_j)$.
Now
\begin{align*}
\mu(P\cap E)\le \sum_{j=1}^{\infty}\mu(B(x_j,5r_j)\cap E)
&\le \frac{1}{4 \widehat{C}_d^6}\sum_{j=1}^{\infty}\mu(B(x_j,5r_j))\\
&\le \frac{1}{4 \widehat{C}_d^3}\sum_{j=1}^{\infty}\mu(B(x_j,r_j))\\
&\le \frac{1}{4 \widehat{C}_d^3}\mu(B(x_0,2R))\\
&\le \frac{1}{4 \widehat{C}_d^2}\mu(B(x_0,R)).
\end{align*}
Thus
\begin{align*}
\mu(P)
&=\mu(P\cap E)+\mu(P\setminus E)\\
&\le \frac{1}{4\widehat{C}_d^2}\mu(B(x_0,R))+\mu(B(x_0,R)\setminus E)\\
&\le \frac{1}{4\widehat{C}_d^2}\mu(B(x_0,R))+\Bigg(1-\frac{1}{2\widehat{C}_d^2}\Bigg)\mu(B(x_0,R))\quad\textrm{by }\eqref{eq:half measure assumption}\\
&\le \Bigg(1-\frac{1}{4\widehat{C}_d^2}\Bigg)\mu(B(x_0,R)).
\end{align*}
In particular,
there is a point $y\in B(x_0,R)\setminus P$.
Now there are two options.
\textbf{Case 1(a).}
The first option is that for each $j\in{\mathbb N}$, we have
\[
\frac{\mu(B(y,2^{-2j}R)\cap E)}{\mu(B(y,2^{-2j}R))}<\frac{1}{2}
\]
and then in fact
\[
\frac{1}{4\widehat{C}_d^6}\le \frac{\mu(B(y,2^{-2j}R)\cap E)}{\mu(B(y,2^{-2j}R))}<\frac{1}{2},
\]
for all $j\in{\mathbb N}$,
since $y\in B(x_0,R)\setminus P$.
From this we easily find that \eqref{eq:desired density point} holds (with $x=y$).
\textbf{Case 1(b).}
The second option is that there is a smallest index $l\ge 2$ such that
\[
\frac{\mu(B(y,2^{-2l}R)\cap E)}{\mu(B(y,2^{-2l}R))}\ge\frac{1}{2}.
\]
Then
\[
\frac{1}{2\widehat{C}_d^2}\le \frac{\mu(B(y,2^{-2l+2}R)\cap E)}{\mu(B(y,2^{-2l+2}R))}
< \frac{1}{2},
\]
and also
\[
\frac{1}{4\widehat{C}_d^6}\le\frac{\mu(B(y,2^{-2j}R)\cap E)}{\mu(B(y,2^{-2j}R))}
<\frac{1}{2}\quad\textrm{for all }j=1,\ldots,l-2.
\]
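Indeed, the lower bound in the first of these displays follows from the choice
of $l$ and two applications of the doubling property,
\[
\mu(B(y,2^{-2l+2}R)\cap E)\ge \mu(B(y,2^{-2l}R)\cap E)
\ge \frac{1}{2}\mu(B(y,2^{-2l}R))
\ge \frac{1}{2\widehat{C}_d^2}\mu(B(y,2^{-2l+2}R)),
\]
while the upper bounds hold by the minimality of $l$, and the lower bound
$1/(4\widehat{C}_d^6)$ in the second display holds since
$y\in B(x_0,R)\setminus P$.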
Note that regardless of the direction of the inequality in
\eqref{eq:E smaller than half everywhere}, we get
\[
\frac{1}{2\widehat{C}_d^2}\le \frac{\mu(B(y,2^{-2l+2}R)\cap E)}{\mu(B(y,2^{-2l+2}R))}
< 1-\frac{1}{2\widehat{C}_d^2}
\]
and
\begin{equation}\label{eq:doubling constant six estimate}
\frac{1}{4\widehat{C}_d^6}\le\frac{\mu(B(y,2^{-2j}R)\cap E)}{\mu(B(y,2^{-2j}R))}
\le1-\frac{1}{4\widehat{C}_d^6}\quad\textrm{for all }j=1,\ldots,l-2.
\end{equation}
\textbf{Case 2.}
Alternatively, suppose that we find two points $x,y\in B(x_0,R)$ such that
\[
\frac{\mu(B(x,2^{-2}R)\cap E)}{\mu(B(x,2^{-2}R))}\ge \frac{1}{2}
\]
and
\[
\frac{\mu(B(y,2^{-2}R)\cap E)}{\mu(B(y,2^{-2}R))}\le \frac{1}{2}.
\]
Then, using the fact that $\widehat{d}_M=\widehat{d}$,
we find a continuum $F$ that contains
$x$ and $y$ and is contained in $B(x_0,3 R)$.
Since $F$ is connected, for every $\varepsilon>0$ there is an
$\varepsilon$-chain in $F$ from $x$ to $y$. In particular, we
find an $R/4$-chain in $F$ from $x$ to $y$.
Let $z$ be the last point in the chain for which we have
\[
\frac{\mu(B(z,2^{-2}R)\cap E)}{\mu(B(z,2^{-2}R))}\ge \frac{1}{2}.
\]
If $z=y$, then we have
\[
\frac{\mu(B(z,2^{-2}R)\cap E)}{\mu(B(z,2^{-2}R))}= \frac{1}{2}.
\]
Otherwise, the next point $w\in F$ in the chain satisfies $\widehat{d}(z,w)<R/4$ and
\[
\frac{\mu(B(w,2^{-2}R)\cap E)}{\mu(B(w,2^{-2}R))}< \frac{1}{2}\quad\textrm{and thus}
\quad \frac{\mu(B(w,2^{-2}R)\setminus E)}{\mu(B(z,2^{-1}R))}\ge
\frac{1}{2\widehat{C}_d^2}.
\]
Now
\begin{align*}
\frac{\mu(B(z,2^{-1}R)\cap E)}{\mu(B(z,2^{-1}R))}
&= \frac{\mu(B(z,2^{-1}R))-\mu(B(z,2^{-1}R)\setminus E)}{\mu(B(z,2^{-1}R))}\\
&\le \frac{\mu(B(z,2^{-1}R))-\mu(B(w,2^{-2}R)\setminus E)}{\mu(B(z,2^{-1}R))}\\
&\le 1-\frac{1}{2\widehat{C}_d^2}.
\end{align*}
On the other hand,
\[
\frac{\mu(B(z,2^{-1}R)\cap E)}{\mu(B(z,2^{-1}R))}
\ge\frac{\mu(B(z,2^{-2}R)\cap E)}{\widehat{C}_d\mu(B(z,2^{-2}R))}
\ge \frac{1}{2\widehat{C}_d}.
\]
In conclusion, there is $z\in B(x_0,3R)$ with
\[
\frac{1}{2\widehat{C}_d^2}\le \frac{\mu(B(z,2^{-1}R)\cap E)}{\mu(B(z,2^{-1}R))}\le 1-\frac{1}{2\widehat{C}_d^2};
\]
note that this holds also in the case $z=y$.
To summarize, in Case 1(a) we obtain infinitely many balls (and then we are done),
in Case 1(b) we obtain the $l-1$ new balls
$B(y,2^{-2}R),\ldots,B(y,2^{-2l+2}R)$, where $B(y,2^{-2l+2}R)$ satisfies
\eqref{eq:half measure assumption}, and in Case 2 we obtain one new
ball satisfying \eqref{eq:half measure assumption}.
By iterating the procedure
and concatenating the new balls obtained in each step to the previous
list of balls, we find a sequence of balls with center points
$x_k\in B(x_{k-1},3 r_{k-1})$ and radii
$r_k$ such that $r_0=R$, $r_k\in [r_{k-1}/4,r_{k-1}/2]$, and
(recall \eqref{eq:doubling constant six estimate})
\[
\frac{1}{4\widehat{C}_d^6}\le \frac{\mu(B(x_k,r_k)\cap E)}{\mu(B(x_k,r_k))}
\le 1-\frac{1}{4\widehat{C}_d^6}
\]
for all $k\in{\mathbb N}$.
(Note that several consecutive balls in this sequence will have the same center
points if they are obtained from Case 1.)
By completeness of the space
we find $x\in Z$ such that
$x_k\to x$. For each $l=0,1,\ldots$ we have
\[
\widehat{d}(x,x_l)\le \sum_{k=l}^{\infty}\widehat{d}(x_k,x_{k+1})
\le 3\sum_{k=l}^{\infty}r_k\le 6 r_l.
\]
In particular,
$\widehat{d}(x,x_0)\le 6 R$.
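The middle estimate above uses $\widehat{d}(x_k,x_{k+1})\le 3 r_k$, and the
last one the fact that the radii decrease geometrically: since
$r_{k+1}\le r_k/2$ for all $k$, we have
\[
\sum_{k=l}^{\infty}r_k\le r_l\sum_{i=0}^{\infty}2^{-i}=2 r_l.
\]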
Now $B(x_l,r_l)\subset B(x,7 r_l)\subset B(x_l,13 r_l)$
for all $l\in{\mathbb N}$, and so
\[
\frac{\mu(B(x,7 r_l)\cap E)}{\mu(B(x,7 r_l))}
\ge \frac{\mu(B(x_l,r_l)\cap E)}{\mu(B(x_l,13 r_l))}
\ge \frac{1}{\widehat{C}_d^{4}}\frac{\mu(B(x_l,r_l)\cap E)}{\mu(B(x_l,r_l))}
\ge \frac{1}{4 \widehat{C}_d^{10}}
\]
and similarly
\[
\frac{\mu(B(x,7 r_l)\setminus E)}{\mu(B(x,7 r_l))}
\ge \frac{\mu(B(x_l,r_l)\setminus E)}{\mu(B(x_l,13 r_l))}
\ge \frac{1}{\widehat{C}_d^{4}}
\frac{\mu(B(x_l,r_l)\setminus E)}{\mu(B(x_l,r_l))}
\ge \frac{1}{4 \widehat{C}_d^{10}}.
\]
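These estimates, obtained along the radii $7 r_l$, extend to all small radii
at the cost of an extra factor $\widehat{C}_d^2$: given a small $r>0$,
choose $l$ with $7 r_{l+1}\le r<7 r_l$; since $r_l\le 4 r_{l+1}$, two
applications of the doubling property give
\[
\frac{\mu(B(x,r)\cap E)}{\mu(B(x,r))}
\ge \frac{\mu(B(x,7 r_{l+1})\cap E)}{\mu(B(x,7 r_l))}
\ge \frac{1}{\widehat{C}_d^{2}}\,\frac{\mu(B(x,7 r_{l+1})\cap E)}{\mu(B(x,7 r_{l+1}))},
\]
and similarly for the complement of $E$.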
It follows that
\[
\liminf_{r\to 0}\frac{\mu(B(x,r)\cap E)}{\mu(B(x,r))}
\ge \frac{1}{4 \widehat{C}_d^{12}}
\]
and
\[
\liminf_{r\to 0}\frac{\mu(B(x,r)\setminus E)}{\mu(B(x,r))}
\ge \frac{1}{4 \widehat{C}_d^{12}},
\]
proving \eqref{eq:desired density point}.
\end{proof}
\begin{corollary}\label{cor:density points}
Let $x_0\in Z$, $R>0$, and let $E\subset Z$ be a $\mu$-measurable set
such that
\[
0< \mu(B(x_0,R)\cap E)<\mu(B(x_0,R)).
\]
Then there exists a point $x\in B(x_0,9 R)$ such that
\begin{equation}\label{eq:strong boundary point}
\frac{1}{4 \widehat{C}_d^{12}}\le \liminf_{r\to 0}\frac{\mu(B(x,r)\cap E)}{\mu(B(x,r))}
\le \limsup_{r\to 0}\frac{\mu(B(x,r)\cap E)}{\mu(B(x,r))}
\le 1-\frac{1}{4 \widehat{C}_d^{12}}.
\end{equation}
\end{corollary}
\begin{proof}
Again consider two cases. The first is that
we find two points $y,z\in B(x_0,R)$ such that
\[
\frac{\mu(B(y,2^{-1}R)\cap E)}{\mu(B(y,2^{-1}R))}\ge \frac{1}{2}
\quad\textrm{and}\quad
\frac{\mu(B(z,2^{-1}R)\cap E)}{\mu(B(z,2^{-1}R))}\le \frac{1}{2}.
\]
Then just as in the proof of Proposition \ref{prop:strong boundary point}
Case 2, we find $w\in B(x_0,3R)$ with
\[
\frac{1}{2\widehat{C}_d^2}\le \frac{\mu(B(w,R)\cap E)}{\mu(B(w,R))}
\le 1-\frac{1}{2\widehat{C}_d^2}.
\]
Now Proposition \ref{prop:strong boundary point} gives a point $x\in B(w,6R)\subset B(x_0,9R)$ such that \eqref{eq:strong boundary point} holds.
The second possible case is that for all $y\in B(x_0,R)$ we have
\[
\frac{\mu(B(y,2^{-1}R)\cap E)}{\mu(B(y,2^{-1}R))}< \frac{1}{2}
\]
(the case ``$>$'' being analogous).
By Lebesgue's differentiation theorem, we find a point
$y\in I_E\cap B(x_0,R)$ (recall \eqref{eq:measure theoretic interior}) and then
it is easy to find a radius $0<r\le R/2$ such that
\[
\frac{1}{2\widehat{C}_d}\le \frac{\mu(B(y,r)\cap E)}{\mu(B(y,r))}
<\frac{1}{2}.
\]
Now Proposition \ref{prop:strong boundary point} again gives a point
$x\in B(y,6r)\subset B(x_0,4R)$ such that
\eqref{eq:strong boundary point} holds.
\end{proof}
\section{Components of sets of finite perimeter}\label{sec:components}
In Sections \ref{sec:components} to \ref{sec:proof of the main result}
we assume that $(X,d,\mu)$ is a complete, geodesic metric space that
is equipped with the doubling Radon measure $\mu$ and supports a
$(1,1)$-Poincar\'e inequality with $\lambda=1$.
In this section we consider connected components,
or components for short, of sets of finite perimeter.
The following is the main result of the section.
\begin{proposition}\label{prop:connected components}
Let $B(x,R)$ be a ball with $0<R<\frac{1}{4}\diam X$ and let
$F\subset X$ be a closed set with $P(F,X)<\infty$.
Denote the components of $F\cap \overline{B}(x,R)$ having nonzero
$\mu$-measure by $F_1,F_2,\ldots$. Then $\mu\left(\overline{B}(x,R)\cap F\setminus \bigcup_{j=1}^{\infty}F_j\right)=0$,
$P(F_j,B(x,R))<\infty$ for all $j\in{\mathbb N}$,
and for any sets $A_j\subset F_j$ with $P(A_j,B(x,R))<\infty$ for
all $j\in{\mathbb N}$ we have
\[
P\Bigg(\bigcup_{j=1}^{\infty}A_j,B(x,R)\Bigg)=\sum_{j=1}^{\infty}P(A_j,B(x,R)).
\]
\end{proposition}
Of course, there may be only finitely many $F_j$'s, and so we will always understand
that some of the $F_j$'s can be empty. In fact, supposing that $\mu(F\cap B(x,R))>0$,
only after Lemma \ref{lem:H has measure zero} will we know that
at least one $F_j$ is nonempty.
Next we gather a number of preliminary results.
Recall the definition of $1$-quasiopen sets from page
\pageref{quasiopen}.
\begin{proposition}[{\cite[Proposition 4.2]{L-Fed}}]\label{prop:set of finite perimeter is quasiopen}
Let $\Omega\subset X$ be open and let $F\subset X$ be $\mu$-measurable with
$P(F,\Omega)<\infty$. Then the sets $I_F\cap\Omega$ and $O_F\cap\Omega$ are $1$-quasiopen.
\end{proposition}
\begin{proposition}\label{prop:ae curve goes through boundary}
Let
$F\subset X$ be $\mu$-measurable with $P(F,X)<\infty$.
Then for $1$-a.e. curve $\gamma$, $\gamma^{-1}(I_F)$
and $\gamma^{-1}(O_F)$ are relatively open subsets of $[0,\ell_{\gamma}]$.
\end{proposition}
\begin{proof}
By Proposition \ref{prop:set of finite perimeter is quasiopen},
the sets $I_F$ and $O_F$ are $1$-quasiopen. Then
by \cite[Remark 3.5]{S2}, they
are also \emph{$1$-path open},
meaning that for $1$-a.e. curve $\gamma$ in $X$,
the sets $\gamma^{-1}(I_F)$ and $\gamma^{-1}(O_F)$
are relatively open subsets of $[0,\ell_{\gamma}]$.
\end{proof}
For any set $A\subset X$, we define the \emph{measure-theoretic closure} as
\begin{equation}\label{eq:measure theoretic closure}
\overline{A}^m:=I_A\cup \partial^*A.
\end{equation}
\begin{lemma}\label{lem:inner capacity}
Let $B(x,R)$ be a ball with $0<R<\frac{1}{4}\diam X$ and let $E_1\supset E_2\supset \cdots$ be sets such that
$P(E_j,B(x,R))<\infty$ for all $j\in{\mathbb N}$,
and $\mu(E_j)\to 0$ and $P(E_j,B(x,R))\to 0$ as $j\to\infty$.
Let $0<r<R$. Then
\[
\capa_1(\overline{E_j}^m\cap B(x,r))\to 0\quad\textrm{as }j\to\infty.
\]
\end{lemma}
\begin{proof}
Take a cutoff function $\eta\in \Lip_c(B(x,R))$ with $0\le \eta\le 1$ on $X$,
$\eta=1$ in $B(x,r)$, and $g_\eta\le 2/(R-r)$, where $g_{\eta}$ is the minimal
$1$-weak upper gradient of $\eta$.
Then for all $j\in{\mathbb N}$, by a Leibniz rule (see
\cite[Proposition 4.2]{KKST3}) we have
\[
\Vert D(\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_{E_j}\eta)\Vert(X)=\Vert D(\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_{E_j}\eta)\Vert(B(x,R))
\le \frac{2\mu(E_j)}{R-r}+P(E_j,B(x,R))\to 0
\]
as $j\to\infty$.
By \eqref{eq:variational one and BV capacity} and the fact that
$(\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_{E_j}\eta)^{\vee}=1$ in $\overline{E_j}^m\cap B(x,r)$, we get
\begin{align*}
\rcapa_1(\overline{E_j}^m\cap B(x,r),B(x,R))
&\le C_{\textrm{r}}\rcapa_{\mathrm{BV}}^{\vee}(\overline{E_j}^m\cap B(x,r),B(x,R))\\
&\le \Vert D(\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_{E_j}\eta)\Vert(X)\to 0\quad\textrm{as }j\to\infty.
\end{align*}
Then by the Sobolev inequality \eqref{eq:sobolev inequality} we easily
get
\[
\capa_1(\overline{E_j}^m\cap B(x,r))\to 0.
\]
\end{proof}
The variation measure is always absolutely continuous with respect to
the $1$-capacity, in the following sense.
\begin{lemma}[{\cite[Lemma 3.8]{L-SA}}]\label{lem:variation measure and capacity}
Let $\Omega\subset X$ be an open set and
let $u\in L^1_{\mathrm{loc}}(\Omega)$ with $\Vert Du\Vert(\Omega)<\infty$. Then for every $\varepsilon>0$ there exists $\delta>0$ such that if $A\subset \Omega$ with $\capa_1 (A)<\delta$, then $\Vert Du\Vert(A)<\varepsilon$.
\end{lemma}
\begin{lemma}\label{lem:coincidence of perimeter}
Let $\Omega\subset X$ be open,
let $F_1\subset F_2\subset X$ with $P(F_1,\Omega)<\infty$ and $P(F_2,\Omega)<\infty$,
and let $A\subset \Omega$
such that for all $x\in A$, we have
\[
\lim_{r\to 0}\frac{\mu(B(x,r)\cap (F_2\setminus F_1))}{\mu(B(x,r))}=0.
\]
Then $P(F_1,A)=P(F_2,A)$.
\end{lemma}
\begin{proof}
First note that $P(F_2\setminus F_1,\Omega)<\infty$
by \eqref{eq:lattice property of sets of finite perimeter},
and then by \eqref{eq:def of theta} we have
\[
P(F_2\setminus F_1,A)=0.
\]
Using \eqref{eq:lattice property of sets of finite perimeter} again, we have
\[
P(F_2,A)\le P(F_1,A)+P(F_2\setminus F_1,A)=P(F_1,A)
\]
and
\[
P(F_1,A)\le P(F_2,A)+P(F_2\setminus F_1,A)=P(F_2,A).
\]
\end{proof}
The following lemma says that perimeter can always be controlled by
the measure of a suitable ``curve boundary''.
\begin{lemma}\label{lem:perimeter controlled by boundary}
Let $\Omega\subset X$ be open, let $E\subset X$ be closed,
and let $A\subset \Omega$ be such that $1$-a.e. curve
$\gamma$ in $\Omega$ with $\gamma(0)\in I_E$ and
$\gamma(\ell_{\gamma})\in X\setminus E$
intersects $A$. Then $P(E,\Omega)\le C_d\mathcal H(A)$.
\end{lemma}
\begin{proof}
We can assume that $\mathcal H(A)<\infty$.
Fix $\varepsilon>0$. We find a covering of $A$
by balls $\{B_j=B(x_j,r_j)\}_{j\in I}$, with $I\subset{\mathbb N}$, such that $r_j\le \varepsilon$
and
\begin{equation}\label{eq:covering for A}
\sum_{j\in I}\frac{\mu(B_j)}{r_j}\le \mathcal{H}(A)+\varepsilon.
\end{equation}
Denote the exceptional family of curves by $\Gamma$.
Take a nonnegative Borel function $\rho$ such that $\Vert \rho\Vert_{L^1(\Omega)}<\varepsilon$
and $\int_{\gamma}\rho\,ds\ge 1$ for all $\gamma\in\Gamma$.
Let
\[
g:=\sum_{j\in I}\frac{\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_{2B_j}}{r_j}+\rho.
\]
Then let
\[
u(x):=\min\left\{1,\inf \int_{\gamma}g\,ds\right\},
\]
where the infimum is taken over curves $\gamma$ (also constant curves)
in $\Omega$ with $\gamma(0)= x$ and
$\gamma(\ell_{\gamma})\in \Omega\setminus \left(E\cup\bigcup_{j\in I}2B_j\right)$.
We know that $g$ is an upper gradient of $u$ in $\Omega$,
see \cite[Lemma 5.25]{BB}. Moreover, $u$ is $\mu$-measurable
by \cite[Theorem 1.11]{JJRRS}; strictly speaking this result is written for
functions defined on the whole space, but the proof clearly works also for functions
defined in an open set such as $\Omega$.
If $x\in \Omega\setminus \left(E\cup\bigcup_{j\in I}2B_j\right)$,
clearly $u(x)=0$.
If $x\in I_E\setminus \bigcup_{j\in I}2B_j$, consider any curve
$\gamma$ in $\Omega$ with $\gamma(0)= x$ and
$\gamma(\ell_{\gamma})\in \Omega\setminus \left(E\cup\bigcup_{j\in I}2B_j\right)$.
Then either $\int_{\gamma}\rho\,ds\ge 1$ or there is $t$ such that
$\gamma(t)\in A$. In the latter case,
for some $j\in I$ we have $\gamma(t)\in B_j$. Then
\[
\int_{\gamma}g\,ds\ge \int_{\gamma}\frac{\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_{2B_j}}{r_j}\,ds\ge 1.
\]
Thus $u(x)=1$, and so by Lebesgue's differentiation theorem we have
$u=\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_E$ a.e. in $\Omega\setminus \bigcup_{j\in I}2B_j$. Thus
\begin{align*}
\int_{\Omega}|u-\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_E|\, d\mu
&\le \int_{\Omega} \text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_{\bigcup_{j\in I}2B_j}\, d\mu\le \sum_{j\in I}\mu(2B_j)
\le \varepsilon\sum_{j\in I} \frac{\mu(2B_j)}{r_j}
\le \varepsilon (C_d\mathcal{H}(A)+\varepsilon).
\end{align*}
Moreover, using \eqref{eq:covering for A} we get
\[
\int_{\Omega}g\,d\mu\le \sum_{j\in I}\int_{\Omega}\frac{\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_{2B_j}}{r_j}\,d\mu
+\int_{\Omega}\rho\,d\mu\le C_d\mathcal H(A)+C_d \varepsilon +\varepsilon.
\]
Now for each $i\in{\mathbb N}$, use the above construction to obtain functions
$u_i\in N^{1,1}_{\mathrm{loc}}(\Omega)$ and upper gradients
$g_i\in L^1(\Omega)$ corresponding to $\varepsilon=1/i$.
We have
\[
\int_{\Omega}|u_i-\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_E|\, d\mu\le i^{-1} (C_d\mathcal{H}(A)+i^{-1})\to 0
\quad\textrm{as }i\to \infty
\]
and thus
\[
P(E,\Omega)\le \liminf_{i\to\infty}\int_{\Omega}g_i\,d\mu
\le \liminf_{i\to\infty}(C_d\mathcal H(A)+C_d i^{-1}+i^{-1})=C_d\mathcal H(A).
\]
\end{proof}
\begin{proposition}\label{prop:sum of perimeters of components}
Let $B(x,R)$ be a ball with $0<R<\frac{1}{4}\diam X$ and let $F\subset X$ be a closed set
with $P(F,X)<\infty$.
Denote the components of $F\cap \overline{B}(x,R)$ having nonzero
$\mu$-measure by $F_1,F_2,\ldots$.
Then
\[
\sum_{j=1}^{\infty}P(F_j,B(x,R))<\infty,
\]
and for any sets $A_j\subset F_j$
with $P(A_j,B(x,R))<\infty$ for all $j\in{\mathbb N}$ we have
\begin{equation}\label{eq:perimeter of union and sum of Ajs is the same}
P\Bigg(\bigcup_{j=1}^{\infty}A_j,B(x,R)\Bigg)
=\sum_{j=1}^{\infty}P(A_j,B(x,R)).
\end{equation}
\end{proposition}
\begin{proof}
Let $\Gamma_b$ be the exceptional family of curves of
Proposition \ref{prop:ae curve goes through boundary};
then $\Mod_1(\Gamma_b)=0$.
Consider a component $F_j$; it is a closed set.
Consider a curve $\gamma\notin \Gamma_b$ in $B(x,R)$ with $\gamma(0)\in I_{F_j}$ and
$\gamma(\ell_{\gamma})\in X\setminus F_j$. Then $\gamma(0)\in I_F$.
Take
\[
t:=\max\{s\in [0,\ell_{\gamma}]:\,\gamma([0,s])\subset F_j\}.
\]
Clearly $t<\ell_{\gamma}$.
There cannot exist $\delta>0$ such that
$\gamma(s)\in F$ for all $s\in (t,t+\delta)$
because this would connect $F_j$ with at least
one other component of $F\cap \overline{B}(x,R)$.
Thus there are points $s_i\searrow t$ with $\gamma(s_i)\in X\setminus F\subset O_F$.
By Proposition \ref{prop:ae curve goes through boundary},
this implies that either $\gamma(t)\in\partial^*F$ or $\gamma(t)\in O_{F}$.
In the latter case, there is a point $\widetilde{t}\in (0,t)$
with $\gamma(\widetilde{t})\in\partial^*F$.
In both cases, we have found a parameter value at which $\gamma$ meets $\partial^*F\cap F_j$.
Thus by Lemma \ref{lem:perimeter controlled by boundary},
\[
P(F_j,B(x,R))\le C_d\mathcal H(\partial^*F\cap F_j)
\]
and so
\begin{equation}\label{eq:perimeter sum is finite}
\begin{split}
\sum_{j=1}^{\infty}P(F_j,B(x,R))
&\le C_d \sum_{j=1}^{\infty} \mathcal H(\partial^*F\cap F_j)\\
&\le C_d \mathcal H(\partial^*F)\\
&\le C_d\alpha^{-1}P(F,X)\quad\textrm{by }\eqref{eq:def of theta}\\
&<\infty,
\end{split}
\end{equation}
as desired.

Next note that one inequality in \eqref{eq:perimeter of union and sum of Ajs is the same} follows from \eqref{eq:perimeter of countable union}. To prove the other one, note that the sets $F_j$ are closed, and in fact compact (as bounded closed subsets of a complete doubling metric space),
and so for any $\mu$-measurable
sets $A_j\subset F_j$ with $P(A_j,B(x,R))<\infty$ for all $j\in{\mathbb N}$, we have
\begin{equation}\label{eq:distance between Aj and Ak}
\dist(A_j,A_k)\ge \dist(F_j,F_k)>0
\end{equation}
for all $j\neq k$. Take $N,M\in{\mathbb N}$ with $N\le M$. We have
(recall \eqref{eq:measure theoretic closure})
\begin{equation}\label{eq:perimeter of union of Ajs}
\begin{split}
P\Bigg(\bigcup_{j=1}^{\infty}A_j,B(x,R)\Bigg)
&\ge P\Bigg(\bigcup_{j=1}^{\infty}A_j,B(x,R)\setminus \overline{\bigcup_{i=M+1}^{\infty}A_i}^m\Bigg)\\
&=P\Bigg(\bigcup_{j=1}^{M}A_j,B(x,R)\setminus
\overline{\bigcup_{i=M+1}^{\infty}A_i}^m\Bigg)\quad\textrm{by Lemma }\ref{lem:coincidence of perimeter}\\
&=\sum_{j=1}^{M} P\Bigg(A_j,B(x,R)\setminus\overline{\bigcup_{i=M+1}^{\infty}A_i}^m\Bigg)\quad\textrm{by }\eqref{eq:distance between Aj and Ak}\\
&\ge \sum_{j=1}^{N} P\Bigg(A_j,B(x,R)\setminus\overline{\bigcup_{i=M+1}^{\infty}A_i}^m\Bigg).
\end{split}
\end{equation}
By \eqref{eq:perimeter of countable union} and \eqref{eq:perimeter sum is finite},
we have
\[
P\Bigg(\bigcup_{j=M+1}^{\infty}F_j,B(x,R)\Bigg)
\le
\sum_{j=M+1}^{\infty}P(F_j,B(x,R))
\to 0\quad\textrm{as }M\to \infty.
\]
Then by Lemma \ref{lem:inner capacity} we have
\[
\capa_1\Bigg(\overline{\bigcup_{j=M+1}^{\infty}A_j}^m\cap B(x,r)\Bigg)
\le \capa_1\Bigg(\overline{\bigcup_{j=M+1}^{\infty}F_j}^m\cap B(x,r)\Bigg)
\to 0\quad\textrm{as }M\to \infty
\]
for all $0<r<R$.
From \eqref{eq:perimeter of union of Ajs} and
Lemma \ref{lem:variation measure and capacity} we now get
\[
P\Bigg(\bigcup_{j=1}^{\infty}A_j,B(x,R)\Bigg)\ge \sum_{j=1}^{N} P(A_j,B(x,r)).
\]
Letting $r\nearrow R$ and $N\to\infty$, we get the conclusion.
\end{proof}
For any nonnegative $g\in L^1_{\mathrm{loc}}(X)$, define the centered
Hardy-Littlewood maximal function
\[
\mathcal M g(x):=\sup_{r>0}\,\vint{B(x,r)}g\,d\mu,\quad x\in X.
\]
Recall the definition of the exponent $s>1$ from
\eqref{eq:homogenous dimension}.
The argument in the following lemma was inspired by the study
of the so-called $\textrm{MEC}_p$-property in \cite{JJRRS}.
\begin{lemma}\label{lem:finding a positive measure component}
Let $B(x_0,r)$ be a ball and let $V\subset X$ be an open set with
\[
\capa_1(V\cap B(x_0,r))< \frac{1}{20 \cdot 10^s C_P C_d^7}\frac{\mu(B(x_0,r))}{r}.
\]
Then there is a connected subset of $\overline{B}(x_0,r/2)\setminus V$
with measure at least $\mu(B(x_0,r))/(4\cdot 10^s C_d^2)$.
\end{lemma}
\begin{proof}
Take $u\in N^{1,1}(X)$ with $u=1$ in $V\cap B(x_0,r)$ and
\[
\Vert u\Vert_{N^{1,1}(X)}<\frac{1}{20 \cdot 10^s C_P C_d^7}\frac{\mu(B(x_0,r))}{r}.
\]
Thus there is an upper gradient $g$ of $u$ with
\[
\Vert g\Vert_{L^1(X)}<\frac{1}{20 \cdot 10^s C_P C_d^7}\frac{\mu(B(x_0,r))}{r}.
\]
By the Vitali-Carath\'eodory theorem
(see e.g. \cite[p. 108]{HKST15}) we can assume that $g$ is lower semicontinuous.
We define
\[
A:=\{\mathcal M g> (10C_P C_d^2 r)^{-1}\}\quad\textrm{and}\quad
D:=\{u\ge 1/2\}.
\]
Then by the weak $L^1$-boundedness of the maximal function
(see e.g. \cite[Lemma 3.12]{BB}) as well as
\eqref{eq:homogenous dimension}, we estimate
\[
\mu(A)\le 10 C_P C_d^5 r\Vert g\Vert_{L^1(X)}\le \frac{1}{2 \cdot 10^s C_d^2}\mu(B(x_0,r))
\le \frac{1}{2}\mu(B(x_0,r/10)).
\]
Similarly,
\[
\mu(D)\le 2\Vert u\Vert_{L^1(X)}\le \frac{1}{4}\mu(B(x_0,r/10)),
\]
and then
\begin{equation}\label{eq:measure of complement of A D}
\mu(B(x_0,r/10)\setminus (A\cup D))\ge \frac{1}{4}\mu(B(x_0,r/10))
\ge \frac{\mu(B(x_0,r))}{4\cdot 10^s C_d^2}.
\end{equation}
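The first inequality here is subadditivity combined with the bounds on
$\mu(A)$ and $\mu(D)$ above:
\[
\mu(B(x_0,r/10)\setminus (A\cup D))\ge \mu(B(x_0,r/10))-\mu(A)-\mu(D)
\ge \Big(1-\frac{1}{2}-\frac{1}{4}\Big)\mu(B(x_0,r/10)).
\]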
In particular, we can fix $x\in B(x_0,r/10)\setminus (A\cup D)$.
Let $\delta:=(100C_P C_d^2 r)^{-1}$.
For every $k\in{\mathbb N}$, let $g_k:=\min\{g,k\}$ and
\[
v_k(y):=\inf\int_{\gamma}(g_k+\delta)\,ds,\quad y\in B(x_0,r/2),
\]
where the infimum is taken over curves $\gamma$ (also constant curves)
in $B(x_0,r/2)$ with $\gamma(0)= x$ and $\gamma(\ell_{\gamma})=y$.
Then $g_k+\delta\le g+\delta$ is an upper gradient of $v_k$ in
$B(x_0,r/2)$
(see \cite[Lemma 5.25]{BB}) and $v_k$ is $\mu$-measurable
by \cite[Theorem 1.11]{JJRRS}.
Since the space is geodesic, each $v_k$ is $(k+\delta)$-Lipschitz
in $B(x_0,r/10)$ and thus all points in $B(x_0,r/10)$
are Lebesgue points of $v_k$.
Define $B_j:=B(x,2^{-j+1}r/10)$ for $j=0,1,\ldots$.
By the Poincar\'e inequality,
\begin{equation}\label{eq:telescope at x}
\begin{split}
|v_k(x)-(v_k)_{B_0}|
\le \sum_{j=0}^{\infty}|(v_k)_{B_{j+1}}-(v_k)_{B_{j}}|
&\le C_d\sum_{j=0}^{\infty}\, \vint{B_j}|v_k-(v_k)_{B_j}|\,d\mu\\
&\le C_d C_P \sum_{j=0}^{\infty}\frac{2^{-j+1}r}{10}\vint{B_j}(g+\delta)\,d\mu\\
&\le C_d C_P r(\mathcal M g(x)+ \delta)\\
&\le 1/8.
\end{split}
\end{equation}
Similarly, for every
$y\in B(x_0,r/10)\setminus (A\cup D)$ we have
\begin{equation}\label{eq:telescope at y}
|v_k(y)-(v_k)_{B(y,r/5)}|\le 1/8
\end{equation}
and
\begin{equation}\label{eq:middle term}
\begin{split}
|(v_k)_{B(x,r/5)}-(v_k)_{B(y,r/5)}|
&\le 2C_d^2\vint{B(x,2r/5)}|v_k-(v_k)_{B(x,2r/5)}|\,d\mu\\
&\le 2C_d^2 C_P r\vint{B(x,2r/5)}(g+\delta)\,d\mu\\
&\le 2C_d^2 C_P r(\mathcal Mg(x)+\delta)\\
&\le 1/4.
\end{split}
\end{equation}
Combining \eqref{eq:telescope at x}, \eqref{eq:telescope at y},
and \eqref{eq:middle term}, we get
\[
v_k(y)= |v_k(x)-v_k(y)|\le 1/2,
\]
where we used that $v_k(x)=0$, since the constant curve at $x$ is admissible
in the definition of $v_k$.
This means that there is a curve $\gamma_k$ in $B(x_0,r/2)$ with
$\gamma_k(0)= x$, $\gamma_k(\ell_{\gamma_k})=y$, and
$\int_{\gamma_k}(g_k+\delta)\,ds\le 1/2$, for every $k\in{\mathbb N}$.
Note that
\[
\ell_{\gamma_k}\le \frac{1}{\delta}\int_{\gamma_k}(g_k+\delta)\,ds\le \frac{1}{2\delta}.
\]
Consider the reparametrizations
$\widetilde{\gamma}_k(t):=\gamma_k(t\ell_{\gamma_k})$, $t\in [0,1]$.
By the Arzel\`a-Ascoli theorem (see e.g. \cite[p. 169]{Roy}),
passing to a subsequence (not relabeled)
we find $\widetilde{\gamma}\colon [0,1]\to X$ such that
$\widetilde{\gamma}_k\to \widetilde{\gamma}$ uniformly.
It is straightforward to check that $\widetilde{\gamma}$ is continuous
and rectifiable.
Let $\gamma$ be the parametrization of $\widetilde{\gamma}$ by arc-length;
then $\gamma(0)= x$ and $\gamma(\ell_{\gamma})=y$, and
by \cite[Lemma 2.2]{JJRRS}, we have for every $k_0\in{\mathbb N}$ that
\[
\int_{\gamma}g_{k_0}\,ds\le \liminf_{k\to\infty}\int_{\gamma_k}g_{k_0}\,ds
\le \liminf_{k\to\infty}\int_{\gamma_k}g_{k}\,ds\le 1/2.
\]
Letting $k_0\to\infty$, we obtain
\[
\int_{\gamma}g\,ds\le 1/2.
\]
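Here the convergence as $k_0\to\infty$ follows from monotone convergence,
since $g_{k_0}=\min\{g,k_0\}\nearrow g$ pointwise and hence
\[
\int_{\gamma}g_{k_0}\,ds\nearrow \int_{\gamma}g\,ds.
\]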
Note that if $\gamma$ intersected a point $z\in V$, then we would have
\[
\int_{\gamma}g\,ds \ge |u(x)-u(z)|> |1/2-1|=1/2,
\]
so this is not possible. Thus $\gamma$ is in $\overline{B}(x_0,r/2)\setminus V$;
let us denote this curve, and also its image, by $\gamma_y$.
Define the desired connected set as the union
\[
\bigcup_{y\in B(x_0,r/10)\setminus (A\cup D)}\gamma_y.
\]
By \eqref{eq:measure of complement of A D} this has measure at least
$\mu(B(x_0,r))/(4\cdot 10^s C_d^2)$.
\end{proof}
\begin{lemma}\label{lem:H has measure zero}
Let $B(x,R)$ be a ball with $0<R<\frac{1}{4}\diam X$
and let $F\subset X$ be a closed set
with $P(F,X)<\infty$.
Denote the components of $F\cap \overline{B}(x,R)$ having nonzero
$\mu$-measure by $F_1,F_2,\ldots$, and
$H:=\overline{B}(x,R)\cap F\setminus \bigcup_{j=1}^{\infty} F_j$.
Then $\mu(H)=0$.
\end{lemma}
\begin{proof}
It follows from Proposition \ref{prop:sum of perimeters of components}
that $P\left(\bigcup_{j=1}^{\infty} F_j,B(x,R)\right)<\infty$, and then
by \eqref{eq:lattice property of sets of finite perimeter} also
$P(H,B(x,R))<\infty$.
By \eqref{eq:def of theta} and
a standard covering argument (see e.g. the proof of \cite[Lemma 2.6]{KKST3}),
we find that
\[
\lim_{r\to 0}r\frac{P\left(\bigcup_{j=1}^{\infty} F_j,B(y,r)\right)}{\mu(B(y,r))}=0
\]
for all $y\in B(x,R)\setminus \left(\partial^*\big(\bigcup_{j=1}^{\infty} F_j\big)\cup N\right)$, with $\mathcal H(N)=0$, in particular for all
$y\in B(x,R)\cap I_H\setminus N$.
Take $y\in B(x,R)\cap I_H\setminus N$ (if it exists).
We find arbitrarily small $r>0$ such that $B(y,r) \subset B(x,R)$ and
\begin{equation}\label{eq:complement of H small}
\frac{\mu(B(y,r)\setminus H)}{\mu(B(y,r))}\le \frac{1}{80 \cdot 10^s C_P C_d^8 C_{\textrm{cap}}}
\end{equation}
and
\[
r\frac{P\left(\bigcup_{j=1}^{\infty} F_j,B(y,r)\right)}{\mu(B(y,r))}
\le \frac{1}{80 \cdot 10^s C_P C_d^8 C_{\textrm{cap}}}.
\]
Now suppose that
\[
P(H,B(y,r))\le \frac{1}{80 \cdot 10^s C_P C_d^8 C_{\textrm{cap}}}\frac{\mu(B(y,r))}{r}.
\]
Then since $H\cup\bigcup_{j=1}^{\infty}F_j=F\cap \overline{B}(x,R)$,
by \eqref{eq:lattice property of sets of finite perimeter} we get
\begin{align*}
P(F,B(y,r))
&\le P(H,B(y,r))+P\Bigg(\bigcup_{j=1}^{\infty}F_j,B(y,r)\Bigg)\\
&\le \frac{1}{40 \cdot 10^s C_P C_d^8 C_{\textrm{cap}}}\frac{\mu(B(y,r))}{r}.
\end{align*}
Define the Lipschitz function
\[
\eta:=\max\left\{0,1-\frac{\dist(\cdot,B(y,r/2))}{r/2}\right\},
\]
so that $0\le \eta\le 1$ on $X$, $\eta=1$ in $B(y,r/2)$,
$\eta=0$ in $X\setminus B(y,r)$, and
$g_{\eta}\le (2/r)\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_{B(y,r)}$ (see \cite[Corollary 2.21]{BB}).
Then by a Leibniz rule (see
\cite[Proposition 4.2]{KKST3}), we have
\begin{align*}
\capa_{\mathrm{BV}}(B(y,r/2)\setminus F)
&\le \Vert D(\eta\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_{X\setminus F})\Vert(X)\\
&\le P(F,B(y,r))+2\frac{\mu(B(y,r)\setminus F)}{r}\\
&\le P(F,B(y,r))+2\frac{\mu(B(y,r)\setminus H)}{r}\\
&\le \frac{1}{20 \cdot 10^s C_P C_d^8 C_{\textrm{cap}}}\frac{\mu(B(y,r))}{r}.
\end{align*}
Then by \eqref{eq:Newtonian and BV capacities are comparable},
\[
\capa_{1}(B(y,r/2)\setminus F)\le
\frac{1}{20 \cdot 10^s C_P C_d^8}\frac{\mu(B(y,r))}{r}
<\frac{1}{20 \cdot 10^s C_P C_d^7}\frac{\mu(B(y,r/2))}{r/2}.
\]
Then by Lemma \ref{lem:finding a positive measure component},
there is a connected subset of $F\cap \overline{B}(y,r/4)$
with measure at least
\[
\frac{\mu(B(y,r/2))}{4\cdot 10^s C_d^2}\ge \frac{\mu(B(y,r))}{4\cdot 10^s C_d^3}.
\]
By \eqref{eq:complement of H small} this must be (partially) contained in $H$,
a contradiction since $H$ contains no components of nonzero measure.
Thus for all $y\in I_H\cap B(x,R)\setminus N$, we have
\[
\limsup_{r\to 0}r\frac{P(H,B(y,r))}{\mu(B(y,r))}
\ge \frac{1}{80 \cdot 10^s C_P C_d^8 C_{\textrm{cap}}}.
\]
By a simple covering argument, it follows that
\[
\mu(I_H\cap B(x,R)\setminus N)\le \varepsilon\cdot 80 \cdot
10^s C_P C_d^{11} C_{\textrm{cap}} P(H,B(x,R))
\]
for every $\varepsilon>0$. Thus $\mu(H\cap B(x,R)\setminus N)=0$
and so $\mu(H\cap B(x,R))=0$.
Since the space $X$ is geodesic, by \cite[Corollary 2.2]{Buc}
we know that $\mu(\{y\in X:\,d(y,x)=R\})=0$ and so
in fact $\mu(H)=0$.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop:connected components}]
This follows from
Proposition \ref{prop:sum of perimeters of components} and
Lemma \ref{lem:H has measure zero}.
\end{proof}
\section{Functions of least gradient}\label{sec:least gradient}
In this section we consider functions of least gradient, or more precisely
superminimizers and solutions of obstacle problems
in the case $p=1$. We will follow the definitions and theory developed in \cite{L-WC}.
Throughout this section
the symbol $\Omega$ will always denote a nonempty open subset of $X$.
We denote by $\mathrm{BV}_c(\Omega)$ the class of functions $\varphi\in\mathrm{BV}(\Omega)$ with compact
support in $\Omega$, that is, $\supp \varphi\Subset \Omega$.
\begin{definition}
We say that $u\in\mathrm{BV}_{\mathrm{loc}}(\Omega)$ is a $1$-minimizer in $\Omega$ (often
called a function of least gradient) if
for all $\varphi\in \mathrm{BV}_c(\Omega)$, we have
\begin{equation}\label{eq:definition of 1minimizer}
\Vert Du\Vert(\supp\varphi)\le \Vert D(u+\varphi)\Vert(\supp\varphi).
\end{equation}
We say that $u\in\mathrm{BV}_{\mathrm{loc}}(\Omega)$ is a $1$-superminimizer in $\Omega$
if \eqref{eq:definition of 1minimizer} holds for all nonnegative $\varphi\in \mathrm{BV}_c(\Omega)$.
We say that $u\in\mathrm{BV}_{\mathrm{loc}}(\Omega)$ is a $1$-subminimizer in $\Omega$ if
\eqref{eq:definition of 1minimizer} holds for all nonpositive $\varphi\in \mathrm{BV}_c(\Omega)$,
or equivalently if $-u$ is a $1$-superminimizer in $\Omega$.
\end{definition}
Equivalently, we can replace $\supp\varphi$ by any set $A\Subset \Omega$ containing $\supp\varphi$
in the above definitions.
If $\Omega$ is bounded, and $\psi\colon\Omega\to\overline{{\mathbb R}}$
and $f\in L^1_{\mathrm{loc}}(X)$
with $\Vert Df\Vert(X)<\infty$, we define the class of admissible functions
\[
\mathcal K_{\psi,f}(\Omega):=\{u\in\mathrm{BV}_{\mathrm{loc}}(X):\,u\ge \psi\textrm{ in }\Omega\textrm{ and }u=f\textrm{ in }X\setminus\Omega\}.
\]
The (in)equalities above are understood in the a.e. sense.
For brevity, we sometimes write $\mathcal K_{\psi,f}$ instead of $\mathcal K_{\psi,f}(\Omega)$.
By using a cutoff function,
it is easy to show that $\Vert Du\Vert(X)<\infty$
for every $u\in\mathcal K_{\psi,f}(\Omega)$.
\begin{definition}
We say that $u\in\mathcal K_{\psi,f}(\Omega)$ is a solution of the $\mathcal K_{\psi,f}$-obstacle problem
if $\Vert Du\Vert(X)\le \Vert Dv\Vert(X)$ for all $v\in\mathcal K_{\psi,f}(\Omega)$.
\end{definition}
Whenever the characteristic function of a set $E$
is a solution of an obstacle problem,
for simplicity we will call $E$ a solution as well.
Similarly, if $\psi=\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_A$ for some $A\subset X$, we let
$\mathcal K_{A, f}:=\mathcal K_{\psi, f}$.
Now we list some properties of superminimizers
and solutions of obstacle problems derived mostly in \cite{L-WC}.
\begin{lemma}[{\cite[Lemma 3.6]{L-WC}}]\label{lem:solutions from capacity}
If $x\in X$, $0<r<R<\frac 18 \diam X$, and $A\subset B(x,r)$, then there exists
$E\subset X$ that is a solution of the $\mathcal K_{A,0}(B(x,R))$-obstacle problem
with
\[
P(E,X)\le \rcapa_1(A,B(x,R)).
\]
\end{lemma}
\begin{proposition}[{\cite[Proposition 3.7]{L-WC}}]\label{prop:solutions are superminimizers}
If $u\in\mathcal K_{\psi,f}(\Omega)$ is a solution
of the $\mathcal K_{\psi,f}$-obstacle problem, then $u$
is a $1$-superminimizer in $\Omega$.
\end{proposition}
The following fact and its proof are similar to \cite[Lemma 3.2]{KKLS}.
\begin{lemma}\label{lem:subminimizer char}
Let $F\subset X$ with
$P(F,\Omega)<\infty$ and suppose that for every $H\Subset \Omega$, we have
\[
P(F,\Omega)\le P(F\setminus H,\Omega).
\]
Then $\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_F$ is a $1$-subminimizer in $\Omega$.
\end{lemma}
\begin{proof}
Take a nonnegative $\varphi\in\mathrm{BV}_c(\Omega)$.
Observe that for every $0<s<1$, we have $\supp\{\varphi\ge s\}\Subset \Omega$.
Thus by the coarea formula \eqref{eq:coarea},
\begin{align*}
\Vert D(\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_F-\varphi)\Vert(\supp\varphi)
&\ge\int_0^1 P(\{\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_F-\varphi>t\},\supp\varphi)\,dt\\
&=\int_0^1 P(F\setminus \{\varphi\ge 1-t\},\supp\varphi)\,dt\\
&\ge\int_0^1 P(F,\supp\varphi)\,dt=\Vert D\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_F\Vert(\supp\varphi).
\end{align*}
\end{proof}
\begin{proposition}\label{prop:components are subminimizers}
Let $B(x,R)$ be a ball and let $F\subset X$ be a closed set with $P(F,X)<\infty$
and such that $\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_F$ is a $1$-subminimizer in $B(x,R)$.
Denote the components of $F\cap \overline{B}(x,R)$
with nonzero $\mu$-measure by $F_1,F_2,\ldots$.
Then each $\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_{F_k}$ is a $1$-subminimizer in $B(x,R)$.
\end{proposition}
\begin{proof}
Fix $k\in{\mathbb N}$ and take $H\Subset B(x,R)$.
We can assume that $H\subset F_k$ and that $P(F_k\setminus H,B(x,R))<\infty$.
Now
\begin{align*}
\sum_{\substack{j\in{\mathbb N}\\ j\neq k}}P(F_j,B(x,R))+P(F_k,B(x,R))
&=\sum_{j=1}^{\infty}P(F_j,B(x,R))\\
&= P(F,B(x,R))\quad\textrm{by Proposition }\ref{prop:connected components}\\
&\le P(F\setminus H,B(x,R))\\
&= \sum_{j=1}^{\infty}P(F_j\setminus H,B(x,R))\quad\textrm{by Proposition }\ref{prop:connected components}\\
&=\sum_{\substack{j\in{\mathbb N}\\ j\neq k}}P(F_j,B(x,R))+P(F_k\setminus H,B(x,R)).
\end{align*}
Note that since $\sum_{j=1}^{\infty}P(F_j,B(x,R))= P(F,B(x,R))<\infty$, we now get
\[
P(F_k,B(x,R))\le P(F_k\setminus H,B(x,R)).
\]
By Lemma \ref{lem:subminimizer char}, $\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_{F_k}$ is a $1$-subminimizer in $B(x,R)$.
\end{proof}
We have the following weak Harnack inequality. We denote the positive
part of a function by $u_+:=\max\{u,0\}$.
\begin{theorem}[{\cite[Theorem 3.10]{L-WC}}]\label{thm:weak Harnack}
Suppose $k\in{\mathbb R}$ and $0<R<\tfrac 14 \diam X$ with $B(x,R)\Subset \Omega$, and
assume either that
\begin{enumerate}[{(a)}]
\item $u$ is a $1$-subminimizer in $\Omega$, or
\item $\Omega$ is bounded, $u$ is a solution of the
$\mathcal K_{\psi, f}(\Omega)$-obstacle problem,
and $\psi\le k$ a.e. in $B(x,R)$.
\end{enumerate}
Then for any $0<r<R$ and some constant $C_1=C_1(C_d,C_P)$,
\[
\esssup_{B(x,r)}u\le C_1\left(\frac{R}{R-r}\right)^{s}\vint{B(x,R)}(u-k)_+\,d\mu+k.
\]
\end{theorem}
For later reference, let us note that a close look at the proof
of the above theorem reveals that we can take
\begin{equation}\label{eq:C1}
C_1=2^{(s+1)^2}(6\widetilde{C}_S C_d)^s,
\end{equation}
where $\widetilde{C}_S$ is the constant from an $(s/(s-1),1)$-Sobolev inequality
with zero boundary values.
\begin{corollary}\label{cor:weak Harnack}
Suppose $k\in{\mathbb R}$, $x\in X$, $0<R<\tfrac 14 \diam X$, and
assume that $\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_F$ is a $1$-subminimizer in $B(x,R)$ with $\mu(F\cap B(x,R/2))>0$.
Then
\[
\frac{\mu(B(x,R)\cap F)}{\mu(B(x,R))}\ge (2^s C_1)^{-1}.
\]
\end{corollary}
\begin{proof}
Let $0<\varepsilon<R/2$.
Applying Theorem \ref{thm:weak Harnack}(a) with $\Omega=B(x,R)$,
$u=\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_F$, $k=0$, and $R/2,\, R-\varepsilon$ in place of $r,\, R$, we get
\[
1\le C_1\left(\frac{R-\varepsilon}{R-\varepsilon-R/2}\right)^{s}\frac{\mu(B(x,R-\varepsilon)\cap F)}{\mu(B(x,R-\varepsilon))}.
\]
Letting $\varepsilon\to 0$, we get the result.
\end{proof}
Recall the definitions of the lower and upper approximate limits
$u^{\wedge}$ and $u^{\vee}$ from \eqref{eq:lower approximate limit}
and \eqref{eq:upper approximate limit}.
\begin{theorem}[{\cite[Theorem 3.11]{L-WC}}]\label{thm:superminimizers are lsc}
Let $u$ be a $1$-superminimizer in $\Omega$. Then $u^{\wedge}\colon\Omega\to (-\infty,\infty]$
is lower semicontinuous.
\end{theorem}
\begin{lemma}\label{lem:smallness in annuli}
Let $B=B(x,R)$ be a ball with $0<R<\frac{1}{32} \diam X$, and
suppose that $W\subset B$.
Let $V\subset 4B$ be a solution of the $\mathcal K_{W,0}(4B)$-obstacle problem
(as guaranteed by Lemma \ref{lem:solutions from capacity}).
Then for all
$y\in 3 B\setminus 2 B$,
\[
\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_V^{\vee}(y)\le C_2 R \frac{\rcapa_1(W,4B)}{\mu(B)}
\]
for some constant $C_2=C_2(C_d,C_P)$.
\end{lemma}
\begin{proof}
By Lemma \ref{lem:solutions from capacity} we know that
\[
P(V,X)\le \rcapa_1(W, 4B),
\]
and thus by the isoperimetric inequality \eqref{eq:isop inequality with zero boundary values},
\begin{equation}\label{eq:E1 has small measure}
\mu(V)\le 4C_S R P(V,X)\le 4C_S R \rcapa_1(W,4B).
\end{equation}
For any $z\in 3 B\setminus 2 B$ we have
$B(z,R)\subset 4 B\setminus B$. Since now
$W\cap B(z,R)=\emptyset$, we can apply
Theorem \ref{thm:weak Harnack}(b) with $k=0$ to get
\begin{align*}
\sup_{B(z,R/2)} \text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_V^{\vee }
&\le \esssup_{B(z,R/2)}\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_V\\
&\le C_1\left(\frac{R}{R-R/2}\right)^s\vint{B(z,R)}(\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_V)_+\,d\mu\\
&= \frac{2^s C_1}{\mu(B(z,R))}\int_{B(z,R)} (\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_V)_+\,d\mu\\
&\le \frac{2^s C_1 C_d^2}{\mu(B)}\mu(V)\\
&\le 2^{s+2} C_1 C_d^2 C_S R \frac{\rcapa_1(W,4B)}{\mu(B)}\quad\textrm{by }\eqref{eq:E1 has small measure}.
\end{align*}
Thus we can choose
$C_2=2^{s+2} C_1 C_d^2 C_S$.
\end{proof}
\section{Constructing a ``geodesic'' space}\label{sec:constructing a quasiconvex space}
In this section we construct a suitable space where the Mazurkiewicz metric
agrees with the ordinary one; this space will be needed
in the proof of the main result.
Recall that in Section
\ref{sec:strong boundary points}, in the space $(Z,\widehat{d},\mu)$
we defined the Mazurkiewicz metric
$\widehat{d}_M$; given a set $V\subset X$ we now define
\[
d_{M}^V(x,y):=\inf\{\diam K: K\subset X\setminus V\textrm{ is a continuum containing }x,y\},
\quad x,y\in X\setminus V.
\]
If $V=\emptyset$, we leave it out of the notation, consistent with
\eqref{eq:widehat d c}.
\begin{lemma}\label{lem:new metric lemma}
Let $V\subset X$ be a bounded open set and let $B(x_0,R_0)$ be a ball such
that $V\Subset B(x_0,R_0)$, and
$\overline{B}(x_0,R_0)\setminus V$ is connected.
Moreover, suppose there is $R>0$ such that
for every $x\in X\setminus V$ and $0<r\le R$,
the connected components of $\overline{B}(x,r)\setminus V$
intersecting $B(x,r/2)$ are finite in number.
Then $d_M^V$ is a metric on $X\setminus V$ such that $d\le d_M^V$,
$d_M^V$ induces the same topology on $X\setminus V$ as $d$, $(d_M^V)_M=d_M^V$,
and $(X\setminus V,d_M^V)$ is complete.
\end{lemma}
Note that explicitly, for $x,y\in X\setminus V$,
\[
(d_{M}^V)_M(x,y)=
\inf\{\diam_{d_M^V} K:\, K\subset X\setminus V
\textrm{ is a }d_M^V\textrm{-continuum containing }x,y\}.
\]
\begin{proof}
Since $V\Subset B(x_0,R_0)$ and $\overline{B}(x_0,R_0)\setminus V$ is connected,
also every $\overline{B}(x_0,r)\setminus V$ with $r\ge R_0$ is connected,
by the fact that $X$ is geodesic. Thus we have for all $x,y\in X\setminus V$
\[
d_M^V(x,y)\le 2\max\{R_0,d(x,x_0),d(y,x_0)\}<\infty.
\]
Obviously $d\le d_M^V$ and
$d_M^V(x,x)=0$ for all $x\in X\setminus V$.
If $d_M^V(x,y)=0$ then $d(x,y)=0$ and so $x=y$. Obviously
also $d_M^V(x,y)=d_M^V(y,x)$ for all $x,y\in X\setminus V$.
Finally, take $x,y,z\in X\setminus V$.
Take a continuum $K_1\subset X\setminus V$ containing $x,y$ and a continuum
$K_2\subset X\setminus V$ containing $y,z$.
Then $K_1\cup K_2\subset X\setminus V$ is a continuum containing $x,z$
and so
\[
d_M^V(x,z)\le \diam(K_1\cup K_2)\le \diam(K_1)+\diam (K_2).
\]
Taking the infimum over $K_1$ and $K_2$, we conclude that the triangle inequality holds.
Hence $d_M^V$ is a metric on $X\setminus V$.
To show that the topologies induced on $X\setminus V$ by $d$ and $d_M^V$ are the same,
take a sequence $x_j\to x$ with respect to $d$ in $X\setminus V$.
Fix $\varepsilon\in (0,R)$. Consider the
components of $\overline{B}(x,\varepsilon/2)\setminus V$ intersecting $B(x,\varepsilon/4)$.
By assumption there are only finitely many.
Each component not containing $x$ lies at a nonzero distance from $x$, and
so for all sufficiently large $j$, the point $x_j$ belongs to the component containing $x$; denote this component by $F_1$.
For such $j$, we have
\[
d_M^V(x_j,x)\le \diam F_1\le \varepsilon.
\]
We conclude that $x_j\to x$ also with respect to $d_M^V$.
Since we had $d\le d_M^V$, it follows that the topologies are the same.
If $x,y\in X\setminus V$, and $\varepsilon>0$, we can take a continuum $K$
containing $x$ and $y$, with $\diam K<d_M^V(x,y)+\varepsilon$.
The set $K$ is still a continuum in the metric space $(X\setminus V,d_M^V)$, and
for every $z,w\in K$,
\[
d_M^V(z,w)\le \diam K< d_M^V(x,y)+\varepsilon.
\]
It follows that $\diam_{d_M^V} K\le d_M^V(x,y)+\varepsilon$,
and so $(d_M^V)_M(x,y)\le d_M^V(x,y)+\varepsilon$, showing that $(d_M^V)_M=d_M^V$.
Finally let $(x_j)$ be a Cauchy sequence in $(X\setminus V, d_M^V)$.
Since $d\le d_M^V$, it is also a Cauchy sequence in $(X,d)$,
and so $x_j\to x\in X\setminus V$ with respect to $d$.
But as we showed before, this implies that $x_j\to x$
with respect to $d_M^V$.
\end{proof}
Let $B$ be a ball, let $B_1,B_2\subset B$
be two further balls, and let $u\in L^1(B)$ be such that $u=1$ in $B_1$
and $u=0$ in $B_2$. Then we have
\begin{equation}\label{eq:KoLa result}
\int_{B}|u-u_{B}|\,d\mu\ge \frac{1}{2}\min\{\mu(B_1),\mu(B_2)\};
\end{equation}
this follows easily by considering the cases $u_{B}\le 1/2$
and $u_{B}\ge 1/2$.
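Written out, the two cases are as follows: if $u_{B}\le 1/2$, then since $u=1$ in $B_1$,
\[
\int_{B}|u-u_{B}|\,d\mu\ge \int_{B_1}|1-u_{B}|\,d\mu\ge \frac{\mu(B_1)}{2},
\]
while if $u_{B}\ge 1/2$, then since $u=0$ in $B_2$,
\[
\int_{B}|u-u_{B}|\,d\mu\ge \int_{B_2}|u_{B}|\,d\mu\ge \frac{\mu(B_2)}{2}.
\]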
We have the following \emph{linear local connectedness} property;
versions of it have been proved before, e.g. in \cite{HK}, but under
certain growth bounds on the measure, which we do not wish to assume.
\begin{lemma}\label{lem:lin loc connectedness}
Let $B(x_0,R)$ be a ball and
let $V\subset B(x_0,2R)$ with
\begin{equation}\label{eq:V capacity in 4B}
\rcapa_1(V,B(x_0,3R))<\frac{1}{12C_P C_d^3}\frac{\mu(B(x_0,R))}{R}.
\end{equation}
Then every pair of points $y,z\in B(x_0,5R)\setminus B(x_0,4R)$
can be joined by a curve in $B(x_0,6R)\setminus V$.
\end{lemma}
\begin{proof}
If $d(y,z)\le 2R$, then the result is clear since the space is geodesic.
Thus assume that $d(y,z)> 2R$. Consider the disjoint balls
$B_1:=B(y,R)$ and $B_2:=B(z,R)$, both contained in
$B(x_0,6R)\setminus B(x_0,3R)$.
Denote by $\Gamma$ the family of curves $\gamma$ in $B(x_0,6R)$
with $\gamma(0)\in B_1$ and $\gamma(\ell_{\gamma})\in B_2$.
Note that $\Mod_1(\Gamma)<\infty$ since $\dist(B_1,B_2)>0$.
Let $\varepsilon>0$ and let $g\in L^1(B(x_0,6R))$ be such that $\int_{\gamma}g\,ds\ge 1$
for all $\gamma\in\Gamma$ and
\[
\int_{B(x_0,6R)}g\,d\mu<\Mod_1(\Gamma)+\varepsilon.
\]
Let
\[
u(x):=\min\left\{1,\inf \int_{\gamma}g\,ds\right\},\quad x\in B(x_0,6R),
\]
where the infimum is taken over curves $\gamma$ (also constant curves)
in $B(x_0,6R)$ with $\gamma(0)=x$ and
$\gamma(\ell_{\gamma})\in B_1$. Then $u=1$ in $B_2$.
Moreover, $g$ is an upper gradient of $u$ in $B(x_0,6R)$, see \cite[Lemma 5.25]{BB},
and $u$ is $\mu$-measurable by \cite[Theorem 1.11]{JJRRS}.
In total, $u\in N^{1,1}(B(x_0,6R))$ with $u=0$ in $B_1$ and
$u=1$ in $B_2$.
Thus using the Poincar\'e inequality,
\begin{align*}
\Mod_1(\Gamma)
&>\int_{B(x_0,6R)}g\,d\mu-\varepsilon\\
&\ge \frac{1}{6C_P R}\int_{B(x_0,6R)}|u-u_{B(x_0,6R)}|\,d\mu-\varepsilon\\
&\ge \frac{1}{12C_P R}\min\{\mu(B_1),\mu(B_2)\}-\varepsilon\quad\textrm{by }\eqref{eq:KoLa result}\\
&\ge \frac{1}{12C_P C_d^3 R}\mu(B(x_0,R))-\varepsilon
\end{align*}
and so
\[
\Mod_1(\Gamma)\ge \frac{1}{12C_P C_d^3 R}\mu(B(x_0,R)).
\]
On the other hand, by \eqref{eq:V capacity in 4B} we find a function
$v\in N^{1,1}(X)$ such that $v=1$ in $V$,
$v=0$ in $X\setminus B(x_0,3R)$,
and $v$ has an upper gradient $\widetilde{g}$ satisfying
\[
\int_X \widetilde{g}\,d\mu< \frac{1}{12C_P C_d^3}\frac{\mu(B(x_0,R))}{R}.
\]
Denote the family of all curves intersecting $V$ by $\Gamma_V$.
Since every $\gamma\in \Gamma\cap \Gamma_V$ connects $V$, where $v=1$, with a point of $B_1\subset X\setminus B(x_0,3R)$, where $v=0$, the upper gradient inequality gives $\int_{\gamma}\widetilde{g}\,ds\ge 1$ for all such $\gamma$,
and so
\[
\Mod_1(\Gamma\cap \Gamma_V)\le \int_X \widetilde{g}\,d\mu
< \frac{1}{12C_P C_d^3}\frac{\mu(B(x_0,R))}{R}.
\]
Since $\Mod_1(\Gamma)>\Mod_1(\Gamma\cap \Gamma_V)$, the family $\Gamma\setminus \Gamma_V$ is nonempty.
Take a curve $\gamma\in \Gamma\setminus \Gamma_V$. Now we get the required curve
by concatenating three curves:
the first going from $y$ to $\gamma(0)$ inside $B(y,R)$
(using the fact that the space is geodesic), the second $\gamma$, and the third
going from $\gamma(\ell_{\gamma})$ to $z$ inside $B(z,R)$.
\end{proof}
By using an argument involving Lipschitz cutoff functions, it is easy to
see that for any ball $B(x,r)$ and any set $A\subset B(x,r)$, we have
\begin{equation}\label{eq:capacity and Hausdorff measure}
\rcapa_1(A,B(x,3r))\le C_d \mathcal H(A).
\end{equation}
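A possible sketch of this cutoff argument (we may assume $\mathcal H(A)<\infty$,
with $\mathcal H$ understood as the codimension one Hausdorff measure defined through
coverings with the gauge $\mu(B(x_j,r_j))/r_j$): given $\varepsilon>0$, take balls
$B_j=B(x_j,r_j)$ with $x_j\in A$ and $r_j\le r/2$ covering $A$ and satisfying
$\sum_j \mu(B_j)/r_j\le \mathcal H(A)+\varepsilon$. Each Lipschitz function
$\eta_j:=\max\{0,\,1-r_j^{-1}\dist(\cdot,B_j)\}$ equals $1$ on $B_j$, vanishes
outside $2B_j\subset B(x,2r)$, and has upper gradient
$r_j^{-1}\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_{2B_j}$. Then $v:=\sup_j \eta_j$
is admissible for $\rcapa_1(A,B(x,3r))$, with upper gradient
$g:=\sum_j r_j^{-1}\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_{2B_j}$, and so by the doubling property,
\[
\rcapa_1(A,B(x,3r))\le \int_X g\,d\mu= \sum_j \frac{\mu(2B_j)}{r_j}
\le C_d\sum_j \frac{\mu(B_j)}{r_j}\le C_d(\mathcal H(A)+\varepsilon).
\]
Letting $\varepsilon\to 0$ gives \eqref{eq:capacity and Hausdorff measure}.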
In the following proposition we construct the space in which the metric and
Mazurkiewicz metric agree.
\begin{proposition}\label{prop:constructing the quasiconvex space}
Let $B=B(x,R)$ be a ball with $0<R<\frac{1}{32} \diam X$,
and let $A\subset B$ with
\[
\mathcal H(A)
\le \frac{1}{24 C_P C_S C_2 C_r C_d^4}
\frac{\mu(B)}{R}.
\]
Let $\varepsilon>0$. Then we find an open set $V$ with
$A\subset V\subset 2B$ and
\[
P(V,X)\le C_d\mathcal H(A)+\varepsilon,
\]
and such that the following hold:
the space $(Z,d_M^V,\mu)$ with $Z=X\setminus V$ is a complete metric space with
$(d_M^V)_M=d_M^V$, $\mu$ in $Z$ is a Borel regular outer measure
and doubling with constant $2^s C_1 C_d^2$, and
for every $y\in X\setminus V$ and $r>0$ we have
\[
\frac{\mu(B_Z(y,r))}{\mu(B(y,r))}\ge (2^s C_1 C_d)^{-1}
\]
where $B_Z(y,r)$ denotes an open ball in $Z$, defined with respect to the metric
$d_M^V$.
\end{proposition}
\begin{proof}
Using the fact that $\rcapa_1$
is an outer capacity in the sense of \eqref{eq:rcapa outer capacity},
as well as \eqref{eq:capacity and Hausdorff measure},
we find an open set $W$, with $A\subset W\subset B$, such that (note that
the first inequality is obvious)
\[
\rcapa_1(W,4B)\le \rcapa_1(W,3B)\le \rcapa_1(A,3B)+\varepsilon\le C_d\mathcal H(A)+\varepsilon.
\]
We can assume that
\[
\varepsilon<\frac{1}{24 C_P C_S C_2 C_r C_d^3}\frac{\mu(B)}{R}.
\]
Take a solution $V$ of the
$\mathcal K_{W,0}(4B)$-obstacle problem.
By Lemma \ref{lem:solutions from capacity}, we have
\[
P(V,X)\le \rcapa_1(W,4B)\le C_d\mathcal H(A)+\varepsilon.
\]
By Theorem \ref{thm:superminimizers are lsc}, the function $\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_V^{\wedge}$
is lower semicontinuous, and by redefining $V$ in a set of measure zero,
we get $\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_V=\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_V^{\wedge}$ and so $V$ is open.
By Lemma \ref{lem:smallness in annuli} we know that
for all
$y\in 3 B\setminus 2B$,
\[
\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_V^{\vee}(y)\le C_2 R \frac{\rcapa_1(W,4B)}{\mu(B)}
\le C_2 R \frac{C_d \mathcal H(A)+\varepsilon}{\mu(B)}<1
\]
and so $\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_V^{\vee}=0$ in $3 B\setminus 2B$. Then
in fact $\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_V=\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_V^{\vee}=0$ in $4B\setminus 2B$, that is,
$V\subset 2B$, because otherwise we could remove the parts of $V$ inside $4B\setminus 3B$
to decrease $P(V,X)$.
By the isoperimetric inequality
\eqref{eq:isop inequality with zero boundary values},
\begin{equation}\label{eq:measure of V}
\mu(V)\le 2C_S R P(V,X)\le 2 C_S C_d R \mathcal H(A)
+2C_S R \varepsilon
\le \frac{\mu(B)}{2C_d^2}.
\end{equation}
Moreover, by \eqref{eq:variational one and BV capacity} we get
\begin{align*}
\rcapa_1(V,3B)
&\le C_r \rcapa_{\mathrm{BV}}^{\vee}(V,3B)\\
&\le C_r P(V,X)\le C_r C_d \mathcal H(A)+C_r\varepsilon
< \frac{1}{12C_P C_d^3}\frac{\mu(B)}{R}.
\end{align*}
By Lemma \ref{lem:lin loc connectedness},
$5B\setminus 4B$ is contained in one component of
$6\overline{B}\setminus V$.
Since the space is geodesic, in fact
$6\overline{B}\setminus 4B$ is contained in one component of
$6\overline{B}\setminus V$.
Call this component $F_1$. Moreover, denote $F:=X\setminus V$; $F$ is a closed set with
$P(F,X)=P(V,X)<\infty$.
Consider all components of $F\cap 6\overline{B}$.
Suppose there is another component $F_2$ with nonzero $\mu$-measure.
Denote by $F_1,F_2,\ldots$ all the components with nonzero $\mu$-measure
(as usual, some of these may be empty).
By the relative isoperimetric inequality
\eqref{eq:relative isoperimetric inequality}, we have
\begin{equation}\label{eq:F2 has perimeter}
P(F_2,6B)>0.
\end{equation}
Now the set $\widetilde{V}:=V\cup \bigcup_{j=2}^{\infty}F_j\subset 4B$ is admissible
for the $\mathcal K_{W,0}(4B)$-obstacle problem, with
\begin{align*}
P(\widetilde{V},X)
&=P(\widetilde{V},6B)\\
&=P\Bigg(X\setminus \Big(V\cup \bigcup_{j=2}^{\infty}F_j\Big),6B\Bigg)\\
&=P\Bigg(F\setminus \bigcup_{j=2}^{\infty}F_j,6B\Bigg)\\
&= P(F,6B)-\sum_{j=2}^{\infty}P(F_j,6B)\quad\textrm{by Proposition }\ref{prop:connected components}\\
&<P(F,6B)\quad\textrm{by }\eqref{eq:F2 has perimeter}\\
&=P(V,6B)=P(V,X).
\end{align*}
This is a contradiction with the fact that $V$ is a solution
of the $\mathcal K_{W,0}(4B)$-obstacle problem. Thus
by Proposition \ref{prop:connected components},
$F\cap 6\overline{B}$
is the union of $F_1$ and a set of measure zero $N$.
Suppose
\[
y\in 6\overline{B}\cap F\setminus F_1
=4B\cap F\setminus F_1.
\]
Now $y$ is at a nonzero distance from $F_1$. Thus for small $\delta>0$,
\[
\mu(B(y,\delta)\cap F)\le \mu(N)=0.
\]
Note that since we had $\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_V=\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_V^{\wedge}$, it follows that
$\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_F=\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_F^{\vee}$. Thus in fact such $y$ cannot exist
and $F\cap 6\overline{B}=F_1$ is connected.
If $y\in F\setminus B(x,3R)$ and
$0<r\le R$, then $\overline{B}(y,r)\cap F=\overline{B}(y,r)$ is connected since
the space is geodesic.
If $y\in F\cap B(x,3R)$ and
$0<r\le R$, by Proposition \ref{prop:connected components} we know that
$F\cap \overline{B}(y,r)$
consists of at most countably many components $F_1,F_2,\ldots$
and a set of measure zero $\widetilde{N}$.
By Proposition \ref{prop:solutions are superminimizers}
we know that $\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_F$ is a $1$-subminimizer in $B(x,4R)$, and then
also in $B(y,r)\subset B(x,4R)$.
Then each $\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_{F_j}$ is a $1$-subminimizer in $B(y,r)$
by Proposition \ref{prop:components are subminimizers}.
By Corollary \ref{cor:weak Harnack} we get for each $F_j$
with $\mu(B(y,r/2)\cap F_j)>0$ that
\begin{equation}\label{eq:subminimizer component measure lower bound}
\frac{\mu(F_j\cap B(y,r))}{\mu(B(y,r))}\ge (2^s C_1)^{-1}.
\end{equation}
Thus there are fewer than $2^s C_1+1$ such components, and we may
relabel them as $F_1,\ldots,F_M$.
Suppose
\[
z\in B(y,r/2)\cap \widetilde{N}\setminus \bigcup_{j=1}^M F_j.
\]
Such a point is at a nonzero distance from all of $F_1,\ldots,F_M$. Thus for small $\delta>0$,
\[
\mu(B(z,\delta)\cap F)\le \mu(\widetilde{N})+\sum_{j=M+1}^{\infty}\mu(F_j\cap B(y,r/2))=0.
\]
As before, we have $\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_F=\text{\raise 1.3pt \hbox{$\chi$}\kern-0.2pt}_F^{\vee}$. Thus in fact such $z$ cannot exist
and
\[
F\cap B(y,r/2)=B(y,r/2)\cap \bigcup_{j=1}^M F_j.
\]
Now Lemma \ref{lem:new metric lemma} gives that $(Z,d_M^V,\mu)$,
with $Z=X\setminus V$,
is a complete metric space, $d\le d_M^V$, the topologies induced
by $d$ and $d_M^V$ are the same, and $(d_M^V)_M=d_M^V$.
Note that $\mu$ restricted to the subsets of $X\setminus V$ is still a Borel
regular outer measure, see \cite[Lemma 3.3.11]{HKST15}.
Since the topologies induced by $d$ and $d_M^V$
are the same, $\mu$ remains a Borel regular outer measure in $Z$.
(Note that as sets, we have $X\setminus V=F=Z$.)
Denoting by $F_1$ the component of $F\cap \overline{B}(y,r)$ containing
$y$,
by \eqref{eq:subminimizer component measure lower bound} we have for
$y\in F\cap B(x,3R)$ and
$0<r\le R$ that
\begin{equation}\label{eq:size of F1}
\frac{\mu(B(y,r)\cap F_1)}{\mu(B(y,r))}\ge (2^s C_1)^{-1}.
\end{equation}
Recall that if $y\in F\setminus B(x,3R)$, then $F_1=\overline{B}(y,r)$
and so \eqref{eq:size of F1} holds.
Inequality \eqref{eq:size of F1} is easily seen to hold also for
all $y\in F$ and $r>R$ by \eqref{eq:measure of V}.
It follows that for all $y\in F$ and $r>0$, we have
\[
\frac{\mu(B_Z(y,2r))}{\mu(B(y,r))}\ge (2^s C_1)^{-1}
\]
and so in fact
\[
\frac{\mu(B_Z(y,r))}{\mu(B(y,r))}\ge (2^s C_1 C_d)^{-1}\quad\textrm{for all }y\in Z
\textrm{ and }r>0,
\]
as desired.
Thus
\[
\frac{\mu(B_Z(y,2r))}{\mu(B_Z(y,r))}\le 2^s C_1 C_d\frac{\mu(B(y,2r))}{\mu(B(y,r))}
\le 2^s C_1 C_d^2.
\]
Thus in the space $(Z,d_M^V,\mu)$, the measure $\mu$
is doubling with constant $2^s C_1 C_d^2$.
\end{proof}
\section{Proof of the main result}\label{sec:proof of the main result}
In this section we prove the main result of the paper, Theorem \ref{thm:main theorem}.
First note that with the choice
$\widehat{C}_d=2^s C_1 C_d^2$, the constant appearing in
Corollary \ref{cor:density points} becomes
\[
\frac{1}{4 \widehat{C}_d^{12}}
=\frac{1}{4 (2^s C_1 C_d^2)^{12}}=:\beta_0.
\]
Recall from \eqref{eq:C1} that we can take
$C_1=2^{(s+1)^2}(6\widetilde{C}_S C_d)^s$.
Define
\begin{equation}\label{eq:definition of beta}
\begin{split}
\beta:=\frac{\beta_0}{2^s C_1 C_d}=\frac{1}{2^{2+s}C_1 C_d (2^s C_1 C_d^2)^{12}}
&=\frac{1}{2^{13s +2} (2^{(s+1)^2}(6\widetilde{C}_S C_d)^s)^{13} C_d^{25}}\\
&=\frac{1}{2^{13s^2+52s +15}3^{13s} \widetilde{C}_S^{13s} C_d^{13s+25}}.
\end{split}
\end{equation}
Note that in the Euclidean space ${\mathbb R}^n$, $n\ge 2$, we can take $C_d=2^n$, $s=n$, and $\widetilde{C}_S=2^{-1}n^{-1/2}\omega_n^{1/n}$, where $\omega_n$ is the volume
of the Euclidean unit ball, and then
\begin{equation}\label{eq:choice of beta in Euclidean space}
\beta = 2^{-26n^2-64n-15} 3^{-13n}n^{13n/2}\omega_n^{-13}.
\end{equation}
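For instance, in the plane ($n=2$) we have $\omega_2=\pi$, and
\eqref{eq:choice of beta in Euclidean space} evaluates to
\[
\beta=2^{-234}\,3^{-26}\,\pi^{-13},
\]
which is roughly of order $10^{-89}$; the smallness of $\beta$ reflects the many
crude constants accumulated in the course of the proof.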
Recall the definition of the strong boundary from \eqref{eq:strong boundary}.
\begin{theorem}\label{thm:comparison of boundaries}
Let $\Omega\subset X$ be open and let
$E\subset X$ be $\mu$-measurable with
$\mathcal H(\Sigma_{\beta} E\cap \Omega)<\infty$.
Then $\mathcal H((\partial^*E\setminus \Sigma_{\beta} E)\cap \Omega)=0$.
\end{theorem}
\begin{proof}
By a standard covering argument (see e.g. the proof of \cite[Lemma 2.6]{KKST3}),
we find that
\[
\lim_{r\to 0}r\frac{\mathcal H(\Sigma_{\beta} E\cap B(x,r))}{\mu(B(x,r))}=0
\]
for all $x\in \Omega\setminus (\Sigma_{\beta}E\cup N)$, with $\mathcal H(N)=0$.
We will show that $\partial^*E\cap \Omega\subset (\Sigma_{\beta} E\cup N)\cap \Omega$
and thereby prove the claim.
Suppose instead that there exists $x\in\Omega\cap \partial^*E\setminus (\Sigma_{\beta} E\cup N)$.
Then
\[
\lim_{r\to 0}r\frac{\mathcal H(\Sigma_{\beta} E\cap B(x,r))}{\mu(B(x,r))}=0
\]
and
\[
\limsup_{r\to 0}\frac{\mu(B(x,r)\cap E)}{\mu(B(x,r))}>0
\quad \textrm{and}\quad\limsup_{r\to 0}\frac{\mu(B(x,r)\setminus E)}{\mu(B(x,r))}>0.
\]
Thus for some $0<a<(2C_d^2)^{-1}$ we have
\[
\limsup_{r\to 0}\frac{\mu(B(x,r)\cap E)}{\mu(B(x,r))}>C_d a
\quad \textrm{and}\quad\limsup_{r\to 0}\frac{\mu(B(x,r)\setminus E)}{\mu(B(x,r))}>C_d a.
\]
Now we can choose $0<R_0<\tfrac{1}{32} \diam X$ such that
\[
\frac{\mu(B(x,40^{-1}R_0)\cap E)}{\mu(B(x,40^{-1}R_0))}>a
\]
and
\[
r\frac{\mathcal H(\Sigma_{\beta} E\cap B(x,r))}{\mu(B(x,r))}
<\frac{a}{24 \cdot 2^s C_P C_S C_1 C_2 C_r C_d^8}
\]
for all $0<r\le R_0$.
Choose the smallest $j=0,1,\ldots$ such that for some
$r\in (2^{-j-1}R_0,2^{-j}R_0]$ we have
\[
\frac{\mu(B(x,40^{-1}r)\setminus E)}{\mu(B(x,40^{-1}r))}>C_d a
\quad\textrm{and thus}\quad\frac{\mu(B(x,40^{-1}2^{-j}R_0)\setminus E)}{\mu(B(x,40^{-1}2^{-j}R_0))}>a.
\]
Let $R:=2^{-j}R_0$. If $j\ge 1$, then
\[
\frac{\mu(B(x,20^{-1}R)\setminus E)}{\mu(B(x,20^{-1}R))} \le C_d a
\]
and so
\begin{align*}
\frac{\mu(B(x,40^{-1}R)\cap E)}{\mu(B(x,40^{-1}R))}
&\ge \frac{\mu(B(x,40^{-1}R))-\mu(B(x,20^{-1}R)\setminus E)}{\mu(B(x,40^{-1}R))}\\
&\ge 1 -C_d \frac{\mu(B(x,20^{-1}R)\setminus E)}{\mu(B(x,20^{-1}R))}\\
&\ge 1 -C_d^2 a\ge 1 -C_d^2 \frac{1}{2 C_d^2}=\frac{1}{2}> a.
\end{align*}
Thus
\begin{equation}\label{eq:portion of E}
a<\frac{\mu(B(x,40^{-1}R)\cap E)}{\mu(B(x,40^{-1}R))}<1-a,
\end{equation}
which clearly holds also if $j=0$, and
\[
R\frac{\mathcal H(\Sigma_{\beta} E\cap B(x,R))}{\mu(B(x,R))}
<\frac{a}{24 \cdot 2^s C_P C_S C_1 C_2 C_r C_d^8}.
\]
Let $A:= \Sigma_{\beta} E\cap B(x,R)$.
By Proposition \ref{prop:constructing the quasiconvex space}
we find an open set $V$ with $A\subset V\subset B(x,2R)$ and such that denoting $Z=X\setminus V$, the space
$(Z,d_M^V,\mu)$ is a complete metric space with $d\le d_M^V=(d_M^V)_M$ in $Z$,
$\mu$ in $Z$ is a Borel regular outer measure and
doubling with constant $\widehat{C}_d=2^s C_1 C_d^2$,
and for every $y\in Z$ and $r>0$ we have
\begin{equation}\label{eq:lower measure property}
\frac{\mu(B_Z(y,r))}{\mu(B(y,r))}\ge (2^s C_1 C_d)^{-1}.
\end{equation}
Moreover, by choosing a suitably small $\varepsilon>0$,
\begin{equation}\label{eq:perimeter of V}
P(V,X)\le C_d\mathcal H(A)+\varepsilon
< \frac{a}{2^{s+1}C_P C_S C_1 C_d^7}\frac{\mu(B(x,R))}{R}.
\end{equation}
Thus by the isoperimetric inequality
\eqref{eq:isop inequality with zero boundary values},
\[
\mu(V)\le 2C_S RP(V,X)< \frac{1}{C_d^6}\mu(B(x,R))
\le \mu(B(x,40^{-1}R)).
\]
Thus we can choose $y\in B(x,40^{-1}R)\setminus V$.
Denote $F:=X\setminus V$.
Let $F_1$ be the component of
$\overline{B}(y,20^{-1}R)\setminus V$ containing $y$.
By \eqref{eq:size of F1} (and the comments after it) we know that
\[
\mu(F_1)\ge (2^s C_1)^{-1}\mu(B(y,20^{-1}R)).
\]
Since $\mu(\{z\in X:\,d(z,y)=20^{-1}R\})=0$ (see \cite[Corollary 2.2]{Buc}),
now also
\[
\mu(B(y,20^{-1}R)\cap F_1)\ge (2^s C_1)^{-1}\mu(B(y,20^{-1}R)).
\]
Suppose that
\[
\mu(B(y,20^{-1}R)\setminus F_1)\ge \frac{a}{2^s C_1 C_d^2}\mu(B(y,20^{-1}R)).
\]
Then
\begin{align*}
P(V,B(y,20^{-1}R))
&=P(F,B(y,20^{-1}R))\\
&\ge P(F_1,B(y,20^{-1}R))\quad\textrm{by Proposition }\ref{prop:connected components}\\
&\ge \frac{a}{2\cdot 2^s C_P C_1 C_d^2}
\frac{\mu(B(y,20^{-1}R))}{20^{-1}R}
\quad\textrm{by }\eqref{eq:relative isoperimetric inequality}\\
&\ge \frac{a}{2^{s+1} C_P C_1 C_d^7}\frac{\mu(B(x,R))}{R}.
\end{align*}
This contradicts \eqref{eq:perimeter of V}, and so necessarily
\begin{equation}\label{eq:complement of F1 small measure}
\mu(B(y,20^{-1}R)\setminus F_1)< \frac{a}{2^s C_1 C_d^2}\mu(B(y,20^{-1}R))
\le \frac{a}{C_d^2}\mu(B(y,20^{-1}R)).
\end{equation}
Now
\begin{align*}
C_d\frac{\mu(B_{Z}(y,10^{-1}R)\cap E)}{\mu(B(y,10^{-1}R))}
&\ge \frac{\mu(B(y,20^{-1}R)\cap E\cap F_1)}{\mu(B(y,20^{-1}R))}\\
&\ge \frac{\mu(B(y,20^{-1}R)\cap E)}{\mu(B(y,20^{-1}R))}-\frac{a}{C_d^2}\quad\textrm{by }\eqref{eq:complement of F1 small measure}\\
&\ge \frac{1}{C_d^2}\frac{\mu(B(x,40^{-1}R)\cap E)}{\mu(B(x,40^{-1}R))}-\frac{a}{C_d^2}\\
&> \frac{a}{C_d^2}-\frac{a}{C_d^2}= 0\quad\textrm{by }\eqref{eq:portion of E}.
\end{align*}
The same string of inequalities holds with $E$ replaced by $X\setminus E$.
It follows that
\[
0<\mu(B_{Z}(y,10^{-1}R)\cap E)<\mu(B_Z(y,10^{-1}R)).
\]
Denoting by $\Sigma_{\beta_0}^ZE$ the strong boundary defined in the space
$(Z,d_M^V,\mu)$, by Corollary \ref{cor:density points} we find a point
\[
z\in\Sigma_{\beta_0}^Z E\cap B_{Z}(y,9R/10)
\subset \Sigma_{\beta_0}^Z E\cap B(y,9R/10)\setminus V
\subset \Sigma_{\beta_0}^ZE\cap B(x,R)\setminus V.
\]
Now using \eqref{eq:lower measure property}, we get
\[
\liminf_{r\to 0}\frac{\mu(B(z,r)\cap E)}{\mu(B(z,r))}
\ge \liminf_{r\to 0}\frac{\mu(B_{Z}(z,r)\cap E)}{\mu(B_{Z}(z,r))}
\frac{\mu(B_{Z}(z,r))}{\mu(B(z,r))}
\ge \beta_0\frac{1}{2^s C_1 C_d}=\beta,
\]
and analogously for $X\setminus E$.
Thus $z\in\Sigma_{\beta} E\cap B(x,R)\setminus V$,
a contradiction.
\end{proof}
Recall the usual version of Federer's characterization in metric spaces.
\begin{theorem}[{\cite[Theorem 1.1]{L-Fedchar}}]\label{thm:Federers characterization}
Let $\Omega\subset X$ be an open set, let $E\subset X$ be a $\mu$-measurable set, and
suppose that $\mathcal H(\partial^*E\cap \Omega)<\infty$. Then $P(E,\Omega)<\infty$.
\end{theorem}
Now we can prove our main result; recall from the discussion on page
\pageref{quasiconvex and geodesic} that one can assume the space to be
geodesic, as we have done in most of the paper.
(However, the constant $\beta$, which is defined explicitly in geodesic spaces
in \eqref{eq:definition of beta}, will have a different form in the original
space considered in Theorem \ref{thm:main theorem}.)
\begin{proof}[Proof of Theorem \ref{thm:main theorem}]
By Theorem \ref{thm:comparison of boundaries} we get $\mathcal H(\partial^*E\cap \Omega)<\infty$, and then Theorem \ref{thm:Federers characterization} gives
$P(E,\Omega)<\infty$.
\end{proof}
https://arxiv.org/abs/1309.4564 | Asymptotics of Landau constants with optimal error bounds | We study the asymptotic expansion for the Landau constants $G_n$ $$\pi G_n\sim \ln N + \gamma+4\ln 2 + \sum_{s=1}^\infty \frac {\beta_{2s}}{N^{2s}},~~n\rightarrow \infty, $$ where $N=n+3/4$, $\gamma=0.5772\cdots$ is Euler's constant, and $(-1)^{s+1}\beta_{2s}$ are positive rational numbers, given explicitly in an iterative manner. We show that the error due to truncation is bounded in absolute value by, and of the same sign as, the first neglected term for all nonnegative $n$. Consequently, we obtain optimal sharp bounds up to arbitrary orders of the form $$ \ln N+\gamma+4\ln 2+\sum_{s=1}^{2m}\frac{\beta_{2s}}{N^{2s}}< \pi G_n < \ln N+\gamma+4\ln 2+\sum_{s=1}^{2k-1}\frac{\beta_{2s}}{N^{2s}}$$ for all $n=0,1,2,\cdots$, $m=1,2,\cdots$, and $k=1,2,\cdots$.The results are proved by approximating the coefficients $\beta_{2s}$ with the Gauss hypergeometric functions involved, and by using the second order difference equation satisfied by $G_n$, as well as an integral representation of the constants $\rho_k=(-1)^{k+1}\beta_{2k}/(2k-1)!$. | \section{Introduction and statement of results} \indent\setcounter{section} {1}
\setcounter{equation} {0} \label{sec:1}
A century ago, it was
shown by Landau \cite{Landau} that if a function
$f(z)$ is analytic in the unit disc, such that $|f(z)|<1$, with
the Maclaurin expansion
$$f(z)=a_0+a_1 z+a_2 z^2+\cdots+ a_n z^n+\cdots ,~~~|z|<1,$$
then it holds
$$|a_0+a_1+a_2+\cdots +a_n|\leq G_n,~~~n=0, 1,2,\cdots,$$
where $G_0=1$ and
\begin{equation}\label{(1.1)}
G_n=1+ \left(\frac 1 2\right)^2 +\left ( \frac {1\cdot 3}{2\cdot 4} \right )^2 +\cdots +\left ( \frac{1\cdot 3\cdots (2n-1)}{2\cdot 4\cdots (2n)}\right )^2
\end{equation}
for $n=1,2,\cdots$,
and the equal sign can be attained for each $n$. The constants $G_n$ are termed Landau's constants; see, e.g., Watson \cite{Watson}.
Efforts have been made to approximate these constants from the very beginning. Indeed,
Landau himself \cite{Landau} has worked out the large-$n$ behavior
\begin{equation*}\label{(1.2)}
G_n\sim \frac 1 \pi \ln n,~~~\mbox{as}~~n\rightarrow \infty; \end{equation*}
see also Watson \cite{Watson}.
Since then, work on approximating $G_n$ has gone in two related directions. One is to find sharper bounds of $G_n$ for all positive integers $n$, and the other is
to obtain large-$n$ asymptotic approximations for the constants.
\subsection{Sharper bounds}
Many authors have worked on the sharp bounds of $G_n$.
For example, in 1982, Brutman \cite{Brutman} obtains
\begin{equation*}\label{(1.3)} 1+ \pi^{-1} \ln(n+1) \leq G_n< 1.0663+ \pi^{-1} \ln(n+1),~~n=0,1,2,\cdots. \end{equation*}
The result is improved in 1991 by Falaleev \cite{Falaleev} to give
\begin{equation*}\label{(1.4)} 1.0662+\pi^{-1} \ln(n+0.75)< G_n \leq 1.0916 +\pi^{-1} \ln(n+0.75),~~n=0,1,2,\cdots. \end{equation*}
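Both sets of bounds are easy to probe numerically. The sketch below is our own illustration (function names are assumptions; `landau_G` evaluates (\ref{(1.1)}) exactly and returns a float):

```python
import math
from fractions import Fraction

def landau_G(n):
    # G_n from the explicit sum (1.1), accumulated with exact rationals
    term, total = Fraction(1), Fraction(0)
    for k in range(n + 1):
        total += term
        term *= Fraction(2 * k + 1, 2 * k + 2) ** 2
    return float(total)

def brutman_bounds(n):
    # Brutman (1982): 1 + ln(n+1)/pi <= G_n < 1.0663 + ln(n+1)/pi
    base = math.log(n + 1) / math.pi
    return 1 + base, 1.0663 + base

def falaleev_bounds(n):
    # Falaleev (1991): 1.0662 + ln(n+0.75)/pi < G_n <= 1.0916 + ln(n+0.75)/pi
    base = math.log(n + 0.75) / math.pi
    return 1.0662 + base, 1.0916 + base
```

Note that Brutman's lower bound is attained at $n=0$, where $G_0=1$.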
In 2000, an attempt is made by Cvijovi\'c \& Klinowski \cite{CK} to use the digamma function $\psi=\Gamma'/\Gamma$ (see, e.g., \cite[p.136, (5.2.2)]{NIST}). They prove that
\begin{equation*}\label{(1.5)} c_0 +\pi^{-1} \psi(n+ 5/4)< G_n < 1.0725+\pi^{-1} \psi(n+ 5/4) ,~~n=0,1,2,\cdots , \end{equation*}
and
\begin{equation*}\label{(1.6)} 0.9883+\pi^{-1} \psi(n+ 3 /2) < G_n < c_0+\pi^{-1} \psi(n+ 3 /2) ,~~n=0,1,2,\cdots , \end{equation*}
where $c_0=(\gamma+4\ln 2)/\pi=1.0662\cdots$, $\gamma=0.5772\cdots$ is the Euler constant (\cite[(5.2.3)]{NIST}).
Inequalities of this type are revisited in a 2002 paper \cite{Alzer} of Alzer. In that paper, the problem is turned into the following: to find the largest $\alpha$ and smallest $\beta$ such that
\begin{equation*}\label{(1.7)}c_0+\pi^{-1} \psi(n+\alpha )\leq G_n\leq c_0+ \psi(n+\beta )~~\mbox{for~all}~ n\geq 0. \end{equation*}
The answer is that $\alpha=5/4$ and $\beta=\psi^{-1}(\pi(1-c_0))=1.2662\cdots$, appealing to the complete monotonicity of $\Delta G_n$.
In 2009,
Zhao \cite{Zhao} starts seeking higher terms in the bounds. A formula in \cite{Zhao}, holding for all positive integer $n$, reads
\begin{equation}\label{(1.8)} \ln (16n)+\gamma-\frac 1 {4n} +\frac 5 {192n^2} <\pi G_{n-1} < \ln (16n)+\gamma-\frac 1 {4n} +\frac 5 {192n^2} +\frac 3 {128n^3}. \end{equation}
Several authors have made improvements. In a 2011 paper \cite{Mortici},
Mortici gives an inequality of the above type involving higher order term $1/n^{5}$. A $1/n^{7}$ term is brought in by Granath in a recent paper \cite{Granath} in 2012.
It seems possible to obtain sharper bounds involving terms of higher and higher orders. Accordingly, difficulties may arise: the case-by-case process of taking more and more terms might be endless.
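As an illustration (ours, not from the original text), Zhao's two-sided bound (\ref{(1.8)}) can be confirmed numerically for the first values of $n$; Euler's constant is hard-coded and `landau_G` again evaluates (\ref{(1.1)}):

```python
import math
from fractions import Fraction

EULER_GAMMA = 0.5772156649015329   # Euler's constant

def landau_G(n):
    term, total = Fraction(1), Fraction(0)
    for k in range(n + 1):
        total += term
        term *= Fraction(2 * k + 1, 2 * k + 2) ** 2
    return float(total)

def zhao_bounds(n):
    # two-sided bound (1.8) on pi * G_{n-1}, valid for n = 1, 2, ...
    base = math.log(16 * n) + EULER_GAMMA - 1 / (4 * n) + 5 / (192 * n ** 2)
    return base, base + 3 / (128 * n ** 3)
```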
\subsection{Asymptotic approximations}
Most of the above inequalities can be used to derive asymptotic approximations for $G_n$. Such approximations can also be obtained by employing integral representations, generating functions and relations with hypergeometric functions; see, e.g., \cite{LLXZ}. Indeed,
back to
Watson \cite{Watson}, a formula of asymptotic nature is derived by using a certain integral representation:
\begin{equation}\label{(1.9)}
\begin{aligned}
G_n& = \frac 1 \pi \ln(n+1) +\frac {\{\Gamma(n+3/2)\}^2}{\pi^2\Gamma(n+1)}\sum^{m-1}_{l=1} \frac{\{\Gamma(l+1/2)\}^2}{\Gamma(l+1)\Gamma(n+l+2)}\times \\
& \times \left\{ \psi(l+n+2) -\ln (n+1)+\psi(l+1)-2\psi(l+ 1/2)\right\}+ O\left \{ (n+1)^{1-m}\right \}
\end{aligned}
\end{equation}for large $n$ and positive integer $m$.
Theoretically, an asymptotic expansion can be extracted from (\ref{(1.9)}) by substituting the large-$n$ expansions of $\Gamma$ and $\psi$ into it. In fact, Watson obtains
\begin{equation*}\label{(1.10)}
G_n\sim \frac 1 \pi\left [ \ln(n+1)+ \gamma+4\ln 2- \frac 1 {4 (n+1)} +\frac 5 {192 (n+1)^2} +\cdots\right ], \end{equation*}
of which (\ref{(1.8)}) is an extended version.
We skip to some very recent progress in this direction. In
the manuscript \cite{Ismail}, Ismail, Li and Rahman
derive a complete asymptotic expansion for the Landau constants
$G_n$, using the asymptotic sequence $n!/(n + k)!$. The approach is based on a formula of Ramanujan, which connects the Landau constants with a hypergeometric function.
Several relevant papers are worth mentioning. In \cite{NemesNemes}, Nemes and Nemes derive full asymptotic expansions using a formula in \cite{CK}. They also conjecture a symmetry property of the coefficients in the expansion.
The conjecture has been proved by G. Nemes himself in \cite{Nemes}.
\noindent
\begin{prop}\label{Prop 1}
(Nemes) Let $0 <h < 3/2$. The Landau constants $G_n$ have the following asymptotic expansions
\begin{equation}\label{(1.11)}
G_n\sim \frac 1 \pi \ln (n+h) +\frac 1 \pi (\gamma+4\ln 2 ) - \sum_{k\geq 1}\frac {g_k(h)}{(n+h)^k}\end{equation}
as $n\rightarrow +\infty$, where the coefficients $g_k(h)$ are certain computable constants that satisfy $g_k(h)=(-1)^k g_k(3/2-h)$ for every $k\geq 1$.\end{prop}
As an important special case, Nemes \cite{Nemes} has further proved that
\begin{equation}\label{(1.12)}
\pi G_n\sim \ln (n+3/4) + \gamma+4\ln 2 + \sum_{s=1}^\infty \frac { \beta_{2s}}{ (n+3/4)^{2s}},~~n\rightarrow \infty, \end{equation}
where the coefficients $(-1)^{s+1}\beta_{2s}$ are positive rational numbers.
The argument in \cite{Nemes} is based on an integral representation of $G_n$ involving a Gauss hypergeometric function in the integrand.
In \cite{LLXZ}, on the other hand, the authors of the present paper study this asymptotic problem using an entirely different approach, starting from the obvious observation that the Landau constants satisfy
a difference equation
\begin{equation}\label{(1.13)}
G_{n+1}-G_n=\left[\frac{2n+1}{2n+2}\right]^2 (G_n-G_{n-1}),~~~n=0,1,\cdots,
\end{equation}as can be seen from the explicit formula (\ref{(1.1)}), where $G_{-1}:=0$.
By applying the theory of Wong and Li for
second-order linear
difference equations \cite{WongLi1992a} to (\ref{(1.13)}), the general expansion in (\ref{(1.11)}) is obtained, and the conjecture of \cite{NemesNemes} is also confirmed.
An advantage of this approach,
compared with the previous ones, is that all coefficients in the expansion are given iteratively in an explicit manner.
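The difference equation (\ref{(1.13)}) is immediate from (\ref{(1.1)}), since $G_{n+1}-G_n$ is the $(n+1)$-st summand of the series. The following sketch (our illustration) verifies it in exact rational arithmetic:

```python
from fractions import Fraction

def landau_G(n):
    # G_n from (1.1); the convention G_{-1} := 0 is used in (1.13)
    if n < 0:
        return Fraction(0)
    term, total = Fraction(1), Fraction(0)
    for k in range(n + 1):
        total += term
        term *= Fraction(2 * k + 1, 2 * k + 2) ** 2
    return total

def recurrence_holds(n):
    # (1.13): G_{n+1} - G_n = ((2n+1)/(2n+2))^2 * (G_n - G_{n-1})
    lhs = landau_G(n + 1) - landau_G(n)
    rhs = Fraction(2 * n + 1, 2 * n + 2) ** 2 * (landau_G(n) - landau_G(n - 1))
    return lhs == rhs
```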
\subsection{A question and numerical evidence}
As pointed out in \cite{LLXZ}, the case corresponding to (\ref{(1.12)}) is numerically efficient since all odd terms in the expansion vanish.
We will find that this expansion in terms of $n+3/4$ is even more special, from both the asymptotic and the sharp-bound points of view.
From (\ref{(1.12)}), as suggested by the alternating signs and by numerical calculations,
there is a natural question as follows:
\noindent {\qe\label{question 1}{Is the error due to truncation of (\ref{(1.12)}) bounded in absolute value by, and of the same sign as, the first neglected term?
Or, more precisely,
do we have the following?
\begin{equation}\label{(1.14)}\frac {\varepsilon_l(N)}{\beta_{2l}/N^{2l}} \in (0, 1) ~~\mbox{for}~~n=0,1,2,\cdots~~\mbox{and}~~l=1,2,\cdots, \end{equation} where $N=n+3/4$, and
\begin{equation}\label{(1.15)}\varepsilon_l(N)=\pi G_n - \left \{ \ln N+\gamma+4\ln 2+\sum_{s=1}^{l-1} \frac{\beta_{2s}}{N^{2s}}\right\}.\end{equation}
}}\vskip .5cm
Recalling that $(-1)^{s+1}\beta_{2s}$ are positive, it is readily seen that a positive answer to (\ref{(1.14)}) is equivalent to
\begin{equation}\label{(1.16)}
\varepsilon_{2k}(N)< 0~~\mbox{and}~~\varepsilon_{2k-1}(N)>0
\end{equation}for all $k=1,2,3,\cdots$ and $n=0,1,2,\cdots$.
The question reminds us of an earlier work of Shivakumar and Wong \cite{ShivakumarWong}, where an asymptotic expansion is obtained for the Lebesgue constants associated with polynomial interpolation at the zeros of the Chebyshev polynomials, and the error in stopping the series at any term is shown to have the same sign as, and to be smaller in absolute value than, the first term neglected.
Similar discussion can be found in, e.g., Olver \cite[p.285]{olver1974}, on the Euler-Maclaurin formula.
Numerical experiments agree with (\ref{(1.14)}). The functions $\frac {\varepsilon_l(N)}{\beta_{2l}/N^{2l}}$, $N=n+\frac 3 4$, are depicted in Figure \ref{figure 1} for the first few $n$.
\begin{figure}[h]
\begin{center}
\includegraphics[height=6cm]{figure1a.eps}\hskip 1cm \includegraphics[height=6.12cm]{figure1b.eps}
\caption{The function $\frac {\varepsilon_l(N)}{\beta_{2l}/N^{2l}}$. Left: $l=2$, $n=0$-$30$. Right: $l=16$, $n=0$-$50$.}
\label{figure 1}
\end{center}
\end{figure}
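For $l=1$ the ratio in (\ref{(1.14)}) can be reproduced with a few lines of code (our illustration; the value $\beta_2=11/192$ is taken from Table \ref{table1}, Euler's constant is hard-coded, and the function names are assumptions):

```python
import math
from fractions import Fraction

EULER_GAMMA = 0.5772156649015329
BETA_2 = 11 / 192                  # beta_2, cf. Table 1

def landau_G(n):
    term, total = Fraction(1), Fraction(0)
    for k in range(n + 1):
        total += term
        term *= Fraction(2 * k + 1, 2 * k + 2) ** 2
    return float(total)

def ratio_l1(n):
    # epsilon_1(N) / (beta_2 / N^2) with N = n + 3/4, cf. (1.14)-(1.15)
    N = n + 0.75
    eps1 = math.pi * landau_G(n) - (math.log(N) + EULER_GAMMA + 4 * math.log(2))
    return eps1 / (BETA_2 / N ** 2)
```

The ratio stays strictly between $0$ and $1$ and approaches $1$ as $n$ grows, in agreement with Figure \ref{figure 1}.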
\subsection{Statement of results }
In the present paper, we will justify (\ref{(1.16)}). In fact, we will prove the following theorem.
\noindent
\begin{thm}\label{Thm 1}For $N=n+3/4$,
it holds
\begin{equation}\label{(1.17)}
(-1)^{l+1} \varepsilon_l(N) >0\end{equation}
for $l= 1,2,3,\cdots$ and $n=0,1,2,\cdots$, where $\varepsilon_l(N)$ is defined in (\ref{(1.15)}), the coefficients $\beta_{2s}$ are determined iteratively in (\ref{(2.4)}) below. \end{thm}
The above theorem has direct applications to both asymptotics and sharp bounds. From the asymptotic point of view, we can obtain error bounds which are in a sense optimal. To be precise, we have the following.
\noindent
\begin{thm}\label{Thm 2}The error due to truncation of (\ref{(1.12)})
is bounded in absolute value by, and of the same sign as, the first neglected term for all nonnegative $n$.
That is,
\begin{equation}\label{(1.18)}
0< (-1)^{l+1} \varepsilon_l(N) = \left | \varepsilon_l(N) \right | < \frac {\left | \beta_{2l}\right |} {N^{2l}} = \frac {(-1)^{l+1} \beta_{2l}} { N^{2l}}\end{equation}
for $l=1,2,\cdots$ and $n=0,1,2,\cdots$, where $N=n+3/4$. \end{thm}
The error bound in (\ref{(1.18)}) is the first neglected term in the asymptotic expansion, and hence is optimal and cannot be improved. The inequalities in (\ref{(1.18)}) can be derived from Theorem \ref{Thm 1} by noticing that $\varepsilon_l(N)= \beta_{2l}/N^{2l}+\varepsilon_{l+1}(N)$, as can be seen from (\ref{(1.15)}).
Another application of Theorem \ref{Thm 1} is the construction of sharp bounds up to arbitrary orders.
\noindent
\begin{thm}\label{Thm 3}For $N=n+3/4$,
it holds
\begin{equation}\label{(1.19)}
\ln N+\gamma+4\ln 2+\sum_{s=1}^{2m}\frac{\beta_{2s}}{N^{2s}}< \pi G_n < \ln N+\gamma+4\ln 2+\sum_{s=1}^{2k-1}\frac{\beta_{2s}}{N^{2s}}
\end{equation}
for all $n=0,1,2,\cdots$, $m=1,2,\cdots$, and $k=1,2,\cdots$.
\end{thm}
The inequalities in (\ref{(1.19)}) are understood as sharp bounds on both sides up to arbitrary orders.
In a sense, the bounds are optimal and cannot be improved.
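The lowest-order case of (\ref{(1.19)}), with $m=1$ and $k=1$ and the values of $\beta_2,\beta_4$ from Table \ref{table1}, is easy to check numerically (our illustration, not part of the proof):

```python
import math
from fractions import Fraction

EULER_GAMMA = 0.5772156649015329
BETA_2, BETA_4 = 11 / 192, -1541 / 122880   # from Table 1

def landau_G(n):
    term, total = Fraction(1), Fraction(0)
    for k in range(n + 1):
        total += term
        term *= Fraction(2 * k + 1, 2 * k + 2) ** 2
    return float(total)

def theorem3_m1_k1(n):
    # (1.19) with k = 1 (upper bound through beta_2) and m = 1 (lower through beta_4)
    N = n + 0.75
    main = math.log(N) + EULER_GAMMA + 4 * math.log(2)
    upper = main + BETA_2 / N ** 2
    lower = upper + BETA_4 / N ** 4
    return lower < math.pi * landau_G(n) < upper
```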
The first few
coefficients $\beta_{2s}$ are listed in Table \ref{table1}, as can be evaluated via (\ref{(2.4)}).
\noindent
\begin{table}[h]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
\hline
$\beta_2$ & $\beta_4$ & $\beta_6$ & $\beta_8$ & $\beta_{10} $ &$\beta_{12}$ & $\beta_{14}$ \\[0.1cm]
$\frac {11}{192}$ & $\frac{-1541}{122880}$ &$\frac{63433}{8257536}$&$\frac {-9199901}{1006632960}$&$\frac { 317959723}{17716740096}$&$\frac {-14849190321163}{281406257233920}$&$\frac {717209117969}{3298534883328}$ \\[0.1cm]
\hline
\end{tabular}
\caption{The first few $\beta_{2s}$, $s=1,2,\cdots, 7$.}\label{table1}
\end{table}
\section { Proof of Theorem \ref{Thm 1} } \indent\setcounter{section} {2}
\setcounter{equation} {0} \label{sec:2}
The proof is based on the difference equation (\ref{(1.13)}) and an approximation of the coefficients $\beta_{2s}$. To justify Theorem \ref{Thm 1}, several lemmas are stated; all but one are proved in the present section, while the validity of Lemma \ref{lem 2.2} is established in the next section.
\subsection{The coefficients $\beta_s$ in (\ref{(1.12)})}
Write the difference equation (\ref{(1.13)}) in the symmetrical form
\begin{equation}\label{(2.1)}
\left (1+\frac 1 {4N}\right )^2 w(N+1) -\left (2 +\frac 1 {8N^2}\right ) w(N)+\left (1-\frac 1 {4N}\right )^2 w(N-1)=0,
\end{equation}in which $N=n+\frac 3 4$ for $n=0,1,2,\cdots$. As mentioned earlier and as in the previous paper \cite{LLXZ}, the Landau constants $w(N)=G_n$ solve (\ref{(2.1)}), having an asymptotic expansion
\begin{equation}\label{(2.2)} \pi G_n\sim \ln N+ \gamma+4\ln 2 +\sum^\infty_{s=1} \frac {\beta_s} {N^s},\end{equation} and the coefficients are determined by a formal substitution of (\ref{(2.2)}) into (\ref{(2.1)}); see Wong and Li \cite{WongLi1992a}. The following result then follows:
\noindent
\begin{lem}\label{lem 2.1}For $N=n+\frac 3 4$, the coefficients $\beta_s$ in expansion (\ref{(2.2)})
fulfill
\begin{equation}\label{(2.3)}
\beta_{2k+1}=0,~~k=0,1,2,\cdots,
\end{equation}and
\begin{equation}\label{(2.4)}
\beta_{2k}=\frac {-1} {4k^2} \left( d_{k-1, k+1} \beta_{2k-2} + d_{k-2, k+1} \beta_{2k-4}+\cdots + d_{1, k+1} \beta_2 -d_{0, k+1}\right ),~~k=1,2,\cdots,
\end{equation} where $d_{j, j+1}=
4j^2$ for $j=1,2,\cdots$,
\begin{equation}\label{(2.5)}
d_{j, s}= \frac { (2s+2j-2) \;(2s-2)! }{(2s-2j)!\; (2j-1)!} + \frac { (2s-3)!}{8(2s-2j-2)!\; (2j-1)!}~~\mbox{for} ~~ s\geq j+2,
\end{equation}and
\begin{equation}\label{(2.6)}
d_{0, s}= \left (\frac 1 s-\frac 1 {2s-1} \right ) +\frac 1 {16(s-1)},~~s=2,3,\cdots.
\end{equation}
In addition, it holds
\begin{equation}\label{(2.7)}
\beta_{2k}=(-1)^{k+1}\left | \beta_{2k}\right |,~~k=1,2,\cdots.
\end{equation}
\end{lem}\vskip .5cm
Part of this lemma ((\ref{(2.3)}) and (\ref{(2.7)})) has been proved in Nemes' recent paper \cite{Nemes}. Part of it, namely (\ref{(2.3)}) and an equivalent form of (\ref{(2.4)}), has been proved in our earlier paper \cite{LLXZ}. Following Wong and Li \cite{WongLi1992a}, (\ref{(2.3)}) and (\ref{(2.4)}) can be justified by substituting (\ref{(2.2)}) into
(\ref{(2.1)}), expanding both sides in formal power series of $1/N$, and equating the coefficients of like powers.
It is readily seen that all $d_{j, s}>0$ for $j\geq 1$ and $s\geq j+1$, and $d_{0, s}>0$ for $s\geq 2$.
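The recursion (\ref{(2.4)})--(\ref{(2.6)}) is straightforward to implement in exact arithmetic; the sketch below (our illustration, with assumed helper names) reproduces the values listed in Table \ref{table1} and the sign pattern (\ref{(2.7)}):

```python
from fractions import Fraction
from math import factorial

def d(j, s):
    # d_{j,s} from (2.5) for j >= 1, s >= j + 2, and d_{0,s} from (2.6)
    if j == 0:
        return Fraction(1, s) - Fraction(1, 2 * s - 1) + Fraction(1, 16 * (s - 1))
    return (Fraction((2 * s + 2 * j - 2) * factorial(2 * s - 2),
                     factorial(2 * s - 2 * j) * factorial(2 * j - 1)) +
            Fraction(factorial(2 * s - 3),
                     8 * factorial(2 * s - 2 * j - 2) * factorial(2 * j - 1)))

def beta(kmax):
    # beta_{2k}, k = 1..kmax, from the recursion (2.4) with s = k + 1
    b = {}
    for k in range(1, kmax + 1):
        acc = sum(d(j, k + 1) * b[2 * j] for j in range(1, k)) - d(0, k + 1)
        b[2 * k] = Fraction(-1, 4 * k * k) * acc
    return b
```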
\subsection{Analysis of $R_l(N)$}
Here,
\begin{equation}\label{(2.8)}
R_l(N)= \left (1+\frac 1 {4N}\right )^2\varepsilon_l(N+1) -\left (2 +\frac 1 {8N^2}\right ) \varepsilon_l(N)+\left (1-\frac 1 {4N}\right )^2\varepsilon_l(N-1),
\end{equation}with the error term $\varepsilon_l(N)$ being given in (\ref{(1.15)}), and $N=n+3/4$.
There are several facts worth mentioning. It is readily seen from \eqref{(1.15)} that $\pi G_n-(\gamma+4\ln2)$ satisfies the difference equation \eqref{(2.1)}, and can then be removed from $R_l(N)$ in \eqref{(2.8)}. If we write $x=1/N$, then the logarithmic singularity at $x=0$ is also cancelled in $R_l(N)$. Therefore,
each $R_l(N)$ is an analytic function in $x$ for $|x|<1$. Hence the asymptotic expansion for $R_l(N)$, in descending powers of $N$, is actually a convergent Taylor expansion in $x$,
\begin{equation}\label{(2.9)}
R_l(N)= \sum_{s=l+1}^\infty r_{l,s} x^{2s}, ~~ |x|<1,
\end{equation}where
\begin{equation}\label{(2.10)}
r_{l,s} = -\left ( d_{l-1, s} \beta_{2l-2} + d_{l-2, s} \beta_{2l-4}+\cdots + d_{1, s} \beta_2 -d_{0, s}\right )
\end{equation}for $s\geq l+1$, with the leading coefficient $r_{l,l+1}=4l^2\beta_{2l}$; cf. (\ref{(2.4)}).
For later use, we estimate the ratio of the consecutive coefficients $\beta_{2s}$. To this end, we
introduce a sequence of positive constants
\begin{equation}\label{(2.11)}\rho_0=1, ~~\mbox{and}~\rho_l=\frac {(-1)^{l+1} \beta_{2l}} {(2l-1)!},~~l=1,2,\cdots .\end{equation} We shall use the following lemma and leave its proof to Section \ref{sec:3} below.
\noindent\begin{lem}\label{lem 2.2} It holds
\begin{equation}\label{(2.12)}
\rho_k/\rho_{k+1}\leq \frac {44} 9 \pi^2
\end{equation} for $k=0,1,2,\cdots$.
\end{lem}
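Lemma \ref{lem 2.2} can be probed numerically for small $k$ (our illustration; the $\beta_{2k}$ are generated from the recursion (\ref{(2.4)})--(\ref{(2.6)})). The first few ratios $\rho_k/\rho_{k+1}$ lie well below the stated bound $\frac{44}{9}\pi^2\approx 48.26$:

```python
from fractions import Fraction
from math import factorial, pi

def d(j, s):
    # coefficients (2.5)-(2.6)
    if j == 0:
        return Fraction(1, s) - Fraction(1, 2 * s - 1) + Fraction(1, 16 * (s - 1))
    return (Fraction((2 * s + 2 * j - 2) * factorial(2 * s - 2),
                     factorial(2 * s - 2 * j) * factorial(2 * j - 1)) +
            Fraction(factorial(2 * s - 3),
                     8 * factorial(2 * s - 2 * j - 2) * factorial(2 * j - 1)))

def rho(kmax):
    # rho_0 = 1 and rho_k = (-1)^{k+1} beta_{2k} / (2k-1)!, cf. (2.11)
    b, r = {}, [Fraction(1)]
    for k in range(1, kmax + 1):
        acc = sum(d(j, k + 1) * b[2 * j] for j in range(1, k)) - d(0, k + 1)
        b[2 * k] = Fraction(-1, 4 * k * k) * acc
        r.append((-1) ** (k + 1) * b[2 * k] / factorial(2 * k - 1))
    return r
```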
Now
we proceed to analyze $R_l(N)$ (sometimes denoted by $R_l(x)$, understanding that $x=1/N$), so as to show that $R_l(x)/\beta_{2l} \geq 0$ for $x\in [0, 1)$. More precisely, we prove a much stronger result, as follows:
\noindent
\begin{lem}\label{lem 2.3} For $l=1,2,\cdots$ and $N=n+3/4$ with $n=1,2,\cdots $, we have
\begin{equation}\label{(2.13)}
(-1)^{l+1} r_{l,s} >0, ~~ s\geq l+1 ,
\end{equation}
where $r_{l,s}$ are given in (\ref{(2.9)}) and (\ref{(2.10)}).
\end{lem}
\noindent
{\bf{Proof:}}
The lemma can be proved by using induction with respect to $l$. Initially, we have
\begin{equation*}\label{(2.14)}
R_1(N)= \sum_{s=2}^\infty d_{0,s} x^{2s}.
\end{equation*}Since $d_{0, s} >0$ for $s=2,3,\cdots$; cf. (\ref{(2.6)}), we see that (\ref{(2.13)}) holds for $l=1$.
In view of the fact that $\beta_2=\frac {11}{192}$; cf. Table \ref{table1}, it is readily verified that
\begin{equation*}\label{(2.15)}
R_2(N)=\sum_{s=3}^\infty r_{2,s} x^{2s} =\sum_{s=3}^\infty \left ( -\frac {11}{192} d_{1, s}+d_{0,s}\right ) x^{2s},
\end{equation*}with all coefficients $r_{2,s}$ being negative. Indeed, in view of (\ref{(2.5)}) and (\ref{(2.6)}), we have
$$ r_{2,s} =-\frac {11}{192}\left (2s+\frac {2s-3} 8\right ) +\left [ \left (\frac 1 s-\frac 1 {2s-1}\right )+\frac 1 {16(s-1)}\right ] <-\frac {11}{192} \cdot (2s)+\frac 1 s <0$$
for $s\geq 3$.
Thus (\ref{(2.13)}) holds for $l=2$.
Similarly, we can verify (\ref{(2.13)}) for $l=3$, recalling that $\beta_4=-\frac {1541}{122880}$; cf. Table \ref{table1}.
Assume that for $l\leq k$, it holds $(-1)^{l+1} r_{l,s} >0$ for $s\geq l+1$.
From (\ref{(2.10)}), we can write
\begin{equation}\label{(2.16)}(-1)^{k+3} r_{k+2,s} =(-1)^{k+1}r_{k, s} +(-1)^k \left (d_{k+1, s} \beta_{2k+2}+d_{k,s}\beta_{2k}\right ) \end{equation}for $s\geq k+3 $.
To show that (\ref{(2.13)}) is valid for $l=k+2$, it suffices to show that
\begin{equation*}\label{(2.17)}(-1)^k \left (d_{k+1, s} \beta_{2k+2}+d_{k,s}\beta_{2k}\right )>0\end{equation*} for $s\geq k+4 $, since the validity of (\ref{(2.13)}) for $(l,s)=(k+2, k+3)$ is trivial. This is equivalent to showing that
\begin{equation}\label{(2.18)}c_{k+1, s}-c_{k, s} \rho_k/\rho_{k+1} = \frac { (-1)^k \left (d_{k+1, s} \beta_{2k+2}+d_{k,s}\beta_{2k}\right ) } {8(s-1)^2\; (2s-3)!\; \rho_{k+1}} >0,\end{equation}
where $\rho_k>0$ is defined in \eqref{(2.11)}, and $c_{k,s}=\frac {(2k-1)! d_{k,s}}{8(s-1)^2 (2s-3)!}$; cf. (\ref{(3.2)}) below. In view of (\ref{(2.5)}), we may write
$$
c_{k,s}=\left\{\frac 1 {2(2s-2k)!}+\frac k {2(s-1) (2s-2k)!} \right\} +\left\{ \frac 1 {64(s-1)^2(2s-2k-2)!}\right\}:=A+B.$$ For $k\geq 1$ and $s-k\geq 4$, we have
$$c_{k+1,s}\geq (2s-2k-1)(2s-2k) A+(2s-2k-3)(2s-2k-2)B\geq 56A+30B >\frac {478} 9c_{k,s}.$$
The last inequality holds since $A>8B$.
From (\ref{(2.12)}) in Lemma \ref{lem 2.2}, it is readily verified that
$$
\frac {478} 9 > \frac {44} 9 \pi^2\geq \frac {\rho_k}{\rho_{k+1}}$$ for $k\geq 1$. Then (\ref{(2.18)}) holds for $s\geq k+4 $, and it follows that $(-1)^k \left (d_{k+1, s} \beta_{2k+2}+d_{k,s}\beta_{2k}\right )>0$ for $s\geq k+4 $.
Accordingly, from (\ref{(2.16)}) we see that (\ref{(2.13)}) holds for $l=k+2$. This completes the proof of Lemma \ref{lem 2.3}.
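The two elementary estimates used above, namely $A>8B$ and $56A+30B>\frac{478}{9}(A+B)$ (the latter follows from the former, since $56A+30B-\frac{478}{9}(A+B)=\frac{26}{9}(A-8B)$), can be spot-checked in exact arithmetic; the sketch below is our illustration, with assumed names:

```python
from fractions import Fraction
from math import factorial

def A_and_B(k, s):
    # the split c_{k,s} = A + B used in the proof of Lemma 2.3
    A = (Fraction(1, 2 * factorial(2 * s - 2 * k)) +
         Fraction(k, 2 * (s - 1) * factorial(2 * s - 2 * k)))
    B = Fraction(1, 64 * (s - 1) ** 2 * factorial(2 * s - 2 * k - 2))
    return A, B
```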
\subsection{Proof of Theorem \ref{Thm 1} }
Now Lemma \ref{lem 2.3} implies that $(-1)^{l+1} R_l(N) >0$ for all $l$ and all $N=n+3/4$.
To show that $\tilde\varepsilon_l(N):=(-1)^{l+1} \varepsilon_l (N)>0$ for all $N$, we note first that $\tilde\varepsilon_l(N)= \frac {|\beta_{2l}|}{N^{2l}}\left\{1+O\left (\frac 1 {N^2}\right )\right \}$ as $N\rightarrow +\infty$. Hence
$\tilde\varepsilon_l (N)>0$ for $N$ large enough. Now assume that (\ref{(1.17)}) is not true. Then there exists a finite $M$ defined as
$$M=\max\{N= n+3/4: ~n\in \mathbb{Z}~\mbox{and}~\tilde\varepsilon_l(N)\leq 0\},$$
so that
$M-3/4$ is a nonnegative integer and $\tilde\varepsilon_l(M)\leq 0$, while $\tilde\varepsilon_l(M+1), \tilde\varepsilon_l(M+2), \cdots >0$.
For simplicity, we denote $a_N=(1+\frac 1 {4N} )^2$ and $b_N=(1-\frac 1 {4N} )^2$. From (\ref{(2.8)}) we have
\begin{equation*}\label{(2.19)}
a_{M+1}\tilde\varepsilon_l(M+2)=(a_{M+1}+b_{M+1}) \tilde\varepsilon_l(M+1) + b_{M+1} \left (- \tilde\varepsilon_l(M)\right ) + (-1)^{l+1} R_l(M+1).
\end{equation*}The last two terms on the right-hand side are non-negative; hence we obtain
\begin{equation*}\label{(2.20)}
a_{M+1}\tilde\varepsilon_l(M+2)\geq \left (a_{M+1}+b_{M+1}\right ) \tilde\varepsilon_l(M+1) ,
\end{equation*}which implies that
\begin{equation}\label{(2.21)}
\tilde\varepsilon_l(M+2)> \tilde\varepsilon_l(M+1).
\end{equation}
Using (\ref{(2.8)}) again for $N=M+2$, we have
\begin{equation*}\label{(2.22)}
a_{M+2}\tilde\varepsilon_l(M+3)\geq (a_{M+2}+b_{M+2}) \tilde\varepsilon_l(M+2) - b_{M+2} \tilde\varepsilon_l(M+1) .
\end{equation*}
A combination of the previous two inequalities gives
\begin{equation}\label{(2.23)}
\tilde\varepsilon_l(M+3)> \tilde\varepsilon_l(M+2).
\end{equation}
In general, we obtain
\begin{equation}\label{(2.24)}
\tilde\varepsilon_l(M+k+1)> \tilde\varepsilon_l(M+k)
\end{equation}by induction. From the inequalities in (\ref{(2.21)}), (\ref{(2.23)}) and (\ref{(2.24)}), we conclude that
\begin{equation}\label{(2.25)}
\tilde\varepsilon_l(M+k)> \tilde\varepsilon_l(M+1)
\end{equation} for all $k\geq 2$. Recalling that $\tilde\varepsilon_l(N)= O(N^{-2l})$ for $N\rightarrow\infty$, letting $k\rightarrow \infty$ will give $\tilde\varepsilon_l(M+1) \leq 0$. This contradicts the definition of $M$. Thus we have proved Theorem \ref{Thm 1}.
\section { Proof of Lemma \ref{lem 2.2}} \indent\setcounter{section} {3}\setcounter{subsection} {0}
\setcounter{equation} {0} \label{sec:3}
The idea is simple: to approximate the coefficients $\beta_{2s}$, and then to work out the ratio $\beta_{2s}/\beta_{2s+2}$. Yet the procedure is complicated.
A brief outline of the proof is as follows: In Section \ref{sec:3.1}, we bring in an ordinary differential equation \eqref{(3.11)} with a specific analytic solution $v(z)$, of which $\rho_k=\frac {(-1)^{k+1} \beta_{2k}} {(2k-1)!}$ are coefficients of the Maclaurin expansion. The function $v(z)$ is then extended, in Section \ref{sec:3.2}, and via the hypergeometric functions, to a function analytic in the cut-strip $\{z ~|~ -4\pi<\mathop{\rm Re}\nolimits z< 4\pi, ~z\not \in \{(-\infty, -2\pi]\cup [2\pi, +\infty)\} \}$.
An integral representation is then obtained by using the Cauchy integral formula, and the integration path is deformed based on the analytic continuation procedure. In Section \ref{sec:3.3}, the integral is split, approximated, and estimated, and hence bounds for $\rho_k$ on both sides are established in \eqref{(3.36)} for all $k\geq 10$. Eventually, in Section \ref{sec:3.4}, an upper bound for $\rho_k/\rho_{k+1}$ is obtained for all non-negative integers $k$.
\subsection{Differential equation}\label{sec:3.1}
In terms of
$\rho_s$ defined in (\ref{(2.11)}), namely,
$\rho_0=1$ and $\rho_l=\frac {(-1)^{l+1} \beta_{2l}} {(2l-1)!}$ for $l=1,2,\cdots$,
formula (\ref{(2.4)}) can be written as
\begin{equation}\label{(3.1)}c_{l,l+1}
\rho_l -c_{l-1,l+1} \rho_{l-1} +\cdots + (-1)^{k} c_{l-k, l+1} \rho_{l-k} +\cdots +
(-1)^{l-1} c_{1,l+1} \rho_1 +(-1)^l c_{0,l+1}\rho_0 =0 \end{equation} for $l=1,2,\cdots$,
where $c_{l,l+1}= \frac 1 {2!}$, and
\begin{equation}\label{(3.2)}
c_{l-k,l+1}=\frac {(2l-2k-1)!}{(2l-1)!}\frac {d_{l-k, l+1}}{2d_{l, l+1}}= \frac {1}{2(2k+2)!} +\frac {l-k}{2l (2k+2)!}+\frac 1 {64 l^2 \cdot (2k)!}\end{equation} for $k=1,2,\cdots, l-1$ and $l=1,2,\cdots$. It can be verified from (\ref{(2.6)}) that $c_{0, l+1}=\frac {d_{0, l+1}}{8l^2(2l-1)!}$ also takes the same form, that is, (\ref{(3.2)}) is also valid for $k=l$, $l=1,2,\cdots$.
The idea now is to approximate $\rho_s$, and then to estimate the ratio $\rho_{s}/\rho_{s+1}$.
Taking $l=1,2,\cdots$ in (\ref{(3.1)}), we have
$$
\begin{array}{l}
a_1:=c_{1,2}\rho_1-c_{0,2}\rho_0=0, \\[0.2cm]
a_2:=c_{2,3}\rho_2-c_{1,3}\rho_1+c_{0,3}\rho_0=0,\\[0.2cm]
a_3:=c_{3,4}\rho_3 - c_{2,4}\rho_2+c_{1,4}\rho_1-c_{0,4}\rho_0=0,\\[0.2cm]
\cdots\cdots.
\end{array}$$
Summing up $\sum^\infty_{s=1} a_s x^{2s}$ gives
\begin{equation}\label{(3.3)}\rho_0\left (-c_{0,2} x^2+c_{0,3}x^4-c_{0,4} x^6+\cdots\right )+\sum^\infty_{k=1}
\rho_k x^{2k} \sum^\infty_{s=1} (-1)^{s-1} c_{k,k+s} x^{2s-2}=0.\end{equation}
In view of (\ref{(3.2)}), it is readily verified by summing up the series that
\begin{equation}\label{(3.4)}-c_{0,2} x^2+c_{0,3}x^4-c_{0,4} x^6+\cdots=-\frac 1 4 +\frac {h(x)}{2} - \int^x_0 \frac {dt_1} {t_1}\int^{t_1}_0 \frac {t h(t) dt}{16}, \end{equation}
where \begin{equation*}\label{(3.5)}h(x):=\frac {1-\cos x} {x^2}.\end{equation*}
Also we have, for $k=1,2,\cdots$,
\begin{equation}\label{(3.6)}\sum^\infty_{s=1} (-1)^{s-1} c_{k,k+s} x^{2s-2} =\frac { h(x)} 2+\frac k {x^{2k}} \int^x_0 t^{2k-1} h(t) dt - \frac 1 { x^{2k}}\int^x_0\frac {dt_1}{t_1}\int^{t_1}_0\frac{t^{2k+1}h(t)dt }{16}.\end{equation}
Substituting (\ref{(3.4)}) and (\ref{(3.6)}) into (\ref{(3.3)}), we obtain an equation
\begin{equation}\label{(3.7)}-\frac 1 4 +\frac 1 2 h(x)u(x)+\int^x_0\frac 1 2 h(t) u'(t) dt -\int^x_0\frac {dt_1}{t_1}\int^{t_1}_0\frac t{16} h(t) u(t) dt=0,\end{equation}
where \begin{equation}\label{(3.8)}u(x):=\sum^\infty_{k=0}\rho_k x^{2k}.\end{equation}
\noindent
\begin{rem}\label{rem 1}
The existence of $u(x)$ defined above and the validity of (\ref{(3.3)}) can be justified by showing that $ \left |\rho_k\right |\leq M_0/\delta^{2k}$ for all positive integers $k$, where $M_0$ and $\delta$ are positive constants.
Indeed, from (\ref{(3.2)}) it is readily seen that $\left |c_{l-k, l+1} \right |\leq \frac 1 {(2k)!}$ for $k=1,2,\cdots , l$. Now we assume that $ \left |\rho_k\right |\leq M_0/\delta^{2k}$ for $k<l$, where $\delta$ is small enough such that $2(\cosh \delta -1) <1$. Then, by using (\ref{(3.1)}) we have
$$\frac 1 2 |\rho_l| \leq \sum^l_{k=1} \left |c_{l-k, l+1} \right | |\rho_{l-k}| \leq \sum^l_{k=1} \frac {M_0\delta^{2k-2l}} {(2k)!} \leq \frac {M_0}{\delta^{2l}} \left (\cosh\delta -1\right ).$$
Hence we have $ \left |\rho_l \right |\leq M_0/\delta^{2l}$ by induction.
\end{rem}
Applying the operator $\displaystyle{\frac d {dx} \left ( x \frac d {dx} \right )}$ to both sides of (\ref{(3.7)}), we see that $u(x)$ solves the second order differential equation
\begin{equation}\label{(3.9)}\left [ x\left (\frac 1 2 h'(x) u(x)+h(x) u'(x)\right )\right ]' -\frac {xh(x) u(x)}{16} =0 \end{equation}
in a neighborhood of $x=0$, with initial conditions $u(0)=1$ and $u'(0)=0$.
In the next few steps we derive a representation of $u(x)$ for later use. First, substituting
\begin{equation}\label{(3.10)}v(x)=\sqrt{h(x)} u(x)=\frac {\sqrt 2 \sin\frac x 2} x u(x)\end{equation}
into equation (\ref{(3.9)}) yields
\begin{equation}\label{(3.11)}\sin\frac x 2 v''(x)+\frac 1 2 \cos\frac x 2 v'(x) -\frac 1 {16} \sin \frac x 2 v(x)=0 \end{equation}in a neighborhood of $x=0$, with $v(0)=1/\sqrt 2$ and $v'(0)=0$.
It is shown in Remark \ref{rem 1} that $u(x)$ is analytic at the origin. So is $v(x)$; cf. \eqref{(3.10)}. What is more, near $x=0$, the function $v(x)$ can be represented as a hypergeometric function. Indeed,
a change of variable
\begin{equation*}\label{(3.12)}t=\frac {1-\cos x} 2=\sin^2\frac x 2\end{equation*} turns the equation into the hypergeometric equation
\begin{equation}\label{(3.13)} t(1-t)\frac {d^2 v}{d t^2}+\left (1-\frac 3 2 t\right )\frac {dv}{dt}-\frac 1 {16} v=0.\end{equation} Taking the initial conditions into account,
it is easily verified that
\begin{equation}\label{(3.14)}v(x)= \frac 1 {\sqrt 2} F\left (\frac 14, \frac 1 4; 1; \sin^2\frac x 2\right ) =\frac 1 {\sqrt 2} F\left (\frac 12, \frac 1 2; 1; \sin^2\frac x 4\right ) ,\end{equation} initially at $x=0$, and then analytically extended elsewhere; cf. \cite[(15.2.1)]{NIST}. The second equality follows from a quadratic hypergeometric transformation; see \cite[(15.8.18)]{NIST}.
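The identity (\ref{(3.14)}), combined with (\ref{(3.10)}) and the series (\ref{(3.8)}), admits a direct numerical check (our illustration; the $\rho_k$ are generated from the recursion (\ref{(2.4)})--(\ref{(2.6)}), and the hypergeometric series is summed term by term):

```python
import math
from fractions import Fraction
from math import factorial

def d(j, s):
    # coefficients (2.5)-(2.6)
    if j == 0:
        return Fraction(1, s) - Fraction(1, 2 * s - 1) + Fraction(1, 16 * (s - 1))
    return (Fraction((2 * s + 2 * j - 2) * factorial(2 * s - 2),
                     factorial(2 * s - 2 * j) * factorial(2 * j - 1)) +
            Fraction(factorial(2 * s - 3),
                     8 * factorial(2 * s - 2 * j - 2) * factorial(2 * j - 1)))

def u(x, kmax=20):
    # u(x) = sum_k rho_k x^{2k} from (3.8), with rho_k as in (2.11)
    b, total = {}, 1.0
    for k in range(1, kmax + 1):
        acc = sum(d(j, k + 1) * b[2 * j] for j in range(1, k)) - d(0, k + 1)
        b[2 * k] = Fraction(-1, 4 * k * k) * acc
        rho_k = (-1) ** (k + 1) * b[2 * k] / factorial(2 * k - 1)
        total += float(rho_k) * x ** (2 * k)
    return total

def hyp2f1_half(t, nterms=80):
    # F(1/2, 1/2; 1; t) = sum_n ((1/2)_n / n!)^2 t^n, for |t| < 1
    term, total = 1.0, 0.0
    for n in range(nterms):
        total += term
        term *= ((n + 0.5) / (n + 1)) ** 2 * t
    return total

x = 1.0
lhs = math.sqrt(2) * math.sin(x / 2) / x * u(x)           # v(x) via (3.10)
rhs = hyp2f1_half(math.sin(x / 4) ** 2) / math.sqrt(2)    # v(x) via (3.14)
```

The two evaluations of $v(1)$ agree to machine precision, as the analytic continuation argument below requires.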
\subsection{Analytic continuation}\label{sec:3.2}
Well-known formulas for hypergeometric functions include
\begin{equation}\label{(3.15)}
F(a,b;c;t)=\frac {\Gamma(c)}{\Gamma(b)\Gamma(c-b)} \int^1_0 s^{b-1}(1-s)^{c-b-1}(1-st)^{-a} ds\end{equation} for $t\in \mathbb{C}\backslash [1, +\infty)$ as $\mathop{\rm Re}\nolimits c>\mathop{\rm Re}\nolimits b >0$; cf. \cite[(15.6.1)]{NIST}, which extends the hypergeometric function to a single-valued analytic function in the cut-plane.
Now we proceed to consider (\ref{(3.11)}) with complex variable $z$. It is worth noting that the solution $v(z)$ we seek is an even function. So we restrict ourselves to its analytic continuation on the right-half plane $\mathop{\rm Re}\nolimits z >0$. To this aim, we define
\begin{equation}\label{f-def} f(z) =F\left (\frac 1 2,\frac 1 2;1;\sin^2\frac z 4\right ) ~~\mbox{for}~\mathop{\rm Re}\nolimits z\in (-2\pi, 2\pi)\cup (2\pi, 6\pi).\end{equation}
We see that $t=\sin^2\frac z 4$ maps the strip $-2\pi< \mathop{\rm Re}\nolimits z <2\pi$ analytically, in a two-to-one manner, onto the cut-plane $t\in \mathbb{C}\backslash [1, +\infty)$. The same is true for $2\pi< \mathop{\rm Re}\nolimits z <6\pi$. Hence from (\ref{(3.14)}) we have the analytic continuation
$v(z)=f(z)$ for $-2\pi<\mathop{\rm Re}\nolimits z<2\pi$.
Moreover, from \eqref{f-def} it is readily seen that the function $f(z)$, defined and analytic in the disjoint strips, satisfies
\begin{equation*}\label{f-periodic}f(z)=f(z-4\pi)~~\mbox{for}~~\mathop{\rm Re}\nolimits z\in (2\pi, 6\pi).
\end{equation*}
Careful treatment is needed here, since there is a logarithmic singularity of $v(z)$ at the boundary point $z= 2\pi$, as we will see later, and as can be seen from the equation (\ref{(3.11)}): $z=2\pi$ is one of the nearest regular singularities, and the indicial equation has a double root $0$ there.
Next, we extend the domain of analyticity of $v(z)$ beyond the vertical line $\mathop{\rm Re}\nolimits z=2\pi$. To do so, we recall the jump along the branch cut of the hypergeometric function
\begin{equation}\label{(3.21)}F(a,b;c;x+i0)- F(a,b;c;x-i0) = \frac {2\pi i \Gamma(c)}{\Gamma(a) \Gamma(b)} \frac { (x-1)^{c-a-b}} {\Gamma(c-a-b+1)} F(c-a, c-b; c-a-b+1; 1-x)
\end{equation}
for $x>1$; see \cite[(15.2.2)-(15.2.3)]{NIST}.
Applying (\ref{(3.21)}) to $f(z)$ defined in \eqref{f-def}, a careful calculation yields
\begin{equation*}\label{f-jump-upper}f(2\pi -0+iy)-f(2\pi +0+iy)=2i f(iy )~~\mbox{for}~~y=\mathop{\rm Im}\nolimits z >0,
\end{equation*}
where use has been made of the fact that
$t=\sin^2 \frac {2\pi-0+iy} 4$, $y=\mathop{\rm Im}\nolimits z>0$ corresponds to the upper edge of $[1, \infty)$, while $t=\sin^2 \frac {2\pi+0+iy} 4=\sin^2 \frac {-2\pi+0+iy} 4$, $y=\mathop{\rm Im}\nolimits z>0$ corresponds to the lower edge, and that $t=\sin^2 \frac z 4$ maps the upper imaginary axis to $(-\infty, 0)$.
Similarly we have
\begin{equation*}\label{f-jump-lower}f(2\pi -0+iy)-f(2\pi +0+iy)=-2i f(iy )~~\mbox{for}~~y=\mathop{\rm Im}\nolimits z <0.
\end{equation*}
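Both jump relations can be checked numerically. The sketch below (an illustration under our own discretization choices, not part of the proof) evaluates $f$ through the Euler integral with the principal branch of the complex square root, slightly to the left and to the right of the cut at $\mathop{\rm Re}\nolimits z=2\pi$.

```python
import cmath, math

def f(z, m=40000):
    """f(z) = F(1/2,1/2;1;sin^2(z/4)) via the Euler integral with s = sin^2(theta):
    (2/pi) * int_0^{pi/2} (1 - t sin^2 theta)^{-1/2} dtheta, principal branch."""
    t = cmath.sin(z / 4) ** 2
    h = (math.pi / 2) / m
    total = 0.0
    for i in range(m + 1):
        th = i * h
        w = 1 if i in (0, m) else (4 if i % 2 else 2)
        total += w / cmath.sqrt(1.0 - t * math.sin(th) ** 2)
    return (2 / math.pi) * total * h / 3

delta, y = 0.01, 1.0  # small offset from the cut; the relation is exact only as delta -> 0
jump_up = f(2 * math.pi - delta + 1j * y) - f(2 * math.pi + delta + 1j * y)
jump_lo = f(2 * math.pi - delta - 1j * y) - f(2 * math.pi + delta - 1j * y)
print(jump_up, 2j * f(1j * y))   # nearly equal
print(jump_lo, -2j * f(1j * y))  # nearly equal, with the opposite sign
```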
Summarizing the above, we have an analytic function in the cut-strip $\{z ~|~ 0\leq \mathop{\rm Re}\nolimits z< 4\pi, ~z\not \in [2\pi, +\infty) \}$, defined as follows:
\begin{equation}\label{(3.25)}
v(z)=\left\{
\begin{array}{ll}
f(z), & 0\leq \mathop{\rm Re}\nolimits z<2\pi, \\[.1cm]
f(z) +2if(z-2\pi), & 2\pi<\mathop{\rm Re}\nolimits z <4\pi,~ \mathop{\rm Im}\nolimits z >0, \\[.1cm]
f(z) -2if(z-2\pi), & 2\pi<\mathop{\rm Re}\nolimits z <4\pi,~ \mathop{\rm Im}\nolimits z <0.
\end{array}\right .
\end{equation}The value at $\mathop{\rm Re}\nolimits z=2\pi$, $\mathop{\rm Im}\nolimits z\not=0$ is obtained by taking the limit.
Finally, we confirm that there is a logarithmic singularity at $z=2\pi$, a regular singularity of (\ref{(3.11)}). Indeed, following the derivation from (\ref{(3.11)}) to (\ref{(3.13)}), we see that every solution to (\ref{(3.11)}) takes the form
$$A w_1(z) +B \left \{ w_1(z)\ln \left (\sin^2\frac z 2\right ) + w_2(z)\right\}=A w_1(z) +B \left \{2 w_1(z)\ln \left (z -2\pi \right ) + \tilde w_2(z)\right\};$$ cf. Wong \cite[(2.1.24)]{Wong2010},
where $w_2(z)$ and $\tilde w_2(z)$ are specific single-valued analytic functions at $z=2\pi$, $A$ and $B$ are constants, and $w_1(z)= f(z-2\pi)$ is an analytic solution of (\ref{(3.11)}) at $z=2\pi$. The function $v(z)$ in (\ref{(3.25)}) is such a solution, and moreover one with $B=-\frac 1 \pi$, as can be seen by comparing the jumps along $(2\pi, 3\pi)$. Accordingly, we may write
\begin{equation}\label{(3.26)}
v(z)=v_A(z) -\frac 2 \pi f(z-2\pi)\ln \left (2\pi -z \right )
\end{equation} for $0< \mathop{\rm Re}\nolimits z <4\pi$, with $v_A(z)$ being analytic in the strip, and the branch of logarithm being chosen as $\arg (2\pi -z)\in (-\pi, \pi)$.
\subsection{Approximation of $\rho_s$}\label{sec:3.3}
Now we are in a position to approximate $\rho_s$.
To begin, we mention several known facts.
From (\ref{(3.25)}), and that $v(z)$ is even, we know that the function $v(z)$ is now extended analytically to the cut-strip $$\{z ~|~ -4\pi<\mathop{\rm Re}\nolimits z< 4\pi, ~z\not \in \{(-\infty, -2\pi]\cup [2\pi, +\infty)\} \}.$$
Again, from \eqref{f-def}, (\ref{(3.25)}) and (\ref{(3.26)}), and the fact that the hypergeometric function $F(a,a;c;t)$ behaves like $O(|t|^{-a}\ln |t|)$ at infinity (see \cite[(15.8.8)]{NIST}),
we conclude that
$$ v(z)=O\left ( e^{-\frac 1 4 |\mathop{\rm Im}\nolimits z|}\ln |\mathop{\rm Im}\nolimits z |\right )$$ as $\mathop{\rm Im}\nolimits z \rightarrow \infty$, holding uniformly in $|\mathop{\rm Re}\nolimits z |\leq 4\pi -\delta$ for arbitrary positive $\delta$.
Thus $v(z)$ decays exponentially at infinity in each such sub-strip.
Hence, we can derive from (\ref{(3.8)}) and (\ref{(3.10)}) that
\begin{equation}\label{(3.27)}\rho_k=\frac 1 {2\pi i} \oint z^{-2k-1} u(z)dz =\frac 1 {\sqrt 2\pi i} \oint \frac {\frac z 2}{\sin \frac z 2}\frac { v(z) dz} { z^{2k+1} }
=\frac 1 {\sqrt 2\pi i} \int_\Gamma \frac {\frac z 2}{\sin \frac z 2}\frac { v(z) dz} { z^{2k+1} },\end{equation}
where the integration path is initially a small circle encircling the origin, which is then deformed into the contour $\Gamma$ illustrated in Figure \ref{figure 2}.
\begin{figure}[h]
\begin{center}
\includegraphics[height=6cm]{figure2.eps}
\caption{The deformed contour $\Gamma$: the oriented curve.}
\label{figure 2}
\end{center}
\end{figure}
From the symmetry property of $v(z)$, we need only evaluate and estimate the integral over the right half of $\Gamma$. We split the integral in (\ref{(3.27)}) into three integrals, namely,
\begin{equation}\label{(3.28)}\frac {\pi i} {\sqrt 2}\rho_k= \int_{\Gamma_v} \frac {\frac z 2}{\sin \frac z 2}\frac { v(z) dz} { z^{2k+1} }
+ \int_{\Gamma_l} \frac {\frac z 2}{\sin \frac z 2}\frac { v_A(z) dz} { z^{2k+1} }- \int_{\Gamma_l} \frac {\frac z \pi f(z-2\pi)\ln (2\pi-z) dz}{\sin \frac z 2 ~ z^{2k+1} }:=I_v+I_a+I_l ,
\end{equation} where $\Gamma_v$ is the right-hand vertical part $\mathop{\rm Re}\nolimits z=3\pi$, and $\Gamma_l$ is the remaining right-hand portion of $\Gamma$, consisting of a circular arc around $z=2\pi$ and a pair of horizontal line segments joining the circle with the vertical line; see Figure \ref{figure 2} for the curves and their orientation.
We will show that the main contribution to $\rho_k$, when $k$ is not small, comes from $I_l$. Estimates will be obtained in full detail. We do the calculation case by case.
\vskip .5cm
\noindent
{\bf{Estimating $I_v$}}:
First, we estimate $v(z)$ for $\mathop{\rm Re}\nolimits z=3\pi$. It is readily seen that $t=\sin^2\frac {3\pi+iy} 4=\frac {1-i\sinh \frac y 2} 2$ and $\sin^2\frac {\pi+iy} 4=\frac {1+i\sinh \frac y 2} 2$, where $y=\mathop{\rm Im}\nolimits z$. Hence $$\arg \left\{(1-st)^{-\frac 1 2}\right \}\in (- \pi/ 2, 0)~\mbox{for}~y\in (0, \infty),~~\mbox{and}~\left | (1-st)^{-\frac 1 2}\right |\leq \left (1-\frac s 2\right )^{-\frac 1 2}~\mbox{for}~0\leq s \leq 1.$$ Accordingly, from (\ref{(3.15)}) and \eqref{f-def} we have
$$ \left | f(3\pi+iy)\right | \leq \frac 1 \pi \int^1_0 s^{-\frac 1 2} (1-s)^{-\frac 1 2} \left (1-s/2\right )^{-\frac 1 2} ds:=C_{f,v}=1.1803\cdots ~~\mbox{for}~~y> 0. $$ Noting that $\arg f(3\pi +iy) \in (-\pi/2, 0)$ for $y>0$, and that $f(\pi +iy)=\overline{f(3\pi +iy)}$,
and substituting all of the above into (\ref{(3.25)}), we obtain
$$|v( 3\pi +iy)| \leq \sqrt 5 \left | f(3\pi+iy)\right |\leq M_v:=\sqrt 5\; C_{f,v} =2.6393\cdots ,~~y>0.$$ A similar argument shows that the same bound holds for all $y\in \mathbb{R}$.
Hence, we conclude from (\ref{(3.25)})
and (\ref{(3.28)}) that
\begin{equation}\label{(3.29)}
|I_v |\leq \frac {1} 2 \int^\infty_{-\infty} \frac {|v( 3\pi+iy)|\; dy}{\cosh\frac y 2 ~ | 3\pi+iy|^{2k}}\leq \frac {M_v} {2} \frac 1 {(3\pi)^{2k}}\int^\infty_{-\infty} \frac {dy} {\cosh\frac y 2}
=\frac { \pi M_v} {(3\pi)^{2k}}=\frac { 8.2916\cdots } {(3\pi)^{2k}}.
\end{equation}
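The numerical constants appearing here are easy to reproduce. The sketch below (ours, not used in the text) evaluates $C_{f,v}=F(1/2,1/2;1;1/2)$ through the classical identity $F(1/2,1/2;1;k^2)=\frac 2\pi K(k)$ and the arithmetic-geometric mean, $K(k)=\frac \pi {2\,\mathrm{AGM}(1,\sqrt{1-k^2})}$.

```python
import math

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean of a and b."""
    while abs(a - b) > tol * a:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return (a + b) / 2

def K(k):
    """Complete elliptic integral of the first kind, via the AGM."""
    return math.pi / (2 * agm(1.0, math.sqrt(1.0 - k * k)))

# C_{f,v} = F(1/2,1/2;1;1/2) = (2/pi) K(1/sqrt(2)) = sqrt(pi)/Gamma(3/4)^2
C_fv = (2 / math.pi) * K(1 / math.sqrt(2))
M_v = math.sqrt(5) * C_fv
print(C_fv, M_v, math.pi * M_v)  # 1.1803..., 2.6393..., 8.2916...
```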
\vskip .5cm
\noindent
{\bf{Evaluating $I_a$}}:
This time, $v_A(z)$ is analytic in a domain containing $\Gamma_l$. It is readily seen that only the simple pole $z=2\pi$ contributes to $I_a$. More precisely, applying the residue theorem we have
$$ I_a =\frac {2\pi i v_A(2\pi)} {(2\pi)^{2k}}.$$
Here
$v_A(2\pi)$ can be obtained by substituting $z=2\pi-\varepsilon$ into (\ref{(3.25)}) and (\ref{(3.26)}), and taking the limit
$$ v_A(2\pi)=\lim_{\varepsilon\rightarrow 0^+}\left\{ \frac 1 {\pi} \int^1_0 s^{-\frac 1 2} (1-s)^{-\frac 1 2} (1-st)^{-\frac 1 2} ds + \frac 2 \pi f(-\varepsilon) \ln\varepsilon\right \},$$where $t=\sin^2\frac z 4=\cos^2\frac \varepsilon 4$; cf. (\ref{(3.15)}) and \eqref{f-def}.
A careful calculation leads to \begin{equation}\label{(3.30)}
v_A(2\pi) =\frac 1 \pi\int^1_0 \frac {s^{-\frac 12 }-1}{1-s } ds + \frac 1 \pi\int^1_0 \frac {s^{-\frac 1 2 }-1}{1-s } ds+\frac 4 \pi \ln 2= \frac 8 \pi \ln 2 .
\end{equation}
Indeed, (\ref{(3.30)}) can be derived as follows:
$$\frac 1 {\pi} \int^1_0\left ( s^{-\frac 1 2} -1\right )(1-s)^{-\frac 1 2} (1-st)^{-\frac 1 2} ds \longrightarrow \frac 1 \pi\int^1_0 \frac {s^{-\frac 12 }-1}{1-s } ds ~~\mbox{as}~~t\rightarrow 1^-$$ by applying the dominated convergence theorem. Also,
$$\frac 1 {\pi} \int^1_0 (1-s)^{-\frac 1 2} (1-st)^{-\frac 1 2} ds = \frac 1 { \pi} \int^1_0 \frac {\tau ^{-\frac 1 2 } }{1-t\tau } d\tau $$ by making the change of variable $\tau=\frac {1-s}{1-ts}$. Finally,
$$\frac 1 {\pi} \int^1_0 \left\{s ^{-\frac 1 2 }-1 \right \} \frac 1 {1-ts } ds \longrightarrow \frac 1 { \pi} \int^1_0 \frac {s^{-\frac 1 2 }-1}{1-s } ds~~\mbox{as}~~t\rightarrow 1^- $$ by applying the dominated convergence theorem one more time. For $t<1$, it further holds $$\frac 1 {\pi} \int^1_0 \frac { 1 }{1-ts } ds=-\frac 1{\pi t} \ln (1-t).$$
Adding up all these gives (\ref{(3.30)}).
Thus we have
\begin{equation}\label{(3.31)}
I_a = \frac {16\ln 2} {(2\pi)^{2k}}i = \frac {11.0903\cdots} {(2\pi)^{2k}}i .
\end{equation}
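The limit defining $v_A(2\pi)$ can be verified numerically. The sketch below (ours) again uses the identity $F(1/2,1/2;1;k^2)=\frac 2\pi K(k)$, i.e. $F(1/2,1/2;1;t)=1/\mathrm{AGM}(1,\sqrt{1-t})$, with $K$ computed exactly by the AGM, and takes $\varepsilon=10^{-6}$ (our choice).

```python
import math

def agm(a, b, tol=1e-15):
    while abs(a - b) > tol * a:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return (a + b) / 2

eps = 1e-6
# F(1/2,1/2;1;cos^2(eps/4)) = 1/AGM(1, sin(eps/4)); the complementary modulus is
# computed directly as sin(eps/4) to avoid cancellation in 1 - cos^2.
F1 = 1.0 / agm(1.0, math.sin(eps / 4))
f_eps = 1.0 / agm(1.0, math.cos(eps / 4))      # f(-eps) = F(1/2,1/2;1;sin^2(eps/4))
vA = F1 + (2 / math.pi) * f_eps * math.log(eps)
print(vA, 8 * math.log(2) / math.pi)           # both ~ 1.765
print(16 * math.log(2))                        # the constant in (3.31), ~ 11.0903
```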
\vskip .5cm
\noindent
{\bf{Evaluating $I_l$}}:
Now we turn to the last integral $I_l$ in (\ref{(3.28)}). First, we observe that $\displaystyle{\frac {f(z-2\pi)}{\sin\frac {z-2\pi} 2}- \frac {f(0)}{ \frac {z-2\pi} 2} } $ is analytic in the strip $\mathop{\rm Re}\nolimits z\in (0, 4\pi)$. Hence the circular part of $\Gamma_l$ collapses in the following integral to give
$$\frac 1 \pi \int_{\Gamma_l} \left\{ \frac {f(z-2\pi)}{\sin\frac {z-2\pi} 2}- \frac {f(0)}{ \frac {z-2\pi} 2}\right \} \frac {\ln (2\pi -z)}{z^{2k}} dz= -2i \int_{2\pi}^{3\pi} \left\{ \frac {f(z-2\pi)}{\sin\frac {z-2\pi} 2}- \frac {f(0)}{ \frac {z-2\pi} 2}\right \} \frac {dz}{z^{2k}} . $$
Accordingly we have
$$\left | \frac 1 \pi \int_{\Gamma_l} \left\{ \frac {f(z-2\pi)}{\sin\frac {z-2\pi} 2}- \frac {f(0)}{ \frac {z-2\pi} 2}\right \} \frac {\ln (2\pi -z)}{z^{2k}} dz\right |\leq \frac {4\pi M_f}{2k-1} \frac 1 {(2\pi)^{2k}},$$
where $$ M_f\geq \max_{2\pi \leq z \leq 3\pi} \left |\frac {f(z-2\pi)}{\sin\frac {z-2\pi} 2}- \frac {f(0)}{ \frac {z-2\pi} 2}\right |.$$
We give a rough estimate for this maximum value. Recalling that $f(z)=F\left (\frac 1 2, \frac 12; 1; \sin^2\frac {z} 4\right )$ and $f(0)=1$, noting that the Maclaurin expansion of $F\left (\frac 1 2, \frac 12; 1; t \right )$ has all positive coefficients (cf. \cite[(15.2.1)]{NIST}), and that $\frac {d}{dt} \left ( \frac {F\left (\frac 1 2, \frac 12; 1; t \right )-1}{\sqrt t}\right )>0$ for $t\in (0, 1)$ via term-by-term differentiation, we see that $\frac {f(z-2\pi)-1 }{\sin\frac {z-2\pi} 2}=\frac 1 {2\cos\frac {z-2\pi} 4} \frac {F\left (\frac 1 2, \frac 12; 1; t \right )-1}{\sqrt t}$ is monotone increasing for
$ 2\pi< z \leq 3 \pi$, or correspondingly, $t= \sin^2\frac {z-2\pi} 4\in (0, 1/2]$. Therefore, we have
$$ 0< \frac {f(z-2\pi)-1 }{\sin\frac {z-2\pi} 2} \leq F \left (\frac 1 2, \frac 12; 1; \frac 1 2 \right ) -1 = \frac {\sqrt\pi} {\Gamma(3/4)\Gamma(3/4)} -1; $$
see \cite[(15.4.28)]{NIST}. Also, one has
$$0< \frac 1 {\sin x}-\frac 1 x =\frac {x-\sin x} {x\sin x}\leq \frac {x^3}{6x\sin x} \leq \frac {\pi^2} {24}~~\mbox{for}~~0<x\leq \frac \pi 2.$$
Hence an appropriate choice of $M_f$ is
$$M_f=\frac {\sqrt\pi} {\Gamma(3/4)\Gamma(3/4)} -1 + \frac {\pi^2} {24}=0.5915\cdots.$$
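That this choice of $M_f$ indeed dominates the maximum can be spot-checked on a grid (an illustration only; the proof above does not rely on it). The sketch evaluates $f(u)=F(1/2,1/2;1;\sin^2\frac u4)=1/\mathrm{AGM}(1,\cos\frac u4)$ via the AGM.

```python
import math

def agm(a, b, tol=1e-15):
    while abs(a - b) > tol * a:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return (a + b) / 2

def f(u):
    """f(u) = F(1/2,1/2;1;sin^2(u/4)) = 1/AGM(1, cos(u/4)) for real |u| < 2*pi."""
    return 1.0 / agm(1.0, math.cos(u / 4))

M_f = math.sqrt(math.pi) / math.gamma(0.75) ** 2 - 1 + math.pi ** 2 / 24
grid_max = max(abs(f(u) / math.sin(u / 2) - 1.0 / (u / 2))
               for u in (k * math.pi / 2000 for k in range(1, 2001)))
print(M_f, grid_max)  # M_f ~ 0.5915 dominates the grid maximum
```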
We proceed to evaluate the crucial part
$$\frac 2 \pi \int_{\Gamma_l} \frac {\ln (2\pi-z)}{z-2\pi} \frac {dz}{z^{2k}},$$
of which the integrand has a pole $z=2\pi$, coinciding with the logarithmic singularity.
To treat such kind of singularities, we appeal to the idea of
Wong and Wyman \cite{WongWyman}.
For simplicity we re-scale the variable $\tau=\frac {z-2\pi} {2\pi}$, which turns the integral into
$$\frac 2 {\pi (2\pi)^{2k}} \int_{\frac 1 2}^{(0-)} \frac {\ln (2\pi)+\ln(-\tau)}{\tau } \frac { d\tau} { e^{2k\ln (1+\tau)}}
=-\frac {4i\ln 2\pi}{(2\pi)^{2k}} + \frac 2 {\pi(2\pi)^{2k}} \int_{\frac 1 2}^{(0-)} \frac { \ln(-\tau)}{\tau } \frac {d\tau }{e^{2k\ln (1+\tau)}}
,$$
where the integration path starts and ends at $\tau=\frac 1 2$, and encircles the origin clockwise: the loop is a re-scaled version of $\Gamma_l$. The branch of $\ln (1+\tau) $ is chosen so that $\ln (1+\tau)>0 $ for $\tau >0$.
To extract the main contribution, we further split the exponent in the integrand. Indeed, we see that
$$ \left | \frac 2 {\pi(2\pi)^{2k}} \int_{\frac 1 2}^{(0-)} \frac {e^{2k(\tau -\ln (1+\tau)) }
-1}{\tau } e^{-2k\tau} \ln(-\tau)d\tau \right | \leq \frac 2 { (2\pi)^{2k}} \max_{0\leq \tau <1/2} \left |\varphi(\tau )\right |\leq \frac {2M_\varphi} { (2\pi)^{2k}},$$
where the function $$\varphi(\tau )=\frac {e^{2k(\tau -\ln (1+\tau)) }
-1}{\tau } e^{-2k\tau}$$ is analytic in a neighborhood of $[0, 1/2]$, thus the integration path collapses to the lower and upper edges of $[0, 1/2]$, and its bound $M_\varphi$ can be obtained by noticing that
$$0\leq \tau -\ln(1+\tau )\leq \frac {\tau^2} 2~~\mbox{for}~~\tau >0,~~~\mbox{and}~~-\ln(1+\tau)\leq -\frac 2 3\tau ~~\mbox{for}~~ 0\leq \tau \leq \frac 1 2.$$
Now for $0\leq \tau \leq \frac 1 {\sqrt k}$, one has
$$0\leq \varphi(\tau)\leq \frac {e^{k \tau^2}
-1}{\tau } e^{-2k\tau} \leq (e-1) (k\tau)e^{-2(k\tau)} \leq \frac {e-1}{2e}. $$
While for $ \frac 1 {\sqrt k}\leq \tau \leq \frac 1 2$,
$$0\leq \varphi(\tau)\leq \frac {e^{-2k \ln (1+\tau) }}\tau \leq \frac {e^{-\frac 4 3 k\tau}}\tau \leq \left . \frac {e^{-\frac 4 3 k\tau}}\tau \right |_{\tau=\frac 1 {\sqrt k}}=\sqrt k e^{-\frac 4 3\sqrt k} \leq \frac 3 {4e}
. $$ Thus we may choose
$$M_\varphi=\max\left ( \frac {e-1}{2e}, \frac 3 {4e}\right ) =\frac {e-1}{2e},$$ which does not depend on $k$.
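The bound $M_\varphi$ is likewise easy to spot-check on a grid for several values of $k$ (our sampling choices; the actual maximum of $\varphi$ sits well below the bound):

```python
import math

def phi(tau, k):
    """phi(tau) = (exp(2k(tau - ln(1+tau))) - 1)/tau * exp(-2k tau), 0 < tau <= 1/2."""
    return math.expm1(2 * k * (tau - math.log1p(tau))) / tau * math.exp(-2 * k * tau)

M_phi = (math.e - 1) / (2 * math.e)
worst = max(phi(j / 4000, k)
            for k in (10, 25, 100, 400)
            for j in range(1, 2001))
print(M_phi, worst)  # the grid maximum stays below (e-1)/(2e)
```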
The remaining piece turns out to be the most significant. Indeed, the change of variable $s=2k\tau$ gives
\begin{equation*}\label{(3.32)} \begin{array}{rl}
\displaystyle{ \frac 2 {\pi(2\pi)^{2k}} \int_{\frac 1 2}^{(0-)} \frac {\ln (-\tau) }
\tau e^{-2k\tau} d\tau }&= \displaystyle{ \frac 2 {\pi(2\pi)^{2k}} \int_{k}^{(0-)} \frac {\ln(-s)-\ln 2k} s e^{-s} ds } \\[.4cm]
& =\displaystyle{ \frac {4i\ln 2k} {(2\pi)^{2k}}+ \frac 2 {\pi(2\pi)^{2k}} \int_{k}^{(0-)} \frac {\ln(-s)} s e^{-s} ds .}
\end{array}
\end{equation*}
The last integral can be approximated by extending the integration path to $\displaystyle{\int_{+\infty}^{(0-)}}$, with an error bounded by $\displaystyle{\frac {4 e^{-k}} {k(2\pi)^{2k}}}$.
Recalling the integral representation
$$\frac 1 {\Gamma(z)}=\frac 1 {2\pi i} \int^{(0-)}_{+\infty} (-s)^{-z} e^{-s}ds$$ for all finite $z$; cf. \cite[(5.9.2)]{NIST} or Wong and Wyman \cite{WongWyman}, differentiating with respect to $z$ and setting $z=1$, we obtain
$$ \int^{(0-)}_{+\infty} \frac {\ln(-s)} s e^{-s} ds= \frac {2\pi i\Gamma'(1)}{\Gamma^2(1)} =-2\pi i \gamma,$$
where $\gamma=0.5772\cdots$ is Euler's constant; see \cite[(5.4.12)]{NIST}.
Hence we can write
\begin{equation}\label{(3.33)} I_l= \frac {4i\ln 2k} {(2\pi)^{2k}}+ \frac {(-4\gamma-4\ln2\pi)i+\delta_{l,k}} { (2\pi)^{2k}},
\end{equation}
with
$$|\delta_{l,k}|\leq \frac {4\pi M_f}{2k-1} + 2M_\varphi + \frac {4e^{-k}} k .$$
\vskip .3cm
Now we sum up the calculations and estimates made above. Substituting (\ref{(3.29)}), (\ref{(3.31)}) and (\ref{(3.33)}) into (\ref{(3.28)}), we have the approximation
\begin{equation}\label{(3.34)}
\frac \pi {\sqrt 2}
\rho_k= \frac {4 \ln 2k} {(2\pi)^{2k}}+ \frac {\left (11.0903\cdots -4\gamma-4\ln2\pi\right )+\delta_{k}} { (2\pi)^{2k}}, \end{equation} with
\begin{equation*}\label{(3.35)}
|\delta_{k}| \leq \pi M_v \left ( \frac 2 3\right )^{2k} + \frac {4\pi M_f}{2k-1} +2M_\varphi + \frac {4e^{-k}} k < 1.0259 \end{equation*} for $k\geq 10$. Hence we obtain from (\ref{(3.34)}) that
\begin{equation}\label{(3.36)}
\frac {4 \ln 2k + 0.4041\cdots } {(2\pi)^{2k}} \leq
\frac \pi {\sqrt 2}
\rho_k\leq \frac {4 \ln 2k +2.4559\cdots } {(2\pi)^{2k}} \end{equation}for $k\geq 10$.
\subsection{The ratio}\label{sec:3.4}
For $k\geq 10$, it is readily verified that
$$ \frac 8 9\ln(2k+2) \geq 2.7475\cdots >2.4559-\frac {11} 9 \times 0.4041\cdots.$$ Therefore, it follows from (\ref{(3.36)}) that
\begin{equation*}\label{(3.37)} \rho_k/\rho_{k+1}\leq \frac {44} 9 \pi^2 \end{equation*}for $k\geq 10$.
It is easily verified, by numerical evaluation of the first few ratios for $k=0,1,\cdots,9$, that the inequality holds for all $k\geq 0$, as can be seen from Table \ref{table2}.
\begin{table}[h]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
\hline
$k$ & $0$ & $1$ & $2$ & $3$ &$4$ & $5$ & $6$ & $7$ & $8$ & $9$ \\
$\rho_k/\rho_{k+1}$ & 17.46 & 27.41 &32.65 &35.30 &36.67 &37.41 & 37.86 & 38.15 & 38.36 & 38.51 \\[0.1cm]
$\frac {44} 9\pi^2$ & 48.25 & 48.25 &48.25 &48.25 &48.25 &48.25 &48.25 &48.25 &48.25 &48.25 \\
\hline
\end{tabular}
\caption{The ratios $\rho_k/\rho_{k+1}$ for $k=0,1,\cdots, 9$.}\label{table2}
\end{table}
A by-product is that $\rho_k/\rho_{k+1} \rightarrow 4\pi^2$ as $k\rightarrow \infty$. Another by-product of (\ref{(3.36)}) and Table \ref{table2} is that $\rho_k>0$ for all $k$, and thus the $\beta_{2k}$ have alternating signs, as stated in (\ref{(2.7)}).
\section { Discussion } \indent\setcounter{section} {4}
\setcounter{equation} {0} \label{sec:4}
We discuss very briefly a conjecture of H. Granath, of which we were not aware until we had almost finished writing the present paper. In \cite{Granath}, Granath derives an asymptotic expansion
\begin{equation}\label{(4.1)}
\pi G_{n-1}\sim \ln(16n)+\gamma +\sum_{k=1} ^\infty \frac{a_k}{(16n)^k}, ~~n\rightarrow\infty, \end{equation} where $a_k$ are `effectively computable' constants but not explicitly given, except for the first few.
The author is interested in seeking sharp bounds of arbitrary orders. Indeed, denoting
\begin{equation}\label{(4.2)}
A_m(n)=\ln(16n)+\gamma +\sum_{k=1} ^m \frac{a_k}{(16n)^k} , \end{equation}
Granath proves that
$A_5(n) <\pi G_{n-1} < A_7(n)$ and states that $A_9(n) <\pi G_{n-1} < A_{11}(n)$, for all positive $n$. It is then conjectured that (the sign in \cite[(10)]{Granath} is apparently wrong)
\begin{equation}\label{(4.3)}
(-1)^{\frac {m(m+1)} 2} \left ( \pi G_{n-1}-A_m(n)\right ) <0 \end{equation}for all $n=1,2,\cdots$ and $m=0,1,2,\cdots$.
The existence of (\ref{(4.1)}) has also been justified in \cite{LLXZ} and \cite{NemesNemes}. It is easily seen that in terms of $\beta_{s}$ in (\ref{(2.3)}) and (\ref{(2.4)}), the coefficients can be written as
\begin{equation}\label{(4.4)}
a_k=4^k\left [ -\frac 1 k +\sum^k_{s=1} \frac { (k-1)! 4^s \beta_s }{(s-1)!(k-s)!} \right ], ~~k=1,2,\cdots; \end{equation}
see also \cite{LLXZ} for an iterative formula. So far, numerical experiments agree with (\ref{(4.3)}). A proof of it might be found either by following the steps, or by using the results, of the present paper.
Then, a natural question arises:
\noindent {\qe\label{question 2}{Considering the general expansion in (\ref{(1.11)}), for which $h$ do we have the ``best'' approximation in the sense of Theorem \ref{Thm 1} and (\ref{(4.3)})?
}}
The case $h=3/4$ is the one we have dealt with in the present paper. Very likely the expansion (\ref{(1.11)}) for $h=1/2$ and $h=1$ would turn out to be the ``best''.
As mentioned earlier, the coefficients $\beta_{2s}$ of the expansion (\ref{(1.12)}) can be obtained iteratively via (\ref{(2.4)}). We complete the paper by giving a couple of
alternative representations for $\beta_{2s}$. For example,
it can be shown that
\begin{equation}\label{(4.5)}
\beta_{2l} = \frac { (-1)^{l+1}}{ 2^{2l} (l!)^2 } \left |
\begin{array}{cccccc}
d_{0,2} & d_{0,3} & \cdots & d_{0,l-1} & d_{0, l} & d_{0, l+1} \\
d_{1,2} & d_{1,3} & \cdots & d_{1,l-1} & d_{1, l} & d_{1, l+1} \\
0 & d_{2,3} & \cdots & d_{2,l-1} & d_{2, l} & d_{2, l+1} \\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\
0 & 0 & \cdots &d_{l-2,l-1}&d_{l-2,l} &d_{l-2,l+1} \\
0 & 0 & \cdots & 0 &d_{l-1,l} &d_{l-1,l+1} \\
\end{array}
\right |, ~~~l=1,2,\cdots.\end{equation}Indeed, taking $k=1,2,\cdots, l$ in (\ref{(2.4)}), we have
a linear algebraic system with unknowns $\beta_2$, $\beta_4$, $\cdots$, $\beta_{2l}$. Solving the system gives (\ref{(4.5)}).
Also, a combination of (\ref{(2.11)}), (\ref{(3.14)}) and (\ref{(3.27)}) yields an integral representation
\begin{equation}\label{(4.6)}
\beta_{2l}=\frac {(-1)^{l+1} (2l-1)!} {4\pi i} \oint \frac z {\sin\frac z 2} \frac {F\left ( 1/ 4 , 1/ 4 ; 1; \sin^2\frac z 2\right ) dz}{z^{2l+1}} , ~~l=1,2,\cdots,\end{equation}
where the integration path is a small circle encircling the origin counterclockwise. Of course, (\ref{(3.34)}) can be interpreted as
\begin{equation}\label{(4.7)}
\beta_{2l}=\frac {(-1)^{l+1} (2l-1)!} {\pi (2\pi)^{2l}}\left\{4\sqrt 2\ln(2l) +O(1) \right\} \end{equation} for large $l$. Consequences can be drawn from such approximations; for example, the expansion (\ref{(1.12)}) is divergent; compare \cite[Theorem 3]{Granath}.
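The entries of Table \ref{table2} and the limit $4\pi^2$ can be reproduced from (\ref{(4.6)}): since the contour integral extracts a Taylor coefficient, $\rho_l$ equals half the coefficient of $z^{2l}$ in $\frac z{\sin\frac z2}\,F\left(\frac14,\frac14;1;\sin^2\frac z2\right)$. The sketch below carries this out in exact rational arithmetic (our own reduction of (\ref{(4.6)}), not taken from the paper).

```python
from fractions import Fraction as Fr
import math

N = 22  # truncation order: coefficients of z^0 .. z^N

def ser_mul(u, v):
    """Product of two truncated power series (lists of Fractions)."""
    w = [Fr(0)] * (N + 1)
    for i, ui in enumerate(u):
        if ui:
            for j in range(N + 1 - i):
                if v[j]:
                    w[i + j] += ui * v[j]
    return w

# p(z) = sin(z/2)/(z/2) = sum_m (-1)^m z^{2m} / (4^m (2m+1)!)
p = [Fr(0)] * (N + 1)
for m in range(N // 2 + 1):
    p[2 * m] = Fr((-1) ** m, 4 ** m * math.factorial(2 * m + 1))

# q = 1/p, so that z/sin(z/2) = 2 q(z)
q = [Fr(0)] * (N + 1)
q[0] = Fr(1)
for n in range(1, N + 1):
    q[n] = -sum(p[k] * q[n - k] for k in range(1, n + 1))

# t(z) = sin^2(z/2) = (z^2/4) p(z)^2
p2 = ser_mul(p, p)
t = [Fr(0)] * (N + 1)
for i in range(N - 1):
    t[i + 2] = p2[i] / 4

# F(1/4,1/4;1;t) = sum_n [((1/4)_n / n!)^2] t^n, composed with t(z)
F = [Fr(0)] * (N + 1)
tn = [Fr(0)] * (N + 1); tn[0] = Fr(1)
coef = Fr(1)
for n in range(N // 2 + 1):
    for i in range(N + 1):
        F[i] += coef * tn[i]
    coef *= ((Fr(1, 4) + n) / (n + 1)) ** 2
    tn = ser_mul(tn, t)

g = ser_mul(q, F)  # (1/2) * z/sin(z/2) * F, so rho_l = g[2l]
rho = [g[2 * l] for l in range(N // 2 + 1)]
ratios = [float(rho[l] / rho[l + 1]) for l in range(len(rho) - 1)]
print(ratios)  # 17.4545..., 27.4..., increasing towards 4*pi^2
```

The first computed ratio is $\rho_0/\rho_1 = 192/11 = 17.4545\cdots$, and the subsequent ratios agree with Table \ref{table2} to within rounding.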
\section*{Acknowledgements}
The authors are grateful to Prof. R. Wong for bringing the problem to their attention.
The authors thank the anonymous referees for their careful reading of the manuscript and for their valuable suggestions and comments. One referee suggested using a quadratic transformation of the hypergeometric functions, which makes the proof of Lemma \ref{lem 2.2} more natural and simpler. The other referee provided many constructive suggestions and corrections which have much improved the readability of the manuscript.
The work of Yutian Li was supported in part by the HKBU Strategic Development Fund,
a start-up grant from Hong Kong Baptist University,
and a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No. HKBU 201513).
The work of Saiyu Liu was supported in part by Hunan Natural Science Foundation under grant number 14JJ6030, and by the National Natural Science Foundation of China under grant number 11326082.
The work of Shuaixia Xu was supported in part by the National
Natural Science Foundation of China under grant number
11201493, GuangDong Natural Science Foundation under grant number S2012040007824, and the Fundamental Research Funds for the Central Universities under grant number 13lgpy41.
Yuqiu Zhao was supported in part by the National
Natural Science Foundation of China under grant numbers 10471154 and
10871212.
| {
"timestamp": "2014-05-13T02:11:53",
"yymm": "1309",
"arxiv_id": "1309.4564",
"language": "en",
"url": "https://arxiv.org/abs/1309.4564",
"abstract": "We study the asymptotic expansion for the Landau constants $G_n$ $$\\pi G_n\\sim \\ln N + \\gamma+4\\ln 2 + \\sum_{s=1}^\\infty \\frac {\\beta_{2s}}{N^{2s}},~~n\\rightarrow \\infty, $$ where $N=n+3/4$, $\\gamma=0.5772\\cdots$ is Euler's constant, and $(-1)^{s+1}\\beta_{2s}$ are positive rational numbers, given explicitly in an iterative manner. We show that the error due to truncation is bounded in absolute value by, and of the same sign as, the first neglected term for all nonnegative $n$. Consequently, we obtain optimal sharp bounds up to arbitrary orders of the form $$ \\ln N+\\gamma+4\\ln 2+\\sum_{s=1}^{2m}\\frac{\\beta_{2s}}{N^{2s}}< \\pi G_n < \\ln N+\\gamma+4\\ln 2+\\sum_{s=1}^{2k-1}\\frac{\\beta_{2s}}{N^{2s}}$$ for all $n=0,1,2,\\cdots$, $m=1,2,\\cdots$, and $k=1,2,\\cdots$.The results are proved by approximating the coefficients $\\beta_{2s}$ with the Gauss hypergeometric functions involved, and by using the second order difference equation satisfied by $G_n$, as well as an integral representation of the constants $\\rho_k=(-1)^{k+1}\\beta_{2k}/(2k-1)!$.",
"subjects": "Classical Analysis and ODEs (math.CA)",
"title": "Asymptotics of Landau constants with optimal error bounds"
} |
https://arxiv.org/abs/0909.1859 | Simplices with equiareal faces | We study simplices with equiareal faces in the Euclidean 3-space by means of elementary geometry. We present an unexpectedly simple proof of the fact that, if such a simplex is non-degenerate, then every two of its faces are congruent. We show also that this statement is wrong for degenerate simplices and find all degenerate simplices with equiareal faces. |
\section{Introduction}\label{section1}
\footnotetext{The first author was supported in part by the Russian State Program for Leading Scientific Schools, Grant~NSh--8526.2008.1.}
This paper deals with simplices with equiareal faces in the Euclidean 3-space.
A simplex is called a \textit{simplex with equiareal faces} if all its faces
have the same area, and is called \textit{degenerate} if all its vertices lie
in a single plane. Our primary interest is the following
\textbf{Problem 1.}
\textit{Prove that all faces of any non-degenerate
simplex with equiareal faces in the Euclidean 3-space
are congruent to each other.}
Problem 1 has already appeared in several contexts.
We mention here some of those appearances though we cannot
say that our knowledge is complete.
In the 1960s Professor Hans Vogler in Vienna used Problem 1
to convince his students of how powerful synthetic geometry is.
As far as we know, his
descriptive-geometrical solution was never published.
Independently, in the 1970's and 1980's
the following form of Problem 1 was used
to prevent `undesirable applicants'
from joining the Moscow State University:
`The faces of a triangular pyramid have the same area.
Show that they are congruent.'
An elementary solution to the latter problem
given in \cite{Vardi} shows that indeed the problem is
far from being trivial.
In \cite{McMullen}, among other results P. McMullen has
proved that \textit{a non-degenerate simplex is
a simplex with equiareal faces
if and only if its opposite
pairs of edges have the same lengths.}
It is easy to see that this statement is equivalent to
Problem 1.
McMullen's proof is very short and natural, but it is not
elementary since it rests on Minkowski's theorem
on uniqueness and existence of a~closed convex polyhedron
with given directions
and areas of faces (see, e.\,g., \cite{Alexandrov};
the direction of a~face is determined by
the outward unit normal to the face).
In 2007 Professor Robert Connelly at Cornell University
brought to our attention
the fact that it is reasonable to study degenerate simplices
with equiareal faces. He argued that when we fix three vertices,
say $A$, $B$, and $C$, and move arbitrarily the fourth vertex,
say $D$, nothing special happens when $D$ occurs
in the plane~$ABC$: the degenerate simplex is obtained as a limit
of non-degenerate ones; the notions of the vertex, edge, and
face are clear for it; the notion of
the face area is well defined; and from a combinatorial point of
view there is no difference between degenerate and
non-degenerate simplices. Besides, degenerate
polyhedra play a very important role in the `advanced' study
of convex polytopes, see, e.\,g., \cite{Alexandrov}.
Professor Robert Connelly also brought to our attention
the fact that, in contrast with Problem 1, there are
degenerate simplices with non-congruent equiareal faces.
In fact, every parallelogram equipped with its diagonals may be
treated as a degenerate simplex with equiareal faces; likewise,
every four points on a line may be treated as the vertex-set
of a degenerate simplex with equiareal faces
(with all areas equal to zero). The problem is whether
there are some other degenerate simplices with equiareal faces.
In Section~\ref{section2} we give the shortest
elementary solution to Problem~1 available to us.
As far as we know, its main idea should be
attributed to Professor Hans Vogler
who is now at the University of Innsbruck.
In Section~\ref{section3} we use Heron's formula to study all
simplices with equiareal faces, both degenerate
and non-degenerate.
\section{A short elementary solution to Problem 1}\label{section2}
Note that in order to solve Problem 1 it is sufficient to solve the
following Problem~2, which is of independent interest.
\textbf{Problem~2.}
\textit{Let $ABCD$ be a non-degenerate simplex, let
$a(\triangle ABC){=}a(\triangle ABD)$, and let
$a(\triangle ACD)=a(\triangle BCD)$, where
$a(\triangle XYZ)$ stands for the area of the
triangle $\triangle XYZ$.
Prove that $|AC|=|BD|$ and $|BC|=|AD|$, where
$|XY|$ stands for the length of the straight
line segment~$XY$} (see Fig.~1).
\textbf{Solution} to Problem 2.
Let $P$ be the plane which is parallel to the line~$AB$
and contains the line~$CD$ (see Fig.~2).
We are going to study the orthogonal projection of
the simplex $ABCD$ into the plane $P$.
Denote by~$X^\perp$ the image of a point $X$ under that projection
and denote by~$X^\ast$ the foot of the perpendicular to the line~$AB$
emanated from the point~$X$.
\begin{figure}
\includegraphics[width=0.45\textwidth]{alex_weiss_fig1.eps}\hfill
\includegraphics[width=0.54\textwidth]{alex_weiss_fig2.eps}\\
\hskip-65mm\parbox[t]{0.47\textwidth}{\caption{}}\hskip10mm
\parbox[t]{0.47\textwidth}{\caption{}}
\end{figure}
Since~$AB$ is parallel to~$P$ we
have~$|D^\ast D^{\ast\perp}|=|C^\ast C^{\ast\perp}|$.
Since the triangles~$\triangle ABC$ and~$\triangle ABD$
have equal areas and the common side~$AB$ we conclude
that~$|DD^\ast |=|CC^\ast |$.
Applying Pythagoras theorem to the right-angled
triangles~$\triangle DD^\ast D^{\ast\perp}$ and
$\triangle CC^\ast C^{\ast\perp}$
we get~$|DD^{\ast\perp}|^2
=|DD^{\ast}|^2-|D^\ast D^{\ast\perp}|^2
=|CC^{\ast}|^2-|C^\ast C^{\ast\perp}|^2
=|CC^{\ast\perp}|^2$.
In terms of the quadrilateral $A^\perp CB^\perp D$ this means that
the vertices~$D$ and~$C$ lie at the same distance from
the diagonal~$A^\perp B^\perp$.
Note also that the triangles $\triangle A^\perp B^\perp D$ and
$\triangle A^\perp B^\perp C$ have the same area and the points~$D$
and~$C$ lie on the different sides of the line passing
through the points~$A^\perp$ and~$B^\perp$.
In fact, if they lie on the same side, the line through
the points~$A^\perp$ and~$B^\perp$ must be parallel to the line
through the points~$D$ and~$C$ and, thus,
the points~$A$, $B$, $C$, and~$D$ would be coplanar,
a contradiction.
Similar arguments applied to the triangles~$\triangle ACD$
and~$\triangle BCD$ show that the vertices~$A^\perp$ and~$B^\perp$
of the quadrilateral~$A^\perp CB^\perp D$ lie at the same distance
from the line through the points~$D$ and~$C$ and, moreover,
lie on the different sides of that line. Hence,
the quadrilateral~$A^\perp CB^\perp D$ is convex.
Recall that if a convex planar quadrilateral is such that
every two opposite vertices lie at the same distance from
the diagonal that joins the other two vertices, then the
quadrilateral is a parallelogram. This implies that
the quadrilateral~$A^\perp CB^\perp D$ is a parallelogram
and, thus, that $|A^\perp C|=|B^\perp D|$ and
$|B^\perp C|=|A^\perp D|$.
Applying Pythagoras theorem to the right-angled triangles
$\triangle AA^\perp C$ and $\triangle BB^\perp D$ and
taking into account that $|AA^\perp|=|BB^\perp|$
we get $|AC|^2=|AA^\perp|^2+|A^\perp C|^2
=|BB^\perp|^2+|B^\perp D|^2=|BD|^2$.
Similar arguments applied to the triangles
$\triangle AA^\perp D$ and $\triangle BB^\perp C$
show that $|BC|=|AD|$. \hfill Q.E.D.
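The statement of Problem 2 can be illustrated numerically. The sketch below (the coordinates are our own choice) builds an `isosceles' tetrahedron with pairwise equal opposite edges and confirms that all four faces then have equal areas, in line with McMullen's equivalence quoted in Section~\ref{section1}.

```python
import itertools, math

def area(p, q, r):
    """Area of the triangle pqr in 3-space, via the cross product."""
    u = [q[i] - p[i] for i in range(3)]
    v = [r[i] - p[i] for i in range(3)]
    c = (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])
    return 0.5 * math.sqrt(sum(x * x for x in c))

def dist(p, q):
    return math.dist(p, q)

# Sign-flipped vertices of a box give pairwise equal opposite edges.
a, b, c = 1.0, 2.0, 3.0
A, B = (a, b, c), (a, -b, -c)
C, D = (-a, b, -c), (-a, -b, c)
areas = [area(*face) for face in itertools.combinations((A, B, C, D), 3)]
print(areas)                   # all four face areas coincide
print(dist(A, B), dist(C, D))  # opposite edges AB and CD are equal
print(dist(A, C), dist(B, D), dist(A, D), dist(B, C))
```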
\section{A study of degenerate and non-degenerate simplices\\ with equiareal faces}\label{section3}
From Section \ref{section1}
we know the following three types of non-degenerate
and degenerate simplices with equiareal faces in the Euclidean 3-space:
Type 1: non-degenerate simplex with all faces congruent;
Type 2: parallelogram equipped with its diagonals; and
Type 3: four points on a line treated as a vertex-set
of a degenerate simplex with equiareal faces of zero area.
In this Section we use Heron's formula to study the following
\textbf{Problem~3.}
\textit{Prove that there are no simplices with equiareal faces which do not belong to Types 1--3.}
\textbf{Solution} to Problem 3.
Let $T$ be a simplex with equiareal faces. Let the first face
(I) of $T$ have edges of the lengths $a$, $b$, and $c$;
the second face
(II) has edges of the lengths $a$, $y$, and $z$;
the third face
(III) --- $b$, $x$, and $z$; and the fourth face
(IV) --- $c$, $x$, and $y$.
(Equivalently, we can say that side $x$ is opposite to $a$;
side $y$ --- to $b$; and $z$ --- to $c$).
Let $S$ be the common area of the faces of $T$.
Heron's formula for the face (I) yields
\begin{align}
(4S)^2 &=(a+b+c)(-a+b+c)(a-b+c)(a+b-c)\notag\\
&= 2a^2b^2+2a^2c^2+2b^2c^2-a^4-b^4-c^4=-(a^2-b^2+c^2)^2+4a^2c^2.
\end{align}
Now let us use Heron's formula to express the fact that the faces
(I) and (II) have the same area:
\begin{align}
(a+y+z)(-& a+y+z)(a-y+z)(a+y-z)-(4S)^2\notag\\
=& 2a^2y^2+2a^2z^2+2z^2y^2-a^4-y^4-z^4-(4S)^2\\
=&-(z^2-y^2-a^2)^2-(4S)^2+4y^2a^2=0.\notag
\end{align}
Solving this equation with respect to $z^2$ yields
\begin{equation}
z^2=y^2+a^2\pm\sqrt{4a^2y^2-(4S)^2}.
\end{equation}
Similarly, we use Heron's formula to express the fact that
the faces (I) and (III) have the same area:
\begin{align}
(b+z+x)(-& b+z+x)(b-z+x)(b+z-x)-(4S)^2\notag\\
=& 2b^2z^2+2b^2x^2+2z^2x^2-b^4-z^4-x^4-(4S)^2\notag\\
=& -(z^2-x^2-b^2)^2-(4S)^2+4b^2x^2=0.\notag
\end{align}
Solving this equation with respect to $z^2$ yields
\begin{equation}
z^2=x^2+b^2\pm\sqrt{4b^2x^2-(4S)^2}.
\end{equation}
Finally, we use Heron's formula to express the fact that the faces
(I) and (IV) have the same area:
\begin{align}
(c+x+y)(-& c+x+y)(c-x+y)(c+x-y)-(4S)^2\notag\\
=& 2c^2x^2+2c^2y^2+2x^2y^2-c^4-x^4-y^4-(4S)^2\notag\\
=& -(y^2-x^2-c^2)^2-(4S)^2+4c^2x^2=0.\notag
\end{align}
Solving this equation with respect to $y^2$ yields
\begin{equation}
y^2=x^2+c^2\pm\sqrt{4c^2x^2-(4S)^2}.
\end{equation}
Eliminate $z^2$ from (3) and (4)
$$
y^2+a^2\pm\sqrt{4a^2y^2-(4S)^2}=x^2+b^2\pm\sqrt{4b^2x^2-(4S)^2},
$$
then square this equation twice in order to eliminate the square
roots and use the
formula $(a^2-b^2+c^2)^2=4a^2c^2-(4S)^2$ (which is a consequence
of (1)) to obtain
\begin{gather}
16a^4y^4+16b^4x^4+(x^2+b^2-y^2-a^2)^4-32a^2b^2x^2y^2-8a^2y^2(x^2+b^2-y^2-a^2)^2\notag\\
-8b^2x^2(x^2-y^2-a^2+b^2)^2=-4(4S)^2(x^2-y^2-a^2+b^2)^2.
\end{gather}
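The two squarings can be made explicit. Write $D=x^2+b^2-y^2-a^2$, $A=4a^2y^2-(4S)^2$ and $B=4b^2x^2-(4S)^2$, and let $\varepsilon_1,\varepsilon_2=\pm1$ record the sign choices (this shorthand is ours, introduced only for this step). Then:

```latex
% first squaring leaves one radical, the second removes it
\varepsilon_1\sqrt{A}-\varepsilon_2\sqrt{B}=D
\;\Longrightarrow\;
A+B-2\varepsilon_1\varepsilon_2\sqrt{AB}=D^2
\;\Longrightarrow\;
4AB=\bigl(A+B-D^2\bigr)^2 .
```

Expanding $4AB=(A+B-D^2)^2$, in which the sign choices have disappeared, and collecting terms gives (6).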
So, we arrive at the most computationally difficult, but
still straightforward
point of the solution: substitute $y^2$ in (6) by the
right-hand side of (5).
After simplifications and multiple usage of the formula
$(a^2-b^2+c^2)^2=4a^2c^2-(4S)^2$ we get
\begin{equation}
4(c^2x^2-4S^2)(x^2-a^2)^2 S^4= (x^2-a^2)^2\bigl[-x^4+2x^2(b^2+3c^2)+2a^4-4a^2(b^2+c^2)+(b^2-c^2)^2\bigr]S^4.
\end{equation}
We see that $S=0$ is a root of (7) which corresponds to simplices of Type 3.
In order to find the other roots,
cancel $S^4$ in (7), rearrange terms and use the formula
$(a^2-b^2+c^2)^2=4a^2c^2-(4S)^2$ again to arrive at
$$
(x^2-a^2)^2\bigl[x^4-2x^2(b^2+c^2)+a^2(2b^2+2c^2-a^2)\bigr]=0.
$$
Note that the bi-quadratic expression in the brackets has two roots:
$x^2=a^2$ and $x^2=2b^2+2c^2-a^2$.
Note also that we can obtain similar equations for $y$ and
$z$ just by permuting the three pairs of symbols $(a,x)$, $(b,y)$,
and $(c,z)$.
As a result we find 8 solutions for $x$, $y$, and $z$, which are
collected as rows of the following table:
\vskip10pt
\begin{center}
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{|c|c|c|c|}
\hline
& $x$ & $y$ & $z$ \\
\hline
Solution 1 & $a$ & $b$ & $c$ \\
\hline
Solution 2 & $a$ & $b$ & $\sqrt{2a^2+2b^2-c^2}$ \\
\hline
Solution 3 & $a$ & $\sqrt{2a^2-b^2+2c^2}$ & $c$ \\
\hline
Solution 4 & $a$ & $\sqrt{2a^2-b^2+2c^2}$ & $\sqrt{2a^2+2b^2-c^2}$ \\
\hline
Solution 5 & $\sqrt{2b^2+2c^2-a^2}$ & $b$ & $c$ \\
\hline
Solution 6 & $\sqrt{2b^2+2c^2-a^2}$ & $b$ & $\sqrt{2a^2+2b^2-c^2}$ \\
\hline
Solution 7 & $\sqrt{2b^2+2c^2-a^2}$ & $\sqrt{2a^2-b^2+2c^2}$ & $c$ \\
\hline
Solution 8 & $\sqrt{2b^2+2c^2-a^2}$ & $\sqrt{2a^2-b^2+2c^2}$ & $\sqrt{2a^2+2b^2-c^2}$ \\
\hline
\end{tabular}
\end{center}
\vskip10pt
Solution 1 obviously corresponds to the simplices of Type 1.
Solutions 2, 3, and 5 correspond to simplices of Type 2.
For example, a simplex $T$ corresponding to Solution 2
degenerates into a parallelogram with side lengths $a$
and $b$ and diagonals $c$ and $\sqrt{2a^2+2b^2-c^2}$
(recall that in any parallelogram the sum of the squared
lengths of all sides equals the sum of the squared lengths
of the two diagonals).
Solutions 4, 6, and 7 correspond to simplices of Type 2 again, but
this time the face (I), with edge lengths $a$, $b$, and $c$, must be
a right-angled triangle. For example, consider
Solution 4. We have
\begin{equation}
x^2=a^2,\quad y^2=2a^2-b^2+2c^2, \quad\mbox{and}\quad z^2=2a^2+2b^2-c^2.
\end{equation}
Using the formula (2) we get $-(z^2-y^2-a^2)^2-(4S)^2+4a^2y^2=0$.
Now we use the formula (1) and, after some simplifications,
we get $a^4-(b^2-c^2)^2=0$. Without loss of generality,
we may assume that $b\geqslant c$. This yields $a^2+c^2=b^2$
and, thus, the face (I) is a right-angled triangle.
Moreover, formula (8) now implies that $y^2= b^2$ and
$z^2= 4a^2+c^2$. Hence the simplex $T$ corresponding to
Solution 4 degenerates into a parallelogram with side
lengths $a$, $b$, $x=a$, and $y=b$ and with diagonals of
lengths $c$ and $z=\sqrt{4a^2+c^2}=\sqrt{2a^2+2b^2-c^2}$.
Hence the simplex $T$ is of Type 2. Solutions 6 and 7 are
treated similarly.
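The case analysis above can be double-checked numerically. The following sketch uses the arbitrary sample side lengths $a=2$, $b=3$, $c=4$ (a non-right triangle, so Solutions 4, 6, and 7 should fail) and tests, via Heron's formula, which of the eight rows of the table give four faces of equal area:

```python
# Which of the eight solutions in the table give four equiareal faces?
def heron16(p, q, r):
    """16*(area)^2 of a triangle whose SQUARED side lengths are p, q, r."""
    return 2*p*q + 2*p*r + 2*q*r - p*p - q*q - r*r

a2, b2, c2 = 4, 9, 16                 # a=2, b=3, c=4
x_opts = [a2, 2*b2 + 2*c2 - a2]       # x^2 = a^2  or  2b^2+2c^2-a^2
y_opts = [b2, 2*a2 - b2 + 2*c2]
z_opts = [c2, 2*a2 + 2*b2 - c2]

equiareal = []
for i, x2 in enumerate(x_opts):
    for j, y2 in enumerate(y_opts):
        for k, z2 in enumerate(z_opts):
            # faces (I)=(a,b,c), (II)=(a,y,z), (III)=(b,x,z), (IV)=(c,x,y)
            areas = [heron16(a2, b2, c2), heron16(a2, y2, z2),
                     heron16(b2, x2, z2), heron16(c2, x2, y2)]
            if len(set(areas)) == 1:
                equiareal.append(1 + 4*i + 2*j + k)   # solution number in the table
print(equiareal)   # -> [1, 2, 3, 5]
```

Only Solutions 1, 2, 3, and 5 survive, in agreement with the discussion above: for this triangle the face (I) is not right-angled, so Solutions 4, 6, and 7 drop out, and Solution 8 never corresponds to a simplex.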
Solution 8 does not correspond to any simplex in the Euclidean
3-space (neither degenerate nor non-degenerate).
In fact, we have
\begin{equation}
x^2=2b^2+2c^2-a^2, \quad y^2=2a^2-b^2+2c^2, \quad\mbox{and}\quad z^2=2a^2+2b^2-c^2.
\end{equation}
Using the formula (2) we get $-(z^2-y^2-a^2)^2-(4S)^2+4a^2y^2=0$.
Now we use formula (1) and, after some simplifications, we get
$a^4-(b^2-c^2)^2=0$ or
\begin{equation}
(a^2-b^2+c^2)(a^2+b^2-c^2)=0.
\end{equation}
The geometric meaning of the formula (10) is that either $b$ or $c$ is the hypotenuse
of the right-angled triangle (I) with the sides $a$, $b$, and $c$.
Similarly we can substitute (9) into formulas (3) and (4).
Proceeding as above we get
\begin{gather}
(a^2+b^2-c^2)(-a^2+b^2+c^2)=0,\\
(a^2-b^2+c^2)(-a^2+b^2+c^2)=0.
\end{gather}
The geometric meaning of the formula (11) is that either $c$ or $a$
is the hypotenuse of the right-angled triangle (I) with the sides
$a$, $b$, and $c$.
Similarly, the formula (12) implies that either $b$ or $a$ is
the hypotenuse of the right-angled triangle (I) with the sides
$a$, $b$, and $c$.
But the triangle (I) has only one hypotenuse! Hence the equations
(10)--(12) cannot hold simultaneously.
This means that Solution 8
does not correspond to any simplex.
Now we can conclude that there are no
simplices with equiareal faces that do not belong to Types 1--3.
\hfill{Q.E.D.}
| {
"timestamp": "2009-09-10T04:22:13",
"yymm": "0909",
"arxiv_id": "0909.1859",
"language": "en",
"url": "https://arxiv.org/abs/0909.1859",
"abstract": "We study simplices with equiareal faces in the Euclidean 3-space by means of elementary geometry. We present an unexpectedly simple proof of the fact that, if such a simplex is non-degenerate, than every two of its faces are congruent. We show also that this statement is wrong for degenerate simplices and find all degenerate simplices with equiareal faces.",
"subjects": "Metric Geometry (math.MG)",
"title": "Simplices with equiareal faces",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9783846678676151,
"lm_q2_score": 0.8221891283434876,
"lm_q1q2_score": 0.8044172372587071
} |
https://arxiv.org/abs/1303.4850 | Regular graphs of odd degree are antimagic | An antimagic labeling of a graph $G$ with $m$ edges is a bijection from $E(G)$ to $\{1,2,\ldots,m\}$ such that for all vertices $u$ and $v$, the sum of labels on edges incident to $u$ differs from that for edges incident to $v$. Hartsfield and Ringel conjectured that every connected graph other than the single edge $K_2$ has an antimagic labeling. We prove this conjecture for regular graphs of odd degree. | \section{Introduction}
A \emph{magic square} of order $n$ is an $n\times n$ arrangement of the integers
$\{1,2,\ldots, n^2\}$ so that the sums of the entries in each row, each column,
and along the two main diagonals are equal. These squares were known to the Chinese as
early as the fourth century B.C.\ and have been widely studied in recreational
mathematics~\cite{Gar88}.
A \emph{labeling} of a graph $G$ with $m$ edges is a bijection from $E(G)$ to
$\{1,2,\ldots,m\}$. Given a labeling of a graph, the \emph{vertex sum} at a
vertex $v$ is the sum of the labels on edges incident to $v$.
A labeling is \emph{magic} if all vertex sums are equal. Magic labelings take
their name from their connection with magic squares, since a magic square of order
$n$ naturally gives rise to a magic labeling of the complete bipartite graph
$K_{n,n}$ (vertices in one part correspond to rows of the square, and
vertices in the other correspond to columns). Finally, a labeling of a graph is
\emph{antimagic} if all its vertex sums are different. We call a graph
antimagic (magic) if it has an antimagic (magic) labeling.
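As a toy illustration of these definitions (the graph and the search are ours, not from the paper), a brute-force check that $K_4$, the smallest 3-regular graph, is antimagic:

```python
from itertools import permutations

# Search all labelings of the six edges of K_4 for one with distinct vertex sums.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]

def vertex_sums(labels):
    sums = [0, 0, 0, 0]
    for (u, v), lab in zip(edges, labels):
        sums[u] += lab
        sums[v] += lab
    return sums

labeling = next(p for p in permutations(range(1, 7))
                if len(set(vertex_sums(p))) == 4)
print(labeling, vertex_sums(labeling))   # (1, 2, 3, 4, 5, 6) [6, 10, 12, 14]
```

In fact the very first labeling tried already works, so $K_4$ is comfortably antimagic.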
It is easy to find many graphs that are not magic (for example, forests).
However, graphs that are not antimagic are rare. In fact, Hartsfield and Ringel
conjectured the following.
\begin{conj}[\cite{HR90}]
Every connected graph other than $K_2$ is antimagic.
\end{conj}
Hartsfield and Ringel also explicitly conjectured that all trees other than
$K_2$ are antimagic. Both conjectures remain wide open; however, much progress
has been made. The first major result on antimagic labelings was due to
Alon, Kaplan, Lev, Roditty, and Yuster~\cite{AKLRY04}.
They showed that there exists a constant $c$ such that if $G$ is an $n$-vertex
graph with minimum degree $\delta \ge c\log n$, then $G$ is antimagic.
This proof relies on a combination of combinatorial ideas, probabilistic
tools, and methods from analytic number theory.
They also proved that graphs with maximum degree $\Delta\ge n-2$ are antimagic.
Yilma~\cite{Yil13+} later extended this result to show that graphs with $\Delta\ge n-3$
are antimagic. His proof finds a breadth-first spanning tree $T$ rooted at a vertex
of maximum degree; he labels all edges outside of $T$ first, then uses the
largest $n-1$ labels on $T$ to guarantee an antimagic labeling.
Hefetz~\cite{Hef05} used algebraic tools to show that a graph is antimagic if it has $3^k$
vertices and a $C_3$-factor. Hefetz, Saluz, and Tran~\cite{HST10} generalized
this approach to show that a graph is antimagic if it has $p^k$ vertices and a
$C_p$-factor (where $p$ is an odd prime). Cranston~\cite{Cra09} used Hall's marriage theorem
to show that regular bipartite graphs are antimagic.
Liang and Zhu~\cite{LZ13+} labeled edges in order of decreasing distance from a
central vertex (breaking ties carefully) to show that 3-regular graphs are antimagic.
Perhaps the most interesting result is that of Eccles~\cite{Ecc13+}, who
recently improved on the work of Alon et al. He showed that if a graph has no
isolated edges or vertices and has average degree at least
4468, then it is antimagic.
He conjectures that, under the same first condition,
average degree at least $\sqrt{2}$ implies that a graph is antimagic.
This much stronger conjecture immediately implies Conjecture~1, since a
connected $n$-vertex graph has at least $n-1$ edges, and so for $n\ge 4$
has average degree at least $2(n-1)/n=2-2/n>\sqrt{2}$.
In this note, we prove that every $k$-regular graph with $k$ odd and $k\ge 3$ is
antimagic.
\section{Main Result}
A \emph{trail} is a walk in $G$ that may reuse vertices but may not reuse edges;
a trail is \emph{open} if it starts and ends at distinct vertices, and is
\emph{even (odd)} if its length is even (odd).
For a set of vertices $U$ and a function $\sigma$ we write $\sigma(U)$ to denote
$\{\sigma(u):u\in U\}$. For a subgraph or trail $H$, we write $d_H(v)$ for the
degree of $v$ in $H$.
We begin with an easy decomposition result for bipartite graphs.
\begin{keylemma}
\label{lemma1}
Let $G$ be a bipartite graph with parts $U$ and $W$. There exists a
function $\sigma: U\to E(G)$ and a set $\mathcal{T}=\{T_1,T_2,\ldots\}$ such
that $\sigma(u)$ is incident to $u$ for all $u\in U$ and $\mathcal{T}$ is a
collection of edge-disjoint open trails with at most one trail ending at each
vertex and with $(\bigcup_{T\in\mathcal{T}}E(T))\cap \sigma(U)=\emptyset$ and
$\bigcup_{T\in\mathcal{T}}E(T)\cup \sigma(U) = E(G)$. In other words, we can
partition $E(G)$ into $\mathcal{T}$ and $\sigma(U)$.
\end{keylemma}
\begin{proof}
We first choose $\sigma(U)$ arbitrarily, and let $\widehat{E}=E(G)\setminus
\sigma(U)$. We form a greedy trail decomposition of $\widehat{E}$ as follows. Start at an
arbitrary vertex and keep walking (using unused edges of $\widehat{E}$) as long as possible.
When you reach a vertex with no unused edges, start a trail at another vertex.
Repeat this process until all edges are used up. This gives a decomposition
$\mathcal{T}$ of $\widehat{E}$, but it might contain a closed trail.
Suppose that $\mathcal{T}$ contains a closed trail $T_1$. If any
vertex $v$ of $T_1$ has an open trail $T_2$ that ends at $v$, then we
splice $T_1$ and $T_2$ together, by starting at $v$, following all the edges of
$T_1$, then following the edges of $T_2$.
If no vertex of $T_1$ is the endpoint of an open trail in $\mathcal{T}$,
then choose $u\in U\cap V(T_1)$ arbitrarily. Let $w$ be a successor of $u$ on
$T_1$ and let $v$ be such that $\sigma(u)=uv$. We redefine $\sigma(u):=uw$,
and redefine $T_1:=T_1-uw+uv$.
Now $T_1$ is an open trail, since $d_{T_1}(w)$ is odd.
By repeating this process for each closed trail in $\mathcal{T}$, we reach a collection
of open trails. If any vertex $v$ is the endpoint of at least two open trails,
then we merge them together, by walking along one to end at $v$, then walking
along another starting from $v$. Merging two trails reduces the number of
trails, and ``opening up'' a closed trail (as described above) does not
increase this number. So iterating these merging and opening-up steps
gives the desired partition of $E(G)$ into $\mathcal{T}$ and $\sigma(U)$.
\end{proof}
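A minimal sketch of just the greedy phase of this proof (the choice of $\sigma$ and the splicing and opening-up steps are omitted; the small bipartite graph below is an arbitrary example of ours):

```python
from collections import defaultdict

# Greedy phase of the trail decomposition: repeatedly walk a maximal trail
# using unused edges, starting a new trail whenever the walk gets stuck.
def greedy_trails(edges):
    adj = defaultdict(set)
    for idx, (u, v) in enumerate(edges):
        adj[u].add(idx)
        adj[v].add(idx)
    unused = set(range(len(edges)))
    trails = []
    while unused:
        cur = edges[min(unused)][0]        # start at an endpoint of some unused edge
        trail = []
        while True:
            avail = adj[cur] & unused
            if not avail:
                break
            e = min(avail)
            unused.discard(e)
            u, v = edges[e]
            cur = v if cur == u else u     # cross the edge
            trail.append(e)
        trails.append(trail)
    return trails

# Small bipartite example with parts U = {0, 1, 2} and W = {4, 5}.
edges = [(0, 4), (0, 5), (1, 4), (1, 5), (2, 4)]
trails = greedy_trails(edges)
print(trails)   # [[0, 2, 3, 1], [4]]
```

Note that the first trail produced here is closed ($0\to4\to1\to5\to0$); this is exactly the situation that the opening-up step of the lemma repairs.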
Now we prove our main result.
Our proof builds heavily on that of Liang and Zhu~\cite{LZ13+}, who showed that
3-regular graphs are antimagic.
\begin{theoremA}
Every $k$-regular graph with $k$ odd and $k\ge 3$ is antimagic.
\end{theoremA}
\begin{proof}
Suppose that $G$ and $H$ are both antimagic $k$-regular graphs and that
$|E(G)|=m$.
Given antimagic labelings for $G$ and $H$, we get an antimagic labeling
of $G\cup H$ by increasing the label on each edge of $H$ by $m$.
Thus, we need only consider connected graphs.
Choose an arbitrary vertex $v^*$ and let $V_i$ denote the set of vertices at
distance exactly $i$ from $v^*$; let $p$ be the furthest distance of a vertex
from $v^*$. Let $G[V_i]$ denote the subgraph induced by $V_i$ and
$G[V_i,V_{i-1}]$ denote the induced bipartite subgraph with parts
$V_i$ and $V_{i-1}$.
For each $i$, we apply the \hyperref[lemma1]{Helpful Lemma} to $G[V_i,V_{i-1}]$ with $U=V_i$
and $W=V_{i-1}$ to get a partition of $E(G[V_i,V_{i-1}])$ into an edge set
$\sigma(V_i)$ and a collection of edge-disjoint open trails. Let
$G_{\sigma}[V_i,V_{i-1}] = G[V_i,V_{i-1}]\setminus \sigma(V_i)$. Let
$E_i=E(G[V_i])$, let $E'_i=E(G_{\sigma}[V_i,V_{i-1}])$, and let
$E''_i=\sigma(V_i)$; note that $E'_i$ and $E''_i$ partition $E(G[V_i,V_{i-1}])$.
Given a labeling $f$ of the edges, we denote the total sum of labels on edges
incident to vertex $v$ by $t(v)=\sum_{e\in E(v)}f(e)$, where $E(v)$ denotes
the set of edges incident to $v$. Similarly, we denote the partial sum at $v$
(omitting the label on $\sigma(v)$) by
$p(v)=\sum_{e\in E(v)\setminus\{\sigma(v)\}}f(e)=t(v)-f(\sigma(v))$.
We now outline the proof. We will label the edges in the order $E_p, E'_p,
E''_p, \ldots, E_1, E'_1, E''_1$, using the smallest unused labels on each edge
set when we come to it. In other words, we use the $|E_p|$ smallest labels on
$E_p$, the $|E'_p|$ next smallest labels on $E'_p$, the $|E''_p|$ next smallest
labels after that on $E''_p$, etc. (Note that the labels assigned to each of
these edge sets span an interval.) This label assignment immediately gives
that if $i\ge j+2$ and $u\in V_i$ and $w\in V_j$, then $t(u)<t(w)$ since
$G$ is regular and the edges incident to $u$ have smaller labels than the edges
incident to $w$. Thus, we need only ensure that $t(u)\ne t(w)$ when
either (i) $u,w\in V_i$ or (ii) $u\in V_i$ and $w\in V_{i-1}$. We handle these
two cases by specifying more precisely how to assign the label to each edge of
these $3p$ edge sets.
We label the edges of each $E_i$ arbitrarily from its assigned labels.
We now specify how to label each $E''_i$; in the process, we handle Case (i).
Suppose that for some $i$, we have already labeled the edges of $E_p, E'_p,
E''_p, \ldots, E_i, E'_i$. As a result, $p(u)$ is already determined for each
$u\in V_i$. We may name the vertices of $V_i$ as $u_1, u_2, u_3, \ldots$ so
that $p(u_1)\le p(u_2) \le p(u_3) \le \cdots$. Now we use the smallest label
for $E''_i$ on $\sigma(u_1)$, the next smallest on $\sigma(u_2)$, etc. This
ensures that $t(u_j)<t(u_{j+1})$ for all $u_j\in V_i$.
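Ties in the partial sums are harmless here: since the labels reserved for $E''_i$ are distinct and assigned in increasing order, the totals come out strictly increasing. A toy illustration (the numbers are invented):

```python
# Vertices of V_i sorted by partial sum p(u); labels of E''_i in increasing order.
partial = [7, 7, 9, 12]          # p(u_1) <= p(u_2) <= ...  (note the tie)
labels = [20, 21, 22, 23]        # labels reserved for E''_i
totals = [p + lab for p, lab in zip(partial, labels)]
print(totals)   # [27, 28, 31, 35] -- strictly increasing despite the tie
assert all(a < b for a, b in zip(totals, totals[1:]))
```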
Finally, we specify how to label each $E'_i$; in the process, we handle Case
(ii). That is, we ensure that if $u\in V_i$ and $w\in V_{i-1}$, then
$t(u)\ne t(w)$. Let $\{s,s+1,\ldots,\ell-1,\ell\}$ be the set of labels to be
used on $E'_i$. Recall that $G$ is $k$-regular for odd
$k\ge 3$, and let $t=(k-1)/2$. We will ensure that $p(u)\le t(s+\ell)$ and that
$p(w)\ge t(s+\ell)$. Now since $f(\sigma(u))<f(\sigma(w))$, we get that $t(u)<t(w)$.
The details follow.
Let $\mathcal{T}$ be the set of open trails partitioning $E'_i$ (from the
\hyperref[lemma1]{Helpful Lemma}). Again, let
$\{s,s+1,\ldots,\ell-1,\ell\}$ be the labels assigned to $E'_i$. We label
each trail so that every pair of successive labels (on a trail) incident to a
vertex $u\in V_i$ has sum at most $s+\ell$
and each pair of successive labels incident to a vertex $w\in V_{i-1}$
has sum at least $s+\ell$.
This ensures that $p(u)\le t(s+\ell)$ and $p(w)\ge t(s+\ell)$.
We first label each even trail, then label the odd trails, taken together in
pairs (possibly with a single odd trail last).
Suppose that we have already labeled some even number $2r$ of edges in the set
$E'_i$ and the remaining labels available for this edge set are $\{s+r,
s+r+1,\ldots, \ell-r-1,\ell-r\}$. We have three possibilities. (1) Suppose first
that $T\in \mathcal{T}$ is an
even trail with both endpoints in $V_{i-1}$. We assign the labels: $s+r,
\ell-r,s+r+1,\ell-r-1,\ldots$ successively along the trail. Now every two successive
edges incident to $u\in V_i$ have sum $s+\ell$ and every two successive edges
incident to $w\in V_{i-1}$ have sum $s+\ell+1$. (2) Suppose instead that $T\in
\mathcal{T}$ is an even trail with both endpoints in $V_i$. Now we assign
the labels: $\ell-r, s+r, \ell-r-1, s+r+1, \ldots$ successively along the trail.
Now every two successive edges incident to $u\in V_i$ have sum $s+\ell-1$ and
every two successive edges incident to $w\in V_{i-1}$ have sum $s+\ell$. (3)
Finally, suppose that $T_1, T_2\in \mathcal{T}$ are odd trails with lengths
$2a+1$ and $2b+1$. Beginning at a vertex in $V_i$, we label the edges of $T_1$
with $\ell-r, s+r, \ell-r-1, \ldots, s+r+a-1, \ell-r-a$. Here the successive pairs
of labels incident to $u\in V_i$ sum to $s+\ell-1$ and the pairs incident to
$w\in V_{i-1}$ sum to $s+\ell$. Finally, beginning at a vertex in $V_{i-1}$,
we label the edges of $T_2$ with $s+r+a, \ell-r-a-1, s+r+a+1, \ldots, s+r+a+b$.
Again the successive pairs incident to $u\in V_i$ sum to $s+\ell-1$ and the
successive pairs incident to $w\in V_{i-1}$ sum to $s+\ell$.
If we have a single odd trail left at the end, we treat it like a trail of
length $2a+1$ above.
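The alternating pattern of case (1) can be sketched concretely (the values $s=10$, $\ell=19$, $r=0$ and the trail length 6 are hypothetical sample choices of ours):

```python
# Case (1): labels s+r, l-r, s+r+1, l-r-1, ... along an even trail whose
# endpoints lie in V_{i-1}; inner vertices alternate between V_i and V_{i-1}.
def case1_labels(s, l, r, length):
    lo, hi, out = s + r, l - r, []
    for pos in range(length):
        if pos % 2 == 0:
            out.append(lo); lo += 1
        else:
            out.append(hi); hi -= 1
    return out

s, l = 10, 19
labels = case1_labels(s, l, 0, 6)
pair_sums = [labels[p] + labels[p + 1] for p in range(5)]
print(labels)      # [10, 19, 11, 18, 12, 17]
print(pair_sums)   # [29, 30, 29, 30, 29]: s+l at V_i vertices, s+l+1 at V_{i-1}
```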
All that remains is to verify that for $u\in V_i$ and $w\in V_{i-1}$ we
have $p(u)\le t(s+\ell)$ and $p(w)\ge t(s+\ell)$. We consider $p(u)$, and the
analysis for $p(w)$ is nearly identical. Recall that $d_G(u)=k=2t+1$. If
$d_{E'_i}(u)=2t$, then the desired inequality holds, since each of the $t$
pairs of successive edges on trails through $u$ have label sum at most $s+\ell$.
If $u$ is the end of some trail $T$ in $\mathcal{T}$, then let $e$ be the final edge of
$T$ incident to $u$; note that $f(e)\le \ell$. But now, we have $d_{E'_i}(u)$ is
odd, so $u$ has some incident edge (in fact, an odd number of them) in
$E'_{i+1}\cup E''_{i+1}\cup E_i$; this edge has label less than $s$. Thus, the
sum of this label and $f(e)$ is less than $s+\ell$. If $u$ has additional
incident edges in $E'_{i+1}\cup E''_{i+1}\cup E_i$, then each edge has label
less than $s$; thus, each pair of these edges has label sum less than $s+\ell$.
So $p(u)\le t(s+\ell)$, as desired. For each $w\in V_{i-1}$, the analysis to
show that $p(w)\ge t(s+\ell)$ is nearly identical to that above; the only
difference is that all edges incident to $w$ that are not in $E'_i$ are in
$E''_i\cup E_{i-1}\cup E'_{i-1}$, so each such edge has label larger than $\ell$.
This completes the proof.
\end{proof}
We remark in closing that the proof easily translates to an efficient
(polynomial time) algorithm to find an antimagic labeling.
We thank Mike Barrus for his careful reading of this manuscript and detailed
feedback.
| {
"timestamp": "2013-03-21T01:01:13",
"yymm": "1303",
"arxiv_id": "1303.4850",
"language": "en",
"url": "https://arxiv.org/abs/1303.4850",
"abstract": "An antimagic labeling of a graph $G$ with $m$ edges is a bijection from $E(G)$ to $\\{1,2,\\ldots,m\\}$ such that for all vertices $u$ and $v$, the sum of labels on edges incident to $u$ differs from that for edges incident to $v$. Hartsfield and Ringel conjectured that every connected graph other than the single edge $K_2$ has an antimagic labeling. We prove this conjecture for regular graphs of odd degree.",
"subjects": "Combinatorics (math.CO)",
"title": "Regular graphs of odd degree are antimagic",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9838471665987074,
"lm_q2_score": 0.8175744739711884,
"lm_q1q2_score": 0.8043683296999824
} |
https://arxiv.org/abs/1902.09146 | Higher order Jacobians, Hessians and Milnor algebras | We introduce and study higher order Jacobian ideals, higher order and mixed Hessians, higher order polar maps, and higher order Milnor algebras associated to a reduced projective hypersurface. We relate these higher order objects to some standard graded Artinian Gorenstein algebras, and we study the corresponding Hilbert functions and Lefschetz properties. | \section{Introduction}
In Algebraic Geometry and Commutative Algebra, the {\it Jacobian ideal} of a homogeneous reduced form $f \in R=\mathbb{C}[x_0,\ldots,x_n]$, denoted by $J(f )= (\frac{\partial f}{\partial x_0}, \ldots, \frac{\partial f}{\partial x_n})$, plays several key roles.
Let $X=V(f) \subset \mathbb{P}^n$ be the associated hypersurface in the projective space.
The linear system associated to the Jacobian ideal defines the {\it polar map} $\varphi_X:\mathbb{P}^n \dashrightarrow \mathbb{P}^n$, also called the {\it gradient map}, whose image is the {\it polar image} of $X$, denoted by $Z_X = \overline{\varphi(\mathbb{P}^n)}$. The restriction of the polar map to the hypersurface is the {\it Gauss map} of $X$, $\mathcal{G}_X=\varphi_X|X$, whose image is the {\it dual variety} of $X$.
The base locus of these maps is the singular scheme of the hypersurface $X$, see \cite{Ru}. The {\it Milnor algebra} of $f$, also called the {\it Jacobian ring} of $f$, is the quotient $M(f )= R/J(f)$. This graded algebra is closely related to the Hodge filtration on the cohomology of $X$ and the period map, see \cite{DSa,Gr,Se}. {\it The aim of this paper is to construct higher order versions of these
classical objects, to make explicit some relations among them, and to extend some classical results to this higher order context.}
In the second section we give some definitions of Artinian Gorenstein algebras, Hessians, Lefschetz properties, Jacobian ideals and Milnor algebras, and review some known results. The {\it standard Artinian Gorenstein algebra} $A(f)$, associated to $f$, is given by Macaulay Matlis duality: the ring $Q = \mathbb{C}[X_0,\ldots,X_n]$ acts on $R$ via the identification $X_i =\frac{\partial }{\partial x_i}$, and we define $A(f )= Q/\operatorname{Ann}(f)$, see \cite{MW}.
Since the Jacobian matrix associated to the polar map is the {\it Hessian matrix} of $f$, see \cite[Chapter 7]{Ru}, one gets that $\varphi_X$ is a dominant map, that is $Z _X = \mathbb{P}^n$, if and only if the {\it Hessian determinant} $\operatorname{hess}_f \ne 0$. A description of the forms $f$ with $\operatorname{hess}_f = 0$ is given by the Gordan-Noether criterion, and can be found, for example, in \cite{CRS, GR, Go, GRu, Ru}.
The new results in the second section are Proposition \ref{propSLP1} and Proposition \ref{propSLP2},
dealing with the Lefschetz properties of smooth cubic surfaces in $\mathbb{P}^3$ and smooth quartic curves in $\mathbb{P}^2$.
In the third section we introduce the higher order Jacobian ideals $J^k(f)$ and the corresponding Milnor algebras $M^k(f)$. The Gauss map $\mathcal{G}$ of a smooth hypersurface is a birational morphism, see for instance \cite{GH,Z}.
The natural $k$-th order version of smoothness is the hypothesis that all the points of $X$ have multiplicity at most $k$. In Theorem \ref{T1} we prove that this condition is equivalent to the $k$-th order Milnor algebra $M^k(f)$ being Artinian, generalizing a classical result for non singular hypersurfaces, and a second order result that can be found in \cite{DSt1}. We also discuss when the Hessian of the form $f$ belongs to the Jacobian ideal $J(f)$, see Proposition \ref{P1.5} and Question \ref{questionHESS}.
In the fourth section we first show that the $k$-th order Milnor algebra $M^k(f)$ determines the hypersurface $V(f)$ up to projective equivalence, for a generic $f$ and any
$k\leq d/2-1$, see Theorem \ref{thmZW2}. In the rather long Example \ref{ex3}, we look at quartic curves in $\mathbb{P}^2$, both smooth and singular, and we compute the Hilbert functions for our graded algebras $M^k(f)$ and $A(f)$
as well as the minimal resolutions as a graded $Q$-module for $A(f)$ in many cases.
In the fifth section,
we construct the $k$-th polar map $\varphi^k_X$ and we prove, in Theorem \ref{thm:polarrankhess}, a higher order version of the Gordan-Noether criterion for the degeneracy of this $k$-th polar map. In this setting, we use the mixed Hessians developed in \cite{GZ}, generalizing the higher order Hessians introduced in \cite{MW}. In Corollaries \ref{cor:polar1} and \ref{corHess10}, we give sufficient conditions for the non-degeneracy of $\varphi^k_X$, and Theorem \ref{T2} also gives some information about the degree of the $k$-th polar map.
In \cite{D}, the author showed that the natural higher order related dual map, $\psi^k_X = \varphi^k_X|X$, is a finite map.
In Theorem \ref{T2}, we assume that the $k$-th Milnor algebra is Artinian to prove that $\varphi^k_X$ is finite.
\bigskip
We would like to thank the referee for his very careful reading of our manuscript and for his suggestions which greatly improved the presentation of our results.
\section{Preliminaries}
\subsection{Artinian Gorenstein algebras and mixed Hessians}
In this section we give a brief account of Artinian Gorenstein algebras and Macaulay-Matlis duality.
\begin{defin}\rm Let $R = \mathbb{C}[x_0,\ldots,x_n] $ be a polynomial ring with the usual grading and $I \subset R$ be a homogeneous Artinian ideal, and suppose, without loss of generality, that $I_1=0$.
Then the graded Artinian $\mathbb{C}$-algebra $A=R/I = \displaystyle\bigoplus_{i=0}^dA_i$ is standard, i.e. it is generated in degree $1$ as an algebra. Since $A$ is Artinian, under the hypothesis $I_1=0$, we call $n+1$ the codimension of $A$, by abuse of notation. Setting $h_i(A)=\dim_\mathbb{C} A_i$, the \emph{Hilbert vector} of $A$ is $\operatorname{Hilb}(A)=(1,h_1(A),\dots,h_d(A))$. The Hilbert vector is sometimes conveniently expressed as the \emph{Hilbert function} of $A$, given by the formula
\begin{equation}
\label{HF}
H(A,t)=\sum_{k=0}^dh_k(A)t^k.
\end{equation}
\end{defin}
The Hilbert vector $\operatorname{Hilb}(A)$ is said to be \emph{unimodal} if there exists an integer $t\ge 1$ such that $1\le h_1(A) \le \dots\le h_t(A)\ge h_{t+1}(A) \ge\dots\ge h_d(A).$ Moreover the Hilbert vector $\operatorname{Hilb}(A)=(1,h_1(A),\dots,h_d(A))$ is said to be \emph{symmetric} if $h_{d-i}(A)=h_i(A)$ for every $i=0,1,\dots,\lfloor\frac{d}{2}\rfloor$. The next definition is based on a well-known equivalence that can be found in \cite[Prop. 2.1]{MW}.
\begin{defin}\rm
A standard graded Artinian algebra $A$ as above is Gorenstein if and only if $h_d(A)= 1$ and the restriction of the multiplication of the algebra to complementary degrees, that is $A_k \times A_{d-k} \to A_d$, is a perfect pairing for $k =0,1,\ldots,d$, see \cite{MW}. If $A_j=0$ for $j >d$, then $d$ is called the \emph{socle degree} of $A$.
\end{defin}
It follows that the Hilbert vector $\operatorname{Hilb}(A)$ of a graded Artinian Gorenstein $\mathbb{C}-$algebra $A$ is symmetric. The converse is not true, and $\operatorname{Hilb}(A)$ is not always unimodal for $A$ Artinian Gorenstein.
\begin{ex}\rm
The first example of a non unimodal Hilbert vector $\operatorname{Hilb}(A)$ of a Gorenstein algebra $A$ was given by Stanley in \cite{St}, namely
$$(1,13,12,13,1).$$
This algebra $A$ has codimension $13$ and socle degree $4$.
In \cite{BI} we can find the first known example of a non unimodal Gorenstein Hilbert function in codimension $5$, namely
$$(1,5, 12 , 22 , 35 , 51 , 70, 91 , 90 , 91 , 70 , 51 , 35 , 22 , 12 , 5 , 1).$$
All Gorenstein $h$-vectors are unimodal in codimension $\leq 3$, see \cite{St}. To the best of the authors' knowledge, it is not known whether there is a non unimodal Hilbert vector of a Gorenstein algebra in codimension $4$, see \cite{MN1}.
\end{ex}
Since our approach is algebro-geometric-differential, we recall a differentiable version of the Macaulay-Matlis duality which is equivalent to polarity in characteristic zero. We denote by $R_d=\mathbb{C}[x_0,\ldots,x_n]_d$ the $\mathbb{C}-$vector space of homogeneous polynomials of degree $d$. We denote by $Q=\mathbb{C}[X_0,\ldots,X_n]$ the ring of differential operators of $R$, where $X_i := \frac{\partial}{\partial x_i}$ for $i=0,\ldots,n.$ We denote by $Q_k=\mathbb{C}[X_0,\ldots,X_n]_k$ the $\mathbb{C}-$vector space of homogeneous differential operators of $R$ of degree $k$.\\ For each
integer $k$, with $d\geq k\geq 0$ there exist natural $\mathbb{C}-$bilinear maps $R_d\times Q_k \to R_{d-k}$ defined by differentiation: $$(f,\alpha) \to f_\alpha := \alpha(f).$$
Let $f\in R$ be a homogeneous polynomial of degree $\deg f=d\geq 1$, we define the \emph{annihilator ideal of $f$} by
$$\operatorname{Ann} (f) :=\left\{\alpha\in Q | \alpha(f)=0\right\}\subset Q.$$
Note that $\operatorname{Ann}(f)_1 \ne 0$ if and only if $X = V(f) \subset \mathbb{P}^n$ is a cone, that is, up to a linear change of coordinates, the polynomial $f$ depends only on $x_1, \ldots,x_n$. {\it We assume from now on that $V(f)$ is not a cone, and hence that $\operatorname{Ann}(f)_1= 0.$}
Since $\operatorname{Ann}(f)$ is a homogeneous ideal of $Q$, we can define $$A(f)=\frac{Q}{\operatorname{Ann}(f)}.$$
Then $A(f)$ is the standard graded Artinian Gorenstein $\mathbb{C}$-algebra associated to $f$, given by the Macaulay-Matlis duality, and it satisfies
$$\begin{cases} A(f)_j=0 \mbox{ for } j >d\\ A(f)_d=\mathbb{C} \end{cases}.$$ A proof of this result can be found in \cite[Theorem 2.1]{MW}.
\begin{ex} \label{exFer}\rm
Take $f_F=x_0^d+...+x_n^d$, the Fermat type polynomial of degree $d$. In this case the ideal $\operatorname{Ann}(f)$ is generated by $X_iX_j$ for $0 \leq i <j \leq n$
and by $X_0^d-X_j^d$ for $j=1,2,...,n$. The graded part of degree $k$ of $A$ is $A_k = \langle X_0^k,X_1^k,\ldots,X_n^k\rangle$ for $k=1,\ldots,d-1$, and $A_j=\langle X_0^j\rangle $ for $j=0$ and $j=d$. This determines the Hilbert vector $$\operatorname{Hilb}(A(f_F))=(1,n+1, \ldots,n+1,1).$$
\end{ex}
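The claim that these operators annihilate $f_F$ is easy to verify mechanically; the following sketch does so for the sample values $n=2$, $d=4$, encoding a polynomial as a dictionary mapping exponent tuples to coefficients:

```python
from itertools import combinations

# Check that X_i X_j (i<j) and X_0^d - X_j^d annihilate f = x0^d + x1^d + x2^d.
n, d = 2, 4
f = {}
for i in range(n + 1):
    e = [0] * (n + 1)
    e[i] = d
    f[tuple(e)] = 1

def diff(poly, var, times=1):
    """Apply (d/dx_var)^times to poly."""
    for _ in range(times):
        new = {}
        for exps, c in poly.items():
            if exps[var] > 0:
                e = list(exps)
                e[var] -= 1
                new[tuple(e)] = new.get(tuple(e), 0) + c * exps[var]
        poly = new
    return poly

def sub(p, q):
    """p - q as polynomials, dropping zero coefficients."""
    r = dict(p)
    for exps, c in q.items():
        r[exps] = r.get(exps, 0) - c
        if r[exps] == 0:
            del r[exps]
    return r

for i, j in combinations(range(n + 1), 2):     # mixed second partials vanish
    assert diff(diff(f, i), j) == {}
for j in range(1, n + 1):                      # X_0^d f = X_j^d f = d!
    assert sub(diff(f, 0, d), diff(f, j, d)) == {}
print("all listed generators annihilate f")
```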
\begin{defin}
Let $A=\displaystyle{\oplus_{i=0}^d}A_i$ be an Artinian graded $\mathbb{C}-$algebra with $A_d\neq 0$.
\begin{enumerate}
\item The algebra $A$ is said to have the Weak Lefschetz property, briefly WLP, if there exists an element $L\in A_1$ such that the multiplication map $\bullet L: A_i\to A_{i+1}$ is of maximal rank for $0\leq i \leq d-1$.
\item The algebra $A$ is said to have the Strong Lefschetz property, briefly SLP, if there exists an element $L\in A_1$ such that the multiplication map $L^k: A_i\to A_{i+k}$ is of maximal rank for $0\leq i\leq d$ and $0\leq k\leq d-i$.
\item We say that $A$ has the Strong Lefschetz property in the narrow sense, if there is $L \in A_1$ such that the linear map $\bullet L^{d-2k}: A_k \to A_{d-k}$ is an isomorphism for all $k\leq d/2$.
\end{enumerate}
\end{defin}
\begin{rmk}
In the case of standard graded Artinian Gorenstein algebras, the two conditions SLP and SLP in the narrow sense are equivalent.
\end{rmk}
\begin{ex}\label{ex:monomial}\rm This example is due to Stanley \cite{St} and Watanabe \cite{Wa3}. It is considered to be the starting point of the research area of Lefschetz properties for graded algebras. Nowadays there are many different proofs of it.
Consider the graded Artinian Gorenstein algebra
$$A = \frac{\mathbb{C}[X_0,\ldots,X_n]}{(X_0^{a_0},\ldots,X_n^{a_n})} = \frac{\mathbb{C}[X_0]}{(X_0^{a_0})}\otimes \ldots \otimes \frac{\mathbb{C}[X_n]}{(X_n^{a_n})},$$
with integers $a_i>0$ for all $i=0,\ldots,n$.
It is a monomial complete intersection. Since the cohomology of the complex projective space is $H^*(\mathbb{P}^m,\mathbb{C})=\mathbb{C}[x]/(x^{m+1})$, and the cohomology of a product of spaces is the tensor product of the cohomologies of the factors by the K\"unneth Theorem for cohomology, we have:
$$H^*(\mathbb{P}^{a_0-1}\times \ldots \times \mathbb{P}^{a_n-1},\mathbb{C})=\frac{\mathbb{C}[X_0,\ldots,X_n]}{(X_0^{a_0},\ldots,X_n^{a_n})}. $$
By the Hard Lefschetz Theorem applied to the smooth projective variety $\mathbb{P}^{a_0-1}\times \ldots \times \mathbb{P}^{a_n-1}$, we know that $A$ has the SLP.
\end{ex}
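For the smallest nontrivial instance of this example, $A=\mathbb{C}[x,y]/(x^2,y^3)$ with socle degree $d=3$, the SLP in the narrow sense can also be verified by brute force: write the multiplication maps $\bullet L^{d-2k}$ in the monomial basis and check that they have full rank. A small sympy sketch (our own check; the Lefschetz element $L=x+y$ is one convenient choice):

```python
import sympy as sp
from itertools import product

x, y = sp.symbols('x y')
a = (2, 3)                       # A = C[x,y]/(x^2, y^3)
d = sum(e - 1 for e in a)        # socle degree, here 3
L = x + y                        # candidate Lefschetz element

def basis(k):
    # monomial basis of A_k: x^i y^j with i < 2, j < 3, i + j = k
    return [x**i * y**j for i, j in product(range(a[0]), range(a[1]))
            if i + j == k]

def mult_rank(power, k):
    # rank of (.L^power): A_k -> A_{k+power}; monomials divisible by
    # x^2 or y^3 vanish in A, so we only sample coefficients on basis(k+power)
    src, tgt = basis(k), basis(k + power)
    rows = [[sp.Poly(sp.expand(L**power * m), x, y).coeff_monomial(t)
             for t in tgt] for m in src]
    return sp.Matrix(rows).rank()

# SLP in the narrow sense: (.L^{d-2k}): A_k -> A_{d-k} is an isomorphism
ranks = [(k, mult_rank(d - 2 * k, k), len(basis(k)))
         for k in range(d // 2 + 1)]
print(ranks)   # [(0, 1, 1), (1, 2, 2)]: full rank at every level
```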
The standard graded Artinian Gorenstein algebra $A(f)$ associated to a form $f$ is a natural model for the cohomology algebras of spaces in several categories. For smooth projective varieties, the Hard Lefschetz theorem inspired what is now called Lefschetz properties for
the algebra $A(f)$. As we show below, the geometric properties of the higher order objects introduced in this paper are intrinsically connected with such Lefschetz properties, see Theorem \ref{thm:polarrankhess}.
\subsection{Hessians and Lefschetz properties}
We recall the following classical results involving the usual Hessian.
Cones are trivial examples of forms with vanishing Hessian; they are characterized by the fact that $Z_X\subset H = \mathbb{P}^{n-1} \subset \mathbb{P}^n$ is a degenerate variety. Hesse claimed in \cite{He} that a reduced hypersurface has vanishing Hessian if and only if it is a cone. Gordan and Noether proved that the claim is true for $n \leq 3$
and false for $n \geq 4$ and this is part of the so called Gordan-Noether theory, see \cite{GN, CRS,Wa3, GR, Go, Ru}.
More precisely, let $f \in R_d$ be a reduced form and let $X = V(f) \subset \mathbb{P}^n $ be the associated hypersurface. Consider the polar map associated to $f$:
$$
\varphi_X:\mathbb{P}^n\dasharrow(\mathbb{P}^n)^{*}.$$
It is also called the gradient map of $X=V(f)\subset \mathbb{P}^n$, and it is defined by
$$\varphi_X(p)=(f_{x_0}(p):\cdots :f_{x_n}(p)),$$
where $f_{x_i}= \frac{\partial f}{\partial x_i}$.
The image $Z=Z_X$ of $\mathbb{P}^n$ under the polar map $\varphi_X$ is called the polar image of $X$.
\begin{prop} \label{prop:GNcriteria} \cite{GN} Let $f\in \mathbb{C}[x_0,\ldots,x_n]$ be a reduced polynomial and consider $X = V(f) \subset \mathbb{P}^n$. Then
\begin{enumerate}
\item[(i)] $X$ is a cone if and only if $Z\subset H = \mathbb{P}^{n-1}$ is degenerate, which is equivalent to saying that $ f_{x_0},\ldots,f_{x_n} $ are linearly dependent;
\item[(ii)] $\operatorname{hess}_f=0$ if and only if $Z \subsetneq \mathbb{P}^n$, or equivalently $f_{x_0},\ldots,f_{x_n}$ are algebraically dependent.
\end{enumerate}
\end{prop}
\begin{thm}\label{thm:GN} \cite{GN} Let $X = V(f) \subset \mathbb{P}^n$, $n \leq 3$, be a hypersurface such that $\operatorname{hess}_f=0$. Then $X$ is a cone.
\end{thm}
\begin{thm}\label{thm:GN2} \cite{GN} For each $n \geq 4$ and $d \geq 3$ there exist irreducible hypersurfaces $X = V(f) \subset \mathbb{P}^n$,
of degree $\deg(f) = d$, not cones, such that $\operatorname{hess}_f=0$.
\end{thm}
Now we recall a generalization of a construction that can be found in \cite{MW}. Set $A=A(f)$, let $k\le l$ be two integers, take $L\in A_1$ and let us consider the linear map
$$\bullet L^{l-k}: A_k\to A_l.$$
Let $\mathcal{B}_k=(\alpha_1,\ldots,\alpha_r)$ be a basis of the vector space $A_k$, and
$\mathcal{B}_l=(\beta_1,\ldots,\beta_s)$ be a basis of the vector space $A_l.$
\begin{defin}
\label{defMH}
We call the mixed Hessian of $f$ of mixed order $(k,l)$ with respect to the bases $\mathcal{B}_k$ and $\mathcal{B}_l$ the matrix:
$$\operatorname{Hess}_f^{(k,l)}:=[ \alpha_i\beta_j(f)].$$
Moreover, we define $\operatorname{Hess}_f^k=\operatorname{Hess}_f^{(k,k)}$, $\operatorname{hess}_f^k = \det(\operatorname{Hess}_f^k)$ and $\operatorname{hess}_f=\operatorname{hess}_f^1$.
\end{defin}
Note that $A(f)_1=Q_1$ by our assumption, which implies that $\operatorname{Hess}_f=\operatorname{Hess}_f^1$ is the usual Hessian
matrix of the polynomial $f$ and $\operatorname{hess}_f$ is the usual Hessian of $f$.
Since $A$ is Gorenstein, there is an isomorphism $A_k^* \simeq A_{d-k} $. Therefore, given the basis
$\mathcal{B}_k=(\alpha_1,\ldots,\alpha_r)$ of $A_k$ and a generator $\theta$ of $A_d \simeq \mathbb{C}$, we get the dual basis
$\mathcal{B}^*_k=(\alpha^*_1,\ldots,\alpha^*_r)$ of $A_{d-k}$, characterized by
$$\alpha^*_i\alpha_j(f)=\delta_{ij}\theta.$$
\begin{defin}
We call the dual mixed Hessian matrix the matrix
$$\operatorname{Hess}^{(k^*,l)}(f):=[\alpha_i^*\beta_j(f)].$$
\end{defin}
Note that $\operatorname{rk} \operatorname{Hess}^{(k^*,l)} = \operatorname{rk} \operatorname{Hess}^{(d-k,l)}$.
If $L=a_0X_0+\ldots+a_nX_n \in Q_1,$
we set $L^{\perp}=(a_0,\ldots,a_n) \in \mathbb{C}^{n+1}.$
The next result can be found in \cite{GZ2} and it is a generalization of the main result of \cite{MW}.
\begin{thm}\cite{GZ2} \label{thm:generalization}
With the previous notation, let $M$ be the matrix associated to the map $\bullet L^{l-k}:A_k \to A_l$ with respect
to the bases $\mathcal{B}_k$ and $\mathcal{B}_l.$ Then
$$M=(l-k)!\operatorname{Hess}^{(l^*,k)}(f)(L^{\perp}).$$
\end{thm}
\begin{cor} \label{cor1}
For a generic $L$, one has the following.
\begin{enumerate}
\item The map $\bullet L^{d}:A_0 \to A_d$ is an isomorphism.
\item The map $\bullet L^{d-2}:A_1 \to A_{d-1}$ is an isomorphism if and only if $\operatorname{hess}_f \ne 0$.
\item If $d=2k$ is even, then $\operatorname{hess}_f^k \neq 0$.
\item $A$ has the SLP if and only if $\operatorname{hess}^k_f \neq 0$ for all $k\leq d/2$.
\end{enumerate}
\end{cor}
Using Theorem \ref{thm:GN} and Theorem \ref{thm:generalization} we get the following.
\begin{cor}\label{cor:lowdeg}
All standard graded Artinian Gorenstein algebras $A$ with $\operatorname{codim} A \leq 4$ and socle degree $3$ or $4$ have the SLP.
\end{cor}
\begin{cor}\cite{Go} For each pair $(n,d)\not\in\{(3,3), (3,4)\}$ with $n \geq 3$ and with $d \geq 3$, there exist standard graded
Artinian Gorenstein algebras $A = \displaystyle \oplus_{i=0}^d A_i$ of codimension $\dim A_1=n+1 \geq 4$ and socle degree $d$ that do not satisfy the Strong
Lefschetz Property. Furthermore, for each $L \in A_1$ we can choose arbitrarily the level $k$ where the map
$$\bullet L^{d-2k} : A_k \to A_{d-k}$$
is not an isomorphism.
\end{cor}
\begin{rmk}\rm For algebras of codimension $2$, the SLP always holds. Therefore, for Gorenstein algebras of codimension $2$, the higher Hessians do not vanish.
This result is a first step towards generalizing Theorem \ref{thm:GN}.
The issue is that in codimension $3$ the problem is open, that is, we do not know if there is an AG algebra failing the SLP.
A generalization of Theorem \ref{thm:GN2} can be found in \cite{Go}. In this work we give a generalization of Proposition \ref{prop:GNcriteria}, see Theorem \ref{thm:polarrankhess}.
\end{rmk}
\subsection{Jacobian ideals and Milnor algebras}
Let $R=\mathbb{C}[x_0,\ldots,x_n]$ be the polynomial ring in $n+1$ variables with complex coefficients, endowed with the usual grading.\\ Let $f\in R_d$ be a homogeneous polynomial of degree $d$ such that the hypersurface $X=V(f)\subset \mathbb{P}^n$ is reduced. Let $J(f)$ be the Jacobian ideal of $f$, generated by the partial derivatives $f_{x_i}$ of $f$ with respect to $x_i$, for $i=0,\ldots,n$. If $X$ is smooth, then the ideal $J(f)$ is generated by a regular sequence, and $M(f)=R/J(f)$ is an Artinian Gorenstein algebra. Moreover, we have $$\dim_{\mathbb{C}} M(f)<+\infty\Leftrightarrow V(f) \mbox{ is smooth,}$$
and the corresponding Hilbert function is given by
\begin{equation} \label{eq0}
H(M(f);t)=\left( \frac{1-t^{d-1}}{1-t}\right)^{n+1}.
\end{equation}
In particular, the socle degree of $M(f) $ is $(d-2)(n+1)$.\\
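For a concrete sanity check of the formula \eqref{eq0}, one can compute $\dim M(f)_m=\dim R_m-\dim J(f)_m$ directly, where $J(f)_m$ is spanned by the products of the partials $f_{x_i}$ with the monomials of degree $m-d+1$. The sympy sketch below is our own verification for the Fermat quartic, $n=2$ and $d=4$:

```python
import sympy as sp
from itertools import product

x = sp.symbols('x0:3')
n, d = 2, 4
f = sum(v**d for v in x)        # smooth Fermat quartic
t = sp.symbols('t')

def monomials(deg):
    return [sp.Mul(*[v**e for v, e in zip(x, exps)])
            for exps in product(range(deg + 1), repeat=n + 1)
            if sum(exps) == deg]

def dim_Mm(m):
    # dim M(f)_m = dim R_m - dim J(f)_m
    gens = [sp.diff(f, v) for v in x]      # partials, degree d - 1
    prods = [sp.expand(mo * g) for g in gens
             for mo in monomials(m - d + 1)] if m >= d - 1 else []
    mons = monomials(m)
    rows = [[sp.Poly(p, *x).coeff_monomial(mm) for mm in mons] for p in prods]
    rk = sp.Matrix(rows).rank() if rows else 0
    return len(mons) - rk

T = (d - 2) * (n + 1)                      # socle degree, here 6
dims = [dim_Mm(m) for m in range(T + 1)]
# compare with the coefficients of ((1 - t^{d-1})/(1 - t))^{n+1}
g = sp.expand(sp.cancel(((1 - t**(d - 1)) / (1 - t))**(n + 1)))
coeffs = [g.coeff(t, m) for m in range(T + 1)]
print(dims)    # [1, 3, 6, 7, 6, 3, 1]
print(coeffs)  # the same list
```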
Assume now that $X\subset \mathbb{P}^n$ is singular, but reduced. In this case the Jacobian algebra $M(f)$ is not of finite length; in particular, it is not Artinian. It contains information about the structure of the singularities and about the global geometry of $X$. The following results can be found in \cite{IG}.
\begin{prop} \label{propIG1}
Let $V:f = 0$ be a hypersurface in $\mathbb{P}^n$ of degree $d>2$, such that its singular locus $V_s$ has dimension at most $n-3$. Then $M(f)$ has the WLP in degree $d-2$.
\end{prop}
\begin{prop} \label{propIG2}
Let $V:f = 0$ be a hypersurface in $\mathbb{P}^n$ of degree $d>2$, such that its singular locus $V_s$ has dimension at most $n-3$. Then for every positive integer $k<d-1$, $M(f)$ has the SLP in degree $d-k-1$ at range $k$.
\end{prop}
\begin{thm} \label{thmIG1}
Let $V:f = 0$ be a general hypersurface. Then $M(f)$ has the SLP.
\end{thm}
In view of the above result, it is natural to ask the following.
\begin{question}\label{q1}
\rm
Is the conclusion of Theorem \ref{thmIG1} true for any homogeneous polynomial $f$ with $V(f)$ smooth?
\end{question}
We have the following results in relation with this question.
\begin{prop} \label{propSLP1}
Let $V:f = 0$ be any smooth surface in $\mathbb{P}^3$ of degree $d=3$. Then $M(f)$ has the SLP.
\end{prop}
\begin{proof}
Since $M(f)$ is Artinian Gorenstein, by \cite[Theorem 2.1]{MW} we have
$$M(f) \cong Q/\operatorname{Ann}(g),$$
for some homogeneous polynomial $g$,
where $$\deg(g)= \text{ socle degree of } M(f)= (n+1)(d-2)=4$$ and $\operatorname{hess}_g\neq 0,$
by Theorem \ref{thm:GN}. Indeed, otherwise $V(g)$ would be a cone, in contradiction with $\dim M(f)_1=4$. By Corollaries \ref{cor1} and \ref{cor:lowdeg}, $M(f)$ has the SLP.
\end{proof}
\begin{prop} \label{propSLP2}
Let $V:f = 0$ be a smooth curve in $\mathbb{P}^2$ of even degree $d=2d'$. Then the multiplication by the square of a generic linear form $\ell \in R_1$ induces an isomorphism
$$\ell^2: M(f)_{3d'-4} \to M(f)_{3d'-2}.$$
In particular, when $d=4$, the Milnor algebra $M(f)$ has the SLP.
\end{prop}
\begin{proof} Note that the socle degree of $M(f)$ is in this case $T=3(d-2)=6d'-6$.
As explained in \cite[Remark 3.7]{DStJump}, a linear form $\ell$ such that the above map is not an isomorphism corresponds exactly to the fact that the associated line $L: \ell=0$ in $\mathbb{P}^2$ is a jumping line of the second kind for the rank two vector bundle $T\langle V \rangle $ on $\mathbb{P}^2$, where $T\langle V \rangle $ is the sheaf of logarithmic vector fields along $V$ as considered for instance in \cite{AD, DS14,MaVa}. Then a key result \cite[Theorem 3.2.2]{KH} of K. Hulek implies that the set of jumping lines of second kind is a curve in the dual projective plane $(\mathbb{P}^2)^{*}$ of all lines.
When $d=4$, this yields an isomorphism $\ell^2: M(f)_{2} \to M(f)_{4}.$
The other isomorphisms necessary for the SLP follow from Corollary \ref{cor1}.
\end{proof}
\begin{rmk} \label{RkCurves} \rm
For any smooth curve $V:f=0$ in $\mathbb{P}^2$, the associated Milnor algebra $M(f)$ has the WLP, as follows from the more general results in \cite{HMNW}. In addition, for a singular, reduced curve $V:f=0$ in $\mathbb{P}^2$, the associated Milnor algebra $M(f)$ is no longer Artinian or Gorenstein, but a partial WLP still holds, see \cite[Corollary 4.4]{DP}.
\end{rmk}\rm
\section{Higher order Jacobian ideals and Milnor algebras}
Let us consider the $k$-th order Jacobian ideal of $f \in R$ to be
$J^k =J^k(f)= (Q_k \ast f) = (A_k \ast f)$, the ideal generated by the $k$-th order partial derivatives of $f$. Then $J^k$ is a homogeneous ideal and we define $M^k=M^k(f)=R/J^k$ to be the $k$-th order Milnor algebra of $f$. For $k=1$, the ideal $J^1$ is just the usual Jacobian ideal $J(f)$ of $f$
and $M^1$ is the usual Milnor algebra $M(f)$ as defined in the previous section.
\begin{rmk} \label{R2} \rm
For $k=2$, the ideal $J^2(f)$ is the ideal in $R$ spanned by all the second order partial derivatives of $f$. The Euler formula implies that $J(f) \subset J^2(f)$ when $d=\deg(f) \geq 2$.
It follows that $M^2(f)$ coincides with the graded {\it first Hessian algebra} $H_1(f)$ of the polynomial $f$, as defined in \cite{DSt1}.
It follows from \cite[Theorem 1.1]{DSt1} and \cite[Example 2.7]{DSt1} that, for a hypersurface $V(f)$ having at most isolated singularities, the algebra $M^2(f)=H_1(f)$ is Artinian
if and only if the multiplicity of the hypersurface $V(f)$ at any singular point is 2.
\end{rmk}\rm
The above remark can be extended to higher order Milnor algebras. First consider an isolated hypersurface singularity $(V,0):g=0$ at the origin of $\mathbb{C}^n$. Then we define the $k$-th order Tjurina ideal $TI^k(g)$ to be the ideal in the local ring ${\mathcal O}_n$ generated by all the partial derivatives $\partial^{\alpha}g$, for $0 \leq |\alpha|\leq k$. The $k$-th order Tjurina algebra of the germ $(V,0)$ is by definition the quotient
$$T^k(V,0)=\frac{{\mathcal O}_n}{TI^k(g)}.$$
It can be shown that this algebra depends only on the isomorphism class of the germ $(V,0)$, and we define the $k$-th Tjurina number of $(V,0)$ to be the integer
$$\tau^k(V,0)=\dim_{\mathbb{C}}T^k(V,0).$$
With this notation, we have the following result.
\begin{thm} \label{T1}
The $k$-th order Milnor algebra $M^k(f)$ of a reduced homogeneous polynomial $f$ is Artinian if and only if the multiplicity of the projective hypersurface $V(f)$ at any point $p \in V(f)$ is at most $k$. Moreover, if the hypersurface $V(f)$ has only isolated singularities, say at the points $p_1,\ldots,p_s$, then for any $k$ and for any large enough $m$ one has
$$\dim_{\mathbb{C}} M^k(f)_m=\sum_{i=1}^s\tau^k(V(f),p_i).$$
\end{thm}
\proof The algebra $M^k(f)$ is Artinian if and only if the zero set $Z(J^k(f))$ of the ideal $J^k(f)$ in $\mathbb{P}^n$ is empty. Note that a point $p \in Z(J^k(f))$ is a point on the hypersurface $V(f)$, by a repeated application of the Euler formula. If we choose the coordinates on $\mathbb{P}^n$ such that $p=(1:0:\ldots:0)$, then the local equation of the hypersurface germ $(V(f),p)$ is
$$g(y_1,\ldots,y_n)=f(1,y_1,\ldots,y_n)=0,$$
exactly as in the proof of \cite[Theorem 1.1]{DSt1}. It follows that the localization of the ideal $J^k(f)$ at the point $p$ coincides with the ideal generated by all the partial derivatives $\partial^{\alpha}g$, for $0 \leq |\alpha|\leq k$. All these derivatives vanish at $p$ exactly when the multiplicity of $V(f)$ at $p$ is $>k$. This proves the first claim.
The proof of the second claim is completely similar.
\endproof
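Theorem \ref{T1} can be tested on the quartic $f=x_0^3x_1+x_2^4$, which has a single singular point, of multiplicity $3$ and type $E_6$ (it reappears as $f_C$ in the examples below). For $k=2$ the multiplicity bound fails, so $M^2(f)$ is not Artinian, and its graded dimensions must stabilize at $\tau^2(E_6)=2$. A sympy sketch (our own check, not part of the paper's computations):

```python
import sympy as sp
from itertools import combinations_with_replacement, product

x = sp.symbols('x0:3')
d, k = 4, 2
f = x[0]**3 * x[1] + x[2]**4    # one E6 point, of multiplicity 3

def monomials(deg):
    return [sp.Mul(*[v**e for v, e in zip(x, exps)])
            for exps in product(range(deg + 1), repeat=len(x))
            if sum(exps) == deg]

# the nonzero second order partials generate J^2(f); they have degree d - k = 2
gens = [g for g in (sp.diff(f, *al)
                    for al in combinations_with_replacement(x, k)) if g != 0]

def dim_Mkm(m):
    # dim M^k(f)_m = dim R_m - dim J^k(f)_m
    prods = [sp.expand(mo * g) for g in gens
             for mo in monomials(m - (d - k))] if m >= d - k else []
    mons = monomials(m)
    rows = [[sp.Poly(p, *x).coeff_monomial(mm) for mm in mons] for p in prods]
    rk = sp.Matrix(rows).rank() if rows else 0
    return len(mons) - rk

dims = [dim_Mkm(m) for m in range(6)]
print(dims)   # [1, 3, 3, 2, 2, 2]: not Artinian, stabilizes at tau^2 = 2
```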
Note that \cite[Example 2.18]{DSt1} shows that even for a smooth curve $V(f)$, the algebra $M^2(f)=H_1(f)$ is not Gorenstein in general, since its Hilbert function, which depends on the choice of the smooth curve, is not symmetric and the dimension of the socle can be $>1$.
However, there is a Zariski open subset $U_{d,k}$ in $R_d$ such that the Hilbert vector
$\operatorname{Hilb}(M^k(f))$ is constant for $f \in U_{d,k}$.
\begin{question}\label{q2}
\rm
Determine the value of the vector $\operatorname{Hilb}(M^k(f))$, or equivalently of the Hilbert function $H(M^k(f),t)$ for $f \in U_{d,k}$.
\end{question}
By semicontinuity, it follows that, for $f \in U_{d,k}$,
$$h_i(M^k(f))= \min \{ \dim(M^k(g)_i) \ : \ g \in R_d\}.$$
Similarly, there is a Zariski open subset $U'_{d}$ in $R_d$ such that the Hilbert vector
$\operatorname{Hilb}(A(f))$ is constant for $f \in U'_{d}$.
Using recent results by Zhenjian Wang, see \cite[Proposition 1.3]{ZW2}, we have the following.
\begin{prop} \label{propZW}
For a polynomial $f \in U'_d$, one has $h_k(A(f))=\dim Q_k={n+k \choose n}$ for
$k \leq d/2$ and $h_k(A(f))=h_{d-k}(A(f))={n+d-k \choose n}$ for $d/2<k\leq d$.
In particular, a Fermat type polynomial $f_F=x_0^d+ \ldots +x_n^d$ is not in $U'_d$, for $d \geq 4$.
\end{prop}
\proof
Since $A(f)$ is Artinian Gorenstein with socle degree $d$, it is enough to prove only the claim for $k \leq d/2$. This claim is equivalent to $\operatorname{Ann}(f)_k=0$ for $k \leq d/2$, and also to
$\dim J^k(f)_{d-k}=\dim Q_k$. This last equality is exactly the claim of \cite[Proposition 1.3]{ZW2},
where $J^k(f)_{d-k}$ is denoted by $E_k(f)$. The claim for the Fermat type polynomial follows from Example \ref{exFer}.
\endproof
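For $(n,d)=(2,4)$ the generic Hilbert vector of Proposition \ref{propZW} is $(1,3,6,3,1)$, and it is attained even by some singular quartics, for instance the three-cusp quartic appearing in the examples below. A sympy check of this (our own computation, using $\dim A(f)_k=\dim J^k(f)_{d-k}$):

```python
import sympy as sp
from itertools import combinations_with_replacement, product

x0, x1, x2 = x = sp.symbols('x0:3')
d = 4
# the three-cusp quartic f_{3A_2}
f = x0**2*x1**2 + x1**2*x2**2 + x0**2*x2**2 - 2*x0*x1*x2*(x0 + x1 + x2)

def monomials(deg):
    return [sp.Mul(*[v**e for v, e in zip(x, exps)])
            for exps in product(range(deg + 1), repeat=len(x))
            if sum(exps) == deg]

def dim_Ak(k):
    # dim A(f)_k = dimension of the span of the k-th order partials of f
    partials = [f] if k == 0 else [sp.diff(f, *al)
                for al in combinations_with_replacement(x, k)]
    rows = [[sp.Poly(sp.expand(p), *x).coeff_monomial(m)
             for m in monomials(d - k)] for p in partials]
    return sp.Matrix(rows).rank()

hilb = [dim_Ak(k) for k in range(d + 1)]
print(hilb)   # [1, 3, 6, 3, 1]: the generic value, despite the three cusps
```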
\begin{cor} \label{cor2}
For any $k \leq d/2$ and any polynomial $f \in U_{d,k}$ one has $$h_i(M^k(f))= {n+i \choose n}\text{ for } i<d-k$$
and
$$h_{d-k}(M^k(f))=\dim R_{d-k}-\dim Q_k={n+d-k \choose n}-{n+k \choose n}.$$
In particular, a Fermat type polynomial $f=x_0^d+ \ldots +x_n^d$ is not in $U_{d,k}$, for $d \geq 2k \geq 4$.
\end{cor}
In conclusion, the introduction of higher order Milnor algebras is motivated by the desire to construct a larger class of graded Artinian algebras starting from homogeneous polynomials. These new Artinian algebras may exhibit interesting examples with respect to Lefschetz properties. It is known that the Hessians are related to Lefschetz properties, and the Hessians of singular hypersurfaces behave in a different way from those of smooth hypersurfaces. As an example, we have the following. Recall first that $\operatorname{hess}_f$ denotes the Hessian of
a homogeneous polynomial $f$ as in Definition \ref{defMH} or, more explicitly,
$$\operatorname{hess}_f= \det \left( \frac{\partial ^2 f}{\partial x_i \partial x_j}\right)_{0\leq i,j \leq n}.$$
\begin{prop} \label{P1.5}
Let $f$ be a homogeneous polynomial in $R$.
\begin{enumerate}
\item If the hypersurface $V(f)$ is smooth, then $\operatorname{hess}_f \notin J(f)$.
\item If the hypersurface $V(f)$ is not smooth, but has isolated singularities, then $\operatorname{hess}_f \in J(f)$.
\end{enumerate}
\end{prop}
\proof The first claim is well known, and it holds in fact for any isolated hypersurface singularity, not only for the cone over $V(f)$, see Theorem 1, section 5.11 in \cite{AGV}.
The second claim is less known, and it follows from \cite[Proposition 1.4 (ii)]{DSt1}. Indeed, for the $(n+1)$-st Hessian algebra $H_{n+1}(f)$, one has the equalities
$$H_{n+1}(f)=\frac{R}{J(f)+(\operatorname{hess}_f)}=\frac{M(f)}{(\overline{\operatorname{hess}_f})},$$
where $(\operatorname{hess}_f)$ is the principal ideal in $R$ generated by the Hessian $\operatorname{hess}_f$ and $(\overline{\operatorname{hess}_f})$ is the principal ideal in $M(f)$ generated by the class $\overline{\operatorname{hess}_f}$ of the
Hessian $\operatorname{hess}_f$ in $M(f)$.
By \cite[Proposition 1.4 (ii)]{DSt1}, we know that the graded algebras $H_{n+1}(f)$ and $M(f)$ have the same Hilbert series when the hypersurface $V(f)$ is not smooth and has only isolated singularities. This proves our claim (2).
\endproof
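Both parts of Proposition \ref{P1.5} are easy to probe computationally by ideal membership, testing whether the remainder of $\operatorname{hess}_f$ on a Gr\"obner basis of $J(f)$ vanishes. A sympy sketch (our own check, on the Fermat quartic and on the quartic $x_0^2x_1^2+x_2^4$ with two $A_3$ points used below):

```python
import sympy as sp

x = sp.symbols('x0:3')

def hess(f):
    # the usual Hessian determinant of f
    return sp.Matrix(3, 3, lambda i, j: sp.diff(f, x[i], x[j])).det()

def hess_in_jacobian_ideal(f):
    # Groebner reduction: zero remainder <=> hess_f lies in J(f)
    G = sp.groebner([sp.diff(f, v) for v in x], *x, order='grevlex')
    return G.reduce(sp.expand(hess(f)))[1] == 0

f_smooth = x[0]**4 + x[1]**4 + x[2]**4     # smooth Fermat quartic
f_sing = x[0]**2 * x[1]**2 + x[2]**4       # quartic with two A_3 points

print(hess_in_jacobian_ideal(f_smooth))    # False, as in part (1)
print(hess_in_jacobian_ideal(f_sing))      # True, as in part (2)
```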
\begin{question} \label{questionHESS}
Is it true that $\operatorname{hess}_f \in J(f)$ for any reduced, singular hypersurface $V(f)$?
\end{question}
\section{The Hilbert functions of $A(f)$ and $M^k(f)$, and the geometry of $V(f)$}
It is known that for two homogeneous polynomials $f, g \in R_d$, the corresponding Milnor algebras $M(f)$ and $M(g)$ are isomorphic as $\mathbb{C}$-algebras if and only if the associated hypersurfaces $V(f)$ and $V(g)$ in $\mathbb{P}^n$ are projectively equivalent. This claim follows from \cite{MY} when the hypersurfaces $V(f)$ and $V(g)$ are both smooth. However, the method of proof can be extended to cover all hypersurfaces.
For a closely related result, see \cite{ZW1}.
Note that a similar claim fails if we replace the Milnor algebra $M(f)=M^1(f)$ by the second order Milnor algebra $M^2(f)$. Indeed, it is enough to consider the family of complex plane cubics $f_a=x_0^3+x_1^3+x_2^3-3ax_0x_1x_2$, where $a\ne0$, $a^3\ne 1$.
In this case $M^2({f_a})=R/(x_0,x_1,x_2)=\mathbb{C}$ does not detect the parameter $a$. However, the main result of \cite{ZW2} implies the following.
\begin{thm} \label{thmZW2}
The $k$-th order Milnor algebra $M^k(f)$ of a generic homogeneous polynomial $f$ determines the hypersurface $V(f)$ up to projective equivalence, when $k \leq d/2-1$.
\end{thm}
\proof
Let $f$ and $g$ be two generic, degree $d$ homogeneous polynomials in $R$, such that we have an isomorphism of graded algebras
$$M^k(f)=R/J^k(f) \simeq R/J^k(g)=M^k(g).$$
Then there is a linear change of coordinates $\phi \in Gl_{n+1}(\mathbb{C})$, inducing the above isomorphism, and hence such that
$\phi^*(J^k(f))=J^k(g)$. This last equality can be re-written as $J^k(f\circ \phi)=J^k(g)$, and Theorem 1.2 and Proposition 1.3 in \cite{ZW2} imply that the two hypersurfaces $V(f\circ \phi)$ and $V(g)$ coincide.
\endproof
Saying that the associated hypersurfaces $V(f)$ and $V(g)$ in $\mathbb{P}^n$ are projectively equivalent means that the two polynomials $f$ and $g$ belong to the same $G$-orbit, where $G=Gl_{n+1}(\mathbb{C})$ acts in the natural way on the space of polynomials $R_d$ by substitution. The fact that $f$ and $g$ belong to the same $G$-orbit immediately implies that the Milnor algebras $M(f)$ and $M(g)$ are isomorphic.
Similarly, the fact that $f$ and $g$ belong to the same $G$-orbit implies immediately that the standard graded Artinian Gorenstein algebras $A(f)$ and $A(g)$ are isomorphic, see for instance \cite[Lemma 3.3]{DiPo}. In particular, the Hilbert function of $A(f)$ is determined by the $G$-orbit of $f$, and hopefully by the geometry of the corresponding hypersurface $V(f)$. However, the following example seems to suggest that it is a hard problem to relate the geometry of the hypersurface $V(f)$ to the properties of the algebras $A(f)$, $M(f)$ and $M^2(f)$.
\begin{ex} \label{ex3}\rm
In this example we look at quartic curves in $\mathbb{P}^2$, i.e. $(n,d)=(2,4)$.
When $V(f)$ is not a cone, only the dimension $h_2(A(f))$ has to be determined. It turns out that all the possible values $ \{3,4,5,6\}$ are attained. All the computations below were done using the CoCoA software, see \cite{CoCoA}, with the help of Gabriel Sticlaru.
\medskip
\noindent{\bf Case $V(f)$ smooth}
\medskip
All the smooth quartics $V(f)$ have the same Hilbert function
$$H(M(f);t)=1+3t+6t^2+7t^3+6t^4+3t^5+t^6,$$
given by the formula \eqref{eq0}. But the other invariants may change, as the following examples show.
\begin{enumerate}
\item When $f_F=x_0^4+x_1^4+x_2^4$, the Fermat type polynomial of degree $4$, the corresponding Hilbert function is
$$H(A(f_F);t)=1+3t+3t^2+3t^3+t^4,$$
by Example \ref{exFer}. The minimal resolution of $A(f_F)$
is given by
$$0 \to Q(-7) \to Q(-3)^2 \oplus Q(-5)^3 \to Q(-2)^3 \oplus Q(-4)^2 \to Q,$$
in particular $A(f_F)$ is not a complete intersection. The second order Milnor algebra is $M^2({f_F})=R/(x_0^2,x_1^2,x_2^2)$, hence a complete intersection, with Hilbert function
$$H(M^2({f_F});t)=1+3t+3t^2+t^3.$$
\item For the smooth Caporali quartic given by $f_{Ca}=x_0^4+x_1^4+x_2^4+(x_0+x_1+x_2)^4$, we get
$$H(A(f_{Ca});t)=1+3t+4t^2+3t^3+t^4,$$
and the minimal resolution of $A(f_{Ca})$
is given by
$$0 \to Q(-7) \to Q(-4) \oplus Q(-5)^2 \to Q(-2)^2 \oplus Q(-3) \to Q.$$
Hence $A(f_{Ca})$ is a complete intersection of multi-degree $(2,2,3)$.
The second order Milnor algebra $M^2({f_{Ca}})$ has a Hilbert function given by
$$H(M^2({f_{Ca}});t)=1+3t+2t^2,$$
in particular this algebra is not Gorenstein.
\item For the smooth quartic given by $f_{Ca_1}=x_0^4+x_1^4+x_2^4+(x_0^2+x_1^2+x_2^2)^2$, we get
$$H(A(f_{Ca_1});t)=1+3t+6t^2+3t^3+t^4,$$
which coincides with the generic value given in Proposition \ref{propZW}, and the minimal resolution of $A(f_{Ca_1})$
is given by
$$0 \to Q(-7) \to Q(-4)^7 \to Q(-3)^7 \to Q.$$
Hence $A(f_{Ca_1})$ is far from being a complete intersection.
The second order Milnor algebra $M^2({f_{Ca_1}})$ has a Hilbert function given by
$$H(M^2({f_{Ca_1}});t)=1+3t,$$
in particular this algebra is not Gorenstein.
\item For the smooth quartic given by $f_{Ca_2}=x_0^4+x_1^4+x_2^4+(x_0^2+x_1^2)^2$, we get the same Hilbert function as for $A(f_{Ca})$,
but the minimal resolution of $A(f_{Ca_2})$
is given by
$$0 \to Q(-7) \to Q(-3)\oplus Q(-4)^2 \oplus Q(-5)^2 \to Q(-2)^2 \oplus Q(-3)^2\oplus Q(-4) \to Q.$$
The second order Milnor algebra $M^2({f_{Ca_2}})$ has a Hilbert function given by
$$H(M^2({f_{Ca_2}});t)=1+3t+2t^2,$$
and hence this algebra is again not Gorenstein.
\end{enumerate}
\medskip
\noindent{\bf Case $V(f)$ singular}
\medskip
\begin{enumerate}
\item The rational quartic with an $E_6$-singularity, defined by $f_C=x_0^3x_1+x_2^4$ satisfies
$H(A(f_{C});t)=H(A(f_F);t)$ and the minimal resolution for $A(f_{C})$ is
$$0 \to Q(-7) \to Q(-3)^2 \oplus Q(-5)^3 \to Q(-2)^3 \oplus Q(-4)^2 \to Q.$$
Hence the algebra $A(f_{C})$ has the same resolution as a graded $R$-module as the algebra $A(f_{F})$. But these two algebras are not isomorphic. Indeed, note that
$$\operatorname{Ann}(f_F)_2=\langle X_0X_1,X_0X_2,X_1X_2\rangle \text{ and } \operatorname{Ann}(f_C)_2= \langle X_1^2,X_0X_2,X_1X_2\rangle.$$
An isomorphism $A(f_F) \simeq A(f_C)$ of $\mathbb{C}$-algebras would imply that the two nets of conics
$$N_F: aX_0X_1+bX_0X_2+c X_1X_2 \text{ and } N_C: aX_1^2+b X_0X_2+c X_1X_2$$
are equivalent. This is not the case, since a conic in $N_F$ is singular if and only if it belongs to the union of three lines given by $abc=0$, while a conic in $N_C$ is singular if and only if it belongs to the union of two lines given by $ab=0$.
For the associated Milnor algebras, one has
$$H(M(f_C);t)=1+3t+6t^2+7t^3+7t^4+6 \frac{t^5}{1-t},$$
and
$$H(M^2(f_C);t)=1+3t+3t^2+2 \frac{t^3}{1-t}.$$
Hence $M^2(f_C)$ is not Artinian, as predicted by Theorem \ref{T1}.
Note that this curve has a unique $E_6$-singularity, with Tjurina numbers
$\tau(E_6)=\tau^1(E_6)=6$ and $\tau^2(E_6)=2$, which explain the coefficients of the rational fractions in the above formulas, in view of Theorem \ref{T1}.
\item For $f_{3A_2}=x_0^2x_1^2+x_1^2x_2^2+x_0^2x_2^2-2x_0x_1x_2(x_0+x_1+x_2)$, which defines a quartic curve with 3 cusps $A_2$, a direct computation shows that
$$H(A(f_{3A_2});t)=1+3t+6t^2+3t^3+t^4,$$
which coincides with the generic value given in Proposition \ref{propZW}. The minimal resolution is
$$0 \to Q(-7) \to Q(-4)^7 \to Q(-3)^7 \to Q.$$
Hence the algebra $A(f_{3A_2})$ has the same resolution as a graded $R$-module as the algebra $A(f_{Ca_1})$. Does this imply that these two algebras are isomorphic? In this case $\operatorname{Ann}(f)_2=0$ and $\dim \operatorname{Ann}(f)_3=7$, hence it is more complicated to use the above method to distinguish these two algebras. Note also that the line arrangement $f=x_0x_1x_2(x_0+x_1+x_2)=0$ gives rise to an algebra $A(f)$ with exactly the same
resolution as a graded $R$-module as the algebra $A(f_{Ca_1})$.
For the associated Milnor algebras, one has
$$H(M(f_{3A_2});t)=1+3t+6t^2+7t^3+6 \frac{t^4}{1-t},$$
and
$$H(M^2(f_{3A_2});t)=1+3t.$$
Hence $M^2(f_{3A_2})$ is Artinian, as predicted by Theorem \ref{T1}, but not Gorenstein.
\item For $f_{2A_3}=x_0^2x_1^2+x_2^4$, which defines a quartic curve with $2$ singularities $A_3$, a direct computation shows that
$$H(A(f_{2A_3});t)=1+3t+4t^2+3t^3+t^4$$
and the minimal resolution is
$$0 \to Q(-7) \to Q(-3)\oplus Q(-4)^2 \oplus Q(-5)^2 \to Q(-2)^2 \oplus Q(-3)^2\oplus Q(-4) \to Q.$$
Hence the algebra $A(f_{2A_3})$ has the same resolution as a graded $R$-module as the algebra $A(f_{Ca_2})$. Does this imply that these two algebras are isomorphic? Note that
$$\operatorname{Ann}(f_{2A_3})_2=\langle X_0X_2,X_1X_2\rangle =\operatorname{Ann}(f_{Ca_2})_2,$$
while $\operatorname{Ann}(f_{2A_3})_3$ and $\operatorname{Ann}(f_{Ca_2})_3$ have both dimension $7$.
Hence again the above method to distinguish these two algebras is not easy to apply. For the associated Milnor algebras, one has
$$H(M(f_{2A_3});t)=H(M(f_C);t)$$
and
$$H(M^2(f_{2A_3});t)=1+3t+2t^2.$$
Hence $M^2(f_{2A_3})$ is Artinian, but not Gorenstein.
\item For $f_{4A_1}=(x_0^2+x_1^2)^2+(x_1^2+x_2^2)^2$, which defines a quartic curve with $4$ singularities $A_1$ that is the union of two conics intersecting in the $4$ nodes, a direct computation shows that
$$H(A(f_{4A_1});t)=1+3t+5t^2+3t^3+t^4$$
and the minimal resolution is
$$0 \to Q(-7) \to Q(-4)^4 \oplus Q(-5) \to Q(-2) \oplus Q(-3)^4 \to Q.$$
For the associated Milnor algebras, one has
$$H(M(f_{4A_1});t)=1+3t+6t^2+7t^3+6t^4+4 \frac{t^5}{1-t},$$
and
$$H(M^2(f_{4A_1});t)=1+3t+t^2.$$
Hence $M^2(f_{4A_1})$ is Artinian and Gorenstein.
\item For the line arrangement defined by $f=(x_0^3+x_1^3)x_2$, we get
an algebra $A(f)$ with exactly the same resolution as a graded $R$-module as the algebra $A(f_{Ca})$, hence a complete intersection of multi-degree $(2,2,3)$. But these two algebras are not isomorphic. Indeed, note that
$$\operatorname{Ann}(f_{Ca})_2=\langle X_0X_1-X_1X_2,X_0X_2-X_1X_2\rangle \text{ and } \operatorname{Ann}(f)_2=\langle X_0X_1,X_2^2\rangle.$$
An isomorphism $A(f_{Ca}) \simeq A(f)$ of $\mathbb{C}$-algebras would imply that the two pencils of conics
$$P_{Ca}: a(X_0X_1-X_1X_2)+b(X_0X_2-X_1X_2) \text{ and } P_f: aX_0X_1+b X_2^2$$
are equivalent. This is not the case, since a conic in $P_{Ca}$ is singular if and only if it belongs to the union of three lines given by $ab(a+b)=0$, while a conic in $P_f$ is singular if and only if it belongs to the union of two lines given by $ab=0$.
\end{enumerate}
\end{ex}
\begin{ex} \label{ex4}\rm
We show here that for some smooth quartics $V(f)$ in $\mathbb{P}^2$ the multiplication by a generic linear form $\ell$ does not give rise to an injection $M^2(f)_1 \to M^2(f)_2$. Note that one has
$$\dim M^2(f)_1=\dim R_1=3 \text{ and } \dim M^2(f)_2=\dim R_2-\dim A(f)_{2}= 6-\dim A(f)_{2},$$
using the general formula
$$\dim J^k_{d-k}=\dim A(f)_{k}.$$
Hence, as soon as $\dim A(f)_{2}\geq 4$, the morphism $M^2(f)_1 \to M^2(f)_2$ cannot be injective. This happens for all the smooth quartics
in Example \ref{ex3}, except for the Fermat one.
\end{ex}
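The failure (and the single exception) in this example can be confirmed by linear algebra inside $R_2$: the image of $\ell\cdot: M^2(f)_1 \to M^2(f)_2$ has dimension $\operatorname{rk}(J^2_2+\ell R_1)-\operatorname{rk}(J^2_2)$. A sympy sketch (our own check; the linear form $\ell=x_0+2x_1+3x_2$ is a sufficiently general choice here):

```python
import sympy as sp
from itertools import combinations_with_replacement, product

x = sp.symbols('x0:3')
ell = x[0] + 2*x[1] + 3*x[2]    # a sufficiently general linear form

def image_dim(f):
    # dimension of the image of  ell . : M^2(f)_1 -> M^2(f)_2
    mons2 = [sp.Mul(*[v**e for v, e in zip(x, exps)])
             for exps in product(range(3), repeat=3) if sum(exps) == 2]
    J2 = [sp.diff(f, *al) for al in combinations_with_replacement(x, 2)]
    row = lambda p: [sp.Poly(sp.expand(p), *x).coeff_monomial(m)
                     for m in mons2]
    base = sp.Matrix([row(p) for p in J2])                     # J^2(f)_2
    full = sp.Matrix([row(p) for p in J2] + [row(ell * v) for v in x])
    return full.rank() - base.rank()

f_F = x[0]**4 + x[1]**4 + x[2]**4        # Fermat: injective (image dim 3)
f_Ca = f_F + (x[0] + x[1] + x[2])**4     # Caporali: cannot be injective
print(image_dim(f_F), image_dim(f_Ca))   # 3 2
```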
\begin{question} \label{questionMY} \rm
It would be interesting to find out whether the Mather-Yau result extends to this setting, i.e. whether an isomorphism
$A(f) \simeq A(f')$ of $\mathbb{C}$-algebras implies that $f$ and $f'$ belong to the same $G$-orbit. In the case when $(n,d)=(1,4)$ or $(n,d)=(2,3)$, one has $T=(n+1)(d-2)=d$ and
the $G$-orbits of polynomials in $R_d$ corresponding to smooth hypersurfaces can be listed using 1-parameter families. In both cases, the correspondence $\operatorname{Mac}:f_t \to g_{u(t)}$ between a polynomial $f_t\in R_d$ with $V(f_t)$ smooth and a polynomial $g_{u(t)}=\operatorname{Mac}(f_t) \in R_d$, defined by the property that one has an isomorphism
$$M(f_t) \simeq A(g_{u(t)}),$$
gives rise to a bijection $u$ of the corresponding parameter space, see \cite{DiPo} for details. Hence the Mather-Yau result implies a positive answer to our question for the algebras $A(f)$ in these two special cases.
\end{question}
\section{Higher Jacobians and Higher Polar mappings}
Consider the following exact sequence
$$0 \to I_k \to Q_k \to J^k_{d-k} \to 0,$$
where $I=\operatorname{Ann}(f)$ and $J^k=J^k(f)$.
The map $Q_k \to J^k_{d-k}$ is given by evaluation $ \alpha \mapsto \alpha(f)$. \\
We also have the natural exact sequence:
$$0 \to I_k \to Q_k \to A_k \to 0.$$
Note that the vector spaces $J^k_{d-k}$ and $A_k$ have the same dimension,
\begin{equation}\label{eq1}
\dim J^k_{d-k} = \dim Q_k -\dim I_k = \binom{n+k}{k} - \dim I_k = \dim A_k.
\end{equation}
\begin{defin}\rm
The $k$-th polar mapping (or $k$-th gradient mapping) of the hypersurface $X=V(f) \subset \mathbb{P}^n$ is the rational map $\Phi^k_X: \mathbb{P}^n \dashrightarrow \mathbb{P}^{\binom{n+k}{k}-1}$ given by the $k$-th partial derivatives of $f$.
The $k$-th polar image of $X$ is $\tilde{Z}_k = \overline{\Phi^k_X(\mathbb{P}^n)}\subseteq\mathbb{P}^{\binom{n+k}{k}-1}$, the closure of the image of the $k$-th polar map.
\end{defin}
Given a basis $\{\alpha_1, \ldots, \alpha_{a_k}\}$ of the vector space $A_k$, we define the relative $k$-th polar map of $X$ to be the map $\varphi^k_X:\mathbb{P}^n \dashrightarrow \mathbb{P}^{a_k-1}$ given by the linear system $J^k_{d-k}$:
$$\varphi^k_X(p)=(\alpha_1(f)(p): \ldots : \alpha_{a_k}(f)(p) ).$$
The $k$-th relative polar image of $X$ is $Z_k = \overline{\varphi^k_X(\mathbb{P}^n)}\subseteq \mathbb{P}^{a_k-1}$, the closure of the image of the relative $k$-th polar map.\\
It follows from Proposition \ref{propZW} that for $k \leq d/2$ and $f$ generic, one has $a_k=\binom{n+k}{k}$ and hence $\Phi^k_X=\varphi^k_X$ in such cases.
In general, the exact sequence $$0 \to I_k \to Q_k \to J^k_{d-k} \to 0$$ gives rise to a linear projection $\mathbb{P}^{\binom{n+k}{k}-1} \dashrightarrow \mathbb{P}^{a_k-1}$ making these two polar maps compatible, as in the diagram
$$\begin{array}{ccc}
\mathbb{P}^n & \rightarrow & \mathbb{P}^{\binom{n+k}{k}-1}\\
& \searrow & \downarrow \\
& & \mathbb{P}^{a_k-1}.
\end{array}
$$
Moreover, since $\tilde{Z}_k \subset \mathbb{P}(J^k_{d-k}) = \mathbb{P}^{a_k - 1} \subset \mathbb{P}^{\binom{n+k}{k}-1}$, the secant variety of $\tilde{Z}_k$ does not intersect the projection center, hence $Z_k \simeq \tilde{Z}_k$.
The next result is a formalization of the intuitive idea that the mixed Hessian $\operatorname{Hess}_f^{(1,k)}$, introduced in Definition \ref{defMH}, is the Jacobian matrix of the gradient (or polar) map $\varphi^k$ of order $k$.
\begin{thm}\label{thm:polarrankhess}
With the above notations, we get:
$$\dim Z_k = \dim \tilde{Z}_k = \operatorname{rk}(\operatorname{Hess}^{(1,k)}_X)-1.$$
In particular, the following conditions are equivalent:
\begin{enumerate}
\item[(i)] $\varphi^k$ is a degenerate map, that is, $\dim Z_k < n$;
\item[(ii)] $\operatorname{rk} (\operatorname{Hess}^{(1,k)}_X) < n+1$;
\item[(iii)] The map $\bullet L^{d-k-1}: A_1 \to A_{d-k}$ does not have maximal rank for any $L \in A_1$.
\end{enumerate}
\end{thm}
\begin{proof}
If $p=[\mathbf v] \in \mathbb{P}^n$, then $T_p\mathbb{P}^n=\mathbb{C}^{n+1}/\langle \mathbf v\rangle$ is the affine tangent space to $\mathbb{P}^n$ at $p$. Let
$\operatorname{Hess}^{(1,k)}_X(p)$ be the equivalence class of the mixed Hessian matrix of $X=V(f)$ evaluated at $\mathbf v$.
The matrix $\operatorname{Hess}^{(1,k)}_X(p)$ passes to the quotients and induces the differential
of the map $\varphi^k_X$ at $p$:
\begin{equation}\label{eq:dfp}
(d\varphi^k_X)_p: T_p\mathbb{P}^n\to T_{\varphi^k_X(p)}\mathbb{P}^{a_k-1},
\end{equation}
whose image is exactly $T_{\varphi^k_X(p)}Z_k$, when $p$ is generic. From this we can
describe explicitly the projective tangent space to $Z_k$ at $\varphi^k_X(p)$, obtaining
\begin{equation}\label{eq:TpZ}
T_{\varphi^k_X(p)}Z_k=\mathbb{P}\left(\Im(\operatorname{Hess}^{(1,k)}_X(\mathbf v))\right)\subseteq \mathbb{P}^{a_k-1}.
\end{equation}
Thus, there is an integer $\gamma_k \geq 0$ such that
\begin{equation}\label{eq:dimZdiff}
\dim Z_k = \operatorname{rk}(\operatorname{Hess}^{(1,k)}_X)-1=n-\gamma_k.
\end{equation}
The equivalence between $(i)$ and $(ii)$ is clear, since $\operatorname{rk} (\operatorname{Hess}^{(1,k)}_X) < n+1$ if and only if $\gamma_k > 0$ if and only if $\dim (Z_k) < n$. \\
The equivalence between $(ii)$ and $(iii)$ follows from Theorem \ref{thm:generalization}.
\end{proof}
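For $k=1$, when the annihilator contains no linear forms, the mixed Hessian $\operatorname{Hess}^{(1,1)}_X$ is, up to constants, the classical Hessian matrix of $f$, so the criterion in Theorem \ref{thm:polarrankhess} can be checked by machine. A small SymPy sketch for the Fermat cubic (an illustrative choice of ours, not an example from the text):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = x**3 + y**3 + z**3  # Fermat cubic in P^2; its Hessian matrix is diag(6x, 6y, 6z)

# For k = 1 the mixed Hessian is (up to constants) the matrix of second partials
H = sp.hessian(f, (x, y, z))

# hess_f = 216*x*y*z != 0, and the rank at a generic point is n + 1 = 3,
# so the polar (gradient) map of the Fermat cubic is not degenerate
hess_f = sp.expand(H.det())
assert hess_f != 0
assert H.subs({x: 1, y: 2, z: 3}).rank() == 3
```

Here the polar map is $(x:y:z)\mapsto(x^2:y^2:z^2)$, and the full rank of the Hessian at a generic point confirms $\dim Z_1 = 2$.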
Recall that for a standard graded Artinian Gorenstein algebra, the injectivity of $\bullet L: A_i \to A_{i+1}$ for a certain $L \in A_1$ implies the injectivity of $\bullet L: A_j \to A_{j+1}$ for all $j < i$, see \cite[Proposition 2.1]{MMN}.
If $A(f)$ does not satisfy the WLP, then there is a minimal $k$ such that $\bullet L: A_k \to A_{k+1}$ is not injective for all $L \in A_1$. By Theorem \ref{thm:polarrankhess}, the previous condition is equivalent to saying that for all $j \leq k$ the matrix $\operatorname{Hess}_f^{(1,j)}$ has full rank.
The following corollary is a partial generalization of the Gordan-Noether Hessian criterion, recalled in Proposition \ref{prop:GNcriteria}, to higher order Hessians.
\begin{cor} \label{cor:polar1}
Let $k\leq \lfloor \frac{d}{2}\rfloor$ be the greatest integer such that $\bullet L: A_{k-1} \to A_k$ is injective for some $L \in A_1$. For each $j \leq k$, if $\varphi^j$ is degenerate, then $\operatorname{hess}^j_f = 0$.
\end{cor}
\begin{proof}
The result follows from the commutative diagram below, together with Theorem \ref{thm:generalization} and Theorem \ref{thm:polarrankhess}.
$$\begin{array}{ccc}
A_j & \to & A_{d-j}\\
\uparrow & \nearrow & \\
A_1 &
\end{array}
$$
Indeed, for $j\leq k$, we have that $\bullet L^{j-1}:A_1 \to A_j$ is injective, by composition. If $\bullet L^{d-2j}: A_j \to A_{d-j}$ is injective, then $\bullet L^{d-j-1} : A_1 \to A_{d-j}$ is injective.
In other words, if $\operatorname{hess}^j_f \neq 0$, then $\varphi^j$ is not degenerate.
\end{proof}
The converse is not true, as one can see in the next example.
\begin{ex}\rm Let $f=xu^3+yu^2v+zuv^2+v^4\in \mathbb{C}[x,y,z,u,v]_4$ as in \cite[Example 3]{Go}, and let $A=Q/\operatorname{Ann}(f)$. The map $\bullet L: A_1\to A_2$ is injective for $L=u+v$. For $j=2$, we have $$\operatorname{Hess}^2_f=\begin{pmatrix}
0&0&0&0&0&0&6&0\\
0&0&0&0&0&2&0&0\\
0&0&0&0&0&0&0&2\\
0&0&0&0&0&0&2&0\\
0&0&0&0&0&2&0&0\\
0&2&0&0&2&0&0&0\\
6&0&0&2&0&0&0&0\\
0&0&2&0&0&0&0&24
\end{pmatrix}$$ and $\operatorname{hess}^2_f=\det\operatorname{Hess}^2_f=0$. Calculating the $\operatorname{Hess}_f^{(1,2)}$, we get: $$\operatorname{Hess}_f^{(1,2)}=\begin{pmatrix}
0&0&0&6u&0\\
0&0&0&2v&2u\\
0&0&0&0&2v\\
0&0&0&2u&0\\
0&0&0&2v&2u\\
0&2u&2v&2y&2z\\
6u&2v&0&6x&2y\\
0&0&2u&2z&24v
\end{pmatrix}.$$ We have $\operatorname{rk}(\operatorname{Hess}_f^{(1,2)})=5$, hence $\varphi^2$ is not degenerate, by Theorem \ref{thm:polarrankhess}.
\end{ex}
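The two matrices above can be verified numerically. The following NumPy sketch simply re-enters them, with generic values of our choosing for $(x,y,z,u,v)$ in the mixed Hessian, and confirms that $\det \operatorname{Hess}^2_f = 0$ while $\operatorname{rk}(\operatorname{Hess}_f^{(1,2)})=5$:

```python
import numpy as np

# Hess^2_f from the example: a constant matrix; rows 2 and 5 coincide, so det = 0
H2 = np.array([
    [0, 0, 0, 0, 0, 0, 6, 0],
    [0, 0, 0, 0, 0, 2, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 2],
    [0, 0, 0, 0, 0, 0, 2, 0],
    [0, 0, 0, 0, 0, 2, 0, 0],
    [0, 2, 0, 0, 2, 0, 0, 0],
    [6, 0, 0, 2, 0, 0, 0, 0],
    [0, 0, 2, 0, 0, 0, 0, 24],
], dtype=float)
assert round(np.linalg.det(H2)) == 0  # hess^2_f vanishes

# Hess_f^{(1,2)} evaluated at the (generic) point (x, y, z, u, v) = (3, 5, 7, 1, 2)
x, y, z, u, v = 3.0, 5.0, 7.0, 1.0, 2.0
H12 = np.array([
    [0, 0, 0, 6*u, 0],
    [0, 0, 0, 2*v, 2*u],
    [0, 0, 0, 0, 2*v],
    [0, 0, 0, 2*u, 0],
    [0, 0, 0, 2*v, 2*u],
    [0, 2*u, 2*v, 2*y, 2*z],
    [6*u, 2*v, 0, 6*x, 2*y],
    [0, 0, 2*u, 2*z, 24*v],
])
assert np.linalg.matrix_rank(H12) == 5  # full column rank: phi^2 is not degenerate
```

The vanishing of the determinant is visible by inspection (two equal rows), while full column rank at one point already certifies generic rank $5$.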
\begin{cor} \label{corHess10}
Let $f$ be a homogeneous form of degree $d$ and let $1<k<d-1$. Let $\varphi^k_f$ be the $k$-th polar map
of $f$. If $\operatorname{hess}_f \neq 0$, then $\varphi^k_f$ is not degenerate, that is, $\dim Z_k = n$.
In particular, if $X = V(f)\subset \mathbb{P}^n$ with $n \leq 3$ is not a cone, then $\varphi^k_X$ is not degenerate.
\end{cor}
\begin{proof}
By Theorem \ref{thm:polarrankhess} we have to prove that $\operatorname{rk}(\operatorname{Hess}^{(1,k)}_f)=n+1$, that is, that the rank is maximal. Let $L\in A_1$ be a general linear form.
Since $\operatorname{hess}_f \neq 0$, the multiplication map $\bullet L^{d-2}: A_1 \to A_{d-1}$ is an isomorphism. Indeed, by Theorem \ref{thm:generalization}, after choosing bases, the matrix of $\bullet L^{d-2}$
is $(d-2)!\operatorname{Hess}^{((d-1)^*,1)}$ whose rank is the same as the rank of $\operatorname{Hess}_f$. Since $\bullet L^{d-2}: A_1 \to A_{d-1}$ factors via $\bullet L^{d-k-1}: A_1 \to A_{d-k}$, the injectivity of $\bullet L^{d-2}$ implies
the injectivity of $\bullet L^{d-k-1}$. On the other hand the rank of $\operatorname{Hess}^{((d-k)^*,1)}$ is equal to the rank of $\operatorname{Hess}_f^{(1,k)}$. The result now follows from Gordan-Noether theory.
\end{proof}
\begin{ex}\rm Let $X = V(f)\subset \mathbb{P}^3$ be Ikeda's surface given by $f=xuv^3+yu^3v+x^2y^3$ as in \cite{Ik, MW}. Since $X$ is not a cone, by the Gordan-Noether criterion, $\operatorname{hess}_f \neq 0$. On the other hand, $\operatorname{hess}^2_f=0$.
By Corollary \ref{corHess10}, we know that the second polar map is not degenerated. Moreover, $\varphi_X = \Phi_X:\mathbb{P}^3 \dashrightarrow \mathbb{P}^9$ since $\dim A_2 = 10$.
From an algebraic viewpoint we have $A_1 \to A_2 \to A_3$ and we know that $\bullet L: A_2 \to A_3$ is not an isomorphism; in particular, it has a nontrivial kernel. On the other hand, $\bullet L^2: A_1 \to A_3$ is injective by Theorem \ref{thm:polarrankhess}.
So the image of the first multiplication does not intersect the kernel of the second one.
\end{ex}
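The nonvanishing of $\operatorname{hess}_f$ for Ikeda's surface can be confirmed symbolically. A quick SymPy check (the vanishing of $\operatorname{hess}^2_f$ requires a monomial basis of $A_2$ and is not reproduced here):

```python
import sympy as sp

x, y, u, v = sp.symbols('x y u v')
f = x*u*v**3 + y*u**3*v + x**2*y**3  # Ikeda's surface in P^3

# Classical Hessian matrix of second partial derivatives
H = sp.hessian(f, (x, y, u, v))
hess_f = sp.expand(H.det())

# Gordan-Noether criterion: X is not a cone, so hess_f must not vanish
assert hess_f != 0
assert H.subs({x: 1, y: 1, u: 1, v: 1}).det() != 0  # nonzero already at one point
```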
Consider the $k$-th polar map $\Phi^k_X:\mathbb{P}^n \to \mathbb{P}^N$ associated to a smooth hypersurface $X=V(f)$, where $N={n+k \choose k}-1$.
Corollary \ref{corHess10} implies that the image of this map has dimension $n$ for any $k$ with
$1<k<d-1$. There is a related result: consider the restriction
\begin{equation} \label{eqPol}
\psi^k_X=\Phi^k_X|X:X \to \mathbb{P}^N.
\end{equation}
Then it is shown in \cite{D} that the map $\psi^k_X$ is finite for $1\leq k<d$. The case $k=2$ is stated in \cite[Proposition 1.6]{D} and the general case in \cite[Remark 2.3 (ii)]{D}. We prove next that the map
$\Phi^k_X$ is finite as well, which implies that $\dim Z_k=n$. Note that this map $\Phi^k_X$ is well defined as soon as $M^k(f)$ is Artinian.
\begin{thm} \label{T2}
Let $\Phi^k_X:\mathbb{P}^n \to \mathbb{P}^N$ be the $k$-th polar map associated to a hypersurface $X=V(f)\subset \mathbb{P}^n$ such that $M^k(f)$ is Artinian, where $N={n+k \choose k}-1$ and $0<k<d$. Then $\Phi^k_X$ is finite.
In particular, $$\dim Z_k=\dim \Phi^k_X(\mathbb{P}^n)=n$$
and one has $$\deg \Phi^k_X \cdot \deg \Phi^k_X(\mathbb{P}^n)=(d-k)^n.$$
\end{thm}
\proof Suppose there is a curve $C$ in $\mathbb{P}^n$ such that $\Phi^k_X$ is constant on $C$, say
$$\Phi^k_X(x)=(b_{\alpha})_{|\alpha|=k} \in \mathbb{P}^N,$$
for any $x \in C$. There is at least one multi-index $\beta$ such that $b_{\beta}\ne 0$. But this leads to a contradiction: either the partial derivative
$\partial^{\beta}f$ is identically zero, or the hypersurface $\partial^{\beta}f=0$ meets the curve $C$, say at a point $p$. At such a point $p$, all the partial derivatives $\partial^{\alpha}f$ with $|\alpha|=k$ vanish, contradicting the fact that $M^k(f)$ is Artinian.
The claim about the degree follows by cutting
$\Phi^k_X(\mathbb{P}^n)$ with $n$ generic hyperplanes $H_j$ in $\mathbb{P}^N$, where $j=1,\ldots,n$, and using the fact that the preimages $(\Phi^k_X)^{-1}(H_j)$ form a family of $n$ hypersurfaces of degree $d-k$ meeting in $\deg \Phi^k_X \cdot \deg \Phi^k_X(\mathbb{P}^n)$ simple points.
\endproof
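As an illustration of the degree formula (our own example, with $f=x^3+y^3+z^3$, so $n=2$, $d=3$, $k=1$): the polar map is $(x:y:z)\mapsto(x^2:y^2:z^2)$, which is finite of degree $(d-k)^n=4$ onto $\mathbb{P}^2$. A SymPy count of the preimages of a generic point in the affine chart $z=1$:

```python
import sympy as sp

x, y = sp.symbols('x y')
# Preimages of the generic point (a : b : 1) under (x : y : z) -> (x^2 : y^2 : z^2),
# computed in the chart z = 1: solve x^2 = a, y^2 = b
a, b = 2, 3
sols = sp.solve([x**2 - a, y**2 - b], [x, y], dict=True)
assert len(sols) == 4  # matches deg Phi * deg image = (d - k)^n = (3 - 1)^2 = 4
```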
{\bf Acknowledgments}. The authors would like to thank the organizers and the participants of the workshop {\it Lefschetz Properties and Jordan Type in Algebra, Geometry and Combinatorics}, which took place in Levico Terme, Trento, Italy in June 2018, where this work was started. \\
The second author would like to thank Francesco Russo and Giuseppe Zappalà for many conversations on this subject.
https://arxiv.org/abs/1902.09146 -- Higher order Jacobians, Hessians and Milnor algebras (math.AG, math.AC)
https://arxiv.org/abs/2003.10312

A termination criterion for stochastic gradient descent for binary classification

We propose a new, simple, and computationally inexpensive termination test for constant step-size stochastic gradient descent (SGD) applied to binary classification on the logistic and hinge loss with homogeneous linear predictors. Our theoretical results support the effectiveness of our stopping criterion when the data is Gaussian distributed. This presence of noise allows for the possibility of non-separable data. We show that our test terminates in a finite number of iterations and when the noise in the data is not too large, the expected classifier at termination nearly minimizes the probability of misclassification. Finally, numerical experiments indicate for both real and synthetic data sets that our termination test exhibits a good degree of predictability on accuracy and running time.
\section{Analysis of stopping criterion} \label{sec:Analysis}
In this section, we present our analysis of the stopping criterion \eqref{eq: termination_test} proposed in Section~\ref{sec:termtest}. Here we introduce the first iteration at which the stopping criterion is satisfied, denoted by the random variable
\begin{equation}\label{eq:stopping_criteria}
T:=\inf\left\{k> 0: \hat{\bm{\xi}}_k^T\bm{\theta}_k \geq 1\right\}.
\end{equation}
By viewing the stopping criterion through the lens of stopping times, we are able to utilize probability theory to analyze the classifier at termination $\bm{\theta}_T$. Throughout this section, we work with the following filtration.
\begin{equation}\label{eq:sigma}
\mathcal{F}_0 = \sigma(\bm{\theta}_0) \quad \text{and} \quad \mathcal{F}_k:=\sigma(\bm{\theta}_0, \hat{\bm{\xi}}_1, \bm{\xi}_1, \hat{\bm{\xi}}_2, \bm{\xi}_2, \hdots, \hat{\bm{\xi}}_k, \bm{\xi}_k), \quad \text{for all $k\geq 1$.}
\end{equation}
Clearly, the random variable $\bm{\theta}_k$ is $\mathcal{F}_k$-measurable. Our theoretical results are structured as follows.
First, we show that SGD with our proposed termination test indeed stops after a finite number of iterations. To do so, we provide a bound on $\bE[T]$, \textit{i.e.} the expected number of iterations before termination. Yet, despite this guarantee, the resulting classifier at termination need not be optimal. Hence, our second result establishes that $\bm{\theta}_T$ and $\bm{\theta}^*$ point in approximately the same direction, thereby ensuring that the classifier at termination, $\bm{\theta}_T$, is nearly optimal. We remark that the worst-case bounds established in these sections are conservative; we observe in our experiments that the termination test stops sooner while also yielding good classification properties for Gaussian and non-Gaussian data sets.
To bound $\bE[T]$, we identify subsets of $\R^d$ for which, when an iterate enters the set, termination (\textit{i.e.} \eqref{eq: termination_test}) is \textit{highly likely} to succeed. We call such sets $C$ \textit{target sets}. Precisely, for any $\bm{\theta} \in C$ and $\hat{\bm{\xi}}\sim N(\bm{\mu},\sigma^2I_d)$, the probability of terminating is at least $\delta>0$,
\begin{equation}\label{eq:probability_delta}
\exists \; \delta>0 \text{ such that } \mathbb{P}_{\hat{\bm{\xi}}}\left( \hat{\bm{\xi}}^T\bm{\theta}\geq 1 \right) \geq \delta.
\end{equation}
We guarantee the iterates generated by SGD enter the target set by way of a \textit{drift function}, $V:\R^d\rightarrow [0,+\infty)$. A drift function decreases, on average, each time the iterate lies outside the target set. In other words, conditioned on the past iterates, the following holds
\begin{equation}\label{eq:driftequation}
\left(\bE[V(\bm{\theta}_k)|\mathcal{F}_{k-1}]-V(\bm{\theta}_{k-1})\right)1_{\{\bm{\theta}_{k-1}\not\in C\}} \leq -b1_{\{\bm{\theta}_{k-1}\not\in C\}}
\end{equation}
for the target set $C$ and some positive constant $b$. Loosely speaking, the iterates in expectation \textit{drift} towards the target set. Target sets and drift functions in the context of drift analysis are well-studied in stochastic processes, see Lemma \ref{lem:drift_from_meyn} below.
A natural choice for the target set is a neighborhood of $\bm{\theta}^*$, the unique optimal solution of \eqref{optimization_problem}, with the drift function $\Vert \bm{\theta}-\bm{\theta}^* \Vert^2$. Indeed, it is known that the iterates of SGD converge to a neighborhood of $\bm{\theta}^*$ (\cite{Pflug}). However, an iterate may be nearly optimal well before it enters this neighborhood. In fact, when $\sigma \ll \Vert \bm{\mu} \Vert$, we identify a target set in which satisfying the stopping criterion occurs at least half the time and which does not require the iterate to be near $\bm{\theta}^*$. We summarize below our target set and drift function.
\begin{enumerate}
\item Under the assumption $\sigma \leq c \Vert \bm{\mu} \Vert$ for some numerical constant $c$, which we call the \textit{Low Variance Regime}, we define the target set to be
\begin{equation}\label{eq:targetset_lownoise}
C = \{\bm{\theta}: \bm{\mu}^T\bm{\theta}\geq 1 \},
\end{equation}
and the drift function by
\begin{equation}\label{eq:driftfunction_lownoise}
V(\bm{\theta})=\left(M-\bm{\mu}^T\bm{\theta}\right)^2,
\end{equation}
for some constant $M$, to be determined later.
\item Under the assumption $c\Vert \bm{\mu} \Vert \leq \sigma$ where the constant $c$ is the same as in 1 above, which we call the \textit{High Variance Regime}, we define the target set to be
\begin{equation}\label{eq:targetset_highnoise}
C=\{\bm{\theta}: \vert \rho\sigma^2-1\vert <1 \text{ and } \sigma \Vert \tilde{\bm{\theta}}\Vert\leq c'\},
\end{equation}
for some numerical constant $c'$. Here, we orthogonally decompose $\bm{\theta}=\rho \bm{\mu}+\tilde{\bm{\theta}}$ with $\bm{\mu}^T\tilde{\bm{\theta}}=0$. We use the following drift function
\begin{equation}\label{eq:driftfunction_highnoise}
V(\bm{\theta})=\frac{1}{2\alpha}\Vert \bm{\theta}-\bm{\theta}^*\Vert^2.
\end{equation}
\end{enumerate}
In Section \ref{sec:proof_low} (resp. Section \ref{sec:proof_high}) we show that the pairs $(C,V)$ defined in \eqref{eq:targetset_lownoise} and \eqref{eq:driftfunction_lownoise} (resp. \eqref{eq:targetset_highnoise} and \eqref{eq:driftfunction_highnoise}) satisfies the drift equation \eqref{eq:driftequation} for any step-size $\alpha$ (resp. for any sufficiently small step-size $\alpha$).
As mentioned above, the target set $C$ attracts the iterates generated by SGD. Each time an iterate enters $C$, the stopping criterion holds with probability at least $\delta>0$. Provided the iterates enter the set $C$ infinitely often, then after waiting a geometrically distributed number of iterations, we expect the following condition to hold:
\begin{equation}\label{eq:termination_inside_C}
\hat{\bm{\xi}}_k^T\bm{\theta}_k\geq 1 \text{ and } \bm{\theta}_k \in C.
\end{equation}
The SGD algorithm does not know the value of $\bm{\theta}^*$; therefore, at each iteration, it cannot check whether the condition \eqref{eq:termination_inside_C} occurs. Nevertheless, we are able to compute a bound on the average waiting time until \eqref{eq:termination_inside_C} holds, and the first time \eqref{eq:termination_inside_C} holds is always an upper bound on $T$, our stopping time. This is summarized in Lemma \ref{lem:ETleqET_C}. Precisely, if we denote by
\begin{equation}\label{eq:stoppingtime_TC}
T_C:=\inf\{k>0:\hat{\bm{\xi}}_k^T\bm{\theta}_k\geq 1 \text{ and } \bm{\theta}_k \in C\},
\end{equation}
then $T\leq T_C$, thus yielding $\bE[T]\leq \bE[T_C]$. We bound $\bE[T_C]$ by way of stopping times $\tau_m$, defined as the $m$-th time the iterates of SGD enter $C$. Formally, for any sequence $\{\bm{\theta}_k\}_{k=0}^\infty$ generated by SGD starting at $\bm{\theta}_0 = \bm{0}$, we set
\begin{equation}\label{eq:tau1}
\tau_1:=\inf\{k>0:\bm{\theta}_k\in C\}
\end{equation}
and inductively, for $m\geq 2$,
\begin{equation}\label{eq:taum}
\tau_m:=\inf\{k>\tau_{m-1}:\bm{\theta}_k\in C\}.
\end{equation}
The following lemma formalizes the discussion above.
\begin{lemma}\label{lem:ETleqET_C}
Let $\{\bm{\theta}_k\}_{k=0}^\infty$ be a sequence generated by SGD such that $\bm{\theta}_0 = \bm{0}$ and suppose that $\bE[\tau_m]<+\infty$ for all $m\geq 1$. Then the following holds
\begin{equation}
\bE[T]\leq \bE[T_C]\leq \sum_{m=1}^{\infty} \bE[\tau_m](1-\delta)^{m-1},
\end{equation}
where $\delta$ satisfies \eqref{eq:probability_delta}.
\end{lemma}
\begin{proof}
We first show that \begin{equation} \label{eq:analysis_1} \bE\left[1_{\{T_C\geq \tau_m\}}\right]\leq (1-\delta)^{m-1}.\end{equation} Define the $\sigma$-algebra $\mathcal{F}'=\sigma(\bm{\theta}_0,\bm{\xi}_1,\bm{\xi}_2,\cdots)$. From the independence of the $\hat{\bm{\xi}}_k$'s from $\mathcal{F}'$, and since $\tau_i<+\infty$ a.s. for all $i\geq 1$, we obtain:
\begin{align*}
\bE\left[1_{\{ T_C\geq \tau_m\}}|\mathcal{F}'\right]
&= \bE\left[1_{\{ \hat{\bm{\xi}}_{\tau_1}^T\bm{\theta}_{\tau_1}<1\}}\cdots1_{\{ \hat{\bm{\xi}}_{\tau_{m-1}}^T\bm{\theta}_{\tau_{m-1}}<1\}}|\mathcal{F}'\right]\\&=\prod_{i=1}^{m-1}\bE\left[1_{\{ \hat{\bm{\xi}}_{\tau_i}^T\bm{\theta}_{\tau_i}<1\}}|\mathcal{F}'\right]\\&\leq (1-\delta)^{m-1}.
\end{align*}
By taking expectations, we conclude \eqref{eq:analysis_1} holds. Now since $\bE[1_{\{T_C=+\infty\}}]\leq \bE[1_{\{T_C\geq \tau_m\}}]$ for all $m\geq 1$, it follows from \eqref{eq:analysis_1} that $T_C<\infty$ a.s. We next observe that
\begin{align*}
\bE\left[T_C1_{\{ T_C= \tau_m\}}|\mathcal{F}'\right]&=\bE\left[\tau_m1_{\{ T_C= \tau_m\}}|\mathcal{F}'\right]\\
&\le\tau_m\bE\left[1_{\{ \hat{\bm{\xi}}_{\tau_{1}}^T\bm{\theta}_{\tau_{1}}<1\}}\cdots1_{\{ \hat{\bm{\xi}}_{\tau_{m-1}}^T\bm{\theta}_{\tau_{m-1}}<1\}}|\mathcal{F}'\right]\\
&=\tau_m\prod_{i=1}^{m-1}\bE\left[1_{\{ \hat{\bm{\xi}}_{\tau_{i}}^T\bm{\theta}_{\tau_{i}}<1\}}|\mathcal{F}'\right]\\
&\leq \tau_m(1-\delta)^{m-1}.
\end{align*}
Taking expectations yields $\bE\left[T_C1_{\{T_C=\tau_m\}}\right]\leq \bE\left[\tau_m\right](1-\delta)^{m-1}$ for all $m\geq 1$. Now since $T_C<\infty$ a.s. we get $1=\sum_{m=1}^{+\infty}1_{\{T_C=\tau_m\}}$ a.s. This yields that
\[
\bE[T]\leq \bE[T_C]=\sum_{m=1}^{\infty} \bE[T_C1_{\{T_C=\tau_m\}}]\leq \sum_{m=1}^{\infty}\bE[\tau_m](1-\delta)^{m-1}.
\]
The proof is complete.
\end{proof}
Now, in view of Lemma \ref{lem:ETleqET_C}, it suffices to bound $\bE[\tau_m]$ by a sequence which cannot grow too fast in $m$. Indeed, we show that \eqref{eq:driftequation} implies the following
\begin{equation}
\bE[\tau_m]=\mathcal{O}(m).
\end{equation}
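Concretely, if $\bE[\tau_m]\leq c_1+c_2 m$, Lemma \ref{lem:ETleqET_C} gives $\bE[T]\leq c_1/\delta+c_2/\delta^2=\mathcal{O}(1/\delta^2)$, by the standard identities $\sum_{m\geq 1}(1-\delta)^{m-1}=1/\delta$ and $\sum_{m\geq 1}m(1-\delta)^{m-1}=1/\delta^2$. A quick numeric check of this bookkeeping, with illustrative constants of our choosing:

```python
delta, c1, c2 = 0.25, 2.0, 3.0
# Partial sum of (c1 + c2*m) * (1 - delta)^(m-1); the tail is negligible by m = 2000
partial = sum((c1 + c2 * m) * (1 - delta) ** (m - 1) for m in range(1, 2000))
closed = c1 / delta + c2 / delta**2  # = 8 + 48 = 56
assert abs(partial - closed) < 1e-9
```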
\begin{theorem}(Low Variance Regime)\label{thm:low}
Let $\{\bm{\theta}_k\}_{k=0}^\infty$ be a sequence generated by Algorithm~\ref{alg:SGD_termination} such that $\bm{\theta}_0=\bm{0}$. There exist positive constants $c,b$ and $M$ such that, provided $\sigma \leq c\Vert \bm{\mu} \Vert$, the following holds.
\begin{equation}
\bE[T]\leq 2+\frac{2M^2}{b}\cdot\left(\Phi^c\left(\frac{\Vert \bm{\mu} \Vert}{\sigma} \right)+\frac{\alpha\sigma^3}{ \Vert \bm{\mu} \Vert }\cdot\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{\Vert \bm{\mu} \Vert^2}{2\sigma^2} \right)+1 \right).
\end{equation}
Here the constants $c,b$ and $M$ are defined as follows:
\begin{enumerate}
\item For the logistic loss,
\begin{equation}\label{eq:para_log}
c=0.33,\quad b=\alpha\Vert \bm{\mu} \Vert^2,\quad \text{ and } M=501+640\alpha \Vert \bm{\mu} \Vert^2.
\end{equation}
\item For the hinge loss,
\begin{equation}\label{eq:para_hinge}
c=1.25,\quad b=\alpha\Vert \bm{\mu} \Vert^2,\quad \text{ and } M=501+782\alpha \Vert \bm{\mu} \Vert^2.
\end{equation}
\end{enumerate}
\end{theorem}
Therefore, on relatively separable data (\textit{i.e.} in the low variance regime), the expected waiting time before termination exponentially decreases as the data becomes more separable (\textit{i.e.} $\sigma \to 0$).
We prove Theorem \ref{thm:low} in Section \ref{sec:proof_low}. The next theorem shows that the expected value of the stopping time is finite provided that $\sigma > c\Vert \bm{\mu} \Vert$ and the step-size is small enough.
\begin{theorem}(High Variance Regime)\label{thm:high} Suppose that $\sigma > c\Vert \bm{\mu} \Vert$ where $c$ is defined in \eqref{eq:para_log} and \eqref{eq:para_hinge}. Then there exists a universal positive constant $A$ such that if the step-size $\alpha$ satisfies
\begin{equation}
\alpha \leq A\cdot\frac{\Vert \bm{\mu} \Vert^2}{\sigma^2(\Vert \bm{\mu} \Vert^2+d\sigma^2)},
\end{equation}
then it holds that $\bE[T]<+\infty$. In particular, the termination criterion is met almost surely.
\end{theorem}
It remains to determine whether the classifier at termination, $\bm{\theta}_T$, has desirable accuracy. By the scale-invariance of optimal classifiers, a classifier yields a lower probability of misclassification the closer its direction aligns with that of an optimal classifier. In view of this, it suffices to bound $\bE[\vert \bm{v}^T\bm{\theta}_T\vert]$ for any unit vector $\bm{v}$ perpendicular to $\bm{\theta}^*$. The following theorem establishes such a bound.
\begin{theorem}\label{thm:angle_bound}
Let $\bm{\theta}_0=\bm{0}$. Fix any unit vector $\bm{v}\in \R^d$ such that $\bm{v}^T\bm{\theta}^*=0$. Then the following estimate holds
\begin{equation}
\bE[\vert \bm{v}^T\bm{\theta}_T\vert]\leq \sigma \alpha \sqrt{\frac{2}{\pi}} \bE[T].
\end{equation}
\end{theorem}
In the low variance regime, combining Theorems \ref{thm:low} and \ref{thm:angle_bound} for a fixed step-size $\alpha$ yields $\bE[\vert \bm{v}^T\bm{\theta}_T\vert]= \mathcal{O}(\sigma)$. Thus, the more separable the data set is, the more accurate the classifier $\bm{\theta}_T$ is on average. In the high variance regime, Theorem \ref{thm:high} yields a very loose bound. Yet despite this, our numerical results in Section \ref{sec:Num_Experiment} show promising accuracy of \eqref{eq: termination_test} in this case as well. We conjecture that the inequality can be significantly strengthened.
\input{arxiv_low_variance_regime.tex}
\input{arxiv_high_variance_regime.tex}
\input{arxiv_angle_bound.tex}
\subsection{Angle bound, proof of Theorem \ref{thm:angle_bound}}\label{sec:theorem_angle}
\begin{proof}[Proof of Theorem \ref{thm:angle_bound}] Recall the SGD algorithm for logistic regression uses the update
\[
\bm{\theta}_k=\bm{\theta}_{k-1}+\frac{\alpha\bm{\xi}_k}{1+\exp(\bm{\xi}_k^T\bm{\theta}_{k-1})}
\]
and for hinge regression
\[
\bm{\theta}_k=\bm{\theta}_{k-1}+\alpha 1_{\{\bm{\xi}_k^T\bm{\theta}_{k-1}\leq 1\}}\bm{\xi}_{k}
\]
where $\bm{\theta}_0=\bm{0}$ and $\bm{\xi}_1,\bm{\xi}_2,\cdots \stackrel{i.i.d}{\sim} N(\bm{\mu},\sigma^2I_d)$. It clearly holds in both cases that
\begin{equation}
\left\vert\vert \bm{v}^T\bm{\theta}_{k}\vert-\vert \bm{v}^T\bm{\theta}_{k-1}\vert \right\vert\leq \alpha \vert \bm{v}^T\bm{\xi}_{k}\vert.
\end{equation}
We define a new random variable $X_k:=\vert \bm{v}^T\bm{\theta}_k\vert-k \sigma\alpha \sqrt{\frac{2}{\pi}} $. Observe that $\bE\left[\left|X_0\right|\right] = 0$ and for all $k \ge 1$, it holds that
\begin{align*}
\bE\left[\left\vert X_k\right\vert\right]&\leq \alpha\sum_{i=1}^k \bE\left[\left\vert \bm{v}^T\bm{\xi}_i \right\vert\right]+k\sigma\alpha \sqrt{\frac{2}{\pi}} <\infty,
\end{align*}
\textit{i.e.}, $X_k \in \mathcal{L}^1$ for all $k \geq 1$. Next, we have for any $k \ge 1$
\begin{align*}
\bE\left[\left\vert X_{k}-X_{k-1}\right\vert\,|\,\mathcal{F}_{k-1} \right] \leq \bE\left[\left\vert \vert \bm{v}^T\bm{\theta}_k\vert -\vert \bm{v}^T\bm{\theta}_{k-1}\vert\right\vert\,|\,\mathcal{F}_{k-1} \right]+\sigma\alpha \sqrt{\frac{2}{\pi}} &\leq 2\sigma\alpha \sqrt{\frac{2}{\pi}} .
\end{align*}
Here we used that $\bm{v}^T\bm{\xi}_k \sim N(0,\sigma^2)$ along with \eqref{fact:norm_Gaussians}. We also see that \begin{align*}
\bE\left[\vert \bm{v}^T\bm{\theta}_{k}\vert\,|\,\mathcal{F}_{k-1} \right]&\leq \vert \bm{v}^T\bm{\theta}_{k-1}\vert+ \sigma \alpha\sqrt{\frac{2}{\pi}} \quad \Rightarrow \quad \bE\left[X_{k}\,|\,\mathcal{F}_{k-1} \right]\leq X_{k-1}.
\end{align*}
Therefore, we have shown that $X_0,X_1,\cdots$ is a super-martingale. By Theorem \ref{thm:Durret_Martingale}, we have $\bE\left[X_{T}\right]\leq 0$. The result follows.
\end{proof}
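The proof uses the half-normal mean $\bE\vert Z\vert=\sigma\sqrt{2/\pi}$ for $Z\sim N(0,\sigma^2)$, applied to $Z=\bm{v}^T\bm{\xi}_k$ via \eqref{fact:norm_Gaussians}. A Monte Carlo sanity check of this fact (with an arbitrary $\sigma$ of our choosing):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.7
z = rng.normal(0.0, sigma, size=4_000_000)

# E|Z| = sigma * sqrt(2/pi) for Z ~ N(0, sigma^2)
assert abs(np.abs(z).mean() - sigma * np.sqrt(2 / np.pi)) < 5e-3
```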
\section{Appendix}
In this section, we provide proofs of technical results that we used in the paper.
\subsection{Drift Analysis Lemma}
Here we state and prove a result on Drift Analysis that we used in the paper. Recall that we require the following equation to hold
\begin{equation}\label{drift_equation1_appendix}
\left( \bE\left[V(\bm{\theta}_k)|\mathcal{F}_{k-1} \right]-V(\bm{\theta}_{k-1}) \right)1_{\{\bm{\theta}_{k-1}\not\in C\}}\leq -1_{\{\bm{\theta}_{k-1}\not\in C\}}.
\end{equation}
However, when the iterate lies inside $C$, then we do not assume any bound on the expected increase in the drift function. Nevertheless, when the target set $C$ is compact then there exists a positive constant $b>0$ such that
\begin{equation}\label{drift_equation2_appendix}
\left( \bE\left[V(\bm{\theta}_k)|\mathcal{F}_{k-1} \right]-V(\bm{\theta}_{k-1}) \right)1_{\{\bm{\theta}_{k-1}\in C\}}\leq b1_{\{\bm{\theta}_{k-1}\in C\}}.
\end{equation}
The following lemma bounds the expected value of return times $\tau_m^C$.
\begin{lemma}\label{lem:drift_lemma_appendix} Suppose that for some test function $V$ and a target set $C$, the drift equation \eqref{drift_equation1_appendix} holds. Let $\hat{\bm{\theta}} \in \R^d\backslash C$. The following is then true
\begin{equation}
\bE[\tau_1^C|\bm{\theta}_0=\hat{\bm{\theta}}]\leq V(\hat{\bm{\theta}}).
\end{equation}
In addition, if both equations \eqref{drift_equation1_appendix} and \eqref{drift_equation2_appendix} hold, then the following is true
\begin{equation}
\bE[\tau_m^C|\bm{\theta}_0=\hat{\bm{\theta}}]\leq V(\hat{\bm{\theta}})+(m-1)\sup_{\bm{\theta}\in C} V(\bm{\theta}).
\end{equation}
\end{lemma}
\begin{proof}
See \cite{meyn2012markov}.
\end{proof}
\subsection{Stopping Time Lemma}\label{stopping_time_lemma}
In light of the drift equation \eqref{drift_equation1_appendix}, the iterates are inclined towards the target set $C$. With probability at least $\delta$, the test is activated once we are in $C$. Hence, the expected value of the stopping time $T$, i.e. $\bE[T]$, can be upper-bounded in terms of $\delta$ and the expected return times $\bE[\tau_m^C]$.
\begin{lemma}\label{lem:stopping_time_appendix}
Suppose that $\bE[\tau_m^{C}]<\infty$ for all $m\geq 1$. Then $T_C<+\infty$ almost surely. Moreover, the following holds.
\begin{equation}\label{eq:ST_tctm}
\bE[T]\leq \bE[T_C]\leq \sum_{m=1}^{\infty}\bE[\tau_m^{C}](1-\delta)^{m-1},
\end{equation}
where the constant $\delta$ satisfies
\begin{equation}
\delta= \min_{\bm{\theta} \in C} \mathbb{P}_{\bm{\xi} \sim \mathcal{P}_*}\left(\bm{\xi}^T\bm{\theta}\geq 1 \right).
\end{equation}
\end{lemma}
\subsection{Basic Convex Analysis Lemma}
Here we state and prove two basic results from convex analysis that we used in the paper.
\begin{lemma}\label{lem:convex_analysis_lemma}
Suppose that $g:\R^{\geq 0}\to \R$ is a convex function with a minimizer at $\rho^*>0$. Assume that $g$ is twice differentiable on the interval $[\tfrac{3}{4}\rho^*,\tfrac{5}{4}\rho^*]$. Moreover, assume that there exists a constant $r>0$ such that $g''(\rho)\geq r$ for all $\rho \in [\tfrac{3}{4}\rho^*,\tfrac{5}{4}\rho^*]$. Then it holds that
\begin{equation}\label{eq0:cvx_lemma}
g(\rho)-g(\rho^*)\geq \frac{\rho^*r}{8}\vert \rho-\rho^*\vert \quad \text{for all $\rho \not\in [\tfrac{1}{2}\rho^*,\tfrac{3}{2}\rho^*]$}.
\end{equation}
\end{lemma}
\begin{proof}
First, assume that $\rho > \tfrac{3}{2}\rho^*$ holds. There exists $\hat{\rho}\in [\rho^*,\tfrac{5\rho^*}{4}]$ such that
\begin{equation}\label{eq1:cvx_lemma}
g'(\rho^*+\frac{\rho^*}{4})=g'(\rho^*+\frac{\rho^*}{4})-g'(\rho^*)=\frac{\rho^*}{4}g''(\hat{\rho})\geq \frac{\rho^*r}{4},
\end{equation}
where we used that $g'(\rho^*)=0$. By the convexity of $g$, for any $\rho>\frac{3\rho^*}{2}$, we have
\[
g(\rho)\geq g(\rho^*+\frac{\rho^*}{4})+g'(\rho^*+\frac{\rho^*}{4})(\rho-\frac{5\rho^*}{4})\geq g(\rho^*)+\frac{\rho^*r}{4}(\rho-\frac{5\rho^*}{4}).
\]
Note that the second inequality follows from \eqref{eq1:cvx_lemma} and the fact that $g$ attains its minimum at $\rho^*$. Using the inequality $2(\rho-\frac{5\rho^*}{4})\geq \frac{\rho^*}{4}+\rho-\frac{5\rho^*}{4}=\rho-\rho^*$, valid since $\rho-\frac{5\rho^*}{4}\geq \frac{\rho^*}{4}$, we conclude
\begin{equation}\label{eq:cvx_lemma1}
g(\rho)-g(\rho^*)\geq \tfrac{1}{8}\rho^*r(\rho-\rho^*) \quad \text{for all $\rho > \frac{3\rho^*}{2}$}.
\end{equation}
We now consider the second case, $\rho<\frac{\rho^*}{2}$. Similarly as above, we get
\begin{equation}\label{eq2:cvx_lemma}
-g'(\rho^*-\frac{\rho^*}{4})= g'(\rho^*)-g'(\rho^*-\frac{\rho^*}{4})=\frac{\rho^*}{4}g''(\hat{\rho})\geq \frac{\rho^*r}{4}
\end{equation}
for some $\hat{\rho}\in [\frac{3\rho^*}{4},\rho^*]$. Since $\rho<\frac{\rho^*}{2}$, we have
\[
g(\rho)\geq g(\rho^*-\frac{\rho^*}{4})+g'(\rho^*-\frac{\rho^*}{4})(\rho-\frac{3\rho^*}{4})\geq g(\rho^*)+\frac{\rho^*r}{4}(\frac{3\rho^*}{4}-\rho).
\]
Here, again, the first inequality follows from convexity of $g$ and the second inequality from \eqref{eq2:cvx_lemma} and $g(\rho^*-\frac{\rho^*}{4})\geq g(\rho^*)$. We note that $2(\frac{3\rho^*}{4}-\rho)\geq \frac{\rho^*}{4}+\frac{3\rho^*}{4}-\rho=\rho^*-\rho$ and therefore
\begin{equation}\label{eq:cvx_lemma2}
g(\rho)-g(\rho^*) \geq \frac{\rho^*r}{8}(\rho^*-\rho)\quad \text{for all $\rho < \frac{\rho^*}{2}$}.
\end{equation}
Combining \eqref{eq:cvx_lemma1} and \eqref{eq:cvx_lemma2}, the result follows.
\end{proof}
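As a quick sanity check of the lemma (an illustrative example, not used elsewhere), consider the quadratic $g(\rho)=(\rho-\rho^*)^2$, for which $g''\equiv 2$, so that $r=2$ is admissible. The lemma then asserts
\[
(\rho-\rho^*)^2\geq \frac{\rho^*}{4}\vert \rho-\rho^*\vert \quad \text{for all $\rho\not\in[\tfrac{1}{2}\rho^*,\tfrac{3}{2}\rho^*]$},
\]
which indeed holds, since $\vert \rho-\rho^*\vert\geq \tfrac{1}{2}\rho^*$ on this set.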
The following technical lemma is also used in the paper.
\begin{lemma}\label{lem:technical_convex_bound_general}
Consider the following optimization problem
\[
\min_{\bm{\theta}\in \R^d} f(\bm{\theta}):=\bE_{(\bm{\xi},y)\sim \mathcal{P}_*}\left[\ell(\bm{\xi}^T\bm{\theta})\right],
\]
where $\ell$ is a convex function. The sequences $\{\bm{\theta}_k, \bm{\xi}_k \}_{k = 0}^\infty$ generated by Algorithm~\ref{alg:SGD_termination} satisfy for any $\bm{\theta} \in \R^d$ the following
\begin{equation} \label{eq:high_decrease}
f(\bm{\theta}_{k-1})-f(\bm{\theta})\leq \frac{1}{2\alpha}\left(\Vert \bm{\theta}_{k-1}-\bm{\theta} \Vert^2-\bE\left[\Vert \bm{\theta}_{k}-\bm{\theta} \Vert^2\, |\mathcal{F}_{k-1}\right] \right)+\frac{\alpha}{2}\bE[\Vert\nabla_{\bm{\theta}}\ell\left( \bm{\xi}_k^T\bm{\theta}_{k-1}\right)\Vert^2 \, | \, \mathcal{F}_{k-1}],
\end{equation}
for all $k\geq 1$, where $f$ is defined in \eqref{optimization_problem} and the filtration $\{\mathcal{F}_k\}_{k=0}^{+\infty}$ in \eqref{eq:sigma}.
\end{lemma}
\begin{proof}
Define the quantity
\begin{equation*}
\bm{g}_k:=\frac{1}{\alpha}\left( \bm{\theta}_{k-1}-\bm{\theta}_{k}\right) = \nabla_{\bm{\theta}}\ell\left(\bm{\xi}_k^T\bm{\theta}_{k-1} \right). \end{equation*}
Here, the derivative with respect to $\bm{\theta}$ and the expectation over $\bm{\xi}_k$ are interchangeable, and therefore we obtain that
\[
\bE_{\bm{\xi}_{k}}\left[\bm{g}_k|\mathcal{F}_{k-1} \right]=\nabla_{\bm{\theta}} f(\bm{\theta}_{k-1}).
\]
By convexity of the function $f$, we have for any $\bm{\theta} \in \R^d$ the following
\begin{align*}
\norm{\bm{\theta}_{k}-\bm{\theta}}^2 &= \norm{\bm{\theta}_{k-1}-\bm{\theta}}^2 -2 \alpha \bm{g}_k^T (\bm{\theta}_{k-1}-\bm{\theta}) + \alpha^2\norm{\bm{g}_k}^2\\
&= \norm{\bm{\theta}_{k-1}-\bm{\theta}}^2 - 2\alpha(\bm{g}_k-\bE_{\bm{\xi}_{k}}\left[\bm{g}_k|\mathcal{F}_{k-1} \right])^T(\bm{\theta}_{k-1}-\bm{\theta}) - 2 \alpha \bE_{\bm{\xi}_{k}}\left[\bm{g}_k|\mathcal{F}_{k-1} \right]^T (\bm{\theta}_{k-1}-\bm{\theta})+\alpha^2 \norm{\bm{g}_k}^2\\
&\le \norm{\bm{\theta}_{k-1}-\bm{\theta}}^2- 2\alpha(\bm{g}_k-\bE_{\bm{\xi}_{k}}\left[\bm{g}_k|\mathcal{F}_{k-1} \right])^T(\bm{\theta}_{k-1}-\bm{\theta})-2\alpha (f(\bm{\theta}_{k-1})-f(\bm{\theta})) + \alpha^2 \norm{\bm{g}_k}^2.
\end{align*}
By taking conditional expectations with respect to $\mathcal{F}_{k-1}$ and rearranging the above inequality, the result follows.
\end{proof}
\section{Background and preliminaries}\label{sec:defs}
Throughout we consider a Euclidean space, denoted by $\R^d$, with an inner product and an induced norm $\norm{\cdot}$. The set of non-negative real numbers is denoted by $\R_{\geq 0}$. Bold-faced variables are vectors. Throughout, the matrix $I_d$ is the $d\times d$ identity matrix. All stochastic quantities defined hereafter live on a probability space denoted by $(\bP, \Omega, \mathcal{F})$, with probability measure $\bP$ and $\sigma$-algebra $\mathcal{F}$ of subsets of $\Omega$. Recall that a random variable (vector) is a measurable map from $\Omega$ to $\R$ (to $\R^d$, respectively). An important example of a random variable is the \textit{indicator of the event $A \in \mathcal{F}$}:
\[1_A(\omega) = \begin{cases}
1, & \omega \in A\\
0, & \omega \not \in A.
\end{cases}\]
If $X$ is a measurable function and $t \in \R$, we often abbreviate the notation for the preimage under $X$, writing $\{\omega \in \Omega \, :\, X(\omega) \le t\} =: \{X \le t\}$.
As is often in probability theory, we will not explicitly define the space $\Omega$, but implicitly define it through random variables. For any sequence of random vectors $(\bm{X}_1, \bm{X}_2, \hdots, \bm{X}_k)$, we denote the \textit{$\sigma$-algebra generated by random vectors $\bm{X}_1, \bm{X}_2, \hdots, \bm{X}_k$} by the notation $\sigma(\bm{X}_1, \bm{X}_2, \bm{X}_3, \hdots, \bm{X}_k)$ and the \textit{expected value of $\bm{X}$} by $\bE[\bm{X}] := \int_{\Omega} \bm{X} \, d\bP$.
In particular, we are interested in normally distributed random variables. Below, we state some known results about normal distributions.
\paragraph{Normal distributions}
The \textit{probability density function of a univariate Gaussian} with mean $\mu$ and variance $\sigma^2$ is given by:
\begin{equation*}
\varphi(t) := \frac{1}{\sigma\sqrt{2\pi }}\exp\left(-\frac{(t-\mu)^2}{2\sigma^2}\right).
\end{equation*}
In particular, we write $\xi \sim N(\mu,\sigma^2)$ to say that a random variable $\xi$ is distributed as a Gaussian with mean $\mu$ and variance $\sigma^2$, meaning $\bP(\xi \le t) = \int_{-\infty}^t \varphi(s) \, ds$. When the random variable $\xi \sim N(0,1)$, we denote its cumulative distribution function as
\[\Phi(t) := \bP(\xi \le t) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^t \exp\left (-\frac{s^2}{2} \right ) \, ds,\]
and its complement by $\Phi^c(t) = 1-\Phi(t)$. The symmetry of a normal distribution around its mean yields the identity $\Phi(t) = \Phi^c(-t)$.
One can, analogously, formulate a higher dimensional version of the univariate normal distribution, called a \textit{multivariate normal distribution}. A random vector follows a multivariate normal distribution if every linear combination of its components is a univariate normal random variable. We denote such multivariate normals by $\bm{\xi} \sim N(\bm{\mu},\Sigma)$, where $\bm{\mu}\in \R^d$ and $\Sigma$ is a symmetric positive semidefinite $d\times d$ matrix.
Normal distributions have interesting properties which simplify our computations throughout the paper. We list those which we specifically rely on. See \cite{Famoye_univardist_book} for proofs. Below, $\bm{v},\bm{v}'\in \R^d$, $r\in \R$, $\bm{\xi} \sim N(\bm{\mu},\sigma^2I_d)$ and $\xi\sim N(\mu,\sigma^2)$. Also, $\psi \sim N(0,1)$.
Throughout our analysis, we encounter random variables of the form $\bm{v}^T\bm{\xi}+r$, i.e. affine transformations of a given normal distribution. A fundamental property of normal distributions is that they stay in the same class of distributions after any such transformation. In other words, it holds that
\begin{equation}\label{eq:fact_affine}
\bm{v}^T\bm{\xi}+r\sim N(\bm{v}^T\bm{\mu}+r, \sigma^2\Vert \bm{v} \Vert^2).
\end{equation}
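For completeness, \eqref{eq:fact_affine} can be verified by computing the first two moments directly: any affine image of a Gaussian vector is again Gaussian, and
\[
\bE\left[\bm{v}^T\bm{\xi}+r\right]=\bm{v}^T\bm{\mu}+r, \qquad \mathrm{Var}\left(\bm{v}^T\bm{\xi}+r\right)=\bm{v}^T\left(\sigma^2 I_d\right)\bm{v}=\sigma^2\Vert \bm{v}\Vert^2.
\]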
Working with independent random variables makes the analysis significantly easier. In particular, it is essential for us to know when the two random variables $\bm{v}^T\bm{\xi}$ and $\bm{v}'^T\bm{\xi}$ are independent. We will use the following simple fact:
\begin{equation} \label{eqn:fact_independence}
\mbox{$\bm{v}^T\bm{\xi}$ and $\bm{v}'^T\bm{\xi}$ are independent} \quad \text{ if and only if } \quad \bm{v}^T\bm{v}'=0.
\end{equation}
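Indeed, \eqref{eqn:fact_independence} holds because the pair $(\bm{v}^T\bm{\xi},\bm{v}'^T\bm{\xi})$ is jointly Gaussian with
\[
\mathrm{Cov}\left(\bm{v}^T\bm{\xi},\bm{v}'^T\bm{\xi}\right)=\bm{v}^T\left(\sigma^2 I_d\right)\bm{v}'=\sigma^2\,\bm{v}^T\bm{v}',
\]
and jointly Gaussian random variables are independent if and only if they are uncorrelated.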
We will also use the following simple fact about truncated normal distributions:
\begin{equation} \label{eqn:fact_truncated}
\bE_{\xi}[\xi1_{\{\xi\leq b\}}]=0 \Longrightarrow \Phi\left(\frac{b-\mu}{\sigma}\right)\cdot \exp\left(\frac{1}{2}\cdot\left(\frac{b-\mu}{\sigma}\right)^2 \right)=\frac{\sigma}{\sqrt{2\pi}\,\mu}.
\end{equation}
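This fact can be recovered by writing $\xi=\mu+\sigma\psi$ with $\psi\sim N(0,1)$ and setting $z:=\frac{b-\mu}{\sigma}$, which gives
\[
\bE_{\xi}[\xi 1_{\{\xi\leq b\}}]=\mu\,\Phi(z)+\sigma\,\bE_{\psi}[\psi 1_{\{\psi\leq z\}}]=\mu\,\Phi(z)-\frac{\sigma}{\sqrt{2\pi}}\exp\left(-\frac{z^2}{2}\right);
\]
setting the left-hand side to zero and rearranging yields the stated identity.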
We conclude our remarks on normal distributions with the statement of two facts about the expected value of their norm. The following hold:
\begin{equation}\label{fact:norm_Gaussians}
\bE\left[\Vert \bm{\xi} \Vert^2 \right]= \Vert \bm{\mu} \Vert^2+d\sigma^2, \quad \bE_{\xi}[\vert \xi \vert]\leq \sqrt{\frac{2}{\pi}}\cdot \sigma+\vert\mu\vert \quad \text{and} \quad \bE\left[\vert \psi \vert \right] = \sqrt{\frac{2}{\pi}}.
\end{equation}
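The first identity in \eqref{fact:norm_Gaussians} follows coordinate-wise: writing $\bm{\xi}=(\xi_1,\dots,\xi_d)$ with independent coordinates $\xi_i\sim N(\mu_i,\sigma^2)$,
\[
\bE\left[\Vert \bm{\xi}\Vert^2\right]=\sum_{i=1}^d \bE[\xi_i^2]=\sum_{i=1}^d\left(\mu_i^2+\sigma^2\right)=\Vert \bm{\mu}\Vert^2+d\sigma^2.
\]
The second bound follows from the triangle inequality $\vert \xi\vert\leq \vert \xi-\mu\vert+\vert \mu\vert$ together with the third identity applied to $\frac{\xi-\mu}{\sigma}\sim N(0,1)$.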
\paragraph{Martingales and stopping times} Here we state some relevant definitions and theorems used in analyzing our stopping criteria in Section~\ref{sec:Analysis}. We refer the reader to \cite{Durrett_probability_book} for further details. For any probability space $\left(\bP,\Omega,\mathcal{F}\right)$, we call a sequence of $\sigma$-algebras $\{\mathcal{F}_k\}_{k=0}^\infty$ a \textit{filtration} provided that $\mathcal{F}_k \subseteq \mathcal{F}$ for all $k$ and $\mathcal{F}_0 \subseteq \mathcal{F}_1 \subseteq \mathcal{F}_2 \subseteq \cdots$ holds. Given a filtration, it is natural to consider a sequence of random variables $\{X_k\}_{k=0}^\infty$ adapted to the filtration, namely each $X_k$ is an $\mathcal{F}_k$-measurable function. If, in addition, the sequence satisfies
\[\bE[|X_k|] < \infty \quad \text{and} \quad \bE[X_{k+1} | \mathcal{F}_k] \le X_k \quad \text{for all $k$},\]
we say $\{X_k\}_{k=0}^\infty$ is a \textit{supermartingale}. In probability theory, we are often interested in the (random) time at which a given stochastic sequence exhibits a particular behavior. Such random variables are known as \textit{stopping times}. Precisely, a stopping time is a random variable $T: \Omega \to \mathbb{N} \cup \{0, \infty\}$ where the event $\{T=k\} \in \mathcal{F}_k$ for each $k$, i.e., the decision to stop at time $k$ must be measurable with respect to the information known at that time. Supermartingales and stopping times are closely tied together, as seen in the theorem below, which gives a bound on the expectation of a stopped supermartingale.
\begin{theorem}[See \cite{Durrett_probability_book} Theorem 4.8.5]\label{thm:Durret_Martingale}
Suppose that $\left\{X_k\right\}_{k=0}^{\infty}$ is a supermartingale with respect to the filtration $\left\{\mathcal{F}_k\right\}_{k=0}^{\infty}$ and let $T$ be any stopping time satisfying $\bE[T]<\infty$. Moreover, if $\bE\left[\vert X_{k+1}-X_k\vert \,|\, \mathcal{F}_k\right]\leq B$ a.s. for some constant $B>0$, then it holds that $\bE[X_T]\leq \bE[X_0]$.
\end{theorem}
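For intuition, consider the textbook example of a simple symmetric random walk $X_k=\sum_{i=1}^k \varepsilon_i$ with i.i.d. signs $\varepsilon_i\in\{\pm 1\}$, a martingale (hence a supermartingale) whose increments are bounded by $B=1$, and let $T$ be the first exit time from a bounded interval containing $0$, which satisfies $\bE[T]<\infty$. Theorem \ref{thm:Durret_Martingale} then yields $\bE[X_T]\leq \bE[X_0]=0$; in this example equality holds, since the walk is a martingale.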
As we illustrate in Section~\ref{sec:Analysis}, a connection between stopping criteria (i.e. the decision to stop an algorithm) and stopping times naturally exists.
\subsection{High regime, proof of Theorem \ref{thm:high}}\label{sec:proof_high}
In this section, we consider the high variance regime. We consider the target set $C$ and the function $V$ defined in \eqref{eq:targetset_highnoise} and \eqref{eq:driftfunction_highnoise}, respectively, \textit{i.e.}
\begin{equation}\label{eq:CV_high_recall}
C:=\left\{\bm{\theta}: \vert \rho -\rho^*\vert<\tfrac{1}{2}\rho^* \text{ and } \sigma \Vert \tilde{\bm{\theta}}\Vert \leq c'\right\} \quad \text{and}\quad V(\bm{\theta}):=\frac{1}{2\alpha}\Vert \bm{\theta}-\bm{\theta}^*\Vert^2,
\end{equation}
where the minimizer $\bm{\theta}^*=\rho^*\bm{\mu}$ is defined in Lemma \ref{lem:minimizers} and the constant $c'$ is to be determined. We first aim to show that $V$ is a drift function with respect to the set $C$ under the high variance regime assumption, meaning $\sigma \geq c\Vert \bm{\mu} \Vert$. We next state a standard SGD convergence result applied to the logistic and hinge loss functions.
\begin{lemma}\label{lem:technical_convex_bound}
Consider the optimization problem \eqref{optimization_problem}, where $\ell:\R\times \R \rightarrow \R$ is either the logistic or hinge loss function. Let $\bm{\theta}^*$ denote the unique minimizer of $f$ in \eqref{optimization_problem} and let $\bm{\theta}_0\in \R^d$. The sequence $\{\bm{\theta}_k \}_{k = 0}^\infty$ generated by SGD satisfies the following for all $k\geq 1$,
\begin{equation} \label{eq:high_convex_tech_lemma}
f(\bm{\theta}_{k-1})-f(\bm{\theta}^*)\leq \frac{1}{2\alpha}\left(\Vert \bm{\theta}_{k-1}-\bm{\theta} ^*\Vert^2-\bE\left[\Vert \bm{\theta}_{k}-\bm{\theta}^* \Vert^2\, |\mathcal{F}_{k-1}\right] \right)+\frac{\alpha}{2}\left(\Vert \bm{\mu} \Vert^2+d\sigma^2\right).
\end{equation}
\end{lemma}
\begin{proof}
Define the quantity
\begin{equation*}
\bm{g}_k:=\frac{1}{\alpha}\left( \bm{\theta}_{k-1}-\bm{\theta}_{k}\right) = \nabla_{\bm{\theta}}\ell\left(\bm{\xi}_k^T\bm{\theta}_{k-1},1 \right). \end{equation*}
Here, it is easy to check that the derivative with respect to $\bm{\theta}$ and the expectation over $\bm{\xi}_k$ are interchangeable, thus yielding \[
\bE_{\bm{\xi}_{k}}\left[\bm{g}_k|\mathcal{F}_{k-1} \right]=\nabla_{\bm{\theta}} f(\bm{\theta}_{k-1}).
\]
By convexity of the function $f$, we have the following
\begin{align*}
\norm{\bm{\theta}_{k}-\bm{\theta}^*}^2 &= \norm{\bm{\theta}_{k-1}-\bm{\theta}^*}^2 -2 \alpha \bm{g}_k^T (\bm{\theta}_{k-1}-\bm{\theta}^*) + \alpha^2\norm{\bm{g}_k}^2\\
&= \norm{\bm{\theta}_{k-1}-\bm{\theta}^*}^2 - 2\alpha(\bm{g}_k-\bE_{\bm{\xi}_{k}}\left[\bm{g}_k|\mathcal{F}_{k-1} \right])^T(\bm{\theta}_{k-1}-\bm{\theta}^*) - 2 \alpha \bE_{\bm{\xi}_{k}}\left[\bm{g}_k|\mathcal{F}_{k-1} \right]^T (\bm{\theta}_{k-1}-\bm{\theta}^*)+\alpha^2 \norm{\bm{g}_k}^2\\
&\le \norm{\bm{\theta}_{k-1}-\bm{\theta}^*}^2- 2\alpha(\bm{g}_k-\bE_{\bm{\xi}_{k}}\left[\bm{g}_k|\mathcal{F}_{k-1} \right])^T(\bm{\theta}_{k-1}-\bm{\theta}^*)-2\alpha (f(\bm{\theta}_{k-1})-f(\bm{\theta}^*)) + \alpha^2 \norm{\bm{g}_k}^2.
\end{align*}
By taking conditional expectations with respect to $\mathcal{F}_{k-1}$ and rearranging the above inequality, we obtain that
\begin{equation}\label{eq:label_ineq_blah1}
f(\bm{\theta}_{k-1})-f(\bm{\theta}^*)\leq \frac{1}{2\alpha}\left(\Vert \bm{\theta}_{k-1}-\bm{\theta} ^*\Vert^2-\bE\left[\Vert \bm{\theta}_{k}-\bm{\theta}^* \Vert^2\, |\mathcal{F}_{k-1}\right] \right)+\frac{\alpha}{2}\bE_{\bm{\xi}_k}\left[\Vert \nabla_{\bm{\theta}} \ell(\bm{\xi}_k^T\bm{\theta}_{k-1},1)\Vert^2 \,|\, \mathcal{F}_{k-1}\right].
\end{equation}
We next observe the following bound
\begin{equation}\label{eq:label_ineq_blah2}
\bE_{\bm{\xi}_k}[\Vert\nabla_{\bm{\theta}}\ell\left( \bm{\xi}_k^T\bm{\theta}_{k-1},1\right)\Vert^2 \, | \, \mathcal{F}_{k-1}]\leq \bE_{\bm{\xi}_k}[\Vert \bm{\xi}_k \Vert^2|\mathcal{F}_{k-1}]=\Vert \bm{\mu} \Vert^2+d\sigma^2.
\end{equation}
Combining \eqref{eq:label_ineq_blah1} and \eqref{eq:label_ineq_blah2}, the result follows.
\end{proof}
By Lemma \ref{lem:technical_convex_bound} for each $k\geq 1$, we deduce
\begin{equation}\label{eq:driftequation_high}
\bE[V(\bm{\theta}_k)|\mathcal{F}_{k-1}]-V(\bm{\theta}_{k-1})\leq -\left(f(\bm{\theta}_{k-1})-f(\bm{\theta}^*)\right)+\frac{\alpha}{2}(\Vert\bm{\mu}\Vert^2+d\sigma^2).
\end{equation}
Therefore, in order to show that the pair $(C,V)$ in \eqref{eq:CV_high_recall} satisfies the drift equation \eqref{eq:driftequation}, it suffices to lower bound the quantity $f(\bm{\theta}_{k-1})-f(\bm{\theta}^*)$ whenever $\bm{\theta}_{k-1}\not\in C$. To do so, we orthogonally decompose $\bm{\theta}_{k-1}=\rho_{k-1}\bm{\mu}+\tilde{\bm{\theta}}_{k-1}$, \textit{i.e.} $\bm{\mu}^T\tilde{\bm{\theta}}_{k-1}=0$ and $\rho_{k-1}\in \R$ and write
\begin{equation}\label{eq:quantity_split}
\begin{aligned}
f(\bm{\theta}_{k-1})-f(\bm{\theta}^*)= & \underbrace{f(\bm{\theta}_{k-1})-f(\rho_{k-1}\bm{\mu})}_{(a)}+\underbrace{f(\rho_{k-1}\bm{\mu})-f(\bm{\theta}^*)}_{(b)}.
\end{aligned}
\end{equation}
The assumption $\bm{\theta}_{k-1}\not \in C$ yields that either $\sigma \Vert \tilde{\bm{\theta}}_{k-1}\Vert \geq c'$ or $\vert \rho_{k-1}-\rho^*\vert\geq \tfrac{1}{2}\rho^*$. In Lemma \ref{lem:lower_bound_1} (resp. Lemma \ref{lem:lower_bound_2}), we show that the term (a) (resp. (b)) in \eqref{eq:quantity_split} is non-negative, and that it is lower bounded by a positive constant provided that $\sigma \Vert \tilde{\bm{\theta}}_{k-1}\Vert \geq c'$ and $\vert \rho_{k-1}-\rho^*\vert\leq \tfrac{1}{2}\rho^*$ (resp. $\vert \rho_{k-1}-\rho^*\vert\geq \tfrac{1}{2}\rho^*$).
\begin{lemma}(Lower bound for (a) in \eqref{eq:quantity_split})\label{lem:lower_bound_1}
Fix $\bm{\theta}\in \R^d$ and orthogonally decompose $\bm{\theta}=\rho\bm{\mu}+\tilde{\bm{\theta}}$ where $\bm{\mu}^T\tilde{\bm{\theta}}=0$ and $\rho \in \R$. Then the following are true
\begin{enumerate}
\item $f(\bm{\theta})-f(\rho\bm{\mu})\geq 0$.
\item $f(\bm{\theta})-f(\rho\bm{\mu})\geq 1$ provided that $\vert \rho-\rho^*\vert\leq \tfrac{1}{2}\rho^*$, $\sigma \Vert \tilde{\bm{\theta}}\Vert \geq c'$ and $\sigma \geq c\Vert \bm{\mu} \Vert$, where $c$ is defined in \eqref{eq:para_log} and \eqref{eq:para_hinge}. Here $\rho^*$ is defined in Lemma \ref{lem:minimizers}, and the constant $c'$ equals $436$ for the logistic loss and $8+10\rho^*\sigma^2$ for the hinge loss.
\end{enumerate}
\end{lemma}
\begin{proof} We consider the logistic and hinge loss separately.
\textbf{Logistic loss.} The two normal random variables, $\bm{\tilde{\theta}}^T\bm{\xi} \sim N(0,\sigma^2\Vert \bm{\tilde{\theta}} \Vert^2)$ and $\bm{\mu}^T\bm{\xi} \sim N(\Vert \bm{\mu} \Vert^2,\sigma^2\Vert \bm{\mu} \Vert^2)$, are independent by \eqref{eqn:fact_independence}. Since we have $\bE_{\bm{\xi}}[\log(\exp(-\bm{\tilde{\theta}}^T\bm{\xi}))]=\bE_{\bm{\xi}}[\log(\exp(\bm{\tilde{\theta}}^T\bm{\xi}))]=0$, it holds
\begin{align*}
f(\bm{\theta})=\bE_{\bm{\xi}} \left[ \log\left (1+\exp(-\bm{\theta}^T\bm{\xi})\right)\right] &= \bE_{\bm{\xi}} \left[\log \left(1 + \exp(-\tilde{\bm{\theta}}^T\bm{\xi}) \exp(-\rho \bm{\mu}^T\bm{\xi}) \right ) \right ]\\
&=\bE_{\bm{\xi}} \left[ \log\left (\exp(\bm{\tilde{\theta}}^T\bm{\xi})+\exp(-\rho \bm{\mu}^T\bm{\xi})\right)\right]\\
&=\bE_{\bm{\xi}} \left[ \log\left (\exp(-\bm{\tilde{\theta}}^T\bm{\xi})+\exp(-\rho \bm{\mu}^T\bm{\xi})\right)\right],
\end{align*}
where the last equality is true because $\bm{\tilde{\theta}}^T\bm{\xi} \sim -\bm{\tilde{\theta}}^T\bm{\xi}$. Therefore we obtain
\begin{align*}
\bE_{\bm{\xi}} &\left[ \log\left (1+\exp(-\bm{\theta}^T\bm{\xi})\right)\right]\\
& \qquad \qquad= \frac{1}{2} \bE_{\bm{\xi}} \left[ \log\left (\exp(\bm{\tilde{\theta}}^T\bm{\xi})+\exp(-\rho \bm{\mu}^T\bm{\xi})\right)\right] + \frac{1}{2}\bE_{\bm{\xi}} \left[ \log\left (\exp(-\bm{\tilde{\theta}}^T\bm{\xi})+\exp(-\rho \bm{\mu}^T\bm{\xi})\right)\right] \\
& \qquad \qquad= \frac{1}{2} \bE_{\bm{\xi}} \left[ \log\left ( (\exp(\bm{\tilde{\theta}}^T\bm{\xi})+\exp(-\rho \bm{\mu}^T\bm{\xi}))(\exp(-\bm{\tilde{\theta}}^T\bm{\xi})+\exp(-\rho \bm{\mu}^T\bm{\xi})) \right)\right]\\
&\qquad \qquad=\frac{1}{2}\bE_{\bm{\xi}} \left[\log\left(1+\exp(-\bm{\tilde{\theta}}^T\bm{\xi}-\rho \bm{\mu}^T\bm{\xi})+\exp(\bm{\tilde{\theta}}^T\bm{\xi}-\rho \bm{\mu}^T\bm{\xi})+\exp(-2\rho \bm{\mu}^T\bm{\xi})\right) \right].
\end{align*}
By the equality $\exp(\bm{\tilde{\theta}}^T\bm{\xi})+\exp(-\bm{\tilde{\theta}}^T\bm{\xi})= 2+4\sinh^2(\tfrac{\tilde{\bm{\theta}}^T\bm{\xi}}{2})$, we have
\begin{align*}
\bE_{\bm{\xi}} &\left[ \log\left (1+\exp(-\bm{\theta}^T\bm{\xi})\right)\right]\\
& \qquad \qquad \qquad =\frac{1}{2}\bE_{\bm{\xi}} \left[\log\left(1+2\exp(-\rho \bm{\mu}^T\bm{\xi})+\exp(-2\rho \bm{\mu}^T\bm{\xi})+4\sinh^2(\tfrac{\tilde{\bm{\theta}}^T\bm{\xi}}{2})\exp(-\rho \bm{\mu}^T\bm{\xi})\right) \right].
\end{align*}
Therefore, we have
\begin{equation} \begin{aligned} \label{eqn: high_noise_blah_20}
2\bE_{\bm{\xi}}\left[\log\left( \frac{1+\exp(-\bm{\theta}^T\bm{\xi})}{1+\exp(-\rho \bm{\mu}^T\bm{\xi})}\right)\right]&=2\bE_{\bm{\xi}}\left[\log(1+\exp(-\bm{\theta}^T\bm{\xi}))\right]-\bE_{\bm{\xi}}\left[\log\left(1+\exp(-\rho \bm{\mu}^T\bm{\xi})\right)^2\right]\\
\qquad \qquad &= \bE_{\bm{\xi}}\left[\log \left ( 1 + \frac{4\sinh^2(\frac{\tilde{\bm{\theta}}^T\bm{\xi}}{2})\exp(-\rho \bm{\mu}^T\bm{\xi})}{(1+ \exp(-\rho \bm{\mu}^T\bm{\xi}))^2} \right ) \right] \ge 0.
\end{aligned} \end{equation}
Thereby, we showed that $f(\bm{\theta})-f(\rho \bm{\mu})\geq 0$. Now we establish the positive lower bound. First, we note the identity $1+ \exp(-\rho \bm{\mu}^T\bm{\xi}) = 2 \exp( -\tfrac{\rho \bm{\mu}^T\bm{\xi}}{2}) \cosh(\tfrac{\rho \bm{\mu}^T\bm{\xi}}{2})$. Fix a constant $r > 0$ and consider the set $\{\bm{\xi}: |\tilde{\bm{\theta}}^T\bm{\xi}| \geq r\}$. Applying the inequality $x^2+y^2\geq 2\vert xy \vert$ and \eqref{eqn: high_noise_blah_20}, we obtain that
\begin{align}
2\bE_{\bm{\xi}}\left[\log\left( \frac{1+\exp(-\bm{\theta}^T\bm{\xi})}{1+\exp(-\rho \bm{\mu}^T\bm{\xi})}\right)\right] & = \bE_{\bm{\xi}}\left[\log\left(1 + \frac{4\sinh^2(\frac{\tilde{\bm{\theta}}^T\bm{\xi}}{2})\exp(-\rho \bm{\mu}^T\bm{\xi})}{(1+ \exp(-\rho \bm{\mu}^T\bm{\xi}))^2}\right) \right] \nonumber \\
&= \bE_{\bm{\xi}}\left[\log\left(1+\frac{\sinh^2(\frac{\tilde{\bm{\theta}}^T\bm{\xi}}{2})}{\cosh^2(\frac{\rho}{2}\bm{\mu}^T\bm{\xi})} \right) \right] \nonumber \\
&\geq \bE_{\bm{\xi}}\left[\log\left(1+\frac{\sinh^2(\frac{\tilde{\bm{\theta}}^T\bm{\xi}}{2})}{\cosh^2(\frac{\rho}{2}\bm{\mu}^T\bm{\xi})} \right)\cdot1_{\{\bm{\xi}:\vert \tilde{\bm{\theta}}^T\bm{\xi}\vert \geq r\}}\right] \label{eq:high_noise_blah_21}
\\
&\geq \bE_{\bm{\xi}}\left[\left(\log2+\log\left( \frac{\vert\sinh(\frac{\tilde{\bm{\theta}}^T\bm{\xi}}{2})\vert}{\cosh(\frac{\rho}{2}\bm{\mu}^T\bm{\xi})}\right) \right)\cdot1_{\{\bm{\xi}:\vert \tilde{\bm{\theta}}^T\bm{\xi}\vert \geq r\}}\right] \nonumber.
\end{align}
Here \eqref{eq:high_noise_blah_21} follows since $\log\left(1+\frac{\sinh^2(\frac{\tilde{\bm{\theta}}^T\bm{\xi}}{2})}{\cosh^2(\frac{\rho}{2}\bm{\mu}^T\bm{\xi})} \right)$ is always non-negative. From \eqref{eq:fact_affine}, we have $\bm{\mu}^T\bm{\xi} \sim N(\Vert \bm{\mu} \Vert^2,\sigma^2\Vert \bm{\mu} \Vert^2)$ and $\tilde{\bm{\theta}}^T\bm{\xi} \sim N(0,\sigma^2\Vert \tilde{\bm{\theta}} \Vert^2)$, so $\tilde{\bm{\theta}}^T\bm{\xi} = \sigma \| \tilde{\bm{\theta}} \| \psi$ in distribution, where $\psi \sim N(0,1)$. Moreover, a simple computation shows that $-\log\left(\cosh(\frac{\rho}{2}\bm{\mu}^T\bm{\xi})\right)1_{\{\vert \tilde{\bm{\theta}}^T\bm{\xi}\vert \geq r\}} \geq -\log\left(\cosh(\frac{\rho}{2}\bm{\mu}^T\bm{\xi})\right)$, since $\cosh(\frac{\rho}{2}\bm{\mu}^T\bm{\xi})\geq 1$ always holds. Using the inequality $\log\cosh(x)\leq \vert x\vert$ for all $x$, the following bound holds
\begin{equation} \begin{aligned} \label{eq: high_noise_22}
&\bE_{\bm{\xi}}\left[\log\left( \frac{1+\exp(-\bm{\theta}^T\bm{\xi})}{1+\exp(-\rho \bm{\mu}^T\bm{\xi})}\right)\right]\\
& \quad \ge \tfrac{1}{2} \log(2) \cdot \bE_{\psi}\left[1_{\{\vert \psi\vert \geq \frac{r}{\sigma \Vert\tilde{\bm{\theta}}\Vert}\}}\right] + \tfrac{1}{2} \bE_{\psi}\left[\log\left\vert\sinh(\tfrac{\sigma\Vert \tilde{\bm{\theta}}\Vert\psi}{2})\right\vert1_{\{\vert \psi\vert \geq \frac{r}{\sigma \Vert\tilde{\bm{\theta}}\Vert}\}}\right] - \tfrac{1}{2}\bE_{\bm{\xi}} \left [ \log( \cosh(\tfrac{\rho}{2} \bm{\mu}^T\bm{\xi})) \right ]\\
& \quad \ge \tfrac{1}{2} \log(2) \cdot \bE_{\psi}\left[1_{\{\vert \psi\vert \geq \frac{r}{\sigma \Vert\tilde{\bm{\theta}}\Vert}\}}\right] + \tfrac{1}{2} \bE_{\psi}\left[\log\left\vert\sinh(\tfrac{\sigma\Vert \tilde{\bm{\theta}}\Vert\psi}{2})\right\vert1_{\{\vert \psi\vert \geq \frac{r}{\sigma \Vert\tilde{\bm{\theta}}\Vert}\}}\right] - \tfrac{1}{2} \bE_{\bm{\xi}} \left [ | \tfrac{\rho}{2} \bm{\mu}^T\bm{\xi} | \right ]\\
&\quad\geq \tfrac{1}{2} \log(2) \cdot \bE_{\psi}\left[1_{\{\vert \psi\vert \geq \frac{r}{\sigma \Vert\tilde{\bm{\theta}}\Vert}\}}\right] + \tfrac{1}{2} \bE_{\psi}\left[\log\left\vert\sinh(\tfrac{\sigma\Vert \tilde{\bm{\theta}}\Vert\psi}{2})\right\vert1_{\{\vert \psi\vert \geq \frac{r}{\sigma \Vert\tilde{\bm{\theta}}\Vert}\}}\right]-\frac{3}{4}\left(\frac{\Vert \bm{\mu}\Vert^2}{\sigma^2}+ \sqrt{\frac{2}{\pi}}\cdot\frac{\Vert \bm{\mu} \Vert}{\sigma} \right),
\end{aligned} \end{equation}
where the last inequality uses \eqref{fact:norm_Gaussians} and $\rho\leq \frac{3}{\sigma^2}$. Using the inequality $\vert \sinh(x) \vert \geq \exp(\frac{\vert x\vert}{2})$ for all $\vert x \vert \geq 2\log(\sqrt{2}+1)$ and letting $r=4\log(\sqrt{2}+1)$, we obtain
\begin{align}
\tfrac{1}{2} \log(2) \cdot \bE_{\psi} \big [ 1_{\{|\psi| \ge \frac{4 \log( \sqrt{2}+ 1)}{\sigma \norm{\tilde{\bm{\theta}}}} \}} \big ] &+ \tfrac{1}{2}\bE_\psi\left[\log\left\vert\sinh(\tfrac{\sigma\Vert \tilde{\bm{\theta}}\Vert\psi}{2})\right\vert1_{\{|\psi| \geq \frac{4\log(\sqrt{2}+1)}{\sigma \Vert \tilde{\bm{\theta}}\Vert}\}}\right]\nonumber\\
&\geq \tfrac{1}{2} \log(2) \cdot \bE_{\psi} \big [ 1_{\{|\psi| \ge \frac{4 \log( \sqrt{2}+ 1)}{\sigma \norm{\tilde{\bm{\theta}}} } \}} \big ] + \tfrac{1}{2}\bE_\psi\left[\left\vert\tfrac{\sigma \Vert \tilde{\bm{\theta}}\Vert \psi}{4} \right\vert1_{\{|\psi| \geq \frac{4\log(\sqrt{2}+1)}{\sigma \Vert \tilde{\bm{\theta}}\Vert}\}}\right]\nonumber\\
&\geq \tfrac{1}{2} \log(2) \cdot \bE_{\psi}[1_{\{\vert\psi\vert \ge 1\}}] + \tfrac{1}{2} \bE_\psi\left[\left\vert\tfrac{\sigma \Vert \tilde{\bm{\theta}}\Vert \psi}{4} \right\vert1_{\{\vert\psi\vert \geq 1\}}\right] \label{eq:high_noise_blah_111} \\\nonumber
&\geq \left ( \tfrac{1}{2} \log(2) + \frac{\sigma \Vert \tilde{\bm{\theta}}\Vert}{8} \right )\cdot\Phi^c(1).
\end{align}
Here \eqref{eq:high_noise_blah_111} follows from the assumption that $\sigma \Vert \tilde{\bm{\theta}}\Vert\geq 436$, which gives $\frac{4\log(\sqrt{2}+1)}{\sigma\Vert \tilde{\bm{\theta}}\Vert}\leq 1$. Combining \eqref{eq: high_noise_22}, \eqref{eq:high_noise_blah_111} and the bounds $\sigma \geq 0.33\Vert \bm{\mu} \Vert$ and $\sigma\Vert \tilde{\bm{\theta}}\Vert \geq 436$, the result follows.
\textbf{Hinge loss.} We begin by denoting
$\xi_1:=\bm{\xi}^T\tilde{\bm{\theta}}$ and $\xi_2:=\bm{\xi}^T\bm{\mu}$. Notice that $\xi_1$ and $\xi_2$ are independent random variables by \eqref{eqn:fact_independence}, since $\bm{\mu}^T\tilde{\bm{\theta}}=0$. Recall that $\ell(t):=\ell(t,1)=\max(0,1-t)$. We have that
\begin{align*}
f(\bm{\theta})-f(\rho \bm{\mu})&=\bE_{\bm{\xi}}\left[\ell(\bm{\xi}^T\bm{\theta})-\ell(\rho\bm{\xi}^T\bm{\mu}) \right]\\&=\bE_{\xi_1,\xi_2}\left[\ell(\xi_1+\rho\xi_2)-\ell(\rho\xi_2) \right]\\&=\bE_{\xi_1,\xi_2}\left[\ell(-\xi_1+\rho\xi_2)-\ell(\rho\xi_2) \right].
\end{align*}
The last equality follows since $\xi_1 \sim -\xi_1$ and $\xi_1$ is independent of $\xi_2$. We define the function
\[
\kappa(\xi_1,\xi_2):=\ell(\xi_1+\rho\xi_2)+\ell(-\xi_1+\rho\xi_2)-2\ell(\rho\xi_2).
\]
We therefore obtain that
\[
2\left(f(\bm{\theta})-f(\rho\bm{\mu})\right)=\bE_{\xi_1,\xi_2}\left[\kappa(\xi_1,\xi_2)\right].
\]
Next we claim that
\begin{equation}\label{eq:hinge_lemma_claim1}
\kappa(\xi_1,\xi_2)=0 \text{ whenever } \vert \xi_1\vert\leq \vert 1-\rho\xi_2\vert.
\end{equation}
To see this, suppose that $\vert \xi_1\vert\leq \vert 1-\rho\xi_2\vert$ holds. We consider two cases. First, assume that $0\leq 1-\rho\xi_2$ which yields that $\rho\xi_2-\xi_1\leq 1$ and $\rho\xi_2+\xi_1\leq 1$. We therefore have $\kappa(\xi_1,\xi_2)=1-\xi_1-\rho\xi_2+1+\xi_1-\rho\xi_2-2(1-\rho\xi_2)=0$. Second, assume that $1-\rho\xi_2\leq 0$. It thus holds that $1\leq \rho\xi_2-\xi_1$ and $1\leq \rho\xi_2+\xi_1$. Now it immediately follows that $\kappa(\xi_1,\xi_2)=0$ and equation \eqref{eq:hinge_lemma_claim1} is established. We claim the following
\begin{equation}\label{eq:hinge_lemma_claim2}
\kappa(\xi_1,\xi_2)=\vert \xi_1\vert-\vert 1-\rho\xi_2\vert \text{ whenever } \vert \xi_1\vert\geq \vert 1-\rho\xi_2\vert.
\end{equation}
To this end, we again consider two cases. First, assume that $\xi_1\leq -\vert 1-\rho\xi_2\vert$. This yields that $1\leq -\xi_1+\rho\xi_2$ and $\xi_1+\rho\xi_2\leq 1$, so it holds that $ \kappa(\xi_1,\xi_2)= 1-\xi_1-\rho\xi_2-2\ell(\rho\xi_2)$. The claim \eqref{eq:hinge_lemma_claim2} follows from the following simple identity
\begin{equation}\label{eq:hinge_identity}
2\ell(t)= 1-t+\vert 1 - t \vert, \quad \forall t \in \R.
\end{equation}
Second, assume that $\xi_1\geq \vert 1-\rho\xi_2\vert$. It then holds that $\xi_1+\rho\xi_2\geq 1$ and $-\xi_1+\rho\xi_2\leq 1$ and therefore $
\kappa(\xi_1,\xi_2)=1+\xi_1-\rho\xi_2-2\ell(\rho \xi_2).
$ The claim \eqref{eq:hinge_lemma_claim2} follows from the identity \eqref{eq:hinge_identity}. We therefore obtain
\begin{align}
\bE_{\xi_1,\xi_2}[\kappa(\xi_1,\xi_2)]&=2\bE_{\xi_1,\xi_2}[(\ell(\xi_1+\rho\xi_2)+\ell(-\xi_1+\rho\xi_2)-2\ell(\rho\xi_2))1_{\{ \xi_1>0\}}]\label{eq:hinge_lemma_it1}\\&=2\bE_{\xi_1,\xi_2}[(\ell(\xi_1+\rho\xi_2)+\ell(-\xi_1+\rho\xi_2)-2\ell(\rho\xi_2))1_{\{ \xi_1\geq\vert 1-\rho \xi_2\vert\}}]\label{eq:hinge_lemma_it2}\\&\geq\bE_{\xi_1,\xi_2}[(\xi_1-\vert 1-\rho \xi_2\vert)1_{\{ \xi_1\geq \vert 1-\rho\xi_2\vert\}} ]. \label{eq:hinge_lemma_it3}
\end{align}
Here equation \eqref{eq:hinge_lemma_it1} holds because $\xi_1\sim -\xi_1$ and $\kappa(\xi_1,\xi_2)=\kappa(-\xi_1,\xi_2)$. Equation \eqref{eq:hinge_lemma_it2} is true because of claim \eqref{eq:hinge_lemma_claim1}, and \eqref{eq:hinge_lemma_it3} follows from claim \eqref{eq:hinge_lemma_claim2} and the non-negativity of the integrand. From \eqref{eq:hinge_lemma_it3}, we conclude that $\bE_{\xi_1,\xi_2}[\kappa(\xi_1,\xi_2)]\geq 0$. We then observe the bound
\begin{equation}
\begin{aligned}
\bE_{\xi_1,\xi_2}[(\xi_1-\vert 1-\rho \xi_2\vert)1_{\{ \xi_1\geq \vert 1-\rho\xi_2\vert\}} ] &= \tfrac{1}{2}\bE_{\xi_1,\xi_2}[\xi_1-\vert 1-\rho \xi_2\vert+\vert \xi_1-\vert 1-\rho \xi_2\vert \vert]\\&\geq -\tfrac{1}{2}\bE_{\xi_2}[\vert 1-\rho \xi_2\vert]+\tfrac{1}{2}\bE_{\xi_1,\xi_2}[\vert \xi_1\vert-\vert 1-\rho \xi_2\vert]\\&=\tfrac{1}{2}\bE_{\xi_1}[\vert \xi_1\vert]-\bE_{\xi_2}[\vert1-\rho\xi_2\vert]. \label{eq:thrid_last_equation}
\end{aligned}
\end{equation}
The inequality follows from $\bE_{\xi_1}[\xi_1]=0$ and the reverse triangle inequality $\vert x\vert-\vert y\vert \leq \vert x-y\vert$. On the other hand, it holds that
\begin{equation}\label{first_last_equation}
\bE_{\xi_1}[\vert \xi_1\vert]=\sqrt{\frac{2}{\pi}}\cdot\sigma \Vert \tilde{\bm{\theta}}\Vert,
\end{equation}
and
\begin{equation}\label{second_last_equation}
\bE_{\bm{\xi}}[\vert 1-\rho \bm{\mu}^T\bm{\xi}\vert]\leq 1+\rho \bE_{\bm{\xi}}[\vert \bm{\mu}^T\bm{\xi}\vert]\leq 1+\rho\Vert \bm{\mu} \Vert \left(\sqrt{\frac{2}{\pi}}\cdot\sigma +\Vert \bm{\mu} \Vert \right).
\end{equation}
Combining equations \eqref{eq:hinge_lemma_claim1}, \eqref{eq:hinge_lemma_claim2}, \eqref{eq:thrid_last_equation}, \eqref{first_last_equation}, and \eqref{second_last_equation}, we deduce
\begin{equation}\label{eq:last_hinge_high}
f(\bm{\theta})-f(\rho \bm{\mu}) \geq \frac{1}{2}\left(\sqrt{\frac{1}{2\pi}}\cdot\sigma\Vert\tilde{\bm{\theta}}\Vert-1-\rho\Vert \bm{\mu} \Vert \left(\sqrt{\frac{2}{\pi}}\cdot\sigma +\Vert \bm{\mu} \Vert \right) \right).
\end{equation}
Using the bounds $\sigma\Vert \tilde{\bm{\theta}}\Vert \geq 8+10\rho^*\sigma^2$, $\sigma \geq 0.62\Vert \bm{\mu} \Vert$ and $\rho\leq \tfrac{3}{2}\rho^*$, the result follows from \eqref{eq:last_hinge_high}.
\end{proof}
We next derive a lower bound for term (b) in \eqref{eq:quantity_split}. First, however, we need a basic lemma from convex analysis.
\begin{lemma}\label{lem:convex_analysis_lemma}
Suppose that $g:\R_{\geq 0}\to \R$ is a convex function with a minimizer at $\rho^*>0$. Assume that $g$ is twice differentiable on the interval $[\tfrac{3}{4}\rho^*,\tfrac{5}{4}\rho^*]$ and there exists a constant $B>0$ such that $g''(\rho)\geq B$ for all $\rho \in [\tfrac{3}{4}\rho^*,\tfrac{5}{4}\rho^*]$. Then it holds that
\begin{equation}\label{eq0:cvx_lemma}
g(\rho)-g(\rho^*)\geq \frac{\rho^*B}{8}\vert \rho-\rho^*\vert \quad \text{for all $\rho \not\in [\tfrac{1}{2}\rho^*,\tfrac{3}{2}\rho^*]$}.
\end{equation}
\end{lemma}
\begin{proof}
The proof follows by considering the second order Taylor series expansion of the function $g$.
\end{proof}
\begin{lemma}(Lower bound for (b) in \eqref{eq:quantity_split})\label{lem:lower_bound_2} Fix $\bm{\theta}\in \R^d$ and orthogonally decompose $\bm{\theta}=\rho\bm{\mu}+\tilde{\bm{\theta}}$, where $\bm{\mu}^T\tilde{\bm{\theta}}=0$ and $\rho\in\R$. Suppose that $\vert \rho-\rho^*\vert \geq \tfrac{1}{2}\rho^*$. Then provided that
$\sigma \geq c\Vert \bm{\mu} \Vert$ where the constant $c$ is defined in \eqref{eq:para_log} and \eqref{eq:para_hinge}, there exists a positive constant $A$ such that the following is true
\begin{equation}\label{eq:lemma9}
f(\rho \bm{\mu})-f(\bm{\theta}^*) \geq A\cdot\frac{\Vert \bm{\mu} \Vert^2}{\sigma^2}.
\end{equation}
\begin{proof} We consider the logistic and hinge loss separately.
\textbf{Logistic loss.} Define the function
\[
g(\rho):=\bE_{\bm{\xi}}\left[\log\left(1+\exp(-\rho \bm{\mu}^T\bm{\xi})\right) \right], \quad \bm{\xi} \sim N(\bm{\mu},\sigma^2I_d).
\]
By Lemma~\ref{lem:minimizers}, we know that $g$ is a convex function with a unique minimizer at $\rho^*:=\frac{2}{\sigma^2}$. Observe that $f(\rho \bm{\mu})-f(\bm{\theta}^*) = g(\rho)-g(\rho^*)$; hence in order to prove \eqref{eq:lemma9}, we instead aim to bound this difference in the function $g$. From \eqref{eq:fact_affine}, we have $\bm{\mu}^T\bm{\xi} \sim N(\Vert \bm{\mu} \Vert^2,\sigma^2\Vert \bm{\mu} \Vert^2)$. It thus holds
\[
4g''(\rho)=\bE\left[\frac{(\bm{\mu}^T\bm{\xi})^2}{\cosh^2(\frac{\rho}{2} \bm{\mu}^T\bm{\xi})} \right]=\frac{1}{\sigma\Vert \bm{\mu}\Vert\sqrt{2\pi }}\int_{-\infty}^{\infty}\frac{z^2}{\cosh^2(\frac{\rho z}{2})}\exp\left(-\frac{(z-\Vert \bm{\mu} \Vert^2)^2}{2\sigma^2\Vert \bm{\mu} \Vert^2}\right)dz.
\]
Upper bounding $\cosh^2(\frac{\rho z}{2})$ by $\exp(\vert \rho z\vert)$, we next obtain
\begin{align*}
4g''(\rho)&\geq \frac{1}{\sigma\Vert \bm{\mu}\Vert\sqrt{2\pi}} \int_{-\infty}^{\infty} z^2\exp\left(-\vert \rho z \vert \right)\exp\left(-\frac{(z-\Vert \bm{\mu} \Vert^2)^2}{2\sigma^2\Vert \bm{\mu}\Vert^2}\right)dz \\ &=\frac{1}{\sigma\Vert \bm{\mu}\Vert\sqrt{2\pi }} \int_{-\infty}^{\infty} z^2\exp\left(-\frac{(z-\Vert \bm{\mu} \Vert^2)^2+2\sigma^2\Vert \bm{\mu}\Vert^2\vert\rho z\vert}{2\sigma^2\Vert \bm{\mu}\Vert^2}\right)dz\\
&=\frac{1}{\sigma\Vert \bm{\mu}\Vert\sqrt{2\pi }}\cdot\exp\left(-\frac{\Vert \bm{\mu} \Vert^2}{2\sigma^2} \right) \int_{-\infty}^{\infty} z^2\exp\left(-\frac{z^2-2\Vert \bm{\mu} \Vert^2 z+2\sigma^2\Vert \bm{\mu}\Vert^2\vert\rho z\vert}{2\sigma^2\Vert \bm{\mu}\Vert^2}\right)dz
\\&= \frac{\sigma^2\Vert \bm{\mu}\Vert^2}{\sqrt{2\pi }}\cdot\exp\left(-\frac{\Vert \bm{\mu} \Vert^2}{2\sigma^2} \right)\int_{-\infty}^{+\infty} z^2\exp\left(-\frac{z^2-2\frac{\Vert \bm{\mu} \Vert}{\sigma}z+2\vert \rho z \vert \sigma\Vert\bm{\mu}\Vert }{2} \right) dz\\&\geq \frac{\sigma^2\Vert \bm{\mu}\Vert^2}{\sqrt{2\pi }}\cdot\exp\left(-\frac{\Vert \bm{\mu} \Vert^2}{2\sigma^2} \right)\int_{0}^{+\infty} z^2\exp\left(-\frac{z^2}{2}\right)\exp\left(z\left(\frac{\Vert\bm{\mu}\Vert}{\sigma}-\rho \sigma \Vert \bm{\mu} \Vert\right) \right)dz\\&\geq \frac{\sigma^2\Vert \bm{\mu}\Vert^2}{\sqrt{2\pi }}\cdot\exp\left(-\frac{\Vert \bm{\mu} \Vert^2}{2\sigma^2}-\frac{1}{2}-\left \vert \frac{\Vert \bm{\mu} \Vert}{\sigma}-\rho \sigma \Vert \bm{\mu} \Vert \right\vert \right)\int_{0}^1 z^2 dz.
\end{align*}
Here the second-to-last inequality follows from the change of variables $z\rightarrow z\sigma\Vert \bm{\mu}\Vert$ and from restricting the integral to $[0,+\infty)$. The last inequality follows from restricting the integral's domain to $[0,1]$ and lower bounding $-\frac{z^2}{2}$ and $z\left(\frac{\Vert\bm{\mu}\Vert}{\sigma}-\rho \sigma \Vert \bm{\mu} \Vert\right)$ by $-\frac{1}{2}$ and $-\left\vert\frac{\Vert\bm{\mu}\Vert}{\sigma}-\rho \sigma \Vert \bm{\mu} \Vert\right\vert$, respectively. For $\rho \in [\tfrac{3}{4}\rho^*,\tfrac{5}{4}\rho^*]$ we have $\rho\sigma\Vert \bm{\mu}\Vert \in [\tfrac{3}{2},\tfrac{5}{2}]\cdot \tfrac{\Vert \bm{\mu}\Vert}{\sigma}$, so that $\left\vert \frac{\Vert \bm{\mu} \Vert}{\sigma}-\rho \sigma \Vert \bm{\mu} \Vert \right\vert\leq \tfrac{3}{2}\cdot\tfrac{\Vert \bm{\mu}\Vert}{\sigma}\leq \tfrac{3}{2c}$, and hence the term $\exp\left(-\frac{\Vert \bm{\mu} \Vert^2}{2\sigma^2}-\frac{1}{2}-\left \vert \frac{\Vert \bm{\mu} \Vert}{\sigma}-\rho \sigma \Vert \bm{\mu} \Vert \right\vert \right)$ is lower bounded by $\exp\left(-\frac{1}{2c^2}-\frac{3}{2c}-\frac{1}{2} \right)$. By Lemma \ref{lem:convex_analysis_lemma} together with $\vert \rho-\rho^*\vert\geq \tfrac{1}{2}\rho^*$ and $\rho^*=\frac{2}{\sigma^2}$, the result follows with the constant
\begin{equation*}\label{eq:constant_A_log}
A=\frac{1}{48\sqrt{2\pi}}\cdot\exp\left(-\frac{1}{2c^2}-\frac{3}{2c}-\frac{1}{2} \right).
\end{equation*}
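The value $\rho^*=\frac{2}{\sigma^2}$ can be sanity-checked numerically; the snippet below (illustrative only, with assumed parameters $\Vert\bm{\mu}\Vert=1$ and $\sigma=2$) evaluates $g'(\rho)=-\bE[t/(1+e^{\rho t})]$ with $t\sim N(\Vert\bm{\mu}\Vert^2,\sigma^2\Vert\bm{\mu}\Vert^2)$ by a Riemann sum and confirms stationarity at $\rho^*$ together with a sign change around it:

```python
# Numerical sanity check (not part of the proof) that the minimizer of
# g(rho) = E[log(1 + exp(-rho * t))], t ~ N(m, s^2) with m = ||mu||^2 and
# s = sigma * ||mu||, sits at rho* = 2 / sigma^2.  The parameter choices
# ||mu|| = 1, sigma = 2 are illustrative assumptions.
import math

mu_norm, sigma = 1.0, 2.0
m, s = mu_norm**2, sigma * mu_norm
rho_star = 2.0 / sigma**2

def g_prime(rho, n=200_000, width=10.0):
    # midpoint Riemann sum of -t / (1 + exp(rho * t)) against the N(m, s^2) density
    lo, hi = m - width * s, m + width * s
    h = (hi - lo) / n
    total = 0.0
    for k in range(n):
        t = lo + (k + 0.5) * h
        dens = math.exp(-((t - m) ** 2) / (2 * s * s)) / (s * math.sqrt(2 * math.pi))
        total += -t / (1.0 + math.exp(rho * t)) * dens * h
    return total

assert abs(g_prime(rho_star)) < 1e-6                         # stationary at rho* = 2/sigma^2
assert g_prime(0.5 * rho_star) < 0 < g_prime(1.5 * rho_star)  # and it is a minimum
```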
\textbf{Hinge loss.} We begin by defining the function $h(\rho):=f(\rho\bm{\mu})$. Recall that
\[
f(\rho \bm{\mu})=\bE_{\bm{\xi}}[\ell(\rho\bm{\xi}^T\bm{\mu})]=\bE_{\bm{\xi}}[(1-\rho \bm{\xi}^T\bm{\mu})1_{\{\rho\bm{\xi}^T\bm{\mu}\leq 1 \}}].
\]
Hence, it holds that
\[
h'(\rho)=\bm{\mu}^T\nabla f(\rho \bm{\mu})=-\bE_{\bm{\xi}}[\bm{\xi}^T\bm{\mu}1_{\{ \rho\bm{\xi}^T\bm{\mu}\leq 1\}}].
\]
From \eqref{eq:fact_affine}, we obtain that $\bm{\mu}^T\bm{\xi} \sim N(\Vert \bm{\mu} \Vert^2,\sigma^2\Vert \bm{\mu} \Vert^2)$. For $\rho>0$, therefore, it holds that
\begin{equation}\label{der_phi_pos}
h'(\rho)=\frac{-1}{\sigma\Vert \bm{\mu} \Vert\sqrt{2\pi}}\int_{-\infty}^{\frac{1}{\rho}}z\exp\left(-\frac{1}{2}\cdot\left(\frac{z}{\sigma\Vert \bm{\mu} \Vert}-\frac{\Vert \bm{\mu} \Vert}{\sigma} \right)^2\right)dz.
\end{equation}
Applying chain rule thus yields
\[
h''(\rho)=\frac{1}{\rho^3\sigma\Vert \bm{\mu} \Vert\sqrt{2\pi}}\exp\left(-\frac{1}{2}\cdot \left(\frac{1}{\rho\sigma\Vert \bm{\mu} \Vert}-\frac{\Vert \bm{\mu} \Vert}{\sigma} \right)^2 \right)\quad \text{for all $\rho>0$}.
\]
Hence, for all $\rho \in [\tfrac{3}{4}\rho^*,\tfrac{5}{4}\rho^*]$ it holds that
\[
h''(\rho)\geq \frac{64}{125{\rho^*}^3\sigma\Vert \bm{\mu} \Vert\sqrt{2\pi}}\exp\left(-\frac{1}{2}\cdot \Gamma^2 \right),
\]
where $\Gamma:=\max\left\{\left|\frac{4}{3\rho^*\sigma\Vert \bm{\mu} \Vert}-\frac{\Vert \bm{\mu} \Vert}{\sigma} \right|,\left|\frac{4}{5\rho^*\sigma\Vert \bm{\mu} \Vert}-\frac{\Vert \bm{\mu} \Vert}{\sigma} \right| \right\}$. Therefore, by Lemma \ref{lem:convex_analysis_lemma} and $\vert \rho-\rho^*\vert \geq \tfrac{1}{2}\rho^*$, it holds that
\begin{equation}\label{eq:constant_r_hinge_loss}
\begin{aligned}
f(\rho\bm{\mu})-f(\bm{\theta}^*)&\geq \frac{4}{125\sqrt{2\pi}}\cdot\frac{\sigma}{r\Vert \bm{\mu} \Vert}\cdot\exp\left(-\frac{1}{2}\cdot \Gamma^2 \right).
\end{aligned}
\end{equation}
Here $r=\rho^*\sigma^2$, and $r>0$ by Lemma \ref{lem:minimizers}. We aim to lower bound the right-hand side of the preceding display. We denote by $w=\frac{\sigma}{r\Vert \bm{\mu} \Vert}-\frac{\Vert \bm{\mu} \Vert}{\sigma}$ the quantity defined in Lemma \ref{lem:minimizers}. In particular, by Lemma \ref{lem:minimizers}, the following holds
\begin{equation}\label{eq:w_hinge_later}
\frac{1}{\sqrt{2\pi}}\cdot\frac{\sigma}{\Vert \bm{\mu} \Vert}=\Phi(w)\cdot \exp(\tfrac{1}{2}w^2).
\end{equation}
We consider two cases. First, suppose that $w\geq \frac{1}{(3\sqrt{2}-4)c}$. Along with the assumption $\frac{\sigma}{\Vert \bm{\mu} \Vert}\geq c$, this implies that $w\geq \frac{1}{3\sqrt{2}-4}\cdot \frac{\Vert \bm{\mu} \Vert}{\sigma}$. A simple computation shows that $w^2\geq \frac{1}{2}\cdot \Gamma^2$ for all $w\geq \frac{1}{3\sqrt{2}-4}\cdot \frac{\Vert \bm{\mu} \Vert}{\sigma}$. On the other hand, by \eqref{eq:w_hinge_later} for $w\geq 0$, we obtain that $\frac{2}{\pi}\cdot \frac{\sigma^2}{\Vert \bm{\mu} \Vert^2}\geq \exp(w^2)$. Plugging in the bounds $w^2\geq \frac{1}{2}\cdot \Gamma^2$, $\exp(-w^2)\geq \frac{\pi}{2}\cdot\frac{\Vert \bm{\mu} \Vert^2}{\sigma^2}$, and $\frac{\sigma}{r\Vert \bm{\mu} \Vert}\geq w\geq \frac{1}{(3\sqrt{2}-4)c}$ into the right-hand side of \eqref{eq:constant_r_hinge_loss}, we obtain that
\begin{equation*}
f(\rho\bm{\mu})-f(\bm{\theta}^*)\geq \frac{\sqrt{2\pi}}{125(3\sqrt{2}-4)c}\cdot\frac{\Vert \bm{\mu} \Vert^2}{\sigma^2}.
\end{equation*}
Next, suppose that $w<\frac{1}{(3\sqrt{2}-4)c}$. In this case, the two factors $\frac{\sigma}{r\Vert \bm{\mu} \Vert}$ and $\exp\left(-\frac{1}{2}\cdot\Gamma^2 \right)$ in \eqref{eq:constant_r_hinge_loss} are lower bounded separately. Note that it always holds that $w\geq -\frac{\Vert \bm{\mu} \Vert}{\sigma}$ as $r>0$. Therefore, it is easy to see that the latter factor is lower bounded by $\exp\left(-\frac{1}{2} \left(\frac{4}{3(3\sqrt{2}-4)c}+\frac{1}{3c} \right)^2\right)$. Hence, it remains to bound the factor $\frac{\sigma}{r\Vert \bm{\mu} \Vert}$ in \eqref{eq:constant_r_hinge_loss}. To this end, we show that $w\geq -\frac{\Vert \bm{\mu} \Vert}{2\sigma}$ for all $\frac{\sigma}{\Vert \bm{\mu} \Vert}\geq c$. Note that a change of variables gives
\begin{equation*}\label{eq:myeq_blah}
\begin{aligned}
\Phi(w)\cdot\exp\left(\frac{w^2}{2}\right)&=\frac{1}{\sqrt{2\pi}}\cdot\int_{0}^{+\infty}\exp(-\frac{1}{2}t^2)\cdot\exp(wt) \, dt.
\end{aligned}\end{equation*}
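The displayed integral identity can be spot-checked numerically (a sanity check only; it is not used in the argument):

```python
# Spot-check of Phi(w) * exp(w^2 / 2) = (1/sqrt(2*pi)) * int_0^inf exp(-t^2/2 + w*t) dt,
# with the standard normal CDF Phi computed from math.erf.
import math

def Phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def both_sides(w, n=100_000):
    lhs = Phi(w) * math.exp(w * w / 2.0)
    hi = max(1.0, w) + 12.0  # the integrand is negligible beyond t = w + 12
    h = hi / n
    rhs = sum(math.exp(-0.5 * ((k + 0.5) * h) ** 2 + w * (k + 0.5) * h)
              for k in range(n)) * h / math.sqrt(2.0 * math.pi)
    return lhs, rhs

for w in (-1.0, -0.25, 0.0, 0.5, 1.5):
    lhs, rhs = both_sides(w)
    assert abs(lhs - rhs) < 1e-6, (w, lhs, rhs)
```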
The right-hand side of \eqref{eq:w_hinge_later} is an increasing function with respect to $w$. Therefore it suffices to show that the following holds
\begin{equation}\label{eq:sigma/bmu>c}
\frac{1}{\sqrt{2\pi}}\cdot \frac{\sigma}{\Vert \bm{\mu} \Vert} \geq \Phi\left(-\frac{\Vert \bm{\mu} \Vert}{2\sigma} \right)\cdot \exp\left(\frac{\Vert \bm{\mu} \Vert^2}{8\sigma^2} \right) \quad \text{whenever} \quad \frac{\sigma}{\Vert \bm{\mu} \Vert}\geq c.
\end{equation}
Indeed, a direct numerical check (e.g., plotting both sides) verifies that $\tfrac{1}{\sqrt{2\pi}}\geq t\cdot \Phi\left(-\tfrac{t}{2} \right)\cdot \exp\left(\frac{t^2}{8} \right)$ holds for all $t\in (0,\frac{1}{c})$. Therefore, $w\geq -\frac{\Vert \bm{\mu} \Vert}{2\sigma}$, which implies that $\frac{\sigma}{r\Vert \bm{\mu} \Vert}=w+\frac{\Vert \bm{\mu} \Vert}{\sigma}\geq \frac{\Vert \bm{\mu} \Vert}{2\sigma}$. Finally, since $\sigma\geq c\Vert \bm{\mu}\Vert$, we lower bound the quantity $\frac{\sigma}{r\Vert\bm{\mu}\Vert}$ by $c\cdot\frac{\Vert \bm{\mu} \Vert^2}{2\sigma^2}$. This establishes \eqref{eq:lemma9} for the hinge loss, with the constant $A$ given by
\begin{equation*}\label{eq:constant_A_hinge}
A=\min\left\{\frac{c}{2}\cdot\exp\left(-\frac{1}{2} \left(\frac{4}{3(3\sqrt{2}-4)c}+\frac{1}{3c} \right)^2\right), \frac{\sqrt{2\pi}}{125(3\sqrt{2}-4)c} \right\}.
\end{equation*}
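The grid check below substantiates the plot-based inequality invoked above, under the additional assumption $c\geq 1$ (so that $t=\frac{\Vert\bm{\mu}\Vert}{\sigma}$ ranges over $(0,1]$); it is illustrative only:

```python
# Grid check of 1/sqrt(2*pi) >= t * Phi(-t/2) * exp(t^2/8) on t in (0, 1],
# which covers t in (0, 1/c] whenever c >= 1 (an assumption of this check;
# the paper's constant c comes from its own display).
import math

def Phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

bound = 1.0 / math.sqrt(2.0 * math.pi)
worst = 0.0
for k in range(1, 1001):
    t = k / 1000.0
    val = t * Phi(-t / 2.0) * math.exp(t * t / 8.0)
    worst = max(worst, val)
    assert val <= bound, t
```

The left-hand side is attained at $t=1$ and stays comfortably below $\tfrac{1}{\sqrt{2\pi}}\approx 0.3989$ on this range.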
\end{proof}
\end{lemma}
We now have the ingredients to prove Theorem \ref{thm:high}.
\begin{proof}[Proof of Theorem \ref{thm:high}]
Consider the set $C$ and function $V$ defined in \eqref{eq:CV_high_recall}:
\begin{equation}\label{eq:CV_high_proofthm3}
C:=\left\{\bm{\theta}: \vert \rho -\rho^*\vert<\tfrac{1}{2}\rho^* \text{ and } \sigma \Vert \tilde{\bm{\theta}}\Vert \leq c'\right\} \quad \text{ and }\quad V(\bm{\theta})=\frac{1}{2\alpha}\Vert \bm{\theta}-\bm{\theta}^*\Vert^2.
\end{equation}
We let $c'$ be as defined in Lemma \ref{lem:lower_bound_1}; that is, $c'$ equals $436$ for the logistic loss and $8+10\rho^*\sigma^2$ for the hinge loss. We next show that there exists a positive constant $\delta$ such that the following is true
\begin{equation}
\bP_{\bm{\xi}}\left(\bm{\xi}^T\bm{\theta}\geq 1\right)\geq \delta \quad \text{for all}\quad \bm{\theta} \in C.
\end{equation}
Let $\bm{\theta}\in C$ and orthogonally decompose it into $\bm{\theta}=\rho\bm{\mu}+\tilde{\bm{\theta}}$. We have that $\bm{\xi}^T\bm{\theta}=\rho\bm{\xi}^T\bm{\mu}+\bm{\xi}^T\tilde{\bm{\theta}}$. Note that $\rho>0$ as $\bm{\theta}\in C$. By \eqref{eqn:fact_independence}, we see that $\bm{\xi}^T\bm{\theta}$ and $\bm{\xi}^T\tilde{\bm{\theta}}$ are independent normal random variables. It thus holds that
\begin{equation}
\bP_{\bm{\xi}}\left(\bm{\xi}^T\bm{\theta}\geq 1\right)\geq \bP_{\bm{\xi}}\left(\rho\bm{\xi}^T\bm{\mu}\geq 1\right)\cdot \bP_{\bm{\xi}}\left(\bm{\xi}^T\tilde{\bm{\theta}}\geq 0\right) =\frac{1}{2}\cdot\bP_{\bm{\xi}}\left(\bm{\xi}^T\bm{\mu}\geq \frac{1}{\rho}\right).
\end{equation}
Rewrite the inequality $\bm{\xi}^T\bm{\mu}\geq \frac{1}{\rho}$ as $z:=\frac{\bm{\xi}^T\bm{\mu}-\Vert \bm{\mu} \Vert^2}{\sigma\Vert \bm{\mu} \Vert}\geq \frac{\frac{1}{\rho}-\Vert \bm{\mu} \Vert^2}{\sigma\Vert \bm{\mu} \Vert}$. Noting that $z\sim N(0,1)$ and using the inequality $\frac{2}{\rho^*}\geq\frac{1}{\rho}$ (valid since $\rho>\tfrac{1}{2}\rho^*$ on $C$), we obtain that
\begin{equation}\label{eq:delta_proof}
\bP_{\bm{\xi}}\left(\bm{\xi}^T\bm{\theta}\geq 1\right)\geq \delta:= \frac{1}{2}\cdot\Phi^c\left(\frac{\frac{2}{\rho^*}-\Vert\bm{\mu} \Vert^2}{\sigma\Vert \bm{\mu} \Vert}\right).
\end{equation}
We next show that the pair $(C,V)$ satisfies the drift equation \eqref{eq:driftequation}. Let us rewrite \eqref{eq:quantity_split}:
\begin{equation}\label{eq:quantity_split2}
\begin{aligned}
f(\bm{\theta}_{k-1})-f(\bm{\theta}^*)= & \underbrace{f(\bm{\theta}_{k-1})-f(\rho_{k-1}\bm{\mu})}_{(a)}+\underbrace{f(\rho_{k-1}\bm{\mu})-f(\bm{\theta}^*)}_{(b)}.
\end{aligned}
\end{equation}
By Lemmas \ref{lem:lower_bound_1} and \ref{lem:lower_bound_2}, both terms $(a)$ and $(b)$ in \eqref{eq:quantity_split2} are non-negative. Assume that $\bm{\theta}_{k-1}\not\in C$. Therefore, either $\sigma \Vert \tilde{\bm{\theta}}_{k-1}\Vert> c'$ or $\vert\rho_{k-1}-\rho^*\vert\geq \tfrac{1}{2}\rho^*$; this implies that the quantity $(a)$ is at least $1$ or the quantity $(b)$ is at least $A\cdot\frac{\Vert \bm{\mu} \Vert^2}{\sigma^2}$, respectively. The constant $A$ in Lemma \ref{lem:lower_bound_2} satisfies $1\geq A\cdot \frac{\Vert \bm{\mu} \Vert^2}{\sigma^2}$ for all $\frac{\sigma}{\Vert \bm{\mu} \Vert}\geq c$. Hence it holds that
\begin{equation}
A\cdot\frac{\Vert \bm{\mu} \Vert^2}{\sigma^2}\leq f(\bm{\theta}_{k-1})-f(\bm{\theta}^*) \quad \text{for all } \bm{\theta}_{k-1}\not\in C.
\end{equation}
We use \eqref{eq:high_convex_tech_lemma} next to establish the drift equation \eqref{eq:driftequation}. Recall that the following holds
\begin{equation}
f(\bm{\theta}_{k-1})-f(\bm{\theta}^*)\leq \frac{1}{2\alpha}\left(\Vert \bm{\theta}_{k-1}-\bm{\theta} ^*\Vert^2-\bE\left[\Vert \bm{\theta}_{k}-\bm{\theta}^* \Vert^2\, |\mathcal{F}_{k-1}\right] \right)+\frac{\alpha}{2}\left(\Vert \bm{\mu} \Vert^2+d\sigma^2\right).
\end{equation}
Combining the last two displayed inequalities and using the definition of function $V$, we obtain that
\begin{equation}
\left(\bE\left[V(\bm{\theta}_k)|\mathcal{F}_{k-1}\right]-V(\bm{\theta}_{k-1})\right)\cdot 1_{\{\bm{\theta}_{k-1}\not\in C\}} \leq \left(\frac{\alpha}{2}(\Vert \bm{\mu} \Vert^2+d\sigma^2)-A\cdot\frac{\Vert \bm{\mu}\Vert^2}{\sigma^2}\right)\cdot 1_{\{\bm{\theta}_{k-1}\not\in C\}}.
\end{equation}
Therefore, by choosing $\alpha<A\cdot\frac{\Vert \bm{\mu} \Vert^2}{\sigma^2(\Vert \bm{\mu} \Vert^2+d\sigma^2)}$, we obtain that the drift equation \eqref{eq:driftequation} holds with $b:=\frac{A}{2}\cdot\frac{\Vert \bm{\mu} \Vert^2}{\sigma^2}$. Next, we obtain bounds on $\bE[\tau_m]$ for $m\geq 1$. By Lemma \ref{lem:drift_from_meyn} and a simple induction, we obtain that
\begin{equation}
\bE[\tau_m]\leq \tfrac{1}{b}V(0)+\tfrac{1}{b}(m-1)\sup_{\bm{\theta}\in C} V(\bm{\theta}).
\end{equation}
Since $C$ is bounded and $V$ is continuous, we have $\sup_{\bm{\theta}\in C} V(\bm{\theta})<+\infty$. Therefore, for some constant $\gamma$, the following is true
\begin{equation}\label{eq:bound_on_tau_m_high}
\bE[\tau_m] \leq \gamma\cdot m.
\end{equation}
Combining \eqref{eq:bound_on_tau_m_high}, \eqref{eq:delta_proof} and Lemma \ref{lem:ETleqET_C}, the proof immediately follows.
\end{proof}
\subsection{Hinge Regression}
In this section, we study the following optimization problem
\begin{align*}
\min f(\bm{\theta}):=\bE_{\bm{\xi} \sim \mathcal{P}_*} [\ell(\bm{\xi}^T\bm{\theta})],
\end{align*}
where $\ell(x):=\ell(x,1)$ with the hinge loss function defined in \eqref{eq:hinge_loss_definition}. We state the main theorem of this section below. We defer the proof to the end of this subsection.
\begin{theorem}\label{theorem_hinge}
Let $\bm{\theta}_0=\bm{0}$. The following are true.
\begin{enumerate}
\item Fix $\epsilon\in (0,\tfrac{1}{2})$ and let $M=1+\frac{\alpha(\Vert \bm{\mu} \Vert^2+\sigma^2)}{2\epsilon}$. Suppose that $\sigma \leq \left(\tfrac{1}{2}-\epsilon\right)\sqrt{\tfrac{\pi}{2}}\Vert \bm{\mu} \Vert$ (low variance regime). The following then holds.
\begin{equation}\begin{aligned} \label{eq:low_Et_proof}
\bE[T]&\leq \left(2+\frac{\left(M-1\right)^2}{\alpha\Vert \bm{\mu} \Vert^2}+\frac{\sigma}{\Vert \bm{\mu} \Vert}\left(\frac{1.6M^2}{\alpha \Vert \bm{\mu} \Vert^2}+7,612\sigma^2\alpha^4\Vert\bm{\mu} \Vert^4\right) \right)+\frac{2M^2}{\alpha \Vert\bm{\mu}\Vert^2}.
\end{aligned} \end{equation}
\item Suppose that $\sqrt{\frac{\pi}{8}}\Vert \bm{\mu} \Vert \leq \sigma$ (high variance regime). It holds that $\bE[T]<+\infty$ provided the step-size $\alpha$ satisfies
\begin{equation}
\alpha< \frac{\min\{1,\tfrac{1}{2}r\rho^*\}}{\Vert \bm{\mu} \Vert^2+d\sigma^2}
\end{equation}
where the constant $r$ is defined as
\begin{equation*}
r:= \frac{64}{125 \rho^*\sigma \Vert \bm{\mu} \Vert \sqrt{2\pi}}\exp\left(-\frac{(\tfrac{4}{3}\frac{1}{\rho^*}-\Vert \bm{\mu} \Vert^2)^2}{2\sigma^2\Vert \bm{\mu} \Vert^2} \right).
\end{equation*}
\end{enumerate}
\end{theorem}
\subsubsection{Low Variance Regime}\label{sec:low_hinge}
Throughout this section, we fix a positive constant $0<\epsilon<\tfrac{1}{2}$ and consider the following drift function.
\[
V(\bm{\theta}):=\frac{1}{\alpha \Vert \bm{\mu}\Vert^2}(M-\bm{\mu}^T\bm{\theta})^2,
\]
where $M=1+\frac{\alpha(\Vert \bm{\mu} \Vert^2+\sigma^2)}{2\epsilon}$. In addition, we consider the following target set.
\begin{equation}\label{eq:hinge_low_C}
C:=\{\bm{\theta}:\bm{\mu}^T\bm{\theta} \geq 1\}.
\end{equation}
\begin{proposition}\label{prop:low_noise_hinge_Etau1}
Fix $\epsilon\in(0,\tfrac{1}{2})$ and suppose that $\sigma \leq \left(\tfrac{1}{2}-\epsilon\right)\sqrt{\tfrac{\pi}{2}}\Vert \bm{\mu} \Vert$. Let $\bm{\theta}\in \R^d$ such that $\bm{\mu}^T\bm{\theta}\leq 1$. The following is then true
\begin{equation}
\bE[\tau_1^C|\bm{\theta}_0=\bm{\theta}]\leq V(\bm{\theta}).
\end{equation}
\end{proposition}
\begin{proof}
Suppose that $\bm{\mu}^T\bm{\theta}_{k-1}\leq 1$. It, then, holds that
\begin{align}
\bE[(M-\bm{\mu}^T\bm{\theta}_k)^2|\mathcal{F}_{k-1}]1_{\{\tau_1^C\wedge n\geq k \}}&= \bE_{\bm{\xi}_k}[(M-\bm{\mu}^T\bm{\theta}_{k-1}-\alpha 1_{\{\bm{\xi}_k^T\bm{\theta}_{k-1}\leq 1\}}\bm{\mu}^T\bm{\xi}_k)^2|\mathcal{F}_{k-1}]1_{\{\tau_1^C\wedge n\geq k \}}\\&=(M-\bm{\mu}^T\bm{\theta}_{k-1})^2-2\alpha(M-\bm{\mu}^T\bm{\theta}_{k-1})\bE_{\bm{\xi}_k}[1_{\{\bm{\xi}_k^T\bm{\theta}_{k-1}\leq 1\}}\bm{\mu}^T\bm{\xi}_k|\mathcal{F}_{k-1}]\\&+\alpha^2\bE_{\bm{\xi}_k}[1_{\{\bm{\xi}_k^T\bm{\theta}_{k-1}\leq 1\}}(\bm{\mu}^T\bm{\xi}_k)^2|\mathcal{F}_{k-1}].
\end{align}
We need to lower bound the quantity $\bE_{\bm{\xi}_k}[1_{\{ \bm{\xi}_k^T\bm{\theta}_{k-1}\leq 1\}}\bm{\mu}^T\bm{\xi}_k|\mathcal{F}_{k-1}]$. We have
\begin{align}
\bE_{\bm{\xi}_k}[1_{\{ \bm{\xi}_k^T\bm{\theta}_{k-1}\leq 1\}}\bm{\mu}^T\bm{\xi}_k|\mathcal{F}_{k-1}]&=\Vert \bm{\mu}\Vert^2\bE_{\bm{\xi}_k}[1_{\{ \bm{\xi}_k^T\bm{\theta}_{k-1}\leq 1\}}|\mathcal{F}_{k-1}]+\bE_{\bm{\xi}_k}[1_{\{ \bm{\xi}_k^T\bm{\theta}_{k-1}\leq 1\}}\bm{\mu}^T(\bm{\xi}_k-\bm{\mu})|\mathcal{F}_{k-1}]\\&\geq \tfrac{1}{2}\Vert \bm{\mu} \Vert^2-\bE_{\bm{\xi}_k}[\vert \bm{\mu}^T(\bm{\xi}_k-\bm{\mu})\vert]\\&\geq \tfrac{1}{2}\Vert \bm{\mu} \Vert^2-\sqrt{\tfrac{2}{\pi}}\sigma\Vert \bm{\mu} \Vert.
\end{align}
Here we used that $1_{\{\bm{\xi}_k^T\bm{\theta}_{k-1}\leq \bm{\mu}^T\bm{\theta}_{k-1}\}}\leq 1_{\{\bm{\xi}_k^T\bm{\theta}_{k-1}\leq 1\}}$ and $\bE_{\bm{\xi}_k}[1_{\{\bm{\xi}_k^T\bm{\theta}_{k-1}\leq \bm{\mu}^T\bm{\theta}_{k-1}\}}]=\tfrac{1}{2}$. Continuing, we obtain that
\begin{align*}
\bE[(M-\bm{\mu}^T\bm{\theta}_k)^2-(M-\bm{\mu}^T\bm{\theta}_{k-1})^2&|\mathcal{F}_{k-1}]1_{\{\tau_1^C\wedge n\geq k \}}\\
&\leq \alpha^2 \bE_{\bm{\xi}_k}[(\bm{\mu}^T\bm{\xi}_k)^2]-2\alpha(M-\bm{\mu}^T\bm{\theta}_{k-1})(\tfrac{1}{2}\Vert \bm{\mu} \Vert^2-\sqrt{\tfrac{2}{\pi}}\sigma\Vert \bm{\mu} \Vert)\\&\leq \alpha^2\Vert \bm{\mu}\Vert^2(\Vert \bm{\mu} \Vert^2+\sigma^2)-2\alpha(M-\bm{\mu}^T\bm{\theta}_{k-1})\left(\tfrac{1}{2}\Vert \bm{\mu} \Vert^2-\sqrt{\tfrac{2}{\pi}}\sigma\Vert \bm{\mu} \Vert\right)
\end{align*}
Hence, using the bounds $
\sigma\leq \left(\tfrac{1}{2}-\epsilon\right)\sqrt{\tfrac{\pi}{2}}\Vert \bm{\mu} \Vert$ and $M\geq 1+\frac{\alpha(\Vert \bm{\mu} \Vert^2+\sigma^2)}{2\epsilon}$, we obtain that
\begin{align*}
\bE[V(\bm{\theta}_k)-V(\bm{\theta}_{k-1})|\mathcal{F}_{k-1}]1_{\{\tau_1^C\wedge n\geq k \}}\leq \alpha (\Vert \bm{\mu} \Vert^2+\sigma^2)-2(M-\bm{\mu}^T\bm{\theta}_{k-1})\left(\tfrac{1}{2}-\sqrt{\tfrac{2}{\pi}}\frac{\sigma}{\Vert \bm{\mu} \Vert}\right)\leq -1_{\{\tau_1^C\wedge n\geq k \}}.
\end{align*}
Using Appendix Lemma \ref{lem:drift_lemma_appendix}, we obtain that $\bE[\tau_1^C\wedge n] \leq V(\bm{\theta})$. Letting $n\to\infty$, the monotone convergence theorem completes the proof.
\end{proof}
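The proposition can be illustrated by a small Monte Carlo experiment (parameters below are assumptions of this illustration, not from the text): run the hinge SGD update $\bm{\theta}_k=\bm{\theta}_{k-1}+\alpha\,\bm{\xi}_k 1_{\{\bm{\xi}_k^T\bm{\theta}_{k-1}\leq 1\}}$ from $\bm{\theta}_0=\bm{0}$ and compare the empirical mean of $\tau_1^C$ with the bound $V(\bm{0})=\frac{M^2}{\alpha\Vert\bm{\mu}\Vert^2}$:

```python
# Monte Carlo illustration of the low-variance hinge proposition.
# Assumed parameters: d = 2, mu = (1, 0), sigma = 0.1, alpha = 0.1, eps = 0.1,
# so that sigma <= (1/2 - eps) * sqrt(pi/2) * ||mu|| holds.
import random

random.seed(0)
d, mu, sigma, alpha, eps = 2, (1.0, 0.0), 0.1, 0.1, 0.1
mu_norm2 = sum(m * m for m in mu)
M = 1.0 + alpha * (mu_norm2 + sigma**2) / (2.0 * eps)
V0 = M * M / (alpha * mu_norm2)  # V(0) = (M - mu^T 0)^2 / (alpha ||mu||^2)

def hitting_time():
    theta = [0.0] * d
    for k in range(1, 100_000):
        xi = [random.gauss(m, sigma) for m in mu]
        if sum(x * t for x, t in zip(xi, theta)) <= 1.0:  # subgradient step fires
            theta = [t + alpha * x for t, x in zip(theta, xi)]
        if sum(m * t for m, t in zip(mu, theta)) >= 1.0:  # theta entered C
            return k
    return 100_000

avg = sum(hitting_time() for _ in range(300)) / 300.0
assert avg <= V0  # empirical mean of tau_1 respects the bound V(0)
```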
We next upper bound the expected value of $\tau_m^C$ when we initialize with $\bm{\theta}_0=\bm{0}$.
\begin{proposition}\label{prp:hinge_low_etm}
Let $\bm{\theta}_0=\bm{0}$ and suppose that $\sigma\leq \left(\tfrac{1}{2}-\epsilon\right)\sqrt{\tfrac{\pi}{2}}\Vert \bm{\mu} \Vert$. Consider the set $C$ defined in \eqref{eq:hinge_low_C}. Then the following bound holds for all $m \ge 1$
\begin{equation}
\bE[\tau_m^C]\leq (m-1)\left(2+\frac{\left(M-1\right)^2}{\alpha\Vert \bm{\mu} \Vert^2}+\frac{\sigma}{\Vert \bm{\mu} \Vert}\left(\frac{1.6M^2}{\alpha \Vert \bm{\mu} \Vert^2}+7,612\sigma^2\alpha^4\Vert\bm{\mu} \Vert^4\right) \right)+\frac{M^2}{\alpha \Vert\bm{\mu}\Vert^2},
\end{equation}
where $M=1+\frac{\alpha(\Vert \bm{\mu} \Vert^2+\sigma^2)}{2\epsilon}$.
\end{proposition}
\begin{proof}
Similar to \eqref{eq: low_noise_bound}, we obtain that
\begin{equation}\label{eq:low_noise_hinge1}
\bE\left[(\tau_m^{C}-\tau_{m-1}^{C})\wedge n|\mathcal{F}_{\tau_{m-1}^{C}+1}\right]\leq 2+\sum_{i=1}^{\infty}\bE\left[\tau_1^{C}\wedge n|\bm{\theta}_0=\bm{\theta}_{\tau_{m-1}^{C}+1}\right]1_{\{i-1\leq 1-\bm{\mu}^T\bm{\theta}_{\tau_{m-1}^{C}+1}<i\}}.
\end{equation}
For each $i \ge 2$, we observe the bound
\begin{align*}
i-1 \leq 1-\bm{\mu}^T\bm{\theta}_{\tau_{m-1}^{C}+1} &=1-\bm{\mu}^T\bm{\theta}_{\tau_{m-1}^{C}}-\alpha \bm{\mu}^T\bm{\xi}_{\tau_{m-1}^C+1}1_{\{\bm{\xi}_{\tau_{m-1}^C+1}^T\bm{\theta}_{\tau_{m-1}^C}\leq 1\}}\\&\leq -\alpha \bm{\mu}^T\bm{\xi}_{\tau_{m-1}^C+1}1_{\{\bm{\xi}_{\tau_{m-1}^C+1}^T\bm{\theta}_{\tau_{m-1}^C}\leq 1\}}\\&\leq -\alpha \bm{\mu}^T\bm{\xi}_{\tau_{m-1}^C+1}.
\end{align*}
Here the first inequality follows since $1\leq \bm{\mu}^T\bm{\theta}_{\tau_{m-1}^C}$, and the second since $i-1>0$ forces the indicator to equal one, whence $0\leq -\alpha \bm{\mu}^T\bm{\xi}_{\tau_{m-1}^C+1}$. Next, arguing as in \eqref{eq: low_noise_blah_3} and \eqref{eq: low_noise_blah_4}, we have that
\begin{equation} \begin{aligned} \label{eq: low_noise_hinge2}
\bE\left[\tau_1^{C}\wedge n|\bm{\theta}_0=\bm{\theta}_{\tau_{m-1}^{C}+1}\right]1_{\{i-1\leq 1-\bm{\mu}^T\bm{\theta}_{\tau_{m-1}^{C}+1}<i\}}&\leq \frac{(M+i-1)^2}{\alpha \Vert \bm{\mu}\Vert^2}1_{\{ \bm{\mu}^T\bm{\xi}_{\tau_{m-1}^{C}+1}\leq \frac{1-i}{\alpha}\}} \quad \forall i\geq 2.
\end{aligned} \end{equation}
Moreover,
\begin{equation} \begin{aligned} \label{eq: low_noise_hinge_3}
\bE\left[1_{\{ \bm{\mu}^T\bm{\xi}_{\tau_{m-1}^{C}+1}\leq \frac{1-i}{\alpha}\}} \right]&=\Phi\left(\frac{\frac{1-i}{\alpha}-\Vert \bm{\mu}\Vert^2}{\sigma\Vert \bm{\mu}\Vert } \right).
\end{aligned} \end{equation}
Combining \eqref{eq:low_noise_hinge1}, \eqref{eq: low_noise_hinge2} and \eqref{eq: low_noise_hinge_3} we obtain that
\begin{equation}\label{eq:low_noise_hinge_blah12}
\bE\left[(\tau_m^{C}-\tau_{m-1}^{C})\wedge n|\mathcal{F}_{\tau_{m-1}^{C}+1}\right]\leq 2+\frac{\left(M-1\right)^2}{\alpha\Vert \bm{\mu} \Vert^2}+\frac{\sigma}{\Vert \bm{\mu} \Vert\sqrt{2\pi}}\sum_{i=2}^{+\infty}\frac{\left(M+i-1\right)^2}{\alpha \Vert \bm{\mu} \Vert^2+i-1}\exp\left(-\tfrac{1}{2}\left(\frac{\Vert \bm{\mu} \Vert^2+\frac{i-1}{\alpha}}{\sigma\Vert \bm{\mu} \Vert}\right)^2 \right).
\end{equation}
Note that we used the bound $\bE\left[\tau_1^{C}\wedge n|\bm{\theta}_0=\bm{\theta}_{\tau_{m-1}^{C}+1}\right]1_{\{0\leq 1-\bm{\mu}^T\bm{\theta}_{\tau_{m-1}^{C}+1}<1\}}\leq \frac{(M-1)^2}{\alpha \Vert\bm{\mu}\Vert^2}$. Continuing, similar to \eqref{eq: low_noise_blah_6} and \eqref{eq: low_noise_blah_7}, we have
\begin{equation}
\sum_{i=3}^{+\infty}\frac{\left(M+i-1\right)^2}{\alpha \Vert \bm{\mu} \Vert^2+i-1}\exp\left(-\tfrac{1}{2}\left(\frac{\Vert \bm{\mu} \Vert^2+\frac{i-1}{\alpha}}{\sigma\Vert \bm{\mu} \Vert}\right)^2 \right) \leq 19,080 \alpha^4 \sigma^4 \Vert \bm{\mu} \Vert^4.
\end{equation}
We upper bound the term in \eqref{eq:low_noise_hinge_blah12} for $i=2$ by $\frac{4M^2}{\alpha \Vert \bm{\mu} \Vert^2}$. Finally we conclude that
\begin{equation}
\bE\left[(\tau_m^C-\tau_{m-1}^C)\wedge n \right] \leq 2+\frac{\left(M-1\right)^2}{\alpha\Vert \bm{\mu} \Vert^2}+\frac{\sigma}{\Vert \bm{\mu} \Vert}\left(\frac{1.6M^2}{\alpha \Vert \bm{\mu} \Vert^2}+7,612\sigma^2\alpha^4\Vert\bm{\mu} \Vert^4\right).
\end{equation}
We, therefore, obtain that
\begin{equation}
\bE\left[\tau_m^C\right] \leq (m-1)\left(2+\frac{\left(M-1\right)^2}{\alpha\Vert \bm{\mu} \Vert^2}+\frac{\sigma}{\Vert \bm{\mu} \Vert}\left(\frac{1.6M^2}{\alpha \Vert \bm{\mu} \Vert^2}+7,612\sigma^2\alpha^4\Vert\bm{\mu} \Vert^4\right) \right)+\bE\left[\tau_1^C\right]
\end{equation}
The result follows after applying Proposition \ref{prop:low_noise_hinge_Etau1} with $\bm{\theta}_0=\bm{0}$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{theorem_hinge}, part 1] Using Proposition \ref{prp:hinge_low_etm} and Lemma \ref{lem:stopping_time_appendix}, Theorem \ref{theorem_hinge}.1 immediately follows.
\end{proof}
\subsubsection{High Variance Regime}\label{sec:high_hinge}
The main theorem of this section is as follows.
\begin{theorem}\label{thm:hinge_loss_high_noise}
Suppose that the step-size $\alpha$ satisfies
\begin{equation}
\alpha< \frac{\min\{1,\tfrac{1}{2}r\rho^*\}}{\Vert \bm{\mu} \Vert^2+d\sigma^2}
\end{equation}
where $\rho^*$ is defined in Lemma \ref{lem:hinge_minimizer} and the constant $r$ is defined as
\begin{equation*}
r:= \frac{64}{125 \rho^*\sigma \Vert \bm{\mu} \Vert \sqrt{2\pi}}\exp\left(-\frac{(\tfrac{4}{3}\frac{1}{\rho^*}-\Vert \bm{\mu} \Vert^2)^2}{2\sigma^2\Vert \bm{\mu} \Vert^2} \right).
\end{equation*}
Then it holds that $\bE[T]<+\infty$.
\end{theorem}
We defer the proof to the end of this section. We first need a technical lemma.
\begin{lemma}\label{lem:hinge_loss_high_noise_fundamental_lemma}
For any $\bm{\theta}$ the following holds
\begin{equation}\label{eq:lemma_for_high_noise_hinge}
f(\bm{\theta}_{k-1})-f(\bm{\theta}) \leq \frac{1}{2\alpha} \left(\Vert \bm{\theta}_{k-1}-\bm{\theta}\Vert^2-\bE\left[\Vert \bm{\theta}_k-\bm{\theta}\Vert^2 |\mathcal{F}_{k-1}\right]\right)+\frac{\alpha}{2}\left(\Vert \bm{\mu} \Vert^2+d\sigma^2\right).
\end{equation}
\end{lemma}
\begin{proof}
See Appendix, Lemma \ref{lem:technical_convex_bound}. Note that we are using the inequality $\Vert \nabla_{\bm{\theta}}\ell(\bm{\xi}^T\bm{\theta})\Vert^2\leq \Vert \bm{\xi} \Vert^2$ and then applying \eqref{fact:norm_Gaussians}.
\end{proof}
We orthogonally decompose $\bm{\theta}\in \R^d$ as $\bm{\theta}=\rho\bm{\mu}+\tilde{\bm{\theta}}$. As before, the idea is to lower bound the left-hand side of \eqref{eq:lemma_for_high_noise_hinge} for $\bm{\theta}=\bm{\theta}^*:=\rho^* \bm{\mu}$. To do so, we write $f(\bm{\theta}_{k-1})-f(\bm{\theta}^*)=\left(f(\bm{\theta}_{k-1})-f(\rho_{k-1}\bm{\mu})\right)+\left(f(\rho_{k-1} \bm{\mu})-f(\rho^*\bm{\mu})\right)$ and then bound each of the two terms.
The following lemma bounds $f(\bm{\theta}_{k-1})-f(\rho \bm{\mu})$.
\begin{lemma}\label{lem:hinge_high_noise_first_lemma} For each $\bm{\theta}\in \R^d$, the following bound holds
\begin{equation}
f(\bm{\theta})-f(\rho\bm{\mu})\geq \max\left\{\tfrac{1}{\sqrt{2\pi}}\sigma \Vert \tilde{\bm{\theta}}\Vert-\rho \Vert \bm{\mu} \Vert\left(\sigma\sqrt{\tfrac{2}{\pi}}+\Vert \bm{\mu} \Vert \right)-1,0 \right\}.
\end{equation}
\end{lemma}
\begin{proof} We begin by denoting
$\xi_1:=\bm{\xi}^T\tilde{\bm{\theta}}$ and $\xi_2:=\bm{\xi}^T\bm{\mu}$. Notice that $\xi_1$ and $\xi_2$ are independent random variables. We have that
\begin{align*}
f(\bm{\theta})-f(\rho \bm{\mu})&=\bE_{\bm{\xi}}\left[\ell(\bm{\xi}^T\bm{\theta})-\ell(\rho\bm{\xi}^T\bm{\mu}) \right]\\&=\bE_{\bm{\xi}}\left[\ell(\xi_1+\rho\xi_2)-\ell(\rho\xi_2) \right]\\&=\bE_{\bm{\xi}}\left[\ell(-\xi_1+\rho\xi_2)-\ell(\rho\xi_2) \right].
\end{align*}
The last equality follows since $\xi_1 \sim -\xi_1$ and $\xi_1$ is independent of $\xi_2$. We define the function
\[
\Phi(\xi_1,\xi_2):=\ell(\xi_1+\rho\xi_2)+\ell(-\xi_1+\rho\xi_2)-2\ell(\rho\xi_2).
\]
We, therefore, obtain that
\[
2\left(f(\bm{\theta})-f(\rho\bm{\mu})\right)=\bE_{\xi_1,\xi_2}\left[\Phi(\xi_1,\xi_2)\right].
\]
We, next, claim that
\begin{equation}\label{eq:hinge_lemma_claim1}
\Phi(\xi_1,\xi_2)=0 \text{ whenever } \vert \xi_1\vert\leq \vert 1-\rho\xi_2\vert.
\end{equation}
To see this, suppose that $\vert \xi_1\vert\leq \vert 1-\rho\xi_2\vert$ holds. We consider two cases. First, assume that $0\leq 1-\rho\xi_2$. Then, it holds that $\rho\xi_2-\xi_1\leq 1$ and $\rho\xi_2+\xi_1\leq 1$. We then have $\Phi(\xi_1,\xi_2)=1-\xi_1-\rho\xi_2+1+\xi_1-\rho\xi_2-2(1-\rho\xi_2)=0$. Second, assume that $1-\rho\xi_2\leq 0$. Then, it holds that $1\leq \rho\xi_2-\xi_1$ and $1\leq \rho\xi_2+\xi_1$. Now it immediately follows that $\Phi(\xi_1,\xi_2)=0$. We have thus established \eqref{eq:hinge_lemma_claim1}. We, next, claim the following
\begin{equation}\label{eq:hinge_lemma_claim2}
\Phi(\xi_1,\xi_2)=\vert \xi_1\vert-\vert 1-\rho\xi_2\vert \text{ whenever } \vert \xi_1\vert\geq \vert 1-\rho\xi_2\vert.
\end{equation}
To this end, we again consider two cases. First, assume that $\xi_1\leq -\vert 1-\rho\xi_2\vert$. It, then, holds that $1\leq -\xi_1+\rho\xi_2$ and $\xi_1+\rho\xi_2\leq 1$. We, therefore, have $ \Phi(\xi_1,\xi_2)= 1-\xi_1-\rho\xi_2-2\ell(\rho\xi_2)$. The claim \eqref{eq:hinge_lemma_claim2} follows from the following simple identity
\begin{equation}\label{eq:hinge_identity}
2\ell(t)= 1-t+\vert 1 - t \vert, \quad \forall t \in \R.
\end{equation}
Second, assume that $\xi_1\geq \vert 1-\rho\xi_2\vert$. It, then, holds that $\xi_1+\rho\xi_2\geq 1$ and $-\xi_1+\rho\xi_2\leq 1$. We, therefore, have $
\Phi(\xi_1,\xi_2)=1+\xi_1-\rho\xi_2-2\ell(\rho \xi_2).
$ The claim \eqref{eq:hinge_lemma_claim2} follows from the identity \eqref{eq:hinge_identity}. We, therefore, obtain
\begin{align}
\bE_{\xi_1,\xi_2}[\Phi(\xi_1,\xi_2)]&=2\bE_{\xi_1,\xi_2}[(\ell(\xi_1+\rho\xi_2)+\ell(-\xi_1+\rho\xi_2)-2\ell(\rho\xi_2))1_{\{ \xi_1>0\}}]\label{eq:hinge_lemma_it1}\\&=2\bE_{\xi_1,\xi_2}[(\ell(\xi_1+\rho\xi_2)+\ell(-\xi_1+\rho\xi_2)-2\ell(\rho\xi_2))1_{\{ \xi_1\geq\vert 1-\rho \xi_2\vert\}}]\label{eq:hinge_lemma_it2}\\&=2\bE_{\xi_1,\xi_2}[(\xi_1-\vert 1-\rho \xi_2\vert)1_{\{ \xi_1\geq \vert 1-\rho\xi_2\vert\}} ]. \label{eq:hinge_lemma_it3}
\end{align}
Here, \eqref{eq:hinge_lemma_it1} holds because $\xi_1\sim -\xi_1$ and $\Phi(\xi_1,\xi_2)=\Phi(-\xi_1,\xi_2)$. Equation \eqref{eq:hinge_lemma_it2} is true because of claim \eqref{eq:hinge_lemma_claim1} and \eqref{eq:hinge_lemma_it3} follows from claim \eqref{eq:hinge_lemma_claim2}. From \eqref{eq:hinge_lemma_it3}, we clearly conclude that $\bE_{\xi_1,\xi_2}[\Phi(\xi_1,\xi_2)]\geq 0$. We, next, observe the bound
\begin{align}
\bE_{\xi_1,\xi_2}[(\xi_1-\vert 1-\rho \xi_2\vert)1_{\{ \xi_1\geq \vert 1-\rho\xi_2\vert\}} ] &= \tfrac{1}{2}\bE_{\xi_1,\xi_2}[\xi_1-\vert 1-\rho \xi_2\vert+\vert \xi_1-\vert 1-\rho \xi_2\vert \vert]\nonumber\\&\geq -\tfrac{1}{2}\bE_{\xi_2}[\vert 1-\rho \xi_2\vert]+\tfrac{1}{2}\bE_{\xi_1,\xi_2}[\vert \xi_1\vert-\vert 1-\rho \xi_2\vert]\nonumber\\&=\tfrac{1}{2}\bE_{\xi_1}[\vert \xi_1\vert]-\bE_{\xi_2}[\vert1-\rho\xi_2\vert]. \label{eq:thrid_last_equation}
\end{align}
The inequality follows from $\bE_{\xi_1}[\xi_1]=0$ and the reverse triangle inequality $\vert \xi_1-\vert 1-\rho\xi_2\vert\vert \geq \vert \xi_1\vert-\vert 1-\rho\xi_2\vert$. On the other hand, it holds that
\begin{equation}\label{first_last_equation}
\bE_{\xi_1}[\vert \xi_1\vert]=\sqrt{\tfrac{2}{\pi}}\sigma \Vert \tilde{\bm{\theta}}\Vert,
\end{equation}
and
\begin{equation}\label{second_last_equation}
\bE_{\bm{\xi}}[\vert 1-\rho \bm{\mu}^T\bm{\xi}\vert]\leq 1+\rho \bE_{\bm{\xi}}[\vert \bm{\mu}^T\bm{\xi}\vert]\leq 1+\rho\Vert \bm{\mu} \Vert \left(\sigma \sqrt{\tfrac{2}{\pi}}+\Vert \bm{\mu} \Vert \right).
\end{equation}
Combining equations \eqref{eq:thrid_last_equation}, \eqref{first_last_equation} and \eqref{second_last_equation} concludes the proof.
\end{proof}
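The two piecewise claims \eqref{eq:hinge_lemma_claim1} and \eqref{eq:hinge_lemma_claim2} admit a quick randomized check (a sanity check only; the sampling ranges are arbitrary):

```python
# Randomized check of the two piecewise claims for
# Phi(x1, x2) = l(x1 + rho*x2) + l(-x1 + rho*x2) - 2*l(rho*x2),
# where l is the hinge loss l(t) = max(1 - t, 0):
#   Phi = 0                     whenever |x1| <= |1 - rho*x2|,
#   Phi = |x1| - |1 - rho*x2|   whenever |x1| >= |1 - rho*x2|.
import random

random.seed(1)

def hinge(t):
    return max(1.0 - t, 0.0)

def Phi(x1, x2, rho):
    return hinge(x1 + rho * x2) + hinge(-x1 + rho * x2) - 2.0 * hinge(rho * x2)

for _ in range(100_000):
    x1, x2, rho = random.uniform(-5, 5), random.uniform(-5, 5), random.uniform(0.1, 3)
    gap = abs(1.0 - rho * x2)
    if abs(x1) <= gap:
        assert abs(Phi(x1, x2, rho)) < 1e-9
    else:
        assert abs(Phi(x1, x2, rho) - (abs(x1) - gap)) < 1e-9
```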
In turn, we bound $f(\rho \bm{\mu})-f(\rho^*\bm{\mu})$ in the next lemma.
\begin{lemma}\label{lem:hinge_loss_second_lemma}
There exists a constant $r>0$ such that the following holds
\begin{equation}
f(\rho \bm{\mu})-f(\rho^*\bm{\mu}) \geq r\vert \rho -\rho^*\vert \quad \text{for any $\rho$ satisfying $\vert \rho-\rho^*\vert >\tfrac{1}{2}\rho^*$ }.
\end{equation}
\end{lemma}
\begin{proof}
We begin by defining the function $h(\rho)=f(\rho\bm{\mu})$. Recall that
\[
f(\rho \bm{\mu})=\bE_{\bm{\xi} \sim \mathcal{P}_*}[\ell(\rho\bm{\xi}^T\bm{\mu})]=\bE_{\bm{\xi} \sim \mathcal{P}_*}[(1-\rho \bm{\xi}^T\bm{\mu})1_{\{\rho\bm{\xi}^T\bm{\mu}\leq 1 \}}]
\]
Hence, it holds that
\[
h'(\rho)=\bm{\mu}^T\nabla f(\rho \bm{\mu})=-\bE_{\bm{\xi} \sim \mathcal{P}_*}[\bm{\xi}^T\bm{\mu}1_{\{ \rho\bm{\xi}^T\bm{\mu}\leq 1\}}].
\]
From \eqref{eq:fact_affine}, we obtain that $\bm{\mu}^T\bm{\xi} \sim N(\Vert \bm{\mu} \Vert^2,\sigma^2\Vert \bm{\mu} \Vert^2)$. For $\rho>0$, therefore, it holds that
\begin{equation}\label{der_phi_pos_hinge}
h'(\rho)=\frac{-1}{\sigma\Vert \bm{\mu} \Vert\sqrt{2\pi}}\int_{-\infty}^{\frac{1}{\rho}}z\exp\left(-\frac{(z-\Vert \bm{\mu} \Vert^2)^2}{2\sigma^2\Vert \bm{\mu} \Vert^2} \right)dz.
\end{equation}
Differentiating once more, using the fundamental theorem of calculus and the chain rule, yields
\[
h''(\rho)=\frac{1}{\rho^3\sigma\Vert \bm{\mu} \Vert\sqrt{2\pi}}\exp\left(-\frac{(\frac{1}{\rho}-\Vert \bm{\mu} \Vert^2)^2}{2\sigma^2\Vert \bm{\mu} \Vert^2} \right) \quad \text{for all $\rho>0$}.
\]
Clearly, there exists $r>0$ such that $h''(\rho)\geq r$ for all $\rho\in [\tfrac{3}{4}\rho^*,\tfrac{5}{4}\rho^*]$. Applying Lemma \ref{lem:convex_analysis_lemma}, the result follows with the constant $r$ given by
\begin{equation}\label{eq:constant_r_hinge_loss}
r:= \frac{64}{125 \rho^*\sigma \Vert \bm{\mu} \Vert \sqrt{2\pi}}\exp\left(-\frac{(\tfrac{4}{3}\frac{1}{\rho^*}-\Vert \bm{\mu} \Vert^2)^2}{2\sigma^2\Vert \bm{\mu} \Vert^2} \right).
\end{equation}
\end{proof}
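As a numerical sanity check on the differentiation step above, the closed form of $h'$ (the truncated-Gaussian mean implied by \eqref{der_phi_pos}) can be differentiated by finite differences and compared against the displayed formula for $h''$. A minimal sketch, with illustrative values standing in for $\Vert \bm{\mu}\Vert$, $\sigma$, and $\rho$:

```python
import math

def h_prime(rho, m, sigma):
    # h'(rho) = -E[X 1{X <= 1/rho}] for X ~ N(m^2, (sigma*m)^2),
    # written in closed form via the truncated-Gaussian mean.
    u = (1.0 / rho - m * m) / (sigma * m)
    Phi = 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))
    phi = math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)
    return -(m * m * Phi - sigma * m * phi)

def h_second(rho, m, sigma):
    # the displayed formula for h''(rho)
    t = 1.0 / rho - m * m
    return math.exp(-t * t / (2.0 * sigma**2 * m**2)) / (rho**3 * sigma * m * math.sqrt(2.0 * math.pi))

m, sigma, rho, eps = 1.5, 0.8, 0.7, 1e-6   # illustrative values only
fd = (h_prime(rho + eps, m, sigma) - h_prime(rho - eps, m, sigma)) / (2.0 * eps)
print(fd, h_second(rho, m, sigma))          # the two values agree to high precision
```

The finite-difference quotient of $h'$ matches $h''$ to roughly eight digits, confirming the formula term by term.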
We define the target set $C$ as follows:
\begin{equation}\label{target_set_for_hinge}
C:=\left\{\bm{\theta}:\vert \rho-\rho^*\vert \leq \tfrac{1}{2}\rho^* \text{ and } \Vert \tilde{\bm{\theta}}\Vert \leq c:=\frac{\sqrt{2\pi}}{\sigma}\left(2+\tfrac{3}{2}\rho^*\Vert \bm{\mu} \Vert\left(\sigma\sqrt{\tfrac{2}{\pi}}+\Vert \bm{\mu} \Vert \right) \right) \right\}.
\end{equation}
Also, we consider the following drift function
\begin{equation}
V(\bm{\theta}):=\Vert \bm{\theta} -\bm{\theta}^*\Vert^2.
\end{equation}
The following proposition establishes the drift equation.
\begin{proposition}
Suppose that the step-size $\alpha$ satisfies the bound
\begin{equation}\label{bound_for_alpha_hinge_loss}
\alpha< \frac{\min\{1,\tfrac{1}{2}r\rho^*\}}{\Vert \bm{\mu} \Vert^2+d\sigma^2}
\end{equation}
where $r$ is defined in \eqref{eq:constant_r_hinge_loss}. The following is then true.
\begin{equation}
\left(\bE\left[V(\bm{\theta}_k)|\mathcal{F}_{k-1} \right]-V(\bm{\theta}_{k-1})\right)1_{\{\bm{\theta}_{k-1}\not\in C\}}\leq -\tfrac{1}{2}\min\{1,\tfrac{1}{2}r\rho^*\}1_{\{\bm{\theta}_{k-1}\not\in C\}}.
\end{equation}
\end{proposition}
\begin{proof}
Suppose that $\bm{\theta}_{k-1}\not\in C$. Lemma \ref{lem:hinge_loss_second_lemma} yields that
\begin{equation}\label{eq:hinge_lemma_proposition_eq2}
f(\rho_{k-1}\bm{\mu})-f(\rho^*\bm{\mu}) \geq \tfrac{1}{2}r\rho^* \quad \text{ whenever } \vert \rho_{k-1}-\rho^* \vert \geq \tfrac{1}{2}\rho^*
\end{equation}
where the constant $r$ is defined in \eqref{eq:constant_r_hinge_loss}. Moreover, from Lemma \ref{lem:hinge_high_noise_first_lemma}, we obtain that
\begin{equation}\label{eq:hinge_lemma_proposition_eq1}
f(\bm{\theta}_{k-1})-f(\rho_{k-1} \bm{\mu})\geq 1 \quad \text{ whenever } \Vert \tilde{\bm{\theta}}_{k-1}\Vert \geq c \text{ and } \vert \rho_{k-1}-\rho^* \vert < \tfrac{1}{2}\rho^*.
\end{equation}
Combining \eqref{eq:hinge_lemma_proposition_eq1} and \eqref{eq:hinge_lemma_proposition_eq2} yields that
\begin{equation}
f(\bm{\theta}_{k-1})-f(\rho^*\bm{\mu}) \geq \min\{1,\tfrac{1}{2}r\rho^*\} \quad \text{ whenever } \bm{\theta}_{k-1} \not\in C.
\end{equation}
Lemma \ref{lem:hinge_loss_high_noise_fundamental_lemma}, together with the bound on $\alpha$ in \eqref{bound_for_alpha_hinge_loss} and the above displayed bound, gives that
\begin{equation}
\bE\left[V(\bm{\theta}_k)|\mathcal{F}_{k-1} \right]-V(\bm{\theta}_{k-1}) \leq -\tfrac{1}{2}\min\{1,\tfrac{1}{2}r\rho^*\}.
\end{equation}
The proof is complete.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:hinge_loss_high_noise}]
The target set $C$ defined in \eqref{target_set_for_hinge} is compact and thus there exists $b>0$ such that
\begin{equation}
\left( \bE\left[V(\bm{\theta}_k)|\mathcal{F}_{k-1} \right]-V(\bm{\theta}_{k-1})\right)1_{\{\bm{\theta}_{k-1}\in C\}} \leq b1_{\{\bm{\theta}_{k-1}\in C\}}.
\end{equation}
Therefore, it holds that
\begin{equation}\label{drift_for_hinge_loss}
\bE\left[V(\bm{\theta}_k)|\mathcal{F}_{k-1} \right]-V(\bm{\theta}_{k-1}) \leq -\tfrac{1}{2}\min\{1,\tfrac{1}{2}r\rho^*\}1_{\{\bm{\theta}_{k-1}\not\in C\}}+b1_{\{\bm{\theta}_{k-1}\in C\}}.
\end{equation}
Furthermore, since $C$ is a compact set that does not contain $\bm{0}$, the constant
\begin{equation}
\delta:= \min_{\bm{\theta}\in C} \mathbb{P}_{\bm{\xi} \sim \mathcal{P}_*}\left(\bm{\xi}^T\bm{\theta}\geq 1 \right)
\end{equation}
is positive.
The result now follows from Lemmas \ref{lem:drift_lemma_appendix} and \ref{lem:stopping_time_appendix} in the appendix.
\end{proof}
\section{Introduction}
Minimization of an expected loss objective function using linear predictors,
\begin{equation} \label{eq:intro_expected_loss} \min_{\bm{\theta} \in \R^d} f(\bm{\theta}) := \bE_{(\bm{\zeta},y) \sim \mathcal{P}} \ell(\bm{\zeta}^T\bm{\theta}, y), \end{equation}
is a central task in machine learning. Here $\ell: \R \times \R \to \R$ is a loss function, $\mathcal{P}$ is an unknown probability distribution, and the data sample $(\bm{\zeta}, y) \in \R^d \times \R$ is a random vector distributed according to $\mathcal{P}$. The most prevalent algorithm for solving \eqref{eq:intro_expected_loss} is \textit{stochastic gradient descent} (SGD). Whereas a significant amount of work has been devoted to the convergence analysis of SGD (see, {\em e.g.}, \cite{RM1951,Curtis_SGD,Bubeck_Convex_book, Pflug}), leading, in particular, to learning rate schedules, the question of how to terminate the algorithm when one is near an optimal classifier remains largely unaddressed.
Yet, inexpensive stopping criteria are of utmost interest in machine learning. First, if one could produce a low-cost test for near-optimality, then needless computational time would be eliminated without sacrificing the quality of the solution or the efficiency of the SGD algorithm. Second, early termination tests impose a degree of predictability on accuracy and running times-- a useful quality when SGD occurs as a subproblem of a larger computation. Several works show that early termination of SGD can prevent overfitting, speed up learning procedures, and/or improve generalization properties \citep{Prechelt2012, Hardt:2016:TFG:3045390.3045520,Yao2007}. Motivated by these facts, we sought to address, from the viewpoint of stochastic optimization, the following question:
\begin{center}
How can one design an inexpensive test to terminate SGD with a fixed learning rate without sacrificing the quality of the solution?
\end{center}
To do so, we simplified our setting to binary classification, one of the fundamental examples of supervised machine learning \citep{ShalevShwartzBenDavid}. In binary classification, the learning algorithm is given a sequence of training examples $(\bm{\zeta}_1,y_1),(\bm{\zeta}_2,y_2),\ldots$, often noisy, where $\bm{\zeta}_i\in\R^d$ and $y_i\in\{0,1\}$ for each $i$. The job of the algorithm is to develop a rule for distinguishing future, unseen $\bm{\zeta}$'s that are classified as $1$ from those classified as $0$. In this work, we limit attention to linear classifiers. This means that the learning algorithm must determine a vector $\bm{\theta}$ such that the classification of $\bm{\zeta}$ is $1$ when $\bm{\zeta}^T\bm{\theta}>0$ else it is $0$. Note that any algorithm for linear classification can be extended to one for nonlinear classification via the construction of ``kernels''; see, {\em e.g.}, \cite{ShalevShwartzBenDavid}. This extension is not pursued; we leave it for later work.
The usual technique for determining $\bm{\theta}$, which is also adopted herein, is to define a loss function that turns the discrete problem of computing a $1$ or $0$ for $\bm{\zeta}$ to a continuous quantity. Common choices of loss functions include logistic and hinge. For simplicity, we consider only the unregularized logistic and hinge loss in this work.
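To make the two losses concrete, a per-sample SGD update can be sketched as follows. This is a minimal illustration with our own naming conventions (not the paper's notation); labels are taken in $\{0,1\}$ and remapped internally to $\{-1,+1\}$ for the hinge loss:

```python
import numpy as np

def sgd_step(theta, zeta, y, alpha, loss="logistic"):
    """One SGD step on a single example (zeta, y), with label y in {0, 1}."""
    z = zeta @ theta
    if loss == "logistic":
        # gradient of the logistic loss -y*log(s(z)) - (1-y)*log(1-s(z)), s = sigmoid
        grad = (1.0 / (1.0 + np.exp(-z)) - y) * zeta
    elif loss == "hinge":
        # subgradient of max(0, 1 - s*z), after remapping the label to s in {-1, +1}
        s = 2 * y - 1
        grad = -s * zeta if s * z < 1 else np.zeros_like(zeta)
    else:
        raise ValueError(f"unknown loss: {loss}")
    return theta - alpha * grad

theta = sgd_step(np.zeros(2), np.array([1.0, -2.0]), 1, 0.1)
print(theta)  # [ 0.05 -0.1 ]
```

Both branches return the standard constant-step update; the hinge branch is inactive whenever the example is classified with margin at least one.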
Our theoretical results assume that our data comes from a Gaussian mixture model (GMM). The GMM is attributed to \cite{article}. The problem of identifying GMM parameters given random samples has attracted considerable attention in the literature; see, \textit{e.g}., the recent work of \cite{Ashtiani} and earlier references therein. Another common
use of GMMs in the literature, similar to our application here, is as test-cases for a learning algorithm intended to solve a more general problem. Examples include clustering; see, \textit{e.g.,} \cite{DBLP:journals/corr/abs-1902-07137} and \cite{pmlr-v70-panahi17a} and tensor factorization; see, \textit{e.g.,} \cite{sherman2019estimating}.
Ordinarily in deterministic first-order optimization methods, one terminates when the norm of the gradient falls below a predefined tolerance. In the case of SGD for binary classification, this is unsuitable for two reasons. First, the true gradient is generally inaccessible to the algorithm or it is computationally expensive to generate even a sufficient approximation of the gradient.
Second, even if the computations were possible, an `optimal' classifier $\bm{\theta}$ for the classification task is not necessarily the minimizer of the loss function since the loss function is merely a surrogate for correct classification of the data.
\paragraph{Our contributions.} In this paper, we introduce a new and simple termination criterion for stochastic gradient descent (SGD) applied to binary classification using logistic regression and hinge loss with constant step-size $\alpha>0$. Notably, our proposed criterion adds no additional computational cost to the SGD algorithm.
We analyze the behavior of the classifier at termination, where we sample from a normal distribution with unknown means $\bm{\mu}_0,\bm{\mu}_1\in \R^d$ and variances $\sigma^2I_d$. Here $\sigma>0$ and $I_d$ is the $d \times d$ identity matrix. As such, we make no assumptions on the separability of the data set.
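For concreteness, data from this model can be generated as in the following sketch. We assume equal mixture weights for the two classes here; the function name and numerical values are our own illustration:

```python
import numpy as np

def sample_gmm(n, mu0, mu1, sigma, rng):
    """Draw n labeled samples: y ~ Bernoulli(1/2), then zeta ~ N(mu_y, sigma^2 I_d)."""
    y = rng.integers(0, 2, size=n)
    means = np.where(y[:, None] == 1, mu1, mu0)   # pick the class mean per sample
    zeta = means + sigma * rng.standard_normal((n, len(mu0)))
    return zeta, y

rng = np.random.default_rng(0)
zeta, y = sample_gmm(50_000, np.array([-1.0, 0.0]), np.array([1.0, 0.0]), 0.5, rng)
```

With $\sigma$ small relative to $\Vert\bm{\mu}_1-\bm{\mu}_0\Vert$ the two clouds are nearly separable; as $\sigma$ grows they overlap, which is exactly the regime split studied in our analysis.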
When the variance is not too large, we have the following results:
\begin{enumerate}
\item The test will be activated for any fixed positive step-size. In particular, we establish an upper bound for the expected number of iterations before the activation occurs. This upper bound tends to a numeric constant when $\sigma$ converges to zero. In fact, we show that the expected time until termination decreases linearly as the data becomes more separable ({\em i.e.}, as the noise $\sigma \to 0$).
\item We prove that the accuracy of the classifier at termination nearly matches the accuracy of an optimal classifier. Accuracy is the fraction of predictions that a classification model got right while an optimal classifier minimizes the probability of misclassification when the sample is drawn from the same distribution as the training data.
\end{enumerate}
When the variance is large, we show that the test will be activated for a sufficiently small step-size.
We empirically evaluate the performance of our stopping criterion versus a baseline competitor. We compare performances on both synthetic (Gaussian and heavy-tailed $t$-distribution) as well as real data sets (MNIST \citep{MNIST} and CIFAR-10 \citep{cifar10}). In our experiments, we observe that our test yields relatively accurate classifiers with small variation across multiple runs.
\paragraph{Related works.} To the best of our knowledge, the earliest comprehensive numerical testing of a stopping termination test for SGD in neural networks was introduced by \cite{Prechelt2012}.
His stopping criterion, which we denote \textit{small validation set} (SVS), periodically checks the iterate on a validation set. Theoretical guarantees for SVS were established in \citep{Early_stopping_Lin,Yao2007}. \cite{Hardt:2016:TFG:3045390.3045520} shows that SGD is uniformly stable, and thus solutions with low training error that are found quickly generalize well. These results support exploring new computationally inexpensive termination tests-- the spirit of this paper.
On a related topic,
the relationship between generalization and optimization is an active area of research in machine learning. Much of the pioneering work in this area focused on understanding how early termination of algorithms, such as conjugate gradient, gradient descent, and SGD, can act as an implicit regularizer and thus exhibit better generalization properties \citep{Prechelt2012,Early_stopping_Lin,Yao2007,CG_implicit_regularization,Multipass_Lin}. The use of early stopping as a tool for improving generalization is not studied herein because our experiments indicate that for the problem under consideration, binary classification with a linear separator, the accuracy increases as SGD proceeds and ultimately reaches a steady value but does not decrease, meaning that there is no opportunity to improve generalization by stopping early.
See also \cite{Nemirovski_Robust_Stochastic_1}.
Instead of using a validation set to stop early, \cite{Variational_Duvenaud} employs an estimate of the marginal likelihood as a stopping criterion. Another termination test, based upon a Wald-type statistic developed for least squares with reproducing kernels, guarantees minimax optimal testing \citep{Liu_stopping}. However, the practical benefits of such procedures over a validation set remain unclear.
Several works have introduced validation procedures to check the accuracy of solutions generated from stochastic algorithms based upon finding a point $\bm{\theta}_\varepsilon$ that satisfies a high confidence bound $\bP(f(\bm{\theta}_\varepsilon) - \min f \le \varepsilon) \ge 1-p$, in essence, using this as a stopping criterion (\textit{e.g.}, see \cite{Dima_Robust_Stochastic,Ghadimi_Strongly_cvx_validation_2,Ghadimi_Strongly_cvx_validation_1, Juditsky_robust_stochastic,Nemirovski_Robust_Stochastic_1}).
Yet, notably, all these procedures produce points with small function values. For binary classification, however, this could be quite expensive, and a good classifier need not be the minimizer of the loss function. Ideally, one should terminate when the classifier's direction aligns with the optimal direction-- the approach we pursue herein.
\subsection{Low regime, proof of Theorem \ref{thm:low}}\label{sec:proof_low}
In this section, we investigate the low variance regime. We consider the target set $C$ and function $V$ defined in \eqref{eq:targetset_lownoise} and \eqref{eq:driftfunction_lownoise} respectively, \textit{i.e.}
\begin{equation}\label{eq:CV_low_recall}
C = \{\bm{\theta}: \bm{\mu}^T\bm{\theta}\geq 1 \}, \quad V(\bm{\theta})=\left(M-\bm{\mu}^T\bm{\theta}\right)^2,
\end{equation}
where $M$ is a constant to be determined.
The next lemma shows that the drift equation \eqref{eq:driftequation} holds for the pair $(C,V)$. \begin{lemma}[Drift equation]\label{lem:driftequation} Consider the SGD algorithm and let the set $C$ and the function $V$ be as in \eqref{eq:CV_low_recall}. Define the constants $c,b,M$ as in \eqref{eq:para_log} and \eqref{eq:para_hinge}. Then, provided that $\sigma \leq c\Vert \bm{\mu} \Vert$, the function $V$ is a drift function with respect to the set $C$ and satisfies the drift equation \eqref{eq:driftequation} with the constant $b$.
\end{lemma}
\begin{proof}For simplicity we write $\mathcal{F}_{-1}:=\sigma\left(\{\bm{\theta}_0=\bm{\theta}\} \right)$. Fix $k\geq 1$ and write $\bm{\xi}_k=\bm{\mu}+\sigma\bm{\psi}_k$ with $\bm{\psi}_k \sim N(0,I_d)$. Denote $\psi_k:=\frac{\bm{\mu}^T\bm{\psi}_k}{\Vert\bm{\mu} \Vert}$, thus $\psi_k\sim N(0,1)$. In order to show that the function $V$ satisfies the drift equation \eqref{eq:driftequation}, it suffices to assume $\bm{\theta}_{k-1}\not\in C$; in particular, this means $\bm{\theta}_{k-1}^T\bm{\mu}<1$.
\textbf{Logistic loss.} By expanding out the term using the update formula, we get the following
\begin{align} \label{eq: low_noise_blah_1}
V(\bm{\theta}_{k})= V(\bm{\theta}_{k-1})-\frac{2\alpha \bm{\mu}^T\bm{\xi}_k(M-\bm{\mu}^T\bm{\theta}_{k-1})}{1+\exp(\bm{\xi}_k^T\bm{\theta}_{k-1})}+\frac{\alpha^2(\bm{\mu}^T\bm{\xi}_k)^2}{(1+\exp(\bm{\xi}_k^T\bm{\theta}_{k-1}))^2}.
\end{align}
We have
\begin{align*}
&\bE_{\bm{\xi}_k}\left[ \frac{\bm{\mu}^T\bm{\xi}_k}{1+\exp(\bm{\xi}_k^T\bm{\theta}_{k-1})}|\mathcal{F}_{k-1}\right]\\
& \qquad =\Vert\bm{\mu}\Vert^2\bE_{\bm{\xi}_k}\left[ \frac{1}{1+\exp(\bm{\xi}_k^T\bm{\theta}_{k-1})}|\mathcal{F}_{k-1}\right]+\sigma \Vert \bm{\mu}\Vert\bE_{\bm{\xi}_k,\psi_k}\left[ \frac{\psi_k}{1+\exp(\bm{\xi}_k^T\bm{\theta}_{k-1})}|\mathcal{F}_{k-1}\right]\\
& \qquad \geq \Vert \bm{\mu} \Vert^2 \bE_{\bm{\xi}_k}\left[ \frac{1}{1+\exp(\bm{\xi}_k^T\bm{\theta}_{k-1})}|\mathcal{F}_{k-1}\right]+\sigma \Vert \bm{\mu}\Vert\bE_{\psi_k}\left[\psi_k1_{\{\psi_k<0\}} \right]\\
& \qquad = \Vert \bm{\mu} \Vert^2 \bE_{\bm{\xi}_k}\left[ \frac{1}{1+\exp(\bm{\xi}_k^T\bm{\theta}_{k-1})} \left ( 1_{\{\bm{\mu}^T \bm{\theta}_{k-1} \geq \bm{\xi}_k^T \bm{\theta}_{k-1}\}} + 1_{\{\bm{\mu}^T \bm{\theta}_{k-1} < \bm{\xi}_k^T \bm{\theta}_{k-1}\}} \right ) |\mathcal{F}_{k-1}\right] -\sigma \Vert \bm{\mu}\Vert \sqrt{\frac{1}{2\pi}}\\
& \qquad \geq \frac{\Vert \bm{\mu}\Vert^2}{1+\exp(\bm{\mu}^T\bm{\theta}_{k-1})}\bE_{\bm{\xi}_k}\left[ 1_{\{\bm{\mu}^T \bm{\theta}_{k-1} \geq \bm{\xi}_k^T \bm{\theta}_{k-1}\}} |\mathcal{F}_{k-1} \right]-\sigma \Vert \bm{\mu}\Vert \sqrt{\frac{1}{2\pi}}\\
&\qquad \geq \frac{\Vert \bm{\mu}\Vert^2}{2(1+e)}-\sigma \Vert \bm{\mu}\Vert \sqrt{\frac{1}{2\pi}}\\
& \qquad \geq 0.001\Vert \bm{\mu} \Vert^2 .
\end{align*}
Here the first inequality follows from $\bE[X] \ge \bE[X1_{\{X<0\}}]$ and $1+ \exp(\bm{\xi}_k^T\bm{\theta}_{k-1}) \ge 1$, the second equality from \eqref{fact:norm_Gaussians}, and the second-to-last inequality from the observation that for any normally distributed $X$ we have $\bP(\bE[X] \ge X) = 1/2$, together with $\bm{\xi}_k^T\bm{\theta}_{k-1} \sim N(\bm{\mu}^T\bm{\theta}_{k-1}, \sigma^2 \norm{\bm{\theta}_{k-1}}^2)$ and $\bm{\mu}^T\bm{\theta}_{k-1} < 1$. The last inequality uses the assumption $\sigma \le 0.33 \norm{\bm{\mu}}$. Taking conditional expectations of \eqref{eq: low_noise_blah_1} and combining with the above sequence of inequalities, we deduce the following bound
\begin{align*}
&\bE\left[V(\bm{\theta}_{k})-V(\bm{\theta}_{k-1})|\mathcal{F}_{k-1}\right]\\
& \quad \qquad = \bE_{\bm{\xi}_k} \left [-\frac{2\alpha \bm{\mu}^T\bm{\xi}_k(M-\bm{\mu}^T\bm{\theta}_{k-1})}{1+\exp(\bm{\xi}_k^T\bm{\theta}_{k-1})} | \mathcal{F}_{k-1} \right] + \bE_{\bm{\xi}_k} \left [\frac{\alpha^2(\bm{\mu}^T\bm{\xi}_k)^2}{(1+\exp(\bm{\xi}_k^T\bm{\theta}_{k-1}))^2} | \mathcal{F}_{k-1} \right]\\
& \quad \qquad \leq -0.002(M-1)\alpha \Vert \bm{\mu} \Vert^2+ \alpha^2\Vert \bm{\mu} \Vert^2\left(\Vert \bm{\mu}\Vert^2+\sigma^2 \right)\\
& \quad \qquad = \alpha \Vert \bm{\mu} \Vert^2\left[ -0.002(M-1)+\alpha\left(\Vert \bm{\mu} \Vert^2+\sigma^2\right)\right].
\end{align*}
Here the first inequality follows from $\bm{\mu}^T\bm{\theta}_{k-1} < 1$ and by upper bounding $\frac{(\bm{\mu}^T\bm{\xi}_k)^2}{(1+\exp(\bm{\xi}_k^T\bm{\theta}_{k-1}))^2}$ with $(\bm{\mu}^T\bm{\xi}_k)^2$ and then applying \eqref{fact:norm_Gaussians}.
A quick computation after plugging in the value of $M$ and the bound $\sigma\leq 0.33\Vert \bm{\mu} \Vert$ from \eqref{eq:para_log} yields the drift equation \eqref{eq:driftequation} with $b=\alpha \Vert \bm{\mu} \Vert^2$.
\textbf{Hinge loss.} By expanding out the term using the update formula, we get the following
\begin{equation}\label{eq:Vequation_hinge}
V(\bm{\theta}_k)=V(\bm{\theta}_{k-1})-2\alpha (M-\bm{\mu}^T\bm{\theta}_{k-1})\bm{\mu}^T\bm{\xi}_k1_{\{\bm{\xi}_k^T\bm{\theta}_{k-1}\leq 1\}}+\alpha^2(\bm{\mu}^T\bm{\xi}_k)^21_{\{\bm{\xi}_k^T\bm{\theta}_{k-1}\leq 1\}}.
\end{equation}
We have
\begin{align*}
\bE_{\bm{\xi}_k}[1_{\{ \bm{\xi}_k^T\bm{\theta}_{k-1}\leq 1\}}\bm{\mu}^T\bm{\xi}_k|\mathcal{F}_{k-1}] &=\Vert \bm{\mu}\Vert^2\bE_{\bm{\xi}_k}[1_{\{ \bm{\xi}_k^T\bm{\theta}_{k-1}\leq 1\}}|\mathcal{F}_{k-1}]+\sigma\Vert \bm{\mu} \Vert\bE_{\bm{\xi}_k,\psi_k}[1_{\{ \bm{\xi}_k^T\bm{\theta}_{k-1}\leq 1\}}\psi_k|\mathcal{F}_{k-1}]\\
& \geq \frac{1}{2}\Vert \bm{\mu} \Vert^2+\sigma\Vert \bm{\mu} \Vert\bE_{\psi_k}[\psi_k1_{\{\psi_k<0 \}}]\\
& = \frac{1}{2}\Vert \bm{\mu} \Vert^2-\sigma\Vert \bm{\mu} \Vert\sqrt{\frac{1}{2\pi}} \\&
\geq
0.001 \Vert \bm{\mu} \Vert^2.
\end{align*}
Here the first inequality follows from $1_{\{\bm{\xi}_k^T\bm{\theta}_{k-1}\leq \bm{\mu}^T\bm{\theta}_{k-1}\}}\leq 1_{\{\bm{\xi}_k^T\bm{\theta}_{k-1}\leq 1\}}$ and $\bE_{\bm{\xi}_k}[1_{\{\bm{\xi}_k^T\bm{\theta}_{k-1}\leq \bm{\mu}^T\bm{\theta}_{k-1}\}}]=\tfrac{1}{2}$, and the second from \eqref{fact:norm_Gaussians}. The last inequality uses the assumption $\sigma \leq 1.25 \Vert \bm{\mu} \Vert$. By taking conditional expectations of \eqref{eq:Vequation_hinge} combined with the above sequence of inequalities, we deduce the bound
\begin{equation*}
\begin{aligned}
\bE[V(\bm{\theta}_k)-V(\bm{\theta}_{k-1})|\mathcal{F}_{k-1}]&=\bE_{\bm{\xi}_k}\left[-2\alpha(M-\bm{\mu}^T\bm{\theta}_{k-1})\bm{\mu}^T\bm{\xi}_k1_{\{\bm{\xi}_k^T\bm{\theta}_{k-1}\leq 1\}}|\mathcal{F}_{k-1}\right]+\bE_{\bm{\xi}_k}\left[\alpha^2(\bm{\mu}^T\bm{\xi}_k)^21_{\{\bm{\xi}_k^T\bm{\theta}_{k-1}\leq 1\}}|\mathcal{F}_{k-1}\right]
\\
&\leq \alpha \Vert \bm{\mu} \Vert^2\left[ -0.002(M-1)+\alpha\left(\Vert \bm{\mu} \Vert^2+\sigma^2\right)\right].
\end{aligned}
\end{equation*}
A quick computation after plugging in the value of $M$ and the bound $\sigma \leq 1.25 \Vert \bm{\mu} \Vert$ yields the desired result.
\end{proof}
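The drift inequality of the lemma can be checked by simulation. The following sketch estimates the one-step drift of $V$ under the logistic-loss update at a point outside $C$; all numerical values, including $M$ and $\alpha$, are illustrative choices rather than the constants fixed in \eqref{eq:para_log}:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters; M and alpha are ad hoc stand-ins for the paper's constants.
mu = np.array([2.0, 0.0])                    # ||mu|| = 2
sigma, alpha, M = 0.3, 0.01, 5.0             # sigma <= 0.33*||mu|| holds
theta = np.array([0.3, -0.1])                # outside C: mu^T theta = 0.6 < 1

# Monte Carlo estimate of E[V(theta_k) | theta_{k-1} = theta] - V(theta) for one
# logistic-loss SGD step theta_k = theta + alpha * xi / (1 + exp(xi^T theta)).
n = 200_000
xi = mu + sigma * rng.standard_normal((n, 2))
theta_next = theta + alpha * xi / (1.0 + np.exp(xi @ theta))[:, None]
drift = np.mean((M - theta_next @ mu) ** 2) - (M - theta @ mu) ** 2
print(drift)  # negative: one step shrinks V in expectation outside C
```

With these values the estimated drift is strictly negative, as the lemma predicts for the low-noise regime.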
Recall, the stopping times $\tau_m$ denote the $m^{th}$ time that the SGD iterates enter the target set $C$. We show that $\bE[\tau_m]=\mathcal{O}(m)$. To do so, we begin by stating a lemma that gives a bound on the stopping time $\tilde{\tau}_1$ starting from any $\bm{\theta}_0$. In other words, for an arbitrary starting $\bm{\theta}_0$, we define
\[
\tilde{\tau}_1:=\inf\{k>0:\bm{\theta}_k\in C\}.
\]
\begin{lemma}[\cite{meyn2012markov}, Theorem 11.3.4]\label{lem:drift_from_meyn} Suppose that $V:\R^d\rightarrow [0,+\infty)$ is a drift function with respect to some target set $C$ \textit{i.e.} for some constant $b\in (0,+\infty)$ the drift equation \eqref{eq:driftequation} holds. The following is true
\begin{equation}
\bE[\tilde{\tau}_1|\bm{\theta}_0=\bm{\theta}]\leq \tfrac{1}{b}V(\bm{\theta}).
\end{equation}
\end{lemma}
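Lemma \ref{lem:drift_from_meyn} can likewise be illustrated numerically: simulate the logistic-loss SGD recursion until it first enters $C=\{\bm{\theta}:\bm{\mu}^T\bm{\theta}\geq 1\}$ and compare the empirical mean hitting time against $V(\bm{\theta}_0)/b$. The parameter values below are illustrative choices, not the paper's constants:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters; M and alpha are ad hoc stand-ins for the paper's constants.
mu = np.array([2.0, 0.0])
sigma, alpha, M = 0.3, 0.01, 5.0
b = alpha * float(mu @ mu)                  # drift constant b = alpha * ||mu||^2

def hitting_time(theta0, max_iter=5_000):
    """Iterations of logistic-loss SGD until mu^T theta >= 1 (entry into C)."""
    theta = theta0.copy()
    for k in range(1, max_iter + 1):
        xi = mu + sigma * rng.standard_normal(2)
        theta = theta + alpha * xi / (1.0 + np.exp(xi @ theta))
        if mu @ theta >= 1.0:
            return k
    return max_iter

taus = [hitting_time(np.zeros(2)) for _ in range(200)]
bound = M**2 / b                            # Lemma: E[tau] <= V(0)/b = M^2/b
print(np.mean(taus), bound)                 # empirical mean sits well below the bound
```

The empirical mean hitting time is far smaller than $M^2/b$ here, consistent with the lemma (the bound is not tight in this regime).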
We establish upper bounds on $\bE[\tau_m]$ for $m\geq 1$ in the following proposition.
\begin{proposition}[{Bound on $\bE[\tau_m]$}]\label{prp:Etaum_lownoise}
Let $\bm{\theta}_0=\bm{0}$ and assume the notation and assumptions of Lemma \ref{lem:driftequation} hold. The following is true for all $m\geq 1$
\begin{align}
\bE[\tau_m]\leq (m-1) \left ( 1+\frac{M^2}{b}\cdot\Phi^c\left(\frac{\Vert \bm{\mu} \Vert}{\sigma} \right)+\frac{\alpha\sigma^3 M^2}{ \Vert \bm{\mu} \Vert b}\cdot\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{\Vert \bm{\mu} \Vert^2}{2\sigma^2} \right) \right ) +\frac{M^2}{b}.
\end{align}
\end{proposition}
\begin{proof}First, the result for $m=1$ follows immediately by combining Lemmas \ref{lem:driftequation} and \ref{lem:drift_from_meyn} with $\bm{\theta}_0 = \bm{0}$. We now assume that $\tau_{m-1}<\infty$ a.s. for some $m\geq 2$. Fix an integer $n \ge 1$. We decompose the space to yield the following bounds
\begin{equation} \begin{aligned} \label{eq: low_noise_bound}
\bE\big[(\tau_m-\tau_{m-1})\wedge& n|\mathcal{F}_{\tau_{m-1}+1}\big]=\bE\left[((\tau_m-\tau_{m-1})\wedge n)|\mathcal{F}_{\tau_{m-1}+1}\right]1_{\{\bm{\mu}^T\bm{\theta}_{\tau_{m-1}+1}\geq 1\}}\\
& \qquad \qquad \qquad \qquad \qquad +\bE\left[((\tau_m-\tau_{m-1})\wedge n)|\mathcal{F}_{\tau_{m-1}+1}\right]1_{\{\bm{\mu}^T\bm{\theta}_{\tau_{m-1}+1}<1\}}\\
&= 1_{\{\bm{\mu}^T\bm{\theta}_{\tau_{m-1}+1}\geq 1\}}+ \bE\left[((\tau_m-\tau_{m-1})\wedge n)|\mathcal{F}_{\tau_{m-1}+1}\right]1_{\{\bm{\mu}^T\bm{\theta}_{\tau_{m-1}+1}<1\}}\\
&=1_{\{\bm{\mu}^T\bm{\theta}_{\tau_{m-1}+1}\geq 1\}}+\sum_{i=1}^{\infty}\bE\left[(\tau_m-\tau_{m-1})\wedge n|\mathcal{F}_{\tau_{m-1}+1}\right]1_{\{i-1< 1-\bm{\mu}^T\bm{\theta}_{\tau_{m-1}+1}\leq i\}}\\
&\leq 1+\sum_{i=1}^{\infty}\bE\left[\tilde{\tau}_1\wedge n|\bm{\theta}_0=\bm{\theta}_{\tau_{m-1}+1}\right]1_{\{i-1< 1-\bm{\mu}^T\bm{\theta}_{\tau_{m-1}+1}\leq i\}}.
\end{aligned} \end{equation}
Here the first equality follows because $((\tau_m-\tau_{m-1})\wedge n) 1_{\{\bm{\mu}^T\bm{\theta}_{\tau_{m-1}+1} \ge 1\}} = 1_{\{\bm{\mu}^T\bm{\theta}_{\tau_{m-1}+1}\geq 1\}}$, and the last step bounds $1_{\{\bm{\mu}^T\bm{\theta}_{\tau_{m-1}+1}\geq 1\}}$ by $1$ and applies the strong Markov property. We consider the logistic and hinge loss case separately to show that the following is true
\begin{equation}\label{eq:ineq:indicators}
\begin{aligned}
1_{\{i-1< 1-\bm{\mu}^T\bm{\theta}_{\tau_{m-1}+1}\leq i\}} &\leq 1_{\{\bm{\mu}^T\bm{\xi}_{\tau_{m-1}+1}<\frac{1-i}{\alpha}\}}.
\end{aligned}
\end{equation}
For clarity, in the next few inequalities, we write $1\{\cdot\}$ instead of $1_{\{\cdot\}}$. In the case of the logistic loss,
for each $i \ge 1$, we observe the bound
\begin{equation}
\begin{aligned}
1\{i-1< 1-\bm{\mu}^T\bm{\theta}_{\tau_{m-1}+1}\leq i\}&\leq 1\{i-1< 1-\bm{\mu}^T\bm{\theta}_{\tau_{m-1}+1}\}\nonumber\\&=1\left\{i-1< 1-\bm{\mu}^T\bm{\theta}_{\tau_{m-1}}-\frac{\alpha \bm{\mu}^T\bm{\xi}_{{\tau_{m-1}}+1}}{1+\exp(\bm{\xi}_{{\tau_{m-1}}+1}^T\bm{\theta}_{\tau_{m-1}})}\right\}\nonumber\\
&\leq 1\left\{i-1<-\frac{\alpha \bm{\mu}^T\bm{\xi}_{\tau_{m-1}+1}}{1+\exp(\bm{\xi}_{\tau_{m-1}+1}^T\bm{\theta}_{\tau_{m-1}})}\right\} \label{eq:i-1_calculus}\nonumber\\
& \leq 1\left\{i-1<-\alpha \bm{\mu}^T\bm{\xi}_{\tau_{m-1}+1}\right\},\nonumber
\end{aligned}
\end{equation}
where the second inequality follows because $\bm{\mu}^T\bm{\theta}_{\tau_{m-1}} \geq 1$, and the last inequality because $i-1\geq 0$ forces $-\alpha \bm{\mu}^T\bm{\xi}_{\tau_{m-1}+1}$ to be positive on the event in question, whence dividing by $1+\exp(\bm{\xi}_{\tau_{m-1}+1}^T\bm{\theta}_{\tau_{m-1}})\geq 1$ only decreases it.
In the case of the hinge loss, for each $i\geq 1$, arguing similarly to the above, we observe the bound
\begin{equation}
\begin{aligned}
1\left\{i-1<1-\bm{\mu}^T\bm{\theta}_{\tau_{m-1}+1}\leq i\right\}&\leq 1\left\{i-1<1-\bm{\mu}^T\bm{\theta}_{\tau_{m-1}+1}\right\}\\&\leq 1\left\{i-1<1-\bm{\mu}^T\bm{\theta}_{\tau_{m-1}}-\alpha \bm{\mu}^T\bm{\xi}_{\tau_{m-1}+1}1_{\{\bm{\xi}_{\tau_{m-1}+1}^T\bm{\theta}_{\tau_{m-1}}\leq 1\}}\right\}\\&\leq 1\left\{i-1<-\alpha \bm{\mu}^T\bm{\xi}_{\tau_{m-1}+1}1_{\{\bm{\xi}_{\tau_{m-1}+1}^T\bm{\theta}_{\tau_{m-1}}\leq 1\}} \right\}\\&\leq 1\left\{i-1<-\alpha\bm{\mu}^T\bm{\xi}_{\tau_{m-1}+1}\right\}.
\end{aligned}
\end{equation}
Therefore, we have shown that \eqref{eq:ineq:indicators} holds. Setting $\bm{\theta}_0=\bm{\theta}_{\tau_{m-1}+1}$, by Lemma \ref{lem:drift_from_meyn}, for each $i\geq 1$ we deduce
\begin{equation} \begin{aligned} \label{eq: low_noise_blah_3}
\bE\left[\tilde{\tau}_1\wedge n|\bm{\theta}_0=\bm{\theta}_{\tau_{m-1}+1}\right]1_{\{i-1< 1-\bm{\mu}^T\bm{\theta}_{\tau_{m-1}+1}\leq i\}}&\leq \frac{(M-\bm{\mu}^T\bm{\theta}_{\tau_{m-1}+1})^2}{b}1_{\{i-1< 1-\bm{\mu}^T\bm{\theta}_{\tau_{m-1}+1}\leq i\}}\\&\leq \frac{(M+i-1)^2}{b}1_{\{ \bm{\mu}^T\bm{\xi}_{\tau_{m-1}+1}< \frac{1-i}{\alpha}\}}.
\end{aligned} \end{equation}
Finally we observe that
\begin{equation} \begin{aligned} \label{eq: low_noise_blah_4}
\bE\left[1_{\{ \bm{\mu}^T\bm{\xi}_{\tau_{m-1}+1}< \frac{1-i}{\alpha}\}} \right]&=\bE\left[\sum_{k=1}^{\infty}1_{\{ \bm{\mu}^T\bm{\xi}_{k+1}< \frac{1-i}{\alpha}\}}1_{\{\tau_{m-1}=k\}}\right]\\
&=\sum_{k=1}^{\infty}\bE\left[1_{\{ \bm{\mu}^T\bm{\xi}_{k+1}< \frac{1-i}{\alpha}\}} \right]\bE\left[1_{\{\tau_{m-1}=k\}} \right]\\
&= \Phi\left(\frac{\frac{1-i}{\alpha}-\Vert \bm{\mu}\Vert^2}{\sigma\Vert \bm{\mu}\Vert } \right)\sum_{k=1}^{\infty}\bE\left[1_{\{\tau_{m-1}=k\}} \right]\\
&=\Phi\left(\frac{\frac{1-i}{\alpha}-\Vert \bm{\mu}\Vert^2}{\sigma\Vert \bm{\mu}\Vert } \right).
\end{aligned} \end{equation}
The second equality is by independence, and the third holds because $\bm{\mu}^T \bm{\xi}_{k+1} \sim N(\norm{\bm{\mu}}^2, \sigma^2 \norm{\bm{\mu}}^2)$. By combining \eqref{eq: low_noise_bound}, \eqref{eq: low_noise_blah_3}, and \eqref{eq: low_noise_blah_4}, we obtain the following
\begin{equation} \begin{aligned} \label{eq: low_noise_blah_5}
\bE\big[(\tau_m-&\tau_{m-1})\wedge n\big]\leq 1+\frac{M^2}{b}\cdot\Phi\left(-\frac{\Vert \bm{\mu} \Vert}{\sigma} \right)+\sum_{i=2}^{\infty} \frac{(M+i-1)^2}{b}\cdot\Phi\left(\frac{\frac{1-i}{\alpha}-\Vert \bm{\mu}\Vert^2}{\sigma\Vert \bm{\mu}\Vert } \right)\\
&=1+\frac{M^2}{b}\cdot\Phi^c\left(\frac{\Vert \bm{\mu} \Vert}{\sigma} \right)+\sum_{i=2}^{\infty} \frac{(M+i-1)^2}{b}\cdot\Phi^{c}\left(\frac{\Vert \bm{\mu}\Vert^2+\frac{i-1}{\alpha}}{\sigma\Vert \bm{\mu}\Vert } \right)\\
&\le1+\frac{M^2}{b}\cdot\Phi^c\left(\frac{\Vert \bm{\mu} \Vert}{\sigma} \right)+\frac{\alpha\sigma\Vert \bm{\mu} \Vert}{b\sqrt{2\pi}}\cdot\sum_{i=2}^{\infty} \frac{(M+i-1)^2}{\alpha\Vert \bm{\mu}\Vert^2+i-1}\cdot\exp\left(-\frac{1}{2}\left(\frac{\Vert \bm{\mu}\Vert^2+\frac{i-1}{\alpha}}{\sigma\Vert \bm{\mu}\Vert}\right)^2 \right),
\end{aligned} \end{equation}
where we used the inequality $\Phi^{c}(t)<\frac{1}{t\sqrt{2\pi}}\exp(-\frac{t^2}{2})$ for all $t>0$. Next, note that $\frac{M+i-1}{\alpha \Vert \bm{\mu} \Vert^2+i-1}\leq \frac{M}{\alpha \Vert\bm{\mu}\Vert^2}$ holds for all $i\geq 2$. Using this we obtain the following bound
\begin{equation} \begin{aligned}\label{eq:exp_low_noise}
\sum_{i=2}^{\infty} \frac{(M+i-1)^2}{\alpha\Vert \bm{\mu}\Vert^2+i-1}\cdot\exp&\left(-\frac{1}{2}\left(\frac{\Vert \bm{\mu}\Vert^2+\frac{i-1}{\alpha}}{\sigma\Vert \bm{\mu}\Vert}\right)^2 \right)\\
&\leq \frac{\sigma M^2}{\alpha \Vert \bm{\mu} \Vert^3}\cdot\sum_{i=2}^{\infty} \frac{\alpha\Vert \bm{\mu}\Vert^2+i-1}{\alpha \sigma \Vert \bm{\mu} \Vert}\cdot\exp\left(-\frac{1}{2}\left(\frac{\alpha\Vert \bm{\mu}\Vert^2+i-1}{\alpha\sigma\Vert \bm{\mu}\Vert}\right)^2 \right)\\&\leq \frac{\sigma M^2}{\alpha \Vert \bm{\mu} \Vert^3}\cdot \alpha\sigma \Vert \bm{\mu} \Vert\cdot\int_{\frac{\Vert \bm{\mu} \Vert}{\sigma}}^{+\infty} t\exp\left(-\frac{t^2}{2}\right)dt \\&=\frac{\sigma^2 M^2}{\Vert \bm{\mu} \Vert^2}\cdot\exp\left(-\frac{\Vert \bm{\mu} \Vert^2}{2\sigma^2} \right).
\end{aligned} \end{equation}
Here we have used that $t \mapsto t\exp(-\frac{t^2}{2})$ is decreasing over $[1,+\infty)$. Combining \eqref{eq: low_noise_blah_5} and \eqref{eq:exp_low_noise}, we obtain that
\begin{equation}\begin{aligned}
\bE\left[(\tau_m-\tau_{m-1})\wedge n\right]&\leq 1+\frac{M^2}{b}\cdot\Phi^c\left(\frac{\Vert \bm{\mu} \Vert}{\sigma} \right)+\frac{\alpha\sigma^3 M^2}{ \Vert \bm{\mu} \Vert b}\cdot\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{\Vert \bm{\mu} \Vert^2}{2\sigma^2} \right).
\end{aligned}
\end{equation}
Letting $n \to +\infty$ and applying the monotone convergence theorem, we obtain
\[\bE[\tau_m]\leq 1+\frac{M^2}{b}\cdot\Phi^c\left(\frac{\Vert \bm{\mu} \Vert}{\sigma} \right)+\frac{\alpha\sigma^3 M^2}{ \Vert \bm{\mu} \Vert b}\cdot\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{\Vert \bm{\mu} \Vert^2}{2\sigma^2} \right)+\bE[\tau_{m-1}].
\]
We then iterate the above inequality yielding
\[ \bE[\tau_m] \le (m-1) \left ( 1+\frac{M^2}{b}\cdot\Phi^c\left(\frac{\Vert \bm{\mu} \Vert}{\sigma} \right)+\frac{\alpha\sigma^3 M^2}{ \Vert \bm{\mu} \Vert b}\cdot\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{\Vert \bm{\mu} \Vert^2}{2\sigma^2} \right)\right ) + \bE[\tau_1]. \]
The result follows by plugging in the bound from Lemma \ref{lem:drift_from_meyn} for the base case $m=1$.
\end{proof}
We are now ready to prove Theorem~\ref{thm:low}.
\begin{proof}[Proof of Theorem
\ref{thm:low}]
In order to simplify the subsequent argument, we define the quantity,
\[ M' := 1+\frac{M^2}{b}\cdot\Phi^c\left(\frac{\Vert \bm{\mu} \Vert}{\sigma} \right)+\frac{\alpha\sigma^3 M^2}{ \Vert \bm{\mu} \Vert b}\cdot\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{\Vert \bm{\mu} \Vert^2}{2\sigma^2} \right). \]
It is easy to see that $\mathbb{P}_{\hat{\bm{\xi}}\sim N(\bm{\mu},\sigma^2I_d)}\left( \hat{\bm{\xi}}^T\bm{\theta}\geq 1 \right) \geq \frac{1}{2}$ for any $\bm{\theta}\in C$. Therefore $\delta=\frac{1}{2}$ satisfies \eqref{eq:probability_delta}. By Proposition~\ref{prp:Etaum_lownoise} with Lemma \ref{lem:ETleqET_C}, we conclude that
\begin{align*}
\bE[T]&\leq \bE[T_C]=\sum_{m=1}^{\infty} \bE[T_C1_{\{T_C=\tau_m\}}]\leq \sum_{m=1}^{\infty}\frac{\bE[\tau_m]}{2^{m-1}} \le \sum_{m=1}^\infty \frac{(m-1)M' + \frac{M^2}{b} }{2^{m-1}} = 2M' + \frac{2M^2}{b}.
\end{align*}
\end{proof}
\section{Numerical Experiments} \label{sec:Num_Experiment}
We investigate the performance of our termination test on two popular data sets, MNIST \citep{MNIST} and CIFAR-10 \citep{cifar10}, as well as synthetic data generated from Gaussians and heavy-tailed Student $t$-distributions. All tests were performed using our zero-overhead stopping criterion outlined in \eqref{eq: practical_termination_test}; experiments using the variant of our test that requires an extra sample \eqref{eq: termination_test} are not presented, since the behaviors of the two criteria were indistinguishable on all data sets.
\paragraph{Comparison with a popular stopping criterion.} We include as a baseline a popular termination test, the small validation set (SVS) \citep{Prechelt2012}. The SVS termination test is as follows. One fixes a validation set of $p$ instances $(\bm{\zeta}^{\rm V}_1,y^{\rm V}_1)$, \ldots, $(\bm{\zeta}^{\rm V}_p,y^{\rm V}_p)$ drawn from the same distribution as the training data. Then for $m = 1, 2, \ldots$, one checks the fraction of the $p$ instances correctly classified by the current classifier $\bm{\theta}_{ml}$, where $ml$ (the product of $m$ and $l$) is the iteration index; in other words, the SVS test is run once every $l$ iterations. If the fraction correct fails to increase compared to the previous run of the SVS test, then the SGD iterations are terminated.
Note the computational overhead of running the small validation set is about $p$ times the cost of one SGD iteration. Therefore, in order to make the overhead only a constant factor, we choose $l=2p$, meaning an approximately 50\% overhead for SVS. In contrast, the overhead for \eqref{eq: practical_termination_test} is
0. The value of $p$ is a tuning parameter for SVS; we exhibit results for three different $p$ values (see Figs.~\ref{fig:normal_scatter}, \ref{fig:ht_scatter}, \ref{fig:mnist_scatter}, \ref{fig:cifar_scatter} ).
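The SVS rule just described can be sketched in a few lines; this is our own minimal Python illustration (function names are ours, not from \citet{Prechelt2012}):

```python
def dot(u, v):
    """Plain inner product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def svs_should_stop(theta, val_set, best_acc):
    """One run of the small-validation-set (SVS) test: classify the p
    held-out instances with the current iterate theta and stop when the
    fraction correct fails to increase over the best seen so far.
    Returns (stop?, updated best accuracy)."""
    correct = sum(1 for (zeta, y) in val_set
                  if (dot(zeta, theta) > 0) == (y == 1))
    acc = correct / len(val_set)
    return acc <= best_acc, max(acc, best_acc)
```

In the experiments this check would be invoked once every $l$ iterations, matching the schedule in the text.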
\begin{figure*}[htp!]
\begin{center}
\subfigure[]{\includegraphics[height=3cm]{it_vs_sigma_l.eps}} \quad
\subfigure[\label{fig:black}]{\includegraphics[height=3cm]{acc_vs_sigma_l.eps}} \quad
\subfigure[]{\includegraphics[height=3cm]{it_vs_sigma_h.eps}} \quad \subfigure[]{\includegraphics[height=3cm]{acc_vs_sigma_h.eps}}
\end{center}
\caption{Performance of stopping criterion \eqref{eq: practical_termination_test} on a mixture of Gaussians as $\sigma$ is varied. Plots $(a),(b)$ are logistic and $(c),(d)$ are hinge. All plots show tests for values of $\sigma$ equally spaced from 0.05 to 2.0. For each value of $\sigma$, ten trials were run. Plots $(a),(c)$ show the relationship between $\sigma$ and $k$, the iteration number when \eqref{eq: practical_termination_test} first holds. Plots $(b),(d)$ show the accuracy as red asterisks. The green asterisks show the accuracy of the optimal classifier. The black curve on the right is the ratio of the average accuracy (over 10 trials) of the classifier
when \eqref{eq: practical_termination_test} holds to the accuracy of the optimal classifier.}
\label{fig:acctime}
\end{figure*}
\paragraph{Measuring the accuracy.} In all the experiments, we measure the performance of a method with a score, generally known as ``accuracy,'' that is the fraction correct on a large validation set drawn from the same distribution as the training data. Thus, 1.0 is perfect accuracy, while 0.5 means that $\bm{\theta}_k$ is no better at classifying than random guessing. It is important to note that even on data for which the means $\bm{\mu}_0,\bm{\mu}_1$ are known a priori ({\em e.g.}, synthetic data), the score of the optimal $\bm{\theta}^*$
will not be 1.0 because the large validation set itself is noisy.
We center the data so that the linear classifier is homogeneous. In a preliminary phase, 100 samples are drawn from the training set. From this, $\bm{\mu}_0$ and $\bm{\mu}_1$ are estimated, and then the average of these estimates is used to offset training instances during SGD.
\paragraph{Parameter settings.} After centering, the vectors $\bm{\theta}$ and $\bm{\xi}$ scale inversely, so the step-size parameter $\alpha$ should scale as $1/\sigma^2$. Therefore, we take the step-size to be
$\tilde\alpha/\tilde\sigma^2$. Here, $\tilde\sigma^2$ is the average of $\norm{\bm{\zeta}_j-\tilde\bm{\mu}_{y_j}}^2$, and $\tilde\bm{\mu}_i$ ($i=0$ or $i=1$) is the estimate of $\bm{\mu}_i$, averaged over the two classes. We compute the quantities $\tilde\sigma^2$ and $\tilde\bm{\mu}_i$ using the 100 samples described in the preceding paragraph. Note that for the Gaussian mixture model, the expected value of $\tilde\sigma^2$ is $\sigma^2d$.
For the synthetic data, the means and variances are known exactly a priori, so the estimation procedures described in the previous two paragraphs are unnecessary. However, we used them anyway in order to be consistent with the tests on the realistic data.
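The preliminary estimation phase described above (class means from 100 samples, the averaged squared deviation used as $\tilde\sigma^2$) can be sketched as follows; this is our own Python illustration, not the authors' experimental code:

```python
def preliminary_estimates(samples, labels):
    """Estimate the class means mu_0, mu_1 from a preliminary batch
    (the paper uses 100 samples), the midpoint used to center the data,
    and the average squared deviation used as sigma~^2."""
    d = len(samples[0])
    mu = {0: [0.0] * d, 1: [0.0] * d}
    counts = {0: 0, 1: 0}
    for zeta, y in zip(samples, labels):
        counts[y] += 1
        for i in range(d):
            mu[y][i] += zeta[i]
    for y in (0, 1):
        mu[y] = [m / counts[y] for m in mu[y]]
    # Offset subtracted from training instances during SGD.
    center = [(a + b) / 2 for a, b in zip(mu[0], mu[1])]
    # Average squared distance to the (estimated) class mean.
    sigma2 = sum(sum((z - m) ** 2 for z, m in zip(zeta, mu[y]))
                 for zeta, y in zip(samples, labels)) / len(samples)
    return mu, center, sigma2
```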
The parameter $\tilde\alpha$ described in the last paragraph is a scale-free tuning parameter.
It is known (see, e.g., \cite{Nemirovski_Robust_Stochastic_1}) that a smaller $\tilde\alpha$ corresponds to more iterations but greater ultimate accuracy under a reasonable model of the data. Our termination test is obviously sensitive to the choice of $\tilde\alpha$: the condition $\bm{\xi}_{k+1}^T\bm{\theta}_k\ge 1$ cannot hold unless $\norm{\bm{\theta}_k}\ge 1/\norm{\bm{\xi}_{k+1}}$, but $\bE\left[\norm{\bm{\theta}_k}\right]\le O(\alpha k)$. See also Theorems~\ref{thm:low} and \ref{thm:high}.
On the other hand, SVS is only mildly sensitive to $\tilde\alpha$, according to our testing. Indeed, there is an upper bound of $pl$ on the total number of iterations possible before termination using the SVS condition, independent of $\tilde\alpha$ and of all other aspects of the problem. The dependence of the termination test on $\tilde\alpha$ is evidently desirable because the user is presumably seeking greater accuracy when a smaller value of $\tilde\alpha$ is selected.
\subsection{Experiments with synthetic data}
\label{sec:comp-sim}
\begin{figure*}[htp!]
\centering
\includegraphics[height=2.5cm]{gm_scatter99_500_l_10.eps} \qquad \includegraphics[height=2.5cm]{gm_scatter99_500_l_200.eps} \qquad
\includegraphics[height=2.5cm]{gm_scatter99_500_h_10.eps}\qquad \includegraphics[height=2.5cm]{gm_scatter99_500_h_200.eps} \\ \vspace{2 mm}
\includegraphics[height=2.5cm]{gm_scatter100_500_l_10.eps}\qquad \includegraphics[height=2.5cm]{gm_scatter100_500_l_200.eps} \qquad
\includegraphics[height=2.5cm]{gm_scatter100_500_h_10.eps}\qquad \includegraphics[height=2.5cm]{gm_scatter100_500_h_200.eps} \\ \vspace{2 mm}
\includegraphics[height=2.5cm]{gm_scatter101_500_l_10.eps}\qquad \includegraphics[height=2.5cm]{gm_scatter101_500_l_200.eps} \qquad \includegraphics[height=2.5cm]{gm_scatter101_500_h_10.eps}\qquad \includegraphics[height=2.5cm]{gm_scatter101_500_h_200.eps}
\caption{Each plot shows 10 random runs of SGD applied to normally distributed data with the indicated values of $\sigma$ and for a fixed dimension $d=500$. For each of the ten runs, five termination tests corresponding to five colors were applied. SVS was tried with $p=32, 128, 512$, depicted as red, magenta and cyan circles respectively. Test \eqref{eq: practical_termination_test} is indicated with a blue asterisk. A green `+' corresponds to termination after $1.5k$ iterations, where $k$ is the iteration index at which \eqref{eq: practical_termination_test} first holds. The notation $(l/200)$ means logistic loss with $\tilde\alpha=1/200$; similarly, $(h/10)$ means hinge loss with $\tilde\alpha = 1/10$, and so on.}
\label{fig:normal_scatter}
\end{figure*}
\paragraph{Normal distribution.} We generated test and training data using a mixture of Gaussians given by $N(\bz,\sigma^2I)$ for the 0-class and $N(\bm{e}_1,\sigma^2I)$ for the 1-class, where $\bm{e}_1 = (1,0,\hdots, 0)^T \in \R^d$.
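The data-generation scheme just described can be reproduced with a short sketch (our Python illustration, not the experimental code): a fair coin picks the class, and the 1-class mean $\bm{e}_1$ is applied by shifting the first coordinate.

```python
import random

def sample_gaussian_mixture(d, sigma, n, seed=0):
    """Draw n labeled samples: class 0 ~ N(0, sigma^2 I_d),
    class 1 ~ N(e_1, sigma^2 I_d), with a fair coin for the label."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        y = rng.randint(0, 1)
        zeta = [rng.gauss(0.0, sigma) for _ in range(d)]
        if y == 1:
            zeta[0] += 1.0  # mean e_1 for the 1-class
        data.append((zeta, y))
    return data
```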
In Fig.~\ref{fig:acctime}, we present the running time and accuracy (fraction correct) of our termination test for a fixed dimension $d=500$ and $\sigma$ ranging from $0.05$ to $2$. We record 10 runs for each value of $\sigma$. The performance of the classifier when our termination test \eqref{eq: practical_termination_test} holds almost matches the optimal classifier; in particular, the averaged accuracy of our classifier/accuracy of the optimal classifier over the 10 runs, black curve in Fig.~\ref{fig:black}, never dips below $0.95$.
In Fig.~\ref{fig:normal_scatter}, we compare the performance of \eqref{eq: practical_termination_test} against SVS termination. One axis shows accuracy while the other shows iteration count. We also continued to run SGD until iteration $1.5k$, where $k$ is the first iteration at which \eqref{eq: practical_termination_test} holds (green `+'), to test whether accuracy improves after termination.
The tests (for several values of $\sigma$, both hinge and logistic, and two values of $\tilde\alpha$) in Fig.~\ref{fig:normal_scatter} indicate that \eqref{eq: practical_termination_test} is more accurate than SVS, more predictable (i.e., there is less spread in the scatter plot), and that running until $1.5k$ iterations does not significantly improve the solution. As expected,
for a large $\tilde\alpha$, \eqref{eq: practical_termination_test} requires fewer iterations than SVS with $p=512$, while the opposite relationship holds for a small $\tilde\alpha.$
\begin{figure}
\begin{center}
\includegraphics[height=2.5cm]{ht_scatter98_2_l_10.eps} \qquad \includegraphics[height=2.5cm]{ht_scatter98_2_l_200.eps} \qquad \includegraphics[height=2.5cm]{ht_scatter98_2_h_10.eps} \qquad
\includegraphics[height=2.5cm]{ht_scatter98_2_h_200.eps} \qquad \includegraphics[height=2.5cm]{ht_scatter99_2_l_10.eps} \qquad \includegraphics[height=2.5cm]{ht_scatter99_2_l_200.eps} \qquad \includegraphics[height=2.5cm]{ht_scatter99_2_h_10.eps} \qquad \includegraphics[height=2.5cm]{ht_scatter99_2_h_200.eps} \\ \vspace{2 mm}
\includegraphics[height=2.5cm]{ht_scatter100_2_l_10.eps} \qquad \includegraphics[height=2.5cm]{ht_scatter100_2_l_200.eps} \qquad \includegraphics[height=2.5cm]{ht_scatter100_2_h_10.eps} \qquad \includegraphics[height=2.5cm]{ht_scatter100_2_h_200.eps} \\
\end{center}
\caption{Tests on the student-t distribution (heavy tailed) with two degrees of freedom and the indicated value of parameter $\beta$. See the caption of Fig.~\ref{fig:normal_scatter} for explanation of the plots.}
\label{fig:ht_scatter}
\end{figure}
\paragraph{Heavy-tailed distribution.}
We consider the Student t-distribution with two degrees of freedom. This
distribution is heavy-tailed since its
moments of order two and higher are infinite.
The two classes were generated as follows. For $\bm{\zeta}$ in the 0-class, each of the $d$ entries of $\bm{\zeta}$ is chosen as
$\beta\eta$, where $\beta$ is varied in the experiments and
$\eta$ is drawn from the student t-distribution with two degrees of freedom. For the 1-class, $\bm{\zeta}$ is chosen in the same way except that the first entry is incremented by 1. Fig.~\ref{fig:ht_scatter} shows our performance against SVS.
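A t-distributed variable with two degrees of freedom can be generated from a standard normal divided by the square root of an independent $\chi^2_2/2$ (and $\chi^2_2$ is exponential with mean 2, so $\chi^2_2/2=-\ln U$ for uniform $U$). The heavy-tailed data above can then be sketched as follows (our illustration, not the authors' code):

```python
import math, random

def sample_t2(rng):
    """One draw from Student's t with 2 degrees of freedom:
    standard normal over sqrt(chi^2_2 / 2), with chi^2_2/2 = -ln U."""
    u = rng.random()
    while u == 0.0:  # avoid log(0)
        u = rng.random()
    return rng.gauss(0.0, 1.0) / math.sqrt(-math.log(u))

def sample_heavy_tailed(d, beta, n, seed=0):
    """Class 0: entries beta * t2; class 1: same, first entry + 1."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        y = rng.randint(0, 1)
        zeta = [beta * sample_t2(rng) for _ in range(d)]
        if y == 1:
            zeta[0] += 1.0
        data.append((zeta, y))
    return data
```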
The results in this figure show trends similar to the normally distributed case. One difference is that the accuracy achieved by our termination test \eqref{eq: practical_termination_test} is more spread out, presumably because of the heavy-tailed nature of the data set.
\subsection{Experiments with real data} \label{sec:comp-realistic}
\paragraph{MNIST handwritten digits.} We compare our termination test against SVS on the MNIST handwritten digit set \citep{MNIST} ($d=784$; no preprocessing of the data other than centering between the two means). Two trials are shown: distinguishing 1 from 8 (an easy case) and distinguishing 7 from 9 (a more difficult case). The test runs are obtained by running through the training data in different randomized orders. The plots in Fig.~\ref{fig:mnist_scatter} show similar trends as before. As expected, the accuracy is overall higher for
$\tilde\alpha=1/200$ than for $\tilde\alpha=1/10$.
\begin{figure}
\centering
\includegraphics[height=2.5cm]{mnist_scatter1_8_l_10.eps} \qquad \includegraphics[height=2.5cm]{mnist_scatter1_8_l_200.eps} \qquad \includegraphics[height=2.5cm]{mnist_scatter1_8_h_10.eps} \qquad \includegraphics[height=2.5cm]{mnist_scatter1_8_h_200.eps} \\ \vspace{2 mm} \includegraphics[height=2.5cm]{mnist_scatter7_9_l_10.eps} \qquad \includegraphics[height=2.5cm]{mnist_scatter7_9_l_200.eps} \qquad \includegraphics[height=2.5cm]{mnist_scatter7_9_h_10.eps} \qquad \includegraphics[height=2.5cm]{mnist_scatter7_9_h_200.eps} \\
\caption{Tests on the MNIST handwritten digit data set for discerning ``1'' from ``8'' and ``7'' from ``9'' for both hinge and logistic, and for both $\tilde\alpha=1/10$ and $\tilde\alpha=1/200$. Refer to the caption of Fig.~\ref{fig:normal_scatter} for the key to the plots.}
\label{fig:mnist_scatter}
\end{figure}
\paragraph{CIFAR-10 image set.} We compare our termination test against SVS on the CIFAR-10 image set \citep{cifar10} ($d=3072$; no preprocessing of the data other than centering between the two means, as described earlier). Two trials are shown: distinguishing deer from airplanes and frogs from trucks. As with MNIST, test runs are obtained by running through the training data in different randomized orders.
\begin{figure}
\centering
\includegraphics[height=2.5cm]{cifar_scatter_1_5_l_10.eps} \qquad \includegraphics[height=2.5cm]{cifar_scatter_1_5_l_200.eps} \qquad \includegraphics[height=2.5cm]{cifar_scatter_1_5_h_10.eps} \qquad \includegraphics[height=2.5cm]{cifar_scatter_1_5_h_200.eps} \\ \vspace{2 mm}
\includegraphics[height=2.5cm]{cifar_scatter_7_10_l_10.eps} \qquad
\includegraphics[height=2.5cm]{cifar_scatter_7_10_l_200.eps} \qquad \includegraphics[height=2.5cm]{cifar_scatter_7_10_h_10.eps} \qquad \includegraphics[height=2.5cm]{cifar_scatter_7_10_h_200.eps}
\caption{Tests on the CIFAR-10 image set for two tasks, for logistic and hinge losses, and for $\tilde\alpha=1/10$ and $\tilde\alpha=1/200$. Refer to the caption of Fig.~\ref{fig:normal_scatter} for the key to the plots. The plot in the first row, right, does not include cyan circles because the training data was exhausted before the SVS test could activate for $p=512$.}
\label{fig:cifar_scatter}
\end{figure}
\section{Conclusions}
We have proposed a simple and computationally free termination test for SGD for binary classification, supported by both theoretical and experimental results.
The theoretical results show that the test will stop SGD after a finite time with a bound on the expected accuracy of the resulting classifier. The bounds that we proved are weaker than what we observed in our experiments. Therefore, the first obvious question left open by this work is whether the theoretical bounds can be improved.
In our experimental results,
the plots in Figs.~\ref{fig:normal_scatter} through \ref{fig:cifar_scatter} show a consistent pattern that
\eqref{eq: practical_termination_test} achieves lower accuracy but terminates faster than SVS for $\tilde\alpha=1/10$, while it achieves higher accuracy at the cost of more iterations when $\tilde\alpha=1/200$. This is useful behavior in practice, compared to SVS, since it puts the accuracy/iterations tradeoff in the hands of the user, who selects the stepsize $\tilde\alpha$. Another benefit of \eqref{eq: practical_termination_test}, apparent from all plots, is that the number of iterations is more consistent across random trials, which is beneficial in the case that SGD is used as a subproblem of a larger computation.
This work did not explore regularization via early stopping. As mentioned in the introduction, experiments showed that as SGD iterations continue, the accuracy on the test set eventually levels off but does not decrease significantly, {\em i.e.}, SGD for binary classification is not prone to overfitting. Because the test accuracy never shows a marked decline, there is no opportunity for early stopping to regularize. However, we know of other settings in which early stopping has a strong regularizing effect ({\em e.g.}, conjugate gradient iterations for image deconvolution, as noted in \cite{CG_implicit_regularization}), so if \eqref{eq: practical_termination_test} is extended beyond binary classification in future work, there will likely also be an opportunity to explore regularization.
\section{Stopping criterion for stochastic gradient descent}\label{sec:termtest} We analyze learning with homogeneous linear predictors ({\em i.e.}, predictors without a bias term) by minimizing an expected loss of the form
\begin{align*}
\bE_{(\bm{\zeta},y) \sim \mathcal{P}} [\ell(\bm{\zeta}^T\bm{\theta}, y)]
\end{align*}
using the logistic or hinge loss. Here the samples $(\bm{\zeta}, y) \in \R^d \times \{0,1\}$. We recall that in logistic regression the loss function is defined as
\begin{equation}\label{eq:log_loss_definition}
\ell(x,y):=-yx+\log\left(1+\exp(x)\right).
\end{equation}
The hinge loss is defined as
\begin{equation}\label{eq:hinge_loss_definition}
\begin{aligned}
\ell(x,y):=\begin{cases} \max(1-x,0)& \quad y=1, \\\max(1+x,0)& \quad y=0. \end{cases}
\end{aligned}
\end{equation}
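The two loss definitions \eqref{eq:log_loss_definition} and \eqref{eq:hinge_loss_definition} translate directly into code; here is a small Python sketch of ours (the numerically stable softplus is our implementation choice, not part of the paper):

```python
import math

def logistic_loss(x, y):
    """l(x, y) = -y*x + log(1 + exp(x)), with softplus computed stably
    so that large positive x does not overflow."""
    if x > 0:
        softplus = x + math.log1p(math.exp(-x))
    else:
        softplus = math.log1p(math.exp(x))
    return -y * x + softplus

def hinge_loss(x, y):
    """l(x, y) = max(1 - x, 0) if y = 1, and max(1 + x, 0) if y = 0."""
    return max(1.0 - x, 0.0) if y == 1 else max(1.0 + x, 0.0)
```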
The data comes from a mixture model: a fair coin flip determines whether an item is in the $y=0$ or the $y=1$ class, and the sample $\bm{\zeta}$ is then generated from the distribution $\mathcal{P}_0$ (if $y=0$ was selected) or $\mathcal{P}_1$ (if $y=1$ was selected). We denote the mean of the $\mathcal{P}_0$ (resp.\ $\mathcal{P}_1$) distribution by $\bm{\mu}_0$ (resp.\ $\bm{\mu}_1$).
The restriction to homogeneous linear classifiers loses little generality because we can assume $\bm{\mu}_0 = - \bm{\mu}_1$. We enforce this assumption, with minimal loss in accuracy, by recentering the data using a preliminary round of sampling (see Sec.~\ref{sec:Num_Experiment}).
Because of the homogeneity, we can
simplify the notation by redefining our training examples to be $\bm{\xi}_k:=(2y_k-1)\bm{\zeta}_k$ and then assuming that for all $k \ge 0$, $y_k = 1$. Then the new samples $\bm{\xi}$ can be drawn from a \textit{single}, mixed distribution $\mathcal{P}_*$ with mean $\bm{\mu}:=\bm{\mu}_1$ where sampling $\bm{\xi}\sim \mathcal{P}_1$ occurs with probability 0.5 and $-\bm{\xi}\sim\mathcal{P}_0$ occurs with probability 0.5.
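The relabeling $\bm{\xi}_k=(2y_k-1)\bm{\zeta}_k$ is a one-line transformation; for concreteness (our sketch):

```python
def to_single_class(zeta, y):
    """Fold the label into the sample: xi = (2y - 1) * zeta, after which
    every example can be treated as if it had label y = 1."""
    s = 2 * y - 1  # +1 for the 1-class, -1 for the 0-class
    return [s * z for z in zeta]
```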
We make this simplification and, from this point on, we analyze the following optimization problem:
\begin{equation}\label{optimization_problem}
\min_{\bm{\theta} \in \R^d} f(\bm{\theta}):= \bE_{\bm{\xi} \sim \mathcal{P}_*}[ \ell(\bm{\xi}^T\bm{\theta},1)]
\end{equation}
Let us remark that the right-hand side of \eqref{optimization_problem} is differentiable with respect to $\bm{\theta}$ for both the logistic and hinge loss functions. Indeed, in the case of the hinge loss, note that for any $\bm{\theta}_{k-1}$, the function $\bm{\xi}_k \mapsto \ell(\bm{\xi}_k^T\bm{\theta}_{k-1},1)$ is almost surely differentiable since $\bP_{\bm{\xi}_k}\left(\bm{\xi}_k^T\bm{\theta}_{k-1}=1\right)=0$. Hence, we consider the expectation in \eqref{optimization_problem} to be over $\mathbb{R}^d\backslash \left\{\bm{\xi}_k:\bm{\xi}_k^T\bm{\theta}_{k-1}=1 \right\}$, on which the integrand is differentiable with respect to $\bm{\theta}_{k-1}$.
The most widely used method to solve \eqref{optimization_problem} is SGD. Unlike gradient descent which uses the entire data to compute the gradient of the objective function, the SGD algorithm, at each iteration, generates a sample from the probability distribution and updates the iterate based only on this sample,
\begin{align}
\label{eq:derivative_formula}
\bm{\te}_{k}=\bm{\theta}_{k-1} -\alpha \nabla_{\bm{\theta}} \ell(\bm{\xi}_{k}^T\bm{\theta}_{k-1},1),
\end{align}
where $\bm{\xi}_k \sim \mathcal{P}_*$. Our presentation of SGD assumes a constant step-size $\alpha>0$. Constant step-size is commonly used in machine learning implementations even though a decreasing step-size is often assumed to prove convergence (see, \textit{e.g.}, \cite{RM1951}). \cite{Nemirovski_Robust_Stochastic_1} explain in more detail the theoretical basis for both constant and decreasing step-sizes and provide an explanation of, as well as workarounds for, the poor practical performance of decreasing step-sizes; in practice, however, constant step-size remains widely used.
With constant step-size, SGD is known to asymptotically converge to a neighborhood of the minimizer (see, {\em e.g.}, \cite{Pflug}). Yet, for binary classification, one does not require convergence to a minimizer in order to obtain good classifiers.
For homogeneous linear classifiers with the hinge loss, it has been shown \citep{molitor2019bias} that the homotopic subgradient method converges to a maximal-margin solution on linearly separable data. \citet{Srebro_logistic} show that SGD applied to the logistic loss on linearly separable data produces a sequence of $\bm{\theta}_k$ that diverges to infinity but, when normalized, converges to the $L_2$-max-margin solution. Little is known about the behavior of constant step-size SGD when the linear separability assumption on the data is removed (see, \textit{e.g.}, \citep{Telgarsky_logistic}). The assumption of zero noise in our context would mean that $\mathcal{P}_0$ and $\mathcal{P}_1$ each reduce to a single point, a trivial example of separable data. Since the sampling procedure is usually noisy, the data \textit{may not be linearly separable}; understanding the behavior of SGD in the presence of noise is therefore important.
\subsection{Stopping criterion} A common stopping criterion in deterministic first-order optimization methods is to terminate at an iterate satisfying $\norm{\nabla f(\bm{\theta})}^2 < \varepsilon$ for a predetermined $\varepsilon > 0$. In stochastic optimization, however, the full gradient is inaccessible or simply too expensive to compute. Several works \citep{Dima_Robust_Stochastic,Ghadimi_Strongly_cvx_validation_2,Ghadimi_Strongly_cvx_validation_1, Juditsky_robust_stochastic,Nemirovski_Robust_Stochastic_1} have suggested an alternative for the stochastic setting: terminate when $\bP(f(\bm{\theta})-\min~f \le \varepsilon) \ge 1-p$ for some chosen small $\varepsilon >0$ and probability $p$.
However, for binary classification, the minimizer of the loss function and a perfect classifier may not be the same; moreover, one may find a suitable substitute, at a lower cost, without having to compute the exact minimizer.
\paragraph{Optimal classifiers.} In classification, we call a classifier, $\bm{\theta}^*$, \textit{optimal} if it has the property that
\begin{equation} \label{eq:stop_criteria_observation}
\bm{\theta}^*\in \Argmax_{\bm{\theta}} \bP\left(\bm{\xi}^T\bm{\theta}>0\,|\,\bm{\xi}\sim \mathcal{P}_*\right), \end{equation}
\textit{i.e.}, the classifier $\bm{\theta}^*$ minimizes the probability of misclassification. Note that there exist many optimal classifiers; in fact, the condition \eqref{eq:stop_criteria_observation} is scale-invariant: for any $\lambda > 0$, $\bm{\xi}^T (\lambda \bm{\theta}^*) > 0 \Longleftrightarrow \bm{\xi}^T \bm{\theta}^* > 0$, so $\lambda \bm{\theta}^*$ is also optimal.
Even though the binary classifier is scale-free, the logistic and hinge losses are not: each transitions from flat to unit slope when $\bm{\xi}^T\bm{\theta}=O(1)$. This suggests that once $\bm{\theta}$ reaches this region, a classification has been made.
\paragraph{Termination test.} Motivated by the above property of optimal classifiers, we propose the following termination test: Sample $\hat{\bm{\xi}}_k \sim \mathcal{P}_*$ and
\begin{align} \label{eq: termination_test}
\text{Terminate when \;$ \hat{\bm{\xi}}_k^T\bm{\theta}_k \ge 1$}.
\end{align}
A second motivation for this termination test comes from support vector machine (SVM) theory
\citep{ShalevShwartzBenDavid} in which the scaling of the optimizing classifier is constrained so that the margin between classes is $O(1)$.
Therefore, our termination test blends an SVM notion with SGD. Algorithm~\ref{alg:SGD_termination} describes the termination criterion \eqref{eq: termination_test} as applied with the update rule governed by SGD.
The termination test \eqref{eq: termination_test} requires an additional sample and an additional inner product per iteration and, as such, imposes a small additional cost. To reduce this cost, in all our numerical experiments (Sec.~\ref{sec:Num_Experiment}) we use the following termination test:
\begin{equation}\label{eq: practical_termination_test}
\mbox{Terminate when \;$ \bm{\xi}_{k+1}^T\bm{\theta}_k\ge 1,$}
\end{equation}
which imposes no computational overhead since SGD already computes $\bm{\xi}_{k+1}^T\bm{\theta}_k$. Unfortunately, we could not perform a straightforward analysis of \eqref{eq: practical_termination_test} because it introduces additional dependencies between the sequences $\{\bm{\xi}_k\}_{k=1}^\infty$ and $\{\bm{\theta}_k\}_{k=0}^\infty$.
After testing both \eqref{eq: termination_test} and \eqref{eq: practical_termination_test}, we found that up to the noise from the randomness, their behaviors in numerical experiments were identical.
\begin{algorithm}[htp!]
\textbf{initialize:} $\bm{\theta}_0 \in \R^d$, $\alpha > 0$, $\hat{\bm{\xi}}_0 \sim \mathcal{P}_*$, $k = 0$\\
\textbf{while $\hat{\bm{\xi}}_k^T \bm{\theta}_k < 1$}\\
\qquad \qquad Pick data point $\bm{\xi}_{k+1} \sim \mathcal{P}_*$.\\
\qquad \qquad Compute $\nabla_{\bm{\theta}} \ell(\bm{\xi}_{k+1}^T\bm{\theta}_k,1)$ as in \eqref{eq:derivative_formula} \\
\qquad \qquad Update $\bm{\theta}$ by setting
\begin{equation}\label{eq: SGD_update_expected_loss} \bm{\theta}_{k+1} \leftarrow \bm{\theta}_k - \alpha \nabla_{\bm{\theta}} \ell(\bm{\xi}_{k+1}^T\bm{\theta}_k,1) \end{equation}\\
\qquad \qquad Sample $\hat{\bm{\xi}}_{k+1} \sim \mathcal{P}_*$\\
\qquad \qquad $k \leftarrow k+1$\\
\textbf{end}
\caption{SGD with termination test} \label{alg:SGD_termination}
\end{algorithm}
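Algorithm~\ref{alg:SGD_termination} can be sketched in a few lines of Python. This is a minimal illustration of ours, not the authors' code; `sample_xi` is a user-supplied sampler standing in for draws from $\mathcal{P}_*$, and the gradient formulas are the standard ones for the two losses with label $y=1$:

```python
import math, random

def sgd_with_termination(sample_xi, alpha, d, loss="logistic",
                         max_iter=100000, seed=0):
    """Constant step-size SGD on min E[l(xi^T theta, 1)] that stops
    once a fresh sample xi_hat satisfies xi_hat^T theta >= 1."""
    rng = random.Random(seed)
    theta = [0.0] * d
    xi_hat = sample_xi(rng)
    k = 0
    while sum(a * b for a, b in zip(xi_hat, theta)) < 1.0 and k < max_iter:
        xi = sample_xi(rng)
        margin = sum(a * b for a, b in zip(xi, theta))
        if loss == "logistic":
            # d/dx l(x, 1) = -1 / (1 + exp(x)); exp guarded vs overflow.
            g = -1.0 / (1.0 + math.exp(min(margin, 50.0)))
        else:
            # Hinge subgradient: -1 on {xi^T theta <= 1}, else 0.
            g = -1.0 if margin <= 1.0 else 0.0
        theta = [t - alpha * g * x for t, x in zip(theta, xi)]
        xi_hat = sample_xi(rng)  # fresh sample for the termination test
        k += 1
    return theta, k
```

On a well-separated one-dimensional example the loop terminates within a handful of iterations, as the theory predicts for small $\sigma$.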
\begin{assumption}[The distribution $\mathcal{P}_{*}$ is Gaussian]\label{Gaussian_Assumption} \rm{Our theoretical analysis makes a further assumption on the distribution $\mathcal{P}_*$. For the rest of this section and Sec.~\ref{sec:Analysis}, $\mathcal{P}_0=N(\bm{\mu}_0,\sigma^2I_d)$, $\mathcal{P}_1=N(\bm{\mu}_1,\sigma^2I_d)$, and therefore $\mathcal{P}_*=N(\bm{\mu},\sigma^2I_d)$, a Gaussian with unknown mean $\bm{\mu}\; (=\bm{\mu}_1=-\bm{\mu}_0)$ and covariance $\sigma^2I_d$. This assumption allows for non-separable data provided $\sigma > 0$.}
\end{assumption}
\paragraph{The minimizer of logistic and hinge regression.}
In \eqref{eq:stop_criteria_observation} we defined $\bm{\theta}^*$ to be any member of the set of optimal classifiers. For the remainder of this section, we provide an exact characterization of this set. In the next lemma, we redefine $\bm{\theta}^*$ to be the minimizer of the expected loss function, for either the hinge or the logistic loss, and show that it is a positive scalar multiple of $\bm{\mu}$. We will continue to use $\bm{\theta}^*$ with this meaning for the remainder of the paper. In the lemma after that, we show that the set of optimal classifiers is exactly the set of positive scalar multiples of $\bm{\mu}$ (equivalently, of $\bm{\theta}^*$).
\begin{lemma}[Minimizer of the logistic and hinge loss] \label{lem:minimizers} The function $f$ defined in \eqref{optimization_problem} with $\ell$ defined in \eqref{eq:log_loss_definition} or \eqref{eq:hinge_loss_definition} has a unique minimizer at $\bm{\theta}^* = \rho^*\bm{\mu}$ for some $\rho^*\in (0,+\infty)$. Moreover, let $r=\rho^*\sigma^2$. Then in the case of logistic regression, it holds that $r=2$ and in the case of hinge loss, $w=\frac{\sigma}{r\Vert \bm{\mu} \Vert}-\frac{\Vert \bm{\mu} \Vert}{\sigma}$ satisfies
\begin{equation}\label{eq:w_hinge}
\frac{1}{\sqrt{2\pi}}\cdot\frac{\sigma}{\Vert \bm{\mu} \Vert}=\Phi(w)\cdot \exp(\tfrac{1}{2}w^2).
\end{equation}
\end{lemma}
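Equation \eqref{eq:w_hinge} determines $w$ (and hence $\rho^*$ in the hinge case) only implicitly, but one can check that its right-hand side is strictly increasing in $w$, so the root can be found by bisection. A minimal Python sketch of ours (not part of the paper):

```python
import math

def Phi(w):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(w / math.sqrt(2.0)))

def solve_w(sigma_over_mu, lo=-20.0, hi=20.0, iters=200):
    """Bisection for w solving (1/sqrt(2 pi)) * sigma/||mu||
    = Phi(w) * exp(w^2 / 2); the right-hand side is increasing in w."""
    target = sigma_over_mu / math.sqrt(2.0 * math.pi)
    f = lambda w: Phi(w) * math.exp(0.5 * w * w) - target
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```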
\begin{proof}
We consider the logistic and hinge loss cases separately.
\textbf{Logistic loss.} We have
\[
f(\bm{\theta})=\bE_{\bm{\xi} \sim N(\bm{\mu},\sigma^2I_d)}[ -\bm{\theta}^T\bm{\xi} + \log(1+ \exp(\bm{\theta}^T\bm{\xi}))].
\]
Clearly, $f$ is a convex function. We next observe that for any $\bm{v},\bm{\theta}\in \R^d$ with $\bm{v}^T\bm{\theta}=0$, it holds that
\begin{equation}\label{app.eq:min_eq101}
\bm{v}^T\nabla f\left(\bm{\theta}\right)=-\bE_{\bm{\xi}}\left[\frac{\bm{\xi}^T\bm{v}}{1+\exp(\bm{\xi}^T\bm{\theta})}\right]=-\bE_{\bm{\xi}}[\bm{\xi}^T\bm{v}]\bE_{\bm{\xi}}\left[\frac{1}{1+\exp(\bm{\xi}^T\bm{\theta})}\right]=-\bm{v}^T\bm{\mu}\cdot\bE_{\bm{\xi}}\left[\frac{1}{1+\exp(\bm{\xi}^T\bm{\theta})}\right].
\end{equation}
Here we used that $\bm{\xi}^T\bm{v}$ and $\bm{\xi}^T\bm{\theta}$ are independent random variables and the expectation of the product of two uncorrelated random variables is the product of the expectations. Now note that for any $\bm{\theta}$, the quantity $\bE_{\bm{\xi}}\left[\frac{1}{1+\exp(\bm{\xi}^T\bm{\theta})}\right]$ is strictly positive. Therefore, if $\bm{v}^T\bm{\theta}=0$ and $\nabla f(\bm{\theta})=\bm{0}$ then, using \eqref{app.eq:min_eq101}, we obtain that $\bm{v}^T\bm{\mu}=0$. Hence, we established that $\nabla f(\bm{\theta})=\bm{0}$ implies $\bm{\theta}=\rho \bm{\mu}$ for some $\rho \in \R$.
On the other hand, using \eqref{app.eq:min_eq101} again, we have that $\nabla f(\rho \bm{\mu})=0$ if and only if $\bm{\mu}^T\nabla f(\rho \bm{\mu})=0$. To see the only if direction, suppose $\bm{\mu}^T\nabla f(\rho \bm{\mu}) = 0$ and $\nabla f(\rho \bm{\mu}) \neq 0$. Then we have $\nabla f(\rho \bm{\mu}) = \bm{v}$ where the vector $\bm{v}$ is nonzero such that $\bm{v}^T\bm{\mu} = 0$. By \eqref{app.eq:min_eq101}, we deduce $\norm{\bm{v}}^2 = \bm{v}^T \nabla f(\rho \bm{\mu}) = 0$ yielding a contradiction.
Next, we consider the function,
\begin{equation*}
g(\rho):=-\bE_{\bm{\xi}}\left[\frac{\bm{\mu}^T\bm{\xi}}{1+\exp(\rho{\bm{\mu}}^T\bm{\xi})} \right].
\end{equation*}
Observe that $g(\rho) = \bm{\mu}^T \nabla f(\rho \bm{\mu})$. Therefore, if we can show that $g$ has a unique zero at $\rho = \tfrac{2}{\sigma^2} =: \rho^*$, we can conclude that $\bm{\mu}^T\nabla f(\rho^* \bm{\mu}) = 0$, which, in turn, gives us that $\rho^* \bm{\mu}$ is the unique solution of $\nabla f(\bm{\theta}) = \bm{0}$. It remains to show that $\rho^*$ is the unique zero of $g$. By \eqref{eq:fact_affine}, $z := \bm{\mu}^T\bm{\xi} \sim N(\Vert \bm{\mu}\Vert^2,\sigma^2\Vert \bm{\mu} \Vert^2)$, which yields
\begin{equation*}
g(\rho)=\frac{1}{\sigma\Vert \bm{\mu} \Vert \sqrt{2\pi}} \int_{-\infty}^{\infty} \frac{z}{1+\exp(\rho z)} \exp\left(-\frac{(z-\Vert \bm{\mu} \Vert^2)^2}{2\sigma^2\Vert \bm{\mu} \Vert^2} \right)dz.
\end{equation*}
Expanding out the term inside the integral, we conclude
\begin{align}
\frac{z}{1+\exp(\rho z)} \exp\left(-\frac{(z-\Vert \bm{\mu} \Vert^2)^2}{2\sigma^2\Vert \bm{\mu} \Vert^2} \right)&=\frac{z}{2\cosh\left(\frac{\rho z}{2}\right)}\exp\left(-\frac{\rho z}{2}-\frac{(z-\Vert \bm{\mu} \Vert^2)^2}{2\sigma^2\Vert \bm{\mu} \Vert^2} \right)\nonumber\\
&=\frac{z}{2\cosh\left(\frac{\rho z}{2}\right)}\exp\left(-\frac{z^2+\left(\rho\sigma^2\Vert \bm{\mu} \Vert^2-2\Vert \bm{\mu} \Vert^2 \right)z+\Vert \bm{\mu} \Vert^4}{2\sigma^2\Vert \bm{\mu} \Vert^2}\right). \label{app.eq:integrho^*}
\end{align}
When $\rho = \rho^*$, the coefficient of the linear term in the exponent of \eqref{app.eq:integrho^*} vanishes, so the right-hand side of \eqref{app.eq:integrho^*} is an odd function of $z$. Therefore $g(\rho^*) = 0$, i.e., the integral of \eqref{app.eq:integrho^*} is $0$. To see that $\rho^*$ is the only zero of $g$, we note that
\[
g'(\rho)=\bE_{\bm{\xi}}\left[\frac{\left(\bm{\mu}^T\bm{\xi} \right)^2\exp(\rho \bm{\mu}^T\bm{\xi})}{(1+\exp(\rho \bm{\mu}^T\bm{\xi}))^2} \right] >0.
\]
The inequality is strict because $g'(\rho)=0$ would imply that $\bm{\mu}^T\bm{\xi}=0$ almost surely, which is not true. As a result, the function $g$ is strictly increasing with a unique zero at $\rho^*$. The result follows.
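As a numerical sanity check of the logistic case (our own sketch, approximating the one-dimensional Gaussian expectation by a Riemann sum), $g$ indeed changes sign at $\rho^*=2/\sigma^2$:

```python
import math

def g_logistic(rho, mu_norm, sigma, n=20001, half_width=12.0):
    """Riemann-sum approximation of g(rho) = -E[z / (1 + exp(rho*z))]
    with z ~ N(mu_norm^2, sigma^2 * mu_norm^2)."""
    mean = mu_norm ** 2
    std = sigma * mu_norm
    lo, hi = mean - half_width * std, mean + half_width * std
    h = (hi - lo) / (n - 1)
    total = 0.0
    for i in range(n):
        z = lo + i * h
        pdf = math.exp(-((z - mean) ** 2) / (2 * std * std)) \
              / (std * math.sqrt(2 * math.pi))
        ez = math.exp(min(rho * z, 50.0))  # guard against overflow
        total += -z / (1.0 + ez) * pdf * h
    return total
```

For $\Vert\bm{\mu}\Vert=\sigma=1$ the zero should sit at $\rho^*=2$, with $g$ negative below it and positive above it.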
\textbf{Hinge loss.} We begin by noting that $f$ is differentiable and it holds that
\[
\nabla f(\bm{\theta})= -\bE_{\bm{\xi}}[\bxi1_{\{\bm{\xi}^T\bm{\theta}\leq 1\}}].
\]
We next observe that for any $\bm{v},\bm{\theta}\in \R^d$ such that $\bm{v}^T\bm{\theta}=0$, it holds that
\begin{equation}\label{eq:hinge_min_lemma}
-\bm{v}^T\nabla f(\bm{\theta})=\bE_{\bm{\xi}}[\bm{v}^T\bxi1_{\{\bm{\xi}^T\bm{\theta}\leq 1\}}]=\bE_{\bm{\xi}}[\bm{v}^T\bm{\xi}]\bE_{\bm{\xi}}[1_{\{\bm{\xi}^T\bm{\theta}\leq 1\}}]=\bm{v}^T\bm{\mu}\cdot\bE_{\bm{\xi}}[1_{\{\bm{\xi}^T\bm{\theta}\leq 1\}}].
\end{equation}
Here we used that $\bm{\xi}^T\bm{v}$ and $\bm{\xi}^T\bm{\theta}$ are independent random variables and the expectation of the product of two uncorrelated random variables is the product of the expectations. Now note that for any $\bm{\theta}$, the quantity $\bE_{\bm{\xi}}[1_{\{\bm{\xi}^T\bm{\theta}\leq 1\}}]$ is strictly positive. Therefore, if $\bm{v}^T\bm{\theta}=0$ and $\nabla f(\bm{\theta})=\bm{0}$ then, using \eqref{eq:hinge_min_lemma}, we obtain that $\bm{v}^T\bm{\mu}=0$. Hence, we established that $\nabla f(\bm{\theta})=\bm{0}$ implies $\bm{\theta}=\rho \bm{\mu}$ for some $\rho \in \R$. On the other hand, using \eqref{eq:hinge_min_lemma} again, we have that $\nabla f(\rho \bm{\mu})=0$ if and only if $\bm{\mu}^T\nabla f(\rho \bm{\mu})=0$. To see the only if direction, suppose $\bm{\mu}^T\nabla f(\rho \bm{\mu}) = 0$ and $\nabla f(\rho \bm{\mu}) \neq 0$. Then we have $\nabla f(\rho \bm{\mu}) = \bm{v}$ where the vector $\bm{v}$ is nonzero such that $\bm{v}^T\bm{\mu} = 0$. By \eqref{eq:hinge_min_lemma}, we deduce $\norm{\bm{v}}^2 = \bm{v}^T \nabla f(\rho \bm{\mu}) = 0$ yielding a contradiction.
Next, consider the function
\begin{equation}\label{eq:derivative_hinge}
g(\rho)=\bE_{\bm{\xi}}[ \bm{\mu}^T\bm{\xi} 1_{\{\rho \bm{\xi}^T\bm{\mu}\leq 1 \}}].
\end{equation}
Observe that $g(\rho) = -\bm{\mu}^T \nabla f(\rho \bm{\mu})$. The dominated convergence theorem yields that
\[
\lim_{\rho \to +\infty}g(\rho)=\bE_{\bm{\xi}}[\bm{\mu}^T\bxi1_{\{\bm{\mu}^T\bm{\xi}\leq 0 \}}], \quad \lim_{\rho \to -\infty}g(\rho)=\bE_{\bm{\xi}}[\bm{\mu}^T\bxi1_{\{\bm{\mu}^T\bm{\xi}\geq 0 \}}].
\]
It, therefore, holds that $\lim_{\rho \to +\infty}g(\rho)<0$ and $\lim_{\rho \to -\infty}g(\rho)>0$. Since $g(0)=\bE_{\bm{\xi}}[\bm{\mu}^T\bm{\xi}]>0$, it remains to show that $g$ is a strictly decreasing function. To this end, we note that for any fixed $\rho_1<\rho_2$, it holds that
\begin{equation}\label{eq:hinge_g_decreasing}
\bm{\mu}^T\bm{\xi}\left(1_{\{\rho_1\bm{\mu}^T\bm{\xi}\leq 1 \}}-1_{\{\rho_2\bm{\mu}^T\bm{\xi}\leq 1 \}} \right) \geq 0 \quad \text{for any value of $\bm{\xi}$}.
\end{equation}
Indeed, if $\bm{\mu}^T\bm{\xi}\geq 0$, then $\rho_1\bm{\mu}^T\bm{\xi}\leq \rho_2\bm{\mu}^T\bm{\xi}$, ensuring $1_{\{\rho_1\bm{\mu}^T\bm{\xi}\leq 1 \}}\geq 1_{\{\rho_2\bm{\mu}^T\bm{\xi}\leq 1 \}}$. The case $\bm{\mu}^T\bm{\xi}\leq 0$ follows similarly. We, therefore, conclude that $g(\rho_1)\geq g(\rho_2)$. Finally, note that $g(\rho_1)=g(\rho_2)$ implies that \eqref{eq:hinge_g_decreasing} holds with equality almost surely; since $\bm{\mu}^T\bm{\xi}$ is a nondegenerate Gaussian, the event on which the two indicators differ and $\bm{\mu}^T\bm{\xi}\neq 0$ has positive probability, which yields a contradiction. It remains to show \eqref{eq:w_hinge}. By \eqref{eq:derivative_hinge}, we have that $0=g(\rho^*)=\bE_{\bm{\xi}}[\bm{\mu}^T\bxi1_{\{\bm{\mu}^T\bm{\xi}\leq \frac{1}{\rho^*}\}}]$. Using \eqref{eq:fact_affine} and \eqref{eqn:fact_truncated}, we obtain that
\begin{equation}\label{myeq2}
\Phi\left(\frac{1-\rho^*\Vert \bm{\mu} \Vert^2}{\rho^*\sigma\Vert\bm{\mu}\Vert} \right)\cdot \exp\left(\frac{1}{2}\cdot\left(\frac{1-\rho^*\Vert \bm{\mu} \Vert^2}{\rho^*\sigma\Vert\bm{\mu}\Vert} \right)^2\right)=\frac{1}{\sqrt{2\pi}}\cdot\frac{\sigma}{\Vert\bm{\mu}\Vert}.
\end{equation}
The result immediately follows.
\end{proof}
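The fixed-point characterization \eqref{myeq2} can be checked numerically. The following Python sketch uses the illustrative, hypothetical normalization $\Vert\bm{\mu}\Vert=\sigma=1$, under which \eqref{myeq2} reduces to $\Phi(u)=\varphi(u)$ with $u=(1-\rho^*)/\rho^*$; it solves for $\rho^*$ by bisection and verifies by Monte Carlo that $g(\rho^*)\approx 0$.

```python
import math
import random

def Phi(x):   # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi(x):   # standard normal density
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

# Hypothetical normalization ||mu|| = sigma = 1.  Writing
# u = (1 - rho*||mu||^2)/(rho*sigma*||mu||), the identity becomes Phi(u) = phi(u).
lo, hi = -1.0, 0.0                    # Phi - phi changes sign on [-1, 0]
for _ in range(80):                   # bisection for the root
    mid = 0.5 * (lo + hi)
    if Phi(mid) - phi(mid) < 0.0:
        lo = mid
    else:
        hi = mid
u_star = 0.5 * (lo + hi)
rho_star = 1.0 / (1.0 + u_star)       # invert u = (1 - rho)/rho

# Monte Carlo check that g(rho*) = E[Z 1{rho* Z <= 1}] vanishes for Z ~ N(1, 1).
random.seed(0)
n = 200_000
g = sum(z * (rho_star * z <= 1.0)
        for z in (random.gauss(1.0, 1.0) for _ in range(n))) / n
```

Here $\rho^*$ comes out positive, as the lemma asserts, and the empirical value of $g(\rho^*)$ is zero up to Monte Carlo error.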
The previous lemma defined $\bm{\theta}^*$ to be the minimizer of the loss function and showed that it is a positive multiple of $\bm{\mu}$. We now show that $\bm{\theta}^*$ and its positive scalar multiples are exactly the set of optimal classifiers in the sense of \eqref{eq:stop_criteria_observation}; that is, we give an exact characterization of that set.
\begin{lemma}[Characterization of the optimal classifier]
The following holds:
\begin{equation}
\Argmax_{\bm{\theta}} \bP\left(\bm{\xi}^T\bm{\theta}>0\right) = \{ \lambda\cdot \bm{\theta}^*: \lambda>0\}.
\end{equation}
\end{lemma}
\begin{proof}
Observe that the following simple fact holds.
\begin{equation}\label{app.fact:PhiProb}
\bP_{\hat{\bm{\xi}}}\left(\hat{\bm{\xi}}^T\bm{\theta}\geq t\right)=\Phi\left( \frac{\bm{\mu}^T\bm{\theta}-t}{\sigma \Vert \bm{\theta}\Vert}\right), \quad \text{for all $\bm{\theta}\in \R^d$, $t\in \R$ and $\hat{\bm{\xi}} \sim N(\bm{\mu},\sigma^2I_d)$}.
\end{equation}
Therefore we have that $\bP_{\bm{\xi}}(\bm{\xi}^T\bm{\theta}>0)=\Phi\left(\frac{\Vert \bm{\mu} \Vert}{\sigma}\cdot\cos(w_{\bm{\theta}}) \right)$ where $\bm{\xi} \sim N(\bm{\mu},\sigma^2I_d)$ and $w_{\bm{\theta}}$ denotes the angle between the two vectors $\bm{\theta}$ and $\bm{\mu}$. Since $\Phi$ is a strictly increasing function, this probability is maximized exactly when $\cos(w_{\bm{\theta}})=1$, that is, when $\bm{\theta}=\rho \bm{\mu}$ for some $\rho > 0$. As $\bm{\theta}^*=\rho^*\bm{\mu}$ with $\rho^*>0$, these are precisely the positive scalar multiples of $\bm{\theta}^*$, completing the proof.
\end{proof}
\subsubsection{Low Variance Regime}\label{sec:low_log}
The main goal of this subsection is to give a proof of Theorem~\ref{theorem_logistic}.1. We restate it below.
\begin{manualtheorem}{2.1}[]\label{thm.a}
Let $\bm{\theta}_0=\bm{0}$ and suppose that $\sigma \leq 0.16\Vert \bm{\mu} \Vert$. The following is then true.
\begin{equation}\begin{aligned}
\bE[T]&\leq 4+\frac{\sigma}{\Vert \bm{\mu}\Vert}\left(\frac{29,584 \cdot (1+\alpha \Vert \bm{\mu}\Vert^2)^2}{\alpha \Vert \bm{\mu}\Vert^2}+15,224 \cdot \sigma^4\alpha^4\Vert \bm{\mu}\Vert^4\right)+\frac{14,792 \cdot (1+\alpha \Vert \bm{\mu}\Vert^2)^2}{\alpha\Vert\bm{\mu}\Vert^2}.
\end{aligned} \end{equation}
\end{manualtheorem}
Recall that in the low variance regime (\textit{i.e.}, $\sigma \leq 0.16\Vert \bm{\mu} \Vert$), we define the target set by
\begin{equation}\label{eq:C_low_proof}
C=\{\bm{\theta}:\bm{\theta}^T\bm{\mu}\geq 1\}
\end{equation}
and the test function by
\begin{equation}\label{eq:V_low_proof}
V=\left(M-\bm{\mu}^T\bm{\theta}\right)^2
\end{equation}
where $M=86+86\alpha \Vert \bm{\mu} \Vert^2$. As we discussed earlier, our aim is to show that
\[
\bE[\tau_m^C] \underset{\sim}{<} m.
\]
The following proposition bounds the expected value of the stopping time $\tau_1^C$ provided we initialize with $\bm{\theta}_0=\bm{\theta}$ such that $\bm{\mu}^T\bm{\theta}<1$.
\begin{proposition}\label{prp:low_et1} Fix a vector $\bm{\theta} \in \R^d$ such that $\bm{\mu}^T \bm{\theta} < 1$. Suppose that $\sigma\leq 0.16 \Vert \bm{\mu} \Vert$. Consider the set $C$ and the function $V$ defined in \eqref{eq:C_low_proof} and \eqref{eq:V_low_proof} respectively. Then the following bound holds
\begin{equation}
\bE[\tau_1^{C}|\bm{\theta}_0=\bm{\theta}] \leq \frac{V(\bm{\theta})}{\alpha \Vert \bm{\mu} \Vert^2}.
\end{equation}
\end{proposition}
\begin{proof}[Proof of Proposition \ref{prp:low_et1}] For simplicity we write $\mathcal{F}_{-1}:=\sigma\left(\{\bm{\theta}_0=\bm{\theta}\} \right)$. By expanding out the term, we get for any $k \ge 1$ the following
\begin{align} \label{eq: low_noise_blah_1}
V(\bm{\theta}_{k})= V(\bm{\theta}_{k-1})-\frac{2\alpha \bm{\mu}^T\bm{\xi}_k(M-\bm{\mu}^T\bm{\theta}_{k-1})}{1+\exp(\bm{\xi}_k^T\bm{\theta}_{k-1})}+\frac{\alpha^2(\bm{\mu}^T\bm{\xi}_k)^2}{(1+\exp(\bm{\xi}_k^T\bm{\theta}_{k-1}))^2}.
\end{align}
We fix an integer $n \ge 1$ and write the quantity $\bm{\mu}^T\bm{\xi}_k=\Vert\bm{\mu}\Vert^2+\sigma\Vert \bm{\mu}\Vert \psi_k$ with $\psi_k \sim N(0,1)$. We have
\begin{align*}
&\bE_{\bm{\xi}_k}\left[ \frac{\bm{\mu}^T\bm{\xi}_k}{1+\exp(\bm{\xi}_k^T\bm{\theta}_{k-1})}|\mathcal{F}_{k-1}\right]1_{\{ \tau_1^{C}\wedge n \geq k\}}\\
& \qquad =\bE_{\bm{\xi}_k}\left[ \frac{\Vert\bm{\mu}\Vert^2}{1+\exp(\bm{\xi}_k^T\bm{\theta}_{k-1})}|\mathcal{F}_{k-1}\right]1_{\{ \tau_1^{C}\wedge n \geq k\}}+\sigma \Vert \bm{\mu}\Vert\bE_{\bm{\xi}_k}\left[ \frac{\psi_k}{1+\exp(\bm{\xi}_k^T\bm{\theta}_{k-1})}|\mathcal{F}_{k-1}\right]1_{\{ \tau_1^{C}\wedge n \geq k\}}\\
& \qquad \geq \Vert \bm{\mu} \Vert^2 \bE_{\bm{\xi}_k}\left[ \frac{1}{1+\exp(\bm{\xi}_k^T\bm{\theta}_{k-1})}|\mathcal{F}_{k-1}\right]1_{\{ \tau_1^{C}\wedge n \geq k\}}-\sigma \Vert \bm{\mu}\Vert\bE_{\psi_k}\left[ \vert \psi_k \vert\right]1_{\{ \tau_1^{C}\wedge n \geq k\}}\\
& \qquad \ge \Vert \bm{\mu} \Vert^2 \bE_{\bm{\xi}_k}\left[ \frac{1}{1+\exp(\bm{\xi}_k^T\bm{\theta}_{k-1})} \left ( 1_{\{\bm{\mu}^T \bm{\theta}_{k-1} \geq \bm{\xi}_k^T \bm{\theta}_{k-1}\}} + 1_{\{\bm{\mu}^T \bm{\theta}_{k-1} < \bm{\xi}_k^T \bm{\theta}_{k-1}\}} \right ) |\mathcal{F}_{k-1}\right]1_{\{ \tau_1^{C}\wedge n \geq k\}}\\
& \qquad \qquad \qquad -\sigma \Vert \bm{\mu}\Vert \sqrt{\frac{2}{\pi}}1_{\{ \tau_1^{C}\wedge n \geq k\}}\\
& \qquad \geq \frac{\Vert \bm{\mu}\Vert^2}{1+\exp(\bm{\mu}^T\bm{\theta}_{k-1})}\bE_{\bm{\xi}_k}\left[ 1_{\{\bm{\mu}^T \bm{\theta}_{k-1} \geq \bm{\xi}_k^T \bm{\theta}_{k-1}\}} |\mathcal{F}_{k-1} \right]1_{\{ \tau_1^{C}\wedge n \geq k\}}-\sigma \Vert \bm{\mu}\Vert \sqrt{\frac{2}{\pi}}1_{\{ \tau_1^{C}\wedge n \geq k\}}\\
&\qquad \geq \frac{\Vert \bm{\mu}\Vert^2}{2(1+e)}1_{\{ \tau_1^{C}\wedge n \geq k\}}-\sigma \Vert \bm{\mu}\Vert \sqrt{\frac{2}{\pi}}1_{\{ \tau_1^{C}\wedge n \geq k\}}\\
& \qquad \geq 0.006\Vert \bm{\mu} \Vert^2 1_{\{ \tau_1^{C}\wedge n \geq k\}}.
\end{align*}
Here the first inequality follows from $-\bE[X] \le \bE[|X|]$ and $1+ \exp(\bm{\xi}_k^T\bm{\theta}_{k-1}) \ge 1$, the second from \eqref{fact:norm_Gaussians}, and the second to last from the observation that for any normally distributed $X$ we have $\bP(X \le \bE[X]) = 1/2$, together with $\bm{\xi}_k^T\bm{\theta}_{k-1} \sim N(\bm{\mu}^T\bm{\theta}_{k-1}, \sigma^2 \norm{\bm{\theta}_{k-1}}^2)$ and $\bm{\mu}^T\bm{\theta}_{k-1} < 1$ on the event $\{\tau_1^{C} \wedge n \ge k\}$. The last inequality uses the assumption $\sigma \le 0.16 \norm{\bm{\mu}}$. By taking conditional expectations in \eqref{eq: low_noise_blah_1} and combining with the above sequence of inequalities, we deduce the following bound
\begin{align*}
&\bE\left[V(\bm{\theta}_{k})-V(\bm{\theta}_{k-1})|\mathcal{F}_{k-1}\right]1_{\{ \tau_1^{C}\wedge n \geq k\}}\\
& \quad \qquad = \bE_{\bm{\xi}_k} \left [-\frac{2\alpha \bm{\mu}^T\bm{\xi}_k(M-\bm{\mu}^T\bm{\theta}_{k-1})}{1+\exp(\bm{\xi}_k^T\bm{\theta}_{k-1})} | \mathcal{F}_{k-1} \right] 1_{\{ \tau_1^{C}\wedge n \geq k\}} + \bE_{\bm{\xi}_k} \left [\frac{\alpha^2(\bm{\mu}^T\bm{\xi}_k)^2}{(1+\exp(\bm{\xi}_k^T\bm{\theta}_{k-1}))^2} | \mathcal{F}_{k-1} \right] 1_{\{ \tau_1^{C}\wedge n \geq k\}}\\
& \quad \qquad \leq -0.012(M-1)\alpha \Vert \bm{\mu} \Vert^21_{\{ \tau_1^{C}\wedge n \geq k\}}+ \alpha^2\Vert \bm{\mu} \Vert^2\left(\Vert \bm{\mu}\Vert^2+\sigma^2 \right)1_{\{ \tau_1^{C}\wedge n \geq k\}}\\
& \quad \qquad = \alpha \Vert \bm{\mu} \Vert^2\left[ \alpha\left(\Vert \bm{\mu} \Vert^2+\sigma^2\right)-0.012(M-1)\right]1_{\{ \tau_1^{C}\wedge n \geq k\}}.
\end{align*}
Here the first inequality follows from $\bm{\mu}^T\bm{\theta}_{k-1} 1_{\{\tau_1^{C} \wedge n \ge k\}} \le 1_{\{\tau_1^{C}\wedge n \geq k\}}$ and upper bounding $\frac{(\bm{\mu}^T\bm{\xi}_k)^2}{(1+\exp(\bm{\xi}_k^T\bm{\theta}_{k-1}))^2}$ by $(\bm{\mu}^T\bm{\xi}_k)^2$ and then applying \eqref{fact:norm_Gaussians}.
A quick computation after plugging in the value of $M$ and the bound $\sigma\leq 0.16\Vert \bm{\mu} \Vert$ yields
\begin{equation} \label{eq: low_noise_blah_10}
\bE\left[V(\bm{\theta}_{k})-V(\bm{\theta}_{k-1})|\mathcal{F}_{k-1}\right]1_{\{ \tau_1^{C}\wedge n \geq k\}} \leq -\alpha \Vert \bm{\mu} \Vert^21_{\{ \tau_1^{C}\wedge n \geq k\}}.
\end{equation}
Using Appendix Lemma \ref{lem:drift_lemma_appendix}, we obtain that $\bE[\tau_1^{C}\wedge n|\mathcal{F}_{-1}] \leq \frac{V(\bm{\theta})}{\alpha \Vert \bm{\mu} \Vert^2}$. The result follows by the monotone convergence theorem.
\end{proof}
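To illustrate Proposition \ref{prp:low_et1}, one can simulate the recursion $\bm{\theta}_k=\bm{\theta}_{k-1}+\frac{\alpha\bm{\xi}_k}{1+\exp(\bm{\xi}_k^T\bm{\theta}_{k-1})}$ underlying \eqref{eq: low_noise_blah_1} and record the first time $\bm{\mu}^T\bm{\theta}_k\geq 1$. The Python sketch below uses hypothetical parameter values ($d=5$, $\alpha=0.1$, $\Vert\bm{\mu}\Vert=1$, $\sigma=0.1$), chosen only so that the low variance condition $\sigma\leq 0.16\Vert\bm{\mu}\Vert$ holds.

```python
import math
import random

random.seed(1)
# Hypothetical parameters (not from the paper): ||mu|| = 1 and sigma = 0.1 <= 0.16.
d, alpha, sigma = 5, 0.1, 0.1
mu = [1.0] + [0.0] * (d - 1)

def hitting_time(max_iter=100_000):
    """Run theta_k = theta_{k-1} + alpha*xi_k/(1+exp(xi_k^T theta_{k-1}))
    until mu^T theta_k >= 1, i.e. until theta_k enters the target set C."""
    theta = [0.0] * d
    for k in range(1, max_iter + 1):
        xi = [m + sigma * random.gauss(0.0, 1.0) for m in mu]
        s = sum(x * t for x, t in zip(xi, theta))
        step = alpha / (1.0 + math.exp(s))
        theta = [t + step * x for t, x in zip(theta, xi)]
        if theta[0] >= 1.0:          # mu^T theta, since mu = e_1
            return k
    return max_iter

avg_tau = sum(hitting_time() for _ in range(200)) / 200
M = 86 + 86 * alpha                  # M = 86 + 86*alpha*||mu||^2
bound = M ** 2 / alpha               # V(0)/(alpha*||mu||^2), since V(0) = M^2
```

In this regime the empirical hitting time is typically a few dozen steps, comfortably below the (deliberately loose) Lyapunov bound $V(\bm{0})/(\alpha\Vert\bm{\mu}\Vert^2)$.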
We next upper bound $\bE[\tau_m^C]$ for $m\geq 1$ in the following proposition.
\begin{proposition}\label{prp:low_etm}
Let $\bm{\theta}_0=\bm{0}$ and suppose that $\sigma\leq 0.16 \Vert \bm{\mu} \Vert$. Consider the set $C$ defined in \eqref{eq:C_low_proof}. Then the following bound holds for all $m \ge 1$
\begin{equation}
\bE[\tau_m^C]\leq (m-1)\left(2+\frac{\sigma}{\Vert \bm{\mu}\Vert}\left(\frac{2M^2}{\alpha \Vert \bm{\mu}\Vert^2}+7,612 \cdot \sigma^4\alpha^4\Vert \bm{\mu}\Vert^4\right)\right)+\frac{M^2}{\alpha \Vert \bm{\mu}\Vert^2},
\end{equation}
where $M=86+86\alpha \Vert \bm{\mu} \Vert^2$.
\end{proposition}
\begin{proof}[Proof of Proposition \ref{prp:low_etm}]
First, it is clear that when $m = 1$ the result holds by Proposition~\ref{prp:low_et1} with $\bm{\theta}_0 = \bm{0}$. We now assume that $\tau_{m-1}^{C}<\infty$ a.s. for some $m\geq 2$ and fix an integer $n \ge 1$. We decompose the space to yield the following bounds
\begin{equation} \begin{aligned} \label{eq: low_noise_bound}
\bE\left[(\tau_m^{C}-\tau_{m-1}^{C})\wedge n|\mathcal{F}_{\tau_{m-1}^{C}+1}\right]&=\bE\left[((\tau_m^{C}-\tau_{m-1}^{C})\wedge n)|\mathcal{F}_{\tau_{m-1}^{C}+1}\right]1_{\{\bm{\mu}^T\bm{\theta}_{\tau_{m-1}^{C}+1}\geq 1\}}\\
&+\bE\left[((\tau_m^{C}-\tau_{m-1}^{C})\wedge n)|\mathcal{F}_{\tau_{m-1}^{C}+1}\right]1_{\{\bm{\mu}^T\bm{\theta}_{\tau_{m-1}^{C}+1}<1\}}\\
&\leq 1+ \bE\left[((\tau_m^{C}-\tau_{m-1}^{C})\wedge n)|\mathcal{F}_{\tau_{m-1}^{C}+1}\right]1_{\{\bm{\mu}^T\bm{\theta}_{\tau_{m-1}^{C}+1}<1\}}\\
&=1+\sum_{i=1}^{\infty}\bE\left[(\tau_m^{C}-\tau_{m-1}^{C})\wedge n|\mathcal{F}_{\tau_{m-1}^{C}+1}\right]1_{\{i-1\leq 1-\bm{\mu}^T\bm{\theta}_{\tau_{m-1}^{C}+1}<i\}}\\
&\leq 2+\sum_{i=1}^{\infty}\bE\left[\tau_1^{C}\wedge n|\bm{\theta}_0=\bm{\theta}_{\tau_{m-1}^{C}+1}\right]1_{\{i-1\leq 1-\bm{\mu}^T\bm{\theta}_{\tau_{m-1}^{C}+1}<i\}}.
\end{aligned} \end{equation}
Here the first inequality follows because $((\tau_m^{C}-\tau_{m-1}^{C})\wedge n) 1_{\{\bm{\mu}^T\bm{\theta}_{\tau_{m-1}^{C}+1} \ge 1\}} \le 1$ by the definition of $C$, and the last inequality follows by the strong Markov property. For each $i \ge 1$, we observe the bound
\begin{align}
1-\bm{\mu}^T\bm{\theta}_{\tau_{m-1}^{C}+1} &=1-\bm{\mu}^T\bm{\theta}_{\tau_{m-1}^{C}}-\frac{\alpha \bm{\mu}^T\bm{\xi}_{{\tau_{m-1}^{C}}+1}}{1+\exp(\bm{\xi}_{{\tau_{m-1}^{C}}+1}^T\bm{\theta}_{\tau_{m-1}^{C}})}\nonumber\\
&\leq -\frac{\alpha \bm{\mu}^T\bm{\xi}_{\tau_{m-1}^{C}+1}}{1+\exp(\bm{\xi}_{\tau_{m-1}^{C}+1}^T\bm{\theta}_{\tau_{m-1}^{C}})} \label{eq:i-1_calculus}\\
& \leq -\alpha \bm{\mu}^T\bm{\xi}_{\tau_{m-1}^{C}+1},\nonumber
\end{align}
where the second inequality follows because $1-\bm{\mu}^T\bm{\theta}_{\tau_{m-1}^{C}} \le 0$, and the last because, on the events of interest, \eqref{eq:i-1_calculus} forces $-\alpha \bm{\mu}^T \bm{\xi}_{\tau_{m-1}^{C}+1}$ to be nonnegative.
By Proposition~\ref{prp:low_et1} and the above, we deduce
\begin{equation} \begin{aligned} \label{eq: low_noise_blah_3}
\bE\left[\tau_1^{C}\wedge n|\bm{\theta}_0=\bm{\theta}_{\tau_{m-1}^{C}+1}\right]1_{\{i-1\leq 1-\bm{\mu}^T\bm{\theta}_{\tau_{m-1}^{C}+1}<i\}}&\leq \frac{(M-\bm{\mu}^T\bm{\theta}_{\tau_{m-1}^{C}+1})^2}{\alpha \Vert \bm{\mu} \Vert^2}1_{\{i-1\leq 1-\bm{\mu}^T\bm{\theta}_{\tau_{m-1}^{C}+1}\leq i\}}\\&\leq \frac{(M+i-1)^2}{\alpha \Vert \bm{\mu}\Vert^2}1_{\{ \bm{\mu}^T\bm{\xi}_{\tau_{m-1}^{C}+1}\leq \frac{1-i}{\alpha}\}}.
\end{aligned} \end{equation}
Finally we observe that
\begin{equation} \begin{aligned} \label{eq: low_noise_blah_4}
\bE\left[1_{\{ \bm{\mu}^T\bm{\xi}_{\tau_{m-1}^{C}+1}\leq \frac{1-i}{\alpha}\}} \right]&=\bE\left[\sum_{k=1}^{\infty}1_{\{ \bm{\mu}^T\bm{\xi}_{k+1}\leq \frac{1-i}{\alpha}\}}1_{\{\tau_{m-1}^{C}=k\}}\right]\\
&=\sum_{k=1}^{\infty}\bE\left[1_{\{ \bm{\mu}^T\bm{\xi}_{k+1}\leq \frac{1-i}{\alpha}\}} \right]\bE\left[1_{\{\tau_{m-1}^{C}=k\}} \right]\\
&= \Phi\left(\frac{\frac{1-i}{\alpha}-\Vert \bm{\mu}\Vert^2}{\sigma\Vert \bm{\mu}\Vert } \right)\sum_{k=1}^{\infty}\bE\left[1_{\{\tau_{m-1}^{C}=k\}} \right]\\
&=\Phi\left(\frac{\frac{1-i}{\alpha}-\Vert \bm{\mu}\Vert^2}{\sigma\Vert \bm{\mu}\Vert } \right).
\end{aligned} \end{equation}
The first equality decomposes over the value of $\tau_{m-1}^{C}$, the second follows by independence of $\bm{\xi}_{k+1}$ and the event $\{\tau_{m-1}^{C}=k\}$, and the third because $\bm{\mu}^T \bm{\xi}_{k+1} \sim N(\norm{\bm{\mu}}^2, \sigma^2 \norm{\bm{\mu}}^2)$; the last equality uses $\tau_{m-1}^{C}<\infty$ a.s. Indeed, the same argument shows $\bm{\xi}_{\tau_{m-1}^{C}+1} \sim N(\bm{\mu},\sigma^2I_d)$, see \cite[Theorem 4.3.1]{Durrett_probability_book}. By combining \eqref{eq: low_noise_bound}, \eqref{eq: low_noise_blah_3}, and \eqref{eq: low_noise_blah_4}, we obtain the following
\begin{equation} \begin{aligned} \label{eq: low_noise_blah_5}
\bE\left[(\tau_m^{C}-\tau_{m-1}^{C})\wedge n\right]&\leq 2+\sum_{i=1}^{\infty} \frac{(M+i-1)^2}{\alpha \Vert \bm{\mu}\Vert^2}\Phi\left(\frac{\frac{1-i}{\alpha}-\Vert \bm{\mu}\Vert^2}{\sigma\Vert \bm{\mu}\Vert } \right)\\&=2+\sum_{i=1}^{\infty} \frac{(M+i-1)^2}{\alpha \Vert \bm{\mu}\Vert^2}\Phi^{c}\left(\frac{\Vert \bm{\mu}\Vert^2+\frac{i-1}{\alpha}}{\sigma\Vert \bm{\mu}\Vert } \right)\\&\le2+\frac{\sigma}{\Vert \bm{\mu}\Vert\sqrt{2\pi}}\sum_{i=1}^{\infty} \frac{(M+i-1)^2}{\alpha\Vert \bm{\mu}\Vert^2+i-1}\exp\left(-\frac{1}{2}\left(\frac{\Vert \bm{\mu}\Vert^2+\frac{i-1}{\alpha}}{\sigma\Vert \bm{\mu}\Vert}\right)^2 \right),
\end{aligned} \end{equation}
where we used the inequality $\Phi^{c}(t)<\frac{1}{t\sqrt{2\pi}}\exp(-\frac{t^2}{2})$. We bound the sum by
\begin{equation}
\begin{aligned} \label{eq: low_noise_blah_6}
\sum_{i=3}^{\infty} &\frac{(M+i-1)^2}{\alpha\Vert \bm{\mu}\Vert^2+i-1}\exp\left(-\frac{1}{2}\left(\frac{\Vert \bm{\mu}\Vert^2+\frac{i-1}{\alpha}}{\sigma\Vert \bm{\mu}\Vert}\right)^2 \right)\\
&\qquad = \sum_{i=3}^{\infty} \frac{(M+i-1)^2}{\alpha\Vert \bm{\mu}\Vert^2+i-1} \cdot \frac{4\sigma^4 \norm{\bm{\mu}}^4 \alpha^4}{(\alpha \norm{\bm{\mu}}^2 + i-1)^4} \cdot \frac{(\alpha \norm{\bm{\mu}}^2 + i-1)^4}{4\sigma^4 \norm{\bm{\mu}}^4 \alpha^4} \exp\left(-\frac{1}{2}\left(\frac{\Vert \bm{\mu}\Vert^2+\frac{i-1}{\alpha}}{\sigma\Vert \bm{\mu}\Vert}\right)^2 \right)\\
&\qquad \leq 4\alpha^4\sigma^4\Vert \bm{\mu} \Vert^4 \sum_{i=3}^{\infty}\frac{(M+i-1)^2}{(\alpha \Vert \bm{\mu}\Vert^2+i-1)^4}\\
&\qquad \leq 29,584 \cdot \alpha^4\sigma^4\Vert \bm{\mu}\Vert^4 \sum_{i=3}^{\infty}\frac{1}{(i-1)^2} \\
&\qquad \leq 29,584 \cdot \left ( \frac{\pi^2}{6}-1 \right ) \cdot \alpha^4\sigma^4\Vert \bm{\mu}\Vert^4\\
& \qquad \leq 19,080 \cdot \alpha^4\sigma^4 \Vert \bm{\mu}\Vert^4.
\end{aligned} \end{equation}
Here the first inequality uses $x^2\leq e^x$ for all $x>0$ together with $\alpha \norm{\bm{\mu}}^2 + i-1 \ge 1$ for all $i \ge 2$; the second uses that $\frac{M+i-1}{\alpha \Vert \bm{\mu}\Vert^2+i-1}\leq 86$ for all $i \ge 3$ and $\frac{1}{\alpha \Vert \bm{\mu}\Vert^2+i-1}\leq \frac{1}{i-1}$ for $i\geq 2$. We also upper bound
\begin{align} \label{eq: low_noise_blah_7}
\sum_{i=1}^{2} \frac{(M+i-1)^2}{\alpha\Vert \bm{\mu}\Vert^2+i-1}\exp\left(-\frac{1}{2}\left(\frac{\Vert \bm{\mu}\Vert^2+\frac{i-1}{\alpha}}{\sigma\Vert \bm{\mu}\Vert}\right)^2 \right) &\leq \frac{M^2}{\alpha \Vert \bm{\mu} \Vert^2}+\frac{(M+1)^2}{\alpha \Vert \bm{\mu} \Vert^2+1} \le \frac{5M^2}{\alpha \norm{\bm{\mu}}^2},
\end{align}
where the first inequality uses $e^{x} \le 1$ for all $x\le 0$, and the second holds because $\alpha \norm{\bm{\mu}}^2 + 1 \ge \alpha \norm{\bm{\mu}}^2$ and $M +1 \le 2M$. By combining \eqref{eq: low_noise_blah_5}, \eqref{eq: low_noise_blah_6}, and \eqref{eq: low_noise_blah_7}, we obtain
\begin{align*}
\bE\left[(\tau_m^{C}-\tau_{m-1}^{C})\wedge n\right]&\leq 2+\frac{\sigma}{\Vert \bm{\mu}\Vert}\left(\frac{2M^2}{\alpha \Vert \bm{\mu}\Vert^2}+7,612 \cdot \sigma^4\alpha^4\Vert \bm{\mu}\Vert^4\right).
\end{align*}
Taking the limit as $n \to +\infty$, we observe that
\[\bE[\tau_m^C]\leq 2+\frac{\sigma}{\Vert \bm{\mu}\Vert}\left(\frac{2M^2}{\alpha \Vert \bm{\mu}\Vert^2}+7,612 \cdot \sigma^4\alpha^4\Vert \bm{\mu}\Vert^4\right)+\bE[\tau_{m-1}^{C}].
\]
We then iterate the above inequality yielding
\[ \bE[\tau_m^C] \le (m-1) \left ( 2+\frac{\sigma}{\Vert \bm{\mu}\Vert}\left(\frac{2M^2}{\alpha \Vert \bm{\mu}\Vert^2}+7,612 \cdot \sigma^4\alpha^4\Vert \bm{\mu}\Vert^4\right) \right ) + \bE[\tau_1^{C}]. \]
The result follows after applying Proposition~\ref{prp:low_et1} with $\bm{\theta}_0 = \bm{0}$.
\end{proof}
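The elementary inequalities and numerical constants used in \eqref{eq: low_noise_blah_5}--\eqref{eq: low_noise_blah_7} can be verified directly. The sketch below spot-checks the Mills-ratio bound $\Phi^{c}(t)<\frac{1}{t\sqrt{2\pi}}e^{-t^2/2}$, the inequality $x^2\leq e^x$ on a grid, and the rounded constants $19{,}080$, $7{,}612$ and $2$.

```python
import math

def Phi_c(t):   # standard normal tail probability
    return 0.5 * math.erfc(t / math.sqrt(2.0))

# Mills-ratio bound Phi^c(t) < exp(-t^2/2)/(t*sqrt(2*pi)), spot-checked for t > 0.
mills_ok = all(Phi_c(t) < math.exp(-t * t / 2.0) / (t * math.sqrt(2.0 * math.pi))
               for t in (0.5, 1.0, 2.0, 5.0, 10.0))

# x^2 <= e^x for x > 0, spot-checked on a grid.
square_ok = all(x * x <= math.exp(x) for x in (i / 100.0 for i in range(1, 2000)))

# sum_{j >= 2} 1/j^2 = pi^2/6 - 1, and the rounded constants used in the proof.
tail = math.pi ** 2 / 6.0 - 1.0
c1 = 29_584 * tail                      # <= 19,080
c2 = 19_080 / math.sqrt(2.0 * math.pi)  # <= 7,612
c3 = 5.0 / math.sqrt(2.0 * math.pi)     # <= 2
```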
We are now ready to prove Theorem~\ref{thm.a}.
\begin{proof}[Proof of Theorem
\ref{thm.a}]
In order to simplify the subsequent argument, we define the quantity
\[ M' := 2+\frac{\sigma}{\Vert \bm{\mu}\Vert}\left(\frac{2M^2}{\alpha \Vert \bm{\mu}\Vert^2}+7,612\sigma^4\alpha^4\Vert \bm{\mu}\Vert^4\right), \]
where $M = 86 + 86 \alpha \norm{\bm{\mu}}^2$, as in Proposition~\ref{prp:low_etm}. By Lemma \ref{lem:st_ETC}, $T_C<\infty$ a.s., and therefore $\sum_{m=1}^{\infty}1_{\{T_C=\tau_m^{C}\}} = 1$ a.s. We next combine Proposition~\ref{prp:low_etm} with Lemma \ref{lem:st_ETC}; note that $\delta:=\tfrac{1}{2}$ together with the target set $C=\{\bm{\theta}:\bm{\mu}^T\bm{\theta}\geq 1 \}$ satisfies \eqref{eq:ST_delta}. Hence
\begin{align*}
\bE[T]&\leq \bE[T_C]=\sum_{m=1}^{\infty} \bE[\tau_m^{C}1_{\{T_C=\tau_m^{C}\}}]\leq \sum_{m=1}^{\infty}\frac{\bE[\tau_m^C]}{2^{m-1}} \le \sum_{m=1}^\infty \frac{(m-1)M' + \frac{M^2}{\alpha \norm{\bm{\mu}}^2} }{2^{m-1}} = 2M' + \frac{2M^2}{\alpha \norm{\bm{\mu}}^2}.
\end{align*}
The result follows after plugging in the value of $M = 86 + 86 \alpha \norm{\bm{\mu}}^2$.
\end{proof}
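The two series evaluated in the final display, $\sum_{m\geq 1}2^{-(m-1)}=2$ and $\sum_{m\geq 1}(m-1)2^{-(m-1)}=2$, admit a quick numerical confirmation.

```python
# Partial sums of the two geometric-type series from the proof of Theorem 2.1;
# the omitted tails are dominated by a geometric series and are negligible.
s_geo = sum(1.0 / 2 ** (m - 1) for m in range(1, 200))
s_lin = sum((m - 1) / 2 ** (m - 1) for m in range(1, 200))
```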
\subsubsection{High Variance Regime}\label{sec:high_logistic}
The main goal of this subsection is to give a proof of Theorem~\ref{theorem_logistic}.2. We restate it below.
\begin{manualtheorem}{2.2}[]\label{thm.b}
Let $\bm{\theta}_0=\bm{0}$ and suppose that $ 0.16\Vert \bm{\mu} \Vert \leq \sigma$. Then it holds that $\bE[T]<+\infty$ provided the step-size $\alpha$ satisfies
\begin{equation}\label{eq:high_alpha}
\alpha < \tfrac{\min\{1,B\}}{\Vert \bm{\mu} \Vert^2+d\sigma^2}
\end{equation}
where the constant $B$ is defined by
\begin{equation}
B:=\tfrac{\Vert \bm{\mu}\Vert^4}{4\sigma^4\sqrt{\pi}}\exp\left(-\tfrac{\Vert \bm{\mu} \Vert^2}{2\sigma^2}\right) \Phi^{c}\left( \tfrac{7\sqrt{2}\Vert \bm{\mu} \Vert}{\sigma}\right).
\end{equation}
\end{manualtheorem}
The proof is deferred to the end of this subsection. We first need some preliminaries. Throughout this subsection, the following notation is used
\[
\bm{\theta} = \rho\bm{\mu}+ \bm{\tilde{\theta}} \text{ where } \bm{\mu}^T \bm{\tilde{\theta}} = 0.
\]
We also express $\bm{\theta}_k=\rho_k\bm{\mu}+\tilde{\bm{\theta}}_k$, etc. Recall that in the high variance regime (\textit{i.e.}, $0.16\Vert \bm{\mu} \Vert\leq \sigma$), we define the target set by
\begin{equation}\label{eq:C_high_proof_1}
C=\left\{\bm{\theta}: \vert \rho\sigma^2-2\vert < 1 \text{ and } \sigma \Vert \tilde{\bm{\theta}} \Vert \leq 3404.46 \right\}.
\end{equation}
We first establish \eqref{eq:ST_delta}.
\begin{lemma}\label{lem: high_noise_delta}
Consider the set defined in \eqref{eq:C_high_proof_1}. Fix $\bm{\theta}\in {C}$ and let $\hat{\bm{\xi}} \sim N(\bm{\mu},\sigma^2 I_d)$. The following bound holds.
\begin{equation}\label{eq:deltaP}
\bP_{\hat{\bm{\xi}}}\left(\bm{\theta}^T\hat{\bm{\xi}}\geq 1\right)\geq \Phi\left(\frac{-\sigma}{\Vert \bm{\mu} \Vert}\right).
\end{equation}
\end{lemma}
\begin{proof}
It is easy to verify that $\bP_{\hat{\bm{\xi}}}\left(\bm{\theta}^T\hat{\bm{\xi}}\geq 1\right)=\Phi\left(\frac{\bm{\theta}^T\bm{\mu}-1}{\sigma \Vert \bm{\theta}\Vert} \right)$. We are interested in lower bounding the term $\frac{\bm{\theta}^T\bm{\mu}-1}{\sigma \Vert \bm{\theta}\Vert}$ independent of $\bm{\theta}$.
As $\bm{\theta} \in {C}$, we have $\vert \rho\sigma^2-2\vert < 1$ which yields $\rho \geq \frac{1}{\sigma^2}$. Combining this with $\Vert \bm{\theta} \Vert^2 = \| \rho \bm{\mu} + \tilde{\bm{\theta}} \|^2 \geq \rho^2\Vert \bm{\mu} \Vert^2$, we obtain that $\sigma \Vert \bm{\theta} \Vert\geq \frac{\Vert \bm{\mu} \Vert}{\sigma}$. This gives
\begin{equation}
\frac{\bm{\theta}^T\bm{\mu}-1}{\sigma \Vert \bm{\theta}\Vert} = \frac{(\rho \bm{\mu} + \tilde{\bm{\theta}})^T\bm{\mu}-1}{\sigma \norm{\bm{\theta}}} \geq \frac{-1}{\sigma \Vert \bm{\theta}\Vert}\geq \frac{-\sigma}{\Vert \bm{\mu} \Vert}.
\end{equation}
Here the first inequality follows from $\rho>0$ and the second inequality from $\sigma \Vert \bm{\theta} \Vert\geq \frac{\Vert \bm{\mu} \Vert}{\sigma}$. The proof is complete after noting that $\Phi$ is an increasing function.
\end{proof}
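Inequality \eqref{eq:deltaP} can also be checked numerically. The sketch below samples points of the target set $C$ under the hypothetical normalization $\Vert\bm{\mu}\Vert=\sigma=1$ (so that the high variance condition holds) and compares $\bP(\bm{\theta}^T\hat{\bm{\xi}}\geq 1)=\Phi\big(\frac{\bm{\theta}^T\bm{\mu}-1}{\sigma\Vert\bm{\theta}\Vert}\big)$ with the bound $\Phi(-\sigma/\Vert\bm{\mu}\Vert)$.

```python
import math
import random

def Phi(x):   # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

random.seed(2)
norm_mu, sigma = 1.0, 1.0            # hypothetical values; note sigma >= 0.16*||mu||
lower = Phi(-sigma / norm_mu)        # claimed lower bound Phi(-sigma/||mu||)

ok = True
for _ in range(1000):
    # Sample theta = rho*mu + tilde_theta in C:
    # |rho*sigma^2 - 2| < 1 and sigma*||tilde_theta|| <= 3404.46.
    rho = random.uniform(1.0, 3.0) / sigma ** 2
    tilde_norm = random.uniform(0.0, 3404.46) / sigma
    theta_mu = rho * norm_mu ** 2                        # theta^T mu
    theta_norm = math.hypot(rho * norm_mu, tilde_norm)   # ||theta||
    prob = Phi((theta_mu - 1.0) / (sigma * theta_norm))  # P(theta^T xi >= 1)
    ok = ok and prob >= lower
```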
We will need the following technical lemma, whose proof is deferred to the appendix.
\begin{lemma}\label{lem:high_tech_not_appendix}
Consider the following optimization problem
\[
\min f(\bm{\theta}):=\bE_{(\bm{\xi},y)\sim \mathcal{P}_*}\left[\ell(\bm{\xi}^T\bm{\theta},y)\right],
\]
where $\ell$ is a convex function. The sequences $\{\bm{\theta}_k, \bm{\xi}_k \}_{k = 0}^\infty$ generated by Algorithm~\ref{alg:SGD_termination} satisfy, for any $\bm{\theta} \in \R^d$, the following
\begin{equation} \label{eq:tech_lemma_inside}
f(\bm{\theta}_{k-1})-f(\bm{\theta})\leq \frac{1}{2\alpha}\left(\Vert \bm{\theta}_{k-1}-\bm{\theta} \Vert^2-\bE\left[\Vert \bm{\theta}_{k}-\bm{\theta} \Vert^2\, |\mathcal{F}_{k-1}\right] \right)+\bE[\Vert\nabla_{\bm{\theta}}\ell\left( \bm{\xi}_k^T\bm{\theta}\right)\Vert^2 \, | \, \mathcal{F}_{k-1}],
\end{equation}
for all $k\geq 1$ where $f$ is defined in \eqref{optimization_problem} and the filtration $\{\mathcal{F}_k\}_{k=0}^{+\infty}$ in \eqref{eq:sigma}.
\end{lemma}
\begin{remark}
Note that in \eqref{eq:tech_lemma_inside}, we are not assuming that $\ell$ is differentiable. In our setting, $\ell$ is either the logistic loss or the hinge loss. Although the hinge loss is not differentiable, for any $\bm{\theta}$ it holds that $\bm{\xi}_k^T\bm{\theta}\neq 1$ with probability $1$, so $\ell(\bm{\xi}_k^T\bm{\theta})$, viewed as a function of $\bm{\xi}_k$, is differentiable almost surely.
\end{remark}
We now take $\bm{\theta}=\bm{\theta}^*$ in Lemma~\ref{lem:high_tech_not_appendix} and write
\begin{equation}\label{eq:high_ftheta_ktheta}
f(\bm{\theta}_{k-1})-f(\bm{\theta}^*)=f(\bm{\theta}_{k-1})-f(\rho\bm{\mu})+f(\rho\bm{\mu})-f(\bm{\theta}^*).
\end{equation}
In Lemmas~\ref{lem: logistic_case_2} and \ref{lem: logistic_case_1} below, we will show that both terms $f(\bm{\theta}_{k-1})-f(\rho \bm{\mu})$ and $f(\rho\bm{\mu})-f(\bm{\theta}^*)$ are non-negative. Moreover, we will establish that whenever $\bm{\theta}_{k-1} \not\in C$, either the first or second term can be lower bounded by some positive constant. We first lower bound $f(\bm{\theta}_{k-1})-f(\rho \bm{\mu})$. Let us recall that $\bm{\theta}=\rho \bm{\mu}+\tilde{\bm{\theta}}$ where $\tilde{\bm{\theta}}^T\bm{\mu}=0$.
\begin{lemma} \label{lem: logistic_case_2} Fix a vector $\bm{\theta} \in \R^d$ and let $\bm{\xi} \sim N(\bm{\mu},\sigma^2I_d)$. Provided that $\sigma \Vert \tilde{\bm{\theta}}\Vert\geq 4\log(\sqrt{2}+1)$, the following holds
\begin{equation}\label{eq:high_lem_sigmatheta}
\bE_{\bm{\xi}} \left[\log\left( \frac{1+\exp(-\bm{\theta}^T\bm{\xi})}{1+\exp(-\rho \bm{\mu}^T\bm{\xi})}\right)\right] \geq \max\left\{\left ( \tfrac{1}{2} \log(2) + \tfrac{\sigma \Vert \tilde{\bm{\theta}}\Vert}{8} \right )\Phi^c(1)-\left | \frac{\rho}{4}\right | \left(\Vert \bm{\mu}\Vert^2+\sigma \Vert \bm{\mu} \Vert \sqrt{\frac{2}{\pi}} \right)
,0\right\}.
\end{equation}
\end{lemma}
\begin{proof} The two normal random variables, $\bm{\tilde{\theta}}^T\bm{\xi} \sim N(0,\sigma^2\Vert \bm{\tilde{\theta}} \Vert^2)$ and $\bm{\mu}^T\bm{\xi} \sim N(\Vert \bm{\mu} \Vert^2,\sigma^2\Vert \bm{\mu} \Vert^2)$, are independent by \eqref{eqn:fact_independence}. Since we have $\bE_{\bm{\xi}}[\log(\exp(-\bm{\tilde{\theta}}^T\bm{\xi}))]=\bE_{\bm{\xi}}[\log(\exp(\bm{\tilde{\theta}}^T\bm{\xi}))]=0$, it holds
\begin{align*}
\bE_{\bm{\xi}} \left[ \log\left (1+\exp(-\bm{\theta}^T\bm{\xi})\right)\right] &= \bE_{\bm{\xi}} \left[\log \left(1 + \exp(-\tilde{\bm{\theta}}^T\bm{\xi}) \exp(-\rho \bm{\mu}^T\bm{\xi}) \right ) \right ]\\
&=\bE_{\bm{\xi}} \left[ \log\left (\exp(\bm{\tilde{\theta}}^T\bm{\xi})+\exp(-\rho \bm{\mu}^T\bm{\xi})\right)\right]\\
&=\bE_{\bm{\xi}} \left[ \log\left (\exp(-\bm{\tilde{\theta}}^T\bm{\xi})+\exp(-\rho \bm{\mu}^T\bm{\xi})\right)\right],
\end{align*}
where the last equality is true because $\bm{\tilde{\theta}}^T\bm{\xi} \sim -\bm{\tilde{\theta}}^T\bm{\xi}$. Therefore we obtain
\begin{align*}
\bE_{\bm{\xi}} &\left[ \log\left (1+\exp(-\bm{\theta}^T\bm{\xi})\right)\right]\\
& \qquad \qquad= \frac{1}{2} \bE_{\bm{\xi}} \left[ \log\left (\exp(\bm{\tilde{\theta}}^T\bm{\xi})+\exp(-\rho \bm{\mu}^T\bm{\xi})\right)\right] + \frac{1}{2}\bE_{\bm{\xi}} \left[ \log\left (\exp(-\bm{\tilde{\theta}}^T\bm{\xi})+\exp(-\rho \bm{\mu}^T\bm{\xi})\right)\right] \\
& \qquad \qquad= \frac{1}{2} \bE_{\bm{\xi}} \left[ \log\left ( (\exp(\bm{\tilde{\theta}}^T\bm{\xi})+\exp(-\rho \bm{\mu}^T\bm{\xi}))(\exp(-\bm{\tilde{\theta}}^T\bm{\xi})+\exp(-\rho \bm{\mu}^T\bm{\xi})) \right)\right]\\
&\qquad \qquad=\frac{1}{2}\bE_{\bm{\xi}} \left[\log\left(1+\exp(-\bm{\tilde{\theta}}^T\bm{\xi}-\rho \bm{\mu}^T\bm{\xi})+\exp(\bm{\tilde{\theta}}^T\bm{\xi}-\rho \bm{\mu}^T\bm{\xi})+\exp(-2\rho \bm{\mu}^T\bm{\xi})\right) \right].
\end{align*}
By the equality $\exp(\bm{\tilde{\theta}}^T\bm{\xi})+\exp(-\bm{\tilde{\theta}}^T\bm{\xi})= 2+4\sinh^2(\tfrac{\tilde{\bm{\theta}}^T\bm{\xi}}{2})$, we have
\begin{align*}
\bE_{\bm{\xi}} &\left[ \log\left (1+\exp(-\bm{\theta}^T\bm{\xi})\right)\right]\\
& \qquad \qquad \qquad =\frac{1}{2}\bE_{\bm{\xi}} \left[\log\left(1+2\exp(-\rho \bm{\mu}^T\bm{\xi})+\exp(-2\rho \bm{\mu}^T\bm{\xi})+4\sinh^2(\tfrac{\tilde{\bm{\theta}}^T\bm{\xi}}{2})\exp(-\rho \bm{\mu}^T\bm{\xi})\right) \right].
\end{align*}
Therefore, we have
\begin{equation} \begin{aligned} \label{eqn: high_noise_blah_20}
2\bE_{\bm{\xi}}\left[\log\left( \frac{1+\exp(-\bm{\theta}^T\bm{\xi})}{1+\exp(-\rho \bm{\mu}^T\bm{\xi})}\right)\right]&=2\bE_{\bm{\xi}}\left[\log(1+\exp(-\bm{\theta}^T\bm{\xi}))\right]-\bE_{\bm{\xi}}\left[\log\left(1+\exp(-\rho \bm{\mu}^T\bm{\xi})\right)^2\right]\\
\qquad \qquad &= \bE_{\bm{\xi}}\left[\log \left ( 1 + \frac{4\sinh^2(\frac{\tilde{\bm{\theta}}^T\bm{\xi}}{2})\exp(-\rho \bm{\mu}^T\bm{\xi})}{(1+ \exp(-\rho \bm{\mu}^T\bm{\xi}))^2} \right ) \right] \ge 0.
\end{aligned} \end{equation}
Thus it remains to establish the other lower bound in \eqref{eq:high_lem_sigmatheta}. First, we note that $1+ \exp(-\rho \bm{\mu}^T\bm{\xi}) = 2 \exp( -\tfrac{\rho \bm{\mu}^T\bm{\xi}}{2}) \cosh(\tfrac{\rho \bm{\mu}^T\bm{\xi}}{2})$. Fix a constant $r > 0$ and consider the event $\{\bm{\xi}: \vert\tilde{\bm{\theta}}^T\bm{\xi}\vert \geq r\}$. Applying the inequality $x^2+y^2\geq 2\vert xy \vert$ and \eqref{eqn: high_noise_blah_20}, we obtain that
\begin{align}
2\bE_{\bm{\xi}}\left[\log\left( \frac{1+\exp(-\bm{\theta}^T\bm{\xi})}{1+\exp(-\rho \bm{\mu}^T\bm{\xi})}\right)\right] & = \bE_{\bm{\xi}}\left[\log\left(1 + \frac{4\sinh^2(\frac{\tilde{\bm{\theta}}^T\bm{\xi}}{2})\exp(-\rho \bm{\mu}^T\bm{\xi})}{(1+ \exp(-\rho \bm{\mu}^T\bm{\xi}))^2}\right) \right] \nonumber \\
&= \bE_{\bm{\xi}}\left[\log\left(1+\frac{\sinh^2(\frac{\tilde{\bm{\theta}}^T\bm{\xi}}{2})}{\cosh^2(\frac{\rho}{2}\bm{\mu}^T\bm{\xi})} \right) \right] \nonumber \\
&\geq \bE_{\bm{\xi}}\left[\log\left(1+\frac{\sinh^2(\frac{\tilde{\bm{\theta}}^T\bm{\xi}}{2})}{\cosh^2(\frac{\rho}{2}\bm{\mu}^T\bm{\xi})} \right) 1_{\{\bm{\xi}:\vert \tilde{\bm{\theta}}^T\bm{\xi}\vert \geq r\}}\right] \label{eq:high_noise_blah_21}
\\
&\geq \bE_{\bm{\xi}}\left[\left(\log2+\log\left( \frac{\vert\sinh(\frac{\tilde{\bm{\theta}}^T\bm{\xi}}{2})\vert}{\cosh(\frac{\rho}{2}\bm{\mu}^T\bm{\xi})}\right) \right)1_{\{\bm{\xi}:\vert \tilde{\bm{\theta}}^T\bm{\xi}\vert \geq r\}}\right] \nonumber.
\end{align}
Here \eqref{eq:high_noise_blah_21} follows since $\log\left(1+\frac{\sinh^2(\frac{\tilde{\bm{\theta}}^T\bm{\xi}}{2})}{\cosh^2(\frac{\rho}{2}\bm{\mu}^T\bm{\xi})} \right)$ is always nonnegative. From \eqref{eq:fact_affine}, we have $\bm{\mu}^T\bm{\xi} \sim N(\Vert \bm{\mu} \Vert^2,\sigma^2\Vert \bm{\mu} \Vert^2)$ and $\tilde{\bm{\theta}}^T\bm{\xi} \sim N(0,\sigma^2\Vert \tilde{\bm{\theta}} \Vert^2)$. Thus, we write $\tilde{\bm{\theta}}^T\bm{\xi} = \sigma \| \tilde{\bm{\theta}} \| \psi$ where $\psi \sim N(0,1)$. Moreover, we have that $-\log\left(\cosh(\frac{\rho}{2}\bm{\mu}^T\bm{\xi})\right)1_{\{\vert \tilde{\bm{\theta}}^T\bm{\xi}\vert \geq r\}} \geq -\log\left(\cosh(\frac{\rho}{2}\bm{\mu}^T\bm{\xi})\right) $ since $\cosh(\frac{\rho}{2}\bm{\mu}^T\bm{\xi})\geq 1$ always holds. Using the inequality $\log\cosh(x)\leq \vert x\vert$ for all $x$, the following bound holds
\begin{equation} \begin{aligned} \label{eq: high_noise_22}
&\bE_{\bm{\xi}}\left[\log\left( \frac{1+\exp(-\bm{\theta}^T\bm{\xi})}{1+\exp(-\rho \bm{\mu}^T\bm{\xi})}\right)\right]\\
& \quad \ge \tfrac{1}{2} \log(2) \cdot \bE_{\psi}\left[1_{\{\vert \psi\vert \geq \frac{r}{\sigma \Vert\tilde{\bm{\theta}}\Vert}\}}\right] + \tfrac{1}{2} \bE_{\psi}\left[\log\left\vert\sinh(\tfrac{\sigma\Vert \tilde{\bm{\theta}}\Vert\psi}{2})\right\vert1_{\{\vert \psi\vert \geq \frac{r}{\sigma \Vert\tilde{\bm{\theta}}\Vert}\}}\right] - \tfrac{1}{2}\bE_{\bm{\xi}} \left [ \log( \cosh(\tfrac{\rho}{2} \bm{\mu}^T\bm{\xi})) \right ]\\
& \quad \ge \tfrac{1}{2} \log(2) \cdot \bE_{\psi}\left[1_{\{\vert \psi\vert \geq \frac{r}{\sigma \Vert\tilde{\bm{\theta}}\Vert}\}}\right] + \tfrac{1}{2} \bE_{\psi}\left[\log\left\vert\sinh(\tfrac{\sigma\Vert \tilde{\bm{\theta}}\Vert\psi}{2})\right\vert1_{\{\vert \psi\vert \geq \frac{r}{\sigma \Vert\tilde{\bm{\theta}}\Vert}\}}\right] - \tfrac{1}{2} \bE_{\bm{\xi}} \left [ | \tfrac{\rho}{2} \bm{\mu}^T\bm{\xi} | \right ]\\
&\quad\geq \tfrac{1}{2} \log(2) \cdot \bE_{\psi}\left[1_{\{\vert \psi\vert \geq \frac{r}{\sigma \Vert\tilde{\bm{\theta}}\Vert}\}}\right] + \tfrac{1}{2} \bE_{\psi}\left[\log\left\vert\sinh(\tfrac{\sigma\Vert \tilde{\bm{\theta}}\Vert\psi}{2})\right\vert1_{\{\vert \psi\vert \geq \frac{r}{\sigma \Vert\tilde{\bm{\theta}}\Vert}\}}\right]-\left | \frac{\rho}{4}\right | \left(\Vert \bm{\mu}\Vert^2+\sigma \Vert \bm{\mu} \Vert \sqrt{\frac{2}{\pi}} \right),
\end{aligned} \end{equation}
where the last inequality uses \eqref{fact:norm_Gaussians}. Using the inequality $\vert \sinh(x) \vert \geq \exp(\frac{\vert x\vert}{2})$ for $\vert x \vert \geq 2\log(\sqrt{2}+1)$, letting $r=4\log(\sqrt{2}+1)$, we obtain
\begin{align}
\tfrac{1}{2} \log(2) \cdot \bE_{\psi} \big [ 1_{\{|\psi| \ge \frac{4 \log( \sqrt{2}+ 1)}{\sigma \norm{\tilde{\bm{\theta}}} }\}} \big ] &+ \tfrac{1}{2}\bE_\psi\left[\log\left\vert\sinh(\tfrac{\sigma\Vert \tilde{\bm{\theta}}\Vert\psi}{2})\right\vert1_{\{|\psi| \geq \frac{4\log(\sqrt{2}+1)}{\sigma \Vert \tilde{\bm{\theta}}\Vert}\}}\right]\nonumber\\
&\geq \tfrac{1}{2} \log(2) \cdot \bE_{\psi} \big [ 1_{\{|\psi| \ge \frac{4 \log( \sqrt{2}+ 1)}{\sigma \norm{\tilde{\bm{\theta}}} } \}} \big ] + \tfrac{1}{2}\bE_\psi\left[\left\vert\tfrac{\sigma \Vert \tilde{\bm{\theta}}\Vert \psi}{4} \right\vert1_{\{|\psi| \geq \frac{4\log(\sqrt{2}+1)}{\sigma \Vert \tilde{\bm{\theta}}\Vert}\}}\right]\nonumber\\
&\geq \tfrac{1}{2} \log(2) \cdot \bE_{\psi}[1_{\{\vert\psi\vert \ge 1\}}] + \tfrac{1}{2} \bE_\psi\left[\left\vert\tfrac{\sigma \Vert \tilde{\bm{\theta}}\Vert \psi}{4} \right\vert1_{\{\vert\psi\vert \geq 1\}}\right] \label{eq:high_noise_blah_111} \\\nonumber
&\geq \left ( \tfrac{1}{2} \log(2) + \frac{\sigma \Vert \tilde{\bm{\theta}}\Vert}{8} \right )\Phi^c(1).
\end{align}
Here \eqref{eq:high_noise_blah_111} follows from $1\geq \frac{4\log(\sqrt{2}+1)}{\sigma \Vert \tilde{\bm{\theta}}\Vert}$. Combining the above inequality with \eqref{eq: high_noise_22} yields the desired result.
\end{proof}
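As a sanity check separate from the proof, the two elementary inequalities used above, $\log\cosh(x)\le \vert x\vert$ for all $x$ and $\vert\sinh(x)\vert \ge e^{\vert x\vert/2}$ for $\vert x\vert \ge 2\log(\sqrt{2}+1)$, can be verified numerically; the snippet below (plain Python, with a grid and tolerance of our choosing) does so:

```python
import math

# Grid-based sanity check (separate from the proof) of the two elementary
# inequalities used above; the grid and tolerance are our choices.
threshold = 2 * math.log(math.sqrt(2) + 1)  # ~ 1.7627

for k in range(-400, 401):
    x = 0.05 * k  # x ranges over [-20, 20]
    assert math.log(math.cosh(x)) <= abs(x) + 1e-12       # log cosh(x) <= |x|
    if abs(x) >= threshold:
        assert abs(math.sinh(x)) >= math.exp(abs(x) / 2)  # |sinh(x)| >= e^{|x|/2}
```

At the threshold itself the margin is strict: $\sinh(2\log(\sqrt{2}+1)) = 2\sqrt{2}$ while $e^{\log(\sqrt{2}+1)} = \sqrt{2}+1$.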
We now turn to lower bounding the second summand on the right-hand side of \eqref{eq:high_ftheta_ktheta}, namely $f(\rho \bm{\mu})-f(\bm{\theta^*})$.
\begin{lemma} \label{lem: logistic_case_1}
Provided that $\vert \rho\sigma^2-2 \vert> 1$, the following estimate holds
\begin{equation}\label{eq:high_lemhrho}
f(\rho \bm{\mu})-f(\bm{\theta^*}) \geq \frac{\Vert \bm{\mu}\Vert^4}{4\sigma^4\sqrt{\pi}}\exp\left(-\frac{\Vert \bm{\mu} \Vert^2}{2\sigma^2}\right) \Phi^{c}\left( \frac{7\sqrt{2}\Vert \bm{\mu} \Vert}{\sigma}\right).
\end{equation}
\end{lemma}
\begin{proof} Define the function
\[
g(\rho):=\bE_{\bm{\xi}}\left[\log\left(1+\exp(-\rho \bm{\mu}^T\bm{\xi})\right) \right], \quad \bm{\xi} \sim N(\bm{\mu},\sigma^2I_d).
\]
By Lemma~\ref{lem:minimizer_logistic}, we know that $g$ is a convex function with a unique minimizer at $\rho^*:=\frac{2}{\sigma^2}$. Observe that $f(\rho \bm{\mu})-f(\bm{\theta}^*) = g(\rho)-g(\rho^*)$; hence, in order to prove \eqref{eq:high_lemhrho}, we instead aim to bound this difference of values of $g$. From \eqref{eq:fact_affine}, we have $\bm{\mu}^T\bm{\xi} \sim N(\Vert \bm{\mu} \Vert^2,\sigma^2\Vert \bm{\mu} \Vert^2)$. It follows that
\[
4g''(\rho)=\bE\left(\frac{(\bm{\mu}^T\bm{\xi})^2}{\cosh(\frac{\rho}{2} \bm{\mu}^T\bm{\xi})^2} \right)=\frac{1}{\sigma\Vert \bm{\mu}\Vert\sqrt{2\pi }}\int_{-\infty}^{\infty}\frac{z^2}{\cosh^2(\frac{\rho z}{2})}\exp\left(-\frac{(z-\Vert \bm{\mu} \Vert^2)^2}{2\sigma^2\Vert \bm{\mu} \Vert^2}\right)dz.
\]
Upper bounding $\cosh^2(\frac{\rho z}{2})$ by $\exp(\vert \rho z\vert)$, we next obtain
\begin{align*}
4g''(\rho)&\geq \frac{1}{\sigma\Vert \bm{\mu}\Vert\sqrt{2\pi}} \int_{-\infty}^{\infty} z^2\exp\left(-\vert \rho z \vert \right)\exp\left(-\frac{(z-\Vert \bm{\mu} \Vert^2)^2}{2\sigma^2\Vert \bm{\mu}\Vert^2}\right)dz & \\ &=\frac{1}{\sigma\Vert \bm{\mu}\Vert\sqrt{2\pi }} \int_{-\infty}^{\infty} z^2\exp\left(-\frac{(z-\Vert \bm{\mu} \Vert^2)^2+2\sigma^2\Vert \bm{\mu}\Vert^2\vert\rho z\vert}{2\sigma^2\Vert \bm{\mu}\Vert^2}\right)dz ,&\\
&=\frac{1}{\sigma\Vert \bm{\mu}\Vert\sqrt{2\pi }}\exp\left(-\frac{\Vert \bm{\mu} \Vert^2}{2\sigma^2} \right) \int_{-\infty}^{\infty} z^2\exp\left(-\frac{z^2-2\Vert \bm{\mu} \Vert^2 z+2\sigma^2\Vert \bm{\mu}\Vert^2\vert\rho z\vert}{2\sigma^2\Vert \bm{\mu}\Vert^2}\right)dz
\end{align*}
One can easily verify that whenever $\vert z \vert \geq 2\Vert \bm{\mu} \Vert^2 +2\sigma^2\Vert \bm{\mu}\Vert^2 \vert \rho \vert$, we have
\begin{equation*}
\exp\left(-\frac{z^2-2\Vert \bm{\mu} \Vert^2 z+2\sigma^2\Vert \bm{\mu}\Vert^2\vert\rho z\vert}{2\sigma^2\Vert \bm{\mu}\Vert^2}\right) \geq \exp\left(-\frac{z^2}{\sigma^2\Vert \bm{\mu}\Vert^2} \right).
\end{equation*}
Therefore, we have
\begin{equation*}
4g''(\rho) \geq \frac{2}{\sigma\Vert \bm{\mu}\Vert\sqrt{2\pi }}\exp\left(-\frac{\Vert \bm{\mu} \Vert^2}{2\sigma^2} \right) \int_{2\Vert \bm{\mu} \Vert^2 +2\sigma^2\Vert \bm{\mu}\Vert^2 \vert \rho \vert}^{\infty} z^2\exp\left(-\frac{z^2}{\sigma^2\Vert \bm{\mu}\Vert^2} \right)dz.
\end{equation*}
The change of variables $z \mapsto \frac{\sigma\Vert\bm{\mu}\Vert}{\sqrt{2}} z$ yields
\begin{align*}
g''(\rho) &\geq \frac{\sigma^2\Vert \bm{\mu}\Vert^2}{8\sqrt{\pi}}\exp\left(-\frac{\Vert \bm{\mu} \Vert^2}{2\sigma^2}\right) \int_{ 2\sqrt{2}(1+2\vert \frac{\rho}{\rho^*}\vert)\frac{\Vert \bm{\mu} \Vert}{\sigma}}^{\infty} z^2\exp\left(-\frac{z^2}{2}\right)dz\\
&\geq \frac{\Vert \bm{\mu} \Vert^4}{\sqrt{\pi}}\exp\left(-\frac{\Vert \bm{\mu} \Vert^2}{2\sigma^2}\right) \int_{ 2\sqrt{2}(1+2\vert \frac{\rho}{\rho^*}\vert)\frac{\Vert \bm{\mu} \Vert}{\sigma}}^{\infty} \exp\left(-\frac{z^2}{2}\right)dz,
\end{align*}
where the last inequality holds because $z^2 \ge \frac{8\norm{\bm{\mu}}^2}{\sigma^2}$ on the interval $[2\sqrt{2}(1+2 |\tfrac{\rho}{\rho^*}|) \tfrac{\norm{\bm{\mu}}}{\sigma}, \infty)$. Let $r(\rho) :=2\sqrt{2}(1+2\vert \frac{\rho}{\rho^*}\vert)$.
Continuing, we get \begin{equation*}
g''(\rho) \geq\frac{\Vert \bm{\mu} \Vert^4}{\sqrt{\pi}}\exp\left(-\frac{\Vert \bm{\mu} \Vert^2}{2\sigma^2}\right) \Phi^{c}\left( \frac{r(\rho)\Vert \bm{\mu} \Vert}{\sigma}\right).
\end{equation*}
The proof now follows from Lemma \ref{lem:high_tech_not_appendix}.
\end{proof}
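The fact used above via Lemma~\ref{lem:minimizer_logistic}, namely that $g$ is stationary at $\rho^*=2/\sigma^2$, can be sanity-checked by numerical integration. The sketch below (plain Python, a simple Riemann sum; the parameter instances are our choices, and this is an illustration rather than a proof) approximates $g'(\rho)=\bE[-\bm{\mu}^T\bm{\xi}/(1+\exp(\rho\,\bm{\mu}^T\bm{\xi}))]$:

```python
import math

# Numerical sanity check (not a proof) that g(rho) = E[log(1 + exp(-rho*s))],
# with s ~ N(m, v), m = ||mu||^2, v = sigma^2 ||mu||^2, is stationary at
# rho* = 2/sigma^2.  g'(rho) = E[-s/(1 + exp(rho*s))], approximated by a
# Riemann sum over +/- 10 standard deviations.
def g_prime(rho, mu_norm, sigma, h=1e-3):
    m, sd = mu_norm ** 2, sigma * mu_norm
    total, s = 0.0, m - 10 * sd
    while s < m + 10 * sd:
        dens = math.exp(-(s - m) ** 2 / (2 * sd * sd)) / (sd * math.sqrt(2 * math.pi))
        total += (-s / (1.0 + math.exp(rho * s))) * dens * h
        s += h
    return total

for mu_norm, sigma in [(1.0, 1.0), (0.5, 2.0)]:
    rho_star = 2.0 / sigma ** 2
    assert abs(g_prime(rho_star, mu_norm, sigma)) < 1e-6
    # g' changes sign around rho*, consistent with convexity:
    assert g_prime(rho_star - 0.5, mu_norm, sigma) < 0
    assert g_prime(rho_star + 0.5, mu_norm, sigma) > 0
```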
Let us recall that we define the target set $C$ by
\begin{equation}\label{eq:C_high_proof}
C=\left\{\bm{\theta}: \vert \rho\sigma^2-2\vert < 1 \text{ and } \sigma \Vert \tilde{\bm{\theta}} \Vert \leq c \right\},
\end{equation}
where the constant $c$ is \begin{equation}\label{eq:c_high_proof}
c=8\left(\frac{\tfrac{3}{2}\left(\frac{1}{0.16^2}+\sqrt{\frac{2}{\pi}}\frac{1}{0.16} \right)+1}{\Phi^c(1)}-\tfrac{1}{2}\log(2)\right) \approx 3379.36.
\end{equation}
The next proposition establishes the desired bound $\bE[\tau_m^C] \lesssim m.$ Note that the set $C$ is compact. This implies that the value of the test function at the next iteration increases, in expectation, by at most a fixed constant whenever the current iterate is inside $C$. This facilitates establishing bounds for $\bE[\tau_m^{C}]$, as we will see in the proof of the following proposition.
\begin{proposition}\label{prp:high_V} Let $\bm{\theta}_0=\bm{0}$ and suppose that $0.16\Vert \bm{\mu} \Vert \leq\sigma$. Consider the set $C$ defined in \eqref{eq:C_high_proof}.
Let the step-size $\alpha > 0$ satisfy
\begin{equation}\label{eq:high_noise_alpha_assumption}
\alpha < \frac{\min\{1,B\}}{\Vert \bm{\mu} \Vert^2+d\sigma^2},
\end{equation}
where the constant $B$ is
\begin{equation}
B:=\tfrac{\Vert \bm{\mu}\Vert^4}{4\sigma^4\sqrt{\pi}}\exp\left(-\tfrac{\Vert \bm{\mu} \Vert^2}{2\sigma^2}\right) \Phi^{c}\left( \tfrac{7\sqrt{2}\Vert \bm{\mu} \Vert}{\sigma}\right).
\end{equation}
Then the following is true
\begin{equation}\label{eq:high_Tm}
\bE\left[\tau_m^{C}\right]\leq
\frac{4\Vert \bm{\mu} \Vert^2}{\alpha\sigma^4\min\{ 1,B\}}+(m-1)\left(2+\frac{\Vert \bm{\mu} \Vert^2+c^2\sigma^2}{\alpha \sigma^4\min\{1,B\}}\right).
\end{equation}
\end{proposition}
\begin{proof}[Proof of Proposition \ref{prp:high_V}] To simplify the subsequent argument, we define the test function $\overline{V}$ by
\[
\overline{V}(\bm{\theta})=\frac{\Vert \bm{\theta}-\bm{\theta}^*\Vert^2}{\alpha\min\{1,B\}}.
\]
We first establish the drift equation \eqref{eq:st_VoutC} for the test function $\overline{V}$ and the target set \eqref{eq:C_high_proof}. To do so, we first show that
\begin{equation}\label{eq:high_thetak}
f(\bm{\theta}_{k-1})-f(\bm{\theta}^*) \geq \min\{1,B\}1_{\{\bm{\theta}_{k-1}\not\in {C}\}}.
\end{equation}
Clearly \eqref{eq:high_thetak} holds when $\bm{\theta}_{k-1}\in {C}$ so suppose that $\bm{\theta}_{k-1}\not\in {C}$. Therefore, either we have $\vert\rho_{k-1}\sigma^2-2\vert>1$ or $\sigma \Vert \tilde{\bm{\theta}}_{k-1}\Vert \geq c$. If $\vert\rho_{k-1}\sigma^2-2\vert>1$, then Lemma \ref{lem: logistic_case_1} yields that $ f(\rho_{k-1}\bm{\mu})-f(\bm{\theta}^*)\geq B$. Then we write that
\begin{equation}
f(\bm{\theta}_{k-1})-f(\bm{\theta}^*)=\underbrace{f(\bm{\theta}_{k-1})-f(\rho_{k-1}\bm{\mu})}_{\geq 0 \text{ by Lemma~\ref{lem: logistic_case_2} }}+f(\rho_{k-1}\bm{\mu})-f(\bm{\theta}^*) \geq B,
\end{equation}
establishing \eqref{eq:high_thetak}. So assume $\vert \rho_{k-1} \sigma^2-2\vert<1$ and hence $\sigma \Vert \tilde{\bm{\theta}}_{k-1}\Vert \geq c$. We obtain that
\begin{align}
\bE_{\bm{\xi}_k} \left[\log\left( \frac{1+\exp(-\bm{\theta}_{k-1}^T\bm{\xi}_k)}{1+\exp(-\rho_{k-1} \bm{\mu}^T\bm{\xi}_k)}\right)\right]&\geq \left ( \tfrac{1}{2} \log(2) + \tfrac{\sigma \Vert \tilde{\bm{\theta}}_{k-1}\Vert}{8} \right )\Phi^c(1)-\left | \frac{\rho_{k-1}}{4}\right | \left(\Vert \bm{\mu}\Vert^2+\sigma \Vert \bm{\mu} \Vert \sqrt{\frac{2}{\pi}} \right)\nonumber\\&\geq \left ( \tfrac{1}{2} \log(2) + \tfrac{\sigma \Vert \tilde{\bm{\theta}}_{k-1}\Vert}{8} \right )\Phi^c(1) -\frac{3}{2}\left(\frac{1}{0.16^2}+\frac{1}{0.16} \sqrt{\frac{2}{\pi}} \right)\geq 1\label{eq:high_noise_lastprop_101},
\end{align}
where the first inequality follows from Lemma~\ref{lem: logistic_case_2} (Note: $\bm{\xi}_k$ and $\bm{\theta}_{k-1}$ are independent). In the second inequality, we used that $\rho_{k-1}<\tfrac{3}{\sigma^2}$ and also the assumption that $0.16\Vert \bm{\mu} \Vert \leq \sigma$. The last inequality follows from the assumption that $\sigma \Vert\tilde{\bm{\theta}}_{k-1}\Vert\geq c$. We therefore have
\[
f(\bm{\theta}_{k-1})-f(\bm{\theta}^*)=\underbrace{f(\bm{\theta}_{k-1})-f(\rho_{k-1}\bm{\mu})}_{\geq 1 \text{ by \eqref{eq:high_noise_lastprop_101}}}+\underbrace{f(\rho_{k-1}\bm{\mu})-f(\bm{\theta}^*)}_{\geq 0 \text{ by Lemma~\ref{lem: logistic_case_1}}} \geq 1.
\]
We have thus shown \eqref{eq:high_thetak}. By Lemma~\ref{lem:technical_convex_bound}, we have
\begin{align}
f(\bm{\theta}_{k-1})-f(\bm{\theta}^*)&\leq \tfrac{1}{2\alpha}\left(\Vert \bm{\theta}_{k-1}-\bm{\theta}^* \Vert^2-\bE\left[\Vert \bm{\theta}_{k}-\bm{\theta}^* \Vert^2\, |\mathcal{F}_{k-1}\right] \right)+\tfrac{\alpha}{2}\left (\norm{\bm{\mu}}^2 + d \sigma^2 \right ) \nonumber\\ &\leq \tfrac{1}{2\alpha}\left(\Vert \bm{\theta}_{k-1}-\bm{\theta}^* \Vert^2-\bE\left[\Vert \bm{\theta}_{k}-\bm{\theta}^* \Vert^2\, |\mathcal{F}_{k-1}\right] \right)+\tfrac{1}{2}\min\{1,B\}, \label{eq:high_decrease_v2}
\end{align}
where in the last inequality we used the assumption on $\alpha$, namely \eqref{eq:high_noise_alpha_assumption}. By \eqref{eq:high_thetak}, we obtain that
\[
\min\{1,B\}\left(2\cdot 1_{\{\bm{\theta}_{k-1}\notin C\}}-1\right)\leq \tfrac{1}{\alpha}\left(\Vert \bm{\theta}_{k-1}-\bm{\theta}^* \Vert^2-\bE\left[\Vert \bm{\theta}_{k}-\bm{\theta}^* \Vert^2\, |\mathcal{F}_{k-1}\right] \right).
\]
A simple manipulation then yields
\begin{align}
\bE\left[\overline{V}(\bm{\theta}_k)|\mathcal{F}_{k-1}\right] &\leq \overline{V}(\bm{\theta}_{k-1})-2\cdot 1_{\{\bm{\theta}_{k-1}\not\in {C} \}}+1 \nonumber \\&=\overline{V}(\bm{\theta}_{k-1})-1+2\cdot 1_{\{\bm{\theta}_{k-1}\in {C} \}}. \label{eq:high_noise_drift_eq}
\end{align}
From Lemma \ref{lem:drift_lemma_appendix} in the Appendix, we thus obtain that
\begin{equation}\label{eq:high_T_v2}
\bE\left[\tau_m^{C}\right]\leq \overline{V}(\bm{0})+(m-1)\left(2+\sup_{\bm{\theta}\in {C}}\overline{V}(\bm{\theta})\right).
\end{equation}
It remains to upper bound the quantity $\sup_{\bm{\theta}\in {C}}\overline{V}(\bm{\theta})$. For any $\bm{\theta} \in C$, it holds that $\vert \rho\sigma^2-2\vert<1$ and $\sigma\Vert\tilde{\bm{\theta}}\Vert\leq c$. From this, we obtain that
\begin{align*}
\Vert \bm{\theta}-\bm{\theta}^*\Vert^2&=\left( \rho \sigma^2-\rho^*\sigma^2 \right )^2\frac{\Vert \bm{\mu} \Vert^2}{\sigma^4}+\Vert \tilde{\bm{\theta}}\Vert^2 \\&\leq \frac{\Vert \bm{\mu} \Vert^2}{\sigma^4}+\frac{c^2}{\sigma^2}.
\end{align*}
We therefore have
\begin{equation}
\sup_{\bm{\theta}\in {C}}\overline{V}(\bm{\theta})\leq \frac{\Vert \bm{\mu} \Vert^2+c^2\sigma^2}{\alpha \sigma^4\min\{1,B\}}.
\end{equation}
The proof is complete.
\end{proof}
We are now ready to prove Theorem~\ref{thm.b}.
\begin{proof}[Proof of Theorem \ref{thm.b}]
It follows from Proposition~\ref{prp:high_V} that there exists some constant $r>0$ such that $\bE\left[\tau_m^{C}\right]\leq rm$ for all $m\geq 1$. Combining Lemma~\ref{lem:st_ETC} and Lemma~\ref{lem: high_noise_delta}, we obtain that
\[
\bE[T]\leq \bE[T_C]\leq \sum_{m=1}^{\infty}\bE[\tau_m^{C}]\Phi^{c}\left(\frac{-\sigma}{\Vert \bm{\mu} \Vert}\right)^{m-1}.
\]
Therefore,
\[
\bE[T]\leq \bE[T_C]\leq r\sum_{m=1}^{\infty}m\Phi^{c}\left(\frac{-\sigma}{\Vert \bm{\mu} \Vert}\right)^{m-1}<+\infty.
\]
The proof is complete.
\end{proof}
| {
"timestamp": "2020-03-24T01:30:27",
"yymm": "2003",
"arxiv_id": "2003.10312",
"language": "en",
"url": "https://arxiv.org/abs/2003.10312",
"abstract": "We propose a new, simple, and computationally inexpensive termination test for constant step-size stochastic gradient descent (SGD) applied to binary classification on the logistic and hinge loss with homogeneous linear predictors. Our theoretical results support the effectiveness of our stopping criterion when the data is Gaussian distributed. This presence of noise allows for the possibility of non-separable data. We show that our test terminates in a finite number of iterations and when the noise in the data is not too large, the expected classifier at termination nearly minimizes the probability of misclassification. Finally, numerical experiments indicate for both real and synthetic data sets that our termination test exhibits a good degree of predictability on accuracy and running time.",
"subjects": "Optimization and Control (math.OC); Machine Learning (stat.ML)",
"title": "A termination criterion for stochastic gradient descent for binary classification",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9865717480217662,
"lm_q2_score": 0.8152324983301567,
"lm_q1q2_score": 0.8042853509217343
} |
https://arxiv.org/abs/1604.00699 | Anticommutator Norm Formula for Projection Operators | We prove that for any two projection operators $f,g$ on Hilbert space, their anticommutator norm is given by the formula \[\|fg + gf\| = \|fg\| + \|fg\|^2.\] The result demonstrates an interesting contrast between the commutator and anticommutator of two projection operators on Hilbert space. Specifically, the norm of the anticommutator $\|fg + gf\|$ is a simple quadratic function of the norm $\|fg\|$ while the commutator norm $\|fg - gf\|$ is not a function of $\|fg\|$. Nevertheless, the result gives the following bounds that are functions of $\|fg\|$ on the commutator norm: $\|fg\| - \|fg\|^2 \le \|fg - gf\| \le \|fg\|$. | {\Large{\section{\bf The Main Result}}}
The main result of this paper is proving the following norm formula.
\medskip
\begin{thm}\label{fggf}
For any two projection operators $f,g$ on Hilbert space,
\begin{align}
\|fg + gf\| \ &= \ \|fg\| + \|fg\|^2.
\end{align}
\end{thm}
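As a quick numerical sanity check (not part of the proof), the formula can be verified in the smallest nontrivial case: take $f$ to be the projection onto the first coordinate axis of $\mathbb{R}^2$ and $g$ the projection onto the line spanned by $(\cos t,\sin t)$, so that $\|fg\|=|\cos t|$. The sketch below (plain Python, no external libraries; the test family is our choice) computes $\|fg+gf\|$ from the eigenvalues of the symmetric $2\times2$ matrix $fg+gf$:

```python
import math, random

# Sanity check of Theorem (fggf) in dimension 2: with c = cos t, s = sin t,
# one has fg + gf = [[2c^2, cs], [cs, 0]] and ||fg|| = |c|.
def specnorm_sym2(a, b, d):
    # largest |eigenvalue| of the symmetric matrix [[a, b], [b, d]]
    mean, disc = (a + d) / 2, math.hypot((a - d) / 2, b)
    return max(abs(mean + disc), abs(mean - disc))

random.seed(0)
for _ in range(1000):
    t = random.uniform(0, 2 * math.pi)
    c, s = math.cos(t), math.sin(t)
    lhs = specnorm_sym2(2 * c * c, c * s, 0.0)
    norm_fg = abs(c)
    assert abs(lhs - (norm_fg + norm_fg ** 2)) < 1e-9
```

In this family the identity is visible in closed form: the eigenvalues of $fg+gf$ are $c^2 \pm |c|$, so the spectral norm is $|c| + c^2$.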
In particular, the anticommutator norm of projection operators is a simple quadratic function of the norm of their product. This is quite different from the commutator $fg - gf$ of projections, since its norm is not a function of the norm of $fg$ (see the remark below), nor is $\|fg\|$ a function of $\|fg-gf\|$. In view of Theorem \ref{fggf} we can nevertheless give bounds on the commutator norm that are functions of the norm $\|fg\|$. In this connection, the above theorem has the following consequence.
\begin{cor}\label{cor}
For any two projection operators $f,g$ on Hilbert space, one has
\[
\|fg\| - \|fg\|^2 \ \le \ \|fg - gf\| \ \le\ \|fg\| .
\]
\end{cor}
\begin{proof}
By Theorem \ref{fggf} we have
\[
\|fg-gf\| = \|2gf - (fg+gf)\| \ge 2\|gf\| - \|fg+gf\| \ge \|fg\| - \|fg\|^2.
\]
Lemma \ref{lemma} below gives the inequality $\|fg - gf\| \le \|fg\|$.
\end{proof}
\medskip
(Lemma \ref{lemma} and its proof are given at the end of the paper.)
\medskip
We made an application of Theorem \ref{fggf} in \ccite{SWnearlyorthog} in order to obtain sharp upper bound estimates for projection operators on Hilbert space that are nearly orthogonal to their (unitary) symmetries. Specifically, Theorem \ref{fggf} has led us to obtain the relatively large bound of $0.455$ that would guarantee that if a projection operator $e$ and a Hermitian unitary operator $w$ on Hilbert space satisfy $\|ewe\| < 0.455$, then a projection operator $q$ exists such that\footnote{The condition $qwq=0$, of course, just means that $q$ is orthogonal to its symmetric image $wqw^*$ under $w$. And so the condition that $\|ewe\|$ is ``small" means that $e$ is nearly orthogonal to its symmetry.} $qwq=0$ and $\|e-q\| \le \frac12\|ewe\| + 4\|ewe\|^2$. (Further, $q$ lies in the C*-subalgebra of $\mathcal B(\mathcal H)$ generated by $e$ and $wew^*$.)
\medskip
\begin{rmk}
It is not hard to see that the commutator norm of projections $\|fg - gf\|$ is generally not a function of the norm $\|fg\|$. For instance, if $f=g\not=0$, then $\|fg\|=1$ and their commutator is zero, while if $p,q$ are the two generating projections of the universal C*-algebra generated by two projections, then $\|pq\|=1$ and $\|pq-qp\| = \frac12$. Conversely, neither is the norm $\|fg \|$ a function of the norm $\|fg-gf\|$, since for the 2 by 2 matrix projections
$a = [\smallmatrix 1 & 0 \\ 0 & 0 \endsmallmatrix], \
b = \frac12[\smallmatrix 1 & 1 \\ 1 & 1 \endsmallmatrix]$, one has $\|ab-ba\| = \frac12 = \|pq-qp\|$ while $\|ab\| = \frac1{\sqrt2} \not= 1 = \|pq\|$.
\end{rmk}
\begin{rmk}
We caution that it is not enough to check equalities such as that in Theorem \ref{fggf} for 2 by 2 matrices over the complex numbers and expect that they generally hold for all projections. For example, one can check that the equation
\begin{equation}\label{2by2}
\|fg - gf\|^2 = \|fg\|^2(1-\|fg\|^2)
\end{equation}
holds for all projections in $M_2(\mathbb C)$. One can, however, give simple examples of 4 by 4 projections for which equation \eqref{2by2} does not hold. Equation \eqref{2by2} also does not hold for the two projections $p,q$ of the universal C*-algebra generated by two projections since they do not commute and $\|pq\|=1$.
\end{rmk}
\noindent{\bf Acknowledgments.}
This research was partly supported by a grant from the Natural Science and Engineering Council of Canada.
\bigskip
{\Large{\section{\bf Proof of $\|fg+gf\| \le \|fg\| + \|fg\|^2$}}}
\medskip
We shall use the following lemma.
\medskip
\begin{lem}\label{fg}
For any two projections $f,g$ and $m\ge1$ we have $\|(fg)^m\| \le \|fg\|^{2m-1}$.
\end{lem}
\begin{proof}
By induction, one checks the equality $(fg)^m = (fgf)^{m-1} (fg)$ (for $m\ge1$). Using $\|fgf\| = \|fg\|^2$, we have
\[
\|(fg)^m\| \le \|(fgf)^{m-1}\| \|fg\| \le \|fg\|^{2m-2} \|fg\| = \|fg\|^{2m-1}
\]
as required.
\end{proof}
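The bound of this lemma can be illustrated numerically in a convenient $2\times2$ test family (our choice, not from the paper): with $f$ the projection onto $e_1$ and $g$ the projection onto the line spanned by $(\cos t, \sin t)$, one has $fg = \left[\begin{smallmatrix} c^2 & cs \\ 0 & 0 \end{smallmatrix}\right]$, and the bound in fact holds with equality there, since $(fg)^2 = c^2\, fg$:

```python
import math, random

# Numerical illustration (2x2 test family of our choosing) of the bound
# ||(fg)^m|| <= ||fg||^(2m-1); here fg = [[c^2, cs], [0, 0]], ||fg|| = |c|.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

random.seed(1)
for _ in range(200):
    t = random.uniform(0, 2 * math.pi)
    c, s = math.cos(t), math.sin(t)
    fg = [[c * c, c * s], [0.0, 0.0]]
    P = fg
    for m in range(1, 7):
        if m > 1:
            P = matmul(P, fg)
        # P is rank one of the form [[x, y], [0, 0]]; its norm is hypot(x, y)
        assert math.hypot(P[0][0], P[0][1]) <= abs(c) ** (2 * m - 1) + 1e-9
```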
We begin by observing and establishing the following formula for the powers of the anticommutator in terms of polynomials in the operators $fg, gf,$ $fgf$, and $gfg$:
\begin{equation}\label{powersplus}
(fg + gf)^n = P_n(fg) + P_n(gf) + Q_n(fgf) + Q_n(gfg)
\end{equation}
where $P_n, Q_n$ ($n=1,2,\dots$) are polynomials given recursively according to the dynamics
\begin{align}\label{polyPQ}
P_{n+1}(x) &= xP_n(x) + xQ_n(x), \\
Q_{n+1}(x) &= P_n(x) + xQ_n(x) \notag
\end{align}
with initial data $P_1(x) = x, \ Q_1(x) = 0$.\footnote{Interestingly, these polynomials turn out to be similar to Fibonacci polynomials as they will be given in very similar closed forms in terms of $\sqrt x$.} The equation \eqref{powersplus} can be checked by induction by making strong use of the fact that $f,g$ are projections. In order to find these polynomials in explicit form we express \eqref{polyPQ} in matrix form
\[
\bmatrix P_{n+1} \\ Q_{n+1} \endbmatrix =
\bmatrix x & x \\ 1 & x \endbmatrix
\bmatrix P_{n} \\ Q_{n} \endbmatrix.
\]
In order to telescope this expression we diagonalize the matrix here as follows:
\[
\bmatrix x & x \\ 1 & x \endbmatrix =
S \bmatrix x+\sqrt{x} & 0 \\ 0 & x-\sqrt{x} \endbmatrix S^{-1}, \qquad
S := \bmatrix \sqrt{x} & -\sqrt{x} \\ 1 & 1 \endbmatrix
\]
(which is easily checked).
Therefore, we calculate the polynomials as follows
\begin{align*}
\bmatrix P_{n+1} \\ Q_{n+1} \endbmatrix
&=
\bmatrix x & x \\ 1 & x \endbmatrix^n
\bmatrix P_1 \\ Q_1 \endbmatrix
=
S \bmatrix (x+\sqrt{x})^n & 0 \\ 0 & (x-\sqrt{x})^n \endbmatrix S^{-1}
\bmatrix x \\ 0 \endbmatrix
\\ \\
&=
\frac{1}{2\sqrt{x}}
\bmatrix \sqrt{x} & -\sqrt{x} \\ 1 & 1 \endbmatrix
\bmatrix (x+\sqrt{x})^n & 0 \\ 0 & (x-\sqrt{x})^n \endbmatrix
\bmatrix 1 & \sqrt{x} \\ -1 & \sqrt{x} \endbmatrix
\bmatrix x \\ 0 \endbmatrix
\\ \\
&=
\frac{1}{2\sqrt{x}}
\bmatrix \sqrt{x} & -\sqrt{x} \\ 1 & 1 \endbmatrix
\bmatrix (x+\sqrt{x})^n & 0 \\ 0 & (x-\sqrt{x})^n \endbmatrix
\bmatrix x \\ -x \endbmatrix
\\ \\
&=
\frac{1}{2\sqrt{x}}
\bmatrix \sqrt{x} & -\sqrt{x} \\ 1 & 1 \endbmatrix
\bmatrix x(x+\sqrt{x})^n \\ -x(x-\sqrt{x})^n \endbmatrix
\end{align*}
yielding the closed forms
\begin{align*}
P_{n+1}(x) &= \frac{x}2 \Big[(x+\sqrt{x})^n + (x-\sqrt{x})^n\Big],
\\
Q_{n+1}(x) &= \frac{\sqrt{x}}{2} \Big[(x+\sqrt{x})^n - (x-\sqrt{x})^n\Big].
\end{align*}
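These closed forms can be checked against the recursion numerically (a quick sanity check in plain Python, with sample points of our choosing, supplementing the algebra above):

```python
import math

# Numerical check of the closed forms against the recursion
# P_{n+1} = x(P_n + Q_n), Q_{n+1} = P_n + x Q_n, with P_1(x) = x, Q_1(x) = 0.
def P(n, x):  # closed form above, valid for n >= 1
    r = math.sqrt(x)
    return x / 2 * ((x + r) ** (n - 1) + (x - r) ** (n - 1))

def Q(n, x):
    r = math.sqrt(x)
    return r / 2 * ((x + r) ** (n - 1) - (x - r) ** (n - 1))

for x in (0.3, 1.0, 1.7):
    assert abs(P(1, x) - x) < 1e-12 and abs(Q(1, x)) < 1e-12  # initial data
    for n in range(1, 9):
        assert abs(P(n + 1, x) - x * (P(n, x) + Q(n, x))) < 1e-8
        assert abs(Q(n + 1, x) - (P(n, x) + x * Q(n, x))) < 1e-8
```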
Next, we express these using their binomial expansions:
\begin{align*}
(x+\sqrt{x})^n &= \sum_{j=0}^n {n\choose j} x^j x^{\frac12(n-j)}
\\
(x-\sqrt{x})^n &= \sum_{j=0}^n {n\choose j} x^j (-1)^{n-j} x^{\frac12(n-j)}
\end{align*}
Using the notation $\delta_2^k = \frac12 (1+(-1)^k)$ which is 1 when $k$ is even and 0 when $k$ is odd, we can write
\[
P_{n+1}(x) = \frac12 x \sum_{j=0}^n {n\choose j} x^j (1 + (-1)^{n-j}) x^{\frac12(n-j)}
=
x \sum_{j=0}^n {n\choose j} x^j \delta_2^{n-j} x^{\frac12(n-j)}.
\]
Let us choose odd $n = 2N-1$ so that
\begin{align*}
P_{2N}(x) &=
x \sum_{j=0}^{2N-1} {2N-1\choose j} x^j \delta_2^{2N-1-j} x^{\frac12(2N-1-j)}.
\\
&= \sum_{j=0}^{2N-1} {2N-1\choose j} \delta_2^{j-1} x^{N+1 + \frac12(j-1)}
\intertext{Now put $j = 2\ell-1$ where $\ell=1,2,\dots, N$ to get}
P_{2N}(x)
&= \sum_{\ell=1}^{N} {2N-1\choose 2\ell-1} x^{N+\ell}.
\end{align*}
Now
we can compute the norm estimate at $fg$ (or $gf$) as follows:
\[
\|P_{2N}(fg)\|
\le \sum_{\ell=1}^{N} {2N-1\choose 2\ell-1} \|(fg)^{N+\ell}\|.
\]
Here we use the inequality $\|(fg)^m\| \le \|fg\|^{2m-1}$ from Lemma \ref{fg} to get
\[
\|P_{2N}(fg)\|
\le \sum_{\ell=1}^{N} {2N-1\choose 2\ell-1} \|fg\|^{2N+2\ell - 1}
= \|fg\|^{2N-1} \sum_{\ell=1}^{N} {2N-1\choose 2\ell-1} \|fg\|^{2\ell}
\]
we note that the same bound holds for $\|P_{2N}(gf)\|$, since $P_{2N}(gf) = P_{2N}(fg)^*$. At this juncture we make use of the identity
\[
A_N(a) := \sum_{\ell=1}^{N} {2N-1\choose 2\ell-1} a^{2\ell}
= \frac{a}{2}\big[ (1+a)^{2N-1} - (1-a)^{2N-1} \big]
\]
valid for all $a \ge 0$. We can then write
\[
\|P_{2N}(fg)\| \le \|fg\|^{2N-1} A_N(\|fg\|)
\]
and we obtain
\[
\|P_{2N}(fg)\| + \|P_{2N}(gf)\| \ \le \ 2 \|fg\|^{2N-1} A_N(\|fg\|).
\]
Similarly we work out the norms $\|Q_{2N}(fgf)\|$ and $\|Q_{2N}(gfg)\|$.
\[
Q_{n+1}(x) = \frac{\sqrt{x}}{2} \sum_{j=0}^n {n\choose j} x^j (1- (-1)^{n-j}) x^{\frac12(n-j)}
= \sqrt{x} \sum_{j=0}^n {n\choose j} x^j \delta_2^{n-j-1} x^{\frac12(n-j)}
\]
and again inserting $n = 2N-1$:
\begin{align*}
Q_{2N}(x)
&= \sqrt{x} \sum_{j=0}^{2N-1} {2N-1\choose j} x^j \delta_2^{2N-1-j-1} x^{\frac12(2N-1-j)}
\\
&= \sum_{j=0}^{2N-1} {2N-1\choose j} \delta_2^{j} x^{N + \frac{j}2}
\intertext{put $j = 2\ell$ where $\ell = 0, 1, 2, \dots, N-1$:}
Q_{2N}(x)
&= \sum_{\ell=0}^{N-1} {2N-1\choose 2\ell} x^{N + \ell}.
\end{align*}
The norm becomes (using $\|(fgf)^m\| \le \|fg\|^{2m}$)
\[
\|Q_{2N}(fgf)\|
\le \sum_{\ell=0}^{N-1} {2N-1\choose 2\ell} \|(fgf)^{N + \ell}\|
\le \sum_{\ell=0}^{N-1} {2N-1\choose 2\ell} \|fg\|^{2N + 2\ell}
\]
or
\[
\|Q_{2N}(fgf)\|
\le \|fg\|^{2N} \sum_{\ell=0}^{N-1} {2N-1\choose 2\ell} \|fg\|^{2\ell}
= \|fg\|^{2N} B_N(\|fg\|)
\]
where we use the identity
\[
B_N(a) := \sum_{\ell=0}^{N-1} {2N-1\choose 2\ell} a^{2\ell}
= \frac{1}{2}\big[ (1+a)^{2N-1} + (1-a)^{2N-1} \big]
\]
valid for all $a \ge 0$.
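Both binomial identities, for $A_N$ and for $B_N$, can also be checked directly (a quick numerical verification in plain Python; the sample values are our choices):

```python
import math

# Direct numerical check of the two binomial identities for A_N and B_N:
#   A_N(a) = sum_{l=1}^{N}   C(2N-1, 2l-1) a^{2l} = (a/2)[(1+a)^{2N-1} - (1-a)^{2N-1}]
#   B_N(a) = sum_{l=0}^{N-1} C(2N-1, 2l)   a^{2l} = (1/2)[(1+a)^{2N-1} + (1-a)^{2N-1}]
for N in range(1, 8):
    for a in (0.0, 0.25, 0.5, 1.0):
        A = sum(math.comb(2 * N - 1, 2 * l - 1) * a ** (2 * l) for l in range(1, N + 1))
        B = sum(math.comb(2 * N - 1, 2 * l) * a ** (2 * l) for l in range(N))
        assert abs(A - a / 2 * ((1 + a) ** (2 * N - 1) - (1 - a) ** (2 * N - 1))) < 1e-9
        assert abs(B - 0.5 * ((1 + a) ** (2 * N - 1) + (1 - a) ** (2 * N - 1))) < 1e-9
```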
Let's write $a = \|fg\|$. Then we get
\begin{align*}
\|(fg+gf)^{2N}\| &= \|P_{2N}(fg) + P_{2N}(gf) + Q_{2N}(fgf) + Q_{2N}(gfg)\|
\\
&\le 2 \|P_{2N}(fg)\| + 2\|Q_{2N}(fgf)\|
\\
&\le 2 \|fg\|^{2N-1} A_N(\|fg\|) + 2\|fg\|^{2N} B_N(\|fg\|)
\\
&= 2 a^{2N-1} A_N(a) + 2 a^{2N} B_N(a)
\end{align*}
\begin{align*}
\ \ &= 2 a^{2N-1} \cdot \frac{a}{2}\big[ (1+a)^{2N-1} - (1-a)^{2N-1} \big]
+ 2 a^{2N} \cdot \frac{1}{2}\big[ (1+a)^{2N-1} + (1-a)^{2N-1} \big]
\\
&= a^{2N} \cdot \big[ (1+a)^{2N-1} - (1-a)^{2N-1} \big]
+ a^{2N} \cdot \big[ (1+a)^{2N-1} + (1-a)^{2N-1} \big]
\\
&= 2 a^{2N} \cdot (1+a)^{2N-1}.
\end{align*}
Taking $2N$-th roots,
\[
\|fg+gf\| \le 2^{1/2N} a (1+a)^{1-\frac1{2N}}
\]
which in the limit as $N \to\infty$ gives
\[
\|fg+gf\| \le \|fg\| + \|fg\|^2.
\]
\bigskip
{\Large{\section{\bf Proof of $\|fg+gf\| \ge \|fg\| + \|fg\|^2$}}}
Any two projection operators $f,g$ on Hilbert space $\mathcal H$ may be represented in matrix block forms as
\[
f = \bmatrix I & 0 \\ 0 & 0 \endbmatrix, \quad
g = \bmatrix D & V \\ V^* & D' \endbmatrix
\]
with respect to the orthogonal decomposition $\mathcal H = \mathcal M \oplus \mathcal M^\perp$ where $\mathcal M$ is the range of $f$ (and $\mathcal M^\perp$ that of $1-f$). Here, $D, D'$ are positive operators on $\mathcal M$ and $\mathcal M^\perp$, respectively, and $V:\mathcal M^\perp \to \mathcal M$, satisfying the relations
\[
D - D^2 = VV^*, \quad DV + VD' = V, \quad D' - D'^2 = V^*V.
\]
(in view of $g$ being a projection). Since
\[
fg(fg)^*
= \bmatrix D & V \\ 0 & 0 \endbmatrix \bmatrix D & 0 \\ V^* & 0 \endbmatrix = \bmatrix D^2+VV^* & 0 \\ 0 & 0 \endbmatrix
= \bmatrix D & 0 \\ 0 & 0 \endbmatrix
\]
we see that $\|fg\|^2 = \|D\|$. We now work out the powers of the anticommutator as follows. First, we have
\[
fg + gf = \bmatrix D & V \\ 0 & 0 \endbmatrix + \bmatrix D & 0 \\ V^* & 0 \endbmatrix = \bmatrix 2D & V \\ V^* & 0 \endbmatrix.
\]
We observe that the powers of this anticommutator have the form
\[
(fg + gf)^n = \bmatrix F_n(D) & F_{n-1}(D)V \\ \star & \star \endbmatrix
\]
where $F_n(x)$ is a certain sequence of polynomials with integer coefficients -- with initial data $F_1(x) = 2x, F_0(x) = 1$. We do not need to know the $\star$ entries at the bottom of the matrix because we are interested in the northwestern\footnote{It is rather interesting in this case that the information regarding the anticommutator norm is contained in this block for large $n$.} block $F_n(D)$ of $(fg + gf)^n$. Multiplying
\[
\bmatrix F_n(D) & F_{n-1}(D)V \\ \star & \star \endbmatrix
\bmatrix 2D & V \\ V^* & 0 \endbmatrix
= \bmatrix 2DF_n(D) + F_{n-1}(D)VV^* & F_n(D)V \\ \star & \star \endbmatrix
\]
or
\[
(fg + gf)^{n+1} = \bmatrix 2DF_n(D) + (D - D^2)F_{n-1}(D) & F_n(D)V \\ \star & \star \endbmatrix
\]
from which we see that the polynomial sequence has the Fibonacci-type recursion relation
\[
F_{n+1}(x) = 2xF_n(x) + (x - x^2)F_{n-1}(x).
\]
Telescoping this as one would with ordinary Fibonacci numbers, one eventually gets
\[
F_n(x)
= \frac12 x^{n/2} \left[ (\sqrt x + 1)^{n+1} - (\sqrt x - 1)^{n+1} \right].
\]
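This closed form can be sanity-checked against the recursion numerically (plain Python; the sample points are our choices):

```python
import math

# Numerical check that the closed form satisfies the Fibonacci-type recursion
# F_{n+1} = 2x F_n + (x - x^2) F_{n-1}, with F_0(x) = 1 and F_1(x) = 2x.
def F(n, x):
    r = math.sqrt(x)
    return 0.5 * x ** (n / 2) * ((r + 1) ** (n + 1) - (r - 1) ** (n + 1))

for x in (0.2, 0.5, 0.9):
    assert abs(F(0, x) - 1.0) < 1e-12 and abs(F(1, x) - 2 * x) < 1e-12
    for n in range(1, 10):
        assert abs(F(n + 1, x) - (2 * x * F(n, x) + (x - x * x) * F(n - 1, x))) < 1e-9
```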
(Indeed, one can check this by induction.) This polynomial function is (for each $n$) an increasing function over the interval $[0,\infty)$. Therefore, since $D$ is a positive operator, we have
\[
\|(fg + gf)^n\| \ge \|F_n(D)\| = F_n(\|D\|) = F_n(\|fg\|^2)
= \frac12 \|fg\|^n \left[ (\|fg\| + 1)^{n+1} - (\|fg\| - 1)^{n+1} \right]
\]
\[
= \frac12 \|fg\|^n (\|fg\| + 1)^{n+1} \left[ 1 - \left(\frac{\|fg\| - 1}{\|fg\| + 1}\right)^{n+1} \right].
\]
Taking $n$-th roots (noting that the anticommutator $fg+gf$ is a Hermitian operator)
\[
\|fg + gf\| \ge \frac1{2^{1/n}} \|fg\|\, (\|fg\| + 1)^{1+\frac1n} \left[ 1 - \left(\frac{\|fg\| - 1}{\|fg\| + 1}\right)^{n+1} \right]^{1/n}.
\]
Letting $n\to\infty$ the right side converges to $\|fg\| + \|fg\|^2$ (since $(1-c^n)^{1/n} \to 1$ for\footnote{If $-1<c<1$ then $-1< c^n \le |c| < 1$ for each $n\ge1$, which gives $0<1-|c| \le 1-c^n < 2$ and the result follows by taking $n$-th roots.} any $-1 < c < 1$).
This completes the proof of Theorem \ref{fggf}.
\bigskip
We end the paper with the proof of the lemma used by Corollary \ref{cor}.
\medskip
\begin{lem}\label{lemma}
For any two projections $f,g$ on Hilbert space, $\|fg - gf\| \le \|fg\|$. Further, $\|fg - gf\| = \|fg-fgf\|$.
\end{lem}
\begin{proof}
Write
\[
\|fg - gf\|^2 = \|(fg-gf)^*(fg-gf)\| = \| fgf + gfg - fgfg - gfgf \|
\]
and note that the operator in the last norm can be written as the sum of two orthogonal positive operators:
\[
fgf + gfg - fgfg - gfgf = fg(1-f)gf + (1-f)gfg(1-f) = uu^* + u^*u
\]
where $u = fg(1-f)$. So its norm is the max of the norms of each term, both of which are equal to $\|u\|^2$. Thus,
\[
\|fg - gf\| = \|u\| = \|fg-fgf\| = \|fg(1-f)\| \le \|fg\|
\]
as needed.
\end{proof}
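The equality $\|fg-gf\| = \|fg-fgf\|$ and the bound $\|fg-gf\|\le\|fg\|$ can be illustrated in the same $2\times2$ setting as in the remarks above ($f$ projecting onto $e_1$, $g$ onto the line at angle $t$; a numerical sketch in plain Python, with the test family our choice):

```python
import math, random

# 2x2 illustration of the lemma: check ||fg - gf|| = ||fg - fgf|| <= ||fg||.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def specnorm2(A):
    # largest singular value of a 2x2 matrix, via the eigenvalues of A^T A
    M = matmul([[A[0][0], A[1][0]], [A[0][1], A[1][1]]], A)
    mean = (M[0][0] + M[1][1]) / 2
    disc = math.hypot((M[0][0] - M[1][1]) / 2, M[0][1])
    return math.sqrt(mean + disc)

random.seed(2)
for _ in range(300):
    t = random.uniform(0, 2 * math.pi)
    c, s = math.cos(t), math.sin(t)
    f = [[1.0, 0.0], [0.0, 0.0]]
    g = [[c * c, c * s], [c * s, s * s]]
    fg, gf = matmul(f, g), matmul(g, f)
    fgf = matmul(fg, f)
    comm = [[fg[i][j] - gf[i][j] for j in range(2)] for i in range(2)]
    u = [[fg[i][j] - fgf[i][j] for j in range(2)] for i in range(2)]
    assert abs(specnorm2(comm) - specnorm2(u)) < 1e-9
    assert specnorm2(comm) <= specnorm2(fg) + 1e-9
```

In this family both norms equal $|\cos t \sin t|$, which is at most $\|fg\| = |\cos t|$.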
\medskip
\noindent{\bf Note added in proof.} A generous colleague pointed out to the author that with a bit more work, one can deduce the anticommutator norm formula from a theorem of Halmos in \ccite{Halmos}. Our proof, however, is self-contained and independent of this -- and the formula (simple as it is) seems to be unknown.
\bigskip
| {
"timestamp": "2016-04-05T02:11:26",
"yymm": "1604",
"arxiv_id": "1604.00699",
"language": "en",
"url": "https://arxiv.org/abs/1604.00699",
"abstract": "We prove that for any two projection operators $f,g$ on Hilbert space, their anticommutator norm is given by the formula \\[\\|fg + gf\\| = \\|fg\\| + \\|fg\\|^2.\\] The result demonstrates an interesting contrast between the commutator and anticommutator of two projection operators on Hilbert space. Specifically, the norm of the anticommutator $\\|fg + gf\\|$ is a simple quadratic function of the norm $\\|fg\\|$ while the commutator norm $\\|fg - gf\\|$ is not a function of $\\|fg\\|$. Nevertheless, the result gives the following bounds that are functions of $\\|fg\\|$ on the commutator norm: $\\|fg\\| - \\|fg\\|^2 \\le \\|fg - gf\\| \\le \\|fg\\|$.",
"subjects": "Functional Analysis (math.FA)",
"title": "Anticommutator Norm Formula for Projection Operators",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9865717420994768,
"lm_q2_score": 0.8152324960856175,
"lm_q1q2_score": 0.8042853438792926
} |
https://arxiv.org/abs/0801.0120 | Combinatorics of the change-making problem | We investigate the structure of the currencies (systems of coins) for which the greedy change-making algorithm always finds an optimal solution (that is, a one with minimum number of coins). We present a series of necessary conditions that must be satisfied by the values of coins in such systems. We also uncover some relations between such currencies and their sub-currencies. | \section{Introduction}
In the change-making problem we are given a set of coins and we wish to
determine, for a given amount $c$, what is the minimal number of coins needed
to pay $c$. For instance, given the coins $1,2,5,10,20,50$, the minimal
representation of $c=19$ requires $4$ coins ($10+5+2+2$).
This problem is a special case of the general knapsack problem with all coins
of unit weights. In some cases the solution may be found by a greedy strategy
that uses as many of the largest coin as possible, then as many of the next one
as possible and so on. This greedy solution is optimal for the set of
coins given above, but fails to be optimal in general. For instance, if we have
coins $1,5,9,16$, then the amount $18$ will be paid greedily as $16+1+1$, while
the optimal solution ($9+9$) requires just two coins. In this paper we shall
concentrate on the combinatorial properties of those sets of coins for which
the greedy solution is always optimal.
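For concreteness, both quantities just discussed, the greedy coin count and the true minimum, can be implemented directly; the sketch below (plain Python, not from the paper; the optimum is computed by standard dynamic programming) reproduces the two examples above:

```python
# Greedy coin count and true optimum (by dynamic programming) for a set of
# coins containing 1, so that every amount is representable.
def grd(coins, c):
    n = 0
    for a in sorted(coins, reverse=True):
        n += c // a
        c %= a
    return n

def opt(coins, c):
    best = [0] + [float("inf")] * c
    for amount in range(1, c + 1):
        best[amount] = 1 + min(best[amount - a] for a in coins if a <= amount)
    return best[c]

assert grd((1, 2, 5, 10, 20, 50), 19) == 4   # 10 + 5 + 2 + 2
assert opt((1, 2, 5, 10, 20, 50), 19) == 4
assert grd((1, 5, 9, 16), 18) == 3           # 16 + 1 + 1
assert opt((1, 5, 9, 16), 18) == 2           # 9 + 9
```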
The sequence $A=(a_0,a_1,\ldots,a_k)$, where $1=a_0<a_1<\ldots<a_k$ will be
called a {\it currency} or {\it coinage system}. We always assume $a_0=1$ is
the smallest coin to avoid problems with non-representability of certain
amounts. For any amount $c$ by $\textrm{opt}_A(c)$ and $\textrm{grd}_A(c)$ we denote,
respectively, the minimal number of coins needed to pay $c$ and the number of
coins used when paying $c$ greedily (for example, if $A=(1,5,9,16)$ then
$\textrm{opt}_A(18)=2$ and $\textrm{grd}_A(18)=3$). The currency $A$ will be called {\it
orderly}\footnote{Various authors have used the terms orderly
\cite{Jones,Maurer}, canonical \cite{KoZa,Pear}, standard \cite{TienHu} or
greedy \cite{CCS}.}
if for all amounts $c>0$ we have $\textrm{opt}_A(c)=\textrm{grd}_A(c)$.
If a coinage system $A$ is not orderly then any amount $c$ for which
$\textrm{opt}_A(c)<\textrm{grd}_A(c)$ will be called a {\it counterexample}.
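Both quantities are easy to compute directly; a sketch, with $\textrm{opt}_A$ obtained by standard dynamic programming (the function names `grd` and `opt` are ours):

```python
def grd(coins, c):
    """grd_A(c): number of coins used by the greedy payment of c."""
    n = 0
    for a in sorted(coins, reverse=True):
        n += c // a
        c %= a
    return n

def opt(coins, c):
    """opt_A(c): minimal number of coins needed to pay c (assumes 1 is a coin)."""
    best = [0] * (c + 1)  # best[x] = minimal number of coins paying x
    for amount in range(1, c + 1):
        best[amount] = 1 + min(best[amount - a] for a in coins if a <= amount)
    return best[c]

A = (1, 5, 9, 16)
print(opt(A, 18), grd(A, 18))  # -> 2 3, so 18 is a counterexample for A
```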
Let us briefly summarize related work. Magazine, Nemhauser and Trotter
\cite{MNT} gave a necessary and sufficient condition to decide whether
$A=(1,a_1,\ldots,a_{k+1})$ is orderly provided we know in advance that
$A'=(1,a_1,\ldots,a_k)$ is orderly, the so-called one-point theorem (see
section \ref{section2} of the present paper). Kozen and Zaks \cite{KoZa} proved,
among other things, that the smallest counterexample (if one exists) does not
exceed the sum of the two largest coins. They also asked whether there is a
polynomial-time algorithm that tests if a coinage system is orderly. Such an
algorithm was presented by Pearson \cite{Pear}. It produces a set of $O(k^2)$
``candidates for counterexamples'', which is guaranteed to contain the smallest
counterexample if one exists. The rest of the algorithm is just testing these
potential candidates, and the overall complexity is $O(k^3)$. A similar set of
possible counterexamples (perhaps not containing the smallest one), but of size
$O(k^3)$, was given by Tien and Hu in \cite{TienHu} (see formula (4.20) and
Theorem 4.1 of that paper). It leads to an $O(k^4)$ algorithm. The authors of
\cite{MNT} and \cite{TienHu} were concentrating mainly on the error analysis
between the greedy and optimal solutions. Apparently Jones \cite{Jones} was the
only one who attempted to give a neat combinatorial condition characterizing
orderly currencies, but his theorem suffered from a major error, soon pointed
out by Maurer \cite{Maurer}. Our paper has been paralleled by an independent
work of Cowen, Cowen and Steinberg \cite{CCS} about currencies all of whose
prefixes are orderly and about non-orderly currencies which cannot be
``fixed'' by appending extra coins. The results contained in section
\ref{section4} and a special case ($l=2$) of Theorem \ref{theoremprefixorderly}
of this paper have also been proved in \cite{CCS}.
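The Kozen--Zaks bound mentioned above already gives a simple (if naive) orderliness test: it suffices to compare the greedy and optimal payments for every amount up to the sum of the two largest coins. A brute-force sketch (the helper names are ours):

```python
def grd(coins, c):
    """grd_A(c): number of coins in the greedy payment of c."""
    n = 0
    for a in sorted(coins, reverse=True):
        n += c // a
        c %= a
    return n

def opt(coins, c):
    """opt_A(c): minimal number of coins paying c (dynamic programming; 1 is a coin)."""
    best = [0] * (c + 1)
    for amount in range(1, c + 1):
        best[amount] = 1 + min(best[amount - a] for a in coins if a <= amount)
    return best[c]

def is_orderly(coins):
    """Brute-force orderliness test.  By the Kozen--Zaks result, the smallest
    counterexample (if one exists) does not exceed the sum of the two largest coins,
    so checking all amounts up to that bound suffices."""
    bound = coins[-1] + coins[-2]
    return all(opt(coins, c) == grd(coins, c) for c in range(1, bound + 1))

print(is_orderly((1, 2, 5, 10, 20, 50)))  # -> True
print(is_orderly((1, 5, 9, 16)))          # -> False: 18 is a counterexample
```

This test is of course far slower than Pearson's $O(k^3)$ algorithm, but it is handy for checking the small examples appearing later in the paper.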
The aim of this paper is to study some orderly coinage systems from a
combinatorial viewpoint, motivated by the need to have some
nice characterization. One of the motivations was the observation that
\emph{if $A=(1,a_1,a_2,\ldots,a_k)$ is an orderly currency, then the currency
$(1,a_1,a_2)$ is also orderly}, that will be generalized and proved in Theorem
\ref{theoremprefixorderly}. Going further, one may start with an orderly
currency, take out some of its coins and ask if the remaining coins again form
an orderly currency. The precise answer to this question, given in section
\ref{section7}, will be a consequence of the results of sections \ref{section3}
and \ref{section5}, where we prove some properties of the distances $a_j-a_i$
between the coins of an orderly currency. In section \ref{section4} these
results will be used to give a complete description of orderly currencies with
less than 6 coins. In section \ref{section6} we study the behaviour of the
currencies obtained as prefixes of an orderly currency. Some closing remarks and
open problems are included in section \ref{section8}.
\section{Preliminary results}
\label{section2}
If $A=(1,a_1,\ldots,a_k)$ is a currency it will often be convenient to set
$a_{k+1}=\infty$. This will be especially useful whenever we want to choose,
say, the first interval $[a_m,a_{m+1}]$ of length at least $d$ for some $d$.
The reader will see that in all applications the infinite interval will be as
useful as proper intervals.
There are three standard arguments that will be used repeatedly throughout this
paper, so we quote them now to avoid excessive repetitions in the future.
All the time we assume $A=(1,a_1,\ldots,a_k)$ is orderly.
First, suppose we have $a_m$, $a_i$ and $a_l$, such that $a_l<a_m+a_i<a_{l+1}$.
Then $a_m+a_i$ has a representation that uses 2 coins. Since $a_m+a_i$ is
strictly between $a_l$ and $a_{l+1}$ its greedy representation must start
with $a_l$, followed (since $A$ is orderly) by just one other coin $a_r$.
It follows that there exists $r$ such that $a_m+a_i=a_l+a_r$.
The second argument is a slight modification of the first one; namely, if
$a_l<a_m+a_i$ and the number $a_m+a_i-a_l$ is not one of the coins, then
$a_{l+1}\leq a_m+a_{i}$.
The third argument is a bit more complicated. Suppose that for
some $j>i\geq 1$ we have $a_j-a_i=d$. Let us choose the largest $m$ for which
$a_m-a_{m-1}<a_i$ (such $m$ exist; for instance $m=i$ works, since $a_i-a_{i-1}<a_i$). Then
$a_{m+1}-a_m\geq a_i$ (here it is possible that $a_{m+1}=\infty$), so we
have $a_m<a_{m-1}+a_i<a_{m+1}$. If we also have $a_m<a_{m-1}+a_j<a_{m+1}$ then,
as before, there exist numbers $r<i$ and $s<j$ such that:
\begin{center}
\begin{tabular}{c}
$a_{m-1}+a_i=a_m+a_r,$\\
$a_{m-1}+a_j=a_m+a_s.$
\end{tabular}
\end{center}
\begin{figure}
\epsfbox{figs.1}
\caption{Illustration of a standard argument}
\end{figure}
Then $a_s-a_r=a_j-a_i=d$, so we decreased the indices of the coins from $(j,i)$ to
$(s,r)$, keeping the difference $d$ unchanged. Therefore, if additionally
$(j,i)$ was the {\it smallest} pair of indices for which $a_j-a_i=d$, we would
have a contradiction, hence we may assume that in such case $a_{m-1}+a_j\geq
a_{m+1}$.
We shall frequently make use of the following famous result:
\begin{theorem}[{\bf One-point theorem, \cite{MNT,HuLen,CCS}}]
\label{onept}
Suppose $A'=(1,a_1,\ldots,a_k)$ is orderly and $a_{k+1}>a_k$. Let $m=\lceil
a_{k+1}/a_{k}\rceil$. Then $A=(1,a_1,\ldots,a_k,a_{k+1})$ is orderly if and
only if $\textrm{opt}_A(ma_k)=\textrm{grd}_A(ma_k)$.
\end{theorem}
{\bf Remark.} According to this theorem, if the shorter currency $A'$ is
orderly, then the optimality of the greedy solution for $A$ needs to be checked
only for the single value $ma_k$. This justifies the name {\it one-point
theorem}. Note that although in general it is NP-hard to compute $\textrm{opt}_A(c)$
for arbitrary $A$ and $c$ (see \cite{Shallit} and \cite{KoZa} for a discussion),
the one-point theorem test $\textrm{opt}_A(ma_k)=\textrm{grd}_A(ma_k)$ runs in polynomial time,
since it is equivalent to $\textrm{grd}_{A'}(ma_k-a_{k+1})\leq m-1$. For the sake of
completeness we decided to include a short proof of the one-point theorem.
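The polynomial-time test from the remark can be sketched as follows (the function name is ours; the sketch assumes the shorter currency $A'$ is already known to be orderly):

```python
import math

def grd(coins, c):
    """grd_{A'}(c): number of coins in the greedy payment of c."""
    n = 0
    for a in sorted(coins, reverse=True):
        n += c // a
        c %= a
    return n

def one_point_test(prefix, new_coin):
    """Given an orderly A' = prefix and a candidate a_{k+1} = new_coin > a_k,
    decide whether A = prefix + (new_coin,) is orderly via the equivalent
    condition grd_{A'}(m*a_k - a_{k+1}) <= m - 1, where m = ceil(a_{k+1}/a_k)."""
    a_k = prefix[-1]
    m = math.ceil(new_coin / a_k)
    return grd(prefix, m * a_k - new_coin) <= m - 1

print(one_point_test((1, 2, 5, 10, 20), 50))  # -> True: (1,2,5,10,20,50) is orderly
print(one_point_test((1, 5, 9), 16))          # -> False: (1,5,9,16) is not
```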
{\bf Proof.} One of the implications is trivial. Now suppose that
$\textrm{opt}_A(ma_k)=\textrm{grd}_A(ma_k)$. We have
$$(m-1)a_k+1\leq a_{k+1}\leq ma_k.$$
For all values $c< a_{k+1}$ all the payments $\textrm{grd}_A(c)$, $\textrm{grd}_{A'}(c)$,
$\textrm{opt}_A(c)$, $\textrm{opt}_{A'}(c)$ coincide, so $\textrm{grd}_A(c)=\textrm{opt}_A(c)$. All other $c$
will be split into two groups: $c\in[a_{k+1}, ma_k)$ and $c\geq ma_k$.
{\bf 1. $a_{k+1}\leq c<ma_k$.} For every such $c$ we have $c<2a_{k+1}$,
therefore any payment of $c$ contains either $0$ or $1$ copies of $a_{k+1}$.
Together with the orderliness of $A'$ this implies
$$\textrm{opt}_A(c)=\min\{1+\textrm{grd}_{A'}(c-a_{k+1}), \textrm{grd}_{A'}(c)\}.$$
At the same time $1+\textrm{grd}_{A'}(c-a_{k+1})=\textrm{grd}_A(c)$, so in order to prove
$\textrm{opt}_A(c)=\textrm{grd}_A(c)$ it suffices to show the inequality
$$\textrm{grd}_A(c)\leq \textrm{grd}_{A'}(c).$$
Observe that
$$\textrm{grd}_{A'}(c)=(m-1)+\textrm{grd}_{A'}(c-(m-1)a_k).$$
The function $\textrm{grd}_{A'}=\textrm{opt}_{A'}$ satisfies the triangle inequality, so
$$\textrm{grd}_{A'}(ma_k-a_{k+1})+\textrm{grd}_{A'}(c-(m-1)a_k)\geq
\textrm{grd}_{A'}(c-a_{k+1}+a_k)=1+\textrm{grd}_{A'}(c-a_{k+1}).$$
Finally
$$\textrm{grd}_{A'}(c)-\textrm{grd}_{A}(c)=(m-1)+\textrm{grd}_{A'}(c-(m-1)a_k)-(1+\textrm{grd}_{A'}(c-a_{k+1}
))\geq$$
$$\geq m-2 + 1 -\textrm{grd}_{A'}(ma_k-a_{k+1})=m-1-\textrm{grd}_{A'}(ma_k-a_{k+1}).$$
However
$$1+\textrm{grd}_{A'}(ma_k-a_{k+1})=\textrm{grd}_A(ma_k)=\textrm{opt}_A(ma_k)\leq m,$$
which eventually implies the desired inequality
$$\textrm{grd}_{A'}(c)-\textrm{grd}_{A}(c)\geq 0.$$
{\bf 2. $c\geq ma_k$.} Denote by $\mathcal{OPT}(c)$ the set of optimal payments
for $c$:
$$\mathcal{OPT}(c) = \{(x_0,\ldots,x_{k+1}): \sum_{i=0}^{k+1}x_ia_i=c \textrm{
and } \sum_{i=0}^{k+1} x_i \textrm{ is minimal}\}$$
It is sufficient to exhibit a payment $(x_i)\in\mathcal{OPT}(c)$ with
$x_{k+1}>0$. Consider any optimal payment $(x_i)$. We may apply to it the
following two operations:
\begin{itemize}
\item if $x_k\geq m$ then replace $m$ coins $a_k$ with the greedy decomposition
of $ma_k$. This way the number of coins in the payment does not increase (since
$\textrm{opt}_A(ma_k)=\textrm{grd}_A(ma_k)$), while the multiplicity of $a_k$ in the payment
decreases.
\item if $\sum_{i=0}^{k-1}x_ia_i\geq a_k$ then instead of the coins needed to
pay $\sum_{i=0}^{k-1}x_ia_i$ insert the greedy decomposition of this amount
with respect to $A'$. This will not increase the overall number of coins (since
$A'$ was orderly), but it will decrease the amount paid with
$1,a_1,\ldots,a_{k-1}$.
\end{itemize}
It is clear that by repeating these two steps sufficiently many times we
eventually end up with an optimal payment $(x_i)$ satisfying
$\sum_{i=0}^{k-1}x_ia_i<a_k$ and $x_k<m$. Then
$$\sum_{i=0}^{k}x_ia_i\leq a_k-1+(m-1)a_k=ma_k-1<c$$
hence $x_{k+1}>0$ in this payment. \qed
It is obvious that the one-coin currency $A=(1)$ is orderly, as well as
all the two-coin currencies $A=(1,a_1)$. The reader may now wish to solve
the easy problem of when a three-coin currency $A=(1,a_1,a_2)$ is
orderly. For reasons which will become clear later we shall express the
solution in terms of the following set:
\begin{definition}
\label{defA}
For any $a>0$ we define:
$$\mathcal{A}(a)=\bigcup_{m=1}^{\infty} \bigcup_{l=0}^m \{ma-l\}=$$
$$=\{a-1,a\}\cup\{2a-2,2a-1,2a\}\cup\ldots\cup\{ma-m,\ldots,ma\}\cup\ldots$$
\end{definition}
\begin{proposition}
\label{lemma3orderly}
The currency $A=(1,a_1,a_2)$ is orderly if and only if
$a_2-a_1\in\mathcal{A}(a_1)$.
\end{proposition}
{\bf Proof.} Let $m=\lceil a_2/a_1\rceil$. By the one-point theorem $A$ is
orderly if and only if the greedy algorithm is optimal for $ma_1$, which is
equivalent to
$$\textrm{grd}_A(ma_1)\leq m$$
or
$$ma_1-a_2\leq m-1$$
which means that $a_2-a_1 = (m-1)a_1 - (ma_1-a_2) \in \mathcal{A}(a_1)$ (more
precisely, $a_2-a_1$ belongs to the $(m-1)$-st summand of $\mathcal{A}(a_1)$).
On the other hand, if $m$ is the least number for which $a_2-a_1$ belongs to the
$(m-1)$-st summand of $\mathcal{A}(a_1)$, then $\lceil a_2/a_1\rceil=m$ and
$a_2-a_1=(m-1)a_1-l$ for some $l\leq m-1$. Then
$$\textrm{grd}_A(ma_1)=1+(ma_1-a_2) = 1+l\leq m$$
as required.\qed
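Membership in $\mathcal{A}(a)$ is easy to decide: writing $x=ma-l$ with the minimal $m=\lceil x/a\rceil$, we have $x\in\mathcal{A}(a)$ exactly when $l\leq m$ (for $a\geq 2$, taking a larger $m$ only increases the required $l$). A sketch of this test and of the resulting three-coin criterion (function names are ours):

```python
import math

def in_A(a, x):
    """Decide whether x belongs to A(a) = union over m>=1 of {m*a - l : 0 <= l <= m}."""
    if x < 1:
        return False
    m = math.ceil(x / a)    # the smallest m with x <= m*a
    return m * a - x <= m   # for a >= 2, a larger m only increases the required l

def three_coin_orderly(a1, a2):
    """By the proposition above, (1, a1, a2) is orderly iff a2 - a1 lies in A(a1)."""
    return in_A(a1, a2 - a1)

print(sorted(x for x in range(1, 16) if in_A(5, x)))
# -> [4, 5, 8, 9, 10, 12, 13, 14, 15]
print(three_coin_orderly(5, 9), three_coin_orderly(5, 11))  # -> True False
```

For instance $(1,5,11)$ is not orderly since $11-5=6\not\in\mathcal{A}(5)$: indeed the amount $15$ is paid greedily as $11+1+1+1+1$ while $5+5+5$ uses three coins.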
\section{Investigating differences, part I}
\label{section3}
In this section we begin investigating distances between the coins of an
orderly coinage system, followed by an application of these results.
\begin{proposition}
\label{lemmanot1}
If $A=(1,a_1,\ldots,a_k)$ is orderly and $a_1\geq 3$, then
$$a_{i}-a_{i-1}\neq 1$$
for all $i=1,\ldots,k$.
\end{proposition}
{\bf Proof.} Suppose, on the contrary, that $a_j-a_{j-1}=1$ and let $j$ be the
least index with this property. Since $a_1\geq 3$, we have $j\geq 2$.
Let us choose the largest index $m$ for which $a_m-a_{m-1}<a_{j-1}$. Then
$a_{m+1}-a_m\geq a_{j-1}$, and if $a_{m-1}+a_j<a_{m+1}$ then we would have a
contradiction by the third standard argument from section \ref{section2}.
Therefore $a_{m+1}\leq a_{m-1}+a_j$. Since
$a_{m+1}-a_m\geq a_{j-1}$, we have
$$a_j=a_{j-1}+1\leq (a_{m+1}-a_m)+(a_m-a_{m-1})=a_{m+1}-a_{m-1}\leq a_j$$
meaning that $a_m-a_{m-1}=1$ and $a_{m+1}-a_m=a_{j-1}$.
It follows that
$$a_m<a_{m-1}+a_{j-1}<a_{m+1}$$
which means that $a_{m-1}+a_{j-1}-a_m=a_{j-1}-1$ must be one of the coins,
contradicting the minimality of $j$. This ends the proof.\qed
The previous proposition can be sharpened as follows:
\begin{proposition}
\label{lemmadiffa1-1}
If $A=(1,a_1,\ldots,a_k)$ is orderly then
$$a_{i}-a_{i-1} \geq a_1-1$$
for all $i=1,\ldots,k$.
\end{proposition}
{\bf Proof.} This is obviously true if $a_1=2$, so let $a_1\geq 3$. Let $j$ be
the largest index for which $a_j-a_{j-1}\leq a_1-2$. By
Proposition \ref{lemmanot1} we
have $a_j-a_{j-1}\geq 2$. From the maximality of $j$ we have $a_{j+1}-a_j\geq
a_1-1$ (it is possible that $a_{j+1}=\infty$). Now consider the amount
$$c=a_{j-1}+a_1.$$
\begin{figure}
\epsfbox{figs.2}
\caption{Illustration of the proof of Prop. \ref{lemmadiffa1-1}}
\end{figure}
It satisfies $a_j+2\leq c \leq a_j+a_1-2 < a_{j+1}$, hence $\textrm{opt}_A(c)=2$.
When paid greedily, the amount $c$ is decomposed to $a_j$ and $c-a_j$
copies of the coin $1$, which makes
$$1+(c-a_j)\geq 1+2=3$$
coins altogether. This contradicts the fact $A$ is orderly, thus completing
the proof of this proposition.\qed
Proposition \ref{lemmadiffa1-1} imposes certain restrictions on the possible
differences $a_i-a_{i-1}$. In the next theorem we shall generalize this
restriction, but first let us state without proof some obvious properties of the
sets $\mathcal{A}(a)$ that will be useful in the proof:
\begin{fact} Let $a\geq 2$ be an integer. Then:
\label{factabouta}
\begin{itemize}
\item[(1)] if $x,y\in\mathcal{A}(a)$ then $x+y\in\mathcal{A}(a)$.
\item[(2)] an integer $x\geq 2$ does not belong to $\mathcal{A}(a)$ if and only
if there exists an integer $p\geq 0$ such that
$$pa+1\leq x\leq (p+1)a-(p+2).$$
\item[(3)] if $p_1<p_2<\ldots<p_m$ and $p_m-p_{1}\not\in\mathcal{A}(a)$ then
$p_j-p_{j-1}\not\in\mathcal{A}(a)$ for some $2\leq j\leq m$ (this follows from
(1)).
\item[(4)] if $pa<x$ and $x=(p+1)a-c$ for some $c$ (possibly negative), then
$x\in\mathcal{A}(a)$ implies $c\leq p+1$.
\end{itemize}
\end{fact}
Now we can state the main theorem of this section:
\begin{theorem}
\label{theoremdiffaa}
If $A=(1,a_1,\ldots,a_k)$ is orderly then
$$a_{j}-a_{i} \in \mathcal{A}(a_1)$$
for all $0\leq i<j\leq k$.
\end{theorem}
{\bf Proof.} If not, then by property (3) above there exists an index $j$
for which $a_j-a_{j-1}\not\in\mathcal{A}(a_1)$, which is equivalent to
$$pa_1+1\leq a_j-a_{j-1}\leq (p+1)a_1-(p+2)$$
for some $p$. Among all pairs $(p,j)$ for which these inequalities hold
let us choose the lexicographically smallest one. Comparing the leftmost and
rightmost expressions in this double inequality yields $a_1\geq p+3$, hence
$a_1\geq 4$ and $1\leq p \leq a_1-3$.
We have $\textrm{opt}_A(a_{j-1}+(p+1)a_1)\leq p+2$ and
$$a_j+(p+2)\leq a_{j-1}+(p+1)a_1\leq a_j+a_1-1$$
hence $\textrm{grd}_{(1,\ldots,a_j)}(a_{j-1}+(p+1)a_1)\geq p+3$. It follows
that $a_{j+1}\leq a_{j-1}+(p+1)a_1$. Then
$$a_{j+1}-a_j\leq a_{j-1}+(p+1)a_1 -a_j \leq a_1-1.$$
By Proposition \ref{lemmadiffa1-1} all these inequalities must in fact be
equalities. In other words:
\begin{center}
\begin{tabular}{c}
$a_{j+1}=a_{j-1}+(p+1)a_1,$\\
$a_j=a_{j-1}+pa_1+1.$
\end{tabular}
\end{center}
Choose the largest $l$ for which $a_{l+1}-a_l\leq a_{j-1}+(p-1)a_1+2$ (such
$l$ exist; for instance $a_{j+1}-a_{j}=a_1-1$ is sufficiently small). By
maximality of $l$ we have $a_{l+2}-a_{l+1}\geq a_{j-1}+(p-1)a_1+3$ (it
is possible that $a_{l+2}=\infty$). Observe that
$$a_{l+2}-a_l = (a_{l+2}-a_{l+1})+(a_{l+1}-a_l)\geq a_{j-1}+(p-1)a_1+3 +
a_1-1=a_{j-1}+pa_1+2=a_j+1$$
and
$$a_{l+1}-a_l\leq a_{j-1}+(p-1)a_1+3 = a_j-a_1+2$$
which means that
$$a_{l+1}+a_1-2\leq a_l+a_j<a_{l+2}.$$
This eventually implies that $a_l+a_j=a_{l+1}+a_r$ for some $1\leq r<j$.
The rest of the proof depends on the possible locations of $a_l+a_{j-1}$.
If $a_l+a_{j-1}>a_{l+1}$ then the same argument yields an index $s<j-1$ for
which $a_l+a_{j-1}=a_{l+1}+a_s$. In that case
$a_r-a_s=a_j-a_{j-1}\not\in\mathcal{A}(a_1)$. By properties (3) and (2) of
$\mathcal{A}(a_1)$ there exist numbers $s<r'\leq r$ and $p'$ for which
$$p'a_1+1\leq a_{r'}-a_{r'-1}\leq (p'+1)a_1-(p'+2).$$
The inequality $a_{r'}-a_{r'-1}\leq a_j-a_{j-1}$ implies $p'\leq p$. The pair
$(p',r')$ is lexicographically smaller than $(p,j)$, which is a contradiction
since the latter was chosen to be minimal.
If, on the other hand, $a_l+a_{j-1}=a_{l+1}$,
then $a_l+a_j=a_l+a_{j-1}+pa_1+1=a_{l+1}+pa_1+1$, which means that $a_r=pa_1+1$.
Then $a_r-a_1=(p-1)a_1+1\not\in \mathcal{A}(a_1)$, contradicting the
minimality of $(p,j)$ by the same argument as above.
Therefore we are left with the case $a_l+a_{j-1}<a_{l+1}$. The number $a_r$
satisfies
$$a_r=a_l+a_j-a_{l+1}<a_l+a_j-(a_l+a_{j-1})=a_j-a_{j-1}=pa_1+1.$$
Since $a_r-a_1<(p-1)a_1+1$, the minimality of $p$ implies that
$a_r-a_1\in\mathcal{A}(a_1)$. It means that $a_r=qa_1-q'$ for some $q\leq p$ and
$0\leq q'<q$.
Next we are going to show that $a_{l+1}-(a_l+a_{j-1})\not\in\mathcal{A}(a_1)$.
Observe that
$$a_{l+1}-(a_l+a_{j-1})=(a_l+a_j-a_r)-a_l-a_{j-1}=pa_1+1-a_r=(p-q)a_1+(1+q'),$$
which is more than $(p-q)a_1$, while at the same time it equals:
$$(p-q+1)a_1-(a_1-1-q')$$
with $a_1-1-q'>(p+2)-1-q'=p+1-q'>p-q+1$. By property (2) the number
$a_{l+1}-(a_l+a_{j-1})$ does not belong to $\mathcal{A}(a_1)$.
\begin{figure}[!h]
\epsfbox{figs.3}
\caption{The last case of the proof. The length of the bold interval is not
in $\mathcal{A}(a_1)$.}
\end{figure}
Now let us choose the least $p'$ for which $a_l+a_{j-1}+p'a_1\geq a_{l+1}$. In
this case
$$a_l+a_{j-1}+p'a_1<a_{l+1}+a_1\leq a_{l+1}+a_r=a_l+a_j.$$
Obviously $\textrm{opt}_A(a_l+a_{j-1}+p'a_1)\leq p'+2$. On the other hand, the greedy
decomposition of $a_l+a_{j-1}+p'a_1$ is $a_{l+1}+s\cdot 1$, where
$s=a_l+a_{j-1}+p'a_1-a_{l+1}$. By optimality
$$s+1\leq p'+2$$
so $s\leq p'+1$. On the other hand, we have already proved that
$p'a_1-s=a_{l+1}-(a_l+a_{j-1})\not\in\mathcal{A}(a_1)$, so $s\geq p'+1$. Finally
we have $s=p'+1$.
To end the proof we compute $a_r-a_1$ in terms of $p,p'$ and $a_1$:
\begin{center}
\begin{tabular}{c}
$a_r-a_1=a_l+a_j-a_{l+1}-a_1=a_l+(a_{j-1}+pa_1+1)-(a_l+a_{j-1}
+p'a_1-s)-a_1=$\\
$=(p-p'-1)a_1+(s+1)=(p-p'-1)a_1+(p'+2)=$\\
$=(p-p')a_1-(a_1-p'-2)$
\end{tabular}
\end{center}
Since $a_r-a_1\in\mathcal{A}(a_1)$, by property (4) we obtain
\begin{center}
\begin{tabular}{c}
$a_1-p'-2\leq p-p',$\\
$a_1<p+3.$
\end{tabular}
\end{center}
This contradiction ends the proof.\qed
As an immediate corollary we obtain the theorem announced in the introduction:
\begin{theorem}
\label{theoremprefixorderly}
If $A=(1,a_1,\ldots,a_k)$ is orderly then for any $2\leq l\leq
k$ the currency $(1,a_1,a_l)$ is also orderly. In particular the currency
$(1,a_1,a_2)$ is orderly.
\end{theorem}
{\bf Proof}. If $A$ is orderly then by Theorem \ref{theoremdiffaa} we have
$a_l-a_1\in\mathcal{A}(a_1)$. By Proposition \ref{lemma3orderly} this is
sufficient for $(1,a_1,a_l)$ to be orderly.\qed
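The theorem is easy to check numerically on examples; a brute-force sketch reusing the Kozen--Zaks-bound orderliness test (helper names are ours):

```python
def grd(coins, c):
    n = 0
    for a in sorted(coins, reverse=True):
        n += c // a
        c %= a
    return n

def opt(coins, c):
    best = [0] * (c + 1)
    for amount in range(1, c + 1):
        best[amount] = 1 + min(best[amount - a] for a in coins if a <= amount)
    return best[c]

def is_orderly(coins):
    # Kozen--Zaks: a smallest counterexample cannot exceed the two largest coins' sum.
    return all(opt(coins, c) == grd(coins, c)
               for c in range(1, coins[-1] + coins[-2] + 1))

A = (1, 2, 5, 10, 20, 50)
assert is_orderly(A)
# Every extracted triple (1, a_1, a_l) is again orderly, as the theorem predicts:
print([is_orderly((1, A[1], A[l])) for l in range(2, len(A))])
# -> [True, True, True, True]
```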
\section{Short currencies}
\label{section4}
Theorems \ref{theoremprefixorderly} and \ref{onept} allow us to give a
complete characterization of all orderly currencies with at most 5 coins. The
currencies with 1, 2 and 3 coins have already been discussed. Here we
concentrate on the cases of 4 and 5 coins. Following \cite{CCS} call a currency
$A=(1,a_1,\ldots,a_k)$ \emph{totally orderly}\footnote{Also called normal in
\cite{TienHu}.} if every prefix sub-currency of the form $(1,a_1,\ldots,a_l)$ is
orderly for $l=0,\ldots,k$.
\begin{proposition}
\label{lemma4}
The currency $A=(1,a_1,a_2,a_3)$ is orderly if and only if it is totally
orderly.
\end{proposition}
\begin{proposition}
\label{lemma5}
The currency $A=(1,a_1,a_2,a_3,a_4)$ is orderly if and only if
\begin{itemize}
\item (1) either $(1,a_1,a_2,a_3,a_4)=(1,2,a,a+1,2a)$ for some $a\geq 4$, in
which case $(1,a_1,a_2,a_3)$ is not orderly,
\item (2) or $A$ is totally orderly.
\end{itemize}
\end{proposition}
{\bf Remark.} The conditions given in the above propositions are efficiently
computable, since it can be quickly checked if a currency is totally orderly (as
opposed to checking whether it is just orderly). One simply repeats the one-point
theorem test for longer and longer prefixes; see also \cite{CCS}.
{\bf Proof of Propositions \ref{lemma4} and \ref{lemma5}.} The one-point
theorem, together with Theorem \ref{theoremprefixorderly} covers Proposition
\ref{lemma4} and case (2) of Proposition \ref{lemma5}.
It remains to show that all orderly currencies $(1,a_1,a_2,a_3,a_4)$ in which
the sub-currency $(1,a_1,a_2,a_3)$ is disorderly are of the form (1) from
Proposition \ref{lemma5}. Let $m=\lceil a_3/a_2\rceil$.
The triple $(1,a_1,a_2)$ is orderly by Theorem \ref{theoremprefixorderly}. By
the one-point theorem $ma_2$ is a counterexample for $(1,a_1,a_2,a_3)$, hence
$a_4\leq ma_2$. Both values $a_3+a_3$ and $a_3+a_2$ exceed $ma_2$, so they
exceed $a_4$, so by optimality there must exist $i<j\leq 2$ for which:
\begin{center}
\begin{tabular}{c}
$a_3+a_2=a_4+a_i,$\\
$a_3+a_3=a_4+a_j.$
\end{tabular}
\end{center}
Subtracting these equations we get
$$a_3-a_2=a_j-a_i<a_j\leq a_2$$
which in turn gives $a_3<2a_2$. That means $m=2$.
There are two cases to consider:
{\bf $j=2$.} Then $a_3-a_2=a_2-a_i$, so $2a_2=a_3+a_i$, which contradicts the fact that $(1,a_1,a_2,a_3)$ is disorderly.
{\bf $j=1$.} Then $i=0$ and previous equations take the form:
\begin{center}
\begin{tabular}{c}
$a_3+a_2=a_4+1,$\\
$a_3+a_3=a_4+a_1.$
\end{tabular}
\end{center}
The following computation
$$a_4+1=a_3+a_2>2a_2=ma_2\geq a_4$$
implies
$$a_4+1=a_3+a_2=2a_2+1.$$
Setting $a_2=a$ we get $a_3=a+1$, $a_4=2a$ and
$a_1=2a_3-a_4=2$.
The routine check that $(1,2,a,a+1,2a)$ is orderly uses the technique from
case 2 of the proof of Theorem \ref{onept} and is left to the reader.
For $a\geq 4$ the sub-currency $(1,2,a,a+1)$ is disorderly.\qed
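The family in case (1) can also be checked numerically; a brute-force sketch for small $a$, using the Kozen--Zaks-bound orderliness test (helper names are ours):

```python
def grd(coins, c):
    n = 0
    for a in sorted(coins, reverse=True):
        n += c // a
        c %= a
    return n

def opt(coins, c):
    best = [0] * (c + 1)
    for amount in range(1, c + 1):
        best[amount] = 1 + min(best[amount - a] for a in coins if a <= amount)
    return best[c]

def is_orderly(coins):
    # check all amounts up to the sum of the two largest coins (Kozen--Zaks bound)
    return all(opt(coins, c) == grd(coins, c)
               for c in range(1, coins[-1] + coins[-2] + 1))

for a in range(4, 12):
    A = (1, 2, a, a + 1, 2 * a)
    # A is orderly although its prefix (1, 2, a, a+1) is not:
    assert is_orderly(A) and not is_orderly(A[:-1])
print("family (1,2,a,a+1,2a) verified for a = 4..11")
```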
Attempts to continue similar reasoning with longer coinage systems encounter a
serious problem, because the applicability of the one-point theorem is limited.
More precisely, the ``intermediate'' currencies may not be orderly even if $A$
is orderly, as we see from part (1) of Proposition \ref{lemma5}. We shall return
to these matters in section \ref{section6}.
\section{Investigating differences, part II}
\label{section5}
In the previous sections we discussed the relation between the distances
$a_j-a_i$ and the value of $a_1$. Here we shall extend some of this to further
coins. Note that Proposition \ref{lemmadiffa1-1} may be interpreted as follows:
if some difference $a_j-a_i$ belongs to the interval $(1,a_1)$, then it must
necessarily equal $a_1-1$. We are interested in the possible values of
$a_j-a_i$ in the cases when this difference belongs to $(a_{m-1},a_m)$. {\bf
Throughout this section we always assume that $A=(1,a_1,\ldots,a_k)$
is orderly.} The key results of this section are Corollary \ref{possibletoa2}
and Theorem \ref{theorembigdiff}.
\begin{lemma}
\label{aux1}
If
$$a_m-a_{l+1}<a_j-a_i<a_m-a_l$$
for some $i<j$, $l<m$, then
$$a_{j+1}\leq a_i+a_m.$$
\end{lemma}
{\bf Proof.} We have $a_j+a_l<a_i+a_m<a_j+a_{l+1}$. If there was no new coin
between $a_j$ and $a_i+a_m$ then there would be no greedy decomposition of
$a_i+a_m$ in two steps.\qed
\begin{lemma}
\label{aux2}
There are no numbers $0\leq i<j\leq k$ and $1\leq m\leq k$ that satisfy
$a_{m-1}\leq a_j-a_i < a_m-a_{m-1}$.
\end{lemma}
{\bf Proof.} Suppose the contrary and let $(j,i,m)$ be some triple
satisfying the above inequalities, such that $j$ is the least possible. If
$i=0$ then $a_{m-1}\leq a_j-1<a_m-a_{m-1}<a_m$, hence $j=m$, which in turn
implies $a_{m-1}<1$, but this is not possible.
Therefore $i\geq 1$ and we are free to choose the largest index $l$ for which
$a_l-a_{l-1}<a_i$. If $a_{l-1}+a_j<a_{l+1}$ then by the standard argument we
obtain a contradiction with the minimality of $j$. Hence $a_{l-1}+a_j\geq
a_{l+1}$. It follows that
$$a_l-a_{l-1}=(a_{l+1}-a_{l-1})-(a_{l+1}-a_l)\leq a_j-a_i<a_m-a_{m-1}.$$
By Lemma \ref{aux1} it follows that $a_{l+1}-a_{l-1}\leq a_m$, so
$a_i<a_{l+1}-a_{l-1}\leq a_m$. In effect $i\leq m-1$. At the same time we
also have $a_j>a_{m-1}$, so $j\geq m$. All this implies
$$a_j-a_i\geq a_m-a_{m-1}.$$
This contradiction ends the proof.\qed
\begin{lemma}
\label{aux3}
Let $m\geq 2$. If the difference $a_j-a_i$ belongs to the interval
$[a_m-a_1,a_m-1]$ then it can only be one of the numbers $a_m-a_1$, $a_m-a_1+1$
and $a_m-1$.
\end{lemma}
{\bf Proof.} Suppose that
$$a_m-a_1<a_j-a_i<a_m-1.$$
From Lemma \ref{aux1} we get $a_{j+1}\leq a_i+a_m<a_j+a_1$. In this case
Proposition \ref{lemmadiffa1-1} implies $a_{j+1}=a_j+a_1-1$.
Moreover, we have
$$a_j+2\leq a_i+a_m\leq a_j+a_1-1=a_{j+1}.$$
Hence $a_i+a_m=a_{j+1}$ (otherwise the amount $a_i+a_m$ would not have a greedy
decomposition in two steps). Eventually we get
$$a_j-a_i=(a_{j+1}-a_1+1)-(a_{j+1}-a_m)=a_m-a_1+1.$$\qed
\begin{lemma}
\label{aux4}
If $a_1<a_2-a_1+1<a_2-1$ then the value $a_2-a_1+1$ cannot be
attained by any of the differences $a_j-a_i$.
\end{lemma}
{\bf Proof.} First note that the given inequalities imply $a_1\geq 3$.
Suppose that $j$ is the minimal number for which there exists an $i$ such that
$a_j-a_i=a_2-a_1+1$. Clearly $i\geq 2$. From the proof of Lemma \ref{aux3} we
know that
$$a_{j+1}=a_i+a_2=a_j+a_1-1.$$
Let $m$ be the maximal index for which $a_m-a_{m-1}<a_i$. Then $a_{m+1}-a_m\geq a_i$.
If $a_{m-1}+a_j<a_{m+1}$ then considering the amounts $a_{m-1}+a_i$ and $a_{m-1}+a_j$ and their
greedy decompositions we obtain a contradiction with the minimality of $j$ in the usual way. Hence
we may assume that
$$a_{m-1}+a_j\geq a_{m+1}.$$
If $a_{m-1}+a_j=a_{m+1}$ then consider the amount $a_{m-1}+a_{j+1}$. It
satisfies
$$a_{m-1}+a_{j+1}=a_{m+1}+a_1-1<a_{m+1}+a_2\leq a_{m+1}+a_i\leq a_{m+2}.$$
Since $a_1-1\geq 2$ this amount cannot be greedily decomposed in two steps, so
we have a contradiction, which means that
$$a_{m-1}+a_j\geq a_{m+1}+1$$
which in turn implies
$$a_m-a_{m-1}=(a_{m+1}-a_{m-1})-(a_{m+1}-a_m)\leq a_j-1-a_i=a_2-a_1.$$
We know from the previous lemmas that in this case the only possible values of the difference
$a_m-a_{m-1}$ are $a_2-a_1$, $a_1$ and $a_1-1$. Let us investigate these cases separately.
{\bf Case 1.} $a_m-a_{m-1}=a_2-a_1$. Then $a_{m+1}\geq a_m+a_i$ and
$$a_{m+1}\leq a_{m-1}+a_j-1=a_m-a_2+a_1+a_j-1=a_m+a_i$$
hence $a_{m+1}=a_m+a_i=a_{m-1}+a_j-1$. Now consider the amount $a_m+a_j$. It
satisfies
$$a_m+a_j=a_m+a_i+a_2-a_1+1=a_{m+1}+a_2-a_1+1<a_{m+1}+a_2\leq a_{m+1}+a_i\leq a_{m+2}$$
so it could be decomposed greedily in two steps only if $a_2-a_1+1$ was one of
the coins, which is not true by the assumptions of the lemma.
{\bf Case 2.} $a_m-a_{m-1}=a_1$. Now consider the amount $a_{m-1}+a_2$:
$$a_m<a_{m-1}+a_2=a_m+(a_2-a_1)<a_m+a_i\leq a_{m+1}.$$
This amount can only be decomposed optimally if $a_2-a_1$ is a coin. Since
$a_1\geq 3$, by Proposition \ref{lemmanot1} we have $a_2-a_1\neq 1$. Therefore
$a_2-a_1=a_1$ and we have
$$a_m-a_{m-1}=a_1=a_2-a_1$$
and the argument from case 1 can be repeated.
{\bf Case 3.} $a_m-a_{m-1}=a_1-1$. An exact repetition of case 2 shows that in
this case $a_2-a_1+1$ would have to be one of the coins. However, this
possibility is excluded by the assumptions of our lemma. \qed
The results from this section, together with Proposition \ref{lemmadiffa1-1} can
be used to characterize the set of possible values of $a_j-a_i$ which fit in the
interval $(1,a_2)$. For a currency $A$ let $S(A)=\{a_j-a_i: 0\leq
i<j\leq k\}$.
\begin{corollary}
\label{possibletoa2}
For an orderly currency $A=(1,a_1,a_2,\ldots,a_k)$
\begin{itemize}
\item[(a)] we always have
$$S(A)\cap (1,a_1)\subset\{a_1-1\}$$
$$S(A)\cap (a_1,a_2)\subset\{a_2-a_1,a_2-1\}$$
\item[(b)] if $a_2=2a_1-1$ or $a_2=2a_1$ then $S(A)\cap
(1,a_2)\subset\{a_1-1,a_1,a_2-1\}$
\item[(c)] if $a_2>2a_1$ then $S(A)\cap (1,a_2)=\{a_1-1,a_2-a_1,a_2-1\}$
\end{itemize}
\end{corollary}
{\bf Proof.} Property (a) is just a restatement of Proposition
\ref{lemmadiffa1-1} and Lemmas \ref{aux2}, \ref{aux3} and \ref{aux4}.
By Theorem \ref{theoremdiffaa} there are no other possible values of $a_2$
except those in (b) and (c). In both cases, if $a_j-a_i < a_1$, then
Proposition \ref{lemmadiffa1-1} applies.
In case (c) $a_1<a_2-a_1<a_2-a_1+1\leq a_2-1$ and an application of Lemmas
\ref{aux3} and \ref{aux4} proves that our theorem enumerates all possible
elements of $S(A)\cap (1,a_2)$. Of course all the given values are attained, so
in (c) we are free to use equality rather than inclusion.
In case (b) the difference $a_1$ may or may not be attained (consult the
currencies $(1,3,5)$ and $(1,3,5,8,10,15)$). Once again one needs to combine
the aforementioned lemmas; we omit the details.\qed
{\bf Remark.} Corollary \ref{possibletoa2} and Theorem \ref{theoremdiffaa} give
two independent conditions that must be satisfied by orderly currencies. For
instance every three-coin currency satisfies Corollary \ref{possibletoa2}, but
not necessarily Theorem \ref{theoremdiffaa} (it is also easy to imagine more
complicated examples of this kind). On the other hand, the currency $(1,3,7,12)$
satisfies Theorem \ref{theoremdiffaa}, but $12-7=5\not\in\{7-3,7-1\}$, so part
(a) of Corollary \ref{possibletoa2} is violated.
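The two example currencies from case (b) can be checked directly; a sketch (the helper `S` is ours and mirrors the definition above):

```python
def S(coins):
    """S(A) = {a_j - a_i : 0 <= i < j <= k}, the set of coin differences."""
    return {b - a for i, a in enumerate(coins) for b in coins[i + 1:]}

# Both currencies have a_1 = 3 and a_2 = 5 = 2*a_1 - 1, so case (b) allows
# S(A) to meet the open interval (1, a_2) only in {2, 3, 4}.  The difference
# a_1 = 3 is attained only in the longer currency (via 8 - 5 = 3).
print(sorted(S((1, 3, 5)) & {2, 3, 4}))             # -> [2, 4]
print(sorted(S((1, 3, 5, 8, 10, 15)) & {2, 3, 4}))  # -> [2, 3, 4]
```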
Our last theorem in this section will be important in section
\ref{section7}. It can roughly be stated as ``if some two consecutive
differences are large, then the subsequent differences must also be large''.
\begin{theorem}
\label{theorembigdiff}
Suppose $(1,a_1,\ldots,a_k)$ is orderly, $m\geq 2$ and
$$a_{m-1}>2a_{m-2}, \ a_{m}>2a_{m-1}.$$
Then for every $t\geq m$ we have $a_{t+1}-a_t\geq a_m-a_{m-1}$.
\end{theorem}
{\bf Proof.} Suppose, on the contrary, that $a_{t+1}-a_t< a_m-a_{m-1}$ for some
$t\geq m$, and let $t$ be the smallest index with these properties. Choose $s$
as the largest index for which
$$a_{s+1}-a_s<a_t-a_{m-2}$$
(such numbers $s$ exist; for instance $s=t-1$ satisfies this inequality).
Note that by maximality of $s$ we have $a_{s+2}-a_{s+1}\geq a_t-a_{m-2}$
(possibly $a_{s+2}=\infty$) and $a_{s+3}-a_{s+2}\geq a_t-a_{m-2}$ (if
$a_{s+2}<\infty$). The proof is split into two cases.
{\bf Case 1.} $a_s+a_{t+1}<a_{s+2}$. With this assumption we have
$$a_{s+1}<a_s+a_t<a_s+a_{t+1}<a_{s+2}$$
so there exist indices $r,l$ such that
\begin{center}
\begin{tabular}{c}
$a_s+a_t=a_{s+1}+a_r,$\\
$a_s+a_{t+1}=a_{s+1}+a_l,$
\end{tabular}
\end{center}
with $r<l\leq t$. This implies
$$a_l-a_{l-1}\leq a_l-a_r=a_{t+1}-a_t<a_m-a_{m-1}.$$
Since $l-1< t$ and $t$ was chosen to be minimal with respect to the condition
$t\geq m$ and the above inequality, we obtain $l-1<m$. Since $l=m$ does not
satisfy the above inequality, we have $l\leq m-1$ and $r\leq m-2$, but then
$$a_{s+1}-a_s=a_t-a_r\geq a_t-a_{m-2}$$
contradicting the choice of $s$. This completes the first case of the proof.
{\bf Case 2.} Now suppose $a_s+a_{t+1}\geq a_{s+2}$. We are going to prove the
following sequence of inequalities:
\begin{center}
\begin{tabular}{lc}
(1) & $a_{s+1}-a_s>a_{m-2}$\\
(2) & $a_{s+2}-a_s>a_m$\\
(3) & $a_{s+1}-a_s<a_m$\\
(4) & $a_{s+1}-a_s\geq a_m-a_{m-1}$\\
(5) & $a_{s+2}<a_{s+1}+a_t<a_{s+1}+a_{t+1}<a_{s+3}$
\end{tabular}
\end{center}
(1): We always have
$$a_{s+2}-a_s>a_{s+2}-a_{s+1}\geq a_t-a_{m-2}\geq a_m-a_{m-2}>a_{m-1}.$$
If we also had $a_{s+1}-a_s\leq a_{m-2}$ then
$$a_{s+1}\leq a_s+a_{m-2}<a_s+a_{m-1}<a_{s+2}.$$
As usual, this means that $a_{m-1}-a_{m-2}=a_l-a_r$ for some $r<l\leq m-2$ or
$a_{m-1}-a_{m-2}=a_l$ for $l\leq m-2$. In either case $a_{m-1}-a_{m-2}\leq
a_{m-2}$, contradicting the assumptions of the theorem. Therefore
$a_{s+1}-a_s>a_{m-2}$.
(2): This follows straight from (1) and the maximality of $s$:
$$a_{s+2}-a_s=(a_{s+2}-a_{s+1})+(a_{s+1}-a_s)>a_t-a_{m-2}+a_{m-2}=a_t\geq a_m.$$
(3): Since we assumed $a_{s+2}-a_s\leq a_{t+1}$ for this case, we obtain, using
the properties of $s$ and $t$, that
$$a_{s+1}-a_s=(a_{s+2}-a_s)-(a_{s+2}-a_{s+1})\leq
a_{t+1}-(a_t-a_{m-2})<a_m-a_{m-1}+a_{m-2}<a_m.$$
(4): By (2) and (3) we have $a_{s+1}<a_s+a_m<a_{s+2}$, therefore
$a_s+a_m=a_{s+1}+a_r$ for some $r\leq m-1$. Finally
$$a_{s+1}-a_s=a_m-a_r\geq a_m-a_{m-1}.$$
(5): First note that by $a_t\geq a_m$ and $2a_{m-2}<a_{m-1}$ we obtain
$$a_{s+3}-a_{s+1}\geq 2(a_t-a_{m-2})=a_t+a_t-2a_{m-2}>
a_t+a_m-a_{m-1}>a_{t+1}.$$
Moreover, by (4) and the assumption $a_{s+2}-a_s\leq a_{t+1}$ we get
$$a_{s+2}-a_{s+1}=(a_{s+2}-a_s)-(a_{s+1}-a_s)\leq a_{t+1}-(a_m-a_{m-1})<a_t.$$
This ends the proof of (1)--(5).
\begin{figure}
\epsfbox{figs.4}
\caption{The situation in Case 2 of Theorem \ref{theorembigdiff}.}
\end{figure}
Now (5) implies the existence of $r<l\leq t$ such that
\begin{center}
\begin{tabular}{c}
$a_{s+1}+a_t=a_{s+2}+a_r,$\\
$a_{s+1}+a_{t+1}=a_{s+2}+a_l.$
\end{tabular}
\end{center}
As a consequence of these formulae we obtain the inequality
$$a_r=a_t-(a_{s+2}-a_{s+1})\leq a_t-(a_t-a_{m-2})=a_{m-2}, \ \textrm{hence }
r\leq m-2,$$
which in turn implies
$$a_l=(a_{t+1}-a_t)+a_r<a_m-a_{m-1}+a_{m-2}<a_m, \ \textrm{hence } l\leq m-1.$$
Combining this, we get
$$a_{s+1}-a_s=a_{s+2}-a_s+a_l-a_{t+1}\leq a_{t+1}+a_l-a_{t+1}=a_l\leq a_{m-1}.$$
However, by (4) $a_{s+1}-a_s\geq a_m-a_{m-1}>a_{m-1}$, so we have a
contradiction which ends the proof of case 2, and the whole theorem.\qed
\section{$+/-$-classes}
\label{section6}
If $A=(1,a_1,\ldots,a_k)$ is orderly, a prefix sub-currency, i.e.\ a
currency of the form $A'=(1,a_1,\ldots,a_l)$ with $l<k$, might not be orderly
(for instance, $(1,2,a,a+1,2a)$ is orderly, but $(1,2,a,a+1)$ is not for $a\geq
4$, as in Proposition \ref{lemma5}).
This situation was still quite manageable in the case of 5 coins, but it gets
more and more complicated as the number of coins increases, thus making inductive
analysis (possibly using the one-point theorem) impossible.
To describe the prefix currencies we introduce the notion of {\it
$+/-$-classes}. To every currency $A=(1,a_1,\ldots,a_k)$ we may assign a
pattern of $k+1$ signs {\tt +} and {\tt -}, defined as follows: the $l$-th
symbol of the pattern ($l=0,\ldots,k$) is {\tt +} if the prefix currency
$(1,a_1,\ldots,a_l)$ is orderly and {\tt -} in the opposite case. A {\it
$+/-$-class} is the set of all currencies corresponding to a given
$+/-$-pattern. For instance, the pattern \verb?++++?$\ldots$\verb?+++?
corresponds to totally orderly currencies. Another well-described example
is the $+/-$-class given by the pattern \verb?+++-+? --- it consists precisely of the
currencies $(1,2,a,a+1,2a)$ with $a\geq 4$ (this is the consequence of
Proposition \ref{lemma5}, since an orderly $5$-coin currency which is not
totally orderly satisfies part (1) of that proposition).
The $+/-$-patterns that correspond to non-empty classes cannot be completely
arbitrary, for instance, if a pattern ends with a {\tt +} then it must begin
with {\tt +++} -- this is a consequence of Theorem \ref{theoremprefixorderly}.
The patterns beginning with {\tt +++} and ending with {\tt +} will be called
{\it proper}. Mysteriously, some proper patterns describe empty classes. Here is
a sample proposition of this sort:
\begin{proposition}
\label{emptyclass}
The $+/-$-class described by the pattern \verb?+++-+-+? is empty.
\end{proposition}
{\bf Proof.} Suppose that $A=(1,a_1,a_2,a_3,a_4,a_5,a_6)$ is a coinage system in
the class \verb?+++-+-+?. By case (1) of Proposition \ref{lemma5} we know that
in fact $A$ is of the form
$$(1,2,a,a+1,2a,a_5,a_6)$$
for some $a\geq 4$, $2a< a_5< a_6$. By the one-point theorem some multiple of
$2a$ is a counterexample for $(1,2,a,a+1,2a,a_5)$. Extending this by $a_6$ must
fix this problem, hence
$$a_6-a_5<2a.$$
Since $A$ is orderly, there exist numbers $r, s$ such that:
\begin{center}
\begin{tabular}{c}
$a_5+a_5=a_6+a_r,$\\
$a_5+2a=a_6+a_s,$
\end{tabular}
\end{center}
with $a_r\leq 2a$, $a_s\leq a+1$, $1 \leq a_s<a_r$. Subtracting the two
equations yields $a_5-2a=a_r-a_s$. Possible differences $a_r-a_s$ ($0\leq
s<r\leq 4$) form the set
$$\{1,a-2,a-1,a,2a-2,2a-1,2a\}$$
so the possible values of $a_5$ are $2a+1,3a-2,3a-1,3a,4a-2,4a-1,4a$.
The values $3a-1,3a,4a-2,4a-1,4a$ can be excluded from this set, since then
$(1,2,a,a+1,2a,a_5)$ would be orderly, which can be checked easily by the
one-point theorem (the ``suspected'' amount to be tested for optimality is
$4a$).
Therefore we are left with $a_5\in\{2a+1,3a-2\}$.
If $a_5=2a+1$ then the greedy algorithm for $(1,\ldots,a_5)$ fails to be optimal
already for $3a=2a+a$, hence $a_6\leq 3a$. On the other hand, all three numbers
$2a+2a$, $2a+(2a+1)$ and $(2a+1)+(2a+1)$ can be obtained with two coins, hence
$4a-a_6$, $4a+1-a_6$ and $4a+2-a_6$ must be three consecutive integers which are
coins, all less than $a_6$. This is only possible if $a_6=4a$, contradiction.
Now suppose that $a_5=3a-2$. Then for the number $2a+(a+1)=3a+1$ not to be a
counterexample we must have $3a-1\leq a_6\leq 3a+1$. If $a_6=3a-1$ then
$4a-2=(3a-2)+a=(3a-1)+(a-1)$ is a counterexample ($a-1$ is not a coin). If
$a_6=3a$ then the counterexample is $4a-1=(3a-2)+(a+1)=3a+(a-1)$ (reason as
before). Finally, if $a_6=3a+1$ then $4a=2a+2a=(3a+1)+(a-1)$ is the
counterexample.\qed
Of course, given a currency, we may recover its $+/-$-class in $O(k^4)$ time
simply by repeating Pearson's algorithm \cite{Pear} for each prefix
sub-currency. The reverse problem, to determine whether a given proper
$+/-$-pattern describes a non-empty $+/-$-class, is actually much harder and we
have not been able to find any algorithm solving it.
From this point of view the most ``messy'' orderly currencies are those which
belong to the class determined by \verb?+++----?$\ldots$\verb?--+?. These
classes are indeed non-empty for $k\equiv 0,2\pmod{3}$. Their representatives for
$k=3l$ and $k=3l-1$, respectively, are
\begin{center}
\begin{tabular}{c}
$(1,2,\ 4,5,\ 7,8,\ldots,3l-2,3l-1,\ 3l+1,\ 3l+4,\ldots,6l-2),$\\
$(1,2,\ 4,5,\ 7,8,\ldots,3l-2,3l-1,\ 3l+2,\ 3l+5,\ldots,6l-4).$
\end{tabular}
\end{center}
On the other hand, there seem to be no coinage
systems of type \verb?+++----?$\ldots$\verb?--+? for $k\equiv 1\pmod{3}$, but we have
not been able to prove this.
\section{Classification of orderly sub-currencies}
\label{section7}
Every set $P=\{i_0,i_1,\ldots,i_l\}\subset\{0,1,\ldots,k\}$, where
$0=i_0<i_1<\ldots<i_l$ determines a sub-currency
$(a_{i_0},a_{i_1},\ldots,a_{i_l})$ of any currency $A=(1,a_1,\ldots,a_k)$.
From Theorem \ref{theoremprefixorderly} we know that if $A$ is orderly then the
sub-currency determined by $P=\{0,1,l\}$ ($2\leq l\leq k$) is also orderly. Is
this just a lonely phenomenon, or could a similar theorem be proved for some
other sets $P$?
\begin{definition}
\label{hered}
The set $P$ of the form given above will be called \emph{hereditary} if the
following is true:
\begin{center}
for every orderly currency $A=(1,a_1,\ldots,a_k)$\\ the sub-currency
determined by $P$ is also orderly
\end{center}
\end{definition}
Let us enumerate some interesting classes of subsets of $\{0,1,\ldots,k\}$:
\begin{itemize}
\item[type 1:] the singleton set $\{0\}$
\item[type 2:] the sets $\{0,l\}$ for $1\leq l\leq k$
\item[type 3:] the sets $\{0,1,l\}$ for $2\leq l\leq k$
\item[type 4:] the sets $\{0,1,2,l\}$ for $4\leq l\leq k$
\item[type 5:] the full set $\{0,1,\ldots,k\}$
\end{itemize}
Note that $\{0,1,2,3\}$ is a peculiar exception: it is \emph{not} of type 4, and
indeed it is not hereditary (an immediate example is the orderly currency
$(1,2,a,a+1,2a)$ for $a\geq 4$ and its non-orderly sub-currency $(1,2,a,a+1)$
determined by $\{0,1,2,3\}$).
We already know that sets $P$ of type 1, 2, 3 or 5 are hereditary. In this
section we shall prove that sets that are not specified in types 1--5 are not
hereditary.\footnote{To be precise, every set $P$ should always be thought of as
a subset of $\{0,1,\ldots,k\}$ for a certain $k$. In most cases $k$ will be
implicit, but to improve clarity we shall sometimes stress this connection by
writing $P\subset\{0,1,\ldots,k\}$.}
We also conjecture that all sets $P$ of type 4 are hereditary, and we prove this
conjecture under some mild additional assumptions. The general case remains
open.
Before proceeding with the elimination of non-hereditary subsets $P$ let us
make a few observations.
\begin{lemma}
For any $l\geq 3$ let $B_l$ denote the currency
$$B_l=(1,2,3,\ldots,l-1,2l-2,2l-1,4l-4)$$
where $a_l=2l-1$. Then $B_l$ is orderly of type \verb?+++?$\ldots$\verb?+-+?.
\end{lemma}
{\bf Proof.} The prefix currency $(1,2,3\ldots,l-1)$ is clearly of type
\verb?+++?$\ldots$\verb?++?. Extending this by $2(l-1)$ we get an orderly
currency by the one-point theorem. The next prefix, ending in $2l-1$ is not
orderly since $2\cdot 2(l-1)=4l-4$ is the smallest counterexample. The complete
currency is orderly which can be proved easily by the techniques from the proof
of Theorem \ref{onept}.\qed
\begin{lemma}
For any $m>l\geq 2$ and $p\geq 1$ let $A_{l,m}(p)$ denote the currency:
$$A_{l,m}(p) = (a_0,a_1,a_2,\ldots,a_{l-1},a_l,a_{l+1},a_{l+2},\ldots,a_{m})=$$
$$(1,2,3,\ldots,l,pl,(2p-1)l,(3p-2)l,\ldots,((m-l+1)p-(m-l))l)$$
where $a_l=pl$. This currency is orderly. Moreover, if $p>m-l$ then $\lceil
a_m/a_l \rceil = m-l+1$.
\end{lemma}
{\bf Proof.} The given currency is in fact of type
\verb?++++?$\ldots$\verb?+++?, which can be verified inductively by the
one-point theorem: to check that $(1,a_1,\ldots,a_{l+i})$ is orderly for $i \geq
1$ it suffices to observe that
$$2a_{l+i-1} = 2(pi-(i-1))l= (p(i+1)-i)l + (p(i-1)-(i-2))l=a_{l+i}+a_{l+i-2}.$$
To prove the last statement note that
$$a_{m}=((m-l+1)p-(m-l))l<(m-l+1)pl=(m-l+1)a_l$$
and, if $p>m-l$:
$$a_{m}=((m-l+1)p-(m-l))l = (m-l)pl + l(p-(m-l)) > (m-l)a_l.$$\qed
\begin{lemma}
\label{obs3}
An orderly currency may be extended by any multiple of its
highest coin and the resulting currency will be orderly.
\end{lemma}
{\bf Proof.} A trivial consequence of the one-point theorem.\qed
The last observation will be used in the following way: suppose we want to
prove that some set $P=\{i_0,i_1,\ldots,i_l\}\subset\{0,1,\ldots,k\}$ is not
hereditary. First we find a shorter orderly currency $A'=(1,a_1,\ldots,a_r)$,
such that the sub-currency determined by
$P'=\{i_0,i_1,\ldots,i_{r'}\}\subset\{0,1,\ldots,r\}$ is not orderly (here
$r'< r\leq k$) and $i_{r'+1}>r$ or $r'=l$. Let $c$ be any counterexample for
this sub-currency and let $m$ be any number for which $ma_{r}>c$. Then the
currency $$A=(1,a_1,\ldots,a_r,ma_r,2ma_r,\ldots,(k-r)ma_r)$$
is orderly (Lemma \ref{obs3}) and its sub-currency determined by $P$ is not,
since all the added coins are too large to fix the problem with $c$ (the exact
form of $P\setminus P'$ is actually immaterial, it is important that its
smallest element is at least $r+1$).
\begin{theorem}
The sets $P$ not of the form 1, 2, 3, 4 or 5 are not hereditary.
\end{theorem}
{\bf Proof.} Let $P=\{i_0,\ldots,i_s\}\subset\{0,\ldots,k\}$, $i_0=0$, be such a
set. Let $r$ be the largest index for which $i_r=r$ (i.e. $\{0,\ldots,r\}\subset
P$, $r+1\not\in P$). We shall consider a few cases:
{\bf Case $3\leq r<k$}. Here we employ the orderly currency $B_r$. Its
sub-currency $(1,a_1,\ldots,a_r)$ is not orderly. If $r=k-1$ then we are done,
while for $r<k-1$ we must expand $B_r$ to an orderly currency with $k+1$
coins in the standard way described earlier. The resulting currency will
have a disorderly sub-currency determined by $P$.
{\bf Case $r=2$}. In this case $|P|\geq 5$, since otherwise $P$ would be of
the form $\{0,1,2\}$ or $\{0,1,2,l\}$ for some $l\geq 4$ and these sets are of
type 3 and 4, respectively. Denote $l=i_3\geq 4$, $m=i_4$ and consider the
currency $A_{l,m}(p)$ with $p>m-l$. Its sub-currency
$$(1,2,3,a_l,a_m)$$
is not orderly since the amount
$$\lceil a_m/a_l\rceil a_l = (m-l+1)a_l$$
paid greedily splits into the coin $a_m$ and some of the coins $1,2,3$, thus
requiring at least
$$1+\frac{(m-l)l}{3} > 1+(m-l)$$
coins, which is more than if it was paid with $m-l+1$ copies of $a_l$. Now it
suffices to expand this currency to a currency with $k+1$ coins as previously.
{\bf Case $r=1$}. Then $|P|\geq 4$, since otherwise $P$ would be of the form
$\{0,1,l\}$, which is of type 3. Let $l=i_2\geq 3$ and $m=i_3$ and consider the
currency $A_{l,m}(p)$ with $p>m-l$. The sub-currency $(1,2,a_l,a_m)$
is not orderly for the same reason as previously: the amount
$\lceil a_m/a_l\rceil a_l = (m-l+1)a_l$
must be paid greedily with at least
$1+\frac{(m-l)l}{2} > 1+(m-l)$
coins and the proof follows.
{\bf Case $r=0$}. Clearly $|P|\geq 3$, since sets of the form $\{0\}$ and
$\{0,l\}$ are of type 1 and 2. Let $l=i_1\geq 2$ and $m=i_2$. Repeat the
same arguments with the currency $A_{l,m}(p)$ ($p>m-l$) and its sub-currency
$(1,a_l,a_m)$: this time the amount
$\lceil a_m/a_l\rceil a_l = (m-l+1)a_l$
must be paid greedily with at least
$1+\frac{(m-l)l}{1} > 1+(m-l)$
coins.\qed
Sets $P$ of type 4 are the most peculiar ones. We believe they are also
hereditary; that is, we have the following:
\begin{conjecture}
\label{contype4}
If $A=(1,a_1,\ldots,a_k)$ is orderly, then the currency $(1,a_1,a_2,a_l)$ is
also orderly for every $4\leq l\leq k$.
\end{conjecture}
While this is not known to be true in general, we can prove this conjecture
under some mild additional conditions.
\begin{theorem}
\label{theorempartialconjecture}
Conjecture \ref{contype4} is true if we additionally assume that $a_2>2a_1$ and
$a_3>2a_2$.
\end{theorem}
{\bf Proof.} We shall verify that $(1,a_1,a_2,a_l)$ is orderly by
Proposition \ref{lemma4}. Let $m=\lceil a_l/a_2\rceil$.
By Theorem \ref{theorembigdiff} for every $l\geq 3$ we have the first
of the following inequalities:
$$a_{l+1}-a_l\geq a_3-a_2>a_2>ma_2-a_l.$$
It means that $a_{l+1}>ma_2$, so there is no new coin between $a_l$ and
$ma_2$, and the greedy decomposition of $ma_2$ with respect to $A$ involves
only the coins $1,a_1,a_l$. This justifies the first equality in the following
comparison:
$$\textrm{grd}_{(1,a_1,a_2,a_l)}(ma_2)=\textrm{grd}_{A}(ma_2)=\textrm{opt}_{A}(ma_2)\leq
\textrm{opt}_{(1,a_1,a_2,a_l)}(ma_2)$$
and by Proposition \ref{lemma4} the proof is complete.\qed
\section{Closing remarks and open problems}
\label{section8}
Throughout this paper we have proposed some possible approaches to the problem
of describing orderly coinage systems and their interesting properties. Some of
these techniques have enabled us to prove the most important results of
this paper, namely the structural theorems, like Theorem
\ref{theoremdiffaa} and Corollary \ref{possibletoa2}, or to give concise
descriptions of small systems. There is still quite a lot of work to be done in
the following areas:
\begin{itemize}
\item {\bf sub-currencies}: prove Conjecture \ref{contype4}, thus completing
the classification of orderly sub-currencies.
\item {\bf prefix sub-currencies}: invent an algorithm to decide whether a
given $+/-$--pattern describes a non-empty class or devise some other
properties of such $+/-$--patterns. Another interesting conjecture, to which we
have not found a counterexample, is:
\begin{conjecture}
If a $+/-$--class is non-empty, then it has a representative
$A=(1,a_1,\ldots,a_k)$ with $a_1=2$.
\end{conjecture}
\item {\bf differences}: can Corollary \ref{possibletoa2} be generalized? In
other words, what can be said about the differences $a_j-a_i$ that belong to
$(a_{m-1},a_m)$ for some $m$? Is it true that in general
$$S(A)\cap (a_{m-1},a_m)\subset \{a_m-a_{m-1},a_m-a_{m-2},\ldots,a_m-1\},$$
where $S(A)=\{a_j-a_i: 0\leq i<j\leq k\}$ for an orderly currency $A$? We
already know this is true for $m=1,2$. The lemmas from section \ref{section6}
provide some partial results in the general case as well.
\item {\bf extending}: Theorem \ref{theoremdiffaa}, Corollary
\ref{possibletoa2} and Conjecture \ref{contype4} can be thought of as {\it
obstructions} against extending: if a currency does not satisfy one of these
conditions then it cannot be extended to an orderly currency by appending new
coins of high denominations (higher than all the existing coins). What are the
other invariants of this sort? Is there an algorithm that decides if a currency
can be extended to an orderly one? Problems related to obstructions and
extending can also be found in \cite{CCS}.
\end{itemize}
{\bf Acknowledgements.} We are indebted to the referee, whose
valuable suggestions improved both the presentation and some technical aspects
of our paper. We also thank Lenore Cowen for pointing us to \cite{CCS}.
| {
"timestamp": "2008-08-20T10:15:23",
"yymm": "0801",
"arxiv_id": "0801.0120",
"language": "en",
"url": "https://arxiv.org/abs/0801.0120",
"abstract": "We investigate the structure of the currencies (systems of coins) for which the greedy change-making algorithm always finds an optimal solution (that is, a one with minimum number of coins). We present a series of necessary conditions that must be satisfied by the values of coins in such systems. We also uncover some relations between such currencies and their sub-currencies.",
"subjects": "Combinatorics (math.CO)",
"title": "Combinatorics of the change-making problem",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9865717428891156,
"lm_q2_score": 0.8152324871074608,
"lm_q1q2_score": 0.804285335665436
} |
https://arxiv.org/abs/1311.2657 | Random perturbation of low rank matrices: Improving classical bounds | Matrix perturbation inequalities, such as Weyl's theorem (concerning the singular values) and the Davis-Kahan theorem (concerning the singular vectors), play essential roles in quantitative science; in particular, these bounds have found application in data analysis as well as related areas of engineering and computer science. In many situations, the perturbation is assumed to be random, and the original matrix has certain structural properties (such as having low rank). We show that, in this scenario, classical perturbation results, such as Weyl and Davis-Kahan, can be improved significantly. We believe many of our new bounds are close to optimal and also discuss some applications. | \section{Introduction}
The singular value decomposition of a real $m \times n$ matrix $A$ is a factorization of the form $A = U \Sigma V^\mathrm{T}$, where $U$ is an $m \times m$ orthogonal matrix, $\Sigma$ is an $m \times n$ rectangular diagonal matrix with non-negative real numbers on the diagonal, and $V^\mathrm{T}$ is an $n \times n$ orthogonal matrix. The diagonal entries of $\Sigma$ are known as the \emph{singular values} of $A$. The $m$ columns of $U$ are the \emph{left-singular vectors} of $A$, while the $n$ columns of $V$ are the \emph{right-singular vectors} of $A$. If $A$ is symmetric, the singular values are given by the absolute values of the eigenvalues, and the singular vectors are just the eigenvectors of $A$. Here, and in the sequel, whenever we write \emph{singular vectors}, the reader is free to interpret this as left-singular vectors or right-singular vectors, provided the same choice is made throughout the paper.
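The factorization and the ordering convention are easy to check numerically; here is a minimal NumPy sketch (the shapes and the random seed are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 5
A = rng.standard_normal((m, n))

U, s, Vt = np.linalg.svd(A)      # U: m x m orthogonal, Vt: n x n orthogonal
Sigma = np.zeros((m, n))         # rectangular diagonal factor
Sigma[:min(m, n), :min(m, n)] = np.diag(s)

print(np.allclose(A, U @ Sigma @ Vt))              # True: A = U Sigma V^T
print(np.all(s[:-1] >= s[1:]) and np.all(s >= 0))  # True: sigma_1 >= ... >= 0
```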
Consider a real (deterministic) $m \times n$ matrix $A$ with singular values
$$\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_{\min\{m,n\}} \geq 0$$
and corresponding singular vectors $v_1, v_2, \ldots, v_{\min\{m,n\}}.$ We will call $A$ the data matrix. In general, the vector $v_i$ is not unique. However, if $\sigma_i$ has multiplicity one, then $v_i$ is determined up to sign.
An important problem in statistics and numerical analysis is to compute the first $k$ singular values and vectors of $A$. In particular, the largest few singular values and corresponding singular vectors are typically the most important. Among others, this problem lies at the heart of Principal Component Analysis (PCA), which has a very wide range of applications (for many examples, see \cite{KVbook, LR} and the references therein) and of the closely related low rank approximation procedure often used in theoretical computer science and combinatorics. In applications, $m,n$ are typically large and $k$ is small, often a fixed constant.
A problem of fundamental importance in quantitative science (including pure and applied mathematics, statistics, engineering, and computer science) is to estimate how a small perturbation to the data affects the spectrum. This problem has been discussed in virtually every textbook on quantitative linear algebra and numerical analysis (see, for instance, \cite{BT, Hig1, Hig2, SS}).
A basic model is as follows. Instead of $A$, one needs to work with $A+E$, where $E$ represents the perturbation matrix. Let
$$ \sigma_1' \geq \cdots \geq \sigma_{\min\{m,n\}}' \geq 0 $$
denote the singular values of $A+E$ with corresponding singular vectors $v_1', \ldots, v_{\min\{m,n\}}'$. We consider the following natural questions.
\begin{question}
Is $v_i'$ a good approximation of $v_i$?
\end{question}
\begin{question} \label{quest:weyl}
Is $\sigma_i'$ a good approximation of $\sigma_i$?
\end{question}
These two questions are addressed by the Davis-Kahan-Wedin sine theorem and Weyl's inequality.
Let us begin with the first question in the case when $i=1$. A canonical way (coming from the numerical analysis literature; see for instance \cite{GVL})
to measure the distance between two unit vectors $v$ and $v'$ is to look at $ \sin \angle(v,v')$,
where $\angle(v,v')$ is the angle between $v$ and $v'$ taken in $[0,\pi/2]$.
It has been observed by numerical analysts (in the setting where $E$ is deterministic) for quite some time that the key parameter to consider in the bound is the gap (or separation)
\begin{equation} \label{def:delta}
\delta := \sigma_1 - \sigma_2,
\end{equation}
between the first and second singular values of $A$.
The first result in this direction is the famous Davis-Kahan sine $\theta$ theorem \cite{DK} for Hermitian matrices.
The non-Hermitian version was proved later by Wedin \cite{W}.
Throughout the paper, we use $\|M\|$ to denote the spectral norm of a matrix $M$. That is, $\|M\|$ is the largest singular value of $M$.
\begin{theorem}[Davis-Kahan, Wedin; sine theorem] \label{wedin}
$$ \sin \angle(v_1, v_1') \leq 2 \frac{\|E\|}{\delta}. $$
\end{theorem}
\begin{remark}
Theorem \ref{wedin} is trivially true when $\delta \leq 2 \|E\|$ since sine is always bounded above by one. In other words, even if the vector $v_1'$ is not uniquely determined, the bound is still true for any choice of $v_1'$. On the other hand, when $\delta > 2 \|E\|$, the proof of Theorem \ref{wedin} reveals that the vector $v_1'$ is uniquely determined up to sign.
\end{remark}
Theorem \ref{wedin} is a simple corollary of \cite[Theorem V.4.4]{SS} which is originally due to Wedin \cite{W}; we present a proof below for completeness.
More generally, one can consider approximating the $i$-th singular vector $v_i$ or the space spanned by the first $i$ singular vectors $\mathrm{Span}\{v_1, \ldots, v_i \}$. Naturally, in these cases, one must consider the gaps
$$ \delta_i := \sigma_i - \sigma_{i+1}. $$
Question \ref{quest:weyl} is addressed by Weyl's inequality. In particular, Weyl's perturbation theorem \cite{Wy} gives the following deterministic bound for the singular values (see \cite[Theorem IV.4.11]{SS} for a more general perturbation bound due to Mirsky \cite{M}).
\begin{theorem} [Weyl's bound] \label{theorem:Weyl}
\begin{equation*}
\max_{1 \leq i\leq \min\{m,n\}} | \sigma_i -\sigma_i'| \le \|E \|.
\end{equation*}
\end{theorem}
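Weyl's bound is easy to probe numerically; a minimal NumPy sketch (the dimensions and noise scale below are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(42)
m, n = 30, 50
A = rng.standard_normal((m, n))          # data matrix
E = 0.1 * rng.standard_normal((m, n))    # perturbation

s  = np.linalg.svd(A,     compute_uv=False)   # sigma_i, in descending order
sp = np.linalg.svd(A + E, compute_uv=False)   # sigma_i'
spectral_norm_E = np.linalg.svd(E, compute_uv=False)[0]

print(np.max(np.abs(s - sp)) <= spectral_norm_E)   # True, as Weyl's bound guarantees
```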
For more discussions concerning general perturbation bounds, we refer the reader to \cite{B, SS} and references therein. We now pause for a moment to prove Theorem \ref{wedin}.
\begin{proof}[Proof of Theorem \ref{wedin}]
If $\delta \leq 2 \|E\|$, the theorem is trivially true since sine is always bounded above by one. Thus, assume $\delta > 2 \|E\|$. By Theorem \ref{theorem:Weyl}, we have
$$ \sigma_1' - \sigma_2' \geq \delta - 2 \|E\| > 0, $$
and hence the singular vectors $v_1$ and $v_1'$ are uniquely determined up to sign. By another application of Theorem \ref{theorem:Weyl}, we obtain
$$ \delta = \sigma_1 - \sigma_2 \leq \sigma_1 - \sigma_2' + \|E\|. $$
Rearranging the inequalities, we have
$$ \sigma_1 - \sigma_2' \geq \delta - \|E\| \geq \frac{1}{2} \delta > 0. $$
Therefore, by \cite[Theorem V.4.4]{SS}, we conclude that
$$ \sin \angle(v_1, v_1') \leq \frac{\|E\|}{ \sigma_1 - \sigma_2'} \leq 2 \frac{\|E\|}{ \delta}, $$
and the proof is complete.
\end{proof}
Let us now focus on the matrices $A$ and $E$.
It has become common practice to assume that the perturbation matrix $E$ is random.
Furthermore, researchers have observed that
data matrices are usually not arbitrary. They often possess certain structural properties. Among these properties, one of the most frequently seen is having low rank (see, for instance, \cite{CP, CR, CRT, CS, TK} and references therein).
The goal in this paper is to show that in this situation, one can significantly improve classical results like Theorems \ref{wedin} and \ref{theorem:Weyl}.
To give a quick example, let us assume that $A$ and $E$ are $n \times n$ matrices and that the entries of $E$ are independent and identically distributed (iid) random variables with zero mean, unit variance (which is just a matter of normalization), and bounded fourth moment. It is well known that in this case
$\|E\|= (2+o(1)) \sqrt n $ with high probability\footnote{We use asymptotic notation under the assumption that $n \to \infty$. Here we use $o(1)$ to denote a term which tends to zero as $n$ tends to infinity.} \cite[Chapter 5]{BS}. Thus, the above two theorems imply
\begin{corollary} \label{wedin-cor}
For any $\eta > 0$, with probability $1-o(1)$,
\begin{equation*} |\sigma_1 -\sigma _1'| \le (2 + \eta) \sqrt n, \end{equation*}
and
\begin{equation} \label{bound0} \sin \angle(v_1, v_1') \leq 2 (2+\eta) \frac{\sqrt n }{\delta}. \end{equation}
\end{corollary}
Among others, this shows that if one wants accuracy $\varepsilon$ in the first singular vector computation, $A$ needs to satisfy
\begin{equation} \label{bound1} \delta \ge 2 (2 + \eta) \varepsilon^{-1} \sqrt n. \end{equation}
We present the results of a numerical simulation in which $A$ is an $n \times n$ matrix of rank $2$ with $n=400$ and $\delta=8$, and $E$ is a random Bernoulli matrix (its entries are iid random variables that take values $\pm 1$ with probability $1/2$).
The results, shown in Figure \ref{young}, turn out to be very different from what \eqref{bound1} predicts.
It is easy to see that for the parameters $n=400$ and $\delta =8$, Corollary \ref{wedin-cor} does not give a useful bound
(since $\frac{\sqrt{n}}{\delta} = 2.5 >1$). However, Figure \ref{young} shows that, with high probability, $\sin \angle(v_1,v_1') \leq 0.2$, which means
$v_1'$ approximates $v_1$ with a relatively small error.
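The experiment is straightforward to reproduce in spirit. Below is our own sketch (not the authors' code): a rank-$2$ matrix with $\sigma_1=200$ and $\sigma_2=192$, so $\delta=8$, built from QR-orthonormalized Gaussian vectors; the seed and construction are implementation choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
# rank-2 data matrix with singular values 200 and 192, so delta = 8
u1, u2 = np.linalg.qr(rng.standard_normal((n, 2)))[0].T   # orthonormal left vectors
v1, v2 = np.linalg.qr(rng.standard_normal((n, 2)))[0].T   # orthonormal right vectors
A = 200 * np.outer(u1, v1) + 192 * np.outer(u2, v2)

E = rng.choice([-1.0, 1.0], size=(n, n))   # Bernoulli noise
w1 = np.linalg.svd(A + E)[2][0]            # top right-singular vector of A + E

sin_angle = np.sqrt(max(0.0, 1.0 - np.dot(v1, w1) ** 2))
print(sin_angle)   # compare with the figure: typically far below 1
```

Note that the Davis-Kahan-Wedin bound $2\|E\|/\delta$ is vacuous for these parameters, while the observed angle is small.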
\begin{figure}[!Ht]
\begin{center}
\includegraphics[width=12cm]{noise_n=400r=2.pdf}
\includegraphics[width=12cm]{noise_n=1000r=2.pdf}
\caption{The cumulative distribution functions of $\sin \angle(v_1, v_1')$ where $A$ is a $n \times n$ deterministic matrix with rank $2$ ($n=400$ for the figure on top and $n=1000$ for the one below) and the noise $E$ is a Bernoulli random matrix, evaluated from $400$ samples (top figure) and $300$ samples (bottom figure). In both figures, the largest singular value of $A$ is taken to be $200$.}
\label{young}
\end{center}
\end{figure}
\section{ The \emph{real dimension} and new results}
Trying to explain the inefficiency of the Davis-Kahan-Wedin bound in the above example, the second author was led to the following intuition.
\begin{quote}
If $A$ has rank $r$, all actions of $A$ focus on an $r$ dimensional subspace; intuitively then, $E$ must act like an $r$ dimensional random matrix rather than an $n$ dimensional one.
\end{quote}
This means that the {\it real dimension} of the problem is $r$, not $n$.
While it is clear that one cannot automatically ignore the (rather wild) action of $E$ outside the range of $A$, this intuition, if true, would show that
what really matters in \eqref{bound0} or \eqref{bound1} is $r$, the rank of $A$, rather than its size $n$.
If this is indeed the case, one may hope to obtain a bound of the form
\begin{equation} \label{bound2} \sin \angle(v_1, v_1') \leq C \frac{\sqrt r}{\delta}, \end{equation} for some constant $C$
(with some possible corrections).
This is much better than \eqref{bound0} when $A$ has low rank and explains the phenomenon arising from Figure \ref{young}.
In \cite{V}, the second author managed to prove
\begin{equation*}
\sin ^2 \angle(v_1, v'_1) \le C \frac{ \sqrt{r \log n} }{\delta }
\end{equation*}
under certain conditions. While the right-hand side is quite close to the optimal form
in \eqref{bound2}, the main problem here is that in the left-hand side one needs to square the sine function. The bound for
$\sin \angle (v_i, v_i') $ with $i \ge 2$ was done by an inductive argument and was rather complicated. Finally, the problem of estimating the singular values was not addressed at all in \cite{V}.
In this paper, by using an entirely different (and simpler) argument, we are going to remove the unwanted
squaring effect. This enables us to obtain a near optimal improvement of the Davis-Kahan-Wedin theorem. One can easily extend the proof to give
a (again near optimal) bound on the angle between two subspaces spanned by the first few singular vectors of $A$ and their counterparts of $A+E$.
(This is the space one often actually cares about in PCA and low rank approximation procedures.)
Finally, as a by-product, we obtain an improved version of Weyl's bound, which also supports our {\it real dimension} intuition. Our results hold under very mild assumptions on $A$ and $E$.
As a matter of fact, in the strongest results, we will not even need the entries of $E$ to be independent.
As an illustration,
let us first state a result in the case that $A$ is an $n \times n$ matrix and
$E$ is a Bernoulli matrix (the entries are iid Bernoulli random variables, taking values $\pm 1$ with probability $1/2$).
\begin{theorem} \label{theorem:main1} Let $E$ be an $n \times n$ Bernoulli random matrix and fix $\varepsilon > 0$. Then there exist constants $C_0, \delta_0 > 0$ (depending only on $\varepsilon$) such that the following holds.
Let $A$ be an $n \times n$ matrix with rank $r$ satisfying $\delta \geq \delta_0$ and $\sigma_1 \geq \max\{n,\sqrt{n}\delta\}$. Then, with probability at least $1-\varepsilon$,
\begin{equation*} \sin \angle (v_1, v_1') \leq C_0 \frac{\sqrt{r}} {\delta} . \end{equation*}
\end{theorem}
Notice that the assumptions on $E$ are normalized (as we assume that the variance of the entries in $E$ is one). If the error entries have variance $\sigma^2$, then
we need to scale accordingly by replacing $A +E$ by $\frac{1}{\sigma} A + \frac{1}{\sigma} E $; thus, the assumptions become weaker as $\sigma$ decreases.
For the singular values, a good toy result is the following
\begin{theorem} \label{thm:probweyl0}
Let $E$ be an $n \times n$ Bernoulli random matrix and fix $ \varepsilon > 0$. Then there exists a constant $C_0>0$ (depending only on $\varepsilon$)
such that the following holds.
Let $A$ be an $n \times n$ matrix with rank $r$ satisfying $ \sigma_1 \geq n$. Then, with probability at least $1-\varepsilon$,
\begin{equation*} \label{eq:probweylbnd0}
\sigma_1 - C_0 \leq \sigma_1' \leq \sigma_1 + C_0 \sqrt{r}.
\end{equation*}
\end{theorem}
It may be useful for the reader to compare these new bounds with the bounds obtained directly from the Davis-Kahan-Wedin sine theorem and Weyl's inequality (see Corollary \ref{wedin-cor}).
Both theorems above are corollaries of much more general statements, which we describe in the next sections.
\section{Models of random noise}
In the literature, there are many models of random matrices. We can capture almost all natural models by focusing on a common property.
\begin{definition} \label{def:concentration}
We say the $m \times n$ random matrix $E$ is $(C_1,c_1, \gamma)$-concentrated if for all unit vectors $u \in \mathbb{R}^m, v \in \mathbb{R}^n$, and every $t>0$,
\begin{equation} \label{eq:concentration}
\mathbb{P}( |u^T E v| > t ) \leq C_1 \exp(-c_1 t^\gamma).
\end{equation}
\end{definition}
The key parameter is $\gamma$. It is easy to verify the following fact, which asserts that the concentration property is closed under addition.
\begin{fact} \label{fact1} If $E_1$ is $(C_1,c_1, \gamma)$-concentrated and $E_2$ is $(C_2,c_2, \gamma)$-concentrated, then
$E_3 =E_1+E_2$ is $(C_3, c_3, \gamma)$-concentrated for some $C_3, c_3$ depending on $C_1,c_1, C_2, c_2$. \end{fact}
Furthermore, the concentration property guarantees a bound on $\| E \|$. A standard net argument (see Lemma \ref{lemma:net}) shows
\begin{fact} \label{fact2} If $E$ is $(C_1,c_1, \gamma)$-concentrated then there are constants $C', c' >0$ such that $\mathbb{P} (\| E \| \ge C' n^{1/\gamma} ) \le C_1 \exp (-c'n) $. \end{fact}
For readers not familiar with random matrix theory, let us point out why the concentration property is expected to hold for any natural model. If $E$ is random and $v$ is fixed, then the vector $Ev$ must look random. It is well known that in a high dimensional space, a random vector, with very high probability, is nearly orthogonal to any fixed vector. Thus, one expects that very likely, the inner product of $u$ and $E v$ is small. Definition \ref{def:concentration} is a way to express this observation quantitatively.
It turns out that all random matrices with independent entries satisfying a mild condition have the concentration property.
This class covers virtually all examples one sees in practice.
In particular, Lemma \ref{lemma:bernoulli} shows that if $E$ is an $n \times n$ Bernoulli random
matrix, then $E$ is $\left(2, \frac{1}{2}, 2 \right)$-concentrated, and $\|E\| \leq 3 \sqrt{n}$ with high probability \cite{V,Vnorm}.
A convenient feature of the definition is that independence between the entries is not a requirement.
For instance, it is easy to show that a random orthogonal matrix
satisfies the concentration property. We continue the discussion of the $(C_1,c_1,\gamma)$-concentration property (Definition \ref{def:concentration}) in Section \ref{sec:concentration}.
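To make the concentration property concrete, here is a small simulation (a sketch with arbitrary parameters, not taken from the paper) that samples $u^\mathrm{T} E v$ for a Bernoulli matrix $E$ and fixed unit vectors $u, v$, and compares the empirical tail at $t = 3$ with the sub-Gaussian bound $2\exp(-t^2/2)$ given by Hoeffding's inequality, i.e. the case $\gamma = 2$ of Definition \ref{def:concentration}:

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 200, 500

# Fixed unit vectors u and v.
u = rng.standard_normal(n); u /= np.linalg.norm(u)
v = rng.standard_normal(n); v /= np.linalg.norm(v)

# For Bernoulli E, u^T E v = sum_{ij} u_i E_ij v_j has mean 0 and variance
# sum_{ij} u_i^2 v_j^2 = 1; Hoeffding gives P(|u^T E v| > t) <= 2 exp(-t^2 / 2).
samples = np.array([u @ rng.choice([-1.0, 1.0], size=(n, n)) @ v
                    for _ in range(trials)])

empirical_tail = float(np.mean(np.abs(samples) > 3.0))  # tail at t = 3
hoeffding_bound = 2 * np.exp(-3.0 ** 2 / 2)             # about 0.022
```

The empirical variance of the samples is close to one, and the empirical tail sits below the Hoeffding bound, illustrating why Bernoulli matrices are $(2, \frac{1}{2}, 2)$-concentrated.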
\vskip2mm
Let us state an extension of Theorem \ref{theorem:main1}.
\begin{theorem} \label{thm:main}
Assume that $E$ is $(C_1, c_1, \gamma)$-concentrated for a trio of constants $C_1, c_1, \gamma >0$, and suppose $A$ has rank $r$. Then, for any $t>0$,
$$ \sin \angle(v_1,v_1') \leq 4 \sqrt{2} \left( \frac{t r^{1/\gamma}} {\delta} + \frac{ \|E\|}{\sigma_1} + \frac{ \|E\|^2}{\sigma_1 \delta} \right) $$
with probability at least
$$ 1 - 54 C_1 \exp\left(-c_1\frac{\delta^\gamma}{8^\gamma} \right) - 2C_1 9^{2r} \exp \left( -c_1 r \frac{t^\gamma}{4^{\gamma}} \right). $$
\end{theorem}
\begin{remark} \label{remark:boundonE}
Using Fact \ref{fact2}, one can replace $\| E\|$ on the right-hand side by $C' n^{1/\gamma }$, which yields that
$$ \sin \angle(v_1,v_1') \leq 4 \sqrt{2} \left( \frac{t r^{1/\gamma}} {\delta} + \frac{ C' n^{1/\gamma} }{\sigma_1} + \frac{ C'^2 n^{2/\gamma} }{\sigma_1 \delta} \right)$$
with probability at least
$$ 1 - 54 C_1 \exp\left(-c_1\frac{\delta^\gamma}{8^\gamma} \right) - 2C_1 9^{2r} \exp \left( -c_1 r \frac{t^\gamma}{4^{\gamma}} \right) - C_1 \exp( -c' n). $$
However, we prefer to state our theorems in the form of Theorem \ref{thm:main}, as the bound $C' n^{1/\gamma}$, in many cases, may not be optimal.
\end{remark}
\begin{remark}
Another useful corollary of Theorem \ref{thm:main} is the following. For any constant $\varepsilon >0$ there are constants $C_0 = C_0(\varepsilon, C_1, c_1, \gamma) >0$ and $\delta_0 = \delta_0(\varepsilon, C_1, c_1, \gamma)$ such that if $\delta \geq \delta_0$, then
\begin{equation*}
\sin \angle(v_1,v_1') \leq C_0 \left( \frac{ r^{1/\gamma}} {\delta} + \frac{ \|E\|}{\sigma_1} + \frac{ \|E\|^2}{\sigma_1 \delta} \right)
\end{equation*}
with probability at least $1-\varepsilon$.
The first term $\frac{r ^{1/\gamma}} {\delta} $ on the right-hand side corresponds to the conjectured optimal bound \eqref{bound2}. The second term $\frac{\|E\|}{ \sigma_1}$ is necessary.
If $\|E \| \gg \sigma_1$, then the intensity of the noise is much stronger than the strongest signal in the data matrix, so $E$ would corrupt $A$ completely.
Thus in order to retain crucial information about $A$, it seems necessary to assume $\|E\| < \sigma_1$. We are not absolutely sure about the necessity of the third term
$ \frac{ \|E\|^2}{\sigma_1 \delta}$, but under the condition $\|E\| \ll \sigma_1 $, this term is superior to the Davis-Kahan-Wedin bound $\frac{\|E\| }{ \delta}$. \end{remark}
We are able to extend Theorem \ref{thm:main} in two different ways. First, we can bound the angle between $v_j$ and $v_j'$ for any index $j$. Second, and more importantly, we can bound the
angle between the subspaces spanned by $\{v_1, \dots, v_j \}$ and $\{v_1', \dots, v_j' \}$, respectively. As the projection onto the subspaces spanned by the first few singular vectors (i.e. low rank approximation) plays an important role in a vast collection of problems, this result potentially has a large number of applications. We are going to present these two results in the next section.
To conclude this section, let us mention that related results have been obtained in the case where the random matrix $E$ contains Gaussian entries. In \cite{RRW}, R.~Wang estimates the non-asymptotic distribution of the singular vectors when the entries of $E$ are iid standard normal random variables. Recently, Allez and Bouchaud have studied the eigenvector dynamics of $A+E$ when $A$ is a real symmetric matrix and $E$ is a symmetric Brownian motion (that is, $E$ is a diffusive matrix process constructed from a family of independent real Brownian motions) \cite{AB}. Our results also seem to have a close tie to
the study of spiked covariance matrices, where a different kind of perturbation has been considered; see \cite{Ma, John,Nadler} for details. It would be interesting to
find a common generalization for these problems.
\section{General theorems}
First, we consider the problem of approximating the $j$-th singular vector $v_j$ for any $j$. In light of the Davis-Kahan-Wedin result and Theorem \ref{thm:main}, it is natural to consider the gap
$$ \delta_j := \sigma_j - \sigma_{j+1}. $$
\begin{theorem} \label{thm:general}
Assume that $E$ is $(C_1, c_1, \gamma)$-concentrated for a trio of constants $C_1, c_1, \gamma >0$. Suppose $A$ has rank $r$, and let $1 \leq j \leq r$ be an integer. Then, for any $t>0$,
$$ \sin \angle (v_j, v_j') \leq 4 \sqrt{2} \left( \left( \sum_{i=1}^{j-1} \sin^2 \angle (v_i, v_i') \right)^{1/2} + \frac{t r^{1/\gamma}}{\delta_j} + \frac{\|E\|^2}{\sigma_j \delta_j} + \frac{\|E\|}{\sigma_j} \right) $$
with probability at least
$$ 1 - 6C_1 9^j \exp \left( -c_1 \frac{\delta_j^\gamma}{8^\gamma} \right) - 2C_1 9^{2r} \exp \left( -c_1 r \frac{t^\gamma}{4^\gamma} \right). $$
\end{theorem}
In the next theorem, we bound the largest principal angle between
\begin{equation} \label{eq:uspan}
V := \mathrm{Span}\{v_1, \ldots, v_j\} \quad \text{and}\quad V' := \mathrm{Span}\{v_1', \ldots, v_j'\}
\end{equation}
for some integer $1 \leq j \leq r$, where $r$ is the rank of $A$.
Let us recall that if $U$ and $V$ are two subspaces of the same dimension, then the (principal) angle between them is defined as
\begin{equation} \label{eq:ssad}
\sin \angle(U,V) := \max_{u \in U; u \neq 0} \min_{v \in V; v \neq 0} \sin \angle(u,v) = \|P_U - P_V \| = \|P_{U^\perp} P_{V} \|,
\end{equation}
where $P_W$ denotes the orthogonal projection onto subspace $W$.
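The equivalent formulas in \eqref{eq:ssad} are easy to check numerically. The sketch below (illustrative, with arbitrary dimensions) computes the principal angle between two random $j$-dimensional subspaces in three ways: via $\|P_U - P_V\|$, via $\|P_{U^\perp} P_V\|$, and via the singular values of the product of the two orthonormal basis matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
n, j = 8, 3

# Orthonormal bases of two random j-dimensional subspaces U, V of R^n.
Qu, _ = np.linalg.qr(rng.standard_normal((n, j)))
Qv, _ = np.linalg.qr(rng.standard_normal((n, j)))

Pu = Qu @ Qu.T   # orthogonal projection onto U
Pv = Qv @ Qv.T   # orthogonal projection onto V

sin_via_diff = np.linalg.norm(Pu - Pv, 2)
sin_via_perp = np.linalg.norm((np.eye(n) - Pu) @ Pv, 2)

# The singular values of Qu^T Qv are the cosines of the principal angles,
# so the sine of the largest principal angle comes from the smallest cosine.
cosines = np.linalg.svd(Qu.T @ Qv, compute_uv=False)
sin_via_svd = np.sqrt(max(0.0, 1.0 - cosines.min() ** 2))
```

All three quantities agree up to floating-point error, as the equalities in \eqref{eq:ssad} require for subspaces of equal dimension.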
\begin{theorem} \label{thm:subspace}
Assume that $E$ is $(C_1, c_1, \gamma)$-concentrated for a trio of constants $C_1, c_1, \gamma >0$. Suppose $A$ has rank $r$, and let $1 \leq j \leq r$ be an integer. Then, for any $t>0$,
$$ \sin \angle(V,V') \leq 4 \sqrt{2j} \left( \frac{t r^{1/\gamma}}{\delta_j} + \frac{\|E\|^2}{\sigma_j \delta_j} + \frac{ \|E\|}{\sigma_j} \right), $$
with probability at least
$$ 1 - 6C_1 9^j \exp \left( -c_1 \frac{\delta_j^\gamma}{8^\gamma} \right) - 2C_1 9^{2r} \exp \left( -c_1 r \frac{t^\gamma}{4^\gamma} \right), $$
where $V$ and $V'$ are the $j$-dimensional subspaces defined in \eqref{eq:uspan}.
\end{theorem}
It remains an open question
to give an efficient bound for subspaces corresponding to an arbitrary set of singular values. However, we can use Theorem \ref{thm:subspace} repeatedly to obtain bounds for the case when one considers a few intervals of singular values. For instance, by applying Theorem \ref{thm:subspace} twice, we obtain
\begin{corollary}
Assume that $E$ is $(C_1, c_1, \gamma)$-concentrated for a trio of constants $C_1, c_1, \gamma >0$. Suppose $A$ has rank $r$, and let $1 < j \leq l \leq r$ be integers. Then, for any $t>0$,
$$ \sin \angle(V,V') \leq 8 \sqrt{2l} \left( \frac{t r^{1/\gamma}}{\delta_{j-1}} + \frac{t r^{1/\gamma}}{\delta_l} + \frac{\|E\|^2}{\sigma_{j-1} \delta_{j-1}} + \frac{\|E\|^2}{\sigma_l \delta_l} +\frac{ \|E\|}{\sigma_l} \right), $$
with probability at least
$$ 1 - 6C_1 9^{j-1} \exp \left( -c_1 \frac{\delta_{j-1}^\gamma}{8^\gamma} \right) - 6C_1 9^l \exp \left( -c_1 \frac{\delta_l^\gamma}{8^\gamma} \right) - 4C_1 9^{2r} \exp \left( -c_1 r \frac{t^\gamma}{4^\gamma} \right), $$
where
$$ V:= \mathrm{Span}\{v_j,\ldots, v_l\} \quad \text{and}\quad V':=\mathrm{Span}\{v_j',\ldots,v_l'\}. $$
\end{corollary}
\begin{proof}
Let
\begin{align*}
V_1 &:= \mathrm{Span}\{v_1,\ldots,v_l\}, \quad V_1' := \mathrm{Span}\{v_1',\ldots,v_l'\}, \\
V_2 &:= \mathrm{Span}\{v_1,\ldots,v_{j-1}\}, \quad V_2' := \mathrm{Span}\{v_1',\ldots,v_{j-1}'\}.
\end{align*}
For any subspace $W$, let $P_W$ denote the orthogonal projection onto $W$. It follows that $P_{W^\perp} = I - P_{W}$, where $I$ denotes the identity matrix. By definition of the subspaces $V,V'$, we have
$$ P_V = P_{V_1} P_{V_2^\perp} \quad \text{and}\quad P_{V'} = P_{V_1'} P_{V_2'^\perp}. $$
Thus, by \eqref{eq:ssad}, we obtain
\begin{align*}
\sin \angle(V,V') &= \| P_{V_1} P_{V_2^\perp} - P_{V_1'} P_{V_2'^\perp} \| \\
&\leq \| P_{V_1} P_{V_2^\perp} - P_{V_1'} P_{V_2^\perp} \| + \| P_{V_1'} P_{V_2^\perp} - P_{V_1'} P_{V_2'^\perp} \| \\
&\leq \| P_{V_1} - P_{V_1'} \| + \| P_{V_2} - P_{V_2'} \| \\
&= \sin \angle (V_1,V_1') + \sin \angle(V_2,V_2').
\end{align*}
Theorem \ref{thm:subspace} can now be invoked to bound $\sin\angle(V_1,V_1')$ and $\sin\angle(V_2,V_2')$, and the claim follows.
\end{proof}
\noindent Finally, let us present the general form of Theorem \ref{thm:probweyl0} for singular values.
\begin{theorem} \label{thm:probweyl}
Assume that $E$ is $(C_1, c_1, \gamma)$-concentrated for a trio of constants $C_1, c_1, \gamma >0$. Suppose $A$ has rank $r$, and let $1 \leq j \leq r$ be an integer. Then, for any $t>0$,
\begin{equation} \label{eq:probweylbndlower}
\sigma_j' \geq \sigma_j - t
\end{equation}
with probability at least
$$ 1 - 2C_1 9^j \exp \left( -c_1 \frac{t^\gamma}{4^\gamma} \right), $$
and
\begin{equation} \label{eq:probweylbndupper}
\sigma_j' \leq \sigma_j + t r^{1/\gamma} + 2\sqrt{j} \frac{ \|E\|^2}{\sigma_j'} + j \frac{\|E\|^3}{{\sigma_j'}^2}
\end{equation}
with probability at least
$$ 1 - 2C_1 9^{2r} \exp \left( -c_1 r \frac{t^\gamma}{4^\gamma} \right). $$
\end{theorem}
\begin{remark}
Notice that the upper bound for $\sigma_j'$ given in \eqref{eq:probweylbndupper} involves
$1/\sigma_j'$. In many situations, the lower bound in \eqref{eq:probweylbndlower} can be used to provide an upper bound for $1/\sigma_j'$.
\end{remark}
\section{Overview and outline}
We now briefly give an overview of the paper and discuss some of the key ideas behind the proof of our main results. For simplicity, let us assume that $A$ and $E$ are $n \times n$ real symmetric matrices. (In fact, we will symmetrize the problem in Section \ref{sec:prelim} below.) Let $\sigma_1 \geq \cdots \geq \sigma_n$ be the eigenvalues of $A$ with corresponding (orthonormal) eigenvectors $v_1, \ldots, v_n$. Let $\sigma_1'$ be the largest eigenvalue of $A+E$ with corresponding (unit) eigenvector $v_1'$.
Suppose we wish to bound $\sin \angle(v_1, v_1')$ (from Theorem \ref{thm:main}). Since
$$ \sin^2 \angle (v_1, v_1') = 1- \cos^2 \angle(v_1, v_1') = \sum_{k=2}^n | v_k \cdot v_1' |^2, $$
it suffices to bound $|v_k \cdot v_1'|$ for $k=2, \ldots, n$. Let us consider the case when $k=2, \ldots, r$. In this case, we have
$$ v_k^\mathrm{T} (A+E) v_1' - v_k^\mathrm{T} A v_1' = v_k^\mathrm{T} E v_1'. $$
Since $(A+E) v_1' = \sigma_1' v_1'$ and $v_k^\mathrm{T} A = \sigma_k v_k^\mathrm{T}$, we obtain
$$ |\sigma_1' - \sigma_k| |v_k \cdot v_1'| \leq | v_k^\mathrm{T}E v_1' |. $$
Thus, the problem of bounding $|v_k \cdot v_1'|$ reduces to obtaining an upper bound for $| v_k^\mathrm{T}E v_1' |$ and a lower bound for the gap $|\sigma_1' - \sigma_k|$. We will obtain bounds for both of these terms by using the concentration property (Definition \ref{def:concentration}).
More generally, in Section \ref{sec:prelim}, we will apply the concentration property to obtain lower bounds for the gaps $\sigma_j' - \sigma_k$ when $j < k$, which will hold with high probability. Let us illustrate this by considering the gap $\sigma_1' - \sigma_2$. Indeed, we note that
$$ \sigma_1' = \| A + E \| \geq v_1^\mathrm{T} (A+E) v_1 = \sigma_1 + v_1^\mathrm{T} E v_1. $$
Applying the concentration property \eqref{eq:concentration}, we see that $\sigma_1' > \sigma_1 - t$ with probability at least $1 - C_1 \exp(- c_1 t^{\gamma})$. Since $\delta = \sigma_1 - \sigma_2$, we in fact observe that
$$ \sigma_1' - \sigma_2 = \sigma_1' - \sigma_1 + \delta > \delta - t. $$
Thus, if $\delta$ is sufficiently large, we have (say) $\sigma_1' - \sigma_2 \geq \delta/2$ with high probability.
In Section \ref{sec:proof}, we will again apply the concentration property to obtain upper bounds for terms of the form $v_k^\mathrm{T} E v_j'$. At the end of Section \ref{sec:proof}, we combine these bounds to complete the proof of Theorems \ref{thm:main}, \ref{thm:general}, \ref{thm:subspace} and \ref{thm:probweyl}. In Section \ref{sec:concentration}, we discuss the $(C_1,c_1,\gamma)$-concentration property (Definition \ref{def:concentration}). In particular, we generalize some previous results obtained by the second author in \cite{V}. Finally, in Section \ref{section:app}, we present some applications of our main results.
Among others, our results seem useful for matrix recovery problems. The general matrix recovery problem is the following: $A$ is a large matrix which is unknown to us; we can only observe its noisy perturbation $A+E$, or in some cases just a small portion of the perturbation. Our goal is to reconstruct $A$, or to estimate an important parameter, as accurately as possible from this observation.
Furthermore, several problems from combinatorics and theoretical computer science can also be formulated in this setting.
Special instances of the matrix recovery problem have been investigated by many researchers using spectral techniques and combinatorial arguments in ingenious ways \cite{AM, AK, AKS,AzarMc,CCS,CP,CR,CRT,CT,Cest, DGP, KMO,KMO2,Krank,KLT,Kucera,MHT,Mc,NW,RVsamp}.
We propose the following simple analysis: if $A$ has rank $r$ and $1 \le j \le r$, then the projection of $A+E$ on the subspace $V'$ spanned by the first $j$ singular vectors of $A+E$ is close to the projection of $A+E$ onto the subspace $V$ spanned by the first $j$ singular vectors of $A$, as our new results show that $V$ and $V'$ are very close. Moreover, we can also show that the projection of $E$ onto $V$ is typically small. Thus, by projecting $A+E$ onto $V'$, we obtain a good approximation of the rank $j$ approximation of $A$. In certain cases, we can repeat the above operation a few times to obtain sufficient
information to recover $A$ completely or to estimate the required parameter with high accuracy and certainty.
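The simple analysis proposed above can be illustrated in a few lines. The sketch below (arbitrary sizes and noise level, for illustration only) generates a rank-$r$ matrix $A$ with strong singular values, observes $A+E$, and compares the error of the raw observation with the error after projecting onto the top singular subspaces of $A+E$, i.e. keeping its best rank-$r$ approximation:

```python
import numpy as np

rng = np.random.default_rng(3)
n, r = 300, 2

# Unknown rank-r signal with strong singular values (so the gaps are large).
U, _ = np.linalg.qr(rng.standard_normal((n, r)))
V, _ = np.linalg.qr(rng.standard_normal((n, r)))
A = U @ np.diag([3.0 * n, 2.0 * n]) @ V.T

E = rng.standard_normal((n, n))   # iid N(0,1) noise
B = A + E                         # the observed matrix

# Project the observation onto its own top-r singular subspaces,
# i.e. keep the best rank-r approximation of A + E.
Ub, sb, Vbt = np.linalg.svd(B)
B_r = Ub[:, :r] @ np.diag(sb[:r]) @ Vbt[:r, :]

err_raw = np.linalg.norm(B - A)    # Frobenius error of the raw observation
err_rec = np.linalg.norm(B_r - A)  # error after rank-r truncation
```

The truncation discards most of the noise: the recovered matrix is substantially closer to $A$ than the raw observation is, which is the mechanism the analysis above exploits.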
\section{Preliminary tools} \label{sec:prelim}
In this section, we present some of the preliminary tools we will need to prove Theorems \ref{thm:main}, \ref{thm:general}, \ref{thm:subspace}, and \ref{thm:probweyl}.
To begin, we define the $(m+n) \times (m+n)$ symmetric block matrices
\begin{equation} \label{eq:def:tildeA}
\tilde{A} := \begin{bmatrix} 0 & A \\ A^\mathrm{T} & 0 \end{bmatrix}
\end{equation}
and
$$ \tilde{E} := \begin{bmatrix} 0 & E \\ E^\mathrm{T} & 0 \end{bmatrix}. $$
We will work with the matrices $\tilde{A}$ and $\tilde{E}$ instead of $A$ and $E$. In particular, the non-zero eigenvalues of $\tilde{A}$ are $\pm \sigma_1, \ldots, \pm \sigma_r$ and the eigenvectors are formed from the left and right singular vectors of $A$. Similarly, the non-trivial eigenvalues of $\tilde{A} + \tilde{E}$ are $\pm \sigma_1', \ldots, \pm \sigma_{\min\{m,n\}}'$ (some of which may be zero) and the eigenvectors are formed from the left and right singular vectors of $A+E$.
Along these lines, we introduce the following notation, which differs from the notation used above. The non-zero eigenvalues of $\tilde{A}$ will be denoted by $\pm \sigma_1, \ldots, \pm \sigma_r$ with orthonormal eigenvectors $u_k$, $k=\pm 1, \ldots, \pm r$ such that
$$ \tilde{A} u_k = \sigma_k u_k, \qquad \tilde{A} u_{-k} = - \sigma_k u_{-k}, \qquad k = 1, \ldots, r. $$
Let $v_1, \ldots, v_j$ be the orthonormal eigenvectors of $\tilde{A}+\tilde{E}$ corresponding to the $j$ largest eigenvalues $\lambda_1 \geq \cdots \geq \lambda_j$.
In order to prove Theorems \ref{thm:main}, \ref{thm:general}, \ref{thm:subspace}, and \ref{thm:probweyl}, it suffices to work with the eigenvectors and eigenvalues of the matrices $\tilde{A}$ and $\tilde{A}+\tilde{E}$. Indeed, Proposition \ref{prop:sine} will bound the angle between the singular vectors of $A$ and $A+E$ by the angle between the corresponding eigenvectors of $\tilde{A}$ and $\tilde{A} + \tilde{E}$.
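The spectral correspondence between $A$ and its symmetric dilation $\tilde{A}$ is easy to verify numerically. The sketch below (arbitrary small dimensions, purely for illustration) checks that the eigenvalues of $\tilde{A}$ are exactly $\pm \sigma_1, \ldots, \pm \sigma_{\min\{m,n\}}$ padded with $|n-m|$ zeros:

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 4, 6

A = rng.standard_normal((m, n))
s = np.linalg.svd(A, compute_uv=False)   # singular values of A

# The symmetric dilation \tilde{A} = [[0, A], [A^T, 0]].
A_tilde = np.block([[np.zeros((m, m)), A],
                    [A.T, np.zeros((n, n))]])
eig = np.linalg.eigvalsh(A_tilde)

# Expected spectrum: +/- the singular values, plus |n - m| extra zeros.
expected = np.sort(np.concatenate([s, -s, np.zeros(abs(n - m))]))
```

The sorted eigenvalues of $\tilde{A}$ match the expected list up to floating-point error, which is the fact that lets us pass from singular vectors to eigenvectors.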
\begin{proposition} \label{prop:sine}
Let $u_1, v_1 \in \mathbb{R}^m$ and $u_2, v_2 \in \mathbb{R}^n$ be unit vectors. Let $u, v \in \mathbb{R}^{m+n}$ be given by
$$ u = \begin{bmatrix} u_1 \\ u_2 \end{bmatrix}, \quad v = \begin{bmatrix} v_1 \\ v_2 \end{bmatrix}. $$
Then
$$ \sin^2 \angle(u_1, v_1) + \sin^2 \angle(u_2, v_2) \leq 2 \sin^2 \angle(u, v). $$
\end{proposition}
\begin{proof}
Since $\|u\|^2 = \|v\|^2 = 2$, we have
\begin{align*}
\cos^2 \angle(u,v) = \frac{1}{4} |u \cdot v|^2 \leq \frac{1}{2} |u_1 \cdot v_1|^2 + \frac{1}{2} |u_2 \cdot v_2|^2.
\end{align*}
Thus,
\begin{align*}
\sin^2 \angle(u,v) = 1 - \cos^2 \angle(u,v) \geq \frac{1}{2} \sin^2 \angle(u_1, v_1) + \frac{1}{2} \sin^2\angle(u_2, v_2),
\end{align*}
and the claim follows.
\end{proof}
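As a sanity check of Proposition \ref{prop:sine}, the following sketch (random trials with arbitrary dimensions, for illustration only) verifies the inequality on many randomly drawn unit vectors:

```python
import numpy as np

def unit(x):
    return x / np.linalg.norm(x)

def sin2_angle(x, y):
    # sin^2 of the angle between x and y, valid for vectors of any norm.
    c = abs(x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return 1.0 - min(1.0, c) ** 2

rng = np.random.default_rng(6)
m, n = 5, 7

# Smallest observed slack in the inequality
# sin^2(u1, v1) + sin^2(u2, v2) <= 2 sin^2(u, v).
worst_slack = np.inf
for _ in range(500):
    u1, v1 = unit(rng.standard_normal(m)), unit(rng.standard_normal(m))
    u2, v2 = unit(rng.standard_normal(n)), unit(rng.standard_normal(n))
    u = np.concatenate([u1, u2])    # the stacked vectors have norm sqrt(2)
    v = np.concatenate([v1, v2])
    lhs = sin2_angle(u1, v1) + sin2_angle(u2, v2)
    rhs = 2.0 * sin2_angle(u, v)
    worst_slack = min(worst_slack, rhs - lhs)
```

In every trial the right-hand side dominates, up to floating-point error.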
We now introduce some useful lemmas. The first lemma below states that if $E$ is $(C_1,c_1,\gamma)$-concentrated, then $\tilde{E}$ is $(\tilde{C}_1,\tilde{c}_1,\gamma)$-concentrated, where $\tilde{C}_1 := 2C_1$ and $\tilde{c}_1:= c_1/2^{\gamma}$.
\begin{lemma} \label{lemma:tilde}
Assume that $E$ is $(C_1, c_1,\gamma)$-concentrated for a trio of constants $C_1, c_1, \gamma >0$. Let $\tilde{C}_1 := 2C_1$ and $\tilde{c}_1:= c_1/2^{\gamma}$. Then for all unit vectors $u,v \in \mathbb{R}^{m+n}$, and every $t>0$,
\begin{equation} \label{eq:tilde-concentration}
\mathbb{P}( |u^\mathrm{T} \tilde{E} v| > t ) \leq \tilde{C}_1 \exp(-\tilde{c}_1 t^\gamma ).
\end{equation}
\end{lemma}
\begin{proof}
Let
$$ u = \begin{bmatrix} u_1 \\ u_2 \end{bmatrix}, \quad v = \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} $$
be unit vectors in $\mathbb{R}^{m+n}$, where $u_1, v_1 \in \mathbb{R}^m$ and $u_2,v_2 \in \mathbb{R}^n$. We note that
$$ u^{\mathrm{T}} \tilde{E} v = u_1^\mathrm{T} E v_2 + u_2^\mathrm{T} E^\mathrm{T} v_1. $$
Thus, if any of the vectors $u_1, u_2, v_1, v_2$ are zero, \eqref{eq:tilde-concentration} follows immediately from \eqref{eq:concentration}. Assume all the vectors $u_1, u_2, v_1, v_2$ are nonzero. Then
$$ | u^{\mathrm{T}} \tilde{E} v | = |u_1^\mathrm{T} E v_2 + u_2^\mathrm{T} E^\mathrm{T} v_1| \leq \frac{|u_1^\mathrm{T} E v_2|}{\|u_1\| \|v_2\|} + \frac{|v_1^\mathrm{T} E u_2|}{\|u_2\|\|v_1\|}. $$
Thus, by \eqref{eq:concentration}, we have
\begin{align*}
\mathbb{P}( |u^\mathrm{T} \tilde{E} v| > t) &\leq \mathbb{P} \left( \frac{|u_1^\mathrm{T} E v_2|}{\|u_1\| \|v_2\|} > \frac{t}{2} \right) + \mathbb{P} \left( \frac{|v_1^\mathrm{T} E u_2|}{\|u_2\|\|v_1\|} > \frac{t}{2} \right) \\
&\leq 2C_1 \exp \left( -c_1 \frac{ t^\gamma }{ 2^\gamma } \right),
\end{align*}
and the proof of the lemma is complete.
\end{proof}
We will also consider the spectral norm of $\tilde{E}$. Since $\tilde{E}$ is a symmetric matrix whose eigenvalues in absolute value are given by the singular values of $E$, it follows that
\begin{equation} \label{eq:normE}
\| \tilde{E} \| = \|E \|.
\end{equation}
We introduce $\varepsilon$-nets as a convenient way to discretize a compact set. Let $\varepsilon > 0$. A set $X$ is an $\varepsilon$-net of a set $Y$ if for any $y \in Y$, there exists $x \in X$ such that $\|x-y\| \leq \varepsilon$. The following well-known estimate bounds the size of the smallest $\varepsilon$-net of a sphere (see for instance \cite{RV}).
\begin{lemma} \label{lemma:net}
A unit sphere in $d$ dimensions admits an $\varepsilon$-net of size at most
$$ \left(1+\frac{2}{\varepsilon} \right)^d.$$
\end{lemma}
Lemmas \ref{lemma:r-norm}, \ref{lemma:largest}, and \ref{lemma:j-largest} below are consequences of the concentration property \eqref{eq:tilde-concentration}.
\begin{lemma} \label{lemma:r-norm}
Assume that $E$ is $(C_1, c_1, \gamma)$-concentrated for a trio of constants $C_1, c_1, \gamma >0$. Let $A$ be an $m \times n$ matrix with rank $r$. Let $U$ be the $(m+n) \times 2r$ matrix whose columns are the vectors $u_1, \ldots, u_r, u_{-1}, \ldots, u_{-r}$. Then, for any $t > 0$,
$$ \mathbb{P} \left( \|U^\mathrm{T} \tilde{E}U\| > t r^{1/\gamma} \right) \leq \tilde{C}_1 9^{2r} \exp\left( - \tilde{c}_1 r \frac{t^\gamma}{2^\gamma} \right). $$
\end{lemma}
\begin{proof}
Clearly $U^\mathrm{T} \tilde{E} U$ is a symmetric $2r \times 2r$ matrix. Let $S$ be the unit sphere in $\mathbb{R}^{2r}$. Let $\mathcal{N}$ be a $1/4$-net of $S$. It is easy to verify (see for instance \cite{RV}) that for any $2r \times 2r$ symmetric matrix $B$,
$$ \|B\| \leq 2 \max_{x \in \mathcal{N}} |x^\mathrm{T} B x| . $$
For any fixed $x \in \mathcal{N}$, we have
$$ \mathbb{P}( |x^\mathrm{T} U^\mathrm{T} \tilde{E} U x | > t) \leq \tilde{C}_1 \exp(-\tilde{c}_1 t^\gamma) $$
by Lemma \ref{lemma:tilde}. Since $|\mathcal{N}| \leq 9^{2r}$, we obtain
\begin{align*}
\mathbb{P} ( \| U^\mathrm{T} \tilde{E} U \| > t r^{1/\gamma}) &\leq \sum_{x \in \mathcal{N}} \mathbb{P}\left( |x^\mathrm{T} U^\mathrm{T} \tilde{E} U x | > \frac{1}{2} t r^{1/\gamma} \right) \\
& \leq \tilde{C}_1 9^{2r} \exp\left(-\tilde{c}_1 r \frac{t^\gamma}{2^{\gamma}} \right).
\end{align*}
\end{proof}
\begin{lemma} \label{lemma:largest}
Assume that $E$ is $(C_1, c_1, \gamma)$-concentrated for a trio of constants $C_1, c_1, \gamma >0$. Suppose $A$ has rank $r$. Then, for any $t > 0$,
\begin{equation} \label{eq:lambda1bnd}
\lambda_1 \geq \sigma_1 - t
\end{equation}
with probability at least $1 - \tilde{C}_1 \exp(-\tilde{c}_1 t^{\gamma})$.
In particular, if $\sigma_1 > 0$, then $\lambda_1 \geq \frac{\sigma_1}{2}$ with probability at least $1 - \tilde{C}_1 \exp \left( -\tilde{c}_1 \frac{\sigma_1^\gamma}{2^\gamma} \right)$. If, in addition, $\delta > 0$, then
$$ \lambda_1 - \sigma_k \geq \frac{1}{2} \delta $$
for $k = 2,\ldots, r $ with probability at least $1 - \tilde{C}_1 \exp \left( -\tilde{c}_1 \frac{\delta^\gamma}{2^\gamma} \right)$.
\end{lemma}
\begin{proof}
We observe that
$$ \lambda_1 = \|\tilde{A} + \tilde{E}\| \geq u_1^\mathrm{T} (\tilde{A} + \tilde{E}) u_1 = \sigma_1 + u_1^\mathrm{T} \tilde{E} u_1. $$
By Lemma \ref{lemma:tilde}, we have
$$ \mathbb{P}( |u_1^\mathrm{T} \tilde{E} u_1| > t) \leq \tilde{C}_1 \exp( -\tilde{c}_1 t^\gamma) $$
for every $t > 0$, and \eqref{eq:lambda1bnd} follows.
If $\sigma_1 > 0$, then the bound $\lambda_1 \geq \frac{\sigma_1}{2}$ can be obtained by taking $t = \sigma_1/2$ in \eqref{eq:lambda1bnd}. Assume $\delta > 0$. Taking $t=\delta/2$ in \eqref{eq:lambda1bnd} yields
$$ \lambda_1 - \sigma_k \geq \lambda_1 - \sigma_2 = \lambda_1 - \sigma_1 + \delta \geq \frac{\delta}{2} $$
for $k=2, \ldots, r$ with probability at least $1 - \tilde{C}_1 \exp \left( -\tilde{c}_1 \frac{\delta^\gamma}{2^\gamma} \right)$.
\end{proof}
Using the Courant minimax principle, Lemma \ref{lemma:largest} can be generalized to the following.
\begin{lemma} \label{lemma:j-largest}
Assume that $E$ is $(C_1, c_1, \gamma)$-concentrated for a trio of constants $C_1, c_1, \gamma >0$. Suppose $A$ has rank $r$, and let $1 \leq j \leq r$ be an integer. Then, for any $t > 0$,
\begin{equation} \label{eq:lambdajbnd1}
\lambda_j \geq \sigma_j - t
\end{equation}
with probability at least $1 - \tilde{C}_1 9^j \exp\left( - \tilde{c}_1 \frac{t^\gamma}{2^\gamma} \right)$.
In particular, $\lambda_j \geq \frac{\sigma_j}{2}$ with probability at least $1 - \tilde{C}_1 9^j \exp \left( - \tilde{c}_1 \frac{\sigma_j^\gamma}{4^\gamma} \right)$. In addition, if $\delta_j > 0$, then
\begin{equation} \label{eq:lambdajbnd2}
\lambda_j - \sigma_k \geq \frac{\delta_j}{2}
\end{equation}
for $k=j+1, \ldots, r$ with probability at least $1 - \tilde{C}_1 9^j \exp \left( - \tilde{c}_1 \frac{\delta_j^\gamma}{4^\gamma} \right)$.
\end{lemma}
\begin{proof}
It suffices to prove \eqref{eq:lambdajbnd1}. Indeed, the bound $\lambda_j \geq \frac{\sigma_j}{2}$ follows from \eqref{eq:lambdajbnd1} by taking $t = \sigma_j/2$, and \eqref{eq:lambdajbnd2} follows by taking $t = \delta_j/2$.
Let $S$ be the unit sphere in $\mathrm{Span}\{u_1,\ldots,u_j\}$. By the Courant minimax principle,
\begin{align*}
\lambda_j &= \max_{\dim(V)=j} \min_{\|v\|=1;v\in V} v^\mathrm{T} (\tilde{A}+\tilde{E})v \\
& \geq \min_{v \in S} v^\mathrm{T} (\tilde{A}+\tilde{E})v \\
& \geq \sigma_j + \min_{v \in S} v^\mathrm{T} \tilde{E} v.
\end{align*}
Thus, it suffices to show
$$ \mathbb{P}\left( \sup_{v \in S} |v^\mathrm{T} \tilde{E} v| > t \right) \leq \tilde{C}_1 9^j \exp\left( -\tilde{c}_1 \frac{t^\gamma}{2^\gamma} \right) $$
for all $t > 0$.
Let $\mathcal{N}$ be a $1/4$-net of $S$. By Lemma \ref{lemma:net}, $|\mathcal{N}| \leq 9^{j}$. We now claim that
\begin{equation} \label{eq:supmaxnet}
T := \sup_{v \in S} | v^\mathrm{T} \tilde{E} v| \leq 2 \max_{ u \in \mathcal{N}} |u^\mathrm{T} \tilde{E} u|.
\end{equation}
Indeed, fix a realization of $\tilde{E}$. Since $S$ is compact, there exists $v \in S$ such that $T = |v^\mathrm{T} \tilde{E} v|$. Moreover, there exists $x \in \mathcal{N}$ such that $\|x - v\| \leq 1/4$. Clearly the claim is true when $x = v$; assume $x \neq v$. Then, by the triangle inequality, we have
\begin{align*}
T &\leq |v^\mathrm{T} \tilde{E} v - v^\mathrm{T} \tilde{E} x| + |v^\mathrm{T} \tilde{E} x - x^\mathrm{T} \tilde{E} x| + |x^\mathrm{T} \tilde{E} x| \\
&\leq \frac{1}{4} \frac{ |v^\mathrm{T} \tilde{E} (v-x)| }{\| v-x\|} + \frac{1}{4} \frac{ |(v-x)^\mathrm{T} \tilde{E} x |}{\|v-x\| } + \sup_{u \in \mathcal{N}} |u^\mathrm{T} \tilde{E} u| \\
& \leq \frac{T}{2} + \sup_{u \in \mathcal{N}} |u^\mathrm{T} \tilde{E} u|,
\end{align*}
and \eqref{eq:supmaxnet} follows.
Applying \eqref{eq:supmaxnet} and Lemma \ref{lemma:tilde}, we have
\begin{align*}
\mathbb{P} \left( \sup_{v \in S} |v^\mathrm{T} \tilde{E} v| > t \right) &\leq \sum_{u \in \mathcal{N}} \mathbb{P} \left( |u^\mathrm{T} \tilde{E} u| > \frac{t}{2} \right) \leq 9^j \tilde{C}_1 \exp \left( -\tilde{c}_1 \frac{t^\gamma}{2^\gamma} \right),
\end{align*}
and the proof of the lemma is complete.
\end{proof}
We will continually make use of the following simple fact:
\begin{equation} \label{eq:aea}
(\tilde{A} + \tilde{E}) - \tilde{A} = \tilde{E}.
\end{equation}
\section{Proof of Theorems \ref{thm:main}, \ref{thm:general}, \ref{thm:subspace}, and \ref{thm:probweyl}} \label{sec:proof}
This section is devoted to Theorems \ref{thm:main}, \ref{thm:general}, \ref{thm:subspace}, and \ref{thm:probweyl}. To begin, define the subspace
$$ W:= \mathrm{Span}\{u_1, \ldots, u_r, u_{-1}, \ldots, u_{-r} \}.$$
Let $P$ be the orthogonal projection onto $W^\perp$.
\begin{lemma} \label{lemma:proj_bound}
Assume that $E$ is $(C_1, c_1, \gamma)$-concentrated for a trio of constants $C_1, c_1, \gamma >0$. Suppose $A$ has rank $r$, and let $1 \leq j \leq r$ be an integer. Then
\begin{equation} \label{eq:suppvi}
\sup_{1 \leq i \leq j} \|P v_i \| \leq 2 \frac{\|E\|}{\sigma_j}
\end{equation}
with probability at least $1 - \tilde{C}_1 9^j \exp \left( - \tilde{c}_1 \frac{\sigma_j^\gamma}{4^\gamma} \right)$.
\end{lemma}
\begin{proof}
Consider the event
$$ \Omega_j := \left\{ \lambda_j \geq \frac{1}{2} \sigma_j \right\}. $$
By Lemma \ref{lemma:j-largest} (or Lemma \ref{lemma:largest} in the case $j=1$), $\Omega_j$ holds with probability at least $1 - \tilde{C}_1 9^j \exp \left( - \tilde{c}_1 \frac{\sigma_j^\gamma}{4^\gamma} \right)$.
Fix $1 \leq i \leq j$. By multiplying \eqref{eq:aea} on the left by $(P v_i)^\mathrm{T}$ and on the right by $v_i$, we obtain
$$ | \lambda_i (P v_i)^\mathrm{T} v_i | \leq \| P v_i\| \|\tilde{E}\| $$
since $(P v_i)^\mathrm{T} \tilde{A} = 0$. Thus, on the event $\Omega_j$, we have
$$ \| P v_i \|^2 = |(P v_i)^\mathrm{T} v_i | \leq \frac{1}{\lambda_j}\|P v_i\| \|\tilde{E}\| \leq \frac{2}{\sigma_j} \| P v_i \|\|\tilde{E}\|. $$
We conclude that, on the event $\Omega_j$,
$$ \sup_{1 \leq i \leq j} \|P v_i \| \leq 2 \frac{\|E\|}{\sigma_j}, $$
and the proof is complete.
\end{proof}
\begin{lemma} \label{lemma:uproj}
Assume that $E$ is $(C_1, c_1, \gamma)$-concentrated for a trio of constants $C_1, c_1, \gamma >0$. Suppose $A$ has rank $r$, and let $1 \leq j \leq r$ be an integer. Define $U_j$ to be the $(m+n) \times (2r-j)$ matrix with columns $u_{j+1}, \ldots, u_r, u_{-1}, \ldots, u_{-r}$. Then, for any $t>0$,
\begin{equation} \label{eq:sujtv}
\sup_{1 \leq i \leq j} \| U_j^\mathrm{T} v_i \| \leq 4 \left( \frac{t r^{1/\gamma}}{\delta_j} + \frac{\|E\|^2}{\delta_j \sigma_j} \right)
\end{equation}
with probability at least
$$ 1 - 2\tilde{C}_1 9^j \exp \left( - \tilde{c}_1 \frac{\delta_j^\gamma}{4^\gamma}\right) - \tilde{C}_1 9^{2r} \exp \left( - \tilde{c}_1r \frac{t^\gamma}{2^\gamma} \right). $$
\end{lemma}
\begin{proof}
Define the event
\begin{align*}
\Omega_j &:= \left\{ \sup_{1 \leq i \leq j} \|P v_i \| \leq 2 \frac{\|E\|}{\sigma_j} \right\} \bigcap \left\{ \| U^\mathrm{T} \tilde{E} U \| \leq t r^{1/\gamma} \right\} \bigcap \left\{ \lambda_j - \sigma_{j+1} \geq \frac{\delta_j}{2} \right\}.
\end{align*}
By Lemmas \ref{lemma:r-norm}, \ref{lemma:j-largest}, and \ref{lemma:proj_bound}, it follows that
$$ \mathbb{P}( \Omega_j ) \geq 1 - 2\tilde{C}_1 9^j \exp \left( - \tilde{c}_1 \frac{\delta_j^\gamma}{4^\gamma}\right) - \tilde{C}_1 9^{2r} \exp \left( - \tilde{c}_1r \frac{t^\gamma}{2^\gamma} \right). $$
Fix $1 \leq i \leq j$. We multiply \eqref{eq:aea} on the left by $U_j^\mathrm{T}$ and on the right by $v_i$ to obtain
\begin{equation} \label{eq:ujae}
U_j^\mathrm{T} (\tilde{A} + \tilde{E}) v_i - U_j^\mathrm{T} \tilde{A} v_i = U_j^\mathrm{T} \tilde{E} v_i.
\end{equation}
We note that
$$ U_j^\mathrm{T} (\tilde{A} + \tilde{E}) v_i = \lambda_i U_j^\mathrm{T} v_i $$
and
$$ U_j^\mathrm{T} \tilde{A} v_i = D_j U_j^\mathrm{T} v_i, $$
where $D_j$ is the diagonal matrix with the values $\sigma_{j+1}, \ldots, \sigma_r, -\sigma_{1}, \ldots, -\sigma_{r}$ on the diagonal.
For the right-hand side of \eqref{eq:ujae}, we write $v_i = U U^\mathrm{T} v_i + P v_i$, where $U$ is the matrix with columns $u_1, \ldots, u_r, u_{-1}, \ldots, u_{-r}$ and $P$ is the orthogonal projection onto $W^\perp$. Thus, on the event $\Omega_j$, we have
\begin{align*}
\|U_j^\mathrm{T} \tilde{E} v_i\| \leq \|U_j^\mathrm{T} \tilde{E} U\| + \|\tilde{E}\| \|P v_i\| \leq t r^{1/\gamma} + 2 \frac{\|E\|^2}{\sigma_j}.
\end{align*}
Here we used the fact that $U_j^\mathrm{T} \tilde{E} U$ is a sub-matrix of $U^\mathrm{T} \tilde{E} U$ and hence
$$ \|U_j^\mathrm{T} \tilde{E} U\| \leq \| U^\mathrm{T} \tilde{E} U\|. $$
Combining the above computations and bound yields
$$ \| (\lambda_i I - D_j) U_j^\mathrm{T} v_i \| \leq 2 \left( t r^{1/\gamma} + \frac{\|E\|^2}{\sigma_j} \right) $$
on the event $\Omega_j$.
We now consider the entries of the diagonal matrix $\lambda_i I - D_j$. On $\Omega_j$, we have, for any $k \geq j+1$,
$$ \lambda_i - \sigma_k \geq \lambda_j - \sigma_{j+1} \geq \frac{\delta_j}{2}, $$
and, for the entries of $D_j$ of the form $-\sigma_k$ with $1 \leq k \leq r$, we similarly have $\lambda_i + \sigma_k \geq \lambda_j \geq \lambda_j - \sigma_{j+1} \geq \frac{\delta_j}{2}$.
By writing the elements of the vector $U_j^\mathrm{T} v_i$ in component form, it follows that
$$ \|(\lambda_i I - D_j) U_j^\mathrm{T} v_i \| \geq \frac{\delta_j}{2} \| U_j^\mathrm{T} v_i \| $$
and hence
$$ \| U_j^\mathrm{T} v_i \| \leq 4 \left( \frac{t r^{1/\gamma}}{\delta_j} + \frac{\|E\|^2}{\sigma_j \delta_j} \right) $$
on the event $\Omega_j$. Since this holds for each $1 \leq i \leq j$, the proof is complete.
\end{proof}
With Lemmas \ref{lemma:proj_bound} and \ref{lemma:uproj} in hand, we now prove Theorems \ref{thm:main}, \ref{thm:general}, \ref{thm:subspace}, and \ref{thm:probweyl}. By Proposition \ref{prop:sine}, in order to prove Theorems \ref{thm:main} and \ref{thm:general}, it suffices to bound $\sin \angle (u_j, v_j)$ because $u_j, v_j$ are formed from the left and right singular vectors of $A$ and $A+E$.
\begin{proof}[Proof of Theorem \ref{thm:main}]
We write
$$ v_1 = \sum_{k=1}^r \alpha_k u_k + \sum_{k=1}^r \alpha_{-k} u_{-k} + P v_1, $$
where $P$ is the orthogonal projection onto $W^\perp$. Then
$$ \sin^2 \angle (u_1, v_1) = 1- \cos^2 \angle (u_1, v_1) = \sum_{k=2}^r |\alpha_k|^2 + \sum_{k=1}^r |\alpha_{-k}|^2 + \| P v_1 \|^2. $$
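Here the second equality uses only orthonormality: the vectors $u_1, \ldots, u_r, u_{-1}, \ldots, u_{-r}$ form an orthonormal set, $P v_1$ is orthogonal to each of them, and $\cos \angle(u_1, v_1) = |u_1 \cdot v_1| = |\alpha_1|$, so that

```latex
1 = \|v_1\|^2 = \sum_{k=1}^r |\alpha_k|^2 + \sum_{k=1}^r |\alpha_{-k}|^2 + \| P v_1 \|^2 ,
```

and subtracting $|\alpha_1|^2 = \cos^2 \angle (u_1, v_1)$ from both sides gives the display.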
Applying the bounds obtained from Lemmas \ref{lemma:proj_bound} and \ref{lemma:uproj} (with $j=1$), we obtain
$$ \sin^2 \angle (u_1, v_1) \leq 16 \left( \frac{t r^{1/\gamma}}{\delta} + \frac{\|E\|^2}{\sigma_1 \delta} \right)^2 + 4 \frac{\|E\|^2}{ \sigma_1^2} $$
with probability at least
\begin{equation} \label{eq:probholdmain}
1 - 27 \tilde{C}_1 \exp \left( -\tilde{c}_1 \frac{\delta^\gamma}{4^\gamma} \right) - \tilde{C}_1 9^{2r} \exp \left(- \tilde{c}_1 r \frac{t^\gamma}{2^\gamma} \right).
\end{equation}
We now note that
\begin{align*}
16 \left( \frac{t r^{1/\gamma}}{\delta} + \frac{\|E\|^2}{\sigma_1 \delta} \right)^2 + 4 \frac{\|E\|^2}{ \sigma_1^2} &\leq 16 \left( \frac{t r^{1/\gamma}}{\delta} + \frac{\|E\|^2}{\sigma_1 \delta} + \frac{\|E\|}{ \sigma_1} \right)^2.
\end{align*}
The correct absolute constant in front can now be deduced from the bound above and Proposition \ref{prop:sine}. The lower bound on the probability given in \eqref{eq:probholdmain} can be written in terms of the constants $C_1, c_1, \gamma$ by recalling the definitions of $\tilde{C}_1$ and $\tilde{c}_1$ given in Lemma \ref{lemma:tilde}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:general}]
We again write
\begin{equation} \label{eq:vjproj}
v_j = \sum_{k=1}^r \alpha_k u_k + \sum_{k=1}^r \alpha_{-k} u_{-k} + P v_j,
\end{equation}
where $P$ is the orthogonal projection onto $W^\perp$. Then we have that
\begin{align*}
\sin^2 \angle (u_j, v_j) &= 1- \cos^2 \angle (u_j, v_j) \\
&= \sum_{k=1}^{j-1} |\alpha_k|^2 + \sum_{k=j+1}^r |\alpha_k|^2 + \sum_{k=1}^r |\alpha_{-k}|^2 + \|P v_j\|^2.
\end{align*}
For any $1 \leq k \leq j-1$, we have that
$$ |\alpha_k|^2 = | v_j \cdot (u_k - v_k) |^2 \leq \|v_k - u_k\|^2 \leq 2 (1 - \cos \angle (v_k,u_k)) \leq 2 \sin^2 \angle(v_k,u_k). $$
Moreover, from Lemmas \ref{lemma:proj_bound} and \ref{lemma:uproj}, we have
$$ \sum_{k=j+1}^r |\alpha_k|^2 + \sum_{k=1}^r |\alpha_{-k}|^2 \leq 16 \left( \frac{t r^{1/\gamma}}{\delta_j} + \frac{\|E\|^2}{\sigma_j \delta_j} \right)^2 $$
with probability at least
$$ 1 - 2\tilde{C}_1 9^j \exp \left( - \tilde{c}_1 \frac{\delta_j^\gamma}{4^\gamma}\right) - \tilde{C}_1 9^{2r} \exp \left( - \tilde{c}_1r \frac{t^\gamma}{2^\gamma} \right) $$
and
$$ \| P v_j \|^2 \leq 4 \frac{\|E\|^2}{\sigma_j^2} $$
with probability at least $1 - \tilde{C}_1 9^j \exp \left( - \tilde{c}_1 \frac{\sigma_j^\gamma}{4^\gamma} \right)$. The proof of Theorem \ref{thm:general} is complete by combining the bounds above\footnote{Here the bounds are given in terms of $\sin^2 \angle(v_k, u_k)$ for $1 \leq k \leq j-1$. However, $u_k$ and $v_k$ are formed from the left and right singular vectors of $A$ and $A+E$. To avoid the dependence on both the left and right singular vectors, one can begin with \eqref{eq:vjproj} and consider only the coordinates of $v_j$ which correspond to the left (alternatively right) singular vectors. By then following the proof for only these coordinates, one can bound the left (right) singular vectors by terms which only depend on the previous left (right) singular vectors.}. As in the proof of Theorem \ref{thm:main}, the correct constant factor in front can be deduced from Proposition \ref{prop:sine}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:subspace}]
Define the subspaces
$$ \tilde{U}:= \mathrm{Span}\{u_1, \ldots, u_j \} \quad \text{and} \quad \tilde{V}:= \mathrm{Span}\{v_1, \ldots, v_j\}. $$
By Proposition \ref{prop:sine}, it suffices to bound $\sin \angle(\tilde{U}, \tilde{V})$.
Let $Q$ be the orthogonal projection onto $\tilde{U}^\perp$. By Lemmas \ref{lemma:proj_bound} and \ref{lemma:uproj}, it follows that
\begin{equation} \label{eq:supqvi}
\sup_{1 \leq i \leq j} \|Q v_i \| \leq 4 \left( \frac{t r^{1/\gamma} }{\delta_j} + \frac{\|E\|^2}{\sigma_j \delta_j} + \frac{\|E\|}{\sigma_j} \right)
\end{equation}
with probability at least
$$ 1 - 3\tilde{C}_1 9^j \exp \left( - \tilde{c}_1 \frac{\delta_j^\gamma}{4^\gamma}\right) - \tilde{C}_1 9^{2r} \exp \left( - \tilde{c}_1r \frac{t^\gamma}{2^\gamma} \right). $$
On the event where \eqref{eq:supqvi} holds, we have
$$ \sup_{v \in \tilde{V}, \|v\| = 1} \| Q v \| \leq 4 \sqrt{j} \left( \frac{t r^{1/\gamma} }{\delta_j} + \frac{\|E\|^2}{\sigma_j \delta_j} + \frac{\|E\|}{\sigma_j} \right) $$
by the triangle inequality and the Cauchy-Schwarz inequality. Thus, by \eqref{eq:ssad}, we conclude that
\begin{align*}
\sin \angle(\tilde{U}, \tilde{V}) &\leq 4 \sqrt{j} \left( \frac{t r^{1/\gamma} }{\delta_j} + \frac{\|E\|^2}{\sigma_j \delta_j} + \frac{\|E\|}{\sigma_j} \right)
\end{align*}
on the event where \eqref{eq:supqvi} holds. The claim now follows from Proposition \ref{prop:sine}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:probweyl}]
The lower bound \eqref{eq:probweylbndlower} follows from Lemma \ref{lemma:j-largest}; it remains to prove \eqref{eq:probweylbndupper}. Let $U$ be the $(m+n) \times 2r$ matrix whose columns are given by the vectors $u_1, \ldots, u_r, u_{-1}, \ldots, u_{-r}$, and recall that $P$ is the orthogonal projection onto $W^\perp$.
Let $S$ denote the unit sphere in $\mathrm{Span}\{v_1, \ldots, v_j\}$. Then for $1 \leq i \leq j$, we multiply \eqref{eq:aea} on the left by $v_i^\mathrm{T} P$ and on the right by $v_i$ to obtain
$$ \lambda_i \|P v_i \|^2 \leq | v_i^\mathrm{T} P \tilde{E} v_i | \leq \|P v_i \| \|E\|. $$
Here we used \eqref{eq:normE} and the fact that $P \tilde{A} = 0$. Therefore, we have the deterministic bound
$$ \sup_{1 \leq i \leq j} \| P v_i \| \leq \frac{ \| E\|}{\lambda_j}. $$
By the Cauchy-Schwarz inequality, it follows that
\begin{equation} \label{eq:detsupbnd}
\sup_{v \in S} \| P v \| \leq \sqrt{j} \frac{ \|E \| }{\lambda_j}.
\end{equation}
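Here the factor $\sqrt{j}$ arises as follows: any unit vector $v \in S$ can be written as $v = \sum_{i=1}^j c_i v_i$ with $\sum_{i=1}^j c_i^2 = 1$, so

```latex
\| P v \| \leq \sum_{i=1}^j |c_i| \, \| P v_i \|
\leq \Big( \sum_{i=1}^j |c_i| \Big) \sup_{1 \leq i \leq j} \| P v_i \|
\leq \sqrt{j} \, \sup_{1 \leq i \leq j} \| P v_i \| ,
```

where the last step is the Cauchy-Schwarz inequality applied to $\sum_{i=1}^j |c_i| \cdot 1$.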
By the Courant minimax principle, we have
\begin{align*}
\sigma_j = \max_{\dim(V)=j} \min_{v\in V,\|v\|=1} v^\mathrm{T} \tilde{A}v \geq \min_{v \in S} v^\mathrm{T} \tilde{A} v \geq \lambda_j - \max_{v \in S} |v^\mathrm{T} \tilde{E} v|.
\end{align*}
Thus, it suffices to show that
\begin{equation*}
\max_{v \in S} |v^\mathrm{T} \tilde{E} v| \leq t r^{1/\gamma} + 2\sqrt{j} \frac{ \|E\|^2}{\lambda_j} + j \frac{\|E\|^3}{{\lambda_j}^2}
\end{equation*}
with probability at least $1-\tilde{C}_1 9^{2r} \exp \left( - \tilde{c}_1 r \frac{t^\gamma}{2^\gamma} \right)$.
We decompose $v = Pv + U U^\mathrm{T} v$ and obtain
\begin{align*}
\max_{v \in S} |v^\mathrm{T} \tilde{E} v| \leq \max_{v \in S} \|Pv\|^2 \|\tilde{E}\| + 2 \max_{v \in S} \| Pv \| \|\tilde{E}\| + \| U^\mathrm{T} \tilde{E} U \|.
\end{align*}
Thus, by Lemma \ref{lemma:r-norm} and \eqref{eq:detsupbnd}, we have
$$ \max_{v \in S} |v^\mathrm{T} \tilde{E} v| \leq j \frac{ \|E\|^3}{\lambda_j^2} + 2 \sqrt{j} \frac{ \|E\|^2}{\lambda_j} + t r^{1/\gamma} $$
with probability at least $1-\tilde{C}_1 9^{2r} \exp \left( - \tilde{c}_1 r \frac{t^\gamma}{2^\gamma} \right)$, and the proof is complete.
\end{proof}
\section{The concentration property} \label{sec:concentration}
In this section, we give examples of random matrix models satisfying Definition \ref{def:concentration}.
\begin{lemma} \label{lemma:bernoulli}
There exists a constant $C_1$ such that the following holds. Let $E$ be a random $n \times n$ Bernoulli matrix. Then
$$ \mathbb{P} ( \|E\| > 3 \sqrt{n} ) \leq \exp(-C_1 n), $$
and for any fixed unit vectors $u,v$ and positive number $t$,
$$ \mathbb{P} (|u^\mathrm{T} E v| \geq t ) \leq 2 \exp(-t^2/2). $$
\end{lemma}
The bounds in Lemma \ref{lemma:bernoulli} also hold when the noise is Gaussian (instead of Bernoulli). Indeed, when the entries of $E$ are iid standard normal random variables, $u^{\mathrm{T}} E v$ has the standard normal distribution. The first bound is a corollary of a general concentration result from \cite{V}; it can also be proved directly using a net argument. The second bound follows from a martingale difference sequence inequality \cite{McDiarmid}; see also \cite{V} for a direct proof with a more generous constant.
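As an illustration (not part of the argument), the second bound of the lemma can be checked numerically. The sketch below, with hypothetical dimension and trial count, estimates the tail probability $\mathbb{P}(|u^\mathrm{T} E v| \geq t)$ for a Bernoulli matrix by Monte Carlo and compares it to $2\exp(-t^2/2)$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, t, trials = 100, 2.0, 1000          # hypothetical dimension and trial count

# Arbitrary fixed unit vectors u, v.
u = rng.standard_normal(n); u /= np.linalg.norm(u)
v = rng.standard_normal(n); v /= np.linalg.norm(v)

# Monte Carlo estimate of P(|u^T E v| >= t) over Bernoulli (+-1) matrices E.
hits = 0
for _ in range(trials):
    E = 2.0 * rng.integers(0, 2, size=(n, n)) - 1.0
    hits += abs(u @ E @ v) >= t
empirical = hits / trials
bound = 2 * np.exp(-t ** 2 / 2)        # the lemma's bound, about 0.271
```

Since $u^\mathrm{T} E v$ is a weighted sum of $n^2$ independent signs with total variance $1$, the empirical frequency is far below the (non-sharp) bound.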
We now verify the $(C_1,c_1,\gamma)$-concentration property for slightly more general random matrix models. We will discuss these matrix models further in Section \ref{section:app}. In the lemmas below, we consider both the case where $E$ is a real symmetric random matrix with independent entries and when $E$ is a non-symmetric random matrix with independent entries.
\begin{lemma} \label{lemma:conc-sym}
Let $E = (\xi_{ij})_{i,j=1}^n$ be an $n \times n$ real symmetric random matrix where
$$ \{ \xi_{ij} : 1 \leq i \leq j \leq n\} $$
is a collection of independent random variables each with mean zero. Further assume
$$ \sup_{1 \leq i \leq j \leq n} |\xi_{ij}| \leq K $$
with probability $1$, for some $K \geq 1$. Then for any fixed unit vectors $u,v$ and every $t > 0$
$$ \mathbb{P} (|u^\mathrm{T} E v| \geq t) \leq 2 \exp \left( \frac{-t^2}{8 K^2 } \right). $$
\end{lemma}
\begin{proof}
We write
$$ u^\mathrm{T} E v = \sum_{1 \leq i < j \leq n} (u_i v_j + v_i u_j) \xi_{ij} + \sum_{i=1}^n u_i v_i \xi_{ii}. $$
As the right side is a sum of independent, bounded random variables, we apply Hoeffding's inequality (\cite[Theorem 2]{H}) to obtain
$$ \mathbb{P} (|u^\mathrm{T} E v - \mathbb{E} u^\mathrm{T} E v| \geq t) \leq 2 \exp \left( \frac{-t^2}{8 K^2 } \right). $$
Here we used the fact that
$$ \sum_{1 \leq i < j \leq n} (|u_i| |v_j| + |v_i| |u_j|)^2 + \sum_{i=1}^n |u_i|^2 |v_i|^2 \leq 4 \sum_{i,j=1}^n |u_i|^2 |v_j|^2 \leq 4 $$
because $u,v$ are unit vectors. Since each $\xi_{ij}$ has mean zero, it follows that $\mathbb{E} u^\mathrm{T} E v = 0$, and the proof is complete.
\end{proof}
\begin{lemma} \label{lemma:conc-nonsym}
Let $E = (\xi_{ij})_{1 \leq i \leq m, 1 \leq j \leq n}$ be an $m \times n$ real random matrix where
$$ \{ \xi_{ij} : 1 \leq i \leq m, 1 \leq j \leq n \} $$
is a collection of independent random variables each with mean zero. Further assume
$$ \sup_{1 \leq i \leq m, 1 \leq j \leq n} |\xi_{ij}| \leq K $$
with probability $1$, for some $K \geq 1$. Then for any fixed unit vectors $u \in \mathbb{R}^m, v \in \mathbb{R}^n$, and every $t > 0$
\begin{equation} \label{eq:concprop-nonsym}
\mathbb{P}( |u^\mathrm{T} E v| \geq t) \leq 2 \exp \left( \frac{-t^2}{2 K^2} \right).
\end{equation}
\end{lemma}
The proof of Lemma \ref{lemma:conc-nonsym} is nearly identical to that of Lemma \ref{lemma:conc-sym}. Indeed, \eqref{eq:concprop-nonsym} follows from Hoeffding's inequality since $u^\mathrm{T}E v$ can be written as a sum of independent random variables; we omit the details.
Many other models of random matrices satisfy Definition \ref{def:concentration}. If the entries of $E$ are independent and have a rapidly decaying tail, then $E$ will be $(C_1,c_1,\gamma)$-concentrated for some constants $C_1,c_1,\gamma>0$. One can achieve this by standard truncation arguments.
For many arguments of this type, see for instance \cite{VW}. As an example, we present a concentration result from \cite{RV} when the entries of $E$ are iid sub-exponential random variables.
\begin{lemma}[Proposition 5.16 of \cite{RV}] \label{lemma:conc-sub}
Let $E = (\xi_{ij})_{1 \leq i \leq m, 1 \leq j \leq n}$ be an $m \times n$ real random matrix whose entries $\xi_{ij}$ are iid copies of a sub-exponential random variable $\xi$ with constant $K$, i.e. $\mathbb{P}(|\xi| > t) \le \exp(1-t/K)$ for all $t>0$. Assume $\xi$ has mean 0 and variance 1. Then there are constants $C_1, c_1>0$ (depending only on $K$) such that for any fixed unit vectors $u \in \mathbb{R}^m, v \in \mathbb{R}^n$ and any $t > 0$, one has
\begin{equation*}
\mathbb{P}( |u^\mathrm{T} E v| \geq t) \leq C_1 \exp \left( -c_1 t \right).
\end{equation*}
\end{lemma}
Finally, let us point out that the assumption that the entries are independent is not necessary. As an example, we mention random orthogonal matrices.
For another example, one can consider the elliptic ensembles; the concentration property can be verified using standard truncation and concentration results, see for instance \cite{KS, LT, McDiarmid, RV} and \cite[Chapter 5]{BS}.
\section{An application: The matrix recovery problem} \label{section:app}
The matrix recovery problem is the following: $A$ is a large unknown matrix. We can only observe its noisy image $A+E$, or in some cases just a small part of it. We would like to reconstruct $A$ or estimate an important parameter as accurately as possible from this observation.
Consider a deterministic $m \times n$ matrix $$A = (a_{ij})_{1 \leq i \leq m, 1 \leq j \leq n}.$$
Let $Z$ be a random matrix of the same size whose entries
$\{z_{ij} : 1 \leq i \leq m, 1 \leq j \leq n \}$ are independent random variables with mean zero and unit variance. For convenience, we will assume that $\| Z \| _{\infty} := \max_{i,j} |z_{ij}| \le K$, for some fixed $K >0$, with probability $1$.
Suppose that we have only partial access to the noisy data $A+Z$. Each entry of this matrix is observed with probability $p$ and
unobserved with probability $1-p$ for some small $p$. We will write $0$ if the entry is not observed. Given this sparse observable data matrix $B$, the task is to reconstruct $A$.
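To make the observation model concrete, here is a minimal sketch (with hypothetical sizes, rank, and a bounded noise choice) of how the sparse data matrix $B$ arises from $A$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, p = 200, 3, 0.3                         # hypothetical size, rank, sampling rate

A = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))   # rank-r signal
Z = np.sqrt(3) * rng.uniform(-1, 1, (n, n))   # mean 0, variance 1, bounded noise
chi = rng.random((n, n)) < p                  # each entry observed with probability p
B = (A + Z) * chi                             # unobserved entries recorded as 0
```
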
The matrix completion problem is a central one in data analysis, and there is a large body of literature focusing on the low rank case; see \cite{AM,CCS,CP,CR,CRT,CT,Cest,KMO,KMO2,Krank,KLT,MHT,NW,RVsamp} and references therein. A representative example here is the Netflix problem, where $A$ is the matrix of ratings (the rows are viewers, the columns are movie titles, and the entries are ratings).
In this section, we are going to use our new results to study this problem. The main novel feature here is that our analysis allows us to approximate {\it any given column (or row)} with high probability. For instance, in the Netflix problem,
one can figure out the ratings of any given individual, or any given movie.
In earlier algorithms we know of, the approximation was mostly done for the Frobenius norm of the whole matrix. Such a result is equivalent to saying that a {\it random} row or column is well approximated, but
cannot guarantee anything about a specific row or column.
Finally, let us mention that there are algorithms which can recover $A$ precisely, but these work only if $A$ satisfies certain structural assumptions \cite{CCS,CP,CR,CRT,CT}.
Without loss of generality, we assume $A$ is a square $n \times n$ matrix. The rectangular case follows by applying the analysis below
to the matrix $\tilde{A}$ defined in \eqref{eq:def:tildeA}. We assume that $n$ is large and
asymptotic notation such as $o, O, \Omega, \Theta$ will be used under the assumption that $n \rightarrow \infty$.
Let $A$ be an $n \times n$ deterministic matrix of rank $r$, where $\sigma_1 \geq \cdots \geq \sigma_r > 0$ are the singular values with corresponding singular vectors $u_1, \ldots, u_r$.
Let $\chi_{ij}$ be iid indicator random variables with $\mathbb{P} (\chi_{ij}=1)=p$. The entries of the sparse matrix $B$ can be written as
\begin{equation*} b_{ij} = (a_{ij } +z_{ij} ) \chi_{ij} = p a_{ij} + a_{ij} (\chi_{ij} -p) + z_{ij} \chi_{ij} = pa_{ij} + f_{ij}, \end{equation*} where
\begin{equation*} f_{ij} := a_{ij} (\chi_{ij} -p) + z_{ij} \chi_{ij} . \end{equation*} It is clear that the $f_{ij}$ are independent random variables with mean 0 and variance
$\sigma_{ij}^2 = a_{ij}^2 p(1-p) + p $. This way, we can write $\frac{1}{p} B$ in the form $A + E$, where $E$ is the random matrix with independent entries $e_{ij} := p^{-1} f_{ij}$. We assume $p \le 1/2$; in fact, our result
works for $p$ being a negative power of $n$.
Let $1 \le j \le r$ and consider the subspaces $U$ spanned by $u_1, \dots, u_j$ and $V$ spanned by $v_1, \dots, v_j$, where $u_i$ (alternatively $v_i$) is the $i$-th singular vector of
$A$ (alternatively $B$).
Fix any $1 \le m \le n$ and
consider the $m$-th columns
of $A$ and $A+E$. Denote them by
$x$ and $\tilde x $, respectively. We have
\begin{equation*} \| x- P_{V} \tilde x\| \le \| x - P_U x\| + \| P_U x - P_U \tilde x\| + \| P_U \tilde x - P_V \tilde x \|. \end{equation*}
Notice that $P_V \tilde x $ is efficiently computable given $B$ and $p$.
(In fact, we can estimate $p$ very well by the density of $B$, so we don't even need to know $p$.) In the remaining part of the analysis, we
will estimate the three error terms on the right-hand side.
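The estimate itself is cheap to compute. The sketch below (hypothetical sizes, a rank-one signal for simplicity, Gaussian noise in place of the bounded noise assumed in the text, and $p$ estimated from the density of $B$ as noted above) forms $P_V \tilde{x}$ from the top-$j$ left singular vectors of $\frac{1}{p}B$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, j, m = 300, 0.5, 1, 0                      # hypothetical parameters

# Rank-one signal with Theta(1) entries; Gaussian noise for simplicity.
A = np.outer(np.sign(rng.standard_normal(n)), np.sign(rng.standard_normal(n)))
B = (A + rng.standard_normal((n, n))) * (rng.random((n, n)) < p)

p_hat = np.count_nonzero(B) / B.size             # p recovered from the density of B
V = np.linalg.svd(B / p_hat, full_matrices=False)[0][:, :j]  # top-j left sing. vecs

x = A[:, m]                                      # unknown m-th column of A
x_tilde = B[:, m] / p_hat                        # m-th column of (1/p) B
x_hat = V @ (V.T @ x_tilde)                      # the estimate P_V(x_tilde)
```

In this regime the projected column $\hat{x}$ should be substantially closer to $x$ than the raw noisy column $\tilde{x}$ is.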
We will make use of the following lemma, which is a variant of \cite[Lemma 2.2]{TVdet}; see also \cite{VW} where results of this type are discussed in depth.
\begin{lemma} \label{lemma:projection} Let $X$ be a random vector in $\mathbb{R}^n$ whose coordinates $x_i, 1\le i \le n$ are independent random variables with mean 0, variance at most $\sigma^2$, and are bounded in absolute value by $1$. Let $H$ be a fixed subspace of dimension $d$ and $P_H (X)$ be the projection of $X$ onto $H$. Then
\begin{equation} \label{eqn:distance} \mathbb{P} \left( \| P_H (X) \| \ge \sigma d^{1/2} + t \right) \le C \exp (-c t^2 ) ,\end{equation}
where $c, C>0$ are absolute constants.
\end{lemma}
The first term $\| x -P_U x \|$ is bounded from above by $\sigma_{j+1}$. The second term has the form $\| P_U X \|$, where $X:= \tilde x - x$ is the $m$-th column of $E$, a random vector with independent entries. Notice that the entries of $X$ are bounded (in absolute value) by $\alpha := p^{-1} (\|x\|_{\infty}+ K)$ with probability $1$. Applying Lemma \ref{lemma:projection} (with the proper normalization), we obtain
\begin{equation} \label{recovery3}
\mathbb{P} \left( \| P_U X \| \geq j^{1/2} \sqrt{ \frac{ \|x\|_{\infty}^2 + 1}{p} }+ t \right) \le C \exp( -c t^2 \alpha^{-2} )
\end{equation}
since the entries of $X$ have variance $p^{-2} \sigma_{im}^2 \leq p^{-1} ( \|x\|_{\infty}^2 + 1)$.
By setting $t := c^{-1/2} \alpha \lambda $, \eqref{recovery3} implies that, for any $\lambda > 0$,
\begin{equation*} \| P_U X \| \le j^{1/2} \sqrt{ \frac{ \|x\|_{\infty}^2 + 1}{p} }+ c^{-1/2} \lambda \alpha \end{equation*}
with probability at least $1- C \exp(-\lambda^2 ) $.
To bound $\| P_U \tilde x - P_V \tilde x \|$, we appeal to Theorem \ref{thm:subspace}. Assume for a moment that $E$ is $(C_1, c_1, \gamma)$-concentrated
for some constants $C_1, c_1, \gamma > 0$. Let $\delta_j := \sigma_j - \sigma_{j+1}$. Then it follows that, for any $\lambda > 0$,
\begin{equation*} \| P_U - P_V\| \le C \sqrt{j} \left(\frac{ \lambda^{2/\gamma} r^{1/\gamma} }{ \delta_{j} } + \frac{\|E \| }{\sigma_j} +\frac{\| E\|^2 }{ \sigma_j \delta_j } \right), \end{equation*}
with probability at least
$$ 1 - 6C_1 9^j \exp \left( -c_1 \frac{\delta_j^\gamma}{8^\gamma} \right) - 2C_1 9^{2r} \exp \left( -c_1 r \frac{\lambda^2}{4^\gamma} \right), $$
where $C$ is an absolute constant.
Since
$$ \| P_U \tilde{x} - P_V \tilde{x} \| \leq \|P_U - P_{V} \| \|\tilde{x} \|, $$
it remains to bound $\| \tilde{x} \|$. We first note that $\|\tilde{x}\| \leq \|x\| + \|X \|$. By Talagrand's inequality (see \cite{Tconc} or \cite[Theorem 2.1.13]{Tbook}), we have
$$ \mathbb{P} \left( \|X \| \geq \mathbb{E} \|X\| + t \right) \leq C \exp(-c t^2 \alpha^{-2}). $$
In addition,
$$ \mathbb{E} \|X\|^2 = \frac{1}{p^2} \sum_{i=1}^n \sigma_{im}^2 \leq \frac{1}{p} \left( \|x\|^2 + n \right). $$
Thus, we conclude that
$$ \|X\| \leq \sqrt{ \frac{\|x\|^2 + n }{p} } + c^{-1/2} \lambda \alpha $$
with probability at least $1 - C \exp(-\lambda^2)$.
Putting the bounds together, we obtain Theorem \ref{theorem:recovery} below.
\begin{theorem} \label{theorem:recovery} Assume that $A$ has rank $r$ and $\| Z\|_{\infty} \le K $ with probability $1$. Assume that $E$ is
$(C_1, c_1, \gamma)$-concentrated for a trio of constants $C_1, c_1, \gamma >0$. Let $m$ be an arbitrary index between $1$ and $n$, and let $x$ and $\tilde x$ be the $m$-th columns of $A$ and $\frac{1}{p} B$. Let $1 \leq j \leq r$ be an integer, and let $V$ be the subspace spanned by the first $j$ singular vectors of $B$. Let $\sigma_1 \ge \dots \ge \sigma_r > 0$ be the
singular values of $A$. Set $\delta_j := \sigma_j - \sigma_{j+1}$. Then, for any $\lambda >0$,
\begin{equation*}
\| x - P_V (\tilde x) \| \le \sigma_{j+1} + j^{1/2} \sqrt{ \frac{ \|x\|_{\infty}^2 + 1}{p} } + \mu \left( \sqrt{ \frac{ \|x\|^2 + n}{p} } + C \lambda \alpha \right) + C \lambda \alpha,
\end{equation*}
with probability at least
$$ 1 - C \exp(-\lambda^2) - 6C_1 9^j \exp \left( -c_1 \frac{\delta_j^\gamma}{8^\gamma} \right) - 2C_1 9^{2r} \exp \left( -c_1 r \frac{\lambda^2}{4^\gamma} \right), $$
where
$$ \alpha := p^{-1} (\| x\| _{\infty} + K) \quad \text{and}\quad \mu:= C \sqrt{j} \left(\frac{\lambda^{2/\gamma} r^{1/\gamma} }{ \delta_{j} } + \frac{\|E \| }{\sigma_j} +\frac{\| E\|^2 }{ \sigma_j \delta_j } \right), $$
and $C$ is an absolute constant.
\end{theorem}
As this theorem is a bit technical, let us consider a special, simpler case. Assume that all entries of $A$ are of order $\Theta(1)$ and $p=\Theta(1)$. Thus, any column $x$ has length $ \Theta (n^{1/2})$. Assume furthermore that $j=r=\Theta(1)$ and $\sigma_r = \Omega(n^{1/2+\varepsilon})$ for some $\varepsilon > 0$. Then our analysis yields
\begin{corollary}
There exists $c_0 > 0$ (depending only on $\varepsilon$) such that, for any given column $x$,
$$ \| x - P_V (\tilde x) \| = O( n^{-c_0} \|x\|) $$
with probability $1-o(1)$.
\end{corollary}
\subsection*{Acknowledgements}
The authors would like to thank Nicholas Cook and David Renfrew for useful comments.
| {
"timestamp": "2014-09-08T02:10:39",
"yymm": "1311",
"arxiv_id": "1311.2657",
"language": "en",
"url": "https://arxiv.org/abs/1311.2657",
"abstract": "Matrix perturbation inequalities, such as Weyl's theorem (concerning the singular values) and the Davis-Kahan theorem (concerning the singular vectors), play essential roles in quantitative science; in particular, these bounds have found application in data analysis as well as related areas of engineering and computer science. In many situations, the perturbation is assumed to be random, and the original matrix has certain structural properties (such as having low rank). We show that, in this scenario, classical perturbation results, such as Weyl and Davis-Kahan, can be improved significantly. We believe many of our new bounds are close to optimal and also discuss some applications.",
"subjects": "Numerical Analysis (math.NA); Combinatorics (math.CO); Probability (math.PR); Statistics Theory (math.ST)",
"title": "Random perturbation of low rank matrices: Improving classical bounds",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9865717428891155,
"lm_q2_score": 0.8152324803738429,
"lm_q1q2_score": 0.8042853290222388
} |
https://arxiv.org/abs/1412.2851 | Minimum Local Distance Density Estimation | We present a local density estimator based on first order statistics. To estimate the density at a point, $x$, the original sample is divided into subsets and the average minimum sample distance to $x$ over all such subsets is used to define the density estimate at $x$. The tuning parameter is thus the number of subsets instead of the typical bandwidth of kernel or histogram-based density estimators. The proposed method is similar to nearest-neighbor density estimators but it provides smoother estimates. We derive the asymptotic distribution of this minimum sample distance statistic to study globally optimal values for the number and size of the subsets. Simulations are used to illustrate and compare the convergence properties of the estimator. The results show that the method provides good estimates of a wide variety of densities without changes of the tuning parameter, and that it offers competitive convergence performance. |
\section{Introduction}
Nonparametric density estimation is a classic problem that continues to play an important role
in applied statistics and data analysis. More recently, it has also become a topic of much interest
in computational mathematics, especially in the uncertainty quantification community
where one is interested in, for example, densities of a large number of coefficients of a
random function in terms of a fixed set of deterministic functions (e.g., truncated Karhunen-Lo\`eve expansions).
The method we present here was motivated by such applications.
Among the most popular techniques for density estimation are the
histogram \cite{scott1979optimal,scottmv}, kernel~\cite{parzen1962estimation,scottmv,wand} and orthogonal
series~\cite{efrom,silverman1986density} estimators. For the one-dimensional case,
histogram methods remain in widespread use due to their simplicity and intuitive
nature, but kernel density estimation has emerged as a method of
choice thanks, in part, to recent adaptive bandwidth-selection methods providing
fast and accurate results~\cite{botev2010kernel}. However, these kernel density estimators can fail to
converge in some cases (e.g., recovering a Cauchy density
with Gaussian kernels)~\cite{buch2005kernel} and can be computationally expensive with large samples
($\orderof{N^2}$, for a sample size $N$).
Note that histogram estimators are typically implemented using equal-sized bins,
and nearest-neighbor density estimators
can be roughly thought of as histograms whose bins adapt to the local density of the data.
More precisely, let $X_1,\ldots, X_N$ be iid variables from a
distribution with density, $f$, and let $X_{(1)},\ldots,X_{(N)}$ be the corresponding order
statistics. For any $x$, define $Y_i = |X_i-x|$ and $D_j(x) = Y_{(j)}$.
The $k$-nearest-neighbor estimate of $f$ is defined as (see \cite{silverman1986density}
for an overview):
\(
\widehat{f}_N(x) = (\,C_N/N\,)/[\,2 D_k(x)\,],
\)
where $C_N$ is a constant that may depend on the sample size.
We may think of $2D_k(x)$ as the width of the bin around $x$. The value of $C_N$ is often chosen as
$C_N\approx N^{1/2}$ but some effort has been directed towards its
optimal selection~\cite{fukunaga1973optimization,hall2008choice,li1984consistency},
with some recent work involving the use of order statistics~\cite{kung2012optimal}.
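For concreteness, a direct implementation of this estimator (taking $C_N = k$, as in the classical $k$-nearest-neighbor estimate $k/(2 N D_k(x))$) is:

```python
import numpy as np

def knn_density(x, sample, k):
    """k-nearest-neighbor density estimate at x: (k/N) / (2 D_k(x))."""
    distances = np.sort(np.abs(np.asarray(sample) - x))
    return (k / len(sample)) / (2 * distances[k - 1])
```

Here $k = \mathrm{round}(\sqrt{N})$ corresponds to the common choice $C_N \approx N^{1/2}$.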
One of the disadvantages of nearest-neighbor estimators is that their derivative
has discontinuities at the points $(X_{(j)}+X_{(j+k)})/2$, which is caused by the discontinuities
of the derivative of the function $D_k(x)$ at these points. This is clear in
Figure \ref{fig:smoothdist},
which shows plots of $D_k(x)$ for a sample of size $N=125$ from a Cauchy$(0,1)$ distribution with $k=1$
and \edit{$k=\mathrm{round}(\sqrt{N})$}. One way to obtain smoother densities
is using a combination of kernel and nearest-neighbor density estimation where the
nearest-neighbors
technique is used to choose the kernel bandwidth~\cite{silverman1986density}.
We introduce
an alternative averaging method that improves smoothness and can still be used
to obtain local density estimates.
The main idea of this paper may be summarized as follows: Instead
of using the $k$th nearest-neighbor to provide an estimate of
the density at a point, $x$, we use a subset-average of first order statistics of $|X_i-x|$.
So, the original sample of size $N$ is split into $m$ subsets of size $s$ each; this
decomposition into subsets allows the control of the asymptotic mean squared error (MSE)
of the density estimate. Thus, the problem of bandwidth selection is transformed into that of choosing an optimal number
of subsets. This density estimator is naturally parallelizable with
complexity $\orderof{N^{{1}/{3}}}$ for parallel systems.
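In code, one plausible reading of this construction is the following sketch; it is illustrative only (the precise estimator is specified in Section~\ref{sec:algo}), and the normalization $1/[2(s+1)\bar{d}\,]$ anticipates the limit $(s+1)\,\mathbb{E} X_{(1),s} \to 1/g(0)$ derived in Section~\ref{sec:theory} together with $f(x_*) = g(0)/2$.

```python
import numpy as np

def mld_density(x, sample, m, rng=None):
    """Subset-average minimum-distance estimate of f(x): split the sample
    into m subsets of size s = N // m, average the minimum distance to x
    over the subsets, and estimate f(x) by 1 / (2 (s+1) dbar)."""
    if rng is None:
        rng = np.random.default_rng(0)
    X = rng.permutation(np.asarray(sample))   # random split into subsets
    s = len(X) // m
    dbar = np.abs(X[: m * s].reshape(m, s) - x).min(axis=1).mean()
    return 1.0 / (2 * (s + 1) * dbar)
```

For a Uniform$(0,1)$ sample, the estimate at $x = 0.5$ should be close to $1$, with $m$ playing the role of the tuning parameter.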
The rest of this article is organized as follows. In
Section~\ref{sec:theory} we develop the theory that underlies the
estimator and describe asymptotic results.
In Sections~\ref{sec:algo} and \ref{sec:numexp} we describe the actual estimator
and study its performance using numerical experiments.
A variety of densities are used to reveal the strengths and
weaknesses of the estimator. We provide concluding remarks and generalizations
in Section~\ref{sec:conc}. Proofs and other auxiliary results are collected in Appendix \ref{sec:proofs}. \edit{From here on, when we refer to the size of a sample set as a power of the total number of samples, we assume that it represents a rounded value; for example, $k = \sqrt{N}$ stands for $k = \mathrm{round}(\sqrt{N})$.}
\section{Theoretical framework \label{sec:theory}}
Let $X_1,\ldots,X_N$ be iid random variables from a distribution with invertible CDF,
$F$, and PDF $f$. Our goal is to estimate the value of $f$ at a point, $x_*$,
where $f(x_*)>0$, and where $f$ is either continuous or has a jump discontinuity.
The non-negative random variables $Y_i = |X_i-x_*|$ are iid with PDF:
\(
g(y) = f(y+x_*) + f(x_*-y).
\)
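This follows by differentiating the CDF of $Y_i$: for $y \geq 0$,

```latex
\mathbb{P}(Y_i \leq y) = \mathbb{P}(x_* - y \leq X_i \leq x_* + y) = F(x_* + y) - F(x_* - y),
```

and differentiating in $y$ gives $g(y) = f(x_* + y) + f(x_* - y)$.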
In particular, $f(x_*)=g(0)/2$. Thus, an estimate of $g(0)$ leads to an estimate of
$f(x_*)$. Furthermore, $g$ is more regular than $f$ in a sense described by the following lemma
(its proof and those of the other results in this section are collected in
Appendix \ref{sec:proofs}).
\begin{lemma} \label{lemma:gprop} Let $f$ and $g$ be as defined above. Then:
\vspace{-.3cm}
\begin{itemize}
\item[(i)] If $f$ has left and right limits at $x_*$ (i.e., it is either continuous or has a jump discontinuity at $x_*$), then $g$ is continuous at zero.
\vspace{-.2cm}
\item[(ii)] If $f$ has left and right derivatives at $x_*$, then $g$ has a right derivative at zero. Furthermore, if $f$ is differentiable at $x_*$, then $g'(0)=0$.
\end{itemize}
\end{lemma}
The original question is thus reduced to the following problem: Let $X_1,\ldots,X_N$ be iid
non-negative random variables from a distribution with
invertible CDF, $G$, and PDF $g$. The goal is to estimate $g(0)>0$
assuming that $g$ is right continuous at zero. The continuity at zero
comes from Lemma \ref{lemma:gprop}(i). For some
asymptotic results we also assume that $g$ is right-differentiable with $g'(0)=0$.
The zero derivative
is justified by Lemma \ref{lemma:gprop}(ii). We estimate $g(0)$ using a
subset-average of first order statistics.
There is a natural connection between density estimation and first order statistics:
If ${X_{(1),N}}$ is the first order statistic of $X_1,\ldots,X_N$, then (under regularity conditions)
$\mathbb{E} {X_{(1),N}} \sim Q(1/(N+1))$ as $N\to \infty$,
where $Q = G^{-1}$ is the quantile function, and therefore
$(N+1)\,\mathbb{E} {X_{(1),N}} \to 1/g(0)$.
This shows that one should be able to estimate $g(0)$ provided $N$ is large
and we have a consistent estimate of $\mathbb{E} {X_{(1),N}} $. In the next section we provide
conditions for the limit to be valid and derive a similar limit for the second
moment of ${X_{(1),N}}$; we then define the estimator and study its asymptotics.
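As a quick sanity check (an illustrative simulation, not part of the argument): for $\mathrm{Exp}(1)$ samples, $g(0) = 1$ and the minimum of $N$ draws is $\mathrm{Exp}(N)$, so $(N+1)\,\mathbb{E} X_{(1),N} = (N+1)/N \to 1 = 1/g(0)$ exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
N, trials = 50, 100000

# For Exp(1), X_(1),N ~ Exp(N), so (N+1) E[X_(1),N] = (N+1)/N.
mins = rng.exponential(1.0, size=(trials, N)).min(axis=1)
print((N + 1) * mins.mean())           # close to (N+1)/N = 1.02
```
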
\subsection{Limits of first order statistics}
We start by finding a representation of the first two moments of the first-order statistic
in terms of functions that allow us to determine the limits of the moments as
$N\to \infty$.
\begin{lemma}\label{lemma:ordmom}
Let $X_1,\ldots,X_N$ be iid non-negative random variables with PDF $g$, invertible CDF $G$ and
quantile function $Q$. Assume that $g(0)>0$, and define the sequence of functions
\(
\delta_N(z) = (N+1)(1-z)^N
\)
on $z\in [0,1]$, $N\in \mathbb{N}$. Then:
\vspace{-.3cm}
\begin{itemize}
\item[(i)]
\begin{eqnarray}
(N+1)\,\mathbb{E} {X_{(1),N}} &=& \int_0^1 \frac{\delta_N(z)}{g(Q(z))}\,dz=\int_0^1 Q'(z)\, \delta_N(z)\,dz
\label{eq:EXone}\\
&=& \frac{1}{g(0)} + \frac{1}{(N+2)}\int_0^1 Q''(z)\,\delta_{N+1}(z)\,dz.\label{eq:EXonev1}
\end{eqnarray}
Furthermore, if $g$ is twice differentiable with $g'(0)=0$, then
\begin{equation} \label{eq:EXonev2}
(N+1)\,\mathbb{E} {X_{(1),N}} = \frac{1}{g(0)} + \frac{1}{(N+2)(N+3)}\int_0^1 Q'''(z)\,\delta_{N+2}(z)\,dz.
\end{equation}
\vspace{-.7cm}
\item[(ii)] If $g$ is differentiable a.e., then
\begin{equation}\label{eq:EXone2}
(N+1)^2\,\mathbb{E}[\,{X_{(1),N}}^2\,] = \left(\frac{N+1}{N+2}\right)\int_0^1 (Q^2(z))''\,\delta_{N+1}(z)\,dz.
\end{equation}
\end{itemize}
\end{lemma}
We use the following result to evaluate the limits of the moments as $N\to \infty$.
\begin{prop}\label{prop:convlim}
Let $H$ be a function defined on $[0,1]$ that is continuous at zero, and assume
there is an integer $m>0$ and a constant $C>0$ such that
\begin{equation}\label{eq:growth}
|H(x)| \leq {C}/{(1-x)^m}
\end{equation}
a.e. on $[0,1]$. Then,\,
\(
\lim_{N\to \infty} \int_0^1 H(x)\,\delta_N(x)\,dx = H(0).
\)
\end{prop}
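The proposition can be illustrated numerically. As a hypothetical test function take $H(x)=1/(1-x)$, which satisfies \eqref{eq:growth} with $C=1$ and $m=1$; in this case $\int_0^1 H\,\delta_N\,dx = (N+1)/N$ in closed form, so the convergence to $H(0)=1$ is visible directly (the quadrature below is a plain midpoint rule, our illustration only).

```python
def delta(N, z):
    # delta_N(z) = (N + 1) * (1 - z)^N
    return (N + 1) * (1.0 - z) ** N

def integral(H, N, steps=100000):
    # midpoint-rule approximation of the integral of H * delta_N over [0, 1]
    h = 1.0 / steps
    return h * sum(H((i + 0.5) * h) * delta(N, (i + 0.5) * h)
                   for i in range(steps))

H = lambda x: 1.0 / (1.0 - x)   # satisfies the tail condition with C = 1, m = 1
vals = [integral(H, N) for N in (10, 100, 1000)]
print(vals)  # approaches H(0) = 1; the exact values are (N+1)/N
```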
This proposition allows us to compute the limits of \eqref{eq:EXone}-\eqref{eq:EXone2}
provided the quantile functions satisfy appropriate regularity conditions.
When a function $H$ satisfies \eqref{eq:growth}, we shall say that $H$
satisfies a tail condition for some $C>0$ and integer $m>0$. The following
corollary follows from Lemma \ref{lemma:ordmom} and Proposition \ref{prop:convlim}:
\begin{corollary}\label{cor:momlims}
Let $X_1,\ldots,X_N$ be iid non-negative random variables with PDF $g$, invertible CDF $G$ and
quantile function $Q$. Assume that $g(0)>0$. Then:
\vspace{-.2cm}
\begin{itemize}
\item[(i)] If $g$ is continuous at zero and $Q'$ satisfies a tail condition, then
\begin{equation}\label{eq:limfirst}
\lim_{N\to\infty} (N+1)\,\mathbb{E} {X_{(1),N}} = Q'(0) = {1}/{g(0)}.
\end{equation}
If $g$ is differentiable and $Q''$ satisfies a tail condition, then
\begin{equation}\label{eq:limfirstv1}
(N+1)\,\mathbb{E} {X_{(1),N}} = {1}/{g(0)} + \orderof{1/N}.
\end{equation}
\item[(ii)] If $g$ is twice differentiable with $g'(0)=0$, $g''$ is continuous at zero and
$Q'''$ satisfies a tail condition, then
\begin{equation}\label{eq:limfirstv2}
(N+1)\,\mathbb{E} {X_{(1),N}} = {1}/{g(0)} + \orderof{1/N^2}.
\end{equation}
\item[(iii)] If $g$ is differentiable a.e., $g'$ and $g$ are continuous at zero, and $Q''$ satisfies a tail
condition, then
\begin{eqnarray}
\lim_{N\to \infty} (N+1)^2\,\mathbb{E}[{X_{(1),N}}^2] &=& 2\,Q'(0)^2 = {2}/{g(0)^2}\label{eq:limsec}\\
\lim_{N\to\infty} {\mathbb{V}{\rm ar}}\left[\,(N+1)\,{X_{(1),N}}\,\right] & = & {1}/{g(0)^2}.\label{eq:limvar}
\end{eqnarray}
\end{itemize}
\end{corollary}
We now provide examples of distributions that satisfy the hypotheses
of Corollary \ref{cor:momlims}. For these examples, we temporarily return to the notation $X_i$ (iid random variables) and $Y_i= |X_i-x_*|$ used before Lemma~\ref{lemma:gprop}.
\begin{example}{\rm Let $X_1,\ldots,X_N$ be iid with exponential distribution
$\mathcal{E}(\lambda)$ and fix $x_*>0$. The PDF, CDF and quantile function of $Y_i$ are,
respectively,
\begin{eqnarray*}
g(y) &=& 2\lambda\,e^{-\lambda x_*}\cosh(\lambda y)\,I_{y\leq x_*} + \lambda\,e^{-\lambda(x_*+y)}\,I_{y> x_*}\\
G(y) &=& 2e^{-\lambda x_*}\sinh(\lambda y) \,I_{y\leq x_*} + (1-e^{-\lambda(x_*+y)}) \,I_{y> x_*}\\
Q(z) &=& \lambda^{-1}\mathrm{arcsinh} (ze^{\lambda x_*}/2)\,I_{z\leq z_*}-
[\,x_* + \lambda^{-1}\,\log(1-z)\,]\,I_{z>z_*}
\end{eqnarray*}
for $y\geq 0$, $z\in [0,1)$ and $z_*=1-e^{-2\lambda x_*}$. As expected, $g'(0)=0$. In addition, $Q$ and its derivatives
are continuous at zero. Furthermore, since $|\log(1-z)|\leq z/(1-z)$ on $(0,1)$, we see that $Q$ and its derivatives
satisfy tail conditions.
}
\end{example}
\begin{example}{\rm Let $X_1,\ldots,X_N$ be iid with Cauchy distribution
and fix $x_*\in{\mathbb{R}}$. The PDF and CDF of $Y_i$ are:
\begin{eqnarray*}
g(y) &=& \frac{1}{\pi[1+(y+x_*)^2]} + \frac{1}{\pi[1+(x_*-y)^2]}\\
G(y) &=& \arctan(y+x_*)/\pi - \arctan(x_*-y)/\pi.
\end{eqnarray*}
Again, $g'(0)=0$. To verify the conditions on the quantile function, $Q$, note that
\[
Q(z) = -\cot(\pi z) + \cot(\pi z)\sqrt{1 + (1+x_*^2)\tan^2(\pi z)},
\]
in a neighborhood of zero, while for $z$ in a neighborhood of 1, $Q$ is given by
\[
Q(z) = -\cot(\pi z) - \cot(\pi z)\sqrt{1 + (1+x_*^2)\tan^2(\pi z)}.
\]
Since $Q(z)\to 0$ as $z\to 0^+$ and $g$ is smooth, it follows that $Q$ and its derivatives are
continuous at zero. It is easy to see that the tail conditions for $Q'$,
$Q'''$ and $(Q^2)''$ are determined by the tail condition of $\csc(\pi z)$, which
in turn follows from the inequality $|\csc(\pi z)|\leq 1/[\,\pi z(1-z)]$ on
$(0,1)$.
}
\end{example}
It is also easy to check that the Gaussian and beta distributions satisfy appropriate
tail conditions for Corollary \ref{cor:momlims}.
\subsection{Estimators and their properties}
Let $X_1,\ldots,X_N$ be iid non-negative random variables whose
PDF $g$, CDF $G$ and quantile function $Q$ satisfy appropriate regularity conditions
for Corollary \ref{cor:momlims}.
We randomly split the sample into $m_N$ independent subsets of size
$s_N$. Both sequences, $(m_N)$ and $(s_N)$, tend to infinity as $N\to \infty$ and satisfy
$m_N s_N = N$. Let $X^{(1)}_{(1),s_N},\ldots,X^{(m_N)}_{(1),s_N}$ be the first-order statistics for each
of the $m_N$ subsets, and let ${\overline{X}_{m_N,s_N}}$ be their average,
\begin{equation}\label{eq:meanX}
{\overline{X}_{m_N,s_N}} = \frac{1}{m_N}\sum_{k=1}^{m_N} X^{(k)}_{(1),s_N}.
\end{equation}
The estimators of $1/g(0)$ and $g(0)$ are defined, respectively, as:
\begin{equation}\label{eq:estdefs}
\widehat{f^{-1}(0)}_{N} = (s_N+1){\overline{X}_{m_N,s_N}},\quad \widehat{f(0)}_{N} = 1/\widehat{f^{-1}(0)}_{N}.
\end{equation}
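The estimators in \eqref{eq:estdefs} can be sketched in a few lines (our illustration, not the authors' code; the split $s_N=N^{\alpha}$, $m_N=N^{1-\alpha}$ used here anticipates one concrete choice discussed below). The check uses $\mathcal{E}(1)$, for which $g(0)=1$.

```python
import random

def mld_point_estimates(y, alpha=1/3, rng=random):
    # Split the non-negative sample y into m_N subsets of size s_N,
    # average the subset minima, and scale by (s_N + 1)  -- Eq. (estdefs).
    N = len(y)
    m = max(1, round(N ** (1 - alpha)))   # number of subsets, m_N
    s = N // m                            # subset size, s_N
    y = list(y)
    rng.shuffle(y)
    mins = [min(y[k * s:(k + 1) * s]) for k in range(m)]
    inv_g0 = (s + 1) * sum(mins) / m      # estimates 1/g(0)
    return inv_g0, 1.0 / inv_g0

random.seed(1)
y = [random.expovariate(1.0) for _ in range(60000)]   # g(0) = 1
inv_g0_hat, g0_hat = mld_point_estimates(y)
```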
\begin{prop}\label{prop:mselim}
Let $N$, $m_N$ and $s_N$ be as defined above. Then:
\vspace{-.1cm}
\begin{itemize}
\item[(i)] If $g$ is differentiable a.e., $g'$ and $g$ are continuous at zero, and
$Q''$ satisfies a tail condition, then
\begin{equation}\label{eq:mselim}
\lim_{N\to \infty} {\rm MSE}(\,\widehat{f^{-1}(0)}_{N}\,) = 0,
\end{equation}
and therefore
\(
\widehat{f(0)}_{N} \stackrel{{P}}{\longrightarrow} g(0)
\)
as $N\to \infty$.
\item[(ii)] Let $g$ be twice differentiable with $g'(0)=0$, $g''$ be continuous at zero, and
let $Q'''$ satisfy a tail condition. If $\sqrt{m_N}/s_N\to \infty$ and
$\sqrt{m_N}/s_N^2\to 0$
as $N\to \infty$, then
\begin{equation}\label{eq:fidistlim}
\sqrt{m_N}\left(\,\widehat{f^{-1}(0)}_{N}-1/g(0)\,\right) \stackrel{{\cal L}}{\longrightarrow} N(0,1/g(0)^2),
\end{equation}
which leads to
\begin{equation}\label{eq:fdistlim}
\sqrt{m_N}\left(\,\widehat{f(0)}_{N}-g(0)\,\right) \stackrel{{\cal L}}{\longrightarrow} N(0,g(0)^2).
\end{equation}
Furthermore, ${\rm MSE}(\widehat{f^{-1}(0)}_{N})$ and ${\rm MSE}(\widehat{f(0)}_{N})$ are $\orderof{1/m_N}$. In particular,
\eqref{eq:fidistlim} and \eqref{eq:fdistlim} are satisfied when $s_N = N^\alpha$ and $m_N = N^{1-\alpha}$
for some $\alpha\in (1/5,1/3)$. This leads to the MSE optimal rate $\orderof{N^{-4/5-\varepsilon}}$ for
any $\varepsilon>0$.
\end{itemize}
\end{prop}
By (ii), we need a balance between the subset size, $s_N$, and the number of
subsets, $m_N$: the conditions $\sqrt{m_N}/s_N\to \infty$ and $\sqrt{m_N}/s_N^2\to 0$ require $m_N$ to grow faster than $s_N^2$ but slower than $s_N^4$.
For comparison, the optimal rate of the MSE is $\orderof{N^{-2/3}}$
for the smoothed histogram, and $\orderof{N^{-4/5}}$ for the kernel density estimator
\cite{dasgupta}.
\subsubsection*{Distance function}
We return to the original sample $X_1,\ldots,X_N$ from a density $f$ before the transformation to
$Y_1 = |X_1-x|,\ldots,Y_N = |X_N-x|$. The sample is split into $m_N$ subsets. Let $D_1(x;m)$ be the distance from $x$
to its nearest-neighbor in the $m$th subset. The mean ${\overline{X}_{m_N,s_N}}$ in \eqref{eq:meanX} is the average of $D_1(x;m)$
over all the subsets; we call this average the distance function, $D_{\rm MLD}$, of the MLD density estimator. That is,
\[
D_{\rm MLD}(x) = {\overline{X}_{m_N,s_N}}=\frac{1}{m_N}\sum_{m=1}^{m_N} D_1(x;m).
\]
The estimators in \eqref{eq:estdefs} can then be written in terms of $D_{\rm MLD}(x)$.
This distance function tends to be smoother than the usual distance function used by $k$-nearest-neighbor density
estimators. For example, Figure \ref{fig:smoothdist} shows the different distance functions $D_{\rm MLD}(x)$, $D_1(x)$
and $D_k(x)$ (the latter as defined in the introduction) for a sample of $N=125$ variables
from a Cauchy$(0,1)$. Note that $D_{\rm MLD}$ is an average of first-order statistics for samples of size
$s_N$, while $D_1$ is a first-order statistic for a sample of size $N$, so $D_{\rm MLD}>D_1$. On the
other hand, $D_{\sqrt{N}}$ is an $N^{1/2}$th-order statistic based on a sample of size $N$; hence the ordering
$D_{\rm MLD}>D_{\sqrt{N}}>D_1$.
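The inequality $D_{\rm MLD}\geq D_1$ holds deterministically, since each per-subset minimum distance is at least the overall minimum. A small sketch (our illustration, with an assumed point of evaluation $x=0$ and the sample sizes of the figure):

```python
import math
import random

random.seed(2)

def cauchy():
    # standard Cauchy draw via the inverse CDF
    return math.tan(math.pi * (random.random() - 0.5))

N, m = 125, 25           # as in the figure: N = 125 samples, m_N = 25 subsets
s = N // m
x = 0.0                  # assumed point of evaluation
sample = [cauchy() for _ in range(N)]
random.shuffle(sample)
subsets = [sample[k * s:(k + 1) * s] for k in range(m)]

D1 = min(abs(v - x) for v in sample)                             # 1-NN distance
D_mld = sum(min(abs(v - x) for v in sub) for sub in subsets) / m  # averaged minima
```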
\begin{figure}[!h]
\begin{center}
\includegraphics[keepaspectratio,width=0.5\textwidth]{smoothdist.jpg}
\caption{Distance function $D_k(x)$ for $k$-nearest-neighbor (for $k=1$ and $k=11\approx \sqrt{N}$) and the distance function
$D_{\rm MLD}(x)$ (with $m_N = N^{\frac{2}{3}} = 25$ subsets) for 125 samples taken from a Cauchy(0,1) distribution.\label{fig:smoothdist}}
\end{center}
\end{figure}
\section{Minimum local distance density estimator \label{sec:algo}}
We now describe the
minimum local distance density estimator (MLD-DE). The inputs are: a sample, a set of points where
the density is to be estimated, and the parameter $\alpha$, whose default value is
$\alpha=1/3$.
The basic steps to obtain the density estimate at a point $x$ are: (1) Start with a sample
of $N$ iid variables from the unknown density, $f$; (2) Randomly split the sample into
$m_N = N^{1-\alpha}$ disjoint subsets of size $s_N = N^{\alpha}$ each;
(3) Find the nearest sample distance to $x$ in each subset; (4) Compute the density estimate by
inverting the average nearest distance across the subsets and scaling it (see Eq.~\eqref{eq:estdefs}).
This is summarized in Algorithm \ref{alg:MLD}.
\begin{algorithm}[!h]
\caption{\label{alg:MLD} Returns density estimates at the points of evaluation $\{x_{\ell}\}_{\ell=1}^{M}$ given the sample
$X_1,\ldots,X_N$ from the unknown density $f$.}
\begin{algorithmic}[1]
\State $m_N \leftarrow$ round($N^{1-\alpha}$)
\State $s_N \leftarrow$ round(${N}/{m_N}$)
\State Create an $s_N \times m_N$ matrix whose $m_N$ columns are the subsets
of $s_N$ variables each
\State Create a vector $\widehat{f}=(\widehat{f}_\ell)$ to hold the density estimates
at the points $\{x_{\ell}\}_{\ell=1}^{M}$
\For{$\ell = 1 \to M$}
\For{$k = 1 \to m_N$}
\State Find the nearest distance $d_{\ell k}$ to the current point $x_\ell$ within the $k$th subset
\EndFor
\State Compute the subset average of distances to $x_\ell$: $d_\ell = (1/m_N)
\sum\limits_{k=1}^{m_N}d_{\ell k}$
\State Compute the density estimate at $x_\ell$: $\widehat{f}_\ell = {1}/[\,2\,(s_N+1)\,d_\ell\,]$
\EndFor \\
\Return $\widehat{f}$
\end{algorithmic}
\end{algorithm}
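Algorithm \ref{alg:MLD} can be rendered in Python as follows (our sketch, not the authors' implementation); the final scaling follows \eqref{eq:estdefs} together with $g(0)=2f(x)$. The check uses a uniform sample on $(0,2)$, for which the true density is $1/2$ everywhere.

```python
import random

def mld_de(sample, points, alpha=1/3, rng=random):
    # Density estimates at `points` from `sample`, per Algorithm 1.
    N = len(sample)
    m = max(1, round(N ** (1 - alpha)))        # m_N subsets
    s = N // m                                 # s_N variables per subset
    data = list(sample)
    rng.shuffle(data)
    subsets = [data[k * s:(k + 1) * s] for k in range(m)]
    f_hat = []
    for x in points:
        # average nearest distance to x over the subsets
        d = sum(min(abs(v - x) for v in sub) for sub in subsets) / m
        # f(x) = g(0)/2, and 1/g(0) is estimated by (s_N + 1) * d
        f_hat.append(1.0 / (2.0 * (s + 1) * d))
    return f_hat

random.seed(3)
sample = [random.uniform(0.0, 2.0) for _ in range(27000)]  # true density 1/2
(f_at_1,) = mld_de(sample, [1.0])
```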
Note that for each of the $M$ points where the density is to be estimated, the algorithm loops over $N^{1-\alpha}$
subsets, and within each it does a nearest-neighbor search over $N^{\alpha}$ points. The computational complexity
is therefore $\orderof{M N^{1-\alpha} N^{\alpha}} = \orderof{MN}$, which is of the same order as the $\orderof{N^2}$
complexity of kernel density estimators~\cite{raykar2010fast} when $M \sim N$. However, MLD-DE
displays multiple levels of parallelism. The first level is the highly
parallelizable evaluation of the density at the $M$ specified points. The second level arises
from the nearest-neighbor distances, which can be computed independently in each subset.
Thus, for parallel systems the effective computational
complexity of the algorithm is $\orderof{MN^{\alpha}}$, which is the same as that of
histogram methods if $\alpha = {1}/{3}$.
\section{Numerical examples \label{sec:numexp}}
An extensive suite of numerical experiments was used to test the MLD-DE method.
We now summarize the results to show that they
are consistent with the theory derived in Section~\ref{sec:theory}, and
illustrate some salient features of the estimator.
We also compare MLD-DE to the
adaptive kernel density estimator (KDE) introduced by Botev et
al.~\cite{botev2010kernel} and to the histogram method based on Scott's normal
reference rule~\cite{scott1979optimal}.
We first discuss experiments for density estimation at
a fixed point and show the effects of changing the number of subsets for a
fixed sample size. We then estimate the integrated mean square error for various densities,
and compare the convergence of MLD-DE to that of other density estimators. Next, we present numerical experiments that show
the spatial variation of the bias and variance of MLD-DE,
and relate them to the theory derived in Section~\ref{sec:theory}. Finally,
we check the impact of changing the tuning parameter $\alpha$
(see Proposition~\ref{prop:mselim}).
\subsection{Pointwise estimation of a density}
We use MLD-DE to estimate values of the beta$(1,4)$ and
$N(0,1)$ densities at a single point and analyze its convergence
performance. Starting with a sample size $N=100$, $N$ was
progressively increased to three million. For each $N$,
1000 trials were performed to estimate the MSE of the density
estimate. The parameter $\alpha$ was also
changed; it was set to ${1}/{3}$ for one set of experiments
anticipating a bias of $\orderof{{1}/{N}}$, and to ${1}/{5}$ for
another set, anticipating a bias of $\orderof{{1}/{N^2}}$. The results
are shown in Figure~\ref{fig:pointwise_conv}.
\begin{figure}[!h]
\begin{center}
\includegraphics[keepaspectratio,width=0.45\textwidth]{beta_pointwise.jpg}\hspace{.2cm}
\includegraphics[keepaspectratio,width=0.45\textwidth]{gaussian_pointwise.jpg}
\caption{Convergence plots of the density estimates at $x=1/2$ for the distribution beta$(1,4)$ (left),
and at $x=1$ for $N(0,1)$ (right).\label{fig:pointwise_conv}}
\end{center}
\end{figure}
We see the contrasting convergence behavior for the beta$(1,4)$ and $N(0,1)$ distributions.
For the former, the convergence is faster when $\alpha =
{1}/{3}$, while for the Gaussian it is
faster with $\alpha= {1}/{5}$. We recall from Section \ref{sec:theory}
that the asymptotic bias of the density estimate at a point is
$\orderof{{1}/{N^2}}$. However, reaching the asymptotic regime depends
on the convergence of $\int_{0}^{1} Q''(z) \, \delta_N(z)
\, dz $ to zero, which can be quite slow, depending on the behavior of
the density at the chosen point. Hence, the effective bias in
simulations can be $\orderof{{1}/{N}}$. The numerical experiments thus
indicate that the quantile function derivative of the Gaussian decays
to zero much faster than that of the beta distribution, and hence the
optimal value of $\alpha$ for $N(0,1)$ is ${1}/{5}$, while that for beta$(1,4)$
is ${1}/{3}$. However, in either case the order of the decay in the figure is close to
$N^{-3/4}$.
\subsection{$L^2$-convergence}
We now summarize simulation results regarding the $L^2$-error (i.e., integrated MSE) of estimates of a beta$(1,4)$,
a Gaussian mixture and the Cauchy$(0,1)$ density. The Gaussian mixture used is (see ~\cite{wasserman2006all}):
\(
\label{eq:gm_pdf}
0.5\, N(0,1) + 0.1 \sum_{i=0}^{4} N(\,i/2-1,\,1/100^2\,).
\)
For comparison, these densities were estimated using MLD-DE,
the histogram based on Scott's rule, and the adaptive
KDE proposed by~\cite{botev2010kernel}. Both the Scott's-rule
histogram and the KDE method fail to recover the Cauchy$(0,1)$
density. For the histogram method, this limitation was overcome using an interquartile range
(IQR) based approach for the Cauchy density that uses a bandwidth, $h_N$, based on the
Freedman-Diaconis rule~\cite{freedman1981histogram}:
\begin{equation}
\label{eq:iqr_bwith}
h_N =2\, N^{-1/3}\,\mathrm{IQR}_N,
\end{equation}
where IQR$_N$ is the sample interquartile range for a sample of size $N$. For the KDE,
there is no clear method that
enables us to estimate a Cauchy density, thus KDE was only used for the Gaussian mixture and beta densities.
\begin{figure}[!h]
\begin{center}
\subfigure[Beta(1,4) distribution]
{
\includegraphics[width=0.35\textwidth]{beta_comparison.pdf}
\label{fig:beta_comparison}
}
\subfigure[Gaussian Mixture]
{
\includegraphics[width=0.35\textwidth]{gm_comparison.pdf}
\label{fig:gm_comparison}
}\\
\subfigure[Gaussian Mixture: optimal $\alpha$]
{
\includegraphics[width=0.35\textwidth]{gm_comparison_opt.pdf}
\label{fig:gm_comparison_optimal_a}
}
\subfigure[Cauchy(0,1) distribution]
{
\includegraphics[width=0.35\textwidth]{cauchy_comparison_iqr.pdf}
\label{fig:cauchy_comparison}
}
\caption{Density estimates using MLD-DE, KDE and histogram approaches for the beta$(1,4)$,
Gaussian mixture and Cauchy$(0,1)$ distributions.}
\label{fig:comparison}
\end{center}
\end{figure}
For the MLD-DE and histogram-based estimators, estimates were obtained for 256 points in
specified intervals. The interval used for each distribution is
shown in the figures as the range over which the densities are plotted.
Once the pointwise density
estimates were calculated, interpolated density estimates were obtained using nearest-neighbor
interpolation. For example, Figure~\ref{fig:comparison} shows density estimates from a single
sample using $\alpha=1/3$ for the beta (Figure~\ref{fig:beta_comparison}), Gaussian Mixture
(Figure~\ref{fig:gm_comparison}) and Cauchy (Figure~\ref{fig:cauchy_comparison}), and
with an optimal $\alpha$
for the Gaussian mixture
(Figure~\ref{fig:gm_comparison_optimal_a}) obtained by simulation.
The sample size was again increased progressively starting with
$N=125$ up to a maximum sample size $N=8000$. The MSE was
calculated at every point of estimation, and then numerically
integrated to obtain an estimate of the $L^2$-error. A total of 1000 trials
were performed at each sample size to obtain the expected $L^2$-error for such
sample size. Figure~\ref{fig:L2_conv} shows the convergence plots
obtained for the three densities using the various density estimation
methods (the error bars are the size of the plotting symbols). We see that the
performance of MLD-DE is comparable to
that of the histogram method for the beta and Gaussian mixture
densities, and KDE performs better with both these densities.
\begin{figure}[!h]
\begin{center}
\subfigure[Beta(1,4) distribution]
{
\includegraphics[width=0.35\textwidth]{beta_L2_zero.pdf}
\label{fig:beta_L2}}
\subfigure[Gaussian Mixture]
{
\includegraphics[width=0.35\textwidth]{gmixture_L2_zero.pdf}
\label{fig:gmixture_L2}}
\subfigure[Gaussian Mixture: optimal $\alpha$]
{
\includegraphics[width=0.35\textwidth]{gmixture_L2_zero_opt_a.pdf}
\label{fig:gmixture_L2_opt_a}}
\subfigure[Cauchy(0,1) density]
{
\includegraphics[width=0.35\textwidth]{cauchy_L2_zero.pdf}
\label{fig:cauchy_L2}}
\caption{$L^{2}$-convergence plots for various densities.}
\label{fig:L2_conv}
\end{center}
\end{figure}
For the Cauchy density, both the histogram based on
Scott's rule and the KDE approach fail to converge. This is because
Scott's rule requires a finite second moment, which the Cauchy lacks,
whereas the kernel used in the KDE estimator is a Gaussian kernel, whose finite
moments are poorly matched to the Cauchy's heavy tails. But
MLD-DE produces convergent estimates of the Cauchy density
without any need to change the parameters from those used with the
other densities. Furthermore, it also performs better than the
IQR-based histogram, which is designed to be less sensitive to
outliers in the data. Thus, MLD-DE provides a robust alternative to
the histogram and kernel density estimation methods, while offering
competitive convergence performance.
\subsection{Spatial variation of the pointwise error}
We now consider the pointwise bias and variance of
MLD-DE. Given a fixed sample size, $N$, the bias and variance
are estimated by simulations over 1000 trials. Figure~\ref{fig:std_bias} shows the results; it shows
pointwise estimates of the mean and the standard error of the density estimates plotted alongside the
true densities. We see that the pointwise variance increases with the value
of the true density, while the bias is larger towards the edges of the estimation interval.
For comparison, Figure~\ref{fig:std_bias_others} shows analogous plots for the KDE and IQR
histogram methods.
\begin{figure}[!h]
\begin{center}
\subfigure[Beta(1,4) distribution]
{
\includegraphics[width=0.35\textwidth]{beta_std.pdf}
\label{fig:beta_std}}
\subfigure[Gaussian Mixture]
{
\includegraphics[width=0.35\textwidth]{gm_std.pdf}
\label{fig:gmixture_std}}
\subfigure[Gaussian mixture: optimal $\alpha$]
{
\includegraphics[width=0.35\textwidth]{gm_std_opt.pdf}
\label{fig:gmixture_std_opt_a}}
\subfigure[Cauchy(0,1) distribution]
{
\includegraphics[width=0.35\textwidth]{cauchy_std.pdf}
\label{fig:cauchy_std}}
\caption{Pointwise mean and variance of the MLD-DE estimates for various densities.}
\label{fig:std_bias}
\end{center}
\end{figure}
In particular, for the beta density (Figure~\ref{fig:beta_std}), the
bias is smaller in the middle regions of the support of the
density. However, the bias is large near the boundary point $x=0$, where the
density has a discontinuity. Figure~\ref{fig:gmixture_std} shows the
corresponding results for the Gaussian mixture. Again, we see
a smaller variance in the tails of the density, but a larger bias
in the tails. As the variance increases with the density, we see larger
variances near the peaks than at the troughs. The results improve
considerably with the optimal choice of $\alpha$
(Figure~\ref{fig:gmixture_std_opt_a}), with a significant decrease in
the bias. Figure~\ref{fig:cauchy_std} shows the results for
the Cauchy density; these show a small bias in the tails but very low
variance.
\begin{figure}[!h]
\begin{center}
\subfigure[Beta(1,4) distribution]
{
\includegraphics[width=0.35\textwidth]{beta_std_kde.pdf}
\label{fig:beta_std_kde}}
\subfigure[Gaussian Mixture]
{
\includegraphics[width=0.35\textwidth]{gm_std_kde.pdf}
\label{fig:gmixture_std_kde}}
\subfigure[Cauchy(0,1) distribution]
{
\includegraphics[width=0.35\textwidth]{cauchy_std_iqr.pdf}
\label{fig:cauchy_std_iqr}}
\caption{Pointwise mean and variance of the adaptive KDE and IQR based histogram methods.}
\label{fig:std_bias_others}
\end{center}
\end{figure}
\subsection{Effect of varying the tuning parameter $\alpha$}
The MLD-DE method depends on the parameter $\alpha$, which controls the trade-off between the number of
subsets, $m_N$, and the size, $s_N$, of each subset. This is
similar to the dependence of histogram and KDE methods on a bandwidth
parameter. However, MLD-DE allows the use of different $\alpha$
at each point of estimation without affecting the
estimates at other points. This opens the possibility of flexible
adaptive density estimation.
To evaluate the effect of $\alpha$ on the
$L^{2}$-error, simulations were performed using values of $\alpha$ that
increased from zero to one, with the total number of samples fixed to $N=1000$.
The simulations were done for the beta$(1,4)$, Gaussian
mixture and Cauchy$(0,1)$ distributions. Figure~\ref{fig:error_vs_a}
shows plots of the estimated $L^{2}$-error as a function of $\alpha$ for
the different densities.
\begin{figure}[!h]
\begin{center}
\subfigure[Beta(1,4) distribution]
{
\includegraphics[width=0.35\textwidth]{beta_vs_a_zero.pdf}
\label{fig:beta_a}}
\subfigure[Gaussian Mixture]
{
\includegraphics[width=0.35\textwidth]{gm_vs_a_zero.pdf}
\label{fig:gmixture_a}}
\subfigure[Cauchy(0,1) distribution]
{
\includegraphics[width=0.35\textwidth]{cauchy_vs_a_zero.pdf}
\label{fig:cauchy_a}}
\caption{$L^{2}$-error versus the parameter $\alpha$ for various densities. The sample size
was fixed to $N = 1000$. A large $\alpha$ implies a small number of subsets $m_N$, but a
large number of samples $s_N$ in each subset, while a smaller $\alpha$ implies the converse.}
\label{fig:error_vs_a}
\end{center}
\end{figure}
All the curves have a similar profile, with
the error increasing sharply for $\alpha \geq 0.7$; so the plots only show
the errors for $\alpha \leq 0.8$. This indicates that, as we saw in Section \ref{sec:theory}, the
number of subsets must be larger than their size.
As we decrease $\alpha$ (i.e., increase the number of subsets), we see that
the error is less sensitive to changes in the parameter. Decreasing
$\alpha$ increases the bias, but keeps the variance low. In general, the
`optimal' value of $\alpha$ lies in between 0.2 and 0.6 for these
simulations, which further restricts the search range of any optimization
problem for $\alpha$.
\subsubsection*{An example of adaptive implementation}
An adaptive approach was used to improve MLD-DE estimates of the Cauchy distribution. The numerical results in Figure~\ref{fig:cauchy_std} indicate that
there is a larger bias in the tails of the distribution, while the theory indicates that the bias can be reduced by
decreasing the number of subsets (correspondingly increasing the number of samples in each subset). The
adaptive procedure used is as follows:
(1) A pilot density was first computed using MLD-DE with $\alpha = {1}/{3}$; (2)
The points of estimation where the pilot density was within a fifth of the gap between the maximum and minimum
density values from the minimum value (i.e., where the density was relatively small)
were identified;
(3) The MLD-DE procedure was repeated with the value $\alpha={1}/{2}$ for those points of
estimation.
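The three steps can be sketched as follows (our illustration; the evaluation grid, sample size, and the "within a fifth of the range" threshold of step (2) are as described above, while the `mld_de` helper is an assumed rendering of Algorithm \ref{alg:MLD}).

```python
import math
import random

def mld_de(sample, points, alpha):
    # MLD-DE density estimates at `points` (as in Algorithm 1)
    N = len(sample)
    m = max(1, round(N ** (1 - alpha)))
    s = N // m
    data = list(sample)
    random.shuffle(data)
    subsets = [data[k * s:(k + 1) * s] for k in range(m)]
    return [1.0 / (2.0 * (s + 1) *
                   (sum(min(abs(v - x) for v in sub) for sub in subsets) / m))
            for x in points]

random.seed(5)
# Cauchy(0,1) sample via the inverse CDF, evaluated on a grid over [-10, 10]
sample = [math.tan(math.pi * (random.random() - 0.5)) for _ in range(8000)]
points = [-10 + 20 * i / 255 for i in range(256)]

# (1) pilot estimate with alpha = 1/3
pilot = mld_de(sample, points, alpha=1/3)
# (2) flag low-density points: within a fifth of the range above the minimum
lo, hi = min(pilot), max(pilot)
flagged = [i for i, p in enumerate(pilot) if p <= lo + (hi - lo) / 5]
# (3) re-estimate the flagged points with alpha = 1/2
refined = pilot[:]
for i, fx in zip(flagged, mld_de(sample, [points[i] for i in flagged], alpha=1/2)):
    refined[i] = fx
```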
\begin{figure}[!h]
\begin{center}
\subfigure[Mean Absolute Error]
{
\includegraphics[width=0.35\textwidth]{mae_vs_x_cauchy_adaptive.pdf}
\label{fig:mae_cauchy_adaptive}}
\subfigure[Bias and Variance Error]
{
\includegraphics[width=0.35\textwidth]{cauchy_std_adaptive.pdf}
\label{fig:std_cauchy_adaptive}}
\subfigure[Adaptive MLD Density]
{
\includegraphics[width=0.35\textwidth]{cauchy_comparison_adaptive.pdf}
\label{fig:comparison_cauchy_adaptive}}
\caption{Cauchy density estimation with the adaptive MLD-DE}
\label{fig:cauchy_adaptive}
\end{center}
\end{figure}
Figure~\ref{fig:cauchy_adaptive} shows the results of this adaptive approach. We see that the
bias has decreased significantly compared to that shown in the earlier plots for the non-adaptive
approach. More sophisticated adaptive strategies can be employed with MLD-DE, owing to its
pointwise nature; however, a discussion of them is beyond the scope of this paper.
\section{Discussion and generalizations \label{sec:conc}}
We have presented a simple, robust and easily parallelizable method for one-dimensional density
estimation. Like nearest-neighbor density estimators,
the method is based on nearest neighbors, but it offers the advantage of providing smoother density estimates,
and it has parallel complexity $\orderof{N^{1/3}}$ per point of estimation.
Its tuning parameter is the number of subsets in which the original sample is divided.
Theoretical results concerning the asymptotic distribution of the estimator were developed and
its MSE was analyzed to determine a globally optimal split of the original sample
into subsets. Numerical experiments illustrate that the method can recover different types of densities,
including the Cauchy density, without the need for special kernels or bandwidth selections.
Based on a heuristic analysis of high bias in low-density regions, an adaptive implementation that
reduces the bias was also presented. Further work
will be focused on more sophisticated adaptive schemes for one-dimensional density estimation and
extensions to higher dimensions. We present here a brief overview of a higher
dimensional extension of MLD-DE. Its generalization is straightforward but its convergence is
usually not better than that of histogram methods. To see why, we consider the bivariate case.
Let $(X,Y)$ be a random vector with PDF $f(x,y)$, and let $h(x,y)$ and $H(x,y)$ be the PDF
and CDF of $(|X|,|Y|)$. It is easy to see that
\(
h(0,0) = 4 f(0,0).
\)
In addition, let $q(t)=H(t,t)$, then
\(
q'(t) = \int_0^t h(t,y)\,dy + \int_0^t h(x,t)\,dx.
\)
It follows that (assuming continuity at $(0,0)$),
\(
q''(0)=\lim_{t\to 0} q'(t)/t = 2\, h(0,0).
\)
Let ${\bf{X}}_1=(X_1,Y_1),\ldots,{\bf{X}}_N=(X_N,Y_N)$ be iid vectors and define $U_i$ to be the product
norm of ${\bf{X}}_i$:
\(
U_i = \|{\bf{X}}_i\|_\otimes = \max\{\,|X_i|,\,|Y_i|\,\},
\)
and $Z = U_{(1)}$. Then
\begin{eqnarray*}
\mathbb{P}(\,Z>t\,) &=& \mathbb{P}( \,\|{\bf{X}}_1\|_\otimes >t,\ldots, \|{\bf{X}}_N\|_\otimes >t\,)=\mathbb{P}( \,\|{\bf{X}}_1\|_\otimes >t\,)^N\\
&=& [\,1-\mathbb{P}(\,\|{\bf{X}}_1\|_\otimes \leq t\,)\,]^N =
[\,1-\mathbb{P}(\,|X_1| \leq t,|Y_1| \leq t\,)\,]^N\\
&=& [\,1-q(\,t\,)\,]^N.
\end{eqnarray*}
Let $Q$ be the inverse of the function $q$. It is easy to check that
\(
Q'(z) = 1/q'(Q(z)).
\)
Proceeding as in the 1D case, we have
\begin{eqnarray*}
\mathbb{E} (\,Z^2\,) &=& 2\!\!\int_0^\infty \hspace{-.3cm}t\,\mathbb{P}(\,Z>t\,)\,dt
= 2\!\!\int_0^\infty\hspace{-.3cm} t\,[\,1-q(\,t\,)\,]^N\,dt
= \int_0^1 (Q^2(z))' \,(1-z)^N\,dz.
\end{eqnarray*}
Therefore
\(
\mathbb{E} [\,(N+1)\,Z^2\,] = \int_0^1 (Q^2(z))' \,\delta_N(z)\,dz,
\)
and by the results in Section \ref{sec:theory},
\[
\lim_{N\to\infty} \mathbb{E}[\,(N+1)\,Z^2\,] = (Q^2(z))'|_{z=0}={1}/{h(0,0)} ={1}/{(4\,f(0,0))}.
\]
Furthermore,
\[
\mathbb{E} [\,(N+1)\,Z^2\,] =
\frac{1}{h(0,0)} + \frac{1}{N+2}\int_0^1 (Q^2(z))'' \,\delta_{N+1}(z)\,dz.
\]
But, unlike in the 1D case, this time we have
\(
\lim_{z\to 0}\, (Q^2(z))'' = {q^{(4)}(0)}/{(3 q''(0))}\neq 0,
\)
and this makes the convergence rates closer to those of histogram methods.
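The limit $(N+1)\,\mathbb{E}[Z^2]\to 1/(4f(0,0))$ can be checked by simulation (our illustration, under the assumption of a uniform density on $[-1,1]^2$, for which $f(0,0)=1/4$, $q(t)=t^2$ on $[0,1]$, and in fact $(N+1)\,\mathbb{E}[Z^2]=1$ for every $N$).

```python
import random

random.seed(4)

def scaled_Z2_mean(N, reps):
    # Monte Carlo estimate of (N+1) * E[Z^2], with Z = min_i max(|X_i|, |Y_i|)
    total = 0.0
    for _ in range(reps):
        z = min(max(abs(random.uniform(-1.0, 1.0)),
                    abs(random.uniform(-1.0, 1.0))) for _ in range(N))
        total += z * z
    return (N + 1) * total / reps

# Uniform on [-1,1]^2: f(0,0) = 1/4, so 1/(4 f(0,0)) = 1.
est = scaled_Z2_mean(N=100, reps=5000)
```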
\smallskip
\noindent{\bf Acknowledgments.}
We thank Y. Marzouk for reviewing our proofs, and
D. Allaire, L. Ng, C. Lieberman and R. Stogner for helpful discussions.
The first and third authors acknowledge the support of the DOE Applied Mathematics Program,
Awards DE-FG02-08ER2585 and DE-SC0009297, as part of the DiaMonD Multifaceted Mathematics Integrated Capability Center.
\bibliographystyle{apalike}
| {
"timestamp": "2014-12-10T02:08:27",
"yymm": "1412",
"arxiv_id": "1412.2851",
"language": "en",
"url": "https://arxiv.org/abs/1412.2851",
"abstract": "We present a local density estimator based on first order statistics. To estimate the density at a point, $x$, the original sample is divided into subsets and the average minimum sample distance to $x$ over all such subsets is used to define the density estimate at $x$. The tuning parameter is thus the number of subsets instead of the typical bandwidth of kernel or histogram-based density estimators. The proposed method is similar to nearest-neighbor density estimators but it provides smoother estimates. We derive the asymptotic distribution of this minimum sample distance statistic to study globally optimal values for the number and size of the subsets. Simulations are used to illustrate and compare the convergence properties of the estimator. The results show that the method provides good estimates of a wide variety of densities without changes of the tuning parameter, and that it offers competitive convergence performance.",
"subjects": "Methodology (stat.ME)",
"title": "Minimum Local Distance Density Estimation"
} |
https://arxiv.org/abs/0710.2357 | Overhang | How far off the edge of the table can we reach by stacking $n$ identical, homogeneous, frictionless blocks of length 1? A classical solution achieves an overhang of $1/2 H_n$, where $H_n \sim \ln n$ is the $n$th harmonic number. This solution is widely believed to be optimal. We show, however, that it is, in fact, exponentially far from optimality by constructing simple $n$-block stacks that achieve an overhang of $c n^{1/3}$, for some constant $c>0$. | \section{Introduction} \label{sec:intro}
How far off the edge of the table can we reach by stacking $n$
identical, homogeneous, frictionless blocks of length~1? A classical
solution achieves an overhang asymptotic to $\frac{1}{2} \ln n$.
This solution is widely believed to be optimal. We show, however,
that it is exponentially far from optimality by constructing simple
$n$-block stacks that achieve an overhang of~$cn^{1/3}$, for some
constant $c>0$.
The problem of stacking a set of objects, such as bricks,
books, or cards, on a tabletop to maximize the
overhang is an attractive problem with
a long history. J. G. Coffin~\cite{C23} posed the problem in the
``Problems and Solutions'' section of this
{\sc Monthly}, but no solution was given there. The problem recurred from
time to time over subsequent
years, e.g., \cite{S53,S54}, \cite{J55}, \cite{E59}. Either
deliberately or inadvertently, these authors all seem to have
introduced the further restriction that there can be at most one
object resting on top of another. Under this restriction, the
\emph{harmonic stacks}, described below, are easily seen to be
optimal.
The classical harmonic stack of size~$n$ is composed of $n$ blocks
stacked one on top of the other, with the $i$th block from the top
extending by $\frac{1}{2i}$ beyond the block below it. (We assume
that the length of each block is~$1$.) The overhang achieved by the
construction is clearly $\frac{1}{2} H_n$, where
$H_n=\sum_{i=1}^n\frac{1}{i}\sim \ln n$ is the $n$th harmonic
number. Both a 3D and a 2D view of the harmonic stack of size~10 are
given in Figure~\ref{fig:h10}. The harmonic stack of size~$n$ is
balanced since, for every $i< n$, the center of mass of the topmost
$i$ blocks lies exactly above the right-hand edge of the $(i+1)$st
block, as can be easily verified by induction. Similarly, the center
of mass of all the $n$ blocks lies exactly above the right edge of
the table. A formal definition of ``balanced'' is given in
Definition~\ref{def:balance}. A perhaps surprising and
counterintuitive consequence of the harmonic stacks construction is
that, given sufficiently many blocks, it is possible to obtain an
arbitrarily large overhang!
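The displacements and overhang of a harmonic stack are easy to tabulate. The following sketch is our own illustration (the function names are not from the paper); it uses exact rational arithmetic so the marginal balance is not obscured by rounding:

```python
from fractions import Fraction

def harmonic_stack(n):
    """Relative displacements of the harmonic stack of size n:
    the i-th block from the top extends 1/(2i) beyond the one below."""
    return [Fraction(1, 2 * i) for i in range(1, n + 1)]

def harmonic_overhang(n):
    """Total overhang: (1/2) * H_n, the sum of the displacements."""
    return sum(harmonic_stack(n))
```

For instance, `harmonic_overhang(4)` is $25/24\approx 1.0417$, noticeably less than Ainley's four-block optimum of about $1.16789$ mentioned below; and since $H_n$ grows only logarithmically, 31 blocks are already needed to push the overhang past 2.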
Harmonic stacks became widely known in the recreational math
community as a result of their appearance in the \emph{Puzzle-Math}
book of Gamow and Stern \cite{GS58} (Building-Blocks, pp.~90--93)
and in Martin Gardner's ``Mathematical Games'' section of the
November 1964 issue of Scientific American~\cite{G64} (see
also~\cite{G71}, Chapter~17: Limits of Infinite Series, p.~167).
Gardner refers to the fact that an arbitrarily large overhang can be
achieved, using sufficiently many blocks, as the
\emph{infinite-offset paradox}.
Harmonic stacks were subsequently used by countless authors as an
introduction to recurrence relations, the harmonic series, and
simple optimization problems; see, e.g.,~\cite{GKP88} pp.~258--260.
Hall~\cite{H05} notes that harmonic stacks started to appear in
textbooks on physics and engineering mechanics as early as the
mid-19th century (see, e.g.,~\cite{M07} p.~341, \cite{P50}
pp.~140--141, \cite{W55} p.~183).
It is perhaps surprising that none of the sources cited above
realizes how limiting the one-on-one restriction, under which the
harmonic stacks are optimal, really is. Without this restriction,
blocks can be used as counterweights to balance other blocks. The problem then
becomes vastly more interesting, and an exponentially larger
overhang can be obtained.
\begin{figure}[t]
\begin{center}
\includegraphics[height=50mm]{06-0595Fig2-1.pdf}\hspace*{1cm}
\includegraphics[height=50mm]{06-0595Fig2-2.pdf}
\caption{Optimal stacks with 3 and 4 blocks compared with the
corresponding harmonic stacks.} \label{fig:opt34}
\end{center}\vspace{-5mm}
\end{figure}
Stacks with a specific small number of blocks that do not satisfy
the one-on-one restriction were considered before by several other
authors. Sutton \cite{S55}, for example, considered the case of
three blocks.
One of us
set a stacking problem with three uniform thin planks of lengths~2,~3,
and~4 for the Archimedeans Problems Drive in 1964 \cite{HP64}.
Ainley~\cite{A79} found the maximum overhang achievable with four
blocks to be $\frac{15-4\sqrt 2}{8}\approx 1.16789$.
The optimal stacks with~3 and~4 blocks are shown, together with the
corresponding harmonic stacks, in Figure~\ref{fig:opt34}.
Very recently, and independently of our work, Hall \cite{H05}
explicitly raises the problem of finding stacks of blocks that
maximize the overhang without the one-on-one restriction. (Hall
calls such stacks \emph{multiwide stacks}.) Hall gives a sequence of
stacks which he claims, without proof, to be optimal. We show,
however, that the stacks suggested by him are optimal only for $n\leqslant
19$. The stacks claimed by Hall to be optimal fall into a natural
class that we call \emph{spinal stacks}. We show in
Section~\ref{sec:spinal} that the maximum overhang achievable using
such stacks is only $\ln n+O(1)$. Thus, although spinal stacks
achieve, asymptotically, an overhang which is roughly twice the
overhang achieved by harmonic stacks, they are still exponentially
far from being optimal.
Optimal stacks with up to 19 blocks are shown in
Figures~\ref{fig:2-10} and~\ref{fig:11-19}. The lightly shaded blocks in
these stacks form the \emph{support set}, while the darker blocks
form the \emph{balancing set}. The \emph{principal block} of a stack
is defined to be the block which achieves the maximum overhang. (If
several blocks achieve the maximum overhang, the lowest one is
chosen.) The \emph{support set} of a stack is defined recursively as
follows: the principal block is in the support set, and if a block
is in the support set then any block on which this block rests is
also in the support set. The \emph{balancing set} consists of all
the blocks that do not belong to the support set. A stack is said to
be \emph{spinal} if its support set has a single block in each
level, up to the level of the principal block. All the stacks shown
in Figures~\ref{fig:2-10} and~\ref{fig:11-19} are thus spinal.
\begin{figure*}[t]
\centerline{
\includegraphics[height=3cm]{06-0595Fig3-1.pdf}\hspace*{5mm}
\includegraphics[height=3cm]{06-0595Fig3-2.pdf}\hspace*{5mm}
\includegraphics[height=3cm]{06-0595Fig3-3.pdf}\hspace*{5mm}
}
\centerline{
\includegraphics[height=3cm]{06-0595Fig3-4.pdf}\hspace*{5mm}
\includegraphics[height=3cm]{06-0595Fig3-5.pdf}\hspace*{5mm}
\includegraphics[height=3cm]{06-0595Fig3-6.pdf}\hspace*{5mm}
}
\vspace*{0.5cm}
\centerline{
\includegraphics[height=3cm]{06-0595Fig3-7.pdf} \hspace*{3mm}
\includegraphics[height=3cm]{06-0595Fig3-8.pdf} \hspace*{3mm}
\includegraphics[height=3cm]{06-0595Fig3-9.pdf} \hspace*{3mm}
}
\vspace*{0.5cm}
\caption{Optimal stacks with 2 up to 10 blocks.} \label{fig:2-10}
\end{figure*}
\begin{figure*}[t]
\centerline{
\includegraphics[height=4cm]{06-0595Fig4-1.pdf}\hspace*{5mm}
\includegraphics[height=4cm]{06-0595Fig4-2.pdf}\hspace*{5mm}
\includegraphics[height=4cm]{06-0595Fig4-3.pdf}\hspace*{5mm}
}
\centerline{
\includegraphics[height=4cm]{06-0595Fig4-4.pdf}\hspace*{5mm}
\includegraphics[height=4cm]{06-0595Fig4-5.pdf}\hspace*{5mm}
\includegraphics[height=4cm]{06-0595Fig4-6.pdf}\hspace*{5mm}
}
\vspace*{0.7cm}
\centerline{
\includegraphics[height=4cm]{06-0595Fig4-7.pdf} \hspace*{3mm}
\includegraphics[height=4cm]{06-0595Fig4-8.pdf} \hspace*{3mm}
\includegraphics[height=4cm]{06-0595Fig4-9.pdf} \hspace*{3mm}
}
\vspace*{0.5cm}
\caption{Optimal stacks with 11 up to 19 blocks.} \label{fig:11-19}
\end{figure*}
It is very tempting to conclude, as done by Hall \cite{H05}, that
the optimal stacks are spinal. Surprisingly, the optimal stacks for
$n\geqslant 20$ are not spinal! Optimal stacks containing 20 and 30 blocks
are shown in Figure~\ref{fig:20-30}. Note that the right-hand contours of
these stacks are not monotone, which is somewhat counterintuitive.
For all $n\leqslant 30$, we have searched exhaustively through all
combinatorially distinct arrangements of $n$ blocks and found
optimal displacements numerically for each of these. The resulting
stacks, for $2\leqslant n\leqslant 19$, are shown in Figures~\ref{fig:2-10}
and~\ref{fig:11-19}. Optimal stacks with 20 and 30 blocks are shown
in Figure~\ref{fig:20-30}. We are confident of their optimality,
though we have no formal optimality proofs, as numerical techniques
were used.
While there seems to be a unique optimal placement of the blocks
that belong to the support set of an optimal stack, there is usually
a lot of freedom in the placement of the balancing blocks. Optimal
stacks seem not to be unique for $n\geqslant 4$.
\begin{figure}[t]
\centerline{
\includegraphics[height=6.25cm]{06-0595Fig5-1.pdf} \hspace*{3mm}
\includegraphics[height=6.25cm]{06-0595Fig5-2.pdf}
} \vspace*{0.5cm}
\caption{Optimal stacks with 20 and 30 blocks.} \label{fig:20-30}
\end{figure}
In view of the non-uniqueness and added complications caused
by balancing blocks, it is
natural to consider \emph{loaded stacks}, which consist only of a
support set with some \emph{external forces} (or \emph{point
weights}) attached to some of their blocks.
We will take the weight of each block to be~$1$; the size, or weight, of
a loaded stack is defined to be the number of blocks contained in it
plus the sum of all the point weights attached to it. The point
weights are not required to be integral. Loaded stacks of weight
$40,60,80,$ and $100$, which are believed to be close to optimal, are
shown in Figure~\ref{fig:p40-100}. The stack of weight~$100$, for
example, contains $49$ blocks in its support set. The sum of all the
external forces applied to these blocks is $51$. As can be seen, the
stacks become more and more non-spinal. It is also interesting to
note that the stacks of Figure~\ref{fig:p40-100} contain small gaps
that seem to occur at irregular positions. (There is also a scarcely
visible gap between the two blocks at the second level of the
20-block stack of Figure~\ref{fig:20-30}.)
\begin{figure*}[t]
\centerline{
\includegraphics[height=6.25cm]{06-0595Fig6-1.pdf}\hspace*{3mm}
\includegraphics[height=6.25cm]{06-0595Fig6-2.pdf}
} \vspace*{0.3cm}
\centerline{
\includegraphics[height=6.25cm]{06-0595Fig6-3.pdf}\hspace*{3mm}
\includegraphics[height=6.25cm]{06-0595Fig6-4.pdf}
}
\vspace*{0.4cm}
\caption{Loaded stacks, believed to be close to optimal, of weight
$40,60,80,$ and $100$.}
\label{fig:p40-100}
\end{figure*}
That harmonic stacks are balanced can be verified using simple
center-of-mass considerations. These considerations, however, are
not enough to verify the balance of more complicated stacks,
such as those in
Figures~\ref{fig:2-10}, \ref{fig:11-19}, \ref{fig:20-30},
and~\ref{fig:p40-100}. A formal mathematical definition of ``balanced''
is given in the next section. Briefly, a stack is said to be balanced
if there is an appropriate set of forces acting between the blocks
of the stacks, and between the blocks at the lowest level and the
table, under which all blocks are \emph{in equilibrium}. A block is
in equilibrium if the sum of the forces and the sum of the moments
acting upon it are both 0. As shown in the next section, the
balance of a given stack can be determined by checking whether a
given set of \emph{linear inequalities} has a feasible solution.
Given the fact that the 3-block stack that achieves the maximum
overhang is an \emph{inverted 2-triangle} (see
Figure~\ref{fig:opt34}), it is natural to enquire whether larger
inverted triangles are also balanced. Unfortunately, the next inverted
triangle is already unbalanced and would collapse in the way
indicated in Figure~\ref{fig:p23}. Inverted triangles show that
simple center-of-mass considerations are not enough to determine the
balance of stacks. As an indication that balance issues are not
always intuitive, we note that inverted triangles are falsely
claimed by Jargodzki and Potter~\cite{JP01} (Challenge~271: A
staircase to infinity, p.~246) to be balanced.
Another appealing structure, the \emph{$m$-diamond}, illustrated for
$m=4$ and~$5$ in Figure~\ref{fig:d45}, consists of a symmetric
diamond shape with rows of length $1,2,\ldots,m-1,m,m-1,\ldots,2,1$.
Small diamonds were considered by Drummond \cite{D81}. The
$m$-diamond uses~$m^2$ blocks and would give an overhang of $m/2$,
but unfortunately it is unbalanced for $m\geqslant 5$. A $5$-diamond
would collapse in the way indicated in the figure. An $m$-diamond
could be made balanced by adding a column of sufficiently many
blocks resting on the top block. The methodology introduced in
Section~\ref{sec:spinal} can be used to show that, for $m\geqslant
5$, a column of at least $2^m-m^2-1$ blocks would be needed. We can
show that this number of blocks is also sufficient, giving a stack
of $2^m-1$ blocks with an overhang of $m/2$. It is interesting to
note that these stacks are already better than the classical
harmonic stacks, as with $n=2^m-1$ blocks they give an overhang of
$\frac{1}{2}\log_2(n+1)\simeq 0.72\ln n$.
Determining the \emph{exact} overhang achievable using $n$ blocks,
for large values of~$n$, seems to be a formidable task. Our main
goal in this paper is to determine the \emph{asymptotic growth} of
this quantity. Our main result is that there exists a constant $c>0$
such that an overhang of $cn^{1/3}$ is achievable using $n$ blocks.
Note that this is an exponential improvement over the
$\frac{1}{2}\ln n+O(1)$ overhang of harmonic stacks and the $\ln
n+O(1)$ overhang of the best spinal stacks! In a subsequent paper
\cite{PPTWZ07}, with three additional coauthors, we show that our
improved stacks are asymptotically optimal, i.e., there exists a
constant $C>0$ such that the overhang achievable using $n$ blocks is
at most $Cn^{1/3}$.
Our stacks that achieve an asymptotic overhang of $cn^{1/3}$, for
some $c>0$, are quite simple. We construct an explicit sequence of
stacks, called \emph{parabolic stacks}, with the $r$th stack
in the sequence containing about $2r^3/3$ blocks and achieving an
overhang of $r/2$. One stack in this sequence is shown in
Figure~\ref{fig:6stack}. The balance of the parabolic stacks is
established using an inductive argument.
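Taking the leading constants above at face value (the $r$th parabolic stack uses about $2r^3/3$ blocks and overhangs by $r/2$), the three regimes can be compared with a rough back-of-the-envelope sketch. The function names, the dropped $O(1)$ terms, and the crude inversion $r\approx(3n/2)^{1/3}$ are our own simplifications, not the paper's:

```python
import math

def harmonic_overhang(n):
    # exact harmonic overhang (1/2) H_n ~ (1/2) ln n
    return 0.5 * sum(1 / i for i in range(1, n + 1))

def spinal_overhang_estimate(n):
    # best spinal stacks achieve ln n + O(1); we drop the O(1) term
    return math.log(n)

def parabolic_overhang_estimate(n):
    # invert n ~ 2 r^3 / 3 to get r, then take overhang r/2
    return 0.5 * (1.5 * n) ** (1 / 3)

for n in (100, 10_000, 1_000_000):
    print(n, harmonic_overhang(n), spinal_overhang_estimate(n),
          parabolic_overhang_estimate(n))
```

With $n=144=2\cdot 6^3/3$ the estimate returns the overhang $3$ of a parabolic $6$-stack; by $n=10^6$ the parabolic estimate (about $57$) dwarfs the logarithmic ones (about $7$ and $14$). Note the estimate is conservative: the actual 111-block parabolic stack of Figure~\ref{fig:6stack} already achieves overhang $3$.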
\begin{figure}[t]
\begin{center}
\includegraphics[height=20mm]{06-0595Fig7-1.pdf}\hspace*{1cm}
\includegraphics[height=20mm]{06-0595Fig7-2.pdf}
\caption{The balanced inverted 2-triangle and the unbalanced
inverted 3-triangle.} \label{fig:p23}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[height=30mm]{06-0595Fig8-1.pdf}\hspace*{1cm}
\includegraphics[height=30mm]{06-0595Fig8-2.pdf}
\caption{The balanced 4-diamond and the unbalanced 5-diamond.}
\label{fig:d45}
\end{center}
\end{figure}
\begin{figure}[t]
\vspace*{0.25cm}
\begin{center}
\includegraphics[width=90mm]{06-0595Fig9.pdf}
\caption{A parabolic stack consisting of 111 blocks and
giving an overhang of $3$.} \label{fig:6stack}
\end{center}\vspace{-5mm}
\end{figure}
The remainder of this paper is organized as follows. In the next
section we give formal definitions of all the notions used in this
paper. In Section~\ref{sec:spinal} we analyze \emph{spinal stacks}.
In Section~\ref{sec:parabolic}, which contains our main results, we
introduce and analyze our parabolic stacks. In
Section~\ref{sec:general} we describe some experimental results with
stacks that seem to improve, by a constant factor, the overhang
achieved by parabolic stacks. We end in Section~\ref{sec:conc} with
some open problems.
\section{Stacks and their balance} \label{sec:prelim}
As the maximum overhang problem is physical in nature, our first
task is to formulate it mathematically. We consider a 2-dimensional
version of the problem. This version captures essentially all the
interesting features of the overhang problem.
A \emph{block} is a rectangle of length~$1$ and height~$h$ with
uniform density and unit weight. (We shall see shortly that the
height~$h$ is unimportant.) We assume that the table occupies the
quadrant $x,y\leqslant 0$ of the 2-dimensional plane.
A \emph{stack} is a collection of blocks.
We consider only \emph{orthogonal} stacks in which the sides of the
blocks are parallel to the axes, with the length of each block
parallel to the $x$-axis. The position of a block is then determined
by the coordinate $(x,y)$ of its lower left corner. Such a block
occupies the box $[x,x+1]\times[y,y+h]$. A stack composed
of~$n$ blocks is specified by the sequence
$(x_1,y_1),\ldots,(x_n,y_n)$ of the coordinates of the lower left
corners of its blocks. We require each $y_i$ to be a nonnegative
integral multiple of~$h$, the height of the blocks. Blocks can touch
each other but are not allowed to overlap. The \emph{overhang} of
the stack is $1+\max_{i=1}^n x_i$.
A block at position $(x_1,y_1)$ \emph{rests on} a block in position
$(x_2,y_2)$ if $|x_1-x_2| < 1$ and $y_1-y_2=h$. The
\emph{interval of contact} between the two blocks is then
$[\max\{x_1,x_2\},1+\min\{x_1,x_2\}]\times \{y_1\}$. A block placed
at position $(x,0)$ \emph{rests on the table} if $x < 0$. The
interval of contact between the block and the table is
$[x,\min\{x+1,0\}]\times\{0\}$.
When block~$A$ rests on block~$B$, the two blocks may exert a
(possibly infinitesimal) force on each other at every point along
their interval of contact. A force is a vector acting at a specified
point. By Newton's third law, forces come in opposing pairs. If a
force~$f$ is exerted on block~$A$ by block~$B$, at $(x,y)$, then a
force~$-f$ is exerted on block~$B$ by block~$A$, again at $(x,y)$.
We assume that edges of all the blocks are completely smooth, so
that there is no \emph{friction} between them. All the forces
exerted on block~$A$ by block~$B$, and vice versa, are therefore
\emph{vertical} forces. Furthermore, as there is nothing that holds
the blocks together, blocks $A$ and $B$ can \emph{push}, but not
pull, one another. Thus, if block~$A$ rests on block~$B$, then all
the forces applied on block~$A$ by block~$B$ point upward, while all
the forces applied on block~$B$ by block~$A$ point downward, as
shown on the left in Figure~\ref{fig:forces}. Similar forces
are exerted between the table and the blocks that rest on it.
\begin{figure*}[t]
\begin{center}
\includegraphics[height=35mm]{06-0595Fig10-1.pdf}\hspace*{1cm}
\includegraphics[height=35mm]{06-0595Fig10-2.pdf}\hspace*{1cm}
\includegraphics[height=35mm]{06-0595Fig10-3.pdf}
\caption{Equivalent sets of forces acting between two blocks.}
\vspace*{-0.2cm} \label{fig:forces}
\end{center}
\end{figure*}
The distribution of forces acting between two blocks may be hard to
describe explicitly. Since all these forces point in the same
direction, they can always be replaced by a single \emph{resultant}
force acting at some point within their interval of contact, as
shown in the middle drawing of Figure~\ref{fig:forces}. As an
alternative, they may be replaced by two resultant forces that act
at the endpoints of the contact interval, as shown on the right in
Figure~\ref{fig:forces}. Forces acting between blocks and between
the blocks and the table are said to be \emph{internal} forces.
Each block is also subjected to a downward \emph{gravitational
force} of unit size, acting at its center of mass. As the blocks are
assumed to be of uniform density, the center of mass of a block
whose lower left corner is at $(x,y)$ is at
$(x+\frac{1}{2},y+\frac{h}{2})$.
A rigid body is said to be in \emph{equilibrium} if the sum of the
forces acting on it, and the sum of the \emph{moments} they apply on
it, are both zero. A 2-dimensional rigid body acted upon by~$k$
vertical forces $f_1,f_2,\ldots,f_k$ at $(x_1,y_1),\ldots,(x_k,y_k)$
is in equilibrium if and only if $\sum_{i=1}^k {f}_i=0$ and
$\sum_{i=1}^k x_i f_i=0$. (Note that $f_1,f_2,\ldots,f_k$ are
\emph{scalars} that represent the magnitudes of vertical forces.)
A collection of internal forces acting between the blocks of a
stack, and between the blocks and the table, is said to be a
\emph{balancing} set of forces if the forces in this collection
satisfy the requirements mentioned above (i.e., all the forces are
vertical, they come in opposite pairs, and they act only between
blocks that rest on each other) and if, taking into account the
gravitational forces acting on the blocks, all the blocks are in
equilibrium under this collection of forces. We are now ready for a
formal definition of balance.
\begin{definition}[Balance]\label{def:balance}
A stack of blocks is \emph{balanced} if and only if it admits a
balancing set of forces.
\end{definition}
Static balance problems of the kind considered here are often
\emph{under-determined}, so that the resultants of balancing forces
acting between the blocks are usually not uniquely determined. It
was the consideration by one of us of balance issues that
arise in the game of \emph{Jenga}~\cite{Z02} which stimulated this
current work. The following theorem shows that the balance of a
given stack can be checked efficiently.
\begin{theorem} \label{thm:LP}
The balance of a stack containing $n$ blocks can be decided by
checking the feasibility of a collection of linear equations and
inequalities with $O(n)$ variables and constraints.
\end{theorem}
\begin{proof}
Let $(x_1,y_1),\ldots,(x_n,y_n)$ be the coordinates of the lower
left corners of the blocks in the stack. Let $B_i$, for $1\leqslant i\leqslant
n$, denote the $i$th block of the stack, and let $B_0$ denote
the table. Let $B_i/B_j$, where $0\leqslant i,j\leqslant n$, signify that $B_i$
rests on~$B_j$. If $B_i/B_j$, we let $a_{ij}=\max\{x_i,x_j\}$ and
$b_{ij}=\min\{x_i,x_j\}+1$ be the $x$-coordinates of the endpoints
of the interval of contact between blocks~$i$ and~$j$. (If $j=0$,
then $a_{i0}=x_i$ and $b_{i0}=\min\{x_i+1,0\}$.)
For all $i$ and $j$ such that $B_i/B_j$, we introduce two
variables $f^0_{ij}$ and $f^1_{ij}$ that represent the resultant
forces that act between $B_i$ and $B_j$ at $a_{ij}$ and $b_{ij}$. By
Definition~\ref{def:balance} and the discussion preceding it, the
stack is balanced if and only if there is a feasible solution to the
following set of linear equalities and inequalities:
$$\begin{array}{cl} \displaystyle\sum_{j\,:\,B_i/B_j} (f^0_{ij} + f^1_{ij}) \; -
\sum_{k\,:\, B_k/B_i} (f^0_{ki} + f^1_{ki}) \;=\; 1 \ , & {\rm for} \ 1\leqslant i\leqslant n ;\\
\displaystyle\sum_{j\,:\,B_i/B_j} (a_{ij}f^0_{ij} + b_{ij}f^1_{ij})
\; - \sum_{k\,:\, B_k/B_i} (a_{ki}f^0_{ki} + b_{ki}f^1_{ki}) \;=\;
x_i+\frac{1}{2} \ , & {\rm for} \ 1\leqslant i\leqslant n ;\\[17pt]
\displaystyle f^0_{ij},f^1_{ij}\;\geqslant\; 0 \ , & \mbox{for $i,j$
such that $B_i / B_j$.}
\end{array}$$
The first $2n$ equations require the forces applied on the blocks to
exactly cancel the forces and moments exerted on the blocks by
the gravitational forces. (Note that the table is not required to be
in equilibrium.) The inequalities $f^0_{ij},f^1_{ij}\geqslant 0$, for
every $i$ and $j$ such that~$B_i/B_j$, require the forces applied on
$B_i$ by $B_j$ to point upward. As a unit length block can rest on
at most two other unit length blocks,
the number of variables is at most~$4n$ and the number of
constraints is therefore at most~$6n$. The feasibility of such a
system of linear equations and inequalities can be checked using
\emph{linear programming} techniques. (See, e.g., Schrijver
\cite{S98}.)
\end{proof}
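In the special case of a spinal chain, where each block rests on exactly one block (or the table), the system in the proof decouples: the two endpoint resultants beneath each block are determined uniquely by its two equilibrium equations, working top-down, and balance reduces to checking their signs. The following sketch of this special case is our own illustration; a general stack needs an LP feasibility check as in the theorem:

```python
def chain_balanced(xs, eps=1e-9):
    """Balance check for a spinal chain of unit-length blocks.

    xs lists lower-left x-coordinates from top to bottom; block i rests
    on block i+1, and the last block rests on the table (x <= 0).
    Returns True iff the unique endpoint resultants are all nonnegative.
    """
    n = len(xs)
    loads = []  # downward (position, force) pairs acting on current block
    for i, x in enumerate(xs):
        loads.append((x + 0.5, 1.0))           # gravity at the center
        if i < n - 1:                          # contact with the next block
            a, b = max(x, xs[i + 1]), min(x, xs[i + 1]) + 1.0
        else:                                  # contact with the table
            a, b = x, min(x + 1.0, 0.0)
        w = sum(f for _, f in loads)           # total downward force
        m = sum(p * f for p, f in loads)       # total moment about x = 0
        # Solve f0 + f1 = w and a*f0 + b*f1 = m for the two resultants.
        f1 = (m - a * w) / (b - a)
        f0 = w - f1
        if f0 < -eps or f1 < -eps:
            return False                       # a "pulling" force is needed
        loads = [(a, f0), (b, f1)]             # pressed down on next block
    return True
```

For example, the harmonic stacks are exactly balanced (`chain_balanced([-1/12, -7/12, -5/6])` holds for the 3-block harmonic stack), while shifting all blocks slightly to the right drives the outer table resultant negative.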
\begin{definition}[Maximum overhang, the function $D(n)$] The
maximum overhang that can be achieved using a balanced stack
comprising $n$ blocks of length~1 is denoted by $D(n)$.
\end{definition}
We now repeat the definitions of the principal block, the support
set, and the balancing set sketched in the introduction.
\begin{definitions}[Principal block, support set, balancing set]
The block of a stack that achieves the maximum overhang is the
\emph{principal block} of the stack. If several blocks achieve the
maximum overhang, the lowest one is chosen. The \emph{support set}
of a stack is defined recursively as follows: the principal block is
in the support set, and if a block is in the support set then any
block on which this block rests is also in the support set. The
\emph{balancing set} consists of any blocks that do not belong to
the support set.
\end{definitions}
The blocks of the support sets of the stacks in
Figures~\ref{fig:2-10}, \ref{fig:11-19}, and \ref{fig:20-30} are
shown in light gray while the blocks in the balancing sets are shown
in dark gray. The purpose of blocks in the support set, as its name
indicates, is to support the principal block. The blocks in the
balancing set, on the other hand, are used to counterbalance the
blocks in the support set.
As already mentioned, there is usually a lot of freedom in the
placement of the blocks of the balancing set. To concentrate on the
more important issue of where to place the blocks of the support set,
it is useful to introduce the notion of loaded stacks.
\begin{definitions}[Loaded stacks, the function $D^*(w)$]
A \emph{loaded stack} consists of a set of blocks with some
\emph{point weights} attached to them. The \emph{weight} of a loaded
stack is the sum of the weights of all the blocks and point weights
that participate in it, where the weight of each block is taken to
be~$1$. A loaded stack is said to be balanced if it admits a
balancing set of forces, as for unloaded stacks, but now also taking
into account the point weights. The maximum overhang that can be
achieved using a balanced loaded stack of weight~$w$ is denoted
by~$D^*(w)$.
\end{definitions}
Clearly $D^*(n)\geqslant D(n)$, as a standard stack is a trivially loaded
stack with no point weights. When drawing loaded stacks, as in
Figure~\ref{fig:p40-100}, we depict point weights as external forces
acting on the blocks of the stack, with the length of the arrow
representing the force proportional to the weight of the point
weight.
(Since forces can be transmitted vertically downwards through any block,
we may assume that point weights are applied only to upper edges
of blocks outside any interval of contact.)
As the next lemma shows, balancing blocks can always be replaced by
point weights, yielding loaded stacks in which all blocks belong to
the support set.
\begin{lemma} For every balanced stack that contains $k$ blocks in its support
set and $n-k$ blocks in its balancing set, there is a balanced loaded
stack composed of $k$ blocks, all in the support set, and additional
point weights of total weight $n-k$ that achieves the same overhang.
\end{lemma}
\begin{proof}
Consider the set of forces exerted on the support set of the stack
by the set of balancing blocks. From the definition of the support
set, no block of the support set can rest on any balancing block,
therefore the effect of the balancing set can be represented by a
set of \emph{downward} vertical forces on the support set, or
equivalently by a finite set of point weights attached to the
support set with the same total weight as the set of balancing
blocks.
\end{proof}
\begin{figure*}[t]
\begin{center}
\includegraphics[height=36mm]{06-0595Fig11-1.pdf} \hspace*{7mm}
{\includegraphics[height=36mm]{06-0595Fig11-2.pdf}} \caption{Optimal
loaded stacks of weight 3 and 5.} \label{fig:l3}
\end{center}
\end{figure*}
Given a loaded stack of integral weight, it is in many cases
possible to replace the set of point weights by a set of
appropriately placed balancing blocks. In some cases, however, such
a conversion of a loaded stack into a standard stack is not
possible. The optimal loaded stacks of weight~3,~5, and~7
cannot be converted into standard stacks without decreasing
the overhang, as the number of point weights needed is larger than
the number of blocks remaining. (The cases of weights~3 and~5 are
shown in Figure~\ref{fig:l3}.) In particular, we get that
$D^*(3)=\frac{11-2\sqrt{6}}{6}>D(3)=1$. Experiments with optimal
loaded stacks lead us, however, to conjecture that the difference
$D^*(n)-D(n)$ tends to~$0$ as $n$ tends to infinity.
\begin{conjecture}\label{con:D}
$D(n)= D^*(n) - o(1)$.
\end{conjecture}
\section{Spinal stacks} \label{sec:spinal}
In this section we focus on a restricted, but quite natural, class
of stacks which admits a fairly simple analysis.
\begin{definitions}[Spinal stacks, spine]
A stack is \emph{spinal} if its support set has just a single block
at each level. The support set of a spinal stack is referred to as
its \emph{spine}.
\end{definitions}
The optimal stacks with up to 19 blocks, depicted in
Figures~\ref{fig:2-10} and~\ref{fig:11-19}, are spinal. The stacks
of Figure~\ref{fig:20-30} are \emph{not} spinal.
A stack is said to be \emph{monotone} if the $x$-coordinates of the
rightmost blocks in the various levels, starting from the bottom,
form an increasing sequence. It is easy to see that every monotone
stack is spinal.
\begin{definitions}[The functions $S(n)$, $S^*(w)$ and $S^*_k(w)$]
Let $S(n)$ be the maximum overhang achievable using a spinal stack
of size~$n$.
Similarly, let $S^*(w)$ be the maximum overhang achievable using a
loaded spinal stack of weight~$w$, and let $S^*_k(w)$ be the maximum
overhang achievable using a spinal stack of weight~$w$ with
exactly~$k$ blocks in its spine.
\end{definitions}
\begin{figure*}[t]
\begin{center}
\includegraphics[height=80mm]{06-0595Fig12.pdf}
\caption{A generic loaded spinal stack.} \label{fig:spinal}
\end{center}
\end{figure*}
It is tempting to make the (false) assumption that optimal stacks
are spinal. (As mentioned in the introduction, this assumption is
implicit in \cite{H05}.) The assumption holds, however, only for
$n\leqslant 19$. (See the discussion following Theorem~\ref{thm:lower}.)
As spinal stacks form a very natural class of stacks, it
is still interesting to investigate the maximum overhang achievable
using stacks of this class.
A generic loaded spinal stack with $k$ blocks in its spine is shown
in Figure~\ref{fig:spinal}. We denote the blocks from top to bottom
as $B_1,B_2, \ldots ,B_k$, with $B_1$ being the principal block. We
regard the tabletop as $B_{k+1}$. For $1\leqslant i\leqslant k$, the
weight attached to the left edge of~$B_i$ is denoted by~$w_i$, and
the relative overhang of~$B_i$ beyond~$B_{i+1}$ is denoted by~$d_i$.
We define $t_i = \sum_{j=1}^i (1+w_j)$, the total downward force
exerted upon $B_{i+1}$ by block $B_i$. We also define $t_0=0$. Note
that $t_i=t_{i-1}+w_i+1$, for $1\leqslant i\leqslant k$, and that
$t_k=w=k+\sum_{i=1}^k w_i$, the total weight of the loaded stack.
The assumptions made in Figure~\ref{fig:spinal}, that each
block is supported by a force that acts along the right-hand edge of the
block underneath it and that all point weights are attached to the
left-hand ends of blocks, are justified by the following lemma.
\begin{lemma}\label{lem:L1} In an optimal loaded spinal stack:
$(i)$ Each block is supported by a force acting along the right-hand edge
of the block underneath it. In particular, the stack is monotone.
$(ii)$ All point weights are attached to the left-hand ends of
blocks.
\end{lemma}
\begin{proof} For~$(i)$,
suppose there were some block $B_{i+1}$ ($1\leqslant i\leqslant k$)
where the resultant force exerted on it from $B_i$ does not go
through its right-hand end. If $i<k$ then $B_{i+1}$ could be shifted
some distance to the left and $B_i$ together with all the blocks
above it shifted to the right in such a way that the resultant force
from $B_{i+1}$ on $B_{i+2}$ remains unchanged in position and the
stack is still balanced. In the case of $i=k$ (where $B_{k+1}$ is the
tabletop), the whole stack could be moved to the right. The result
of any such change is a balanced spinal stack with an increased
overhang, a contradiction. As an immediate consequence, we get that
optimal spinal stacks are monotone.
For~$(ii)$, suppose that some block has weights attached other than
at its left-hand end. We may replace all such weights by the same
total weight concentrated at the left end. The result will be to
move the resultant force transmitted to any lower block somewhat to
the left. Since the stack is monotone, this change cannot unbalance
the stack, and indeed would then allow the overhang to be increased
by slightly shifting all blocks to the right; again a contradiction.
\end{proof}
We next note that for any nonnegative point weights
$w_1,w_2,\ldots,w_k\geqslant 0$, there are appropriate positive
displacements $d_1,d_2,\ldots,d_k> 0$ for which the generic spinal
stack of Figure~\ref{fig:spinal} is balanced.
\begin{lemma}\label{lem:L2} A loaded spinal stack with $k$ blocks in its spine that
satisfies the two conditions of Lemma~\ref{lem:L1} is balanced if and only
if $d_i = \frac{w_i+\frac{1}{2}}{t_i} = 1-
\frac{t_{i-1}+\frac{1}{2}}{t_i} $, for $1\leqslant i\leqslant k$.
\end{lemma}
\begin{proof} The lemma is verified using a simple calculation. The
net downward force acting on~$B_i$ is $(w_i+t_{i-1}+1)-t_i=0$, by
the definition of~$t_i$. (Recall that $t_i = \sum_{j=1}^i (1+w_j)$.)
The net moment acting on~$B_i$, computed relative to the right-hand
edge of~$B_i$, is $d_it_i-(\frac{1}{2}+w_i)$, which vanishes if and
only if $d_i=\frac{\frac{1}{2}+w_i}{t_i} = 1-
\frac{t_{i-1}+\frac{1}{2}}{t_i} $, as required.
\end{proof}
Note, in particular, that if $w_i=0$ for $1\leqslant i\leqslant k$, then
$t_i=i$ and $d_i=\frac{1}{2i}$, and we are back to the classic
harmonic stacks.
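As an illustrative numerical check of Lemma~\ref{lem:L2} (a Python sketch, not part of the formal argument; the function name is ours):

```python
def spinal_displacements(weights):
    """Displacements d_i of a balanced loaded spinal stack (Lemma L2):
    d_i = (w_i + 1/2) / t_i, where t_i = sum_{j<=i} (1 + w_j) is the
    total weight of blocks B_1..B_i together with their point loads."""
    t = 0.0
    ds = []
    for w in weights:
        t += 1 + w              # running total load t_i
        ds.append((w + 0.5) / t)
    return ds

# Unloaded case (all w_i = 0): d_i = 1/(2i), the classic harmonic stack.
d = spinal_displacements([0.0] * 100)
overhang = sum(d)               # H_100 / 2, approximately 2.5937
```

For $k=100$ unweighted blocks this yields the overhang $H_{100}/2\simeq 2.5937$ of the 100-block harmonic stack.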
We can now also justify the claim made in the introduction
concerning the instability of diamond stacks. Consider the spine of
an $m$-diamond. In this case, $d_i=\frac12$ for all~$i$ and so the
balance conditions give the equations $t_i= 2t_{i-1}+1$ for
$1\leqslant i\leqslant m$. As $t_0 = 0$, we have $t_i\geqslant 2^i
-1$ for all~$i$ and hence $t_m\geqslant 2^m -1$. Since $t_m$ is the
total weight of the stack, the number of extra blocks required to be
added for stability is at least $2^m -1-m^2$, which is positive for
$m\geqslant 5$.
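This count is easy to check mechanically; the following sketch (our own illustration, taking the $m$-diamond to consist of $m^2$ blocks as in the argument above) iterates the balance recurrence:

```python
def diamond_spine_load(m):
    """Total weight t_m of the spine of an m-diamond, obtained from the
    balance conditions t_i = 2 t_{i-1} + 1 (forced by d_i = 1/2),
    starting from t_0 = 0; this gives t_m = 2^m - 1."""
    t = 0
    for _ in range(m):
        t = 2 * t + 1
    return t

# Extra blocks needed beyond the m*m blocks of the m-diamond itself:
deficit = {m: diamond_spine_load(m) - m * m for m in range(1, 11)}
```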
Next, we characterize the choice of the weights
$w_1,w_2,\ldots,w_k$, or alternatively of the total loads
$t_1,t_2,\ldots,t_k$, that maximizes the overhang achieved by a
spinal stack of total weight~$w$. (Note that $w_i=t_i-t_{i-1}-1$,
for $1\leqslant i\leqslant k$.)
\begin{lemma} \label{lem:L3}
If a loaded spinal stack with total weight $w$ and with $k$ blocks
in its spine achieves the maximal overhang of $S^*_k(w)$, then for
some $j$ $(1\leqslant j\leqslant k)$ we have $t_i^2=(t_{i-1}+\frac{1}{2})t_{i+1}$, for
$1\leqslant i<j$, and $w_i=0$, for $j< i\leqslant k$.
\end{lemma}
\begin{proof} Let $w_1,w_2,\ldots,w_k$ be the point weights attached
to the blocks of an optimal spinal stack with overhang $S^*_k(w)$.
For some $i$ satisfying $1\leqslant i<k$ and a small $x$, consider the stack obtained by
increasing the point weight at the left-hand end of block~$B_i$ from $w_i$
to $w_i+x$, and decreasing the point weight on~$B_{i+1}$ from $w_{i+1}$
to $w_{i+1}-x$, assuming
that $w_{i+1}\geqslant x$. Note that this small perturbation does not
change the total weight of the stack.
The overhang of the perturbed stack is
$$V(x) \;=\; \left(1-\frac{t_{i-1}+\frac12}{t_i+x}\right) +
\frac{w_{i+1}-x+\frac12}{t_{i+1}} + \sum_{j\ne i,i+1}
\frac{w_j+\frac12}{t_j}\;.$$
The first two terms in the expression
above are the new displacements $d_i(x)$ and $d_{i+1}(x)$. Note that
all other displacements are unchanged. Differentiating $V(x)$ we get
$$V'(x) \;=\; \frac{t_{i-1}+\frac12}{(t_i+x)^2} -
\frac{1}{t_{i+1}}\quad{\rm\ and}\quad V'(0) \;=\;
\frac{t_{i-1}+\frac12}{t_i^2} - \frac{1}{t_{i+1}}\;.$$ If $w_i=0$
while $w_{i+1}>0$, then $t_{i-1}=t_i-1$ and $t_{i+1}>t_i+1$, which
in conjunction with $t_i\geqslant 1$ implies that $V'(0)>0$, contradicting
the optimality of the stack. Thus, if in an optimal stack we have
$w_i=0$, then also $w_{i+1}=w_{i+2}=\ldots=w_k=0$. If
$w_i,w_{i+1}>0$, then we must have $V'(0)=0$, or equivalently
$t_i^2=(t_{i-1}+\frac{1}{2})t_{i+1}$, as claimed.
\end{proof}
The optimality equations given in Lemma~\ref{lem:L3} can be solved
numerically to obtain the values of~$S^*_k(w)$ for specific
values of $w$ and $k$. The value of $S^*(w)$ is then found by
optimizing over~$k$. The optimal loaded spinal stacks of weight~3
and 5, which also turn out to be the optimal loaded stacks of these
weights, are shown in Figure~\ref{fig:l3}. The optimality equations
of Lemma~\ref{lem:L3} were also used to compute the spines of the
optimal stacks with up to 19 blocks shown in Figures~\ref{fig:2-10}
and~\ref{fig:11-19}. The spines of the stacks with~3 and~5 blocks
were obtained by adding the requirement that no point weight be
attached to the topmost block of the spine.
A somewhat larger example is given on the top left of
Figure~\ref{fig:s100} where the optimal loaded spinal stack of
weight~100 is shown. It is interesting to note that the
point weights in optimal spinal stacks form an almost arithmetical
progression. This observation is used in the proof of
Theorem~\ref{thm:spinal-lower}.
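For concreteness, the optimality recurrence can be iterated as in the following sketch (our own naming; in practice the trial value $t_1$ is tuned, e.g.\ by bisection, until $t_k$ matches the prescribed total weight $w$):

```python
def spine_loads_from_recurrence(t1, k):
    """Total loads t_1..t_k generated by the optimality recurrence of
    Lemma L3, t_{i+1} = t_i^2 / (t_{i-1} + 1/2), with t_0 = 0 and a
    trial value of t_1 (assumed here to keep all w_i positive)."""
    t = [0.0, t1]
    for i in range(1, k):
        t.append(t[i] ** 2 / (t[i - 1] + 0.5))
    return t[1:]

def spine_overhang(loads):
    """Overhang of the balanced spine (Lemma L2), with t_0 = 0."""
    return sum(1 - (prev + 0.5) / cur
               for prev, cur in zip([0.0] + loads[:-1], loads))
```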
Numerical experiments suggest that for every $w\geqslant 1$, all the point
weights in the spinal stacks with overhang~$S^*(w)$ are nonzero.
There are, however, non-optimal values of~$k$ for which some of the
bottom blocks in the stack that achieves an overhang of $S^*_k(w)$
have no point weights attached to them. We next show, without
explicitly using the optimality conditions of Lemma~\ref{lem:L3},
that $S^*(w)=\ln w + \Theta(1)$.
\begin{theorem} \label{thm:spinal-upper}
$S^*(w) < \ln w +1$.
\end{theorem}
\begin{proof}
For fixed total weight $w=t_k$ and fixed $k$, the largest possible
overhang $S^*_k(w)= \sum_{i=1}^k d_i$ is attained when the
conditions of Lemmas~\ref{lem:L1} and~\ref{lem:L2}
(and~\ref{lem:L3}) hold. Thus, as $t_{0}=0$,
$$\sum_{i=1}^k d_i \;=\; \sum_{i=1}^{k}\left(1- \frac{t_{i-1}+\frac{1}{2}}{t_i}
\right) \;<\; k - \sum_{i=2}^{k} \frac{t_{i-1}}{t_i}\;.$$
Putting $x_i=\frac{t_{i-1}}{t_i}$, we see that
$$S^*_k(w) \;<\; k - \sum_{i=2}^{k} x_i
\;\; {\rm\ and\;\; } \prod_{i=2}^{k}x_i \;=\; \frac{t_1}{t_k} \;\geqslant\; \frac{1}{w}\;.$$
The minimum sum for a finite set of positive real numbers with fixed
product is attained when the numbers are equal, hence
$$S^*_k(w) \;<\; k - (k-1)w^{-\frac{1}{k-1}}\;.$$
Let $z=\frac{k-1}{\ln w}$, so that $k-1=z\ln w$ and
$w^{-\frac{1}{k-1}} = e^{-1/z}$. Then
$$S^*_k(w) \;<\; 1 + z\ln w(1-e^{-1/z}) < 1 + \ln w\;,$$
as $z(1-e^{-1/z})\leqslant 1$, for every $z>0$.
\end{proof}
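The two inequalities used in the proof are easy to probe numerically (an illustrative sketch; the function name is ours):

```python
import math

def spinal_upper_bound(k, w):
    """The intermediate bound S*_k(w) < k - (k-1) w^{-1/(k-1)}, for k >= 2."""
    return k - (k - 1) * w ** (-1.0 / (k - 1))

# With z = (k-1)/ln w, the elementary inequality z(1 - e^{-1/z}) <= 1
# for z > 0 turns this into S*_k(w) < 1 + ln w.
```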
\begin{corollary} \label{cor:spinal-upper}
$S(n) < \ln n + 1$.
\end{corollary}
We can now describe a construction of loaded spinal stacks
which achieves an overhang agreeing asymptotically with the
upper bound proved in Theorem~\ref{thm:spinal-upper}.
\begin{theorem}\label{thm:spinal-lower}
$S^*(w) > \ln w - 1.313$.
\end{theorem}
\begin{proof}
We construct a spine with $k=\lfloor \sqrt{w}\rfloor$ blocks in it
with $w_i=2(i-1)$, for $1\leqslant i\leqslant k$. It follows easily by
induction that $t_i=i^2$, for $1\leqslant i\leqslant k$. In particular, the
total weight of the stack is $t_k=k^2\leqslant w$, as required. By
Lemma~\ref{lem:L2}, we get that
$$d_i \;=\; \frac{w_i+\frac{1}{2}}{t_i} \;=\;
\frac{2(i-1)+\frac{1}{2}}{i^2} \;=\; \frac{2}{i} -
\frac{3}{2i^2}\;.$$ Thus,
$$S^*(w) \;\geqslant\; \sum_{i=1}^k d_i \;=\; 2\sum_{i=1}^k\frac{1}{i}
-\frac{3}{2}\sum_{i=1}^k\frac{1}{i^2}\;=\;
2H_{\lfloor\sqrt{w}\rfloor}-\frac{3}{2}\sum_{i=1}^{\lfloor\sqrt{w}\rfloor}\frac{1}{i^2}\;>\;
\ln w +2\gamma - \frac{\pi^2}{4} \;>\; \ln w - 1.313\;.$$ In the
above inequality, $\gamma\simeq 0.5772156$ is Euler's constant.
\end{proof}
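The construction is easy to check numerically (an illustrative sketch; the function name is ours):

```python
import math

def explicit_spine_overhang(w):
    """Overhang of the Theorem spinal-lower construction: k = floor(sqrt(w))
    blocks with w_i = 2(i-1), hence t_i = i^2 and d_i = 2/i - 3/(2 i^2)."""
    k = math.isqrt(int(w))
    return sum(2.0 / i - 1.5 / (i * i) for i in range(1, k + 1))
```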
\begin{figure*}[t]
\begin{center}
\includegraphics[height=80mm]{06-0595Fig13.pdf}
\caption{A spinal stack with a shield.} \label{fig:shadow}
\end{center}
\end{figure*}
We next discuss a technique that can be used to convert loaded
spinal stacks into standard stacks. This is of course done by
constructing balancing sets that apply the required forces on the
left-hand edges of the spine blocks. The first step is the placement of
\emph{shield} blocks on top of the spine blocks, as shown in
Figure~\ref{fig:shadow}. We let $B'_i$, for $0\leqslant i\leqslant k-1$, be the
shield block placed on top of spine block $B_{i+1}$ and alongside
spine block $B_i$ for $i>0$. We let $y_i$ be the $x$-coordinate of the left
edge of~$B'_i$, for $1\leqslant i\leqslant k-1$. Note that $x_{i+1}-1 < y_i\leqslant
x_i-1$, where $x_i$ is the $x$-coordinate of the left edge of~$B_i$.
Shield block~$B'_i$ applies a downward force of $w_{i+1}$ on
$B_{i+1}$. The force is applied at $x_{i+1}$, i.e., at the left
edge of $B_{i+1}$. Block~$B'_i$ also applies a downward force of
$u_{i+1}$ on $B'_{i+1}$ at $z_{i+1}$, where $y_i\leqslant z_{i+1}\leqslant
y_{i+1}+1$. Similarly, block $B'_{i-1}$ applies a downward force of
$u_i$ on $B'_i$ at $z_i$. Finally a downward external force of $v_i$
is applied on the left edge of $B'_i$. The goal of the shield blocks
is to aggregate the forces that should be applied on the spine
blocks and to replace them by a set of fewer \emph{integral} forces
that are to be applied on the shield blocks. We will therefore place
the shield blocks and choose the forces $u_i$ and their positions in
such a way that most of the $v_i$ will be~$0$. (This is why we use
dashed arrows to represent the $v_i$ forces in Figure~\ref{fig:shadow}.)
The shield blocks are in equilibrium provided that the following
balance conditions are satisfied: $$\begin{array}{c} u_i+v_i+1 \;=\;
u_{i+1}+w_{i+1}\;,\\
z_iu_i+y_iv_i+(y_i+\frac12) \;=\;
z_{i+1}u_{i+1}+x_{i+1}w_{i+1}\;,\\
\end{array}$$
for $1\leqslant i\leqslant k-1$. (We define $u_k=0$.) It is easy to see
that if $u_{i+1},w_{i+1},x_{i+1},y_{i+1},$ and $z_{i+1}$ are set,
then any choice of $v_i$ uniquely determines $u_i$ and $z_i$. The
choice is feasible if $u_i,v_i\geqslant 0$ and $y_{i-1}\leqslant z_i$.
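This back-substitution is straightforward to carry out; the following sketch (variable names mirror the text, but the function itself is our own illustration) solves the two balance equations for $u_i$ and $z_i$:

```python
def shield_step(u_next, w_next, x_next, z_next, y_i, v_i):
    """Given the already-determined u_{i+1}, w_{i+1}, x_{i+1}, z_{i+1}
    and a chosen v_i, solve the shield balance equations
        u_i + v_i + 1                   = u_{i+1} + w_{i+1}
        z_i u_i + y_i v_i + (y_i + 1/2) = z_{i+1} u_{i+1} + x_{i+1} w_{i+1}
    for the force u_i and its position z_i.  The choice is feasible
    when u_i, v_i >= 0 and y_{i-1} <= z_i."""
    u_i = u_next + w_next - v_i - 1
    z_i = (z_next * u_next + x_next * w_next
           - y_i * v_i - (y_i + 0.5)) / u_i
    return u_i, z_i
```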
\begin{figure}[t]
\begin{center}
\includegraphics[width=8cm]{06-0595Fig14-1.pdf}
\includegraphics[width=8cm]{06-0595Fig14-2.pdf}\\[0.8cm]
\includegraphics[width=8cm]{06-0595Fig14-3.pdf}
\caption{Optimal loaded spinal stack of weight 100 (top left), with
shield added (top right) and with a complete balancing set added
(bottom).} \label{fig:s100}
\end{center}\vspace{-5mm}
\end{figure}
In our constructions, we used the following heuristic to place the
shield blocks and specify the forces between them. We start placing
the shield blocks from the bottom up. In most cases, we choose
$y_i=x_i-1$ and $v_i=0$, i.e., $B'_i$ is adjacent to $B_i$ and no
external force is applied to it. Eventually, however, it may happen
that $z_{i+1}<x_{i}-1$, which makes it impossible to place~$B'_{i}$
adjacent to~$B_{i}$ and still apply the force~$u_{i+1}$ down
on~$B_{i+1}$ at~$z_{i+1}$. In that case we choose $y_i=z_{i+1}$. A
more significant event, which usually occurs soon after the previous
event, is when $z_{i+1}\leqslant x_{i+1}-1$, in which case no placement of
$B'_i$ allows it to apply the forces $u_{i+1}$ and $v_{i+1}$ on
$B'_{i+1}$ and $B_{i+1}$ at the required positions, as they are at
least a unit distance apart. In this case, we introduce a nonzero,
integral, external force~$v_{i+1}$ as follows. We let
$v_{i+1}=\lfloor (1-z_{i+1}+y_{i+1})u_{i+1} \rfloor$ and then
recompute $u_{i+1}$ and $z_{i+1}$. It is easy to check that
$u_{i+1},v_{i+1}\geqslant 0$ and that $y_i\leqslant z_{i+1}\leqslant y_{i+1} +1$. If we
now have $z_{i+1}>x_{i+1}-1$, then the process can continue. Otherwise
we stop. In our experience, we were always able to use this process
to place all the shield blocks, except for a very few top ones.
The~$v_i$ forces left behind tend to be few and far apart. When this
process is applied, for example, on the optimal loaded spinal stack
of weight~$100$, only one such external force is needed, as shown in
the second diagram of Figure~\ref{fig:s100}.
The nonzero $v_i$'s can be easily realized by erecting appropriate
towers, as shown at the bottom of Figure~\ref{fig:s100}. The top
part of the balancing set is then designed by solving a small linear
program. We omit the fairly straightforward details.
The overhang achieved by the spinal stack shown at the bottom of
Figure~\ref{fig:s100} is about $3.6979$, which is a considerable
improvement on the $2.5937$ overhang of a 100-block harmonic stack,
but is also substantially less than the $4.23897$ overhang of the
non-spinal loaded stack of weight 100 given in
Figure~\ref{fig:p40-100}.
Using the heuristic described above we were able to fit appropriate
balancing sets for all optimal loaded spinal stacks of integer
weight~$n$, for every $n\leqslant 1000$, with the exception of $n=3,5,7$.
We conjecture that the process succeeds for every $n\ne 3,5,7$.
\begin{conjecture}
$S(n)= S^*(n)$ for $n\neq 3,5,{\rm or\ }7$.
\end{conjecture}
\begin{figure}[t]
\begin{center}
\includegraphics[width=85mm]{06-0595Fig15.pdf}
\caption{A $6$-stack composed of $r$-slabs, for $r=2,3,\ldots,6$,
and an additional block.} \label{fig:slabs2-6}
\end{center}\vspace{-5mm}
\end{figure}
\section{Parabolic stacks} \label{sec:parabolic}
We now give a simple explicit construction of $n$-block stacks with
an overhang of about $(3n/16)^{1/3}$, an \emph{exponential}
improvement over the $O(\log n)$ overhang achievable using spinal
stacks in general and the harmonic stacks in particular. Though the
stacks of this sequence are not optimal (see the empirical results
of the next section), they are within a constant factor of
optimality, as will be shown in a subsequent paper \cite{PPTWZ07}.
The stacks constructed in this section are what we term
\emph{brick-wall} stacks. The blocks in each row are contiguous, and
each is centered over the ends of blocks in the row beneath. This
resembles the simple ``stretcher-bond'' pattern in real-life
bricklaying.
Overall the stacks have a symmetric roughly parabolic shape, hence
the name, with vertical axis at the table edge and a brick-wall
structure. An illustration of a $111$-block \emph{parabolic}
$6$-stack with overhang~$3$ was given in Figure~\ref{fig:6stack}.
\begin{figure}[t]
\begin{center}
\vspace*{0.3cm}
\includegraphics[scale=0.6]{06-0595Fig16.pdf}
\caption{A $6$-slab with a grey $5$-slab contained in it.}
\label{fig:slab6}
\end{center}
\end{figure}
An \emph{$r$-row} is a row of $r$ adjacent blocks, symmetrically
placed with respect to $x=0$. An \emph{$r$-slab}, for $r\geqslant 2$, has height $2r-3$
and consists of alternating $r$-rows and $(r-1)$-rows, starting and
finishing with $r$-rows. An $r$-slab therefore contains
$r(r-1)+(r-1)(r-2)=2(r-1)^2$ blocks. Figure~\ref{fig:slabs2-6} shows
$r$-slabs, for $r=2,3,\ldots,6$. A \emph{parabolic $d$-stack}, or
just \emph{$d$-stack}, for short, is a $d$-slab on a $(d-1)$-slab on
\ldots\ on a 2-slab on a single block. The slabs shown
in Figure~\ref{fig:slabs2-6} thus compose a $6$-stack.
\begin{lemma} A parabolic $d$-stack contains $\frac{d(d-1)(2d-1)}{3} + 1$
blocks and, if balanced, has an overhang of~$\frac{d}{2}$.
\end{lemma}
\begin{proof} The number of blocks contained in a $d$-stack is
$1+\sum_{r=2}^d 2(r-1)^2 = 1 + \frac{d(d-1)(2d-1)}{3}$. The overhang
achieved, if the stack is balanced, is half the width of the top row,
i.e., $\frac{d}{2}$.
\end{proof}
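The count is easily confirmed (a quick sketch of our own):

```python
def parabolic_blocks(d):
    """Number of blocks in a parabolic d-stack: one base block plus
    2(r-1)^2 blocks in each r-slab, for r = 2, ..., d."""
    return 1 + sum(2 * (r - 1) ** 2 for r in range(2, d + 1))

# Closed form: d(d-1)(2d-1)/3 + 1.  A 6-stack has 111 blocks and,
# when balanced, an overhang of 6/2 = 3.
```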
In preparation for proving the balance of parabolic stacks, we
show in the next lemma that a slab can concentrate a set of forces
acting on its top together with the weights of its own blocks down
into a narrower set of forces acting on the row below it. The lemma
is illustrated in Figure~\ref{fig:slab6}.
\begin{lemma} \label{lem:slab}
For any $g\geqslant 0$, an $r$-slab with forces of $g,2g,2g,\ldots
,2g,g$ acting downwards onto its top row at positions
$-\frac{r}{2},-\frac{r-2}{2},-\frac{r-4}{2}, \dots
,\frac{r-2}{2},\frac{r}{2}$, respectively, can be stabilized by
applying a set of upward forces $g',2g',2g',\ldots ,2g',g'$, where
$g'=\frac{r}{r-1}g+r-1$, on its bottom row at positions
$-\frac{r-1}{2},-\frac{r-3}{2}, \dots ,\frac{r-3}{2},\frac{r-1}{2}$,
respectively.
\end{lemma}
\begin{proof}
The proof is by induction on $r$. For $r=2$, a $2$-slab is just a
$2$-row, which is clearly balanced with downward forces of $g,2g,g$ at
$-1,0,1$ and upward forces of $2g+1,2g+1$ at
$-\frac{1}{2},\frac{1}{2}$, when half of the downward force $2g$
acting at $x=0$ is applied on the right-hand edge of the left block
and the other half applied on the left-hand edge of the right block.
For the induction step, we first observe that for any $r\geqslant 2$ an
$(r+1)$-slab can be regarded as an $r$-slab with an $(r+1)$-row
added above and below and with an extra block added at each end of
the $r-2$ rows of length $r-1$ of the $r$-slab. The 5-slab
(shaded) contained in a 6-slab together with the added blocks is
shown in Figure~\ref{fig:slab6}.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.7]{06-0595Fig17.pdf}
\caption{The proof of Lemma~\ref{lem:slab}.}
\label{fig:slab-induction}
\end{center}
\end{figure}
Suppose the statement of the lemma holds for $r$-slabs and
consider an $(r+1)$-slab with the supposed forces acting on its top row.
Let $f=g/r$, so that $g=rf$. As in the basis of the induction, the
top row can be balanced by $r+1$ equal forces of $2rf+1$ from below (the 1 is
for the weight of the blocks in the top row) acting at positions
$-\frac{r}{2},-\frac{r-2}{2},\ldots ,\frac{r-2}{2},\frac{r}{2}$. As
$$2rf+1 \;=\; (r-1)f+((r+1)f+1) \;=\; 2(r-1)f+2f+1\;,$$ we can
express this constant sequence of $r+1$ forces as the sum of the
following two force sequences: $$\begin{array}{ccccccccccccc} (r-1)f
&,& 2(r-1)f &,& 2(r-1)f &,& \ldots &,&
2(r-1)f &,& 2(r-1)f &,& (r-1)f\\
(r+1)f+1 &,& 2f+1 &,& 2f+1 &,& \ldots &,& 2f+1 &,& 2f+1 &,& (r+1)f+1\\
\end{array}$$
The forces in the first sequence can be regarded as acting on the
$r$-slab contained in the $(r+1)$-slab, which then, by the induction
hypothesis, yield downward forces on the bottom row of
$$rf+r-1\;\;,\;\;2rf+2(r-1)\;\;,\;\;\ldots \;\;,\;\;2rf+2(r-1)\;\;,\;\;rf+r-1$$ at positions
$-\frac{r-1}{2},-\frac{r-3}{2},\dots\ ,\frac{r-3}{2},\frac{r-1}{2}$.
The forces of the second sequence, together with the weights of the
outermost blocks of the $(r+1)$-rows, are passed straight down
through the rigid structure of the $r$-slab to the bottom row.
The combined forces acting down on the bottom row are now
$$(r+1)f+r{-}1\;,\;rf+r{-}1\;,\;2f+1\;,\;2rf+2(r{-}1)\;,\;2f+1\;,\;\ldots
\;,\;2f+1\;,\;rf+r{-}1\;,\;(r+1)f+r{-}1$$
at positions $-\frac{r}{2},-\frac{r-1}{2},\dots\
,\frac{r-1}{2},\frac{r}{2}$. The bottom row is in equilibrium when
the sequence of upward forces
$$(r+1)f+r\;\;,\;\;2(r+1)f+2r\;\;,\;\;2(r+1)f+2r\;\;,\;\;\ldots
\;\;,\;\;2(r+1)f+2r\;\;,\;\;2(r+1)f+2r\;\;,\;\;(r+1)f+r$$ is applied
on the bottom row at positions $-\frac{r}{2},-\frac{r-2}{2},\dots\
,\frac{r-2}{2},\frac{r}{2}$,
as required.
\end{proof}
\begin{theorem} \label{thm:construction}
For any $d\geqslant 2$, a parabolic $d$-stack is balanced, contains
$\frac{d(d-1)(2d-1)}{3} + 1$ blocks, and has an overhang of
$\frac{d}{2}$.
\end{theorem}
\begin{proof} The balance of a parabolic $d$-stack follows by a
repeated application of Lemma~\ref{lem:slab}.
For $2\leqslant r\leqslant d$, let $g(r)$ denote the value of $g$
in Lemma~\ref{lem:slab} for the $r$-slab in the $d$-stack.
Although the
argument does not rely on the specific values that $g(r)$ assumes,
it can be verified that $g(r)=\frac{1}{r}\sum_{i=r}^{d-1} i^2$. Note
that $g(d)=0$, as no downward forces are exerted on the top row of
the $d$-slab, which is also the top row of the $d$-stack, and that
$g(r-1)=\frac{r}{r-1}g(r)+r-1$, as required by Lemma~\ref{lem:slab}.
\end{proof}
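The claimed values of $g(r)$ and the recursion of Lemma~\ref{lem:slab} can be verified exactly (a sketch of ours using rational arithmetic):

```python
from fractions import Fraction

def g(r, d):
    """g(r) = (1/r) * sum_{i=r}^{d-1} i^2: the force unit acting on the
    top row of the r-slab inside a parabolic d-stack."""
    return Fraction(sum(i * i for i in range(r, d)), r)

# g(d) = 0 (nothing rests on the top row), and
# g(r-1) = r/(r-1) * g(r) + (r-1), as required by Lemma slab.
```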
\begin{theorem} \label{thm:lower}
$D(n)\geqslant (\frac{3n}{16})^{1/3} - \frac14$ for all $n$.
\end{theorem}
\begin{proof}
Choose $d$ so that
$\frac{(d-1)d(2d-1)}{3}+1 \leqslant n \leqslant
\frac{d(d+1)(2d+1)}{3}$. Then
Theorem~\ref{thm:construction} shows that a $d$-stack yields an
overhang of $d/2$ and can be constructed using $n$ or fewer blocks.
Any extra blocks can be just placed in a vertical pile in the center
on top of the stack without disturbing balance (or arbitrarily
scattered on the table). Hence
$$n < \frac{2(d+\frac{1}{2})^3}{3}\ {\rm\quad and\ so\quad }
\ D(n) \geqslant d/2> \left(\frac{3n}{16}\right)^{1/3}-\frac{1}{4}\;.
\vspace*{-5pt}$$
\end{proof}
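The bound can be spot-checked numerically (an illustrative sketch; the function name is ours):

```python
def parabolic_overhang(n):
    """Overhang d/2 of the largest parabolic d-stack using at most n
    blocks (n >= 3), where a d-stack has d(d-1)(2d-1)/3 + 1 blocks."""
    d = 2
    while (d + 1) * d * (2 * d + 1) // 3 + 1 <= n:
        d += 1
    return d / 2.0
```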
In Section~\ref{sec:spinal} we claimed that optimal stacks are spinal
\emph{only} for $n\leqslant 19$. We can justify this claim for $n\leqslant 30$
by exhaustive search, while comparison of the lower bound from
Theorem~\ref{thm:lower} with the upper bound
of $S(n) < 1+\ln n$ from Corollary~\ref{cor:spinal-upper} deals with the
range $n\geqslant 5000$. The intermediate values of $n$ can be covered
by combining a few explicit constructions, such as the stack shown in
Figure~\ref{fig:vase95}, with numerical bounds using Lemma~\ref{lem:L3}.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.3]{06-0595Fig18.pdf}
\caption{Incremental block-by-block construction of modified
parabolic stacks.} \label{fig:modified}
\end{center}
\end{figure}
Can parabolic $d$-stacks be built incrementally by laying one brick
at a time? The answer is no, as the bottom three rows of a parabolic
stack form an unbalanced inverted 3-triangle. The inverted
3-triangle remains unbalanced when the first block of the fourth row
is laid down. Furthermore, the bottom six rows, on their own, are
also not balanced. These, however, are the only obstacles to an
incremental row-by-row and block-by-block construction of parabolic
stacks and they can be overcome by the modified parabolic stacks
shown in Figure~\ref{fig:modified}. We simply omit the lowest block
and move the whole stack half a block length to the left. The bricks
can now be laid row by row, going in each row from the center
outward, alternating between the left and right sides, with the left
side, which is over the table, taking precedence. The numbers in
Figure~\ref{fig:modified} indicate the order in which the blocks are
laid. Thus, unlike with harmonic stacks, it is possible to construct
an arbitrarily large overhang using sufficiently many blocks,
\emph{without} knowing the desired overhang in advance.
\section{General stacks} \label{sec:general}
We saw in Section~\ref{sec:prelim} that the problem of checking
whether a given stack is balanced reduces to checking the feasibility
of a system of linear equations and inequalities. Similarly, the
minimum total weight of the point weights that are needed to
stabilize a given loaded stack can be found by solving a linear
program.
Finding a stack with a given number of blocks, or a loaded stack
with a given total weight, that achieves maximum overhang seems,
however, to be a much harder computational task. To do so, one
should, at least in principle, consider all possible combinatorial
stack structures and for each of them find an optimal placement
of the blocks. The \emph{combinatorial structure} of a stack
specifies the contacts between the blocks of the stack, i.e., which
blocks rest on which, and in what order (from left to right), and
which rest on the table.
The problem of finding a (loaded) stack with a given combinatorial
structure with maximum overhang is again not an easy problem. As
both the forces and their locations are now unknowns, the problem is
not linear, but rather a constrained quadratic programming problem.
Though there is no general algorithm for efficiently finding the
global optimum of such constrained quadratic programs, such
problems can still be solved in practice using nonlinear
optimization techniques.
For stacks with a small number of blocks, we enumerated all
possible combinatorial stack structures and numerically optimized
each of them. For larger numbers of blocks this approach is
clearly not feasible and we had to use various heuristics to cut
down the number of combinatorial structures considered.
The stacks of Figures~\ref{fig:2-10}, \ref{fig:11-19},
\ref{fig:20-30}, and~\ref{fig:p40-100} were found using extensive
numerical experimentation. The stacks of
Figures~\ref{fig:2-10}, \ref{fig:11-19}, and \ref{fig:20-30} are
optimal, while the stacks of Figure~\ref{fig:p40-100} are either
optimal or very close to being so.
The collections of forces that stabilize the loaded stacks of
Figure~\ref{fig:p40-100} (and the loaded stacks contained in the
stacks of Figures~\ref{fig:2-10}, \ref{fig:11-19}, and
\ref{fig:20-30}) have the following interesting properties. First,
the stabilizing collections of forces of these stacks are unique.
Second, almost all downward forces in these collections are applied
at the edges of blocks. The only exceptions occur when a downward force
is applied on a \emph{right-protruding} block, i.e., a rightmost block in
a level that protrudes beyond the rightmost block of the level
above it. In addition, all point weights are placed on the
left-hand edges of \emph{left-protruding} blocks, where
left-protruding blocks are defined in an analogous way. The table, of
course, supports the (only) block that rests on it at its right-hand
edge. A collection of stabilizing forces that satisfies these
conditions is said to be \emph{well-behaved}. A schematic
description of well-behaved collections of stabilizing forces is
given in Figure~\ref{fig:schematic}. The two right-protruding blocks
are shown with a slightly lighter shading. A right-protruding block
is always adjacent to the block on its left.
We conjecture that forces that balance optimal loaded stacks are
always well-behaved.
\begin{figure}[t]
\centerline{
\includegraphics[height=80mm]{06-0595Fig19.pdf}}
\caption{A schematic description of a well-behaved set of
stabilizing forces.} \label{fig:schematic}
\end{figure}
A useful property of well-behaved collections of stabilizing forces
is that the total weight of the stack and the positions of its
blocks uniquely determine all the forces in the collection. This
follows from the fact that each block has either two downward forces
acting upon it at specified positions, namely at its two edges, or
just a single force in an unspecified position. Given the upward
forces acting on a block, the downward force or forces acting upon
it can be obtained by solving the force and moment equations of the
block. All the forces in the collection can therefore be determined
in a bottom-up fashion.
Most of our experiments on stacks with more than 30 blocks were
conducted on loaded stacks balanced by well-behaved sets of balancing
forces.
We saw in Section~\ref{sec:prelim} that loaded stacks of total
weight~3, 5, and~7 achieve a larger overhang than the corresponding
unloaded stacks, simply because the number of blocks available for
use in their balancing sets is smaller than the number of point
weights to be applied. The loaded stacks of
Figure~\ref{fig:p40-100} exhibit another trivial impediment
to the conversion of loaded stacks into standard ones: the point weight
to be applied in the lowest position has magnitude less than~$1$. Thus,
these stacks can be converted into standard ones only after making
some small adjustments. These adjustments have only a very small
effect on the overhang achieved. Thus, although we believe that the
difference between the maximum overhangs achieved by loaded and
unloaded stacks is bounded by a small universal constant, we also
believe that for most sizes, loaded stacks yield slightly larger
overhangs.
Although the placements of the blocks in the optimal, or close to
optimal, stacks of Figure~\ref{fig:p40-100} are somewhat irregular,
with some small (essential) gaps between blocks of the same layer,
at a high level, these stacks seem to resemble brick-wall stacks, as
defined in Section~\ref{sec:parabolic}. This, and the fact that
brick-wall stacks were used to obtain the $\Omega(n^{1/3})$ lower
bound on the maximum overhang, indicate that it might be interesting
to investigate the maximum overhang that can be achieved using
brick-wall stacks.
The parabolic brick-wall stacks of Section~\ref{sec:parabolic} were
designed to enable a simple inductive proof of their balance.
Parabolic stacks, however, are far from being optimal brick-wall
stacks. The balanced 95-block symmetric brick-wall stack with an
overhang of 4 depicted in Figure~\ref{fig:vase95}, for example,
contains fewer blocks and achieves a larger overhang than that
achieved by the 111-block overhang-3 parabolic stack of
Figure~\ref{fig:6stack}.
\begin{figure}[t]
\centerline{
\includegraphics[height=80mm]{06-0595Fig20.pdf}}
\caption{A 95-block symmetric brick-wall stack with overhang 4.}
\label{fig:vase95}
\end{figure}
\begin{figure}[t]
\centerline{
\includegraphics[height=80mm]{06-0595Fig21-1.pdf}\hspace*{1cm}
\includegraphics[height=80mm]{06-0595Fig21-2.pdf}}
\caption{A schematic description of well-behaved collections of forces
that stabilize symmetric and asymmetric brick-wall stacks.}
\label{fig:symfor}
\end{figure}
Loaded brick-wall stacks are especially easy to experiment with.
Empirically, we have again discovered that the minimum weight
collections of forces that balance them turn out to be well-behaved,
in the formal sense defined above. When the brick-wall stacks are
\emph{symmetric} with respect to the $x=0$ axis, and have a flat
top, point weights are attached only to blocks at the top layer of
the stack. Protruding blocks, both on the left and on the right,
then simply serve as \emph{props}, while all other blocks are
perfect \emph{splitters}, i.e., they are supported at the center of
their lower edge and they support other blocks at the two ends of
their upper edge. In non-symmetric brick-wall stacks it is usually
profitable to use the left-protruding blocks as splitters and not as
props, attaching point weights to their left ends. A schematic
description of well-behaved forces that stabilize symmetric and
asymmetric brick-wall stacks is shown in Figure~\ref{fig:symfor}.
As can be seen, all forces in such well-behaved collections are
linear functions of~$w$, the total weight of the stack. This allows
us, in particular, to find the minimum total weight needed to
stabilize a brick-wall loaded stack \emph{without} solving a linear
program. We simply choose the smallest total weight~$w$ for which
all forces are nonnegative. This observation enabled us to
experiment with huge symmetric and asymmetric brick-wall stacks.
The best symmetric loaded brick-wall stacks with overhangs 10 and
50 that we have found are shown in
Figures~\ref{fig:s10} and~\ref{fig:width100}. Their total weights
are about 1151.76 and 115,467, respectively. The blocks in the
larger stack are so small that they are not shown individually. We
again believe that these stacks are close to being the optimal
stacks of their kind. They were found using a local search approach.
In particular, these stacks cannot be improved by widening or
narrowing layers, or by adding or removing single layers.
Essentially the same symmetric stacks were obtained by starting from
almost any initial stack and repeatedly improving it by widening,
narrowing, adding, and removing layers.
As can be seen from Figures~\ref{fig:s10}
and~\ref{fig:width100}, the shapes of optimal symmetric loaded
stacks, after suitable scaling, seem to tend to a limiting curve.
This curve, which we have termed the \emph{vase}, is similar to but
different from that of an inverted normal distribution. We have as yet
no conjecture for its equation.
We have conducted similar experiments with asymmetric loaded
brick-wall stacks. The best such stack with overhang~10 that we have
found is shown in Figure~\ref{fig:a10}. Its total weight of about
1128.84 is about $3.38\%$ less than the weight of the symmetric
stack of Figure~\ref{fig:s10}. The scaled shapes of optimal
asymmetric loaded brick-wall stacks seem again to tend to a limiting
curve which we have termed the \emph{oil lamp}. We again have no
conjecture for its equation.
\begin{figure}[t]
\centerline{
\includegraphics[height=80mm]{06-0595Fig22.pdf}}
\caption{A symmetric loaded brick-wall stack with an overhang of
10.}
\label{fig:s10}
\end{figure}
\begin{figure}[t]
\centerline{
\includegraphics[height=8cm]{06-0595Fig23.pdf}}
\caption{A scaled outline of a loaded brick-wall stack with an
overhang of 50.}
\label{fig:width100}
\end{figure}
\begin{figure}[t]
\centerline{
\includegraphics[height=80mm]{06-0595Fig24.pdf}}
\caption{An asymmetric loaded brick-wall stack with an overhang of
10.}
\label{fig:a10}
\end{figure}
\section{Open problems} \label{sec:conc}
Some intriguing problems still remain open. In a subsequent paper
\cite{PPTWZ07}, we show that the $\Omega(n^{1/3})$ overhang lower
bound presented here is optimal, up to a constant factor, but it
would be interesting to determine the largest constant $c_{over}$
for which overhangs of $(c_{over}-o(1))n^{1/3}$ are possible. Can
this constant $c_{over}$ be achieved using stacks that are simple to
describe, e.g., brick-wall stacks, or simple modifications of them,
such as brick-wall stacks with adjacent levels having a displacement
other than~$\frac12$, or small gaps left between the blocks of the
same level?
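As a quick numerical aside (our own illustration, not part of the paper's analysis), the classical overhang $\frac12 H_n$ grows only logarithmically, while the $n^{1/3}$ growth discussed here soon exceeds it:

```python
import math

def harmonic_overhang(n):
    # Classical one-block-per-level stack: overhang (1/2) * H_n ~ (ln n)/2.
    return 0.5 * sum(1.0 / k for k in range(1, n + 1))

for n in (10, 1000, 10**6):
    print(n, harmonic_overhang(n), n ** (1 / 3))
```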
What are the limiting vase and oil lamp curves? Do they yield,
asymptotically, the maximum overhangs achievable using symmetric and
general stacks?
Another open problem is the relation between the maximum overhangs
achievable using loaded and unloaded stacks. We believe, as
expressed in Conjecture~\ref{con:D}, that the difference between
these two quantities tends to~$0$ as the size of the stacks tends to
infinity. We also conjecture that $D^*(n)-D(n)\leqslant
D^*(3)-D(3)=\frac{5-2\sqrt{6}}{6}\simeq 0.017$, for every $n\geqslant 1$.
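The quoted constant is easy to verify numerically (a one-line check, not from the paper):

```python
import math

# (5 - 2*sqrt(6)) / 6 ~ 0.01684, matching the 0.017 quoted above.
gap = (5 - 2 * math.sqrt(6)) / 6
print(gap)
```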
Our notion of balance, as defined formally in
Section~\ref{sec:prelim}, allows stacks to be precarious:
stacks that achieve maximum overhang are always on the verge of
collapse. It is not difficult, however, to define more robust
notions of balance, where there is some \emph{stability}.
In one such natural definition, a stack is \emph{stable} if there is a
balancing set of forces in which none of the forces
acts at the edge of any block.
We note in passing that Farkas' lemma, or the theory of linear
programming duality (see \cite{S98}), can be used to derive an
equivalent definition of stability: a stack is stable if and only if
every feasible infinitesimal motion of the blocks of the stack
increases the total potential energy of the system.
This requirement of stability raises some technical difficulties
but does not substantially change the
nature of the overhang problem. Our parabolic $d$-stacks, for example,
can be made stable by adding a $(d-1)$-row symmetrically placed on top.
The proof of this is straightforward but not trivial. We believe that for
any $n\neq 3$, the loss in the overhang due to this stricter definition is
infinitesimal.
Our analysis of the overhang problem was made under the \emph{no
friction} assumption. All the forces considered were therefore
vertical. The presence of friction introduces horizontal forces and
thus changes the picture completely, as also observed by Hall
\cite{H05}.
We can show that there is a fixed coefficient of friction such that
the inverted triangles are all balanced, and so achieve overhang of
order $n^{1/2}$.
\bibliographystyle{plain}
% End of arXiv:0710.2357 ("Overhang", math.HO; math-ph; math.CO).
% arXiv:1109.4676 -- "A note on heavy cycles in weighted digraphs".
\begin{abstract}
A weighted digraph is a digraph such that every arc is assigned a
nonnegative number, called the weight of the arc. The weighted outdegree
of a vertex $v$ in a weighted digraph $D$ is the sum of the weights of
the arcs with $v$ as their tail, and the weight of a directed cycle $C$
in $D$ is the sum of the weights of the arcs of $C$. In this note we
prove that if every vertex of a weighted digraph $D$ with order $n$ has
weighted outdegree at least 1, then there exists a directed cycle in $D$
with weight at least $1/\log_2 n$. This proves a conjecture of
Bollob\'{a}s and Scott up to a constant factor.
\end{abstract}
\section{Introduction}
We use Bondy and Murty \cite{Bondy_Murty} for terminology and
notation not defined here, and consider only digraphs containing no
multiple arcs.
Let $D$ be a digraph. The numbers of vertices and loops of $D$ are denoted by $n(D)$ and $r(D)$, respectively. We call $D$ a {\em
weighted digraph} if each arc $a$ of $D$ is assigned a nonnegative number
$w_D(a)$, called the {\em weight} of $a$. For a subdigraph $H$ of
$D$, $V(H)$ and $A(H)$ are used to denote the {set} of vertices and
arcs of $H$, respectively. The {\em weight} of $H$ is defined by
$$
w_D(H)=\sum_{a\in A(H)}w_D(a).
$$
For a vertex $v\in V(D)$, $N_H^+(v)$ denotes the set, and $d_H^+(v)$
the number, of vertices in $H$ to which there is an arc from $v$. We
define the {\em weighted outdegree} of $v$ in $H$ by
$$
d_H^{w+}(v)=\sum_{h\in N_H^+(v)}w_D(vh).
$$
When no confusion occurs, we will denote $w_D(a)$, $w_D(H)$,
$N_D^+(v)$, $d_D^+(v)$ and $d_D^{w+}(v)$ by $w(a)$, $w(H)$,
$N^+(v)$, $d^+(v)$ and $d^{w+}(v)$, respectively.
An unweighted digraph $D$ can be regarded as a weighted digraph in
which each arc $a$ is assigned weight $w(a)=1$. Thus, in an
unweighted digraph, $d^{w+}(v)=d^+(v)$ for every vertex $v$, and the
weight of a subdigraph is simply the number of its arcs.
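To make the definitions concrete, here is a small sketch (our own illustration, not from the note), with a weighted digraph stored as a dict from arcs $(u,v)$ to weights:

```python
# Arcs of a weighted digraph on {a, b, c}; every weighted outdegree is >= 1.
w = {("a", "b"): 0.5, ("a", "c"): 0.5, ("b", "a"): 1.0, ("c", "a"): 1.0}

def weighted_outdegree(w, v):
    # d^{w+}(v): total weight of the arcs with tail v.
    return sum(wt for (u, _), wt in w.items() if u == v)

def cycle_weight(w, cycle):
    # w(C) for a directed cycle given as a vertex sequence v0, ..., vk = v0.
    return sum(w[(cycle[i], cycle[i + 1])] for i in range(len(cycle) - 1))

print(weighted_outdegree(w, "a"))        # -> 1.0
print(cycle_weight(w, ("a", "b", "a")))  # -> 1.5
```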
A loopless digraph is one containing no loops. Let $D$ be a loopless
digraph such that every vertex of $D$ has outdegree at least $d$. {It is easy to see} that $D$ contains a {directed} cycle with length at least
$d+1$. For weighted digraphs, Bondy {\cite{Bondy}} conjectured that if every vertex
in a weighted loopless digraph has weighted outdegree at least 1,
then the digraph contains a {directed} cycle of weight at least 1. This conjecture was disproved by T. Spencer of Nebraska {(see \cite{Bollobas_Scott})}.
Bollob\'{a}s and Scott \cite{Bollobas_Scott} gave a lower bound
{on} the weight of heaviest directed cycles in a
weighted loopless digraph under the weighted outdegree condition.
\begin{theorem}[Bollob\'{a}s and Scott \cite{Bollobas_Scott}]
Let $D$ be a weighted loopless digraph with {$n\geq
2$} vertices. If $d^{w+}(v)\geq 1$ for every vertex $v\in V(D)$,
then $D$ contains a {directed} cycle $C$ such that
$w(C)\geq(24n)^{-1/3}$.
\end{theorem}
For an upper bound, Bollob\'{a}s and Scott
constructed a class of digraphs with minimum weighted outdegree at least
1 such that the maximum weight of cycles in these digraphs is at most
$c\log_2\log_2n/\log_2n$, where $c$ is a constant and $n$ is the
order of the digraph.
{As remarked} in \cite{Bollobas_Scott}, it seems
likely that $n^{-1/3}$ is much too small. Bollob\'{a}s and Scott
proposed the following conjecture.
\begin{conjecture}[Bollob\'{a}s and Scott \cite{Bollobas_Scott}]
Let $D$ be a weighted loopless digraph with {$n\geq
2$} vertices. If $d^{w+}(v)\geq1$ for every vertex $v\in V(D)$, then
$D$ contains a {directed} cycle $C$ such that $w(C)\geq 2/\log_2 n$.
\end{conjecture}
In this paper, we {prove the conjecture up to a
constant factor}.
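For a feel of the gap between these bounds (our own illustration), one can tabulate $(24n)^{-1/3}$ from Theorem 1, the $1/\log_2 n$ proved below, and the conjectured $2/\log_2 n$:

```python
import math

# Compare the Bollobas-Scott lower bound (24n)^(-1/3) with the
# conjectured 2/log2(n) and the 1/log2(n) proved in this note.
for n in (10, 100, 10**4, 10**6):
    print(n,
          (24 * n) ** (-1 / 3),
          1 / math.log2(n),
          2 / math.log2(n))
```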
\begin{theorem}
Let $D$ be a weighted loopless digraph with {$n\geq
2$} vertices. If $d^{w+}(v)\geq 1$ for every vertex $v\in V(D)$,
then $D$ contains a {directed} cycle $C$ such that $w(C)\geq
1/\log_2 n$.
\end{theorem}
{In fact, we can prove the following stronger assertion.}
\begin{theorem}
Let $D$ be a weighted digraph with {$n\geq 1$}
vertices and $r$ loops. If $d^{w+}(v)\geq 1$ for every vertex $v\in
V(D)$, then $D$ contains a {directed} cycle $C$ such that $w(C)\geq
1/\log_2 (n+r)$.
\end{theorem}
We postpone the proof of Theorem 3 to the next section.
\section{Proof of Theorem 3}
We use induction on $n$.
If $D$ {has} only one vertex, denote it by $v$. Since $d^{w+}(v)\geq 1$,
we have $A(D)=\{vv\}$, $w(vv)\geq 1$ and $r=1$. Thus $C=vv$ is a
{directed} cycle with weight at least 1, and the result holds. Now, we
suppose that $D$ has $n\geq 2$ vertices and $r$ loops.
\begin{case}
$D$ is not {strongly} connected.
\end{case}
Let $D'$ be a {strongly} connected component of $D$ such that there are
no arcs from $V(D')$ to $V(D)\backslash V(D')$. It is easy to see
that $d_{D'}^{w+}(v)=d_{D}^{w+}(v)\geq 1$ for all $v\in V(D')$. By
the induction hypothesis, there exists a {directed} cycle $C$ in $D'$
(and then, in $D$) such that
{$w(C)\geq {1}/{\log_2(n(D')+r(D'))}$}. Clearly $n(D')\leq n$ and
$r(D')\leq r$. Thus, we have {$w(C)\geq {1}/{\log_2(n+r)}$}, which completes the proof.
\begin{case}
$D$ is {strongly} connected.
\end{case}
\begin{subcase}
There exists a vertex $z$ such that $zz\notin A(D)$.
\end{subcase}
By the {strong connectedness} of $D$, there exists at least one arc
with head $z$. Let $y$ be a vertex such that $yz\in A(D)$ and
$w(yz)=\max\{w(vz): vz\in A(D)\}$. Consider the digraph $D'$ such
that {$V(D')=V(D)\backslash\{y\}$, $A(D')=A(D-y)\cup\{vz: vy\in
A(D)\}$,} and
$$
w_{D'}(uv)=\left\{
\begin{array}{ll}
w_D(uy)+w_D(yz), & \mbox{if\ } uy\in A(D) \mbox{\ and\ } v=z;\\
w_D(uv), & \mbox{otherwise}.
\end{array}
\right.
$$
Note that if $zy\in A(D)$, then $zz\in A(D')$, and
$w_{D'}(zz)=w_D(zy)+w_D(yz)$.
For every vertex $v\in V(D')$, its weighted outdegree in $D'$ is at
least that in $D$. Thus, we have $d_{D'}^{w+}(v)\geq 1$ for all $v\in
V(D')$. By the induction hypothesis, there exists a {directed} cycle
$C'$ in $D'$ such that
{$w_{D'}(C')\geq {1}/{\log_2(n(D')+r(D'))}$}. {Since $n(D')=n-1$,
and $D'$ contains at most one loop more than} $D$, we have $r(D')\leq r+1$.
Thus {$w_{D'}(C')\geq {1}/{\log_2(n+r)}$}.
If $C'$ does not contain the vertex $z$, then it is also a {directed}
cycle in $D$ with the same weight. Otherwise, let $xz$ be the arc in
$C'$ with head $z$. If $xy\notin A(D)$, then $C'$ is also a {directed}
cycle in $D$ with the same weight. If $xy\in A(D)$, let
$C$ be the {directed} cycle obtained from $C'$ by {replacing the arc $xz$
with} the path $xyz$, then $C$ is a {directed} cycle in $D$ of weight
{$w_D(C)=w_{D'}(C')\geq {1}/{\log_2(n+r)}$.}
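The reduction in this subcase can be sketched as follows (our own illustration; $w$ is a dict from arcs $(u,v)$ to weights, and $y$ is chosen so that $w(yz)$ is maximal among the arcs entering $z$):

```python
def contract(w, y, z):
    # Delete y and replace each arc u -> y by u -> z with weight
    # w(u, y) + w(y, z); this may create the loop z -> z, and it never
    # decreases the weighted outdegree of any remaining vertex.
    wyz = w[(y, z)]
    w2 = {(u, v): wt for (u, v), wt in w.items() if u != y and v != y}
    for (u, v), wt in w.items():
        if v == y and u != y:
            w2[(u, z)] = wt + wyz
    return w2

# A directed triangle x -> y -> z -> x: contracting y turns the
# triangle into the 2-cycle x -> z -> x of the same total weight.
w = {("x", "y"): 1.0, ("y", "z"): 1.0, ("z", "x"): 1.0}
print(contract(w, "y", "z"))  # -> {('z', 'x'): 1.0, ('x', 'z'): 2.0}
```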
\begin{subcase}
For every $v\in V(D)$, $vv\in A(D)$.
\end{subcase}
In this case, $D$ has $r=n$ loops, and we need only prove that there
exists a {directed} cycle in $D$ with weight at least
{${1}/{\log_2(n+n)}={1}/{(1+\log_2n})$.}
If there exists a loop with weight at least {${1}/({1+\log_2 n})$},
then we complete the proof. So we assume that {every loop} of $D$
has weight less than {${1}/({1+\log_2 n})$}.
Let $D'$ be the digraph obtained from $D$ by deleting all the loops.
Then $D'$ has $n$ vertices and no loops, and for each vertex $v$ in
$V(D')$, {we have}
$$
d_{D'}^{w+}(v)\geq 1-\frac{1}{1+\log_2 n}=\frac{\log_2
n}{1+\log_2 n}.
$$
{It is easy to see that $D'$ is strongly
connected. Note that for every vertex $v\in V(D')$, $vv\notin
A(D')$. Applying the argument of Subcase 2.1 (after scaling all weights
by the factor $(1+\log_2 n)/\log_2 n$, so that every weighted outdegree
is at least 1), we obtain that}
there exists a {directed} cycle $C$ in $D'$ such that
$$
w_{D'}(C)\geq\frac{1}{\log_2
n}\frac{\log_2 n}{1+\log_2 n}=\frac{1}{1+\log_2 n},
$$
and $C$ is also a {directed} cycle in $D$ with the same weight.
The proof is complete.\hfill$\Box$
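As a sanity check of Theorem 2 on a toy instance (our own experiment, not part of the note), a brute-force search over all directed cycles confirms that the heaviest cycle meets the $1/\log_2 n$ bound:

```python
import math
from itertools import permutations

def max_cycle_weight(vertices, w):
    # Heaviest directed cycle by brute force (only viable for tiny digraphs).
    best = 0.0
    for k in range(1, len(vertices) + 1):
        for tour in permutations(vertices, k):
            arcs = [(tour[i], tour[(i + 1) % k]) for i in range(k)]
            if all(a in w for a in arcs):
                best = max(best, sum(w[a] for a in arcs))
    return best

# A loopless digraph on 4 vertices with every weighted outdegree >= 1.
w = {(0, 1): 0.5, (0, 2): 0.5, (1, 0): 1.0, (2, 3): 1.0, (3, 0): 1.0}
n = 4
assert all(sum(wt for (u, _), wt in w.items() if u == v) >= 1
           for v in range(n))
print(max_cycle_weight(list(range(n)), w), ">=", 1 / math.log2(n))
```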